6.4.1: Regular Singular Points; Euler Equations (Exercises)



Q6.4.1

In Exercises 1-18 find the general solution of the given Euler equation on ((0,\infty)).

1. (x^2y''+7xy'+8y=0)

2. (x^2y''-7xy'+7y=0)

3. (x^2y''-xy'+y=0)

4. (x^2y''+5xy'+4y=0)

5. (x^2y''+xy'+y=0)

6. (x^2y''-3xy'+13y=0)

7. (x^2y''+3xy'-3y=0)

8. (12x^2y''-5xy'+6y=0)

9. (4x^2y''+8xy'+y=0)

10. (3x^2y''-xy'+y=0)

11. (2x^2y''-3xy'+2y=0)

12. (x^2y''+3xy'+5y=0)

13. (9x^2y''+15xy'+y=0)

14. (x^2y''-xy'+10y=0)

15. (x^2y''-6y=0)

16. (2x^2y''+3xy'-y=0)

17. (x^2y''-3xy'+4y=0)

18. (2x^2y''+10xy'+9y=0)
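Though these exercises are meant to be worked by hand, the answer in every case is determined by the roots of the indicial polynomial (ar(r-1)+br+c=0) (see Exercise 20 below for the repeated-root case). Here is a minimal sketch for checking your work numerically; the helper name and tolerances are ours, not part of the text:

```python
import numpy as np

def euler_general_solution(a, b, c):
    """Classify a*x^2*y'' + b*x*y' + c*y = 0 on (0, inf) via the roots
    of the indicial polynomial a*r*(r-1) + b*r + c = 0."""
    r1, r2 = np.roots([a, b - a, c])      # a*r^2 + (b-a)*r + c
    if abs(r1 - r2) < 1e-12:              # repeated real root
        return f"y = x^{r1.real:g} (c1 + c2 ln x)"
    if abs(r1.imag) > 1e-12:              # complex pair lambda +/- i*omega
        lam, om = r1.real, abs(r1.imag)
        return f"y = x^{lam:g} (c1 cos({om:g} ln x) + c2 sin({om:g} ln x))"
    return f"y = c1 x^{r1.real:g} + c2 x^{r2.real:g}"

# Exercise 1: x^2 y'' + 7x y' + 8y = 0 has indicial roots r = -2, -4.
print(euler_general_solution(1, 7, 8))
```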

Q6.4.2

19.

  1. Adapt the proof of Theorem 7.4.3 to show that (y=y(x)) satisfies the Euler equation [ax^2y''+bxy'+cy=0 \tag{A}] on ((-\infty,0)) if and only if (Y(t)=y(-e^t)) satisfies [a{d^2Y\over dt^2}+(b-a){dY\over dt}+cY=0 \nonumber] on ((-\infty,\infty)).
  2. Use (a) to show that the general solution of Equation A on ((-\infty,0)) is [\begin{aligned} y&=c_1|x|^{r_1}+c_2|x|^{r_2}\mbox{ if $r_1$ and $r_2$ are distinct real numbers; }\\ y&=|x|^{r_1}(c_1+c_2\ln|x|)\mbox{ if $r_1=r_2$; }\\ y&=|x|^{\lambda}\left[c_1\cos\left(\omega\ln|x|\right)+c_2\sin\left(\omega\ln|x|\right)\right]\mbox{ if $r_1,r_2=\lambda\pm i\omega$ with $\omega>0$}.\end{aligned} \nonumber]

20. Use reduction of order to show that if

[ar(r-1)+br+c=0 \nonumber]

has a repeated root (r_1) then (y=x^{r_1}(c_1+c_2\ln x)) is the general solution of

[ax^2y''+bxy'+cy=0 \nonumber]

on ((0,\infty)).

21. A nontrivial solution of

[P_0(x)y''+P_1(x)y'+P_2(x)y=0 \nonumber]

is said to be oscillatory on an interval ((a,b)) if it has infinitely many zeros on ((a,b)). Otherwise (y) is said to be nonoscillatory on ((a,b)). Show that the equation

[x^2y''+ky=0 \quad (k=\mbox{constant}) \nonumber]

has oscillatory solutions on ((0,\infty)) if and only if (k>1/4).

22. In Example 7.4.2 we saw that (x_0=1) and (x_0=-1) are regular singular points of Legendre’s equation

[(1-x^2)y''-2xy'+\alpha(\alpha+1)y=0. \tag{A}]

  1. Introduce the new variables (t=x-1) and (Y(t)=y(t+1)), and show that (y) is a solution of (A) if and only if (Y) is a solution of [t(2+t){d^2Y\over dt^2}+2(1+t){dY\over dt}-\alpha(\alpha+1)Y=0, \nonumber] which has a regular singular point at (t_0=0).
  2. Introduce the new variables (t=x+1) and (Y(t)=y(t-1)), and show that (y) is a solution of (A) if and only if (Y) is a solution of [t(2-t){d^2Y\over dt^2}+2(1-t){dY\over dt}+\alpha(\alpha+1)Y=0, \nonumber] which has a regular singular point at (t_0=0).

23. Let (P_0), (P_1), and (P_2) be polynomials with no common factor, and suppose (x_0\ne0) is a singular point of

[P_0(x)y''+P_1(x)y'+P_2(x)y=0. \tag{A}]

Let (t=x-x_0) and (Y(t)=y(t+x_0)).

  1. Show that (y) is a solution of (A) if and only if (Y) is a solution of [R_0(t){d^2Y\over dt^2}+R_1(t){dY\over dt}+R_2(t)Y=0, \tag{B}] where [R_i(t)=P_i(t+x_0),\quad i=0,1,2. \nonumber]
  2. Show that (R_0), (R_1), and (R_2) are polynomials in (t) with no common factors, and (R_0(0)=0); thus, (t_0=0) is a singular point of (B).


Transformation of Homogeneous Equations into Separable Equations

Nonlinear Equations That Can be Transformed Into Separable Equations

We’ve seen that the nonlinear Bernoulli equation can be transformed into a separable equation by the substitution (y=uy_1) if (y_1) is suitably chosen. Now let’s discover a sufficient condition for a nonlinear first order differential equation

[y'=f(x,y) \tag{eq:2.4.4}]

to be transformable into a separable equation in the same way. Substituting (y=uy_1) into (eq:2.4.4) yields [u'y_1+uy_1'=f(x,uy_1), \nonumber] which is equivalent to [u'y_1=f(x,uy_1)-uy_1'. \tag{eq:2.4.5}] If [f(x,uy_1)=q(u)y_1' \nonumber] for some function (q), then (eq:2.4.5) becomes [u'y_1=(q(u)-u)y_1', \nonumber] which is separable. After checking for constant solutions (u\equiv c) such that (q(c)=c), we can separate variables to obtain [{u'\over q(u)-u}={y_1'\over y_1}. \nonumber]

Homogeneous Nonlinear Equations

In the text we’ll consider only the most widely studied class of equations for which the method of the preceding paragraph works. Other types of equations appear in Exercises 2.4.44–2.4.51.

The differential equation (eq:2.4.4) is said to be homogeneous if (x) and (y) occur in (f) in such a way that (f(x,y)) depends only on the ratio (y/x); that is, (eq:2.4.4) can be written as

[y'=q(y/x), \tag{eq:2.4.7}]

where (q=q(u)) is a function of a single variable. For example, [y'={x+y\over x}=1+{y\over x} \nonumber] and [y'={y^2-x^2\over xy}={y\over x}-{x\over y} \nonumber] are of the form (eq:2.4.7), with (q(u)=1+u) and (q(u)=u-1/u), respectively. The general method discussed above can be applied to (eq:2.4.7) with (y_1=x) (and therefore (y_1'=1)). Thus, substituting (y=ux) in (eq:2.4.7) yields [u'x+u=q(u), \nonumber] and separation of variables (after checking for constant solutions (u\equiv c) such that (q(c)=c)) yields [{u'\over q(u)-u}={1\over x}. \nonumber]

Before turning to examples, we point out something that you may have already noticed: the definition of homogeneous equation given here isn’t the same as the definition given in Section 2.1, where we said that a linear equation of the form [y'+p(x)y=0 \nonumber] is homogeneous. We make no apology for this inconsistency, since we didn’t create it! Historically, homogeneous has been used in these two inconsistent ways. The one having to do with linear equations is the most important. This is the only section of the book where the meaning defined here will apply.

Since (y/x) is in general undefined if (x=0), we’ll consider solutions of homogeneous equations only on open intervals that do not contain the point (x=0).

Substituting (y=ux) into (eq:2.4.8), simplifying, and separating variables leads to an equation we can integrate; solving for (u) and returning to (y=ux) then gives the solution.

Figure 2.4.2 shows a direction field and integral curves for (eq:2.4.8).
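As a concrete check of the substitution method (on an equation of our own choosing, not the text’s example), take (y'=1+y/x), so (q(u)=1+u); then (y=ux) gives (u'=1/x), hence (y=x\ln x+cx) on ((0,\infty)). A short numerical verification:

```python
import numpy as np

def f(x, y):                       # y' = 1 + y/x, a homogeneous equation
    return 1.0 + y / x

def rk4(f, x, y, x_end, n):
    """Integrate y' = f(x, y) from (x, y) to x_end with n RK4 steps."""
    h = (x_end - x) / n
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y

# y(1) = 2 forces c = 2, so the exact solution is y = x ln x + 2x.
print(rk4(f, 1.0, 2.0, 2.0, 100), 2*np.log(2.0) + 4.0)  # both about 5.386
```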



Euler’s Method

Introduction

In science and mathematics, finding exact solutions to differential equations is not always possible. We have already seen that slope fields give us a powerful way to understand the qualitative features of solutions. Sometimes we need a more precise quantitative understanding, meaning we would like numerical approximations of the solutions.

If an initial value problem

[y'=f(x,y),\quad y(x_0)=y_0 \tag{eq:3.1.1}]

can’t be solved analytically, it’s necessary to resort to numerical methods to obtain useful approximations to a solution.

The simplest numerical method for solving (eq:3.1.1) is Euler’s method. This method is so crude that it is seldom used in practice; however, its simplicity makes it useful for illustrative purposes.

Euler’s method is based on the assumption that the tangent line to the integral curve of (eq:3.1.1) at ((x_i,y(x_i))) approximates the integral curve over the interval ([x_i,x_{i+1}]). Since the slope of the integral curve of (eq:3.1.1) at ((x_i,y(x_i))) is (y'(x_i)=f(x_i,y(x_i))), the equation of the tangent line to the integral curve at (x_i) is

[y=y(x_i)+f(x_i,y(x_i))(x-x_i). \tag{eq:3.1.2}]

Setting (x=x_{i+1}=x_i+h) in (eq:3.1.2) yields

[y_{i+1}=y(x_i)+hf(x_i,y(x_i)) \tag{eq:3.1.3}]

as an approximation to (y(x_{i+1})). Since (y(x_0)=y_0) is known, we can use (eq:3.1.3) with (i=0) to compute [y_1=y_0+hf(x_0,y_0). \nonumber] However, setting (i=1) in (eq:3.1.3) yields [y_2=y(x_1)+hf(x_1,y(x_1)), \nonumber] which isn’t useful, since we don’t know (y(x_1)). Therefore we replace (y(x_1)) by its approximate value (y_1) and redefine [y_2=y_1+hf(x_1,y_1). \nonumber] Having computed (y_2), we can compute [y_3=y_2+hf(x_2,y_2). \nonumber] In general, Euler’s method starts with the known value (y(x_0)=y_0) and computes (y_1), (y_2), ..., (y_n) successively using the formula

[y_{i+1}=y_i+hf(x_i,y_i),\quad 0\le i\le n-1. \tag{eq:3.1.4}]
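The update rule (eq:3.1.4) is a one-line loop in code; here is a minimal sketch (names are ours, not from the text):

```python
import math

def euler(f, x0, y0, h, n):
    """Approximate y' = f(x, y), y(x0) = y0 at x0, x0+h, ..., x0+n*h
    using Euler's method, y_{i+1} = y_i + h*f(x_i, y_i)."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# Illustration: y' = y, y(0) = 1; the exact value at x = 1 is e.
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(ys[-1], math.e)   # about 2.5937 vs 2.71828
```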

The next example illustrates the computational procedure indicated in Euler’s method.

Consider the following differential equation

Let us approximate the solution using just two subintervals; since the initial value is known, (eq:3.1.3) gives the first approximation.

Plotting our approximation with the actual solution we find:

This approximation could be improved by using more subintervals. Use the Sage Cell below to increase the number of subintervals to improve the approximation.

Fill out the following table for Euler’s method using subintervals. Use exact values. Thus .

The Sage Cell below illustrates the example above graphically. You can adjust the number of subintervals to see how the approximation is affected. You can also change the interval, just for fun.

Sources of Error

When using numerical methods we’re interested in computing approximate values of the solution of (eq:3.1.1) at equally spaced points (x_0,x_1,\dots,x_n=b) in an interval ([x_0,b]). Thus, [x_i=x_0+ih,\quad i=0,1,\dots,n, \nonumber] where [h={b-x_0\over n}. \nonumber] We’ll denote the approximate values of the solution at these points by (y_0,y_1,\dots,y_n); thus, (y_i) is an approximation to (y(x_i)). We’ll call [e_i=y(x_i)-y_i \nonumber] the error at the (i)th step. Because of the initial condition (y(x_0)=y_0), we’ll always have (e_0=0). However, in general (e_i\ne0) if (i>0).

We encounter two sources of error in applying a numerical method to solve an initial value problem:

  • The formulas defining the method are based on some sort of approximation. Errors due to the inaccuracy of the approximation are called truncation errors.
  • Computers do arithmetic with a fixed number of digits, and therefore make errors in evaluating the formulas defining the numerical methods. Errors due to the computer’s inability to do exact arithmetic are called roundoff errors.

Since a careful analysis of roundoff error is beyond the scope of this book, we’ll consider only truncation errors.

You can see that decreasing the step size improves the accuracy of Euler’s method. Based on this scanty evidence, you might guess that the error in approximating the exact solution at a fixed value of (x) by Euler’s method is roughly halved when the step size is halved. You can find more evidence to support this conjecture by examining the table below, which lists approximate values of the solution at a fixed point for several step sizes.

Use the Sage Cell below to explore the error in Example 3.1.2 visually by changing the size (h) of the subintervals.

Since we think it’s important in evaluating the accuracy of the numerical methods that we’ll be studying in this chapter, we often include a column listing values of the exact solution of the initial value problem, even if the directions in the example or exercise don’t specifically call for it. If quotation marks are included in the heading, the values were obtained by applying the Runge-Kutta method in a way that’s explained in Section 3.3. If quotation marks are not included, the values were obtained from a known formula for the solution. In either case, the values are exact to eight places to the right of the decimal point.

Truncation Error in Euler’s Method

Consistent with the results indicated in Tables 3.1.1–3.1.4, we’ll now show that under reasonable assumptions on (f) there’s a constant (K) such that the error in approximating the solution of the initial value problem (eq:3.1.1) at a given point (b>x_0) by Euler’s method with step size (h=(b-x_0)/n) satisfies the inequality [|y(b)-y_n|\le Kh, \nonumber] where (K) is a constant independent of (n).

There are two sources of error (not counting roundoff) in Euler’s method:

(a) The error committed in approximating the integral curve by the tangent line (eq:3.1.2) over the interval ([x_i,x_{i+1}]). (b) The error committed in replacing (y(x_i)) by (y_i) in (eq:3.1.2) and using (eq:3.1.4) rather than (eq:3.1.2) to compute (y_{i+1}).

Euler’s method assumes that (y_{i+1}) defined in (eq:3.1.2) is an approximation to (y(x_{i+1})). We call the error in this approximation the local truncation error at the (i)th step, and denote it by (T_i); thus, [T_i=y(x_{i+1})-y(x_i)-hf(x_i,y(x_i)). \nonumber]

We’ll now use Taylor’s theorem to estimate (T_i), assuming for simplicity that (f), (f_x), and (f_y) are continuous and bounded for all ((x,y)). Then (y'') exists and is bounded on ([x_0,b]). To see this, we differentiate [y'(x)=f(x,y(x)) \nonumber] to obtain [y''(x)=f_x(x,y(x))+f_y(x,y(x))y'(x)=f_x(x,y(x))+f_y(x,y(x))f(x,y(x)). \nonumber] By Taylor’s theorem, [y(x_{i+1})=y(x_i)+hy'(x_i)+{h^2\over2}y''(\tilde x_i) \nonumber] for some (\tilde x_i\in(x_i,x_{i+1})); since (y'(x_i)=f(x_i,y(x_i))), it follows that (T_i={h^2\over2}y''(\tilde x_i)), so if (|y''(x)|\le M) on ([x_0,b]) then [|T_i|\le{Mh^2\over2}. \tag{eq:3.1.10}]

Note that the magnitude of the local truncation error in Euler’s method is determined by the second derivative (y'') of the solution of the initial value problem. Therefore the local truncation error will be larger where (|y''|) is large, or smaller where (|y''|) is small.

Since the local truncation error for Euler’s method is (O(h^2)), it’s reasonable to expect that halving (h) reduces the local truncation error by a factor of (4). This is true, but halving the step size also requires twice as many steps to approximate the solution at a given point. To analyze the overall effect of truncation error in Euler’s method, it’s useful to derive an equation relating the errors [e_{i+1}=y(x_{i+1})-y_{i+1}\quad\mbox{and}\quad e_i=y(x_i)-y_i. \nonumber] To this end, recall that

[y(x_{i+1})=y(x_i)+hf(x_i,y(x_i))+T_i \tag{eq:3.1.11}]

and [y_{i+1}=y_i+hf(x_i,y_i). \tag{eq:3.1.12}] Subtracting (eq:3.1.12) from (eq:3.1.11) yields [e_{i+1}=e_i+h\left[f(x_i,y(x_i))-f(x_i,y_i)\right]+T_i. \tag{eq:3.1.13}] The last term on the right is the local truncation error at the (i)th step. The other terms reflect the way errors made at previous steps affect (e_{i+1}). Since (|T_i|\le Mh^2/2), we see from (eq:3.1.13) that [|e_{i+1}|\le|e_i|+h|f(x_i,y(x_i))-f(x_i,y_i)|+{Mh^2\over2}. \tag{eq:3.1.14}] Since we assumed that (f_y) is continuous and bounded, the mean value theorem implies that [f(x_i,y(x_i))-f(x_i,y_i)=f_y(x_i,y_i^*)e_i, \nonumber] where (y_i^*) is between (y_i) and (y(x_i)). Therefore [|f(x_i,y(x_i))-f(x_i,y_i)|\le R|e_i| \nonumber] for some constant (R). From this and (eq:3.1.14), [|e_{i+1}|\le(1+Rh)|e_i|+{Mh^2\over2}. \tag{eq:3.1.15}] For convenience, let (C=1+Rh). Since (e_0=y(x_0)-y_0=0), applying (eq:3.1.15) repeatedly yields [|e_n|\le{Mh^2\over2}\left(1+C+\cdots+C^{n-1}\right). \tag{eq:3.1.16}]

Recalling the formula for the sum of a geometric series, we see that [1+C+\cdots+C^{n-1}={C^n-1\over C-1}={(1+Rh)^n-1\over Rh} \nonumber] (since (C=1+Rh>1)). From this and (eq:3.1.16), [|y(b)-y_n|\le{Mh\over2R}\left[(1+Rh)^n-1\right]. \tag{eq:3.1.17}]

Since Taylor’s theorem implies that (e^x>1+x) if (x>0) (verify), [(1+Rh)^n<e^{nRh}=e^{R(b-x_0)}. \nonumber] This and (eq:3.1.17) imply that [|y(b)-y_n|\le Kh \tag{eq:3.1.18}] with [K=M{e^{R(b-x_0)}-1\over2R}. \nonumber] Because of (eq:3.1.18) we say that the global truncation error of Euler’s method is of order (h), which we write as (O(h)).
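A quick numerical check of the (O(h)) claim, on a test problem of our own choosing ((y'=y), (y(0)=1)): the error at (x=1) should roughly halve each time the step size is halved.

```python
import math

def euler_final(f, x0, y0, x_end, n):
    """Euler's method with n uniform steps; returns the final value."""
    h = (x_end - x0) / n
    for _ in range(n):
        y0 += h * f(x0, y0)
        x0 += h
    return y0

# Error at x = 1 for y' = y, y(0) = 1 (exact value e).  Doubling n
# (halving h) roughly halves the error, consistent with O(h).
for n in (10, 20, 40, 80):
    print(n, abs(math.e - euler_final(lambda x, y: y, 0.0, 1.0, 1.0, n)))
```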

Semilinear Equations and Variation of Parameters

An equation that can be written in the form

[y'+p(x)y=h(x,y) \tag{eq:3.1.19}]

with (p\not\equiv0) is said to be semilinear. (Of course, (eq:3.1.19) is linear if (h) is independent of (y).) One way to apply Euler’s method to an initial value problem

[y'+p(x)y=h(x,y),\quad y(x_0)=y_0 \tag{eq:3.1.20}]

for (eq:3.1.19) is to think of it as [y'=f(x,y),\quad y(x_0)=y_0, \nonumber] where [f(x,y)=-p(x)y+h(x,y). \nonumber] However, we can also start by applying variation of parameters to (eq:3.1.20), as in Sections 2.1 and 2.4; thus, we write the solution of (eq:3.1.20) as (y=uy_1), where (y_1) is a nontrivial solution of the complementary equation (y'+p(x)y=0). Then (y=uy_1) is a solution of (eq:3.1.20) if and only if (u) is a solution of the initial value problem

[u'=h(x,uy_1(x))/y_1(x),\quad u(x_0)=y_0/y_1(x_0). \tag{eq:3.1.21}]

We can apply Euler’s method to obtain approximate values (u_0,u_1,\dots,u_n) of this initial value problem, and then take [y_i=u_iy_1(x_i),\quad i=0,1,\dots,n \nonumber] as approximate values of the solution of (eq:3.1.20). We’ll call this procedure the Euler semilinear method.
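Here is a minimal sketch of the Euler semilinear method on a test problem of our own choosing, (y'+2y=y^2), (y(0)=1), for which (y_1=e^{-2x}) and the exact solution is (y=2/(1+e^{2x})):

```python
import math

def euler(f, x0, u0, h, n):
    """Plain Euler's method; returns the final approximate value."""
    for _ in range(n):
        u0 += h * f(x0, u0)
        x0 += h
    return u0

step, n = 0.1, 10                       # approximate y(1)
exact = 2.0 / (1.0 + math.exp(2.0))

# Euler applied directly to y' = -2y + y^2:
y_euler = euler(lambda x, y: -2*y + y*y, 0.0, 1.0, step, n)

# Euler semilinear: with y1 = exp(-2x), eq:3.1.21 reads
# u' = h(x, u*y1)/y1 = u^2 * exp(-2x), u(0) = 1; then y = u*y1.
u = euler(lambda x, u: u*u * math.exp(-2*x), 0.0, 1.0, step, n)
y_semi = u * math.exp(-2.0)

print(y_euler, y_semi, exact)
```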

The next two examples show that the Euler and Euler semilinear methods may yield drastically different results.

It’s easy to see why Euler’s method yields such poor results. Recall that the constant (M) in (eq:3.1.10) – which plays an important role in determining the local truncation error in Euler’s method – must be an upper bound for the values of the second derivative (y'') of the solution of the initial value problem (eq:3.1.22) on the interval. The problem is that (y'') assumes very large values on this interval. To see this, we differentiate (eq:3.1.24) to obtain an expression for (y'') in which the second equality follows again from (eq:3.1.24); combined with the bound in (eq:3.1.23), this shows that (y'') – and hence (M) – is very large on this interval.

Since (y_1) is a solution of the complementary equation, we can apply the Euler semilinear method to (eq:3.1.22). The results listed in the following table are clearly better than those obtained by Euler’s method.

We can’t give a general procedure for determining in advance whether Euler’s method or the Euler semilinear method will produce better results for a given semilinear initial value problem (eq:3.1.19). As a rule of thumb, the Euler semilinear method will yield better results than Euler’s method if (|u''|) is small on the interval, while Euler’s method yields better results if (|u''|) is large on the interval. In many cases the results obtained by the two methods don’t differ appreciably. However, we propose an intuitive way to decide which is the better method: try both methods with multiple step sizes, as we did in Example 3.1.4, and accept the results obtained by the method for which the approximations change less as the step size decreases.

Applying the Euler semilinear method yields the results in the table below.

Since the latter are clearly less dependent on step size than the former, we conclude that the Euler semilinear method is better than Euler’s method for (eq:3.1.25). This conclusion is supported by comparing the approximate results obtained by the two methods with the “exact” values of the solution.

Applying the Euler semilinear method yields the following.

Noting the close agreement among the three columns of the first table (at least for larger values of (x)) and the lack of any such agreement among the columns of the second table, we conclude that Euler’s method is better than the Euler semilinear method for (eq:3.1.26). Comparing the results with the exact values supports this conclusion.

In the next two modules we’ll study other numerical methods for solving initial value problems, called the improved Euler method, the midpoint method, Heun’s method and the Runge-Kutta method. If the initial value problem is semilinear as in (eq:3.1.19), we also have the option of using variation of parameters and then applying the given numerical method to the initial value problem (eq:3.1.21) for . By analogy with the terminology used here, we’ll call the resulting procedure the improved Euler semilinear method, the midpoint semilinear method, Heun’s semilinear method or the Runge-Kutta semilinear method, as the case may be.

Text Source

Trench, William F., "Elementary Differential Equations" (2013). Faculty Authored and Edited Books & CDs. 8. (CC-BY-NC-SA)


In real analysis, singularities are either discontinuities, or discontinuities of the derivative (sometimes also discontinuities of higher order derivatives). There are four kinds of discontinuities: type I, which has two subtypes, and type II, which can also be divided into two subtypes (though usually is not).

There are some functions for which these limits do not exist at all. For example, the function g(x) = sin(1/x) does not tend towards anything as x approaches c = 0; the limits are not infinite, but simply undefined.

  • A point of continuity is a value of c for which f(c−) = f(c) = f(c+), as one expects for a smooth function. All the values must be finite. If c is not a point of continuity, then a discontinuity occurs at c.
  • A type I discontinuity occurs when both f(c−) and f(c+) exist and are finite, but at least one of the following three conditions also applies:
    • f(c−) ≠ f(c+);
    • f(x) is not defined for the case of x = c; or
    • f(c) has a defined value, which, however, does not match the value of the two limits.
    • A jump discontinuity occurs when f(c−) ≠ f(c+), regardless of whether f(c) is defined, and regardless of its value if it is defined.
    • A removable discontinuity occurs when f(c−) = f(c+), also regardless of whether f(c) is defined, and regardless of its value if it is defined (but which does not match that of the two limits).
    • An infinite discontinuity is the special case when either the left hand or right hand limit does not exist, specifically because it is infinite, and the other limit is either also infinite, or is some well defined finite number. In other words, the function has an infinite discontinuity when its graph has a vertical asymptote.
    • An essential singularity is a term borrowed from complex analysis (see below). This is the case when either one or the other of the limits f(c−) or f(c+) does not exist, but not because it is an infinite discontinuity. Essential singularities approach no limit, not even if valid answers are extended to include ±∞.

    In real analysis, a singularity or discontinuity is a property of a function alone. Any singularities that may exist in the derivative of a function are considered as belonging to the derivative, not to the original function.
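    These one-sided limits can be probed numerically; as a small illustration of our own, the sign function has a jump discontinuity at 0, while sin(x)/x has a removable one:

```python
import math

def one_sided_limits(f, c, steps=8):
    """Estimate f(c-) and f(c+) by evaluating f very close to c."""
    h = 10.0 ** (-steps)
    return f(c - h), f(c + h)

sign = lambda x: math.copysign(1.0, x)
print(one_sided_limits(sign, 0.0))                       # (-1.0, 1.0): jump
print(one_sided_limits(lambda x: math.sin(x) / x, 0.0))  # (~1.0, ~1.0): removable
```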

    Coordinate singularities

    A coordinate singularity occurs when an apparent singularity or discontinuity occurs in one coordinate frame, which can be removed by choosing a different frame. An example of this is the apparent singularity at the 90 degree latitude in spherical coordinates. An object moving due north (for example, along the line 0 degrees longitude) on the surface of a sphere will suddenly experience an instantaneous change in longitude at the pole (in the case of the example, jumping from longitude 0 to longitude 180 degrees). This discontinuity, however, is only apparent; it is an artifact of the coordinate system chosen, which is singular at the poles. A different coordinate system would eliminate the apparent discontinuity (e.g., by replacing the latitude/longitude representation with an n-vector representation).

    In complex analysis, there are several classes of singularities. These include the isolated singularities, the nonisolated singularities and the branch points.

    Isolated singularities

    Suppose that U is an open subset of the complex numbers C, with the point a being an element of U, and that f is a complex differentiable function defined on some neighborhood around a, excluding a: U \ {a}. Then:

    • The point a is a removable singularity of f if there exists a holomorphic function g defined on all of U such that f(z) = g(z) for all z in U \ {a}. The function g is a continuous replacement for the function f. [6]
    • The point a is a pole or non-essential singularity of f if there exists a holomorphic function g defined on U with g(a) nonzero, and a natural number n such that f(z) = g(z)/(z − a)^n for all z in U \ {a}. The least such number n is called the order of the pole. The derivative at a non-essential singularity itself has a non-essential singularity, with n increased by 1 (except if n is 0 so that the singularity is removable).
    • The point a is an essential singularity of f if it is neither a removable singularity nor a pole. The point a is an essential singularity if and only if the Laurent series has infinitely many powers of negative degree [2] (a numerical check of this criterion is sketched below).
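    The Laurent coefficients about 0 can be estimated by sampling on a circle and taking an FFT; a sketch of our own, applied to exp(1/z), whose coefficients a₋ₖ = 1/k! are nonzero for every k ≥ 0:

```python
import numpy as np

def laurent_coeffs(f, n_max, radius=1.0, samples=512):
    """Estimate Laurent coefficients a_k of f about 0 for |k| <= n_max
    by sampling f on a circle; FFT bin k approximates a_k * r^k."""
    theta = 2 * np.pi * np.arange(samples) / samples
    z = radius * np.exp(1j * theta)
    c = np.fft.fft(f(z)) / samples
    return {k: c[k % samples] / radius**k for k in range(-n_max, n_max + 1)}

# exp(1/z) has an essential singularity at 0: infinitely many nonzero
# negative-degree coefficients, a_(-k) = 1/k!.
a = laurent_coeffs(lambda z: np.exp(1 / z), 5)
for k in range(-5, 1):
    print(k, round(a[k].real, 6))
```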

    Nonisolated singularities

    Other than isolated singularities, complex functions of one variable may exhibit other singular behaviour. These are termed nonisolated singularities, of which there are two types:


    Series Solutions to Differential Equations

    Before we get into finding series solutions to differential equations we need to determine when we can find series solutions to differential equations. So, let’s start with the differential equation,

    [p\left( x \right)y'' + q\left( x \right)y' + r\left( x \right)y = 0 \tag{1}]

    This time we really do mean nonconstant coefficients. To this point we’ve only dealt with constant coefficients. However, with series solutions we can now have nonconstant coefficient differential equations. Also, in order to make the problems a little nicer we will be dealing only with polynomial coefficients.

    Now, we say that (x=x_0) is an ordinary point if both

    [\frac{q\left( x \right)}{p\left( x \right)}\hspace{0.25in}\mbox{and}\hspace{0.25in}\frac{r\left( x \right)}{p\left( x \right)}]

    are analytic at (x=x_0). That is to say that these two quantities have Taylor series around (x=x_0). We are going to be only dealing with coefficients that are polynomials so this will be equivalent to saying that

    [p\left( {x_0} \right) \ne 0]

    If a point is not an ordinary point we call it a singular point.

    The basic idea to finding a series solution to a differential equation is to assume that we can write the solution as a power series in the form,

    [y\left( x \right) = \sum\limits_{n = 0}^\infty {a_n}{\left( {x - {x_0}} \right)^n} \tag{2}]

    and then try to determine what the (a_n)’s need to be. We will only be able to do this if the point (x=x_0) is an ordinary point. We will usually say that (2) is a series solution around (x=x_0).

    Let’s start with a very basic example of this. In fact, it will be so basic that we will have constant coefficients. This will allow us to check that we get the correct solution.

    Example 1: Determine a series solution for (y'' + y = 0) about (x_0 = 0).

    Notice that in this case (p(x)=1) and so every point is an ordinary point. We will be looking for a solution in the form,

    [y\left( x \right) = \sum\limits_{n = 0}^\infty {a_n}{x^n}]

    We will need to plug this into our differential equation so we’ll need to find a couple of derivatives.

    [y'\left( x \right) = \sum\limits_{n = 1}^\infty n{a_n}{x^{n - 1}}\hspace{0.25in}y''\left( x \right) = \sum\limits_{n = 2}^\infty n\left( {n - 1} \right){a_n}{x^{n - 2}}]

    Recall from the review section on power series that we can start these at (n=0) if we need to; however, it’s almost always best to start them where we have here. If it turns out that it would have been easier to start them at (n=0) we can easily fix that up when the time comes around.

    So, plug these into our differential equation. Doing this gives,

    [\sum\limits_{n = 2}^\infty n\left( {n - 1} \right){a_n}{x^{n - 2}} + \sum\limits_{n = 0}^\infty {a_n}{x^n} = 0]

    The next step is to combine everything into a single series. To do this requires that we get both series starting at the same point and that the exponent on the (x) be the same in both series.

    We will always start this by getting the exponent on the (x) to be the same. It is usually best to get the exponent to be an (n). The second series already has the proper exponent and the first series will need to be shifted down by 2 in order to get the exponent up to an (n). If you don’t recall how to do this take a quick look at the first review section where we did several of these types of problems.

    Shifting the first power series gives us,

    [\sum\limits_{n = 0}^\infty \left( {n + 2} \right)\left( {n + 1} \right){a_{n + 2}}{x^n} + \sum\limits_{n = 0}^\infty {a_n}{x^n} = 0]

    Notice that in the process of the shift we also got both series starting at the same place. This won’t always happen, but when it does we’ll take it. We can now add up the two series. This gives,

    [\sum\limits_{n = 0}^\infty \left[ {\left( {n + 2} \right)\left( {n + 1} \right){a_{n + 2}} + {a_n}} \right]{x^n} = 0]

    Now recalling the fact from the power series review section we know that if we have a power series that is zero for all (x) (as this is) then all the coefficients must have been zero to start with. This gives us the following,

    [\left( {n + 2} \right)\left( {n + 1} \right){a_{n + 2}} + {a_n} = 0,\hspace{0.25in}n = 0,1,2,\ldots]

    This is called the recurrence relation and notice that we included the values of (n) for which it must be true. We will always want to include the values of (n) for which the recurrence relation is true since they won’t always start at (n) = 0 as it did in this case.

    Now let’s recall what we were after in the first place. We wanted to find a series solution to the differential equation. In order to do this, we needed to determine the values of the (a_n)’s. We are almost to the point where we can do that. The recurrence relation has two different (a_n)’s in it so we can’t just solve this for (a_n) and get a formula that will work for all (n). We can, however, use this to determine what all but two of the (a_n)’s are.

    To do this we first solve the recurrence relation for the (a_n) that has the largest subscript. Doing this gives,

    [{a_{n + 2}} = - \frac{{a_n}}{{\left( {n + 2} \right)\left( {n + 1} \right)}},\hspace{0.25in}n = 0,1,2,\ldots]

    Now, at this point we just need to start plugging in some values of (n) and see what happens,

    [{a_2} = - \frac{{a_0}}{{\left( 2 \right)\left( 1 \right)}}\hspace{0.5in}{a_3} = - \frac{{a_1}}{{\left( 3 \right)\left( 2 \right)}}]

    [{a_4} = - \frac{{a_2}}{{\left( 4 \right)\left( 3 \right)}} = \frac{{a_0}}{{4!}}\hspace{0.5in}{a_5} = - \frac{{a_3}}{{\left( 5 \right)\left( 4 \right)}} = \frac{{a_1}}{{5!}}]

    [{a_6} = - \frac{{a_4}}{{\left( 6 \right)\left( 5 \right)}} = - \frac{{a_0}}{{6!}}\hspace{0.5in}{a_7} = - \frac{{a_5}}{{\left( 7 \right)\left( 6 \right)}} = - \frac{{a_1}}{{7!}}]

    [{a_{2k}} = \frac{{{{\left( { - 1} \right)}^k}{a_0}}}{{\left( {2k} \right)!}},\hspace{0.25in}{a_{2k + 1}} = \frac{{{{\left( { - 1} \right)}^k}{a_1}}}{{\left( {2k + 1} \right)!}},\hspace{0.25in}k = 1,2,\ldots]

    Notice that at each step we always plugged back in the previous answer so that when the subscript was even we could always write the (a_n) in terms of (a_0) and when the subscript was odd we could always write the (a_n) in terms of (a_1). Also notice that, in this case, we were able to find a general formula for (a_n)’s with even subscripts and (a_n)’s with odd subscripts. This won’t always be possible to do.

    There’s one more thing to notice here. The formulas that we developed were only for (k=1,2,\ldots); however, in this case again, they will also work for (k=0). Again, this is something that won’t always work, but does here.

    Do not get excited about the fact that we don’t know what (a_0) and (a_1) are. As you will see, we actually need these to be in the problem to get the correct solution.

    Now that we’ve got formulas for the (a_n)’s let’s get a solution. The first thing that we’ll do is write out the solution with a couple of the (a_n)’s plugged in.

    [y\left( x \right) = {a_0} + {a_1}x - \frac{{a_0}}{2}{x^2} - \frac{{a_1}}{{3!}}{x^3} + \frac{{a_0}}{{4!}}{x^4} + \frac{{a_1}}{{5!}}{x^5} + \cdots]

    The next step is to collect all the terms with the same coefficient in them and then factor out that coefficient.

    [y\left( x \right) = {a_0}\left\{ {1 - \frac{{x^2}}{2} + \frac{{x^4}}{{4!}} - \cdots } \right\} + {a_1}\left\{ {x - \frac{{x^3}}{{3!}} + \frac{{x^5}}{{5!}} - \cdots } \right\} = {a_0}\sum\limits_{k = 0}^\infty \frac{{{{\left( { - 1} \right)}^k}{x^{2k}}}}{{\left( {2k} \right)!}} + {a_1}\sum\limits_{k = 0}^\infty \frac{{{{\left( { - 1} \right)}^k}{x^{2k + 1}}}}{{\left( {2k + 1} \right)!}}]

    In the last step we also used the fact that we knew what the general formula was to write both portions as a power series. This is also our solution. We are done.

    Before working another problem let’s take a look at the solution to the previous example. First, we started out by saying that we wanted a series solution of the form,

    [y\left( x \right) = \sum\limits_{n = 0}^\infty {a_n}{x^n}]

    and we didn’t get that. We got a solution that contained two different power series. Also, each of the solutions had an unknown constant in them. This is not a problem. In fact, it’s what we want to have happen. From our work with second order constant coefficient differential equations we know that the solution to the differential equation in the last example is,

    [y\left( x \right) = {c_1}\cos \left( x \right) + {c_2}\sin \left( x \right)]

    Solutions to second order differential equations consist of two separate functions each with an unknown constant in front of them that are found by applying any initial conditions. So, the form of our solution in the last example is exactly what we want to get. Also recall the following Taylor series,

    [\cos \left( x \right) = \sum\limits_{n = 0}^\infty \frac{{{{\left( { - 1} \right)}^n}{x^{2n}}}}{{\left( {2n} \right)!}}\hspace{0.5in}\sin \left( x \right) = \sum\limits_{n = 0}^\infty \frac{{{{\left( { - 1} \right)}^n}{x^{2n + 1}}}}{{\left( {2n + 1} \right)!}}]

    Recalling these we very quickly see that what we got from the series solution method was exactly the solution we got from first principles, with the exception that the functions were the Taylor series for the actual functions instead of the actual functions themselves.
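    A short script of our own confirms the recurrence by comparing the generated (a_n) against the Taylor coefficients of (\cos x + \sin x) (taking (a_0 = a_1 = 1)):

```python
import math

# Recurrence from the example: a_{n+2} = -a_n / ((n+2)(n+1)).
N = 10
a = [1.0, 1.0]                     # a_0 = a_1 = 1
for n in range(N - 2):
    a.append(-a[n] / ((n + 2) * (n + 1)))

# The x^n Taylor coefficient of cos x + sin x is (-1)^(n//2) / n!.
for n, an in enumerate(a):
    print(n, an, (-1) ** (n // 2) / math.factorial(n))
```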

    Now let’s work an example with nonconstant coefficients since that is where series solutions are most useful.

    Example 2: Determine a series solution for (y'' - xy = 0) about (x_0 = 0).

    As with the first example (p(x)=1) and so again for this differential equation every point is an ordinary point. Now we’ll start this one out just as we did the first example. Let’s write down the form of the solution and get its derivatives.

    [y\left( x \right) = \sum\limits_{n = 0}^\infty {a_n}{x^n}\hspace{0.25in}y'\left( x \right) = \sum\limits_{n = 1}^\infty n{a_n}{x^{n - 1}}\hspace{0.25in}y''\left( x \right) = \sum\limits_{n = 2}^\infty n\left( {n - 1} \right){a_n}{x^{n - 2}}]

    Plugging into the differential equation gives,

    [\sum\limits_{n = 2}^\infty n\left( {n - 1} \right){a_n}{x^{n - 2}} - x\sum\limits_{n = 0}^\infty {a_n}{x^n} = 0]

    Unlike the first example we first need to get all the coefficients moved into the series.

    [\sum\limits_{n = 2}^\infty n\left( {n - 1} \right){a_n}{x^{n - 2}} - \sum\limits_{n = 0}^\infty {a_n}{x^{n + 1}} = 0]

    Now we will need to shift the first series down by 2 and the second series up by 1 to get both of the series in terms of (x^n).

    [\sum\limits_{n = 0}^\infty \left( {n + 2} \right)\left( {n + 1} \right){a_{n + 2}}{x^n} - \sum\limits_{n = 1}^\infty {a_{n - 1}}{x^n} = 0]

    Next, we need to get the two series starting at the same value of (n). The only way to do that for this problem is to strip out the (n=0) term.

    [2{a_2} + \sum\limits_{n = 1}^\infty \left[ {\left( {n + 2} \right)\left( {n + 1} \right){a_{n + 2}} - {a_{n - 1}}} \right]{x^n} = 0]

    We now need to set all the coefficients equal to zero. We will need to be careful with this however. The (n=0) coefficient is in front of the series and the (n=1,2,3,\ldots) are all in the series. So, setting the coefficients equal to zero gives,

    [2{a_2} = 0\hspace{0.5in}\left( {n + 2} \right)\left( {n + 1} \right){a_{n + 2}} - {a_{n - 1}} = 0,\hspace{0.25in}n = 1,2,3,\ldots]

    Solving the first as well as the recurrence relation gives,

    [{a_2} = 0\hspace{0.5in}{a_{n + 2}} = \frac{{a_{n - 1}}}{{\left( {n + 2} \right)\left( {n + 1} \right)}},\hspace{0.25in}n = 1,2,3,\ldots]

    Now we need to start plugging in values of (n).

    [{a_3} = \frac{{a_0}}{{\left( 3 \right)\left( 2 \right)}}\hspace{0.5in}{a_4} = \frac{{a_1}}{{\left( 4 \right)\left( 3 \right)}}\hspace{0.5in}{a_5} = \frac{{a_2}}{{\left( 5 \right)\left( 4 \right)}} = 0]

    [{a_6} = \frac{{a_3}}{{\left( 6 \right)\left( 5 \right)}} = \frac{{a_0}}{{\left( 6 \right)\left( 5 \right)\left( 3 \right)\left( 2 \right)}}\hspace{0.5in}{a_7} = \frac{{a_4}}{{\left( 7 \right)\left( 6 \right)}} = \frac{{a_1}}{{\left( 7 \right)\left( 6 \right)\left( 4 \right)\left( 3 \right)}}\hspace{0.5in}{a_8} = \frac{{a_5}}{{\left( 8 \right)\left( 7 \right)}} = 0]

    [{a_{3k}} = \frac{{a_0}}{{\left( 2 \right)\left( 3 \right) \cdots \left( {3k - 1} \right)\left( {3k} \right)}},\hspace{0.25in}{a_{3k + 1}} = \frac{{a_1}}{{\left( 3 \right)\left( 4 \right) \cdots \left( {3k} \right)\left( {3k + 1} \right)}},\hspace{0.25in}k = 1,2,3,\ldots\hspace{0.25in}{a_{3k + 2}} = 0,\hspace{0.25in}k = 0,1,2,\ldots]

    There are a couple of things to note about these coefficients. First, every third coefficient is zero. Next, the formulas here are somewhat unpleasant and not all that easy to see the first time around. Finally, these formulas will not work for (k=0) unlike the first example.

    Again, collect up the terms that contain the same coefficient, factor the coefficient out and write the results as a new series,

    [y\left( x \right) = {a_0}\left\{ {1 + \sum\limits_{k = 1}^\infty \frac{{x^{3k}}}{{\left( 2 \right)\left( 3 \right) \cdots \left( {3k - 1} \right)\left( {3k} \right)}}} \right\} + {a_1}\left\{ {x + \sum\limits_{k = 1}^\infty \frac{{x^{3k + 1}}}{{\left( 3 \right)\left( 4 \right) \cdots \left( {3k} \right)\left( {3k + 1} \right)}}} \right\}]

    We couldn’t start our series at (k=0) this time since the general term doesn’t hold for (k=0).
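    As a sanity check of our own, we can generate the coefficients from the recurrence and verify that the truncated series nearly satisfies the differential equation:

```python
# Recurrence from this example: a_2 = 0, a_{n+2} = a_{n-1}/((n+2)(n+1)).
N = 30
a = [1.0, 1.0, 0.0]                       # a_0 = a_1 = 1 chosen arbitrarily
for n in range(1, N - 2):
    a.append(a[n - 1] / ((n + 2) * (n + 1)))

y   = lambda x: sum(a[n] * x**n for n in range(N))
ypp = lambda x: sum(n * (n - 1) * a[n] * x**(n - 2) for n in range(2, N))

for x in (0.0, 0.5, 1.0):                 # residual of y'' - x*y, near zero
    print(x, ypp(x) - x * y(x))
```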

    Now, we need to work an example in which we use a point other than (x=0). In fact, let’s just take the previous example and rework it for a different value of (x_0). We’re also going to need to change up the instructions a little for this example.

    Example 3: Find the first four terms in each portion of the series solution around (x_0 = -2) for (y'' - xy = 0).

    Unfortunately for us there is nothing from the first example that can be reused here. Changing to (x_0 = -2) completely changes the problem. In this case our solution will be,

    [y\left( x \right) = \sum\limits_{n = 0}^\infty {a_n}{\left( {x + 2} \right)^n}]

    The derivatives of the solution are,

    [y'\left( x \right) = \sum\limits_{n = 1}^\infty n{a_n}{\left( {x + 2} \right)^{n - 1}}\hspace{0.25in}y''\left( x \right) = \sum\limits_{n = 2}^\infty n\left( {n - 1} \right){a_n}{\left( {x + 2} \right)^{n - 2}}]

    Plug these into the differential equation.

    [\sum\limits_{n = 2}^\infty n\left( {n - 1} \right){a_n}{\left( {x + 2} \right)^{n - 2}} - x\sum\limits_{n = 0}^\infty {a_n}{\left( {x + 2} \right)^n} = 0]

    We now run into our first real difference between this example and the previous example. In this case we can’t just multiply the (x) into the second series since in order to combine with the series it must be (x+2). Therefore, we will first need to modify the coefficient of the second series before multiplying it into the series.

    [\sum\limits_{n = 2}^\infty n\left( {n - 1} \right){a_n}{\left( {x + 2} \right)^{n - 2}} - \left( {x + 2 - 2} \right)\sum\limits_{n = 0}^\infty {a_n}{\left( {x + 2} \right)^n} = 0]

    [\sum\limits_{n = 2}^\infty n\left( {n - 1} \right){a_n}{\left( {x + 2} \right)^{n - 2}} - \sum\limits_{n = 0}^\infty {a_n}{\left( {x + 2} \right)^{n + 1}} + \sum\limits_{n = 0}^\infty 2{a_n}{\left( {x + 2} \right)^n} = 0]

    We now have three series to work with. This will often occur in these kinds of problems. Now we will need to shift the first series down by 2 and the second series up by 1 to get common exponents in all the series.

    [\sum\limits_{n = 0}^\infty \left( {n + 2} \right)\left( {n + 1} \right){a_{n + 2}}{\left( {x + 2} \right)^n} - \sum\limits_{n = 1}^\infty {a_{n - 1}}{\left( {x + 2} \right)^n} + \sum\limits_{n = 0}^\infty 2{a_n}{\left( {x + 2} \right)^n} = 0]

    In order to combine the series we will need to strip out the (n=0) terms from both the first and third series.

    [2{a_2} + 2{a_0} + \sum\limits_{n = 1}^\infty \left[ {\left( {n + 2} \right)\left( {n + 1} \right){a_{n + 2}} - {a_{n - 1}} + 2{a_n}} \right]{\left( {x + 2} \right)^n} = 0]

    Setting coefficients equal to zero gives,

    [2{a_2} + 2{a_0} = 0\hspace{0.5in}\left( {n + 2} \right)\left( {n + 1} \right){a_{n + 2}} - {a_{n - 1}} + 2{a_n} = 0,\hspace{0.25in}n = 1,2,3,\ldots]

    We now need to solve both of these. In the first case there are two options: we can solve for (a_2) or we can solve for (a_0). Out of habit I’ll solve for (a_2). In the recurrence relation we’ll solve for the term with the largest subscript as in previous examples.

    [{a_2} = - {a_0}\hspace{0.5in}{a_{n + 2}} = \frac{{{a_{n - 1}} - 2{a_n}}}{{\left( {n + 2} \right)\left( {n + 1} \right)}},\hspace{0.25in}n = 1,2,3,\ldots]

    Notice that in this example we won’t be having every third term drop out as we did in the previous example.

    At this point we’ll also acknowledge that the instructions for this problem are different as well. We aren’t going to get a general formula for the (a_n)’s this time so we’ll have to be satisfied with just getting the first couple of terms for each portion of the solution. This is often the case for series solutions. Getting general formulas for the (a_n)’s is the exception rather than the rule in these kinds of problems.

    To get the first four terms we’ll just start plugging in terms until we’ve got the required number of terms. Note that we will already be starting with an (a_0) and an (a_1) from the first two terms of the solution so all we will need are three more terms with an (a_0) in them and three more terms with an (a_1) in them.

    We’ve got two (a_0)’s and one (a_1).

    We’ve got three (a_0)’s and two (a_1)’s.

    We’ve got four (a_0)’s and three (a_1)’s. We’ve got all the (a_0)’s that we need, but we still need one more (a_1). So, we’ll need to do one more term it looks like.

    We’ve got five (a_0)’s and four (a_1)’s. We’ve got all the terms that we need.

    Now, all that we need to do is plug into our solution.

    Finally collect all the terms up with the same coefficient and factor out the coefficient to get,

    [y\left( x \right) = {a_0}\left\{ {1 - {{\left( {x + 2} \right)}^2} + \frac{1}{6}{{\left( {x + 2} \right)}^3} + \frac{1}{6}{{\left( {x + 2} \right)}^4} + \cdots } \right\} + {a_1}\left\{ {\left( {x + 2} \right) - \frac{1}{3}{{\left( {x + 2} \right)}^3} + \frac{1}{{12}}{{\left( {x + 2} \right)}^4} + \frac{1}{{30}}{{\left( {x + 2} \right)}^5} + \cdots } \right\}]

    That’s the solution for this problem as far as we’re concerned. Notice that this solution looks nothing like the solution to the previous example. It’s the same differential equation but changing (x_0) completely changed the solution.

    Let’s work one final problem.

    [\left( {{x^2} + 1} \right)y'' - 4xy' + 6y = 0]

    We finally have a differential equation that doesn’t have a constant coefficient for the second derivative.

    [p\left( x \right) = {x^2} + 1\hspace{0.25in}p\left( 0 \right) = 1 \ne 0]

    So (x_0 = 0) is an ordinary point for this differential equation. We first need the solution and its derivatives,

    Plug these into the differential equation.

    Now, break up the first term into two so we can multiply the coefficient into the series and multiply the coefficients of the second and third series in as well.

    We will only need to shift the second series down by two to get all the exponents the same in all the series.

    At this point we could strip out some terms to get all the series starting at (n=2), but that’s actually more work than is needed. Let’s instead note that we could start the third series at (n=0) if we wanted to because that term is just zero. Likewise, the terms in the first series are zero for both (n=1) and (n=0) and so we could start that series at (n=0). If we do this all the series will now start at (n=0) and we can add them up without stripping terms out of any series.

    Now set coefficients equal to zero.

    [\left( {n + 2} \right)\left( {n + 1} \right){a_{n + 2}} + \left( {{n^2} - 5n + 6} \right){a_n} = 0,\hspace{0.25in}n = 0,1,2,\ldots\hspace{0.25in}\Rightarrow\hspace{0.25in}{a_{n + 2}} = - \frac{{\left( {n - 2} \right)\left( {n - 3} \right){a_n}}}{{\left( {n + 2} \right)\left( {n + 1} \right)}}]

    Now, we plug in values of (n).

    [{a_2} = - 3{a_0}\hspace{0.5in}{a_3} = - \frac{1}{3}{a_1}\hspace{0.5in}{a_4} = 0\hspace{0.5in}{a_5} = 0]

    Now, from this point on all the coefficients are zero. In this case both of the series in the solution will terminate. This won’t always happen, and often only one of them will terminate.

    [y\left( x \right) = {a_0}\left( {1 - 3{x^2}} \right) + {a_1}\left( {x - \frac{1}{3}{x^3}} \right)]


    Partial Differential Equations

    Solutions and Boundary Conditions

    We already know that a second-order ODE in y has a general solution containing two independent constants, which can often permit us to obtain a unique particular solution with specified values of y at two points (perhaps the ends of an interval on which we desire a solution to the ODE). Alternatively, sometimes we can obtain a unique solution to a second-order ODE with the values of y ′ specified at the ends of an interval. These specifications are called boundary conditions.

    In two dimensions a second-order PDE in ψ = ψ ( x , y ) defined for a rectangular area might be solvable subject to a boundary condition on each of its x boundaries ( a and b ), i.e., specifications of ψ ( a , y ) ≡ ψ a ( y ) and ψ ( b , y ) ≡ ψ b ( y ) , where, as shown, ψ a and ψ b are functions of y . We might also have boundary conditions on the y boundaries ( c and d ), e.g., specifications of ψ ( x , c ) ≡ ψ c ( x ) and ψ ( x , d ) ≡ ψ d ( x ) , with ψ c and ψ d functions of x . Because ψ a through ψ d can be arbitrary functions, we see that the set of solutions to a two-dimensional PDE could span an extremely wide variety of functional forms.

    The discussion of the preceding paragraph can be generalized to a two-dimensional region defined by an arbitrary boundary curve, with the result that a unique solution to our 2-D PDE may perhaps be obtained subject to the requirement that ψ ( x , y ) have specified values everywhere on the boundary. Further generalizing to three dimensions, a PDE in ψ ( r ) for a given volume may perhaps only have a unique solution when ψ ( r ) is specified everywhere on the surface bounding the volume.

    Although the boundary conditions discussed in the two preceding paragraphs were all in terms of values of ψ , it is also possible to specify boundary conditions in terms of the first derivative of ψ in the direction normal to the boundary (the analogous situation for an ODE is a derivative directed toward the interior of the interval on which the ODE is defined).

    When one of the independent variables in a PDE is the time t , we often encounter problems in which the dependent variable ψ has its spatial dependence (everywhere) specified at t = 0 . This initial condition is a boundary condition for one end of an infinite time interval, and often appears with ∂ ψ / ∂ t also specified everywhere at t = 0 .

    It is important to know whether a set of boundary conditions is sufficient to determine a unique solution to a PDE or whether too many conditions are specified (with the result that no solution satisfying the conditions can exist). A complete analysis of this topic is beyond the scope of the present text, but we can summarize the results of such an analysis for the PDEs to be considered here.

    The solution to the Laplace equation (e.g., an electrostatic potential) is completely determined when the potential is specified everywhere on the boundary ( ∂ V ) of the spatial region ( V ) for which the PDE is to be solved. This specification is termed Dirichlet boundary conditions. The sufficiency of Dirichlet boundary conditions for the Laplace equation is to be expected, since it is in essence a statement that if a distribution of charge external to V produces a specific distribution of potential on ∂ V , it must also produce a unique potential at each point within V . Note also that a suitable external distribution of charge can produce a specific distribution of the normal derivative of potential, ∂ V / ∂ n , at each point of ∂ V , a specification called Neumann boundary conditions. Neumann boundary conditions are also sufficient to determine completely (except for an additive constant) a solution to the Laplace equation. However, if we attempt to impose both Dirichlet and Neumann boundary conditions (the combination is called Cauchy boundary conditions), the attempt will in general produce inconsistent results and therefore not lead to a solution to the Laplace PDE.
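    To make the Dirichlet case concrete, here is a minimal finite-difference sketch of our own (not from the text): Jacobi iteration for the Laplace equation on a square, with the potential held fixed on the boundary.

```python
import numpy as np

# Laplace equation on a 50x50 grid; Dirichlet data: potential 1 on the
# top edge, 0 on the other three edges.  Jacobi iteration replaces each
# interior value by the average of its four neighbors.
n = 50
v = np.zeros((n, n))
v[0, :] = 1.0                                  # top edge held at 1

for _ in range(5000):
    v[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1]
                            + v[1:-1, :-2] + v[1:-1, 2:])

print(v[n // 2, n // 2])                       # center potential, about 0.25
```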

    For wave equations, either Dirichlet or Neumann boundary conditions suffice to define PDE solutions, but the solutions are, in general, not unique. Cauchy boundary conditions are usually too restrictive. However, we are sometimes interested in wave-propagation problems in which we impose Cauchy boundary conditions, but not for all positions and times. This situation arises when waves are propagated from a fully specified source but their behavior elsewhere is not specified.

    Diffusion equations, having only a first time derivative, can accommodate fewer boundary conditions than the PDEs already discussed. In general they have no solutions if either Dirichlet or Neumann conditions are imposed at all spatial points at multiple times, but unique results are possible if either Dirichlet or Neumann conditions (but not both) are imposed at all spatial points at an initial time t = 0 .

    One observation regarding boundary conditions may be helpful: physical insight is a practical guide for understanding what to expect; one usually knows intuitively whether sufficient (but not excessive) conditions defining a problem have been imposed.



    MATH 2400, Introduction to Differential Equations

    Textbook: W.E. Boyce and R.C. DiPrima, Elementary Differential Equations and

    Boundary Value Problems, Seventh Edition, Wiley, 2003

    Week Dates Sections Topics

    1 8:25-28 1.3,2.1 Terminology. Linear D.E.

HW: 1.3 # 1, 2, 6, 7, 11, 18, 20, 26; 2.1 # 2, 9, 13, 16, 18, 20, 28, 30

    2 9: 3, 4 2.2,2.4 Separation of Variables. Conditions for solutions

    3 9: 8-11 2.6, 3.1 Exact Equations. Second order linear D.E.

    4 9: 15-19 3.2, 3.3 General Properties

    5 9: 22-25 3.4, 3.5 Exam1 Complex and repeated roots. Reduction of order

    6 9:29,10:1,2 3.6,3.7 Method of undetermined coef.,Variation of Parameters

    3.8,3.9 These sections are recommended for personal reading

    7 10:6-9 5.3,5.5 Series solutions near an ordinary point

8 10:14-16 5.5,5.6 Euler D.E. Series solutions near a regular singular point

9 10:20-23 6.1-6.3 Exam2 Laplace Transformation, its properties and applications

    10 10:27-30 6.4-6.6 Laplace Transformation (continued)

HW: 6.4 # 1, 3, 4, 5; 6.6 # 5, 6, 8, 10, 12

    11 11:3-6 7.3,7.5 Linear systems of D.E.

12 11:10-13 7.6,7.8 Complex and repeated eigenvalues

13 11:17-20 10.2,10.3 Fourier series

14 11:24 10.4,10.5 Heat conduction in a rod

15 12:1-4 10.7,10.8 Exam3 Vibrations of a string. Laplace equation

Grading system: 20% for each of the 3 exams; HW (30%); Maple projects (10%). The final exam is optional: it can only improve the student’s grade, and may replace the lowest of the 3 exam grades.


    Euler Factorization

    You learn from calculus that the derivative of a smooth function f(x), defined on some interval ((a,b)), is another function defined by the limit (if it exists)

    [f'(x) = \lim_{h\to0} \frac{f(x+h)-f(x)}{h}.]

    The derivative operator (\texttt{D}) is a linear operator, that is,

    [\texttt{D}\left(\alpha f + \beta g\right) = \alpha\,\texttt{D}f + \beta\,\texttt{D}g]

    for any smooth functions f, g and any constants α, β.

    The inverse of the derivative is called the «antiderivative» in mathematical literature, mostly because it is not an operator. Why? To be an operator, (\texttt{D}^{-1}) should assign to every input (namely, a function) a unique output (another function). Since the kernel (or null space) of the derivative operator (\texttt{D}) is a one dimensional space, it assigns infinitely many antiderivatives to every function, all differing by an arbitrary constant. This follows from the identities:

    The operator (\texttt{D}_c^{-1}) is not truly an inverse, but only a right inverse:

    Example: Let us take a constant function y(x) ≡ 1; then

    First order differential operator

    There are some examples of first order differential operators and their inverses.

    Singular first order differential operator

    If one of the functions r(x) or p(x) (or both) has a zero within the given interval, then the linear differential operator (L\left[ x, \texttt{D} \right] w(x) = r(x)\, \texttt{D}\left[ p(x)\, \texttt{D} w \right]) becomes a singular one. We present some typical examples of such operators and their inverses.

    Second order differential operator

    Example: Not self-adjoint operator

    Example (to be modified): Consider the initial value problem

    Example: Legendre equation

    Example (to be modified): Consider the initial value problem

    Example: Degenerate Hypergeometric Equation

    Example (to be modified): Consider the differential operator

    Singular second order differential operator

    If one of the functions r(x) or p(x) (or both) has a zero within the given interval, then the linear differential operator (L\left[ x, \texttt{D} \right] w(x) = r(x)\, \texttt{D}\left[ p(x)\, \texttt{D} w \right]) becomes a singular one. We present some typical examples of such operators and their inverses.

    Example: Confluent Hypergeometric function

    Example: Because the solutions to many mathematical physics problems are expressed by the hypergeometric functions, we consider the corresponding differential equation. ■

    We consider the nonlinear differential operator of the second order:




    Riccati Equations

    Jacopo Francesco Riccati (1676--1754) was a Venetian mathematician and jurist. He is best known for having studied the differential equation which bears his name:

    [ y' = f(x)\,y^2 + g(x)\,y + h(x) .]

    Riccati was educated first at the Jesuit school for the nobility in Brescia, and in 1693 he entered the University of Padua to study law. He received a doctorate in law in 1696. Encouraged by Stefano degli Angeli to pursue mathematics, he studied mathematical analysis. Riccati received various academic offers, but declined them in order to devote his full attention to the study of mathematical analysis on his own. Peter the Great invited him to Russia as president of the St. Petersburg Academy of Sciences. He was also invited to Vienna as an imperial councilor and was offered a professorship at the University of Padua. He declined all these offers. He was often consulted by the Senate of Venice on the construction of canals and dikes along rivers.

    When h(x) = 0, we get a Bernoulli equation. The Riccati equation has much in common with linear equations; for example, it has no singular solution. Except in special cases, the Riccati equation cannot be solved analytically using elementary functions or quadratures, and the most common way to obtain its solution is to represent it in series. Moreover, the Riccati equation can be reduced to a second order linear differential equation by the substitution (y = -u'/(f\,u)).

    It is sometimes possible to find a solution of a Riccati equation by guessing. Without knowing a solution to the Riccati equation, there is no chance of finding its general solution explicitly. If one solution φ is known, then the substitution w = y − φ reduces the Riccati equation to a Bernoulli equation. Another substitution, y = φ + 1/v, reduces the Riccati equation to a linear first order equation.
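    A small numerical illustration of our own: (y' = y^2 - x^2 + 1) has the particular solution (\varphi(x) = x), and the substitution (y = \varphi + 1/v) turns it into the linear equation (v' = -2xv - 1). Integrating both forms from the same initial data should agree:

```python
def rk4(f, x, y, x_end, n=1000):
    """Integrate y' = f(x, y) from (x, y) to x_end with n RK4 steps."""
    h = (x_end - x) / n
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y

# Direct integration of the Riccati equation, y(0) = 1, up to x = 0.5:
y_direct = rk4(lambda x, y: y*y - x*x + 1, 0.0, 1.0, 0.5)

# Via the substitution y = x + 1/v:  v(0) = 1/(y(0) - 0) = 1 and
# v' = -2*x*v - 1 (linear); then y(0.5) = 0.5 + 1/v(0.5).
v = rk4(lambda x, v: -2*x*v - 1, 0.0, 1.0, 0.5)
print(y_direct, 0.5 + 1/v)            # both near 3.32
```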

    The solution of the special Riccati equation can be represented as (y = -u' /(au)), where

    We can find the limit of the solution to the special Riccati equation when x → 0:

    Example: The Riccati equation

    The solution of the Riccati equation ( y' = y^2 + x^2 ) subject to the initial condition y(0) = 1 is

    Example: We consider a problem similar to the previous example that involves another standard Riccati equation

    When we impose the homogeneous condition y(0) = 0, its solution becomes

    Example: Consider the Riccati equation

    We plot the separatrix and the direction field




    Euler’s Method

    Up to this point practically every differential equation that we’ve been presented with could be solved. The problem with this is that these are the exceptions rather than the rule. The vast majority of first order differential equations can’t be solved.

    In order to teach you something about solving first order differential equations we’ve had to restrict ourselves down to the fairly restrictive cases of linear, separable, or exact differential equations or differential equations that could be solved with a set of very specific substitutions. Most first order differential equations however fall into none of these categories. In fact, even those that are separable or exact cannot always be solved for an explicit solution. Without explicit solutions to these it would be hard to get any information about the solution.

    So, what do we do when faced with a differential equation that we can’t solve? The answer depends on what you are looking for. If you are only looking for long term behavior of a solution you can always sketch a direction field. This can be done without too much difficulty for some fairly complex differential equations that we can’t solve to get exact solutions.

    The problem with this approach is that it’s only really good for getting general trends in solutions and for long term behavior of solutions. There are times when we will need something more. For instance, maybe we need to determine how a specific solution behaves, including some values that the solution will take. There is also a fairly large set of differential equations that are not easy to sketch good direction fields for.

    In these cases, we resort to numerical methods that will allow us to approximate solutions to differential equations. There are many different methods that can be used to approximate solutions to a differential equation and in fact whole classes can be taught just dealing with the various methods. We are going to look at one of the oldest and easiest to use here. This method was originally devised by Euler and is called, oddly enough, Euler’s Method.

    Let’s start with a general first order IVP

    [\frac{{dy}}{{dt}} = f\left( {t,y} \right),\hspace{0.25in}y\left( {{t_0}} \right) = {y_0} \tag{1}]

    where (f(t,y)) is a known function and the values in the initial condition are also known numbers. From the second theorem in the Intervals of Validity section we know that if (f) and (f_y) are continuous functions then there is a unique solution to the IVP in some interval surrounding (t = t_0). So, let’s assume that everything is nice and continuous so that we know that a solution will in fact exist.

    We want to approximate the solution to (1) near (t = t_0). We’ll start with the two pieces of information that we do know about the solution. First, we know the value of the solution at (t = t_0) from the initial condition. Second, we also know the value of the derivative at (t = t_0). We can get this by plugging the initial condition into (f(t,y)) in the differential equation itself. So, the derivative at this point is

    [\left. {\frac{{dy}}{{dt}}} \right|_{t = {t_0}} = f\left( {{t_0},{y_0}} \right)]

    Now, recall from your Calculus I class that these two pieces of information are enough for us to write down the equation of the tangent line to the solution at (t = t_0). The tangent line is

    [y = {y_0} + f\left( {{t_0},{y_0}} \right)\left( {t - {t_0}} \right)]

    Take a look at the figure below

    If (t_1) is close enough to (t_0) then the point (y_1) on the tangent line should be fairly close to the actual value of the solution at (t_1), or (y(t_1)). Finding (y_1) is easy enough. All we need to do is plug (t_1) into the equation for the tangent line.

    [{y_1} = {y_0} + f\left( {{t_0},{y_0}} \right)\left( {{t_1} - {t_0}} \right)]

    Now, we would like to proceed in a similar manner, but we don’t have the value of the solution at (t_1) and so we won’t know the slope of the tangent line to the solution at this point. This is a problem. We can partially solve it however, by recalling that (y_1) is an approximation to the solution at (t_1). If (y_1) is a very good approximation to the actual value of the solution then we can use that to estimate the slope of the tangent line at (t_1).

    So, let’s hope that (y_1) is a good approximation to the solution and construct a line through the point ((t_1, y_1)) that has slope (f(t_1, y_1)). This gives

    [y = {y_1} + f\left( {{t_1},{y_1}} \right)\left( {t - {t_1}} \right)]

    Now, to get an approximation to the solution at (t = t_2) we will hope that this new line will be fairly close to the actual solution at (t_2) and use the value of the line at (t_2) as an approximation to the actual solution. This gives

    [{y_2} = {y_1} + f\left( {{t_1},{y_1}} \right)\left( {{t_2} - {t_1}} \right)]

    We can continue in this fashion. Use the previously computed approximation to get the next approximation. So,

    [{y_3} = {y_2} + f\left( {{t_2},{y_2}} \right)\left( {{t_3} - {t_2}} \right)\hspace{0.25in}{y_4} = {y_3} + f\left( {{t_3},{y_3}} \right)\left( {{t_4} - {t_3}} \right)\hspace{0.25in}\mbox{etc.}]

    In general, if we have (t_n) and the approximation to the solution at this point, (y_n), and we want to find the approximation at (t_{n+1}) all we need to do is use the following.

    [{y_{n + 1}} = {y_n} + f\left( {{t_n},{y_n}} \right)\left( {{t_{n + 1}} - {t_n}} \right)]

    If we define ({f_n} = f\left( {{t_n},{y_n}} \right)) we can simplify the formula to

    [{y_{n + 1}} = {y_n} + {f_n}\left( {{t_{n + 1}} - {t_n}} \right) \tag{2}]

    Often, we will assume that the step sizes between the points (t_0), (t_1), (t_2), … are of a uniform size of (h). In other words, we will often assume that

    [{t_{n + 1}} = {t_n} + h]

    This doesn’t have to be done and there are times when it’s best that we not do this. However, if we do, the formula for the next approximation becomes

    [{y_{n + 1}} = {y_n} + h{f_n} \tag{3}]

    So, how do we use Euler’s Method? It’s fairly simple. We start with (1) and decide if we want to use a uniform step size or not. Then starting with ((t_0, y_0)) we repeatedly evaluate (2) or (3) depending on whether we chose to use a uniform step size or not. We continue until we’ve gone the desired number of steps or reached the desired time. This will give us a sequence of numbers (y_1), (y_2), (y_3), … (y_n) that will approximate the value of the actual solution at (t_1), (t_2), (t_3), … (t_n).

    What do we do if we want a value of the solution at some other point than those used here? One possibility is to go back and redefine our set of points to a new set that will include the points we are after and redo Euler’s Method using this new set of points. However, this is cumbersome and could take a lot of time especially if we had to make changes to the set of points more than once.

    Another possibility is to remember how we arrived at the approximations in the first place. Recall that we used the tangent line

    [y = {y_0} + f\left( {{t_0},{y_0}} \right)\left( {t - {t_0}} \right)]

    to get the value of (y_1). We could use this tangent line as an approximation for the solution on the interval ([t_0, t_1]). Likewise, we used the tangent line

    [y = {y_1} + f\left( {{t_1},{y_1}} \right)\left( {t - {t_1}} \right)]

    to get the value of (y_2). We could use this tangent line as an approximation for the solution on the interval ([t_1, t_2]). Continuing in this manner we would get a set of lines that, when strung together, should be an approximation to the solution as a whole.

    In practice you would need to write a computer program to do these computations for you. In most cases the function (f(t,y)) would be too large and/or complicated to use by hand and in most serious uses of Euler’s Method you would want to use hundreds of steps which would make doing this by hand prohibitive. So, here is a bit of pseudo-code that you can use to write a program for Euler’s Method that uses a uniform step size, (h).

    1. define (f\left( {t,y} \right)).
    2. input (t_0) and (y_0).
    3. input step size, (h) and the number of steps, (n).
    4. for (j) from 1 to (n) do
      1. (m = f\left( {{t_0},{y_0}} \right))
      2. ({y_1} = {y_0} + h*m)
      3. ({t_1} = {t_0} + h)
      4. Print (t_1) and (y_1)
      5. ({t_0} = {t_1})
      6. ({y_0} = {y_1})

      The pseudo-code for a non-uniform step size would be a little more complicated, but it would essentially be the same.
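      In Python the pseudo-code translates almost line for line; a sketch under the uniform-step assumption, using the IVP from the example below:

```python
import math

def euler_method(f, t0, y0, h, n):
    """Euler's method with uniform step h, following the pseudo-code."""
    for _ in range(n):
        m = f(t0, y0)          # slope at the current point
        y1 = y0 + h * m        # step along the tangent line
        t1 = t0 + h
        print(t1, y1)
        t0, y0 = t1, y1

# y' = 2 - e^(-4t) - 2y, y(0) = 1, five steps of h = 0.1.
euler_method(lambda t, y: 2 - math.exp(-4*t) - 2*y, 0.0, 1.0, 0.1, 5)
```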

      So, let’s take a look at a couple of examples. We’ll use Euler’s Method to approximate solutions to a couple of first order differential equations. The differential equations that we’ll be using are linear first order differential equations that can be easily solved for an exact solution. Of course, in practice we wouldn’t use Euler’s Method on these kinds of differential equations, but by using easily solvable differential equations we will be able to check the accuracy of the method. Knowing the accuracy of any approximation method is a good thing. It is important to know if the method is liable to give a good approximation or not.

Use Euler’s Method with a step size of (h = 0.1) to find approximate values of the solution of

[y' + 2y = 2 - e^{-4t},\qquad y(0) = 1]

at (t) = 0.1, 0.2, 0.3, 0.4, and 0.5. Compare them to the exact values of the solution at these points.

This is a fairly simple linear differential equation so we’ll leave it to you to check that the solution is

[y(t) = 1 + \frac{1}{2}e^{-4t} - \frac{1}{2}e^{-2t}]
In order to use Euler’s Method we first need to rewrite the differential equation in the form (y' = f(t,y)):

[y' = 2 - e^{-4t} - 2y]

From this we can see that (f(t,y) = 2 - e^{-4t} - 2y). Also note that (t_0 = 0) and (y_0 = 1). We can now start doing some computations.

[f_0 = f(0, 1) = 2 - e^{0} - 2(1) = -1\qquad y_1 = y_0 + h f_0 = 1 + (0.1)(-1) = 0.9]

So, the approximation to the solution at (t_1 = 0.1) is (y_1 = 0.9).

[f_1 = f(0.1, 0.9) = 2 - e^{-0.4} - 2(0.9) = -0.470320046\qquad y_2 = y_1 + h f_1 = 0.9 + (0.1)(-0.470320046) = 0.852967995]

Therefore, the approximation to the solution at (t_2 = 0.2) is (y_2 = 0.852967995).

      I’ll leave it to you to check the remainder of these computations.

[\begin{aligned} f_2 &= -0.155264954 & y_3 &= 0.837441500\\ f_3 &= 0.023922788 & y_4 &= 0.839833779\\ f_4 &= 0.1184359245 & y_5 &= 0.851677371\end{aligned}]

      Here’s a quick table that gives the approximations as well as the exact value of the solutions at the given points.

Time, (t_n)   Approximation   Exact   Error
(t_0 = 0)   (y_0 = 1)   (y(0) = 1)   0 %
(t_1 = 0.1)   (y_1 = 0.9)   (y(0.1) = 0.925794646)   2.79 %
(t_2 = 0.2)   (y_2 = 0.852967995)   (y(0.2) = 0.889504459)   4.11 %
(t_3 = 0.3)   (y_3 = 0.837441500)   (y(0.3) = 0.876191288)   4.42 %
(t_4 = 0.4)   (y_4 = 0.839833779)   (y(0.4) = 0.876283777)   4.16 %
(t_5 = 0.5)   (y_5 = 0.851677371)   (y(0.5) = 0.883727921)   3.63 %

We’ve also included the error as a percentage. It’s often easier to see how well an approximation does if you look at percentages. The formula for this is

[\text{percent error} = \frac{|\text{exact} - \text{approximate}|}{|\text{exact}|} \times 100]

We used absolute value in the numerator because we really don’t care at this point whether the approximation is larger or smaller than the exact value. We’re only interested in how close the two are.
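For instance, assuming the euler sketch from above, the following few lines reproduce the table for this example, percentage errors included:

    import math

    f = lambda t, y: 2 - math.exp(-4 * t) - 2 * y       # f(t, y) for this example
    exact = lambda t: 1 + 0.5 * math.exp(-4 * t) - 0.5 * math.exp(-2 * t)

    ts, ys = euler(f, 0.0, 1.0, 0.1, 5)
    for t, y in zip(ts, ys):
        err = 100 * abs(exact(t) - y) / abs(exact(t))   # percent error
        print(f"t = {t:.1f}  approx = {y:.9f}  exact = {exact(t):.9f}  error = {err:.2f} %")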

The maximum error in the approximations from the last example was 4.42%, which isn’t too bad, but also isn’t all that great of an approximation. So, provided we aren’t after very accurate approximations, this didn’t do too badly. However, this kind of error is generally unacceptable in almost all real applications. So, how can we get better approximations?

      Recall that we are getting the approximations by using a tangent line to approximate the value of the solution and that we are moving forward in time by steps of (h). So, if we want a more accurate approximation, then it seems like one way to get a better approximation is to not move forward as much with each step. In other words, take smaller (h)’s.

Below are two tables for the same differential equation as in the previous example: one gives approximations to the solution at (t) = 1, 2, 3, 4, and 5 for several step sizes, and the other gives the errors for each approximation. We’ll leave the computational details to you to check.

      Approximations
      Time Exact (h = 0.1) (h = 0.05) (h = 0.01) (h = 0.005) (h = 0.001)
      (t) = 1 0.9414902 0.9313244 0.9364698 0.9404994 0.9409957 0.9413914
      (t) = 2 0.9910099 0.9913681 0.9911126 0.9910193 0.9910139 0.9910106
      (t) = 3 0.9987637 0.9990501 0.9988982 0.9987890 0.9987763 0.9987662
      (t) = 4 0.9998323 0.9998976 0.9998657 0.9998390 0.9998357 0.9998330
      (t) = 5 0.9999773 0.9999890 0.9999837 0.9999786 0.9999780 0.9999774

      Percentage Errors
      Time (h = 0.1) (h = 0.05) (h = 0.01) (h = 0.005) (h = 0.001)
      (t) = 1 1.08 % 0.53 % 0.105 % 0.053 % 0.0105 %
      (t) = 2 0.036 % 0.010 % 0.00094 % 0.00041 % 0.0000703 %
      (t) = 3 0.029 % 0.013 % 0.0025 % 0.0013 % 0.00025 %
      (t) = 4 0.0065 % 0.0033 % 0.00067 % 0.00034 % 0.000067 %
      (t) = 5 0.0012 % 0.00064 % 0.00013 % 0.000068 % 0.000014 %

      We can see from these tables that decreasing (h) does in fact improve the accuracy of the approximation as we expected.

There are a couple of other interesting things to note from the data. First, notice that in general, decreasing the step size, (h), by a factor of 10 also decreased the error by about a factor of 10. This is what we expect from Euler’s Method: it is a first order method, meaning the error at a fixed time is roughly proportional to (h).
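A quick numerical check of this first order behavior, again assuming the euler sketch from earlier:

    import math

    f = lambda t, y: 2 - math.exp(-4 * t) - 2 * y
    y1_exact = 1 + 0.5 * math.exp(-4.0) - 0.5 * math.exp(-2.0)   # exact y(1)

    for h in (0.1, 0.01, 0.001):
        _, ys = euler(f, 0.0, 1.0, h, round(1 / h))    # step out to t = 1
        print(f"h = {h}:  error at t = 1 is {abs(y1_exact - ys[-1]):.7f}")

Each tenfold cut in (h) should cut the printed error by about a factor of ten.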

Also, notice that as (t) increases the approximation actually tends to get better. This isn’t entirely true: in all but the first case, the error at (t) = 3 is worse than the error at (t) = 2, but after that point it only gets better. This should not be expected in general. In this case it is more a function of the shape of the solution. Below is a graph of the solution (the line) as well as the approximations (the dots) for (h = 0.1).

Notice that the approximation is worst where the function is changing rapidly. This should not be too surprising. Recall that we’re using tangent lines to get the approximations, so the value of the tangent line at a given (t) can differ significantly from the value of the function when the function is changing rapidly at that point.

      Also, in this case, because the function ends up fairly flat as (t) increases, the tangents start looking like the function itself and so the approximations are very accurate. This won’t always be the case of course.

      Let’s take a look at one more example.

Use Euler’s Method to find the approximation to the solution of

[y' - y = -\frac{1}{2}e^{t/2}\sin(5t) + 5e^{t/2}\cos(5t),\qquad y(0) = 0]

at (t = 1), (t = 2), (t = 3), (t = 4), and (t = 5). Use (h = 0.1), (h = 0.05), (h = 0.01), (h = 0.005), and (h = 0.001) for the approximations.

We’ll leave it to you to check the details of the solution process. The solution to this linear first order differential equation is

[y(t) = e^{t/2}\sin(5t)]

      Here are two tables giving the approximations and the percentage error for each approximation.

      Approximations
      Time Exact (h = 0.1) (h = 0.05) (h = 0.01) (h = 0.005) (h = 0.001)
      (t) = 1 -1.58100 -0.97167 -1.26512 -1.51580 -1.54826 -1.57443
      (t) = 2 -1.47880 0.65270 -0.34327 -1.23907 -1.35810 -1.45453
      (t) = 3 2.91439 7.30209 5.34682 3.44488 3.18259 2.96851
      (t) = 4 6.74580 15.56128 11.84839 7.89808 7.33093 6.86429
      (t) = 5 -1.61237 21.95465 12.24018 1.56056 0.0018864 -1.28498

      Percentage Errors
      Time (h = 0.1) (h = 0.05) (h = 0.01) (h = 0.005) (h = 0.001)
      (t) = 1 38.54 % 19.98 % 4.12 % 2.07 % 0.42 %
      (t) = 2 144.14 % 76.79 % 16.21 % 8.16 % 1.64 %
      (t) = 3 150.55 % 83.46 % 18.20 % 9.20 % 1.86 %
      (t) = 4 130.68 % 75.64 % 17.08 % 8.67 % 1.76 %
      (t) = 5 1461.63 % 859.14 % 196.79 % 100.12 % 20.30 %

So, with this example Euler’s Method does not do nearly as well as it did on the first IVP. Some of the observations we made in Example 2 are still true, however. Decreasing the size of (h) does decrease the error, as we saw in the last example and would expect to happen. Also, as in the last example, decreasing (h) by a factor of 10 decreases the error by about a factor of 10.

However, unlike the last example, the error increases as (t) increases. This behavior is fairly common in approximations of this kind. We shouldn’t expect the error to decrease as (t) increases, as we saw in the last example. Each successive approximation is found using a previous approximation; therefore, at each step we introduce additional error, and so the approximations should, in general, get worse as (t) increases.
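To see this error accumulation directly, we can rerun this example (again with the hypothetical euler helper sketched earlier) and print the percentage error at each of the reported times; the values should match the (h = 0.05) column above.

    import math

    f = lambda t, y: y - 0.5 * math.exp(t / 2) * math.sin(5 * t) \
                       + 5 * math.exp(t / 2) * math.cos(5 * t)
    exact = lambda t: math.exp(t / 2) * math.sin(5 * t)

    h = 0.05
    ts, ys = euler(f, 0.0, 0.0, h, round(5 / h))
    for k in (1, 2, 3, 4, 5):
        j = round(k / h)                    # index of the point t = k
        err = 100 * abs(exact(k) - ys[j]) / abs(exact(k))
        print(f"t = {k}  approx = {ys[j]:9.5f}  error = {err:6.2f} %")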

Below is a graph of the solution (the line) as well as the approximations (the dots) for (h = 0.05).

      As we can see the approximations do follow the general shape of the solution, however, the error is clearly getting much worse as (t) increases.

So, Euler’s Method is a reasonable method for approximating well-behaved solutions that don’t change rapidly. However, not all solutions will be this nicely behaved. There are other approximation methods that do a much better job of approximating solutions. These are not the focus of this course however, so I’ll leave it to you to look further into this field if you are interested.

      Also notice that we don’t generally have the actual solution around to check the accuracy of the approximation. We generally try to find bounds on the error for each method that will tell us how well an approximation should do. These error bounds are again not really the focus of this course, so I’ll leave these to you as well if you’re interested in looking into them.


      Hypergeometric Series

      II.C Standardized Notation

Most hypergeometric series can be written as sums of rational products of binomial coefficients, but this representation is problematic because it is not unique. For example, a sum of products of binomial coefficients can appear to be quite different from the Chu-Vandermonde identity [Eq. (2)], and yet the ratio of its consecutive summands reveals that it is simply the Chu-Vandermonde identity with α = (k + 1 − m)/2, β = (k − m)/2, and γ = k + 1.

There is clearly an advantage to using the rising factorial notation, where ((a)_k = a(a+1)\cdots(a+k-1)), in which case we write

[{}_pF_q\left(\begin{matrix} a_1, \ldots, a_p\\ b_1, \ldots, b_q \end{matrix}; x\right) = \sum_{k=0}^{\infty}\frac{(a_1)_k \cdots (a_p)_k}{(b_1)_k \cdots (b_q)_k}\,\frac{x^k}{k!}.]
Even with this standardized notation, there are equivalent identities that look different, because there are nontrivial transformation formulas relating convergent hypergeometric series with different parameters.
      This is why, even if all identities for hypergeometric series were already known, it would not be enough to have a list of them against which one could compare the candidate in question. Just establishing the equivalence of two identities can be a very difficult task. This makes the WZ method all the more remarkable because it is independent of the form in which the identity is given and can even be used to verify (or disprove) a conjectured transformation formula.

