
Numerical Methods for Differential Equations [1]

[1] http://faculty.olin.edu/bstorey/Notes/DiffEq.pdf


Introduction

Differential equations can describe nearly all systems undergoing change. They are ubiquitous in science and engineering as well as economics, social science, biology, business, health care, etc. Many mathematicians have studied the nature of these equations for hundreds of years and there are many well-developed solution techniques. Often, systems described by differential equations are so complex, or the systems that they describe are so large, that a purely analytical solution to the equations is not tractable. It is in these complex systems where computer simulations and numerical methods are useful.

The techniques for solving differential equations based on numerical approximations were developed before programmable computers existed. During World War II, it was common to find rooms of people (usually women) working on mechanical calculators to numerically solve systems of differential equations for military calculations. Before programmable computers, it was also common to exploit analogies to electrical systems to design analog computers to study mechanical, thermal, or chemical systems. As programmable computers have increased in speed and decreased in cost, increasingly complex systems of differential equations can be solved with simple programs written to run on a common PC. Currently, the computer on your desk can tackle problems that were inaccessible to the fastest supercomputers just 5 or 10 years ago.

This chapter will describe some basic methods and techniques for programming simulations of differential equations. First, we will review some basic concepts of numerical approximations and then introduce Euler's method, the simplest method. We will provide details on algorithm development using the Euler method as an example. Next we will discuss error estimation and introduce some more accurate techniques. Finally, we will use the algorithms that are built into the MATLAB programming environment.

The fundamental concepts in this chapter will be introduced along with practical implementation programs. In this chapter we will present the programs written in the MATLAB programming language. It should be stressed that the results are not particular to MATLAB; all the programs in this chapter could easily be implemented in any programming language, such as C, Java, or assembly. MATLAB is a convenient choice as it was designed for scientific computing (not general purpose software development) and has a variety of numerical operations and graphical display capabilities built in. The use of MATLAB allows the student to focus more on the concepts and less on the programming.

1.1 FIRST ORDER SYSTEMS

A simple first order differential equation has the general form

\frac{dy}{dt} = f(y,t)    (1.1)

where dy/dt means the change in y with respect to time and f(y,t) is any function of y and time. Note that the derivative of the variable, y, depends upon itself. There are many different notations for dy/dt; common ones include \dot{y} and y'.

One of the simplest differential equations is

\frac{dy}{dt} = -y    (1.2)

We will concentrate on this equation to introduce many of the concepts. The equation is convenient because its easy analytical solution will allow us to check if our numerical scheme is accurate. This first order equation is also relevant in that it governs the behavior of heating and cooling, radioactive decay of materials, absorption of drugs in the body, the charging of a capacitor, and population growth, just to name a few.

To solve the equation analytically, we start by rearranging the equation as

\frac{1}{y}\frac{dy}{dt} = -1    (1.3)

and integrate once with respect to time to obtain

\ln(y) = -t + C    (1.4)

where C is a constant of integration. We remove the natural log term by taking the exponential of the entire equation,

y = e^{-t + C}    (1.5)
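The same solution can also be obtained symbolically in MATLAB as a quick cross-check. The short sketch below assumes the Symbolic Math Toolbox is available; dsolve belongs to that toolbox and is separate from the numerical routines used in the rest of this chapter.

%% symbolic check of the analytical solution (requires the Symbolic Math Toolbox)
syms y(t)
sol = dsolve(diff(y,t) == -y, y(0) == 1)   %% solve dy/dt = -y with y(0) = 1; returns exp(-t)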


Fig. 1.1 Graphical output from running Program 1.1 in MATLAB. The plot shows the function y = e^{-t} and the derivative of that function taken numerically and analytically.

Fig. 1.2 Extrapolation of the function y = e^{-t} based on the initial condition, y(0) = 1. For very short times, the estimate is quite good, but it clearly breaks down as we go forward in time.

slow-moving truck. Extrapolation is a good guess for where the system might be in the near future, but eventually the predictions will break down except in the simplest systems.

Since our predictions far into the future are not accurate, we will take small steps while the extrapolation assumption is good, figure out where we are, and then extrapolate forward again. Using our equation and initial condition, we know the value of the function and the slope at the initial time, t = 0. The value at a later time, t = Δt, can be predicted by extrapolation as

y(\Delta t) = y(0) + \Delta t \left.\frac{dy}{dt}\right|_{t=0}    (1.8)

where the notation dy/dt|_{t=0} means the derivative of y evaluated at time equals zero. For our specific equation, the extrapolation formula becomes

y(\Delta t) = y(0) - \Delta t\, y(0)    (1.9)

This expression is equivalent to the discrete difference approximation of the last section; we can rewrite Equation 1.9 as

\frac{y(\Delta t) - y(0)}{\Delta t} = -y(0)    (1.10)

Once the value of the function at t = Δt is known, we can re-evaluate the derivative and move forward to t = 2Δt. We typically call the time interval over which we extrapolate the time step, Δt. Equation 1.9 is used as an iteration


Fig. 1.3 Graphical output from running Program 1.2 in MATLAB. The points connected by the dashed line are the results of the numerical solution and the solid line is the exact solution, y = e^{-t}. The time step size is Δt = 0.5. This large time step size results in a large error between the numerical and analytical solutions, but is chosen to exaggerate the results. Better agreement between the numerical and analytical solutions can be obtained by decreasing the time step size.

equation to simply march forward in small increments, always solving for the value of y at the next time step given the known information. This procedure is commonly called Euler's method.

The result of this method for our model equation using a time step size of Δt = 0.5 is shown in Figure 1.3. We see that the extrapolation of the initial slope, dy/dt|_{t=0} = -1, gets us to the point (0.5, 0.5) after the first time step. We then re-evaluate the slope, which is now -0.5, and use that slope to extrapolate the next time step to t = 1, where we land at (1, 0.25). This process repeats. While the error in Figure 1.3 seems large, the basic trend is correct: as we make the time step size smaller and smaller, the numerical solution comes closer to the true analytical solution.

A simple example of a MATLAB script that implements Euler's method is shown below. This program also plots the exact, known solution for comparison.

Program 1.2: Euler’s method for the first order equation.

clear;                          %% clear existing workspace
y = 1;                          %% initial condition
dt = 0.5;                       %% set the time step interval
time = 0;                       %% set the start time
t_final = 2;                    %% end time of the simulation
Nsteps = round(t_final/dt);     %% number of time steps to take, integer
plot(time,y,'*');               %% plot initial condition
hold on;                        %% accumulate contents of the figure

for i = 1:Nsteps                %% number of time steps to take
    y = y - dt*y;               %% Equation 1.9
    time = time + dt;           %% increment time
    plot(time,y,'*');           %% plot the current point
end

t = linspace(0,t_final,100);    %% plot analytical solution
y = exp(-t);
plot(t,y,'r')
xlabel('time');                 %% add plot labels
ylabel('y');

1.1.3 Evaluating error using Taylor series

When solving equations such as 1.2 we typically have information about the initial state of the system and would like to understand how the system evolves. In the last section we used the intuitive approach of discussing extrapolation: we simply used information about the slope to propagate the solution forward. In this section we will place the extrapolation notion in a more formal mathematical framework and discuss the error of these approximations using the Taylor series.
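The core of that framework is the Taylor expansion of the solution about the current time; the following is a brief sketch of the standard argument, written in the notation used above.

y(t + \Delta t) = y(t) + \Delta t \frac{dy}{dt} + \frac{\Delta t^2}{2} \frac{d^2 y}{dt^2} + \frac{\Delta t^3}{6} \frac{d^3 y}{dt^3} + \cdots

Euler's method keeps only the first two terms, so the error made in a single step is proportional to Δt^2. Reaching a fixed final time T requires T/Δt steps, so the accumulated error is proportional to Δt^2 (T/Δt) = T Δt; that is, the global error of Euler's method scales linearly with Δt.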


Fig. 1.4 Taylor series approximation of the function exp(t). We have included the first three terms in the Taylor expansion. It is clear that the approximation is valid for larger and larger t as more terms are retained.

Fig. 1.5 Error of the Euler method as we change the time step size. For comparison we show the line error ~ Δt. Since the slope of the error curve matches the slope of this line, we know that the error scales as Δt.

their result is behaving as expected. The knowledge of the scaling is also used in more complex methods that use adaptive control of the time step size to ensure that the solution is within acceptable error bounds.

One way to confirm the scaling of a numerical method is to plot the error on a log-log plot. In Figure 1.5 we plot the error in applying Euler's method to our model equation as a function of the time step size. We also plot the line error ~ Δt. We find that both lines have the same slope. A straight line on a log-log plot means that the plot follows a power law, i.e. error ~ Δt^n; the slope of the line provides the power n. We find from this plot that our Euler method error scales linearly with Δt, as the slopes of the two displayed curves match. This graphical result agrees with the prediction of the error using the Taylor series.

To provide a little insight into how such graphical results can be easily generated, we present the program that created Figure 1.5. The program is a simple modification of Program 1.2. In the program we simply loop the Euler solver over several time step sizes and store the values of the error for plotting. The error is defined as the difference


between the true and numerical solutions at the end of the integration period. In this example we integrate until t = 1, but the precise time that we integrate to is irrelevant.

Program 1.3: Program to check error scaling in Euler method. The result is shown in Figure 1.5.

clear;                                %% clear existing workspace
dt = [0.0001 0.0005 0.001 0.005 0.01 0.05 0.1];

for j = 1:length(dt)
    y = 1;                            %% initial condition
    time = 0;                         %% set the start time
    t_final = 1.;                     %% final time
    Nsteps = round(t_final/dt(j));    %% number of steps to take
    for i = 1:Nsteps
        y = y - dt(j)*y;              %% extrapolate one time step
        time = time + dt(j);          %% increment time
    end
    X(j,2) = exp(-t_final) - y;       %% compute the error and store
    X(j,1) = dt(j);                   %% store the time step
end

loglog(X(:,1),X(:,2))                 %% display on log-log plot

1.1.4 Programming and implementation

In this section we provide a few different ways to create the program that implements Euler's method. Program 1.2 is a perfectly acceptable way to implement Euler's method for this equation. However, since we will be interested in applying this same basic method to different sets of equations, there is some benefit in breaking the program into functions. While the benefit with something as simple as Euler's method applied to first order systems is not substantial, this programming effort will become worthwhile as we proceed to the next section and deal with systems of equations. We proceed with a demonstration of several programs that are equivalent to Program 1.2. It is recommended to try these programs as you proceed through the chapter, making sure to understand each program before proceeding to the next. Consult the section on programming functions in MATLAB in order to understand the syntax of the programs presented.

One possible program is provided in Program 1.4. This program makes use of different functions to keep the Euler algorithm separate, so that it only needs to be programmed once and can then be left alone for all other equations. Program 1.4 has the advantage that to solve a different system you do not need to modify the lines of code that deal with the Euler method.

The program is broken into two functions, which can then be called from the command line. For example, the command euler1storder(1,0.01,1) means the function will solve the first order equation with an initial condition of 1 and a time step size of 0.01, until a time of 1. The first function, derivs, contains the governing differential equation we would like to solve. The input to this function is the current value of y and the time, and the output of the function is the derivative, or the right-hand side of the differential equation. The function euler1storder contains all the code directly related to Euler's method. The benefit of this arrangement is that a different equation is easy to solve without the risk of corrupting the Euler method itself; you would never need to modify the code inside the function euler1storder.

Program 1.4: Use of functions to generalize the Euler program. All the code should be in one m-file named modeleqns.m.

%% derivative function
function dy = derivs(time,y)
dy = -y;                         %% governing equation

%% Euler Solver
function euler1storder(y,dt,t_final)
clf;
Nsteps = round(t_final/dt);      %% number of steps to take
time = 0;

for i = 1:Nsteps
    dy = derivs(time,y);         %% compute the derivatives
    y = y + dy*dt;               %% extrapolate one time step
    time = time + dt;            %% increment time
    plot(time,y,'*');            %% plot the current point
    hold on;
end


    time = time + dt;
    t(i+1) = time;
    data(i+1) = y;
end
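The lines above are the closing lines of that solver. For reference, a minimal sketch of the complete scalar function is given below; it assumes the same structure as the systems version shown later in Program 1.8.a, and the function signature matches the usage example that follows.

%% Euler Solver (sketch of the full scalar version)
%% Place this code in a file called eulersolver.m
function [t,data] = eulersolver(y,dt,t_final,derivs_Handle)
time = 0;
Nsteps = round(t_final/dt);             %% number of steps to take
t = zeros(Nsteps,1);                    %% storage for the output times
data = zeros(Nsteps,1);                 %% storage for the solution
t(1) = time;                            %% store initial condition
data(1) = y;
for i = 1:Nsteps
    dy = feval(derivs_Handle,time,y);   %% compute the derivative
    y = y + dy*dt;                      %% extrapolate one time step
    time = time + dt;
    t(i+1) = time;
    data(i+1) = y;
end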

Program 1.6.b: Form of the derivatives function. In this context, the derivative function should be contained in a separate file named derivs.m.

%% derivative function
%% place in file derivs.m
function dy = derivs(time,y)
dy = -y;       %% governing equation

Program 1.6 can be executed from the command line or inside another script as follows:

[t,y] = eulersolver(1,0.01,1,@derivs);

This final program is the easiest to modify. If you have a different set of equations, then you only need to write a new derivative function and set the initial conditions and integration parameters in the call to eulersolver. Again, the advantages of the final program may not be immediately apparent in the case of the simple first order system. As we proceed to systems of equations and more complex methods, we will start to see the real advantages of this modularized programming. These same programming concepts can be applied in any programming language; we have chosen MATLAB as our convention herein.

1.2 SECOND ORDER SYSTEMS: MASS-SPRING

Now that we have considered first order systems, we will move on to second order systems. The differences between first and second order systems and their basic behaviors were detailed in Chapter 1. The simplest example of a second order system is a mass oscillating on a spring. A mass is attached to a fixed spring where gravity is normal to the direction of motion; the spring is pulled back and held at rest, and finally the mass is released and oscillates.

The governing equations for this system can be derived using Newton's law,

F = ma    (1.20)

where F is the force exerted on the mass m and a is its acceleration. Springs come in many shapes and sizes, but many obey a simple linear relation: the force exerted by the spring is proportional to the amount that it is stretched, or

F = -kx    (1.21)

where k is called the spring constant and x is the displacement of the spring from the equilibrium state. Equating the above expressions for the force leads to the expression

ma = -kx    (1.22)

Remembering that acceleration is the second derivative of position, we have a second order differential equation,

m \frac{d^2 x}{dt^2} = -kx    (1.23)

The initial conditions for starting our spring system are that the spring is pulled back, held steady, then released. Mathematically, the initial condition is

x(t=0) = x_0  and  \frac{dx}{dt}(t=0) = 0.

When solving differential equations numerically we usually like to work with systems of equations that involve only first derivatives. This is convenient because the numerical implementation can be generalized to solve any problem, regardless of size and order of the highest derivative. In the above example, the second order system is transformed quite easily using the relationships between velocity, position, and acceleration,

v = \frac{dx}{dt}    (1.25)

where v is the velocity. We can easily rewrite equation 1.23 as two equations,

\frac{dv}{dt} = -\frac{k}{m} x    (1.26)

\frac{dx}{dt} = v    (1.27)

The reason for rewriting the equations as a system of two coupled equations will become clear as we proceed. We say that the equations are coupled because the derivative of velocity is related to the position and the derivative of position is related to the velocity. We will not detail methods for the analytical solution of this equation. Your intuition for the system should be that the solution is oscillatory. In our model there is no mechanism for damping (such as friction), therefore the energy of the system must be conserved. We can easily confirm that the exact solution to this problem is

x(t) = x_0 \cos\!\left(\sqrt{k/m}\; t\right)    (1.28)
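As a quick check, differentiating this solution twice confirms that it satisfies Equation 1.23 together with the stated initial conditions:

\frac{d^2 x}{dt^2} = -\frac{k}{m}\, x_0 \cos\!\left(\sqrt{k/m}\; t\right) = -\frac{k}{m}\, x

and at t = 0 the solution gives x = x_0 and dx/dt = 0.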

1.2.1 Implementation of Euler’s method for second order systems

From the initial condition and the equations, 1.26 & 1.27, we find that at the instant you release the spring

\frac{dv}{dt}(t=0) = -\frac{k}{m}\, x_0    (1.29)

\frac{dx}{dt}(t=0) = 0    (1.30)

These expressions show that the acceleration of the mass is negative (the spring is contracting) but the position of the mass is not yet changing. The important thing to note from the above equations is that you know the value of the functions (position and velocity are given from the initial condition) and you know the value of their derivatives from the governing equations.

To solve these equations with Euler's method, we simply apply Euler's method to both equations in our system simultaneously to predict the state of the system a short time in the future,

v(\Delta t) = v(0) + \Delta t \left.\frac{dv}{dt}\right|_{t=0}    (1.31)

x(\Delta t) = x(0) + \Delta t \left.\frac{dx}{dt}\right|_{t=0}    (1.32)

Substituting equations 1.27 & 1.26 into the above equations and generalizing beyond the first time step yields

v_{N+1} = v_N - \Delta t\, \frac{k}{m}\, x_N    (1.33)

x_{N+1} = x_N + \Delta t\, v_N    (1.34)

The subscript N refers to the Nth time step. It is assumed that the solution at time step N is known and N+1 is the next, unknown time step.

Without loss of generality, in the programs that follow we will simplify the constants in the equations by assuming that k = 1 and m = 1. In order to write a program that is extendable to larger systems, we will make use of MATLAB's whole array operations. The use of such operations is detailed in the MATLAB Primer.


Nsteps = round(t_final/dt);      %% number of steps to take

plot(time,y(1),'*');             %% plot initial condition
hold on;                         %% accumulate contents of the figure

for i = 1:Nsteps                 %% number of time steps to take
    dy(2) = -y(1);               %% equation for dv/dt
    dy(1) = y(2);                %% equation for dx/dt
    y = y + dt*dy;               %% integrate both equations with Euler
    time = time + dt;
    plot(time,y(1),'*');         %% plot the current position
    plot(time,cos(time),'r.');   %% plot the exact solution
end
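For context, the loop above must be preceded by a short set-up block; a minimal sketch of one possible set-up is given below, assuming the same conventions as Program 1.2 (the particular values of dt and t_final are illustrative only).

clear;                 %% clear existing workspace
y = zeros(2,1);        %% array holding [position; velocity]
dy = zeros(2,1);       %% array holding the derivatives
y(1) = 1;              %% initial position
y(2) = 0;              %% initial velocity
dt = 0.1;              %% time step size (illustrative)
time = 0;              %% start time
t_final = 10;          %% end time of the simulation (illustrative)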

Program 1.7 is nearly the same as Program 1.2, the first program we wrote to implement Euler's method on first order systems. In the program we simply define the initial condition for position and velocity and store these values in the first and second elements of the array y. We then iterate the Euler method as before using the MATLAB whole array operations.

We can follow the same logic as was used to reach Program 1.6 and create the Euler solver as a general function that can be called from anywhere. The new Euler solver is general in that it can be used on any system of first order differential equations.

Program 1.8.a: Euler solver program that needs to be created once and then may be applied to different systems. The function requires the initial condition, time step size, final time, and the handle to the derivative function. The function can handle systems of differential equations of any size. This program should be contained in a separate file called eulersolver.m.

%% Euler Solver
%% Place this code in a file called eulersolver.m
function [t,data] = eulersolver(y,dt,t_final,derivs_Handle)
time = 0;
Nsteps = round(t_final/dt);             %% number of steps to take
t = zeros(Nsteps,1);
data = zeros(Nsteps,length(y));
t(1) = time;                            %% store initial condition
data(1,:) = y';
for i = 1:Nsteps
    dy = feval(derivs_Handle,time,y);
    y = y + dy*dt;
    time = time + dt;
    t(i+1) = time;
    data(i+1,:) = y';
end

Program 1.8.b: Form of the derivatives function for the mass-spring system. In this context, the derivative function should be contained in a separate file named derivs.m.

%% derivative function
%% place in file derivs.m
function dy = derivs(time,y)
dy = zeros(2,1);     %% initialize dy array and orient as column
dy(2) = -y(1);       %% dv/dt = -x
dy(1) = y(2);        %% dx/dt = v

We can run the above program by typing the following commands at the MATLAB prompt or by embedding them into another program or script.

[T,Y] = eulersolver([1;0],0.1,3,@derivs);
plot(T,Y(:,1));     %% plot position
plot(T,Y(:,2));     %% plot velocity

A final programming tip is to define variables that correspond to the positions of the different variables within the data array. This definition will help you keep the equations straight, since once the system becomes large it is difficult to remember whether y(1) is position or velocity. We can define integers corresponding to the indices into the array so that instead of typing y(1) we write y(X). There are other possible solutions; we provide one simple implementation below.


Fig. 1.7 Midpoint approximation for dy/dt = -y. The figure shows the function, plus extrapolations using the slope evaluated at the beginning of the interval, the end of the interval, and at the midpoint. The slope at any point on the curve is -y.

%% derivative function
%% place in file derivs.m
function dy = derivs(time,y)
dy = zeros(2,1);     %% initialize dy array and orient as column
X = 1;               %% index of position in the array
V = 2;               %% index of velocity in the array
dy(V) = -y(X);       %% dv/dt = -x
dy(X) = y(V);        %% dx/dt = v

All the programs shown in this section are equivalent in function. The advantage of the way we have developed the final program is that it is easy to change for any system of equations of any size. You now possess a very general program for solving any system of differential equations using Euler's method. The manner in which we implemented these programs is not unique; there are an infinite number of ways to implement such a code. We have tried to work toward a program that is easy to modify and easy to understand. There might be even cleaner and easier programs that provide more flexibility and easier reading; can you come up with one?

1.3 MIDPOINT METHOD

When numerically solving differential equations, what we really want to do is find the best estimate for the average slope across the time step interval. If we want to know how long it will take to drive from Boston to New York, we must know our average speed over that interval, accounting for driving time on the open highway, traffic, getting pulled over by the highway patrol, and stops for gas. So far we have used the initial value of the derivative to extrapolate across the time step interval, since this is the only location where we have any information about the function. We saw the folly in this assumption in the previous section when we found that the oscillations in the mass-spring system grew in amplitude over time. In this section we show a method to obtain a better estimate of the average slope by using the slope of the function at the midpoint of the interval.

Consider Figure 1.7, where we have plotted the function y = e^{-t} and various approximations for the derivative used to shoot across the interval t = 0 to t = 1. We have used the value of the derivative at the beginning, end, and midpoint of the interval. We see that the simple Euler extrapolations based on the initial and final slopes are quite far off, while the extrapolation based upon the midpoint approximation is much better. The midpoint works better for this specific case, but we can also prove that the midpoint is a better representation of the average slope for the interval.


Fig. 1.8 Example of the midpoint method for the model equation dy/dt = -y. The steps are shown in the four panels, going left to right, then down. The first image shows the exact solution; at this time we only know the initial condition. In the second image we use the Euler method to extrapolate to the midpoint (dashed line). In the third image we find the slope at the midpoint as if we were going to continue with Euler's method using the half step size. Finally, in the fourth frame we use the midpoint slope to shoot across the interval.
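Written in the subscript notation used earlier, the update illustrated by these panels is the standard midpoint scheme: take a half Euler step to estimate the solution at the middle of the interval, evaluate the slope there, and use that slope to cross the full interval,

y_{N+1/2} = y_N + \frac{\Delta t}{2}\, f(y_N, t_N)

y_{N+1} = y_N + \Delta t\, f\!\left(y_{N+1/2},\; t_N + \frac{\Delta t}{2}\right)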

Program 1.9: General midpoint solver that has the same usage as the Euler solver created earlier. This program should be created and placed in a file called midpointsolver.m.

%% Midpoint Solver
%% Place this code in a file called midpointsolver.m
function [t,data] = midpointsolver(y,dt,t_final,derivs_Handle)
time = 0;
Nsteps = round(t_final/dt);                      %% number of steps to take
t = zeros(Nsteps,1);
data = zeros(Nsteps,length(y));
t(1) = time;                                     %% store initial condition
data(1,:) = y';
for i = 1:Nsteps
    dy = feval(derivs_Handle,time,y);            %% evaluate the initial derivatives
    yH = y + dy*dt/2;                            %% take Euler step to midpoint
    dy = feval(derivs_Handle,time+dt/2,yH);      %% re-evaluate the derivs at the midpoint
    y = y + dy*dt;                               %% shoot across the interval
    time = time + dt;                            %% increment time
    t(i+1) = time;                               %% store for output
    data(i+1,:) = y';                            %% store for output
end

We can now test this program by using the same derivative function and construct as used in Program 1.6. Create the function that computes the derivatives for the spring equations and rename the file derivs_spring.m. We will start giving the derivative files more descriptive names as we accumulate more functions. We can compare the midpoint solver to the Euler solver by typing the following commands at the MATLAB prompt or by embedding them into another program or script.

[T,Y] = eulersolver([1;0],0.1,3,@derivs_spring);
plot(T,Y(:,1),'r--');
hold on;                %% keep the Euler curve so both solutions can be compared
[T,Y] = midpointsolver([1;0],0.1,3,@derivs_spring);
plot(T,Y(:,1),'b');

The result of this exercise is shown in Figure 1.9.


Fig. 1.9 Comparison of the solution of the mass-spring system using the midpoint method and Euler's method. It is clear that the midpoint method is far superior in this case.

Fig. 1.10 Scaling of the error of the midpoint method as applied to the model problem dy/dt = -y. We find that the midpoint algorithm scales as Δt^2.

Finally, we close the section on the midpoint method by evaluating the error in the midpoint method. We create a derivatives function called derivs1st that represents the first order equation that was the focus of the first few sections of this chapter.

function dy = derivs1st(time,y)
dy = -y;

We can now use either the midpoint solver or the Euler solver to evaluate this equation. To test the error we solve the equation until t = 1 with the midpoint solver and compute the difference with the exact solution at that time. We then repeat this test at several different time step sizes. The result is shown in Figure 1.10 and the generating program is given by Program 1.9. The figure also shows a line that scales as Δt^2; since its slope matches that of the error curve, the midpoint method scales as Δt^2. This scaling means that if we halve the time step then we bring the error down by a factor of 4.
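A minimal sketch of such an error-scaling driver is given below; it assumes the midpointsolver and derivs1st functions defined above and mirrors the loop structure of Program 1.3 (the list of time step sizes is an arbitrary choice).

clear;                                    %% clear existing workspace
dt = [0.0001 0.0005 0.001 0.005 0.01 0.05 0.1];
t_final = 1;                              %% integrate until t = 1
for j = 1:length(dt)
    [t,y] = midpointsolver(1,dt(j),t_final,@derivs1st);   %% solve dy/dt = -y
    X(j,1) = dt(j);                       %% store the time step size
    X(j,2) = abs(exp(-t_final) - y(end)); %% error at the final time
end
loglog(X(:,1),X(:,2));                    %% display on a log-log plot
xlabel('\Delta t'); ylabel('error');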


Fig. 1.11 The error between the Runge-Kutta method and the exact solution as a function of time step size. On the plot we also display a function that scales as Δt^4. We see that this fits the slope of the data quite well; therefore the error in the Runge-Kutta approximation scales as Δt^4. Once the error reaches the resolution of double precision numbers, it is dominated by round-off and no longer decreases with the time step.

    k3 = dt*feval(derivs_Handle,time+dt/2,y+k2/2);
    k4 = dt*feval(derivs_Handle,time+dt  ,y+k3  );
    y = y + k1/6 + k2/3 + k3/3 + k4/6;
    time = time + dt;
    t(i+1) = time;
    data(i+1,:) = y';
end
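For completeness, a sketch of how the full solver might be assembled is given below. Only the k3, k4, and update lines above come from the original program; the function name rksolver, the k1 and k2 lines, and the bookkeeping are assumptions that follow the standard fourth-order Runge-Kutta formulas and the calling convention of eulersolver and midpointsolver.

%% Runge-Kutta (4th order) solver, sketch
function [t,data] = rksolver(y,dt,t_final,derivs_Handle)
time = 0;
Nsteps = round(t_final/dt);              %% number of steps to take
t = zeros(Nsteps,1);
data = zeros(Nsteps,length(y));
t(1) = time;                             %% store initial condition
data(1,:) = y';
for i = 1:Nsteps
    k1 = dt*feval(derivs_Handle,time     ,y     );
    k2 = dt*feval(derivs_Handle,time+dt/2,y+k1/2);
    k3 = dt*feval(derivs_Handle,time+dt/2,y+k2/2);
    k4 = dt*feval(derivs_Handle,time+dt  ,y+k3  );
    y = y + k1/6 + k2/3 + k3/3 + k4/6;   %% classical RK4 update
    time = time + dt;
    t(i+1) = time;
    data(i+1,:) = y';
end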

Since we have only given the equations to implement the Runge-Kutta method, it is not clear how the error behaves. Rather than perform the analysis, we will compute the error by solving an equation numerically and comparing the result to an exact solution as we vary the time step. To test the error we solve the model problem dy/dt = -y, where y(0) = 1, and we integrate until time t = 1. We have conducted this same test with the midpoint and Euler solvers. In Figure 1.11 we plot the error between the exact and numerical solutions at the final time as a function of the time step size. We also plot a function that scales as Δt^4 on the same graph to show that the error of the Runge-Kutta method scales as Δt^4. This is quite good: if we halve the time step size we reduce the error by 16 times. Generating this figure only requires minor modification to Program 1.9. The minimum error visible at the smallest time steps is due to the accuracy of representing real numbers with a finite number of digits.

1.5 BUILT-IN MATLAB SOLVERS

At this point it is worth introducing the ODE solvers that are built into MATLAB. These solvers are very general, employ adaptive time stepping (speeding up or slowing down as needed), and use the Runge-Kutta method as the basic workhorse. So you ask: if MATLAB can do all this already, then why did you make us write all these programs? Well, it is very easy to employ packaged numerical techniques and obtain bad answers, especially in complex problems. It is also easy to use a package that works just fine, but the operator (i.e. you) makes a mistake and gets a bad answer. It is important to understand some of the basic issues of ODE solvers so that you will be able to use them correctly and intelligently. On the other hand, if other people have already spent a lot of time developing and debugging sophisticated techniques that work really well, why should we replicate all their work? We turn to these routines at this time.

Just as you have developed three solvers that have the same functionality, MATLAB has employed several different algorithms for solving differential equations. The most common of the MATLAB solvers is ode45; the usage of the function will be very similar to the routines that you wrote for the Euler method, midpoint method, and Runge-Kutta.

The ode45 command uses the same Runge-Kutta algorithm you developed, only the MATLAB version uses adaptive time stepping. With these algorithms, the user specifies the amount of acceptable error and the algorithm adjusts the time step size to maintain this error level. Therefore, with adaptive algorithms you cannot generate a plot of error vs. time step size. The algorithm does have the flexibility to force a fixed time step size. In general these adaptive algorithms work by comparing the difference between taking a step with two methods that have different orders (e.g. midpoint, with error ~ Δt^2, and Runge-Kutta, with error ~ Δt^4). The difference is indicative of the error, and the time step is adjusted (increased or decreased) to hold this error constant.

An example of the usage of MATLAB's ode45 command is illustrated in the commands below. We show the same example of the mass on the spring as in previous sections. We can see that the usage of MATLAB's ode45 command is quite similar to the methods we developed in previous sections. The command odeset allows the user to control all the options for the solver, including method, maximum time step size, acceptable error tolerances, and output format options. The reader should consult the MATLAB help functions to discover the different options of the odeset command.

options = odeset(’AbsTol’,1e-9); %% set solver options [t,y] = ode45(@derivs spring,[0 30],1,options); %% solve equations plot(t,y); %% plot position

Notice that the assumption of the ode45 command is that the function that supplies the derivatives has the form dydt = derivativefunction(t,y). We have assumed the same format for the derivative functions throughout this chapter, so you may use the same derivative functions with the routines that you have written as well as with the MATLAB solvers. Besides ode45, MATLAB has several other solvers that are designed for different types of equations. There are also a variety of plotting and display functions that accompany the differential equation solvers. The functionality of MATLAB is well documented on the MathWorks web page and we leave it to the student to explore the different functions.
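As a small illustration of a few of these options: RelTol, AbsTol, and MaxStep are genuine odeset parameters, while the particular values and the choice of output times below are arbitrary for this sketch.

%% request output at specific times and tighten the tolerances (sketch)
tspan = 0:0.1:30;                       %% times at which output is returned
options = odeset('RelTol',1e-6, ...     %% relative error tolerance
                 'AbsTol',1e-9, ...     %% absolute error tolerance
                 'MaxStep',0.5);        %% cap on the internal step size
[t,y] = ode45(@derivs_spring,tspan,[1;0],options);
plot(t,y(:,1));                         %% plot position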

1.6 CHECKING THE SOLUTION

One of the common difficulties in using numerical methods is that it takes very little time to get an answer but much longer to decide whether it is right. The first test is to check that the system is behaving physically. Usually, before running simulations, it is best to use physics to try to understand qualitatively what you think your system will do. Will it oscillate, will it grow, will it decay? Do you expect your solution to be bounded? For example, if you start a pendulum swinging under gravity alone, you would not expect the height of the swing to grow.

We already encountered unphysical growth when we solved the mass-spring motion using Euler's method in Figure 1.6. When the time step was large we noticed unphysical behavior: the amplitude of the mass on the spring was growing with time. This growth in oscillation amplitude violates the principle of conservation of energy, therefore we know that something is wrong with the result.

One simple test of a numerical method is to change the time step and see what happens. If you have implemented the method correctly, the answer should converge as the time step is decreased. If you know the order of your approximation, then you know how fast this convergence should happen. If the method has an error proportional to Δt, then you know that cutting the time step in half should cut the error in half. You should note that just because the solution converges does not mean that your answer is correct.
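A minimal sketch of such a convergence check, assuming the eulersolver and derivs_spring functions developed earlier (the step size and final time are arbitrary choices):

%% compare solutions computed with dt and dt/2 at the same final time
dt = 0.01;
[t1,y1] = eulersolver([1;0],dt,  30,@derivs_spring);
[t2,y2] = eulersolver([1;0],dt/2,30,@derivs_spring);
abs(y1(end,1) - y2(end,1))              %% change in the final position
%% for a first order method, halving dt again should roughly halve this difference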

The MATLAB routines use adaptive time stepping; therefore you should vary the error tolerance rather than the time step interval. You should always check for convergence as you vary the error tolerance. Plot the difference between subsequent solutions as you vary the error tolerance.
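One way to carry this out for the spring system is sketched below; the two tolerance values are arbitrary, and requesting output at a fixed set of times makes the two solutions directly comparable.

%% solve with two tolerances, requesting output at the same times
tspan = 0:0.1:30;
opts1 = odeset('RelTol',1e-4);
opts2 = odeset('RelTol',1e-8);
[t,ya] = ode45(@derivs_spring,tspan,[1;0],opts1);
[t,yb] = ode45(@derivs_spring,tspan,[1;0],opts2);
plot(t,abs(ya(:,1)-yb(:,1)));           %% difference between the two solutions
xlabel('time'); ylabel('difference in position');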

Finally, we note that most of the examples that we cover in this class are easily tackled with the tools presented in this chapter. This does not mean that there are no systems where the details of the numerical method can contaminate the results. However, the algorithms included with MATLAB are very robust and work well in many applications.