
The Lorenz “Butterfly”

Foster Morrison

Turtle Hollow Associates, Inc.

1. Dynamics before Chaos Theory

Nonlinear dynamics originated with Newton’s laws of physics, though much of the development began earlier with the works of Galileo, Copernicus and Kepler, as well as astronomers and mathematicians in the ancient world.

The study of linear systems did not flourish until the 19th century, when Josiah Willard Gibbs, a physicist at Yale, developed the concept of a vector. Of course, to a scientist or engineer, a vector is a column of numbers, but to a mathematician, it is an element in a vector space. A mathematical function can be an element of a Hilbert space, a more exotic kind of vector space, and this concept is useful in both pure and applied mathematics, as well as scientific applications.

Kepler’s laws of planetary motion postulated that the orbits of the planets were ellipses, one of whose foci was located at the center of the Sun. One qualitative law was that the line connecting the center of the planet to the center of the Sun swept out equal areas in equal times. In other words, when the planet was close to the Sun, it moved faster than when it was more distant. It was a critical landmark for science when it was shown that Kepler’s laws, deduced empirically from observations, could be derived from Newton’s laws of motion and law of gravity.

The ancient Ptolemaic theory of planetary motion, circles upon circles upon circles, was collapsed into ellipses. Over the centuries this theory had been refined and was quite accurate by the 17th century, when Newton flourished. But the Keplerian theory was not as accurate as the ancient one, because the Moon and various planets were accelerated by the gravitational attraction of each other. In other words, there were some circles, called epicycles, that remained to be explained. Astronomers and mathematicians labored for the next three centuries constructing perturbation theories that eventually surpassed the ancient model in precision.

The Ptolemaic theory was not a theory in the scientific sense, but rather a sophisticated example of curve-fitting. Each orbit was like a 3-dimensional Fourier series (sums of sines and cosines). And when the perturbation theories were computed, they also came out as Fourier series. The difference was that the coefficients were computed from Newton’s laws rather than empirically from observations.

The orbits of the planets and predictions thereof were computed with such great precision that the philosophical notion of determinism was embraced. The concept was that with sufficiently precise observations and the laws of classical mechanics, the future state of something could be computed with virtual certainty. Actually, the opposite is true; the orbits of the major planets are one of the very few things that can be predicted with high precision. Casual observation can identify many glaring exceptions, such as the weather and the stock market.

When powerful mainframe computers became available in the early 1960s, astronomers wrote programs for them to calculate the sums of the various Fourier series in their perturbation theories. The benefits included both speed and accuracy. Scientists also created software to compute the coefficients in the Fourier series symbolically, in the same way a human mathematician would, using a sharp pencil and considerable skills in algebra. The same methods were applied with considerable success to the orbits of artificial satellites, yielding more sophisticated models of the earth’s gravity field and the density of the upper atmosphere, among other things.

Computers initially gave a boost to perturbation theories and the long-established methods of scientific modeling and theory construction. But other advances in technology made them hopelessly obsolete. When improved tracking methods were developed for artificial satellites, the simple first-order orbit theories could not be refined to the second order [having terms proportional to the square of the oblateness coefficient of the earth (J2) and to the first order in other parameters of magnitude similar to J2²]. The number of parameters became huge (in the thousands) and the perturbation theories could not cope with the complex interactions caused by the resonances and near resonances of many frequencies.

Astronomers had been using numerical methods to solve the equations of motion for things like comets and asteroids for decades. The principle is simple enough: use a segment of a polynomial to approximate the solution over a short time (time step) after computing the time derivatives from Newton’s laws and the models for gravity and other forces. Creating such algorithms is as difficult as the concept is simple. But there were many good ones, long tested during the earlier era of tables of logarithms, sharp pencils and, later, mechanical calculators. Improved mainframe computers had no trouble taking over the numerical orbit and other scientific computations during the 1970s. However, the computers led to discoveries that soon launched…

2. Chaos Theory: Mathematical Modeling Confronts Reality

The concept of determinism was challenged by the theory of probability. Temporary resolution was achieved by assuming that what appeared to be random could be predicted if enough observations of sufficient precision could be made. The limits were economic or technological, not fundamental. This was overturned in the 1920s by the development of quantum mechanics, the physics of the very small. The process of observing something disturbs it somewhat, though in many familiar examples by a negligible amount. But if the object in question is a subatomic particle, the disturbance is significant or even dominant. This is codified in a more or less quantitative way by the Heisenberg uncertainty principle.

Some scientists, including the renowned Albert Einstein, continued to advocate determinism; Einstein spent his final years attempting to develop a unified field theory that supposedly would restore it. This, despite the fact that his first notable achievement was a theory of Brownian motion, the random motion of very small particles suspended in a liquid [Isaacson, 2007].

Despite enormous numbers of everyday experiences of unpredictability, the scientific world did not embrace what has come to be called Chaos Theory until some very simple mathematical models displayed it. The best known of these is the Lorenz “butterfly,” a marvelous graphic display of a trajectory that circles two unstable equilibrium points, jumping back and forth from one to the other in an apparently random fashion. The butterfly metaphor is double-edged; it refers to the dynamics as well as the graphics: “…the Butterfly Effect – the notion that a butterfly stirring the air today in [Beijing] can transform storm systems next month in New York.” [Gleick, 1987, p. 8].

This concept would make sense only if the entire universe consisted of the Lorenz “butterfly” dynamic system (or something similar) and a single small impulse that occurs once in all eternity. The reality is that the model always contains errors and that noise inputs are there all of the time. To say that any particular segment of the noise flips the trajectory from one mode of oscillation to the other might be true in some sense, but it would be impossible to determine what it is. The fallacy of determinism is that an apparatus designed to measure the state of the universe…

Note that the equation for dx/dt is linear, but the other two include quadratic terms, the products xz and xy. The only parameters are a, b, c, with the rest set to 1. This is a very special case of differential equations with linear and quadratic “right hand sides,” but the “classic” values of the parameters are a = 10, b = 28, c = 8/3. These values create the famous graphic that looks like a butterfly and helps to give rise to “butterfly” dynamics.
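Equation (4) itself is not reproduced in this preview. For reference, the standard way of writing the Lorenz system with these parameter names (an assumption about the exact form of (4), since only its description survives here) is

    dx/dt = a(y - x)
    dy/dt = x(b - z) - y
    dz/dt = xy - cz

with the classic values a = 10, b = 28, c = 8/3. This matches the description above: the first equation is linear, the second contains the quadratic product xz, the third contains xy, and every other coefficient is 1.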


Exercise: Find the equilibrium points of (4). These are the coordinates for which

    dr/dt = 0    (5)

where r = (x, y, z)^T. Note that this is a standard matrix-vector notation that expresses vectors as single column matrices. It is common in the literature of science and engineering, but not universal.

Although the differential equations are not solvable in a practical way using any classic method such as power series or special functions, considerable information about the properties of the solutions may be obtained by analytical tools. Compute the Liapunov coefficients and thereby show that the equilibrium points are unstable for the special parameter values given. Liapunov coefficients and other analytical tools are described in Luenberger (1979), Morrison (2008), and Thompson and Stewart (1987), among others.

In fact, if you put all these tools together, it should have been obvious that something with the almost magical properties of the classic Lorenz “butterfly” equations could be constructed. But as so often happens in science, it took a specialist seeking answers in his own discipline to uncover a new aspect for all of science, chaos theory.
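As a cross-check on the exercise, here is a minimal Python sketch (not part of the original article; the function names are illustrative, and a straightforward eigenvalue test of the Jacobian stands in for the Liapunov-coefficient analysis described in the references) that finds the equilibrium points for the classic parameter values and tests their linear stability:

    import numpy as np

    a, b, c = 10.0, 28.0, 8.0 / 3.0   # classic Lorenz parameters

    def lorenz_rhs(r):
        """Right-hand side of the Lorenz equations at r = (x, y, z)."""
        x, y, z = r
        return np.array([a * (y - x), x * (b - z) - y, x * y - c * z])

    def jacobian(r):
        """Jacobian matrix of the right-hand side at r."""
        x, y, z = r
        return np.array([[-a,    a,    0.0],
                         [b - z, -1.0, -x],
                         [y,     x,    -c]])

    # Setting dr/dt = 0 gives the origin plus two symmetric points
    # x = y = +/- sqrt(c*(b - 1)), z = b - 1 (real whenever b > 1).
    s = np.sqrt(c * (b - 1.0))
    equilibria = [np.zeros(3),
                  np.array([s, s, b - 1.0]),
                  np.array([-s, -s, b - 1.0])]

    for r_eq in equilibria:
        eigenvalues = np.linalg.eigvals(jacobian(r_eq))
        print(np.round(r_eq, 3), np.round(eigenvalues, 3))

For a = 10, b = 28, c = 8/3 every equilibrium has at least one eigenvalue with positive real part (a real one at the origin, a complex-conjugate pair at the two symmetric points), which is the instability the exercise asks you to demonstrate.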


The approximate numerical solution to Lorenz’s equations in worksheet “Gill-Lorenz” was generated by Gill’s method, a variation on the well-known Runge-Kutta method. The implementation is not done with worksheet formulas in cells, but with a VBA (Visual Basic for Applications) macro, basically a simple computer program. The easiest way to gain access to the macros and the coding is to install the Macro Toolbar, which is in fact done on this spreadsheet. This will mark you as a “Power User.” A detailed derivation of Gill’s method is provided by Romanelli (1960). Code and an overall discussion of numerical integration (approximate solutions) of ODEs may be found in Press et al. (2007).

For those who wish to run the Gill ODE (ordinary differential equation) solver without any fuss, a button is provided. (The caveat is that Excel can be cranky and hang up for no apparent reason. Check the “Tips for Excel and VBA” below.) The parameters a, b, c go into cells A2, B2, C2; the time step into C5; and the initial conditions x0, y0, z0 into D5, E5, F5. Finally, enter the number of time steps to be computed into A4 and press the button. Cell D1 will display the number of steps completed so that you can be sure all is going well. If it does not reach the number chosen, you have to go into the diagnostic mode and figure out why. Once you become facile with VBA that will not be hard, but getting there will require some reading and trial-and-error, like anything else. Note: You should correct the range for the variables in the graph if you increase the number of time steps.
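For readers who want to see the integration scheme itself rather than the spreadsheet machinery, here is a minimal Python sketch (not the worksheet’s VBA macro, which is not reproduced in this preview) of a Runge-Kutta-Gill step applied to the Lorenz equations; it uses the commonly quoted Gill coefficients and omits the round-off-compensation bookkeeping of the full classic algorithm. The step size, step count, and nudged starting point mirror the worksheet inputs described above.

    import numpy as np

    def lorenz_rhs(r, a=10.0, b=28.0, c=8.0 / 3.0):
        """Time derivatives of the Lorenz system at state r = (x, y, z)."""
        x, y, z = r
        return np.array([a * (y - x), x * (b - z) - y, x * y - c * z])

    def gill_step(f, r, h):
        """One 4th-order Runge-Kutta-Gill step of size h (no round-off compensation)."""
        s = np.sqrt(2.0)
        k1 = h * f(r)
        k2 = h * f(r + 0.5 * k1)
        k3 = h * f(r + 0.5 * (s - 1.0) * k1 + 0.5 * (2.0 - s) * k2)
        k4 = h * f(r - 0.5 * s * k2 + (1.0 + 0.5 * s) * k3)
        return r + (k1 + (2.0 - s) * k2 + (2.0 + s) * k3 + k4) / 6.0

    # Mirror the worksheet workflow: a time step, a step count, and a start
    # point nudged slightly away from the unstable equilibrium at the origin.
    h, n_steps = 0.01, 5000
    r = np.array([0.1, 0.0, 0.0])
    trajectory = np.empty((n_steps + 1, 3))
    trajectory[0] = r
    for i in range(n_steps):
        r = gill_step(lorenz_rhs, r, h)
        trajectory[i + 1] = r

    print(trajectory[-1])   # final (x, y, z)

Plotting the first column of trajectory against the third should reproduce the familiar two-lobed “butterfly,” just as the worksheet graph does.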

Start by recreating the classic example. First press the “Clear” button if an example is already in the worksheet. Then just press the “SOLVE!” button and watch cell D1. Hopefully, that will come off without any problems. Note that the starting point is close to, but not exactly at, the origin 0 = (0, 0, 0)^T, which is an unstable equilibrium point. Why is it necessary to nudge the initial conditions away from the unstable equilibrium?

Once you recreate the classic example, you are ready to explore the Lorenz equations in more detail, first by changing the initial conditions, and then by varying the parameters a, b, c. In the BC era (Before Computers) this would have required weeks of arduous, error-prone manual computations, perhaps aided by a mechanical calculator or tables of logarithms. Now you can knock out 5,000 or 10,000 steps in a few minutes, at most. Before spreadsheets you would have had to write code in something like Fortran, then compile, link, and execute it. Then take the results and feed those into a plotting program to draw the graph. But with Excel, once you set up the graph, it will automatically redraw itself for the new data. Occasionally, the settings for the graph may have to be tweaked, but not always.

Once you have gained some experience recomputing and regraphing the Lorenz equations, you can start exploring other systems of ordinary differential equations as initial value problems. Start with textbook examples, a good source being Davis (1962).

Some day you may have real data to analyze. In this case, you might face a boundary value problem, where you have to match starting and ending state vector values, r0 and rn, or selected components thereof. For example, you might have an ODE from mechanics where you have equations for velocities and accelerations and state values for the beginning and ending times. More commonly the challenge is redundant observations of functions of state variables or derivatives, and a solution must be found through nonlinear regression. This is not nearly as bad as it sounds; an introduction is provided by Morrison (2008). A more thorough treatment is provided by Norton (1986), and useful code and heuristic descriptions are in Press et al. (2007). For the simpler cases, trial-and-error may suffice. In that case the VBA macro in this Excel file would be adequate. But if you have to resort to a nonlinear regression, be aware that matrix inversion, needed for most regression algorithms (to solve systems of linear algebraic equations), is supported by an array worksheet function, and such can be implemented from within a VBA macro.
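To make the last point concrete, here is an illustrative Python sketch of a Gauss-Newton nonlinear regression (this is not the article’s Excel/VBA approach, and the toy exponential-decay model and its variable names are invented for the example); it shows where the linear-algebra solve mentioned above enters any such algorithm.

    import numpy as np

    def model(t, p):
        """Toy nonlinear model y = p0 * exp(-p1 * t)."""
        return p[0] * np.exp(-p[1] * t)

    def jacobian(t, p):
        """Partial derivatives of the model with respect to p0 and p1."""
        e = np.exp(-p[1] * t)
        return np.column_stack([e, -p[0] * t * e])

    # Synthetic "observations": a known parameter set plus a little noise.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 50)
    y_obs = model(t, np.array([2.0, 0.7])) + 0.01 * rng.standard_normal(t.size)

    p = np.array([1.0, 1.0])            # starting guess
    for _ in range(20):                 # Gauss-Newton iterations
        residual = y_obs - model(t, p)
        J = jacobian(t, p)
        # Normal equations (J^T J) dp = J^T residual -- the system of linear
        # algebraic equations that every regression algorithm must solve.
        dp = np.linalg.solve(J.T @ J, J.T @ residual)
        p = p + dp
        if np.max(np.abs(dp)) < 1e-10:
            break

    print("fitted parameters:", p)      # should be close to (2.0, 0.7)

In Excel the same solve can be done with the worksheet’s matrix array functions called from a VBA macro, as the paragraph above notes.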
Additional reading

Davis, H.T., Introduction to Nonlinear Differential and Integral Equations, Dover Publications, New York, 1962.
Gleick, J., Chaos: Making a New Science, Viking Penguin, New York, 1987.
Isaacson, W., Einstein: His Life and Universe, Simon & Schuster, New York, 2007.
Jelen, B. and T. Syrstad, VBA and Macros for Microsoft Excel, Que, 2004.
Kac, M., “What is random?” American Scientist, 71, 405-406, 1983.
Kac, M., “More on randomness,” American Scientist, 72, 282-283, 1984.
MacDonald, M., Excel: The Missing Manual, Pogue Press, O’Reilly Media, Inc., 2005.
Morrison, F., The Art of Modeling Dynamic Systems: Forecasting for Chaos, Randomness, and Determinism, Dover Publications, Mineola, NY, 2008 (hardback edition: Wiley, New York, 1991).
Norton, J.P., An Introduction to Identification, Academic Press, Inc., Orlando, FL, 1986.
Press, W.H., S.A. Teukolsky, W.T. Vetterling and B.P. Flannery, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, third edition, 2007. Note: This is the most recent edition of a series that includes code, test cases, and supporting documentation for various…