As I hope you understand by now, ODE solvers are powerful and important tools, especially in the study of chaotic systems, where paper and pencil won't give you the solution. But they also make errors, and those errors depend on both the step size and the shape of the landscape. In forward Euler, the error term made those effects very obvious: that error is proportional to the square of the step size and to the second derivative of the function. If the landscape is linear, forward Euler will give you a perfect answer regardless of the size of the time step. You can think about that in a number of ways. One is that the first derivative is a perfect approximation of a linear landscape. Another is that this term, and all of the higher-order terms in the Taylor series, are zero for a linear system. If the landscape is not linear, then both the time step and the curvature matter. If the landscape is mildly curvy but you take a very long time step, that can be bad. If the landscape is very curvy, then even a small time step may not save you. These issues are not unique to forward Euler. The other solvers that I mentioned in the previous segment all have similar issues. The exact form of the error is different in each one, but both the time step and the geometry of the landscape affect all of them. Those two factors, time step and geometry, are not the only causes of error in ODE solvers, though. There are also computational effects. Computers use what's called floating-point arithmetic. If you have a thirty-two-bit machine, that means that every memory location in the computer has thirty-two bits, and if you're storing a single number in a single memory location, that means you have two to the thirty-two possible values with which to store that number. So, let's say you want to be able to store numbers in a range of, say, negative one million to a million.
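To make that error scaling concrete, here is a small Python sketch (not from the lecture) of a single forward-Euler step on two toy landscapes: a linear one, x(t) = 2t, where Euler is exact, and a curved one, x(t) = t², where the one-step error is (h²/2)·x'' = h², so halving the step should quarter the error.

```python
def euler_step(f, t, x, h):
    # One forward-Euler step: follow the slope f(t, x) for a time h.
    return x + h * f(t, x)

# Linear landscape: x(t) = 2t, so dx/dt = 2 and x'' = 0.
# Forward Euler is exact here, regardless of the step size.
lin_err = abs(euler_step(lambda t, x: 2.0, 0.0, 0.0, 0.5) - 2.0 * 0.5)

# Curved landscape: x(t) = t**2, so dx/dt = 2t and x'' = 2.
# Starting from t = 0, Euler predicts 0, but the truth is h**2,
# so the one-step error is exactly (h**2 / 2) * x'' = h**2.
def curved_error(h):
    return abs(euler_step(lambda t, x: 2.0 * t, 0.0, 0.0, h) - h**2)

errors = [curved_error(h) for h in (0.1, 0.05, 0.025)]
# Each halving of h cuts the error by a factor of four.
```

The factor-of-four drop per halving is the signature of a first-order method's quadratic local error.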
A very simple way to do that with thirty-two bits is to divide that range evenly into two to the thirty-two slots. Two to the thirty-two is about 4.3 billion. So, imagine we did that. Each of those little slots, not all of which I'm going to draw here because it would take me a long time, is two million divided by two to the thirty-two wide, which is about 0.00047 wide. Now, what that means is that all of the numbers within one of those slots are stored as the same thirty-two-bit binary pattern. It's just like having a calculator that carries three decimal places: everything from 1.30000 to 1.30049 would get rounded to 1.300. This box size is often called the machine epsilon, and as you can well imagine, if you're doing arithmetic with numbers that are comparable to, or worse, smaller than that box size, the errors in your calculations are going to be large. Imagine, for instance, subtracting 1.3000 from 1.3001 on that calculator. Worse yet, imagine putting that result in the denominator of a calculation. Now, this simplified scheme that I've just outlined, which divides the number range into even-sized chunks, is not how floating-point arithmetic actually works. You don't need ten decimal places when you're working with 900,047, but you do need lots of decimal places when you're working with a very small number. For those reasons, real computer arithmetic systems use a kind of scientific notation, with a mantissa, an exponent, and sign bits for each of those quantities. That allows those arithmetic systems to store big numbers with low precision and small numbers with high precision, and computers offer a range of different data types. You've probably heard of double-precision numbers; those use twice as much memory and provide both a wider range and better precision near zero. Now, how does all of that affect computer solutions of ordinary differential equations?
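You can see both effects, the machine epsilon and the danger of subtracting nearly equal numbers, in a few lines of Python (a sketch of my own, using IEEE double precision, which is what Python floats are):

```python
import sys

# Machine epsilon for IEEE double precision: the gap between 1.0
# and the next representable number (about 2.2e-16).
eps = sys.float_info.epsilon

# Adding something smaller than half that gap to 1.0 vanishes
# entirely -- it rounds back down to 1.0.
vanished = (1.0 + eps / 4) == 1.0

# Catastrophic cancellation: subtracting nearly equal numbers
# wipes out most of the significant digits.  Mathematically the
# difference is exactly 0.0001, but neither operand is stored
# exactly, so the stored difference carries their rounding error.
a, b = 1.3001, 1.3000
diff = a - b
exact_match = (diff == 1e-4)  # False: the roundoff survives
```

If `diff` then ends up in a denominator, that small absolute error becomes a large relative error in the quotient.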
Because it ties into all those errors we've been talking about: the time step, the landscape geometry, and the solver method are not the only things that affect the error; so does the arithmetic system of the computer. Here are some pictures that demonstrate all of those effects. First, the time step. This is kind of a lousy scan; it's from a paper by Lorenz called "Computational Chaos," and what he's showing you is three different pictures of the same dynamical system solved using three different time steps. As you can see, the different time steps give you wildly different results on the same system. Indeed, the time step has caused a bifurcation in the dynamics: in the top picture the dynamics look pretty periodic, but down here they're very, very different, and there's definitely a topological change between these three pictures. That is a bifurcation, again, induced by changing the time step of the solver. This next picture is from a Ph.D. thesis that I supervised, by Dr. Natalie Ross, and the system in question is what's called the von Kármán vortex street: a bunch of vortices, two columns of them, and what they're supposed to do is move upwards in this picture and jiggle back and forth. The two pictures here were generated using the same solver, the same differential equations, the same computer, and the same time step, but the left-hand one uses single-precision arithmetic and the right-hand one uses double-precision arithmetic. The right-hand one is more faithful to what the system really should do, and that's generally the case: if you use better arithmetic, the errors are smaller and the results are more accurate. This next one is a wonderful series: an integration of the differential equations that model the planets in the outer solar system. You can see the Sun in the center, and then there's Jupiter, Saturn, Uranus, Neptune, and Pluto at an odd angle, and this is a pretty good solver.
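You can reproduce a time-step-induced bifurcation on your own machine. Lorenz's figure uses his own equations; as a simpler stand-in, assume the logistic ODE dx/dt = r·x·(1 − x), whose true solutions just settle to the fixed point x = 1. Forward Euler with step h turns this into the map x → x + h·r·x·(1 − x), and for large enough h·r that map oscillates even though the real system never does:

```python
def euler_logistic_tail(h, r=3.0, x0=0.5, n_transient=500, n_keep=8):
    # Iterate forward Euler on dx/dt = r*x*(1-x), discard the
    # transient, and return the distinct long-term values (rounded).
    x = x0
    for _ in range(n_transient):
        x = x + h * r * x * (1.0 - x)
    tail = []
    for _ in range(n_keep):
        x = x + h * r * x * (1.0 - x)
        tail.append(round(x, 6))
    return sorted(set(tail))

small = euler_logistic_tail(h=0.1)   # h*r = 0.3: the true fixed point, x = 1
big = euler_logistic_tail(h=0.75)    # h*r = 2.25: a spurious period-2 cycle
```

With the small step you get one long-term value; with the large step you get two alternating values. Nothing about the differential equation changed, only the solver's time step.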
It's a symplectic solver, so it (approximately) conserves energy, as should be the case with orbits of planets. By the way, the inner rocky planets like us are left out of this because we're very small and don't matter in the evolution of the outer solar system. If you use a slightly less accurate solver, the errors that that solver makes cause the planets' orbits to oscillate. If you use an even less accurate solver, Jupiter gets ejected. Now, we all know that Jupiter won't get ejected, so we laugh, but this is the kind of thing that can happen in solar systems when a star comes nearby, and so it is the kind of thing that can happen in this kind of dynamical system. Here, though, it's happening not because it's physical but because the solver did it, and that's kind of terrifying. This is like the Hubble Space Telescope turning quasars into nebulae: both are things you would expect to see in the behavior you're looking at, but your observing instrument is mutating one into the other, and that's what makes numerical dynamics so scary. Those numerical effects, which can cause bifurcations, as you saw in that first picture from Lorenz's paper, can produce results that look like the dynamics of a real system, and as you've seen over the past couple of units, they come from the algorithms, from the time step, from the arithmetic, and from the dynamical system itself. So, what do you need to think about as a practitioner? You're faced with a result, and you don't know whether it's right or not. What can you do to decide whether you should believe it, or whether those results include dynamics that come from the numerics and not from the real system? If the algorithm, the arithmetic, and the time step could be causing this, change them. Change the time step and see if it changes your results. If it doesn't, believe them more, but never all the way, because they always could change a little if you changed it further.
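Here's a minimal sketch of why the symplectic property matters, using the simplest conservative system, a harmonic oscillator, rather than the solar-system equations. Forward Euler pumps energy into the orbit on every step, while symplectic (semi-implicit) Euler, one of the simplest symplectic methods, keeps the energy bounded:

```python
def forward_euler(x, v, h, n):
    # Non-symplectic: position and velocity both updated from the
    # old state.  Each step multiplies the energy by (1 + h**2).
    for _ in range(n):
        x, v = x + h * v, v - h * x
    return x, v

def symplectic_euler(x, v, h, n):
    # Symplectic (semi-implicit) Euler: update v first, then use
    # the NEW v to update x.  This exactly conserves a slightly
    # perturbed "shadow" energy, so the true energy stays bounded.
    for _ in range(n):
        v = v - h * x
        x = x + h * v
    return x, v

def energy(x, v):
    # Harmonic-oscillator energy (unit mass, unit stiffness).
    return 0.5 * (x * x + v * v)

h, n = 0.1, 5000
e0 = energy(1.0, 0.0)
e_fwd = energy(*forward_euler(1.0, 0.0, h, n))   # blows up
e_sym = energy(*symplectic_euler(1.0, 0.0, h, n))  # stays near e0
```

The forward-Euler "planet" spirals outward, the numerical analogue of Jupiter getting ejected, while the symplectic one stays on a bounded orbit, even though both methods are only first-order accurate.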
What I tend to do is reduce the time step until the dynamics stop changing, and that increases my belief that my results are right. You can also use different solvers: use fourth-, fifth-, and sixth-order Runge-Kutta. They're all built into MATLAB; just throw them at the problem and see if the results change. If the results don't change, believe them a little bit more. It's always a good idea to change from single- to double-precision arithmetic and see if your results change. You can never know that your results are perfect; you can only increase your belief that they're right. And, as I mentioned before, you always need to be careful of those machine-epsilon, floating-point-arithmetic effects, because if you make the time step too small, numerical effects might bite you before the time step gets small enough to give you a good solution. So, a couple of big-picture takeaways from this lecture. Again, numerical solvers aren't always right. You need to keep that in mind, whether you wrote the solver or someone else wrote it. But you now understand the ways in which they can be wrong, which means that you are now equipped to do what lawyers call due diligence. You have some evidence, and you need to pound on it. It is your responsibility to pound on it before you believe it, so due diligence is important with numerical solvers.
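That step-halving due-diligence loop can itself be written as code. Here's one possible sketch (the helper names `integrate` and `converged_solution` are my own, and the inner solver is just forward Euler as a stand-in for whatever solver you're vetting):

```python
import math

def integrate(f, x0, t_end, h):
    # Toy solver: forward Euler on dx/dt = f(x) from t = 0 to t_end.
    x = x0
    for _ in range(int(round(t_end / h))):
        x = x + h * f(x)
    return x

def converged_solution(f, x0, t_end, h0=0.1, tol=1e-4, max_halvings=20):
    # Due diligence: halve the step until the answer stops changing.
    # The cap on halvings matters -- push h too small and roundoff
    # (machine-epsilon effects) will bite before truncation error
    # gets any better.
    h = h0
    prev = integrate(f, x0, t_end, h)
    for _ in range(max_halvings):
        h /= 2.0
        cur = integrate(f, x0, t_end, h)
        if abs(cur - prev) < tol:
            return cur, h
        prev = cur
    return prev, h

# Toy problem dx/dt = -x with x(0) = 1: the truth at t = 1 is exp(-1).
x, h_used = converged_solution(lambda x: -x, 1.0, 1.0)
```

Agreement between successive step sizes doesn't prove the answer is right; it just earns the result a bit more of your belief, which is exactly the spirit of the lecture.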