Q: Why are numerical methods necessary? If we can’t get exact solutions, then how do we know when our approximate solutions are any good?

Physicist: When a problem can be solved exactly and in less time than forever, then it is “analytically solvable”. For example, “Jack has 2 apples and Jill has 3 apples, how many apples do they have together?” is analytically solvable. It’s 5. Exactly 5.

Precisely solving problems is what we often imagine that mathematicians are doing, but unfortunately you can’t swing a cat in the sciences without hitting a problem that can’t be solved analytically. In reality “doing math” generally involves finding an answer rather than the answer. While you may not be able to find the exact answer, you can often find answers with “arbitrary precision”. In other words, you can find an approximate answer and the more computer time / scratch paper you’re willing to spend, the closer that approximation will be to the correct answer.

A trick that lets you get closer and closer to an exact answer is a “numerical method”. Numerical methods do something rather bizarre: they find solutions close to the answer without ever knowing what that answer is. As such, an important part of every numerical method is a proof that it works. So there’s the answer: we need numerical methods because a lot of problems are not analytically solvable, and we know our approximations are good because each method comes packaged with a proof that it works.

It’s remarkable how fast you can stumble from solvable to unsolvable problems. For example, there is an analytic solution for the motion of two objects interacting gravitationally but no solution for three or more objects. This is why we can prove that two objects orbit in ellipses and must use approximations and/or lots of computational power to predict the motion of three or more objects. This inability is the infamous “three body problem”. It shows up in atoms as well; we can analytically describe the shape of electron orbitals and energy levels in individual hydrogen atoms (1 proton + 1 electron = 2 bodies), but for every other element we need lots of computer time to get things right.

Even for purely mathematical problems the line between analytically solvable and only numerically solvable is razor thin. Questions with analytic solutions include finding the roots of 2nd degree polynomials, such as $x^2 + 2x - 3 = 0$, which can be done using the quadratic equation:

$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$

The quadratic equation is a “solution by radicals”, meaning you can find the solution using only the coefficients in front of each term (in this case: 1, 2, -3). There’s a solution by radicals for 3rd degree polynomials and another for 4th degree polynomials (they’re both nightmares, so don’t). However, there is no general solution by radicals for 5th or higher degree polynomials. If you wanted to find the exact solutions of a typical 5th degree polynomial (and who doesn’t?), there is literally no way to write down an expression for them.
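Numerical root-finders, on the other hand, have no trouble with quintics. Here’s a minimal sketch in Python (the particular quintic, $x^5 - x - 1 = 0$, is just an arbitrarily chosen example, not anything special) using numpy.roots to approximate all five roots:

```python
# Approximate the roots of x^5 - x - 1 = 0, a polynomial with no solution
# by radicals. numpy.roots takes the coefficients, highest power first.
import numpy as np

coefficients = [1, 0, 0, 0, -1, -1]   # x^5 + 0x^4 + 0x^3 + 0x^2 - x - 1
roots = np.roots(coefficients)

for r in roots:
    # Sanity check: how close is each numerical root to being an exact root?
    print(f"x = {r:.10f},  |x^5 - x - 1| = {abs(r**5 - r - 1):.1e}")
```

The one real root comes out to about 1.1673, and you can squeeze out as many more digits as your patience (and floating point precision) allows.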

Numerical methods have really come into their own with the advent of computers, but the idea is a lot older. The decimal expansion of $\pi$ (3.14159…) never ends and never repeats, which is a big clue that you’ll never find its value exactly. At the same time, it has some nice properties that make it feasible to calculate to arbitrarily great precision. In other words: numerical methods. Back in the third century BC, Archimedes realized that you could approximate $\pi$ by taking a circle with circumference $\pi$ (one with a diameter of 1), then inscribing a polygon inside it and circumscribing another polygon around it. Since the circle’s perimeter is always longer than the inscribed polygon’s and always shorter than the circumscribed polygon’s, you can find bounds for the value of $\pi$.

By increasing the number of sides, the polygons hug the circle tighter and produce a closer approximation, from both above and below, of $\pi$. There are fancy mathematical ways to prove that this method approaches $\pi$, but it’s a lot easier to just look at the picture, consider for a minute, and nod sagely.

Archimedes’ trick wasn’t just noticing that $\pi$ must be between the perimeters of the two polygons. That’s easy. His true cleverness was in coming up with a mathematical method that takes the perimeters of a given pair of k-sided inscribed and circumscribed polygons, $I_k$ and $C_k$, and produces the perimeters for polygons with twice the number of sides, $I_{2k}$ and $C_{2k}$. Here’s the method:

$C_{2k} = \frac{2 I_k C_k}{I_k + C_k} \qquad\qquad I_{2k} = \sqrt{I_k C_{2k}}$

By starting with hexagons, where $I_6 = 3$ and $C_6 = 2\sqrt{3} \approx 3.46$, and doubling the number of sides 4 times (6, 12, 24, 48, 96), Archie found that for inscribed and circumscribed enneacontahexagons $I_{96} \approx 3.1410$ and $C_{96} \approx 3.1427$. In other words, he managed to nail down $\pi$ to about two decimal places: $3.1410 < \pi < 3.1427$.
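In modern notation, that doubling procedure is only a few lines of code. Here’s a rough sketch in Python, assuming a circle of diameter 1 so that the polygon perimeters bracket $\pi$:

```python
# A sketch of Archimedes' doubling trick for a circle of diameter 1.
# I and C are the perimeters of the inscribed and circumscribed regular k-gons.
import math

k = 6
I = 3.0               # inscribed hexagon: six sides of length 1/2
C = 2 * math.sqrt(3)  # circumscribed hexagon

for _ in range(4):    # 6 -> 12 -> 24 -> 48 -> 96 sides
    C = 2 * I * C / (I + C)  # harmonic mean gives the circumscribed 2k-gon
    I = math.sqrt(I * C)     # geometric mean gives the inscribed 2k-gon
    k *= 2
    print(f"{k:3d} sides:  {I:.6f} < pi < {C:.6f}")
```

The last line printed is the 96-gon bound above; keep doubling and the bracket keeps (slowly) tightening.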

Some puzzlement has been evinced by Mr. Medes’ decision to stop where he did, with just two decimal places of $\pi$. But not among mathematicians. The mathematician’s ancient refrain has always been: “Now that I have demonstrated my amazing technique to the world, someone else can do it.”

To be fair to Archie, this method “converges slowly”. It turns out that, in general, $\pi - I_k \approx \frac{\pi^3}{6k^2}$ and $C_k - \pi \approx \frac{\pi^3}{3k^2}$. Every time you double k the errors, $\pi - I_k$ and $C_k - \pi$, get four times as small (because $(2k)^2 = 4k^2$), which translates to very roughly one new decimal place every two doublings. $\pi$ never ends, but still: you want to feel like you’re making at least a little progress.
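If you want to see those error estimates in action, here’s a quick check against the exact perimeter formulas for a diameter-1 circle, $I_k = k\sin(\pi/k)$ and $C_k = k\tan(\pi/k)$:

```python
# Compare the actual k-gon errors to the pi^3/(6k^2) and pi^3/(3k^2) estimates.
import math

for k in (6, 12, 24, 48, 96):
    in_err = math.pi - k * math.sin(math.pi / k)    # pi - I_k
    out_err = k * math.tan(math.pi / k) - math.pi   # C_k - pi
    print(f"k = {k:3d}:  pi - I_k = {in_err:.6f} (est. {math.pi**3 / (6 * k * k):.6f}),"
          f"  C_k - pi = {out_err:.6f} (est. {math.pi**3 / (3 * k * k):.6f})")
```

Each doubling of k cuts the errors by roughly a factor of four, just as the estimates say.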

Some numerical methods involve a degree of randomness and yet still manage to produce useful results. Speaking of $\pi$, here’s how you can calculate it “accidentally”. Generate n pairs of random numbers, (x,y), between 0 and 1. Count up how many times $x^2 + y^2 \le 1$ and call that number k. If you do this many times, you’ll find that $\frac{4k}{n} \approx \pi$.

As you generate more and more pairs and tally up how many times $x^2 + y^2 \le 1$, the law of large numbers says that $\frac{k}{n} \to \frac{\pi}{4}$ (and so $\frac{4k}{n} \to \pi$), since that’s the probability of randomly falling in the grey region in the picture above. This numerical method is even slower than Archimedes’ not-particularly-speedy trick. According to the central limit theorem, after n trials you’re likely to be within about $\frac{1.6}{\sqrt{n}}$ of $\pi$. That makes this a very slowly converging method: it takes millions of trials just to nail down “3.141”. This is not worth trying.
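Here’s what that looks like as a minimal sketch in Python (a million throws, which still only pins down the first couple of decimal places):

```python
# Estimate pi by throwing random points at the unit square and counting
# how many land inside the quarter circle x^2 + y^2 <= 1.
import random

n = 1_000_000
k = sum(1 for _ in range(n)
        if random.random() ** 2 + random.random() ** 2 <= 1)

print(f"4k/n = {4 * k / n}")  # typically within a few thousandths of pi
```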

Long story short, most of the math problems that show up in applications cannot be solved directly. Instead we’re forced to use clever approximations and numerical methods to get really close to the right answer (assuming that “really close” is good enough). There’s no grand proof or philosophy guaranteeing that all of these methods work but, in general, if we’re not sure that a given method works, we don’t use it.

Answer Gravy: There are a huge number of numerical methods and entire sub-sciences dedicated to deciding which to use and when. Just for a more detailed taste of a common (fast) numerical method and the proof that it works, here’s an example of Newton’s Method, named for little-known mathematician Wilhelm Von Method.

Newton’s method finds (approximates) the zeros of a function, f(x). That is, it finds a value, $x_\star$, such that $f(x_\star) = 0$. The whole idea is that, assuming the function is smooth, when you follow the slope at a given point down to the x-axis you’ll find a new point closer to a zero/solution. All polynomials are “smooth”, so this is a good way to get around that whole “you can’t find the roots of 5th or higher degree polynomials” thing.

The big advantage of Newton’s method is that, unlike the two examples above, it converges preternaturally fast.

The derivative is the slope, so $f'(x_n)$ is the slope at the point $x_n$. Considering the picture above, that same slope is given by the rise, $f(x_n)$, over the run, $x_n - x_{n+1}$. In other words $f'(x_n) = \frac{f(x_n)}{x_n - x_{n+1}}$, which can be solved for $x_{n+1}$:

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$

So given a point near a solution, $x_n$, you can find another point that’s closer to the true solution, $x_{n+1}$. Notice that if $f(x_n) = 0$, then $x_{n+1} = x_n$. That’s good: when you’ve got the right answer, you don’t want your approximation to change.

To start, you guess (literally… guess) a solution, call it $x_0$. With this tiny equation in hand you can quickly find $x_1$. With $x_1$ you can find $x_2$ and so on. Although it can take a few iterations for it to settle down, each new $x_n$ is closer than the last to the actual solution. To end, you decide you’re done.
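As code, the whole method is just that one little update in a loop. Here’s a bare-bones sketch in Python (the function name and the stopping rule are my own choices):

```python
# Newton's method: repeat x -> x - f(x)/f'(x) until the steps get tiny
# (or we give up).
def newtons_method(f, f_prime, x0, tolerance=1e-12, max_steps=50):
    x = x0
    for _ in range(max_steps):
        step = f(x) / f_prime(x)
        x = x - step
        if abs(step) < tolerance:  # "you decide you're done"
            break
    return x

# For example, the quintic x^5 - x - 1 from earlier has a real root near
# 1.1673; Newton's method lands on it in a handful of steps.
print(newtons_method(lambda x: x**5 - x - 1,
                     lambda x: 5 * x**4 - 1,
                     x0=1.0))
```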

Say you need to solve $\cos(x) = x$ for x. Never mind why. There is no analytical solution (this comes up a lot when you mix polynomials, like x, with trig functions or logs or just about anything). The correct answer starts with $x = 0.739085\ldots$

First you write it in such a way that you can apply Newton’s method: $f(x) = \cos(x) - x = 0$. The derivative is $f'(x) = -\sin(x) - 1$ and therefore:

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n + \frac{\cos(x_n) - x_n}{\sin(x_n) + 1}$

First make a guess; almost any number will do. Plug it into the formula above to get $x_1$, then plug back in what you get out several times and the $x_n$’s settle down onto the answer.

The first few iterates can jump around a bit, depending on where you start. Sometimes Newton’s method does this forever, in which case: try something else or make a new guess. But once it starts to really zero in on the solution, every iteration establishes about twice as many decimal digits as the previous step. The sketch below runs the actual numbers.
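Here’s that run as a quick sketch in Python, using the iteration above and a deliberately clumsy starting guess of $x_0 = 3$ (the guess is mine; most starting points work):

```python
# Newton's method applied to cos(x) = x, starting from the arbitrary guess
# x_0 = 3. The early iterates bounce around, then the number of correct
# digits roughly doubles with each step.
import math

dottie = 0.7390851332151607  # the true solution, for comparison
x = 3.0
for n in range(1, 9):
    x = x + (math.cos(x) - x) / (math.sin(x) + 1)
    print(f"x_{n} = {x:.15f}   (off by {abs(x - dottie):.1e})")
```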

We know that Newton’s method works because we can prove that it converges to the solution. In fact, we can show that it converges quadratically (which is stupid fast). Something “converges quadratically” when the distance to the true solution is roughly squared with every iteration. For example, if you’re off by 0.01, then in the next step you’ll be off by around $0.01^2 = 0.0001$. In other words, the number of digits you can be confident in doubles every time.

Here’s why it works:

A smooth function (which is practically everywhere, for practically any function you might want to write down) can be described by a Taylor series. In this case we’ll find the Taylor series about the point $x_n$ and use the facts that $f(x_\star) = 0$ and that $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$.

$0 = f(x_\star) = f(x_n) + f'(x_n)(x_\star - x_n) + \frac{f''(x_n)}{2}(x_\star - x_n)^2 + \ldots$

Dividing through by $f'(x_n)$ and using $\frac{f(x_n)}{f'(x_n)} = x_n - x_{n+1}$:

$0 = (x_n - x_{n+1}) + (x_\star - x_n) + \frac{f''(x_n)}{2f'(x_n)}(x_\star - x_n)^2 + \ldots = (x_\star - x_{n+1}) + \frac{f''(x_n)}{2f'(x_n)}(x_\star - x_n)^2 + \ldots$

and therefore

$x_{n+1} - x_\star = \frac{f''(x_n)}{2f'(x_n)}(x_\star - x_n)^2 + \ldots$

The “…” becomes small much faster than $(x_\star - x_n)^2$ as $x_n$ and $x_\star$ get closer. At the same time, $\frac{f''(x_n)}{2f'(x_n)}$ becomes effectively equal to the constant $\frac{f''(x_\star)}{2f'(x_\star)}$. Therefore $\left|x_{n+1} - x_\star\right| \approx \left|\frac{f''(x_\star)}{2f'(x_\star)}\right|(x_n - x_\star)^2$, and that’s what quadratic convergence is. Note that this only works when you’re zeroing in; far away from the correct answer Newton’s method can really bounce around.
