Tuesday, April 1, 2014

-1/12 = oo, for small values of oo

Having too much time on my hands …

A few months ago, there was a minor Internet brouhaha over a video (hereafter referred to as the first video) which purports to show that

      1 + 2 + 3 + 4 + 5 + … = − 1/12   .  (1)

This was picked up by Phil Plait, the Bad Astronomer, and was followed up by a slightly more rigorous proof, https://www.youtube.com/watch?v=E-d9mgo8FGk ("the second video"), some backtracking/correcting, a Wikipedia entry, and a lot of web pages.

So, is equation (1) really true?

Not really, but kinda-sorta.

It is true that the sum

      ∑n=1∞ n    (2)

diverges, but we can approach the result (1) by several different lines of attack.

The standard method, which is referred to in the second video quoted above, starts with the Riemann Zeta Function:

      ζ(s) = ∑n=1∞ 1/ns   ,   ℜs > 1   .  (3)

This series only converges, as noted, when the real part of s is greater than one, but it can be analytically continued (a process that I understand in principle but don't know how to put into practice here) to give you results for other values of s. In particular, it is possible to show (see equation 25.6.3) that

      ζ(−1) = − 1/12   .  (4)

Now if you just set s = -1 in (3) it looks like you get equation (1). This isn't exactly true, but we can say that



      ∑n=1∞ n → ζ(−1) = − 1/12   ,  (5)

where, in this case, → means by a procedure whose description will not fit in the margin of this web page.

That's one way to do it. I'm going to go at it another way. It's akin to the procedure described in the first video, but, I hope, more rigorous, though I have probably missed a few steps. It may be equivalent to Ramanujan summation, but don't quote me on that.

Let's start with the series

      S1(x) = ∑n=0∞ xn   ,   |x| < 1   .  (6)
This is well defined so long as x is between -1 and 1, and we all know that we can also write
      S1(x) = 1/(1−x)   .  (7)
This equation gives a finite value for all x, except the pole at x = 1, and is equal to (6) for |x| < 1. So we can justifiably say that (7) is the analytic continuation of (6). Also note that if we put x = −1 into (6) and evaluate it by using (7) we get the result
      S1(−1) → 1 − 1 + 1 − 1 + 1 − 1 + … → 1/2   ,  (8)

where we use the arrow rather than an equals sign to show that these results are obtained only in the limit where x → −1 from above. The result (8) should be familiar from the first video.

Now look at the derivative of (6) and (7). We'll call this S2:

S2(x) = S1′(x)   .
(9)
Differentiating (7) we get
      S2(x) = 1/(1−x)2   ,  (10)
which is defined over the entire complex plane except at x = 1. We can also differentiate (6) and get
      S2(x) = ∑n=1∞ n xn−1   ,   |x| < 1   .  (11)
Again, we can analytically continue (11) into (10) to define the function over the entire complex plane, except for the pole at x = 1. And again, from (10) we have
      S2(−1) = 1/4   ,  (12)
while from (11) we get
      S2(−1) → 1 − 2 + 3 − 4 + 5 − 6 + …   .  (13)
This, too, was shown in the first video.
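Both limits are easy to watch numerically. The only code on this blog is awk, but here's a quick Python sketch (my addition) that sums the two series at x just above −1 and compares the totals with the closed forms (7) and (10):

```python
# Partial sums of the geometric series and its derivative series at x
# slightly above -1, compared with 1/(1-x) and 1/(1-x)**2.
def S1(x, terms=100_000):
    total, p = 0.0, 1.0          # p holds x**n
    for _ in range(terms):
        total += p
        p *= x
    return total

def S2(x, terms=100_000):
    total, p = 0.0, 1.0          # p holds x**(n-1)
    for n in range(1, terms + 1):
        total += n * p
        p *= x
    return total

for x in (-0.9, -0.99, -0.999):
    print(x, S1(x), 1 / (1 - x), S2(x), 1 / (1 - x)**2)
```

As x → −1 from above, the first pair of columns closes in on 1/2 and the second pair on 1/4.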

Now let's take a look at the difference between S2(x) and S2(−x). Define

      F2(x) = S2(x) − S2(−x) = 4 x/(1−x2)2 = 4 x S2(x2)   .  (14)
For |x| < 1 we could expand this in a conventional Taylor series, but we can also expand it using (11) to get
      F2(x) = ∑n=1∞ 4 n x2n−1   ,   |x| < 1   .  (15)

Now rearrange (14) to write

      U2(x) = (1/3) (F2(x) − S2(x)) = − (1/3) S2(−x) = − 1/[3 (1+x)2]   ,  (16)
where the reason for the factor of 1/3 will become obvious below.
Using the series expansions (11) and (15) we can write
      U2(x) = ∑n=1∞ n cn(x)   ,   |x| < 1   ,  (17)
where
      cn(x) = (1/3) xn−1 (4 xn − 1)   .  (18)
It is obvious that

      limx → 1 cn(x) = 1   ,  (19)
for all n, so we can write

      U2(x) → 1 + 2 + 3 + 4 + …  ,   x → 1−   ,  (20)

where the superscript − indicates that x → 1 from below. However, from (16) we have

      U2(1) = − 1/12   ,  (21)
so using this prescription
      1 + 2 + 3 + 4 + … → − 1/12   .  (22)

So what's going on here? In the limit x → 1 the nth term in the sum within (17) tends to the value n. The trick is that for any value x < 1 there are only a finite number of cn(x) which are positive, and an infinite number that are negative. Take a look at the graph below, which shows cn(x) for several values of x:

Once we get x > 0.9999 or so, a significant number of the cn(x) are close to 1. However, for any x < 1, there are an infinite number of cn(x) which are negative. These have weights much smaller than the positive values, but there are a lot more of them. The net result is that the sum (17) is always negative, and tends to −1/12 as x → 1.
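You can watch this happen in a few lines of Python (my addition): sum the terms n cn(x) = (n/3) xn−1 (4 xn − 1) directly for x just below 1 and compare the total with the closed form − 1/[3 (1+x)2]:

```python
# Sum n*c_n(x) for x just below 1, with c_n(x) = (1/3) x**(n-1) * (4*x**n - 1).
# The running power p = x**(n-1) avoids recomputing x**n on every pass.
def U2(x, terms=500_000):
    total, p = 0.0, 1.0
    for n in range(1, terms + 1):
        total += n * p * (4 * p * x - 1) / 3   # x**n is p*x
        p *= x
    return total

for x in (0.9, 0.99, 0.999, 0.9999):
    print(x, U2(x), -1 / (3 * (1 + x)**2))
```

The sum is always negative, matches the closed form, and creeps toward −1/12 = −0.08333… as x → 1.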

Is this a unique result? That is, do we always get

      1 + 2 + 3 + 4 + … → − 1/12   ,  (23)
if we do it "properly"? Beats me. I would think that you could choose a different set of functions cn(x) which still have the property cn(x) → 1 as x → 1, place them in (17), and get something other than −1/12 as x → 1. But I haven't investigated that, and I certainly couldn't prove a general case.




File translated with help from TEX by TTH, version 4.03.
On 1 Apr 2014, 18:17.

Saturday, August 3, 2013

The 10° Solution

Last week we looked at how to get analytic expressions for the sine and cosine of angles between 0° and 90°, provided that the angle was an integer multiple of 3°.

For aesthetic reasons, if nothing else, it would be nice to have a table of exact expressions for sines and cosines for every degree, not just every 3°. In principle we could do this, following the tricks we did last time, by noting that

      cos 90° = 0  ,  (1)

and then expand

      cos 90 x = Σn=090 an cosn x = 0  ,  (2)

which gives us a 90th degree polynomial in cos x, one of whose solutions is cos 1°.

Unfortunately (2) is just a tad difficult to solve. But there are other ways to do this. If, for example, we knew cos 10°, then we could get cos 1° and sin 1° from the addition formulas:

     cos 1° = cos 10° cos 9° + sin 10° sin 9° ,  (3)

     sin 1° = sin 10° cos 9° - sin 9° cos 10°  .  (4)

So how do we find cos 10°, or, in radians, cos π/18 ? Since we know cos π/2 = 0, it follows that π/18 will be one of the solutions of the equation

      cos 9 x = 0  .  (5)

All we have to do is find that solution.

We start by expanding (5), writing

      cos 9 x = cos x (3 - 4 cos2 x) ( 3 - 36 cos2 x + 96 cos4 x - 64 cos6 x )  .  (6) *
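A quick sanity check of this factorization (my addition; the footnote describes how the coefficients were originally found) against the library cosine at a few arbitrary points:

```python
import math

# Compare cos 9x with the factored polynomial in c = cos x, eq. (6).
for x in (0.1, 0.7, 1.3, 2.5):
    c = math.cos(x)
    rhs = c * (3 - 4 * c**2) * (3 - 36 * c**2 + 96 * c**4 - 64 * c**6)
    print(x, math.cos(9 * x), rhs)
```

The two columns agree to machine precision.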

Let's consider the possible solutions we can get for (6). We know that cos x is zero at x = π/2, and at multiples of π away from this point, so our solutions are going to be of the form

      9 x = ½ π + N π  ,  (7)

which gives 9 unique solutions:

      x = π/18, π/6, 5 π/18, 7 π/18, π/2, 17 π/18, 5 π/6, 13 π/18, and 11 π/18 .  (8)

Since cos(π-x) = - cos x, the cosines at the last four points here will be just the negatives of the cosines of the first four points, which is not surprising, since (6) obviously has a lot of ± roots.

Given that the cosine decreases as x goes from 0 to ½π, cos 10° will be the largest root of (6).

The first term of (6) vanishes when x = ½π, while the second term vanishes when x = π/6 or 5π/6. So all we have to worry about is the third term. If we set

      y = cos2 x ,  (9)

then we only need find the roots of the third degree polynomial:

      P(y) = 64 y3 - 96 y2 + 36 y - 3  .  (10)
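Before grinding through the cubic formula, it's worth confirming numerically (a Python sketch, my addition) that the three roots of (10) are cos2 of 10°, 50°, and 70°, the three remaining zeros from (8):

```python
import math

# P(y) from eq. (10); its roots should be cos**2 of 10, 50, and 70 degrees.
def P(y):
    return 64 * y**3 - 96 * y**2 + 36 * y - 3

for deg in (10, 50, 70):
    y = math.cos(math.radians(deg))**2
    print(deg, P(y))   # all three values are ~ 0
```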

There is a standard method for doing this, which dates back almost 500 years. I'm going to work it out, once, because, well, this is Stupid Math Tricks, and I've never written it down before.

The first step is to get rid of the quadratic term in (10). To do this, set y = z + δ and collect powers of z:

      P(z + δ) = 64 z3 + 96 (2 δ - 1) z2 + 12 (16 δ2 - 16 δ + 3) z + (64 δ3 - 96 δ2 + 36 δ - 3)  .  (11)

If we set δ = ½ we get

      P(z + ½) = 64 z3 - 12 z - 1  .  (12)

(You can check this is right by noting that P(0) = -3, P(1) = 1, P'(0) = 36, and P'''(0) = 384 in both (11) and (12).)

Now write

      z = u + v  .  (13)

We'll collect the terms in a somewhat non-obvious way: write

      z3 = u3 + v3 + 3 u v (u + v)  ,  (14)

and collect terms in (u+v):

      P(u + v + ½) = 64 u3 + 64 v3 + 12 (16 u v - 1) (u + v) - 1  .  (15)

Get rid of the third term in (15) by setting

      v = 1/(16 u)  .  (16)

Then

      P(u + 1/(16 u) + ½) = 64 u3 + 1/(64 u3) - 1  .  (17)

Solving for P(y) = 0 is therefore equivalent to solving the equation

      4096 u6 - 64 u3 + 1 = 0  .  (18)

This is a quadratic equation in u3. There are no real solutions, but it does have the complex solutions:

      u3 = (1 ± i √3)/128 = (1/64) ( cos π/3 ± i sin π/3 ) = (1/64) exp ±i π/3  .  (19)

This gives us

      u = (1/4) exp ±i π/9  .  (20)

Without loss of generality we can take the + sign in (20), in which case we find

      u = (1/4) exp i π/9

      v = (1/4) exp -i π/9  .  (21)

Then

      u + v = (1/4) (exp i π/9 + exp -i π/9) = ½ cos π/9  .  (22)

Or, to get back to the solution of (10),

      y = z + ½ = u + v + ½ = ½( 1 + cos π/9)  .  (23)

Using the half-angle formula for cosines, we find

      y = cos2 π/18  .  (24)

And so, TaDa!, one of the solutions of (5) is

      cos x = √y = cos π/18  .  (25)

But, of course we know that one of the solutions of (6) must be cos π/18, so we've come a long way to prove:

      cos π/18 = cos π/18  .  (26)

THAT's a big disappointment. While cos π/18 is an algebraic irrational, i.e. not transcendental, it cannot be expressed in any simple formula such as those we found last time.

You can, however, with enough patience get a solution. Using Newton's method, we can find a root of any differentiable function f(x) by setting x0 sufficiently close to the root and repeatedly calculating

      xn+1 = xn - f(xn)/f'(xn)  .  (27)

Let's start with the simplest polynomial we have, (12), which we'll write as

      F(z) = 64 z3 - 12 z - 1  .  (28)

The corresponding Newton function is

      G(z) = z - F(z)/F'(z) = (128 z3 + 1)/[12 (16 z2 - 1) ]  .  (29)

Now we know that z = y - ½ = cos2 π/18 - ½, and that cos π/18 is near one, so a good starting value for z is ½. That lets us write this little awk script. I picked awk because it's even available in Windows. I'm using bash to drive the script, so you'll have to modify that for Windows systems. It will work on Macs and Linux boxes:

#! /bin/bash

Z=0.5

# A proper script would turn off when convergence is achieved,
#  and not before.  Obviously this is not a proper script

# This is set near the maximum precision of my computer. YMMV

for i in `seq 1 5`
do
    Z=`echo $Z | awk '{printf "%21.15f", (128*$1^3+1)/(12*(16*$1^2-1))}'`
    echo $Z
done

Y=`echo $Z | awk '{printf "%21.15f", $1+0.5}'`

echo "Y = " $Y

COS=`echo $Y | awk '{printf "%21.15f", sqrt($1)}'`

echo "cos pi/18 = " $COS

Which has the output:

0.472222222222222
0.469862891737892
0.469846311209168
0.469846310392954
0.469846310392954
Y =  0.969846310392954
cos pi/18 =  0.984807753012208

Now to go back to last week's barbarian captor scenario: since you'll be doing this with a stick in the dirt it's going to take a while, but you can tell him that, in principle, you can get the exact answer to arbitrary precision.


* You can get this formula by

  • Carefully working out the expansion of cos 9x using the addition formulas, then factoring the resulting polynomial.
  • Or realize that cos (π - x) = - cos x means that any expansion of cos 9x must be an odd polynomial in cos x, and that x = π/6 is one of the solutions of (5), so that ½ √3 must be one of the solutions, and derive the remaining coefficients by careful consideration of how cos 9x behaves, e.g., both sides of (5) must be equal to 1 when x = 0. Or,
  • Or do what I did: write out (5) with unknowns for the variables, use a spreadsheet, or Perl or a Fortran program to write down cos 9x for one hundred or so values of x, and use gnuplot's fitting function to determine the values of the coefficients. [Go back to equation (6).]

† Actually, since exp 2 N i π = 1 for any integer N, there are also two other unique solutions, u = (1/4) exp ±i 7π/9 and u = (1/4) exp ±i 5π/9. We'll leave it as an exercise for the reader to follow through with these solutions. (Hint: the final solution will involve the cosines of 10°, 50°, and 70°.) [Go back to equation (20).]

Saturday, July 27, 2013

Exact Sines and Cosines

Civilization has fallen. Every computer has ceased to function. All libraries are rubble. Worse, every copy of Abramowitz and Stegun has been burnt.

Your barbarian captor tells you that you have one day to calculate the sine of 12° or he will feed you to his pack of feral Vietnamese Pot-Bellied Pigs.

Watcha gonna do?


Way, way back in high school geometry Mr. Wye taught me to draw triangles like these:

Which makes it easy to calculate the sines and cosines of 30°, 45°, and 60°:

     sin 30° = cos 60° = ½   (1)
     sin 45° = cos 45° = 1/√2   (2)
     sin 60° = cos 30° = ½ √3   (3)

From that you can use the half-angle formulas

     sin θ = [ (1 − cos 2 θ)/2 ]½   (4)  and
     cos θ = [ (1 + cos 2 θ)/2 ]½ .   (5)

to give you, e.g.,

     sin 15° = (√3 − 1)/√8   (6)   and
     cos 15° = (√3 + 1)/√8    .   (7)

And then you can go on to the values for 7.5°, 3.75°, 1.875°, … . You could then use the addition formulas

     sin (x+y) = sin x cos y + cos x sin y   (8)   and

     cos (x+y) = cos x cos y − sin x sin y   (9)

to compute

     sin (15° − 3.75°) = sin 11.25°

and so on, eventually getting really, really close to sin 12°.

Of course you'll never quite get there, and that means you'll be sleeping with the piggies.

But wait! We don't have to use pictures to compute sines and cosines. In addition to all of the formulas above, we've got a large number of trigonometric relations that we can draw on:

      sin 0 = 0   ,  (10)
      cos 0 = 1   ,  (11)
      sin 90° = 1   ,  (12)
      cos 90° = 0   ,  (13)
      sin2 x + cos2 x = 1   ,  (14)
      sin (90° − x) = cos x   ,  (15)
      sin (−x) = − sin x  ,  (16)
      cos (−x) = cos x  ,  (17)   and
      cos (x + 360°) = cos x , sin (x + 360°) = sin x (18)
      sin x > 0 , cos x > 0 , 0 < x < 90°   .  (19)

So, for example, by using (15) with x = 45° we find

      sin 45° = cos 45°   (20)  .

Combine this with the Pythagorean Theorem (14) and we get

      2 sin2 45° =1  ,  (21)   or

      sin 45° = cos 45° = 1/√2   .  (22)

For 30/60°, repeated application of (8) and (9) gives

      cos 3 x = 4 cos3 x −3 cos x   .  (23)

If we set 3 x = 90°, then the left hand side of (23) must vanish and, using (19),

      cos 30° = ½ √3  .  (24)

and by (14)

      sin 30° = ½  .  (25)

We can get the values for 60° using (15).

That's nice, but it doesn't get us any closer to sin 12°. But wait! If (23) involves cos 3 x, what about another integer? Say

      cos 5 x = 5 cos x − 20 cos3 x + 16 cos5 x   .  (26)

This is a fifth-order polynomial in cos x, so if we set cos 5 x = 0, we'll get five solutions. Let's think about those a bit.

The function cos 5 x has zeros whenever 5 x = 90° + N 180°, for any integer N. If we take N = 0, 1, 2, 3, 4, we get x = 18°, 54°, 90°, 126° and 162°, respectively. Equation (26) has one zero root, which is obviously cos 90°. That leaves

      16 cos4 x − 20 cos2 x + 5 = 0   ,  (27)

which has the solutions

     cos x = ± [ (5 ± √5)/8]½  .  (28)

Since cos x is decreasing as x goes from 0 to 90° we can match the smaller positive value in (28) to cos 54° and the larger to cos 18°. The negative solutions are cos 126° = − cos 54° and cos 162° = − cos 18°.

So we have cos 18° and cos 54°. Then we can use (14) to give us the sines of those angles. Even better, using (15) we can get the sines and cosines of 36° and 72°. And guess what! From the values for 18° and 15° we can get sin 3° and cos 3°, and using those and the values for 15° we can get sin 12° and cos 12°. You're not going to be pig supper!!!
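Here's that chain in code (a Python sketch, my addition): build sin 3° from the exact 18° and 15° values with the addition formulas (8)-(9), then sin 12° = sin(15° − 3°), and compare with both the library value and the 12° entry from the table below.

```python
import math

# Exact values: sin 18 = (sqrt(5)-1)/4, cos 18 = sqrt((5+sqrt(5))/8)
# (the larger of the positive quartic roots), plus the 15-degree values.
s18 = (math.sqrt(5) - 1) / 4
c18 = math.sqrt((5 + math.sqrt(5)) / 8)
s15 = (math.sqrt(3) - 1) / math.sqrt(8)
c15 = (math.sqrt(3) + 1) / math.sqrt(8)

# Addition formulas:
s3 = s18 * c15 - c18 * s15      # sin(18 - 15)
c3 = c18 * c15 + s18 * s15      # cos(18 - 15)
s12 = s15 * c3 - c15 * s3       # sin(15 - 3)

# The 12-degree entry from the table:
t12 = math.sqrt(7 - math.sqrt(5) - math.sqrt(6 * (5 - math.sqrt(5)))) / 4

print(s12, t12, math.sin(math.radians(12)))
```

All three agree to machine precision.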


In fact, you can calculate the sine and cosine at every 3 degrees. I've attached a table of these values below. Save it for future reference, just in case your barbarian captor doesn't give you a full 24 hours to do the calculations:

Values of the trigonometric functions at special angles

  θ (radians)    θ (degrees)    sin θ = cos ϕ  (θ + ϕ = π/2)    ϕ (radians)    ϕ (degrees)
0 0 0 π/2 90
π/60 3 (1/4) √{8−√3 (√5+1) −√{2(5−√5)}} 29π/60 87
π/30 6 (1/4) √{9−√5−√{6 (5+√5)}} 7π/15 84
π/20 9 (1/2)  √{2 − √{(5+√5)/2}} 9π/20 81
π/15 12 (1/4) √{7−√5−√{6 (5−√5)}} 13π/30 78
π/12 15 (√3−1)/(2√2) 5π/12 75
π/10 18 (1/4)  (√5 − 1) 2π/5 72
7π/60 21 (1/4) √{8 −√3(√5−1)−√{2(5+√5)}} 23π/60 69
2π/15 24 (1/4)√{7+√5−√{6(5+√5)}} 11π/30 66
3π/20 27 (1/2)  √{2 − √{(5−√5)/2}} 7π/20 63
π/6 30 1/2 π/3 60
11π/60 33 (1/4) √{8−√3 (√5+1) + √{2 (5−√5)}} 19 π/60 57
π/5 36 (1/2)  √{(5−√5)/2} 3π/10 54
13π/60 39 (1/4) √{8 + √3 (√5−1) − √{2 (5+√5)}} 17π/60 51
7π/30 42 (1/4) √{9+√5−√{6 (5−√5)}} 4π/15 48
π/4 45 1/√2 π/4 45
4π/15 48 (1/4) √{7−√5+√{6(5−√5)}} 7π/30 42
17π/60 51 (1/4) √{8−√3(√5−1) + √{2(5+√5)}} 13π/60 39
3π/10 54 (1/4)  (√5 + 1) π/5 36
19π/60 57 (1/4) √{8+√3 (√5+1) − √{2(5−√5)}} 11π/60 33
π/3 60 (1/2)  √3 π/6 30
7π/20 63 (1/2)  √{2 + √{(5−√5)/2}} 3π/20 27
11π/30 66 (1/4) √{9 − √5 + √{6 (5 + √5)}} 2π/15 24
23π/60 69 (1/4) √{8 +√3(√5−1)+√{2(5+√5)}} 7π/60 21
2π/5 72 (1/2)  √{(5+√5)/2} π/10 18
5π/12 75 (√3+1)/(2√2) π/12 15
13π/30 78 (1/4)√{9+√5+√{6(5−√5)}} π/15 12
9π/20 81 (1/2) √{2 + √{(5+√5)/2}} π/20 9
7π/15 84 (1/4) √{7+√5+√{6 (5+√5)}} π/30 6
29π/60 87 (1/4) √{8+√3 (√5+1) +√{2 (5−√5)}} π/60 3
π/2 90 1 0 0

File translated from TEX by TTH, version 4.03.
On 2 Aug 2013, 14:50.

A few notes:

  • Note that you read the sines from the top down, and the cosines from the bottom up.
  • I haven't used radians here, because your captor won't know them from arc seconds, but what we've done is compute the sine and cosine for any value N π/60 radians.
  • I've checked all the formulas by plugging them into a calculator and then comparing the values to the calculator's value for the sine. Several times. So I'm pretty sure they are all correct.
  • Hopefully this is sufficiently readable. There isn't a good way of translating LaTeX to HTML on Blogger. TTH is rather old, but still workable. If you'd like a PDF (or the original LaTeX source), shoot me an email and I'll send it to you.
  • You might think we can use cos 9 x = 0 to compute cos 10°, which means we could compute cos 1° = cos(10°-9°), and have an analytic expression for the sine and cosine at every degree. We'll explore why that doesn't work next time.

Friday, July 19, 2013

The Logarithmic Function From Scratch

The counterpart to last week's development of the exponential function is the logarithm. As anyone who has ever played with the antique known as a slide rule knows, the logarithmic function converts multiplication into addition:

Log(a b) = Log(a) + Log(b)  ,  (1)

but that's getting ahead of ourselves.

Assume that (1) is all we know: Consider a real function G defined on at least some part of the real numbers, and having the property

G(a b) = G(a) + G(b)  ,  (2)

for any real numbers a and b. What can we say about this function?

First, take a = 0. Then we have

G(0 b) = G(0) + G(b)  , or
G(0) = G(0) + G(b)  .  (3)

So either G(b) = 0 for all b, or G(0) is undefined. (Yes, we can say that G(0) = ∞, but I'd rather not go there right now.) Since G = 0 is uninteresting, we'll assume that G(0) is undefined. For now, at least, we'll restrict ourselves to x > 0 and so avoid the problem.

If we set a = 1 in (2) we get

G(b) = G(1) + G(b)  , which means that

G(1) = 0  ,  (4)

which implies G[a (1/a)] = G(a) + G(1/a) = G(1) = 0  , or

G(1/a) = - G(a)  .  (5)

For any integer M we can repeatedly apply (2) and get

G(aM) = M G(a)  .  (6)

If we let b = aM, or a = b1/M, then by turning around (6) we get

G(b1/M) = (1/M) G(b)  ,  (7)

and combining the two we get

G(xM/N) = (M/N) G(x)    (8) for any integers M and N.

Furthermore, we can play the same trick as we did last week and extend this to all positive real numbers y:

G(xy) = y G(x)  .  (9)

What about continuity? I'm going to be a little more physicist-like this week (though last week was probably not properly mathematical anyway), and go through it rather quickly. Let's take two numbers, x > 0 and y > -x. Then we can always write

x + y = x Q  ,  Q = 1+y/x > 0  ,

and we want to study the behavior of

G(x + y) = G(x Q) = G(x) + G(Q)   as y → 0 or Q → 1.

When we put it like that, the answer is obvious. Write Q = h1/N. Then as N → ∞, Q → 1, and

limN → ∞ G(Q) = limN → ∞ G(h)/N → 0  .  (10)

Thus

limy→0 |G(x+y) - G(x)| = 0  ,  (11)

and G(x) is a continuous function for x > 0.

Using the same tricks, let's look at the derivative, if it exists:

G'(x) = limy→0 [G(x+y)-G(x)]/y  ,  (12)

With the above substitutions we can write this as

G'(x) = limN→∞ [G(x h1/N) - G(x)]/[x (h1/N-1)]  ,  (13)

which reduces to G'(x) = [G(h)/x]/ [limN→∞ N (h1/N - 1)]  .  (14)

Last week we showed that the limit in the denominator of (14) is finite, so it follows that G'(x) is well defined for x > 0.

To evaluate the derivative, we do the standard tricks:

d/dy G(x y) = x G'(x y) = d/dy [G(x) + G(y)] = G'(y)

and set y = 1. Then

G'(x) = G'(1)/x  , x > 0  .  (15)

Thus each and every log function (2) is uniquely defined once we define G'(1). Since G(1) = 0, we can integrate (15) to find

G(x) = G'(1) ∫1x dt / t  .  (16)

The logical prototype logarithm function (which we could call the, ahem, natural logarithm) is then the function with G'(1) = 1:

ln x = ∫1x dt / t  .  (17)

Which is exactly how my college calculus course started out.
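As a closing numerical sketch (my addition), we can approximate the integral (17) with the midpoint rule and check the defining product rule (1) directly:

```python
import math

# Midpoint-rule approximation to ln x = integral from 1 to x of dt/t.
def ln_int(x, steps=100_000):
    h = (x - 1) / steps
    return sum(h / (1 + (k + 0.5) * h) for k in range(steps))

a, b = 2.0, 3.5
print(ln_int(a * b), ln_int(a) + ln_int(b))   # G(ab) = G(a) + G(b)
print(ln_int(2.0), math.log(2.0))             # matches the library log
```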

One more thing: we want to show that ln x and exp x are inverse functions of one another. Write

H(x) = ln(exp x)  .  (18)

Obviously H(0) = ln(exp 0) = ln(1) = 0. Furthermore,

H'(x) = [d/dx exp x]/exp x = exp x/exp x = 1  ,  (19)

so H(x) = x. We could go the opposite way as well and show that

exp(ln x) = x  .  (20)

That concludes this little exploration of the log and exponential functions. Next week we'll take a look at sines and cosines.

Friday, July 12, 2013

The Exponential Function From Scratch

Welcome to Stupid Math Tricks. Over the last mumblty years or so I've found myself scratching out various proofs of things that are probably well known to anyone who graduated with a degree in math. But I'm a physicist, so what do I know?

So when I can't sleep at night, I start thinking about these little mathematical tidbits, and sometimes obsessing about them, to the point where I really can't get to sleep. The hope is that by writing them down in a permanent place I'll be able to forget them. Of course, maybe a real mathematician will come along and tell me how badly I've gotten things wrong, but that's the chance we'll have to take.

So for my first SMT, let's look at the exponential function. Back in Calculus 101, the book we used at KU started this subject by defining the natural logarithm using the formula:

ln x = ∫1x dt / t .    (1)

This is actually a neat way to go about it, as it answers one of the questions on the mind of every calculus student once integrals are taught. Since

∫1x tn dt = [xn+1 - 1]/(n+1)  , n ≠ -1,  (2)

what happens when n in (2) is -1? The answer is, well, we have to define a new function. Once we accepted that, it was rather easy to show that ln x has the properties of a logarithm, i.e.

ln (a b) = ln a + ln b  .    (3)

Once we accepted that, the text defined a function exp(x) as the inverse of ln(x):

exp(ln x) = x   ;   ln(exp x) = x ,    (4)

and it was shown that exp(x) had the properties of an exponential:

exp(x + y) = exp(x) exp(y)  .    (5)

But you don't have to do it that way. A few years later, in a thermodynamics class, we were discussing how Maxwell derived the distribution of speeds in an ideal gas. He did this by assuming that (1) the movement of a gas molecule along the x-axis was independent of its movement along the y- or z- axis, and, indeed, independent of the choice of directions. Which meant that the distribution of velocities in an ideal gas must obey the relation

F(vx2) F(vy2) F(vz2) = F(vx2 + vy2 + vz2)    (6)

where vx is the component of the velocity in the x direction.

Comparing (5) and (6), and adding a little physics, namely that no atom ever gets up to infinite speed, we find that the distribution of molecular speeds in an ideal gas must have the form

F(v) = A exp(- B v2)  .    (7)

where the values of A and B depend on temperature.

Believe it or not, the idea that the requirement F(x+y) = F(x) F(y) requires that F(x) be an exponential function was new to me, as I'd been taught to think of exponentials using (2)-(6).

Which, finally, brings us to the topic of the inaugural Stupid Math Tricks post, namely,

What are the properties of a function F(x)
if F(x+y) = F(x) F(y)?

(And try to be formal about it.)

Define a real function F(x) over the real numbers. We assume that

F(x + y) = F(x) F(y)  .    (8)

Let's see what properties we can derive:

  1. If x = y = 0, then

    F(0) = F(0)2  ,    (9)

    so F(0) is either 0 or 1. F(0) = 0 would lead to a very boring function, since then

    F(x) = F(x+0) = F(x) F(0) = 0   ∀ x  

    so we'll take

    F(0) = 1  .    (10)

  2. F(x) must be positive. To see this, first note that

    F(x) = F(x/2 + x/2) = F(x/2)2   ,

    so F(x) is either positive or 0. But if there was some point x0 such that F(x0) = 0, then we could always write

    F(x) = F(x-x0+x0) = F(x-x0) F(x0) = 0   ,

    and in particular

    F(0) = F(x0) F(-x0) = 0  ,

    which violates (10). Thus

    F(x) > 0   for all real x.   (11)

  3. It follows from (8) and (10) that

    F(-x) = 1/F(x)   .   (12)

  4. Since N = 1 + 1 + … + 1 (N times), we can write

    F(N) = F(1)N   .   (13)

  5. Conversely, since 1 = 1/N + 1/N + … + 1/N ,

    F(1) = F(1/N)N   ,

    from which it follows that

    F(1/N) = F(1)1/N   .   (14)

  6. Combining (13) and (14), we get

    F(R) = F(1)R   for any rational R   .   (15)

  7. We can extend (14) to any real number x:
    1. For any non-rational real number x and positive integer N, define an integer function M(N;x) such that

      M(N;x)/N < x < [M(N;x) + 1]/N   .   (16)

    2. Then define

      F(x) = limN → ∞ F[M(N;x)/N]

           = limN → ∞ F(1)M(N;x)/N   .     (17)

    3. This also proves that F(x) is a continuous function of x, since for any N,

      F(x + 1/N) = F(x) F(1/N) = F(x) F(1)1/N   ,   (18)

      and

      limN → ∞ F(1)1/N = 1 for all F(1) > 0   .   (19)
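Step 7 can be watched in action; a quick Python sketch (my addition) brackets an irrational exponent between the rationals M/N and (M+1)/N of (16) and squeezes A**x between them:

```python
import math

# Bracket x = sqrt(2) by rationals M/N and (M+1)/N, as in (16)-(17).
A, x = 3.0, math.sqrt(2)
for N in (10, 100, 10_000, 1_000_000):
    M = math.floor(x * N)
    lo, hi = A ** (M / N), A ** ((M + 1) / N)
    print(N, lo, hi)
```

The two columns pinch down on a single value as N grows.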

OK, so F(x) is a positive, continuous function on the reals. But is it differentiable? You'd think so, but there are many continuous functions which are not everywhere differentiable. So let's see. In the following we'll take F(1) = A > 0, without any loss of generality.

  1. We can write

    F'(x) = limN → ∞ [ F(x+1/N) - F(x) ]/(1/N)    .   (20)

    Using F(x + 1/N) = F(x) F(1/N) = F(x) A1/N ,

    F'(x) = F(x) limN → ∞ N [ A1/N – 1 ]    .   (21)

  2. Note that

    A1/N – 1 = [A – 1]/[ ∑n=0N-1 An/N ]    .   (22)

  3. If A > 1, then the sum in the denominator of (22) must be greater than N and

    limN → ∞ N [ A1/N – 1 ] < A – 1    .   (23)

  4. On the other hand, if A < 1, then the denominator of (22) must be greater than N A, and, keeping in mind that [ A1/N – 1 ] is now negative,

    limN → ∞ N [ A1/N – 1 ] > (A – 1)/A    .   (24)

  5. In either case, the limit in (21) is finite, and so the derivative F'(x) exists for all real x.
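The limit in (21) can be watched converging numerically (my addition; the limit turns out to be the natural log of A, though we don't need that fact yet):

```python
import math

# N * (A**(1/N) - 1) for growing N, compared with ln A (the eventual limit).
for A in (0.5, 2.0, 10.0):
    vals = [N * (A ** (1 / N) - 1) for N in (10, 1000, 100_000)]
    print(A, vals, math.log(A))
```

For A = 2 the values stay below A − 1 = 1, as the bound (23) requires.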

Given that, what is the derivative of F(x)?

  1. Go back to (8), and differentiate both sides of the equation with respect to y:

    F'(x + y) = F(x) F'(y)    .   (25)

  2. If we take y = 0, then

    F'(x) = F'(0) F(x)    .   (26)

  3. Note that since F(0) = 1, this implies that the entire set of functions F(x) is defined solely by the value of F'(0). So define the special member of this family with F'(0) = 1. Oh, what the heck, let's call it exp(x), where

    exp(x+y) = exp(x) exp(y)   by (8),

    exp(0) = 1   by (10), and

    exp'(x) = exp(x)    .   (27)

  4. By the chain rule

    d/dx exp( λ x ) = λ exp(λ x)    ,   (28)

    so we can define the general function

    F(x) = exp[ F'(0) x]    ,   (29)

  5. and, by (27), for any integer N > 0, the Nth derivative of exp(x) is itself,

    dN/dxN exp(x) = exp(x)    ,   (30)

  6. Leading to the most famous Taylor Series ever,

    exp(x) = ∑n=0∞ xn/n!   .   (31)

    Which is how my complex analysis textbook started the discussion of exp(x).

  7. (31), of course, lets us evaluate

    e = exp(1) = 2.71828182…    .   (32)

  8. And, with the help of (15) and the discussion in item 7 above, write

    exp(x) = ex    .   (33)
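Closing the loop with a few lines of Python (my addition), the truncated series (31) reproduces (32):

```python
import math

# Sum the Taylor series for exp, building each term from the previous one.
def exp_taylor(x, terms=30):
    total, term = 0.0, 1.0       # term holds x**n / n!
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

print(exp_taylor(1.0), math.e)
```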

And there you have it. From (8), we've derived all of the properties of exp(x), without ever invoking natural logarithms and their inverses. Not that I don't appreciate the other way of looking at things, but, hey, it's nice to know that Maxwell was right.

But wait! Is the exp(x) in (27) the same as the inverse of the function (1)? That remains to be seen, and will be left until next time.