Welcome to Stupid Math Tricks. Over the last mumblety years or so I've found myself scratching out various proofs
of things that are probably well known to anyone who graduated with a degree in math. But I'm a physicist, so what do I know?
So when I can't sleep at night, I start thinking about these little mathematical tidbits, and sometimes obsessing about them, to the point where I really can't get to sleep. The hope is that by writing them down in a permanent place I'll be able to forget them. Of course, maybe a real mathematician will come along and tell me how badly I've gotten things wrong, but that's the chance we'll have to take.
So for my first SMT, let's look at the exponential function. Back in Calculus 101, the book we used at KU started this subject by defining the natural logarithm using the formula:
ln x = ∫_{1}^{x} dt / t . (1)
This is actually a neat way to go about it, as it answers one of the questions on the mind of every calculus student once integrals are taught. Since
∫_{1}^{x} t^{n} dt = [x^{n+1} − 1]/(n+1) , n ≠ −1, (2)
what happens when n in (2) is −1? The answer is, well, we have to define a new function.
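As a quick sanity check, here's a short numerical sketch (plain Python, midpoint rule; the step count is an arbitrary choice of mine) showing that the integral in (1) reproduces the natural log and obeys the logarithm law (3):

```python
import math

def ln_via_integral(x, steps=100_000):
    """Approximate ln(x) = integral from 1 to x of dt/t, midpoint rule."""
    h = (x - 1.0) / steps
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))

# The integral matches the built-in natural log...
print(ln_via_integral(2.0), math.log(2.0))
# ...and satisfies ln(ab) = ln(a) + ln(b):
print(ln_via_integral(6.0), ln_via_integral(2.0) + ln_via_integral(3.0))
```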
Once we accepted that, it was rather easy to show that ln x has the properties of a logarithm, i.e.
ln (a b) = ln a + ln b . (3)
With that established, the text defined a function exp(x) as the inverse of ln(x):
exp(ln x) = x ; ln(exp x) = x , (4)
and it was shown that exp(x) had the properties of an exponential:
exp(x + y) = exp(x) exp(y) . (5)
But you don't have to do it that way. A few years later, in a thermodynamics class, we were discussing how Maxwell derived the distribution of speeds in an ideal gas. He did this by assuming that the movement of a gas molecule along the x-axis is independent of its movement along the y- or z-axis, and, indeed, independent of the choice of axes altogether. This means that the distribution of velocities in an ideal gas must obey the relation
F(v_{x}^{2}) F(v_{y}^{2}) F(v_{z}^{2}) = F(v_{x}^{2} + v_{y}^{2} + v_{z}^{2}) (6)
where v_{x} is the component of the velocity in the x direction.
Comparing (5) and (6), and adding a little physics (namely, that no molecule ever gets up to infinite speed), we find that the distribution of molecular speeds in an ideal gas must have the form
F(v) = A exp(−B v^{2}) , (7)
where the values of A and B depend on temperature.
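For what it's worth, a few lines of Python (with B an arbitrary positive constant of my choosing, and the normalization A = 1 so that (6) holds exactly) confirm that the Gaussian form (7) satisfies the product rule (6):

```python
import math

B = 0.7  # arbitrary positive constant, standing in for the temperature dependence

def F(s):
    """F from (7) with A = 1, applied to a squared velocity component."""
    return math.exp(-B * s)

vx, vy, vz = 0.5, -1.2, 2.0
lhs = F(vx**2) * F(vy**2) * F(vz**2)  # left side of (6)
rhs = F(vx**2 + vy**2 + vz**2)        # right side of (6)
print(lhs, rhs)  # the two sides agree to floating-point precision
```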
Believe it or not, the idea that the condition F(x+y) = F(x) F(y) forces F(x) to be an exponential function was new to me, as I'd been taught to think of exponentials using (2)-(6).
Which, finally, brings us to the topic of the inaugural Stupid Math Tricks post, namely,
What are the properties of a function F(x)
if F(x+y) = F(x) F(y)?
(And try to be formal about it.)
Consider a real-valued function F(x) defined on the real numbers. We assume that
F(x + y) = F(x) F(y) . (8)
Let's see what properties we can derive:

If x = y = 0, then
F(0) = F(0)^{2} , (9)
so F(0) is either 0 or 1. F(0) = 0 would lead to a very boring function, since then
F(x) = F(x+0) = F(x) F(0) = 0 ∀ x
so we'll take
F(0) = 1 . (10) 
F(x) must be positive. To see this, first note that
F(x) = F(x/2 + x/2) = F(x/2)^{2} ,
so F(x) is either positive or 0. But if there was some point x_{0} such that F(x_{0}) = 0, then we could always write
F(x) = F(x − x_{0} + x_{0}) = F(x − x_{0}) F(x_{0}) = 0 ,
and in particular
F(0) = F(x_{0}) F(−x_{0}) = 0 ,
which violates (10). Thus
F(x) > 0 for all real x. (11) 
It follows from (8) and (10) that
F(−x) = 1/F(x) . (12) 
Since N = 1 + 1 + … + 1 (N times), we can write
F(N) = F(1)^{N} . (13) 
Similarly, since 1 = 1/N + 1/N + … + 1/N ,
F(1) = F(1/N)^{N} ,
from which it follows that
F(1/N) = F(1)^{1/N} . (14) 
Combining (13) and (14) (and using (12) to cover negative values), we get
F(R) = F(1)^{R} for any rational R . (15) 
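In Python terms (with F(1) an arbitrary positive value of my choosing), (15) just says that building F(M/N) out of integer powers and N-th roots agrees with direct real exponentiation:

```python
A = 2.5   # arbitrary choice of F(1); any positive value works
M, N = 7, 3

via_steps = (A ** M) ** (1.0 / N)  # (13) then (14): integer power, then N-th root
direct = A ** (M / N)              # built-in real exponentiation
print(via_steps, direct)  # the two agree to floating-point precision
```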
We can extend (15) to any real number x:

For any nonrational real number x and positive integer N, define an integer function M(N;x) such that
M(N;x)/N < x < [M(N;x) + 1]/N . (16) 
Then define
F(x) = lim_{N → ∞} F[M(N;x)/N]
= lim_{N → ∞} F(1)^{M(N;x)/N} . (17) 
This also proves that F(x) is a continuous function of x, since for any N,
F(x + 1/N) = F(x) F(1/N) = F(x) F(1)^{1/N} , (18)
and
lim_{N → ∞} F(1)^{1/N} = 1 for all F(1) > 0 . (19)
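A quick numerical illustration of (16) and (17) (Python; the values of A and x are arbitrary choices of mine): the rational approximations F(1)^{M(N;x)/N} close in on A^{x} as N grows:

```python
import math

A = 2.5           # F(1); any positive value works
x = math.sqrt(2)  # an irrational exponent

for N in (10, 1_000, 100_000):
    M = math.floor(N * x)  # M(N;x) from (16)
    approx = A ** (M / N)  # the rational approximation in (17)
    print(N, approx, A ** x)
```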

OK, so F(x) is a positive, continuous function on the reals. But is it differentiable? You'd think so, but there are many continuous functions which are not everywhere differentiable. So let's see. In the following we'll take F(1) = A > 0, without any loss of generality.

We can write
F'(x) = lim_{N → ∞} [ F(x + 1/N) − F(x) ] / (1/N) . (20)
Using F(x + 1/N) = F(x) F(1/N) = F(x) A^{1/N} ,
F'(x) = F(x) lim_{N → ∞} N [ A^{1/N} – 1 ] . (21) 
Note that
A^{1/N} – 1 = [A – 1]/[ ∑_{n=0}^{N−1} A^{n/N} ] . (22) 
If A > 1, then the sum in the denominator of (22) must be greater than N and
lim_{N → ∞} N [ A^{1/N} – 1 ] < A – 1 . (23) 
On the other hand, if A < 1, then the denominator of (22) must be greater than N A, and, keeping in mind that [ A^{1/N} – 1 ] is now negative,
lim_{N → ∞} N [ A^{1/N} – 1 ] > (A – 1)/A . (24)
In either case the sequence N [ A^{1/N} – 1 ] is bounded, and since it is also monotone (it decreases as N grows, by the convexity of A^{h} in h), the limit in (21) exists and is finite. So the derivative F'(x) exists for all real x.
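Numerically (Python; the values of A are arbitrary choices of mine), the limit in (21) is not just finite; it converges to the natural log of A, a fact not needed for the argument here but easy to observe:

```python
import math

for A in (0.3, 2.0, 10.0):
    for N in (10, 10_000):
        val = N * (A ** (1.0 / N) - 1.0)
        print(A, N, val, math.log(A))  # val approaches log(A) as N grows
```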
Given that, what is the derivative of F(x)?

Go back to (8), and differentiate both sides of the equation with respect to y:
F'(x + y) = F(x) F'(y) . (25) 
If we take y = 0, then
F'(x) = F'(0) F(x) . (26)

Note that since F(0) = 1, this implies that any function satisfying (8) is determined solely by the value of F'(0). So define the special member of this family with F'(0) = 1. Oh, what the heck, let's call it exp(x), where
exp(x+y) = exp(x) exp(y) by (8),
exp(0) = 1 by (10), and
exp'(x) = exp(x) . (27)
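Equation (27) says exp(x) is the solution of y' = y with y(0) = 1, and even the crudest numerical integrator shows it. A sketch in Python (Euler's method; the step count is an arbitrary choice of mine):

```python
import math

def euler_exp(x, steps=100_000):
    """Integrate y' = y, y(0) = 1 (eq. 27) with Euler's method out to x."""
    h = x / steps
    y = 1.0
    for _ in range(steps):
        y += h * y  # one Euler step of y' = y
    return y

print(euler_exp(1.0), math.e)  # agrees with e to a few decimal places
```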

By the chain rule
d/dx exp( λ x ) = λ exp(λ x) , (28)
so the general solution of (26) can be written
F(x) = exp[ F'(0) x] , (29)

and, by (27), for any integer N > 0, the N^{th} derivative of exp(x) is itself:
d^{N}/dx^{N} exp(x) = exp(x) . (30)

This leads to the most famous Taylor series ever,
exp(x) = ∑_{n=0}^{∞} x^{n}/n! . (31)
Which is how my complex analysis textbook started the discussion of exp(x).
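A sketch of (31) in Python (the term count is an arbitrary cutoff; each term is built from the previous one to avoid computing factorials directly):

```python
import math

def exp_series(x, terms=30):
    """Partial sum of (31): sum over n < terms of x**n / n!."""
    total, term = 0.0, 1.0  # term starts at x**0 / 0! = 1
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # x**(n+1)/(n+1)! from x**n/n!
    return total

print(exp_series(1.0), math.exp(1.0))
print(exp_series(-2.5), math.exp(-2.5))  # works for negative arguments too
```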

(31), of course, lets us evaluate
e = exp(1) = 2.71828182… . (32)

And, with the help of (15) and the discussion in VII (the extension to the reals), we can write
exp(x) = e^{x} . (33)
And there you have it. From (8), we've derived all of the properties of exp(x), without ever invoking natural logarithms and their inverses. Not that I don't appreciate the other way of looking at things, but, hey, it's nice to know that Maxwell was right.
But wait! Is the exp(x) in (27) the same as the inverse of the function defined in (1)? That remains to be seen, and will be left until next time.