Appendix I: the exponential function

The exponential function arises in science and other fields in many ways. Here’s an introduction, based on algebra, not calculus, with a bit of extension into calculus afterward.

1. The positive exponential, e^x or exp(x), is a rapidly growing function of its argument, x. The most common way for the nonscientist to encounter it is in financial compound interest. Unrestrained population growth at a constant growth rate is another example.

A practical way to look at the exponential is in describing compound interest. Consider a gain in value (interest) at a rate a (per day, per month, per year, or whatever). How fast does some initial value, V_0, increase? If we do simple interest, adding a fraction a*t after a time t has elapsed, then, if the interest keeps accumulating for a total time T, the final value is

V = V_0 (1 + aT)
Now let us add interest in every time interval in an amount equal to the fraction a*(T/N) of the new, current value. Let's divide the final time into N small intervals, each of length T/N. Then, the value rises as

V_1 = V_0 (1 + aT/N)
V_2 = V_1 (1 + aT/N) = V_0 (1 + aT/N)^2
...
V_N = V_0 (1 + aT/N)^N

and, as N becomes infinitely large,

V_N -> V_0 e^(aT), that is, e^(aT) = limit as N -> infinity of (1 + aT/N)^N

That last line is the first definition of the exponential, for any argument (here, aT, but we might just call this the generic argument, x).
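As a quick numerical sketch (with illustrative values of a and T, not tied to any example in the text), we can watch (1 + aT/N)^N close in on e^(aT) as N grows:

```python
import math

# Watch (1 + aT/N)**N approach e**(aT) as N grows (illustrative values).
a, T = 0.05, 10.0   # a 5% rate over 10 periods, chosen just for this demo
for N in (1, 10, 100, 10_000):
    print(N, (1 + a * T / N) ** N)
print("e^(aT) =", math.exp(a * T))
```

Even N = 100 already lands quite close to the limiting value.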


Let's look at how we can calculate e^(aT) (pronounced as "e to the aT"). Let's start by writing out the values of V_3 and V_4, expanding the powers:

V_3 = V_0 (1 + aT/N)^3 = V_0 [1 + 3(aT/N) + 3(aT/N)^2 + (aT/N)^3]
V_4 = V_0 (1 + aT/N)^4 = V_0 [1 + 4(aT/N) + 6(aT/N)^2 + 4(aT/N)^3 + (aT/N)^4]
We can keep going, and you can convince yourself that the final expression is

V_N = V_0 [1 + N(aT/N) + (N(N-1)/2!)(aT/N)^2 + (N(N-1)(N-2)/3!)(aT/N)^3 + ...]
When N is very large, N(N-1) looks indefinitely close to N^2, N(N-1)(N-2) looks like N^3, and so on. This gives us

V_N = V_0 [1 + aT + (aT)^2/2! + (aT)^3/3! + ...] = V_0 e^(aT)
All the terms after the first two are in excess of those with simple interest. This equation is the second definition of the exponential. It also gives us a way to compute the exponential as an infinite series, which always converges to a finite, final value. One special value of e^x is for x = 1:

e = 1 + 1 + 1/2! + 1/3! + 1/4! + ... = 2.718281828...
This is one of the fundamental mathematical constants, along with π and several other common ones. If you want to impress people, memorize, say, 25 digits of π (lots of people try this) and then 25 digits of e (very few people know this).
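To make the series concrete, here is a short Python sketch that sums its first 20 terms; the convergence to e is remarkably fast:

```python
import math

# Sum the series e = 1 + 1/1! + 1/2! + ...; each new term is the
# previous term divided by the next integer.
term, total = 1.0, 0.0
for n in range(20):
    total += term
    term /= n + 1
print(total)    # agrees with math.e to double precision
print(math.e)
```

Twenty terms already match e to the full precision of ordinary floating-point numbers, since the factorials in the denominators grow so fast.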

Let's see how different compound interest is. Let a be 5% = 0.05 per year. Let the total time be 1 year:

Simple interest: the value increases by a factor (1 + 0.05*1) = 1.05

Compound interest, compounded every tiny interval: the value increases by a factor

e^0.05 = 1 + 0.05 + (0.05)^2/2! + (0.05)^3/3! + ... = 1.05127...
This is bigger than simple interest by the sum of all the terms after +0.05. Sure, it's not much bigger, but now let's run this for 50 years, as if you held stocks that gained 5% per year for that long. Simple interest would give you an increase by a factor 1 + 50*0.05 = 3.5. Compound interest would give you an increase by a factor e^(50*0.05) = e^2.5 = 12.18!
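Checking the 50-year numbers in Python (a simple sketch of the comparison above):

```python
import math

a, T = 0.05, 50              # 5% per year for 50 years, as in the text
simple = 1 + a * T           # simple-interest factor: 3.5
compound = math.exp(a * T)   # continuously compounded factor: e^2.5
print(simple, compound)
```

The gap between 3.5 and about 12.18 is the whole story of compounding over long times.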

The positive exponential increases faster at long times (large values of its “argument”) than any polynomial function of a finite number of terms, largely because it has an infinite number of terms.

One can use the infinite series shown above in order to compute the exponential ex for any value of its argument, x. There are more efficient ways, used particularly in computers, to do this computation. There’s no need to go into detail at this time.

A third way to define the exponential, e^x, is that its rate of increase at any value, x, is e^x itself. You'll run into differential equations later, where e^x is defined (a fourth time) as the solution for the function y of the differential equation

dy/dx = y, with y(0) = 1
Let's take this third definition and see if the definition as the power series satisfies it. We need to compute the derivative as the sum of the derivatives of all the individual terms. That requires that we know the concept of a derivative. Simply put, it is the slope of the graph of a function. Here's a graph of e^x with its slope at three places[1]:

That means the rate of change over the smallest of intervals. You'll run into this in calculus as the derivative f'(x) of a function f(x), defined as the difference in the value of that function at x + h and at x, divided by that increment h as h becomes infinitely small. That is

f'(x) = limit as h -> 0 of [f(x + h) - f(x)] / h

This is also commonly written as

df/dx = limit as h -> 0 of [f(x + h) - f(x)] / h
There are some tricky parts for special kinds of functions, which we'll ignore. Also, h can be positive or negative, and the result has to be the same for the function to be called differentiable.

We can work out the derivatives of powers of x:

For f(x) = x^0 = 1, a constant, f(x + h) - f(x) = 1 - 1 = 0; the derivative of a constant is zero

For f(x) = x^1 = x, we have f(x + h) - f(x) = x + h - x = h and, thus, f'(x) = h/h = 1

For f(x) = x^2, we have f(x + h) - f(x) = (x + h)^2 - x^2 = 2xh + h^2, so f'(x) = 2x + h -> 2x as h -> 0

For f(x) = x^3, we have f(x + h) - f(x) = (x + h)^3 - x^3 = 3x^2*h + 3x*h^2 + h^3, so f'(x) = 3x^2 + 3xh + h^2 -> 3x^2
In general, for f(x) = x^n, we have f'(x) = n*x^(n-1)
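We can sanity-check the power rule numerically with a small but finite increment h (a sketch; the exact rule needs the limit h -> 0):

```python
# Check f'(x) = n * x**(n-1) for f(x) = x**n, using a small finite h.
h = 1e-6
x = 2.0
for n in (1, 2, 3, 5):
    slope = ((x + h) ** n - x ** n) / h
    print(n, slope, n * x ** (n - 1))
```

The two printed columns agree to several decimal places; shrinking h further tightens the match.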

We can then differentiate the power series term by term:

d/dx [1 + x + x^2/2! + x^3/3! + ...] = 0 + 1 + 2x/2! + 3x^2/3! + ... = 1 + x + x^2/2! + ... = e^x
So, yes, it works.
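The same numerical-slope idea confirms that e^x is its own derivative (a sketch using a centered difference for better accuracy):

```python
import math

# The centered-difference slope of e^x should match e^x itself.
h = 1e-6
for x in (0.0, 1.0, 2.5):
    slope = (math.exp(x + h) - math.exp(x - h)) / (2 * h)
    print(x, slope, math.exp(x))
```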

2. The negative exponential, e^(-x) or exp(-x), is a smoothly declining function of its argument. It arises naturally in radioactive decay, in which a constant fraction of a nuclide such as U-238 vanishes in each equal time interval. For chemists, it arises in first-order chemical reactions, in which a constant fraction of a reactant disappears in each equal time interval.

You've heard of half-lives, often written as t_(1/2). In a time t_(1/2), half of the element decays. In the next interval of length t_(1/2), half of the remaining half decays, leaving 1/4 of the original amount. In general, after an arbitrary time T, the amount left is (1/2)^(T/t_(1/2)), better written out as exp(-0.693*T/t_(1/2)), as we'll see shortly.

To derive this, we use the same kind of argument as for the positive exponential, but with a negative sign. In a tiny interval t, the value decreases from V_0 to V_0(1 - kt), writing k for a decay constant. (We'll find that it's inversely related to the half-life, as k = 0.693/t_(1/2). This makes sense: a shorter half-life means a faster decay, hence a larger decay constant.)

Again breaking up a finite time interval T into a very large number, N, of tiny intervals, we have

V_N = V_0 (1 - kT/N)^N

In the limit of infinite N, this is V_0 times an exponential with a negative argument, -kT: V_N = V_0 e^(-kT).

The infinite series expansion of e^(-x) looks exactly like that for the positive exponential, with the sign of x changed, so the terms alternate:

e^(-x) = 1 - x + x^2/2! - x^3/3! + x^4/4! - ...
Let's try this out for the decay of uranium at a time equal to two half-lives. Shortly, I'll show that the decay constant, k, is the natural logarithm of 2 divided by the half-life, or 0.693.../t_(1/2). (We haven't done natural logarithms yet; they come a bit later.)

At two half-lives, kT = 2*0.693 = 1.386. We have

e^(-1.386) = 0.2500
The answer should be (1/2)*(1/2) = 1/4. To the accuracy with which we represented ln(2), this is it.
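In Python, using the full-precision ln(2) rather than the rounded 0.693, the result is exactly 1/4 up to floating-point rounding:

```python
import math

# Decay over two half-lives: kT = 2*ln(2), so exp(-kT) should be 1/4.
kT = 2 * math.log(2)
remaining = math.exp(-kT)
print(remaining)
```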

Let's now look at how k is related to t_(1/2). We have to satisfy the relation

e^(-k*t_(1/2)) = 1/2
We have to introduce the companion idea of logarithms, which are the inverse functions of exponentials. Basically, a number y is the natural logarithm of x if x = e^y. This is similar to the idea of logarithms to the base 10, where, for example, 3 is the logarithm to the base 10 of the number 1,000 (log_10(1000) = 3).

We take the natural log of both sides of the display equation above:

-k*t_(1/2) = ln(1/2) = -ln(2), so that k = ln(2)/t_(1/2) = 0.693.../t_(1/2)
In the above, I used the fact that the logarithm (natural, or base 10, or any base) of an inverse power of a number is the negative of that power. For example,

log_10(1/1000) = log_10(10^(-3)) = -3
Another couple of examples of decay: first, how much of the original radioactive potassium-40 on Earth has already decayed, since the Earth formed about 4.5 billion years ago? The half-life of 40K is 1.25 billion years. We have

fraction left = exp(-0.693*4.5/1.25) = exp(-2.50) = 0.082
That is, only about 8% is left, or 92% of it has decayed. That decay put a lot of heat into the Earth, which is still coming out at the surface, though the total of all the heat from radioactive decay, gravitational accretion, and other processes makes a heat flux of only about 0.06 watts per square meter. Second, what if the vitamin C in your refrigerated orange juice gets oxidized away by half in a week? How much did you lose in the first day? We have k = 0.693/(7 days) = 0.099 per day. In one day, the fraction left is

exp(-0.099) = 0.906

So, you lost about 9.4%.
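Both decay examples fit in one short Python sketch (the Earth's age is taken as roughly 4.5 billion years, and the vitamin C half-life as the assumed 7 days):

```python
import math

# Potassium-40: fraction remaining after ~4.5 billion years, t_half = 1.25 Gyr
frac_K = math.exp(-math.log(2) * 4.5 / 1.25)
# Vitamin C: fraction remaining after 1 day, with an assumed 7-day half-life
frac_C = math.exp(-math.log(2) * 1 / 7)
print(frac_K)        # about 0.08 of the K-40 is left
print(1 - frac_C)    # about 0.094 of the vitamin C is lost in the first day
```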

How are logarithms defined and calculated, anyhow?

The logarithm has a "simple" definition, as an integral:

ln(x) = ∫ from 1 to x of dy/y
You'll run into integrals in calculus, too. They are basically the area under a curve as it is graphed out. For example[2], the natural log of e, ln(e), is 1:

ln(e) = ∫ from 1 to e of dy/y = 1
That might not be obvious as the area, I admit.

The numerical value of ln(x) can be calculated by simple numerical integration, though it's not the most efficient way in computers and the like. Still, here's a cute calculation. Let's calculate ln(2) by breaking up the interval of integration between 1 and 2 into 10 subintervals, each of width 0.1. In each subinterval, we'll approximate 1/y as a simple constant, 1/y_mid, with y_mid as the value at the midpoint: 1.05 for the first interval, 1.15 for the second interval, and so on. We get the approximate value

ln(2) ≈ 0.1*(1/1.05 + 1/1.15 + 1/1.25 + ... + 1/1.95) = 0.69284

The true value is 0.69315. Our simple method is accurate to within 0.05%. The method gets laborious for large values of x.
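The midpoint calculation above, written out in Python:

```python
import math

# Midpoint rule for ln(2): 10 slabs of width 0.1 under 1/y, from y = 1 to 2.
approx = sum(0.1 / (1.05 + 0.1 * i) for i in range(10))
print(approx, math.log(2))
```

Increasing the number of slabs shrinks the error rapidly, since the midpoint rule's error falls off as the square of the slab width.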

It should be clear that ln(x) grows ever more slowly as x gets large. Just from the definition as the integral, the amount added to ln(x) at large x gets small, since 1/y gets small. This is to be expected: if ln(x) is the inverse of e^x, which grows very fast, then it should grow slowly.

What is not immediately apparent from their definitions is that ln(x) and e^x are exact inverses of each other. For example, if we use the power series expression for e^x, it's not clear that this is true:

ln(e^x) = ln(1 + x + x^2/2! + x^3/3! + ...) = x

especially if we write out ln(x) as the integral used earlier.

Probably the simplest way to grasp that these functions are inverses is to use the fact that the exponential is its own derivative,

d(e^x)/dx = e^x
Let's use this in the definition of ln(x), with e^x as the argument:

ln(e^x) = ∫ from 1 to e^x of dy/y
We will rewrite the right-hand side in a new variable. Let's write

y = e^z, so that dy = e^z dz
The limits of integration get changed to new values: e^z = 1 translates to z = 0, so the lower limit becomes 0. For the upper limit, e^z = e^x has z = x, so the upper limit becomes x.

Now we have

ln(e^x) = ∫ from 0 to x of (e^z dz)/e^z = ∫ from 0 to x of dz = x
I used some further knowledge about integrals here, such as ∫ from 0 to x of dz = x, which could take a little explaining, as could the legitimacy of changing variables in the integral and making the corresponding changes in the limits of integration. I'll skip these here. In any case, the natural logarithm and the exponential are inverses of each other: ln(e^x) = x and e^(ln(x)) = x.
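A round-trip check in Python (floating-point rounding aside):

```python
import math

# ln and exp should undo each other: ln(e^x) = x and e^(ln x) = x.
for x in (0.5, 1.0, 3.0):
    print(x, math.log(math.exp(x)), math.exp(math.log(x)))
```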

3. In passing: the exponential of complex numbers, e^(a+ib), is even cooler, with its geometric and other ramifications. Nobel Laureate Richard Feynman called Euler's formula "the most remarkable formula in mathematics"; setting its angle to π gives the celebrated identity

e^(iπ) + 1 = 0
as it joins two fundamental constants, e and π; the equally fundamental imaginary unit, i; the multiplicative identity, 1; the "+" sign; the equals sign; and the additive identity, zero (a concept that took millennia to be conceived, unless you count the Babylonians, who used an equivalent character but whose use of it was lost for about 2,000 years). It also evokes the concept of the exponential on the unit circle as representing rotation.
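Python's complex-math module can verify Euler's identity to machine precision (a quick sketch):

```python
import cmath

# e^(i*pi) + 1 should be zero, up to floating-point rounding.
z = cmath.exp(1j * cmath.pi) + 1
print(abs(z))   # a residue on the order of 1e-16
```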