# 5: Power Series - Mathematics


A power series (in one variable) is an infinite series whose terms involve successive powers of the variable. Any polynomial can be expressed as a power series around any center c, although all but finitely many of its coefficients will be zero, since a power series has infinitely many terms by definition. One can view power series as "polynomials of infinite degree," although power series are not polynomials. The content in this Textmap's chapter is complemented by Guichard's Calculus Textmap.

• 5.1: Prelude to Power Series
Power series can be used to define functions and they allow us to write functions that cannot be expressed any other way than as “infinite polynomials.” An infinite series can also be truncated, resulting in a finite polynomial that we can use to approximate functional values. Representing functions using power series allows us to solve mathematical problems that cannot be solved with other techniques.
• 5.2: Power Series and Functions
A power series is a type of series with terms involving a variable. More specifically, if the variable is x, then all the terms of the series involve powers of x. As a result, a power series can be thought of as an infinite polynomial. Power series are used to represent common functions and also to define new functions. In this section we define power series and show how to determine when a power series converges and when it diverges. We also show how to represent certain functions using power series.
• 5.3: Properties of Power Series
Power series can be combined, differentiated, or integrated to create new power series. This capability is particularly useful for a couple of reasons. First, it allows us to find power series representations for certain elementary functions, by writing those functions in terms of functions with known power series. Second, it allows us to define new functions that cannot be written in terms of elementary functions. This capability is particularly useful for solving differential equations.
• 5.4: Taylor and Maclaurin Series
Here we discuss power series representations for other types of functions. In particular, we address the following questions: Which functions can be represented by power series, and how do we find such representations? If we can find a power series representation for a particular function f and the series converges on some interval, how do we prove that the series actually converges to f?
• 5.5: Working with Taylor Series
In this section we show how to use those Taylor series to derive Taylor series for other functions. We then present two common applications of power series. First, we show how power series can be used to solve differential equations. Second, we show how power series can be used to evaluate integrals when the antiderivative of the integrand cannot be expressed in terms of elementary functions.
• 5.E: Power Series (Exercises)
These are homework exercises to accompany OpenStax's "Calculus" Textmap.

Thumbnail: The graph shows the function \(y=\sin x\) and the Maclaurin polynomials \(p_1, p_3\), and \(p_5\). Image used with permission (CC BY-SA 3.0; OpenStax).


The homework assignments are meant for your independent practice. Some of the homework problems may be discussed at recitations.

1/10: Homework 1 (HW1): #22 p. 360 ##18, 22, 32, 36 p. 367 ##26, 30, 34 p. 375 ##20, 22 p. 381

1/17: HW2: ##10, 18 p. 390 ##28, 30 p. 391 ##4, 6, 38 p. 402 #2 p. 406 ##8, 12 p. 409

1/30: HW3: ##4, 6 p. 418 ##4, 6 p. 425 ##8, 10, 12 p. 432

2/6: HW4: ##10, 16 p. 438 ##8, 14, 18 pp. 442-443 ##8, 14 p. 450

2/14: HW5: #12 p. 457 ##2, 4, 6, 8 pp. 462-463 ##16, 18 p. 469

2/23: HW6: ##4, 8, 12, 16 p. 624 ##4, 10, 12, 14 p. 629 (for the last two problems, first read Example 4 on p. 629)

3/13: HW7: ##4, 8, 10 p. 632 ##2, 14 p. 636 ##6,18 p. 640 #30 p. 651 ##12, 16 p. 663.

Final review problems: ##26, 30 p.391 #44 p. 402 #8 p.425 ##8, 14 p.438 ##2, 12 p. 450 #12 p. 457 #10 p. 468 ##2, 4, 16 p. 629 ##12, 16 p. 659 ##12, 14 p. 663.

## CLP-2 Integral Calculus

where \(x\) is some real number. As we have seen (back in Example 3.2.4 and Lemma 3.2.5), for \(|x| < 1\) this series converges to a limit that varies with \(x\), while for \(|x| \geq 1\) the series diverges. Consequently we can consider this series to be a function of \(x\)

Furthermore (also from Example 3.2.4 and Lemma 3.2.5) we know what the function is.

Hence we can consider the series \(\sum_{n=0}^\infty x^n\) as a new way of representing the function \(\frac{1}{1-x}\) when \(|x| < 1\). This series is an example of a power series.

Of course, representing a function as simple as \(\frac{1}{1-x}\) by a series doesn't seem like it is going to make life easier. However, the idea of representing a function by a series turns out to be extremely helpful. Power series turn out to be very robust mathematical objects and interact very nicely not only with standard arithmetic operations, but also with differentiation and integration (see Theorem 3.5.13). This means, for example, that

and in a very similar way

We are hiding some mathematics under the word “just” in the above, but you can see that once we have a power series representation of a function, differentiation and integration become very straightforward.
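The geometric series itself gives a quick numerical illustration of the convergence claim above. Below is a small sketch (the helper name `geometric_partial_sum` is ours, not from the text) comparing partial sums of \(\sum x^n\) with \(\frac{1}{1-x}\):

```python
def geometric_partial_sum(x, terms):
    """Sum of x**n for n = 0, 1, ..., terms - 1."""
    total = 0.0
    for n in range(terms):
        total += x ** n
    return total

# For |x| < 1 the partial sums approach 1 / (1 - x) ...
approx = geometric_partial_sum(0.5, 50)
exact = 1.0 / (1.0 - 0.5)
# ... while for |x| >= 1 they grow without bound.
```

With `x = 0.5`, fifty terms already agree with \(\frac{1}{1-x} = 2\) to machine precision, while the same partial sums at `x = 1.5` blow up, matching the divergence statement.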

So we should set as our goal for this section the development of machinery to define and understand power series. This will allow us to answer questions like the following. (Recall that \(n! = 1 \times 2 \times 3 \times \cdots \times n\) is called "\(n\) factorial"; by convention, \(0! = 1\).)

Our starting point (now that we have equipped ourselves with basic ideas about series), is the definition of power series.

## How to calculate the nth power of a number?

The nth power of a base, let's say "y", means y multiplied by itself n times. For example, the fifth power of y is y·y·y·y·y.

Some sample values for the nth power calculation are given in the following table.

| Expression | Value |
| --- | --- |
| 0.1 to the power of 3 | 0.00100 |
| 0.5 to the power of 3 | 0.12500 |
| 0.5 to the power of 4 | 0.06250 |
| 1.2 to the power of 4 | 2.07360 |
| 1.02 to the 10th power | 1.21899 |
| 1.03 to the 10th power | 1.34392 |
| 1.2 to the power of 5 | 2.48832 |
| 1.4 to the 10th power | 28.92547 |
| 1.05 to the power of 5 | 1.27628 |
| 1.05 to the 10th power | 1.62889 |
| 1.06 to the 10th power | 1.79085 |
| 2 to the 3rd power | 8 |
| 2 to the power of 3 | 8 |
| 2 raised to the power of 4 | 16 |
| 2 to the power of 6 | 64 |
| 2 to the power of 7 | 128 |
| 2 to the 9th power | 512 |
| 2 to the tenth power | 1024 |
| 2 to the 15th power | 32768 |
| 2 to the 10th power | 1024 |
| 2 to the power of 28 | 268435456 |
| 3 to the power of 2 | 9 |
| 3 to the 3rd power | 27 |
| 3 to the 4th power | 81 |
| 3 to the 8th power | 6561 |
| 3 to the 9th power | 19683 |
| 3 to the 12th power | 531441 |
| 3 to what power equals 81 | 4 |
| 4 to the power of 3 | 64 |
| 4 to the power of 4 | 256 |
| 4 to the power of 7 | 16384 |
| 7 to the power of 3 | 343 |
| 12 to the 2nd power | 144 |
| 2.5 to the power of 3 | 15.625 |
| 12 to the power of 3 | 1728 |
| 10 exponent 3 | 1000 |
| 24 to the second power (24^2) | 576 |
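The entries above can be reproduced by repeated multiplication; here is a minimal sketch (the helper name `nth_power` is ours):

```python
def nth_power(base, n):
    """Multiply base by itself n times (n a nonnegative integer)."""
    result = 1
    for _ in range(n):
        result *= base
    return result

# e.g. nth_power(2, 10) reproduces the "2 to the 10th power" row.
```

Rounding the floating-point results to five decimal places matches the decimal rows of the table.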

## 5: Power Series - Mathematics

Before we get into finding series solutions to differential equations we need to determine when we can find series solutions to differential equations. So, let’s start with the differential equation,

\[p\left( x \right)y'' + q\left( x \right)y' + r\left( x \right)y = 0\]

This time we really do mean nonconstant coefficients. To this point we’ve only dealt with constant coefficients. However, with series solutions we can now have nonconstant coefficient differential equations. Also, in order to make the problems a little nicer we will be dealing only with polynomial coefficients.

Now, we say that \(x = x_0\) is an ordinary point if both

\[\frac{q\left( x \right)}{p\left( x \right)}\hspace{0.25in}\frac{r\left( x \right)}{p\left( x \right)}\]

are analytic at \(x = x_0\). That is to say that these two quantities have Taylor series around \(x = x_0\). We are going to be dealing only with coefficients that are polynomials, so this will be equivalent to saying that \(p\left( {x_0} \right) \ne 0\).

If a point is not an ordinary point we call it a singular point.

The basic idea to finding a series solution to a differential equation is to assume that we can write the solution as a power series in the form

\[y\left( x \right) = \sum\limits_{n = 0}^\infty {{a_n}{{\left( {x - {x_0}} \right)}^n}}\]

and then try to determine what the \(a_n\)'s need to be. We will only be able to do this if the point \(x = x_0\) is an ordinary point. We will usually say that this is a series solution around \(x = x_0\).

Let’s start with a very basic example of this. In fact, it will be so basic that we will have constant coefficients. This will allow us to check that we get the correct solution.

Notice that in this case (p(x)=1) and so every point is an ordinary point. We will be looking for a solution in the form,

We will need to plug this into our differential equation so we’ll need to find a couple of derivatives.

Recall from the review section on power series that we can start these at \(n=0\) if we need to; however, it's almost always best to start them where we have here. If it turns out that it would have been easier to start them at \(n=0\), we can easily fix that up when the time comes around.

So, plug these into our differential equation. Doing this gives,

The next step is to combine everything into a single series. To do this requires that we get both series starting at the same point and that the exponent on the (x) be the same in both series.

We will always start this by getting the exponent on the (x) to be the same. It is usually best to get the exponent to be an (n). The second series already has the proper exponent and the first series will need to be shifted down by 2 in order to get the exponent up to an (n). If you don’t recall how to do this take a quick look at the first review section where we did several of these types of problems.

Shifting the first power series gives us,

Notice that in the process of the shift we also got both series starting at the same place. This won’t always happen, but when it does we’ll take it. We can now add up the two series. This gives,

Now recalling the fact from the power series review section we know that if we have a power series that is zero for all (x) (as this is) then all the coefficients must have been zero to start with. This gives us the following,

This is called the recurrence relation, and notice that we included the values of \(n\) for which it must be true. We will always want to include the values of \(n\) for which the recurrence relation is true, since they won't always start at \(n = 0\) as they did in this case.

Now let's recall what we were after in the first place. We wanted to find a series solution to the differential equation. In order to do this, we needed to determine the values of the \(a_n\)'s. We are almost to the point where we can do that. The recurrence relation contains two different \(a_n\)'s, so we can't just solve it for \(a_n\) and get a formula that will work for all \(n\). We can, however, use it to determine what all but two of the \(a_n\)'s are.

To do this we first solve the recurrence relation for the \(a_n\) that has the largest subscript. Doing this gives,

Now, at this point we just need to start plugging in some value of (n) and see what happens,

\(n=0\): \(a_2 = -\dfrac{a_0}{\left( 2 \right)\left( 1 \right)}\)

\(n=1\): \(a_3 = -\dfrac{a_1}{\left( 3 \right)\left( 2 \right)}\)

\(n=2\): \(a_4 = -\dfrac{a_2}{\left( 4 \right)\left( 3 \right)} = \dfrac{a_0}{4!}\)

\(n=3\): \(a_5 = -\dfrac{a_3}{\left( 5 \right)\left( 4 \right)} = \dfrac{a_1}{5!}\)

\(n=4\): \(a_6 = -\dfrac{a_4}{\left( 6 \right)\left( 5 \right)} = -\dfrac{a_0}{6!}\)

\(n=5\): \(a_7 = -\dfrac{a_5}{\left( 7 \right)\left( 6 \right)} = -\dfrac{a_1}{7!}\)

\(\vdots\)

\(a_{2k} = \dfrac{(-1)^k a_0}{(2k)!}, \quad k = 1, 2, \ldots\)

\(a_{2k+1} = \dfrac{(-1)^k a_1}{(2k+1)!}, \quad k = 1, 2, \ldots\)

Notice that at each step we always plugged back in the previous answer, so that when the subscript was even we could always write \(a_n\) in terms of \(a_0\), and when the subscript was odd we could always write \(a_n\) in terms of \(a_1\). Also notice that, in this case, we were able to find a general formula for \(a_n\)'s with even subscripts and \(a_n\)'s with odd subscripts. This won't always be possible to do.

There's one more thing to notice here. The formulas that we developed were only for \(k = 1, 2, \ldots\); however, in this case again, they will also work for \(k = 0\). Again, this is something that won't always work, but does here.

Do not get excited about the fact that we don't know what \(a_0\) and \(a_1\) are. As you will see, we actually need these to be in the problem to get the correct solution.
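Since the solution of this example turns out to be \(c_1\cos x + c_2\sin x\), the equation being solved is evidently \(y'' + y = 0\), with recurrence \(a_{n+2} = -a_n/\big((n+2)(n+1)\big)\). The recurrence is easy to iterate numerically; here is a sketch (the helper name `series_coefficients` is ours):

```python
def series_coefficients(a0, a1, count):
    """Iterate a_{n+2} = -a_n / ((n+2)(n+1)), the recurrence for y'' + y = 0,
    producing the coefficients a_0 .. a_{count-1}."""
    a = [0.0] * count
    a[0], a[1] = a0, a1
    for n in range(count - 2):
        a[n + 2] = -a[n] / ((n + 2) * (n + 1))
    return a

# a0 = 1, a1 = 0 reproduces the Taylor coefficients of cos x: (-1)^k / (2k)!
cos_coeffs = series_coefficients(1.0, 0.0, 8)
```

Running it with `a0 = 0, a1 = 1` instead gives the Taylor coefficients of \(\sin x\), matching the general formulas above.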

Now that we've got formulas for the \(a_n\)'s, let's get a solution. The first thing that we'll do is write out the solution with a couple of the \(a_n\)'s plugged in.

The next step is to collect all the terms with the same coefficient in them and then factor out that coefficient.

In the last step we also used the fact that we knew what the general formula was to write both portions as a power series. This is also our solution. We are done.

Before working another problem let’s take a look at the solution to the previous example. First, we started out by saying that we wanted a series solution of the form,

and we didn’t get that. We got a solution that contained two different power series. Also, each of the solutions had an unknown constant in them. This is not a problem. In fact, it’s what we want to have happen. From our work with second order constant coefficient differential equations we know that the solution to the differential equation in the last example is,

\[y\left( x \right) = {c_1}\cos \left( x \right) + {c_2}\sin \left( x \right)\]

Solutions to second order differential equations consist of two separate functions each with an unknown constant in front of them that are found by applying any initial conditions. So, the form of our solution in the last example is exactly what we want to get. Also recall that the following Taylor series,

Recalling these we very quickly see that what we got from the series solution method was exactly the solution we got from first principles, with the exception that the functions were the Taylor series for the actual functions instead of the actual functions themselves.

Now let’s work an example with nonconstant coefficients since that is where series solutions are most useful.

As with the first example (p(x)=1) and so again for this differential equation every point is an ordinary point. Now we’ll start this one out just as we did the first example. Let’s write down the form of the solution and get its derivatives.

Plugging into the differential equation gives,

Unlike the first example we first need to get all the coefficients moved into the series.

Now we will need to shift the first series down by 2 and the second series up by 1 to get both of the series in terms of (x^n).

Next, we need to get the two series starting at the same value of (n). The only way to do that for this problem is to strip out the (n=0) term.

We now need to set all the coefficients equal to zero. We will need to be careful with this, however. The \(n=0\) coefficient is in front of the series and the \(n = 1, 2, 3, \ldots\) coefficients are all in the series. So, setting each coefficient equal to zero gives,

Solving the first as well as the recurrence relation gives,

Now we need to start plugging in values of (n).

\(a_2 = 0\)

\(n=1\): \(a_3 = \dfrac{a_0}{\left( 3 \right)\left( 2 \right)}\)

\(n=2\): \(a_4 = \dfrac{a_1}{\left( 4 \right)\left( 3 \right)}\)

\(n=3\): \(a_5 = \dfrac{a_2}{\left( 5 \right)\left( 4 \right)} = 0\)

\(n=4\): \(a_6 = \dfrac{a_3}{\left( 6 \right)\left( 5 \right)} = \dfrac{a_0}{\left( 2 \right)\left( 3 \right)\left( 5 \right)\left( 6 \right)}\)

\(n=5\): \(a_7 = \dfrac{a_4}{\left( 7 \right)\left( 6 \right)} = \dfrac{a_1}{\left( 3 \right)\left( 4 \right)\left( 6 \right)\left( 7 \right)}\)

\(n=6\): \(a_8 = \dfrac{a_5}{\left( 8 \right)\left( 7 \right)} = 0\)

\(\vdots\)

\(a_{3k} = \dfrac{a_0}{\left( 2 \right)\left( 3 \right)\cdots\left( {3k - 1} \right)\left( {3k} \right)}, \quad k = 1, 2, 3, \ldots\)

\(a_{3k+1} = \dfrac{a_1}{\left( 3 \right)\left( 4 \right)\cdots\left( {3k} \right)\left( {3k + 1} \right)}, \quad k = 1, 2, 3, \ldots\)

\(a_{3k+2} = 0, \quad k = 0, 1, 2, \ldots\)

There are a couple of things to note about these coefficients. First, every third coefficient is zero. Next, the formulas here are somewhat unpleasant and not all that easy to see the first time around. Finally, these formulas will not work for (k=0) unlike the first example.

Again, collect up the terms that contain the same coefficient, factor the coefficient out and write the results as a new series,

We couldn’t start our series at (k=0) this time since the general term doesn’t hold for (k=0).
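Assuming (as the index shifts above suggest) that this example is \(y'' - xy = 0\), the recurrence is \(a_2 = 0\) and \(a_{n+2} = a_{n-1}/\big((n+2)(n+1)\big)\) for \(n \ge 1\). A numerical sketch (the helper name `airy_coefficients` is ours, and the stated equation is our reconstruction):

```python
def airy_coefficients(a0, a1, count):
    """Iterate a_2 = 0 and a_{n+2} = a_{n-1} / ((n+2)(n+1)) for n >= 1,
    assuming the differential equation is y'' - x y = 0."""
    a = [0.0] * count          # a[2] stays at the required value 0.0
    a[0], a[1] = a0, a1
    for n in range(1, count - 2):
        a[n + 2] = a[n - 1] / ((n + 2) * (n + 1))
    return a

# Every third coefficient (a_2, a_5, a_8, ...) stays zero,
# matching the pattern noted in the text.
```

The computed values reproduce the general formulas: for instance \(a_6 = a_0/180\) and \(a_7 = a_1/504\).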

Now, we need to work an example in which we use a point other than \(x=0\). In fact, let's just take the previous example and rework it for a different value of \(x_0\). We're also going to need to change up the instructions a little for this example.

Unfortunately for us there is nothing from the first example that can be reused here. Changing to \(x_0 = -2\) completely changes the problem. In this case our solution will be,

The derivatives of the solution are,

Plug these into the differential equation.

We now run into our first real difference between this example and the previous example. In this case we can’t just multiply the (x) into the second series since in order to combine with the series it must be (x+2). Therefore, we will first need to modify the coefficient of the second series before multiplying it into the series.

We now have three series to work with. This will often occur in these kinds of problems. Now we will need to shift the first series down by 2 and the second series up by 1 the get common exponents in all the series.

In order to combine the series we will need to strip out the (n=0) terms from both the first and third series.

Setting coefficients equal to zero gives,

We now need to solve both of these. In the first case there are two options: we can solve for \(a_2\) or we can solve for \(a_0\). Out of habit I'll solve for \(a_0\). In the recurrence relation we'll solve for the term with the largest subscript as in previous examples.

Notice that in this example we won’t be having every third term drop out as we did in the previous example.

At this point we'll also acknowledge that the instructions for this problem are different as well. We aren't going to get a general formula for the \(a_n\)'s this time, so we'll have to be satisfied with just getting the first couple of terms for each portion of the solution. This is often the case for series solutions. Getting general formulas for the \(a_n\)'s is the exception rather than the rule in these kinds of problems.

To get the first four terms we'll just start plugging in terms until we've got the required number of terms. Note that we will already be starting with an \(a_0\) and an \(a_1\) from the first two terms of the solution, so all we will need are three more terms with an \(a_0\) in them and three more terms with an \(a_1\) in them.

We've got two \(a_0\)'s and one \(a_1\).

We've got three \(a_0\)'s and two \(a_1\)'s.

We've got four \(a_0\)'s and three \(a_1\)'s. We've got all the \(a_0\)'s that we need, but we still need one more \(a_1\). So, it looks like we'll need to do one more term.

We've got five \(a_0\)'s and four \(a_1\)'s. We've got all the terms that we need.

Now, all that we need to do is plug into our solution.

Finally collect all the terms up with the same coefficient and factor out the coefficient to get,

That's the solution for this problem as far as we're concerned. Notice that this solution looks nothing like the solution to the previous example. It's the same differential equation, but changing \(x_0\) completely changed the solution.

Let’s work one final problem.

\[\left( {{x^2} + 1} \right)y'' - 4xy' + 6y = 0\]

We finally have a differential equation that doesn’t have a constant coefficient for the second derivative.

\[p\left( x \right) = {x^2} + 1\hspace{0.25in}p\left( 0 \right) = 1 \ne 0\]

So \(x_0 = 0\) is an ordinary point for this differential equation. We first need the solution and its derivatives,

Plug these into the differential equation.

Now, break up the first term into two so we can multiply the coefficient into the series and multiply the coefficients of the second and third series in as well.

We will only need to shift the second series down by two to get all the exponents the same in all the series.

At this point we could strip out some terms to get all the series starting at (n=2), but that’s actually more work than is needed. Let’s instead note that we could start the third series at (n=0) if we wanted to because that term is just zero. Likewise, the terms in the first series are zero for both (n=1) and (n=0) and so we could start that series at (n=0). If we do this all the series will now start at (n=0) and we can add them up without stripping terms out of any series.

Now set coefficients equal to zero.

Now, we plug in values of (n).

Now, from this point on all the coefficients are zero. In this case both of the series in the solution will terminate. This won’t always happen, and often only one of them will terminate.

## Power Series Math Project

Hello all! I've been trying to figure out this math project for about three weeks now, and everything I try is actually garbage. I'm in Calc III and we're working on Power Series for the most part now. Here is the part of the project I have literally made no headway with!! I've found other examples online going the opposite direction of this one (That's the style of the project. My Prof took famous proofs and switched them around and we have to solve them another way) but it's not been helpful for me.

I think even if I could just get some help with the first part then I'd have an idea of how to continue!! Thanks to anyone who takes the time even to read this!
We had defined the Fibonacci sequence recursively, but no formula was given for its general term. This part of the project will achieve that goal. Consider the function

\(f(x) = \dfrac{x}{1 - x - x^2}\)

1. Suppose that \(x = \sum_{k=0}^{\infty} a_k x^k\). Justify that \(a_1 = 1\) and \(a_k = 0\) for all \(k \ne 1\).

2. Show that if \(f(x) = \sum_{n=1}^{\infty} f_n x^n\), then \(\{f_n\}_{n=1}^{\infty}\) is the Fibonacci sequence. That is, show that \(f_1 = f_2 = 1\) and \(f_n = f_{n-1} + f_{n-2}\) for \(n \ge 3\).

3. Using partial fractions, find a representation of \(f(x)\) in terms of a power series.

4. Deduce from the steps above a formula for \(f_n\) for all \(n \ge 1\).


#### Stapel

##### Super Moderator

We had deﬁned the Fibonacci sequence recursively, but no formula was given for its general term. This part of the project will achieve that goal.

Consider the function \(\displaystyle f(x) = \dfrac{x}{1 - x - x^2}\)

Have I correctly guessed the function above? If not, please reply with clarification.

I'm sorry, but I don't know what this means. I suspect that you're doing some sort of summation, but it looks like your equation is as follows:

. which makes no sense. Please reply with clarification. (You can use standard web-safe math formatting, as explained here.) Thank you!
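For readers following this thread: the closed form the project is driving at in step 4 is the standard Binet-style formula that falls out of the partial-fraction step, and it can be checked against the recurrence numerically. This is a sketch with our own helper names, not part of the assignment:

```python
from math import sqrt

def fib_recursive(n):
    """Fibonacci by the recurrence f_1 = f_2 = 1, f_n = f_{n-1} + f_{n-2}."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def fib_closed_form(n):
    """Binet-style formula arising from partial fractions on x / (1 - x - x^2)."""
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi ** n - psi ** n) / sqrt(5))
```

The two agree for every n small enough that floating-point error stays below one half, which is plenty to sanity-check the derivation.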

## Contents

The term power (Latin: potentia, potestas, dignitas) is a mistranslation [4] [5] of the ancient Greek δύναμις (dúnamis, here: "amplification" [4] ) used by the Greek mathematician Euclid for the square of a line, [6] following Hippocrates of Chios. [7] Archimedes discovered and proved the law of exponents, 10 a ⋅ 10 b = 10 a+b , necessary to manipulate powers of 10 . [8] [ better source needed ] In the 9th century, the Persian mathematician Muhammad ibn Mūsā al-Khwārizmī used the terms مَال (māl, "possessions", "property") for a square—the Muslims, "like most mathematicians of those and earlier times, thought of a squared number as a depiction of an area, especially of land, hence property" [9] —and كَعْبَة (kaʿbah, "cube") for a cube, which later Islamic mathematicians represented in mathematical notation as the letters mīm (m) and kāf (k), respectively, by the 15th century, as seen in the work of Abū al-Hasan ibn Alī al-Qalasādī. [10]

In the late 16th century, Jost Bürgi used Roman numerals for exponents. [11]

Nicolas Chuquet used a form of exponential notation in the 15th century, which was later used by Henricus Grammateus and Michael Stifel in the 16th century. The word exponent was coined in 1544 by Michael Stifel. [12] [13] Samuel Jeake introduced the term indices in 1696. [6] In the 16th century, Robert Recorde used the terms square, cube, zenzizenzic (fourth power), sursolid (fifth), zenzicube (sixth), second sursolid (seventh), and zenzizenzizenzic (eighth). [9] Biquadrate has been used to refer to the fourth power as well.

Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I. [14]

Some mathematicians (such as Isaac Newton) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as ax + bxx + cx^3 + d.

Another historical synonym, involution, is now rare [15] and should not be confused with its more common meaning.

Leonhard Euler wrote: "consider exponentials or powers in which the exponent itself is a variable. It is clear that quantities of this kind are not algebraic functions, since in those the exponents must be constant." [16]

With this introduction of transcendental functions, Euler laid the foundation for the modern introduction of natural logarithm—as the inverse function for the natural exponential function, f(x) = e x .

The expression b^2 = b⋅b is called "the square of b" or "b squared", because the area of a square with side-length b is b^2.

Similarly, the expression b^3 = b⋅b⋅b is called "the cube of b" or "b cubed", because the volume of a cube with side-length b is b^3.

When it is a positive integer, the exponent indicates how many copies of the base are multiplied together. For example, 3^5 = 3 ⋅ 3 ⋅ 3 ⋅ 3 ⋅ 3 = 243. The base 3 appears 5 times in the multiplication, because the exponent is 5. Here, 243 is the 5th power of 3, or 3 raised to the 5th power.

The word "raised" is usually omitted, and sometimes "power" as well, so 3^5 can be simply read "3 to the 5th", or "3 to the 5". Therefore, the exponentiation b^n can be expressed as "b to the power of n", "b to the nth power", "b to the nth", or most briefly as "b to the n".

A formula with nested exponentiation, such as 3^5^7 (which means 3^(5^7) and not (3^5)^7), is called a tower of powers, or simply a tower.

The exponentiation operation with integer exponents may be defined directly from elementary arithmetic operations.

### Positive exponents Edit

Powers with positive integer exponents may be defined by the base case [17] b^1 = b and the recurrence b^(n+1) = b^n ⋅ b.

The associativity of multiplication implies that for any positive integers m and n, b^(m+n) = b^m ⋅ b^n.

### Zero exponent Edit

Any nonzero number raised to the 0 power is 1: [18] [2] b^0 = 1.

One interpretation of such a power is as an empty product.

The case of 0 0 is more complicated, and the choice of whether to assign it a value and what value to assign may depend on context. For more details, see Zero to the power of zero.
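A direct implementation of this definition (the function name `power` is ours): repeated multiplication starting from the empty product, so the zero exponent falls out naturally as 1:

```python
def power(b, n):
    """b raised to a nonnegative integer n by repeated multiplication.
    For n == 0 the loop body never runs, leaving the empty product 1."""
    result = 1
    for _ in range(n):
        result *= b
    return result
```

The identity b^(m+n) = b^m ⋅ b^n from the previous subsection holds for this implementation, since running m multiplications and then n more is the same as running m + n.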

### Negative exponents Edit

The following identity holds for any integer n and nonzero b: b^(−n) = 1 / b^n.

Raising 0 to a negative exponent is undefined, but in some circumstances, it may be interpreted as infinity ( ∞ ).

The identity above may be derived through a definition aimed at extending the range of exponents to negative integers.

For non-zero b and positive n, the recurrence relation above can be rewritten as b^(n−1) = b^n / b.

By defining this relation as valid for all integer n and nonzero b, it follows that b^0 = b^1 / b = 1,

and more generally, for any nonzero b and any nonnegative integer n, b^(−n) = 1 / b^n.

This is then readily shown to be true for every integer n .

### Identities and properties Edit

The following identities hold for all integer exponents, provided that the base is non-zero: [2] b^(m+n) = b^m ⋅ b^n, (b^m)^n = b^(m⋅n), and (b⋅c)^n = b^n ⋅ c^n.

• Exponentiation is not commutative. For example, 2^3 = 8 ≠ 3^2 = 9.
• Exponentiation is not associative. For example, (2^3)^4 = 8^4 = 4096, whereas 2^(3^4) = 2^81 = 2 417 851 639 229 258 349 412 352. Without parentheses, the conventional order of operations for serial exponentiation in superscript notation is top-down (or right-associative), not bottom-up [19][20][21][22] (or left-associative). That is, b^p^q = b^(p^q),

which, in general, is different from (b^p)^q.
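Python's `**` operator follows the same top-down (right-associative) convention, which makes for a quick check:

```python
# Without parentheses, 2 ** 3 ** 2 is parsed top-down as 2 ** (3 ** 2).
top_down = 2 ** 3 ** 2      # 2 ** 9
bottom_up = (2 ** 3) ** 2   # 8 ** 2
```

The two results differ (512 versus 64), illustrating why the associativity convention matters.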

### Powers of a sum Edit

The powers of a sum can normally be computed from the powers of the summands by the binomial formula

However, this formula is true only if the summands commute (i.e. that ab = ba ), which is implied if they belong to a structure that is commutative. Otherwise, if a and b are, say, square matrices of the same size, this formula cannot be used. It follows that in computer algebra, many algorithms involving integer exponents must be changed when the exponentiation bases do not commute. Some general purpose computer algebra systems use a different notation (sometimes ^^ instead of ^ ) for exponentiation with non-commuting bases, which is then called non-commutative exponentiation.

### Combinatorial interpretation Edit

For nonnegative integers n and m , the value of n m is the number of functions from a set of m elements to a set of n elements (see cardinal exponentiation). Such functions can be represented as m -tuples from an n -element set (or as m -letter words from an n -letter alphabet). Some examples for particular values of m and n are given in the following table:

| n^m | The n^m possible m-tuples of elements from the set {1, ..., n} |
| --- | --- |
| 0^5 = 0 | none |
| 1^4 = 1 | (1,1,1,1) |
| 2^3 = 8 | (1,1,1), (1,1,2), (1,2,1), (1,2,2), (2,1,1), (2,1,2), (2,2,1), (2,2,2) |
| 3^2 = 9 | (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3) |
| 4^1 = 4 | (1), (2), (3), (4) |
| 5^0 = 1 | () |
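The table can be verified by brute-force enumeration of the m-tuples, for instance with `itertools.product` (the function name `count_functions` is ours):

```python
from itertools import product

def count_functions(m, n):
    """Count the functions from an m-element set to an n-element set by
    enumerating all m-tuples over {1, ..., n}."""
    return sum(1 for _ in product(range(1, n + 1), repeat=m))
```

Note the edge cases match the table: `repeat=0` yields exactly one empty tuple (n^0 = 1), while an empty base set yields no tuples at all for positive m (0^m = 0).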

### Particular bases Edit

#### Powers of ten Edit

In the base ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, 10^3 = 1000 and 10^(−4) = 0.0001.

Exponentiation with base 10 is used in scientific notation to denote large or small numbers. For instance, 299 792 458 m/s (the speed of light in vacuum, in metres per second) can be written as 2.997 924 58 × 10^8 m/s and then approximated as 2.998 × 10^8 m/s.

SI prefixes based on powers of 10 are also used to describe small or large quantities. For example, the prefix kilo means 10^3 = 1000, so a kilometre is 1000 m.

#### Powers of two

The first negative powers of 2 are commonly used and have special names: 2^(−1) is a half and 2^(−2) is a quarter.

Powers of 2 appear in set theory, since a set with n members has a power set, the set of all of its subsets, which has 2^n members.

Integer powers of 2 are important in computer science. The positive integer powers 2^n give the number of possible values for an n-bit integer binary number; for example, a byte may take 2^8 = 256 different values. The binary number system expresses any number as a sum of powers of 2, and denotes it as a sequence of 0 and 1, separated by a binary point, where 1 indicates a power of 2 that appears in the sum; the exponent is determined by the place of this 1: the nonnegative exponents are the rank of the 1 on the left of the point (starting from 0), and the negative exponents are determined by the rank on the right of the point.
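The correspondence between the 1-bits of a binary numeral and the powers of 2 in the sum can be sketched as follows (Python; the helper name `powers_of_two_in` is ours):

```python
def powers_of_two_in(n):
    """Decompose a positive integer into the distinct powers of 2
    whose sum is n, by reading off the 1-bits of its binary form."""
    return [2**i for i, bit in enumerate(reversed(bin(n)[2:])) if bit == "1"]

assert powers_of_two_in(100) == [4, 32, 64]    # 100 = 2^2 + 2^5 + 2^6
assert sum(powers_of_two_in(255)) == 2**8 - 1  # a byte's maximum value
```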

#### Powers of one

The powers of one are all one: 1^n = 1.

#### Powers of zero

If the exponent n is positive (n > 0), the nth power of zero is zero: 0^n = 0.

If the exponent n is negative (n < 0), the nth power of zero 0^n is undefined, because it must equal 1/0^(−n) with −n > 0, and this would be 1/0 according to the above.

The expression 0 0 is either defined as 1, or it is left undefined (see Zero to the power of zero).

#### Powers of negative one

If n is an even integer, then (−1)^n = 1.

If n is an odd integer, then (−1)^n = −1.

Because of this, powers of −1 are useful for expressing alternating sequences. For a similar discussion of powers of the complex number i , see § Powers of complex numbers.

### Large exponents

The limit of a sequence of powers of a number greater than one diverges; in other words, the sequence grows without bound:

b^n → ∞ as n → ∞ when b > 1

This can be read as "b to the power of n tends to +∞ as n tends to infinity when b is greater than one".

Powers of a number with absolute value less than one tend to zero:

b^n → 0 as n → ∞ when |b| < 1

Any power of one is always one:

b^n = 1 for all n if b = 1

Powers of –1 alternate between 1 and –1 as n alternates between even and odd, and thus do not tend to any limit as n grows.

If b < −1, b^n alternates between larger and larger positive and negative numbers as n alternates between even and odd, and thus does not tend to any limit as n grows.

If the exponentiated number varies while tending to 1 as the exponent tends to infinity, then the limit is not necessarily one of those above. A particularly important case is

(1 + 1/n)^n → e as n → ∞

Other limits, in particular those of expressions that take on an indeterminate form, are described in § Limits of powers below.

### List of whole-number powers

| n | n^2 | n^3 | n^4 | n^5 | n^6 | n^7 | n^8 | n^9 | n^10 |
|---|---|---|---|---|---|---|---|---|---|
| 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024 |
| 3 | 9 | 27 | 81 | 243 | 729 | 2187 | 6561 | 19 683 | 59 049 |
| 4 | 16 | 64 | 256 | 1024 | 4096 | 16 384 | 65 536 | 262 144 | 1 048 576 |
| 5 | 25 | 125 | 625 | 3125 | 15 625 | 78 125 | 390 625 | 1 953 125 | 9 765 625 |
| 6 | 36 | 216 | 1296 | 7776 | 46 656 | 279 936 | 1 679 616 | 10 077 696 | 60 466 176 |
| 7 | 49 | 343 | 2401 | 16 807 | 117 649 | 823 543 | 5 764 801 | 40 353 607 | 282 475 249 |
| 8 | 64 | 512 | 4096 | 32 768 | 262 144 | 2 097 152 | 16 777 216 | 134 217 728 | 1 073 741 824 |
| 9 | 81 | 729 | 6561 | 59 049 | 531 441 | 4 782 969 | 43 046 721 | 387 420 489 | 3 486 784 401 |
| 10 | 100 | 1000 | 10 000 | 100 000 | 1 000 000 | 10 000 000 | 100 000 000 | 1 000 000 000 | 10 000 000 000 |

An nth root of a number b is a number x such that x^n = b.

If b is a positive real number and n is a positive integer, then there is exactly one positive real solution to x^n = b. This solution is called the principal nth root of b. It is denoted ⁿ√b, where √ is the radical symbol; alternatively, the principal nth root of b may be written b^(1/n). For example: 9^(1/2) = √9 = 3 and 8^(1/3) = ³√8 = 2.

If b is equal to 0, the equation x^n = b has one solution, which is x = 0.

If n is even and b is positive, then x^n = b has two real solutions, which are the positive and negative nth roots of b, that is, b^(1/n) > 0 and −(b^(1/n)) < 0.

If n is even and b is negative, the equation has no solution in real numbers.

If n is odd, then x^n = b has exactly one real solution, which is positive if b is positive (b^(1/n) > 0) and negative if b is negative (b^(1/n) < 0).

Taking a positive real number b to a rational exponent u/v, where u is an integer and v is a positive integer, and considering principal roots only, yields

b^(u/v) = (b^u)^(1/v) = (b^(1/v))^u.

Taking a negative real number b to a rational power u/v, where u/v is in lowest terms, yields a positive real result if u is even (and hence v is odd), because then b^u is positive; it yields a negative real result if u and v are both odd, because then b^u is negative. The case of even v (and hence odd u) cannot be treated this way within the reals, since there is no real number x such that x^(2k) = −1; the value of b^(u/v) in this case must use the imaginary unit i, as described more fully in the section § Powers of complex numbers.

Thus we have (−27)^(1/3) = −3 and (−27)^(2/3) = 9. The number 4 has two 3/2 powers, namely 8 and −8; however, by convention the notation 4^(3/2) employs the principal root, and results in 8. For employing the v-th root, the u/v-th power is also called the v/u-th root, and, for even v, the term principal root denotes also the positive result.

This sign ambiguity needs to be taken care of when applying the power identities. For instance:

is clearly wrong. The problem starts already in the first equality, which introduces a standard notation for an inherently ambiguous situation (asking for an even root) and then relies wrongly on only one, the conventional or principal, interpretation. The same problem also occurs with an inappropriately introduced surd notation, which inherently enforces a positive result:

In general the same sort of problems occur for complex numbers as described in the section § Failure of power and logarithm identities.

Exponentiation to real powers of positive real numbers can be defined either by extending the rational powers to reals by continuity, or more usually as given in § Powers via logarithms below. The result is always a positive real number, and the identities and properties shown above for integer exponents are true for positive real bases with non-integer exponents as well.

On the other hand, exponentiation to a real power of a negative real number is much more difficult to define consistently, as it may be non-real and have several values (see § Real exponents with negative bases). One may choose one of these values, called the principal value, but there is no choice of the principal value for which an identity such as

(b^r)^s = b^(rs)

is true; see § Failure of power and logarithm identities. Therefore, exponentiation with a basis that is not a positive real number is generally viewed as a multivalued function.

### Limits of rational exponents

Since any irrational number can be expressed as the limit of a sequence of rational numbers, exponentiation of a positive real number b with an arbitrary real exponent x can be defined by continuity with the rule [24]

b^x = lim_{r → x, r rational} b^r,

where the limit as r gets close to x is taken only over rational values of r. This limit only exists for positive b. The (ε, δ)-definition of limit is used; this involves showing that for any desired accuracy of the result b^x one can choose a sufficiently small interval around x so that all the rational powers in the interval are within the desired accuracy.

For example, if x = π, the nonterminating decimal representation π = 3.14159… can be used (based on strict monotonicity of the rational power) to obtain the intervals bounded by rational powers, such as b^3.14 < b^π < b^3.15 and b^3.141 < b^π < b^3.142 (for b > 1), which shrink around the single value b^π.
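This convergence is easy to observe numerically. The sketch below (Python; the decimal truncations of π are chosen for this example) shows the rational powers of 2 increasing toward 2^π:

```python
import math

# Successive decimal (hence rational) truncations of pi, as in the text.
truncations = [3.0, 3.1, 3.14, 3.141, 3.1415, 3.14159]

# The corresponding rational powers of 2 increase with the exponent,
# by strict monotonicity of b**r for base b > 1 ...
approximations = [2.0 ** r for r in truncations]
assert all(a < b for a, b in zip(approximations, approximations[1:]))

# ... and they approach 2**pi from below.
assert all(a < 2.0 ** math.pi for a in approximations)
assert abs(approximations[-1] - 2.0 ** math.pi) < 1e-4
```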

### The exponential function

The important mathematical constant e , sometimes called Euler's number, is approximately equal to 2.718 and is the base of the natural logarithm. Although exponentiation of e could, in principle, be treated the same as exponentiation of any other real number, such exponentials turn out to have particularly elegant and useful properties. Among other things, these properties allow exponentials of e to be generalized in a natural way to other types of exponents, such as complex numbers or even matrices, while coinciding with the familiar meaning of exponentiation with rational exponents.

As a consequence, the notation e^x usually denotes a generalized exponentiation definition called the exponential function, exp(x), which can be defined in many equivalent ways, for example, by

exp(x) = lim_{n → ∞} (1 + x/n)^n.

Among other properties, exp satisfies the exponential identity

exp(x + y) = exp(x) ⋅ exp(y).

The exponential function is defined for all integer, fractional, real, and complex values of x. In fact, the matrix exponential is well-defined for square matrices (in which case this exponential identity only holds when x and y commute) and is useful for solving systems of linear differential equations.

Since exp(1) is equal to e, and exp(x) satisfies this exponential identity, it immediately follows that exp(x) coincides with the repeated-multiplication definition of e^x for integer x, and it also follows that rational powers denote (positive) roots as usual, so exp(x) coincides with the e^x definitions in the previous section for all real x by continuity.

### Powers via logarithms

When e^x is defined as the exponential function, b^x can be defined, for other positive real numbers b, in terms of e^x. Specifically, the natural logarithm ln(x) is the inverse of the exponential function e^x. It is defined for b > 0, and satisfies

e^(ln b) = b.

If b^x is to preserve the logarithm and exponent rules, then one must have

b^x = (e^(ln b))^x = e^(x ⋅ ln b)

for each real number x.

This can be used as an alternative definition of the real-number power b^x and agrees with the definition given above using rational exponents and continuity. The definition of exponentiation using logarithms is more common in the context of complex numbers, as discussed below.
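The logarithm definition is straightforward to sketch in code. Below is a minimal Python version (the helper name `power` is ours), compared against the built-in operator:

```python
import math

def power(b, x):
    """b**x for b > 0 via the logarithm definition: b**x = exp(x * ln b)."""
    if b <= 0:
        raise ValueError("the logarithm definition needs a positive base")
    return math.exp(x * math.log(b))

assert math.isclose(power(9, 0.5), 3.0)     # the principal square root
assert math.isclose(power(2, 10), 1024.0)   # agrees with repeated multiplication
assert math.isclose(power(10, -4), 0.0001)  # negative exponents work too
```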

### Real exponents with negative bases

Powers of a positive real number are always positive real numbers. The solution of x^2 = 4, however, can be either 2 or −2. The principal value of 4^(1/2) is 2, but −2 is also a valid square root. If the definition of exponentiation of real numbers is extended to allow negative results, then the result is no longer well-behaved.

Neither the logarithm method nor the rational exponent method can be used to define b^r as a real number for a negative real number b and an arbitrary real number r. Indeed, e^r is positive for every real number r, so ln(b) is not defined as a real number for b ≤ 0.

The rational exponent method cannot be used for negative values of b because it relies on continuity. The function f(r) = b^r has a unique continuous extension [24] from the rational numbers to the real numbers for each b > 0. But when b < 0, the function f is not even continuous on the set of rational numbers r for which it is defined.

For example, consider b = −1. The nth root of −1 is −1 for every odd natural number n. So if n is an odd positive integer, (−1)^(m/n) = −1 if m is odd, and (−1)^(m/n) = 1 if m is even. Thus the set of rational numbers q for which (−1)^q = 1 is dense in the rational numbers, as is the set of q for which (−1)^q = −1. This means that the function (−1)^q is not continuous at any rational number q where it is defined.

On the other hand, arbitrary complex powers of negative numbers b can be defined by choosing a complex logarithm of b.

### Irrational exponents

If b is a positive real algebraic number, and x is a rational number, it has been shown above that b^x is an algebraic number. This remains true even if one accepts any algebraic number for b, with the only difference that b^x may take several values (a finite number, see below), which are all algebraic. The Gelfond–Schneider theorem provides some information on the nature of b^x when x is irrational (that is, not rational). It states:

If b is an algebraic number different from 0 and 1, and x an irrational algebraic number, then all values of b^x (there are infinitely many) are transcendental (that is, not algebraic).

If b is a positive real number, and z is any complex number, the power b^z is defined by

b^z = e^(xz),

where x = ln(b) is the unique real solution to the equation e^x = b, and the complex power of e is defined by the exponential function, which is the unique function of a complex variable that is equal to its derivative and takes the value 1 for x = 0.

As, in general, b^z is not a real number, an expression such as (b^z)^w is not defined by the previous definition. It must be interpreted via the rules for powers of complex numbers, and, unless z is real or w is an integer, does not generally equal b^(zw), as one might expect.

There are various definitions of the exponential function, but they extend compatibly to complex numbers and satisfy the exponential property. For any complex numbers z and w, the exponential function satisfies e^(z + w) = e^z ⋅ e^w. In particular, for any complex number z = x + iy,

e^z = e^x ⋅ e^(iy).

By Euler's formula, e^(iy) = cos y + i sin y. This formula links problems in trigonometry and algebra.

Therefore, for any complex number z = x + iy,

e^z = e^x (cos y + i sin y).

### Series definition

The exponential function being equal to its derivative and satisfying e^0 = 1, its Taylor series must be

e^z = Σ_{n=0}^{∞} z^n / n! = 1 + z + z^2/2! + z^3/3! + ⋯.

This infinite series, which is often taken as the definition of the exponential function e z for arbitrary complex exponents, is absolutely convergent for all complex numbers z.

When z is purely imaginary, that is, z = iy for a real number y, the series above becomes

e^(iy) = Σ_{n=0}^{∞} (iy)^n / n!,

which (because it converges absolutely) may be reordered to

e^(iy) = Σ_{k=0}^{∞} (−1)^k y^(2k) / (2k)! + i Σ_{k=0}^{∞} (−1)^k y^(2k+1) / (2k+1)!.

The real and the imaginary parts of this expression are Taylor expansions of cosine and sine respectively, centered at zero, implying Euler's formula:

e^(iy) = cos y + i sin y.
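The agreement between the series, the built-in complex exponential, and the cosine/sine expansions can be checked numerically. A minimal Python sketch (the helper name `exp_series` is ours):

```python
import cmath
import math
from math import factorial

def exp_series(z, terms=30):
    """Partial sum of the Taylor series sum_n z**n / n! for e**z."""
    return sum(z**n / factorial(n) for n in range(terms))

y = 1.2
approx = exp_series(1j * y)

# The series agrees with the built-in complex exponential ...
assert abs(approx - cmath.exp(1j * y)) < 1e-12
# ... and its real/imaginary parts are the cosine and sine values,
# which is Euler's formula e**(iy) = cos y + i sin y.
assert abs(approx.real - math.cos(y)) < 1e-12
assert abs(approx.imag - math.sin(y)) < 1e-12
```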

### Periodicity

That is, the complex exponential function e^z = exp(z) satisfies exp(z) = exp(z + 2kπi) for any integer k, and is hence a periodic function with period 2πi.

### Examples

Integer powers of nonzero complex numbers are defined by repeated multiplication or division as above. If i is the imaginary unit and n is an integer, then i^n equals 1, i, −1, or −i, according to whether the integer n is congruent to 0, 1, 2, or 3 modulo 4. Because of this, the powers of i are useful for expressing sequences of period 4.
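The period-4 pattern makes i^n computable from n mod 4 alone, as this small Python sketch shows (the helper name `i_power` is ours):

```python
def i_power(n):
    """i**n for any integer n, using only the residue of n modulo 4."""
    return [1, 1j, -1, -1j][n % 4]

assert i_power(0) == 1 and i_power(1) == 1j
assert i_power(2) == -1 and i_power(3) == -1j
assert i_power(6) == -1       # period 4: same as i**2
assert i_power(-1) == -1j     # in Python, (-1) % 4 == 3
```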

Complex powers of positive reals are defined via e^x as in the section Complex exponents with positive real bases above. These are continuous functions.

Trying to extend these functions to the general case of noninteger powers of complex numbers that are not positive reals leads to difficulties. Either we define discontinuous functions or multivalued functions. Neither of these options is entirely satisfactory.

The rational power of a complex number must be the solution to an algebraic equation. Therefore, it always has a finite number of possible values. For example, w = z^(1/2) must be a solution to the equation w^2 = z. But if w is a solution, then so is −w, because (−1)^2 = 1. A unique but somewhat arbitrary solution called the principal value can be chosen using a general rule which also applies for nonrational powers.

Complex powers and logarithms are more naturally handled as single valued functions on a Riemann surface. Single valued versions are defined by choosing a sheet. The value has a discontinuity along a branch cut. Choosing one out of many solutions as the principal value leaves us with functions that are not continuous, and the usual rules for manipulating powers can lead us astray.

Any nonrational power of a complex number has an infinite number of possible values because of the multi-valued nature of the complex logarithm. The principal value is a single value chosen from these by a rule which, amongst its other properties, ensures powers of complex numbers with a positive real part and zero imaginary part give the same value as does the rule defined above for the corresponding real base.

Exponentiating a real number to a complex power is formally a different operation from that for the corresponding complex number. However, in the common case of a positive real number the principal value is the same.

The powers of negative real numbers are not always defined and are discontinuous even where defined. In fact, they are only defined when the exponent is a rational number with the denominator being an odd integer. When dealing with complex numbers the complex number operation is normally used instead.

### Complex exponents with complex bases

For complex numbers w and z with w ≠ 0, the notation w^z is ambiguous in the same sense that log w is.

To obtain a value of w^z, first choose a logarithm of w; call it log w. Such a choice may be the principal value Log w (the default, if no other specification is given), or perhaps a value given by some other branch of log w fixed in advance. Then, using the complex exponential function, one defines

w^z = e^(z ⋅ log w),

because this agrees with the earlier definition in the case where w is a positive real number and the (real) principal value of log w is used.

If z is an integer, then the value of w^z is independent of the choice of log w, and it agrees with the earlier definition of exponentiation with an integer exponent.

If z is a rational number m/n in lowest terms with n > 0, then the countably infinitely many choices of log w yield only n different values for w^z; these values are the n complex solutions s to the equation s^n = w^m.

If z is an irrational number, then the countably infinitely many choices of log w lead to infinitely many distinct values for w^z.

The computation of complex powers is facilitated by converting the base w to polar form, as described in detail below.

A similar construction is employed in quaternions.

### Complex roots of unity

A complex number w such that w^n = 1 for a positive integer n is an nth root of unity. Geometrically, the nth roots of unity lie on the unit circle of the complex plane at the vertices of a regular n-gon with one vertex on the real number 1.

If w^n = 1 but w^k ≠ 1 for all natural numbers k such that 0 < k < n, then w is called a primitive nth root of unity. The negative unit −1 is the only primitive square root of unity. The imaginary unit i is one of the two primitive 4th roots of unity; the other one is −i.

The nth roots of unity are given by e^(2kπi/n) for k = 0, 1, …, n − 1.

### Roots of arbitrary complex numbers

Although there are infinitely many possible values for a general complex logarithm, there are only a finite number of values for the power w^q in the important special case where q = 1/n and n is a positive integer. These are the nth roots of w; they are solutions of the equation z^n = w. As with real roots, a second root is also called a square root and a third root is also called a cube root.

It is usual in mathematics to define w^(1/n) as the principal value of the root, which is, conventionally, the nth root whose argument has the smallest absolute value. When w is a positive real number, this is coherent with the usual convention of defining w^(1/n) as the unique positive real nth root. On the other hand, when w is a negative real number, and n is an odd integer, the unique real nth root is not one of the two nth roots whose argument has the smallest absolute value. In this case, the meaning of w^(1/n) may depend on the context, and some care may be needed for avoiding errors.

The set of nth roots of a complex number w is obtained by multiplying the principal value w^(1/n) by each of the nth roots of unity. For example, the fourth roots of 16 are 2, −2, 2i, and −2i, because the principal value of the fourth root of 16 is 2 and the fourth roots of unity are 1, −1, i, and −i.
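This construction translates directly into code. A minimal Python sketch (the helper name `nth_roots` is ours), using the standard cmath module:

```python
import cmath

def nth_roots(w, n):
    """All n-th roots of w: the principal n-th root (principal argument
    divided by n) times each of the n-th roots of unity e**(2*pi*i*k/n)."""
    r, theta = cmath.polar(w)          # w = r * e**(i*theta), theta in (-pi, pi]
    principal = r ** (1.0 / n) * cmath.exp(1j * theta / n)
    return [principal * cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

roots = nth_roots(16, 4)
assert all(abs(z**4 - 16) < 1e-9 for z in roots)             # each is a 4th root
assert sorted(round(abs(z), 9) for z in roots) == [2.0] * 4  # all of modulus 2
```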

### Computing complex powers

It is often easier to compute complex powers by writing the number to be exponentiated in polar form. Every complex number z can be written in the polar form

z = r e^(iθ),

where r is a nonnegative real number and θ is the (real) argument of z. The polar form has a simple geometric interpretation: if a complex number u + iv is thought of as representing a point (u, v) in the complex plane using Cartesian coordinates, then (r, θ) is the same point in polar coordinates. That is, r is the "radius" (r^2 = u^2 + v^2) and θ is the "angle" (θ = atan2(v, u)). The polar angle θ is ambiguous since any integer multiple of 2π could be added to θ without changing the location of the point. Each choice of θ gives in general a different possible value of the power. A branch cut can be used to choose a specific value. The principal value (the most common branch cut) corresponds to θ chosen in the interval (−π, π]. For complex numbers with a positive real part and zero imaginary part, using the principal value gives the same result as using the corresponding real number.

In order to compute the complex power w^z, write w in polar form:

w = r e^(iθ).

If z is decomposed as c + di, then the formula for w^z can be written more explicitly as

w^z = (r^c e^(−dθ)) e^(i(d ln r + cθ)) = (r^c e^(−dθ)) (cos(d ln r + cθ) + i sin(d ln r + cθ)).

This final formula allows complex powers to be computed easily from decompositions of the base into polar form and the exponent into Cartesian form. It is shown here both in polar form and in Cartesian form (via Euler's identity).

The following examples use the principal value, the branch cut which causes θ to be in the interval (−π, π]. To compute i^i, write i in polar and Cartesian forms:

i = 1 ⋅ e^(iπ/2),  i = 0 + 1i.

Applying the formula above with r = 1, θ = π/2, c = 0, and d = 1 gives i^i = (1^0 e^(−π/2))(cos 0 + i sin 0) = e^(−π/2).

Similarly, to find (−2)^(3 + 4i), compute the polar form of −2:

−2 = 2 e^(iπ),

and use the formula above to compute

(−2)^(3 + 4i) = (2^3 e^(−4π)) (cos(4 ln 2 + 3π) + i sin(4 ln 2 + 3π)).

The value of a complex power depends on the branch used. For example, if the polar form i = 1 ⋅ e^(5πi/2) is used to compute i^i, the power is found to be e^(−5π/2); the principal value of i^i, computed above, is e^(−π/2). The set of all possible values for i^i is given by [28]

i^i = e^(−π/2 + 2kπ), with k any integer.

So there is an infinity of values that are possible candidates for the value of i^i, one for each integer k. All of them have a zero imaginary part, so one can say i^i has an infinity of valid real values.
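The branch dependence of i^i can be sketched numerically (Python; the helper name `i_to_the_i` is ours, and k indexes the chosen branch of log i):

```python
import cmath

def i_to_the_i(k):
    """The value of i**i obtained from the branch log i = i*(pi/2 + 2*pi*k)."""
    log_i = 1j * (cmath.pi / 2 + 2 * cmath.pi * k)
    return cmath.exp(1j * log_i)       # i**i = exp(i * log i)

# k = 0 gives the principal value e**(-pi/2); k = 1 corresponds to the
# polar form i = e**(5*pi*i/2) and gives e**(-5*pi/2), as in the text.
assert abs(i_to_the_i(0) - cmath.exp(-cmath.pi / 2)) < 1e-12
assert abs(i_to_the_i(1) - cmath.exp(-5 * cmath.pi / 2)) < 1e-12
# Every branch yields a real value.
assert all(abs(i_to_the_i(k).imag) < 1e-12 for k in range(-3, 4))
```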

### Failure of power and logarithm identities

Some identities for powers and logarithms for positive real numbers will fail for complex numbers, no matter how complex powers and complex logarithms are defined as single-valued functions. For example:

• The identity log(b^x) = x ⋅ log b holds whenever b is a positive real number and x is a real number. But for the principal branch of the complex logarithm one has

iπ = log(−1) = log[(−i)^2] ≠ 2 log(−i) = 2(−iπ/2) = −iπ.

Regardless of which branch of the logarithm is used, a similar failure of the identity will exist. The best that can be said (if only using this result) is that:

log(w^z) ≡ z ⋅ log w (mod 2πi).

This identity does not hold even when considering log as a multivalued function. The possible values of log(w^z) contain those of z ⋅ log w as a subset. Using Log(w) for the principal value of log(w) and m, n as any integers, the possible values of both sides are:

{ log(w^z) } = { z ⋅ Log(w) + z ⋅ 2πin + 2πim },  { z ⋅ log w } = { z ⋅ Log(w) + z ⋅ 2πin }.

On the other hand, when x is an integer, the identities are valid for all nonzero complex numbers.

### Monoids

Exponentiation with integer exponents can be defined in any multiplicative monoid. [30] A monoid is an algebraic structure consisting of a set X together with a rule for composition ("multiplication") satisfying an associative law and a multiplicative identity, denoted by 1. Exponentiation is defined inductively by

x^0 = 1,  x^(n+1) = x ⋅ x^n for every nonnegative integer n.

Monoids include many structures of importance in mathematics, including groups and rings (under multiplication), with more specific examples of the latter being matrix rings and fields.

### Matrices and linear operators

If A is a square matrix, then the product of A with itself n times is called the matrix power. Also A^0 is defined to be the identity matrix, [32] and if A is invertible, then A^(−n) = (A^(−1))^n.
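A toy sketch of matrix powers in pure Python follows (the helper names `mat_mul` and `mat_pow` are ours; the Fibonacci matrix is a standard illustrative choice, not from the text):

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, n):
    """A**n by repeated multiplication; A**0 is the identity matrix."""
    size = len(A)
    result = [[int(i == j) for j in range(size)] for i in range(size)]
    for _ in range(n):
        result = mat_mul(result, A)
    return result

F = [[1, 1], [1, 0]]                  # powers of this matrix hold Fibonacci numbers
assert mat_pow(F, 0) == [[1, 0], [0, 1]]
assert mat_pow(F, 5) == [[8, 5], [5, 3]]
```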

These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents. This is the starting point of the mathematical theory of semigroups. [34] Just as computing matrix powers with discrete exponents solves discrete dynamical systems, so does computing matrix powers with continuous exponents solve systems with continuous dynamics. Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations including a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus.

### Finite fields

A field is an algebraic structure in which multiplication, addition, subtraction, and division are all well-defined and satisfy their familiar properties. The real numbers, for example, form a field, as do the complex numbers and rational numbers. Unlike these familiar examples of fields, which are all infinite sets, some fields have only finitely many elements. The simplest example is the field with two elements F_2 = {0, 1}, with addition defined by 0 + 1 = 1 + 0 = 1 and 0 + 0 = 1 + 1 = 0, and multiplication 0 ⋅ 0 = 1 ⋅ 0 = 0 ⋅ 1 = 0 and 1 ⋅ 1 = 1.

Exponentiation in finite fields has applications in public key cryptography. For example, the Diffie–Hellman key exchange uses the fact that exponentiation is computationally inexpensive in finite fields, whereas the discrete logarithm (the inverse of exponentiation) is computationally expensive.
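The asymmetry the Diffie–Hellman key exchange relies on can be sketched with Python's built-in three-argument `pow` (fast modular exponentiation). The parameters below are toy values chosen for this example only; real deployments use much larger, standardized ones:

```python
# Toy Diffie-Hellman over the multiplicative group mod a small prime.
p, g = 23, 5           # public prime modulus and generator (toy sizes)
a, b = 6, 15           # the two parties' private exponents

A = pow(g, a, p)       # fast modular exponentiation: g**a mod p
B = pow(g, b, p)

# Each side exponentiates the other's public value by its own secret;
# both arrive at the same shared key g**(a*b) mod p.
shared_a = pow(B, a, p)
shared_b = pow(A, b, p)
assert shared_a == shared_b == pow(g, a * b, p)
```

Recovering a from A (the discrete logarithm) has no comparably fast general method, which is the source of the security.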

### In abstract algebra

Exponentiation for integer exponents can be defined for quite general structures in abstract algebra.

Let X be a set with a power-associative binary operation which is written multiplicatively. Then x^n is defined for any element x of X and any nonzero natural number n as the product of n copies of x, which is recursively defined by

x^1 = x,  x^(n+1) = x^n ⋅ x for n > 0.

One has the following properties:

x^(m+n) = x^m ⋅ x^n,  (x^m)^n = x^(mn).

If the operation has a two-sided identity element 1, then x^0 is defined to be equal to 1 for any x.

If the operation also has two-sided inverses and is associative, then the magma is a group. The inverse of x can be denoted by x^(−1) and follows all the usual rules for exponents:

x^(−n) = (x^(−1))^n,  x^(m+n) = x^m ⋅ x^n,  (x^m)^n = x^(mn) for all integers m and n.

If the multiplication operation is commutative (as, for instance, in abelian groups), then the following holds:

(xy)^n = x^n ⋅ y^n.

If the binary operation is written additively, as it often is for abelian groups, then "exponentiation is repeated multiplication" can be reinterpreted as "multiplication is repeated addition". Thus, each of the laws of exponentiation above has an analogue among laws of multiplication.

When there are several power-associative binary operations defined on a set, any of which might be iterated, it is common to indicate which operation is being repeated by placing its symbol in the superscript. Thus, x^(∗n) is x ∗ ⋯ ∗ x, while x^(#n) is x # ⋯ # x, whatever the operations ∗ and # might be.

Superscript notation is also used, especially in group theory, to indicate conjugation. That is, g^h = h^(−1)gh, where g and h are elements of some group. Although conjugation obeys some of the same laws as exponentiation, it is not an example of repeated multiplication in any sense. A quandle is an algebraic structure in which these laws of conjugation play a central role.

### Over sets

If n is a natural number, and A is an arbitrary set, then the expression A^n is often used to denote the set of ordered n-tuples of elements of A. This is equivalent to letting A^n denote the set of functions from the set {0, 1, 2, …, n − 1} to the set A; the n-tuple (a_0, a_1, a_2, …, a_{n−1}) represents the function that sends i to a_i.

For an infinite cardinal number κ and a set A, the notation A^κ is also used to denote the set of all functions from a set of size κ to A. This is sometimes written ^κA to distinguish it from cardinal exponentiation, defined below.

This generalized exponential can also be defined for operations on sets or for sets with extra structure. For example, in linear algebra, it makes sense to index direct sums of vector spaces over arbitrary index sets. That is, we can speak of

⊕_{i ∈ N} V_i,

where each V_i is a vector space.

Then if V_i = V for each i, the resulting direct sum can be written in exponential notation as V^(⊕N), or simply V^N with the understanding that the direct sum is the default. We can again replace the set N with a cardinal number n to get V^n, although without choosing a specific standard set with cardinality n, this is defined only up to isomorphism. Taking V to be the field R of real numbers (thought of as a vector space over itself) and n to be some natural number, we get the vector space that is most commonly studied in linear algebra, the real vector space R^n.

If the base of the exponentiation operation is a set, the exponentiation operation is the Cartesian product unless otherwise stated. Since multiple Cartesian products produce an n-tuple, which can be represented by a function on a set of appropriate cardinality, S^N becomes simply the set of all functions from N to S in this case:

S^N = { f : N → S }.

This fits in with the exponentiation of cardinal numbers, in the sense that |S^N| = |S|^|N|, where |X| is the cardinality of X. When "2" is defined as {0, 1}, we have |2^X| = 2^|X|, where 2^X, usually denoted by P(X), is the power set of X; each subset Y of X corresponds uniquely to a function on X taking the value 1 for x ∈ Y and 0 for x ∉ Y.

### In category theory

In a Cartesian closed category, the exponential operation can be used to raise an arbitrary object to the power of another object. This generalizes the Cartesian product in the category of sets. If 0 is an initial object in a Cartesian closed category, then the exponential object 0^0 is isomorphic to any terminal object 1.

### Of cardinal and ordinal numbers

In set theory, there are exponential operations for cardinal and ordinal numbers.

If κ and λ are cardinal numbers, the expression κ^λ represents the cardinality of the set of functions from any set of cardinality λ to any set of cardinality κ. [35] If κ and λ are finite, then this agrees with the ordinary arithmetic exponential operation. For example, the set of 3-tuples of elements from a 2-element set has cardinality 8 = 2^3. In cardinal arithmetic, κ^0 is always 1 (even if κ is an infinite cardinal or zero).

Exponentiation of cardinal numbers is distinct from exponentiation of ordinal numbers, which is defined by a limit process involving transfinite induction.

Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration. Iterating tetration leads to another operation, and so on, a concept named hyperoperation. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which is faster-growing than addition, tetration is faster-growing than exponentiation. Evaluated at (3, 3), the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7 625 597 484 987 (= 3^27 = 3^(3^3) = ³3) respectively.

Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 0^0. The limits in these examples exist, but have different values, showing that the two-variable function x^y has no limit at the point (0, 0). One may consider at what points this function does have a limit.

More precisely, consider the function f(x, y) = x^y defined on D = {(x, y) ∈ R^2 : x > 0}. Then D can be viewed as a subset of R̄^2 (that is, the set of all pairs (x, y) with x, y belonging to the extended real number line R̄ = [−∞, +∞], endowed with the product topology), which will contain the points at which the function f has a limit.

In fact, f has a limit at all accumulation points of D, except for (0, 0), (+∞, 0), (1, +∞) and (1, −∞). [36] Accordingly, this allows one to define the powers x^y by continuity whenever 0 ≤ x ≤ +∞, −∞ ≤ y ≤ +∞, except for 0^0, (+∞)^0, 1^(+∞) and 1^(−∞), which remain indeterminate forms.

Under this definition by continuity, we obtain:

• x^(+∞) = +∞ and x^(−∞) = 0, when 1 < x ≤ +∞.
• x^(+∞) = 0 and x^(−∞) = +∞, when 0 ≤ x < 1.
• 0^y = 0 and (+∞)^y = +∞, when 0 < y ≤ +∞.
• 0^y = +∞ and (+∞)^y = 0, when −∞ ≤ y < 0.

These powers are obtained by taking limits of x^y for positive values of x. This method does not permit a definition of x^y when x < 0, since pairs (x, y) with x < 0 are not accumulation points of D.
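The limiting behaviours above can be illustrated numerically with ordinary floating-point powers (plain Python floats; no symbolic limits are taken, so these are suggestive values, not proofs):

```python
# For x > 1, x^y grows without bound as y → +∞; for 0 < x < 1 it shrinks to 0.
for x in (2.0, 0.5):
    print(x, [x ** y for y in (10, 100, 1000)])
```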

On the other hand, when n is an integer, the power x^n is already meaningful for all values of x, including negative ones. This may make the definition 0^n = +∞ obtained above for negative n problematic when n is odd, since in this case x^n → +∞ as x tends to 0 through positive values, but not negative ones.

Computing b^n using iterated multiplication requires n − 1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2^100, note that 100 = 64 + 32 + 4. Compute the following in order:

1. 2^2 = 4
2. (2^2)^2 = 2^4 = 16.
3. (2^4)^2 = 2^8 = 256.
4. (2^8)^2 = 2^16 = 65 536.
5. (2^16)^2 = 2^32 = 4 294 967 296.
6. (2^32)^2 = 2^64 = 18 446 744 073 709 551 616.
7. 2^64 · 2^32 · 2^4 = 2^100 = 1 267 650 600 228 229 401 496 703 205 376.

This series of steps only requires 8 multiplication operations (the last product above takes 2 multiplications) instead of 99.

In general, the number of multiplication operations required to compute b^n can be reduced to Θ(log n) by using exponentiation by squaring or (more generally) addition-chain exponentiation. Finding the minimal sequence of multiplications (the minimal-length addition chain for the exponent) for b^n is a difficult problem, for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available. [37]
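Exponentiation by squaring can be sketched by walking the binary digits of the exponent, squaring the base at each step. This is one common formulation under the stated assumptions (non-negative integer exponent), not any particular library's implementation:

```python
def power(b, n):
    """Exponentiation by squaring: O(log n) multiplications, for integer n >= 0."""
    result = 1
    while n > 0:
        if n & 1:        # this binary digit of the exponent is set
            result *= b
        b *= b           # square the base for the next binary digit
        n >>= 1
    return result

print(power(2, 100))  # 1267650600228229401496703205376
```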

Placing an integer superscript after the name or symbol of a function, as if the function were being raised to a power, commonly refers to repeated function composition rather than repeated multiplication. [38] [39] [40] Thus, f^3(x) may mean f(f(f(x))); [41] in particular, f^(−1)(x) usually denotes the inverse function of f. This notation was introduced by Hans Heinrich Bürmann [citation needed] [39] [40] and John Frederick William Herschel. [38] [39] [40] Iterated functions are of interest in the study of fractals and dynamical systems. Babbage was the first to study the problem of finding a functional square root f^(1/2)(x).
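The n-fold composition reading of f^n can be sketched with a small hypothetical helper (the name `iterate` is an assumption for illustration):

```python
def iterate(f, n):
    """Return the n-fold composition f ∘ f ∘ … ∘ f (n applications of f)."""
    def iterated(x):
        for _ in range(n):
            x = f(x)
        return x
    return iterated

double = lambda x: 2 * x
print(iterate(double, 3)(5))  # f(f(f(5))) = 40
```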

To distinguish exponentiation from function composition, the common usage is to write the exponential exponent after the parenthesis enclosing the argument of the function; that is, f(x)^3 means (f(x))^3, and f(x)^(−1) means 1/f(x).

For historical reasons, and because of the ambiguity resulting from not enclosing arguments in parentheses, a superscript after a function name applied specifically to the trigonometric and hyperbolic functions has a deviating meaning: a positive exponent applied to the function's abbreviation means that the result is raised to that power, [42] [43] [44] [45] [46] [47] [48] [20] [40] while an exponent of −1 still denotes the inverse function. [40] That is, sin^2 x is just a shorthand way to write (sin x)^2 = sin(x)^2 without using parentheses, [16] [49] [50] [51] [52] [53] [54] [20] whereas sin^(−1) x refers to the inverse function of the sine, also called arcsin x. Each trigonometric and hyperbolic function has its own name and abbreviation both for the reciprocal (for example, 1/(sin x) = (sin x)^(−1) = sin(x)^(−1) = csc x) and its inverse (for example cosh^(−1) x = arcosh x). A similar convention exists for logarithms, [40] where today log^2 x usually means (log x)^2, not log log x. [40]

To avoid ambiguity, some mathematicians [citation needed] choose to use ∘ to denote the compositional meaning, writing f^(∘n)(x) for the n-th iterate of the function f(x), as in, for example, f^(∘3)(x) meaning f(f(f(x))). For the same purpose, f^[n](x) was used by Benjamin Peirce, [55] [40] whereas Alfred Pringsheim and Jules Molk suggested the pre-superscript notation ^n f(x) instead. [56] [40] [nb 1]

Programming languages generally express exponentiation either as an infix operator or as a (prefix) function, as they are linear notations which do not support superscripts:

• x ↑ y : Algol, Commodore BASIC, TRS-80 Level II/III BASIC. [57][58]
• x ^ y : AWK, BASIC, J, MATLAB, Wolfram Language (Mathematica), R, Microsoft Excel, Analytica, TeX (and its derivatives), TI-BASIC, bc (for integer exponents), Haskell (for nonnegative integer exponents), Lua and most computer algebra systems. Conflicting uses of the symbol ^ include: XOR (in POSIX Shell arithmetic expansion, AWK, C, C++, C#, D, Go, Java, JavaScript, Perl, PHP, Python, Ruby and Tcl), indirection (Pascal), and string concatenation (OCaml and Standard ML).
• x ^^ y : Haskell (for fractional base, integer exponents), D.
• x ** y : Ada, Z shell, KornShell, Bash, COBOL, CoffeeScript, Fortran, FoxPro, Gnuplot, Groovy, JavaScript, OCaml, F#, Perl, PHP, PL/I, Python, Rexx, Ruby, SAS, Seed7, Tcl, ABAP, Mercury, Haskell (for floating-point exponents), Turing, VHDL.
• pown x y : F# (for integer base, integer exponent).
• x⋆y : APL.

Many other programming languages lack syntactic support for exponentiation, but provide library functions:

• pow(x, y) : C, C++.
• Math.Pow(x, y) : C#.
• math:pow(X, Y) : Erlang.
• Math.pow(x, y) : Java.
• [Math]::Pow(x, y) : PowerShell.
• (expt x y) : Common Lisp.

For certain exponents there are special ways to compute x^y much faster than through generic exponentiation. These cases include small positive and negative integers (prefer x · x over x^2; prefer 1/x over x^(−1)) and roots (prefer sqrt(x) over x^0.5; prefer cbrt(x) over x^(1/3)).
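A brief check that the special-case forms agree in value with generic exponentiation (Python; `math.isclose` is used for the root because the two forms may differ in the last bit on some platforms):

```python
import math

x = 2.0
# Special-case forms versus generic exponentiation: same values,
# but the left-hand forms are typically faster and can be more accurate.
assert x * x == x ** 2          # small positive integer exponent
assert 1 / x == x ** -1        # small negative integer exponent
assert math.isclose(math.sqrt(x), x ** 0.5)  # square root
print("special cases agree")
```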
