An infinite series is a sum of infinitely many quantities. Series play an important role in calculus and in fields that build on it, such as functional analysis, and they also appear in discrete mathematics. Since summing infinitely many elements is hard to grasp directly, a better way to describe an infinite series is as a limiting process: the limit of the partial sums of a sequence. Another important concept when dealing with infinite series is convergence. For series of real numbers this means that, even though you sum infinitely many numbers, the result is still a finite number. In this article I will discuss some well-known series and their convergence, and I will end with a problem to solve.
Infinite geometric series
Most people who have taken some kind of math course have encountered a geometric series. A geometric series is a series that is described completely by a first term and a common ratio. The classic example of a geometric series is
$$1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots$$.
Here the first term is $1$ and the common ratio is $1/2$. The common ratio is the factor by which you multiply a term to obtain the next one. A geometric series can thus always be written as
$$\sum_{n = 0}^\infty a r^n$$
where $a$ is the first term and $r$ is the common ratio. This series converges only for $|r| < 1$. Assuming it converges, its closed-form value can be derived as follows:
$$\sum_{n = 0}^\infty a r^n = a \left(1 + r \sum_{n = 1}^\infty r^{n - 1}\right) = a \left(1 + r \sum_{n = 0}^\infty r^{n}\right)$$
The right-hand side equals $a + r \sum_{n = 0}^\infty a r^n$, so moving this last term to the left-hand side we find
$$(1 - r) \sum_{n = 0}^\infty a r^n = a \implies \sum_{n = 0}^\infty a r^n = \frac{a}{1 - r}$$.
To prove that this series converges only for $|r| < 1$, you can consider the closed form of the finite geometric series, which contains a term involving $r^{N+1}$. Taking the limit as $N$ tends to infinity gives the infinite geometric series, and for this limit to exist we need $r^{N+1}$ to converge to $0$, which holds exactly when $|r| < 1$.
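For reference, the finite geometric series has the well-known closed form
$$\sum_{n = 0}^N a r^n = a \, \frac{1 - r^{N + 1}}{1 - r} \qquad (r \neq 1)$$
and letting $N$ tend to infinity recovers $\frac{a}{1 - r}$ exactly when $r^{N+1}$ tends to $0$, that is, when $|r| < 1$ (for $r = 1$ the partial sums equal $a(N + 1)$ and clearly diverge).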
Infinite arithmetico-geometric series
The arithmetico-geometric series is very similar to the geometric series. It can be written as
$$\sum_{k = 1}^\infty k r^k$$.
This series converges for $|r| < 1$, the same condition as for the geometric series. To find the closed form of the arithmetico-geometric series we can make use of the geometric series and a term-by-term differentiation argument from calculus. The closed form can be derived as follows:
$$\sum_{k = 1}^\infty k r^k = r \sum_{k = 1}^\infty k r^{k - 1} = r \sum_{k = 1}^\infty \frac{d}{dx} x^k \bigg|_{x = r} = r \frac{d}{dx} \sum_{k = 1}^\infty x^k \bigg|_{x = r} = r \frac{d}{dx} \frac{x}{1 - x} \bigg|_{x = r} = r \frac{1}{(1 - x)^2} \bigg|_{x = r} = \frac{r}{(1 - r)^2}$$.
The only nontrivial step is interchanging the derivative and the sum, which is allowed because a power series may be differentiated term by term inside its radius of convergence, so the resulting series again converges for $|r| < 1$. The arithmetico-geometric series finds its use in calculating expectations of discrete random variables. Try to use the given result to calculate the expected number of coin flips needed to see the first heads; a small numerical check of the closed form is given below.
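As a quick sanity check of the closed form (a minimal numerical sketch; the helper name and the chosen values of $r$ are just illustrative), the following Python snippet compares partial sums of $\sum_{k=1}^{N} k r^k$ with $\frac{r}{(1 - r)^2}$.

```python
# Compare partial sums of sum_{k=1}^{N} k r^k with the closed form r / (1 - r)^2.

def arithmetico_geometric_partial_sum(r: float, n_terms: int) -> float:
    """Return the partial sum sum_{k=1}^{n_terms} k * r**k."""
    return sum(k * r**k for k in range(1, n_terms + 1))

for r in (0.5, -0.3, 0.9):
    closed_form = r / (1 - r) ** 2
    partial = arithmetico_geometric_partial_sum(r, n_terms=200)
    # The two values agree up to the truncated tail of the series.
    print(f"r = {r:+.2f}: partial sum = {partial:.6f}, closed form = {closed_form:.6f}")
```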
Taylor series
A Taylor series is associated with a specific function: it approximates that function using polynomial terms. The series is defined through the derivatives of the function and a point at which the series is centered. For a given (infinitely differentiable) function $f$ centered at a point $a$, the Taylor series is defined by
$$\sum_{n = 0}^\infty \frac{f^{(n)}(a)}{n!} (x – a)^n$$.
The most common Taylor series are those centered at $a = 0$; these are known as Maclaurin series. A Maclaurin series that appears in many fields of mathematics is
$$e^x = \sum_{n = 0}^\infty \frac{x^n}{n!}$$.
This series converges for all values of $x$. The set of values of $x$ for which a series converges is called its interval of convergence. An example of a Taylor series with a small interval of convergence is
$$\ln(1 + x) = \sum_{n = 1}^\infty (-1)^{n+1} \frac{x^n}{n}$$.
The above equality only holds for $x \in (-1, 1]$, so this series has a much smaller interval of convergence.
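To see the difference in convergence numerically, here is a small Python sketch (the helper names and sample points are my own choices, purely for illustration) that compares truncated Maclaurin series with the built-in math functions.

```python
import math

def exp_series(x: float, n_terms: int) -> float:
    """Partial sum of sum_{n=0}^{n_terms - 1} x**n / n!."""
    return sum(x**n / math.factorial(n) for n in range(n_terms))

def log1p_series(x: float, n_terms: int) -> float:
    """Partial sum of sum_{n=1}^{n_terms} (-1)**(n+1) * x**n / n."""
    return sum((-1) ** (n + 1) * x**n / n for n in range(1, n_terms + 1))

print(exp_series(5.0, 40), math.exp(5.0))    # agrees well: the series converges for every x
print(log1p_series(0.5, 40), math.log(1.5))  # agrees well: 0.5 lies inside (-1, 1]
print(log1p_series(2.0, 40), math.log(3.0))  # far off: 2.0 lies outside (-1, 1]
```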
Fourier series
The Fourier series is similar to the Taylor series in that both approximate a given function. The difference is that the Fourier series uses sines and cosines, or equivalently complex exponentials, instead of polynomial terms. In terms of complex exponentials the Fourier series is defined as
$$f(x) \sim \frac{1}{\sqrt{2\pi}} \sum_{n = -\infty}^\infty c_n e^{inx}$$.
Now we need to define the Fourier coefficients $c_n$. To do this we first need an inner product of two functions, which we define as
$$\langle f, g \rangle = \int_{-\pi}^\pi f(x) \overline{g(x)} dx$$.
Now we can calculate the Fourier coefficients of a function $f$ as
$$c_n = \langle f, \frac{e^{inx}}{\sqrt{2\pi}} \rangle = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^\pi f(x) e^{-inx} dx$$.
Note that over the whole real line this Fourier series can only represent $2\pi$-periodic functions; any other function is only approximated on the interval $(-\pi, \pi)$. The Fourier series often appears when solving partial differential equations for which it is difficult to find the solution analytically: in these cases we can approximate the solution on a given interval using a Fourier series. Another use of the Fourier series is to calculate explicit values of difficult infinite series. To do this we need Parseval's theorem, which states that for a given (square-integrable) function $f$
$$\sum_{n = -\infty}^\infty |c_n|^2 = ||f||^2 = \langle f, f \rangle$$.
We will use this statement for the last part of this article.
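As a numerical illustration of these definitions (a minimal sketch; the midpoint-rule integration, the example function $f(x) = x$ and the truncation at $|n| \le 50$ are my own choices here), the following Python snippet approximates the coefficients $c_n$ and checks Parseval's theorem.

```python
import cmath
import math

def fourier_coefficient(f, n: int, steps: int = 20000) -> complex:
    """Approximate c_n = (1/sqrt(2*pi)) * integral_{-pi}^{pi} f(x) exp(-i n x) dx via the midpoint rule."""
    dx = 2 * math.pi / steps
    total = 0j
    for k in range(steps):
        x = -math.pi + (k + 0.5) * dx
        total += f(x) * cmath.exp(-1j * n * x) * dx
    return total / math.sqrt(2 * math.pi)

def f(x: float) -> float:
    """Example function on (-pi, pi)."""
    return x

coefficient_energy = sum(abs(fourier_coefficient(f, n)) ** 2 for n in range(-50, 51))
norm_squared = 2 * math.pi ** 3 / 3      # <f, f> = integral of x^2 over (-pi, pi)
print(coefficient_energy, norm_squared)  # close; the small gap comes from truncating at |n| = 50
```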
Problem involving the zeta function
In this part of the article our goal is to find explicit values of the Riemann zeta function at certain points. For $s > 1$ the Riemann zeta function is defined by the infinite series
$$\zeta(s) = \sum_{n = 1}^\infty \frac{1}{n^s}$$.
I will now provide the tools to evaluate the zeta function at the even positive integers. First consider the functions $f_n(x) = x^n$. The previous section explains how to calculate the Fourier coefficients of these functions. If you calculate these Fourier coefficients correctly, you can apply Parseval's theorem to find the value of $\zeta(2n)$ for each $n$. If you perform the calculation for $n = 1$, you will find that $\zeta(2) = \frac{\pi^2}{6}$; a short sketch of this case is given after the general formula below. This calculation is known as the Basel problem, and it took mathematicians almost 100 years to solve. This approach of using a Fourier series to calculate an explicit value of an infinite series is remarkable and relatively easy. For the hardcore mathematicians: if you do the calculations correctly for general $n$, you will find that
$$\zeta(2n) = \frac{(2\pi)^{2n} (-1)^{n + 1} B_{2n}}{2 \times (2n)!}$$
where the $B_{2n}$ are the Bernoulli numbers.
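Here is a sketch of the $n = 1$ case, using the definitions from the Fourier section (the general case follows the same pattern with more bookkeeping). For $f_1(x) = x$, integration by parts gives $c_0 = 0$ and, for $k \neq 0$,
$$c_k = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^\pi x e^{-ikx} dx = \frac{\sqrt{2\pi} \, i (-1)^k}{k}, \qquad |c_k|^2 = \frac{2\pi}{k^2}$$.
Parseval's theorem then gives
$$\sum_{k \neq 0} \frac{2\pi}{k^2} = 4\pi \zeta(2) = \langle f_1, f_1 \rangle = \int_{-\pi}^\pi x^2 dx = \frac{2\pi^3}{3}$$
and therefore $\zeta(2) = \frac{\pi^2}{6}$.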