## Summary

In mathematics, an infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite. More precisely, a real or complex series $\textstyle \sum _{n=0}^{\infty }a_{n}$ is said to converge absolutely if $\textstyle \sum _{n=0}^{\infty }\left|a_{n}\right|=L$ for some real number $\textstyle L.$ Similarly, an improper integral of a function, $\textstyle \int _{0}^{\infty }f(x)\,dx,$ is said to converge absolutely if the integral of the absolute value of the integrand is finite, that is, if $\textstyle \int _{0}^{\infty }|f(x)|\,dx=L.$ Absolute convergence is important for the study of infinite series because its definition is strong enough to guarantee properties of finite sums that not all convergent series possess: a convergent series that is not absolutely convergent is called conditionally convergent, while absolutely convergent series behave "nicely". For instance, rearrangements do not change the value of the sum. This is not true for conditionally convergent series: the alternating harmonic series ${\textstyle 1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+{\frac {1}{5}}-{\frac {1}{6}}+\cdots }$ converges to $\ln 2,$ while its rearrangement ${\textstyle 1+{\frac {1}{3}}-{\frac {1}{2}}+{\frac {1}{5}}+{\frac {1}{7}}-{\frac {1}{4}}+\cdots }$ (in which the repeating pattern of signs is two positive terms followed by one negative term) converges to ${\textstyle {\frac {3}{2}}\ln 2.}$

## Background

In finite sums, the order in which terms are added does not matter. 1 + 2 + 3 is the same as 3 + 2 + 1. However, this is not true when adding infinitely many numbers, and wrongly assuming that it is true can lead to apparent paradoxes. One classic example is the alternating sum

$S=1-1+1-1+1-1+\cdots$

whose terms alternate between +1 and -1. What is the value of S? One way to evaluate S is to group the first and second term, the third and fourth, and so on:

$S_{1}=(1-1)+(1-1)+(1-1)+\cdots =0+0+0+\cdots =0$

But another way to evaluate S is to leave the first term alone and group the second and third term, then the fourth and fifth term, and so on:

$S_{2}=1+(-1+1)+(-1+1)+(-1+1)+\cdots =1+0+0+0+\cdots =1$

This leads to an apparent paradox: does $S=0$  or $S=1$ ?

The answer is that because S is not absolutely convergent, rearranging its terms changes the value of the sum. This means $S_{1}$  and $S_{2}$  are not equal. In fact, the series $1-1+1-1+...$  does not converge, so S does not have a value to find in the first place. A series that is absolutely convergent does not have this problem: rearranging its terms does not change the value of the sum.
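The failure of $S$ to converge can be seen directly from its partial sums, which oscillate between 1 and 0 forever rather than settling toward a limit. A minimal numerical sketch (the helper name `grandi_partial_sums` is invented for this illustration):

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... oscillate between 1 and 0,
# so the sequence of partial sums has no limit and the series has no sum.
def grandi_partial_sums(n_terms):
    total, sums = 0, []
    for n in range(n_terms):
        total += (-1) ** n  # terms alternate +1, -1
        sums.append(total)
    return sums

print(grandi_partial_sums(8))  # [1, 0, 1, 0, 1, 0, 1, 0]
```

Because the partial sums never converge, no grouping or rearrangement argument can assign $S$ a value.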

### Explanation

This is an example of a mathematical sleight of hand. If the terms of S are written out in full, with every term kept in its original position, one finds that S is either the infinite series

$S=1-1+1-1+\cdots +1-1+1-1$

or, with equal justification, that

$S=1-1+1-1+\cdots +1-1+1$

Evaluating S as before, by grouping every -1 with the +1 preceding it or by grouping every +1 except the first with the -1 preceding it, gives in the first case:

$S_{1}=(1-1)+\cdots +(1-1)=0+\cdots +0=0$
$S_{2}=1+(-1+1)+\cdots +(-1+1)-1=1+0+\cdots +0-1=1-1=0$

and in the second case:

$S_{1}=(1-1)+\cdots +(1-1)+1=0+\cdots +0+1=1$
$S_{2}=1+(-1+1)+\cdots +(-1+1)=1+0+\cdots +0=1$

This reveals the trick: the definition of S was interpreted as making its last term negative when evaluating $S_{1}=0$ but positive when evaluating $S_{2}=1,$ when in fact the definition of S fixed neither option (and the rearrangement was independent of both).

## Definition for real and complex numbers

A sum of real numbers or complex numbers ${\textstyle \sum _{n=0}^{\infty }a_{n}}$  is absolutely convergent if the sum of the absolute values of the terms ${\textstyle \sum _{n=0}^{\infty }|a_{n}|}$  converges.
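As an illustration of this definition (not part of it), one can probe two concrete cases numerically: for $a_n = (-1)^{n+1}/n^2$ the partial sums of $|a_n|$ stabilize near $\pi^2/6$, while for $a_n = (-1)^{n+1}/n$ they grow like $\ln N$ without bound. A sketch, with the helper name `abs_partial_sum` invented here:

```python
import math

def abs_partial_sum(a, n_terms):
    """Partial sum of |a(n)| for n = 1..n_terms."""
    return sum(abs(a(n)) for n in range(1, n_terms + 1))

# sum |(-1)^(n+1)/n^2| = sum 1/n^2 -> pi^2/6: the series is absolutely convergent
s_sq = abs_partial_sum(lambda n: (-1) ** (n + 1) / n**2, 100_000)
print(abs(s_sq - math.pi**2 / 6) < 1e-4)  # True: partial sums of |a_n| stabilize

# sum |(-1)^(n+1)/n| is the harmonic series: it grows like ln N without bound,
# so the alternating harmonic series is NOT absolutely convergent
for N in (10**3, 10**4, 10**5):
    print(round(abs_partial_sum(lambda n: (-1) ** (n + 1) / n, N), 2))
```

A finite computation can only suggest, not prove, divergence; the unbounded growth of the harmonic sums is established analytically.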

## Sums of more general elements

The same definition can be used for series ${\textstyle \sum _{n=0}^{\infty }a_{n}}$  whose terms $a_{n}$  are not numbers but rather elements of an arbitrary abelian topological group. In that case, instead of using the absolute value, the definition requires the group to have a norm, which is a non-negative real-valued function ${\textstyle \|\cdot \|:G\to \mathbb {R} _{+}}$  on an abelian group $G$  (written additively, with identity element 0) such that:

1. The norm of the identity element of $G$  is zero: $\|0\|=0.$
2. For every $x\in G,$  $\|x\|=0$  implies $x=0.$
3. For every $x\in G,$  $\|-x\|=\|x\|.$
4. For every $x,y\in G,$  $\|x+y\|\leq \|x\|+\|y\|.$

In this case, the function $d(x,y)=\|x-y\|$  induces the structure of a metric space (and hence a topology) on $G.$

Then, a $G$ -valued series is absolutely convergent if ${\textstyle \sum _{n=0}^{\infty }\|a_{n}\|<\infty .}$

In particular, these statements apply using the norm $|x|$  (absolute value) in the space of real numbers or complex numbers.
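As an illustrative sketch (not part of the theory), the four axioms can be spot-checked numerically for the simplest case, the absolute value on the additive group of integers. The helper name `satisfies_norm_axioms` is invented for this illustration, and a finite check can only falsify the axioms, never prove them:

```python
def satisfies_norm_axioms(norm, sample):
    """Spot-check the four norm axioms on a finite sample of group elements.

    Illustration only: checks the axioms for an additively written group
    (here (Z, +)) on the given sample, e.g. norm = abs.
    """
    if norm(0) != 0:                                   # axiom 1: ||0|| = 0
        return False
    for x in sample:
        if norm(x) == 0 and x != 0:                    # axiom 2: ||x|| = 0 => x = 0
            return False
        if norm(-x) != norm(x):                        # axiom 3: ||-x|| = ||x||
            return False
        for y in sample:
            if norm(x + y) > norm(x) + norm(y):        # axiom 4: triangle inequality
                return False
    return True

print(satisfies_norm_axioms(abs, range(-10, 11)))  # True
```

For instance, the identity function fails axiom 3 on any sample containing a nonzero element, since it is negative on negative integers.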

### In topological vector spaces

If $X$  is a topological vector space (TVS) and ${\textstyle \left(x_{\alpha }\right)_{\alpha \in A}}$  is a (possibly uncountable) family in $X$  then this family is absolutely summable if

1. ${\textstyle \left(x_{\alpha }\right)_{\alpha \in A}}$  is summable in $X$  (that is, if the limit ${\textstyle \lim _{H\in {\mathcal {F}}(A)}x_{H}}$  of the net $\left(x_{H}\right)_{H\in {\mathcal {F}}(A)}$  converges in $X,$  where ${\mathcal {F}}(A)$  is the directed set of all finite subsets of $A$  directed by inclusion $\subseteq$  and ${\textstyle x_{H}:=\sum _{i\in H}x_{i}}$ ), and
2. for every continuous seminorm $p$  on $X,$  the family ${\textstyle \left(p\left(x_{\alpha }\right)\right)_{\alpha \in A}}$  is summable in $\mathbb {R} .$

If $X$  is a normable space and if ${\textstyle \left(x_{\alpha }\right)_{\alpha \in A}}$  is an absolutely summable family in $X,$  then necessarily all but a countable collection of $x_{\alpha }$ 's are 0.

Absolutely summable families play an important role in the theory of nuclear spaces.

## Relation to convergence

If $G$  is complete with respect to the metric $d,$  then every absolutely convergent series is convergent. The proof is the same as for complex-valued series: use the completeness to derive the Cauchy criterion for convergence—a series is convergent if and only if its tails can be made arbitrarily small in norm—and apply the triangle inequality.

In particular, for series with values in any Banach space, absolute convergence implies convergence. The converse is also true: if absolute convergence implies convergence in a normed space, then the space is a Banach space.

If a series is convergent but not absolutely convergent, it is called conditionally convergent. An example of a conditionally convergent series is the alternating harmonic series. Many standard tests for divergence and convergence, most notably the ratio test and the root test, actually demonstrate absolute convergence. This is because a power series is absolutely convergent on the interior of its disk of convergence.[a]

### Proof that any absolutely convergent series of complex numbers is convergent

Suppose that ${\textstyle \sum \left|a_{k}\right|,a_{k}\in \mathbb {C} }$  is convergent. Then equivalently, ${\textstyle \sum \left[\operatorname {Re} \left(a_{k}\right)^{2}+\operatorname {Im} \left(a_{k}\right)^{2}\right]^{1/2}}$  is convergent, which implies that ${\textstyle \sum \left|\operatorname {Re} \left(a_{k}\right)\right|}$  and ${\textstyle \sum \left|\operatorname {Im} \left(a_{k}\right)\right|}$  converge by termwise comparison of non-negative terms. It suffices to show that the convergence of these series implies the convergence of ${\textstyle \sum \operatorname {Re} \left(a_{k}\right)}$  and ${\textstyle \sum \operatorname {Im} \left(a_{k}\right),}$  for then, the convergence of ${\textstyle \sum a_{k}=\sum \operatorname {Re} \left(a_{k}\right)+i\sum \operatorname {Im} \left(a_{k}\right)}$  would follow, by the definition of the convergence of complex-valued series.

The preceding discussion shows that we need only prove that convergence of ${\textstyle \sum \left|a_{k}\right|,a_{k}\in \mathbb {R} }$  implies the convergence of ${\textstyle \sum a_{k}.}$

Let ${\textstyle \sum \left|a_{k}\right|,a_{k}\in \mathbb {R} }$  be convergent. Since $0\leq a_{k}+\left|a_{k}\right|\leq 2\left|a_{k}\right|,$  we have

$0\leq \sum _{k=1}^{n}(a_{k}+\left|a_{k}\right|)\leq \sum _{k=1}^{n}2\left|a_{k}\right|.$

Since ${\textstyle \sum 2\left|a_{k}\right|}$  is convergent, ${\textstyle s_{n}=\sum _{k=1}^{n}\left(a_{k}+\left|a_{k}\right|\right)}$  is a bounded monotonic sequence of partial sums, and ${\textstyle \sum \left(a_{k}+\left|a_{k}\right|\right)}$  must also converge. Noting that ${\textstyle \sum a_{k}=\sum \left(a_{k}+\left|a_{k}\right|\right)-\sum \left|a_{k}\right|}$  is the difference of convergent series, we conclude that it too is a convergent series, as desired.
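The decomposition used in this proof can be checked numerically for a concrete absolutely convergent series, here $a_k = (-1)^{k+1}/k^2$ (chosen only for illustration): every term $a_k + |a_k|$ lies in $[0, 2|a_k|]$, and $\sum a_k$ is recovered as the difference of the two convergent sums.

```python
# Numerical sketch of the proof's decomposition for a_k = (-1)^(k+1)/k^2.
N = 10_000
a = [(-1) ** (k + 1) / k**2 for k in range(1, N + 1)]

sum_abs = sum(abs(x) for x in a)
sum_shifted = sum(x + abs(x) for x in a)   # each term a_k + |a_k| lies in [0, 2|a_k|]

# 0 <= sum(a_k + |a_k|) <= 2 * sum |a_k|, as in the proof
print(0 <= sum_shifted <= 2 * sum_abs)     # True

# sum a_k = sum(a_k + |a_k|) - sum |a_k| (up to floating-point rounding)
print(abs(sum(a) - (sum_shifted - sum_abs)) < 1e-9)  # True
```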

#### Alternative proof using the Cauchy criterion and triangle inequality

By applying the Cauchy criterion for the convergence of a complex series, we can also prove this fact as a simple implication of the triangle inequality. By the Cauchy criterion, ${\textstyle \sum |a_{i}|}$  converges if and only if for any $\varepsilon >0,$  there exists $N$  such that ${\textstyle \left|\sum _{i=m}^{n}\left|a_{i}\right|\right|=\sum _{i=m}^{n}|a_{i}|<\varepsilon }$  for any $n>m\geq N.$  But the triangle inequality implies that ${\textstyle {\big |}\sum _{i=m}^{n}a_{i}{\big |}\leq \sum _{i=m}^{n}|a_{i}|,}$  so that ${\textstyle \left|\sum _{i=m}^{n}a_{i}\right|<\varepsilon }$  for any $n>m\geq N,$  which is exactly the Cauchy criterion for ${\textstyle \sum a_{i}.}$

### Proof that any absolutely convergent series in a Banach space is convergent

The above result generalizes readily to every Banach space $(X,\|\,\cdot \,\|).$  Let ${\textstyle \sum x_{n}}$  be an absolutely convergent series in $X.$  Since ${\textstyle \sum _{k=1}^{n}\|x_{k}\|}$  is a Cauchy sequence of real numbers, for any $\varepsilon >0$  there is a natural number $N$  such that for all $m>n\geq N$ :

$\left|\sum _{k=1}^{m}\|x_{k}\|-\sum _{k=1}^{n}\|x_{k}\|\right|=\sum _{k=n+1}^{m}\|x_{k}\|<\varepsilon .$

By the triangle inequality for the norm $\|\cdot \|,$  one immediately gets:

$\left\|\sum _{k=1}^{m}x_{k}-\sum _{k=1}^{n}x_{k}\right\|=\left\|\sum _{k=n+1}^{m}x_{k}\right\|\leq \sum _{k=n+1}^{m}\|x_{k}\|<\varepsilon ,$

which means that ${\textstyle \sum _{k=1}^{n}x_{k}}$  is a Cauchy sequence in $X,$  hence the series is convergent in $X.$ 

## Rearrangements and unconditional convergence

### Real and complex numbers

When a series of real or complex numbers is absolutely convergent, any rearrangement or reordering of that series' terms will still converge to the same value. This fact is one reason absolutely convergent series are useful: showing a series is absolutely convergent allows terms to be paired or rearranged in convenient ways without changing the sum's value.

The Riemann rearrangement theorem shows that the converse is also true: every real- or complex-valued convergent series whose terms cannot be reordered to give a different value (or to diverge) is absolutely convergent.
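This can be illustrated numerically with the alternating harmonic series from the introduction: in its usual order the partial sums approach $\ln 2$, while the rearrangement with two positive terms per negative term approaches $\tfrac{3}{2}\ln 2$. A sketch (finite partial sums only approximate the limits; the helper names are invented here):

```python
import math

def alternating_harmonic(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum((-1) ** (n + 1) / n for n in range(1, n_terms + 1))

def rearranged(n_blocks):
    """Partial sum of 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
    Block k contributes 1/(4k-3) + 1/(4k-1) - 1/(2k)."""
    return sum(1 / (4 * k - 3) + 1 / (4 * k - 1) - 1 / (2 * k)
               for k in range(1, n_blocks + 1))

# Same terms, different order, different limits:
print(abs(alternating_harmonic(200_000) - math.log(2)) < 1e-5)   # True
print(abs(rearranged(200_000) - 1.5 * math.log(2)) < 1e-5)       # True
```

Both series use exactly the same terms; only the order differs, which is precisely the behavior absolute convergence rules out.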

### Series with coefficients in a more general space

A series is said to converge unconditionally if every rearrangement of its terms converges to the same value. For series with values in a normed abelian group $G$ , as long as $G$  is complete, every series which converges absolutely also converges unconditionally.

Stated more formally:

Theorem —  Let $G$  be a normed abelian group. Suppose

$\sum _{i=1}^{\infty }a_{i}=A\in G,\quad \sum _{i=1}^{\infty }\|a_{i}\|<\infty .$

If $\sigma :\mathbb {N} \to \mathbb {N}$  is any permutation, then
$\sum _{i=1}^{\infty }a_{\sigma (i)}=A.$

For series with more general coefficients, the converse is more complicated. As stated in the previous section, for real-valued and complex-valued series, unconditional convergence always implies absolute convergence. However, in the more general case of a series with values in any normed abelian group $G$ , the converse does not always hold: there can exist series which are not absolutely convergent, yet unconditionally convergent.

For example, in the Banach space $\ell ^{2}$ , one series which is unconditionally convergent but not absolutely convergent is:

$\sum _{n=1}^{\infty }{\tfrac {1}{n}}e_{n},$

where $\{e_{n}\}_{n=1}^{\infty }$  is an orthonormal basis. A theorem of A. Dvoretzky and C. A. Rogers asserts that every infinite-dimensional Banach space has an unconditionally convergent series that is not absolutely convergent.
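The $\ell^2$ example can be made concrete without any library support: the norms $\|e_n/n\| = 1/n$ form the divergent harmonic series, yet by orthonormality the tails of the series itself have $\ell^2$ norm $\bigl(\sum_{n=m}^{M} 1/n^2\bigr)^{1/2}$, which tends to zero, so the partial sums are Cauchy. A numerical sketch of both facts (helper names invented here):

```python
import math

# In l^2 with orthonormal basis e_n, consider the series sum_n e_n / n.

def norm_sum(m, M):
    """Sum of the term norms ||e_n/n|| = 1/n: the harmonic series."""
    return sum(1 / n for n in range(m, M + 1))

def tail_norm(m, M):
    """l^2 norm of the tail sum_{n=m}^{M} e_n/n, using orthonormality:
    ||sum e_n/n||^2 = sum 1/n^2."""
    return math.sqrt(sum(1 / n**2 for n in range(m, M + 1)))

print(norm_sum(1, 10**6) > 14)        # True: sum of norms grows like ln M, unbounded
print(tail_norm(1000, 10**6) < 0.04)  # True: tails are tiny, partial sums are Cauchy
```

Since $\ell^2$ is complete, the small tails show the series converges (in fact unconditionally), even though the sum of the norms diverges.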

### Proof of the theorem

For any $\varepsilon >0,$  we can choose some $\kappa _{\varepsilon },\lambda _{\varepsilon }\in \mathbb {N} ,$  such that:

${\begin{aligned}{\text{ for all }}N>\kappa _{\varepsilon }&\quad \sum _{n=N}^{\infty }\|a_{n}\|<{\tfrac {\varepsilon }{2}}\\{\text{ for all }}N>\lambda _{\varepsilon }&\quad \left\|\sum _{n=1}^{N}a_{n}-A\right\|<{\tfrac {\varepsilon }{2}}\end{aligned}}$

Let

${\begin{aligned}N_{\varepsilon }&=\max \left\{\kappa _{\varepsilon },\lambda _{\varepsilon }\right\}\\M_{\sigma ,\varepsilon }&=\max \left\{\sigma ^{-1}\left(\left\{1,\ldots ,N_{\varepsilon }\right\}\right)\right\}\end{aligned}}$

Finally for any integer $N>M_{\sigma ,\varepsilon }$  let

${\begin{aligned}I_{\sigma ,\varepsilon }&=\left\{1,\ldots ,N\right\}\setminus \sigma ^{-1}\left(\left\{1,\ldots ,N_{\varepsilon }\right\}\right)\\S_{\sigma ,\varepsilon }&=\min \left\{\sigma (k)\ :\ k\in I_{\sigma ,\varepsilon }\right\}\\L_{\sigma ,\varepsilon }&=\max \left\{\sigma (k)\ :\ k\in I_{\sigma ,\varepsilon }\right\}\end{aligned}}$

Then

${\begin{aligned}\left\|\sum _{i=1}^{N}a_{\sigma (i)}-A\right\|&=\left\|\sum _{i\in \sigma ^{-1}\left(\{1,\dots ,N_{\varepsilon }\}\right)}a_{\sigma (i)}-A+\sum _{i\in I_{\sigma ,\varepsilon }}a_{\sigma (i)}\right\|\\&\leq \left\|\sum _{j=1}^{N_{\varepsilon }}a_{j}-A\right\|+\left\|\sum _{i\in I_{\sigma ,\varepsilon }}a_{\sigma (i)}\right\|\\&\leq \left\|\sum _{j=1}^{N_{\varepsilon }}a_{j}-A\right\|+\sum _{i\in I_{\sigma ,\varepsilon }}\left\|a_{\sigma (i)}\right\|\\&\leq \left\|\sum _{j=1}^{N_{\varepsilon }}a_{j}-A\right\|+\sum _{j=S_{\sigma ,\varepsilon }}^{L_{\sigma ,\varepsilon }}\left\|a_{j}\right\|\\&\leq \left\|\sum _{j=1}^{N_{\varepsilon }}a_{j}-A\right\|+\sum _{j=N_{\varepsilon }+1}^{\infty }\left\|a_{j}\right\|&&S_{\sigma ,\varepsilon }\geq N_{\varepsilon }+1\\&<\varepsilon \end{aligned}}$

This shows that

${\text{ for all }}\varepsilon >0,{\text{ there exists }}M_{\sigma ,\varepsilon },{\text{ for all }}N>M_{\sigma ,\varepsilon }\quad \left\|\sum _{i=1}^{N}a_{\sigma (i)}-A\right\|<\varepsilon ,$

that is:
$\sum _{i=1}^{\infty }a_{\sigma (i)}=A.$

## Products of series

The Cauchy product of two series converges to the product of the sums if at least one of the series converges absolutely. That is, suppose that

$\sum _{n=0}^{\infty }a_{n}=A\quad {\text{ and }}\quad \sum _{n=0}^{\infty }b_{n}=B.$

The Cauchy product is defined as the sum of terms $c_{n}$  where:

$c_{n}=\sum _{k=0}^{n}a_{k}b_{n-k}.$

If either the $a_{n}$  or $b_{n}$  sum converges absolutely then

$\sum _{n=0}^{\infty }c_{n}=AB.$
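As an illustration, take two absolutely convergent geometric series $a_n = x^n$ and $b_n = y^n$ with $|x|, |y| < 1$, so that $A = 1/(1-x)$ and $B = 1/(1-y)$; the partial sums of the Cauchy product terms then approach $AB$. A numerical sketch (truncated at finitely many terms):

```python
x, y = 0.5, -0.3
N = 60  # geometric decay makes the truncation error negligible

a = [x**n for n in range(N)]
b = [y**n for n in range(N)]

# Cauchy product terms c_n = sum_{k=0}^{n} a_k * b_{n-k}
c = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

A = 1 / (1 - x)   # sum of the geometric series a_n
B = 1 / (1 - y)   # sum of the geometric series b_n
print(abs(sum(c) - A * B) < 1e-9)  # True: the Cauchy product sums to AB
```

Here both factor series converge absolutely, which is more than the theorem requires (one absolutely convergent factor suffices).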

## Absolute convergence over sets

A generalization of the absolute convergence of a series is the absolute convergence of a sum of a function over a set. We can first consider a countable set $X$  and a function $f:X\to \mathbb {R} .$  We will give a definition below of the sum of $f$  over $X,$  written as ${\textstyle \sum _{x\in X}f(x).}$

First note that because no particular enumeration (or "indexing") of $X$  has yet been specified, the series ${\textstyle \sum _{x\in X}f(x)}$  cannot be understood by the more basic definition of a series. In fact, for certain examples of $X$  and $f,$  the sum of $f$  over $X$  may not be defined at all, since some indexing may produce a conditionally convergent series.

Therefore we define ${\textstyle \sum _{x\in X}f(x)}$  only in the case where there exists some bijection $g:\mathbb {Z} ^{+}\to X$  such that ${\textstyle \sum _{n=1}^{\infty }f(g(n))}$  is absolutely convergent. Note that here, "absolutely convergent" uses the more basic definition, applied to an indexed series. In this case, the value of the sum of $f$  over $X$  is defined by

$\sum _{x\in X}f(x):=\sum _{n=1}^{\infty }f(g(n))$

Note that every rearrangement of the series corresponds to a different choice of the bijection $g.$  Because the series is absolutely convergent, all of these sums have the same value, and so the sum of $f$  over $X$  is well-defined.

Even more generally we may define the sum of $f$  over $X$  when $X$  is uncountable. But first we define what it means for the sum to be convergent.

Let $X$  be any set, countable or uncountable, and $f:X\to \mathbb {R}$  a function. We say that the sum of $f$  over $X$  converges absolutely if

$\sup \left\{\sum _{x\in A}|f(x)|:A\subseteq X,A{\text{ is finite }}\right\}<\infty .$

There is a theorem which states that, if the sum of $f$  over $X$  is absolutely convergent, then $f$  takes non-zero values on a set that is at most countable. Therefore, the following is a consistent definition of the sum of $f$  over $X$  when the sum is absolutely convergent.

$\sum _{x\in X}f(x):=\sum _{x\in X:f(x)\neq 0}f(x).$

Note that the final series uses the definition of a series over a countable set.

Some authors define an iterated sum ${\textstyle \sum _{m=1}^{\infty }\sum _{n=1}^{\infty }a_{m,n}}$  to be absolutely convergent if the iterated series of absolute values satisfies ${\textstyle \sum _{m=1}^{\infty }\sum _{n=1}^{\infty }|a_{m,n}|<\infty .}$  This is in fact equivalent to the absolute convergence of ${\textstyle \sum _{(m,n)\in \mathbb {N} \times \mathbb {N} }a_{m,n}.}$  That is to say, if the sum ${\textstyle \sum _{(m,n)\in \mathbb {N} \times \mathbb {N} }a_{m,n}}$  converges absolutely, as defined above, then the iterated sum ${\textstyle \sum _{m=1}^{\infty }\sum _{n=1}^{\infty }a_{m,n}}$  converges absolutely, and vice versa.
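For a concrete absolutely convergent double family, take $a_{m,n} = (1/2)^m (1/3)^n$ for $m, n \geq 1$: the iterated sum of absolute values is finite, and both the iterated sum and a sum over an arbitrary enumeration of $\mathbb{N}\times\mathbb{N}$ give the same value $\bigl(\sum_m 2^{-m}\bigr)\bigl(\sum_n 3^{-n}\bigr) = 1 \cdot \tfrac12 = \tfrac12$. A sketch, truncating at finitely many indices:

```python
M = 60  # truncation; the neglected geometric tail is far below the tolerance used

# Iterated sum: sum over m of (sum over n of a_{m,n})
iterated = sum(sum(0.5**m * (1 / 3)**n for n in range(1, M + 1))
               for m in range(1, M + 1))

# Same terms enumerated in a different (diagonal-like) order over N x N
pairs = sorted(((m, n) for m in range(1, M + 1) for n in range(1, M + 1)),
               key=lambda p: (p[0] + p[1], p[1]))
unordered = sum(0.5**m * (1 / 3)**n for m, n in pairs)

print(abs(iterated - 0.5) < 1e-12)        # True: iterated sum equals 1/2
print(abs(iterated - unordered) < 1e-12)  # True: order of summation is irrelevant
```

Absolute convergence is what licenses the reordering; for a conditionally convergent double family the two enumerations could disagree.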

## Absolute convergence of integrals

The integral ${\textstyle \int _{A}f(x)\,dx}$  of a real or complex-valued function is said to converge absolutely if ${\textstyle \int _{A}\left|f(x)\right|\,dx<\infty .}$  One also says that $f$  is absolutely integrable. The issue of absolute integrability is intricate and depends on whether the Riemann, Lebesgue, or Kurzweil-Henstock (gauge) integral is considered; for the Riemann integral, it also depends on whether we only consider integrability in its proper sense ($f$  and $A$  both bounded), or permit the more general case of improper integrals.

As a standard property of the Riemann integral, when $A=[a,b]$  is a bounded interval, every continuous function is bounded and (Riemann) integrable, and since $f$  continuous implies $|f|$  continuous, every continuous function is absolutely integrable. In fact, since $g\circ f$  is Riemann integrable on $[a,b]$  if $f$  is (properly) integrable and $g$  is continuous, it follows that $|f|=|\cdot |\circ f$  is properly Riemann integrable if $f$  is. However, this implication does not hold in the case of improper integrals. For instance, the function ${\textstyle f:[1,\infty )\to \mathbb {R} :x\mapsto {\frac {\sin x}{x}}}$  is improperly Riemann integrable on its unbounded domain, but it is not absolutely integrable:

$\int _{1}^{\infty }{\frac {\sin x}{x}}\,dx={\frac {1}{2}}{\bigl [}\pi -2\,\mathrm {Si} (1){\bigr ]}\approx 0.62,{\text{ but }}\int _{1}^{\infty }\left|{\frac {\sin x}{x}}\right|\,dx=\infty .$
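Both behaviors can be observed numerically: the signed integral over $[1, X]$ settles near $0.62$ as $X$ grows, while the integral of $|\sin x / x|$ grows roughly like $(2/\pi)\ln X$ without bound. A sketch using composite Simpson quadrature (the helper name `simpson` and the chosen truncations are for illustration only; finite quadrature can suggest but not prove divergence):

```python
import math

def simpson(f, a, b, n=200_000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

f = lambda x: math.sin(x) / x
g = lambda x: abs(math.sin(x) / x)

# The signed improper integral converges, to about (pi - 2 Si(1)) / 2 ~ 0.62 ...
print(abs(simpson(f, 1, 1000) - 0.62) < 0.02)        # True

# ... but the integral of |sin x / x| keeps growing as the upper limit increases
print(simpson(g, 1, 1000) - simpson(g, 1, 100) > 1)  # True
```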

Indeed, more generally, given any series ${\textstyle \sum _{n=0}^{\infty }a_{n}}$  one can consider the associated step function $f_{a}:[0,\infty )\to \mathbb {R}$  defined by $f_{a}([n,n+1))=a_{n}.$  Then ${\textstyle \int _{0}^{\infty }f_{a}\,dx}$  converges absolutely, converges conditionally or diverges according to the corresponding behavior of ${\textstyle \sum _{n=0}^{\infty }a_{n}.}$

The situation is different for the Lebesgue integral, which does not handle bounded and unbounded domains of integration separately (see below). The fact that the integral of $|f|$  is unbounded in the examples above implies that $f$  is also not integrable in the Lebesgue sense. In fact, in the Lebesgue theory of integration, given that $f$  is measurable, $f$  is (Lebesgue) integrable if and only if $|f|$  is (Lebesgue) integrable. However, the hypothesis that $f$  is measurable is crucial; it is not generally true that absolutely integrable functions on $[a,b]$  are integrable (simply because they may fail to be measurable): let $S\subset [a,b]$  be a nonmeasurable subset and consider $f=\chi _{S}-1/2,$  where $\chi _{S}$  is the characteristic function of $S.$  Then $f$  is not Lebesgue measurable and thus not integrable, but $|f|\equiv 1/2$  is a constant function and clearly integrable.

On the other hand, a function $f$  may be Kurzweil-Henstock integrable (gauge integrable) while $|f|$  is not. This includes the case of improperly Riemann integrable functions.

In a general sense, on any measure space $A,$  the Lebesgue integral of a real-valued function is defined in terms of its positive and negative parts, so the facts:

1. $f$  integrable implies $|f|$  integrable
2. $f$  measurable, $|f|$  integrable implies $f$  integrable

are essentially built into the definition of the Lebesgue integral. In particular, applying the theory to the counting measure on a set $S,$  one recovers the notion of unordered summation of series developed by Moore–Smith using (what are now called) nets. When $S=\mathbb {N}$  is the set of natural numbers, Lebesgue integrability, unordered summability and absolute convergence all coincide.

Finally, all of the above holds for integrals with values in a Banach space. The definition of a Banach-valued Riemann integral is an evident modification of the usual one. For the Lebesgue integral one needs to circumvent the decomposition into positive and negative parts with Daniell's more functional analytic approach, obtaining the Bochner integral.