# Information field theory

## Summary

Information field theory (IFT) is a Bayesian statistical field theory for signal reconstruction, cosmography, and other related areas.[1][2] IFT summarizes the information available on a physical field using Bayesian probabilities. It uses computational techniques developed for quantum field theory and statistical field theory to handle the infinite number of degrees of freedom of a field and to derive algorithms for the calculation of field expectation values. For example, the posterior expectation value of a field generated by a known Gaussian process and measured by a linear device with known Gaussian noise statistics is given by a generalized Wiener filter applied to the measured data. IFT extends such known filter formulas to situations with nonlinear physics, nonlinear devices, non-Gaussian field or noise statistics, dependence of the noise statistics on the field values, and partially unknown measurement parameters. For this it uses Feynman diagrams, renormalisation flow equations, and other methods from mathematical physics.[3]

## Motivation

Fields play an important role in science, technology, and economy. They describe the spatial variations of a quantity, like the air temperature, as a function of position. Knowing the configuration of a field can be of large value. Measurements of fields, however, can never provide the precise field configuration with certainty. Physical fields have an infinite number of degrees of freedom, but the data generated by any measurement device is always finite, providing only a finite number of constraints on the field. Thus, an unambiguous deduction of such a field from measurement data alone is impossible and only probabilistic inference remains as a means to make statements about the field. Fortunately, physical fields exhibit correlations and often follow known physical laws. Such information is best fused into the field inference in order to overcome the mismatch of field degrees of freedom to measurement points. To handle this, an information theory for fields is needed, and that is what information field theory is.

## Concepts

### Bayesian inference

Let ${\displaystyle s(x)}$ be the value of a field at a location ${\displaystyle x\in \Omega }$ in a space ${\displaystyle \Omega }$. The prior knowledge about the unknown signal field ${\displaystyle s}$ is encoded in the probability distribution ${\displaystyle {\mathcal {P}}(s)}$. The data ${\displaystyle d}$ provide additional information on ${\displaystyle s}$ via the likelihood ${\displaystyle {\mathcal {P}}(d|s)}$, which gets incorporated into the posterior probability

${\displaystyle {\mathcal {P}}(s|d)={\frac {{\mathcal {P}}(d|s)\,{\mathcal {P}}(s)}{{\mathcal {P}}(d)}}}$

according to Bayes' theorem.

### Information Hamiltonian

In IFT, Bayes' theorem is usually rewritten in the language of a statistical field theory,

${\displaystyle {\mathcal {P}}(s|d)={\frac {{\mathcal {P}}(d,s)}{{\mathcal {P}}(d)}}\equiv {\frac {e^{-{\mathcal {H}}(d,s)}}{{\mathcal {Z}}(d)}},}$

with the information Hamiltonian defined as
${\displaystyle {\mathcal {H}}(d,s)\equiv -\ln {\mathcal {P}}(d,s)=-\ln {\mathcal {P}}(d|s)-\ln {\mathcal {P}}(s)\equiv {\mathcal {H}}(d|s)+{\mathcal {H}}(s),}$

the negative logarithm of the joint probability of data and signal and with the partition function being
${\displaystyle {\mathcal {Z}}(d)\equiv {\mathcal {P}}(d)=\int {\mathcal {D}}s\,{\mathcal {P}}(d,s).}$

This reformulation of Bayes' theorem permits the usage of methods of mathematical physics developed for the treatment of statistical field theories and quantum field theories.

### Fields

As fields have an infinite number of degrees of freedom, the definition of probabilities over spaces of field configurations has subtleties. Identifying physical fields with elements of function spaces poses the problem that no Lebesgue measure is defined over the latter, and therefore probability densities cannot be defined there. However, physical fields have much more regularity than most elements of function spaces, as they are continuous and smooth at most of their locations. Therefore, less general but sufficiently flexible constructions can be used to handle the infinite number of degrees of freedom of a field.

A pragmatic approach is to regard the field as discretized in terms of pixels. Each pixel carries a single field value that is assumed to be constant within the pixel volume. All statements about the continuous field then have to be cast into its pixel representation. This way, one deals with finite dimensional field spaces, over which probability densities are well defined.

In order for this description to be a proper field theory, it is further required that the pixel resolution ${\displaystyle \Delta x}$  can always be refined, while expectation values of the discretized field ${\displaystyle s_{\Delta x}}$  converge to finite values:

${\displaystyle \langle f(s)\rangle _{(s|d)}\equiv \lim _{\Delta x\rightarrow 0}\int ds_{\Delta x}f(s_{\Delta x})\,{\mathcal {P}}(s_{\Delta x}|d).}$
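
A minimal numerical sketch of this requirement, under an assumed squared-exponential prior correlation on the unit interval (all parameters are illustrative, not from the references), shows a discretized expectation value converging under pixel refinement:

```python
# A minimal sketch (assumed setup): a field on [0, 1] with a
# squared-exponential prior correlation S(x, y) is pixelized; the prior
# expectation of f(s) = (integral of s dx)^2, which equals the double
# integral of S, converges as the pixels are refined.
import numpy as np

l = 0.1  # assumed prior correlation length

for n_pix in [4, 16, 64, 256]:
    dx = 1.0 / n_pix
    x = (np.arange(n_pix) + 0.5) * dx                      # pixel centers
    S = np.exp(-(x[:, None] - x[None, :])**2 / (2 * l**2))
    # <f(s)> = sum_ij S_ij dx^2, a second moment of the zero-mean Gaussian
    print(n_pix, S.sum() * dx**2)
```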

### Path integrals

If this limit exists, one can talk about the field configuration space integral or path integral

${\displaystyle \langle f(s)\rangle _{(s|d)}\equiv \int {\mathcal {D}}s\,f(s)\,{\mathcal {P}}(s|d),}$

irrespective of the resolution at which it might be evaluated numerically.

### Gaussian prior

The simplest prior for a field is that of a zero mean Gaussian probability distribution

${\displaystyle {\mathcal {P}}(s)={\mathcal {G}}(s,S)\equiv {\frac {1}{\sqrt {|2\pi S|}}}e^{-{\frac {1}{2}}\,s^{\dagger }S^{-1}\,s}.}$

The determinant in the denominator might be ill-defined in the continuum limit ${\displaystyle \Delta x\rightarrow 0}$; however, all that is necessary for IFT to be consistent is that this determinant can be estimated for any finite resolution field representation with ${\displaystyle \Delta x>0}$ and that this permits the calculation of convergent expectation values.

A Gaussian probability distribution requires the specification of the field two point correlation function ${\displaystyle S\equiv \langle s\,s^{\dagger }\rangle _{(s)}}$  with coefficients

${\displaystyle S_{xy}\equiv \langle s(x)\,{\overline {s(y)}}\rangle _{(s)}}$

and a scalar product for continuous fields
${\displaystyle a^{\dagger }b\equiv \int _{\Omega }dx\,{\overline {a(x)}}\,b(x),}$

with respect to which the inverse signal field covariance ${\displaystyle S^{-1}}$  is constructed, i.e. ${\displaystyle (S^{-1}S)_{xy}\equiv \int _{\Omega }dz\,(S^{-1})_{xz}S_{zy}=\mathbb {1} _{xy}\equiv \delta (x-y).}$

The corresponding prior information Hamiltonian reads

${\displaystyle {\mathcal {H}}(s)=-\ln {\mathcal {G}}(s,S)={\frac {1}{2}}\,s^{\dagger }S^{-1}\,s+{\frac {1}{2}}\,\ln |2\pi S|.}$
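
As a hedged consistency check of this expression (grid, kernel, and sample are assumptions of the example), the prior Hamiltonian of a discretized field can be compared with the negative log-density of the corresponding finite-dimensional Gaussian:

```python
# Hedged check (assumed grid and kernel): the prior information Hamiltonian
# H(s) = 1/2 s^T S^{-1} s + 1/2 ln|2 pi S| of a discretized field equals
# the negative log-density of the corresponding multivariate Gaussian.
import numpy as np
from scipy.stats import multivariate_normal

n = 16
x = np.linspace(0.0, 1.0, n)
S = np.exp(-(x[:, None] - x[None, :])**2 / 0.02) + 1e-8 * np.eye(n)

rng = np.random.default_rng(0)
s = np.linalg.cholesky(S) @ rng.standard_normal(n)     # a prior sample

sign, logdet = np.linalg.slogdet(2 * np.pi * S)        # ln|2 pi S|
H = 0.5 * s @ np.linalg.solve(S, s) + 0.5 * logdet     # prior Hamiltonian

assert np.isclose(H, -multivariate_normal(np.zeros(n), S).logpdf(s))
```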

### Measurement equation

The measurement data ${\displaystyle d}$ were generated with the likelihood ${\displaystyle {\mathcal {P}}(d|s)}$. If the instrument was linear, a measurement equation of the form

${\displaystyle d=R\,s+n}$

can be given, in which ${\displaystyle R}$  is the instrument response, which describes how the data on average reacts to the signal, and ${\displaystyle n}$  is the noise, simply the difference between data ${\displaystyle d}$  and linear signal response ${\displaystyle R\,s}$ . It is essential to note that the response translates the infinite dimensional signal vector into the finite dimensional data space. In components this reads ${\displaystyle d_{i}=\int _{\Omega }dx\,R_{ix}\,s_{x}+n_{i},}$

where a vector component notation was also introduced for signal and data vectors.

If the noise follows signal-independent, zero-mean Gaussian statistics with covariance ${\displaystyle N}$, ${\displaystyle {\mathcal {P}}(n|s)={\mathcal {G}}(n,N),}$  then the likelihood is Gaussian as well,

${\displaystyle {\mathcal {P}}(d|s)={\mathcal {G}}(d-R\,s,N),}$

and the likelihood information Hamiltonian is
${\displaystyle {\mathcal {H}}(d|s)=-\ln {\mathcal {G}}(d-R\,s,N)={\frac {1}{2}}\,(d-R\,s)^{\dagger }N^{-1}\,(d-R\,s)+{\frac {1}{2}}\,\ln |2\pi N|.}$

A linear measurement of a Gaussian signal, subject to Gaussian, signal-independent noise, leads to a free IFT.
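
As a small consistency sketch (grid size, masking response, and noise level are assumed for illustration), the likelihood Hamiltonian above can be checked against the negative log of the Gaussian likelihood:

```python
# Hedged sketch: for a linear measurement d = R s + n with Gaussian noise,
# H(d|s) = 1/2 (d - R s)^T N^{-1} (d - R s) + 1/2 ln|2 pi N| = -ln P(d|s).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n_pix, n_data, sigma_n = 64, 16, 0.1

s = rng.standard_normal(n_pix)                 # any signal realization
R = np.eye(n_pix)[rng.choice(n_pix, n_data, replace=False)]  # masking response
N = sigma_n**2 * np.eye(n_data)                # noise covariance
d = R @ s + sigma_n * rng.standard_normal(n_data)

resid = d - R @ s                              # the noise n = d - R s
sign, logdet = np.linalg.slogdet(2 * np.pi * N)
H_lik = 0.5 * resid @ np.linalg.solve(N, resid) + 0.5 * logdet

assert np.isclose(H_lik, -multivariate_normal(R @ s, N).logpdf(d))
```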

## Free theory

### Free Hamiltonian

The joint information Hamiltonian of the Gaussian scenario described above is

{\displaystyle {\begin{aligned}{\mathcal {H}}(d,s)&={\mathcal {H}}(d|s)+{\mathcal {H}}(s)\\&{\widehat {=}}{\frac {1}{2}}\,(d-R\,s)^{\dagger }N^{-1}\,(d-R\,s)+{\frac {1}{2}}\,s^{\dagger }S^{-1}\,s\\&{\widehat {=}}{\frac {1}{2}}\,\left[s^{\dagger }\underbrace {(S^{-1}+R^{\dagger }N^{-1}R)} _{D^{-1}}\,s-s^{\dagger }\underbrace {R^{\dagger }N^{-1}d} _{j}-\underbrace {d^{\dagger }N^{-1}R} _{j^{\dagger }}\,s\right]\\&\equiv {\frac {1}{2}}\,\left[s^{\dagger }D^{-1}s-s^{\dagger }j-j^{\dagger }s\right]\\&={\frac {1}{2}}\,\left[s^{\dagger }D^{-1}s-s^{\dagger }D^{-1}\underbrace {D\,j} _{m}-\underbrace {j^{\dagger }D} _{m^{\dagger }}\,D^{-1}s\right]\\&{\widehat {=}}{\frac {1}{2}}\,(s-m)^{\dagger }D^{-1}(s-m),\end{aligned}}}

where ${\displaystyle {\widehat {=}}}$  denotes equality up to irrelevant constants, which, in this case, means expressions that are independent of ${\displaystyle s}$. From this it is clear that the posterior must be a Gaussian with mean ${\displaystyle m}$  and variance ${\displaystyle D}$,
${\displaystyle {\mathcal {P}}(s|d)\propto e^{-{\mathcal {H}}(d,s)}\propto e^{-{\frac {1}{2}}\,(s-m)^{\dagger }D^{-1}(s-m)}\propto {\mathcal {G}}(s-m,D)}$

where equality between the right and left hand sides holds as both distributions are normalized, ${\displaystyle \int {\mathcal {D}}s\,{\mathcal {P}}(s|d)=1=\int {\mathcal {D}}s\,{\mathcal {G}}(s-m,D)}$ .

### Generalized Wiener filter

The posterior mean

${\displaystyle m=D\,j=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}R^{\dagger }N^{-1}d}$

is also known as the generalized Wiener filter solution and the uncertainty covariance
${\displaystyle D=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}}$

as the Wiener variance.

In IFT, ${\displaystyle j=R^{\dagger }N^{-1}d}$  is called the information source, as it acts as a source term to excite the field (knowledge), and ${\displaystyle D}$  the information propagator, as it propagates information from one location to another in

${\displaystyle m_{x}=\int _{\Omega }dy\,D_{xy}j_{y}.}$
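
A minimal numerical sketch (with an assumed 1D grid, masking response, and noise level) computes the information source, the information propagator, and the Wiener filter mean for simulated data:

```python
# Minimal sketch (assumed setup): information source j, information
# propagator D, and generalized Wiener filter mean m = D j for a
# simulated, partially observed correlated field.
import numpy as np

rng = np.random.default_rng(2)
n_pix, sigma_n = 128, 0.2
x = (np.arange(n_pix) + 0.5) / n_pix
S = np.exp(-(x[:, None] - x[None, :])**2 / 0.005) + 1e-8 * np.eye(n_pix)
s_true = np.linalg.cholesky(S) @ rng.standard_normal(n_pix)

mask = rng.random(n_pix) < 0.3                       # observe ~30% of pixels
R = np.eye(n_pix)[mask]                              # response: masking operator
N_inv = np.eye(mask.sum()) / sigma_n**2
d = R @ s_true + sigma_n * rng.standard_normal(mask.sum())

j = R.T @ N_inv @ d                                    # information source
D = np.linalg.inv(np.linalg.inv(S) + R.T @ N_inv @ R)  # information propagator
m = D @ j                                              # generalized Wiener filter

print("rms reconstruction error:", np.sqrt(np.mean((m - s_true)**2)))
```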

## Interacting theory

### Interacting Hamiltonian

If any of the assumptions that lead to the free theory is violated, IFT becomes an interacting theory, with terms that are of higher than quadratic order in the signal field. This happens when the signal or the noise do not follow Gaussian statistics, when the response is non-linear, when the noise depends on the signal, or when the response or the covariances are uncertain.

In this case, the information Hamiltonian might be expandable in a Taylor-Fréchet series,

${\displaystyle {\mathcal {H}}(d,\,s)=\underbrace {{\frac {1}{2}}s^{\dagger }D^{-1}s-j^{\dagger }s+{\mathcal {H}}_{0}} _{={\mathcal {H}}_{\text{free}}(d,\,s)}+\underbrace {\sum _{n=3}^{\infty }{\frac {1}{n!}}\Lambda _{x_{1}...x_{n}}^{(n)}s_{x_{1}}...s_{x_{n}}} _{={\mathcal {H}}_{\text{int}}(d,\,s)},}$

where ${\displaystyle {\mathcal {H}}_{\text{free}}(d,\,s)}$  is the free Hamiltonian, which alone would lead to a Gaussian posterior, and ${\displaystyle {\mathcal {H}}_{\text{int}}(d,\,s)}$  is the interacting Hamiltonian, which encodes non-Gaussian corrections. The first and second order Taylor coefficients are often identified with the (negative) information source ${\displaystyle -j}$  and information propagator ${\displaystyle D}$, respectively. The higher coefficients ${\displaystyle \Lambda _{x_{1}...x_{n}}^{(n)}}$  are associated with non-linear self-interactions.

### Classical field

The classical field ${\displaystyle s_{\text{cl}}}$  minimizes the information Hamiltonian,

${\displaystyle \left.{\frac {\partial {\mathcal {H}}(d,s)}{\partial s}}\right|_{s=s_{\text{cl}}}=0,}$

and therefore maximizes the posterior:
${\displaystyle \left.{\frac {\partial {\mathcal {P}}(s|d)}{\partial s}}\right|_{s=s_{\text{cl}}}=\left.{\frac {\partial }{\partial s}}\,{\frac {e^{-{\mathcal {H}}(d,s)}}{{\mathcal {Z}}(d)}}\right|_{s=s_{\text{cl}}}=-{\mathcal {P}}(d,s)\,\underbrace {\left.{\frac {\partial {\mathcal {H}}(d,s)}{\partial s}}\right|_{s=s_{\text{cl}}}} _{=0}=0}$

The classical field ${\displaystyle s_{\text{cl}}}$  is therefore the maximum a posteriori estimator of the field inference problem.
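
In the free theory this statement can be verified numerically: minimizing the free information Hamiltonian recovers the generalized Wiener filter mean. The following sketch does so under assumed parameters (exponential prior kernel, every fourth pixel observed):

```python
# Hedged check: in the free theory the classical (MAP) field found by
# numerically minimizing H(d, s) coincides with the Wiener mean D j.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 32
x = (np.arange(n) + 0.5) / n
S = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)    # assumed prior kernel
S_inv = np.linalg.inv(S)
R = np.eye(n)[::4]                                    # observe every 4th pixel
N_inv = np.eye(R.shape[0]) / 0.1**2
d = R @ (np.linalg.cholesky(S) @ rng.standard_normal(n)) \
    + 0.1 * rng.standard_normal(R.shape[0])

def H(s):      # free information Hamiltonian up to s-independent constants
    r = d - R @ s
    return 0.5 * r @ N_inv @ r + 0.5 * s @ S_inv @ s

grad = lambda s: -R.T @ N_inv @ (d - R @ s) + S_inv @ s
s_cl = minimize(H, np.zeros(n), jac=grad).x           # the classical field

m = np.linalg.solve(S_inv + R.T @ N_inv @ R, R.T @ N_inv @ d)
print(np.allclose(s_cl, m, atol=1e-4))                # True: MAP = Wiener mean
```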

### Critical filter

The Wiener filter problem requires the two point correlation ${\displaystyle S\equiv \langle s\,s^{\dagger }\rangle _{(s)}}$  of a field to be known. If it is unknown, it has to be inferred along with the field itself. This requires the specification of a hyperprior ${\displaystyle {\mathcal {P}}(S)}$. Often, statistical homogeneity (translation invariance) can be assumed, implying that ${\displaystyle S}$  is diagonal in Fourier space (for ${\displaystyle \Omega =\mathbb {R} ^{u}}$  being a ${\displaystyle u}$-dimensional Cartesian space). In this case, only the Fourier space power spectrum ${\displaystyle P_{s}({\vec {k}})}$  needs to be inferred. Given a further assumption of statistical isotropy, this spectrum depends only on the length ${\displaystyle k=|{\vec {k}}|}$  of the Fourier vector ${\displaystyle {\vec {k}}}$, and only a one-dimensional spectrum ${\displaystyle P_{s}(k)}$  has to be determined. The prior field covariance then reads, in Fourier space coordinates, ${\displaystyle S_{{\vec {k}}{\vec {q}}}=(2\pi )^{u}\delta ({\vec {k}}-{\vec {q}})\,P_{s}(k)}$.

If the prior on ${\displaystyle P_{s}(k)}$  is flat, the joint probability of data and spectrum is

{\displaystyle {\begin{aligned}{\mathcal {P}}(d,P_{s})&=\int {\mathcal {D}}s\,{\mathcal {P}}(d,s,P_{s})\\&=\int {\mathcal {D}}s\,{\mathcal {P}}(d|s,P_{s})\,{\mathcal {P}}(s|P_{s})\,{\mathcal {P}}(P_{s})\\&\propto \int {\mathcal {D}}s\,{\mathcal {G}}(d-Rs,N)\,{\mathcal {G}}(s,S)\\&\propto {\frac {1}{|S|^{\frac {1}{2}}}}\int {\mathcal {D}}s\,\exp \left[-{\frac {1}{2}}\left(s^{\dagger }D^{-1}s-j^{\dagger }s-s^{\dagger }j\right)\right]\\&\propto {\frac {|D|^{\frac {1}{2}}}{|S|^{\frac {1}{2}}}}\exp \left[{\frac {1}{2}}j^{\dagger }D\,j\right],\end{aligned}}}

where the notation of the information propagator ${\displaystyle D=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}}$  and source ${\displaystyle j=R^{\dagger }N^{-1}d}$  of the Wiener filter problem was used again. The corresponding information Hamiltonian is
${\displaystyle {\mathcal {H}}(d,P_{s})\;{\widehat {=}}\;{\frac {1}{2}}\left[\ln |S\,D^{-1}|-j^{\dagger }D\,j\right]={\frac {1}{2}}\mathrm {Tr} \left[\ln \left(S\,D^{-1}\right)-j\,j^{\dagger }D\right],}$

where ${\displaystyle {\widehat {=}}}$  denotes equality up to irrelevant constants (here: constant with respect to ${\displaystyle P_{s}}$ ). Minimizing this with respect to ${\displaystyle P_{s}}$ , in order to get its maximum a posteriori power spectrum estimator, yields
{\displaystyle {\begin{aligned}{\frac {\partial {\mathcal {H}}(d,P_{s})}{\partial P_{s}(k)}}&={\frac {1}{2}}\mathrm {Tr} \left[D\,S^{-1}\,{\frac {\partial \left(S\,D^{-1}\right)}{\partial P_{s}(k)}}-j\,j^{\dagger }{\frac {\partial D}{\partial P_{s}(k)}}\right]\\&={\frac {1}{2}}\mathrm {Tr} \left[D\,S^{-1}\,{\frac {\partial \left(1+S\,R^{\dagger }N^{-1}R\right)}{\partial P_{s}(k)}}+j\,j^{\dagger }D\,{\frac {\partial D^{-1}}{\partial P_{s}(k)}}\,D\right]\\&={\frac {1}{2}}\mathrm {Tr} \left[D\,S^{-1}\,{\frac {\partial S}{\partial P_{s}(k)}}R^{\dagger }N^{-1}R+m\,m^{\dagger }\,{\frac {\partial S^{-1}}{\partial P_{s}(k)}}\right]\\&={\frac {1}{2}}\mathrm {Tr} \left[\left(R^{\dagger }N^{-1}R\,D\,S^{-1}-S^{-1}m\,m^{\dagger }\,S^{-1}\right)\,{\frac {\partial S}{\partial P_{s}(k)}}\right]\\&={\frac {1}{2}}\int \left({\frac {dq}{2\pi }}\right)^{u}\int \left({\frac {dq'}{2\pi }}\right)^{u}\left(\left(D^{-1}-S^{-1}\right)\,D\,S^{-1}-S^{-1}m\,m^{\dagger }\,S^{-1}\right)_{{\vec {q}}{\vec {q}}'}\,{\frac {\partial (2\pi )^{u}\delta ({\vec {q}}-{\vec {q}}')\,P_{s}(q)}{\partial P_{s}(k)}}\\&={\frac {1}{2}}\int \left({\frac {dq}{2\pi }}\right)^{u}\left(S^{-1}-S^{-1}D\,S^{-1}-S^{-1}m\,m^{\dagger }\,S^{-1}\right)_{{\vec {q}}{\vec {q}}}\,\delta (k-q)\\&={\frac {1}{2}}\mathrm {Tr} \left\{S^{-1}\left[S-\left(D+m\,m^{\dagger }\right)\right]\,S^{-1}\mathbb {P} _{k}\right\}\\&={\frac {\mathrm {Tr} \left[\mathbb {P} _{k}\right]}{2\,P_{s}(k)}}-{\frac {\mathrm {Tr} \left[\left(D+m\,m^{\dagger }\right)\,\mathbb {P} _{k}\right]}{2\,\left[P_{s}(k)\right]^{2}}}=0,\end{aligned}}}

where the Wiener filter mean ${\displaystyle m=D\,j}$  and the spectral band projector ${\displaystyle (\mathbb {P} _{k})_{{\vec {q}}{\vec {q}}'}\equiv (2\pi )^{u}\delta ({\vec {q}}-{\vec {q}}')\,\delta (|{\vec {q}}|-k)}$  were introduced. The latter commutes with ${\displaystyle S^{-1}}$, since ${\displaystyle (S^{-1})_{{\vec {k}}{\vec {q}}}=(2\pi )^{u}\delta ({\vec {k}}-{\vec {q}})\,[P_{s}(k)]^{-1}}$  is diagonal in Fourier space. The maximum a posteriori estimator for the power spectrum is therefore
${\displaystyle P_{s}(k)={\frac {\mathrm {Tr} \left[\left(m\,m^{\dagger }+D\right)\,\mathbb {P} _{k}\right]}{\mathrm {Tr} \left[\mathbb {P} _{k}\right]}}.}$

It has to be calculated iteratively, as ${\displaystyle m=D\,j}$  and ${\displaystyle D=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}}$  both depend on ${\displaystyle P_{s}}$  themselves. In an empirical Bayes approach, the estimated ${\displaystyle P_{s}}$  would be taken as given. As a consequence, the posterior mean estimate for the signal field is the corresponding ${\displaystyle m}$  and its uncertainty the corresponding ${\displaystyle D}$  in the empirical Bayes approximation.
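
A toy version of this iteration might look as follows (a sketch under strong simplifying assumptions: periodic 1D grid, unit response, homogeneous noise, so every operator is diagonal in Fourier space; it is not the algorithm of the cited references):

```python
# Toy critical-filter iteration: alternate a per-mode Wiener filter step
# with the band-averaged spectrum update P_s(k) = Tr[(m m^+ + D) P_k] / Tr[P_k].
import numpy as np

rng = np.random.default_rng(4)
n, sigma_n = 256, 0.5
k = np.abs(np.fft.fftfreq(n, d=1.0 / n)).astype(int)   # spectral band |k|

p_true = 1.0 / (1.0 + k)**3                            # assumed true spectrum
xi = np.fft.fft(rng.standard_normal(n), norm="ortho")  # white Fourier modes
s = np.fft.ifft(np.sqrt(p_true) * xi, norm="ortho").real
d_k = np.fft.fft(s + sigma_n * rng.standard_normal(n), norm="ortho")

p = np.ones(n)                                         # flat initial spectrum
for _ in range(100):
    m_k = p / (p + sigma_n**2) * d_k                   # Wiener mean, per mode
    D_k = p * sigma_n**2 / (p + sigma_n**2)            # Wiener variance, per mode
    band = np.bincount(k, np.abs(m_k)**2 + D_k) / np.bincount(k)
    p = band[k]                                        # spectrum update

# bands whose data variance stays below the noise level are driven toward
# zero, illustrating the perception threshold discussed below
print(np.round(band[:5], 3))
```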

The resulting non-linear filter is called the critical filter.[4] The generalization of the power spectrum estimation formula as

${\displaystyle P_{s}(k)={\frac {\mathrm {Tr} \left[\left(m\,m^{\dagger }+\delta \,D\right)\,\mathbb {P} _{k}\right]}{\mathrm {Tr} \left[\mathbb {P} _{k}\right]}}}$

exhibits a perception threshold for ${\displaystyle \delta <1}$, meaning that the data variance in a Fourier band has to exceed the expected noise level by a certain threshold before the signal reconstruction ${\displaystyle m}$  becomes non-zero for this band. Whenever the data variance exceeds this threshold slightly, the signal reconstruction jumps to a finite excitation level, similar to a first order phase transition in thermodynamic systems. For a filter with ${\displaystyle \delta =1}$, perception of the signal starts continuously as soon as the data variance exceeds the noise level. The disappearance of the discontinuous perception at ${\displaystyle \delta =1}$  is similar to a thermodynamic system going through a critical point, hence the name critical filter.

The critical filter, extensions thereof to non-linear measurements, and the inclusion of non-flat spectrum priors, permitted the application of IFT to real world signal inference problems, for which the signal covariance is usually unknown a priori.

## IFT application examples

*Figure: Radio interferometric images of radio galaxies in the galaxy cluster Abell 2219, constructed by data back-projection (top), the CLEAN algorithm (middle), and the RESOLVE algorithm (bottom). Negative, and therefore unphysical, fluxes are displayed in white.*

The generalized Wiener filter that emerges in free IFT is in broad use in signal processing. Algorithms explicitly based on IFT were derived for a number of applications. Many of them are implemented using the Numerical Information Field Theory (NIFTy) library.

• D³PO is a code for Denoising, Deconvolving, and Decomposing Photon Observations. It reconstructs images from individual photon count events taking into account the Poisson statistics of the counts and an instrument response function. It splits the sky emission into an image of diffuse emission and one of point sources, exploiting the different correlation structure and statistics of the two components for their separation. D³PO has been applied to data of the Fermi and the RXTE satellites.
• RESOLVE is a Bayesian algorithm for aperture synthesis imaging in radio astronomy. RESOLVE is similar to D³PO, but it assumes a Gaussian likelihood and a Fourier space response function. It has been applied to data of the Very Large Array.
• PySESA is a Python framework for spatially explicit spectral analysis of point clouds and geospatial data.

## Advanced theory

Many techniques from quantum field theory can be used to tackle IFT problems, such as Feynman diagrams, effective actions, and the field operator formalism.

### Feynman diagrams

*Figure: First three Feynman diagrams contributing to the posterior mean estimate of a field. A line expresses an information propagator, a dot at the end of a line an information source, and a vertex an interaction term. The first diagram encodes the Wiener filter, the second a non-linear correction, and the third an uncertainty correction to the Wiener filter.*

In case the interaction coefficients ${\displaystyle \Lambda ^{(n)}}$  in a Taylor-Fréchet expansion of the information Hamiltonian

${\displaystyle {\mathcal {H}}(d,\,s)=\underbrace {{\frac {1}{2}}s^{\dagger }D^{-1}s-j^{\dagger }s+{\mathcal {H}}_{0}} _{={\mathcal {H}}_{\text{free}}(d,\,s)}+\underbrace {\sum _{n=3}^{\infty }{\frac {1}{n!}}\Lambda _{x_{1}...x_{n}}^{(n)}s_{x_{1}}...s_{x_{n}}} _{={\mathcal {H}}_{\text{int}}(d,\,s)},}$

are small, the log partition function, or Helmholtz free energy,
${\displaystyle \ln {\mathcal {Z}}(d)=\ln \int {\mathcal {D}}s\,e^{-{\mathcal {H}}(d,s)}=\sum _{c\in C}c}$

can be expanded asymptotically in terms of these coefficients. The free Hamiltonian specifies the mean ${\displaystyle m=D\,j}$  and variance ${\displaystyle D}$  of the Gaussian distribution ${\displaystyle {\mathcal {G}}(s-m,D)}$  over which the expansion is integrated. This leads to a sum over the set ${\displaystyle C}$  of all connected Feynman diagrams. From the Helmholtz free energy, any connected moment of the field can be calculated via
${\displaystyle \langle s_{x_{1}}\ldots s_{x_{n}}\rangle _{(s|d)}^{\text{c}}={\frac {\partial ^{n}\ln {\mathcal {Z}}}{\partial j_{x_{1}}\ldots \partial j_{x_{n}}}}.}$

Situations in which the small expansion parameters required for such a diagrammatic expansion to converge exist are given by nearly Gaussian signal fields, for which the non-Gaussianity of the field statistics leads to small interaction coefficients ${\displaystyle \Lambda ^{(n)}}$. For example, the statistics of the Cosmic Microwave Background is nearly Gaussian, with small amounts of non-Gaussianities believed to be seeded during the inflationary epoch in the Early Universe.
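
The structure of such an expansion can be checked in a toy model with a single degree of freedom and an assumed cubic interaction ${\displaystyle {\mathcal {H}}_{\text{int}}(s)={\tfrac {\lambda }{3!}}s^{3}}$: the three diagrams described above give ${\displaystyle \langle s\rangle \approx D\,j-{\tfrac {\lambda }{2}}\,D\left((D\,j)^{2}+D\right)}$, the Wiener filter term plus a non-linear and an uncertainty correction. The following sketch (parameters are illustrative assumptions) compares this to a numerically integrated posterior mean:

```python
# Toy diagrammatic check for one degree of freedom (an assumption for
# illustration): H(s) = s^2/(2 D) - j s + (lambda/3!) s^3.
import numpy as np
from scipy.integrate import quad

D, j, lam = 1.0, 1.0, 0.05    # weak interaction so the expansion applies

H = lambda s: s**2 / (2 * D) - j * s + lam / 6 * s**3
# the cubic theory is only perturbatively defined; truncate the integral
Z, _ = quad(lambda s: np.exp(-H(s)), -10, 10)
mean_exact, _ = quad(lambda s: s * np.exp(-H(s)) / Z, -10, 10)

m0 = D * j                                       # Wiener filter (1st diagram)
mean_pert = m0 - 0.5 * lam * D * (m0**2 + D)     # + 2nd and 3rd diagrams

print(mean_exact, mean_pert)   # close for small lambda
```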

### Effective action

In order to have stable numerics for IFT problems, a field functional is needed that, if minimized, provides the posterior mean field. Such a functional is given by the effective action or Gibbs free energy of a field. The Gibbs free energy ${\displaystyle G}$  can be constructed from the Helmholtz free energy via a Legendre transformation. In IFT, it is given by the difference of the internal information energy

${\displaystyle U=\langle {\mathcal {H}}(d,s)\rangle _{{\mathcal {P}}'(s|d')}}$

and the Shannon entropy
${\displaystyle {\mathcal {S}}=-\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\mathcal {P}}'(s|d')}$

for temperature ${\displaystyle T=1}$, where a Gaussian posterior approximation ${\displaystyle {\mathcal {P}}'(s|d')={\mathcal {G}}(s-m,D)}$  is used, with the approximate data ${\displaystyle d'=(m,D)}$  containing the mean and the dispersion of the field.[5]

The Gibbs free energy is then

{\displaystyle {\begin{aligned}G(m,D)&=U(m,D)-T\,{\mathcal {S}}(m,D)\\&=\langle {\mathcal {H}}(d,s)+\ln {\mathcal {P}}'(s|d')\rangle _{{\mathcal {P}}'(s|d')}\\&=\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\frac {{\mathcal {P}}'(s|d')}{{\mathcal {P}}(d,s)}}\\&=\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\frac {{\mathcal {P}}'(s|d')}{{\mathcal {P}}(s|d)\,{\mathcal {P}}(d)}}\\&=\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\frac {{\mathcal {P}}'(s|d')}{{\mathcal {P}}(s|d)}}-\ln \,{\mathcal {P}}(d)\\&={\text{KL}}({\mathcal {P}}'(s|d')||{\mathcal {P}}(s|d))-\ln {\mathcal {Z}}(d),\end{aligned}}}

the Kullback-Leibler divergence ${\displaystyle {\text{KL}}({\mathcal {P}}',{\mathcal {P}})}$  between the approximate and the exact posterior, plus the Helmholtz free energy. As the latter does not depend on the approximate data ${\displaystyle d'=(m,D)}$, minimizing the Gibbs free energy is equivalent to minimizing the Kullback-Leibler divergence between approximate and exact posterior. Thus, the effective action approach of IFT is equivalent to variational Bayesian methods, which also minimize the Kullback-Leibler divergence between approximate and exact posteriors.

Minimizing the Gibbs free energy provides an approximation to the posterior mean field

${\displaystyle \langle s\rangle _{(s|d)}=\int {\mathcal {D}}s\,s\,{\mathcal {P}}(s|d),}$

whereas minimizing the information Hamiltonian provides the maximum a posteriori field. As the latter is known to over-fit noise, the former is usually a better field estimator.
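
For the same single-degree-of-freedom toy model with cubic interaction as in the Feynman diagram sketch above (an illustrative assumption), minimizing the Gibbs free energy over a Gaussian family ${\displaystyle {\mathcal {G}}(s-m,v)}$  can be done directly:

```python
# Variational sketch (assumed cubic toy model): minimize the Gibbs free
# energy G(m, v) = <H>_G - S over a Gaussian G(s - m, v), which equals
# minimizing the KL divergence to the exact posterior up to a constant.
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import quad

D, j, lam = 1.0, 1.0, 0.05

def gibbs(params):
    m, log_v = params
    v = np.exp(log_v)                       # keep the variance positive
    # <H>_G with Gaussian moments <s^2> = m^2 + v, <s^3> = m^3 + 3 m v
    U = (m**2 + v) / (2 * D) - j * m + lam / 6 * (m**3 + 3 * m * v)
    S = 0.5 * np.log(2 * np.pi * np.e * v)  # Gaussian entropy
    return U - S

m_opt, log_v_opt = minimize(gibbs, [0.0, 0.0]).x

H = lambda s: s**2 / (2 * D) - j * s + lam / 6 * s**3
Z, _ = quad(lambda s: np.exp(-H(s)), -10, 10)
mean_exact, _ = quad(lambda s: s * np.exp(-H(s)) / Z, -10, 10)
print(m_opt, mean_exact)   # the Gibbs minimum tracks the posterior mean
```

In this toy model, the Gibbs minimum picks up the uncertainty correction ${\displaystyle -{\tfrac {\lambda }{2}}D^{2}}$  to the mean, which the maximum a posteriori field misses, consistent with the remark above that the posterior mean is usually the better estimator.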

### Operator formalism

The calculation of the Gibbs free energy requires the calculation of Gaussian integrals over an information Hamiltonian, since the internal information energy is

${\displaystyle U(m,D)=\langle {\mathcal {H}}(d,s)\rangle _{{\mathcal {P}}'(s|d')}=\int {\mathcal {D}}s\,{\mathcal {H}}(d,s)\,{\mathcal {G}}(s-m,D).}$

Such integrals can be calculated via a field operator formalism,[6] in which
${\displaystyle O_{m}=m+D\,{\frac {\mathrm {d} }{\mathrm {d} m}}}$

is the field operator. This generates the field expression ${\displaystyle s}$  within the integral if applied to the Gaussian distribution function,
{\displaystyle {\begin{aligned}O_{m}\,{\mathcal {G}}(s-m,D)&=(m+D\,{\frac {\mathrm {d} }{\mathrm {d} m}})\,{\frac {1}{|2\pi D|^{\frac {1}{2}}}}\,\exp \left[-{\frac {1}{2}}(s-m)^{\dagger }D^{-1}(s-m)\right]\\&=(m+D\,D^{-1}(s-m))\,{\frac {1}{|2\pi D|^{\frac {1}{2}}}}\,\exp \left[-{\frac {1}{2}}(s-m)^{\dagger }D^{-1}(s-m)\right]\\&=s\,{\mathcal {G}}(s-m,D),\end{aligned}}}

and any higher power of the field if applied several times,
{\displaystyle {\begin{aligned}(O_{m})^{n}\,{\mathcal {G}}(s-m,D)&=s^{n}\,{\mathcal {G}}(s-m,D).\end{aligned}}}

If the information Hamiltonian is analytical, all its terms can be generated via the field operator
${\displaystyle {\mathcal {H}}(d,O_{m})\,{\mathcal {G}}(s-m,D)={\mathcal {H}}(d,s)\,{\mathcal {G}}(s-m,D).}$

As the field operator does not depend on the field ${\displaystyle s}$  itself, it can be pulled out of the path integral of the internal information energy construction,
${\displaystyle U(m,D)=\int {\mathcal {D}}s\,{\mathcal {H}}(d,O_{m})\,{\mathcal {G}}(s-m,D)={\mathcal {H}}(d,O_{m})\int {\mathcal {D}}s\,{\mathcal {G}}(s-m,D)={\mathcal {H}}(d,O_{m})\,1_{m},}$

where ${\displaystyle 1_{m}=1}$  should be regarded as a functional that always returns the value ${\displaystyle 1}$  irrespective of the value of its input ${\displaystyle m}$. The resulting expression can be calculated by commuting the mean field annihilator ${\displaystyle D\,{\frac {\mathrm {d} }{\mathrm {d} m}}}$  to the right of the expression, where it vanishes since ${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} m}}\,1_{m}=0}$. The mean field annihilator ${\displaystyle D\,{\frac {\mathrm {d} }{\mathrm {d} m}}}$  commutes with the mean field as
${\displaystyle \left[D\,{\frac {\mathrm {d} }{\mathrm {d} m}},m\right]=D\,{\frac {\mathrm {d} }{\mathrm {d} m}}\,m-m\,D\,{\frac {\mathrm {d} }{\mathrm {d} m}}=D+m\,D\,{\frac {\mathrm {d} }{\mathrm {d} m}}-m\,D\,{\frac {\mathrm {d} }{\mathrm {d} m}}=D.}$

By use of the field operator formalism the Gibbs free energy can be calculated, which permits the (approximate) inference of the posterior mean field via a numerically robust functional minimization.
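
These operator identities can be verified symbolically for a single degree of freedom; the following sketch (an illustration, not the general field-theoretic statement) checks both ${\displaystyle O_{m}\,{\mathcal {G}}(s-m,D)=s\,{\mathcal {G}}(s-m,D)}$  and the commutation relation:

```python
# Symbolic check for one degree of freedom: O_m = m + D d/dm acting on the
# Gaussian G(s - m, D) multiplies it by s, and [D d/dm, m] = D.
import sympy as sp

s, m = sp.symbols("s m", real=True)
D = sp.Symbol("D", positive=True)
G = sp.exp(-(s - m)**2 / (2 * D)) / sp.sqrt(2 * sp.pi * D)

O_m_G = m * G + D * sp.diff(G, m)            # field operator applied to G
assert sp.simplify(O_m_G - s * G) == 0       # O_m G = s G

f = sp.Function("f")(m)                      # commutator on a test function
comm = D * sp.diff(m * f, m) - m * D * sp.diff(f, m)
assert sp.simplify(comm - D * f) == 0        # [D d/dm, m] = D
```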

## History

The book of Norbert Wiener[7] might be regarded as one of the first works on field inference. The usage of path integrals for field inference was proposed by a number of authors, e.g. Edmund Bertschinger[8] or William Bialek and A. Zee.[9] The connection of field theory and Bayesian reasoning was made explicit by Jörg Lemm.[10] The term information field theory was coined by Torsten Enßlin.[11] See the latter reference for more information on the history of IFT.

## References

1. ^ Enßlin, Torsten (2013). "Information field theory". AIP Conference Proceedings. 1553 (1): 184–191. arXiv:1301.2556. Bibcode:2013AIPC.1553..184E. doi:10.1063/1.4819999.
2. ^ Enßlin, Torsten A. (2019). "Information theory for fields". Annalen der Physik. 531 (3): 1800127. arXiv:1804.03350. Bibcode:2019AnP...53100127E. doi:10.1002/andp.201800127.
3. ^ "Information field theory". Max Planck Society. Retrieved 13 Nov 2014.
4. ^ Enßlin, Torsten A.; Frommert, Mona (2011-05-19). "Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty". Physical Review D. 83 (10): 105014. arXiv:1002.2928. Bibcode:2011PhRvD..83j5014E. doi:10.1103/PhysRevD.83.105014.
5. ^ Enßlin, Torsten A. (2010). "Inference with minimal Gibbs free energy in information field theory". Physical Review E. 82 (5): 051112. arXiv:1004.2868. Bibcode:2010PhRvE..82e1112E. doi:10.1103/physreve.82.051112. PMID 21230442.
6. ^ Leike, Reimar H.; Enßlin, Torsten A. (2016-11-16). "Operator calculus for information field theory". Physical Review E. 94 (5): 053306. arXiv:1605.00660. Bibcode:2016PhRvE..94e3306L. doi:10.1103/PhysRevE.94.053306. PMID 27967173.
7. ^ Wiener, Norbert (1964). Extrapolation, interpolation, and smoothing of stationary time series with engineering applications (Fifth printing ed.). Cambridge, Mass.: Technology Press of the Massachusetts Institute of Technology. ISBN 0262730057. OCLC 489911338.
8. ^ Bertschinger, Edmund (December 1987). "Path integral methods for primordial density perturbations - Sampling of constrained Gaussian random fields". The Astrophysical Journal. 323: L103–L106. Bibcode:1987ApJ...323L.103B. doi:10.1086/185066. ISSN 0004-637X.
9. ^ Bialek, William; Zee, A. (1988-09-26). "Understanding the Efficiency of Human Perception". Physical Review Letters. 61 (13): 1512–1515. Bibcode:1988PhRvL..61.1512B. doi:10.1103/PhysRevLett.61.1512. PMID 10038817.
10. ^ Lemm, Jörg C. (2003). Bayesian field theory. Baltimore, Md.: Johns Hopkins University Press. ISBN 9780801872204. OCLC 52762436.
11. ^ Enßlin, Torsten A.; Frommert, Mona; Kitaura, Francisco S. (2009-11-09). "Information field theory for cosmological perturbation reconstruction and nonlinear signal analysis". Physical Review D. 80 (10): 105005. arXiv:0806.3474. Bibcode:2009PhRvD..80j5005E. doi:10.1103/PhysRevD.80.105005.