Wasserstein metric

In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space $M$. It is named after Leonid Vaseršteĭn.

Intuitively, if each distribution is viewed as a unit amount of earth (soil) piled on $M$, the metric is the minimum "cost" of turning one pile into the other, which is assumed to be the amount of earth that needs to be moved times the mean distance it has to be moved. This problem was first formalised by Gaspard Monge in 1781. Because of this analogy, the metric is known in computer science as the earth mover's distance.

The name "Wasserstein distance" was coined by R. L. Dobrushin in 1970, after learning of it in the work of Leonid Vaseršteĭn on Markov processes describing large systems of automata[1] (Russian, 1969). However the metric was first defined by Leonid Kantorovich in The Mathematical Method of Production Planning and Organization[2] (Russian original 1939) in the context of optimal transport planning of goods and materials. Some scholars thus encourage use of the terms "Kantorovich metric" and "Kantorovich distance". Most English-language publications use the German spelling "Wasserstein" (attributed to the name "Vaseršteĭn" being of German origin).

Definition

Let $(M, d)$ be a metric space for which every Borel probability measure on $M$ is a Radon measure (a so-called Radon space). For $p \in [1, \infty)$, let $P_p(M)$ denote the collection of all probability measures $\mu$ on $M$ with finite $p^{\text{th}}$ moment, that is, there exists some $x_0$ in $M$ such that:

$$\int_M d(x, x_0)^p \, \mathrm{d}\mu(x) < \infty.$$

The $p$-Wasserstein distance between two probability measures $\mu$ and $\nu$ in $P_p(M)$ is defined as

$$W_p(\mu, \nu) = \left( \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{M \times M} d(x, y)^p \, \mathrm{d}\gamma(x, y) \right)^{1/p},$$

where $\Gamma(\mu, \nu)$ denotes the collection of all measures on $M \times M$ with marginals $\mu$ and $\nu$ on the first and second factors respectively. (The set $\Gamma(\mu, \nu)$ is also called the set of all couplings of $\mu$ and $\nu$.)

The above distance is usually denoted $W_p(\mu, \nu)$ (typically among authors who prefer the "Wasserstein" spelling) or $\ell_p(\mu, \nu)$ (typically among authors who prefer the "Vaserstein" spelling). The remainder of this article will use the $W_p$ notation.

The Wasserstein metric may be equivalently defined by

$$W_p(\mu, \nu) = \left( \inf \mathbb{E} \big[ d(X, Y)^p \big] \right)^{1/p},$$

where $\mathbb{E}[Z]$ denotes the expected value of a random variable $Z$ and the infimum is taken over all joint distributions of the random variables $X$ and $Y$ with marginals $\mu$ and $\nu$ respectively.
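In the discrete case the infimum above is attained and can be computed directly as a linear program over couplings. The following is a minimal sketch (not from the article; the function and variable names are illustrative) that solves this coupling problem for two weighted point sets on the real line using SciPy.

```python
# Minimal sketch: the p-Wasserstein distance between two discrete measures,
# computed by minimizing the transport cost over all couplings (a linear
# program). Names like `discrete_wasserstein`, `xs`, `ys` are illustrative.
import numpy as np
from scipy.optimize import linprog

def discrete_wasserstein(xs, ys, a, b, p=2):
    """W_p between sum_i a_i * delta(xs_i) and sum_j b_j * delta(ys_j) on R."""
    m, n = len(xs), len(ys)
    # Cost matrix C[i, j] = d(x_i, y_j)^p, flattened row-major for linprog.
    C = np.abs(np.subtract.outer(xs, ys)) ** p
    # Marginal constraints: row sums of gamma equal a, column sums equal b.
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0      # sum_j gamma[i, j] = a[i]
    for j in range(n):
        A_eq[m + j, j::n] = 1.0               # sum_i gamma[i, j] = b[j]
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None))           # gamma >= 0
    return res.fun ** (1.0 / p)

# Two three-point measures on the line:
xs, a = np.array([0.0, 1.0, 2.0]), np.array([0.5, 0.25, 0.25])
ys, b = np.array([0.5, 1.5, 3.0]), np.array([0.25, 0.5, 0.25])
print(discrete_wasserstein(xs, ys, a, b, p=1))   # 0.875
```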

Intuition and connection to optimal transport

 
Figure: Two one-dimensional distributions $\mu$ and $\nu$, plotted on the x and y axes, and one possible joint distribution that defines a transport plan between them. The joint distribution/transport plan is not unique.

One way to understand the above definition is to consider the optimal transport problem. That is, for a distribution of mass $\mu(x)$ on a space $X$, we wish to transport the mass in such a way that it is transformed into the distribution $\nu(x)$ on the same space; transforming the 'pile of earth' $\mu$ to the pile $\nu$. This problem only makes sense if the pile to be created has the same mass as the pile to be moved; therefore without loss of generality assume that $\mu$ and $\nu$ are probability distributions containing a total mass of 1. Assume also that there is given some cost function

$$c(x, y) \geq 0$$

that gives the cost of transporting a unit mass from the point $x$ to the point $y$. A transport plan to move $\mu$ into $\nu$ can be described by a function $\gamma(x, y)$ which gives the amount of mass to move from $x$ to $y$. One can imagine the task as the need to move a pile of earth of shape $\mu$ to a hole in the ground of shape $\nu$ such that at the end, both the pile of earth and the hole in the ground completely vanish. In order for this plan to be meaningful, it must satisfy the following properties:

$$\int \gamma(x, y) \, \mathrm{d}y = \mu(x), \qquad \int \gamma(x, y) \, \mathrm{d}x = \nu(y).$$

That is, the total mass moved out of an infinitesimal region around $x$ must be equal to $\mu(x) \, \mathrm{d}x$ and the total mass moved into a region around $y$ must be $\nu(y) \, \mathrm{d}y$. This is equivalent to the requirement that $\gamma$ be a joint probability distribution with marginals $\mu$ and $\nu$. Thus, the infinitesimal mass transported from $x$ to $y$ is $\gamma(x, y) \, \mathrm{d}x \, \mathrm{d}y$, and the cost of moving is $c(x, y) \gamma(x, y) \, \mathrm{d}x \, \mathrm{d}y$, following the definition of the cost function. Therefore, the total cost of a transport plan $\gamma$ is

$$\iint c(x, y) \gamma(x, y) \, \mathrm{d}x \, \mathrm{d}y = \int c(x, y) \, \mathrm{d}\gamma(x, y).$$

The plan $\gamma$ is not unique; the optimal transport plan is the plan with the minimal cost out of all possible transport plans. As mentioned, the requirement for a plan to be valid is that it is a joint distribution with marginals $\mu$ and $\nu$; letting $\Gamma$ denote the set of all such measures as in the first section, the cost of the optimal plan is

$$C = \inf_{\gamma \in \Gamma(\mu, \nu)} \int c(x, y) \, \mathrm{d}\gamma(x, y).$$

If the cost of a move is simply the distance between the two points, then the optimal cost is identical to the definition of the $W_1$ distance.

Examples

Point masses (degenerate distributions)

Let $\mu_1 = \delta_{a_1}$ and $\mu_2 = \delta_{a_2}$ be two degenerate distributions (i.e. Dirac delta distributions) located at points $a_1$ and $a_2$ in $\mathbb{R}$. There is only one possible coupling of these two measures, namely the point mass $\delta_{(a_1, a_2)}$ located at $(a_1, a_2) \in \mathbb{R}^2$. Thus, using the usual absolute value function as the distance function on $\mathbb{R}$, for any $p \geq 1$, the $p$-Wasserstein distance between $\mu_1$ and $\mu_2$ is

$$W_p(\mu_1, \mu_2) = | a_1 - a_2 |.$$

By similar reasoning, if $\mu_1 = \delta_{a_1}$ and $\mu_2 = \delta_{a_2}$ are point masses located at points $a_1$ and $a_2$ in $\mathbb{R}^n$, and we use the usual Euclidean norm on $\mathbb{R}^n$ as the distance function, then

$$W_p(\mu_1, \mu_2) = \| a_1 - a_2 \|_2.$$

Normal distributions

Let $\mu_1 = \mathcal{N}(m_1, C_1)$ and $\mu_2 = \mathcal{N}(m_2, C_2)$ be two non-degenerate Gaussian measures (i.e. normal distributions) on $\mathbb{R}^n$, with respective expected values $m_1, m_2 \in \mathbb{R}^n$ and symmetric positive semi-definite covariance matrices $C_1$ and $C_2$. Then,[3] with respect to the usual Euclidean norm on $\mathbb{R}^n$, the 2-Wasserstein distance between $\mu_1$ and $\mu_2$ is

$$W_2(\mu_1, \mu_2)^2 = \| m_1 - m_2 \|_2^2 + \operatorname{tr} \left( C_1 + C_2 - 2 \left( C_2^{1/2} C_1 C_2^{1/2} \right)^{1/2} \right).$$

This result generalises the earlier example of the Wasserstein distance between two point masses (at least in the case $p = 2$), since a point mass can be regarded as a normal distribution with covariance matrix equal to zero, in which case the trace term disappears and only the term involving the Euclidean distance between the means remains.
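As a sanity check on this closed form, the following sketch (illustrative, not from the article; the test matrices are an assumption) evaluates it numerically with SciPy's matrix square root.

```python
# Minimal sketch: the closed-form 2-Wasserstein distance between Gaussians,
# following the trace formula above. Names are illustrative.
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, C1, m2, C2):
    """2-Wasserstein distance between N(m1, C1) and N(m2, C2)."""
    rC2 = sqrtm(C2)                          # C2^{1/2}
    cross = np.real(sqrtm(rC2 @ C1 @ rC2))   # (C2^{1/2} C1 C2^{1/2})^{1/2};
                                             # sqrtm may carry tiny imaginary parts
    w2_sq = np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2.0 * cross)
    return np.sqrt(max(w2_sq, 0.0))

m1, C1 = np.array([0.0, 0.0]), np.eye(2)
m2, C2 = np.array([1.0, 0.0]), 2.0 * np.eye(2)
# Here W2^2 = 1 + tr(3I - 2*sqrt(2)*I) = 1 + 2*(3 - 2*sqrt(2)) ~ 1.343.
print(gaussian_w2(m1, C1, m2, C2))           # ~ 1.159
```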

One-dimensional distributions

Let $\mu_1, \mu_2 \in P_p(\mathbb{R})$ be probability measures on $\mathbb{R}$, and denote their cumulative distribution functions by $F_1(x)$ and $F_2(x)$. Then the transport problem has an analytic solution: optimal transport preserves the order of probability mass elements, so the mass at quantile $q$ of $\mu_1$ moves to quantile $q$ of $\mu_2$. Thus, the $p$-Wasserstein distance between $\mu_1$ and $\mu_2$ is

$$W_p(\mu_1, \mu_2) = \left( \int_0^1 \left| F_1^{-1}(q) - F_2^{-1}(q) \right|^p \, \mathrm{d}q \right)^{1/p},$$

where $F_1^{-1}$ and $F_2^{-1}$ are the quantile functions (inverse CDFs). In the case of $p = 1$, a change of variables leads to the formula

$$W_1(\mu_1, \mu_2) = \int_{\mathbb{R}} \left| F_1(x) - F_2(x) \right| \, \mathrm{d}x.$$
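The quantile-function formula is straightforward to evaluate numerically. The following sketch (illustrative; the two distributions are an assumption for the example) approximates $W_1$ and $W_2$ between two one-dimensional Gaussians, and checks $W_2$ against the closed form $\sqrt{(m_1 - m_2)^2 + (s_1 - s_2)^2}$ to which the Gaussian trace formula above reduces in one dimension.

```python
# Minimal sketch: W_p between 1-D distributions via the quantile formula.
import numpy as np
from scipy.stats import norm

mu1 = norm(loc=0.0, scale=1.0)   # N(0, 1)
mu2 = norm(loc=2.0, scale=0.5)   # N(2, 0.25)

# Approximate the integral of |F1^{-1}(q) - F2^{-1}(q)|^p over q in (0, 1).
q = np.linspace(1e-6, 1.0 - 1e-6, 200_000)
def w_p(p):
    return np.mean(np.abs(mu1.ppf(q) - mu2.ppf(q)) ** p) ** (1.0 / p)

print(w_p(1))                                   # ~ 2.0
# For p = 2, 1-D Gaussians admit the closed form sqrt(dm^2 + ds^2):
print(w_p(2), np.hypot(0.0 - 2.0, 1.0 - 0.5))   # both ~ 2.0616
```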

Applications

The Wasserstein metric is a natural way to compare the probability distributions of two variables $X$ and $Y$, where one variable is derived from the other by small, non-uniform perturbations (random or deterministic).

In computer science, for example, the metric $W_1$ is widely used to compare discrete distributions, e.g. the color histograms of two digital images; see earth mover's distance for more details.

In their paper 'Wasserstein GAN', Arjovsky et al.[4] use the Wasserstein-1 metric as a way to improve the original framework of Generative Adversarial Networks (GAN), to alleviate the vanishing gradient and mode collapse issues. The special case of normal distributions is used in the Fréchet inception distance.

The Wasserstein metric has a formal link with Procrustes analysis, with application to chirality measures,[5] and to shape analysis.[6]

In computational biology, the Wasserstein metric can be used to compare persistence diagrams of cytometry datasets.[7]

The Wasserstein metric has also been used in inverse problems in geophysics.[8]

Properties

Metric structure

It can be shown that $W_p$ satisfies all the axioms of a metric on $P_p(M)$. Furthermore, convergence with respect to $W_p$ is equivalent to the usual weak convergence of measures plus convergence of $p$th moments.[9]

Dual representation of W1

The following dual representation of $W_1$ is a special case of the duality theorem of Kantorovich and Rubinstein (1958): when $\mu$ and $\nu$ have bounded support,

$$W_1(\mu, \nu) = \sup \left\{ \int_M f(x) \, \mathrm{d}(\mu - \nu)(x) \;\middle|\; \text{continuous } f \colon M \to \mathbb{R}, \ \operatorname{Lip}(f) \leq 1 \right\},$$

where $\operatorname{Lip}(f)$ denotes the minimal Lipschitz constant for $f$.

Compare this with the definition of the Radon metric:

$$\rho(\mu, \nu) = \sup \left\{ \int_M f(x) \, \mathrm{d}(\mu - \nu)(x) \;\middle|\; \text{continuous } f \colon M \to [-1, 1] \right\}.$$

If the metric $d$ is bounded by some constant $C$, then

$$2 W_1(\mu, \nu) \leq C \rho(\mu, \nu),$$

and so convergence in the Radon metric (identical to total variation convergence when $M$ is a Polish space) implies convergence in the Wasserstein metric, but not vice versa.

Proof

The following is an intuitive proof which skips over technical points. A fully rigorous proof can be found in Villani (2003).[10]

Discrete case: When $M$ is discrete, solving for the 1-Wasserstein distance is a problem in linear programming:

$$\begin{aligned} \min_{\gamma} \ & \sum_{x, y} c(x, y) \gamma(x, y) \\ \text{subject to} \ & \sum_y \gamma(x, y) = \mu(x) \quad \forall x \\ & \sum_x \gamma(x, y) = \nu(y) \quad \forall y \\ & \gamma(x, y) \geq 0, \end{aligned}$$

where $c \colon M \times M \to [0, \infty)$ is a general "cost function".

By carefully writing the above equations as matrix equations, we obtain its dual problem:[11]

$$\begin{aligned} \max_{f, g} \ & \sum_x \mu(x) f(x) + \sum_y \nu(y) g(y) \\ \text{subject to} \ & f(x) + g(y) \leq c(x, y) \quad \forall x, y, \end{aligned}$$
and by the duality theorem of linear programming, since the primal problem is feasible and bounded, so is the dual problem, and the minimum in the first problem equals the maximum in the second problem. That is, the problem pair exhibits strong duality.
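The following sketch (illustrative; the two-point measures are an assumption for the example) sets up the dual linear program above with SciPy and checks strong duality against a direct computation of $W_1$ on the line.

```python
# Minimal sketch: the dual LP above for a small discrete case, checked
# against SciPy's direct 1-D computation of W1. Names are illustrative.
import numpy as np
from scipy.optimize import linprog
from scipy.stats import wasserstein_distance

xs, mu = np.array([0.0, 1.0]), np.array([0.5, 0.5])    # source measure
ys, nu = np.array([0.0, 1.0]), np.array([0.25, 0.75])  # target measure
C = np.abs(np.subtract.outer(xs, ys))                  # c(x, y) = |x - y|
m, n = C.shape

# Dual: maximize mu.f + nu.g subject to f(x_i) + g(y_j) <= c(x_i, y_j).
# linprog minimizes, so negate the objective; f and g are sign-free.
A_ub = np.zeros((m * n, m + n))
for i in range(m):
    for j in range(n):
        A_ub[i * n + j, i] = 1.0       # coefficient of f(x_i)
        A_ub[i * n + j, m + j] = 1.0   # coefficient of g(y_j)
dual = linprog(-np.concatenate([mu, nu]), A_ub=A_ub, b_ub=C.ravel(),
               bounds=(None, None))

# Strong duality: the dual maximum equals the primal minimum, i.e. W1.
print(-dual.fun)                             # 0.25
print(wasserstein_distance(xs, ys, mu, nu))  # 0.25
```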

For the general case, the dual problem is found by converting sums to integrals:

$$\sup_{f, g} \left\{ \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[g(y)] \;\middle|\; f(x) + g(y) \leq c(x, y) \right\},$$
and the strong duality still holds. This is the Kantorovich duality theorem. Cédric Villani recounts the following interpretation from Luis Caffarelli:[12]

Suppose you want to ship some coal from mines, distributed as $\mu$, to factories, distributed as $\nu$. The cost function of transport is $c$. Now a shipper comes and offers to do the transport for you. You would pay him $f(x)$ per unit of coal for loading the coal at $x$, and pay him $g(y)$ per unit of coal for unloading the coal at $y$.

For you to accept the deal, the price schedule must satisfy $f(x) + g(y) \leq c(x, y)$. The Kantorovich duality states that the shipper can make a price schedule that makes you pay almost as much as you would by shipping it yourself.

This result can be pressed further to yield:

Theorem (Kantorovich–Rubinstein duality) — When the probability space $\Omega$ is a metric space, then for any fixed $K > 0$,

$$W_1(\mu, \nu) = \frac{1}{K} \sup_{\| f \|_{\mathrm{Lip}} \leq K} \left( \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)] \right),$$

where $\| f \|_{\mathrm{Lip}}$ is the Lipschitz norm.
Proof

It suffices to prove the case of $K = 1$. Start with

$$W_1(\mu, \nu) = \sup_{f(x) + g(y) \leq d(x, y)} \left( \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[g(y)] \right).$$

Then, for any choice of $g$, one can push the term higher by setting $f(x) = \inf_y \left( d(x, y) - g(y) \right)$, making it an infimal convolution of $-g$ with a cone. This implies $f(x) - f(y) \leq d(x, y)$ for any $x, y$, that is, $\| f \|_{\mathrm{Lip}} \leq 1$.

Thus,

$$W_1(\mu, \nu) = \sup_{\| f \|_{\mathrm{Lip}} \leq 1} \sup_{g} \left( \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[g(y)] \right).$$

Next, for any choice of $f$ with $\| f \|_{\mathrm{Lip}} \leq 1$, the term $g$ can be optimized by setting $g(y) = \inf_x \left( d(x, y) - f(x) \right)$. Since $\| f \|_{\mathrm{Lip}} \leq 1$, this implies $g(y) = -f(y)$.
 
Figure: Infimal convolution of a cone with a curve. Note how the lower envelope has slope $\leq 1$, and how the lower envelope is equal to the curve on the parts where the curve itself has slope $\leq 1$.

The two infimal convolution steps are visually clear when the probability space is $\mathbb{R}$.

For notational convenience, let $\square$ denote the infimal convolution operation.

For the first step, where we used $f = \operatorname{cone} \square (-g)$, plot out the curve of $-g$, then at each point draw a cone of slope 1, and take the lower envelope of the cones as $f$, as shown in the diagram. Then $f$ cannot increase with slope larger than 1, so all of its secants have slope $\leq 1$.

For the second step, picture the infimal convolution $\operatorname{cone} \square (-f)$: if all secants of $-f$ have slope at most 1, then the lower envelope of the cones consists of the cone apices themselves, so $\operatorname{cone} \square (-f) = -f$ and hence $g = -f$.
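The first claim is easy to test numerically. The following sketch (illustrative; the choice of $g$ is arbitrary) computes $f(x) = \inf_y \left( |x - y| - g(y) \right)$ on a grid and confirms that the result is 1-Lipschitz regardless of $g$.

```python
# Minimal sketch: infimal convolution of -g with a cone of slope 1 always
# yields a 1-Lipschitz function, as the proof uses. g here is arbitrary.
import numpy as np

x = np.linspace(-3.0, 3.0, 601)
g = np.sin(3.0 * x) + 0.5 * x**2              # an arbitrary "price" function
f = np.min(np.abs(x[:, None] - x[None, :]) - g[None, :], axis=1)

slopes = np.abs(np.diff(f)) / np.diff(x)
print(slopes.max() <= 1.0 + 1e-9)             # True: all secants have slope <= 1
```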

1D Example. When both $\mu$ and $\nu$ are distributions on $\mathbb{R}$, then integration by parts gives

$$\mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)] = \int f'(x) \left( F_\nu(x) - F_\mu(x) \right) \, \mathrm{d}x,$$

and taking $f$ such that $f'(x) = \operatorname{sign}\left( F_\nu(x) - F_\mu(x) \right)$, which is 1-Lipschitz, yields

$$W_1(\mu, \nu) = \int \left| F_\mu(x) - F_\nu(x) \right| \, \mathrm{d}x.$$
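For empirical measures the CDF formula says that $W_1$ is simply the area between the two empirical CDFs, which the following sketch (illustrative; the sample values are an assumption) verifies against SciPy's built-in computation.

```python
# Minimal sketch: W1 between empirical measures as the area between their
# step CDFs, compared with scipy.stats.wasserstein_distance.
import numpy as np
from scipy.stats import wasserstein_distance

u = np.array([0.0, 1.0, 3.0])        # samples "from mu"
v = np.array([0.5, 2.0, 2.5, 4.0])   # samples "from nu"

# Evaluate both empirical CDFs on the pooled sorted grid and integrate
# |F_mu - F_nu| over each interval between consecutive grid points.
grid = np.sort(np.concatenate([u, v]))
F_u = np.searchsorted(np.sort(u), grid[:-1], side="right") / len(u)
F_v = np.searchsorted(np.sort(v), grid[:-1], side="right") / len(v)
w1 = np.sum(np.abs(F_u - F_v) * np.diff(grid))

print(w1)                          # 1.0
print(wasserstein_distance(u, v))  # 1.0
```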

Fluid mechanics interpretation of W2

Benamou & Brenier found a dual representation of   by fluid mechanics, which allows efficient solution by convex optimization.[13][14]

Given two probability distributions on $\mathbb{R}^n$ with densities $p$ and $q$, then

$$W_2(p, q)^2 = \min_{\rho, v} \int_0^1 \int_{\mathbb{R}^n} \| v(x, t) \|^2 \rho(x, t) \, \mathrm{d}x \, \mathrm{d}t,$$

where $v(x, t)$ is a velocity field and $\rho(x, t)$ is the fluid density field, such that

$$\partial_t \rho + \nabla \cdot (\rho v) = 0, \qquad \rho(\cdot, 0) = p, \qquad \rho(\cdot, 1) = q.$$

That is, the mass should be conserved, and the velocity field should transport the probability distribution $p$ to $q$ during the time interval $[0, 1]$.

Equivalence of W2 and a negative-order Sobolev norm

Under suitable assumptions, the Wasserstein distance $W_2$ of order two is Lipschitz equivalent to a negative-order homogeneous Sobolev norm.[15] More precisely, if we take $M$ to be a connected Riemannian manifold equipped with a positive measure $\pi$, then we may define for $f \colon M \to \mathbb{R}$ the seminorm

$$\| f \|_{\dot{H}^1(\pi)}^2 = \int_M | \nabla f(x) |^2 \, \pi(\mathrm{d}x),$$

and for a signed measure $\mu$ on $M$ the dual norm

$$\| \mu \|_{\dot{H}^{-1}(\pi)} = \sup \left\{ | \langle f, \mu \rangle | \;\middle|\; \| f \|_{\dot{H}^1(\pi)} \leq 1 \right\}.$$

Then any two probability measures $\mu$ and $\nu$ on $M$ satisfy the upper bound

$$W_2(\mu, \nu) \leq 2 \, \| \mu - \nu \|_{\dot{H}^{-1}(\mu)}.$$

In the other direction, if $\mu$ and $\nu$ each have densities with respect to the standard volume measure $\ell$ on $M$ that are both bounded above by some $C < \infty$, and $M$ has non-negative Ricci curvature, then

$$\| \mu - \nu \|_{\dot{H}^{-1}(\ell)} \leq \sqrt{C} \, W_2(\mu, \nu).$$

Separability and completeness

For any $p \geq 1$, the metric space $(P_p(M), W_p)$ is separable, and is complete if $(M, d)$ is separable and complete.[16]

References

  1. ^ Vaserstein LN (1969). "Markov processes over denumerable products of spaces, describing large systems of automata" (PDF). Problemy Peredači Informacii. 5 (3): 64–72.
  2. ^ Kantorovich LV (1939). "Mathematical Methods of Organizing and Planning Production". Management Science. 6 (4): 366–422. doi:10.1287/mnsc.6.4.366. JSTOR 2627082.
  3. ^ Olkin I, Pukelsheim F (October 1982). "The distance between two random vectors with given dispersion matrices". Linear Algebra and Its Applications. 48: 257–263. doi:10.1016/0024-3795(82)90112-4. ISSN 0024-3795.
  4. ^ Arjovsky M, Chintala S, Bottou L (July 2017). "Wasserstein Generative Adversarial Networks". International Conference on Machine Learning. PMLR 70: 214–223.
  5. ^ Petitjean M (2002). "Chiral mixtures" (PDF). Journal of Mathematical Physics. 43 (8): 4147–4157. Bibcode:2002JMP....43.4147P. doi:10.1063/1.1484559.
  6. ^ Petitjean M (2004). "From shape similarity to shape complementarity: toward a docking theory". Journal of Mathematical Chemistry. 35 (3): 147–158. doi:10.1023/B:JOMC.0000033252.59423.6b. S2CID 121320315.
  7. ^ Mukherjee S, Wethington D, Dey TK, Das J (March 2022). "Determining clinically relevant features in cytometry data using persistent homology". PLOS Computational Biology. 18 (3): e1009931. arXiv:2203.06263. Bibcode:2022PLSCB..18E9931M. doi:10.1371/journal.pcbi.1009931. PMC 9009779. PMID 35312683.
  8. ^ Frederick, Christina; Yang, Yunan (2022-05-06). "Seeing through rock with help from optimal transport". Snapshots of Modern Mathematics from Oberwolfach. doi:10.14760/SNAP-2022-004-EN.
  9. ^ Clement P, Desch W (2008). "An elementary proof of the triangle inequality for the Wasserstein metric". Proceedings of the American Mathematical Society. 136 (1): 333–339. doi:10.1090/S0002-9939-07-09020-X.
  10. ^ Villani, Cédric (2003). "Chapter 1: The Kantorovich Duality". Topics in optimal transportation. Providence, RI: American Mathematical Society. ISBN 0-8218-3312-X. OCLC 51477002.
  11. ^ Matoušek, Jiří; Gärtner, Bernd (2007), "Duality of Linear Programming", Understanding and Using Linear Programming, Universitext, Berlin, Heidelberg: Springer, pp. 81–104, ISBN 978-3-540-30697-9, retrieved 2022-07-15.
  12. ^ Villani, Cédric (2003). "1.1.3. The shipper's problem.". Topics in optimal transportation. Providence, RI: American Mathematical Society. ISBN 0-8218-3312-X. OCLC 51477002.
  13. ^ Benamou, Jean-David; Brenier, Yann (2000-01-01). "A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem". Numerische Mathematik. 84 (3): 375–393. doi:10.1007/s002110050002. ISSN 0945-3245.
  14. ^ Finlay, Chris; Jacobsen, Joern-Henrik; Nurbekyan, Levon; Oberman, Adam (2020-11-21). "How to Train Your Neural ODE: the World of Jacobian and Kinetic Regularization". International Conference on Machine Learning. PMLR: 3154–3164.
  15. ^ Peyre R (October 2018). "Comparison between W2 distance and Ḣ−1 norm, and localization of Wasserstein distance". ESAIM: Control, Optimisation and Calculus of Variations. 24 (4): 1489–1501. doi:10.1051/cocv/2017050. ISSN 1292-8119. (See Theorems 2.1 and 2.5.)
  16. ^ Bogachev VI, Kolesnikov AV (October 2012). "The Monge–Kantorovich problem: achievements, connections, and perspectives". Russian Mathematical Surveys. 67 (5): 785–890. Bibcode:2012RuMaS..67..785B. doi:10.1070/RM2012v067n05ABEH004808.

Further reading

  • Ambrosio L, Gigli N, Savaré G (2005). Gradient Flows in Metric Spaces and in the Space of Probability Measures. Basel: ETH Zürich, Birkhäuser Verlag. ISBN 978-3-7643-2428-5.
  • Jordan R, Kinderlehrer D, Otto F (January 1998). "The variational formulation of the Fokker–Planck equation". SIAM Journal on Mathematical Analysis. 29 (1): 1–17 (electronic). CiteSeerX 10.1.1.6.8815. doi:10.1137/S0036141096303359. ISSN 0036-1410. MR 1617171.
  • Rüschendorf L (2001) [1994], "Wasserstein metric", Encyclopedia of Mathematics, EMS Press
  • Villani C (2008). Optimal Transport, Old and New. Springer. ISBN 978-3-540-71050-9.

External links

  • "What is the advantages of Wasserstein metric compared to Kullback–Leibler divergence?". Stack Exchange. August 1, 2017.