Optimal Transport Problem and Wasserstein Distance

Posted by GwanSiu on December 6, 2018

1. Introduction to Optimal Transport Problem

The optimal transport problem is a classical problem in mathematics. Recently, it has received growing attention in the machine learning community, because the Wasserstein distance provides a good tool for measuring the similarity between two distributions. The optimal transport problem has two formulations: Monge's formulation and the Kantorovich formulation.

I follow the mathematical notation in [1]. Consider two signals $I_{0}$ and $I_{1}$ defined over their support sets $\Omega_{0}$ and $\Omega_{1}$, where $\Omega_{0}, \Omega_{1}\subset\mathbb{R}^{d}$. $I_{0}(x)$ and $I_{1}(y)$ denote the signal intensities, where $I_{0}(x)\geq 0, I_{1}(y)\geq 0$ for $x\in \Omega_{0},y\in\Omega_{1}$. In addition, the total amounts of signal in the two signals are assumed to be equal to the same constant, i.e. $\displaystyle{\int_{\Omega_{0}}I_{0}(x)\mathrm{d}x=\int_{\Omega_{1}}I_{1}(y)\mathrm{d}y=1}$. In other words, $I_{0}$ and $I_{1}$ are assumed to be probability density functions (PDFs).

1.1 Monge Formulation

Monge’s optimal transport problem is to find a map $f:\Omega_{0}\rightarrow \Omega_{1}$ that pushes $I_{0}$ onto $I_{1}$ and minimizes the objective function

$$M(I_{0},I_{1})=\inf_{f\in MP}\int_{\Omega_{0}}c(x,f(x))I_{0}(x)\mathrm{d}x, \tag{1}$$

where $c:\Omega_{0}\times\Omega_{1}\rightarrow\mathbb{R}^{+}$ is the cost of moving the signal intensity $I_{0}(x)$ from $x$ to $f(x)$; in his original formulation, Monge considered the Euclidean distance as the cost function, $c(x,f(x))=\vert x-f(x)\vert$. MP stands for the set of measure-preserving maps (or transport maps) that move all the signal intensity from $I_{0}$ to $I_{1}$. That is, for all $B\subset\Omega_{1}$, the MP requirement is that

$$\int_{f^{-1}(B)}I_{0}(x)\mathrm{d}x=\int_{B}I_{1}(y)\mathrm{d}y. \tag{2}$$

If $f$ is one-to-one, this means that for all $A\subset \Omega_{0}$,

$$\int_{A}I_{0}(x)\mathrm{d}x=\int_{f(A)}I_{1}(y)\mathrm{d}y.$$

Rigorously speaking, the Monge formulation of the problem seeks to rearrange signal $I_{0}$ into signal $I_{1}$ while minimizing a specific cost function. If $f$ is smooth and one-to-one, equation (2) can be rewritten in differential form as

$$\det(Df(x))\,I_{1}(f(x))=I_{0}(x)$$

almost everywhere, where $Df$ is the Jacobian of $f$. Note that both the objective function in (1) and the constraint in (2) are nonlinear with respect to $f(x)$.
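
To make the differential form concrete, here is a minimal numeric sketch in Python. The densities and the map are illustrative assumptions (not from [1]): the map $f(x)=2x$ pushes the uniform density on $[0,1]$ onto the uniform density on $[0,2]$, and the condition $\det(Df(x))I_{1}(f(x))=I_{0}(x)$ can be checked on a grid.

```python
import numpy as np

# Hypothetical example: I0 = Uniform[0, 1], I1 = Uniform[0, 2], f(x) = 2x.
I0 = lambda x: np.where((x >= 0) & (x <= 1), 1.0, 0.0)  # density of Uniform[0, 1]
I1 = lambda y: np.where((y >= 0) & (y <= 2), 0.5, 0.0)  # density of Uniform[0, 2]
f = lambda x: 2.0 * x                                   # candidate transport map
Df = lambda x: np.full_like(x, 2.0)                     # Jacobian (a scalar in 1-D)

# Differential form of the MP constraint: det(Df(x)) * I1(f(x)) = I0(x).
x = np.linspace(0.01, 0.99, 99)
print(np.allclose(Df(x) * I1(f(x)), I0(x)))             # True on (0, 1)
```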

1.2 Kantorovich Formulation

Kantorovich formulated the transport problem by optimizing over the joint distribution of $I_{0}$ and $I_{1}$, which is denoted as $\gamma$. The physical meaning of $\gamma$ is how much mass is moved between pairs of coordinates: let $A\subset \Omega_{0}$ and $B\subset\Omega_{1}$. To make a distinction between a probability distribution and a density function, we define the probability distribution of $I_{0}$ as $\displaystyle{I_{0}(A)=\int_{A}I_{0}(x)\mathrm{d}x}$. The quantity $\gamma(A\times B)$ tells us how much mass in set $A$ is moved to set $B$. Thus, the MP constraint can be expressed as $\gamma(\Omega_{0}\times B)=I_{1}(B)$ and $\gamma(A\times \Omega_{1})=I_{0}(A)$.

The Kantorovich formulation can be written as

$$K(I_{0},I_{1})=\min_{\gamma\in\Gamma(I_{0},I_{1})}\int_{\Omega_{0}\times\Omega_{1}}c(x,y)\mathrm{d}\gamma(x,y),$$

where $\Gamma(I_{0},I_{1})$ is the set of all transport plans, i.e., joint distributions whose marginals are $I_{0}$ and $I_{1}$.

The Kantorovich formulation also has a discrete setting, i.e., for PDFs of the form $\displaystyle{I_{0}=\sum_{i=1}^{M}p_{i}\delta(x-x_{i}) \text{ and } I_{1}=\sum_{j=1}^{N}q_{j}\delta(y-y_{j})}$. Unlike the Monge formulation, the Kantorovich formulation allows mass splitting. In this setting, it can be rewritten as:

$$K(I_{0},I_{1})=\min_{\gamma}\sum_{i=1}^{M}\sum_{j=1}^{N}c(x_{i},y_{j})\gamma_{ij}\quad\text{s.t.}\quad\sum_{j=1}^{N}\gamma_{ij}=p_{i},\quad\sum_{i=1}^{M}\gamma_{ij}=q_{j},\quad\gamma_{ij}\geq 0,$$

where $\gamma_{ij}$ specifies how much of the mass $p_{i}$ at $x_{i}$ needs to be moved to $y_{j}$. The optimization above has a linear objective function and linear constraints; therefore, it is a linear programming problem. The problem is convex, but not strictly convex, and the constraints define a polyhedral set of $M\times N$ matrices.
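
As a concrete illustration, the discrete Kantorovich problem can be handed to any generic LP solver. The following is a minimal sketch with made-up masses and locations (not from the post), using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical discrete example: M = 3 source points, N = 2 target points.
x = np.array([0.0, 1.0, 2.0])        # locations x_i of I0
y = np.array([0.5, 1.5])             # locations y_j of I1
p = np.array([0.4, 0.4, 0.2])        # masses p_i (sum to 1)
q = np.array([0.5, 0.5])             # masses q_j (sum to 1)

M, N = len(p), len(q)
C = np.abs(x[:, None] - y[None, :])  # cost c(x_i, y_j) = |x_i - y_j|

# Flatten gamma (row-major) and encode the two marginal constraints.
A_eq = np.zeros((M + N, M * N))
for i in range(M):
    A_eq[i, i * N:(i + 1) * N] = 1.0  # sum_j gamma_ij = p_i
for j in range(N):
    A_eq[M + j, j::N] = 1.0           # sum_i gamma_ij = q_j
b_eq = np.concatenate([p, q])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
gamma = res.x.reshape(M, N)           # optimal transport plan
print("optimal cost:", res.fun)
print("optimal plan:\n", gamma)
```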

1.3 Kantorovich-Rubinstein Duality
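
The Kantorovich problem admits a dual formulation. For the cost $c(x,y)=\Vert x-y\Vert$, the Kantorovich-Rubinstein duality states that the 1-Wasserstein distance between two distributions $p_{r}$ and $p_{g}$ can be computed as

$$W_{1}(p_{r},p_{g})=\sup_{\Vert f\Vert_{L}\leq 1}\mathbb{E}_{x\sim p_{r}}[f(x)]-\mathbb{E}_{x\sim p_{g}}[f(x)],$$

where the supremum is taken over all 1-Lipschitz functions $f$. This dual form is the starting point of the Wasserstein GAN; see [2] for a detailed derivation.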

2. Wasserstein Distance

2.1 Wasserstein Metric

$\Omega$ denotes a bounded subset of $\mathbb{R}^{d}$, and $p(\Omega)$ is the set of probability densities supported on $\Omega$. The p-Wasserstein metric, $W_{p}$, for $p\geq 1$ on $p(\Omega)$ is then defined via the optimal transport problem with the cost function $c(x,y)=\vert x-y\vert^{p}$. For $I_{0}$ and $I_{1}$ in $p(\Omega)$,

$$W_{p}(I_{0},I_{1})=\left(\min_{\gamma\in\Gamma(I_{0},I_{1})}\int_{\Omega\times\Omega}\vert x-y\vert^{p}\mathrm{d}\gamma(x,y)\right)^{1/p}.$$

For any $p\geq 1$, $W_{p}$ is a metric on $p(\Omega)$. The metric space $(p(\Omega), W_{p})$ is referred to as the p-Wasserstein space. Convergence with respect to $W_{p}$ is equivalent to the weak convergence of measures, i.e., $W_{p}(I_{n}, I)\rightarrow 0$ as $n\rightarrow\infty$ if and only if for every bounded and continuous function $f:\Omega\rightarrow\mathbb{R}$,

$$\lim_{n\rightarrow\infty}\int_{\Omega}f(x)I_{n}(x)\mathrm{d}x=\int_{\Omega}f(x)I(x)\mathrm{d}x.$$

For $p=1$, the p-Wasserstein metric is known as the Monge-Rubinstein metric or the Earth mover’s distance.

2.2 Earth Mover’s Distance

Background of the Earth Mover’s Distance (EMD): the Earth Mover’s Distance is the discrete version of the Kantorovich formulation. If two distributions $p_{r}$ and $p_{g}$ are viewed as two different heaps of a certain amount of earth, then the EMD measures the minimal total amount of work required to move one heap into the other. The formulation is

$$EMD(p_{r},p_{g})=\inf_{\gamma\in\Pi(p_{r},p_{g})}\sum_{x,y}\gamma(x,y)\Vert x-y\Vert=\inf_{\gamma\in\Pi(p_{r},p_{g})}\mathbb{E}_{(x,y)\sim\gamma}\left[\Vert x-y\Vert\right],$$

where $\gamma(x,y)$ is called a transport plan, and it specifies how much earth is moved from location $x$ to location $y$. $\Pi(p_{r},p_{g})$ is the set of all joint distributions whose marginals are $p_{r}$ and $p_{g}$. Collecting the values $\gamma(x,y)$ into a matrix $\Gamma$ and the costs $\Vert x-y\Vert$ into a matrix $D$, we can rewrite the formulation as

$$EMD(p_{r},p_{g})=\min_{\Gamma\in\Pi(p_{r},p_{g})}\langle D,\Gamma\rangle_{F},$$

where $\langle\cdot,\cdot\rangle_{F}$ is the Frobenius inner product. This is a classical linear programming problem; see [2] for more detail.
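
As a sketch of this matrix form (assuming the POT package, `pip install pot`, and made-up point clouds), the optimal plan $\Gamma$ and the cost $\langle D,\Gamma\rangle_{F}$ can be computed directly:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(5, 2))   # hypothetical locations of p_r
xt = rng.normal(3.0, 1.0, size=(5, 2))   # hypothetical locations of p_g
a = np.full(5, 1.0 / 5)                  # uniform weights for p_r
b = np.full(5, 1.0 / 5)                  # uniform weights for p_g

D = ot.dist(xs, xt, metric='euclidean')  # cost matrix D_ij = ||x_i - y_j||
G = ot.emd(a, b, D)                      # optimal plan Gamma (exact LP solver)
print("EMD =", np.sum(G * D))            # Frobenius inner product <D, Gamma>_F
```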

For the p-Wasserstein metric in one dimension, the optimal map has a closed-form solution. Let $F_{i}$ be the cumulative distribution function of $I_{i}$ for $i=0,1$, i.e.,

$$F_{i}(x)=\int_{-\infty}^{x}I_{i}(t)\mathrm{d}t.$$

The cumulative distribution $F_{i}$ is nondecreasing from $0$ to $1$. We also define the pseudoinverse of $F_{0}$ as follows: for $z\in(0,1)$, $F_{0}^{-1}(z)$ is the smallest $x$ for which $F_{0}(x)\geq z$, i.e.,

$$F_{0}^{-1}(z)=\inf\lbrace x\in\Omega : F_{0}(x)\geq z\rbrace.$$

The pseudoinverse provides a closed-form solution for the p-Wasserstein distance:

$$W_{p}(I_{0},I_{1})=\left(\int_{0}^{1}\vert F_{0}^{-1}(z)-F_{1}^{-1}(z)\vert^{p}\mathrm{d}z\right)^{1/p}.$$

If we assume $I_{0}$ is the empirical distribution $P$ of a dataset $X_{1}, X_{2},\ldots,X_{n}$ and $I_{1}$ is the empirical distribution $Q$ of a dataset $Y_{1},Y_{2},\ldots,Y_{n}$ of the same size, then the distance takes a very simple form in terms of the order statistics:

$$W_{p}(P,Q)=\left(\frac{1}{n}\sum_{i=1}^{n}\vert X_{(i)}-Y_{(i)}\vert^{p}\right)^{1/p},$$

where $X_{(1)}\leq\cdots\leq X_{(n)}$ and $Y_{(1)}\leq\cdots\leq Y_{(n)}$ are the sorted samples.
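
A minimal sketch in Python (with made-up Gaussian samples): sort both datasets, average the $p$-th powers of the gaps between order statistics, and take the $p$-th root. For $p=1$ and equal sample sizes, this agrees with `scipy.stats.wasserstein_distance`.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(0.0, 1.0, n)  # hypothetical samples of P
Y = rng.normal(1.0, 2.0, n)  # hypothetical samples of Q

def wp_1d(X, Y, p=1):
    # W_p between equal-size empirical distributions via order statistics.
    return np.mean(np.abs(np.sort(X) - np.sort(Y)) ** p) ** (1.0 / p)

print("W1 via order statistics:", wp_1d(X, Y, p=1))
print("W1 via scipy:           ", wasserstein_distance(X, Y))
```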

2.3 Sliced-Wasserstein Metric

The idea behind the sliced-Wasserstein metric is to first obtain a set of 1-D representations of a higher-dimensional probability distribution through projection, and then calculate the distance between two distributions as a functional of the Wasserstein distances of their 1-D representations. In this sense, the distance is obtained by solving several 1-D optimal transport problems, which have closed-form solutions.

The projection of a high-dimensional PDF onto 1-D can be obtained via the Radon transform, which is well known in the image processing area.

The d-dimensional Radon transform $\mathcal{R}$ maps a function $I\in L_{1}(\mathbb{R}^{d})$, where $L_{1}(\mathbb{R}^{d}):=\lbrace I:\mathbb{R}^{d}\rightarrow \mathbb{R}\mid \int_{\mathbb{R}^{d}}\vert I(x)\vert \mathrm{d}x< \infty\rbrace$, into the set of its integrals over the hyperplanes of $\mathbb{R}^{d}$. It is defined as

$$\mathcal{R}I(t,\theta)=\int_{\mathbb{R}^{d}}I(x)\delta(t-x\cdot\theta)\mathrm{d}x,\quad\forall t\in\mathbb{R},\ \forall\theta\in\mathbb{S}^{d-1},$$

where $\mathbb{S}^{d-1}$ is the unit sphere in $\mathbb{R}^{d}$.

Thus, the sliced-Wasserstein metric for PDFs $I_{0}$ and $I_{1}$ on $\mathbb{R}^{d}$ is defined as

$$SW_{p}(I_{0},I_{1})=\left(\int_{\mathbb{S}^{d-1}}W_{p}^{p}\big(\mathcal{R}I_{0}(\cdot,\theta),\mathcal{R}I_{1}(\cdot,\theta)\big)\mathrm{d}\theta\right)^{1/p},$$

where $p\geq 1$, and $W_{p}$ is the p-Wasserstein metric, which, for the 1-D PDFs $\mathcal{R}I_{0}(\cdot,\theta)$ and $\mathcal{R}I_{1}(\cdot,\theta)$, has a closed-form solution.
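
Because every slice is a 1-D problem with the closed-form solution above, the sliced-Wasserstein metric is easy to approximate by Monte Carlo over random directions $\theta$. Below is a minimal sketch for empirical point clouds of equal size; the sampling scheme and parameters are illustrative assumptions, not from the post.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=200, p=2, seed=0):
    """Monte Carlo estimate of SW_p between point clouds X, Y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    # Sample random directions theta uniformly on the unit sphere S^{d-1}.
    theta = rng.normal(size=(n_projections, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both point clouds onto each direction (the 1-D "slices").
    X_proj, Y_proj = X @ theta.T, Y @ theta.T        # shape (n, n_projections)
    # Closed-form 1-D W_p per slice via order statistics, averaged over slices.
    diffs = np.abs(np.sort(X_proj, axis=0) - np.sort(Y_proj, axis=0))
    return np.mean(diffs ** p) ** (1.0 / p)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 3))  # hypothetical 3-D samples
Y = rng.normal(0.5, 1.0, size=(500, 3))
print("sliced W2 estimate:", sliced_wasserstein(X, Y))
```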

3. Comparison with Other Metrics

Reference

[1] Kolouri S., Park S. R., Thorpe M., et al. Optimal mass transport: Signal processing and machine-learning applications. IEEE Signal Processing Magazine, 2017, 34(4): 43-59.

[2] Wasserstein GAN and the Kantorovich-Rubinstein Duality.