Markov's inequality

Summary

In probability theory, Markov's inequality gives an upper bound on the probability that a non-negative random variable is greater than or equal to some positive constant. Markov's inequality is tight in the sense that for each chosen positive constant, there exists a random variable such that the inequality is in fact an equality.[1]

Markov's inequality gives an upper bound for the measure of the set (indicated in red) where $f(x)$ exceeds a given level $\varepsilon$. The bound combines the level $\varepsilon$ with the average value of $f$.

It is named after the Russian mathematician Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev (Markov's teacher), and many sources, especially in analysis, refer to it as Chebyshev's inequality (sometimes, calling it the first Chebyshev inequality, while referring to Chebyshev's inequality as the second Chebyshev inequality) or Bienaymé's inequality.

Markov's inequality (and other similar inequalities) relates probabilities to expectations, and provides (frequently loose but still useful) bounds for the cumulative distribution function of a random variable. Markov's inequality can also be used to upper bound the expectation of a non-negative random variable in terms of its distribution function.

Statement

If X is a nonnegative random variable and a > 0, then the probability that X is at least a is at most the expectation of X divided by a:[1]

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.$$

Let $a = \tilde{a} \cdot \operatorname{E}(X)$ (where $\tilde{a} > 0$); then we can rewrite the previous inequality as

$$\operatorname{P}(X \geq \tilde{a} \cdot \operatorname{E}(X)) \leq \frac{1}{\tilde{a}}.$$
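
The inequality is easy to check numerically. The following Python sketch (not part of the original statement; the exponential distribution and sample size are arbitrary illustrative choices) estimates both sides by simulation.

```python
import numpy as np

# Monte Carlo sanity check of Markov's inequality P(X >= a) <= E(X)/a.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)  # non-negative X with E(X) = 2

for a in (1.0, 3.0, 5.0, 10.0):
    empirical = (x >= a).mean()   # estimate of P(X >= a)
    bound = x.mean() / a          # Markov bound E(X)/a (using the sample mean)
    print(f"a = {a:4.1f}: P(X >= a) ~ {empirical:.4f} <= {bound:.4f}")
```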

In the language of measure theory, Markov's inequality states that if $(X, \Sigma, \mu)$ is a measure space, $f$ is a measurable extended real-valued function, and $\varepsilon > 0$, then

$$\mu(\{x \in X : |f(x)| \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X |f| \, d\mu.$$

This measure-theoretic definition is sometimes referred to as Chebyshev's inequality.[2]

Extended version for nondecreasing functions

If $\varphi$ is a nondecreasing nonnegative function, $X$ is a (not necessarily nonnegative) random variable, and $\varphi(a) > 0$, then[3]

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(\varphi(X))}{\varphi(a)}.$$

An immediate corollary, using higher moments of $X$ supported on values larger than 0, is

$$\operatorname{P}(|X| \geq a) \leq \frac{\operatorname{E}(|X|^n)}{a^n}.$$
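
As a hedged illustration of this corollary with $\varphi(t) = t^n$, the sketch below compares the first- and second-moment bounds on a tail probability for a standard normal sample; the distribution and the levels $a$ are arbitrary choices.

```python
import numpy as np

# Moment bounds on P(|X| >= a): E(|X|)/a (n = 1) versus E(|X|^2)/a^2 (n = 2),
# evaluated on a standard normal sample (an arbitrary illustrative choice).
rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)

for a in (1.0, 2.0, 3.0):
    tail = (np.abs(x) >= a).mean()
    first_moment_bound = np.abs(x).mean() / a
    second_moment_bound = (x**2).mean() / a**2
    print(f"a = {a}: tail ~ {tail:.4f}, "
          f"E|X|/a = {first_moment_bound:.4f}, E|X|^2/a^2 = {second_moment_bound:.4f}")
```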

Proofs

We separate the case in which the measure space is a probability space from the more general case because the probability case is more accessible for the general reader.

Intuition

$$\operatorname{E}(X) = \operatorname{P}(X < a) \cdot \operatorname{E}(X \mid X < a) + \operatorname{P}(X \geq a) \cdot \operatorname{E}(X \mid X \geq a),$$

where $\operatorname{E}(X \mid X < a)$ is larger than or equal to 0 because the random variable $X$ is non-negative, and $\operatorname{E}(X \mid X \geq a)$ is larger than or equal to $a$ because the conditional expectation only takes into account values larger than or equal to $a$ that the random variable $X$ can take.

Hence, intuitively, $\operatorname{E}(X) \geq \operatorname{P}(X \geq a) \cdot \operatorname{E}(X \mid X \geq a) \geq a \cdot \operatorname{P}(X \geq a)$, which directly leads to $\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}$.
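
This decomposition can be checked directly on simulated data. The sketch below (an arbitrary exponential sample and level $a$, not from the article) estimates each term and recovers both the mean and the lower bound $a \cdot \operatorname{P}(X \geq a)$.

```python
import numpy as np

# Check E(X) = P(X < a) E(X | X < a) + P(X >= a) E(X | X >= a)
# and the implied bound E(X) >= a * P(X >= a) on a simulated sample.
rng = np.random.default_rng(2)
x = rng.exponential(scale=3.0, size=1_000_000)  # non-negative X with E(X) = 3
a = 5.0

below, above = x[x < a], x[x >= a]
p_below, p_above = len(below) / len(x), len(above) / len(x)
recombined = p_below * below.mean() + p_above * above.mean()

print("E(X)            ~", x.mean())
print("recombined mean ~", recombined)   # matches E(X) up to rounding
print("a * P(X >= a)   ~", a * p_above)  # never exceeds E(X)
```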

Probability-theoretic proof

Method 1: Assume, for this method, that $X$ has a probability density $f$. From the definition of expectation:

$$\operatorname{E}(X) = \int_{-\infty}^{\infty} x f(x) \, dx.$$

However, $X$ is a non-negative random variable, thus

$$\operatorname{E}(X) = \int_{-\infty}^{\infty} x f(x) \, dx = \int_0^{\infty} x f(x) \, dx.$$

From this we can derive

$$\operatorname{E}(X) = \int_0^a x f(x) \, dx + \int_a^{\infty} x f(x) \, dx \geq \int_a^{\infty} x f(x) \, dx \geq \int_a^{\infty} a f(x) \, dx = a \int_a^{\infty} f(x) \, dx = a \operatorname{P}(X \geq a).$$

From here, dividing through by $a$ allows us to see that

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.$$

Method 2: For any event $E$, let $I_E$ be the indicator random variable of $E$, that is, $I_E = 1$ if $E$ occurs and $I_E = 0$ otherwise.

Using this notation, we have $I_{(X \geq a)} = 1$ if the event $X \geq a$ occurs, and $I_{(X \geq a)} = 0$ if $X < a$. Then, given $a > 0$,

$$a I_{(X \geq a)} \leq X,$$

which is clear if we consider the two possible values of $I_{(X \geq a)}$. If $X < a$, then $I_{(X \geq a)} = 0$, and so $a I_{(X \geq a)} = 0 \leq X$. Otherwise, we have $X \geq a$, for which $I_{(X \geq a)} = 1$ and so $a I_{(X \geq a)} = a \leq X$.

Since $\operatorname{E}$ is a monotonically increasing function, taking the expectation of both sides of an inequality cannot reverse it. Therefore,

$$\operatorname{E}(a I_{(X \geq a)}) \leq \operatorname{E}(X).$$

Now, using linearity of expectations, the left side of this inequality is the same as

$$a \operatorname{E}(I_{(X \geq a)}) = a \bigl(1 \cdot \operatorname{P}(X \geq a) + 0 \cdot \operatorname{P}(X < a)\bigr) = a \operatorname{P}(X \geq a).$$

Thus we have

$$a \operatorname{P}(X \geq a) \leq \operatorname{E}(X),$$

and since a > 0, we can divide both sides by a.
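
The key step of Method 2, the pointwise inequality $a I_{(X \geq a)} \leq X$, can be verified mechanically on a sample; the gamma distribution and the level $a$ below are arbitrary choices.

```python
import numpy as np

# Verify the pointwise inequality a * I(X >= a) <= X for a non-negative sample,
# then compare E(a * I(X >= a)) = a * P(X >= a) with E(X).
rng = np.random.default_rng(3)
x = rng.gamma(shape=2.0, scale=1.5, size=1_000_000)  # arbitrary non-negative X
a = 4.0

indicator = (x >= a).astype(float)
assert np.all(a * indicator <= x)  # holds exactly: either 0 <= X or a <= X

print("a * P(X >= a) ~", (a * indicator).mean())
print("E(X)          ~", x.mean())
```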

Measure-theoretic proof

We may assume that the function $f$ is non-negative, since only its absolute value enters the inequality. Now, consider the real-valued function $s$ on $X$ given by

$$s(x) = \begin{cases} \varepsilon, & \text{if } f(x) \geq \varepsilon \\ 0, & \text{if } f(x) < \varepsilon. \end{cases}$$

Then $0 \leq s(x) \leq f(x)$. By the definition of the Lebesgue integral,

$$\int_X f(x) \, d\mu \geq \int_X s(x) \, d\mu = \varepsilon \, \mu(\{x \in X : f(x) \geq \varepsilon\}),$$

and since $\varepsilon > 0$, both sides can be divided by $\varepsilon$, obtaining

$$\mu(\{x \in X : f(x) \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X f \, d\mu.$$
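
On a finite measure space the Lebesgue integrals reduce to weighted sums, so the comparison with the simple function $s$ can be spelled out directly; the weights, function values, and level below are all arbitrary illustrative choices.

```python
import numpy as np

# Finite measure space {0, ..., n-1} with weights mu; f is an arbitrary
# non-negative function, and s equals eps where f >= eps and 0 elsewhere.
mu = np.array([0.5, 1.0, 2.0, 0.25, 3.0])   # arbitrary measure weights
f = np.array([0.2, 4.0, 1.5, 6.0, 0.1])     # arbitrary non-negative f
eps = 1.0

s = np.where(f >= eps, eps, 0.0)            # the simple function s <= f
integral_f = np.sum(f * mu)
integral_s = np.sum(s * mu)                 # equals eps * mu({f >= eps})
measure_of_set = np.sum(mu[f >= eps])

print("integral of f dmu =", integral_f)
print("integral of s dmu =", integral_s, "= eps * mu({f >= eps}) =", eps * measure_of_set)
print("bound: mu({f >= eps}) =", measure_of_set, "<=", integral_f / eps)
```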

Discrete case

We now provide a proof for the special case when $X$ is a discrete random variable which only takes on non-negative integer values.

Let $a$ be a positive integer. By definition,

$$\begin{aligned} a \operatorname{P}(X \geq a) &= a \operatorname{P}(X = a) + a \operatorname{P}(X = a+1) + a \operatorname{P}(X = a+2) + \cdots \\ &\leq a \operatorname{P}(X = a) + (a+1) \operatorname{P}(X = a+1) + (a+2) \operatorname{P}(X = a+2) + \cdots \\ &\leq \sum_{k=1}^{\infty} k \operatorname{P}(X = k) = \operatorname{E}(X). \end{aligned}$$

Dividing by $a$ yields the desired result.
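
For a concrete discrete illustration (not part of the proof), the sketch below builds a truncated Poisson pmf, an arbitrary choice of non-negative integer-valued distribution, and checks that $a \operatorname{P}(X \geq a)$ never exceeds $\operatorname{E}(X)$.

```python
from math import exp

# Truncated Poisson(lam) pmf via the recursion p_k = p_{k-1} * lam / k;
# the rate, cutoff, and threshold a are arbitrary illustrative choices.
lam, cutoff, a = 4.0, 100, 10
pmf = [exp(-lam)]
for k in range(1, cutoff):
    pmf.append(pmf[-1] * lam / k)

mean = sum(k * p for k, p in enumerate(pmf))
tail = sum(p for k, p in enumerate(pmf) if k >= a)

print("a * P(X >= a) =", a * tail)  # left end of the displayed chain
print("E(X)          =", mean)      # right end (approximately lam = 4)
```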

Corollaries

Chebyshev's inequality

Chebyshev's inequality uses the variance to bound the probability that a random variable deviates far from the mean. Specifically,

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq a) \leq \frac{\operatorname{Var}(X)}{a^2}$$

for any a > 0.[3] Here $\operatorname{Var}(X)$ is the variance of X, defined as:

$$\operatorname{Var}(X) = \operatorname{E}\bigl[(X - \operatorname{E}(X))^2\bigr].$$

Chebyshev's inequality follows from Markov's inequality by considering the random variable

$$(X - \operatorname{E}(X))^2$$

and the constant $a^2$, for which Markov's inequality reads

$$\operatorname{P}\bigl((X - \operatorname{E}(X))^2 \geq a^2\bigr) \leq \frac{\operatorname{E}\bigl[(X - \operatorname{E}(X))^2\bigr]}{a^2}.$$

This argument can be summarized (where "MI" indicates use of Markov's inequality):

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq a) = \operatorname{P}\bigl((X - \operatorname{E}(X))^2 \geq a^2\bigr) \overset{\mathrm{MI}}{\leq} \frac{\operatorname{E}\bigl[(X - \operatorname{E}(X))^2\bigr]}{a^2} = \frac{\operatorname{Var}(X)}{a^2}.$$
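
To make the reduction concrete, the following sketch applies Markov's inequality to the squared deviation $(X - \operatorname{E}(X))^2$ on a simulated normal sample (an arbitrary choice) and confirms that the resulting bound coincides with the Chebyshev bound $\operatorname{Var}(X)/a^2$.

```python
import numpy as np

# Chebyshev via Markov: apply Markov's inequality to Y = (X - E X)^2 with constant a^2.
rng = np.random.default_rng(4)
x = rng.normal(loc=10.0, scale=2.0, size=1_000_000)  # arbitrary example distribution
a = 3.0

y = (x - x.mean()) ** 2               # squared deviation from the (sample) mean
markov_bound_on_y = y.mean() / a**2   # Markov: P(Y >= a^2) <= E(Y)/a^2
chebyshev_bound = x.var() / a**2      # Var(X)/a^2, the same number
empirical = (np.abs(x - x.mean()) >= a).mean()

print("P(|X - E X| >= a)  ~", empirical)
print("Markov on (X-EX)^2 =", markov_bound_on_y)
print("Chebyshev bound    =", chebyshev_bound)
```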

Other corollaries

  1. The "monotonic" result can be demonstrated by:

     $$\operatorname{P}(|X| \geq a) \leq \operatorname{P}(\varphi(|X|) \geq \varphi(a)) \overset{\mathrm{MI}}{\leq} \frac{\operatorname{E}(\varphi(|X|))}{\varphi(a)}.$$

  2. The result that, for a nonnegative random variable X, the quantile function $Q_X$ of X satisfies:

     $$Q_X(1-p) \leq \frac{\operatorname{E}(X)}{p},$$

     the proof using

     $$p \leq \operatorname{P}(X \geq Q_X(1-p)) \overset{\mathrm{MI}}{\leq} \frac{\operatorname{E}(X)}{Q_X(1-p)}$$

     (a numerical sketch of this bound follows the list).

  3. Let $M \succeq 0$ be a self-adjoint matrix-valued random variable and a > 0. Then

     $$\operatorname{P}(M \npreceq a \cdot I) \leq \frac{\operatorname{tr}(\operatorname{E}(M))}{a}$$

     can be shown in a similar manner.
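
Here is a hedged numerical sketch of corollary 2: it checks $Q_X(1-p) \leq \operatorname{E}(X)/p$ on a simulated non-negative sample, using the empirical quantile; the lognormal distribution and the values of $p$ are arbitrary choices.

```python
import numpy as np

# Check the quantile bound Q_X(1 - p) <= E(X)/p on a non-negative sample.
rng = np.random.default_rng(5)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)  # arbitrary non-negative X

for p in (0.5, 0.1, 0.01):
    q = np.quantile(x, 1 - p)   # empirical quantile Q_X(1 - p)
    bound = x.mean() / p        # Markov-derived bound E(X)/p
    print(f"p = {p}: Q_X(1-p) ~ {q:.3f} <= E(X)/p ~ {bound:.3f}")
```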

Examples

Assuming no income is negative, Markov's inequality shows that no more than 10% (1/10) of the population can have more than 10 times the average income.[4]

Another simple example is as follows: Andrew makes an average of 4 mistakes on his Statistics course tests. The best upper bound on the probability that Andrew will make at least 10 mistakes is 0.4, since

$$\operatorname{P}(X \geq 10) \leq \frac{\operatorname{E}(X)}{10} = \frac{4}{10} = 0.4.$$

Note that Andrew might make exactly 10 mistakes with probability 0.4 and make no mistakes with probability 0.6; the expectation is then exactly 4 mistakes, so the bound is attained.
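
Both examples come down to evaluating $\operatorname{E}(X)/a$; the sketch below simply does that arithmetic (the income figure of 1.0 is a placeholder, since the bound is scale-free).

```python
# Income example: P(income >= 10 * average) <= average / (10 * average) = 1/10.
average_income = 1.0  # placeholder value; the bound does not depend on it
print(average_income / (10 * average_income))  # 0.1

# Test example: E(X) = 4 mistakes, threshold a = 10 mistakes.
print(4 / 10)  # 0.4, attained by P(X = 10) = 0.4, P(X = 0) = 0.6
```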

See also

References

  1. ^ a b Huber, Mark (2019-11-26). "Halving the Bounds for the Markov, Chebyshev, and Chernoff Inequalities Using Smoothing". The American Mathematical Monthly. 126 (10): 915–927. arXiv:1803.06361. doi:10.1080/00029890.2019.1656484. ISSN 0002-9890.
  2. ^ Stein, E. M.; Shakarchi, R. (2005), Real Analysis, Princeton Lectures in Analysis, vol. 3 (1st ed.), p. 91.
  3. ^ a b Lin, Zhengyan (2010). Probability inequalities. Springer. p. 52.
  4. ^ Ross, Kevin. 5.4 Probability inequalities | An Introduction to Probability and Simulation.

External links

  • The formal proof of Markov's inequality in the Mizar system.