Coupon collector's problem

Summary

In probability theory, the coupon collector's problem refers to the mathematical analysis of "collect all coupons and win" contests. It asks the following question: If each box of a brand of cereal contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? An alternative statement is: Given n coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the expected number of trials needed grows as \Theta(n \log n).[a] For example, when n = 50 it takes about 225[b] trials on average to collect all 50 coupons.
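A short simulation makes the expectation concrete. The sketch below (Python; the helper name draws_to_collect is our own, chosen for illustration) draws coupons uniformly at random with replacement and averages the number of draws needed for n = 50; the empirical mean should land near the exact value 50·H_50 ≈ 225.

import random

def draws_to_collect(n, rng=random):
    """Draw coupons uniformly at random with replacement until all n
    types have been seen; return the number of draws that took."""
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

# Average over many independent runs for n = 50; the empirical mean
# should be close to the exact expectation 50 * H_50, roughly 225.
n, trials = 50, 10_000
average = sum(draws_to_collect(n) for _ in range(trials)) / trials
print(average)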

Graph of the number of coupons, n, versus the expected number of trials (i.e., time) needed to collect them all, E(T)

Solution

Via generating functions

By definition of Stirling numbers of the second kind, the probability that exactly t draws are needed is

Pr(T = t) = \frac{n!}{n^t} \left\{ {t - 1 \atop n - 1} \right\}.

By manipulating the generating function of the Stirling numbers, we can explicitly calculate all moments of T:

f_k(x) = \sum_{t \ge 0} \left\{ {t \atop k} \right\} x^t = \prod_{r=1}^{k} \frac{x}{1 - rx}.

In general, the k-th moment is E\left[ T^k \right] = n! \, \left. (x D)^k \bigl( x f_{n-1}(x) \bigr) \right|_{x = 1/n}, where D is the derivative operator \frac{d}{dx}. For example, the 0th moment is

E\left[ T^0 \right] = n! \, \left. \bigl( x f_{n-1}(x) \bigr) \right|_{x = 1/n} = \frac{n!}{n} \prod_{r=1}^{n-1} \frac{1}{n - r} = \frac{n!}{n \, (n-1)!} = 1,

and the 1st moment is E[T] = n! \, \left. (x D) \bigl( x f_{n-1}(x) \bigr) \right|_{x = 1/n}, which can be explicitly evaluated to n H_n, etc.
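As an illustrative check of the formula above (not part of the original derivation), the sketch below computes Pr(T = t) from Stirling numbers of the second kind for a small n, truncates the infinite sums at an arbitrary cutoff t_max, and verifies that the probabilities sum to 1 (the 0th moment) and that the first moment matches n·H_n.

from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(m, k):
    """Stirling number of the second kind via the standard recurrence."""
    if k == 0:
        return 1 if m == 0 else 0
    if m == 0:
        return 0
    return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

n = 5
t_max = 200   # truncate the infinite sum; the neglected tail is tiny here
total = 0.0   # should approach the 0th moment, 1
mean = 0.0    # should approach the 1st moment, n * H_n
for t in range(n, t_max + 1):
    p = factorial(n) / n**t * stirling2(t - 1, n - 1)
    total += p
    mean += t * p

print(total)                                          # ~1.0
print(mean, n * sum(1 / i for i in range(1, n + 1)))  # both ~11.4167 for n = 5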

Calculating the expectation

Let time T be the number of draws needed to collect all n coupons, and let t_i be the time to collect the i-th coupon after i − 1 coupons have been collected. Then T = t_1 + t_2 + \cdots + t_n. Think of T and t_i as random variables. Observe that the probability of collecting a new coupon is p_i = \frac{n - (i - 1)}{n} = \frac{n - i + 1}{n}. Therefore, t_i has a geometric distribution with expectation \frac{1}{p_i} = \frac{n}{n - i + 1}. By the linearity of expectations we have:

E(T) = E(t_1) + E(t_2) + \cdots + E(t_n) = \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_n} = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} = n \cdot \left( \frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n} \right) = n \cdot H_n.

Here H_n is the n-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:

E(T) = n \cdot H_n = n \log n + \gamma n + \frac{1}{2} + O(1/n),

where \gamma \approx 0.5772156649 is the Euler–Mascheroni constant.
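For a numerical illustration, a few lines of Python (a sketch with our own helper names expected_draws and approx_draws) compare the exact value n·H_n with the asymptotic approximation n log n + γn + 1/2 for several n; log here is the natural logarithm, matching math.log.

import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def expected_draws(n):
    """Exact expectation n * H_n."""
    return n * sum(1 / i for i in range(1, n + 1))

def approx_draws(n):
    """Asymptotic approximation n log n + gamma * n + 1/2 (natural log)."""
    return n * math.log(n) + GAMMA * n + 0.5

for n in (10, 50, 100, 1000):
    print(n, expected_draws(n), approx_draws(n))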

Using the Markov inequality to bound the desired probability:

P\left( T \ge c \, n H_n \right) \le \frac{1}{c}.

The above can be modified slightly to handle the case when we've already collected some of the coupons. Let k be the number of coupons already collected, then:

E(T_k) = E(t_{k+1}) + E(t_{k+2}) + \cdots + E(t_n) = n \cdot \left( \frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n - k} \right) = n \cdot H_{n-k}.

And when k = 0 we get the original result.
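A minimal sketch of the modified formula, assuming a hypothetical helper expected_remaining(n, k) that returns n·H_{n−k}:

def expected_remaining(n, k):
    """Expected number of further draws needed when k of the n coupon
    types have already been collected: n * H_{n-k}."""
    return n * sum(1 / i for i in range(1, n - k + 1))

print(expected_remaining(50, 0))   # ~224.96: k = 0 recovers the original result
print(expected_remaining(50, 40))  # ~146.45: the last 10 coupons dominate the cost

With k = 0 the first call reproduces the original expectation, while the second call shows that most of the expected cost is spent on the last few missing coupons.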

Calculating the variance

Using the independence of the random variables t_i, we obtain:

\operatorname{Var}(T) = \operatorname{Var}(t_1) + \operatorname{Var}(t_2) + \cdots + \operatorname{Var}(t_n) = \frac{1 - p_1}{p_1^2} + \frac{1 - p_2}{p_2^2} + \cdots + \frac{1 - p_n}{p_n^2} < \frac{n^2}{n^2} + \frac{n^2}{(n-1)^2} + \cdots + \frac{n^2}{1^2} = n^2 \sum_{i=1}^{n} \frac{1}{i^2} < \frac{\pi^2}{6} n^2,

since \sum_{i=1}^{\infty} \frac{1}{i^2} = \frac{\pi^2}{6} (see Basel problem).

Bound the desired probability using the Chebyshev inequality:

P\left( |T - n H_n| \ge c n \right) \le \frac{\pi^2}{6 c^2}.
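As an illustration (a sketch only; variance_T is our own helper name), the exact variance sum can be compared numerically against the π²n²/6 bound used in the Chebyshev estimate:

import math

def variance_T(n):
    """Exact variance of T: sum of (1 - p_i) / p_i**2 with p_i = (n - i + 1) / n."""
    total = 0.0
    for i in range(1, n + 1):
        p = (n - i + 1) / n
        total += (1 - p) / p**2
    return total

n = 50
print(variance_T(n))          # ~3837.9
print(math.pi**2 / 6 * n**2)  # the bound (pi^2 / 6) n^2, ~4112.3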

Tail estimates

A stronger tail estimate for the upper tail can be obtained as follows. Let Z_i^r denote the event that the i-th coupon was not picked in the first r trials. Then

P\left[ Z_i^r \right] = \left( 1 - \frac{1}{n} \right)^r \le e^{-r/n}.

Thus, for r = \beta n \log n, we have P\left[ Z_i^r \right] \le e^{-(\beta n \log n)/n} = n^{-\beta}. Via a union bound over the n coupons, we obtain

P\left[ T > \beta n \log n \right] = P\left[ \bigcup_i Z_i^{\beta n \log n} \right] \le n \cdot P\left[ Z_1^{\beta n \log n} \right] \le n^{1 - \beta}.
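The union-bound estimate can be checked by simulation. The sketch below (with arbitrary illustration choices n = 20 and β = 2) estimates P(T > βn log n) empirically and prints it next to the bound n^{1−β}; the empirical tail probability should come out below the bound.

import math
import random

def draws_to_collect(n):
    """Number of uniform draws with replacement needed to see all n coupons."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

n, beta, trials = 20, 2.0, 20_000
threshold = beta * n * math.log(n)
tail = sum(draws_to_collect(n) > threshold for _ in range(trials)) / trials
print(tail, n ** (1 - beta))  # empirical P(T > beta n log n) vs. the bound n^(1 - beta)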

Extensions and generalizations

  • Paul Erdős and Alfréd Rényi showed the limit theorem for the distribution of T: P\left( T < n \log n + c n \right) \to e^{-e^{-c}} as n \to \infty, which is a Gumbel distribution. A simple proof by martingales is in the next section.
  • Donald J. Newman and Lawrence Shepp gave a generalization of the coupon collector's problem when m copies of each coupon need to be collected. Let Tm be the first time m copies of each coupon are collected. They showed that the expectation in this case satisfies:
E\left( T_m \right) = n \log n + (m - 1) \, n \log\log n + O(n), \quad \text{as } n \to \infty.
Here m is fixed. When m = 1 we get the earlier formula for the expectation.
  • A common generalization, also due to Erdős and Rényi:
P\left( T_m < n \log n + (m - 1) \, n \log\log n + c n \right) \to e^{-e^{-c}/(m-1)!}, \quad \text{as } n \to \infty.
  • In the general case of a nonuniform probability distribution, according to Philippe Flajolet et al.,[2]

E(T) = \int_0^\infty \left( 1 - \prod_{i=1}^{m} \left( 1 - e^{-p_i t} \right) \right) dt.

This is equal to

E(T) = \sum_{q=0}^{m-1} (-1)^{m-1-q} \sum_{|J| = q} \frac{1}{1 - P_J},

where m denotes the number of coupons to be collected and PJ denotes the probability of getting any coupon in the set of coupons J.
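As an illustration of the integral formula in the last bullet above, the following sketch numerically integrates it for an arbitrarily chosen non-uniform probability vector and compares the result with a direct simulation; the probability vector, step size, and truncation point are all assumptions made for the example.

import math
import random

# An arbitrary non-uniform coupon distribution chosen for illustration.
p = [0.5, 0.25, 0.125, 0.125]

def integrand(t):
    prod = 1.0
    for pi in p:
        prod *= 1.0 - math.exp(-pi * t)
    return 1.0 - prod

# Crude rectangle-rule evaluation of E(T) = integral_0^inf (1 - prod(1 - e^{-p_i t})) dt.
dt, upper = 0.01, 400.0
expected = sum(integrand(k * dt) for k in range(int(upper / dt))) * dt

# Direct simulation for comparison.
def draws_to_collect(weights):
    seen, draws = set(), 0
    while len(seen) < len(weights):
        seen.add(random.choices(range(len(weights)), weights=weights)[0])
        draws += 1
    return draws

trials = 5_000
simulated = sum(draws_to_collect(p) for _ in range(trials)) / trials
print(expected, simulated)  # both ~12.75 for this distribution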

Martingales

This section is based on Kan (2005).[3]

Define a discrete random process N(0), N(1), N(2), \ldots by letting N(t) be the number of coupons not yet seen after t draws. The random process is just a sequence generated by a Markov chain with states n, n-1, \ldots, 1, 0, and transition probabilities

p_{i \to i-1} = \frac{i}{n}, \qquad p_{i \to i} = 1 - \frac{i}{n}.

Now define

M(t) := N(t) \left( \frac{n}{n-1} \right)^t;

then it is a martingale, since

E\left[ M(t+1) \mid N(t) \right] = \left( \frac{n}{n-1} \right)^{t+1} E\left[ N(t+1) \mid N(t) \right] = \left( \frac{n}{n-1} \right)^{t+1} N(t) \, \frac{n-1}{n} = M(t).

Consequently, E[M(t)] = E[M(0)] = n, so E[N(t)] = n \left( 1 - \frac{1}{n} \right)^t. In particular, we have the limit law \lim_{n \to \infty} E\left[ N(n \log n + c n) \right] = e^{-c} for any c. This suggests a limit law for T.

More generally, each \binom{N(t)}{k} \left( \frac{n}{n-k} \right)^t is a martingale process, which allows us to calculate all moments of N(t). For example,

E\left[ \binom{N(t)}{2} \right] = \binom{n}{2} \left( 1 - \frac{2}{n} \right)^t,

giving another limit law \lim_{n \to \infty} E\left[ \binom{N(n \log n + c n)}{2} \right] = \frac{e^{-2c}}{2}. More generally,

\lim_{n \to \infty} E\left[ \binom{N(n \log n + c n)}{k} \right] = \frac{e^{-kc}}{k!},

meaning that N(n \log n + c n) has all moments converging to constants, so it converges to some probability distribution on \{0, 1, 2, \ldots\}.

Let N be the random variable with the limit distribution. We have

E\left[ \binom{N}{k} \right] = \frac{e^{-kc}}{k!}, \qquad k = 0, 1, 2, \ldots

By introducing a new variable s, we can sum up both sides explicitly:

E\left[ \sum_{k=0}^{\infty} \binom{N}{k} s^k \right] = \sum_{k=0}^{\infty} \frac{\left( s e^{-c} \right)^k}{k!},

giving E\left[ (1+s)^N \right] = e^{s e^{-c}}.

At the s = -1 limit, we have P(N = 0) = e^{-e^{-c}}; since P\left( T < n \log n + c n \right) = P\left( N(n \log n + c n) = 0 \right), this is precisely what the limit law states.

By taking the derivative \frac{d}{ds} multiple times, we find that P(N = k) = e^{-e^{-c}} \frac{e^{-kc}}{k!}, which is a Poisson distribution with mean e^{-c}.
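These limit laws can be checked numerically. The sketch below (with arbitrary illustration parameters n = 1000, c = 0, and 2000 runs) simulates N(n log n + cn), the number of unseen coupons, and compares its empirical distribution with the limiting Poisson distribution of mean e^{−c}; in particular, the empirical P(N = 0) should be close to e^{−e^{−c}} ≈ 0.368.

import math
import random
from collections import Counter

# Arbitrary illustration parameters: n coupons, shift c, and number of runs.
n, c, trials = 1_000, 0.0, 2_000
t = int(n * math.log(n) + c * n)  # number of draws, n log n + c n
lam = math.exp(-c)                # limiting Poisson mean e^{-c}

counts = Counter()
for _ in range(trials):
    seen = set()
    for _ in range(t):
        seen.add(random.randrange(n))
    counts[n - len(seen)] += 1    # N(t): coupons still unseen after t draws

for k in range(4):
    empirical = counts[k] / trials
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    print(k, empirical, poisson)  # P(N = 0) should be close to e^{-e^{-c}} ~ 0.368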

See also

Notes

  1. ^ Here and throughout this article, "log" refers to the natural logarithm rather than a logarithm to some other base. The use of Θ here invokes big O notation.
  2. ^ E(50) = 50(1 + 1/2 + 1/3 + ... + 1/50) ≈ 224.9603, the expected number of trials to collect all 50 coupons. The approximation n \log n + \gamma n + \frac{1}{2} for this expected number gives in this case 50 \log 50 + 50\gamma + \frac{1}{2} \approx 195.6011 + 28.8608 + 0.5 \approx 224.9619.

References

  1. ^ Mitzenmacher, Michael; Upfal, Eli (2017). Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis (2nd ed.). Cambridge, United Kingdom. Theorem 5.13. ISBN 978-1-107-15488-9. OCLC 960841613.
  2. ^ Flajolet, Philippe; Gardy, Danièle; Thimonier, Loÿs (1992), "Birthday paradox, coupon collectors, caching algorithms and self-organizing search", Discrete Applied Mathematics, 39 (3): 207–229, CiteSeerX 10.1.1.217.5965, doi:10.1016/0166-218x(92)90177-c
  3. ^ Kan, N. D. (2005-05-01). "Martingale approach to the coupon collection problem". Journal of Mathematical Sciences. 127 (1): 1737–1744. doi:10.1007/s10958-005-0134-y. ISSN 1573-8795.
  • Blom, Gunnar; Holst, Lars; Sandell, Dennis (1994), "7.5 Coupon collecting I, 7.6 Coupon collecting II, and 15.4 Coupon collecting III", Problems and Snapshots from the World of Probability, New York: Springer-Verlag, pp. 85–87, 191, ISBN 0-387-94161-4, MR 1265713.
  • Dawkins, Brian (1991), "Siobhan's problem: the coupon collector revisited", The American Statistician, 45 (1): 76–82, doi:10.2307/2685247, JSTOR 2685247.
  • Erdős, Paul; Rényi, Alfréd (1961), "On a classical problem of probability theory" (PDF), Magyar Tudományos Akadémia Matematikai Kutató Intézetének Közleményei, 6: 215–220, MR 0150807.
  • Laplace, Pierre-Simon (1812), Théorie analytique des probabilités, pp. 194–195.
  • Newman, Donald J.; Shepp, Lawrence (1960), "The double dixie cup problem", American Mathematical Monthly, 67 (1): 58–61, doi:10.2307/2308930, JSTOR 2308930, MR 0120672
  • Flajolet, Philippe; Gardy, Danièle; Thimonier, Loÿs (1992), "Birthday paradox, coupon collectors, caching algorithms and self-organizing search", Discrete Applied Mathematics, 39 (3): 207–229, doi:10.1016/0166-218X(92)90177-C, MR 1189469.
  • Isaac, Richard (1995), "8.4 The coupon collector's problem solved", The Pleasures of Probability, Undergraduate Texts in Mathematics, New York: Springer-Verlag, pp. 80–82, ISBN 0-387-94415-X, MR 1329545.
  • Motwani, Rajeev; Raghavan, Prabhakar (1995), "3.6. The Coupon Collector's Problem", Randomized algorithms, Cambridge: Cambridge University Press, pp. 57–63, ISBN 9780521474658, MR 1344451.

External links