In probability theory and statistics, the F-distribution or F-ratio, also known as Snedecor's F distribution or the Fisher–Snedecor distribution (after Ronald Fisher and George W. Snedecor), is a continuous probability distribution that arises frequently as the null distribution of a test statistic, most notably in the analysis of variance (ANOVA) and other F-tests.[2][3][4][5]
Fisher–Snedecor
[Plots of the probability density function and the cumulative distribution function]
Parameters: d_1, d_2 > 0 (degrees of freedom)
Support: x ∈ (0, +∞) if d_1 = 1, otherwise x ∈ [0, +∞)
PDF: \frac{\sqrt{\frac{(d_1 x)^{d_1} d_2^{d_2}}{(d_1 x + d_2)^{d_1 + d_2}}}}{x\,\mathrm{B}\!\left(\frac{d_1}{2}, \frac{d_2}{2}\right)}
CDF: I_{d_1 x/(d_1 x + d_2)}\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right)
Mean: d_2/(d_2 − 2), for d_2 > 2
Mode: \frac{d_1 − 2}{d_1} \cdot \frac{d_2}{d_2 + 2}, for d_1 > 2
Variance: \frac{2 d_2^2 (d_1 + d_2 − 2)}{d_1 (d_2 − 2)^2 (d_2 − 4)}, for d_2 > 4
Skewness: \frac{(2 d_1 + d_2 − 2)\sqrt{8(d_2 − 4)}}{(d_2 − 6)\sqrt{d_1(d_1 + d_2 − 2)}}, for d_2 > 6
Excess kurtosis: see text
Entropy: ln Γ(d_1/2) + ln Γ(d_2/2) − ln Γ((d_1 + d_2)/2) + (1 − d_1/2) ψ(1 + d_1/2) − (1 + d_2/2) ψ(1 + d_2/2) + ((d_1 + d_2)/2) ψ((d_1 + d_2)/2) + ln(d_2/d_1) [1]
MGF: does not exist; raw moments defined in text and in [2][3]
CF: see text
Definition
The F-distribution with d_1 and d_2 degrees of freedom is the distribution of

X = \frac{U_1/d_1}{U_2/d_2}
where U_1 and U_2 are independent random variables with chi-square distributions with respective degrees of freedom d_1 and d_2.
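This construction can be checked directly by simulation. The following sketch assumes NumPy and SciPy are available; the degrees of freedom, sample size, and seed are arbitrary illustration choices, not part of the definition.

```python
# Monte Carlo check that (U1/d1)/(U2/d2), with independent chi-square
# numerator and denominator, follows an F(d1, d2) distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d1, d2, n = 5, 12, 200_000          # illustrative degrees of freedom / sample size

u1 = rng.chisquare(d1, size=n)      # U1 ~ chi-square(d1)
u2 = rng.chisquare(d2, size=n)      # U2 ~ chi-square(d2), independent of U1
x = (u1 / d1) / (u2 / d2)           # X = (U1/d1) / (U2/d2)

# Kolmogorov–Smirnov test against the F(d1, d2) CDF; the statistic should be tiny.
ks = stats.kstest(x, stats.f(d1, d2).cdf)
print(ks.statistic, ks.pvalue)
```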
It can be shown to follow that the probability density function (pdf) for X is given by
f(x; d_1, d_2) = \frac{\sqrt{\dfrac{(d_1 x)^{d_1}\, d_2^{d_2}}{(d_1 x + d_2)^{d_1 + d_2}}}}{x\,\mathrm{B}\!\left(\frac{d_1}{2}, \frac{d_2}{2}\right)}
               = \frac{1}{\mathrm{B}\!\left(\frac{d_1}{2}, \frac{d_2}{2}\right)} \left(\frac{d_1}{d_2}\right)^{d_1/2} x^{d_1/2 - 1} \left(1 + \frac{d_1}{d_2}\, x\right)^{-(d_1 + d_2)/2}
for real x > 0. Here \mathrm{B} is the beta function. In many applications, the parameters d_1 and d_2 are positive integers, but the distribution is well-defined for positive real values of these parameters.
The cumulative distribution function is
F(x; d_1, d_2) = I_{d_1 x/(d_1 x + d_2)}\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right),

where I is the regularized incomplete beta function.
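As a sanity check of these closed forms, the density and distribution function can be evaluated directly with the beta function and the regularized incomplete beta function and compared with SciPy's built-in F-distribution. This is a minimal sketch; the parameter and test values are arbitrary.

```python
# Evaluate the F(d1, d2) pdf and cdf from the closed-form expressions above
# and compare with scipy.stats.f.
import numpy as np
from scipy import stats, special

d1, d2 = 5.0, 12.0
x = np.array([0.2, 0.7, 1.0, 2.5, 6.0])

# pdf: (d1/d2)^(d1/2) * x^(d1/2 - 1) * (1 + d1 x / d2)^(-(d1+d2)/2) / B(d1/2, d2/2)
pdf = ((d1 / d2) ** (d1 / 2) * x ** (d1 / 2 - 1)
       * (1 + d1 * x / d2) ** (-(d1 + d2) / 2) / special.beta(d1 / 2, d2 / 2))

# cdf: regularized incomplete beta function I_{d1 x / (d1 x + d2)}(d1/2, d2/2)
cdf = special.betainc(d1 / 2, d2 / 2, d1 * x / (d1 * x + d2))

print(np.allclose(pdf, stats.f.pdf(x, d1, d2)))   # expected: True
print(np.allclose(cdf, stats.f.cdf(x, d1, d2)))   # expected: True
```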
The expectation, variance, and other details about the F(d_1, d_2) distribution are given in the summary box above; for d_2 > 8, the excess kurtosis is
\gamma_2 = 12\,\frac{d_1(5 d_2 - 22)(d_1 + d_2 - 2) + (d_2 - 4)(d_2 - 2)^2}{d_1(d_2 - 6)(d_2 - 8)(d_1 + d_2 - 2)}.
The k-th moment of an F(d_1, d_2) distribution exists and is finite only when 2k < d_2, and it is equal to
\mu_X(k) = \left(\frac{d_2}{d_1}\right)^k \frac{\Gamma\!\left(\tfrac{d_1}{2} + k\right)}{\Gamma\!\left(\tfrac{d_1}{2}\right)} \frac{\Gamma\!\left(\tfrac{d_2}{2} - k\right)}{\Gamma\!\left(\tfrac{d_2}{2}\right)}.
[6]
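The raw-moment formula can be checked numerically, for example against SciPy's moment routine and against sample moments; the sketch below assumes degrees of freedom large enough that the moments in question exist (d_2 > 2k), with arbitrary illustrative values.

```python
# Raw moments of F(d1, d2) from the gamma-function formula, cross-checked
# against scipy.stats.f.moment and against Monte Carlo sample moments.
import numpy as np
from scipy import stats
from scipy.special import gamma

d1, d2 = 6.0, 20.0                       # d2 = 20 > 2k for k = 1, 2, 3
rng = np.random.default_rng(1)
sample = stats.f.rvs(d1, d2, size=1_000_000, random_state=rng)

def raw_moment(k, d1, d2):
    # (d2/d1)^k * Gamma(d1/2 + k)/Gamma(d1/2) * Gamma(d2/2 - k)/Gamma(d2/2)
    return ((d2 / d1) ** k * gamma(d1 / 2 + k) / gamma(d1 / 2)
            * gamma(d2 / 2 - k) / gamma(d2 / 2))

for k in (1, 2, 3):
    # Higher sample moments converge slowly, so expect only rough agreement at k = 3.
    print(k, raw_moment(k, d1, d2), stats.f.moment(k, d1, d2), np.mean(sample ** k))
```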
The F-distribution is a particular parametrization of the beta prime distribution, which is also called the beta distribution of the second kind.
The characteristic function is listed incorrectly in many standard references (e.g.,[3]). The correct expression[7] is
\varphi^{F}_{d_1, d_2}(s) = \frac{\Gamma\!\left(\frac{d_1 + d_2}{2}\right)}{\Gamma\!\left(\tfrac{d_2}{2}\right)}\, U\!\left(\frac{d_1}{2},\, 1 - \frac{d_2}{2},\, -\frac{d_2}{d_1}\, i s\right)

where U(a, b, z) is the confluent hypergeometric function of the second kind.
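One way to sanity-check this expression is to compare it with a Monte Carlo estimate of E[exp(isX)]. The sketch below assumes mpmath is available and that its hyperu routine accepts the complex argument used here; the parameter values, seed, and sample size are arbitrary, and agreement is only expected to within Monte Carlo error.

```python
# Compare the closed-form characteristic function (via Tricomi's U) with a
# Monte Carlo estimate of E[exp(i s X)] for X ~ F(d1, d2).
import numpy as np
from scipy import stats
import mpmath

d1, d2, s = 4.0, 9.0, 0.7            # illustrative parameter values

# Closed form: Gamma((d1+d2)/2)/Gamma(d2/2) * U(d1/2, 1 - d2/2, -(d2/d1) i s)
cf_closed = (mpmath.gamma((d1 + d2) / 2) / mpmath.gamma(d2 / 2)
             * mpmath.hyperu(d1 / 2, 1 - d2 / 2, -1j * s * d2 / d1))

# Monte Carlo: average exp(i s X) over a large F(d1, d2) sample
rng = np.random.default_rng(4)
samples = stats.f.rvs(d1, d2, size=2_000_000, random_state=rng)
cf_mc = np.mean(np.exp(1j * s * samples))

print(complex(cf_closed))
print(cf_mc)                         # should agree to a few decimal places
```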
Characterization
A random variate of the F-distribution with parameters d_1 and d_2 arises as the ratio of two appropriately scaled chi-squared variates:[8]

X = \frac{U_1/d_1}{U_2/d_2}
where U_1 and U_2 have chi-square distributions with d_1 and d_2 degrees of freedom, respectively, and U_1 and U_2 are independent.
In instances where the F-distribution is used, for example in the analysis of variance, independence of U_1 and U_2 might be demonstrated by applying Cochran's theorem.
Equivalently, since a chi-squared random variable is the sum of squares of independent standard normal random variables, the random variable of the F-distribution may also be written
X = \frac{s_1^2}{\sigma_1^2} \div \frac{s_2^2}{\sigma_2^2},
where s_1^2 = S_1^2/d_1 and s_2^2 = S_2^2/d_2, S_1^2 is the sum of squares of d_1 random variables from the normal distribution N(0, σ_1^2), and S_2^2 is the sum of squares of d_2 random variables from the normal distribution N(0, σ_2^2).
In a frequentist context, a scaled F-distribution therefore gives the probability p(s_1^2/s_2^2 | σ_1^2, σ_2^2), with the F-distribution itself, without any scaling, applying where σ_1^2 is taken to be equal to σ_2^2. This is the context in which the F-distribution most generally appears in F-tests: where the null hypothesis is that two independent normal variances are equal, and the observed sums of some appropriately selected squares are then examined to see whether their ratio is significantly incompatible with this null hypothesis.
The quantity X has the same distribution in Bayesian statistics, if an uninformative rescaling-invariant Jeffreys prior is taken for the prior probabilities of σ_1^2 and σ_2^2.[9] In this context, a scaled F-distribution thus gives the posterior probability p(σ_2^2/σ_1^2 | s_1^2, s_2^2), where the observed sums s_1^2 and s_2^2 are now taken as known.
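The following sketch illustrates this numerically under stated assumptions: with independent Jeffreys priors p(σ_i^2) ∝ 1/σ_i^2, the posterior of each σ_i^2 given S_i^2 satisfies S_i^2/σ_i^2 ~ χ²_{d_i}, so posterior draws of σ_2^2/σ_1^2 should match an F(d_1, d_2) distribution scaled by s_2^2/s_1^2. The numerical values of d_1, d_2, s_1^2, and s_2^2 are arbitrary.

```python
# Posterior of sigma2^2 / sigma1^2 under independent Jeffreys priors
# p(sigma_i^2) proportional to 1/sigma_i^2, given observed mean squares,
# with S_i^2 / sigma_i^2 ~ chi-square(d_i) a posteriori.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
d1, d2 = 8, 15
s1_sq, s2_sq = 2.3, 1.1                    # observed mean squares (illustrative)
S1_sq, S2_sq = d1 * s1_sq, d2 * s2_sq      # corresponding sums of squares

# Draw from the posteriors sigma_i^2 = S_i^2 / chi-square(d_i)
n = 500_000
sigma1_sq = S1_sq / rng.chisquare(d1, size=n)
sigma2_sq = S2_sq / rng.chisquare(d2, size=n)
ratio = sigma2_sq / sigma1_sq

# Claim: ratio ~ (s2^2 / s1^2) * F(d1, d2), i.e. a scaled F-distribution
scaled_f = stats.f(d1, d2, scale=s2_sq / s1_sq)
print(stats.kstest(ratio, scaled_f.cdf))   # KS statistic should be near zero
```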
Properties and related distributions
If X ∼ χ²_{d_1} and Y ∼ χ²_{d_2} (chi-squared distributions) are independent, then (X/d_1)/(Y/d_2) ∼ F(d_1, d_2).
If X_k ∼ Γ(α_k, β_k) (gamma distributions) are independent, then (α_2 β_1 X_1)/(α_1 β_2 X_2) ∼ F(2α_1, 2α_2).
If X ∼ Beta(d_1/2, d_2/2) (beta distribution), then d_2 X / (d_1(1 − X)) ∼ F(d_1, d_2). Equivalently, if X ∼ F(d_1, d_2), then (d_1 X/d_2)/(1 + d_1 X/d_2) ∼ Beta(d_1/2, d_2/2).
If X ∼ F(d_1, d_2), then (d_1/d_2) X has a beta prime distribution: (d_1/d_2) X ∼ β′(d_1/2, d_2/2).
If X ∼ F(d_1, d_2), then Y = lim_{d_2 → ∞} d_1 X has the chi-squared distribution χ²_{d_1}.
F(d_1, d_2) is equivalent to the scaled Hotelling's T-squared distribution (d_2 / (d_1(d_1 + d_2 − 1))) T²(d_1, d_1 + d_2 − 1).
If X ∼ F(d_1, d_2), then X^{−1} ∼ F(d_2, d_1).
If X ∼ t(n) (Student's t-distribution), then X² ∼ F(1, n) and X^{−2} ∼ F(n, 1).
The F-distribution is a special case of the type 6 Pearson distribution.
If X and Y are independent, with X, Y ∼ Laplace(μ, b), then |X − μ| / |Y − μ| ∼ F(2, 2).
If X ∼ F(n, m), then (log X)/2 ∼ FisherZ(n, m) (Fisher's z-distribution).
The noncentral F-distribution simplifies to the F-distribution if λ = 0.
The doubly noncentral F-distribution simplifies to the F-distribution if λ_1 = λ_2 = 0.
If Q_X(p) is the quantile p for X ∼ F(d_1, d_2) and Q_Y(1 − p) is the quantile 1 − p for Y ∼ F(d_2, d_1), then Q_X(p) = 1/Q_Y(1 − p); this identity is among those checked numerically in the sketch at the end of this list.
The F-distribution is an instance of a ratio distribution.
The W-distribution[10] is a unique parametrization of the F-distribution.
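Several of the relations above lend themselves to quick numerical checks. The sketch below verifies the reciprocal property, the connection to Student's t, and the quantile identity using scipy.stats; the parameter values are arbitrary.

```python
# Spot checks of a few relations between the F-distribution and other
# distributions, using closed-form cdf/sf/ppf values from scipy.stats.
import numpy as np
from scipy import stats

d1, d2, n = 5.0, 11.0, 7.0
x, p = 1.8, 0.9

# X ~ F(d1, d2)  =>  1/X ~ F(d2, d1):  P(X <= x) = P(1/X >= 1/x)
print(np.isclose(stats.f.cdf(x, d1, d2), stats.f.sf(1 / x, d2, d1)))

# T ~ t(n)  =>  T^2 ~ F(1, n):  P(T^2 <= x) = P(-sqrt(x) <= T <= sqrt(x))
lhs = stats.f.cdf(x, 1, n)
rhs = stats.t.cdf(np.sqrt(x), n) - stats.t.cdf(-np.sqrt(x), n)
print(np.isclose(lhs, rhs))

# Quantile identity: Q_X(p) = 1 / Q_Y(1 - p) for X ~ F(d1, d2), Y ~ F(d2, d1)
print(np.isclose(stats.f.ppf(p, d1, d2), 1 / stats.f.ppf(1 - p, d2, d1)))
```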
See also
Beta prime distribution
Chi-square distribution
Chow test
Gamma distribution
Hotelling's T-squared distribution
Wilks' lambda distribution
Wishart distribution
Modified half-normal distribution[11] with the pdf on (0, ∞) given as

f(x) = \frac{2\beta^{\alpha/2} x^{\alpha - 1} \exp(-\beta x^2 + \gamma x)}{\Psi\!\left(\tfrac{\alpha}{2}, \tfrac{\gamma}{\sqrt{\beta}}\right)},

where \Psi(\alpha, z) = {}_1\Psi_1\!\left(\begin{matrix}\left(\alpha, \tfrac{1}{2}\right)\\(1, 0)\end{matrix}; z\right) denotes the Fox–Wright Psi function.
References
^ Lazo, A.V.; Rathie, P. (1978). "On the entropy of continuous probability distributions". IEEE Transactions on Information Theory. 24 (1). IEEE: 120–122. doi:10.1109/tit.1978.1055832.
^ a b Johnson, Norman Lloyd; Samuel Kotz; N. Balakrishnan (1995). Continuous Univariate Distributions, Volume 2 (Second Edition, Section 27). Wiley. ISBN 0-471-58494-0.
^ a b c Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 26". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 946. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
^ NIST (2006). Engineering Statistics Handbook – F Distribution.
^ Mood, Alexander; Franklin A. Graybill; Duane C. Boes (1974). Introduction to the Theory of Statistics (Third ed.). McGraw-Hill. pp. 246–249. ISBN 0-07-042864-6.
^ Taboga, Marco. "The F distribution".
^ Phillips, P. C. B. (1982). "The true characteristic function of the F distribution". Biometrika. 69: 261–264. JSTOR 2335882.
^ DeGroot, M. H. (1986). Probability and Statistics (2nd ed.). Addison-Wesley. p. 500. ISBN 0-201-11366-X.
^ Box, G. E. P.; Tiao, G. C. (1973). Bayesian Inference in Statistical Analysis. Addison-Wesley. p. 110. ISBN 0-201-00622-7.
^ Mahmoudi, Amin; Javed, Saad Ahmed (October 2022). "Probabilistic Approach to Multi-Stage Supplier Evaluation: Confidence Level Measurement in Ordinal Priority Approach". Group Decision and Negotiation. 31 (5): 1051–1096. doi:10.1007/s10726-022-09790-1. ISSN 0926-2644. PMC 9409630. PMID 36042813.
^ Sun, Jingchao; Kong, Maiying; Pal, Subhadip (22 June 2021). "The Modified-Half-Normal distribution: Properties and an efficient sampling scheme" (PDF). Communications in Statistics - Theory and Methods. 52 (5): 1591–1613. doi:10.1080/03610926.2021.1934700. ISSN 0361-0926. S2CID 237919587.
External links
Table of critical values of the F-distribution
Earliest Uses of Some of the Words of Mathematics: entry on F-distribution contains a brief history
Free calculator for F-testing