Winsorizing

Winsorizing or winsorization is the transformation of statistics by limiting extreme values in the statistical data to reduce the effect of possibly spurious outliers. It is named after the engineer-turned-biostatistician Charles P. Winsor (1895–1951). The effect is the same as clipping in signal processing.

The distribution of many statistics can be heavily influenced by outliers, values that lie far outside the bulk of the data. A typical strategy for accounting for these outliers, without eliminating them altogether, is to 'reset' them to a specified percentile (or an upper and lower percentile) of the data. For example, a 90% winsorization sets all data below the 5th percentile to the 5th percentile and all data above the 95th percentile to the 95th percentile. Winsorized estimators are usually more robust to outliers than their standard forms, although there are alternatives, such as trimming (see below), that achieve a similar effect.

Example

Consider a simple data set consisting of:

{92, 19, 101, 58, 1053, 91, 26, 78, 10, 13, −40, 101, 86, 85, 15, 89, 89, 28, −5, 41}
(N = 20, mean = 101.5)

The data below the 5th percentile lie between −40 and −5 inclusive, while the data above the 95th percentile lie between 101 and 1053 inclusive. Winsorization effectively resets the outlier values to the values of the data at the 5th and 95th percentiles. Accordingly, a 90% winsorization would result in the following data set:

{92, 19, 101, 58, 101, 91, 26, 78, 10, 13, −5, 101, 86, 85, 15, 89, 89, 28, −5, 41}
(N = 20, mean = 55.65)

After winsorization the mean has dropped to nearly half its previous value and is consequently more consistent with the bulk of the data set from which it is calculated.
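
As a check, the example can be reproduced with a short Python sketch (plain NumPy, in the spirit of the coding section below; the variable names are illustrative):

import numpy as np

data = np.array([92, 19, 101, 58, 1053, 91, 26, 78, 10, 13,
                 -40, 101, 86, 85, 15, 89, 89, 28, -5, 41])

# Reset everything below -5 to -5 and everything above 101 to 101,
# i.e. the 90% winsorization described above
winsorized = np.clip(data, -5, 101)

print(data.mean())        # 101.5
print(winsorized.mean())  # 55.65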

Explanation and distinction from trimming/truncation

Winsorizing is not equivalent to simply excluding data, which is a simpler procedure called trimming or truncation; rather, it is a method of censoring data.

In a trimmed estimator, the extreme values are discarded; in a winsorized estimator, the extreme values are instead replaced by certain percentiles (the trimmed minimum and maximum).

Thus a winsorized mean is not the same as a truncated or trimmed mean. For instance, the 10% trimmed mean is the average of the 5th to 95th percentile of the data, while the 90% winsorized mean sets the bottom 5% to the 5th percentile, the top 5% to the 95th percentile, and then averages the data. Winsorizing thus does not change the total number of values in the data set, N. In the example given above, the trimmed mean would be obtained from the smaller (truncated) set:

{92, 19, 101, 58, 91, 26, 78, 10, 13, 101, 86, 85, 15, 89, 89, 28, −5, 41}
(N = 18, trimmed mean = 56.5)
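
The difference can also be seen in code; a minimal Python sketch (again plain NumPy, with illustrative names) computes both estimators from the sorted data:

import numpy as np

data = np.array([92, 19, 101, 58, 1053, 91, 26, 78, 10, 13,
                 -40, 101, 86, 85, 15, 89, 89, 28, -5, 41])
s = np.sort(data)

# 10% trimmed mean: drop one value (5% of 20) from each end, then average
print(s[1:-1].mean())                     # 56.5
# 90% winsorized mean: replace those values with the adjacent ones (-5 and 101) instead
print(np.clip(data, s[1], s[-2]).mean())  # 55.65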

In this case, the winsorized mean can equivalently be expressed as a weighted average of the 5th percentile, the truncated mean, and the 95th percentile (for this 90% winsorized mean: 0.05 times the 5th percentile, 0.9 times the 10% trimmed mean, and 0.05 times the 95th percentile). However, in general, winsorized statistics need not be expressible in terms of the corresponding trimmed statistic.
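
Using the values from the example: 0.05 × (−5) + 0.90 × 56.5 + 0.05 × 101 = −0.25 + 50.85 + 5.05 = 55.65, the 90% winsorized mean found above.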

More formally, they are distinct because the order statistics are not independent.

Uses

Winsorization is used in the survey methodology context in order to "trim" extreme survey non-response weights.[1] It is also used in the construction of some stock indexes when looking at the range of certain factors (for example growth and value) for particular stocks.[2]

Coding methods

Python can winsorize data using the SciPy library:

import numpy as np
from scipy.stats.mstats import winsorize

# Limit the lowest 5% and the highest 5% of the values (a 90% winsorization)
winsorize(np.array([92, 19, 101, 58, 1053, 91, 26, 78, 10, 13, -40, 101, 86, 85, 15, 89, 89, 28, -5, 41]), limits=[0.05, 0.05])
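
With n = 20 and these limits, the call replaces the single lowest value (−40) with −5 and the single highest value (1053) with 101, which should reproduce the 90% winsorized set from the example above (mean 55.65).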

R can winsorize data using the DescTools package:[3]

library(DescTools)
a <- c(92, 19, 101, 58, 1053, 91, 26, 78, 10, 13, -40, 101, 86, 85, 15, 89, 89, 28, -5, 41)
# Clip values below the 5th and above the 95th sample quantile of a
DescTools::Winsorize(a, probs = c(0.05, 0.95))
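
Here the cut-offs are the 5th and 95th sample quantiles of a, so, depending on the quantile interpolation used, the replacement values may differ slightly from the order-statistic replacement in the hand-worked example above.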


References

  1. ^ Lee, Brian K.; Lessler, Justin; Stuart, Elizabeth A. (2011). "Weight Trimming and Propensity Score Weighting". PLoS ONE. 6 (3): e18174. doi:10.1371/journal.pone.0018174. ISSN 1932-6203. PMC 3069059. PMID 21483818.
  2. ^ "2.2.1. Winsorizing the Variable". MSCI Global Investable Market Value and Growth Index Methodology (PDF) (Report). MSCI. February 2021.
  3. ^ Signorell, Andri; et al. (2021). DescTools: Tools for Descriptive Statistics. R package version 0.99.41.
  • Hastings, Jr., Cecil; Mosteller, Frederick; Tukey, John W.; Winsor, Charles P. (1947). "Low moments for small samples: a comparative study of order statistics". Annals of Mathematical Statistics. 18 (3): 413–426. doi:10.1214/aoms/1177730388.
  • Dixon, W. J. (1960). "Simplified Estimation from Censored Normal Samples". Annals of Mathematical Statistics. 31 (2): 385–391. doi:10.1214/aoms/1177705900.
  • Tukey, J. W. (1962). "The Future of Data Analysis". Annals of Mathematical Statistics. 33 (1): 1–67 [p. 18]. doi:10.1214/aoms/1177704711. JSTOR 2237638.

External links

  • "Winsorization". R-bloggers. June 30, 2011.