
In machine learning and mathematical optimization, **loss functions for classification** are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems (problems of identifying which category a particular observation belongs to).^{[1]} Given $\mathcal{X}$ as the space of all possible inputs (usually $\mathcal{X} \subset \mathbb{R}^d$), and $\mathcal{Y} = \{-1, 1\}$ as the set of labels (possible outputs), a typical goal of classification algorithms is to find a function $f : \mathcal{X} \to \mathcal{Y}$ which best predicts a label $y$ for a given input $\vec{x}$.^{[2]} However, because of incomplete information, noise in the measurement, or probabilistic components in the underlying process, it is possible for the same $\vec{x}$ to generate different $y$.^{[3]} As a result, the goal of the learning problem is to minimize expected loss (also known as the risk), defined as

$$I[f] = \int_{\mathcal{X} \times \mathcal{Y}} V(f(\vec{x}), y)\, p(\vec{x}, y)\, d\vec{x}\, dy$$

where $V(f(\vec{x}), y)$ is a given loss function, and $p(\vec{x}, y)$ is the probability density function of the process that generated the data, which can equivalently be written as

$$p(\vec{x}, y) = p(y \mid \vec{x})\, p(\vec{x}).$$

Within classification, several commonly used loss functions are written solely in terms of the product of the true label $y$ and the predicted label $f(\vec{x})$. Therefore, they can be defined as functions of only one variable $v = y f(\vec{x})$, so that $V(f(\vec{x}), y) = \phi(y f(\vec{x})) = \phi(v)$ with a suitably chosen function $\phi : \mathbb{R} \to \mathbb{R}$. These are called **margin-based loss functions**. Choosing a margin-based loss function amounts to choosing $\phi$. Selection of a loss function within this framework impacts the optimal $f^{*}_{\phi}$ which minimizes the expected risk.

In the case of binary classification, it is possible to simplify the calculation of expected risk from the integral specified above. Specifically,

$$\begin{aligned} I[f] &= \int_{\mathcal{X} \times \mathcal{Y}} V(f(\vec{x}), y)\, p(\vec{x}, y)\, d\vec{x}\, dy \\ &= \int_{\mathcal{X}} \int_{\mathcal{Y}} \phi(y f(\vec{x}))\, p(y \mid \vec{x})\, p(\vec{x})\, dy\, d\vec{x} \\ &= \int_{\mathcal{X}} \left[ \phi(f(\vec{x}))\, p(1 \mid \vec{x}) + \phi(-f(\vec{x}))\, p(-1 \mid \vec{x}) \right] p(\vec{x})\, d\vec{x} \\ &= \int_{\mathcal{X}} \left[ \phi(f(\vec{x}))\, p(1 \mid \vec{x}) + \phi(-f(\vec{x}))\left(1 - p(1 \mid \vec{x})\right) \right] p(\vec{x})\, d\vec{x} \end{aligned}$$

The second equality follows from the properties described above. The third equality follows from the fact that 1 and −1 are the only possible values for $y$, and the fourth because $p(-1 \mid \vec{x}) = 1 - p(1 \mid \vec{x})$. The term within brackets, $\phi(f(\vec{x}))\, p(1 \mid \vec{x}) + \phi(-f(\vec{x}))\left(1 - p(1 \mid \vec{x})\right)$, is known as the *conditional risk*.

One can solve for the minimizer of $I[f]$ by taking the functional derivative of the last equality with respect to $f$ and setting the derivative equal to 0. This results in the following equation

$$\frac{\partial \phi(f)}{\partial f}\, \eta + \frac{\partial \phi(-f)}{\partial f}\, (1 - \eta) = 0, \qquad (1)$$

where $\eta = p(y = 1 \mid \vec{x})$, which is also equivalent to setting the derivative of the conditional risk equal to zero.

Given the binary nature of classification, a natural selection for a loss function (assuming equal cost for false positives and false negatives) would be the 0–1 loss function (0–1 indicator function), which takes the value of 0 if the predicted classification equals that of the true class or a 1 if the predicted classification does not match the true class. This selection is modeled by

$$V(f(\vec{x}), y) = H(-y f(\vec{x}))$$

where $H$ indicates the Heaviside step function.
However, this loss function is non-convex and non-smooth, and solving for the optimal solution is an NP-hard combinatorial optimization problem.^{[4]} As a result, it is better to substitute **loss function surrogates** which are tractable for commonly used learning algorithms, as they have convenient properties such as being convex and smooth. In addition to their computational tractability, one can show that the solutions to the learning problem using these loss surrogates allow for the recovery of the actual solution to the original classification problem.^{[5]} Some of these surrogates are described below.
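
For illustration, the following minimal sketch (not part of the cited sources; the function names and sample margins are ours) evaluates the 0–1 loss alongside two convex surrogates, each of which upper-bounds it at every margin $v = y f(\vec{x})$:

```python
import math

# 0-1 loss written via the Heaviside step H(-v): 1 on a misclassified margin,
# 0 otherwise (the boundary v = 0 is counted as an error, one common convention).
def zero_one(v):
    return 1.0 if v <= 0 else 0.0

# Two convex surrogates; both upper-bound the 0-1 loss for every margin v.
def hinge(v):
    return max(0.0, 1.0 - v)

def exponential(v):
    return math.exp(-v)

for v in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"v={v:+.1f}  0-1={zero_one(v):.0f}  hinge={hinge(v):.2f}  exp={exponential(v):.2f}")
```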

In practice, the probability distribution $p(\vec{x}, y)$ is unknown. Consequently, utilizing a training set of $n$ independently and identically distributed sample points

$$S = \{ (\vec{x}_1, y_1), \dots, (\vec{x}_n, y_n) \}$$

drawn from the data sample space, one seeks to minimize empirical risk

$$I_S[f] = \frac{1}{n} \sum_{i=1}^{n} V(f(\vec{x}_i), y_i)$$

as a proxy for expected risk.^{[3]} (See statistical learning theory for a more detailed description.)
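
A minimal sketch of what evaluating empirical risk looks like in code (the linear scorer, hinge surrogate, and toy data below are illustrative assumptions, not from the sources):

```python
# Empirical risk I_S[f] = (1/n) * sum_i V(f(x_i), y_i) for a linear scorer
# f(x) = w . x under the hinge surrogate V(f(x), y) = max(0, 1 - y * f(x)).
def empirical_risk(w, xs, ys):
    def f(x):
        return sum(wj * xj for wj, xj in zip(w, x))
    return sum(max(0.0, 1.0 - y * f(x)) for x, y in zip(xs, ys)) / len(xs)

# Toy data: two 2-D points with labels in {-1, +1}.
xs = [(1.0, 2.0), (-1.5, 0.5)]
ys = [1, -1]
print(empirical_risk((0.5, -0.25), xs, ys))
```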

Utilizing Bayes' theorem, it can be shown that the optimal $f^*_{0/1}$, i.e., the one that minimizes the expected risk associated with the zero-one loss, implements the Bayes optimal decision rule for a binary classification problem and is in the form of

$$f^*_{0/1}(\vec{x}) = \begin{cases} \;\;\, 1 & \text{if } p(1 \mid \vec{x}) > p(-1 \mid \vec{x}) \\ \;\;\, 0 & \text{if } p(1 \mid \vec{x}) = p(-1 \mid \vec{x}) \\ -1 & \text{if } p(1 \mid \vec{x}) < p(-1 \mid \vec{x}). \end{cases}$$

A loss function is said to be *classification-calibrated* or *Bayes consistent* if its optimal $f^*_{\phi}$ is such that $\operatorname{sgn}(f^*_{\phi}(\vec{x})) = \operatorname{sgn}(f^*_{0/1}(\vec{x}))$ and is thus optimal under the Bayes decision rule. A Bayes consistent loss function allows us to find the Bayes optimal decision function $f^*_{\phi}$ by directly minimizing the expected risk and without having to explicitly model the probability density functions.
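
As a quick numerical illustration of this sign agreement (our own sketch; the tested posteriors are arbitrary), the logistic-loss minimizer $f^* = \log\frac{\eta}{1-\eta}$ derived below always shares its sign with the Bayes rule:

```python
import math

def sign(t):
    return (t > 0) - (t < 0)

# The logistic-loss minimizer is the log-odds log(eta / (1 - eta));
# Bayes consistency means its sign matches sign(2*eta - 1) for every posterior eta.
for eta in (0.1, 0.4, 0.5, 0.6, 0.9):
    f_logistic = math.log(eta / (1.0 - eta))
    assert sign(f_logistic) == sign(2.0 * eta - 1.0)
print("sign(f*_logistic) matches the Bayes rule on all tested posteriors")
```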

For a convex margin loss $\phi(v)$, it can be shown that $\phi(v)$ is Bayes consistent if and only if it is differentiable at 0 and $\phi'(0) < 0$.^{[6]}^{[1]} Yet, this result does not exclude the existence of non-convex Bayes consistent loss functions. A more general result states that Bayes consistent loss functions can be generated using the following formulation^{[7]}

$$\phi(v) = C\!\left[f^{-1}(v)\right] + \left(1 - f^{-1}(v)\right) C'\!\left[f^{-1}(v)\right], \qquad (2)$$

where $f(\eta)$, with $0 \le \eta \le 1$, is any invertible function such that $f^{-1}(-v) = 1 - f^{-1}(v)$, and $C(\eta)$ is any differentiable strictly concave function such that $C(\eta) = C(1 - \eta)$. Table-I shows the generated Bayes consistent loss functions for some example choices of $C(\eta)$ and $f^{-1}(v)$. Note that the Savage and Tangent loss are not convex. Such non-convex loss functions have been shown to be useful in dealing with outliers in classification.^{[7]}^{[8]} For all loss functions generated from (2), the posterior probability $p(y = 1 \mid \vec{x})$ can be found using the invertible *link function* as $p(y = 1 \mid \vec{x}) = \eta = f^{-1}(v)$. Such loss functions, where the posterior probability can be recovered using the invertible link, are called *proper loss functions*.

**Table-I:** Bayes consistent loss functions generated from (2)

| Loss name | $\phi(v)$ | $C(\eta)$ | $f^{-1}(v)$ | $f(\eta)$ |
|---|---|---|---|---|
| Exponential | $e^{-v}$ | $2\sqrt{\eta(1-\eta)}$ | $\frac{e^{2v}}{1+e^{2v}}$ | $\frac{1}{2}\log\left(\frac{\eta}{1-\eta}\right)$ |
| Logistic | $\frac{1}{\log 2}\log\left(1+e^{-v}\right)$ | $\frac{1}{\log 2}\left[-\eta\log\eta-(1-\eta)\log(1-\eta)\right]$ | $\frac{e^{v}}{1+e^{v}}$ | $\log\left(\frac{\eta}{1-\eta}\right)$ |
| Square | $(1-v)^{2}$ | $4\eta(1-\eta)$ | $\frac{1}{2}(v+1)$ | $2\eta-1$ |
| Savage | $\frac{1}{(1+e^{v})^{2}}$ | $\eta(1-\eta)$ | $\frac{e^{v}}{1+e^{v}}$ | $\log\left(\frac{\eta}{1-\eta}\right)$ |
| Tangent | $\left(2\arctan(v)-1\right)^{2}$ | $4\eta(1-\eta)$ | $\arctan(v)+\frac{1}{2}$ | $\tan\left(\eta-\frac{1}{2}\right)$ |

The sole minimizer of the expected risk, $f^*_{\phi}$, associated with the above generated loss functions can be directly found from equation (1) and shown to be equal to the corresponding $f(\eta)$. This holds even for the nonconvex loss functions, which means that gradient descent based algorithms such as gradient boosting can be used to construct the minimizer.
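
As a sanity check on formulation (2), the sketch below (illustrative only; the derivative $C'(\eta)$ is written out by hand) plugs the logistic-row choices of $C(\eta)$ and $f^{-1}(v)$ from Table-I into (2) and confirms that the closed-form logistic loss comes out:

```python
import math

LN2 = math.log(2.0)

def C(eta):           # concave entropy-style function from the logistic row of Table-I
    return (-eta * math.log(eta) - (1.0 - eta) * math.log(1.0 - eta)) / LN2

def C_prime(eta):     # its derivative, C'(eta) = log((1 - eta) / eta) / log(2)
    return math.log((1.0 - eta) / eta) / LN2

def f_inv(v):         # the logistic link f^{-1}(v) = e^v / (1 + e^v)
    return 1.0 / (1.0 + math.exp(-v))

def phi_from_eq2(v):  # formulation (2): C[f^{-1}(v)] + (1 - f^{-1}(v)) * C'[f^{-1}(v)]
    eta = f_inv(v)
    return C(eta) + (1.0 - eta) * C_prime(eta)

def phi_logistic(v):  # the closed form from Table-I
    return math.log1p(math.exp(-v)) / LN2

for v in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(phi_from_eq2(v) - phi_logistic(v)) < 1e-9
print("equation (2) reproduces the logistic loss")
```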

For proper loss functions, the *loss margin* can be defined as $\mu_{\phi} = -\frac{\phi'(0)}{\phi''(0)}$ and shown to be directly related to the regularization properties of the classifier.^{[9]} Specifically, a loss function of larger margin increases regularization and produces better estimates of the posterior probability. For example, the loss margin can be increased for the logistic loss by introducing a $\gamma$ parameter and writing the logistic loss as $\frac{1}{\gamma}\log(1 + e^{-\gamma v})$, where smaller $0 < \gamma < 1$ increases the margin of the loss. It is shown that this is directly equivalent to decreasing the learning rate in gradient boosting $F_m(\vec{x}) = F_{m-1}(\vec{x}) + \gamma h_m(\vec{x})$, where decreasing $\gamma$ improves the regularization of the boosted classifier. The theory makes it clear that when a learning rate of $\gamma$ is used, the correct formula for retrieving the posterior probability is now $\eta = f^{-1}(\gamma F(\vec{x}))$.
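
To make the margin tuning concrete, the following sketch (our own; the finite-difference step and tested values of $\gamma$ are arbitrary) estimates $\mu_{\phi} = -\phi'(0)/\phi''(0)$ for the $\gamma$-parametrized logistic loss; analytically the margin works out to $2/\gamma$, so shrinking $\gamma$ does widen it:

```python
import math

def phi_gamma(v, gamma):
    # Tunable logistic loss (1/gamma) * log(1 + e^{-gamma * v}).
    return math.log1p(math.exp(-gamma * v)) / gamma

def loss_margin(phi, eps=1e-5):
    # mu = -phi'(0) / phi''(0), estimated with central finite differences.
    d1 = (phi(eps) - phi(-eps)) / (2.0 * eps)
    d2 = (phi(eps) - 2.0 * phi(0.0) + phi(-eps)) / (eps * eps)
    return -d1 / d2

for gamma in (1.0, 0.5, 0.2):
    mu = loss_margin(lambda v: phi_gamma(v, gamma))
    print(f"gamma={gamma:.1f}  margin~{mu:.2f}")  # analytically mu = 2 / gamma
```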

In conclusion, by choosing a loss function with larger margin (smaller $\gamma$) we increase regularization and improve our estimates of the posterior probability, which in turn improves the ROC curve of the final classifier.

While more commonly used in regression, the square loss function can be re-written as a function $\phi(y f(\vec{x}))$ and utilized for classification. It can be generated using (2) and Table-I as follows

$$\phi(v) = C\!\left[f^{-1}(v)\right] + \left(1 - f^{-1}(v)\right) C'\!\left[f^{-1}(v)\right] = 4\left(\tfrac{1}{2}(v+1)\right)\left(1 - \tfrac{1}{2}(v+1)\right) + \left(1 - \tfrac{1}{2}(v+1)\right)\left(4 - 8\left(\tfrac{1}{2}(v+1)\right)\right) = (1-v)^{2}.$$

The square loss function is both convex and smooth. However, the square loss function tends to penalize outliers excessively, leading to slower convergence rates (with regard to sample complexity) than for the logistic loss or hinge loss functions.^{[1]} In addition, functions which yield high values of $f(\vec{x})$ for some $\vec{x} \in \mathcal{X}$ will perform poorly with the square loss function, since high values of $y f(\vec{x})$ will be penalized severely, regardless of whether the signs of $y$ and $f(\vec{x})$ match.

A benefit of the square loss function is that its structure lends itself to easy cross validation of regularization parameters. Specifically for Tikhonov regularization, one can solve for the regularization parameter using leave-one-out cross-validation in the same time as it would take to solve a single problem.^{[10]}
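
A sketch of that shortcut, assuming plain Tikhonov-regularized least squares (ridge regression) on a design matrix: every leave-one-out residual follows in closed form from the hat matrix, so no model is ever refit (the data and candidate $\lambda$ values are illustrative; see the cited note for the full treatment):

```python
import numpy as np

def loocv_error(X, y, lam):
    """Closed-form leave-one-out error for ridge regression.

    Uses the standard identity: loo residual_i = (y_i - yhat_i) / (1 - H_ii),
    where H = X (X^T X + lam * I)^{-1} X^T is the hat matrix.
    """
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    residuals = (y - H @ y) / (1.0 - np.diag(H))
    return np.mean(residuals ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=50)
for lam in (0.01, 0.1, 1.0):
    print(f"lambda={lam}: loocv={loocv_error(X, y, lam):.5f}")
```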

The minimizer of $I[f]$ for the square loss function can be directly found from equation (1) as

$$f^*_{\text{Square}} = 2\eta - 1 = 2 p(1 \mid \vec{x}) - 1.$$

The logistic loss function can be generated using (2) and Table-I as follows

$$\phi(v) = C\!\left[f^{-1}(v)\right] + \left(1 - f^{-1}(v)\right) C'\!\left[f^{-1}(v)\right] = \frac{1}{\log 2}\log\left(1 + e^{-v}\right).$$

The logistic loss is convex and grows linearly for negative values, which makes it less sensitive to outliers. The logistic loss is used in the LogitBoost algorithm.

The minimizer of $I[f]$ for the logistic loss function can be directly found from equation (1) as

$$f^*_{\text{Logistic}} = \log\left(\frac{\eta}{1-\eta}\right) = \log\left(\frac{p(1 \mid \vec{x})}{1 - p(1 \mid \vec{x})}\right).$$

This function is undefined when $p(1 \mid \vec{x}) = 1$ or $p(1 \mid \vec{x}) = 0$ (tending toward ∞ and −∞ respectively), but predicts a smooth curve which grows when $p(1 \mid \vec{x})$ increases and equals 0 when $p(1 \mid \vec{x}) = 0.5$.^{[3]}

It is easy to check that the logistic loss and binary cross-entropy loss (log loss) are in fact the same (up to a multiplicative constant $\frac{1}{\log 2}$). The cross-entropy loss is closely related to the Kullback–Leibler divergence between the empirical distribution and the predicted distribution. The cross-entropy loss is ubiquitous in modern deep neural networks.
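
A quick numerical check of that equivalence (an illustrative sketch; the mapping between the $\{-1, 1\}$ and $\{0, 1\}$ label conventions is done by hand):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def logistic_loss(f, y):          # y in {-1, +1}; Table-I form with the 1/log(2) factor
    return math.log1p(math.exp(-y * f)) / math.log(2.0)

def binary_cross_entropy(p, t):   # t in {0, 1}; the usual log loss
    return -(t * math.log(p) + (1 - t) * math.log(1.0 - p))

for f in (-1.5, 0.2, 2.0):
    for y in (-1, 1):
        t = (y + 1) // 2          # map {-1, +1} -> {0, 1}
        bce = binary_cross_entropy(sigmoid(f), t)
        assert abs(logistic_loss(f, y) - bce / math.log(2.0)) < 1e-12
print("logistic loss == binary cross entropy / log(2)")
```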

The exponential loss function can be generated using (2) and Table-I as follows

$$\phi(v) = C\!\left[f^{-1}(v)\right] + \left(1 - f^{-1}(v)\right) C'\!\left[f^{-1}(v)\right] = e^{-v}.$$

The exponential loss is convex and grows exponentially for negative values, which makes it more sensitive to outliers. The exponential loss is used in the AdaBoost algorithm.
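
To see the difference in outlier sensitivity concretely, this short sketch (ours; the sampled margins are arbitrary) compares the exponential, logistic, and Savage losses at increasingly negative margins; only the exponential loss blows up:

```python
import math

def exp_loss(v):
    return math.exp(-v)

def logistic(v):
    return math.log1p(math.exp(-v)) / math.log(2.0)

def savage(v):
    return 1.0 / (1.0 + math.exp(v)) ** 2

for v in (-1.0, -5.0, -10.0):  # badly misclassified points, i.e. outliers
    print(f"v={v:6.1f}  exp={exp_loss(v):10.1f}  logistic={logistic(v):6.2f}  savage={savage(v):.4f}")
```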

The minimizer of $I[f]$ for the exponential loss function can be directly found from equation (1) as

$$f^*_{\text{Exp}} = \frac{1}{2}\log\left(\frac{\eta}{1-\eta}\right) = \frac{1}{2}\log\left(\frac{p(1 \mid \vec{x})}{1 - p(1 \mid \vec{x})}\right).$$

The Savage loss^{[7]} can be generated using (2) and Table-I as follows

$$\phi(v) = C\!\left[f^{-1}(v)\right] + \left(1 - f^{-1}(v)\right) C'\!\left[f^{-1}(v)\right] = \frac{1}{\left(1 + e^{v}\right)^{2}}.$$

The Savage loss is quasi-convex and is bounded for large negative values, which makes it less sensitive to outliers. The Savage loss has been used in gradient boosting and the SavageBoost algorithm.

The minimizer of $I[f]$ for the Savage loss function can be directly found from equation (1) as

$$f^*_{\text{Savage}} = \log\left(\frac{\eta}{1-\eta}\right) = \log\left(\frac{p(1 \mid \vec{x})}{1 - p(1 \mid \vec{x})}\right).$$

The Tangent loss^{[11]} can be generated using (2) and Table-I as follows

$$\phi(v) = C\!\left[f^{-1}(v)\right] + \left(1 - f^{-1}(v)\right) C'\!\left[f^{-1}(v)\right] = \left(2\arctan(v) - 1\right)^{2}.$$

The Tangent loss is quasi-convex and is bounded for large negative values, which makes it less sensitive to outliers. Interestingly, the Tangent loss also assigns a bounded penalty to data points that have been classified "too correctly", which can help prevent over-training on the data set. The Tangent loss has been used in gradient boosting, the TangentBoost algorithm, and Alternating Decision Forests.^{[12]}

The minimizer of $I[f]$ for the Tangent loss function can be directly found from equation (1) as

$$f^*_{\text{Tangent}} = \tan\left(\eta - \frac{1}{2}\right) = \tan\left(p(1 \mid \vec{x}) - \frac{1}{2}\right).$$

The hinge loss function is defined with $V(f(\vec{x}), y) = \max(0, 1 - y f(\vec{x})) = |1 - y f(\vec{x})|_{+}$, where $|a|_{+} = \max(0, a)$ is the positive part function.

The hinge loss provides a relatively tight, convex upper bound on the 0–1 indicator function. Specifically, the hinge loss equals the 0–1 indicator function when $\operatorname{sgn}(f(\vec{x})) = y$ and $|y f(\vec{x})| \geq 1$. In addition, the empirical risk minimization of this loss is equivalent to the classical formulation for support vector machines (SVMs). Correctly classified points lying outside the margin boundaries of the support vectors are not penalized, whereas points within the margin boundaries or on the wrong side of the hyperplane are penalized in a linear fashion compared to their distance from the correct boundary.^{[4]}

While the hinge loss function is both convex and continuous, it is not smooth (is not differentiable) at $y f(\vec{x}) = 1$. Consequently, the hinge loss function cannot be used with gradient descent methods or stochastic gradient descent methods which rely on differentiability over the entire domain. However, the hinge loss does have a subgradient at $y f(\vec{x}) = 1$, which allows for the utilization of subgradient descent methods.^{[4]} SVMs utilizing the hinge loss function can also be solved using quadratic programming.
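
Here is a minimal subgradient-descent sketch for a linear SVM under the hinge loss (illustrative only; the step size, iteration count, regularization strength, and toy data are arbitrary choices). At the kink $y f(\vec{x}) = 1$ any value in the subdifferential is admissible, and the code simply picks 0:

```python
import numpy as np

def svm_subgradient_step(w, X, y, lam, lr):
    """One subgradient step on (1/n) * sum_i max(0, 1 - y_i w.x_i) + lam/2 * ||w||^2."""
    margins = y * (X @ w)
    active = margins < 1.0  # margin violators; at exactly 1 we pick subgradient 0
    g = -(X[active] * y[active, None]).sum(axis=0) / len(y) + lam * w
    return w - lr * g

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2)) + np.array([[1.5, 1.5]]) * rng.choice([-1, 1], size=(100, 1))
y = np.sign(X[:, 0] + X[:, 1])  # linearly separable toy labels
w = np.zeros(2)
for _ in range(200):
    w = svm_subgradient_step(w, X, y, lam=0.01, lr=0.1)
print("learned weights:", w)
```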

The minimizer of $I[f]$ for the hinge loss function is

$$f^*_{\text{Hinge}}(\vec{x}) = \begin{cases} \;\;\, 1 & \text{if } p(1 \mid \vec{x}) > p(-1 \mid \vec{x}) \\ -1 & \text{if } p(1 \mid \vec{x}) < p(-1 \mid \vec{x}) \end{cases}$$

when $p(1 \mid \vec{x}) \neq \frac{1}{2}$, which matches that of the 0–1 indicator function. This conclusion makes the hinge loss quite attractive, as bounds can be placed on the difference between the expected risk and the sign of the hinge loss function.^{[1]} The hinge loss cannot be derived from (2) since $f^*_{\text{Hinge}}$ is not invertible.

The generalized smooth hinge loss function with parameter $\alpha$ is defined as

$$f^*_{\alpha}(z) = \begin{cases} \frac{\alpha}{\alpha + 1} - z & \text{if } z \leq 0 \\ \frac{1}{\alpha + 1} z^{\alpha + 1} - z + \frac{\alpha}{\alpha + 1} & \text{if } 0 < z < 1 \\ 0 & \text{if } z \geq 1 \end{cases}$$

where

$$z = y f(\vec{x}).$$

It is monotonically decreasing and reaches 0 when $z = 1$.
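
A direct transcription of the piecewise definition (illustrative; the choice $\alpha = 2$ and the test points are arbitrary), checking that the pieces meet continuously at the two breakpoints:

```python
def smooth_hinge(z, alpha=2.0):
    # Generalized smooth hinge with parameter alpha > 0, as a function of z = y * f(x).
    if z <= 0.0:
        return alpha / (alpha + 1.0) - z
    if z < 1.0:
        return z ** (alpha + 1.0) / (alpha + 1.0) - z + alpha / (alpha + 1.0)
    return 0.0

# The pieces agree at the breakpoints z = 0 and z = 1 (up to float rounding).
eps = 1e-9
assert abs(smooth_hinge(-eps) - smooth_hinge(eps)) < 1e-6
assert abs(smooth_hinge(1.0 - eps) - smooth_hinge(1.0)) < 1e-6
for z in (-1.0, 0.0, 0.5, 1.0, 2.0):
    print(z, smooth_hinge(z))
```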

1. Rosasco, L.; De Vito, E. D.; Caponnetto, A.; Piana, M.; Verri, A. (2004). "Are Loss Functions All the Same?" (PDF). *Neural Computation*. **16** (5): 1063–1076. CiteSeerX 10.1.1.109.6786. doi:10.1162/089976604773135104. PMID 15070510. S2CID 11845688.
2. Shen, Yi (2005). *Loss Functions For Binary Classification and Class Probability Estimation* (PDF). University of Pennsylvania. Retrieved 6 December 2014.
3. Rosasco, Lorenzo; Poggio, Tomaso (2014). *A Regularization Tour of Machine Learning*. MIT-9.520 Lecture Notes, vol. Manuscript.
4. Piyush, Rai (13 September 2011). *Support Vector Machines (Contd.), Classification Loss Functions and Regularizers* (PDF). Utah CS5350/6350: Machine Learning. Retrieved 4 May 2021.
5. Ramanan, Deva (27 February 2008). *Lecture 14* (PDF). UCI ICS273A: Machine Learning. Retrieved 6 December 2014.
6. Bartlett, Peter L.; Jordan, Michael I.; Mcauliffe, Jon D. (2006). "Convexity, Classification, and Risk Bounds". *Journal of the American Statistical Association*. **101** (473): 138–156. doi:10.1198/016214505000000907. ISSN 0162-1459. JSTOR 30047445. S2CID 2833811.
7. Masnadi-Shirazi, Hamed; Vasconcelos, Nuno (2008). "On the Design of Loss Functions for Classification: Theory, Robustness to Outliers, and SavageBoost" (PDF). *Proceedings of the 21st International Conference on Neural Information Processing Systems*. NIPS'08. USA: Curran Associates Inc.: 1049–1056. ISBN 9781605609492.
8. Leistner, C.; Saffari, A.; Roth, P. M.; Bischof, H. (September 2009). "On robustness of on-line boosting – a competitive study". *2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops*: 1362–1369. doi:10.1109/ICCVW.2009.5457451. ISBN 978-1-4244-4442-7. S2CID 6032045.
9. Vasconcelos, Nuno; Masnadi-Shirazi, Hamed (2015). "A View of Margin Losses as Regularizers of Probability Estimates". *Journal of Machine Learning Research*. **16** (85): 2751–2795. ISSN 1533-7928.
10. Rifkin, Ryan M.; Lippert, Ross A. (1 May 2007). *Notes on Regularized Least Squares* (PDF). MIT Computer Science and Artificial Intelligence Laboratory.
11. Masnadi-Shirazi, H.; Mahadevan, V.; Vasconcelos, N. (June 2010). "On the design of robust classifiers for computer vision". *2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition*: 779–786. CiteSeerX 10.1.1.172.6416. doi:10.1109/CVPR.2010.5540136. ISBN 978-1-4244-6984-0. S2CID 632758.
12. Schulter, S.; Wohlhart, P.; Leistner, C.; Saffari, A.; Roth, P. M.; Bischof, H. (June 2013). "Alternating Decision Forests". *2013 IEEE Conference on Computer Vision and Pattern Recognition*: 508–515. CiteSeerX 10.1.1.301.1305. doi:10.1109/CVPR.2013.72. ISBN 978-0-7695-4989-7. S2CID 6557162.