In information theory, the conditional entropy quantifies the amount of information needed to describe the outcome of a random variable $Y$ given that the value of another random variable $X$ is known. Here, information is measured in shannons, nats, or hartleys. The entropy of $Y$ conditioned on $X$ is written as $\mathrm{H}(Y\mid X)$.
Note: Here, the convention is that the expression $0 \log 0$ should be treated as being equal to zero. This is because $\lim_{\theta\to 0^{+}} \theta \log \theta = 0$.[1]
Intuitively, notice that by definition of expected value and of conditional probability, $\mathrm{H}(Y\mid X)$ can be written as $\mathrm{H}(Y\mid X)=\mathbb{E}[f(X,Y)]$, where $f$ is defined as $f(x,y):=-\log\left(\frac{p(x,y)}{p(x)}\right)=-\log(p(y\mid x))$. One can think of $f$ as associating each pair $(x,y)$ with a quantity measuring the information content of $(Y=y)$ given $(X=x)$. This quantity is directly related to the amount of information needed to describe the event $(Y=y)$ given $(X=x)$. Hence by computing the expected value of $f$ over all pairs of values $(x,y)\in\mathcal{X}\times\mathcal{Y}$, the conditional entropy measures how much information, on average, is needed to describe $Y$ when $X$ is known.
Motivation
Let $\mathrm{H}(Y\mid X=x)$ be the entropy of the discrete random variable $Y$ conditioned on the discrete random variable $X$ taking a certain value $x$. Denote the support sets of $X$ and $Y$ by $\mathcal{X}$ and $\mathcal{Y}$. Let $Y$ have probability mass function $p_Y(y)$. The unconditional entropy of $Y$ is calculated as $\mathrm{H}(Y):=\mathbb{E}[\operatorname{I}(Y)]$, i.e.

$$\mathrm{H}(Y)=\sum_{y\in\mathcal{Y}}\Pr(Y=y)\,\operatorname{I}(y)=-\sum_{y\in\mathcal{Y}}p_Y(y)\log_2 p_Y(y),$$

where $\operatorname{I}(y)$ is the information content of the outcome of $Y$ taking the value $y$. The entropy of $Y$ conditioned on $X$ taking the value $x$ is defined analogously:

$$\mathrm{H}(Y\mid X=x)=-\sum_{y\in\mathcal{Y}}\Pr(Y=y\mid X=x)\log_2\Pr(Y=y\mid X=x).$$
Note that $\mathrm{H}(Y\mid X)$ is the result of averaging $\mathrm{H}(Y\mid X=x)$ over all possible values $x$ that $X$ may take. Also, if the above sum is taken over a sample $y_1,\dots,y_n$, the expected value $E_X[\mathrm{H}(y_1,\dots,y_n\mid X=x)]$ is known in some domains as equivocation.[2]
Given discrete random variables $X$ with image $\mathcal{X}$ and $Y$ with image $\mathcal{Y}$, the conditional entropy of $Y$ given $X$ is defined as the weighted sum of $\mathrm{H}(Y\mid X=x)$ for each possible value of $x$, using $p(x)$ as the weights:[3]: 15

$$\begin{aligned}
\mathrm{H}(Y\mid X)&=\sum_{x\in\mathcal{X}}p(x)\,\mathrm{H}(Y\mid X=x)\\
&=-\sum_{x\in\mathcal{X}}p(x)\sum_{y\in\mathcal{Y}}p(y\mid x)\log_2 p(y\mid x)\\
&=-\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}}p(x,y)\log_2\frac{p(x,y)}{p(x)}.
\end{aligned}$$
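As a concrete numerical illustration, the following minimal Python sketch evaluates this weighted sum directly from a joint pmf (the function name and the example table are ours, not from the source), honoring the $0\log 0=0$ convention noted above:

```python
import numpy as np

def conditional_entropy(joint):
    """H(Y|X) in bits from a joint pmf given as a 2-D array:
    rows are indexed by x, columns by y."""
    joint = np.asarray(joint, dtype=float)
    p_x = joint.sum(axis=1)              # marginal p(x)
    h = 0.0
    for x, row in enumerate(joint):
        for p_xy in row:
            if p_xy > 0.0:               # convention: 0 log 0 = 0
                h -= p_xy * np.log2(p_xy / p_x[x])
    return h

# Hypothetical example: joint pmf over X in {0,1} (rows), Y in {0,1} (columns)
joint = [[0.5, 0.25],
         [0.0, 0.25]]
print(conditional_entropy(joint))  # ~0.6887 bits
```

The same example table is reused in the chain-rule check further below.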
Properties
Conditional entropy equals zero
$\mathrm{H}(Y\mid X)=0$ if and only if the value of $Y$ is completely determined by the value of $X$.
Conditional entropy of independent random variables
Conversely, $\mathrm{H}(Y\mid X)=\mathrm{H}(Y)$ if and only if $Y$ and $X$ are independent random variables.
Chain rule
Assume that the combined system determined by two random variables $X$ and $Y$ has joint entropy $\mathrm{H}(X,Y)$, that is, we need $\mathrm{H}(X,Y)$ bits of information on average to describe its exact state. Now if we first learn the value of $X$, we have gained $\mathrm{H}(X)$ bits of information. Once $X$ is known, we only need $\mathrm{H}(X,Y)-\mathrm{H}(X)$ bits to describe the state of the whole system. This quantity is exactly $\mathrm{H}(Y\mid X)$, which gives the chain rule of conditional entropy:

$$\mathrm{H}(Y\mid X)=\mathrm{H}(X,Y)-\mathrm{H}(X).$$
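For instance, with the example joint pmf used in the Python sketch above (our illustration, not from the source), the chain rule can be checked numerically:

$$\mathrm{H}(X)=-\tfrac{3}{4}\log_2\tfrac{3}{4}-\tfrac{1}{4}\log_2\tfrac{1}{4}\approx 0.8113,\qquad
\mathrm{H}(X,Y)=\tfrac{1}{2}\log_2 2+\tfrac{1}{4}\log_2 4+\tfrac{1}{4}\log_2 4=1.5,$$

so $\mathrm{H}(Y\mid X)=1.5-0.8113\approx 0.6887$ bits, matching the direct computation of the weighted sum.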
Although the specific conditional entropy $\mathrm{H}(Y\mid X=x)$ can be either less or greater than $\mathrm{H}(Y)$ for a given random variate $x$ of $X$, $\mathrm{H}(Y\mid X)$ can never exceed $\mathrm{H}(Y)$.
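A small example (ours, for illustration) makes the distinction concrete. Let $\Pr(X=0)=0.9$, let $Y=0$ with certainty when $X=0$, and let $Y$ be uniform on $\{0,1\}$ when $X=1$. Then $\Pr(Y=1)=0.05$, so $\mathrm{H}(Y)\approx 0.286$ bits, and

$$\mathrm{H}(Y\mid X=1)=1\ \text{bit}>\mathrm{H}(Y),\qquad
\mathrm{H}(Y\mid X)=0.9\cdot 0+0.1\cdot 1=0.1\ \text{bit}<\mathrm{H}(Y).$$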
Conditional differential entropy
Definition
The above definition is for discrete random variables. The continuous version of discrete conditional entropy is called conditional differential (or continuous) entropy. Let $X$ and $Y$ be continuous random variables with a joint probability density function $f(x,y)$. The differential conditional entropy $h(X\mid Y)$ is defined as[3]: 249

$$h(X\mid Y)=-\int_{\mathcal{X},\mathcal{Y}}f(x,y)\log f(x\mid y)\,dx\,dy \qquad \text{(Eq. 2)}$$
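As an illustrative special case (standard, though not worked out in the source), suppose $(X,Y)$ is bivariate normal with correlation coefficient $\rho$ and $\operatorname{Var}(X)=\sigma_X^2$. Conditioned on $Y=y$, $X$ is normal with variance $\sigma_X^2(1-\rho^2)$, so

$$h(X\mid Y)=\frac{1}{2}\log\left(2\pi e\,\sigma_X^2(1-\rho^2)\right),$$

which (with natural logarithms) is negative whenever $\sigma_X^2(1-\rho^2)<1/(2\pi e)$, illustrating the sign behavior noted below.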
Properties
In contrast to the conditional entropy for discrete random variables, the conditional differential entropy may be negative.
As in the discrete case, there is a chain rule for differential entropy:

$$h(Y\mid X)=h(X,Y)-h(X).$$
Notice however that this rule may not be true if the involved differential entropies do not exist or are infinite.
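Continuing the bivariate normal example above (again our illustration), the chain rule can be verified in closed form:

$$h(X,Y)=\frac{1}{2}\log\left((2\pi e)^2\sigma_X^2\sigma_Y^2(1-\rho^2)\right),\qquad
h(X)=\frac{1}{2}\log\left(2\pi e\,\sigma_X^2\right),$$

so $h(Y\mid X)=h(X,Y)-h(X)=\frac{1}{2}\log\left(2\pi e\,\sigma_Y^2(1-\rho^2)\right)$, matching the formula for $h(X\mid Y)$ by symmetry.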
Joint differential entropy is also used in the definition of the mutual information between continuous random variables:

$$\operatorname{I}(X;Y)=h(X)-h(X\mid Y)=h(Y)-h(Y\mid X).$$

$h(X\mid Y)\le h(X)$ with equality if and only if $X$ and $Y$ are independent.[3]: 253
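For the bivariate normal example (our illustration), these formulas give the familiar closed form

$$\operatorname{I}(X;Y)=\frac{1}{2}\log\left(2\pi e\,\sigma_X^2\right)-\frac{1}{2}\log\left(2\pi e\,\sigma_X^2(1-\rho^2)\right)=-\frac{1}{2}\log\left(1-\rho^2\right)\ge 0,$$

with equality exactly when $\rho=0$, which for jointly normal variables is equivalent to independence.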
Relation to estimator error
The conditional differential entropy yields a lower bound on the expected squared error of an estimator. For any random variable $X$, observation $Y$ and estimator $\widehat{X}$ the following holds:[3]: 255

$$\mathbb{E}\left[\left(X-\widehat{X}(Y)\right)^2\right]\ge\frac{1}{2\pi e}\,e^{2h(X\mid Y)}.$$
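The bound is tight in the Gaussian case, a standard fact we add for illustration: if $X$ given $Y$ is normal with variance $\sigma^2$ and $\widehat{X}(Y)=\mathbb{E}[X\mid Y]$, then with entropy in nats

$$\mathbb{E}\left[\left(X-\widehat{X}(Y)\right)^2\right]=\sigma^2=\frac{1}{2\pi e}\,e^{2\cdot\frac{1}{2}\log\left(2\pi e\,\sigma^2\right)},$$

so the conditional-mean estimator achieves the bound with equality.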
References
[1] "David MacKay: Information Theory, Pattern Recognition and Neural Networks: The Book". www.inference.org.uk. Retrieved 2019-10-25.
[2] Hellman, M.; Raviv, J. (1970). "Probability of error, equivocation, and the Chernoff bound". IEEE Transactions on Information Theory. 16 (4): 368–372. CiteSeerX 10.1.1.131.2865. doi:10.1109/TIT.1970.1054466.
[3] T. Cover; J. Thomas (1991). Elements of Information Theory. Wiley. ISBN 0-471-06259-6.