The conditional probability distribution induced by a given network and loss in supervised learning
Any neural network $f_w$ together with a loss function $\ell$ in a supervised learning setting induces a conditional probability density $p_w(y \mid x)$. Explicitly working with this induced conditional probability is beneficial.
An artificial neural network can be described as a function $f: \mathcal{X} \to \mathcal{Y}$ over some input set $\mathcal{X}$ with values in some output set $\mathcal{Y}$. In supervised learning, $\mathcal{Y}$ is different from $\mathcal{X}$, while in unsupervised learning they are the same. The activity of the network is carried out by many layers of activation units linked together using weights (Goodfellow et al., 2016). Denote the collection of all the weights of the network by the symbol $w$; $w$ is an element of $\mathcal{W}$, the set of all possible weight values the network can take at any one time. In large models, $w$ is a very large tuple. To reflect that for different $w$ we have different network realizations, we will adopt the notation $f_w$.
In this post, I show in detail how a neural network $f_w$ (and an associated loss function $\ell$) induces a conditional probability distribution $p_w(y \mid x)$ and briefly discuss the benefits of this view. In future posts, I extend this probabilistic view to other elements of supervised learning and show in detail how this view is important to understanding deep learning.
1. The induced conditional probability distribution
To see how any neural network also defines a conditional probability density or mass function, suppose a loss function $\ell(y, f_w(x))$ was given. We can define the following probability density (or mass) function

$$ p_w(y \mid x) = \frac{1}{Z}\, e^{-\ell(y,\, f_w(x))} \tag{1} $$

where $Z$ is a normalizing constant such that $p_w(\cdot \mid x)$ integrates to 1 for any network realization $f_w$.
Equation (1) above is the well-known Gibbs density, where $\ell(y, f_w(x))$ is playing the role of the energy of the configuration $(x, y)$. For this to be a well-defined probability density (or mass) function, we need the normalizing constant $Z$ to be finite for all $w$. This is indeed the case under the usual assumptions:
- $\ell(y, \hat{y}) \ge 0$ for all $y$ and $\hat{y}$, with equality if $y = \hat{y}$.
- $\ell$ is an integrable function (w.r.t. the unknown true distribution $P$ that generated the data). In other words, we require that the average loss

  $$ R(w) = \mathbb{E}_{(x, y) \sim P}\big[\ell(y, f_w(x))\big] \tag{2} $$

  is finite for all $w$. $R(w)$ is sometimes called the expected loss in machine learning, or the risk function in decision theory.
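Before moving on, here is a minimal numerical sketch (my own illustration, not part of the original argument) of what equation (1) means in practice: for a toy one-parameter "network" and the squared error loss, the unnormalized weights $e^{-\ell(y,\, f_w(x))}$ can be normalized over a grid of outputs, and the result integrates to 1. The names `f_w` and `loss` and the grid bounds are illustrative choices.

```python
import numpy as np

def f_w(x, w):
    """A toy one-dimensional 'network': a single linear unit."""
    return w * x

def loss(y, y_hat):
    """Squared error loss (one of the examples worked out below)."""
    return 0.5 * (y - y_hat) ** 2

x, w = 1.3, 0.7                          # a fixed input and parameter value
ys = np.linspace(-10.0, 10.0, 20001)     # grid over the output space
dy = ys[1] - ys[0]

unnormalized = np.exp(-loss(ys, f_w(x, w)))
Z = unnormalized.sum() * dy              # normalizing constant (finite here)
density = unnormalized / Z               # the induced conditional density p_w(y | x)

print(f"Z = {Z:.4f} (compare sqrt(2*pi) = {np.sqrt(2 * np.pi):.4f})")
print(f"integral of induced density = {density.sum() * dy:.4f}")   # ~ 1.0
```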
While the integrability of $\ell$ cannot be verified in practice since $P$ is unknown, typical losses like the squared error loss, absolute error loss, and Huber loss satisfy these properties under reasonable distributional assumptions. Let's work out what happens in these important examples.
1.1 Examples
In supervised learning, one is given a data set $\{(x_i, y_i)\}_{i=1}^{n}$ and our objective is to construct a neural network $f_w$ that one can use to predict a future unseen value $y$ given a future seen or unseen value $x$ (Friedman, 1994). Based on the nature of the observables $(x, y)$, one constructs an appropriate neural network and chooses a loss function that is deemed appropriate. In the following examples we derive the conditional probability densities (or mass functions) associated with a given network and loss function.
Example 1.1.1: Squared error loss
When $y$ is on a continuous scale (e.g. stock price, air temperature, etc.) modelled as a subset of $\mathbb{R}^d$, we could use the squared Euclidean norm as a loss function

$$ \ell(y, f_w(x)) = \frac{1}{2}\,\lVert y - f_w(x) \rVert_2^2 = \frac{1}{2}\sum_{j=1}^{d} \big(y_j - f_w(x)_j\big)^2 $$

with $y \in \mathbb{R}^d$, and $y_j$ is the $j$-th entry of $y$. The neural network and the loss induce the parametric conditional probability density

$$ p_w(y \mid x) = \frac{1}{(2\pi)^{d/2}} \exp\!\Big(-\tfrac{1}{2}\,(y - f_w(x))^\top (y - f_w(x))\Big), $$

which one can immediately recognize as the $d$-dimensional normal distribution with mean $f_w(x)$ and variance $\Sigma = I_d$, whose general density is $\frac{1}{(2\pi)^{d/2}\,\lvert \Sigma \rvert^{1/2}} \exp\!\big(-\tfrac{1}{2}(y - \mu)^\top \Sigma^{-1} (y - \mu)\big)$; here $v^\top$ is the transpose of the column vector $v$, and $\lvert \Sigma \rvert$ is the determinant of $\Sigma$.
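As a quick numerical sanity check (my own sketch; `mu` stands in for the network output $f_w(x)$ and the numbers are arbitrary), $e^{-\ell(y,\, f_w(x))}$ divided by $(2\pi)^{d/2}$ agrees with the multivariate normal density reported by `scipy`:

```python
import numpy as np
from scipy.stats import multivariate_normal

d = 3
rng = np.random.default_rng(0)
mu = rng.normal(size=d)              # pretend this is the network output f_w(x)
y = rng.normal(size=d)               # a candidate observation

loss = 0.5 * np.sum((y - mu) ** 2)   # squared error loss with the 1/2 factor
induced = np.exp(-loss) / (2 * np.pi) ** (d / 2)
reference = multivariate_normal(mean=mu, cov=np.eye(d)).pdf(y)

print(induced, reference)            # the two numbers agree
```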
Example 1.1.2: Absolute error loss
Another loss function used in practice is the $\ell_1$ norm

$$ \ell(y, f_w(x)) = \lVert y - f_w(x) \rVert_1 = \sum_{j=1}^{d} \lvert y_j - f_w(x)_j \rvert. $$

The induced conditional probability density is

$$ p_w(y \mid x) = \prod_{j=1}^{d} \frac{1}{2}\, e^{-\lvert y_j - f_w(x)_j \rvert}, $$

which is the product of $d$ independent Laplace probability densities with scale parameter $b = 1$.
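The same kind of check works here (again an illustrative sketch of mine, with `mu` playing the role of $f_w(x)$): multiplying the coordinate-wise factors $\tfrac{1}{2} e^{-\lvert y_j - \mu_j \rvert}$ reproduces the product of Laplace densities from `scipy`:

```python
import numpy as np
from scipy.stats import laplace

rng = np.random.default_rng(1)
mu = rng.normal(size=4)              # stands in for the network output f_w(x)
y = rng.normal(size=4)               # a candidate observation

loss = np.sum(np.abs(y - mu))        # absolute error (L1) loss
induced = np.exp(-loss) / 2 ** len(y)              # includes the 1/2^d normalizer
reference = np.prod(laplace(loc=mu, scale=1.0).pdf(y))

print(induced, reference)            # the two numbers agree
```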
Example 1.1.3: Cross entropy loss
When $y$ is on a categorical scale (e.g. dog vs cat vs bird, happy vs sad, a number in a finite set), one typically uses a network with a number of output units matching the cardinality $K$ of $\mathcal{Y}$ (and a softmax output layer, so that $f_w(x)$ is a probability vector) and the cross entropy loss

$$ \ell(y, f_w(x)) = -\sum_{k=1}^{K} \delta_{y,k} \log f_w(x)_k = -\log f_w(x)_y, $$

where $\delta_{y,k}$ is the Kronecker delta (equal to 1 when $y = k$ and 0 otherwise), and $f_w(x)_k$ is the $k$-th component of $f_w(x)$. The associated conditional probability mass function is in fact explicit:

$$ p_w(y = k \mid x) = \frac{e^{-\ell(k,\, f_w(x))}}{\sum_{k'=1}^{K} e^{-\ell(k',\, f_w(x))}} = f_w(x)_k, $$

since the components of $f_w(x)$ already sum to 1. When the cardinality of $\mathcal{Y}$ is 2, the cross-entropy reduces to the binary cross-entropy.
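A small sketch (mine, with made-up logits) makes the $Z = 1$ point concrete: when the network ends in a softmax and the loss is the cross entropy, $e^{-\ell}$ is exactly the probability the network assigns to the observed class, and summing $e^{-\ell}$ over all classes gives 1.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([1.2, -0.3, 0.8])  # pretend these are the network's raw outputs
probs = softmax(logits)              # f_w(x) as a probability vector
k = 2                                # the observed class label

loss = -np.log(probs[k])             # cross entropy loss for this example
print(np.exp(-loss), probs[k])       # identical: the induced pmf is explicit

losses = -np.log(probs)              # loss for every possible label
Z = np.exp(-losses).sum()            # normalizing constant: exactly 1
print(Z)
```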
2. Why should we care?
Now that we have a good handle on the conditional distribution induced by our choice of the loss function and the nature of the output layer of the network, we can apply the tools of frequentist statistics, such as maximum likelihood, hypothesis testing, and asymptotic theory, to analyze supervised learning methods.
For instance, under the assumption that the pairs $(x_i, y_i)$ are independent, we can write the log likelihood of the dataset (under our model) for different parameters $w$ as

$$ \log \mathcal{L}(w) = \sum_{i=1}^{n} \log p_w(y_i \mid x_i), \tag{3} $$

yielding the following maximum likelihood estimator everyone is familiar with from 2nd year undergraduate statistics:

$$ \hat{w} = \arg\max_{w \in \mathcal{W}} \log \mathcal{L}(w). $$

Maximizing the likelihood equation (3) above is equivalent to minimizing the empirical loss

$$ \hat{R}(w) = \frac{1}{n}\sum_{i=1}^{n} \ell(y_i, f_w(x_i)), $$

since $\log p_w(y_i \mid x_i) = -\ell(y_i, f_w(x_i)) - \log Z$, and $Z$ doesn't depend on our parameter $w$ by construction.
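Here is a small numerical illustration of that equivalence (my own sketch, with toy data and the squared error loss from Example 1.1.1): the log likelihood differs from the negative summed loss only by the constant $n \log Z$.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)    # toy data
w = 0.4                              # a candidate parameter value

def f_w(x, w):
    """Toy one-parameter 'network'."""
    return w * x

losses = 0.5 * (y - f_w(x, w)) ** 2  # squared error loss per example
Z = np.sqrt(2 * np.pi)               # normalizer of the induced unit-variance Gaussian
log_lik = np.sum(-losses - np.log(Z))

print(log_lik, -(losses.sum() + len(y) * np.log(Z)))   # identical
```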
One might argue that we did not gain much by characterizing the conditional probability density (or mass) function associated with a given loss and network. This is not exactly true. First, making the nature of the assumed noise in the model explicit provides us with more information about the nature of our model and ways to change it. For instance, in the probabilistic characterization one could choose a non-diagonal covariance matrix $\Sigma$ to represent known correlations in our noise. Even more, one could compute the observed errors $y_i - f_w(x_i)$ and check whether they indeed conform with the assumed noise distribution (a statistical technique for measuring model fit).
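To illustrate the residual-checking idea (an assumption-laden sketch, not a prescription): under the squared error loss the model asserts standard normal noise, so the observed errors can be fed to an off-the-shelf normality test such as the Shapiro-Wilk test in `scipy`.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)   # data generated with unit-variance noise

def f_w(x, w=2.0):
    """Pretend this is our fitted network."""
    return w * x

residuals = y - f_w(x)               # observed errors
stat, p_value = shapiro(residuals)   # Shapiro-Wilk test of normality
print(f"Shapiro-Wilk p-value: {p_value:.3f}")   # a large p-value is consistent with the assumed noise
```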
Second, one can apply tools of information theory to formally characterize what it means to be surprised when using our model $p_w(y \mid x)$ to stand in for the unknown distribution $P(y \mid x)$ that generated the data. Many notions of surprise are available in practice. If we subscribe to the well-known notion of Shannon surprise, then the average surprise when using the model $p_w$ instead of the true unknown distribution $P$ that generates the data is defined by the cross entropy

$$ H(P, p_w) = -\mathbb{E}_{y \sim P(\cdot \mid x)}\big[\log p_w(y \mid x)\big]. $$

The minimum average (Shannon) surprise when observing samples from $P$ is the entropy of $P$, $H(P)$. As a result, the average excess surprise when using our model $p_w$ instead of the true distribution $P$ is

$$ D_{\mathrm{KL}}(P \,\Vert\, p_w) = H(P, p_w) - H(P), $$

which is the Kullback-Leibler divergence (Kullback & Leibler, 1951). This is one important reason why machine learning minimizes the cross entropy (which is equivalent to minimizing $D_{\mathrm{KL}}(P \,\Vert\, p_w)$), since if our model is any good it should stand in for $P$ when making decisions related to our observables $x$ and $y$.
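A tiny sketch with two made-up categorical distributions verifies the bookkeeping: the KL divergence is exactly the cross entropy minus the entropy, i.e. the average excess surprise.

```python
import numpy as np

P = np.array([0.5, 0.3, 0.2])        # "true" distribution (illustrative)
Q = np.array([0.4, 0.4, 0.2])        # our model's distribution

cross_entropy = -np.sum(P * np.log(Q))   # average surprise under the model
entropy = -np.sum(P * np.log(P))         # minimum achievable average surprise
kl = np.sum(P * np.log(P / Q))           # Kullback-Leibler divergence

print(cross_entropy - entropy, kl)   # identical
```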
Last but not least, by understanding the conditional probability distribution induced by a network and loss, one can use tools that are unique to probability theory, such as conditional expectation and Bayes' rule, to study complex models and build powerful learning machines.
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
- Friedman, J. H. (1994). An Overview of Predictive Learning and Function Approximation. In V. Cherkassky, J. H. Friedman, & H. Wechsler (Eds.), From Statistics to Neural Networks (pp. 1–61). Springer Berlin Heidelberg.
- Kullback, S., & Leibler, R. A. (1951). On Information and Sufficiency. The Annals of Mathematical Statistics, 22(1), 79–86. https://doi.org/10.1214/aoms/1177729694