Notation: $\mathcal{MN}_{n,p}(M, U, V)$
Parameters: $M$ — location (real $n \times p$ matrix); $U$ — scale (positive-definite real $n \times n$ matrix); $V$ — scale (positive-definite real $p \times p$ matrix)
Support: $X \in \mathbb{R}^{n \times p}$
PDF: $\dfrac{\exp\left(-\frac{1}{2}\operatorname{tr}\left[V^{-1}(X - M)^{T} U^{-1}(X - M)\right]\right)}{(2\pi)^{np/2}\,|V|^{n/2}\,|U|^{p/2}}$
Mean: $M$
Variance: $U$ (among-row) and $V$ (among-column)
In statistics, the matrix normal distribution is a probability distribution that is a generalization of the multivariate normal distribution to matrix-valued random variables.
Definition
The probability density function for the random matrix $X$ ($n \times p$) that follows the matrix normal distribution $\mathcal{MN}_{n,p}(M, U, V)$ has the form:

$p(X \mid M, U, V) = \dfrac{\exp\left(-\frac{1}{2}\operatorname{tr}\left[V^{-1}(X - M)^{T} U^{-1}(X - M)\right]\right)}{(2\pi)^{np/2}\,|V|^{n/2}\,|U|^{p/2}}$

where $\operatorname{tr}$ denotes the trace, $M$ is $n \times p$, $U$ is $n \times n$, and $V$ is $p \times p$.
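As a sketch of how this density can be evaluated (the function name and the use of NumPy are my own choices, not part of the article), one can compute the trace term with linear solves and the normalizing constant with log-determinants:

```python
import numpy as np

def matrix_normal_logpdf(X, M, U, V):
    """Log-density of MN_{n,p}(M, U, V) at X, transcribed from the
    PDF above. Assumes U (n x n) and V (p x p) are symmetric
    positive-definite."""
    n, p = X.shape
    D = X - M
    # tr[V^{-1} D^T U^{-1} D], via linear solves for numerical stability
    quad = np.trace(np.linalg.solve(V, D.T) @ np.linalg.solve(U, D))
    _, logdet_U = np.linalg.slogdet(U)
    _, logdet_V = np.linalg.slogdet(V)
    return -0.5 * quad - 0.5 * (n * p * np.log(2 * np.pi)
                                + n * logdet_V + p * logdet_U)
```

For example, at $X = M = 0$ with $U = I_2$, $V = I_3$, the quadratic term vanishes and the log-density reduces to $-\frac{np}{2}\log(2\pi)$.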
The matrix normal is related to the multivariate normal distribution in the following way:

$X \sim \mathcal{MN}_{n,p}(M, U, V)$

if and only if

$\operatorname{vec}(X) \sim \mathcal{N}_{np}(\operatorname{vec}(M),\, V \otimes U),$

where $\otimes$ denotes the Kronecker product and $\operatorname{vec}(M)$ denotes the vectorization (column stacking) of $M$.
Proof
The equivalence between the above matrix normal and multivariate normal density functions can be shown using several properties of the trace and Kronecker product, as follows. We start with the argument of the exponent of the matrix normal PDF:

$\operatorname{tr}\left[V^{-1}(X - M)^{T} U^{-1}(X - M)\right]$
$= \operatorname{vec}(X - M)^{T} \operatorname{vec}\left(U^{-1}(X - M)V^{-1}\right)$
$= \operatorname{vec}(X - M)^{T} (V^{-1} \otimes U^{-1}) \operatorname{vec}(X - M)$
$= \left[\operatorname{vec}(X) - \operatorname{vec}(M)\right]^{T} (V \otimes U)^{-1} \left[\operatorname{vec}(X) - \operatorname{vec}(M)\right],$

which is the argument of the exponent of the multivariate normal PDF; here the first step uses $\operatorname{tr}(A^{T}B) = \operatorname{vec}(A)^{T}\operatorname{vec}(B)$ and the second uses $\operatorname{vec}(ABC) = (C^{T} \otimes A)\operatorname{vec}(B)$. The proof is completed by using the determinant property:

$|V \otimes U| = |V|^{n} |U|^{p}.$
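The chain of identities above can be checked numerically; the matrices below are arbitrary test values of my choosing, with NumPy's `kron` and Fortran-order flattening standing in for the Kronecker product and the column-stacking vec:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 2

# Random SPD scale matrices and an arbitrary deviation D = X - M
A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)
B = rng.standard_normal((p, p)); V = B @ B.T + p * np.eye(p)
D = rng.standard_normal((n, p))

# Left-hand side: trace form from the matrix normal exponent
lhs = np.trace(np.linalg.inv(V) @ D.T @ np.linalg.inv(U) @ D)

# Right-hand side: quadratic form from the multivariate normal exponent.
# NB: vec() stacks columns, which is Fortran ("F") order in NumPy.
d = D.flatten(order="F")
rhs = d @ np.linalg.inv(np.kron(V, U)) @ d
assert np.isclose(lhs, rhs)

# Determinant property used for the normalizing constant
assert np.isclose(np.linalg.det(np.kron(V, U)),
                  np.linalg.det(V)**n * np.linalg.det(U)**p)
```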
Properties
Suppose $X \sim \mathcal{MN}_{n,p}(M, U, V)$. Then the following properties hold.
Expected values
The mean, or expected value, is:

$E[X] = M$

and we have the following second-order expectations:

$E\left[(X - M)(X - M)^{T}\right] = U \operatorname{tr}(V)$
$E\left[(X - M)^{T}(X - M)\right] = V \operatorname{tr}(U)$

where $\operatorname{tr}$ denotes the trace.
More generally, for appropriately dimensioned matrices $A$, $B$, $C$:

$E\left[X A X^{T}\right] = U \operatorname{tr}(A^{T} V) + M A M^{T}$
$E\left[X^{T} B X\right] = V \operatorname{tr}(U B^{T}) + M^{T} B M$
$E\left[X C X\right] = V C^{T} U + M C M$
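As an illustrative Monte Carlo check of the first identity (the parameter values, sample size, and loose tolerance are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
M = np.array([[1.0, -1.0], [0.5, 2.0]])
U = np.array([[2.0, 0.3], [0.3, 1.0]])    # among-row covariance
V = np.array([[1.5, -0.4], [-0.4, 1.0]])  # among-column covariance
A = np.array([[0.7, 0.2], [-0.3, 1.1]])   # arbitrary p x p matrix

# Sample X_i = M + L_U Z_i L_V^T, where U = L_U L_U^T and V = L_V L_V^T
L_U, L_V = np.linalg.cholesky(U), np.linalg.cholesky(V)
N = 200_000
Z = rng.standard_normal((N, 2, 2))
X = M + L_U @ Z @ L_V.T

# Monte Carlo estimate of E[X A X^T] vs. the closed form above
mc = np.mean(X @ A @ np.swapaxes(X, 1, 2), axis=0)
closed = U * np.trace(A.T @ V) + M @ A @ M.T
assert np.allclose(mc, closed, atol=0.1)
```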
Transformation
Transpose transform:

$X^{T} \sim \mathcal{MN}_{p,n}(M^{T}, V, U)$
Linear transform: let $D$ ($r \times n$) be of full rank $r \leq n$ and $C$ ($p \times s$) be of full rank $s \leq p$; then:

$D X C \sim \mathcal{MN}_{r,s}(D M C, D U D^{T}, C^{T} V C)$
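The linear transform rule can be sanity-checked through the vec/Kronecker characterization, since $\operatorname{vec}(DXC) = (C^{T} \otimes D)\operatorname{vec}(X)$; the matrices below are arbitrary test values of mine:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, r, s = 3, 3, 2, 2

A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)
B = rng.standard_normal((p, p)); V = B @ B.T + p * np.eye(p)
D = rng.standard_normal((r, n))  # full rank r <= n (a.s. for Gaussian draws)
C = rng.standard_normal((p, s))  # full rank s <= p

# vec(D X C) = (C^T kron D) vec(X), so the transformed vec-covariance
# (C^T kron D)(V kron U)(C^T kron D)^T must equal (C^T V C) kron (D U D^T)
T = np.kron(C.T, D)
lhs = T @ np.kron(V, U) @ T.T
rhs = np.kron(C.T @ V @ C, D @ U @ D.T)
assert np.allclose(lhs, rhs)
```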
Example
Let's imagine a sample of $n$ independent $p$-dimensional random vectors identically distributed according to a multivariate normal distribution:

$X_i \sim \mathcal{N}_p(\mu, \Sigma), \quad i \in \{1, \ldots, n\}.$

When defining the $n \times p$ matrix $X$ whose $i$-th row is $X_i^{T}$, we obtain:

$X \sim \mathcal{MN}_{n,p}(M, U, V)$

where each row of $M$ equals $\mu^{T}$ (that is, $M = \mathbf{1}_n \mu^{T}$), $U = I_n$ because the rows are independent, and $V = \Sigma$.
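A small sketch of this construction (parameter values and sample size are arbitrary choices of mine), checking that the stacked sample's vec-covariance approaches $\Sigma \otimes I_n$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 5, 2
mu = np.array([1.0, -2.0])
Sigma = np.array([[1.0, 0.6], [0.6, 2.0]])

# Stack n i.i.d. N_p(mu, Sigma) draws as the rows of an n x p matrix,
# repeated K times to estimate the covariance of vec(X)
L = np.linalg.cholesky(Sigma)
K = 100_000
Xs = mu + rng.standard_normal((K, n, p)) @ L.T  # rows are X_i^T

# Column-stacking vec: flatten each X^T in row-major order
vecs = Xs.transpose(0, 2, 1).reshape(K, n * p)
emp = np.cov(vecs, rowvar=False)

# V kron U = Sigma kron I_n, since U = I_n and V = Sigma
assert np.allclose(emp, np.kron(Sigma, np.eye(n)), atol=0.1)
```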
Maximum likelihood parameter estimation
Given $k$ matrices, each of size $n \times p$, denoted $X_1, X_2, \ldots, X_k$, which we assume have been sampled i.i.d. from a matrix normal distribution, the maximum likelihood estimate of the parameters can be obtained by maximizing:

$\prod_{i=1}^{k} \mathcal{MN}_{n,p}(X_i \mid M, U, V).$
The solution for the mean has a closed form, namely

$\hat{M} = \frac{1}{k} \sum_{i=1}^{k} X_i$

but the covariance parameters do not. However, these parameters can be iteratively maximized by zeroing their gradients at:

$\hat{U} = \frac{1}{kp} \sum_{i=1}^{k} (X_i - \hat{M}) \hat{V}^{-1} (X_i - \hat{M})^{T}$

and

$\hat{V} = \frac{1}{kn} \sum_{i=1}^{k} (X_i - \hat{M})^{T} \hat{U}^{-1} (X_i - \hat{M}).$
Since each update depends on the current estimate of the other covariance, the two updates are alternated until convergence (sometimes called the "flip-flop" algorithm). The covariance parameters are non-identifiable in the sense that for any scale factor $s > 0$ we have:

$\mathcal{MN}_{n,p}(M, U, V) = \mathcal{MN}_{n,p}(M, sU, \tfrac{1}{s}V).$
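A minimal sketch of the alternating updates (the function name, initialization, iteration count, and the normalization $U_{11} = 1$ used to resolve the scale ambiguity are my own choices):

```python
import numpy as np

def matnorm_mle(Xs, iters=50):
    """Flip-flop MLE for the matrix normal; Xs has shape (k, n, p)."""
    k, n, p = Xs.shape
    M = Xs.mean(axis=0)            # closed-form mean estimate
    D = Xs - M
    U, V = np.eye(n), np.eye(p)    # arbitrary SPD initialization
    for _ in range(iters):
        Vi = np.linalg.inv(V)
        U = (D @ Vi @ np.swapaxes(D, 1, 2)).sum(axis=0) / (k * p)
        Ui = np.linalg.inv(U)
        V = (np.swapaxes(D, 1, 2) @ Ui @ D).sum(axis=0) / (k * n)
    # Resolve the s / (1/s) non-identifiability by fixing U[0, 0] = 1
    s = U[0, 0]
    return M, U / s, V * s

# Usage: recover known covariances (up to scale) from simulated data
rng = np.random.default_rng(4)
k = 2000
U_t = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.2], [0.0, 0.2, 1.5]])
V_t = np.array([[1.0, -0.3], [-0.3, 0.8]])
L_U, L_V = np.linalg.cholesky(U_t), np.linalg.cholesky(V_t)
Xs = L_U @ rng.standard_normal((k, 3, 2)) @ L_V.T  # mean zero
M_hat, U_hat, V_hat = matnorm_mle(Xs)
```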
Drawing values from the distribution
Sampling from the matrix normal distribution is a special case of the sampling procedure for the multivariate normal distribution. Let $X$ be an $n \times p$ matrix of $np$ independent samples from the standard normal distribution, so that

$X \sim \mathcal{MN}_{n,p}(0, I_n, I_p).$

Then let

$Y = M + A X B,$

so that

$Y \sim \mathcal{MN}_{n,p}(M, A A^{T}, B^{T} B),$

where $A$ and $B$ can be chosen by Cholesky decomposition or a similar matrix square root operation such that $A A^{T} = U$ and $B^{T} B = V$.
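A sketch of this procedure using Cholesky factors (the function name and the test values are my own choices); the sample vec-covariance should approach $V \otimes U$:

```python
import numpy as np

def sample_matrix_normal(M, U, V, size, rng):
    """Draw `size` samples Y = M + A X B with A = chol(U) (lower
    triangular) and B = chol(V)^T, so A A^T = U and B^T B = V."""
    n, p = M.shape
    A = np.linalg.cholesky(U)              # A A^T = U
    B = np.linalg.cholesky(V).T            # B^T B = V
    X = rng.standard_normal((size, n, p))  # X ~ MN(0, I, I)
    return M + A @ X @ B

# Usage: check the sample vec-covariance against V kron U
rng = np.random.default_rng(5)
M = np.zeros((2, 2))
U = np.array([[1.0, 0.4], [0.4, 2.0]])
V = np.array([[1.5, -0.5], [-0.5, 1.0]])
Y = sample_matrix_normal(M, U, V, 200_000, rng)
vecs = Y.transpose(0, 2, 1).reshape(len(Y), -1)  # column-stacking vec
emp = np.cov(vecs, rowvar=False)
assert np.allclose(emp, np.kron(V, U), atol=0.1)
```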
Relation to other distributions
Dawid (1981) provides a discussion of the relation of the matrix-valued normal distribution to other distributions, including the Wishart distribution, Inverse Wishart distribution and matrix t-distribution, but uses different notation from that employed here.