Negative multinomial distribution

Notation: $\textrm{NM}(k_0,\, p)$
Parameters: $k_0 \in \mathbb{N}_0$ — the number of failures before the experiment is stopped; $p \in \mathbb{R}^m$ — m-vector of "success" probabilities; $p_0 = 1 - (p_1 + \cdots + p_m)$ — the probability of a "failure"
Support: $k_i \in \{0, 1, 2, \ldots\}$, $1 \le i \le m$
PMF: $\Gamma\!\left(\sum_{i=0}^{m} k_i\right) \frac{p_0^{k_0}}{\Gamma(k_0)} \prod_{i=1}^{m} \frac{p_i^{k_i}}{k_i!}$, where $\Gamma(x)$ is the Gamma function
Mean: $\frac{k_0}{p_0}\, p$
Variance: $\frac{k_0}{p_0^2}\, p p' + \frac{k_0}{p_0}\, \operatorname{diag}(p)$

In probability theory and statistics, the negative multinomial distribution is a generalization of the negative binomial distribution (NB(r, p)) to more than two outcomes.

Suppose we have an experiment that generates m+1≥2 possible outcomes, {X0,…,Xm}, each occurring with non-negative probabilities {p0,…,pm} respectively. If sampling proceeded until n observations were made, then {X0,…,Xm} would have been multinomially distributed. However, if the experiment is stopped once X0 reaches the predetermined value k0, then the distribution of the m-tuple {X1,…,Xm} is negative multinomial. These variables are not multinomially distributed because their sum X1+…+Xm is not fixed, being a draw from a negative binomial distribution.
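
The stopping-rule construction above can be illustrated with a short simulation. The Python sketch below (the helper name sample_negative_multinomial is introduced here for illustration; the parameter values are the ones used in the worked example later in this article) keeps drawing outcomes until category 0 has occurred $k_0$ times, and compares the empirical means of the remaining counts with the theoretical means $k_0 p_i / p_0$.

```python
# Simulation of the stopping rule: draw one of the m+1 outcomes repeatedly until
# outcome 0 (the "failure") has occurred k0 times; the counts of outcomes 1..m
# collected along the way follow NM(k0, p).
import numpy as np

rng = np.random.default_rng(0)

def sample_negative_multinomial(k0, p, rng):
    """One draw of (X1, ..., Xm); p = (p0, p1, ..., pm) must sum to 1."""
    counts = np.zeros(len(p), dtype=int)
    while counts[0] < k0:                       # stop once X0 reaches k0
        counts[rng.choice(len(p), p=p)] += 1
    return counts[1:]

k0, p = 10, [0.5, 0.2, 0.1, 0.2]                # p[0] = p0, the failure probability
draws = np.array([sample_negative_multinomial(k0, p, rng) for _ in range(5000)])

print("empirical means  :", draws.mean(axis=0))             # roughly (4, 2, 4)
print("theoretical means:", k0 * np.array(p[1:]) / p[0])    # k0 * p_i / p0
```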

Negative multinomial distribution example

The table below shows an example of 400 melanoma (skin cancer) patients where the Type and Site of the cancer are recorded for each subject.

The sites (locations) of the cancer may be independent, but there may be positive dependencies among the types of cancer at a given location (site). For example, localized exposure to radiation implies that an elevated level of one type of cancer at a given location may indicate a higher level of another cancer type at the same location. The negative multinomial distribution can be used to model the cancer rates at a given site and to help measure some of the cancer-type dependencies within each location.

If $x_{i,j}$ denotes the cancer rate for each site ($0 \le i \le 2$) and each type of cancer ($0 \le j \le 3$), then for a fixed site ($i_0$) the cancer rates form a negative multinomially distributed random vector, independent of the other sites. That is, for each column index (site) the column vector X has the following distribution:

$X = \{X_1, X_2, X_3\} \sim \mathrm{NM}(k_0, \{p_1, p_2, p_3\}).$

Different columns in the table (sites) are considered to be different instances of the negative multinomially distributed random vector X. Then we have the following estimates of the expected counts (frequencies of cancer):

$\hat{\mu}_{i,j} = \frac{x_{i,\cdot} \times x_{\cdot,j}}{x_{\cdot,\cdot}}$, where $x_{i,\cdot} = \sum_{j=0}^{3} x_{i,j}$, $x_{\cdot,j} = \sum_{i=0}^{2} x_{i,j}$ and $x_{\cdot,\cdot} = \sum_{i=0}^{2} \sum_{j=0}^{3} x_{i,j}$. Example: $\hat{\mu}_{1,1} = \frac{x_{1,\cdot} \times x_{\cdot,1}}{x_{\cdot,\cdot}} = \frac{34 \times 68}{400} = 5.78$.
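
As a minimal arithmetic check of the expected-count formula, using only the marginal totals 34, 68 and 400 quoted in the example (the helper name is illustrative):

```python
# Expected count under the margins: (row total) * (column total) / (grand total).
def expected_count(row_total, col_total, grand_total):
    return row_total * col_total / grand_total

print(expected_count(34, 68, 400))  # 5.78
```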

For the first site (Head and Neck, j = 0), suppose that $X = \{X_1 = 5, X_2 = 1, X_3 = 5\}$ and $X \sim \mathrm{NM}(k_0 = 10, \{p_1 = 0.2, p_2 = 0.1, p_3 = 0.2\})$. Then:

$p_0 = 1 - \sum_{i=1}^{3} p_i = 0.5$
$\mathrm{NM}(X \mid k_0, \{p_1, p_2, p_3\}) = 0.00465585119998784$
$\mathrm{cov}[X_1, X_3] = \frac{10 \times 0.2 \times 0.2}{0.5^2} = 1.6$
$\mu_2 = \frac{k_0 p_2}{p_0} = \frac{10 \times 0.1}{0.5} = 2.0$
$\mu_3 = \frac{k_0 p_3}{p_0} = \frac{10 \times 0.2}{0.5} = 4.0$
$\mathrm{corr}[X_2, X_3] = \left(\frac{\mu_2 \mu_3}{(k_0 + \mu_2)(k_0 + \mu_3)}\right)^{1/2}$, and therefore
$\mathrm{corr}[X_2, X_3] = \left(\frac{2 \times 4}{(10 + 2)(10 + 4)}\right)^{1/2} = 0.21821789023599242.$
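
These numbers can be reproduced directly from the PMF and moment formulas given at the top of the article. The following sketch (the helper neg_multinomial_pmf is introduced here for illustration and works in log space for numerical stability) evaluates the PMF at the observed vector and recomputes the covariance, means and correlation:

```python
# Re-deriving the worked example's numbers from the PMF and moment formulas.
import math

def neg_multinomial_pmf(k, k0, p):
    """PMF of NM(k0, p) at k = (k1, ..., km), with p = (p1, ..., pm)."""
    p0 = 1.0 - sum(p)
    log_pmf = (math.lgamma(k0 + sum(k)) - math.lgamma(k0) + k0 * math.log(p0)
               + sum(ki * math.log(pi) - math.lgamma(ki + 1) for ki, pi in zip(k, p)))
    return math.exp(log_pmf)

k0, p = 10, (0.2, 0.1, 0.2)
p0 = 1.0 - sum(p)                                        # 0.5
x = (5, 1, 5)

print(neg_multinomial_pmf(x, k0, p))                     # ~0.0046558512
print(k0 * p[0] * p[2] / p0 ** 2)                        # cov[X1, X3] = 1.6
mu2, mu3 = k0 * p[1] / p0, k0 * p[2] / p0                # 2.0 and 4.0
print(math.sqrt(mu2 * mu3 / ((k0 + mu2) * (k0 + mu3))))  # corr[X2, X3] ~ 0.2182
```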

Notice that the pairwise NM correlations are always positive, whereas the correlations between multinomial counts are always negative. As the parameter $k_0$ increases, the paired correlations tend to zero. Thus, for large $k_0$, the negative multinomial counts $X_i$ behave as independent Poisson random variables with respect to their means $\mu_i = \frac{k_0 p_i}{p_0}$.
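
One way to see the decay of the correlation is to hold the means fixed (here the values 2.0 and 4.0 from the example) and let $k_0$ grow in the correlation formula above; the short loop below is only an illustration of that limit:

```python
# Pairwise correlation with the means held fixed while k0 grows: it shrinks to 0.
import math

mu2, mu3 = 2.0, 4.0
for k0 in (10, 100, 1000, 10000):
    print(k0, math.sqrt(mu2 * mu3 / ((k0 + mu2) * (k0 + mu3))))
```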

The marginal distribution of each of the $X_i$ variables is negative binomial, since the count $X_i$ (treated as a success) is measured against all the other outcomes combined (treated as failures). But jointly, the distribution of $X = \{X_1, \ldots, X_m\}$ is negative multinomial, i.e., $X \sim \mathrm{NM}(k_0, \{p_1, \ldots, p_m\})$.
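
The negative binomial marginal can also be checked numerically. The sketch below (an illustration, not part of the original example) sums the joint PMF over a large truncated grid of $(k_2, k_3)$ and compares the result with SciPy's nbinom, whose "success" probability in SciPy's parameterization corresponds to $p_0 / (p_0 + p_1)$ here:

```python
# Check that the marginal of X1 under NM(k0, {p1, p2, p3}) is negative binomial.
import numpy as np
from scipy.special import gammaln
from scipy.stats import nbinom

k0, p0, p1, p2, p3 = 10, 0.5, 0.2, 0.1, 0.2
k2, k3 = np.meshgrid(np.arange(150), np.arange(150), indexing="ij")

def marginal_of_x1(k1):
    """P(X1 = k1), approximated by summing the joint pmf over k2, k3 < 150."""
    log_pmf = (gammaln(k0 + k1 + k2 + k3) - gammaln(k0) + k0 * np.log(p0)
               + k1 * np.log(p1) - gammaln(k1 + 1)
               + k2 * np.log(p2) - gammaln(k2 + 1)
               + k3 * np.log(p3) - gammaln(k3 + 1))
    return np.exp(log_pmf).sum()

for k1 in range(5):
    print(k1, marginal_of_x1(k1), nbinom.pmf(k1, k0, p0 / (p0 + p1)))
```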

Parameter estimation

  • Estimation of the mean (expected) frequency counts ($\mu_j$) of each outcome ($X_j$) using maximum likelihood is possible. If we have a single observation vector $\{x_1, \ldots, x_m\}$, then $\hat{\mu}_i = x_i$. If we have several observation vectors, as in this case where we have the cancer-type frequencies for 3 different sites, then the MLE estimates of the mean counts are $\hat{\mu}_j = \frac{x_{j,\cdot}}{I}$, where $0 \le j \le J$ is the cancer-type index and the summation runs over the $I$ observed (sampled) vectors. For the cancer data above ($I = 3$ sites), the estimated mean counts used in the calculation below are $\hat{\mu}_1 = 61.67$, $\hat{\mu}_2 = 41.67$ and $\hat{\mu}_3 = 18.67$.
  • There is no MLE estimate for the NM $k_0$ parameter. However, there are approximate protocols for estimating $k_0$ using the chi-squared goodness-of-fit statistic. In the usual chi-squared statistic, $X^2 = \sum_i \frac{(x_i - \mu_i)^2}{\mu_i}$, we can replace the expected means ($\mu_i$) by their estimates ($\hat{\mu}_i$) and replace the denominators by the corresponding negative multinomial variances. This yields the test statistic for negative multinomially distributed data $X^2(k_0) = \sum_i \frac{(x_i - \hat{\mu}_i)^2}{\hat{\mu}_i \left(1 + \frac{\hat{\mu}_i}{k_0}\right)}$. We can then estimate $k_0$ by varying its value in the expression $X^2(k_0)$ and matching the values of this statistic with the corresponding asymptotic chi-squared distribution, whose degrees of freedom here are df = (# rows - 1)(# columns - 1) = (3 - 1)(4 - 1) = 6. Thus we solve the equation $X^2(k_0) = 5.261948$ for the single variable of interest, the unknown parameter $k_0$. In the cancer example, suppose $x = \{x_1 = 5, x_2 = 1, x_3 = 5\}$; the solution is then an asymptotic chi-squared distribution driven estimate of $k_0$: $X^2(k_0) = \sum_{i=1}^{3} \frac{(x_i - \hat{\mu}_i)^2}{\hat{\mu}_i (1 + \hat{\mu}_i / k_0)} = \frac{(5 - 61.67)^2}{61.67 (1 + 61.67 / k_0)} + \frac{(1 - 41.67)^2}{41.67 (1 + 41.67 / k_0)} + \frac{(5 - 18.67)^2}{18.67 (1 + 18.67 / k_0)} = 5.261948$. Solving this equation for $k_0$ provides the estimate of the last parameter. Mathematica provides three distinct solutions to this equation: {50.5466, -21.5204, 2.40461}. Since $k_0 > 0$, there are two candidate solutions.
  • Estimates of probabilities: assume $k_0 = 2$ and $\frac{\mu_i}{k_0}\, p_0 = p_i$. Then $\frac{61.67}{k_0}\, p_0 = 31\, p_0 = p_1$, $20\, p_0 = p_2$ and $9\, p_0 = p_3$. Hence $1 - p_0 = p_1 + p_2 + p_3 = 60\, p_0$, so $p_0 = \frac{1}{61}$, $p_1 = \frac{31}{61}$, $p_2 = \frac{20}{61}$ and $p_3 = \frac{9}{61}$. Therefore, the best model distribution for the observed sample $x = \{x_1 = 5, x_2 = 1, x_3 = 5\}$ is $X \sim \mathrm{NM}\!\left(2, \left\{\frac{31}{61}, \frac{20}{61}, \frac{9}{61}\right\}\right)$. (A numerical sketch of the $k_0$ estimation and of these probability estimates follows this list.)
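
The estimation steps above can be carried out numerically as follows. This is a sketch under the values quoted in the text (observed counts $x = (5, 1, 5)$, estimated means 61.67, 41.67 and 18.67, the target value 5.261948, and the rounded ratios 31, 20, 9); the helper names are illustrative.

```python
# Sketch of the k0-estimation protocol: solve X^2(k0) = 5.261948 for k0, then
# convert a chosen k0 into probability estimates via mu_i / k0 = p_i / p0.
from scipy.optimize import brentq

x = (5, 1, 5)                 # observed counts for one site
mu = (61.67, 41.67, 18.67)    # estimated mean counts from the text

def chi2_gap(k0):
    """X^2(k0) minus the target value 5.261948."""
    return sum((xi - mi) ** 2 / (mi * (1 + mi / k0)) for xi, mi in zip(x, mu)) - 5.261948

k0_hat = brentq(chi2_gap, 1.0, 10.0)
print(k0_hat)                 # ~2.405, matching one of the candidates quoted above

# Probability estimates for the choice k0 = 2, with the ratios mu_i / k0
# rounded to 31, 20 and 9 as in the text:
ratios = [31, 20, 9]          # p_i / p0
p0 = 1 / (1 + sum(ratios))    # = 1/61
print(p0, [r * p0 for r in ratios])   # p1 = 31/61, p2 = 20/61, p3 = 9/61
```
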
Related distributions

  • Negative binomial distribution
  • Multinomial distribution
  • Inverted Dirichlet distribution, a conjugate prior for the negative multinomial
