Distributions Provided by mniw

Wishart Distribution

The Wishart distribution on a random positive-definite matrix Xq × q is denoted X ∼ Wish (Ψ, ν), and defined as X = (LZ)(LZ)′, where:

  • Ψq × q = LL′ is the positive-definite scale matrix parameter,

  • ν > q is the shape parameter,

  • Zq × q is a random lower-triangular matrix with elements

    $$ Z_{ij} \begin{cases} \overset{\;\textrm{iid}\;}{\sim}\operatorname{Normal}(0,1) & i > j \\ \overset{\:\textrm{ind}\:}{\sim}\chi_{(\nu-i+1)} & i = j \\ = 0 & i < j, \end{cases} $$

    where $\chi_{(k)}$ denotes the chi distribution, i.e., $Z_{ii}^2 \overset{\:\textrm{ind}\:}{\sim}\chi^2_{(\nu-i+1)}$.
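This construction (the Bartlett decomposition, with Z lower triangular) translates directly into code. A minimal NumPy sketch of a Wishart draw, illustrative only and not the mniw implementation, with made-up parameter values:

```python
import numpy as np

def rwish(Psi, nu, rng):
    """Draw X ~ Wish(Psi, nu) via the Bartlett decomposition X = (L Z)(L Z)'."""
    q = Psi.shape[0]
    L = np.linalg.cholesky(Psi)  # Psi = L L'
    Z = np.zeros((q, q))
    # below-diagonal entries are iid standard normal
    Z[np.tril_indices(q, -1)] = rng.standard_normal(q * (q - 1) // 2)
    # diagonal entries satisfy Z_ii^2 ~ chi^2_(nu - i + 1)
    Z[np.diag_indices(q)] = np.sqrt(rng.chisquare(nu - np.arange(q)))
    LZ = L @ Z
    return LZ @ LZ.T

rng = np.random.default_rng(0)
Psi = np.array([[2.0, 0.5], [0.5, 1.0]])  # example scale matrix
X = rwish(Psi, nu=5.0, rng=rng)
```

Since E[X] = νΨ, averaging many such draws provides a quick correctness check.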

The log-density of the Wishart distribution is

$$ \log p({\boldsymbol{X}}\mid {\boldsymbol{\Psi}}, \nu) = -\textstyle{\frac{1}{2}} \left[\mathrm{tr}({\boldsymbol{\Psi}}^{-1} {\boldsymbol{X}}) + (q+1-\nu)\log |{\boldsymbol{X}}| + \nu \log |{\boldsymbol{\Psi}}| + \nu q \log(2) + 2 \log \Gamma_q(\textstyle{\frac{\nu }{2}})\right], $$

where Γn(x) is the multivariate Gamma function defined as

$$ \Gamma_n(x) = \pi^{n(n-1)/4} \prod_{j=1}^n \Gamma\big(x + \textstyle{\frac{1}{2}} (1-j)\big). $$
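The log-density can be checked numerically against SciPy, which uses the same (scale, df) parameterization; `dwish_log` below is a hypothetical helper coded term-by-term from the formula, with made-up parameter values:

```python
import numpy as np
from scipy.special import multigammaln
from scipy.stats import wishart

def dwish_log(X, Psi, nu):
    """Wishart log-density, coded term-by-term from the formula above."""
    q = X.shape[0]
    _, ldX = np.linalg.slogdet(X)
    _, ldP = np.linalg.slogdet(Psi)
    return -0.5 * (np.trace(np.linalg.solve(Psi, X))
                   + (q + 1 - nu) * ldX + nu * ldP
                   + nu * q * np.log(2) + 2 * multigammaln(nu / 2, q))

# example values (made up)
Psi = np.array([[2.0, 0.5], [0.5, 1.0]])
X = np.array([[3.0, 1.0], [1.0, 4.0]])
nu = 5.0
check = np.isclose(dwish_log(X, Psi, nu), wishart.logpdf(X, df=nu, scale=Psi))
```

Here `scipy.special.multigammaln(a, d)` evaluates log Γd(a) directly.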

Inverse-Wishart Distribution

The Inverse-Wishart distribution X ∼ InvWish (Ψ, ν) is defined as X−1 ∼ Wish (Ψ−1, ν). Its log-density is given by

$$ \log p({\boldsymbol{X}}\mid {\boldsymbol{\Psi}}, \nu) = -\textstyle{\frac{1}{2}} \left[\mathrm{tr}({\boldsymbol{\Psi}}{\boldsymbol{X}}^{-1}) + (\nu+q+1) \log |{\boldsymbol{X}}| - \nu \log |{\boldsymbol{\Psi}}| + \nu q \log(2) + 2 \log \Gamma_q(\textstyle{\frac{\nu }{2}})\right]. $$

Properties

If Xq × q ∼ Wish (Ψ, ν), then for a nonzero vector a ∈ ℝq we have

$$ \frac{{\boldsymbol{a}}'{\boldsymbol{X}}{\boldsymbol{a}}}{{\boldsymbol{a}}'{\boldsymbol{\Psi}}{\boldsymbol{a}}} \sim \chi^2_{(\nu)}. $$
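This property is easy to verify by simulation. A sketch for integer ν, using the fact that a Wish(Ψ, ν) draw is then a sum of ν outer products (all parameter values below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
q, nu = 3, 7           # integer shape parameter
Psi = np.eye(q) + 0.3  # example positive-definite scale matrix
a = np.array([1.0, -2.0, 0.5])
L = np.linalg.cholesky(Psi)  # Psi = L L'

ratios = np.empty(2000)
for k in range(2000):
    W = rng.standard_normal((q, nu))  # X = (L W)(L W)' ~ Wish(Psi, nu)
    X = (L @ W) @ (L @ W).T
    ratios[k] = a @ X @ a / (a @ Psi @ a)
# ratios should behave like chi^2_(nu) draws, e.g. have mean close to nu
```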

Matrix-Normal Distribution

The Matrix-Normal distribution on a random matrix Xp × q is denoted X ∼ MatNorm (Λ, ΣR, ΣC), and defined as X = LZU + Λ, where:

  • Λp × q is the mean matrix parameter,
  • ΣRp × p = LL′ is the row-variance matrix parameter,
  • ΣCq × q = U′U is the column-variance matrix parameter,
  • Zp × q is a random matrix with $Z_{ij} \overset{\;\textrm{iid}\;}{\sim}\operatorname{Normal}(0,1)$.

The log-density of the Matrix-Normal distribution is

$$ \log p({\boldsymbol{X}}\mid {\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C) = -\textstyle{\frac{1}{2}} \left[\mathrm{tr}\big({\boldsymbol{\Sigma}}_C^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}})'{\boldsymbol{\Sigma}}_R^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}})\big) + pq \log(2\pi) + p \log |{\boldsymbol{\Sigma}}_C| + q \log |{\boldsymbol{\Sigma}}_R|\right]. $$
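As a sanity check, sampling via X = LZU + Λ and evaluating the density can both be done with NumPy/SciPy; `scipy.stats.matrix_normal` uses the same (mean, rowcov, colcov) parameterization, and the parameter values below are illustrative:

```python
import numpy as np
from scipy.stats import matrix_normal

rng = np.random.default_rng(0)
p, q = 3, 2
Lam = np.zeros((p, q))                     # mean matrix (example values)
SigR = np.eye(p) + 0.2                     # row variance
SigC = np.array([[1.0, 0.3], [0.3, 0.5]])  # column variance

# sample via X = L Z U + Lam with SigR = L L' and SigC = U'U
L = np.linalg.cholesky(SigR)
U = np.linalg.cholesky(SigC).T
Z = rng.standard_normal((p, q))
X = L @ Z @ U + Lam

logp = matrix_normal(mean=Lam, rowcov=SigR, colcov=SigC).logpdf(X)
```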

Properties

If Xp × q ∼ MatNorm (Λ, ΣR, ΣC), then for nonzero vectors a ∈ ℝp and b ∈ ℝq we have

$$ {\boldsymbol{a}}'{\boldsymbol{X}}{\boldsymbol{b}}\sim \operatorname{Normal}({\boldsymbol{a}}'{\boldsymbol{\Lambda}}{\boldsymbol{b}},\ {\boldsymbol{a}}'{\boldsymbol{\Sigma}}_R{\boldsymbol{a}}\cdot {\boldsymbol{b}}'{\boldsymbol{\Sigma}}_C{\boldsymbol{b}}). $$

Matrix-Normal Inverse-Wishart Distribution

The Matrix-Normal Inverse-Wishart distribution on a random matrix Xp × q and a random positive-definite matrix Vq × q is denoted (X, V) ∼ MNIW (Λ, Σ, Ψ, ν), and defined as

$$ \begin{aligned} {\boldsymbol{X}}\mid {\boldsymbol{V}}& \sim \operatorname{MatNorm}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}, {\boldsymbol{V}}) \\ {\boldsymbol{V}}& \sim \operatorname{InvWish}({\boldsymbol{\Psi}}, \nu). \end{aligned} $$
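The two-stage definition gives a direct sampling recipe. A SciPy sketch with made-up hyperparameters (illustrative only, not the mniw implementation):

```python
import numpy as np
from scipy.stats import invwishart, matrix_normal

rng = np.random.default_rng(0)
p, q = 3, 2
Lam = np.zeros((p, q))                    # example hyperparameters
Sigma = np.eye(p)                         # row variance of X given V
Psi = np.array([[2.0, 0.5], [0.5, 1.0]])
nu = 6.0

# (X, V) ~ MNIW(Lam, Sigma, Psi, nu), drawn in two stages
V = invwishart.rvs(df=nu, scale=Psi, random_state=rng)
X = matrix_normal.rvs(mean=Lam, rowcov=Sigma, colcov=V, random_state=rng)
```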

Properties

The MNIW distribution is the conjugate prior for the multivariate-response regression model

$$ {\boldsymbol{Y}}_{n \times q} \sim \operatorname{MatNorm}({\boldsymbol{X}}_{n \times p}{\boldsymbol{\beta}}_{p \times q}, {\boldsymbol{V}}, {\boldsymbol{\Sigma}}). $$

That is, if (β, Σ) ∼ MNIW (Λ, Ω−1, Ψ, ν), then

$$ {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}\mid {\boldsymbol{Y}}\sim \operatorname{MNIW}(\hat {\boldsymbol{\Lambda}}, \hat {\boldsymbol{\Omega}}^{-1}, \hat {\boldsymbol{\Psi}}, \hat \nu), $$

where

$$ \begin{aligned} \hat {\boldsymbol{\Omega}}& = {\boldsymbol{X}}'{\boldsymbol{V}}^{-1}{\boldsymbol{X}}+ {\boldsymbol{\Omega}} & \hat {\boldsymbol{\Psi}}& = {\boldsymbol{\Psi}}+ {\boldsymbol{Y}}'{\boldsymbol{V}}^{-1}{\boldsymbol{Y}}+ {\boldsymbol{\Lambda}}'{\boldsymbol{\Omega}}{\boldsymbol{\Lambda}}- \hat {\boldsymbol{\Lambda}}'\hat {\boldsymbol{\Omega}}\hat {\boldsymbol{\Lambda}} \\ \hat {\boldsymbol{\Lambda}}& = \hat {\boldsymbol{\Omega}}^{-1}({\boldsymbol{X}}'{\boldsymbol{V}}^{-1}{\boldsymbol{Y}}+ {\boldsymbol{\Omega}}{\boldsymbol{\Lambda}}) & \hat \nu & = \nu + n. \end{aligned} $$
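These updates can be implemented in a few lines; `mniw_post` is a hypothetical helper sketching the formulas above, not mniw's implementation:

```python
import numpy as np

def mniw_post(Y, X, V, Lam, Omega, Psi, nu):
    """MNIW posterior hyperparameters for the regression model above."""
    ViX = np.linalg.solve(V, X)  # V^{-1} X
    ViY = np.linalg.solve(V, Y)  # V^{-1} Y
    Omega_h = X.T @ ViX + Omega
    Lam_h = np.linalg.solve(Omega_h, X.T @ ViY + Omega @ Lam)
    Psi_h = Psi + Y.T @ ViY + Lam.T @ Omega @ Lam - Lam_h.T @ Omega_h @ Lam_h
    nu_h = nu + Y.shape[0]
    return Lam_h, Omega_h, Psi_h, nu_h
```

The returned Ψ̂ is symmetric positive definite whenever the inputs are valid, since Ω̂ ⪰ X′V⁻¹X.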

Matrix-t Distribution

The Matrix-t distribution on a random matrix Xp × q is denoted X ∼ MatT (Λ, ΣR, ΣC, ν), and defined as the marginal distribution of X for (X, V) ∼ MNIW (Λ, ΣR, ΣC, ν). Its log-density is given by

$$ \begin{aligned} \log p({\boldsymbol{X}}\mid {\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C, \nu) & = -\textstyle{\frac{1}{2}} \Big[(\nu+p+q-1)\log | I + {\boldsymbol{\Sigma}}_R^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}}){\boldsymbol{\Sigma}}_C^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}})'| \\ & \phantom{= -\textstyle{\frac{1}{2}} \Big[} + q \log |{\boldsymbol{\Sigma}}_R| + p \log |{\boldsymbol{\Sigma}}_C| \\ & \phantom{= -\textstyle{\frac{1}{2}} \Big[} + pq \log(\pi) - 2 \log \Gamma_q(\textstyle{\frac{\nu+p+q-1}{2}}) + 2 \log \Gamma_q(\textstyle{\frac{\nu+q-1}{2}})\Big]. \end{aligned} $$
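The density can be evaluated with `scipy.special.multigammaln` for Γq. The following sketch (hypothetical helper `dmt_log`) implements the log-density term-by-term; in the scalar case p = q = 1 with Λ = 0 and ΣR = ΣC = 1 it reduces to the density of $t_{(\nu)}/\sqrt{\nu}$, matching the property below:

```python
import numpy as np
from scipy.special import multigammaln

def dmt_log(X, Lam, SigR, SigC, nu):
    """Matrix-t log-density in the parameterization used above."""
    p, q = X.shape
    R = X - Lam
    M = np.eye(p) + np.linalg.solve(SigR, R) @ np.linalg.solve(SigC, R.T)
    _, ldM = np.linalg.slogdet(M)
    _, ldR = np.linalg.slogdet(SigR)
    _, ldC = np.linalg.slogdet(SigC)
    return -0.5 * ((nu + p + q - 1) * ldM + q * ldR + p * ldC
                   + p * q * np.log(np.pi)
                   - 2 * multigammaln((nu + p + q - 1) / 2, q)
                   + 2 * multigammaln((nu + q - 1) / 2, q))
```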

Properties

If Xp × q ∼ MatT (Λ, ΣR, ΣC, ν), then for nonzero vectors a ∈ ℝp and b ∈ ℝq we have

$$ \frac{{\boldsymbol{a}}'{\boldsymbol{X}}{\boldsymbol{b}}- \mu}{\sigma} \sim t_{(\nu -q + 1)}, $$

where $$ \mu = {\boldsymbol{a}}'{\boldsymbol{\Lambda}}{\boldsymbol{b}}, \qquad \sigma^2 = \frac{{\boldsymbol{a}}'{\boldsymbol{\Sigma}}_R{\boldsymbol{a}}\cdot {\boldsymbol{b}}'{\boldsymbol{\Sigma}}_C{\boldsymbol{b}}}{\nu - q + 1}. $$

Random-Effects Normal Distribution

Consider the multivariate normal distribution on q-dimensional vectors x and μ given by

$$ \begin{aligned} {\boldsymbol{x}}\mid {\boldsymbol{\mu}}& \sim \operatorname{Normal}({\boldsymbol{\mu}}, {\boldsymbol{V}}) \\ {\boldsymbol{\mu}}& \sim \operatorname{Normal}({\boldsymbol{\lambda}}, {\boldsymbol{\Sigma}}). \end{aligned} $$

The random-effects normal distribution is defined as the posterior distribution μ ∼ p(μ ∣ x), which is given by

$$ {\boldsymbol{\mu}}\mid {\boldsymbol{x}}\sim \operatorname{Normal}\big({\boldsymbol{G}}({\boldsymbol{x}}- {\boldsymbol{\lambda}}) + {\boldsymbol{\lambda}},\ {\boldsymbol{G}}{\boldsymbol{V}}\big), \qquad {\boldsymbol{G}}= {\boldsymbol{\Sigma}}({\boldsymbol{V}}+ {\boldsymbol{\Sigma}})^{-1}. $$

The notation for this distribution is μ ∼ RxNorm (x, V, λ, Σ).
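Once G is formed, a draw from this distribution is a single multivariate-normal draw. A sketch with a hypothetical helper `r_rxnorm` (note GV = Σ(V + Σ)⁻¹V is symmetric in exact arithmetic, so we symmetrize only against rounding error):

```python
import numpy as np

def r_rxnorm(x, V, lam, Sigma, rng):
    """Draw mu ~ RxNorm(x, V, lam, Sigma), i.e. from p(mu | x) above."""
    G = Sigma @ np.linalg.inv(V + Sigma)
    mean = G @ (x - lam) + lam
    cov = G @ V
    cov = (cov + cov.T) / 2  # symmetrize against rounding error
    return rng.multivariate_normal(mean, cov)
```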

Hierarchical Normal-Normal Model

The hierarchical Normal-Normal model is defined as

$$ \begin{aligned} {\boldsymbol{y}}_i \mid {\boldsymbol{\mu}}_i, {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}& \overset{\:\textrm{ind}\:}{\sim}\operatorname{Normal}({\boldsymbol{\mu}}_i, {\boldsymbol{V}}_i) \\ {\boldsymbol{\mu}}_i \mid {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}& \overset{\:\textrm{ind}\:}{\sim}\operatorname{Normal}({\boldsymbol{x}}_i'{\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) \\ ({\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) & \sim \operatorname{MNIW}({\boldsymbol{\Lambda}}, {\boldsymbol{\Omega}}^{-1}, {\boldsymbol{\Psi}}, \nu), \end{aligned} $$

where:

  • yiq × 1 is the response vector for subject i,
  • μiq × 1 is the random effect for subject i,
  • Viq × q is the error variance for subject i,
  • xip × 1 is the covariate vector for subject i,
  • βp × q is the random-effects coefficient matrix,
  • Σq × q is the random-effects error variance.

Let Yn × q = (y1, …, yn)′, Xn × p = (x1, …, xn)′, and Θn × q = (μ1, …, μn)′. If interest lies in the posterior distribution p(Θ, β, Σ ∣ Y, X), then a Gibbs sampler can be used to cycle through the following conditional distributions:

$$ \begin{aligned} {\boldsymbol{\mu}}_i \mid {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}, {\boldsymbol{Y}}, {\boldsymbol{X}}& \overset{\:\textrm{ind}\:}{\sim}\operatorname{RxNorm}({\boldsymbol{y}}_i, {\boldsymbol{V}}_i, {\boldsymbol{x}}_i'{\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) \\ {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}\mid {\boldsymbol{\Theta}}, {\boldsymbol{Y}}, {\boldsymbol{X}}& \sim \operatorname{MNIW}(\hat {\boldsymbol{\Lambda}}, \hat {\boldsymbol{\Omega}}^{-1}, \hat {\boldsymbol{\Psi}}, \hat \nu), \end{aligned} $$

where $\hat {\boldsymbol{\Lambda}}$, $\hat {\boldsymbol{\Omega}}$, $\hat {\boldsymbol{\Psi}}$, and ν̂ are obtained from the MNIW conjugate posterior formulas with ${\boldsymbol{Y}}\leftarrow {\boldsymbol{\Theta}}$ and ${\boldsymbol{V}}\leftarrow {\boldsymbol{I}}_n$.
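The whole Gibbs sampler can be sketched end-to-end. The code below is illustrative only, not the mniw implementation: the data, per-subject variances, and hyperparameters are all made up, and the (β, Σ) update applies the conjugate posterior formulas with Θ in place of Y and V = Iₙ:

```python
import numpy as np
from scipy.stats import invwishart, matrix_normal

rng = np.random.default_rng(0)
n, p, q = 20, 2, 2

# simulated data (example values)
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))
Vi = np.array([np.eye(q) for _ in range(n)])  # per-subject error variances

# prior hyperparameters (example values)
Lam0, Omega0, Psi0, nu0 = np.zeros((p, q)), np.eye(p), np.eye(q), q + 2.0

beta, Sigma, Theta = np.zeros((p, q)), np.eye(q), Y.copy()
for it in range(100):
    # mu_i | beta, Sigma, Y, X ~ RxNorm(y_i, V_i, x_i' beta, Sigma)
    for i in range(n):
        G = Sigma @ np.linalg.inv(Vi[i] + Sigma)
        lam_i = X[i] @ beta
        mean = G @ (Y[i] - lam_i) + lam_i
        cov = G @ Vi[i]
        Theta[i] = rng.multivariate_normal(mean, (cov + cov.T) / 2)
    # beta, Sigma | Theta, X ~ MNIW posterior with Y <- Theta and V = I_n
    Omega_h = X.T @ X + Omega0
    Lam_h = np.linalg.solve(Omega_h, X.T @ Theta + Omega0 @ Lam0)
    Psi_h = (Psi0 + Theta.T @ Theta + Lam0.T @ Omega0 @ Lam0
             - Lam_h.T @ Omega_h @ Lam_h)
    nu_h = nu0 + n
    Sigma = invwishart.rvs(df=nu_h, scale=(Psi_h + Psi_h.T) / 2, random_state=rng)
    beta = matrix_normal.rvs(mean=Lam_h, rowcov=np.linalg.inv(Omega_h),
                             colcov=Sigma, random_state=rng)
```

In practice one would store the (Θ, β, Σ) draws after a burn-in period rather than keeping only the last state.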