Model description
The Student-$t$ random-effects model replaces the Gaussian distribution for study-level effects with a Student-$t$ distribution. The degrees-of-freedom parameter $\nu$ is estimated from the data and controls tail heaviness: small $\nu$ (e.g., $\nu \approx 3$) allows extreme study effects with much higher probability than the Gaussian, reducing the influence of outlier studies on the pooled estimate.
Mathematical specification
Likelihood:

$$y_i \sim \mathcal{N}\!\left(\mu + u_{s(i)},\; se_i^2\right), \qquad i = 1, \dots, N$$

Random effects:

$$u_k \sim t_\nu(0, \tau), \qquad k = 1, \dots, K$$

Equivalently, in a scale-mixture representation:

$$u_k = \tau \,\frac{z_k}{\sqrt{v_k / \nu}}, \qquad z_k \sim \mathcal{N}(0, 1), \qquad v_k \sim \chi^2_\nu$$

Priors:

$$\mu \sim \mathcal{N}(0, 1), \qquad \tau \sim \mathrm{Half\text{-}Cauchy}(0, 0.5), \qquad \nu \sim \mathrm{Gamma}(2, 0.1)$$
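The scale-mixture identity can be checked numerically. The sketch below (assuming NumPy and SciPy are available; they are not dependencies stated by this page) compares Monte Carlo quantiles of $\tau z / \sqrt{v/\nu}$ against exact quantiles of a $t_\nu$ distribution scaled by $\tau$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
nu, tau = 4.0, 0.3
n = 200_000

# Scale-mixture draws: u = tau * z / sqrt(v / nu), z ~ N(0,1), v ~ chi^2_nu
z = rng.standard_normal(n)
v = rng.chisquare(nu, size=n)
u = tau * z / np.sqrt(v / nu)

# Exact quantiles of t_nu scaled by tau
probs = [0.05, 0.25, 0.5, 0.75, 0.95]
mc = np.quantile(u, probs)
exact = tau * stats.t.ppf(probs, df=nu)
print(np.max(np.abs(mc - exact)))  # small Monte Carlo error
```

The two sets of quantiles agree up to Monte Carlo error, confirming that the mixture draws follow the marginal $t_\nu(0, \tau)$ law.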
Stan code
data {
  int<lower=1> N;                         // number of observations
  int<lower=1> K;                         // number of studies
  vector[N] y;                            // observed effect sizes
  vector<lower=0>[N] se;                  // known standard errors
  array[N] int<lower=1, upper=K> study;   // study index for each observation
}
parameters {
  real mu;                                // pooled effect
  real<lower=0> tau;                      // between-study scale
  real<lower=2> nu;                       // degrees of freedom
  vector[K] z;                            // standardised study effects
  vector<lower=0>[K] v;                   // chi-square mixing variables
}
transformed parameters {
  vector[K] u = tau * z ./ sqrt(v / nu);  // Student-t study effects
}
model {
  target += normal_lpdf(mu | 0, 1);
  target += cauchy_lpdf(tau | 0, 0.5);
  target += gamma_lpdf(nu | 2, 0.1);
  target += std_normal_lpdf(z);
  target += chi_square_lpdf(v | nu);
  target += normal_lpdf(y | mu + u[study], se);
}
generated quantities {
  real b_Intercept = mu;                  // alias for downstream tooling
}

How bayesma calls this model
Selected by model_type = "random_effect" with re_dist = "t". The nu_prior argument sets the prior on $\nu$. The default Gamma(2, 0.1) prior places most of its mass on moderate values (roughly $3 < \nu < 50$), allowing substantial flexibility between near-Gaussian ($\nu \gtrsim 30$) and heavy-tailed ($\nu \lesssim 10$) behaviour.
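The claim about the prior mass can be verified directly. Note that Stan parameterises the gamma distribution by shape and rate, so gamma(2, 0.1) corresponds to SciPy's shape 2 with scale $1/0.1 = 10$ (a sketch assuming SciPy is available):

```python
from scipy import stats

# Stan's gamma(2, 0.1) = shape 2, rate 0.1, i.e. scale 10: mean 20, sd ~14.1
nu_prior = stats.gamma(a=2, scale=1 / 0.1)
print(nu_prior.mean(), nu_prior.std())     # prior mean and sd of nu
print(nu_prior.cdf(50) - nu_prior.cdf(3))  # mass between nu = 3 and nu = 50
```

Roughly 92% of the prior mass falls between $\nu = 3$ and $\nu = 50$, spanning both heavy-tailed and near-Gaussian regimes.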
Parameterisation notes
The scale-mixture representation samples the standardised effects $z$ and the auxiliary mixing variables $v$ jointly. This is preferable to placing a student_t density on $u$ directly because it yields a non-centred parameterisation of the heavy-tailed effects: the sampled parameters are decorrelated from $\tau$ and $\nu$, giving geometry that HMC explores more efficiently.
The constraint real<lower=2> nu ensures the variance of the $t$ distribution, $\nu / (\nu - 2)$, is finite. For most meta-analytic applications, $\nu \le 2$ (infinite between-study variance) is not meaningful.
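The finite-variance boundary is easy to illustrate numerically (a sketch assuming SciPy is available): the variance of a standard $t_\nu$ distribution matches $\nu/(\nu - 2)$ for $\nu > 2$ and diverges as $\nu \to 2$.

```python
from scipy import stats

# Var[t_nu] = nu / (nu - 2) for nu > 2; it blows up near the boundary
for nu in [2.5, 5.0, 30.0]:
    print(nu, stats.t(df=nu).var(), nu / (nu - 2))

print(stats.t(df=2.0).var())  # infinite variance exactly at nu = 2
```

As $\nu$ grows, the variance approaches 1 and the distribution approaches the standard Gaussian.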
Identifiability
The degrees-of-freedom parameter $\nu$ is poorly identified when the number of studies $K$ is small. With few studies, the data are consistent with a wide range of $\nu$ values, and the posterior for $\nu$ is largely prior-driven. In this case:
- Set $\nu$ to an effectively fixed value (e.g., $\nu \approx 5$) via a tight prior: gamma(50, 10).
- Report the sensitivity of the pooled estimate to the assumed $\nu$.
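How tight is that prior? In Stan's shape/rate parameterisation, gamma(50, 10) has mean $50/10 = 5$ and standard deviation $\sqrt{50}/10 \approx 0.71$, so $\nu$ is effectively pinned near 5 (a sketch assuming SciPy is available):

```python
from scipy import stats

# Stan gamma(50, 10): shape 50, rate 10 -> scale 0.1
tight = stats.gamma(a=50, scale=1 / 10)
print(tight.mean(), tight.std())  # 5.0 and ~0.71
print(tight.interval(0.95))       # central 95% interval, narrow around 5
```

Repeating the fit with a few such priors centred at different $\nu$ values (e.g., 3, 5, 10) is a simple way to carry out the sensitivity check above.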
Known sampling difficulties
The joint sampling of $z$ and $v$ can be slow when $\nu$ is large (near-Gaussian tails). Increasing the number of chains or iterations helps. Persistent divergences near $\tau = 0$ are handled by the non-centred parameterisation (NCP), as in the Gaussian model.
