Model description

Small-sample adjustments correct the pooled estimate’s credible interval when the within-study standard errors s_i are estimated rather than known. Two adjustments are implemented: the Hartung–Knapp–Sidik–Jonkman (HKSJ) multiplicative correction and the t-approximation. See Small-sample adjustments for the statistical rationale.

HKSJ adjustment

Mathematical specification

Likelihood (HKSJ):

y_i \mid \mu, \tau, \phi \sim t_{k-1}\!\left(\mu + u_i,\, \phi \cdot s_i\right)

u_i \sim \mathcal{N}(0,\, \tau^2)

Priors:

\mu \sim \mathcal{N}(0,\, 1), \qquad \tau \sim \text{Half-Cauchy}(0,\, 0.5), \qquad \phi \sim \text{Half-}t_3(0,\, 1)
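
The short R sketch below is not part of bayesma; it simulates one dataset from this generative model, with assumed values for μ, τ, ϕ and the s_i, to make the role of the multiplicative scale ϕ concrete.

set.seed(1)
k   <- 8                                      # number of studies (assumed)
mu  <- 0.3; tau <- 0.2; phi <- 1.4            # assumed true values, illustration only
s   <- runif(k, 0.1, 0.4)                     # within-study standard errors s_i
u   <- rnorm(k, 0, tau)                       # study-level deviations u_i ~ N(0, tau^2)
y   <- mu + u + phi * s * rt(k, df = k - 1)   # t_{k-1} noise scaled by phi * s_i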

Stan code (HKSJ)

data {
  int<lower=1> N;               // number of observed effect sizes
  int<lower=1> K;               // number of studies
  vector[N] y;                  // observed effect sizes
  vector<lower=0>[N] se;        // within-study standard errors s_i
  array[N] int<lower=1> study;  // study index (1..K) for each observation
}

parameters {
  real mu;
  real<lower=0> tau;
  real<lower=0> phi;
  vector[K] z;
}

transformed parameters {
  vector[K] u = tau * z;
}

model {
  // Priors; the lower=0 constraints make the tau and phi priors half-Cauchy and half-t.
  target += normal_lpdf(mu  | 0, 1);
  target += cauchy_lpdf(tau | 0, 0.5);
  target += student_t_lpdf(phi | 3, 0, 1);
  target += std_normal_lpdf(z);

  // t_{K-1} likelihood with the within-study scale inflated by phi.
  for (i in 1:N) {
    target += student_t_lpdf(y[i] | K - 1, mu + u[study[i]], phi * se[i]);
  }
}

generated quantities {
  real b_Intercept = mu;
}
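
For illustration only (bayesma compiles and samples these programs internally), the listing above can also be fitted directly with cmdstanr. The file name hksj.stan, the data values (reusing y and s from the simulation sketch earlier), and the one-estimate-per-study indexing are all assumptions of this sketch.

library(cmdstanr)

mod <- cmdstan_model("hksj.stan")             # the Stan listing above, saved to a file
fit <- mod$sample(
  data = list(N = 8, K = 8,
              y = y, se = s,                  # simulated values from the earlier sketch
              study = 1:8),                   # one estimate per study in this example
  chains = 4, parallel_chains = 4, seed = 1
)
fit$summary(c("b_Intercept", "tau", "phi"))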

t-approximation

Mathematical specification

The t-approximation widens the prior on μ to a t_{k-1} distribution, effectively matching the tail behaviour of the frequentist t-based confidence interval:

\mu \sim t_{k-1}(0,\, \sigma_\mu)

y_i \mid \theta_i \sim \mathcal{N}(\theta_i,\, s_i^2), \quad \theta_i \sim \mathcal{N}(\mu,\, \tau^2)
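
A quick way to see why the heavier tail matters: for small k, the t_{k-1} quantiles that drive the interval width are noticeably larger than the normal quantiles they replace. The R lines below (illustration only, k = 5 assumed) compare the two.

k <- 5
qt(0.975, df = k - 1)   # 2.776: 97.5% quantile of t with k - 1 = 4 degrees of freedom
qnorm(0.975)            # 1.960: 97.5% quantile of the standard normal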

Stan code (t-approximation)

data {
  int<lower=1> N;
  int<lower=1> K;
  vector[N] y;
  vector<lower=0>[N] se;
  array[N] int<lower=1> study;
}

parameters {
  real mu;
  real<lower=0> tau;
  vector[K] z;
}

transformed parameters {
  vector[K] u = tau * z;
}

model {
  // t_{K-1} prior on the pooled effect (scale sigma_mu fixed at 1 here);
  // tau again gets a half-Cauchy prior via its lower=0 constraint.
  target += student_t_lpdf(mu | K - 1, 0, 1);
  target += cauchy_lpdf(tau   | 0, 0.5);
  target += std_normal_lpdf(z);

  // Vectorised Gaussian random-effects likelihood.
  target += normal_lpdf(y | mu + u[study], se);
}

generated quantities {
  real b_Intercept = mu;
}

How bayesma calls these models

bayesma(data, model_type = "random_effect", small_sample_adjustment = "hksj")
bayesma(data, model_type = "random_effect", small_sample_adjustment = "t_approx")

Parameterisation notes

The HKSJ model estimates ϕ as an additional parameter. Values ϕ < 1 indicate less variability than expected from the s_i alone (uncommon); ϕ > 1 indicates overdispersion. The ϕ posterior serves as a diagnostic: if ϕ is concentrated well above 1, the standard errors are systematically underestimated.
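
As a sketch of that diagnostic, assuming fit is the cmdstanr fit from the HKSJ example above (not a bayesma return value), the ϕ posterior can be summarised directly from the draws.

phi_draws <- as.numeric(fit$draws("phi"))   # flatten the posterior draws of phi
mean(phi_draws > 1)                         # posterior probability of overdispersion
quantile(phi_draws, c(0.05, 0.5, 0.95))     # median and central 90% interval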

Known sampling difficulties

The HKSJ model involves a loop over observations with per-observation t log-densities. This is slower than the vectorised Gaussian likelihood but typically converges without difficulty. The t-approximation has no additional computational cost over the standard RE model.