Model description
Small-sample adjustments correct the pooled estimate’s credible interval when the within-study standard errors are estimated rather than known. Two adjustments are implemented: the Hartung–Knapp–Sidik–Jonkman (HKSJ) multiplicative correction and the t-approximation. See Small-sample adjustments for the statistical rationale.
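For orientation, the classical frequentist HKSJ correction that both Bayesian variants emulate can be sketched in a few lines: the pooled standard error is rescaled by a factor derived from the observed dispersion, and the interval uses a t quantile with K − 1 degrees of freedom. All numbers below are hypothetical, and τ² is treated as known purely for brevity.

```python
import numpy as np
from scipy import stats

# Illustrative frequentist HKSJ correction (not bayesma's Bayesian model).
y = np.array([0.30, 0.12, 0.45, 0.21, 0.08])   # hypothetical study effects
se = np.array([0.12, 0.15, 0.10, 0.14, 0.18])  # hypothetical standard errors
tau2 = 0.02                                    # assume tau^2 known, for brevity

w = 1.0 / (se**2 + tau2)                       # inverse-variance weights
mu_hat = np.sum(w * y) / np.sum(w)             # pooled estimate
K = len(y)

# HKSJ rescales the usual inverse-variance SE by sqrt(q).
q = np.sum(w * (y - mu_hat) ** 2) / (K - 1)
se_hksj = np.sqrt(q / np.sum(w))

# Interval uses a t quantile with K - 1 degrees of freedom, not a normal one.
t_crit = stats.t.ppf(0.975, df=K - 1)
ci = (mu_hat - t_crit * se_hksj, mu_hat + t_crit * se_hksj)
print(mu_hat, ci)
```

The two Bayesian adjustments below reproduce the two ingredients of this correction separately: the multiplicative rescaling (via φ in the HKSJ model) and the t-shaped tails (via the t-approximation).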
HKSJ adjustment
Mathematical specification
Likelihood (HKSJ):

$$y_i \sim \mathrm{Student\text{-}t}\!\left(K - 1,\; \mu + u_{\mathrm{study}[i]},\; \phi \,\mathrm{se}_i\right), \qquad u_k = \tau z_k$$

Priors:

$$\mu \sim \mathcal{N}(0, 1), \qquad \tau \sim \mathrm{Half\text{-}Cauchy}(0, 0.5), \qquad \phi \sim \mathrm{Half\text{-}Student\text{-}t}(3, 0, 1), \qquad z_k \sim \mathcal{N}(0, 1)$$
Stan code (HKSJ)
```stan
data {
  int<lower=1> N;                        // number of observations
  int<lower=1> K;                        // number of studies
  vector[N] y;                           // observed effect sizes
  vector<lower=0>[N] se;                 // within-study standard errors
  array[N] int<lower=1, upper=K> study;  // study index per observation
}
parameters {
  real mu;
  real<lower=0> tau;
  real<lower=0> phi;
  vector[K] z;
}
transformed parameters {
  vector[K] u = tau * z;
}
model {
  target += normal_lpdf(mu | 0, 1);
  target += cauchy_lpdf(tau | 0, 0.5);
  target += student_t_lpdf(phi | 3, 0, 1);
  target += std_normal_lpdf(z);
  for (i in 1:N) {
    target += student_t_lpdf(y[i] | K - 1, mu + u[study[i]], phi * se[i]);
  }
}
generated quantities {
  real b_Intercept = mu;
}
```

t-approximation
Mathematical specification
The t-approximation widens the prior on μ to a Student-t distribution with K − 1 degrees of freedom, effectively matching the tail behaviour of the frequentist t-based confidence interval:

$$\mu \sim \mathrm{Student\text{-}t}(K - 1,\, 0,\, 1), \qquad y_i \sim \mathcal{N}\!\left(\mu + u_{\mathrm{study}[i]},\; \mathrm{se}_i\right), \qquad u_k = \tau z_k$$
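The widening comes entirely from the heavier tails of the Student-t distribution. A quick illustration (the values of K are arbitrary) compares its 97.5% quantile against the normal quantile:

```python
from scipy import stats

# For small K, the t quantile with K - 1 degrees of freedom is much larger
# than the normal quantile, so intervals built from it are markedly wider;
# the gap shrinks as K grows.
z975 = stats.norm.ppf(0.975)
for K in (3, 5, 10, 30):
    t975 = stats.t.ppf(0.975, df=K - 1)
    print(K, round(t975, 3), round(t975 / z975, 2))
```

With only a handful of studies the ratio exceeds 2, which is exactly the small-sample regime these adjustments target.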
Stan code (t-approximation)
```stan
data {
  int<lower=1> N;                        // number of observations
  int<lower=1> K;                        // number of studies
  vector[N] y;                           // observed effect sizes
  vector<lower=0>[N] se;                 // within-study standard errors
  array[N] int<lower=1, upper=K> study;  // study index per observation
}
parameters {
  real mu;
  real<lower=0> tau;
  vector[K] z;
}
transformed parameters {
  vector[K] u = tau * z;
}
model {
  target += student_t_lpdf(mu | K - 1, 0, 1);
  target += cauchy_lpdf(tau | 0, 0.5);
  target += std_normal_lpdf(z);
  target += normal_lpdf(y | mu + u[study], se);
}
generated quantities {
  real b_Intercept = mu;
}
```

How bayesma calls these models
Parameterisation notes
The HKSJ model estimates the multiplicative scale φ as an additional parameter. Values φ < 1 indicate less variability than expected from the standard errors alone (uncommon); φ > 1 indicates overdispersion. The posterior of φ serves as a diagnostic: if φ is concentrated well above 1, the standard errors are systematically underestimated.
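That diagnostic reduces to summarising the posterior draws of φ. A minimal sketch, assuming `phi_draws` holds draws of φ extracted from a fitted HKSJ model (synthetic draws stand in for a real fit here):

```python
import numpy as np

# Synthetic stand-in for posterior draws of phi from a fitted HKSJ model.
rng = np.random.default_rng(1)
phi_draws = rng.lognormal(mean=0.4, sigma=0.15, size=4000)  # hypothetical

# Posterior probability of overdispersion and a 95% interval for phi.
p_gt_1 = np.mean(phi_draws > 1)
lo, hi = np.quantile(phi_draws, [0.025, 0.975])
print(f"P(phi > 1) = {p_gt_1:.2f}, 95% interval = ({lo:.2f}, {hi:.2f})")

# An interval sitting well above 1 means the reported standard errors are
# systematically underestimated and the HKSJ correction is doing real work.
```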
Known sampling difficulties
The HKSJ model as written involves a loop over observations with per-observation log-densities. This is slower than the vectorised Gaussian likelihood but typically converges without difficulty. The t-approximation has no additional computational cost over the standard RE model.
