TuringModels

Glimmer

In Statistical Rethinking Edition 1, McElreath notes that the glimmer function in his rethinking R package introduces weakly regularizing priors by default, because flat priors (as effectively used by glm) can lead to nonsense estimates when the outcome and predictor are almost perfectly associated. In model m.good, McElreath uses weakly informative Normal(0, 10) priors to keep the estimates sensible.

Data

# Outcome and predictor almost perfectly associated.
x = repeat([-1], 9); append!(x, repeat([1],11));
y = repeat([0], 10); append!(y, repeat([1],10));
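
These vectors make the near-perfect association concrete: none of the nine x = -1 cases has y = 1, while ten of the eleven x = 1 cases do. A quick check (a small sketch added here, not part of the original script):

# Successes per predictor group: expect (0, 10).
count(y[x .== -1] .== 1), count(y[x .== 1] .== 1)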

Model

using Turing

@model m_good_stan(x, y) = begin
    # Weakly informative priors keep the coefficients from running off to infinity.
    α ~ Normal(0, 10)
    β ~ Normal(0, 10)

    # Linear predictor on the log-odds scale.
    logits = α .+ β * x

    # Bernoulli likelihood parameterised by log-odds (BinomialLogit with n = 1).
    y .~ BinomialLogit.(1, logits)
end;
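
For contrast, McElreath's failing model m.bad relies on flat priors. Below is a minimal illustrative sketch of that idea, not code from the book: the very diffuse Normal(0, 1000) priors and the name m_bad_stan are assumptions standing in for truly flat priors. On these nearly separated data its posterior for α and β drifts off to implausibly large magnitudes.

# Illustrative sketch only: near-flat priors, analogous to McElreath's m.bad.
@model m_bad_stan(x, y) = begin
    α ~ Normal(0, 1000)
    β ~ Normal(0, 1000)

    logits = α .+ β * x

    y .~ BinomialLogit.(1, logits)
end;

chns_bad = sample(m_bad_stan(x, y), NUTS(), 1000)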

Output

chns = sample(m_good_stan(x, y), NUTS(), 1000)
Chains MCMC chain (1000×14×1 Array{Float64, 3}):

Iterations        = 501:1:1500
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 5.01 seconds
Compute duration  = 5.01 seconds
parameters        = α, β
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std   naive_se      mcse       ess      rhat   ess_per_sec
      Symbol   Float64   Float64    Float64   Float64   Float64   Float64       Float64

           α   -5.8609    4.5808     0.1449    0.4745   97.1370    1.0024       19.3732
           β    8.6514    4.5616     0.1442    0.4735   95.7486    1.0021       19.0962

Quantiles
  parameters       2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol    Float64   Float64   Float64   Float64   Float64

           α   -17.2687   -8.2588   -5.1852   -2.3055    0.3822
           β     2.5875    5.1591    7.7746   10.9812   19.9557

using StatsPlots

StatsPlots.plot(chns)
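
To compare with McElreath's precis output below, the matching 5.5% and 94.5% bounds can be computed from the chain. This is a small added sketch using the Statistics standard library, not part of the original script.

using Statistics

# 89% interval bounds (5.5% and 94.5%), as reported by precis in the rethinking package.
quantile(vec(chns[:α]), [0.055, 0.945])
quantile(vec(chns[:β]), [0.055, 0.945])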

Original output

m_10_x,_results = "
    mean   sd   5.5% 94.5% n_eff Rhat
 a -5.09 4.08 -12.62 -0.25   100 1.00
 b  7.86 4.09   2.96 15.75   104 1.01
";