In Statistical Rethinking Edition 1, McElreath shows that the glimmer function from his R package rethinking introduces flat priors by default. This can sometimes lead to nonsense estimates. To fix this, McElreath uses weakly informative priors in model m.good.
# Outcome and predictor almost perfectly associated.
x = vcat(fill(-1, 9), fill(1, 11));
y = vcat(fill(0, 10), fill(1, 10));
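To see why flat priors are risky here, note that the x = -1 group contains no successes at all, so without a prior the likelihood keeps improving as α - β runs off to negative infinity. A minimal sketch (the loglik helper below is illustrative, not part of the model):

```julia
logistic(z) = 1 / (1 + exp(-z))

x = vcat(fill(-1, 9), fill(1, 11))
y = vcat(fill(0, 10), fill(1, 10))

# Bernoulli log-likelihood on the logit scale.
loglik(α, β) = sum(@. y * log(logistic(α + β * x)) + (1 - y) * log(1 - logistic(α + β * x)))

# Hold α + β (the logit for the x = 1 group) fixed and push α - β further negative:
# the likelihood keeps increasing, so the unregularized estimates never settle.
loglik(-5.0, 7.5) < loglik(-15.0, 17.5)  # true
```

This is exactly the pathology the Normal(0, 10) priors below are meant to tame.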
using Turing
@model function m_good_stan(x, y)
    α ~ Normal(0, 10)  # weakly informative priors
    β ~ Normal(0, 10)
    logits = α .+ β * x
    y .~ BinomialLogit.(1, logits)  # Bernoulli likelihood on the logit scale
end;
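For reference, BinomialLogit(n, logitp) is Turing's logit-parameterized binomial, so with n = 1 it matches a Bernoulli whose success probability is the inverse logit of logits. A quick check, assuming logpdf is in scope after using Turing (it re-exports Distributions):

```julia
using Turing

# BinomialLogit(1, ℓ) should assign P(y = 1) = logistic(ℓ),
# i.e. the same Bernoulli probability expressed on the logit scale.
p = 1 / (1 + exp(-2.0))                    # logistic(2.0)
logpdf(BinomialLogit(1, 2.0), 1) ≈ log(p)  # true
```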
chns = sample(m_good_stan(x, y), NUTS(), 1000)
Chains MCMC chain (1000×14×1 Array{Float64, 3}):
Iterations = 501:1:1500
Number of chains = 1
Samples per chain = 1000
Wall duration = 5.83 seconds
Compute duration = 5.83 seconds
parameters = α, β
internals = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size
Summary Statistics
parameters mean std naive_se mcse ess rhat ess_per_sec
Symbol Float64 Float64 Float64 Float64 Float64 Float64 Float64
α -4.8741 3.7527 0.1187 0.3227 106.1463 0.9990 18.2132
β 7.6160 3.7005 0.1170 0.3253 105.4194 0.9991 18.0884
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
α -13.3681 -7.0493 -4.3679 -1.9088 0.5320
β 2.3367 4.6830 7.1109 9.8411 15.6855
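Plugging the posterior means from the summary above (α ≈ -4.87, β ≈ 7.62) through the inverse logit shows the priors keep the estimates finite while still tracking the data closely:

```julia
logistic(z) = 1 / (1 + exp(-z))

α̂, β̂ = -4.8741, 7.6160    # posterior means copied from the summary above
p_neg = logistic(α̂ - β̂)   # P(y = 1 | x = -1)
p_pos = logistic(α̂ + β̂)   # P(y = 1 | x = +1)
# p_neg is essentially 0 and p_pos ≈ 0.94, close to the observed 0/9 and 10/11.
```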
using StatsPlots
StatsPlots.plot(chns)
[Trace and density plots of the chains for α and β]
m_10_x_results = "
mean sd 5.5% 94.5% n_eff Rhat
a -5.09 4.08 -12.62 -0.25 100 1.00
b 7.86 4.09 2.96 15.75 104 1.01
";