This model shows what happens when you use extremely flat priors; the problem is fixed in the Weakly Informative Priors section.
y = [-1, 1]
2-element Vector{Int64}:
-1
1
import Random
using Turing
Random.seed!(1)
@model function m8_2(y)
    α ~ Flat() ## improper flat prior: constant density everywhere
    σ ~ FlatPos(0.0) ## improper flat prior: constant density everywhere above 0.0
    y ~ Normal(α, σ) ## the whole vector y is observed under Normal(α, σ)
end;
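Flat and FlatPos are improper: their log-density is zero on their entire support, so they contribute no information and do not integrate to one. A minimal check of that behavior (assuming Turing's exported Flat and FlatPos distributions):

logpdf(Flat(), 0.0)        ## 0.0
logpdf(Flat(), 1e6)        ## 0.0: same density arbitrarily far from zero
logpdf(FlatPos(0.0), 2.0)  ## 0.0
logpdf(FlatPos(0.0), -1.0) ## -Inf: below the lower bound of 0.0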
chns = sample(m8_2(y), NUTS(), 1000)
Chains MCMC chain (1000×14×1 Array{Float64, 3}):
Iterations = 501:1:1500
Number of chains = 1
Samples per chain = 1000
Wall duration = 6.38 seconds
Compute duration = 6.38 seconds
parameters = α, σ
internals = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size
Summary Statistics
  parameters           mean            std       naive_se          mcse        ess     rhat   ess_per_sec
      Symbol        Float64        Float64        Float64       Float64    Float64  Float64       Float64

           α    275292.2910    237764.2721      7518.7665    42230.8900     2.7848   1.7943        0.4363
           σ   1687238.3100   8935827.6084    282575.6802   408050.6342   136.2190   1.0172       21.3442

Quantiles
  parameters         2.5%         25.0%         50.0%          75.0%           97.5%
      Symbol      Float64       Float64       Float64        Float64         Float64

           α   -1211.5846     6502.3064   311091.2879    478292.6367     696694.7196
           σ      29.9480    21584.9752   429059.9961   1164936.2456   10441460.6944
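The summary itself flags the failure: rhat for α is about 1.79, far above the usual 1.01 threshold, and its effective sample size is below 3, so the enormous posterior means are artifacts of a wandering chain rather than evidence. A minimal programmatic check (a sketch; it assumes MCMCChains' summarystats and the Tables-based DataFrame conversion):

using MCMCChains, DataFrames
df = DataFrame(summarystats(chns))
any(df.rhat .> 1.01) ## true here: the sampler did not converge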
using StatsPlots
StatsPlots.plot(chns)
"/home/runner/work/TuringModels.jl/TuringModels.jl/__site/assets/models/wild-chain/code/output/chns.svg"