TuringModels

Wild chain

This model shows what happens when you use extremely flat priors: the sampler produces a wildly wandering chain. The problem is fixed in Weakly Informative Priors.

  1. Data
  2. Model
  3. Output

Data

y = [-1, 1]
2-element Vector{Int64}:
 -1
  1

Model

import Random

using Turing

Random.seed!(1)

@model function m8_2(y)
    α ~ Flat() ## improper flat prior: constant density everywhere
    σ ~ FlatPos(0.0) ## improper flat prior: constant density everywhere above 0.0

    y ~ Normal(α, σ)
end;
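Why does this model misbehave? With flat priors and only two observations, the posterior is improper. This short derivation is not part of the original page, but it shows the mechanism: the unnormalized posterior for y = [-1, 1] is

$$
p(\alpha, \sigma \mid y) \propto \sigma^{-2} \exp\!\left(-\frac{(-1-\alpha)^2 + (1-\alpha)^2}{2\sigma^2}\right) = \sigma^{-2} \exp\!\left(-\frac{\alpha^2 + 1}{\sigma^2}\right),
$$

and integrating out α (using $\int \exp(-\alpha^2/\sigma^2)\,d\alpha = \sigma\sqrt{\pi}$) leaves

$$
p(\sigma \mid y) \propto \sigma^{-1} e^{-1/\sigma^2}.
$$

The $\sigma^{-1}$ tail is not integrable, so there is no proper posterior distribution for the sampler to converge to, and the chain drifts to arbitrarily large values.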

Output

chns = sample(m8_2(y), NUTS(), 1000)
Chains MCMC chain (1000×14×1 Array{Float64, 3}):

Iterations        = 501:1:1500
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 4.08 seconds
Compute duration  = 4.08 seconds
parameters        = α, σ
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters          mean           std     naive_se         mcse        ess      rhat   ess_per_sec
      Symbol       Float64       Float64      Float64      Float64    Float64   Float64       Float64

           α    11717.0719    39188.5953    1239.2522    6909.9736     3.1889    1.5814        0.7814
           σ   138216.7677   555367.7177   17562.2693   33472.1282   130.9402    1.0120       32.0853

Quantiles
  parameters          2.5%       25.0%        50.0%         75.0%         97.5%
      Symbol       Float64     Float64      Float64       Float64       Float64

           α   -65080.4273   -540.8044    3014.9895    25808.9647    89647.0598
           σ      343.6642   4001.1561   29725.0547   111472.9133   937315.6821
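The huge means, the tiny effective sample size for α, and the rhat of 1.58 all signal that the chain never converged. A quick numerical sketch of why: the marginal density below comes from integrating α out of the flat-prior posterior by hand (my own derivation, not shown on this page), and its tail refuses to decay fast enough to be normalizable.

```julia
# Unnormalized marginal posterior of σ for y = [-1, 1] under the
# flat priors, after integrating out α (derivation assumed):
#   p(σ | y) ∝ exp(-1/σ^2) / σ
marginal(σ) = exp(-1 / σ^2) / σ

# σ * p(σ) approaches 1 as σ grows, so the tail of p(σ) behaves
# like 1/σ and ∫ p(σ) dσ diverges logarithmically: the posterior
# is improper, and no amount of sampling can converge.
for σ in (1e1, 1e3, 1e5)
    println("σ = ", σ, ":  σ * p(σ) = ", σ * marginal(σ))
end
```

Because the tail mass is unbounded, the sampler keeps finding ever-larger values of σ (and, with σ huge, of α), which is exactly the wild behavior visible in the summary statistics above.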

using StatsPlots

StatsPlots.plot(chns)