clip_08s

Load the Julia packages (libraries) needed

using StatisticalRethinking
gr(size=(500,800))

CmdStan stores the output of a run in a tmp subdirectory of the project directory

ProjDir = rel_path("..", "chapters", "02")
cd(ProjDir)

Define the Stan language model

binomialstanmodel = "
// Inferring a Rate
data {
  int N;
  int<lower=0> k[N];
  int<lower=1> n[N];
}
parameters {
  real<lower=0,upper=1> theta;
  real<lower=0,upper=1> thetaprior;
}
model {
  // Prior Distribution for Rate Theta
  theta ~ beta(1, 1);
  thetaprior ~ beta(1, 1);

  // Observed Counts
  k ~ binomial(n, theta);
}
"
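Because `thetaprior` appears only in the prior (no likelihood term touches it), its draws sample the prior directly. Beta(1, 1) is the flat prior on [0, 1], which a quick check in Julia confirms:

```julia
using Distributions

# Beta(1, 1) places constant density 1 on [0, 1], i.e. it equals Uniform(0, 1).
prior = Beta(1, 1)
pdf(prior, 0.25)         # 1.0
pdf(prior, 0.75)         # 1.0
mean(prior), std(prior)  # (0.5, ~0.289), matching thetaprior's posterior summary
```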

Define the Stanmodel and set the output format to :mcmcchain.

stanmodel = Stanmodel(name="binomial", monitors = ["theta"], model=binomialstanmodel,
  output_format=:mcmcchain)

Use 16 observations

N2 = 4^2
d = Binomial(9, 0.66)
n2 = fill(9, N2)                 # 9 trials for each of the N2 observations
k2 = rand(d, N2)
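The counts in `k2` are random, so each run of this script produces different data (and slightly different posterior summaries). A minimal sketch of making the simulation reproducible, assuming the seed value is an arbitrary choice:

```julia
using Distributions, Random

Random.seed!(2018)        # arbitrary seed; any fixed value works
d = Binomial(9, 0.66)     # 9 trials per observation, true rate 0.66
k2 = rand(d, 16)          # reproducible counts (they differ from those shown below)
```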

16-element Array{Int64,1}:
 7
 7
 6
 6
 5
 4
 2
 4
 3
 5
 6
 6
 4
 6
 5
 4

Input data for cmdstan

binomialdata = [
  Dict("N" => length(n2), "n" => n2, "k" => k2)
]
1-element Array{Dict{String,Any},1}:
 Dict("N"=>16,"k"=>[7, 7, 6, 6, 5, 4, 2, 4, 3, 5, 6, 6, 4, 6, 5, 4],"n"=>[9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9])

Sample using cmdstan

rc, chn, cnames = stan(stanmodel, binomialdata, ProjDir, diagnostics=false,
  CmdStanDir=CMDSTAN_HOME)

Describe the draws

describe(chn)



Length of data array is not equal to nchains,
all chains will use the first data dictionary.

Calling /home/travis/cmdstan/bin/stansummary to infer across chains.

Inference for Stan model: binomial_model
4 chains: each with iter=(1000,1000,1000,1000); warmup=(0,0,0,0); thin=(1,1,1,1); 4000 iterations saved.

Warmup took (0.026, 0.026, 0.026, 0.025) seconds, 0.10 seconds total
Sampling took (0.038, 0.033, 0.034, 0.031) seconds, 0.14 seconds total

                Mean     MCSE  StdDev     5%   50%   95%    N_Eff  N_Eff/s    R_hat
lp__            -103  2.7e-02     1.1   -105  -102  -102  1.7e+03  1.2e+04  1.0e+00
accept_stat__   0.91  1.7e-03    0.12   0.65  0.96   1.0  5.0e+03  3.7e+04  1.0e+00
stepsize__      0.82  4.5e-02   0.064   0.74  0.83  0.91  2.0e+00  1.5e+01  5.0e+13
treedepth__      1.9  3.1e-02    0.62    1.0   2.0   3.0  3.9e+02  2.9e+03  1.0e+00
n_leapfrog__     4.3  1.0e-01     3.9    1.0   3.0    11  1.5e+03  1.1e+04  1.0e+00
divergent__     0.00     -nan    0.00   0.00  0.00  0.00     -nan     -nan     -nan
energy__         104  3.8e-02     1.5    102   103   107  1.5e+03  1.1e+04  1.0e+00
theta           0.56  7.2e-04   0.041   0.49  0.56  0.62  3.3e+03  2.4e+04  1.0e+00
thetaprior      0.50  4.8e-03    0.29  0.055  0.49  0.95  3.7e+03  2.7e+04  1.0e+00

Samples were drawn using hmc with nuts.
For each parameter, N_Eff is a crude measure of effective sample size,
and R_hat is the potential scale reduction factor on split chains (at
convergence, R_hat=1).

Iterations = 1:1000
Thinning interval = 1
Chains = 1,2,3,4
Samples per chain = 1000

Empirical Posterior Estimates:
         Mean        SD        Naive SE        MCSE      ESS
theta 0.5553603 0.040852594 0.00064593623 0.00073410828 1000

Quantiles:
         2.5%      25.0%    50.0%     75.0%     97.5%
theta 0.47368623 0.528056 0.5557385 0.5822565 0.6355348
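With a Beta(1, 1) prior the binomial likelihood is conjugate, so the posterior for theta is available in closed form as Beta(1 + Σk, 1 + Σ(n − k)). A sketch of this analytic check, using the simulated counts shown above (80 successes in 144 trials):

```julia
using Distributions

k_total = 80                     # sum of the k2 counts listed above
n_total = 16 * 9                 # 16 observations of 9 trials each
post = Beta(1 + k_total, 1 + n_total - k_total)

mean(post)   # ~0.555, close to the MCMC mean 0.5554
std(post)    # ~0.041, matching the sampled StdDev
```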

Plot the 4 chains

if rc == 0
  p = Vector{Plots.Plot{Plots.GRBackend}}(undef, 4)
  x = 0:0.001:1
  for i in 1:4
    vals = convert.(Float64, chn.value[:, 1, i])
    @show res = fit_mle(Normal, vals)
    μ = round(res.μ, digits=2)
    σ = round(res.σ, digits=2)
    p[i] = density(vals, lab="Chain $i density", title="$(N2) data points")
    plot!(p[i], x, pdf.(Normal(res.μ, res.σ), x), lab="Fitted Normal($μ, $σ)")
  end
  plot(p..., layout=(4, 1))
end

res = fit_mle(Normal, vals) = Normal{Float64}(μ=0.5556957300000003, σ=0.04102067289442115)
res = fit_mle(Normal, vals) = Normal{Float64}(μ=0.554291879000001, σ=0.04067651414738435)
res = fit_mle(Normal, vals) = Normal{Float64}(μ=0.5559319799999994, σ=0.040688547040458455)
res = fit_mle(Normal, vals) = Normal{Float64}(μ=0.5555215770000009, σ=0.04098328623202479)

This page was generated using Literate.jl.