Load Julia packages (libraries) needed
using StatisticalRethinking
gr(size=(500,800))

CmdStan uses a tmp directory to store the output of cmdstan
ProjDir = rel_path("..", "chapters", "03")
cd(ProjDir)

Define the Stan language model
binomialstanmodel = "
// Inferring a Rate
data {
int N;
int<lower=0> k[N];
int<lower=1> n[N];
}
parameters {
real<lower=0,upper=1> theta;
real<lower=0,upper=1> thetaprior;
}
model {
// Prior Distribution for Rate Theta
theta ~ beta(1, 1);
thetaprior ~ beta(1, 1);
// Observed Counts
k ~ binomial(n, theta);
}
""\n// Inferring a Rate\ndata {\n int N;\n int<lower=0> k[N];\n int<lower=1> n[N];\n}\nparameters {\n real<lower=0,upper=1> theta;\n real<lower=0,upper=1> thetaprior;\n}\nmodel {\n // Prior Distribution for Rate Theta\n theta ~ beta(1, 1);\n thetaprior ~ beta(1, 1);\n\n // Observed Counts\n k ~ binomial(n, theta);\n}\n"Define the Stanmodel and set the output format to :mcmcchain.
Define the Stanmodel and set the output format to :mcmcchain. With monitors = ["theta"], only theta is read back into the returned chain; thetaprior still appears in the stansummary output below, but not in the describe(chn) summary.

stanmodel = Stanmodel(name="binomial", monitors = ["theta"], model=binomialstanmodel,
  output_format=:mcmcchain)

Use 4 observations
N2 = 4
n2 = Int.(9 * ones(Int, N2))
k2 = [6, 5, 7, 6]
Input data for cmdstan
binomialdata = [
  Dict("N" => length(n2), "n" => n2, "k" => k2)
]
Sample using cmdstan

rc, chn, cnames = stan(stanmodel, binomialdata, ProjDir, diagnostics=false,
  CmdStanDir=CMDSTAN_HOME)
Describe the draws

describe(chn)
Inference for Stan model: binomial_model
4 chains: each with iter=(1000,1000,1000,1000); warmup=(0,0,0,0); thin=(1,1,1,1); 4000 iterations saved.
Warmup took (0.020, 0.020, 0.021, 0.020) seconds, 0.082 seconds total
Sampling took (0.030, 0.028, 0.028, 0.030) seconds, 0.12 seconds total
                 Mean     MCSE  StdDev     5%    50%    95%    N_Eff  N_Eff/s    R_hat
lp__              -27  2.6e-02     1.1    -29    -27    -26  1.7e+03  1.5e+04  1.0e+00
accept_stat__    0.89  2.1e-03    0.14   0.59   0.95    1.0  4.5e+03  3.8e+04  1.0e+00
stepsize__       0.84  2.1e-02   0.030   0.79   0.85   0.88  2.0e+00  1.7e+01  2.2e+13
treedepth__       1.8  8.9e-03    0.53    1.0    2.0    3.0  3.6e+03  3.1e+04  1.0e+00
n_leapfrog__      3.8  5.8e-02     2.9    1.0    3.0    7.0  2.6e+03  2.2e+04  1.0e+00
divergent__      0.00     -nan    0.00   0.00   0.00   0.00     -nan     -nan     -nan
energy__           28  3.7e-02     1.5     26     27     31  1.6e+03  1.4e+04  1.0e+00
theta            0.66  1.2e-03   0.074   0.53   0.66   0.77  3.5e+03  3.0e+04  1.0e+00
thetaprior       0.51  4.9e-03    0.28  0.059   0.52   0.95  3.3e+03  2.8e+04  1.0e+00
Samples were drawn using hmc with nuts.
For each parameter, N_Eff is a crude measure of effective sample size,
and R_hat is the potential scale reduction factor on split chains (at
convergence, R_hat=1).
Iterations = 1:1000
Thinning interval = 1
Chains = 1,2,3,4
Samples per chain = 1000

Empirical Posterior Estimates:
         Mean        SD        Naive SE       MCSE      ESS
theta 0.6558387  0.07391723  0.001168734  0.0013688586  1000

Quantiles:
         2.5%       25.0%      50.0%      75.0%     97.5%
theta 0.50513505  0.60804325  0.6587485  0.7070755  0.794305
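These estimates can be cross-checked analytically: with a beta(1, 1) prior and a binomial likelihood the posterior is conjugate, Beta(1 + Σk, 1 + Σ(n − k)) = Beta(25, 13) for these data. A minimal sketch of the comparison, assuming Distributions.jl is available:

using Distributions

post = Beta(1 + sum(k2), 1 + sum(n2 .- k2))  # Beta(25, 13)
mean(post)                                   # ≈ 0.658, close to the 0.6558 above
quantile.(post, [0.025, 0.975])              # ≈ [0.50, 0.80], close to the quantiles above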
Look at the area of the HPD interval

MCMCChain.hpd(chn)

        95% Lower  95% Upper
theta    0.520048    0.80697

Plot the 4 chains
if rc == 0
  # Density plot of the draws in the 4 chains
  mixeddensity(chn)
  # HPD bounds computed from the first chain, shown as dashed vertical lines
  bnds = MCMCChain.hpd(convert(Vector{Float64}, chn.value[:,1,1]))
  vline!([bnds[1]], line=:dash)
  vline!([bnds[2]], line=:dash)
end
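To see how close the sampled density is to the exact answer, one could overlay the conjugate posterior from the sketch above on the same figure; a minimal sketch, assuming Plots.jl and Distributions.jl are available and the density plot above is the active figure:

using Distributions

post = Beta(1 + sum(k2), 1 + sum(n2 .- k2))  # Beta(25, 13), as derived above
x = range(0, stop=1, length=200)
plot!(x, pdf.(post, x), line=:dot, label="analytic posterior")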
End of clip0616s.jl
This page was generated using Literate.jl.