Moment Optimization Algorithms
SMM.MAlgo — Type
This abstract type nests all MProb algorithms, for example SMM.MAlgoBGP.

SMM.readMalgo — Method
readMalgo(filename::AbstractString)
Load an MAlgo from disk.

SMM.run! — Method
run!(algo::MAlgo)
Start estimation of an MAlgo.

SMM.save — Method
save(algo::MAlgo, filename::AbstractString)
Save an MAlgo to disk using JLD2.
A particular implementation of such an algorithm is the BGP
algorithm:
BGP Algorithm
SMM.MAlgoBGP — Type
MAlgoBGP: BGP MCMC Algorithm
This implements the BGP MCMC algorithm from Likelihood-Free Parallel Tempering by Baragatti, Grimaud and Pommeret (BGP):
Approximate Bayesian Computational (ABC) methods (or likelihood-free methods) have appeared in the past fifteen years as useful methods to perform Bayesian analyses when the likelihood is analytically or computationally intractable. Several ABC methods have been proposed: Markov chain Monte Carlo (MCMC) methods have been developed by Marjoram et al. (2003) and by Bortot et al. (2007) for instance, and sequential methods have been proposed among others by Sisson et al. (2007), Beaumont et al. (2009) and Del Moral et al. (2009). Until now, while ABC-MCMC methods remain the reference, sequential ABC methods have appeared to outperform them (see for example McKinley et al. (2009) or Sisson et al. (2007)). In this paper a new algorithm combining population-based MCMC methods with ABC requirements is proposed, using an analogy with the Parallel Tempering algorithm (Geyer, 1991). Performances are compared with existing ABC algorithms on simulations and on a real example.
SMM.CI — Method
CI(c::BGPChain; level=0.95)
Confidence interval on parameters.
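As an illustration of the idea behind a chain-based confidence interval, the sketch below computes a percentile interval from a vector of accepted parameter draws. This is a minimal sketch, not the package implementation; `percentile_ci` and `draws` are hypothetical names.

```julia
using Statistics

# Hypothetical helper (not SMM.jl code): percentile confidence
# interval at the given level from a vector of parameter draws.
function percentile_ci(draws::Vector{Float64}; level::Float64=0.95)
    a = (1 - level) / 2
    (quantile(draws, a), quantile(draws, 1 - a))
end

draws = collect(0.0:0.01:1.0)   # 101 evenly spaced stand-in "draws"
lo, hi = percentile_ci(draws; level=0.95)
```

For uniformly spaced draws on [0, 1] this returns approximately (0.025, 0.975), i.e. the central 95% of the stored values.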
SMM.extendBGPChain! — Method
extendBGPChain!(chain::BGPChain, algo::MAlgoBGP, extraIter::Int64)
Starting from an existing MAlgoBGP, allow for additional iterations by extending a specific chain. This function is used to restart a previous estimation run via restart!.

SMM.restart! — Method
restart!(algo::MAlgoBGP, extraIter::Int64)
Starting from an existing MAlgoBGP, restart the optimization from where it stopped, adding extraIter additional steps to the optimization process.
SMM.summary — Method
summary(c::BGPChain)
Returns a summary of the chain (condensed history).

Statistics.mean — Method
mean(c::BGPChain)
Returns the mean of all parameter values stored on the chain.

Statistics.median — Method
median(c::BGPChain)
Returns the median of all parameter values stored on the chain.
SMM.BGPChain — Type
BGPChain
MCMC chain storage for the BGP algorithm. This is the main datatype for the implementation of Baragatti, Grimaud and Pommeret (BGP), Likelihood-Free Parallel Tempering.
Fields
- evals: Array of Evals
- best_id: index of best eval.value so far
- best_val: best eval.value so far
- curr_val: current value
- probs_acc: vector of probabilities with which to accept the current value
- id: chain identifier
- iter: current iteration
- accepted: Array{Bool} of length(evals)
- accept_rate: current acceptance rate
- acc_tuner: acceptance tuner. acc_tuner > 1 means to be more restrictive: the higher acc_tuner is, the less likely params that yield a worse function value are to get accepted.
- exchanged: Array{Int} of length(evals) with the index of the chain that was exchanged with
- m: MProb
- sigma: Float64, shock variance
- sigma_update_steps: update sampling variances every sigma_update_steps iterations. Setting sigma_update_steps > maxiter means the variances are never updated.
- sigma_adjust_by: adjust sampling variances by sigma_adjust_by percent up or down
- smpl_iters: max number of trials to get a new parameter from MvNormal that lies within the support
- min_improve: minimally required improvement in chain j over chain i for an exchange move j -> i to take place
- batches: in the proposal function, update the parameter vector in batches [default: update the entire parameter vector]
SMM.allAccepted — Method
allAccepted(c::BGPChain)
Get all accepted Evals from a chain.

SMM.best — Method
best(c::BGPChain) -> (val, idx)
Returns the smallest value stored on the chain and its index.
SMM.computeNextIteration! — Method
computeNextIteration!(algo::MAlgoBGP)
Computes a new candidate vector for each BGPChain and accepts or rejects that vector on each chain according to an acceptance rule. The evaluation of the objective function is performed in parallel, if so desired.
On each chain c:
- computes a new parameter vector
- applies a criterion to accept/reject the new params
- stores the result on the BGPChain
- calls exchangeMoves! to swap chains
SMM.doAcceptReject! — Method
doAcceptReject!(c::BGPChain, eval_new::Eval)
Perform a Metropolis-Hastings accept-reject operation on the latest Eval and update the sampling variance, if so desired (set via sigma_update_steps in the BGPChain constructor).
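The logic of a value-based Metropolis-Hastings step can be sketched as follows. This is an illustrative stand-in, not the package code: for a minimization problem, a strictly better objective value is always accepted, while a worse one is accepted with a probability that shrinks as the gap grows; the `acc_tuner`-style factor makes accepting worse values less likely when it exceeds 1. The rule shown is an assumption for illustration.

```julia
using Random

# Hypothetical accept/reject rule (sketch, not SMM.jl internals):
# accept with probability min(1, exp(acc_tuner * (v_old - v_new))),
# so lower (better) values are always accepted.
function accept_reject(v_old::Float64, v_new::Float64;
                       acc_tuner::Float64=1.0, rng=Random.GLOBAL_RNG)
    p = min(1.0, exp(acc_tuner * (v_old - v_new)))
    rand(rng) < p
end

accept_reject(1.0, 0.5)   # a strictly better value is always accepted
```

A larger `acc_tuner` steepens the exponential, making the chain more conservative about accepting worse function values.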
SMM.exchangeMoves! — Method
exchangeMoves!(algo::MAlgoBGP)
Exchange chains i and j if dist_fun(evi.value, evj.value) is greater than the threshold value c.min_improve. Commonly, this means that we only exchange if chain j is better than chain i by at least c.min_improve.
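The exchange criterion can be illustrated with a toy version operating on a vector of current chain values (all names here are hypothetical, not the package API):

```julia
# Sketch of the exchange criterion: swap the current values of
# chains i and j when chain j improves on chain i by at least
# `min_improve` (minimization: smaller is better).
function maybe_exchange!(vals::Vector{Float64}, i::Int, j::Int,
                         min_improve::Float64)
    if vals[i] - vals[j] >= min_improve
        vals[i], vals[j] = vals[j], vals[i]
        return true
    end
    return false
end

vals = [1.0, 0.2]
maybe_exchange!(vals, 1, 2, 0.5)   # 1.0 - 0.2 = 0.8 >= 0.5, so swap
```

With a gap of 0.8 and threshold 0.5 the swap fires; with values 1.0 and 0.9 it would not.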
SMM.history — Method
history(c::BGPChain)
Returns a DataFrame with the history of the chain.
SMM.mysample — Method
mysample(d::Distributions.MultivariateDistribution, lb::Vector{Float64}, ub::Vector{Float64}, iters::Int)
Sample from distribution d until all points are in the support. This is a crude version of a truncated distribution: it just samples until all draws are within the admissible domain.
SMM.next_eval — Method
next_eval(c::BGPChain)
Computes the next Eval for chain c:
- get the last accepted param
- get a new param via proposal
- evaluate the objective via evaluateObjective
- accept or reject the new value via doAcceptReject!
- store the Eval on chain c
SMM.proposal — Method
proposal(c::BGPChain)
Gaussian transition kernel centered on the current parameter value:
- Map all $k$ parameters into $\mu \in [0,1]^k$.
- Update all parameters by sampling from MvNormal, $N(\mu,\sigma)$, where $\sigma$ is c.sigma, until all params are in $[0,1]^k$.
- Map $[0,1]^k$ back to the original parameter spaces.
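The three steps above can be sketched with an independent Gaussian step in the unit cube (a minimal sketch under the assumption of a diagonal covariance; `unit_cube_proposal` and its arguments are hypothetical names, not the package API):

```julia
using Random

# Sketch of the kernel idea: map params into [0,1]^k via their
# bounds, take a Gaussian step of scale sigma, redraw until the
# step stays inside the unit cube, then map back.
function unit_cube_proposal(p::Vector{Float64}, lb::Vector{Float64},
                            ub::Vector{Float64}, sigma::Float64;
                            rng=Random.GLOBAL_RNG, iters::Int=1000)
    mu = (p .- lb) ./ (ub .- lb)            # into [0,1]^k
    for _ in 1:iters
        z = mu .+ sigma .* randn(rng, length(p))
        if all(0.0 .<= z .<= 1.0)
            return lb .+ z .* (ub .- lb)    # back to original space
        end
    end
    error("no admissible proposal after $iters attempts")
end

rng = MersenneTwister(1)
q = unit_cube_proposal([0.5, 2.0], [0.0, 0.0], [1.0, 4.0], 0.05; rng=rng)
```

Working in the unit cube lets one sampling scale `sigma` act comparably on parameters with very different natural ranges.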
SMM.set_acceptRate! — Method
Set the acceptance rate on a chain, considering only iterations where no exchange happened.

SMM.swap_ev_ij! — Method
Replace the current Eval of chain $i$ with that of chain $j$.