Moment Optimization Algorithms

SMM.saveMethod
save(algo::MAlgo, filename::AbstractString)

Save MAlgo to disk using JLD2

source

A particular implementation of such an algorithm is the BGP algorithm:

BGP Algorithm

SMM.MAlgoBGPType

MAlgoBGP: BGP MCMC Algorithm

This implements the BGP MCMC Algorithm Likelihood-Free Parallel Tempering by Baragatti, Grimaud and Pommeret (BGP):

Approximate Bayesian Computational (ABC) methods (or likelihood-free methods) have appeared in the past fifteen years as useful methods to perform Bayesian analyses when the likelihood is analytically or computationally intractable. Several ABC methods have been proposed: Monte Carlo Markov chains (MCMC) methods have been developed by Marjoram et al. (2003) and by Bortot et al. (2007) for instance, and sequential methods have been proposed among others by Sisson et al. (2007), Beaumont et al. (2009) and Del Moral et al. (2009). Until now, while ABC-MCMC methods remain the reference, sequential ABC methods have appeared to outperform them (see for example McKinley et al. (2009) or Sisson et al. (2007)). In this paper a new algorithm combining population-based MCMC methods with ABC requirements is proposed, using an analogy with the Parallel Tempering algorithm (Geyer, 1991). Performances are compared with existing ABC algorithms on simulations and on a real example.

Fields

  • m: MProb
  • opts: a Dict of options
  • i: current iteration
  • chains: An array of BGPChain
  • anim: Plots.Animation
  • dist_fun: function to measure distance between one evaluation and the next.
source
SMM.CIMethod
CI(c::BGPChain;level=0.95)

Confidence interval on parameters

source
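As an illustration of what a quantile-based confidence interval over a chain's stored parameter draws could look like, here is a stdlib-only sketch. The function name `ci` and the plain `draws` vector are stand-ins for one parameter's chain history, not the package API:

```julia
using Statistics  # stdlib: quantile

# Hypothetical sketch of a level-0.95 interval as CI(c; level=0.95) might
# compute it: symmetric quantiles of the stored parameter values.
function ci(draws::Vector{Float64}; level::Float64 = 0.95)
    alpha = 1 - level
    (quantile(draws, alpha / 2), quantile(draws, 1 - alpha / 2))
end
```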
SMM.extendBGPChain!Method

extendBGPChain!(chain::BGPChain, algo::MAlgoBGP, extraIter::Int64)

Starting from an existing MAlgoBGP, allow for additional iterations by extending a specific chain. This function is used to restart a previous estimation run via restart!

source
SMM.restart!Method

restart!(algo::MAlgoBGP, extraIter::Int64)

Starting from an existing MAlgoBGP, restart the optimization from where it stopped, adding extraIter additional steps to the optimization process.

source
Statistics.meanMethod
mean(c::BGPChain)

Returns the mean of all parameter values stored on the chain.

source
Statistics.medianMethod
median(c::BGPChain)

Returns the median of all parameter values stored on the chain.

source
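The chain's mean and median act over all stored parameter vectors. On a plain matrix (rows = iterations, columns = parameters, purely illustrative stand-ins for the chain storage) the analogue is a column-wise reduction:

```julia
using Statistics  # stdlib: mean, median

# Toy draws: 3 iterations of a 2-parameter chain.
draws = [0.1 1.0;
         0.3 1.2;
         0.2 1.4]

mean(draws; dims = 1)    # 1×2 matrix of per-parameter means
median(draws; dims = 1)  # 1×2 matrix of per-parameter medians
```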
SMM.BGPChainType

BGPChain

MCMC Chain storage for the BGP algorithm. This is the main datatype for the implementation of Baragatti, Grimaud and Pommeret (BGP) in Likelihood-Free Parallel Tempering.

Fields

  • evals: Array of Evals
  • best_id: index of best eval.value so far
  • best_val: best eval.value so far
  • curr_val : current value
  • probs_acc: vector of probabilities with which to accept current value
  • id: Chain identifier
  • iter: current iteration
  • accepted: Array{Bool} of length(evals)
  • accept_rate: current acceptance rate
  • acc_tuner: Acceptance tuner. acc_tuner > 1 means to be more restrictive: params that yield a worse function value are less likely to get accepted, the higher acc_tuner is.
  • exchanged: Array{Int} of length(evals) with index of chain that was exchanged with
  • m: MProb
  • sigma: Float64 shock variance
  • sigma_update_steps: update sampling vars every sigma_update_steps iterations. setting sigma_update_steps > maxiter means to never update the variances.
  • sigma_adjust_by: adjust sampling vars by sigma_adjust_by percent up or down
  • smpl_iters: max number of trials to get a new parameter from MvNormal that lies within support
  • min_improve: minimally required improvement of chain j over chain i for an exchange move j->i to take place.
  • batches: in the proposal function update the parameter vector in batches. [default: update entire param vector]
source
SMM.bestMethod
best(c::BGPChain) -> (val,idx)

Returns the smallest value and its index stored on the chain.

source
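On a plain vector of objective values (a stand-in for the chain's stored eval.values), the same result comes from Base.findmin:

```julia
# Smallest value and its index, as best(c) returns for a chain.
vals = [0.7, 0.2, 0.9]
val, idx = findmin(vals)   # → (0.2, 2)
```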
SMM.computeNextIteration!Method
computeNextIteration!( algo::MAlgoBGP )

Computes new candidate vectors for each BGPChain and accepts/rejects each vector on its BGPChain, according to some rule. The evaluation of the objective function is performed in parallel, if so desired.

  1. On each chain c:
    • computes new parameter vectors
    • applies a criterion to accept/reject any new params
    • stores the result in BGPChains
  2. Calls exchangeMoves! to swap chains
source
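The two steps above can be sketched in a toy, stdlib-only form. Every name here is a stand-in (chains as plain scalars, a user-supplied `objective`), not the SMM internals; the point is the shape of one iteration: propose, accept/reject per chain, then exchange:

```julia
# One toy pass of propose -> accept/reject -> exchange over scalar "chains".
function step!(chains::Vector{Float64}, objective::Function;
               sigma::Float64 = 0.1, min_improve::Float64 = 0.0)
    vals = objective.(chains)
    for i in eachindex(chains)
        cand = chains[i] + sigma * randn()          # 1. new candidate
        v = objective(cand)
        if v < vals[i] || rand() < exp(vals[i] - v) # accept/reject (minimization)
            chains[i], vals[i] = cand, v
        end
    end
    # 2. exchange: swap adjacent chains when j improves on i by min_improve
    for i in 1:length(chains)-1
        if vals[i] - vals[i+1] > min_improve
            chains[i], chains[i+1] = chains[i+1], chains[i]
            vals[i], vals[i+1] = vals[i+1], vals[i]
        end
    end
    chains
end
```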
SMM.doAcceptReject!Method
doAcceptReject!(c::BGPChain,eval_new::Eval)

Perform a Metropolis-Hastings accept-reject operation on the latest Eval and update the sampling variance, if so desired (set via sigma_update_steps in the BGPChain constructor).

source
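A minimal sketch of what such an accept-reject rule could look like, assuming a minimization objective; the names `value_old`, `value_new`, and the way `acc_tuner` scales the exponent are illustrative assumptions, not the package's exact formula:

```julia
# Hypothetical MH acceptance probability for a minimized objective:
# always accept improvements; accept worse values with decaying probability.
# acc_tuner > 1 makes worse values less likely to be accepted.
function accept_prob(value_old::Float64, value_new::Float64;
                     acc_tuner::Float64 = 1.0)
    min(1.0, exp(acc_tuner * (value_old - value_new)))
end

accept(vo, vn; kw...) = rand() < accept_prob(vo, vn; kw...)
```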
SMM.exchangeMoves!Method
exchangeMoves!(algo::MAlgoBGP)

Exchange chains i and j if dist_fun(evi.value, evj.value) is greater than a threshold value c.min_improve. Commonly, this means that we only exchange if j is better by at least c.min_improve.

source
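The exchange criterion can be sketched as a one-liner; the default `dist_fun` below (plain difference) and the argument names mirror the docstring, not necessarily the source:

```julia
# Swap states of chains i and j only if j improves on i by more than
# min_improve. With dist_fun = (a,b) -> a - b and a minimized objective,
# vi - vj > min_improve means j is better by at least min_improve.
function should_exchange(vi::Float64, vj::Float64, min_improve::Float64;
                         dist_fun::Function = (a, b) -> a - b)
    dist_fun(vi, vj) > min_improve
end
```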
SMM.historyMethod
history(c::BGPChain)

Returns a DataFrame with a history of the chain.

source
SMM.mysampleMethod
mysample(d::Distributions.MultivariateDistribution,lb::Vector{Float64},ub::Vector{Float64},iters::Int)

Sample from distribution d until all points are in the support. This is a crude version of a truncated distribution: it simply samples until all draws lie within the admissible domain.

source
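The rejection idea can be shown without Distributions.jl; here a closure stands in for the MultivariateDistribution sampler, and the function name is illustrative:

```julia
# Crude truncation by rejection: redraw until every coordinate lies in
# [lb, ub], giving up after `iters` attempts (as mysample does).
function rejection_sample(draw::Function, lb::Vector{Float64},
                          ub::Vector{Float64}, iters::Int)
    for _ in 1:iters
        x = draw()
        all(lb .<= x .<= ub) && return x
    end
    error("no draw inside the support after $iters trials")
end
```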
SMM.proposalMethod
proposal(c::BGPChain)

Gaussian Transition Kernel centered on current parameter value.

  1. Map all $k$ parameters into $\mu \in [0,1]^k$.
  2. Update all parameters by sampling from MvNormal, $N(\mu,\sigma)$, where $\sigma$ is c.sigma, until all parameters lie in $[0,1]^k$.
  3. Map $[0,1]^k$ back to the original parameter space.
source
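The three steps above can be sketched with the standard library only (an isotropic Gaussian via `randn` stands in for MvNormal; `lb`/`ub` are parameter bounds and all names are illustrative):

```julia
# Gaussian transition kernel sketch: map to the unit cube, perturb,
# redraw until admissible, map back.
function propose(p::Vector{Float64}, lb::Vector{Float64}, ub::Vector{Float64},
                 sigma::Float64; smpl_iters::Int = 1000)
    mu = (p .- lb) ./ (ub .- lb)                 # 1. map params to [0,1]^k
    for _ in 1:smpl_iters
        z = mu .+ sigma .* randn(length(mu))     # 2. sample around mu
        if all(0 .<= z .<= 1)                    #    until inside the cube
            return lb .+ z .* (ub .- lb)         # 3. map back to the bounds
        end
    end
    error("no admissible draw after $smpl_iters trials")
end
```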