Moment Algorithms

Moment Optimization Algorithms

MomentOpt.MAlgo – Type.

This abstract type nests all MProb algorithms, for example MomentOpt.MAlgoBGP.

source
readMalgo(filename::AbstractString)

Load MAlgo from disk

source
MomentOpt.runMOpt! – Method.
runMOpt!( algo::MAlgo )

Function to start estimation of an MAlgo.

source
MomentOpt.save – Method.
save(algo::MAlgo, filename::AbstractString)

Save MAlgo to disk using JLD2

source
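The typical workflow is to construct an algorithm from an MProb, run it, and persist the result. A minimal sketch, assuming an `MProb` instance `mprob` and an options `Dict` `opts` have already been set up (the file name is illustrative):

```julia
using MomentOpt

# construct a BGP algorithm from a moment problem and an options Dict
algo = MAlgoBGP(mprob, opts)

# run the estimation
runMOpt!(algo)

# persist to disk (JLD2) and reload later
save(algo, "my_estimation.jld2")
algo2 = readMalgo("my_estimation.jld2")
```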

A particular implementation of such an algorithm is the BGP algorithm:

BGP Algorithm

MAlgoBGP: BGP MCMC Algorithm

This implements the BGP MCMC algorithm from Likelihood-Free Parallel Tempering by Baragatti, Grimaud and Pommeret (BGP):

Approximate Bayesian Computational (ABC) methods (or likelihood-free methods) have appeared in the past fifteen years as useful methods to perform Bayesian analyses when the likelihood is analytically or computationally intractable. Several ABC methods have been proposed: Markov chain Monte Carlo (MCMC) methods have been developed by Marjoram et al. (2003) and by Bortot et al. (2007) for instance, and sequential methods have been proposed among others by Sisson et al. (2007), Beaumont et al. (2009) and Del Moral et al. (2009). Until now, while ABC-MCMC methods remain the reference, sequential ABC methods have appeared to outperform them (see for example McKinley et al. (2009) or Sisson et al. (2007)). In this paper a new algorithm combining population-based MCMC methods with ABC requirements is proposed, using an analogy with the Parallel Tempering algorithm (Geyer, 1991). Performances are compared with existing ABC algorithms on simulations and on a real example.

Fields

  • m: MProb
  • opts: a Dict of options
  • i: current iteration
  • chains: An array of BGPChain
  • anim: Plots.Animation
  • dist_fun: function to measure distance between one evaluation and the next.
source
MomentOpt.CI – Method.
CI(c::BGPChain;level=0.95)

Confidence interval on parameters

source
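As an illustration of what such an interval computes, here is a self-contained sketch (not the package's implementation) that builds a two-sided interval at a given level from the empirical quantiles of a vector of parameter draws:

```julia
using Statistics

# Empirical two-sided confidence interval from a vector of draws.
# Illustrative stand-alone sketch; `ci_sketch` is a hypothetical name.
function ci_sketch(draws::Vector{Float64}; level::Float64 = 0.95)
    α = 1 - level
    (lower = quantile(draws, α / 2), upper = quantile(draws, 1 - α / 2))
end

draws = collect(0.0:0.01:1.0)          # 101 evenly spaced "draws"
ci = ci_sketch(draws; level = 0.95)    # (lower = 0.025, upper = 0.975)
```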

extendBGPChain!(chain::BGPChain, algo::MAlgoBGP, extraIter::Int64)

Starting from an existing MAlgoBGP, allow for additional iterations by extending a specific chain. This function is used to restart a previous estimation run via restartMOpt!

source

restartMOpt!(algo::MAlgoBGP, extraIter::Int64)

Starting from an existing MAlgoBGP, restart the optimization from where it stopped. Add extraIter additional steps to the optimization process.

source
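The extend-then-restart idea can be sketched with a toy chain: grow the storage arrays by the extra number of iterations, then resume iterating from the stored counter. All names here are hypothetical; this is not MomentOpt's implementation.

```julia
# Toy chain: one stored value per iteration plus the current iteration counter.
mutable struct ToyChain
    values::Vector{Float64}
    iter::Int
end

# extendBGPChain!-style step: make room for `extra` more iterations.
function extend!(c::ToyChain, extra::Int)
    append!(c.values, fill(NaN, extra))
    return c
end

# restartMOpt!-style step: continue from c.iter until storage is full.
function run!(c::ToyChain)
    while c.iter < length(c.values)
        c.iter += 1
        c.values[c.iter] = sqrt(c.iter)   # stand-in for an objective evaluation
    end
    return c
end

c = ToyChain(fill(NaN, 5), 0)
run!(c)          # first run: 5 iterations
extend!(c, 3)    # allow 3 more iterations
run!(c)          # restart from where it stopped
```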
MomentOpt.summary – Method.
summary(c::BGPChain)

Returns a summary of the chain (a condensed history).

source
Statistics.mean – Method.
mean(c::BGPChain)

Returns the mean of all parameter values stored on the chain.

source
Statistics.median – Method.
median(c::BGPChain)

Returns the median of all parameter values stored on the chain.

source
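Conceptually, both methods reduce the parameter draws stored on the chain to a single statistic per parameter. A toy illustration on plain vectors (not the BGPChain type):

```julia
using Statistics

# Stored draws for one parameter on a toy chain
draws = [0.1, 0.4, 0.4, 0.9]

m  = mean(draws)     # 0.45
md = median(draws)   # 0.4
```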

BGPChain

MCMC Chain storage for BGP algorithm. This is the main datatype for the implementation of Baragatti, Grimaud and Pommeret (BGP) in Likelihood-Free Parallel Tempering.

Fields

  • evals: Array of Evals
  • best_id: index of best eval.value so far
  • best_val: best eval.value so far
  • curr_val: current value
  • probs_acc: vector of probabilities with which to accept current value
  • id: Chain identifier
  • iter: current iteration
  • accepted: Array{Bool} of length(evals)
  • accept_rate: current acceptance rate
  • acc_tuner: Acceptance tuner. acc_tuner > 1 means to be more restrictive: params that yield a worse function value are less likely to get accepted, the higher acc_tuner is.
  • exchanged: Array{Int} of length(evals) with index of chain that was exchanged with
  • m: MProb
  • sigma: Float64 shock variance
  • sigma_update_steps: update sampling vars every sigma_update_steps iterations. setting sigma_update_steps > maxiter means to never update the variances.
  • sigma_adjust_by: adjust sampling vars by sigma_adjust_by percent up or down
  • smpl_iters: max number of trials to get a new parameter from MvNormal that lies within support
  • min_improve: minimally required improvement in chain j over chain i for an exchange move j->i to take place.
  • batches: in the proposal function update the parameter vector in batches. [default: update entire param vector]
source
allAccepted(c::BGPChain)

Get all accepted Evals from a chain

source
MomentOpt.best – Method.
best(c::BGPChain) -> (val,idx)

Returns the smallest value stored on the chain, together with its index.

source
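A minimal stand-alone illustration of the (val, idx) convention, using findmin on a vector of stored function values:

```julia
# Toy stand-in for the function values stored on a chain
vals = [3.2, 1.5, 2.7, 1.9]

# findmin returns the smallest value and its position
val, idx = findmin(vals)   # (1.5, 2)
```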
computeNextIteration!( algo::MAlgoBGP )

Computes a new candidate vector for each BGPChain and accepts/rejects that vector on each BGPChain, according to some rule. The evaluation of the objective function is performed in parallel, if so desired.

  1. On each chain c:
    • computes new parameter vectors
    • applies a criterion to accept/reject any new params
    • stores the result in BGPChains
  2. Calls exchangeMoves! to swap chains
source
doAcceptReject!(c::BGPChain,eval_new::Eval)

Perform a Metropolis-Hastings accept-reject operation on the latest Eval and update the sampling variance, if so desired (set via sigma_update_steps in the BGPChain constructor).

source
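The accept/reject step can be sketched as a standard Metropolis rule on objective values, where smaller is better and a tuner greater than 1 makes acceptance stricter (playing the role of acc_tuner). This is a self-contained illustration under those assumptions, not the package's exact rule:

```julia
# Metropolis-style accept/reject on objective values (smaller is better).
# `u` is a uniform random draw in [0,1); passed in here for reproducibility.
function accept_reject(old_val::Float64, new_val::Float64, u::Float64;
                       tuner::Float64 = 1.0)
    # always accept improvements; accept deteriorations with tempered probability
    prob = min(1.0, exp(old_val - new_val) / tuner)
    return u < prob
end

a1 = accept_reject(2.0, 1.0, 0.99)    # improvement: accepted
a2 = accept_reject(1.0, 20.0, 0.5)    # large deterioration: rejected
```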
exchangeMoves!(algo::MAlgoBGP)

Exchange chains i and j if dist_fun(evi.value, evj.value) is greater than a threshold value c.min_improve. Commonly, this means that we only exchange if chain j is better than chain i by at least c.min_improve.

source
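The exchange criterion can be sketched on plain floats (MomentOpt operates on Eval objects; `maybe_exchange!` and its defaults are illustrative assumptions):

```julia
# Swap the current values of chains i and j only if chain j improves on
# chain i by more than `min_improve`, as measured by `dist_fun`.
function maybe_exchange!(vals::Vector{Float64}, i::Int, j::Int;
                         min_improve::Float64 = 0.0,
                         dist_fun = (a, b) -> a - b)
    if dist_fun(vals[i], vals[j]) > min_improve   # j better than i by enough
        vals[i], vals[j] = vals[j], vals[i]
        return true
    end
    return false
end

v = [5.0, 1.0]
swapped = maybe_exchange!(v, 1, 2; min_improve = 0.5)   # 5.0 - 1.0 > 0.5
```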
MomentOpt.history – Method.
history(c::BGPChain)

Returns a DataFrame with a history of the chain.

source
MomentOpt.mysample – Method.
mysample(d::Distributions.MultivariateDistribution,lb::Vector{Float64},ub::Vector{Float64},iters::Int)

Sample from distribution d until all points are in the support. This is a crude version of a truncated distribution: it just samples until all draws are within the admissible domain.

source
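The resample-until-admissible idea can be shown with a self-contained sketch using a plain Gaussian step instead of a Distributions.jl MvNormal (the function name and signature are hypothetical):

```julia
using Random

# Crude "truncated" sampler: redraw a whole vector until every coordinate
# lies inside [lb, ub], giving up after `iters` attempts.
function mysample_sketch(rng, mu::Vector{Float64}, sigma::Float64,
                         lb::Vector{Float64}, ub::Vector{Float64}, iters::Int)
    for _ in 1:iters
        x = mu .+ sigma .* randn(rng, length(mu))
        if all(lb .<= x .<= ub)
            return x
        end
    end
    error("no admissible draw after $iters trials")
end

rng = MersenneTwister(1)
x = mysample_sketch(rng, [0.5, 0.5], 0.1, [0.0, 0.0], [1.0, 1.0], 1000)
```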
next_eval(c::BGPChain)

Computes the next Eval for chain c:

  1. Get last accepted param
  2. get a new param via proposal
  3. evaluateObjective
  4. Accept or Reject the new value via doAcceptReject!
  5. Store Eval on chain c.
source
MomentOpt.proposal – Method.
proposal(c::BGPChain)

Gaussian Transition Kernel centered on current parameter value.

  1. Map all $k$ parameters into $\mu \in [0,1]^k$.
  2. Update all parameters by sampling from MvNormal, $N(\mu,\sigma)$, where $\sigma$ is c.sigma, until all params are in $[0,1]^k$.
  3. Map $[0,1]^k$ back to the original parameter space.
source
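The steps above can be sketched for a single parameter: map the value into the unit interval, add a Gaussian step (redrawing until it stays inside, as in mysample), then map back to the original bounds. Illustrative only; the package works on whole parameter vectors.

```julia
using Random

to_unit(p, lb, ub)   = (p - lb) / (ub - lb)   # map [lb,ub] -> [0,1]
from_unit(u, lb, ub) = lb + u * (ub - lb)     # map [0,1] -> [lb,ub]

function propose(rng, p::Float64, lb::Float64, ub::Float64, sigma::Float64)
    mu = to_unit(p, lb, ub)
    u = mu + sigma * randn(rng)
    while !(0.0 <= u <= 1.0)    # crude truncation: redraw until admissible
        u = mu + sigma * randn(rng)
    end
    return from_unit(u, lb, ub)
end

rng = MersenneTwister(2)
p_new = propose(rng, 5.0, 0.0, 10.0, 0.05)   # a small step around 5.0
```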

Set the acceptance rate on a chain, considering only iterations where no exchange happened.

source

Replace the current Eval of chain $i$ with the one of chain $j$.

source