Introduction

This is an introduction to ABC and Likelihood-free methods for readers unfamiliar with the topic. Some knowledge of Monte Carlo computational methods is assumed. Readers already familiar with ABC methods might benefit from reviews by Marin et al. (2011), Csilléry et al. (2010), Beaumont (2010), Cornuet and Robert (2010), and Marjoram and Tavaré (2006).

Sections 1 and 2 motivate the need for ABC methods. Section 3 gives intuition on how ABC works. Section 4 provides a statistical treatment of ABC based on the rejection sampler. Section 5 summarizes the current state of the art, with pointers to references and topics that have been studied in the literature.

1. Scientific motivation and need for ABC methods

Complex stochastic processes are often preferred over simple processes in modeling natural systems because they enable investigators to capture a greater proportion of the salient features of a system. However, probabilistic models resulting from such processes can lead to computationally intractable likelihood functions, posing challenges for the implementation of likelihood-based statistical inference methods. Since the central object in Bayesian inference, the posterior distribution of the parameter given the data, is by Bayes' Theorem proportional to the likelihood, a computationally intractable likelihood translates directly into difficulties for Bayesian inference. Therefore, inference from models having computationally intractable likelihoods has received considerable attention in the Bayesian literature.

2. Levels of intractability in likelihoods

Likelihoods may be intractable at various levels. We follow a useful classification, originating in Murray et al. (2006) and elaborated by Wilkinson (2008), to define the class of problems addressed by ABC methods.

a. The simplest type of intractable likelihood has an unknown part that is constant with respect to the parameters of interest. The likelihood has the form

P(D|θ) = ƒ(D|θ) / C

where ƒ is the “known” part of the probability mass or density function evaluated at the observed data D, and C is a constant that is difficult to calculate (or intractable) and is independent of the parameter of interest θ. Here, by “known” we mean that given a value of θ we can evaluate ƒ(D|θ). Many modern Monte Carlo based samplers of the posterior distribution π(θ|D), such as rejection sampling and Markov chain Monte Carlo (MCMC), require the likelihood to be known only up to a proportionality constant. Since

P(D|θ) ∝ ƒ(D|θ),

unknown constants such as C do not pose a real problem in Bayesian inference.
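To make this concrete, here is a minimal sketch (in Python, using an illustrative Gaussian model and made-up settings that are not part of the discussion above) of a random-walk Metropolis sampler that evaluates only the unnormalized posterior ƒ(D|θ)π(θ); any constant C would cancel in the acceptance ratio, which is why it need never be computed.

import numpy as np

# Minimal illustrative sketch: random-walk Metropolis targeting an
# unnormalized posterior. The Gaussian model, prior, and tuning values
# are assumptions made for this example only.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)                 # synthetic data for the example

def log_unnorm_posterior(theta):
    # log f(D | theta) + log pi(theta), each known only up to a constant
    log_lik = -0.5 * np.sum((data - theta) ** 2)     # Gaussian likelihood, constants dropped
    log_prior = -0.5 * theta ** 2 / 100.0            # N(0, 10^2) prior, constants dropped
    return log_lik + log_prior

theta, chain = 0.0, []
for _ in range(5000):
    proposal = theta + rng.normal(0.0, 0.3)          # random-walk proposal
    log_ratio = log_unnorm_posterior(proposal) - log_unnorm_posterior(theta)
    if np.log(rng.uniform()) < log_ratio:            # C cancels in this ratio even if included
        theta = proposal
    chain.append(theta)

print("posterior mean estimate:", np.mean(chain[1000:]))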

b. A type of intractability that is more difficult to overcome arises when the unknown part of the likelihood, C, depends on the parameter of interest θ. That is, the likelihood is of the form

P(D|θ) = ƒ(D|θ) / C(θ).

In this case, to evaluate the likelihood of a given parameter value, one has to compute or approximate C(θ). This constant usually depends on the details of the model and involves high-dimensional, hard-to-compute integrals. Numerical methods to compute such integrals can be devised; however, such methods lack generality. The main drawback is that a slight change in the model may result in a new C that is not computable with the previously devised numerical method, so one has to start from scratch and devise a new method to compute the new constant. Standard Monte Carlo based methods (e.g., the rejection sampler or MCMC) all require the likelihood to be known up to a constant that is independent of the parameters, and hence cannot be used to sample the posterior distribution of the parameters without further modification when the constant is of the form C(θ). Murray et al. (2006) propose a tailored MCMC approach for this type of intractability. Alternatively, variational Bayes methods can be used to make posterior inference for models satisfying certain criteria. ABC and Likelihood-free methods can handle likelihoods with this type of intractability easily, but as we discuss next, their main goal is much more ambitious, since they target models with likelihoods that are intractable at a more basic level.

c. The intractability motivating ABC and Likelihood-free methods was first studied in detail in an early paper by Diggle and Gratton (1984). The authors consider statistical inference in a frequentist setup when the model is known only at the level of the stochastic mechanism generating the data. In other words, conditional on the parameters, data can be generated under the model, but that is all the information available about the likelihood. In this case, no part of the likelihood P(D|θ) is explicit, but it is possible to generate independent realizations of the data, which we denote by X*, from the model P(X|θ). ABC and Likelihood-free methods perform approximate inference on the posterior distribution of the parameters in such models. Their success depends on inexpensive simulation of pseudo-data sets under the stochastic model of interest. Now that we have stated what ABC aims to achieve, we turn next to how it works.

3. Overview of ABC and Likelihood-free Methods

For the sake of simplicity, the presentation in this section is confined to ABC based on the rejection sampler, although many concepts and issues are common to other samplers as well. Given a prior on the parameter, ABC methods exploit the ability to simulate realizations from the joint distribution of the data and the parameter, which are then used to approximate the likelihoods without explicitly evaluating them (for an alternative non-Bayesian approach to intractable likelihoods of this type see Gourieroux et al. (1993)).

The intuition behind ABC is that, among realizations simulated under the joint distribution of the data and the parameter, a given parameter value yields a pseudo-data set “similar” to the observed data with probability proportional to its likelihood, so that the retained parameter values occur at a rate proportional to their posterior probability. The evaluation of the likelihood for a given parameter value is in a sense embedded in the comparison of the simulated data with the observed data. As explained in detail in section 4, if instead of pseudo-data sets “similar” to the observed data, only pseudo-data sets which exactly equal the observed data were used, the inferential procedure would be exact (excepting a Monte Carlo error arising from sampling the posterior). Therefore, a first approximation in the likelihoods (and hence in the posterior inference) in an ABC procedure comes from assessing the similarity between two data sets. The similarity is often measured by the Euclidean distance between the simulated data and the observed data. A second, quite different type of approximation in the likelihoods often (but not always) encountered in ABC procedures is performing the comparison based on only certain features (i.e., summary statistics) of the data rather than the full data. Especially in complex models, the reduction from the full data to the summaries usually entails a loss of information and hence results in an approximate likelihood.

Sampling the joint distribution is facilitated by the definition of conditional probability, which allows one to first simulate the parameter from its prior and then simulate a data set under the model conditional on that parameter value. Simulating from the prior of the parameter is usually straightforward, since one usually posits fully-known priors. However, simulating the data under the model conditional on the parameters is not always easy, because the stochastic mechanisms generating the data can be complex. The computational cost is further exacerbated by the assessment of similarity between the simulated and the observed data sets, which requires a stringent criterion in order to obtain an accurate sample from the posterior of the parameter. For high dimensional parameter and data spaces, a majority of the simulated data sets will be too dissimilar to the observed data to be of use in sampling the posterior. Hence, a much larger number of data sets than the desired sample size needs to be simulated under the model. Such properties make ABC methods computationally intensive.

4. Statistics of ABC based on the rejection sampler

We let X=(X1,X2,…,XN) be a (possibly multivariate) exchangeable random data vector and denote the observed value of X by D. For example, in the context of genetics, where ABC methods have often been used, each element of Xi might denote the type of the gene at position i in the genome for a given individual, so that the vector Xi collects these values for a sample of individuals from the population. We define P(X|θ) as the joint probability model generating the data, where θ is a parameter of interest. For simplicity we take θ to be a scalar parameter of interest and assume that there are no nuisance parameters in the model. In most applications, models involve high dimensional parameter spaces with nuisance parameters, but sampling the joint distribution of the parameters and computationally marginalizing over the nuisance parameters are straightforward Bayesian gymnastics, at least in principle. Let us denote the posterior distribution of interest by π(θ|D). If the model P(X|θ) is available to the extent that the likelihood of the observed data P(D|θ) can be evaluated, Bayesian inference about

π(θ|D) ∝ P(D|θ) π(θ),

where π(θ) is a prior, can proceed by standard computational methods such as rejection sampling. When these standard methods employ the exact likelihood P(D|θ), the inference is exact up to a Monte Carlo error, in the sense that the correct posterior distribution π(θ|D) is targeted. Computational methods designed to sample the correct posterior distribution without evaluating the likelihood function are precursors of ABC methods. An early example due to Tavaré et al. (1997) promotes sampling π(θ|D) by retaining the values θ* among independent realizations (X*,θ*) simulated from the joint distribution P(X,θ) only if X*=D. In this method, the target posterior π(θ|D) appears as the marginal of the augmented model ∫ π(θ,x|D) dx, where the pseudo-data X* are treated as nuisance parameters to be integrated out and the integration is effectively restricted to the set {X*=D}. By Bayes' Theorem we have

∫ π(θ,x|D) dx ∝ ∫ P(D|θ,x) P(θ,x) dx 

                      = ∫ P(D|x) P(x|θ) π(θ) dx

where we take P(D|x)=1 if D=x and zero otherwise.

The method of Tavaré et al. shows that, apart from the specification of prior distributions, which is always required in a Bayesian context, simulating observations from the joint distribution of the data and the parameter is in principle the sole requirement for sampling the posterior distribution of the parameter. This is an intriguing fact with implications for the philosophy of Bayesianism.
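As an illustration, the following is a minimal sketch (in Python) of such an exact, tolerance-free rejection sampler in the spirit of Tavaré et al. (1997), for a toy model that is not part of the original discussion: a single Binomial(20, θ) observation with a Uniform(0,1) prior, chosen because the event X*=D then has non-negligible probability.

import numpy as np

# Exact (tolerance-free) rejection sampling: accept theta* only when the
# simulated data exactly equal the observed data. Feasible here only
# because the toy data space is small and discrete; the model and the
# numbers are assumptions made for this example.
rng = np.random.default_rng(2)
n_trials, D = 20, 13                                  # illustrative observed count

posterior_sample = []
while len(posterior_sample) < 2000:
    theta_star = rng.uniform(0.0, 1.0)                # draw theta* from the prior
    x_star = rng.binomial(n_trials, theta_star)       # draw X* from P(X | theta*)
    if x_star == D:                                   # keep theta* only if X* = D
        posterior_sample.append(theta_star)

# The exact posterior here is Beta(D + 1, n_trials - D + 1); compare means:
print(np.mean(posterior_sample), (D + 1) / (n_trials + 2))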

Unfortunately, sampling the posterior by this (exact) method can be computationally quite difficult for complex models. First, when the data are high dimensional, the condition X*=D is rarely satisfied. Substituting the data X with low dimensional summary statistics S has therefore become a common practice. In particular, D and the pseudo-data X* are replaced by Sd and S*, calculated from the respective data sets. Although no information loss occurs if the statistics S are sufficient, complex models admitting sufficient statistics are the exception in practice. The information loss entailed in the reduction from the data to (non-sufficient) statistics constitutes the first level of approximation in ABC. Characterization of the relationship between P(θ|Sd) and P(θ|D) remains an active research area. Second, even when we target the posterior P(θ|Sd), satisfying S*=Sd exactly might still not be possible. This problem is remedied in ABC by relaxing the strict equality in the acceptance condition S*=Sd to a tolerance region defined through a kernel Kε(S,Sd), where ε is a bandwidth. Each simulated data set whose summaries lie within ε of the observed summaries, with respect to some distance function, is assigned a weight by the chosen kernel Kε(S,Sd) (Wilkinson 2008, Robert et al. 2011). Incorporating these two approximations, the posterior distribution sampled by ABC can then be written as

πε(θ|D) ∝ ∫ Kε(S,Sd) P(S|θ) π(θ) dS.
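A minimal sketch of this kernel-weighted form follows (in Python, with an illustrative Gaussian toy model, a Gaussian kernel, and made-up settings, none of which come from the text above): every draw from the prior is retained and weighted by Kε(S*,Sd), rather than being accepted or rejected outright.

import numpy as np

# Kernel-weighted ABC sketch: weight prior draws by K_eps(S*, S_d).
# The Gaussian model, Gaussian kernel, and all numbers are assumptions
# made for this example only.
rng = np.random.default_rng(3)
observed = rng.normal(1.5, 1.0, size=100)             # synthetic "observed" data
s_obs = observed.mean()                               # observed summary S_d
eps = 0.1                                             # kernel bandwidth

thetas = rng.normal(0.0, np.sqrt(10.0), size=20000)   # theta* drawn from an N(0, 10) prior
s_sim = np.array([rng.normal(t, 1.0, size=100).mean() for t in thetas])  # summaries S* of pseudo-data
weights = np.exp(-0.5 * ((s_sim - s_obs) / eps) ** 2) # Gaussian kernel K_eps(S*, S_d)
weights /= weights.sum()

print("weighted posterior mean estimate:", np.sum(weights * thetas))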

The first application of this idea is due to Pritchard et al. (1999), who used a uniform kernel with support (Sd-ε,Sd+ε) to obtain an ABC rejection algorithm as an approximate version of the method of Tavaré et al. (see also Prangle and Fearnhead 2011, Robert et al. 2011). The algorithm is as follows.

Algorithm (based on Pritchard et al. 1999)

1. Simulate a value (X*,θ*) from the joint distribution P(X,θ).

2. Keep θ* as a sample from the marginal posterior distribution πε(θ|D) only if d(Sd,S*) < ε,

where d( . , . ) is a suitable distance function (e.g., the Euclidean norm || . ||, implying Kε(S,Sd) = I{||S-Sd||<ε}, where I{A} is the indicator function taking the value 1 on the set A). Steps 1 and 2 are repeated until the desired number of accepted values is obtained. Large values of the user-defined error tolerance ε lead to higher acceptance rates but also decrease accuracy (Tavaré 2001).
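A minimal sketch of this rejection algorithm is given below (in Python), for an illustrative Gaussian toy model with the sample mean as summary statistic; the model, prior, tolerance, and sample sizes are assumptions chosen only for illustration and are not part of the algorithm itself.

import numpy as np

# ABC rejection sampler sketch following the two steps above. All model
# and tuning choices here are illustrative assumptions.
rng = np.random.default_rng(0)

n_obs = 100
theta_true = 1.5                                      # used only to create synthetic "observed" data
observed = rng.normal(theta_true, 1.0, size=n_obs)

def summary(data):
    # summary statistic S: here simply the sample mean
    return np.mean(data)

def simulate(theta, size=n_obs):
    # simulator for the model P(X | theta); only forward simulation is needed
    return rng.normal(theta, 1.0, size=size)

s_obs = summary(observed)
epsilon = 0.05                                        # user-defined tolerance
accepted = []

while len(accepted) < 1000:
    theta_star = rng.normal(0.0, np.sqrt(10.0))       # step 1: draw theta* from an N(0, 10) prior
    x_star = simulate(theta_star)                     #         and pseudo-data X* given theta*
    if abs(summary(x_star) - s_obs) < epsilon:        # step 2: keep theta* if d(Sd, S*) < epsilon
        accepted.append(theta_star)

print("approximate posterior mean:", np.mean(accepted))

Lowering ε in this sketch makes the accepted values a better approximation to the posterior but reduces the acceptance rate, which is the trade-off noted above.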

5. Literature

In this section, some references in the ABC literature are pointed out, mainly as an aid to introduce and frame relevant concepts used in ABC. A classification of terminology and research in ABC is provided in the hope that it will be useful. The presentation is meant to depict a general layout of the field in a limited space. A complete literature review is not attempted, nor is exactness in the classification of the work claimed. Readers are referred to the original works to form their own view of where each cited work falls within the broader area of computational statistics. For a comprehensive list of publications see the Literature tab.

a. Research adapting approximate computation into a sampler in the Bayesian framework

Strictly speaking, ABC methods are not novel samplers of the posterior distribution. Rather, ABC methods are based on samplers of the posterior distribution, the validity of which is well-established. Major classes of samplers used with ABC include the rejection sampler and MCMC. Sequential Monte Carlo methods have also been adapted to be used with ABC. The difference between ABC versions and standard versions of these samplers is that ABC versions use approximate likelihoods instead of exact likelihoods in the algorithms and consequently produce approximate posterior samples.

An exact rejection sampler, first implemented by Tavaré et al. (1997), can be considered the precursor of ABC methods. As described in detail in section 4, the algorithm of Tavaré et al. (1997) is exact in the sense that no approximation to the likelihood was used. Pritchard et al. (1999) were the first to employ approximate likelihoods, using the rejection sampler. Markov chain Monte Carlo was implemented next (Beaumont, 2003; Marjoram et al., 2003; Bortot et al., 2007; Andrieu, 2010), and Sequential Monte Carlo later (Sisson et al. 2007, 2009; Beaumont et al., 2009; Del Moral et al., 2009; Toni et al., 2009).

b. Research  introducing a novel method to improve the performance of the approximation involved in ABC

There are often two approximations involved in ABC procedures (see section 4 for details). The first approximation can be seen as a well-behaved error, in the sense that the error decreases as computational effort increases. For this type of error, a major advance in the accuracy and performance of ABC has been post-sampling correction of the posterior values in the context of the rejection sampler (Beaumont et al., 2002; Blum and François, 2010) and in the context of MCMC (Wegmann et al. 2009). The second approximation is less well understood; it involves substituting the full-data likelihood with a likelihood based on summary statistics, which are often not sufficient. Work on this topic has focused on how to select summary statistics (e.g., Joyce and Marjoram, 2008; Prangle and Fearnhead, 2011).

c. Research focusing on a specific statistical concept in inference by ABC

Generalization and equivalence of the posteriors targeted in different ABC methods are investigated in Wilkinson (2008, 2010) and Sisson et al. (2011). Model choice by ABC has been implemented in various contexts (Pritchard et al., 1999; Fagundes et al., 2007; Grelaud et al., 2009; Jakobsson and Blum, 2010) but only recently has it been studied in detail (Ratmann and Robert, 2011; Marin et al., 2011; Robert et al. 2011).

d. Applications of ABC

ABC methods have been used in many practical inference problems, especially in settings requiring complex stochastic modeling with high-dimensional data and parameter spaces. Some examples involving genetic data are inference on the demographic history of populations (Pritchard et al., 1999; François et al., 2008; Verdu et al., 2009; Blum and Jakobsson, 2010) and species (Plagnol and Tavaré, 2003; Estoup et al., 2004; Excoffier et al., 2005; Becquet and Przeworski, 2007; Fagundes et al., 2007; Wilkinson, 2007, 2010), evolution of cancer (Tavaré, 2005; Siegmund et al., 2008) and evolution of protein networks (Ratmann 2007, 2009). Applications outside of genetics include inference on the physics of stereological extremes (Bortot et al., 2007), ecology of tropical forests (Jabot and Chave, 2009), dynamical systems in biology (Toni et al., 2009), and small-network disease models (Walker et al., 2010).