import os

import numpy as np
import scipy.stats as st
import polars as pl

import cmdstanpy
import arviz as az

import iqplot
import bebi103

import bokeh.io
import bokeh.plotting
bokeh.io.output_notebook()
In Chapter 26, we performed parameter estimation using MCMC. We were thrilled to get parameter estimates! But how do we know if our model is a reasonable approximation of the true data generation process?
One obvious approach is to consider parameter values suggested by our posterior and then use those to parametrize the likelihood and generate new data sets from the model. We can then check to see if the observed data are consistent with the data sets generated by the posterior-parametrized model. This approach is referred to as posterior predictive checking. The procedure is remarkably similar to that of prior predictive checks, with one difference: the parameter values are drawn from the posterior rather than from the prior.
1. Draw parameter values out of the posterior.
2. Use those parameter values in the likelihood to generate a new data set.
3. Store the result and repeat.
Conveniently, we get samples of parameter values out of the posterior from Markov chain Monte Carlo. Once we have the generated data sets, we can compare them to the measured data. This helps answer the question: Could this generative model actually produce the observed data? If the answer is yes, the generative model is not ruled out by the data (though it still may be a bad model). If the answer is no, then the generative model cannot fully describe the process by which the data were generated.
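To make the procedure concrete, here is a minimal sketch of how we could do this with NumPy, assuming we already have arrays alpha_samples and b_samples of posterior draws and the Negative Binomial likelihood of Chapter 26 (for which NumPy's parametrization has success probability 1/(1+b)). The function and variable names here are illustrative, not part of the original analysis.

rng = np.random.default_rng()

def draw_ppc_data_sets(alpha_samples, b_samples, N):
    """Generate one data set of N mRNA counts per posterior draw."""
    return np.array(
        [
            rng.negative_binomial(alpha, 1 / (1 + b), size=N)
            for alpha, b in zip(alpha_samples, b_samples)
        ]
    )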
Part of the art of performing a posterior predictive check (or a prior predictive check for that matter) is choosing good summaries of the measured data that can be clearly and quantitatively visualized. Which summaries you choose to plot is up to you, and is often not a trivial choice; as Michael Betancourt says, “Constructing an interpretable yet informative summary statistic is very much a fine art.” For univariate measurements, the ECDF is a good summary. You may need to choose particular summaries that are best for your modeling task at hand.
Let us proceed with posterior predictive checks on our inferences from Chapter 26. As a reminder, the generative model is given by Equation 26.3: a Negative Binomial likelihood for the mRNA counts, parametrized by \(\alpha\) and \(b\).
28.1 Getting posterior predictive samples using Stan
We could always take samples out of the posterior and then use NumPy to generate posterior predictive samples, as sketched above. However, it is more convenient to have Stan do it. As such, we augment our Stan code with posterior predictive sampling in the generated quantities block. Our updated Stan code lives in the file smfish_with_ppc.stan.
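A sketch of what smfish_with_ppc.stan might look like is below. The priors on alpha and b are placeholders, not necessarily those of Equation 26.3; the generated quantities block, which draws a new data set for each posterior sample, is the new addition.

data {
  // Number of measurements and measured mRNA counts
  int N;
  array[N] int n;
}

parameters {
  real<lower=0> alpha;
  real<lower=0> b;
}

transformed parameters {
  // Stan's Negative Binomial is parametrized with beta = 1 / b
  real beta_ = 1.0 / b;
}

model {
  // Placeholder priors; substitute those of Equation 26.3
  alpha ~ lognormal(0.0, 2.0);
  b ~ lognormal(2.0, 2.0);

  // Likelihood
  n ~ neg_binomial(alpha, beta_);
}

generated quantities {
  // One posterior predictive data set per posterior sample
  array[N] int n_ppc;
  for (i in 1:N) {
    n_ppc[i] = neg_binomial_rng(alpha, beta_);
  }
}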
Let’s grab our samples, including the posterior predictive checks!
# Load in as dataframe
df = pl.read_csv(
    os.path.join(data_path, "singer_transcript_counts.csv"), comment_prefix="#"
)

# Construct data dict, making sure data are ints
data = dict(N=len(df), n=df["Rest"].to_numpy())

with bebi103.stan.disable_logging():
    sm = cmdstanpy.CmdStanModel(stan_file='smfish_with_ppc.stan')
    samples = sm.sample(data=data)
When we convert the samples into an ArviZ instance, we should specify that the variable n_ppc is a posterior predictive variable.
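The conversion might look like the following; az.from_cmdstanpy() accepts the names of posterior predictive variables via its posterior_predictive keyword argument.

samples = az.from_cmdstanpy(posterior=samples, posterior_predictive="n_ppc")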
Now, let’s look at the posterior predictive checks. Note that the n_ppc variable contains an entire data set of \(N\) mRNA counts for each posterior sample of \(\alpha\) and \(b\). This can be verified by checking its shape.
samples.posterior_predictive.n_ppc.shape
(4, 1000, 279)
Indeed, for each of the four chains, we have 1000 samples, with each sample comprising 279 mRNA counts. To perform a graphical posterior predictive check, we could plot all 4000 ECDFs of the generated mRNA copy numbers. More conveniently, for each point along the \(n\)-axis, we can compute percentiles of the ECDF values across the generated data sets, which gives us intervals within which we expect generated data sets to lie. We can then overlay the measured data and compare.
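As a conceptual sketch of this construction (not bebi103's implementation), we might compute the bands as follows.

# Flatten to one generated data set per row: shape (4000, 279)
n_ppc_vals = samples.posterior_predictive["n_ppc"].values.reshape((-1, 279))

# Evaluate each generated data set's ECDF on a grid of n-values
n_grid = np.arange(n_ppc_vals.max() + 1)
ecdf_vals = np.empty((len(n_ppc_vals), len(n_grid)))
for j, n_val in enumerate(n_grid):
    ecdf_vals[:, j] = np.mean(n_ppc_vals <= n_val, axis=1)

# Percentiles of the ECDF values at each grid point give the bands
ecdf_low, ecdf_median, ecdf_high = np.percentile(ecdf_vals, [2.5, 50, 97.5], axis=0)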
The bebi103.viz.predictive_ecdf() function does this for us. It expects input with n_samples rows and N columns, where N is the number of data points and n_samples is the total number of posterior predictive data sets we generated. Because we sampled with four chains, the posterior predictive array is three-dimensional: the first index is the chain, the second is the draw, and the third indexes the data points. The samples are stored as an xarray DataArray, which we can reshape using the stack() function, collapsing the chain and draw indexes into a single sample index. We also want to be sure to specify the ordering of the indexes; samples should come first, followed by the index of the data point. We can do this using the transpose() method of an xarray DataArray, which lets us specify the ordering of the indexes. We can then make the predictive ECDF plot, passing in our measured data using the data keyword argument.
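Putting this together, the reshaping and plotting might look like the following sketch. Here, n_ppc_dim_0 is ArviZ's default name for the data-point dimension of n_ppc, and the percentiles argument is an assumption chosen to match the 68% and 95% bands described below.

# Collapse (chain, draw) into a single sample index, with samples first
n_ppc = (
    samples.posterior_predictive["n_ppc"]
    .stack({"sample": ("chain", "draw")})
    .transpose("sample", "n_ppc_dim_0")
)

# Overlay the measured counts on the posterior predictive ECDF bands
bokeh.io.show(
    bebi103.viz.predictive_ecdf(
        n_ppc, data=df["Rest"].to_numpy(), percentiles=[95, 68]
    )
)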
The dark line is the median of the posterior predictive ECDFs, and the shaded regions contain 68% and 95% of the samples. The data seem consistent with the model. This can be seen more clearly by comparing the difference of the ECDFs from the median ECDF, which is accomplished using the diff='ecdf' keyword argument.
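That plot can be made with the same call, now with the diff keyword argument (again a sketch).

bokeh.io.show(
    bebi103.viz.predictive_ecdf(
        n_ppc, data=df["Rest"].to_numpy(), diff="ecdf", percentiles=[95, 68]
    )
)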
We see more clearly that the observed data set is consistent with data sets generated by the model.
28.2 How about Rex1?
Let us now do the same analysis with the Rex1 gene, which, if you recall the EDA of Section 26.2, exhibited bimodality, suggesting that a Negative Binomial model may not be our best option.
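The workflow is the same as for the Rest gene; the only change is which column of the data frame we pass to Stan. A sketch, with illustrative variable names:

# Same model, now conditioned on the Rex1 counts
data_rex1 = dict(N=len(df), n=df["Rex1"].to_numpy())

with bebi103.stan.disable_logging():
    samples_rex1 = sm.sample(data=data_rex1)

samples_rex1 = az.from_cmdstanpy(
    posterior=samples_rex1, posterior_predictive="n_ppc"
)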