import numpy as np
import scipy.stats as st
import polars as pl

import cmdstanpy
import arviz as az

import iqplot
import bebi103

import bokeh.io
import bokeh.plotting

bokeh.io.output_notebook()
In this lesson, we will learn how to use Markov chain Monte Carlo to do parameter estimation. To get the basic idea behind MCMC, imagine for a moment that we can draw samples out of the posterior distribution. This means that the probability of choosing given values of a set of parameters is proportional to the posterior probability of that set of values. If we drew many many such samples, we could reconstruct the posterior from the samples, e.g., by making histograms. That’s a big thing to imagine: that we can draw properly weighted samples. But, it turns out that we can! That is what MCMC allows us to do.
We discussed some theory behind this seemingly miraculous capability in lecture. For this lesson, we will just use the fact that we can do the sampling to learn about posterior distributions in the context of parameter estimation.
26.1 The data set
The data come from the Elowitz lab, published in Singer et al., Dynamic Heterogeneity and DNA Methylation in Embryonic Stem Cells, Molec. Cell, 55, 319-331, 2014, available here. In the following paragraphs, I repeat the description of the data set and EDA from last term:
In this paper, the authors investigated cell populations of embryonic stem cells using RNA single molecule fluorescence in situ hybridization (smFISH), a technique that enables them to count the number of mRNA transcripts in a cell for a given gene. They were able to measure four different genes in the same cells. So, for one experiment, they get the counts of four different genes in a collection of cells.
The authors focused on genes that code for pluripotency-associated regulators to study cell differentiation. Indeed, differing gene expression levels are a hallmark of differentiated cells. The authors do not just look at counts in a given cell at a given time. The temporal nature of gene expression is also important. While the authors do not directly look at temporal data using smFISH (since the technique requires fixing the cells), they did look at time lapse fluorescence movies of other regulators. We will not focus on these experiments here, but will discuss how the distribution of mRNA counts acquired via smFISH can serve to provide some insight about the dynamics of gene expression.
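The code that loads the data is not shown in this excerpt. Reading the Singer et al. counts into the data frame df used below looks something like the following sketch; the file name, local path, and comment_prefix argument are assumptions (the course's CSV files typically begin with commented metadata lines), so adjust them to wherever you keep the data.

# Load the smFISH transcript counts (path, file name, and comment prefix are assumptions)
df = pl.read_csv("../data/singer_transcript_counts.csv", comment_prefix="#")

# Take a look; the columns are the four genes
df.head()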
Note the difference in the \(x\)-axis scales. Clearly, prdm14 has far fewer mRNA copies than the other genes. The presence of two inflection points in the Rex1 ECDF implies bimodality.
26.3 Building a generative model
We can model the transcript counts, which result from bursty gene expression, as being Negative Binomially distributed. (The details behind this model are a bit nuanced, and you can read about them here.) For a given gene, the likelihood for the counts is
\[
\begin{align}
n_i \mid \alpha, b \sim \text{NegBinom}(\alpha, 1/b) \;\forall i,
\end{align}
\tag{26.1}\]
where \(\alpha\) is the burst frequency (higher \(\alpha\) means gene expression comes on more frequently) and \(b\) is the burst size, i.e., the typical number of transcripts made per burst. We have therefore identified the two parameters we need to estimate, \(\alpha\) and \(b\).
Because the Negative Binomial distribution is often parametrized in terms of \(\alpha\) and \(\beta = 1/b\), we can alternatively state our likelihood as

\[
\begin{align}
n_i \mid \alpha, \beta \sim \text{NegBinom}(\alpha, \beta) \;\forall i.
\end{align}
\]
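As a quick, illustrative check of this parametrization (not part of the original analysis), we can verify with SciPy that a Negative Binomial with shape parameter \(\alpha\) and inverse scale \(\beta = 1/b\) has mean \(\alpha b\), the burst frequency times the burst size. SciPy's nbinom distribution takes the shape parameter and a success probability \(p = \beta/(1+\beta)\); the parameter values below are arbitrary.

# Arbitrary burst frequency and burst size for illustration
alpha = 5.0
b = 20.0
beta = 1.0 / b

# SciPy's Negative Binomial uses p = beta / (1 + beta)
mean = st.nbinom.mean(alpha, beta / (1 + beta))

# The mean should equal alpha * b
print(mean, alpha * b)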
Given that we have a Negative Binomial likelihood, we are left to specify priors for the burst size \(b\) and the burst frequency \(\alpha\).
26.3.1 Priors for burst size and inter-burst time
We will apply the bet-the-farm technique to get our priors for the burst size and inter-burst times. I would expect the time between bursts to be longer than a second, since it takes time for the transcriptional machinery to assemble. I would expect it to be shorter than a few hours, since an organism would need to adapt its gene expression based on environmental changes on that time scale or faster. The time between bursts needs to be in units of RNA lifetimes, and bacterial RNA lifetimes are of order minutes. So, the range of values of \(\alpha\) is \(10^{-2}\) to \(10^2\), leading to a prior of

\[\begin{align}
\log_{10} \alpha \sim \text{Norm}(0, 1).
\end{align}
\]
I would expect the burst size to depend on promoter strength and/or strength of transcriptional activators. I could imagine anywhere from a few to a few thousand transcripts per burst, giving a range of \(10^0\) to \(10^4\), and a prior of
\[\begin{align}
\log_{10} b \sim \text{Norm}(2, 1).
\end{align}
\]
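The Stan program itself, smfish.stan, is not reproduced in this excerpt. Based on the description of its blocks below, it looks something like this sketch (the exact file used in the course may differ in its details).

data {
  // Number of cells measured
  int N;

  // mRNA transcript counts, one per cell
  array[N] int<lower=0> n;
}


parameters {
  // We sample the logarithms, matching how the priors were specified
  real log10_alpha;
  real log10_b;
}


transformed parameters {
  real alpha = 10^log10_alpha;
  real b = 10^log10_b;

  // Stan's Negative Binomial is parametrized by beta = 1/b; the trailing
  // underscore avoids clashing with the beta distribution in Stan
  real beta_ = 1.0 / b;
}


model {
  // Priors
  log10_alpha ~ normal(0, 1);
  log10_b ~ normal(2, 1);

  // Likelihood
  n ~ neg_binomial(alpha, beta_);
}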
Note that the raise-to-power operator is ^, not ** as in Python.
The data block contains the counts \(n\) of the mRNA transcripts. There are \(N\) cells that are measured. Most data blocks look like this: there is an integer variable that specifies the size of the data set, and then the data set itself is given as an array. The declaration array[N] int says that n is an array of length N whose entries are integers. We specified a lower bound on the data (as we will do on the parameters) using the <lower=0> syntax.
The parameters block tells us what the parameters of the posterior are. In this case, we wish to sample out of the posterior \(g(\alpha, b \mid \mathbf{n})\), where \(\mathbf{n}\) is the set of transcript counts for the gene. So, the two parameters are \(\alpha\) and \(b\). However, since the priors were more easily defined in terms of logarithms, we specify \(\log_{10} \alpha\) and \(\log_{10} b\) as the parameters.
The transformed parameters block allows you to do any convenient transformation of the parameters you are sampling. In this case, Stan's Negative Binomial distribution is parametrized by \(\beta = 1/b\), so we transform b into beta_. Notice that I have called this variable beta_ and not beta. I did this because beta is one of Stan's distributions, and you should avoid naming a variable after a word that is already part of the Stan language. The other transformations we need to make convert the logarithms back into the actual parameter values.
Finally, the model block is where the model is specified. The syntax of the model block is almost identical to that of the hand-written model.
Now that we have specified our model, we can compile it.
sm = cmdstanpy.CmdStanModel(stan_file='smfish.stan')
With our compiled model, we just need to specify the data and let Stan's sampler do the work! When using CmdStanPy, the data have to be passed in as a dictionary whose keys correspond to the variable names declared in the data block of the Stan program and whose values are NumPy arrays of the appropriate data type. For this calculation, we will use the data set for the Rest gene.
# Construct data dict, making sure data are ints
data = dict(N=len(df), n=df["Rest"].to_numpy())

# Sample using Stan
samples = sm.sample(
    data=data,
    chains=4,
    iter_sampling=1000,
)

# Convert to ArviZ InferenceData instance
samples = az.from_cmdstanpy(posterior=samples)
12:15:48 - cmdstanpy - INFO - CmdStan start processing
12:15:48 - cmdstanpy - INFO - CmdStan done processing.
12:15:48 - cmdstanpy - WARNING - Non-fatal error during sampling:
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Inverse scale parameter is 0, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Inverse scale parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Inverse scale parameter is 0, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Exception: neg_binomial_lpmf: Shape parameter is inf, but must be positive finite! (in 'smfish.stan', line 26, column 2 to column 33)
Consider re-running with show_console=True if the above output is unclear!
We got lots of warnings! In particular, we get warnings that some of the parameters fed into the Negative Binomial distribution are invalid, being either zero or infinite. These warnings are arising during Stan’s warm-up phase as it is assessing optimal settings for sampling, and should not be of concern. It is generally a bad idea to silence warnings, but if you are sure that the warnings that the sampler will throw are of no concern, you can silence logging using the bebi103.stan.disable_logging context. In most notebooks in these notes, to avoid clutter for pedagogical purposes, we will disable the warnings.
with bebi103.stan.disable_logging():
    samples = sm.sample(
        data=data,
        chains=4,
        iter_sampling=1000,
    )

# Convert to ArviZ InferenceData instance
samples = az.from_cmdstanpy(posterior=samples)
As we have already seen, the samples are indexed by chain and draw. Parameters represented in the parameters and transformed parameters blocks are reported.
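To take a quick look, we can inspect the posterior group of the ArviZ InferenceData object; it is an xarray Dataset indexed by chain and draw, with one variable per parameter and transformed parameter.

# Inspect the posterior samples (an xarray Dataset indexed by chain and draw)
samples.posterior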
26.5 Plots of the samples
There are many ways of looking at the samples. In this case, since we have two parameters of interest, the burst frequency and the burst size, we can plot the samples as a scatter plot to get a sense of the approximate posterior density.
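A minimal sketch of such a plot, pulling out the flattened samples and scattering them with Bokeh (axis labels and glyph styling are my own choices):

# Flattened samples of alpha and b across all chains and draws
alpha_samples = samples.posterior["alpha"].values.flatten()
b_samples = samples.posterior["b"].values.flatten()

# Scatter plot of the posterior samples
p = bokeh.plotting.figure(
    frame_height=300,
    frame_width=300,
    x_axis_label="α (burst frequency)",
    y_axis_label="b (burst size)",
)
p.scatter(alpha_samples, b_samples, size=2, alpha=0.1)

bokeh.io.show(p)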
We see very strong correlation between \(\alpha\) and \(b\). This does not necessarily mean that they depend on each other. Rather, it means that our degree of belief about their values depends on both in a correlated way. The measurements we made cannot effectively separate the effects of \(\alpha\) and \(b\) on the transcript counts.
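To attach a number to "very strong correlation," we can compute the correlation coefficient of the samples (an illustrative check, not part of the original analysis).

# Correlation coefficient between the alpha and b samples
np.corrcoef(alpha_samples, b_samples)[0, 1]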
26.6 Marginalizing the posterior
We can also plot the marginalized posterior distributions. Remember that the marginalized distributions properly take into account the effects of the other variable, including the strong correlation I just mentioned. To obtain the marginalized distribution, we simply ignore the samples of the parameters we are marginalizing out. It is convenient to look at the marginalized distributions as ECDFs.
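A sketch of what that looks like with iqplot, using the flattened samples from above (axis labels are my own choices):

# ECDF of the marginal posterior of alpha
bokeh.io.show(iqplot.ecdf(alpha_samples, x_axis_label="α (burst frequency)"))

# ECDF of the marginal posterior of b
bokeh.io.show(iqplot.ecdf(b_samples, x_axis_label="b (burst size)"))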
Alternatively, we can visualize the marginalized posterior PDFs as histograms. Because we have such a large number of samples, binning bias from histograms is less of a concern.
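A corresponding sketch with iqplot histograms; the density and rug settings are my own choices and may need adjusting for your version of iqplot.

# Histograms of the marginal posteriors, normalized to approximate PDFs
bokeh.io.show(
    iqplot.histogram(alpha_samples, density=True, rug=False, x_axis_label="α (burst frequency)")
)

bokeh.io.show(
    iqplot.histogram(b_samples, density=True, rug=False, x_axis_label="b (burst size)")
)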
We now have a two-dimensional posterior distribution, with our two parameters being \(\alpha\) and \(b\). We can combine a plot of the samples from the full posterior with the histograms (or ECDFs) of samples from the marginal distributions in a corner plot, available via bebi103.viz.corner().
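A sketch of the call (the parameters keyword is used here to select which variables to show; check the bebi103.viz.corner() documentation for the arguments your version accepts):

# Corner plot of the posterior samples of alpha and b
bokeh.io.show(bebi103.viz.corner(samples, parameters=["alpha", "b"]))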
Corner plots generalize to dimensions beyond two. The off-diagonal plots are of samples from marginal distributions where two parameters remain and the diagonals are plots from univariate marginal distributions.