
require(IgGeneUsage)
require(rstan)
require(knitr)
require(ggplot2)
require(ggforce)
require(ggrepel)
require(reshape2)
require(patchwork)

1 Introduction

Decoding the properties of immune receptor repertoires (IRRs) is key to understanding how our adaptive immune system responds to challenges, such as viral infection or cancer. One important quantitative property of IRRs is their immunoglobulin (Ig) gene usage, i.e. how often the different Ig genes that make up the immune receptors are used in a given IRR. Furthermore, we may ask: is there differential gene usage (DGU) between IRRs from different biological conditions (e.g. healthy vs. tumor)?

Both of these questions can be answered quantitatively by IgGeneUsage.

2 Input

The main input of IgGeneUsage is a data.frame with the following columns (a minimal sketch of such an input is shown after the list):

  1. individual_id: name of the repertoire (e.g. Patient-1)
  2. condition: name of the condition to which each repertoire belongs (healthy, tumor_A, tumor_B, …)
  3. gene_name: gene name (e.g. IGHV1-10 or family TRVB1)
  4. gene_usage_count: numeric usage count observed for the individual x condition x gene combination specified in columns 1-3
  5. [optional] replicate: character/numeric identifier that tags the different biological replicates, if such are available for an individual
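
A minimal sketch of such an input (hypothetical labels and counts, shown only to illustrate the expected layout) is:

ud <- data.frame(
  individual_id = rep(x = c("Patient-1", "Patient-2"), each = 3),
  condition = rep(x = c("healthy", "tumor"), each = 3),
  gene_name = rep(x = c("IGHV1-10", "IGHV1-2", "IGHV3-23"), times = 2),
  gene_usage_count = c(53, 12, 88, 40, 7, 110))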

3 Model

IgGeneUsage transforms the input data as follows.

First, given \(R\) repertoires with \(G\) genes each, IgGeneUsage generates a gene usage matrix \(Y^{R \times G}\). Row sums in \(Y\) define the total usage (\(N\)) in each repertoire.
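
This reshaping step can be sketched with reshape2 (loaded above); the following is an illustration of the transformation, not the package's internal code:

data("d_zibb_3", package = "IgGeneUsage") # example dataset used later in this vignette
Y <- reshape2::acast(data = d_zibb_3,
                     formula = individual_id ~ gene_name,
                     fun.aggregate = sum,
                     value.var = "gene_usage_count",
                     fill = 0)
N <- rowSums(Y) # total usage per repertoire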

Second, for the analysis of DGU between biological conditions, we use a Bayesian model (\(M\)) for zero-inflated beta-binomial regression. Empirically, we know that Ig gene usage data can be noisy and also not exhaustive, i.e. some Ig genes that are systematically rearranged at low probability might not be sampled, and certain Ig genes are not encoded (or are dysfunctional) in some individuals. \(M\) can fit over-dispersed and zero-inflated Ig gene usage data.

In the output of IgGeneUsage, we report the mean effect size (es or \(\gamma\)) and its 95% highest density interval (HDI). Genes with \(\gamma \neq 0\) (e.g. if 95% HDI of \(\gamma\) excludes 0) are most likely to experience differential usage. Additionally, we report the probability of differential gene usage (\(\pi\)): \[\begin{align} \pi = 2 \cdot \max\left(\int_{\gamma = -\infty}^{0} p(\gamma)\mathrm{d}\gamma, \int_{\gamma = 0}^{\infty} p(\gamma)\mathrm{d}\gamma\right) - 1 \end{align}\] with \(\pi = 1\) for genes with strong differential usage, and \(\pi = 0\) for genes with negligible differential gene usage. Both metrics are computed based on the posterior distribution of \(\gamma\), and are thus related.
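
As an illustration (a sketch, not code from IgGeneUsage), \(\pi\) can be estimated from posterior draws of \(\gamma\):

gamma_draws <- rnorm(n = 4000, mean = 0.4, sd = 0.25) # stand-in for real posterior draws
p_positive <- mean(gamma_draws > 0) # posterior mass above 0
2 * max(p_positive, 1 - p_positive) - 1 # pi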

4 Case Study A: analyzing IRRs

IgGeneUsage has several built-in Ig gene usage datasets. Some were obtained from studies and others were simulated.

Let's look at the simulated dataset d_zibb_3. This dataset was generated by a zero-inflated beta-binomial (ZIBB) model, and IgGeneUsage was designed to fit ZIBB-distributed data.
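
For intuition, a ZIBB draw can be sketched in a few lines of base R (a hypothetical toy generator, not the code that was used to simulate d_zibb_3):

rzibb <- function(n, size, mu, phi, zi) {
  p <- rbeta(n = n, shape1 = mu * phi, shape2 = (1 - mu) * phi) # gene usage probability (mean mu)
  y <- rbinom(n = n, size = size, prob = p) # over-dispersed (beta-binomial) counts
  y * rbinom(n = n, size = 1, prob = 1 - zi) # zero-inflation: count set to 0 with probability zi
}
rzibb(n = 5, size = 1000, mu = 0.05, phi = 50, zi = 0.1)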

data("d_zibb_3", package = "IgGeneUsage")
knitr::kable(head(d_zibb_3))
individual_id gene_name gene_usage_count condition
I_1 G_1 29 C_1
I_1 G_2 135 C_1
I_1 G_3 6 C_1
I_1 G_4 52 C_1
I_1 G_5 68 C_1
I_1 G_6 41 C_1

We can also visualize d_zibb_3 with ggplot:

ggplot(data = d_zibb_3)+
  geom_point(aes(x = gene_name, y = gene_usage_count, col = condition),
             position = position_dodge(width = .7), shape = 21)+
  theme_bw(base_size = 11)+
  ylab(label = "Gene usage [count]")+
  xlab(label = '')+
  theme(legend.position = "top")+
  theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.4))

4.1 DGU analysis

As its main input, IgGeneUsage uses a data.frame formatted like d_zibb_3. Additional input parameters allow you to configure specific settings of the rstan sampler.

In this example, we analyze d_zibb_3 with 3 MCMC chains of 1,500 iterations each, including 300 warm-ups, using a single CPU core (hint: for parallel chain execution set the parameter mcmc_cores = 3). For each model parameter we report its mean and 95% highest density interval (HDI).

Important remark: you should run DGU analyses with the default IgGeneUsage parameters. If warnings or errors are reported with regard to the MCMC sampling, please consult the Stan manual (https://mc-stan.org/misc/warnings.html) and adjust the inputs accordingly. If the warnings persist, please submit an issue with a reproducible script at the Bioconductor support site or on GitHub (https://github.com/snaketron/IgGeneUsage/issues).

M <- DGU(ud = d_zibb_3, # input data
         mcmc_warmup = 300, # how many MCMC warm-ups per chain (default: 500)
         mcmc_steps = 1500, # how many MCMC steps per chain (default: 1,500)
         mcmc_chains = 3, # how many MCMC chains to run (default: 4)
         mcmc_cores = 1, # how many CPU cores to use (for parallel chains)
         hdi_lvl = 0.95, # highest density interval level (default: 0.95)
         adapt_delta = 0.8, # MCMC target acceptance rate (default: 0.95)
         max_treedepth = 10) # tree depth evaluated at each step (default: 12)
FALSE 
FALSE SAMPLING FOR MODEL 'dgu' NOW (CHAIN 1).
FALSE Chain 1:  Elapsed Time: 2.318 seconds (Warm-up)
FALSE Chain 1:                5.521 seconds (Sampling)
FALSE Chain 1:                7.839 seconds (Total)
FALSE 
FALSE SAMPLING FOR MODEL 'dgu' NOW (CHAIN 2).
FALSE Chain 2:  Elapsed Time: 2.311 seconds (Warm-up)
FALSE Chain 2:                5.478 seconds (Sampling)
FALSE Chain 2:                7.789 seconds (Total)
FALSE 
FALSE SAMPLING FOR MODEL 'dgu' NOW (CHAIN 3).
FALSE Chain 3:  Elapsed Time: 2.158 seconds (Warm-up)
FALSE Chain 3:                5.487 seconds (Sampling)
FALSE Chain 3:                7.645 seconds (Total)
FALSE (per-iteration progress output omitted)

4.2 Output format

In the output of DGU, we provide the following objects (a quick way to inspect them is sketched after the list):

  • dgu and dgu_prob (main results of IgGeneUsage): quantitative DGU summary on a log- and probability-scale, respectively.
  • gu: condition-specific relative gene usage (GU) of each gene
  • theta: probabilities of gene usage in each sample
  • ppc: posterior predictive checks data (see section ‘Model checking’)
  • ud: processed Ig gene usage data
  • fit: rstan (‘stanfit’) object of the fitted model \(\rightarrow\) used for model checks (see section ‘Model checking’)
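
A quick sketch of how these objects can be inspected (the column names match the tables shown later in this vignette):

head(M$dgu) # DGU summary: contrast, gene_name, es_mean, es_L, es_H, pmax, ...
head(M$gu) # condition-specific gene usage probabilities
names(M$ppc) # "ppc_rep" and "ppc_condition"
class(M$fit) # "stanfit", used below for model checking
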
summary(M)
FALSE          Length Class      Mode
FALSE dgu       9     data.frame list
FALSE dgu_prob  9     data.frame list
FALSE gu        8     data.frame list
FALSE theta    12     data.frame list
FALSE ppc       2     -none-     list
FALSE ud       24     -none-     list
FALSE fit       1     stanfit    S4

4.3 Model checking

  • Check your model fit. For this, you can use the object fit.

    • Minimal checklist of successful MCMC sampling3 https://mc-stan.org/misc/warnings.html:
      • no divergences
      • no excessive warnings from rstan
      • Rhat < 1.05
      • high Neff
    • Minimal checklist for valid model:
      • posterior predictive checks (PPCs): is model consistent with reality, i.e. is there overlap between simulated and observed data?
      • leave-one-out analysis

4.3.1 MCMC sampling

  • divergences, tree-depth, energy
  • none found
rstan::check_hmc_diagnostics(M$fit)
FALSE 
FALSE Divergences:
FALSE 
FALSE Tree depth:
FALSE 
FALSE Energy:
  • Rhat < 1.05 and sufficiently high n_eff
rstan::stan_rhat(object = M$fit)|rstan::stan_ess(object = M$fit)
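
A numeric version of the same check can be sketched from the rstan summary of the fit (assuming the standard rstan summary columns Rhat and n_eff):

s <- rstan::summary(M$fit)$summary # parameter-wise summary matrix
max(s[, "Rhat"], na.rm = TRUE) # should be < 1.05
min(s[, "n_eff"], na.rm = TRUE) # should be high relative to the number of draws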

4.4 PPC: posterior predictive checks

4.4.1 PPCs: repertoire-specific

The model used by IgGeneUsage is generative, i.e. with the model we can generate the usage of each Ig gene in a given repertoire (y-axis). Error bars show the 95% HDI of the mean posterior prediction. The predictions can be compared with the observed data (x-axis). Points near the diagonal \(\rightarrow\) accurate prediction.
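
Beyond the visual comparison, a simple numeric summary is the fraction of observed counts that fall within their 95% prediction intervals (a sketch based on the ppc_rep columns used in the plot below):

ppc_rep <- M$ppc$ppc_rep
mean(ppc_rep$observed_count >= ppc_rep$ppc_L_count &
     ppc_rep$observed_count <= ppc_rep$ppc_H_count) # values close to 1 indicate good coverage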

ggplot(data = M$ppc$ppc_rep)+
  facet_wrap(facets = ~individual_id, ncol = 5)+
  geom_abline(intercept = 0, slope = 1, linetype = "dashed", col = "darkgray")+
  geom_errorbar(aes(x = observed_count, y = ppc_mean_count, 
                    ymin = ppc_L_count, ymax = ppc_H_count), col = "darkgray")+
  geom_point(aes(x = observed_count, y = ppc_mean_count), size = 1)+
  theme_bw(base_size = 11)+
  theme(legend.position = "top")+
  xlab(label = "Observed usage [counts]")+
  ylab(label = "PPC usage [counts]")

4.4.2 PPCs: overall

Prediction of the overall gene usage within a biological condition is also possible. We show the predictions of the model (y-axis) and compare them against the observed mean usage (x-axis). Points near the diagonal \(\rightarrow\) accurate prediction. Error bars are 95% HDIs of the mean.

ggplot(data = M$ppc$ppc_condition)+
  geom_errorbar(aes(x = gene_name, ymin = ppc_L_prop*100, 
                    ymax = ppc_H_prop*100, col = condition), 
                position = position_dodge(width = 0.65), width = 0.1)+
  geom_point(aes(x = gene_name, y = ppc_mean_prop*100,col = condition), 
                position = position_dodge(width = 0.65))+
  theme_bw(base_size = 11)+
  theme(legend.position = "top")+
  xlab(label = "Observed usage [%]")+
  ylab(label = "PPC usage [%]")+
  theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.4))

4.5 Results

Each row of dgu summarizes the degree of DGU observed for a specific Ig gene. Two metrics are reported:

  • es (also referred to as \(\gamma\)): effect size on DGU, where contrast gives the direction of the effect (e.g. tumor - healthy or healthy - tumor)
  • pmax (also referred to as \(\pi\)): probability of DGU (parameter \(\pi\) from model \(M\))

For es we also report the mean (es_mean), standard error of the mean (es_mean_se), standard deviation (es_sd), median (es_median), and the low/high bounds of the 95% HDI (es_L and es_H).
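
For instance, to rank genes by their probability of differential usage (a sketch using the column names shown in the table below):

dgu <- M$dgu
head(dgu[order(dgu$pmax, decreasing = TRUE),
         c("contrast", "gene_name", "es_mean", "es_L", "es_H", "pmax")])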

kable(x = head(M$dgu), row.names = FALSE, digits = 2)
es_mean es_mean_se es_sd es_median es_L es_H contrast gene_name pmax
0.19 0.01 0.30 0.12 -0.23 0.98 C_1-vs-C_2 G_1 0.50
-0.01 0.00 0.20 0.00 -0.43 0.39 C_1-vs-C_2 G_4 0.01
-0.08 0.00 0.23 -0.05 -0.60 0.34 C_1-vs-C_2 G_3 0.26
-0.06 0.00 0.17 -0.05 -0.44 0.28 C_1-vs-C_2 G_2 0.28
0.06 0.00 0.18 0.04 -0.27 0.44 C_1-vs-C_2 G_5 0.25
-0.03 0.00 0.19 -0.02 -0.45 0.35 C_1-vs-C_2 G_8 0.14

4.5.1 DGU: differential gene usage

We know that the values of \(\gamma\) and \(\pi\) are related to each other. Let's visualize them for all genes (each shown as a point). Names are shown for genes associated with \(\pi \geq 0.95\). The dashed horizontal line represents the null effect (\(\gamma = 0\)).

Notice that the gene with \(\pi \approx 1\) also has an effect size whose 95% HDI (error bar) does not overlap the null effect. Genes with a high degree of differential usage are easy to detect in this figure.

# format data
stats <- M$dgu
stats <- stats[order(abs(stats$es_mean), decreasing = FALSE), ]
stats$gene_fac <- factor(x = stats$gene_name, levels = unique(stats$gene_name))


ggplot(data = stats)+
  geom_hline(yintercept = 0, linetype = "dashed", col = "gray")+
  geom_errorbar(aes(x = pmax, y = es_mean, ymin = es_L, ymax = es_H), 
                col = "darkgray")+
  geom_point(aes(x = pmax, y = es_mean, col = contrast))+
  geom_text_repel(data = stats[stats$pmax >= 0.95, ],
                  aes(x = pmax, y = es_mean, label = gene_fac),
                  min.segment.length = 0, size = 2.75)+
  theme_bw(base_size = 11)+
  theme(legend.position = "top")+
  xlab(label = expression(pi))+
  xlim(c(0, 1))+
  ylab(expression(gamma))

4.5.2 Promising hits

Let's visualize the observed data of the genes with a high probability of differential gene usage (\(\pi \geq 0.95\)). Here we show the gene usage in %.

promising_genes <- stats$gene_name[stats$pmax >= 0.95]

ppc_gene <- M$ppc$ppc_condition
ppc_gene <- ppc_gene[ppc_gene$gene_name %in% promising_genes, ]

ppc_rep <- M$ppc$ppc_rep
ppc_rep <- ppc_rep[ppc_rep$gene_name %in% promising_genes, ]



ggplot()+
  geom_point(data = ppc_rep,
             aes(x = gene_name, y = observed_prop*100, col = condition),
             size = 1, fill = "black",
             position = position_jitterdodge(jitter.width = 0.1, 
                                             jitter.height = 0, 
                                             dodge.width = 0.35))+
  geom_errorbar(data = ppc_gene, 
                aes(x = gene_name, ymin = ppc_L_prop*100, 
                    ymax = ppc_H_prop*100, group = condition),
                position = position_dodge(width = 0.35), width = 0.15)+
  theme_bw(base_size = 11)+
  theme(legend.position = "top")+
  theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.4))+
  ylab(label = "PPC usage [%]")+
  xlab(label = '')

4.5.3 Promising hits [count]

Let's also visualize the predicted gene usage counts in each repertoire.

ggplot()+
  geom_point(data = ppc_rep,
             aes(x = gene_name, y = observed_count, col = condition),
             size = 1, fill = "black",
             position = position_jitterdodge(jitter.width = 0.1, 
                                             jitter.height = 0, 
                                             dodge.width = 0.5))+
  theme_bw(base_size = 11)+
  theme(legend.position = "top")+
  ylab(label = "PPC usage [count]")+
  xlab(label = '')+
  theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.4))

4.6 GU: gene usage summary

IgGeneUsage also reports the inferred gene usage (GU) probability of each gene in each condition. For a given gene we report its mean GU (prob_mean) and the corresponding HDI (prob_L and prob_H; here 95%, as set by hdi_lvl).

ggplot(data = M$gu)+
  geom_errorbar(aes(x = gene_name, y = prob_mean, ymin = prob_L,
                    ymax = prob_H, col = condition),
                width = 0.1, position = position_dodge(width = 0.4))+
  geom_point(aes(x = gene_name, y = prob_mean, col = condition), size = 1,
             position = position_dodge(width = 0.4))+
  theme_bw(base_size = 11)+
  theme(legend.position = "top")+
  ylab(label = "GU [probability]")+
  theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.4))

5 Leave-one-out (LOO) analysis

To assess the robustness of the probability of DGU (\(\pi\)) and the effect size (\(\gamma\)), IgGeneUsage has a built-in procedure for fully Bayesian leave-one-out (LOO) analysis.

During each step of LOO, we discard the data of one of the \(R\) repertoires and use the remaining data to analyze for DGU. In each step we record \(\pi\) and \(\gamma\) for all genes, including the mean and 95% HDI of \(\gamma\). We then assess the robustness of \(\pi\) and \(\gamma\) quantitatively by evaluating their variability across the LOO steps for a specific gene.
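
Conceptually, each LOO step simply refits the DGU model after dropping one repertoire; a simplified sketch of this idea (not the actual implementation) is:

loo_ids <- unique(d_zibb_3$individual_id)
L_manual <- lapply(X = loo_ids, FUN = function(left_out) {
  DGU(ud = d_zibb_3[d_zibb_3$individual_id != left_out, ],
      mcmc_chains = 1, mcmc_cores = 1)
})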

This analysis can be computationally demanding.

L <- LOO(ud = d_zibb_3, # input data
         mcmc_warmup = 500, # how many MCMC warm-ups per chain (default: 500)
         mcmc_steps = 1000, # how many MCMC steps per chain (default: 1,500)
         mcmc_chains = 1, # how many MCMC chains to run (default: 4)
         mcmc_cores = 1, # how many CPU cores to use (for parallel chains)
         hdi_lvl = 0.95, # highest density interval level (default: 0.95)
         adapt_delta = 0.8, # MCMC target acceptance rate (default: 0.95)
         max_treedepth = 10) # tree depth evaluated at each step (default: 12)
FALSE 
FALSE SAMPLING FOR MODEL 'dgu' NOW (CHAIN 1).
FALSE Chain 1:  Elapsed Time: 3.143 seconds (Warm-up)
FALSE Chain 1:                2.163 seconds (Sampling)
FALSE Chain 1:                5.306 seconds (Total)
FALSE (analogous sampling output for the remaining leave-one-out runs omitted)

Next, we collect the results (GU and DGU) from each LOO iteration:

L_gu <- do.call(rbind, lapply(X = L, FUN = function(x){return(x$gu)}))
L_dgu <- do.call(rbind, lapply(X = L, FUN = function(x){return(x$dgu)}))

… and plot them:

5.1 LOO-DGU: variability of effect size \(\gamma\)

ggplot(data = L_dgu)+
  facet_wrap(facets = ~contrast, ncol = 1)+
  geom_hline(yintercept = 0, linetype = "dashed", col = "gray")+
  geom_errorbar(aes(x = gene_name, y = es_mean, ymin = es_L,
                    ymax = es_H, col = contrast, group = loo_id),
                width = 0.1, position = position_dodge(width = 0.75))+
  geom_point(aes(x = gene_name, y = es_mean, col = contrast,
                 group = loo_id), size = 1,
             position = position_dodge(width = 0.75))+
  theme_bw(base_size = 11)+
  theme(legend.position = "none")+
  ylab(expression(gamma))

5.2 LOO-DGU: variability of \(\pi\)

ggplot(data = L_dgu)+
  facet_wrap(facets = ~contrast, ncol = 1)+
  geom_point(aes(x = gene_name, y = pmax, col = contrast,
                 group = loo_id), size = 1,
             position = position_dodge(width = 0.5))+
  theme_bw(base_size = 11)+
  theme(legend.position = "none")+
  ylab(expression(pi))

5.3 LOO-GU: variability of the gene usage

ggplot(data = L_gu)+
  geom_hline(yintercept = 0, linetype = "dashed", col = "gray")+
  geom_errorbar(aes(x = gene_name, y = prob_mean, ymin = prob_L,
                    ymax = prob_H, col = condition, 
                    group = interaction(loo_id, condition)),
                width = 0.1, position = position_dodge(width = 1))+
  geom_point(aes(x = gene_name, y = prob_mean, col = condition,
                 group = interaction(loo_id, condition)), size = 1,
             position = position_dodge(width = 1))+
  theme_bw(base_size = 11)+
  theme(legend.position = "top")+
  ylab("GU [probability]")+
  theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.4))

6 Case Study B: analyzing IRRs containing biological replicates

data("d_zibb_4", package = "IgGeneUsage")
knitr::kable(head(d_zibb_4))
individual_id condition gene_name replicate gene_usage_count
I_1 C_1 G_1 R_1 29
I_1 C_1 G_2 R_1 66
I_1 C_1 G_3 R_1 285
I_1 C_1 G_4 R_1 20
I_1 C_1 G_5 R_1 38
I_1 C_1 G_6 R_1 709

We can also visualize d_zibb_4 with ggplot:

ggplot(data = d_zibb_4)+
  geom_point(aes(x = gene_name, y = gene_usage_count, col = condition, 
                 shape = replicate), position = position_dodge(width = 0.8))+
  theme_bw(base_size = 11)+
  ylab(label = "Gene usage [count]")+
  xlab(label = '')+
  theme(legend.position = "top")+
  theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.4))

6.1 Modeling

M <- DGU(ud = d_zibb_4, # input data
         mcmc_warmup = 500, # how many MCMC warm-ups per chain (default: 500)
         mcmc_steps = 1500, # how many MCMC steps per chain (default: 1,500)
         mcmc_chains = 2, # how many MCMC chains to run (default: 4)
         mcmc_cores = 1, # how many CPU cores to use (for parallel chains)
         hdi_lvl = 0.95, # highest density interval level (default: 0.95)
         adapt_delta = 0.8, # MCMC target acceptance rate (default: 0.95)
         max_treedepth = 10) # tree depth evaluated at each step (default: 12)
FALSE 
FALSE SAMPLING FOR MODEL 'dgu_rep' NOW (CHAIN 1).
FALSE Chain 1:  Elapsed Time: 43.23 seconds (Warm-up)
FALSE Chain 1:                93.777 seconds (Sampling)
FALSE Chain 1:                137.007 seconds (Total)
FALSE 
FALSE SAMPLING FOR MODEL 'dgu_rep' NOW (CHAIN 2).
FALSE Chain 2:  Elapsed Time: 49.023 seconds (Warm-up)
FALSE Chain 2:                93.01 seconds (Sampling)
FALSE Chain 2:                142.033 seconds (Total)
FALSE (per-iteration progress output omitted)

6.2 Posterior predictive checks

ggplot(data = M$ppc$ppc_rep)+
  facet_wrap(facets = ~individual_id, ncol = 3)+
  geom_abline(intercept = 0, slope = 1, linetype = "dashed", col = "darkgray")+
  geom_errorbar(aes(x = observed_count, y = ppc_mean_count, 
                    ymin = ppc_L_count, ymax = ppc_H_count), col = "darkgray")+
  geom_point(aes(x = observed_count, y = ppc_mean_count), size = 1)+
  theme_bw(base_size = 11)+
  theme(legend.position = "top")+
  xlab(label = "Observed usage [counts]")+
  ylab(label = "PPC usage [counts]")

6.3 Analysis of estimated effect sizes

The top panel shows the average gene usage (GU) in the different biological conditions. The bottom panel shows the differential gene usage (DGU) between pairs of biological conditions.

g1 <- ggplot(data = M$gu)+
  geom_errorbar(aes(x = gene_name, y = prob_mean, ymin = prob_L,
                    ymax = prob_H, col = condition),
                width = 0.1, position = position_dodge(width = 0.4))+
  geom_point(aes(x = gene_name, y = prob_mean, col = condition), size = 1,
             position = position_dodge(width = 0.4))+
  theme_bw(base_size = 11)+
  theme(legend.position = "top")+
  ylab(label = "GU [probability]")+
  theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.4))


stats <- M$dgu
stats <- stats[order(abs(stats$es_mean), decreasing = FALSE), ]
stats$gene_fac <- factor(x = stats$gene_name, levels = unique(stats$gene_name))

g2 <- ggplot(data = stats)+
  facet_wrap(facets = ~contrast)+
  geom_hline(yintercept = 0, linetype = "dashed", col = "gray")+
  geom_errorbar(aes(x = pmax, y = es_mean, ymin = es_L, ymax = es_H), 
                col = "darkgray")+
  geom_point(aes(x = pmax, y = es_mean, col = contrast))+
  geom_text_repel(data = stats[stats$pmax >= 0.95, ],
                  aes(x = pmax, y = es_mean, label = gene_fac),
                  min.segment.length = 0, size = 2.75)+
  theme_bw(base_size = 11)+
  theme(legend.position = "top")+
  xlab(label = expression(pi))+
  xlim(c(0, 1))+
  ylab(expression(gamma))
(g1/g2)

7 Session

sessionInfo()
FALSE R version 4.4.0 beta (2024-04-15 r86425)
FALSE Platform: x86_64-pc-linux-gnu
FALSE Running under: Ubuntu 22.04.4 LTS
FALSE 
FALSE Matrix products: default
FALSE BLAS:   /home/biocbuild/bbs-3.19-bioc/R/lib/libRblas.so 
FALSE LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0
FALSE 
FALSE locale:
FALSE  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
FALSE  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
FALSE  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
FALSE  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
FALSE  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
FALSE [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
FALSE 
FALSE time zone: America/New_York
FALSE tzcode source: system (glibc)
FALSE 
FALSE attached base packages:
FALSE [1] stats     graphics  grDevices utils     datasets  methods   base     
FALSE 
FALSE other attached packages:
FALSE  [1] patchwork_1.2.0    reshape2_1.4.4     ggrepel_0.9.5      ggforce_0.4.2     
FALSE  [5] ggplot2_3.5.1      knitr_1.46         rstan_2.32.6       StanHeaders_2.32.7
FALSE  [9] IgGeneUsage_1.18.0 BiocStyle_2.32.0  
FALSE 
FALSE loaded via a namespace (and not attached):
FALSE  [1] tidyselect_1.2.1            dplyr_1.1.4                
FALSE  [3] farver_2.1.1                loo_2.7.0                  
FALSE  [5] fastmap_1.1.1               tweenr_2.0.3               
FALSE  [7] digest_0.6.35               lifecycle_1.0.4            
FALSE  [9] magrittr_2.0.3              compiler_4.4.0             
FALSE [11] rlang_1.1.3                 sass_0.4.9                 
FALSE [13] tools_4.4.0                 utf8_1.2.4                 
FALSE [15] yaml_2.3.8                  S4Arrays_1.4.0             
FALSE [17] labeling_0.4.3              pkgbuild_1.4.4             
FALSE [19] curl_5.2.1                  DelayedArray_0.30.0        
FALSE [21] plyr_1.8.9                  abind_1.4-5                
FALSE [23] withr_3.0.0                 purrr_1.0.2                
FALSE [25] BiocGenerics_0.50.0         grid_4.4.0                 
FALSE [27] polyclip_1.10-6             stats4_4.4.0               
FALSE [29] fansi_1.0.6                 colorspace_2.1-0           
FALSE [31] inline_0.3.19               scales_1.3.0               
FALSE [33] MASS_7.3-60.2               tinytex_0.50               
FALSE [35] SummarizedExperiment_1.34.0 cli_3.6.2                  
FALSE [37] rmarkdown_2.26              crayon_1.5.2               
FALSE [39] generics_0.1.3              RcppParallel_5.1.7         
FALSE [41] httr_1.4.7                  cachem_1.0.8               
FALSE [43] stringr_1.5.1               zlibbioc_1.50.0            
FALSE [45] parallel_4.4.0              BiocManager_1.30.22        
FALSE [47] XVector_0.44.0              matrixStats_1.3.0          
FALSE [49] vctrs_0.6.5                 V8_4.4.2                   
FALSE [51] Matrix_1.7-0                jsonlite_1.8.8             
FALSE [53] bookdown_0.39               IRanges_2.38.0             
FALSE [55] S4Vectors_0.42.0            magick_2.8.3               
FALSE [57] jquerylib_0.1.4             tidyr_1.3.1                
FALSE [59] glue_1.7.0                  codetools_0.2-20           
FALSE [61] stringi_1.8.3               gtable_0.3.5               
FALSE [63] GenomeInfoDb_1.40.0         QuickJSR_1.1.3             
FALSE [65] GenomicRanges_1.56.0        UCSC.utils_1.0.0           
FALSE [67] munsell_0.5.1               tibble_3.2.1               
FALSE [69] pillar_1.9.0                htmltools_0.5.8.1          
FALSE [71] GenomeInfoDbData_1.2.12     R6_2.5.1                   
FALSE [73] evaluate_0.23               lattice_0.22-6             
FALSE [75] Biobase_2.64.0              highr_0.10                 
FALSE [77] bslib_0.7.0                 rstantools_2.4.0           
FALSE [79] Rcpp_1.0.12                 gridExtra_2.3              
FALSE [81] SparseArray_1.4.0           xfun_0.43                  
FALSE [83] MatrixGenerics_1.16.0       pkgconfig_2.0.3