Simplify Functional Enrichment Results

Zuguang Gu (z.gu@dkfz.de)

2020-10-27

The simplifyEnrichment package clusters functional terms into groups by clustering the similarity matrix of the terms with a newly proposed method, "binary cut". Binary cut recursively applies partitioning around medoids (PAM) with two groups on the similarity matrix; in each iteration, a score is assigned to decide whether the group of gene sets corresponding to the current sub-matrix should be split further. For more details of the method, please refer to the simplifyEnrichment paper.
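As an illustration only, the recursive idea can be sketched in a few lines of R with cluster::pam(). The stopping score below (mean within-group minus mean between-group similarity, with an arbitrary threshold) is a simplified stand-in, not the score actually used by the package:

```r
library(cluster)  # for pam()

# Toy sketch of the binary cut idea: recursively split the terms into two
# groups with PAM and stop when a simple homogeneity score suggests the
# current block should not be split further.
binary_cut_sketch = function(sim, idx = seq_len(nrow(sim)), min_size = 5) {
    if (length(idx) <= min_size) return(list(idx))
    g = cluster::pam(as.dist(1 - sim[idx, idx]), k = 2)$clustering
    i1 = idx[g == 1]; i2 = idx[g == 2]
    within  = mean(c(sim[i1, i1], sim[i2, i2]))
    between = mean(sim[i1, i2])
    # stop when the two candidate groups are nearly as similar to each other
    # as they are internally (0.1 is an arbitrary threshold for this sketch)
    if (within - between < 0.1) return(list(idx))
    c(binary_cut_sketch(sim, i1, min_size), binary_cut_sketch(sim, i2, min_size))
}
```

On a block-structured similarity matrix this recursion recovers the blocks; the real implementation differs in the split score and in post-processing of small clusters.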

Simplify GO enrichment results

The major use case for simplifyEnrichment is simplifying GO enrichment results by clustering the corresponding semantic similarity matrix of the significant GO terms. To demonstrate the usage, we first generate a list of random GO IDs from the Biological Process (BP) ontology:

library(simplifyEnrichment)
set.seed(888)
go_id = random_GO(500)

simplifyEnrichment starts with a GO similarity matrix. Users can provide their own similarity matrix or use the GO_similarity() function to calculate the semantic similarity matrix. GO_similarity() is simply a wrapper around GOSemSim::termSim(). The function accepts a vector of GO IDs. Note that all GO terms must belong to the same ontology (i.e., BP, CC or MF).

mat = GO_similarity(go_id)

By default, GO_similarity() uses the Rel method in GOSemSim::termSim(). Other methods for calculating GO similarities can be set via the measure argument, e.g.:

GO_similarity(go_id, measure = "Wang")

With the similarity matrix mat, users can directly apply the simplifyGO() function to perform the clustering and visualize the results.

df = simplifyGO(mat)
## Cluster 500 terms by 'binary_cut'... 43 clusters, used 2.679589 secs.

On the right side of the heatmap are word cloud annotations which summarize the functions with keywords in every GO cluster. Note there is no word cloud for the cluster that is merged from small clusters (size < 5).

The returned variable df is a data frame with GO IDs, GO terms and the cluster labels:

head(df)
##           id                                           term cluster
## 1 GO:0003283                      atrial septum development       1
## 2 GO:0022018 lateral ganglionic eminence cell proliferation       1
## 3 GO:0030032                         lamellipodium assembly       2
## 4 GO:0061508                            CDP phosphorylation       3
## 5 GO:1901222          regulation of NIK/NF-kappaB signaling       4
## 6 GO:0060164 regulation of timing of neuron differentiation       1

The sizes of the GO clusters can be retrieved by:

sort(table(df$cluster))
## 
##   5   7   8  10  12  13  17  18  21  22  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
##  39  40  41  42  43  20  23   9  16  19  15  14  11   2   6   4   1   3 
##   1   1   1   1   1   2   2   3   5   5   6  10  12  37  45  97 114 132

Or split the data frame by the cluster labels:

split(df, df$cluster)

The plot argument can be set to FALSE in simplifyGO() (i.e., simplifyGO(mat, plot = FALSE)), so that no plot is generated and only the data frame is returned.

If the aim is only to cluster the GO terms, the binary_cut() or cluster_terms() function can be applied directly:

binary_cut(mat)
##   [1]  1  1  2  3  4  1  2  3  1  3  5  6  1  3  1  3  6  7  3  3  4  8  4  4  1  1  3  6  3  1  4
##  [32]  3  3  2  3  4  3  4  4  2  2  4  6  6  9  2  6  2  2  3  3  4  2  2  4  6  3  3  4  3  4  3
##  [63]  3  3 10  3  1 11  3  1  6  4  6  3 12  1 13  4 14  2  4 11  6  1  3  4  1  4  4  4 15  3  6
##  [94]  3  3  3  4  3 14  6  3  4 16  6  1  2  2  2 11  4  3  3  3 17  1  4  1  3  6  3  1  3  1  3
## [125]  4  3 16  4  6  4  3  9  3  3  3  3  1  2  3  4  3  1  3  3  3 18  1  2  3  1  3 19  1  3  4
## [156]  1  1  1  1  4  3 20 15  2  3  1  1  1  1  1 21  4  1  4  6  4  4  3  1  4  1  4 11 11 11  1
## [187]  4 11  1  3  6  2  3  1 22  1  3  6  1 14  1  3  4  4  4  2  4  6  3  3  1  3  1  6  3  4  4
## [218] 11  4  1  1 23  1 24  6  4  3  2  1  1  3  1  1  6  1  1  1  4  6  3  4  3  4 16  3  1  1  4
## [249] 25  4  4  4  1  1 26  4  4  4  6  4  1  3  3  3  2 19  4  3  4 27  4  4  2  6  3  3 11  3  6
## [280]  2 16  3  3  2  1  1  6  6  6  3  3  3  1  3  4  3  1  3  1 14 15 28  2 20  1  3  1  1  1  4
## [311]  1  3  3  4  1 19 29  4  4  4  1  4  6  4 30  3  3  6  1  1  3  1  4  2  1  3  3  3  3 19  6
## [342] 14  3  1  1 11  1  1  4 14  6 11  3  3  4  3  3  2  1 14  1  1  4  1  2 31  1  1  4  1  3  4
## [373]  3  1  3 32  1  1  3  1  3  6  3  3  3  1 19  6  3 11  3  1  3  3 33  2  1  4  4  1  6  1  4
## [404]  2 15  6  4  2  3  4  4  4 14  3  3  4  4  1  1  3  6  2  3  2  1  4 34  1  3  1  1 23  3  6
## [435]  4  9  1  1  6  4  1 35 36 37 38  1  1  1  4  2  1 14 15  4  3 14  3  3  6  2  1 15 16  3  4
## [466]  3  3  4  3  3  4  1 39  6  4  3  3  4  3  4  6  2  2  3 40  4  6  4 41  3  1  3  1  3  1  6
## [497] 42 43  4  1

or

cluster_terms(mat, method = "binary_cut")

binary_cut() and cluster_terms() generate essentially the same clusterings, but the labels of the clusters might differ.
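One way to verify that two label vectors describe the same partition up to renaming is a cross-tabulation: every cluster from one labeling should map to exactly one cluster of the other. A small base-R helper (same_partition() is our own name here, not a package function):

```r
# Do two label vectors describe the same partition, up to renaming of labels?
# In the cross-tabulation, each non-empty row and column must have exactly
# one non-zero cell.
same_partition = function(cl1, cl2) {
    tb = table(cl1, cl2)
    all(rowSums(tb > 0) == 1) && all(colSums(tb > 0) == 1)
}

# e.g. same_partition(binary_cut(mat), cluster_terms(mat, method = "binary_cut"))
```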

Simplify general functional enrichment results

Semantic measures can only be used for the similarity of GO terms. However, many other gene set collections (e.g., MSigDB gene sets) are represented only as lists of genes, where the similarity between gene sets is mainly measured by gene overlap. simplifyEnrichment provides term_similarity() and related functions (term_similarity_from_enrichResult(), term_similarity_from_KEGG(), term_similarity_from_Reactome(), term_similarity_from_MSigDB() and term_similarity_from_gmt()) which calculate the similarity of terms by gene overlap, using the Jaccard coefficient, Dice coefficient, overlap coefficient or kappa coefficient.

The similarity can be calculated by providing:

  1. A list of gene sets where each gene set contains a vector of genes.
  2. An enrichResult object, normally from the 'clusterProfiler', 'DOSE', 'meshes' or 'ReactomePA' package.
  3. A list of KEGG/Reactome/MSigDB IDs. The gene set names can also be provided for MSigDB collections.
  4. A gmt file and the corresponding gene set IDs.

Once you have the similarity matrix, you can send it to the simplifyEnrichment() function. Note, however, that as we benchmarked in the manuscript, clustering on gene overlap similarity performs much worse than on semantic similarity.
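For reference, the first three overlap-based coefficients are simple set formulas (the kappa coefficient additionally needs a background gene universe and is omitted here; the gene names below are made up for illustration):

```r
set1 = c("TP53", "MYC", "EGFR", "KRAS")
set2 = c("MYC", "KRAS", "BRCA1")

ov = length(intersect(set1, set2))                # size of the overlap: 2
jaccard = ov / length(union(set1, set2))          # |A n B| / |A u B|       = 2/5
dice    = 2 * ov / (length(set1) + length(set2))  # 2|A n B| / (|A| + |B|)  = 4/7
overlap = ov / min(length(set1), length(set2))    # |A n B| / min(|A|, |B|) = 2/3
```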

Comparing clustering methods

The simplifyEnrichment package also provides functions that compare clustering results from different methods. Here we again use the previously generated variable mat, the similarity matrix of the 500 random GO terms. Simply running the compare_clustering_methods() function applies all supported methods (listed by all_clustering_methods()) except mclust, because mclust usually takes a very long time to run. The function generates a figure with three panels:

  1. A heatmap of the similarity matrix with different clusterings as row annotations.
  2. A heatmap of the pair-wise concordance between the clusterings from every two methods.
  3. Barplots of the difference scores for each method, the numbers of clusters (total clusters and clusters with size >= 5) and the mean similarity of the terms that are in the same cluster (block mean).

In the barplots, the three metrics are defined as follows:

  1. Difference score: the difference between the similarity values for terms that belong to the same cluster and terms that belong to different clusters. For a similarity matrix \(M\), and terms \(i\) and \(j\) where \(i \ne j\), the similarity value \(x_{i,j}\) is saved to the vector \(\mathbf{x_1}\) when terms \(i\) and \(j\) are in the same cluster, and to the vector \(\mathbf{x_2}\) when they are in different clusters. The difference score is the Kolmogorov-Smirnov statistic between the distributions of \(\mathbf{x_1}\) and \(\mathbf{x_2}\).
  2. Number of clusters: for each clustering there are two numbers: the number of total clusters and the number of clusters with size >= 5 (only the big clusters).
  3. Block mean: the mean similarity value of the blocks in the similarity heatmap. Using similar notation as for the difference score, but with \(i\) allowed to equal \(j\) (i.e., values on the diagonal are also used), the similarity value \(x_{i,j}\) is saved to the vector \(\mathbf{x_3}\) when terms \(i\) and \(j\) are in the same cluster. The block mean is the mean of \(\mathbf{x_3}\).
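The two matrix-based metrics can be written out directly in base R. The sketch below (our own helper names, not package functions) takes a similarity matrix m and a vector of cluster labels cl:

```r
# Difference score: Kolmogorov-Smirnov statistic between within-cluster and
# between-cluster similarity values (off-diagonal pairs only)
difference_score_sketch = function(m, cl) {
    same = outer(cl, cl, "==")     # TRUE where terms i and j share a cluster
    offdiag = row(m) != col(m)
    x1 = m[same & offdiag]         # pairs in the same cluster, i != j
    x2 = m[!same]                  # pairs in different clusters
    suppressWarnings(unname(ks.test(x1, x2)$statistic))
}

# Block mean: mean similarity of within-cluster pairs, diagonal included
block_mean_sketch = function(m, cl) {
    mean(m[outer(cl, cl, "==")])
}
```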

compare_clustering_methods(mat)
## Cluster 500 terms by 'binary_cut'... 43 clusters, used 2.1767 secs.
## Cluster 500 terms by 'kmeans'... 17 clusters, used 6.322072 secs.
## Cluster 500 terms by 'dynamicTreeCut'... 58 clusters, used 0.27562 secs.
## Cluster 500 terms by 'apcluster'... 39 clusters, used 2.080923 secs.
## Cluster 500 terms by 'hdbscan'... 13 clusters, used 0.5381405 secs.
## Cluster 500 terms by 'fast_greedy'... 29 clusters, used 0.2463009 secs.
## Cluster 500 terms by 'leading_eigen'... 30 clusters, used 0.4711776 secs.
## Cluster 500 terms by 'louvain'... 29 clusters, used 0.211499 secs.
## Cluster 500 terms by 'walktrap'... 26 clusters, used 0.4840653 secs.
## Cluster 500 terms by 'MCL'... 28 clusters, used 4.431322 secs.

If the plot_type argument is set to "heatmap", heatmaps of the similarity matrix under the different clustering methods are generated. The last panel is a table with the numbers of clusters.

compare_clustering_methods(mat, plot_type = "heatmap")
## Cluster 500 terms by 'binary_cut'... 43 clusters, used 1.645315 secs.
## Cluster 500 terms by 'kmeans'... 17 clusters, used 5.534831 secs.
## Cluster 500 terms by 'dynamicTreeCut'... 58 clusters, used 0.2569602 secs.
## Cluster 500 terms by 'apcluster'... 39 clusters, used 1.040525 secs.
## Cluster 500 terms by 'hdbscan'... 13 clusters, used 0.2993989 secs.
## Cluster 500 terms by 'fast_greedy'... 29 clusters, used 0.1341319 secs.
## Cluster 500 terms by 'leading_eigen'... 30 clusters, used 0.4206941 secs.
## Cluster 500 terms by 'louvain'... 29 clusters, used 0.1984208 secs.
## Cluster 500 terms by 'walktrap'... 26 clusters, used 0.4869998 secs.
## Cluster 500 terms by 'MCL'... 28 clusters, used 3.226906 secs.