
Multilayer networks for text analysis with multiple data types


We are interested in the widespread problem of clustering documents and finding topics in large collections of written documents in the presence of metadata and hyperlinks. To account for these different types of data within a single framework, we propose a novel approach based on multilayer networks and Stochastic Block Models. The main innovation of our approach over other techniques is that it applies the same non-parametric probabilistic framework to the different sources of data simultaneously. The key difference to other multilayer complex networks is the strong imbalance between the layers, with the average degree of different node types scaling differently with system size. We show that this imbalance follows from generic properties of text, such as Heaps’ law, and strongly affects the inference of communities. We present and discuss the performance of our method on different datasets (hundreds of Wikipedia documents, thousands of scientific papers, and thousands of E-mails), showing that taking multiple types of information into account provides a more nuanced view of topic and document clusters and increases the ability to predict missing links.

1 Introduction

A widespread problem in modern Data Science is how to combine multiple data types such as images, text, and numbers in a meaningful framework [1–5]. The traditional approach to this challenge is to construct machine-learning pipelines in which each data type is treated separately—sequentially or in parallel—and the partial results are combined at the end of the procedure [6, 7]. There are two problems with such a procedure. First, it leads to the development of ad-hoc solutions that are highly contingent on the dataset in question [8, 9]. Second, each model is trained independently of the others, meaning that the relationships between the different types of data are not taken into account [10, 11]. These problems show the need for a unified statistical framework applied simultaneously to the different types of data [12].

In this paper, we investigate the problem of clustering and finding topics in collections of written documents for which additional information is available as metadata and as hyperlinks between documents. We obtain a unified statistical framework for this problem by mapping it to the problem of inferring groups in multilayer networks. The key design of the unified framework proposed here is inspired by the connections [3, 13–15] between the problems of identifying (i) topics in a collection of written documents (i.e. topic modeling) [16] and (ii) communities in complex networks (i.e. community detection) [17]. In particular, Ref. [15] shows that both problems can be tackled using Stochastic Block Models (SBMs) [3, 12, 18–22] and that SBMs, previously applied to find communities in complex networks, outperform and overcome many of the difficulties of the most popular unsupervised methods to infer structures from large collections of texts (topic-modelling methods such as Latent Dirichlet Allocation [23] and its generalizations). However, these approaches have been applied only to the textual part of collections of documents, ignoring additional information available about them. For instance, in datasets of scientific publications, one would consider only the text of the articles but not the citation network (used in traditional community detection methods [17]) or other metadata (such as the journal or bibliographical classification) [24, 25]. We propose here an extension of Ref. [15] and show how the diversity of information typically available about documents can be incorporated in the same framework by using multilayer SBMs [3, 10, 11]. As illustrated in Fig. 1, in addition to the bipartite Document-Word layer discussed in Ref. [15], here we incorporate a Hyperlink layer connecting the different written documents and a Metadata-Document layer that incorporates tags and other metadata. The key difference to other multilayer networks [4], as explored in Sect. 2 below, is that statistical laws [26] governing the frequency of words in documents leave fingerprints on the density of the different network layers. Our investigations of different datasets, reported in Sect. 3 for a collection of Wikipedia articles and in the Supplementary Information for three other datasets, reveal that the proposed multilayer approach leads to improved results when compared to both the topic-modelling approach of Ref. [15] and the usual community detection on (hyperlink) networks. Our approach leads to a more nuanced view of the communities of documents, generates a list of topics associated with the communities, and improves link prediction when compared to using the hyperlink network alone [27]. The details of our methods can be found in the appendices, the Supplementary Information, and the repository [28].

Figure 1

Different views on the relationship between written documents. Lower layer: a bipartite multi-graph of documents (circles) and word types (triangles), links correspond to word tokens. Middle layer: directed graph of documents (e.g., hyperlinks in Wikipedia or citations in Scientific Papers). Upper layer: a bipartite graph of documents and tags (squares), used to classify the documents

2 Multiple data sources as multilayer networks

In this section we introduce the general methodology of our paper: we introduce the types of data we are interested in (Sect. 2.1), we show how they can be represented as a multilayer network and discuss the properties of these networks (Sect. 2.2), and we describe how they can be modelled using Stochastic Block Models (Sect. 2.3).

2.1 Setting: multiple data sources

We consider a collection of \(d=1, \ldots,D\) documents and we are interested in clustering and finding underlying similarities between them using combinations of the following information:

Text (T):

Each document contains \(k_{d}\) word tokens from a vocabulary of V word types (\(M=\sum_{d} k_{d}\) is the total number of word tokens).

Hyperlinks (H):

Documents are linked to each other, forming a (directed) graph or network.

Metadata (M):

Documents are classified by tags or other metadata.

These characteristics are typical of textual data and networks. Here we explore three types of such datasets, summarized in Table 1. The main dataset we use to illustrate our results was extracted from the English Wikipedia, where the documents are articles (in scientific categories), the text is the content of the articles, the hyperlinks are links between articles contained in the text, and the metadata are tags introduced by users to classify the articles (categories). In our main example, we selected hundreds of articles, each in one of three scientific categories of Wikipedia (see Appendix 1 for details). Our main findings are confirmed in a second Wikipedia dataset (obtained by choosing different scientific categories), in a citation dataset (the documents are scientific papers, the hyperlinks are citations, the text is extracted from the titles and abstracts, and the metadata are scientific categories), and in an E-mail dataset (the documents are the E-mails of a single user, the hyperlinks correspond to E-mails sent between users, and the text is the content of the E-mails). These results and further details of the data are presented in the Supplementary Information (see Additional file 1-Sect. 1).

Table 1 Summary of the datasets used in this paper

2.2 Data as networks

The data described above can be represented as multilayer networks. The Hyperlink layer is the most obvious one, where documents are nodes and the hyperlinks are directed edges. The Metadata layer is built by a bipartite network consisting of metadata tags and documents as nodes, whereby undirected edges correspond to documents containing a given metadata-tag. Finally, the Text layer is obtained by restricting the text analysis to the level of word frequencies (bag-of-words) and then considering the bipartite network of word (types) and documents, where the edges correspond to word tokens (i.e., the count of how often a word type appears in a document). While word-nodes and metadata tags appear only in the text and the metadata layer, all layers have document nodes in common. The novelty of our multilayer approach, in comparison to other approaches using multilayer networks, is the inclusion of the text layer. The importance of using a bipartite multigraph layer [22] to represent the text, instead of alternative “word networks” [14, 29, 30], is that it contains the complete information of word occurrence in documents and allows for a formal connection to topic-modelling methods [15, 20].
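To make the three-layer representation concrete, the sketch below builds toy edge lists for the Text, Hyperlink, and Metadata layers over a shared set of document nodes. This is a minimal illustration with made-up documents and tags, not the construction pipeline used in the paper:

```python
from collections import Counter

# Toy corpus: three documents with text, hyperlinks, and metadata tags (all hypothetical).
docs = {
    "d1": "gene protein cell gene",
    "d2": "quantum flux quantum",
    "d3": "gene quantum model",
}
hyperlinks = [("d1", "d3"), ("d2", "d3")]          # Hyperlink layer: directed document->document edges
metadata = {"d1": ["Biology"], "d2": ["Physics"], "d3": ["Physics"]}

# Text layer: bipartite multigraph; edge multiplicity = token count of a word type in a document.
text_edges = Counter()
for d, text in docs.items():
    for word in text.split():
        text_edges[(d, word)] += 1

# Metadata layer: bipartite document-tag graph (one undirected edge per tag assignment).
meta_edges = [(d, tag) for d, tags in metadata.items() for tag in tags]

print(text_edges[("d1", "gene")])  # multiplicity 2: "gene" occurs twice in d1
print(len(meta_edges))             # 3 document-tag edges
```

Note that only the Text layer is a multigraph: its edge multiplicities preserve the complete bag-of-words information.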

We now investigate the properties of the multilayer network described above, based on known results for networks and textual data. The most striking feature of this network is that the size of the different layers varies dramatically and scales differently with system size. A first indication of this lack of balance is seen by looking at the number of edges shown in Table 1: the number of edges in the Text layer (i.e. word tokens) is substantially larger than the number of nodes or edges in all of the other layers. Such an imbalance is expected in all datasets containing the types of data outlined in Sect. 2.1. To see this, we investigate in Fig. 2 how the average degree \(\langle k_{X} \rangle \) (number of edges / total number of nodes) of the different node types X scales with the number of documents \(n_{D}\) (which plays the role of system size). For the document nodes in the Hyperlink layer and the Text layer we see a constant average degree, typical of sparse networks. The Metadata layer yields a trivial scaling, linear in \(n_{D}\) as in dense networks, because each document has one edge to a metadata node. More interestingly, the average degree of the word-type nodes in the Text layer, \(\langle k_{V} \rangle \), shows a growth that scales as

$$\begin{aligned} \langle k_{V} \rangle \sim n_{D}^{\gamma }, \end{aligned}$$

with \(0 < \gamma <1\). This is between the usual limits expected for sparse (\(\gamma =0\)) and dense (\(\gamma =1\)) networks.
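As a toy numeric illustration of the imbalance between layers, the counts below are hypothetical, chosen merely in the spirit of Table 1; the average degree of each class follows the definition above (edges divided by nodes of that class):

```python
def average_degree(n_edges, n_nodes):
    """Average degree <k_X> = number of edges incident to class X / number of nodes in X."""
    return n_edges / n_nodes

# Hypothetical counts: word tokens (Text-layer edges) vastly outnumber hyperlinks.
n_docs, n_wordtypes = 1000, 20000
n_tokens, n_hyperlinks = 500000, 4000

k_docs_T = average_degree(n_tokens, n_docs)        # documents in the Text layer
k_words = average_degree(n_tokens, n_wordtypes)    # word types in the Text layer
k_docs_H = average_degree(n_hyperlinks, n_docs)    # documents in the Hyperlink layer
print(k_docs_T, k_words, k_docs_H)  # 500.0 25.0 4.0
```

With growing \(n_{D}\), \(k_{docs\_H}\) stays roughly constant (sparse), whereas the word-type degree grows because the vocabulary grows sublinearly with the number of tokens.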

Figure 2

Scaling of average degree for each class of node reveals sparse and dense layers. The scaling of the average degree \(\langle k_{X} \rangle \) with the number of documents \(n_{D}\) depends on the node type X. The average degree was computed over all nodes of the same type (see legend, where H, T, M indicate the layer) in a sample of \(n_{D}\) documents from the dataset. The symbols (error bars) are the average (standard deviation) over multiple random samples of documents. The prediction for the degree of word types using Eq. (1) is also plotted for reference

We now explain the observation in Eq. (1) in terms of properties of text in general. More specifically, the type-token relationship in texts follows Heaps’ law [26, 31, 32], which states that the number of word types V scales with the word tokens M as

$$\begin{aligned} V \sim M^{\beta }, \end{aligned}$$

whereby \(0<\beta <1\) is the parameter of interest. The average degree is obtained as \(\langle k_{V} \rangle = M/V\) and \(n_{D} \propto M\) (the proportionality constant being set by the average size, in word tokens, of a Wikipedia article). Combining this with Eqs. (1) and (2), we obtain \(\gamma = 1-\beta \). From the data used here, we estimate a Heaps’ exponent \(\beta = 0.56\), which leads to the prediction \(\gamma =0.44\). This prediction is shown as a dashed line in Fig. 2 and is in good agreement (for large \(n_{D}\)) with the average degree of word nodes.
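An estimate of β of this kind can be obtained with a simple log-log least-squares fit of the type-token curve. The sketch below does this on a synthetic Zipfian corpus (the corpus and the resulting exponent are illustrative; they are not the Wikipedia data used in the paper):

```python
import math
import random

random.seed(1)

# Generate a toy corpus with Zipfian word frequencies, which yields a
# Heaps-like growth of the vocabulary V with the number of tokens M.
vocab_size = 50000
weights = [1.0 / (r ** 1.3) for r in range(1, vocab_size + 1)]
tokens = random.choices(range(vocab_size), weights=weights, k=200000)

# Record (M, V) pairs along the token stream.
seen, pairs = set(), []
for m, w in enumerate(tokens, start=1):
    seen.add(w)
    if m % 10000 == 0:
        pairs.append((m, len(seen)))

# Least-squares fit of log V = beta * log M + c  ->  Heaps' exponent beta.
xs = [math.log(m) for m, _ in pairs]
ys = [math.log(v) for _, v in pairs]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
beta = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
       sum((x - xbar) ** 2 for x in xs)

gamma = 1.0 - beta  # predicted scaling exponent of <k_V> ~ n_D^gamma
print(round(beta, 2), round(gamma, 2))
```

The fitted β lies strictly between 0 and 1, so the predicted γ = 1 − β places the word-type degree between the sparse (γ = 0) and dense (γ = 1) regimes.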

2.3 Stochastic block models

To achieve our goal of clustering documents and identifying topics while considering multiple types of data simultaneously, we need to explore statistical patterns in the connectivity of the multilayer networks discussed above. This can be achieved using Stochastic Block Models (SBMs). The choice of SBMs is based on the existence of a successful computational and theoretical framework, reviewed in Ref. [12], that can be applied to networks with the characteristics needed in our problem: different types of networks (directed, bipartite, and with multi-edges), multilayer networks [11], and key ingredients for detecting communities (e.g., degree correction and nested/hierarchical generalizations [33]). Our previous analysis of bipartite word-document networks using this framework outperformed traditional topic-modelling approaches [15].

SBMs are a family of random-graph models that generate networks with adjacency matrix \(\boldsymbol{A}\) with probability \(P(\boldsymbol{A}|\boldsymbol{b})\), where the vector b, with entries \(b_{i} \in \{1, \ldots, B\} \), specifies the membership of each node i in one of B possible groups. For our multilayer network design—developed for the three types of data (H, T, M) discussed in Sect. 2.2—we fit an SBM to each layer and combine them by constraining the document groups to be the same across all layers, i.e. with a joint probability

$$\begin{aligned} P(\boldsymbol{A}_{\text{H}}, \boldsymbol{A}_{\text{T}}, \boldsymbol{A}_{\text{M}}|\boldsymbol{b}) = P( \boldsymbol{A}_{\text{H}}|\boldsymbol{b})P(\boldsymbol{A}_{\text{T}}|\boldsymbol{b})P( \boldsymbol{A}_{\text{M}}| \boldsymbol{b}), \end{aligned}$$

where \(\boldsymbol{A}_{\text{H}}\), \(\boldsymbol{A}_{\text{T}}\) and \(\boldsymbol{A}_{\text{M}}\) are the adjacency matrices of each respective layer. In each individual layer, edges between nodes i and j are sampled from a Poisson distribution with average [20]

$$\begin{aligned} \theta _{i}\theta _{j}\omega _{b_{i},b_{j}}, \end{aligned}$$

whereby \(\omega _{rs}\) is the expected number of edges between groups r and s, \(b_{i}\) is the group membership of node i, and \(\theta _{i}\) is the propensity with which node i is selected within its own group. Non-informative priors are used for the parameters θ and ω, and the marginal likelihood of the SBM is computed as [34]

$$\begin{aligned} P(\boldsymbol{A}|\boldsymbol{b}) = \int P(\boldsymbol{A}|\boldsymbol{\omega }, \boldsymbol{\theta }, \boldsymbol{b})P(\boldsymbol{\omega },\boldsymbol{\theta }|\boldsymbol{b})\,d\boldsymbol{\theta }\,d\boldsymbol{\omega }. \end{aligned}$$
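The generative step in Eq. (4) can be sketched in a few lines: for each pair of nodes, the edge count is drawn from a Poisson distribution with mean \(\theta_{i}\theta_{j}\omega_{b_{i},b_{j}}\). The parameter values below are hand-picked purely for illustration; in the actual model they are integrated out, as in Eq. (5):

```python
import math
import random

random.seed(0)

def sample_poisson(lam, rng=random):
    """Knuth's algorithm: sample from a Poisson distribution with mean lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Degree-corrected SBM for one layer:
# edge counts A_ij ~ Poisson(theta_i * theta_j * omega[b_i][b_j]).
b = [0, 0, 1, 1]                   # group memberships b_i (hand-picked)
theta = [1.0, 0.5, 1.0, 0.5]       # node propensities (normalised within groups in the full model)
omega = [[4.0, 0.5], [0.5, 4.0]]   # expected edges between groups: assortative structure

A = [[0] * 4 for _ in range(4)]
for i in range(4):
    for j in range(i + 1, 4):
        A[i][j] = A[j][i] = sample_poisson(theta[i] * theta[j] * omega[b[i]][b[j]])

print(A)  # a symmetric multigraph adjacency matrix with heavier within-group blocks on average
```

With an assortative ω, within-group pairs tend to receive more edges than between-group pairs, which is the signal the inference inverts.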

Based on this, we consider the overall posterior distribution for a single partition, conditioned on the edges of all layers [35]

$$\begin{aligned} P(\boldsymbol{b}|\boldsymbol{A}_{\text{H}}, \boldsymbol{A}_{\text{T}}, \boldsymbol{A}_{\text{M}}) = \frac{P(\boldsymbol{A}_{\text{H}}|\boldsymbol{b})P(\boldsymbol{A}_{\text{T}}|\boldsymbol{b})P(\boldsymbol{A}_{\text{M}}|\boldsymbol{b})P(\boldsymbol{b})}{P(\boldsymbol{A}_{\text{H}}, \boldsymbol{A}_{\text{T}}, \boldsymbol{A}_{\text{M}})}. \end{aligned}$$

With this approach, not only the words but also the documents are clustered into categories. We implement the inference using the package graph-tool [28, 36–38] (see Additional file 1-Sect. 2 for details and Ref. [28] for our code).
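A practical consequence of the factorisation in Eq. (5) is that the (unnormalised) log-posterior of a partition is a sum of independent per-layer terms plus the log-prior. The sketch below illustrates this additive structure with fixed-rate Poisson likelihoods; the true marginal likelihoods, with θ and ω integrated out, are computed internally by graph-tool:

```python
import math

def poisson_loglik(A, rate):
    """log P(A | rate) for independent Poisson edge counts with fixed rates
    (a simplification of the marginal likelihood in Eq. (5))."""
    ll = 0.0
    for i in range(len(A)):
        for j in range(i + 1, len(A)):
            lam = rate[i][j]
            ll += A[i][j] * math.log(lam) - lam - math.lgamma(A[i][j] + 1)
    return ll

def joint_log_posterior(layers, rates, log_prior_b):
    """Eq. (6) up to the constant evidence term:
    log P(b | A_H, A_T, A_M) = sum of per-layer log-likelihoods + log P(b)."""
    return sum(poisson_loglik(A, r) for A, r in zip(layers, rates)) + log_prior_b

# Toy layers sharing the same 3 document nodes (hypothetical edge counts).
A_H = [[0, 1, 0], [1, 0, 2], [0, 2, 0]]
A_T = [[0, 3, 1], [3, 0, 0], [1, 0, 0]]
uniform_rate = [[1.0] * 3 for _ in range(3)]

lp = joint_log_posterior([A_H, A_T], [uniform_rate, uniform_rate], math.log(0.5))
print(lp)
```

Because the layers only interact through the shared partition b, each layer contributes an independent additive term, which is exactly why a dominant (text) layer can overwhelm the others.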

3 Application to Wikipedia data

In this section we apply the methodology and ideas discussed above to the Wikipedia dataset, which contains articles classified by users into the categories Mathematics, Physics, and Biology. We are interested in comparing the outcomes and performance of the models discussed above when applied to the different types of information in the data. We fit multiple variants of the multilayer SBM, choosing different layers to be included in the model.

3.1 Description length

The performance of each model can be measured by the extent to which it succeeds in describing (compressing) the data. This can be quantified by computing its description length (DL) [39, 40]

$$\begin{aligned} \mathrm{DL}=-\log P(\boldsymbol{A}_{\text{H}}, \boldsymbol{A}_{\text{T}}, \boldsymbol{A}_{\text{M}}, \boldsymbol{b}), \end{aligned}$$

which quantifies the information necessary to describe both the data and the model parameters. From Eq. (6), we see that minimizing the description length is equivalent to maximizing the posterior probability \(P(\boldsymbol{b}|\boldsymbol{A}_{\text{H}}, \boldsymbol{A}_{\text{T}}, \boldsymbol{A}_{\text{M}})\).
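This equivalence follows directly from the monotonicity of the logarithm: for fixed data, the partition that minimises −log P(A, b) also maximises the posterior. A toy check with made-up probabilities:

```python
import math

# Joint probabilities P(A, b) for three hypothetical candidate partitions.
joint = {"b1": 1e-120, "b2": 1e-95, "b3": 1e-110}

# Eq. (7): description length DL = -log P(A, b).
description_length = {b: -math.log(p) for b, p in joint.items()}

best_by_dl = min(description_length, key=description_length.get)
best_by_posterior = max(joint, key=joint.get)  # posterior is proportional to the joint for fixed data
print(best_by_dl, best_by_posterior)  # both select "b2"
```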

In Table 2 we summarise the DL obtained for each model in our dataset. The DLs of the models containing the Text layer are much larger than those of the models containing only the Hyperlink and Metadata layers. This is a direct consequence of the large number of word types in the data compared to documents or hyperlinks, i.e. the lack of balance between the layers mentioned in Sect. 2.2. This lack of balance has important consequences for the inference of partitions and for our ability to compare the different models. For instance, the effectiveness of the multilayer approach could be evaluated by comparing the DL of the multilayer model (e.g., the DL of model \(H+T\)) to the sum of the DLs of the single-layer models (e.g., DL of model H + DL of model T). In our case this comparison is not very informative, because the DL of the combined model is dominated by the largest layer and the DL of the small layer often lies within the fluctuations obtained from multiple MCMC runs (see Additional file 1-Sect. 2). This reasoning suggests that the clustering of nodes is dominated by the Text layer or, if the Text layer is excluded, by the Hyperlink layer, which dominates over the Metadata layer (see Footnote 1). However, we will see below that there are still significant and meaningful differences between the clusterings obtained using different combinations of layers. The inference problem remains non-trivial because the DL landscape contains many distinct states with similar DL values, so that even small effects due to the H and M layers can affect the outcome.

Table 2 Description length for each combination of layers in the multilayer stochastic block model. We compute the average description length (DL), Eq. (7), for each model class, alongside the standard deviation over multiple MCMC runs. We also report the minimum DL (MDL) over all runs. The DL of the Text layer exceeds that of the Hyperlink and Metadata layers by several orders of magnitude, thus contributing the most to the Hyperlink + Text model

3.2 Qualitative comparison of groups of documents

Community detection methods aim to find the partition of the nodes that best captures the structure of the network in a meaningful way whilst being robust to noise [12, 21]. We thus evaluate the different models by comparing the resulting partitions of documents [35]. Specifically, we fit the Hyperlink, Text, and Hyperlink + Text models and, for each model, obtain a best partition by combining multiple samples from the posterior \(P(\boldsymbol{b}|\boldsymbol{A})\) into a point estimate, which makes use of the different parts of the posterior distribution. We then project the group memberships onto the Hyperlink layer (which contains only document nodes) and retrieve the consensus partition alongside its uncertainty [41] (see Appendix 2 for details).

Our results are shown in Fig. 3 and reveal that our model successfully retrieves different meaningful groupings of the articles depending on the available data (i.e. the layers included in the model). We first notice that the classification of articles made by users—panel (a), Wikipedia labels—groups the Mathematics and Biology articles that are strongly linked with each other (through hyperlinks), whereas Physics articles appear intertwined between them. When we infer the partition of nodes based only on the hyperlink network—panel (b), Hyperlink model—our model finds 2 groups and is quite confident about them (the uncertainty is zero, \(\sigma = 0\)). This partition resembles the partition based on the Wikipedia labels. When the documents are partitioned based on their text—panel (c), Text model—a richer picture emerges. There is a large community that closely resembles the documents classified as Biology and one of the communities obtained using the Hyperlink layer. However, the remaining documents (most of the Mathematics and Physics articles) are now grouped into 4 categories (i.e., 5 communities in total), which are still linked to each other but more loosely than before (even though Fig. 3 shows the Hyperlink network, the hyperlinks were not used to group documents in panels (a) and (c)). Finally, when hyperlinks and text are used simultaneously—panel (d)—4 communities are found, which resemble the previous ones but also show important distinctions. This demonstrates that even if the Text layer dominates the description length, there are noticeable differences in the inferred partitions when the hyperlinks are used in addition to the text for clustering documents.

Figure 3

Different models lead to different partitions of Wikipedia articles into communities. The network corresponds to Wikipedia articles (nodes) and hyperlinks (edges). The colour of the nodes corresponds to groups of document nodes: (a) from the Wikipedia labels (with annotations Mathematics, Physics, and Biology); (b)–(d) communities found from our model using different datasets (layers) and the consensus partition (see App. 2), where σ—defined in Eq. (15)—quantifies the uncertainty of the reported communities (\(0 \le \sigma \le 1\))

We now argue that the more nuanced classification of documents obtained with the Text and Hyperlink + Text models is qualitatively meaningful. For example, we can see a cluster of 5 (Physics) nodes in the bottom left that the Hyperlink model did not identify as a separate group, but which is picked up by the Text and Hyperlink + Text models. This cluster includes the Wikipedia articles on the Josephson effect, macroscopic quantum phenomena, magnetic flux quantum, macroscopic quantum self-trapping, and quantum tunnelling. Even more strikingly, at the bottom of the network there is a lone (Physics) green node surrounded by (Biology) red nodes, which corresponds to the Wikipedia article on isotopic labelling (a technique at the intersection of Physics and Biology). In traditional community detection methods, which use link information as an indicator of groups, such a node would be assigned to the community of its surrounding neighbours. In the Hyperlink + Text model, however, we are able to detect the uniqueness of such a node.

3.3 Quantitative comparison between different models

In the example discussed above, the different models yielded different yet related partitions of the Wikipedia articles. To quantify the similarity of the results of the different models, we performed a systematic comparison of the partitions generated by multiple runs of each model and computed their similarities using the maximum partition overlap (Fig. 4; see Appendix 2 for details). The results show that the partitions generated by the Hyperlink + Text model are most similar to those of the Text model. Similar results are obtained in our alternative datasets—see Additional file 1-Sect. 1—and using the normalised mutual information (NMI) as an alternative similarity measure—see Additional file 1-Sect. 3.
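The maximum partition overlap between two labelings can be computed by maximising the fraction of matching labels over relabellings of one partition. The brute-force sketch below is a simplified stand-in for the procedure detailed in Appendix 2 and is feasible only for small numbers of groups:

```python
from itertools import permutations

def max_partition_overlap(b1, b2):
    """Fraction of nodes with matching labels, maximised over relabellings of b2
    (brute force over all label permutations; practical only for a handful of groups)."""
    labels = sorted(set(b2))
    best = 0.0
    for perm in permutations(labels):
        relabel = dict(zip(labels, perm))
        matches = sum(x == relabel[y] for x, y in zip(b1, b2))
        best = max(best, matches / len(b1))
    return best

b_text = [0, 0, 1, 1, 2, 2]
b_multi = [1, 1, 0, 0, 2, 2]  # the same partition with permuted labels
print(max_partition_overlap(b_text, b_multi))  # 1.0
```

For larger numbers of groups, the same maximisation is usually done with a maximum-weight bipartite matching instead of brute force.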

Figure 4

Maximum partition overlap of the consensus partitions between the model classes. The average and standard deviations of the maximum partition overlap between and within different models

We also compare the Hyperlink and Hyperlink + Text model in terms of their ability to predict missing edges [27, 42] (see Appendix 3 for details on our method). We found that the Hyperlink + Text model has an Area-Under-Curve (AUC) score of \(0.63 \pm 0.06\) (average ± standard deviation) and the Hyperlink model has \(0.54 \pm 0.02\), with the difference being statistically significant (\(p=0.0013\), using a 2-sample t-test). This confirms that the multilayer approach proposed here is successful in retrieving existing relationships that are missed in the network-only approach.
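The AUC used in such link-prediction comparisons has a simple probabilistic reading: it is the probability that a randomly chosen held-out edge receives a higher model score than a randomly chosen non-edge, with ties counting one half. A self-contained sketch with hypothetical scores:

```python
import random

random.seed(2)

def auc_link_prediction(scores_true, scores_false, n_samples=10000, rng=random):
    """AUC estimated as P(score of a held-out true edge > score of a non-edge),
    with ties counted as 1/2."""
    wins = 0.0
    for _ in range(n_samples):
        t = rng.choice(scores_true)
        f = rng.choice(scores_false)
        if t > f:
            wins += 1.0
        elif t == f:
            wins += 0.5
    return wins / n_samples

# Hypothetical model scores for held-out edges vs. sampled non-edges.
true_edges = [0.9, 0.8, 0.7, 0.4]
non_edges = [0.6, 0.3, 0.2, 0.1]
print(round(auc_link_prediction(true_edges, non_edges), 2))
```

An AUC of 0.5 corresponds to random guessing, which is why the improvement from \(0.54\) to \(0.63\) is meaningful.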

3.4 Lack of balance in the hyperlink-text model

The results of the previous sections are strongly influenced by the lack of balance in the Hyperlink + Text model, as discussed in Sect. 2. To further illustrate this point, here we artificially reduce the imbalance of the multilayer network by sampling a fraction μ of the word tokens before fitting the Hyperlink + Text model. We expect that, as we increase the fraction of words μ, the Text layer will increasingly dominate the inference. This expectation is confirmed in Fig. 5, which shows that for \(\mu \geq 0.6\) the partition overlap of the μ-Hyperlink + Text model is statistically indistinguishable from that obtained using the Text-only model; that is, the Text layer dominates the inference. However, as discussed above, the effect of the Hyperlink layer can still lead to a different consensus partition.
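The subsampling step itself is straightforward: each word token is kept independently with probability μ, so μ = 1 keeps the full Text layer and μ = 0 removes it entirely. A minimal sketch:

```python
import random

random.seed(3)

def subsample_tokens(tokens, mu, rng=random):
    """Keep each word token independently with probability mu (mu=1 keeps all, mu=0 removes all)."""
    return [t for t in tokens if rng.random() < mu]

tokens = ["gene", "quantum", "cell"] * 1000
kept = subsample_tokens(tokens, 0.6)
print(len(kept) / len(tokens))  # close to 0.6
```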

Figure 5

Text layer determines the partitions obtained in the multilayer model (Hyperlink + Text). Similarity (overlap of consensus partitions) between the Hyperlink partition and the μ-Hyperlink + Text partition as a function of the subsampling parameter μ, where \(\mu =0\) (\(\mu =1\)) corresponds to the case with all (none) of the word tokens removed from the Hyperlink + Text model. For a given value of μ, a random fraction \(1-\mu \) of the word tokens was removed and the Hyperlink + Text model was then fitted over multiple iterations. The consensus partition of the Hyperlink + Text model and its partition overlap with the Hyperlink model were then computed. A stronger sub-sampling of the text (i.e. smaller values of μ) results in a higher overlap between the consensus partitions of the Hyperlink + Text and Hyperlink models

3.5 Topic modelling: groups of words

Since our approach provides a clustering of all nodes, we group not only documents but also words. The groups of word types can be interpreted as the topics of the documents linked to them, showing that our framework simultaneously solves the traditional problem of topic modelling [14, 23]. Below we show the topics obtained in our Wikipedia dataset as an example of our generic topic-modelling methodology.

In the consensus partition of the Hyperlink + Text network (see Fig. 3) we found 12 topics (groups of word types). The most frequent words in each of these topics are shown in Table 3. Qualitatively, we see that topics are often composed of semantically related words: e.g., topics 1 and 3 contain a large number of key words associated with Biology, whilst topics 5 and 10 contain a large amount of jargon related to Physics.

Table 3 Groups of word types as topics. Upper table: the 20 most frequent words in the 12 topics (word groups) found in the consensus partition of the Hyperlink + Text model. Lower table: the topic proportion (8) of the four groups of documents

We now discuss the topical composition of (groups of) documents. Let \(T=B_{V}\) be the number of topics and \(B_{D}\) the number of document groups; then the mixture proportion of topic \(t =1,\ldots,T\) in document group \(i =1,\ldots,B_{D}\) is given by

$$\begin{aligned} f_{i}^{t} = \frac{n_{i}^{t}}{\sum_{t'=1}^{T}n_{i}^{t'}}, \end{aligned}$$

where \(n_{i}^{t}\) is the number of word tokens in topic t that appear in the documents of document group i. The results obtained for the four document groups are shown at the bottom of Table 3. Interestingly, topic 4 cannot be identified with any specific group of documents. This suggests that the words in this topic are similar to so-called stopwords, a pre-defined set of common words considered uninformative, which are typically removed from the corpus before a model is fitted in order to improve the results [43]. This is consistent with the finding of Ref. [15] that SBMs applied to word-document networks automatically filter stopwords by grouping them into a “topic” that is well connected to all documents. Our findings suggest that the same is true for multilayer models and that our approach is robust against the presence of stopwords. In fact, this stopword topic is responsible for a large fraction (40%) of the topic proportion of all groups of documents. The underlying reason for this is the higher frequency of these words, which (due to Zipf’s law) dominate the weights of the topic mixtures [44]. To remove this feature, and to assess the over- or under-representation of topics more rigorously, we account for the overall frequency of occurrence of the words in topic t as

$$\begin{aligned} \bigl\langle f^{t} \bigr\rangle = \frac{\sum_{i=1}^{B_{D}}n_{i}^{t}}{\sum_{t'=1}^{T}\sum_{j=1}^{B_{D}} n_{j}^{t'}}, \end{aligned}$$

and define the normalised value of the mixture proportion of topic t in document group i as

$$\begin{aligned} \tau _{i}^{t} = \frac{f_{i}^{t} - \langle f^{t} \rangle }{\langle f^{t} \rangle }. \end{aligned}$$

This normalised measure has an intuitive interpretation: \(\tau _{i}^{t} > 0\) (\(\tau _{i}^{t} < 0\)) implies that topic t is over-represented (under-represented) in document group i. In Fig. 6, we show \(\tau _{i}^{t} \) for the 12 topics and the 4 document groups, providing a much clearer view of the connection between topics and groups of documents. For example, we see that document group 2 (articles labelled as Physics) has a large over-representation of topic 10, which corresponds to the Physics topic, whilst this topic is under-represented in the group of articles labelled as Biology. Looking at the model’s newly proposed document group (group 4), we see that it has an over-representation of topics 7 and 5 (and, to a lesser extent, of topics 2, 9, and 11), confirming its hybrid character.
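Equations (8)-(10) amount to row-normalising the topic-group count matrix and comparing each row to the overall topic frequencies. A toy computation with a hypothetical count matrix:

```python
# Topic-group count matrix n[i][t]: word tokens of topic t in document group i.
# Toy numbers (hypothetical): 2 document groups x 3 topics.
n = [[80, 10, 10],
     [20, 30, 50]]

B_D, T = len(n), len(n[0])

# Eq. (8): mixture proportion of topic t within group i.
f = [[n[i][t] / sum(n[i]) for t in range(T)] for i in range(B_D)]

# Eq. (9): overall proportion of topic t across all groups.
total = sum(sum(row) for row in n)
f_avg = [sum(n[i][t] for i in range(B_D)) / total for t in range(T)]

# Eq. (10): normalised over/under-representation of topic t in group i.
tau = [[(f[i][t] - f_avg[t]) / f_avg[t] for t in range(T)] for i in range(B_D)]

print([round(x, 2) for x in tau[0]])  # topic 0 is over-represented in group 0
```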

Figure 6

Normalised contribution of topics to the groups of documents. The normalised measure (10) was computed for all 4 document groups and 12 word groups (topics). We set a threshold of \(\tau _{i}^{t} \geq 0.2\) (\(\tau _{i}^{t} \leq -0.2\)) to define whether a topic is over- (under-) represented in a document group i

4 Discussion and conclusions

In this paper, we introduced and explored a formal methodology that combines multiple data types (e.g., text, metadata, links) to perform the common text-analysis tasks of clustering documents and inferring latent relationships between them. The main theoretical advantage of our methodology is that it incorporates all the different types of data into a single, consistent statistical model. Our approach is based on an extension of multilayer Stochastic Block Models, which have previously been used to find communities in (sparse) complex networks and which we use here to perform text analysis (see Refs. [3, 18] for alternative uses of SBMs for topic modelling). On the one hand, our method extends community-detection methods to the analysis of text in the presence of multiple data types, our main findings being (i) that universal statistical properties of texts lead to different link densities in the different layers of the network, and (ii) that the word layer plays a dominant role in the inference of partitions. On the other hand, our method can be viewed as a generalized topic-modelling method that incorporates metadata and hyperlinks, labels the communities of documents by examining the proportion of topics, and builds on the previous connections between SBMs and Latent Dirichlet Allocation [15, 20].

Our investigations of four different datasets show consistent results that reveal the potential and limitations of our approach. Our most important finding is that our methodology succeeds in using the multiple data types (e.g., a text layer), leading to more nuanced communities of documents and increasing the ability to predict missing links. On the practical side, the lack of balance between the different layers poses challenges for evaluating the contributions of the different layers, because the description length obtained in the inference process is dominated by the text layer, and the variations within the (Monte Carlo) inference process become larger than the contribution of the alternative layers. This suggests further investigations of the role of unbalanced layers in multilayer networks, and of how to deal with them within the proposed framework, as important steps towards expanding the success of complex-network methods to other classes of relevant datasets.

Availability of data and materials

Codes are available at and the datasets at


  1. In the Metadata layer, we found that the metadata tags form a single trivial group, as there is insufficient evidence for the model to construct more than one group. Therefore, we constrained the metadata tags to be in separate groups to ensure that they provide additional information to the models being fitted.




  1. Kedem B, De Oliveira V, Sverchkov M (2017) Statistical data fusion. World Scientific, Singapore

  2. Castanedo F (2013) A review of data fusion techniques. Sci World J 2013:704504

  3. Zhu Y, Yan X, Getoor L, Moore C (2013) Scalable text and link analysis with mixed-topic link models. In: Proceedings of the 19th ACM SIGKDD international conference on knowledge discovery and data mining, pp 473–481

  4. Kivelä M, Arenas A, Barthelemy M, Gleeson J, Moreno Y, Porter M (2014) Multilayer networks. J Complex Netw 2(3):203–271

  5. Zanin M, Papo D, Sousa PA, Menasalvas E, Nicchi A, Kubik E, Boccaletti S (2016) Combining complex networks and data mining: why and how. Phys Rep 635:1–44

  6. Breck E, Zinkevich M, Polyzotis N, Whang S, Roy S (2019) Data validation for machine learning. In: Proceedings of SysML

  7. O’Leary K, Uchida M (2020) Common problems with creating machine learning pipelines from existing code. In: Third conference on machine learning and systems (MLSys)

  8. Arun R, Suresh V, Madhavan CEV, Murthy MNN (2010) On finding the natural number of topics with latent Dirichlet allocation: some observations. In: Advances in knowledge discovery and data mining, 391–402

  9. Cao J, Xia T, Li J, Zhang Y, Tang S (2009) A density-based method for adaptive LDA model selection. Neurocomputing 72:1775–1781

  10. Vallès-Català T, Massucci FA, Guimerà R, Sales-Pardo M (2016) Multilayer stochastic block models reveal the multilayer structure of complex networks. Phys Rev X 6:011036

  11. Peixoto TP (2015) Inferring the mesoscale structure of layered, edge-valued and time-varying networks. Phys Rev E 92(4):042807

  12. Peixoto TP (2019) Bayesian stochastic blockmodeling. In: Advances in network clustering and blockmodeling, ch. 11

  13. Ball B, Karrer B, Newman MEJ (2011) Efficient and principled method for detecting communities in networks. Phys Rev E 84:036103

  14. Lancichinetti A, Sirer MI, Wang JX, Acuna D, Körding K, Amaral LAN (2015) High-reproducibility and high-accuracy method for automated topic classification. Phys Rev X 5(1):011007

  15. Gerlach M, Peixoto TP, Altmann EG (2018) A network approach to topic models. Sci Adv 4:eaaq1360

  16. Blei DM (2012) Probabilistic topic models. Commun ACM 55(4):77–84

  17. Fortunato S (2010) Community detection in graphs. Phys Rep 486(3–5):75–174

  18. Bouveyron C, Latouche P, Zreik R (2016) The stochastic topic block model for the clustering of vertices in networks with textual edges. Stat Comput: 1–21

  19. Holland PW, Laskey KB, Leinhardt S (1983) Stochastic blockmodels: first steps. Soc Netw 5(2):109–137

  20. Karrer B, Newman MEJ (2011) Stochastic blockmodels and community structure in networks. Phys Rev E 83:016107

  21. Hastings M (2006) Community detection as an inference problem. Phys Rev E 74:035102

  22. Yen T-C, Larremore DB (2020) Community detection in bipartite networks with stochastic blockmodels. Phys Rev E 102:032309

  23. Blei DM, Ng AY, Jordan MI (2003) Latent Dirichlet allocation. J Mach Learn Res 3:993–1022

  24. Hric D, Peixoto TP, Fortunato S (2016) Network structure, metadata, and the prediction of missing nodes and annotations. Phys Rev X 6(3):031038

  25. Newman M, Clauset A (2016) Structure and inference in annotated networks. Nat Commun 7:11863

  26. Altmann EG, Gerlach M (2016) Statistical laws in linguistics. In: Creativity and universality in language. Springer, pp 7–26

  27. Guimera R, Pardo MS (2009) Missing and spurious interactions and the reconstruction of complex networks. Proc Natl Acad Sci 106:22073–22078

  28. Codes: TopSBM (Topic Models based on Stochastic Block Models) and graph-tool (Efficient network analysis)

  29. de Arruda HF, Costa LDF, Amancio DR (2016) Topic segmentation via community detection in complex networks. Chaos 26(6):063120

  30. Leydesdorff L, Nerghes A (2017) Co-word maps and topic modeling: a comparison using small and medium-sized corpora (\(N< 1000\)). J Assoc Inf Sci Technol 68(4)

  31. Herdan G (1960) Type-token mathematics. Mouton

  32. Heaps HS (1978) Information retrieval. Academic, New York

  33. Peixoto TP (2014) Hierarchical block structures and high-resolution model selection in large networks. Phys Rev X 4(1):011047

  34. Peixoto TP (2017) Nonparametric Bayesian inference of the microcanonical stochastic block model. Phys Rev E 95(1):012317

  35. Hric D, Darst RK, Fortunato S (2014) Community detection in networks: structural communities versus ground truth. Phys Rev E 90:062805

  36. Peixoto TP (2014) Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models. Phys Rev E 89(1):012804

  37. Peixoto TP (2020) Merge-split Markov chain Monte Carlo for community detection. Phys Rev E 102:012305

  38. Newman MEJ, Barkema GT (1999) Monte Carlo methods in statistical physics. Oxford University Press, London

  39. Rissanen J (1978) Modeling by shortest data description. Automatica 14:465–471

  40. Grünwald P (2007) The minimum description length principle. MIT Press, Cambridge

  41. Peixoto TP (2021) Revealing consensus and dissensus between network partitions. Phys Rev X 11:021003

  42. Vallès-Català T, Peixoto TP, Guimerà R, Sales-Pardo M (2018) Consistencies and inconsistencies between model selection and link prediction in networks. Phys Rev E 97:062316

  43. Haddi E, Liu X, Shi Y (2013) The role of text pre-processing in sentiment analysis. Procedia Comput Sci 17

  44. Altmann EG, Dias L, Gerlach M (2017) Generalized entropies and the similarity of texts. J Stat Mech Theory Exp 2017(1):014002

  45. Bird S, Loper E, Klein E (2009) Natural language processing with Python. O’Reilly Media Inc.

  46. Clauset A, Moore C, Newman MEJ (2008) Hierarchical structure and the prediction of missing links in networks. Nature 453:98–101



Funding was received from The University of Sydney through the CTDS incubator scheme.

Author information

Authors and Affiliations



MG, TP, and EGA conceived the idea. CCH and YT performed the numerical investigations, data analysis, and prepared the figures. All authors contributed to the methodological development. CCH, LA, MG, TPP, and EGA wrote the manuscript. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Charles C. Hyland or Eduardo G. Altmann.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary information (PDF 600 kB)


Appendix 1: Wikipedia data collection and preparation

We used a snapshot of the Wikipedia data retrieved on the 5th of June, 2020. The following lists the data extraction and processing steps:

  1. Data Retrieval: We retrieved the Wikipedia articles and their content (metadata, text, links) through the MediaWiki API and by parsing the Wikipedia dumps.

  2. Network Formulation: We constructed a network in which each node represents a Wikipedia article and each directed edge represents a hyperlink between Wikipedia articles. We removed any nodes with fewer than 2 outgoing links.

  3. Retrieve Connected Component: For ease of analysis, we extracted the largest connected component of the hyperlink network.

  4. Text Processing: We processed the Wikipedia text data through both tokenization and lemmatization using NLTK [45].

The resultant dataset is available in our repository [28].
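The network-construction and text-processing steps above can be sketched in plain Python. This is an illustration only: the toy `pages` dictionary stands in for the parsed MediaWiki dumps, the component search treats the hyperlink network as undirected, and a simple regex tokenizer stands in for NLTK's tokenizer and lemmatizer.

```python
import re
from collections import deque

def preprocess(pages):
    """pages: {title: {"text": str, "links": [titles]}}, a toy stand-in for
    the parsed Wikipedia dump.  Returns (component, tokens)."""
    # Step 2: keep only articles with at least 2 outgoing links
    nodes = {t for t, p in pages.items() if len(p["links"]) >= 2}
    # Step 3: largest connected component via BFS on the undirected view
    undirected = {t: set() for t in nodes}
    for t in nodes:
        for u in pages[t]["links"]:
            if u in nodes:
                undirected[t].add(u)
                undirected[u].add(t)
    seen, best = set(), set()
    for start in nodes:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            v = queue.popleft()
            for w in undirected[v]:
                if w not in comp:
                    comp.add(w)
                    queue.append(w)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    # Step 4: tokenization (regex stand-in for NLTK tokenize + lemmatize)
    tokens = {t: re.findall(r"[a-z]+", pages[t]["text"].lower()) for t in best}
    return best, tokens
```

In the actual pipeline, the last step would instead call NLTK's `word_tokenize` and `WordNetLemmatizer` [45].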

Appendix 2: Maximum overlap and consensus partition

The maximum overlap between partitions measures the similarity between two partitions of the same set of nodes. The maximum overlap between partitions x and y is given by

$$\begin{aligned} w(\boldsymbol{x}, \boldsymbol{y}) = \max_{\boldsymbol{\mu }}\sum_{i} \delta _{x_{i}, \mu (y_{i})}, \end{aligned}$$

where μ is a bijective mapping between the group labels [41].

The normalized maximum overlap between partitions x and y is given by

$$\begin{aligned} \frac{w(\boldsymbol{x}, \boldsymbol{y})}{N} = \frac{1}{N}\max_{\boldsymbol{\mu }}\sum_{i}\delta _{x_{i}, \mu (y_{i})}, \end{aligned}$$

where N is the number of nodes; the normalized overlap lies in the unit interval [0, 1].
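A minimal sketch of the maximum overlap, assuming a brute-force search over label bijections (feasible only for a small number of groups; Ref. [41] describes an efficient maximum-bipartite-matching formulation). The function names are illustrative.

```python
from itertools import permutations

def max_overlap(x, y):
    """w(x, y) = max over bijections mu of sum_i delta(x_i, mu(y_i)),
    found by brute force over all label permutations."""
    labels_x, labels_y = sorted(set(x)), sorted(set(y))
    # pad with dummy labels so a bijection exists when group counts differ
    k = max(len(labels_x), len(labels_y))
    labels_x += [f"_x{i}" for i in range(k - len(labels_x))]
    labels_y += [f"_y{i}" for i in range(k - len(labels_y))]
    best = 0
    for perm in permutations(labels_x):
        mu = dict(zip(labels_y, perm))        # candidate bijection y -> x
        best = max(best, sum(xi == mu[yi] for xi, yi in zip(x, y)))
    return best

def norm_overlap(x, y):
    """Normalized maximum overlap, in [0, 1]."""
    return max_overlap(x, y) / len(x)
```

For example, `[0, 0, 1, 1]` and `[1, 1, 0, 0]` are identical up to a label swap, so their normalized overlap is 1.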

Given multiple partitions, we also wish to extract a consensus partition \(\hat{\boldsymbol{b}}\) which has the maximal sum of overlaps with all the partitions. Such a consensus partition can be obtained through the double maximization of the set of equations:

$$\begin{aligned} &\hat{b}_{i} = \operatorname {arg\,max}_{r}\sum _{m}\delta _{\mu _{m}(b_{i}^{m}),r}, \end{aligned}$$
$$\begin{aligned} &\boldsymbol{\mu }_{m} = \operatorname {arg\,max}_{\boldsymbol{\mu }}\sum _{r}m_{r,\mu (r)}^{(m)}, \end{aligned}$$

where μ is a bijective mapping between the group labels and \(m_{r,\mu (r)}^{(m)}\) is the contingency table between \(\hat{\boldsymbol{b}}\) and partition \(\boldsymbol{b}^{(m)}\). An iterative procedure is then carried out on the set of equations until no further improvement is possible. The uncertainty σ of the consensus partition obtained from \(M_{p}\) partitions is quantified as [28, 41]

$$\begin{aligned} \sigma = 1 - \frac{1}{N M_{p}} \sum_{i} \sum_{m} \delta _{\mu _{m}(b_{i}^{m}), \hat{b}_{i}}. \end{aligned}$$
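The double maximization above can be sketched as an alternation between aligning each partition's labels to the current consensus and taking a plurality vote per node. This toy implementation again uses brute-force bijections and an iteration cap; it is an illustration, not the efficient procedure of Ref. [41].

```python
from collections import Counter
from itertools import permutations

def consensus(partitions):
    """Iterative consensus partition: alternately (i) find, for each
    partition, the label bijection mu_m maximizing agreement with the
    current consensus, and (ii) set each node's consensus label by
    plurality vote over the aligned partitions."""
    bhat = list(partitions[0])                       # initial consensus guess
    labels = sorted({r for p in partitions for r in p})
    for _ in range(100):                             # cap iterations for safety
        aligned = []
        for p in partitions:
            best_mu, best_score = None, -1
            for perm in permutations(labels):        # brute-force mu_m
                mu = dict(zip(labels, perm))
                score = sum(mu[pi] == bi for pi, bi in zip(p, bhat))
                if score > best_score:
                    best_mu, best_score = mu, score
            aligned.append([best_mu[pi] for pi in p])
        new = [Counter(col).most_common(1)[0][0] for col in zip(*aligned)]
        if new == bhat:                              # no further improvement
            break
        bhat = new
    return bhat
```

The uncertainty σ then follows by counting, over all nodes and aligned partitions, the fraction of labels disagreeing with the returned consensus.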

Appendix 3: Supervised learning via link prediction

The best model can be selected in a supervised manner through the task of link prediction [42, 46]. Let \(\boldsymbol{A}^{O}\) be the observed network and δA be missing or spurious edges. The posterior distribution of the missing entries δA conditioned on the observed network \(\boldsymbol{A}^{O}\) can be computed as

$$\begin{aligned} P\bigl(\delta \boldsymbol{A}|\boldsymbol{A}^{O}\bigr) \propto \sum_{\boldsymbol{b}} \frac{P(\boldsymbol{A}^{O} \cup \delta \boldsymbol{A}|\boldsymbol{b})}{P(\boldsymbol{A}^{O}|\boldsymbol{b})} P\bigl(\boldsymbol{b}|\boldsymbol{A}^{O}\bigr). \end{aligned}$$

However, as the normalization constant is difficult to obtain, the numerator of Eq. (16) can be computed by sampling partitions from the posterior and then inserting or deleting edges from the graph and computing the new likelihood. We may therefore compute the relative probability between specific sets of alternative predictive hypotheses \(\{\delta \boldsymbol{A}_{i}\}\) through the likelihood ratios

$$\begin{aligned} \lambda _{i} = \frac{P(\delta \boldsymbol{A}_{i}|\boldsymbol{A}^{O})}{\sum_{j}P(\delta \boldsymbol{A}_{j}|\boldsymbol{A}^{O})}. \end{aligned}$$

We can compute the area under the receiver operating characteristic curve (AUC) to evaluate the SBM’s classification ability. Furthermore, given two sets of AUCs from two different models, we can compare the models’ performance by computing the t-statistic under the null hypothesis of zero mean difference in AUC,

$$\begin{aligned} t_{\Delta \mathrm{AUC}} = \frac{\langle \Delta \mathrm{AUC} \rangle }{\sigma _{\Delta \mathrm{AUC}}/\sqrt{n}}, \end{aligned}$$

where \(\langle \Delta \mathrm{AUC} \rangle \), \(\sigma _{\Delta \mathrm{AUC}}\), and n are the mean, standard deviation, and size of the sample of AUC differences.
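The t-statistic can be computed directly from paired samples of AUC values; the function name and the toy AUC values below are illustrative assumptions.

```python
from math import sqrt
from statistics import mean, stdev

def t_delta_auc(auc_a, auc_b):
    """t-statistic for paired AUC differences between two models, under
    the null hypothesis of zero mean difference: t = <dAUC> / (s / sqrt(n))."""
    diffs = [a - b for a, b in zip(auc_a, auc_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))
```

A large |t| indicates that the difference in link-prediction performance between the two models is unlikely to be due to sampling noise alone.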

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit

About this article

Cite this article

Hyland, C.C., Tao, Y., Azizi, L. et al. Multilayer networks for text analysis with multiple data types. EPJ Data Sci. 10, 33 (2021).
