Multilayer Networks for Text Analysis with Multiple Data Types

We are interested in the widespread problem of clustering documents and finding topics in large collections of written documents in the presence of metadata and hyperlinks. To tackle the challenge of accounting for these different types of data, we propose a novel framework based on multilayer networks and Stochastic Block Models. The main innovation of our approach over other techniques is that it applies the same non-parametric probabilistic framework to the different sources of data simultaneously. The key difference to other multilayer complex networks is the strong imbalance between the layers, with the average degree of different node types scaling differently with system size. We show that this imbalance is due to generic properties of text, such as Heaps' law, and strongly affects the inference of communities. We present and discuss the performance of our method on different datasets (hundreds of Wikipedia documents, thousands of scientific papers, and thousands of E-mails), showing that taking multiple types of information into account provides a more nuanced view on topic and document clusters and increases the ability to predict missing links.


Introduction
A widespread problem in modern Data Science is how to combine multiple data types such as images, text, and numbers in a meaningful framework [1,2,3,4,5]. The traditional approach to tackle this challenge is to construct machine learning pipelines in which each data type is treated separately (sequentially or in parallel) and the partial results are combined at the end of the procedure [6,7]. There are two problems with such a procedure. First, it leads to the development of ad-hoc solutions that are highly contingent on the dataset in question [8,9]. Second, each model is trained independently of the others, meaning that the relationships between the different types of data are not taken into account [10,11]. These problems show the need for a unified statistical framework applied simultaneously to the different types of data [12].
In this paper, we investigate the problem of clustering and finding topics in collections of written documents for which additional information is available as metadata and as hyperlinks between documents. We obtain a unified statistical framework for this problem by mapping it to the problem of inferring groups in multilayer networks.

arXiv:2106.15821v1 [cs.SI] 30 Jun 2021

Figure 1: Different views on the relationship between written documents. Lower layer: a bipartite multigraph of documents (circles) and word types (triangles); links correspond to word tokens. Middle layer: directed graph of documents (e.g., hyperlinks in Wikipedia or citations in scientific papers). Upper layer: a bipartite graph of documents and tags (squares), used to classify the documents.

The key design of the unified framework proposed here is inspired by the connections [3,13,14,15] between the problems of identifying (i) topics in a collection of written documents (i.e., topic modelling) [16] and (ii) communities in complex networks (i.e., community detection) [17]. In particular, Ref. [15] shows that both problems can be tackled using Stochastic Block Models (SBMs) [3,12,18,19,20,21,28] and that SBMs, previously applied to find communities in complex networks, outperform and overcome many of the difficulties of the most popular unsupervised methods to infer structures from large collections of texts (topic-modelling methods such as Latent Dirichlet Allocation [22] and its generalizations). However, these approaches have been applied only to the textual part of collections of documents, ignoring additional information available about them. For instance, in datasets of scientific publications, one would consider only the text of the articles but not the citation networks (used in traditional community-detection methods [17]) or other metadata (such as the journal or bibliographical classification) [44,42]. We propose here an extension of Ref. [15] and show how the diversity of information typically available about documents can be incorporated in the same framework by using multilayer SBMs [3,10,11]. As illustrated in Fig. 1, in addition to the bipartite Document-Word layer discussed in Ref. [15], here we incorporate a Hyperlink layer connecting the different written documents and a Metadata-Document layer that incorporates tags and other metadata. The key difference to other multilayer networks [4], as explored in Sec. 2 below, is that statistical laws [23] governing the frequency of words in documents leave fingerprints on the density of the different network layers. Our investigations in different datasets, reported in Sec. 3 for a collection of Wikipedia articles and in the Supplementary Information for three other datasets, reveal that the proposed multilayer approach leads to improved results when compared to both the topic-modelling approach of Ref. [15] and the usual community detection on (hyperlink) networks. Our approach leads to a more nuanced view on the communities of documents, generates a list of topics associated to the communities, and improves link prediction when compared to the hyperlink network alone [24]. The details of our methods can be found in the appendices, the Supplementary Information, and the repository [25].

Multiple data sources as multilayer networks
In this section we introduce the general methodology of our paper: we describe the types of data we are interested in (Sec. 2.1), show how they can be represented as a multilayer network and discuss the properties of these networks (Sec. 2.2), and describe how they can be modelled using Stochastic Block Models (Sec. 2.3).

Setting: Multiple Data Sources
We consider a collection of d = 1, ..., D documents and we are interested in clustering and finding underlying similarities between them using combinations of the following information:

Text (T): each document contains k_d word tokens drawn from a vocabulary of V word types (M = Σ_d k_d is the total number of word tokens).

Hyperlinks (H): documents are linked to each other, forming a (directed) graph or network.

Metadata (M): documents are classified by tags or other metadata.

These characteristics are typical for textual data and networks. Here we explore three types of such datasets, summarized in Tab. 1. The main dataset we use to illustrate our results was extracted from the English Wikipedia, where the documents are articles (in scientific categories), the text is the content of the articles, hyperlinks are links between articles contained in the text, and metadata are tags introduced by users to classify the articles (categories). In our main example, we selected hundreds of articles in one of three scientific categories of Wikipedia (see Appendix 1 for details). Our main findings are confirmed in a second Wikipedia dataset (obtained by choosing different scientific categories), in a citation dataset (documents are scientific papers, hyperlinks are citations, the text is extracted from the title and abstract, and metadata are scientific categories), and in an E-mail dataset (documents are all E-mails from the same user, hyperlinks correspond to E-mails sent between users, and the text is the content of the E-mails). These results and further details of the data are presented in the Supplementary Information (see SI-Sec. 1).
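To make the setting concrete, the following minimal Python sketch builds the three information sources as edge lists of a single multilayer structure whose shared nodes are the documents. The two documents and their contents are hypothetical toy data, not taken from the datasets above.

```python
# Toy sketch (hypothetical two-document data): representing the three
# information sources (Text, Hyperlinks, Metadata) as edge lists of one
# multilayer network whose shared nodes are the documents.
from collections import Counter

docs = {
    "doc1": {"text": "gene protein gene cell".split(),
             "links": ["doc2"], "tags": ["Biology"]},
    "doc2": {"text": "quantum flux quantum".split(),
             "links": ["doc1"], "tags": ["Physics"]},
}

# Text layer: bipartite multigraph; edge multiplicity = token count.
text_edges = {(d, w): c for d, v in docs.items()
              for w, c in Counter(v["text"]).items()}
# Hyperlink layer: directed document-document edges.
hyperlink_edges = [(d, t) for d, v in docs.items() for t in v["links"]]
# Metadata layer: bipartite document-tag edges.
metadata_edges = [(d, m) for d, v in docs.items() for m in v["tags"]]

M = sum(text_edges.values())          # total word tokens, M = sum_d k_d
V = len({w for _, w in text_edges})   # vocabulary size (word types)
```

Note that only the Text layer is a multigraph (an edge per token), which is what later makes it far denser than the other layers.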

Data as Networks
The data described above can be represented as a multilayer network. The Hyperlink layer is the most obvious one: documents are nodes and the hyperlinks are directed edges. The Metadata layer is a bipartite network with metadata tags and documents as nodes, where undirected edges correspond to documents containing a given metadata tag. Finally, the Text layer is obtained by restricting the text analysis to the level of word frequencies (bag of words) and then considering the bipartite network of word types and documents, where the edges correspond to word tokens (i.e., the count of how often a word type appears in a document). While word nodes and metadata tags appear only in the Text and Metadata layers, all layers have the document nodes in common. The novelty of our multilayer approach, in comparison to other approaches using multilayer networks, is the inclusion of the Text layer. The importance of using a bipartite multigraph layer [28] to represent the text, instead of alternative "word networks" [14,26,27], is that it contains the complete information on word occurrence in documents and allows for a formal connection to topic-modelling methods [15,20].

We now investigate the properties of the multilayer network described above, based on known results on networks and textual data. The most striking feature of this network is that the size of the different layers varies dramatically and scales differently with system size. A first indication of this lack of balance is seen by looking at the number of edges shown in Tab. 1: the number of edges in the Text layer (i.e., word tokens) is substantially larger than the number of nodes or edges in all of the other layers. Such an imbalance is expected in all datasets containing the types of data outlined in Sec. 2.1. To see this, we investigate in Fig. 2 how the average degree k_X (number of edges divided by total number of nodes) of the different node types X scales with the number of documents n_D (which plays the role of system size). For the document nodes in the Hyperlink layer and the Text layer we see a constant average degree, typical of sparse networks. The Metadata layer yields a trivial linear scaling with n_D, as in dense networks, because each document has one edge to a metadata node. More interestingly, the average degree of the word-type nodes in the Text layer, k_V, grows as

k_V ∝ n_D^γ,   (1)

with 0 < γ < 1. This is between the usual limits expected for sparse (γ = 0) and dense (γ = 1) networks. We now explain the observation in Eq. (1) in terms of general properties of text. The type-token relationship in texts follows Heaps' law [29,30,23], which states that the number of word types V scales with the number of word tokens M as

V ∝ M^β,   (2)

where 0 < β < 1 is the parameter of interest. The average degree is obtained as k_V = M/V, and n_D ∝ M (the proportionality constant being the average size of Wikipedia articles, in word tokens). Combining this with Eqs. (1) and (2), we obtain γ = 1 − β. From the data used here we estimate a Heaps' exponent β = 0.56, which leads to the prediction γ = 0.44. This prediction is shown as a dashed line in Fig. 2 and is in good agreement (for large n_D) with the average degree of word nodes.
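The estimation of the Heaps' exponent β (and hence the prediction γ = 1 − β) can be sketched as follows. The corpus here is synthetic (Zipf-distributed tokens), not the Wikipedia data, and the fit is a simple least-squares slope in log-log space:

```python
# Sketch: estimate Heaps' exponent beta from a token stream, then predict
# the word-node degree-scaling exponent gamma = 1 - beta.  The corpus is
# synthetic (Zipf-distributed tokens), used only for illustration.
import math
import random

ranks = list(range(1, 5001))
weights = [1.0 / r for r in ranks]             # Zipf's law over 5000 types
rng = random.Random(1)
tokens = rng.choices(ranks, weights=weights, k=50000)

# Record vocabulary growth V(M) along the stream.
seen, growth = set(), []
for m, tok in enumerate(tokens, start=1):
    seen.add(tok)
    if m % 1000 == 0:
        growth.append((m, len(seen)))

# Least-squares slope of log V versus log M gives beta.
xs = [math.log(m) for m, _ in growth]
ys = [math.log(v) for _, v in growth]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
       / sum((x - mx) ** 2 for x in xs)
gamma = 1.0 - beta   # predicted scaling exponent of k_V
```

On real corpora one would instead stream the actual word tokens; the fitting step is unchanged.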

Stochastic Block Models
To achieve our goal of clustering documents and identifying topics while considering multiple types of data simultaneously, we need to explore statistical patterns in the connectivity of the multilayer networks discussed above. This can be done using Stochastic Block Models (SBMs). The choice of SBMs is based on the existence of a successful computational and theoretical framework, reviewed in Ref. [12], that can be applied to networks with the characteristics needed in our problem: different types of networks (directed, bipartite, and with multi-edges), multilayer networks [11], and key ingredients for detecting communities (e.g., degree correction and nested/hierarchical generalizations [32]). Our previous analysis of bipartite word-document networks using this framework outperformed traditional topic-modelling approaches [15]. SBMs are a family of random-graph models that generate networks with adjacency matrix A_ij with probability P(A|b), where the vector b, with entries b_i ∈ {1, ..., B}, specifies the membership of nodes i = 1, ..., D into one of B possible groups. For our multilayer network design, developed for the three types of data (H, T, M) discussed in Sec. 2.2, we fit the SBM framework to each layer and combine the layers by constraining the document groups to be the same across all of them, i.e., with a joint probability

P(A_H, A_T, A_M | b) = P(A_H | b) P(A_T | b) P(A_M | b),

where A_H, A_T, and A_M are the adjacency matrices of the respective layers. In each individual layer, the number of edges between nodes i and j is sampled from a Poisson distribution with average [20] θ_i θ_j ω_{b_i, b_j}, where ω_rs is the expected number of edges between groups r and s, b_i is the group membership of node i, and θ_i is the overall propensity with which node i is selected within its own group.
Non-informative priors are used for the parameters θ and ω, and the marginal likelihood of the SBM is computed as [31]

P(A | b) = ∫ P(A | θ, ω, b) P(θ) P(ω) dθ dω.

Based on this, we consider the overall posterior distribution for a single partition conditioned on the edges of all layers [35],

P(b | A_H, A_T, A_M) ∝ P(A_H | b) P(A_T | b) P(A_M | b) P(b).

With this approach, not only the words but also the documents are clustered into categories. We implement the inference using the package graph-tool [25,41,45,43] (see SI-Sec. 2 for details and Ref. [25] for our codes).
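The generative step of the layered model described above can be sketched as follows. The partition b, the propensities θ, and the group-affinity matrices ω are hypothetical toy values, and the Poisson sampler is a simple stdlib implementation; the paper's actual inference is done with graph-tool, not with this sketch.

```python
# Sketch of the generative step: each layer draws its edge counts from a
# Poisson distribution with mean theta_i * theta_j * omega[b_i][b_j],
# while all layers share the same partition b.  All parameters here are
# hypothetical toy values.
import math
import random

def poisson(lam, rng):
    # Knuth's algorithm; adequate for the small means used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

b = [0, 0, 1, 1]                      # shared group membership of 4 documents
theta = [1.0, 0.5, 1.0, 0.5]          # degree-correction propensities

def sample_layer(omega, rng):
    A = [[0] * 4 for _ in range(4)]
    for i in range(4):
        for j in range(i + 1, 4):
            lam = theta[i] * theta[j] * omega[b[i]][b[j]]
            A[i][j] = A[j][i] = poisson(lam, rng)
    return A

rng = random.Random(7)
omega_H = [[3.0, 0.2], [0.2, 3.0]]    # assortative "hyperlink" layer
omega_T = [[1.0, 1.0], [1.0, 1.0]]    # flat "text" layer example
A_H, A_T = sample_layer(omega_H, rng), sample_layer(omega_T, rng)
```

Inference then inverts this process: given the sampled adjacency matrices, find the partition b that maximizes the posterior above.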

Application to Wikipedia data
In this section we apply the methodology and ideas discussed above to the Wikipedia dataset, which contains articles classified by users into the categories Mathematics, Physics, and Biology. We are interested in comparing the outcomes and performance of the models discussed above when applied to the different types of information in the data. We fit multiple variants of the multilayer SBM, choosing different combinations of layers to include in the model.

Description Length
The performance of each model can be measured by the extent to which it succeeds in describing (compressing) the data. This can be quantified by computing its description length (DL) [33,34], the amount of information necessary to describe both the data and the model parameters. Minimizing the description length is equivalent to maximizing the posterior probability P(b | A_H, A_T, A_M). In Tab. 2 we summarise the DL obtained for each model in our dataset. It is clear that the DLs of the models containing the Text layer are much larger than those of the models containing only the Hyperlink and Metadata layers. This is a direct consequence of the large number of word types in the data compared to the number of documents or hyperlinks, i.e., the lack of balance between the layers mentioned in Sec. 2.2. This lack of balance has important consequences for the inference of partitions and for our ability to compare the different models. For instance, the effectiveness of the multilayer approach could be evaluated by comparing the DL of the multilayer model (e.g., the DL of model H + T) to the sum of the DLs of the single-layer models (e.g., the DL of model H plus the DL of model T). In our case this comparison is not very informative because the DL of the combined model is dominated by the largest layer, and the DL of the smaller layer often lies within the fluctuations obtained from multiple MCMC runs (see SI-Sec. 2). This reasoning suggests that the clustering of nodes should be dominated by the Text layer or, if the Text layer is excluded, by the Hyperlink layer, which dominates over the Metadata layer [1]. However, we will see below that there are still significant and meaningful differences in the clustering of nodes obtained using different combinations of layers. This happens because the inference problem remains non-trivial: the DL landscape contains many distinct states with similar DL values, so that even small effects due to the H and M layers can affect the outcome.

Qualitative comparison of groups of documents
Community detection methods aim to find the partition of the nodes that best captures the structure of the network in a meaningful way whilst being robust to noise [12,21]. We thus evaluate the different models by comparing the resulting partitions of documents [35]. Specifically, we fit the Hyperlink, Text, and Hyperlink + Text models and, for each model, obtain a best partition by combining multiple samples from the posterior P(b|A) into a point estimate, which utilises the different parts of the posterior distribution. We then project the group membership of the documents onto the hyperlink network.

[1] In the Metadata layer, we found that the metadata tags form a trivial single group, as there is insufficient evidence for the model to construct more than one group. We therefore constrained metadata tags to be in separate groups to ensure that they provide additional information to the models being fitted.

Our results are shown in Fig. 3 and reveal that our model successfully retrieves different meaningful groupings of the articles depending on the available data (i.e., the layers included in the model). We first notice that the classification of articles made by users (panel (a), Wikipedia label) groups articles in Mathematics and Biology that are strongly linked with each other (through hyperlinks), whereas Physics articles appear intertwined between them. When we infer the partition of nodes based only on the hyperlink network (panel (b), Hyperlink model), our model finds 2 groups and is quite confident about it (the uncertainty is zero, σ = 0). This partition resembles the partition based on Wikipedia labels. When the documents are partitioned based on their text (panel (c), Text model), a richer picture emerges. There is a large community that closely resembles the set of documents classified as Biology and one of the communities obtained using the Hyperlink layer. However, the remaining documents (most of the Mathematics and Physics articles) are now grouped into 4 categories (i.e., 5 communities in total), which are still linked to each other but more loosely than before (note that, even though Fig. 3 shows the Hyperlink network, the hyperlinks were not used to group documents in panels (a) and (c)). Finally, when hyperlinks and text are used simultaneously (panel (d)), 4 communities are found, which resemble the previous ones but also show important distinctions. This demonstrates that, even if the Text layer dominates the description length, there are noticeable differences in the inferred partitions when hyperlinks are used in addition to text for clustering documents.
We now argue that the more nuanced classifications of documents obtained with the Text and Hyperlink + Text models are qualitatively meaningful. For example, there is a cluster of 5 (Physics) nodes in the bottom left of the Hyperlink model that was not identified as a separate group, but is now picked up by the Text and Hyperlink + Text models. This cluster includes the Wikipedia articles on the Josephson effect, macroscopic quantum phenomena, the magnetic flux quantum, macroscopic quantum self-trapping, and quantum tunnelling. Even more strikingly, at the bottom of the network there is a lone (Physics) green node surrounded by (Biology) red nodes, which corresponds to the Wikipedia article on isotopic labelling (a technique at the intersection of Physics and Biology). In traditional community detection methods, which use link information as an indicator of groups, such a node would be placed in the community of its surrounding neighbours. In the Hyperlink + Text model, however, we are able to detect the uniqueness of such a node.

Quantitative comparison between different models
In the example discussed above it was clear that the different models yielded different yet related partitions of the Wikipedia articles. In order to quantify the similarity of the results of the different models, we performed a systematic comparison of the partitions generated by multiple runs of each model and computed their similarities using the maximum overlap between partitions (Fig. 4, see Appendix 2 for details). The results show that the partitions generated by the Hyperlink + Text model are most similar to those of the Text model. Similar results are obtained in our alternative datasets (see SI-Sec. 1) and when using the normalised mutual information (NMI) as an alternative similarity measure (see SI-Sec. 3).
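The maximum-overlap similarity between two partitions can be sketched as below, brute-forcing over label assignments (feasible for the small number of groups found here; this is an illustration, not the graph-tool implementation used in the paper):

```python
# Sketch: maximum overlap between two partitions, i.e. the largest fraction
# of nodes whose labels agree over all relabellings of the second partition.
from itertools import permutations

def max_overlap(p1, p2):
    """p1, p2: equal-length lists of group labels, one per node."""
    labels1 = sorted(set(p1) | set(p2))
    labels2 = sorted(set(p2))
    best = 0
    for perm in permutations(labels1, len(labels2)):
        relabel = dict(zip(labels2, perm))
        match = sum(a == relabel[b] for a, b in zip(p1, p2))
        best = max(best, match)
    return best / len(p1)

# Hypothetical toy partitions: the second merges two of the first's groups.
p_text = [0, 0, 1, 1, 2, 2]
p_link = [1, 1, 0, 0, 0, 0]
overlap = max_overlap(p_text, p_link)   # 4 of 6 nodes align at best
```

For larger numbers of groups the brute force over permutations is replaced by a maximum bipartite matching (e.g., the Hungarian algorithm) on the label-contingency table.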
We also compare the Hyperlink and Hyperlink + Text models in terms of their ability to predict missing edges [24,37] (see Appendix 3 for details on our method). We found that the Hyperlink + Text model has an area-under-the-curve (AUC) score of 0.63 ± 0.06 (average ± standard deviation) while the Hyperlink model has 0.54 ± 0.02, with the difference being statistically significant (p = 0.0013, using a two-sample t-test). This confirms that the multilayer approach proposed here is successful in retrieving existing relationships that are missed in the network-only approach.
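The AUC statistic has a direct rank interpretation: the probability that a randomly chosen true (held-out) edge receives a higher prediction score than a randomly chosen non-edge. A minimal sketch with hypothetical scores:

```python
# Sketch: AUC of link-prediction scores via the rank statistic.  Ties count
# as half a win; the scores below are hypothetical.
def auc(pos_scores, neg_scores):
    """pos_scores: scores of held-out true edges; neg_scores: non-edges."""
    wins = sum((p > q) + 0.5 * (p == q)
               for p in pos_scores for q in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

score = auc([0.9, 0.8, 0.4], [0.3, 0.8])   # -> 0.75
```

A score of 0.5 corresponds to random guessing, which is why the Hyperlink model's 0.54 is only marginally informative.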

Lack of balance in the Hyperlink-Text model
The results of the previous sections are strongly influenced by the lack of balance in the Hyperlink + Text model, as discussed in Sec. 2. To further illustrate this point, we artificially reduce the imbalance of the multilayer network by sampling a fraction µ of the word tokens before fitting a Hyperlink + Text model. We expect that, as we increase the fraction of words µ, the Text layer will increasingly dominate the inference. This expectation is confirmed in Fig. 5, which shows that for µ ≥ 0.6 the partition overlap of the µ-Hyperlink + Text model is statistically indistinguishable from the partition overlap obtained using the Text-only model. That is, the Text layer dominates the inference in the µ-Hyperlink + Text model for µ ≥ 0.6. However, as discussed above, the effect of the Hyperlink layer can still lead to a different consensus partition.
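The subsampling step can be sketched as follows, treating the Text layer as a list of (document, word type) token edges; the token list below is hypothetical.

```python
# Sketch: keep a fraction mu of the word tokens (i.e. of the edges of the
# Text layer) before refitting the model.  The token list is hypothetical.
import random

def subsample_tokens(token_edges, mu, rng):
    """token_edges: list of (document, word_type) pairs, one per token."""
    k = round(mu * len(token_edges))
    return rng.sample(token_edges, k)

tokens = [("doc1", "gene")] * 4 + [("doc2", "quantum")] * 6
reduced = subsample_tokens(tokens, 0.6, random.Random(3))
```

Repeating the fit for a grid of µ values and computing the partition overlap against the Text-only model reproduces the experiment of Fig. 5.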

Topic Modelling: Groups of Words
Since our approach provides a clustering of all nodes, we not only group documents but also words. The groups of word (types) can be interpreted as the topics of the documents linked to them, showing that our framework simultaneously solves the traditional problem of topic modeling [22,14]. Below we show the topics obtained in our Wikipedia dataset, as an example of our generic topic-modelling methodology.
In the consensus partition of the Hyperlink-Text network (see Fig. 3) we found 12 topics (groups of word types). The most frequent words in each of these topics are shown in Tab. 3. Qualitatively, we see that topics are often composed of semantically related words: e.g., topics 1 and 3 contain a large number of keywords associated with Biology, whilst topics 5 and 10 contain a large amount of jargon related to Physics.
We now discuss the topical composition of (groups of) documents. Let T = B_V be the number of topics and B_D the number of document groups. The mixture proportion of topic t = 1, ..., T in document group i = 1, ..., B_D is given by

f_i^t = n_i^t / Σ_{t'} n_i^{t'},

where n_i^t is the number of word tokens of topic t appearing in the documents of document group i. The results obtained for the four document groups are shown at the bottom of Tab. 3. Interestingly, topic 4 cannot be identified with any specific group of documents. This suggests that the words in this topic are similar to so-called stopwords, a pre-defined set of common words considered uninformative, which are typically removed from the corpus before a model is fitted in order to improve it [38]. This is consistent with the finding of Ref. [15] that SBMs applied to word-document networks automatically filter stopwords by grouping them into a "topic" that is well connected to all documents. Our findings suggest that the same is true for multilayer models and that our approach is robust against the presence of stopwords. In fact, this stopword topic is responsible for a large fraction (40%) of the topic proportions of all groups of documents. The underlying reason is the higher frequency of these words, which (due to Zipf's law) dominate the weights of the topic mixtures [46]. To overcome this feature, and to assess the over- or under-representation of topics more rigorously, we account for the overall frequency of occurrence of words in topic t as

f^t = n^t / M,

where n^t is the total number of word tokens in topic t, and define the normalised mixture proportion of topic t in document group i as

τ_i^t = (f_i^t − f^t) / f^t.

This normalised measure has an intuitive interpretation: τ_i^t > 0 (τ_i^t < 0) implies that topic t is over-represented (under-represented) in document group i. In Fig. 6 we show τ_i^t for the 12 topics and the 4 document groups, providing a much clearer view on the connection between topics and groups of documents.
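A toy computation of these quantities, assuming the normalisation τ_i^t = (f_i^t − f^t)/f^t with f_i^t the within-group topic proportion and f^t the corpus-wide topic frequency (the count matrix is hypothetical):

```python
# Sketch of the normalised topic-proportion measure, for a hypothetical
# count matrix n[i][t]: word tokens of topic t inside document group i.
n = [[8, 2],    # document group 0
     [2, 8]]    # document group 1

M = sum(map(sum, n))                                            # all tokens
T = len(n[0])
f_group = [[n_it / sum(row) for n_it in row] for row in n]      # f_i^t
f_global = [sum(row[t] for row in n) / M for t in range(T)]     # f^t
tau = [[(f_group[i][t] - f_global[t]) / f_global[t]             # tau_i^t
        for t in range(T)] for i in range(len(n))]
```

Here topic 0 is over-represented in group 0 (τ = 0.6) and under-represented in group 1 (τ = −0.6), matching the intuitive reading given above.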
For example, we see that document group 2 (articles labelled as Physics) has a large over-representation of topic 10, which corresponds to the Physics topic, whilst this topic is under-represented in the document group of articles labelled as Biology. Looking at the model's newly proposed document group (group 4), we see that it has an over-representation of topics 7 and 5 (and, to a lesser extent, of topics 2, 9, and 11), confirming its hybrid character.

Discussion and Conclusions
In this paper, we introduced and explored a formal methodology that combines multiple data types (e.g., text, metadata, links) to perform the common text-analysis tasks of clustering documents and inferring latent relationships between them. The main theoretical advantage of our methodology is that it incorporates all the different types of data into a single, consistent statistical model. Our approach is based on an extension of multilayer Stochastic Block Models, which have previously been used to find communities in (sparse) complex networks and which we use here to perform text analysis (see Refs. [3,18] for alternative uses of SBMs for topic modelling). On the one hand, our method extends community-detection methods to the analysis of text in the presence of multiple data types, our main findings being (i) that universal statistical properties of texts lead to different link densities in the different layers of the network, and (ii) that the word layer plays a dominant role in the inference of partitions. On the other hand, our method can be viewed as a generalized topic-modelling method that incorporates metadata and hyperlinks, labels the communities of documents by examining the proportions of topics, and builds on the previous connections between SBMs and Latent Dirichlet Allocation [15,20]. Our investigations on four different datasets show consistent results that reveal the potential and limitations of our approach.

Figure 6: Normalised contribution of topics to the groups of documents. The normalised measure τ was computed for all 4 document groups and 12 word groups (topics). We set a threshold of τ_i^t ≥ 0.2 (τ_i^t ≤ −0.2) to define whether a topic is over- (under-) represented in a document group.

Our most important finding is that
our methodology succeeds in using multiple data types (e.g., a text layer), leading to more nuanced communities of documents and increasing the ability to predict missing links. On the practical side, the lack of balance between the different layers poses challenges for evaluating the contributions of the individual layers, because the description length obtained in the inference process is dominated by the text layer and the variations within the (Monte Carlo) inference process become larger than the contribution of the other layers. This suggests further investigations on the role of unbalanced layers in multilayer networks, and on how to deal with them within the proposed framework, as important steps to extend the success of complex-network methods to other classes of relevant datasets.

Appendix 2: Comparison of Partitions

The uncertainty of the consensus partition obtained from M_p partitions can be quantified as described in Refs. [36,25].

Appendix 3: Supervised Learning Via Link Prediction
A supervised learning approach to selecting the best model is the task of link prediction [37,39]. Let A^O be the observed network and δA the missing or spurious edges. The desired posterior distribution of the missing entries δA conditioned on the observed network A^O can be computed as

P(δA | A^O) = Σ_b P(δA | A^O, b) P(b | A^O).

However, as the normalization constant is difficult to obtain, the numerator is computed by sampling partitions b from the posterior, inserting or deleting the edges δA from the graph, and computing the new likelihood. As a result, we may compute the relative probability between specific sets of alternative predictive hypotheses {δA_i} through the likelihood ratios

λ_i = P(δA_i | A^O) / Σ_j P(δA_j | A^O).
We can compute the area under the curve (AUC) of the receiver operating characteristic to evaluate the SBM's classification ability. Furthermore, given two sets of AUCs from two different models, we can compare the models' performance by computing the t-statistic, for a null model in which the difference in AUC has zero mean,

t = ⟨ΔAUC⟩ / (σ_ΔAUC / √n),

where ⟨ΔAUC⟩, σ_ΔAUC, and n are the mean, standard deviation, and size of the sample of AUC differences.
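This test statistic can be sketched directly; the per-run AUC differences below are hypothetical values, not the paper's measurements.

```python
# Sketch: one-sample t-statistic for AUC differences against a null of
# zero mean, t = mean / (sd / sqrt(n)).  The differences are hypothetical.
import math

def t_statistic(diffs):
    """diffs: per-run differences in AUC between the two models."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n))

diffs = [0.1, 0.2, 0.1, 0.2]   # hypothetical AUC differences over 4 runs
t = t_statistic(diffs)
```

Large |t| (relative to the t-distribution with n − 1 degrees of freedom) indicates that the AUC difference between the two models is unlikely under the zero-mean null.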