 Regular article
 Open Access
Time-varying graph representation learning via higher-order skip-gram with negative sampling
EPJ Data Science volume 11, Article number: 33 (2022)
Abstract
Representation learning models for graphs are a successful family of techniques that project nodes into feature spaces that can be exploited by other machine learning algorithms. Since many real-world networks are inherently dynamic, with interactions among nodes changing over time, these techniques can be defined both for static and for time-varying graphs. Here, we show how the skip-gram embedding approach can be generalized to perform implicit tensor factorization on different tensor representations of time-varying graphs. We show that higher-order skip-gram with negative sampling (HOSGNS) is able to disentangle the role of nodes and time, with a small fraction of the number of parameters needed by other approaches. We empirically evaluate our approach using time-resolved face-to-face proximity data, showing that the learned representations outperform state-of-the-art methods when used to solve downstream tasks such as network reconstruction. Good performance on predicting the outcome of dynamical processes such as disease spreading shows the potential of this method to estimate contagion risk, providing early risk awareness based on contact tracing data.
1 Introduction
A great variety of natural and artificial systems can be represented as networks of elementary structural entities coupled by relations between them. The abstraction of such systems as networks helps us understand, predict and optimize their behaviour [1, 2]. In this sense, node and graph embeddings have been established as standard feature representations in many learning tasks [3, 4]. Node embedding methods map nodes into low-dimensional vectors that can be used to solve downstream tasks such as edge prediction, network reconstruction and node classification.
Node embeddings have proven successful in achieving low-dimensional encoding of static network structures, but many real-world networks are inherently dynamic [5, 6]. Time-resolved networks are also the support of important dynamical processes, such as epidemic or rumor spreading, cascading failures, consensus formation, etc. [7]. Time-resolved node embeddings have been shown to yield improved performance for predicting the outcome of dynamical processes over networks, such as information diffusion and disease spreading [8], providing estimation of infection and contagion risk when used with contact tracing data.
Since we expect more proximity network data to be used for contact tracing and as proxies for epidemic risk [9], learning meaningful representations of time-resolved proximity networks can be of extreme importance when facing events such as epidemic outbreaks [10, 11]. The manual and automatic collection of time-resolved proximity graphs for contact tracing purposes presents an opportunity for quick identification of possible infection clusters and infection chains. Even before the COVID-19 pandemic, the use of wearable proximity sensors for collecting time-resolved proximity networks had been widely discussed in the literature, and many approaches have been used to describe patterns of activity and community structure, and to study spreading patterns of infectious diseases [12–14].
Here we propose a representation learning model that performs implicit tensor factorization on different higher-order representations of time-varying graphs. The main contributions are as follows:

Given that the skip-gram embedding approach implicitly performs a factorization of the shifted pointwise mutual information (PMI) matrix [15], we generalize it to perform implicit factorization of a shifted PMI tensor. We then define the steps to achieve this factorization using higher-order skip-gram with negative sampling (HOSGNS) optimization.

We show how to apply 3rd-order and 4th-order SGNS on different higher-order representations of time-varying graphs.

We show that time-varying graph representations learned via HOSGNS outperform state-of-the-art methods when used to solve downstream tasks, even using a fraction of the number of embedding parameters.
We report the results of learning embeddings on empirical time-resolved face-to-face proximity data and using such representations as predictors for solving three different tasks: predicting the outcomes of a SIR spreading process over the time-varying graph, network reconstruction and link prediction. We compare these results with state-of-the-art methods for time-varying graph representation learning.
2 Preliminaries and related work
2.1 Skip-gram representation learning
The skip-gram model was designed to compute word embeddings in word2vec [16], and was afterwards extended to graph node embeddings [17–19]. Levy and Goldberg [15] established the relation between skip-gram trained with negative sampling (SGNS) and traditional matrix decomposition methods [20, 21], showing the equivalence of SGNS optimization to factorizing a shifted PMI matrix [22].
Starting from a textual corpus of words \(w_{1},w_{2},\dots , w_{m}\) from a vocabulary \(\mathcal{V}\), it assigns to each word \(w_{s}\) a context corresponding to words surrounding \(w_{s}\) in a window of size T, i.e. the multiset \(c_{T}(w_{s}) = \{w_{s-T},\dots ,w_{s-1},w_{s+1},\dots ,w_{s+T}\}\). Training samples \(\mathcal{D} = \{(i,j): i \in \mathcal{W}, j \in \mathcal{C},~j \in c_{T}(i) \}\) are built by collecting all the observed word-context pairs, where \(\mathcal{W}\) and \(\mathcal{C}\) are the vocabularies of words and contexts respectively (usually \(\mathcal{W} = \mathcal{C} = \mathcal{V}\)). Here we denote as \(\#(i,j)\) the number of times \((i,j)\) appears in \(\mathcal{D}\). Similarly we use \(\#i = \sum_{j}\#(i,j)\) and \(\#j = \sum_{i}\#(i,j)\) as the number of times each word occurs in \(\mathcal{D}\), with relative frequencies \(P_{\mathcal{D}}(i,j)= \frac{\#(i,j)}{|\mathcal{D}|}\), \(P_{\mathcal{D}}(i)= \frac{\#i}{|\mathcal{D}|}\) and \(P_{\mathcal{D}}(j)= \frac{\#j}{|\mathcal{D}|}\).
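As an illustration of this construction, the following minimal Python sketch (with a hypothetical toy corpus) collects word-context pairs in a window of size T and computes the empirical frequencies \(P_{\mathcal{D}}(i,j)\):

```python
from collections import Counter

def build_pairs(corpus, T=2):
    """Collect all word-context pairs within a window of size T."""
    pairs = []
    for s, w in enumerate(corpus):
        for off in range(-T, T + 1):
            if off != 0 and 0 <= s + off < len(corpus):
                pairs.append((w, corpus[s + off]))
    return pairs

corpus = ["a", "b", "a", "c", "a", "b"]          # toy corpus
D = build_pairs(corpus, T=1)
counts = Counter(D)                               # #(i, j)
P_ij = {p: n / len(D) for p, n in counts.items()} # P_D(i, j)
```

The same counts, marginalized over one index, give \(P_{\mathcal{D}}(i)\) and \(P_{\mathcal{D}}(j)\).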
SGNS computes d-dimensional representations for words and contexts in two matrices \(\mathbf{W} \in \mathbb{R}^{|\mathcal{W}| \times d}\) and \(\mathbf{C} \in \mathbb{R}^{|\mathcal{C}| \times d}\), performing a binary classification task in which pairs \((i,j) \in \mathcal{D}\) are positive examples and pairs \((i,j_{\mathcal{N}})\) with randomly sampled contexts are negative examples. The probability of the positive class is parametrized as the sigmoid (\(\sigma (x) = (1+e^{-x})^{-1}\)) of the inner product of embedding vectors:
and each word-context pair \((i,j)\) contributes to the loss as follows:
where the second expression uses the symmetry property \(\sigma (-x) = 1-\sigma (x)\) inside the expected value and κ is the number of negative examples, sampled according to the empirical distribution of contexts \(P_{\mathcal{N}}(j) = P_{\mathcal{D}}(j)\).
Following results found in [15], the sum of all \(\ell (i,j)\) weighted with the probability that each pair \((i,j)\) appears in \(\mathcal{D}\) gives the SGNS objective function:
where \(P_{\mathcal{N}}(i, j) = P_{\mathcal{D}}(i) \cdot P_{\mathcal{D}}(j)\) is the probability of \((i,j)\) under assumption of statistical independence.
Levy and Goldberg [15] demonstrated that, when d is sufficiently high, optimal SGNS embedding matrices satisfy these relations:
which tells us that SGNS optimization is equivalent to a rank-d matrix decomposition of the word-context pointwise mutual information (PMI) matrix shifted by a constant, i.e. the logarithm of the number of negative samples. In this work, we refer to the shifted PMI matrix also as \(\operatorname{SPMI}_{\kappa} = \operatorname{PMI} - \log \kappa \). This equivalence was later recovered under diverse assumptions [23–27], and exploited to compute closed-form expressions approximated in different graph embedding models [28].
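The shifted PMI matrix that SGNS implicitly factorizes can be computed explicitly from co-occurrence counts. A minimal sketch, assuming a hypothetical \(2\times 2\) toy count matrix and κ = 5 negative samples:

```python
import numpy as np

# Toy co-occurrence counts #(i, j): rows index words, columns index contexts.
counts = np.array([[4., 1.],
                   [1., 2.]])
total = counts.sum()
P_ij = counts / total                      # P_D(i, j)
P_i = P_ij.sum(axis=1, keepdims=True)      # marginal P_D(i)
P_j = P_ij.sum(axis=0, keepdims=True)      # marginal P_D(j)

kappa = 5
# SPMI_kappa = log( P(i,j) / (P(i) P(j)) ) - log(kappa)
SPMI = np.log(P_ij / (P_i * P_j)) - np.log(kappa)
```

Each entry compares the joint probability of a pair with the product of its marginals, then subtracts the constant shift log κ.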
2.2 Random walk based graph embeddings
Given an undirected, weighted and connected graph \(\mathcal{G = (V,E)}\) with nodes \(i,j \in \mathcal{V}\), edges \((i,j) \in \mathcal{E}\) and adjacency matrix A, graph embedding methods are unsupervised models designed to map nodes into dense d-dimensional representations (\(d \ll |\mathcal{V}|\)) [29]. A well-known family of approaches based on the skip-gram model consists in sampling random walks from the graph and processing node sequences as textual sentences. In DeepWalk [17] and node2vec [19], the skip-gram model is used to obtain node embeddings from co-occurrences in random walk realizations. Although the original implementation of DeepWalk uses hierarchical softmax to compute embeddings, we will refer to the SGNS formulation given by [28].
Since SGNS can be interpreted as a factorization of the word-context PMI matrix [15], the asymptotic form of the PMI matrix implicitly decomposed in DeepWalk can be derived [28]. Given the 1-step transition matrix \(\mathbf{P} = \mathbf{D}^{-1}\mathbf{A}\), where \(\mathbf{D} = \operatorname{diag}(d_{1}, \dots , d_{|\mathcal{V}|})\) and \(d_{i} = \sum_{j \in \mathcal{V}}\mathbf{A}_{ij}\) is the (weighted) node degree, the expected PMI for a node-context pair \((i,j)\) occurring in a T-sized window is:
where \(\operatorname{vol}(\mathcal{G}) = \sum_{i,j\in \mathcal{V}}\mathbf{A}_{ij}\). We will return to this equation in Sect. 3.2, where we use the expression in \((a)\) to build probability tensors from higher-order graph representations.
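The expression in \((a)\) can be evaluated numerically from the adjacency matrix. The sketch below, on a hypothetical 3-node toy graph, builds the 1-step transition matrix and the symmetrized node-context co-occurrence probabilities over a T-sized window; by construction the entries sum to 1:

```python
import numpy as np

# Toy undirected graph (star with center node 0).
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
d = A.sum(axis=1)                     # (weighted) node degrees
P = A / d[:, None]                    # 1-step transition matrix D^{-1} A
vol = A.sum()                         # vol(G)
T = 2                                 # window size

# Symmetrized co-occurrence probability of (i, j) in a T-sized window:
# (1 / 2T) * sum_r [ d_i (P^r)_ij / vol + d_j (P^r)_ji / vol ]
Pr = sum(np.linalg.matrix_power(P, r) for r in range(1, T + 1))
DPr = d[:, None] * Pr
coocc = (DPr + DPr.T) / (2 * T * vol)
```

For an undirected graph the result is a proper symmetric joint distribution over node-context pairs.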
2.3 Time-varying graphs and their algebraic representations
Time-varying graphs [5, 6] are defined as triples \(\mathcal{H = (V,E,T)}\), i.e. collections of events \((i, j, k) \in \mathcal{E}\), representing undirected pairwise relations among nodes at discrete times (\(i,j \in \mathcal{V}\), \(k \in \mathcal{T}\)). \(\mathcal{H}\) can be seen as a temporal sequence of static graphs \(\{\mathcal{G}^{(k)}\}_{k \in \mathcal{T}}\), each with adjacency matrix \(\mathbf{A}^{(k)}\) such that \(\mathbf{A}^{(k)}_{ij} = \omega (i,j,k) \in \mathbb{R}\) is the weight of the event \((i,j,k) \in \mathcal{E}\). We can concatenate the list of timestamped snapshots \([\mathbf{A}^{(1)}, \dots , \mathbf{A}^{(|\mathcal{T}|)}]\) to obtain a single 3rd-order tensor \(\boldsymbol{\mathcal{A}}^{\mathrm{stat}}(\mathcal{H}) \in \mathbb{R}^{|\mathcal{V}|\times |\mathcal{V}|\times |\mathcal{T}|}\) which characterizes the evolution of the graph over time. This representation has been used to discover latent community structures of temporal graphs [13] and to perform temporal link prediction [30]. Beyond the above stacked graph representation, more exhaustive representations are possible. In particular, the multilayer approach [31] makes it possible to map the topology of a time-varying graph \(\mathcal{H}\) into a static network \(\mathcal{G_{\mathcal{H}}} = (\mathcal{V}_{\mathcal{H}}, \mathcal{E}_{ \mathcal{H}})\) (the supra-adjacency graph) such that vertices in \(\mathcal{V}_{\mathcal{H}}\) correspond to node-time pairs \((i, k)\equiv i^{(k)} \in \mathcal{V} \times \mathcal{T}\) and edges in \(\mathcal{E}_{\mathcal{H}}\) represent connections \((i^{(k)}, j^{(l)})\) among them.
Since every link can be arranged in a quadruple \((i,j,k,l)\), the connectivity structure is associated with a 4th-order tensor \(\boldsymbol{\mathcal{A}}^{\mathrm{dyn}}(\mathcal{H}) \in \mathbb{R}^{|\mathcal{V}|\times |\mathcal{V}|\times |\mathcal{T}|\times |\mathcal{T}|}\) that is equivalent, up to an opportune reshaping, to the adjacency matrix \(\mathbf{A}(\mathcal{G}_{\mathcal{H}}) \in \mathbb{R}^{|\mathcal{V}||\mathcal{T}|\times |\mathcal{V}||\mathcal{T}|}\) of \(\mathcal{G}_{\mathcal{H}}\). Multilayer representations for time-varying networks have been used to study time-dependent centrality measures [32] and properties of spreading processes [33].
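The two algebraic representations can be sketched in a few lines: stacking snapshots yields the 3rd-order tensor, while an opportune reshaping of the supra-adjacency matrix yields the 4th-order tensor (toy sizes and hypothetical entries, for illustration only):

```python
import numpy as np

# Snapshot-based representation: stack per-time adjacency matrices A^(k)
# into a 3rd-order tensor of shape (|V|, |V|, |T|).
snapshots = [np.array([[0., 1.], [1., 0.]]),
             np.array([[0., 2.], [2., 0.]])]
A_stat = np.stack(snapshots, axis=-1)

# Multilayer representation: a (|V||T| x |V||T|) supra-adjacency matrix,
# reshaped into a 4th-order tensor indexed as (i, j, k, l).
V, T = 2, 2
supra = np.arange(float(V * T * V * T)).reshape(V * T, V * T)
A_dyn = supra.reshape(V, T, V, T).transpose(0, 2, 1, 3)
# A_dyn[i, j, k, l] == supra[i*T + k, j*T + l]
```

The reshape/transpose pair is exactly the "opportune reshaping" relating \(\mathbf{A}(\mathcal{G}_{\mathcal{H}})\) and \(\boldsymbol{\mathcal{A}}^{\mathrm{dyn}}(\mathcal{H})\), assuming node-major ordering of the supra-adjacency indices.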
In the same spirit in which word2vec refers to the word pairs \((i,j)\) as (word, context), here we refer to the node pairs \((i,j)\) as (node, context), and to the time pairs \((k,l)\) as (time, context-time).
2.4 Time-varying graph representation learning
Given a time-varying graph \(\mathcal{H} = (\mathcal{V}, \mathcal{E}, \mathcal{T})\), we define as temporal network embedding a model that learns from data, implicitly or explicitly, a mapping function:
which projects timestamped nodes into a latent low-rank vector space that encodes structural and temporal properties of the original evolving graph [34, 35]. Many existing methods learn node representations from sequences of static snapshots through incremental updates in a streaming scenario: deep autoencoders [36], SVD [37], skip-gram [38, 39] and random walk sampling [40–42]. Another class of models learns dynamic node representations by recurrent/attention mechanisms [43–46] or by imposing temporal stability among adjacent time intervals [47, 48]. DyANE [8] and weg2vec [49] project the dynamic graph structure into a static graph, in order to compute embeddings with word2vec. Closely related to these are [50] and [51], which learn node vectors according to time-respecting random walks or spreading trajectory paths. Moreover, [52] proposed an embedding framework for user-item temporal interactions, and [53] suggested a tensor-based convolutional architecture for dynamic graphs.
Methods that perform well for predicting outcomes of spreading processes make use of time-respecting supra-adjacency representations such as the one proposed by [33]. In these graph representations, a random walk corresponds to a temporal path in the original time-varying graph, encoding relevant information about the spreading process into its connectivity structure. The supra-adjacency representation \(\mathcal{G}_{\mathcal{H}}\) that we refer to in Sect. 3.2, also used in DyANE, with adjacency matrix \(\mathbf{A}(\mathcal{G}_{\mathcal{H}})\), is defined by two rules:

1.
For each event \((i,j,t_{0})\), if i is also active at time \(t_{1} > t_{0}\) and in no other timestamp between the two, we add a cross-coupling edge between supra-adjacency nodes \(j^{(t_{0})}\) and \(i^{(t_{1})}\). In addition, if the next interaction of j with other nodes happens at \(t_{2}>t_{0}\), we add an edge between \(i^{(t_{0})}\) and \(j^{(t_{2})}\). The weights of such edges are set to \(\omega (i,j,t_{0})\).

2.
For every case as described above, we also add self-coupling edges \((i^{(t_{0})}, i^{(t_{1})})\) and \((j^{(t_{0})}, j^{(t_{2})})\), with weights set to 1.
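A minimal sketch of the two rules above, assuming a hypothetical event format `(i, j, t, w)` and treating the pair (i, j) symmetrically so that both parts of rule 1 are covered:

```python
from collections import defaultdict

def supra_adjacency(events):
    """Build the supra-adjacency edge set from (i, j, t, w) events (a sketch;
    repeated contributions to the same edge simply overwrite each other)."""
    activity = defaultdict(set)               # node -> set of active times
    for i, j, t, w in events:
        activity[i].add(t)
        activity[j].add(t)
    activity = {n: sorted(ts) for n, ts in activity.items()}

    def next_time(n, t):                      # next timestamp at which n is active
        later = [s for s in activity[n] if s > t]
        return later[0] if later else None

    edges = {}
    for i, j, t0, w in events:
        for a, b in ((i, j), (j, i)):         # both directions of rule 1
            t1 = next_time(a, t0)
            if t1 is not None:
                # Rule 1: cross-coupling edge (b, t0) -> (a, t1), weight w.
                edges[((b, t0), (a, t1))] = w
                # Rule 2: self-coupling edge (a, t0) -> (a, t1), weight 1.
                edges[((a, t0), (a, t1))] = 1.0
    return edges

edges = supra_adjacency([("i", "j", 0, 2.0), ("i", "k", 1, 1.0)])
```

In the toy example, node i is active again at t = 1, so the event (i, j, 0) generates a cross-coupling edge from \(j^{(0)}\) to \(i^{(1)}\) and a self-coupling edge from \(i^{(0)}\) to \(i^{(1)}\).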
Figure 1 shows the differences between a time-varying graph and its time-aware supra-adjacency representation, according to the formulation described above. DyANE computes, given a node \(i \in \mathcal{V}\), one vector representation for each timestamped node \(i^{(t)} \in \mathcal{V}^{(\mathcal{T})} = \{(i,t) \in \mathcal{V} \times \mathcal{T}: \exists (i,j,t) \in \mathcal{E}\}\) of this supra-adjacency representation. Similar models that learn time-resolved node representations require \(\mathcal{O}(|\mathcal{V}|\cdot |\mathcal{T}|)\) embedding parameters to represent the time-varying graph in the latent space. As we will see, compared with these methods, our approach requires only \(\mathcal{O}(|\mathcal{V}| + |\mathcal{T}|)\) embedding parameters for disentangled node and time representations.
3 Proposed method
Given a time-varying graph \(\mathcal{H = (V, E, T)}\), we propose a representation learning method that learns disentangled representations for nodes and time slices. More formally, we learn a function:
This embedding representation can then be reconciled with the definition in Eq. (6) by combining v and t into a single \(\mathbf{v}^{(t)}\) representation using any combination function \(c: (\mathbf{v},\mathbf{t}) \in \mathbb{R}^{d} \times \mathbb{R}^{d} \mapsto \mathbf{v}^{(t)} \in \mathbb{R}^{d}\). It follows that computing and combining distinct vector embeddings for nodes and time slices requires \(\mathcal{O}(|\mathcal{V}|+|\mathcal{T}|)\) learnable parameters, leading to a more efficient way to obtain time-aware node representations without the need to learn a much larger number \(\mathcal{O}(|\mathcal{V}|\cdot |\mathcal{T}|)\) of parameters.
The parameters of the embedding representation in Eq. (7) are learned through a higher-order generalization of skip-gram with negative sampling (HOSGNS), based on the existing skip-gram framework for node embeddings, as shown in Sect. 3.1. We show that this generalization makes it possible to implicitly factorize the higher-order relations that characterize tensor representations of time-varying graphs, in the same way that classical SGNS decomposes the dyadic relations associated with a static graph.
Similar approaches have been applied in NLP for dynamic word embeddings [54], and higher-order extensions of the skip-gram model have been proposed to learn context-dependent [55] and syntactic-aware [56] word representations. Tensor factorization techniques have also been applied to include the temporal dimension in recommender systems [57, 58], knowledge graphs [59, 60] and face-to-face contact networks [12, 13]. However, this work is the first to merge SGNS with tensor factorization and then apply it to learn time-varying graph embeddings. HOSGNS differs from existing temporal network embeddings based on skip-gram [38, 39], which are minor adaptations of standard SGNS to the dynamic setting. In fact, in Sect. 3.1 we show how the equations of the skip-gram framework can be completely rewritten to apply naturally to inherently higher-order problems.
In the next sections, we first show the formal steps to generalize the skip-gram approach to higher-order data structures, and then we show how to apply HOSGNS on 3rd-order and 4th-order representations of time-varying graphs.
3.1 SGNS for higher-order data structures
Here we address the problem of generalizing SGNS to learn embedding representations from higher-order co-occurrences. In Sect. 2.3 we described snapshot-based and multilayer-based representations of time-varying graphs, which can be seen as 3rd- and 4th-order co-occurrence tensors; therefore in the remainder of this manuscript we focus on 3rd- and 4th-order structures. In this section, we describe in detail the generalization of SGNS to the 3rd-order case. In Additional file 1 we discuss in more detail the derivation of the HOSGNS objective function for any nth-order representation.
We consider a set of training samples \(\mathcal{D} = \{(i, j, k): i \in \mathcal{W}, j \in \mathcal{C}, k \in \mathcal{T}\}\) obtained by collecting co-occurrences among elements from three sets \(\mathcal{W}\), \(\mathcal{C}\) and \(\mathcal{T}\). While SGNS is limited to node-context pairs \((i, j)\), here \(\mathcal{D}\) is constructed with three (or more) variables, e.g. by sampling random walks over a higher-order data structure. We denote as \(\#(i,j,k)\) the number of times the triple \((i,j,k)\) appears in \(\mathcal{D}\). Similarly we use \(\#i = \sum_{j,k}\#(i,j,k)\), \(\#j = \sum_{i,k}\#(i,j,k)\) and \(\#k = \sum_{i,j}\#(i,j,k)\) as the number of times each distinct element occurs in \(\mathcal{D}\), with relative frequencies \(P_{\mathcal{D}}(i,j,k)= \frac{\#(i,j,k)}{|\mathcal{D}|}\), \(P_{\mathcal{D}}(i)= \frac{\#i}{|\mathcal{D}|}\), \(P_{\mathcal{D}}(j)= \frac{\#j}{|\mathcal{D}|}\) and \(P_{\mathcal{D}}(k)= \frac{\#k}{|\mathcal{D}|}\).
Optimization is performed as a binary classification task, where the objective is to discern occurrences actually coming from \(\mathcal{D}\) from random occurrences. We define the likelihood of a single observation \((i,j,k)\) by applying a sigmoid (\(\sigma (x) = (1+e^{-x})^{-1}\)) to the higher-order inner product \([\!\![\cdot]\!\!]\) of the corresponding d-dimensional representations:
where the embedding vectors \(\mathbf{w}_{i},\mathbf{c}_{j}, \mathbf{t}_{k} \in \mathbb{R}^{d}\) are respectively rows of \(\mathbf{W} \in \mathbb{R}^{|\mathcal{W}| \times d}\), \(\mathbf{C} \in \mathbb{R}^{|\mathcal{C}| \times d}\) and \(\mathbf{T} \in \mathbb{R}^{|\mathcal{T}| \times d}\). In the 4th-order case we will also have a fourth embedding matrix \(\mathbf{S} \in \mathbb{R}^{|\mathcal{S}| \times d}\) related to a fourth set \(\mathcal{S}\). For negative sampling we fix an observed \((i,j,k) \in \mathcal{D}\) and independently sample \(j_{\mathcal{N}}\) and \(k_{\mathcal{N}}\) to generate κ negative examples \((i,j_{\mathcal{N}},k_{\mathcal{N}})\). In this way, for a single occurrence \((i,j,k) \in \mathcal{D}\), the expected contribution to the loss is:
where the noise distribution is the product of independent marginal probabilities \(P_{\mathcal{N}}(j, k)= P_{\mathcal{D}}(j) \cdot P_{\mathcal{D}}(k)\). Thus the global objective is the sum of all the quantities of Eq. (9) weighted with the corresponding relative frequency \(P_{\mathcal{D}}(i,j,k)\). The full loss function can be expressed as:
In Additional file 1 we show the formal steps to obtain Eq. (10) for the nth-order case, and that it can be optimized with respect to the embedding parameters, satisfying the low-rank tensor approximation of the multivariate shifted PMI tensor into the factor matrices W, C, T:
Equation (11), like the analogous relation derived by Levy and Goldberg [15] in Eq. (4), assumes full-rank embedding matrices with \(d \approx R = \operatorname{rank}(\operatorname{SPMI}_{\kappa})\). For the case \(d \ll R\), a comprehensive theoretical analysis is missing, although recent works suggest the feasibility of exact low-dimensional factorizations of real-world static networks [61, 62]. Nevertheless, in Additional file 1, we include an empirical analysis of the effectiveness of HOSGNS for low-rank factorization of time-varying graph representations.
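The building blocks of this section can be sketched numerically: the higher-order inner product \([\!\![\mathbf{w}_{i},\mathbf{c}_{j},\mathbf{t}_{k}]\!\!] = \sum_{r} w_{ir}c_{jr}t_{kr}\), and the contributions of a positive and a negative example to the objective (random toy vectors, for illustration only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ho_inner(*vecs):
    """Higher-order inner product: sum over r of the product of r-th components."""
    out = np.ones_like(vecs[0])
    for v in vecs:
        out = out * v
    return out.sum()

rng = np.random.default_rng(0)
d = 8
w, c, t = rng.normal(size=(3, d))           # toy embedding vectors

# Probability that (i, j, k) is a positive example:
p_pos = sigmoid(ho_inner(w, c, t))
# A negative example (i, j_N, k_N) contributes log sigma(-[[w, c_N, t_N]]):
c_neg, t_neg = rng.normal(size=(2, d))
loss = -np.log(p_pos) - np.log(sigmoid(-ho_inner(w, c_neg, t_neg)))
```

For two vectors this reduces to the ordinary dot product used by standard SGNS, which is exactly why HOSGNS contains SGNS as the 2nd-order case.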
3.2 Time-varying graph embedding via HOSGNS
While a static graph \(\mathcal{G = (V,E)}\) is uniquely represented by an adjacency matrix \(\mathbf{A}(\mathcal{G}) \in \mathbb{R}^{|\mathcal{V}|\times |\mathcal{V}|}\), a time-varying graph \(\mathcal{H = (V,E,T)}\) admits diverse possible higher-order adjacency relations (Sect. 2.3). Starting from these higher-order relations, we can either use them directly or use random walk realizations to build a dataset of higher-order co-occurrences. In the same spirit in which random walk realizations lead to the dyadic co-occurrences used to learn embeddings in SGNS, we use higher-order co-occurrences to learn embeddings via HOSGNS.
As discussed in Sect. 3.1, the statistics of higher-order relations can be summarized in multivariate PMI tensors, which derive from co-occurrence probabilities among elements. Once such PMI tensors are constructed, we can factorize them via HOSGNS. To show the versatility of this approach, we choose probability tensors derived from two different types of higher-order relations:

1.
A 3rd-order tensor \(\boldsymbol{\mathcal{P}}^{(\mathrm{stat})}(\mathcal{H}) \in \mathbb{R}^{|\mathcal{V}|\times |\mathcal{V}|\times |\mathcal{T}|}\) which gathers the relative frequencies of node occurrences in temporal edges:
$$ \bigl(\boldsymbol{\mathcal{P}}^{(\mathrm{stat})} \bigr)_{ijk} = \frac{\omega (i,j,k)}{\operatorname{vol}(\mathcal{H})}, $$ (12)
where \(\operatorname{vol}(\mathcal{H}) = \sum_{i,j,k}\omega (i,j,k)\) is the total weight of the interactions occurring in \(\mathcal{H}\). These probabilities are associated with the snapshot sequence representation \(\boldsymbol{\mathcal{A}}^{\mathrm{stat}}(\mathcal{H}) = [\mathbf{A}^{(1)}, \dots , \mathbf{A}^{(|\mathcal{T}|)}]\) and contain information about the topological structure of \(\mathcal{H}\).

2.
A 4th-order tensor \(\boldsymbol{\mathcal{P}}^{(\mathrm{dyn})}(\mathcal{H}) \in \mathbb{R}^{|\mathcal{V}|\times |\mathcal{V}|\times |\mathcal{T}|\times |\mathcal{T}|}\), which gathers the occurrence probabilities of timestamped nodes over random walks on the supra-adjacency graph \(\mathcal{G}_{\mathcal{H}}\) (as used in DyANE). Using the numerator of Eq. (5), with supra-adjacency indices \(i^{(k)}\) and \(j^{(l)}\) instead of the usual indices i and j, the tensor entries are given by:
$$ \bigl(\boldsymbol{\mathcal{P}}^{(\mathrm{dyn})} \bigr)_{ijkl} = \frac{1}{2T}\sum_{r=1}^{T} \biggl[ \frac{d_{i^{(k)}}}{\operatorname{vol}(\mathcal{G}_{\mathcal{H}})}\bigl( \mathbf{P}^{r} \bigr)_{i^{(k)},j^{(l)}} + \frac{d_{j^{(l)}}}{\operatorname{vol}(\mathcal{G}_{\mathcal{H}})} \bigl( \mathbf{P}^{r} \bigr)_{j^{(l)},i^{(k)}} \biggr]. $$ (13)
These probabilities encode causal dependencies among temporal nodes and are correlated with dynamical properties of spreading processes. Notice that the computation of \(\boldsymbol{\mathcal{P}}^{(\mathrm{dyn})}(\mathcal{H})\) requires an undirected supra-adjacency graph, while in DyANE it is directed.
We also combined the two representations in a single tensor that is the average of \(\boldsymbol{\mathcal{P}}^{(\mathrm{stat})}\) and \(\boldsymbol{\mathcal{P}}^{(\mathrm{dyn})}\):
$$ \bigl(\boldsymbol{\mathcal{P}}^{(\mathrm{stat}|\mathrm{dyn})} \bigr)_{ijkl} = \frac{1}{2} \bigl[ \bigl(\boldsymbol{\mathcal{P}}^{(\mathrm{stat})} \bigr)_{ijk}\, \delta _{kl} + \bigl(\boldsymbol{\mathcal{P}}^{(\mathrm{dyn})} \bigr)_{ijkl} \bigr], $$
where \(\delta _{kl}=\mathbb{1}[k=l]\) is the Kronecker delta.
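A toy construction of these probability tensors (hypothetical weights; \(\boldsymbol{\mathcal{P}}^{(\mathrm{dyn})}\) replaced by a uniform placeholder, since its actual computation requires the supra-adjacency random-walk statistics of Eq. (13)):

```python
import numpy as np

# Toy snapshot weights omega(i, j, k): 3 nodes, 2 snapshots (hypothetical data).
A = np.zeros((3, 3, 2))
A[0, 1, 0] = A[1, 0, 0] = 2.0
A[1, 2, 1] = A[2, 1, 1] = 1.0

# Eq. (12): normalize by vol(H) to get the 3rd-order probability tensor.
P_stat = A / A.sum()

# Lift P_stat to 4th order with a Kronecker delta on the two time indices,
# then average with P_dyn (here a uniform placeholder of matching shape).
V, T = 3, 2
delta = np.eye(T)
P_stat4 = P_stat[:, :, :, None] * delta[None, None, :, :]
P_dyn = np.full((V, V, T, T), 1.0 / (V * V * T * T))
P_comb = 0.5 * (P_stat4 + P_dyn)
```

Since both ingredients are normalized joint distributions, the combined tensor also sums to 1, as a valid input for the HOSGNS sampling scheme.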
Figure 2 summarizes the differences between graph embedding via classical SGNS and time-varying graph embedding via HOSGNS. Here, indices \((i,j,k,l)\) correspond to (node, context, time, context-time) in a 4th-order tensor representation of \(\mathcal{H}\).
The above tensors gather empirical probabilities \(P_{\mathcal{D}}(i,j,k,\dots )\) corresponding to positive examples of observable higher-order relations. The probabilities of negative examples \(P_{\mathcal{N}}(i,j,k,\dots )\) can be obtained as the product of the marginal distributions \(P_{\mathcal{D}}(i)\), \(P_{\mathcal{D}}(j)\), \(P_{\mathcal{D}}(k),\dots \) Objective functions like Eq. (10) can be computed with a sampling strategy: picking positive tuples according to the data distribution \(P_{\mathcal{D}}\) and negative ones according to independent sampling from \(P_{\mathcal{N}}\), the HOSGNS objective can be optimized through the following weighted cross-entropy loss:
where B is the number of samples drawn in a training step and κ is the negative sampling constant. We additionally apply the warm-up steps explained in Additional file 1 to speed up the main training stage.
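A minimal SGD sketch of this sampling-based optimization for the 3rd-order case (uniform toy sampling in place of \(P_{\mathcal{D}}\) and \(P_{\mathcal{N}}\); the gradient of the per-sample cross-entropy with respect to the higher-order inner product is \(\sigma ([\!\![\cdot]\!\!]) - \text{label}\)):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
nV, nT, d, kappa, lr = 4, 3, 8, 2, 0.05
W = rng.normal(0, 0.1, size=(nV, d))      # node embeddings
C = rng.normal(0, 0.1, size=(nV, d))      # context embeddings
Tm = rng.normal(0, 0.1, size=(nT, d))     # time embeddings

def step(i, j, k, label):
    """One SGD update on a (node, context, time) triple;
    label is 1 for a positive sample and 0 for a negative sample."""
    p = sigmoid((W[i] * C[j] * Tm[k]).sum())
    g = p - label                          # d(loss)/d(inner product)
    dw, dc, dt = g * C[j] * Tm[k], g * W[i] * Tm[k], g * W[i] * C[j]
    W[i] -= lr * dw
    C[j] -= lr * dc
    Tm[k] -= lr * dt
    return -np.log(p) if label else -np.log(1.0 - p)

first = step(0, 1, 2, 1)                   # a positive triple
for _ in range(100):
    last = step(0, 1, 2, 1)                # loss on this triple decreases
for _ in range(kappa):                     # kappa negatives, sampled independently
    step(0, int(rng.integers(nV)), int(rng.integers(nT)), 0)
```

In the 4th-order case the inner product and the gradients simply include a fourth factor \(\mathbf{s}_{l}\).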
4 Experiments
For the experiments we use time-varying graphs collected by the SocioPatterns collaboration (http://www.sociopatterns.org) using wearable proximity sensors that sense the face-to-face proximity relations of individuals wearing them. After training the proposed models (HOSGNS applied to \(\boldsymbol{\mathcal{P}}^{(\mathrm{stat})}\), \(\boldsymbol{\mathcal{P}}^{(\mathrm{dyn})}\) or \(\boldsymbol{\mathcal{P}}^{(\mathrm{stat}|\mathrm{dyn})}\)) on each dataset, the embedding matrices \(\mathbf{W, C, T}\) (and \(\mathbf{S}\), except for \(\boldsymbol{\mathcal{P}}^{(\mathrm{stat})}\)) are mapped to embedding vectors \(\mathbf{w}_{i}\), \(\mathbf{c}_{j}\), \(\mathbf{t}_{k}\) (and \(\mathbf{s}_{l}\)), where \(i,j \in \mathcal{V}\) and \(k,l \in \mathcal{T}\). In Sect. 4.2, we use the learned representations to solve different downstream tasks: node classification, temporal event reconstruction and missing event prediction. Finally, in Sect. 4.4 we show the visualization of the two-dimensional projections of the embeddings for one of the chosen empirical datasets.
4.1 Experimental setup
4.1.1 Datasets
We performed experiments with both empirical and synthetic datasets describing face-to-face proximity of individuals. We used publicly available empirical contact data collected by the SocioPatterns collaboration [63], with a temporal resolution of 20 seconds, in a variety of contexts: a school (“LyonSchool”), a conference (“SFHH”), a hospital (“LH10”), a high school (“Thiers13”), and offices (“InVS15”) [64]. This is currently the largest collection of open datasets sensing proximity in the same range and temporal resolution used by modern contact tracing systems. In addition, we used social interaction data generated by the agent-based model OpenABM-Covid19 [65] to simulate an outbreak of COVID-19 in an urban setting.
We built a time-varying graph from each dataset, and for the empirical data we performed aggregation over 600-second time windows, neglecting snapshots without registered interactions at that time scale. The weight of a link \((i,j,k)\) is the number of events recorded between nodes \((i,j)\) in the aggregated window k. For the synthetic data we maintained the original temporal resolution and set all link weights to 1. Table 1 shows statistics for each dataset.
4.1.2 Baselines
We compare our approach with several baseline methods from the literature on time-varying graph embeddings, which learn timestamped node representations: (1) DyANE [8], which learns temporal node embeddings with DeepWalk, mapping a time-varying graph into a supra-adjacency representation; (2) DynGEM [36], a deep autoencoder architecture which dynamically reconstructs each graph snapshot, initializing model weights with parameters learned in previous time frames; (3) DynamicTriad [47], which captures structural information and temporal patterns of nodes by modeling the triadic closure process; (4) DySAT [45], a deep neural model that computes node embeddings by a joint self-attention mechanism applied to structural neighborhoods and temporal dynamics; (5) ISGNS [39], an incremental skip-gram embedding model based on DeepWalk. Details about the hyperparameters used in each method can be found in Additional file 1.
4.2 Downstream tasks
4.2.1 Node classification
The aim of this task is to classify nodes into epidemic states according to a SIR epidemic process with infection rate β and recovery rate μ. We simulated 30 realizations of the SIR process on top of each empirical graph with different combinations of parameters \((\beta ,\mu )\). We used similar combinations of epidemic parameters and the same dynamical process to produce SIR states as described in [8]. We then trained a logistic regression to classify the SIR states assigned to each active node \(i^{(k)}\) during the unfolding of the spreading process. We combine the embedding vectors of HOSGNS using the Hadamard (element-wise) product \(\mathbf{w}_{i}\circ \mathbf{t}_{k}\), and compare with dynamic node embeddings learned from the baselines. For a fair comparison, all models produce timestamped node representations with dimension \(d = 128\) as input to the logistic regression.
4.2.2 Temporal event reconstruction
In this task, we aim to determine whether a generic event \((i,j,k)\) belongs to \(\mathcal{H=(V,E,T)}\), i.e., whether there is an edge between nodes i and j at time k. We create a random time-varying graph \(\mathcal{H^{*}=(V,E^{*},T)}\) with the same active nodes \(\mathcal{V}^{(\mathcal{T})}\) and \(|\mathcal{E}|\) events that are not part of \(\mathcal{E}\) (i.e. \(\mathcal{E}\cap \mathcal{E}^{*} = \emptyset \)). In other words, \(\mathcal{E}^{*}\) contains random events that may occur only between the nodes that are active in each snapshot, disregarding other possible edges that involve inactive nodes. Embedding representations learned from \(\mathcal{H}\) are used as features to train a logistic regression to predict whether a given event \((i,j,k)\) is in \(\mathcal{E}\) or in \(\mathcal{E^{*}}\). We combine the embedding vectors of HOSGNS as follows: for \(\mathrm{HOSGNS} ^{(\mathrm{stat})}\), we use the Hadamard product \(\mathbf{w}_{i}\circ \mathbf{c}_{j}\circ \mathbf{t}_{k}\); for \(\mathrm{HOSGNS} ^{(\mathrm{dyn})}\) and \(\mathrm{HOSGNS} ^{(\mathrm{stat}|\mathrm{dyn})}\), we use \(\mathbf{w}_{i}\circ \mathbf{c}_{j}\circ \mathbf{t}_{k}\circ \mathbf{s}_{k}\). For the baseline methods, we aggregate vector embeddings to obtain link-level representations with binary operators (Average, Hadamard, Weighted-L1, Weighted-L2 and Concat), as already used in previous works [19, 66]. For a fair comparison, all models are required to produce event representations with dimension \(d = 192\).
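The binary operators used for the baselines can be sketched as follows (each maps two d-dimensional node vectors to one link-level feature vector; the operator names follow the node2vec-style conventions):

```python
import numpy as np

def combine(u, v, op):
    """Build a link-level feature vector from two node embedding vectors."""
    if op == "average":
        return (u + v) / 2.0
    if op == "hadamard":
        return u * v                    # element-wise product
    if op == "weighted_l1":
        return np.abs(u - v)
    if op == "weighted_l2":
        return (u - v) ** 2
    if op == "concat":
        return np.concatenate([u, v])   # doubles the dimension
    raise ValueError(f"unknown operator: {op}")
```

Note that Concat doubles the output dimension, which is why comparisons fix the final event-representation dimension rather than the per-node one.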
4.2.3 Missing event prediction
In this task, we aim to predict the occurrence of an event \((i,j,k)\) previously removed from \(\mathcal{H=(V,E,T)}\). We create a pruned time-varying graph \(\mathcal{H^{\dagger}=(V,E^{\dagger},T)}\) with the same active nodes \(\mathcal{V}^{(\mathcal{T})}\) and a number of events \(|\mathcal{E}^{\dagger}| = 70\%\,|\mathcal{E}|\) sampled from \(\mathcal{H}\). Embedding representations learned from \(\mathcal{H}^{\dagger}\) are used as features to train a logistic regression to discriminate the missing occurred events \((i,j,k) \in \mathcal{E}\setminus \mathcal{E}^{\dagger}\) from the events \(\mathcal{E}^{*}\) of a random time-varying graph \(\mathcal{H^{*}=(V,E^{*},T)}\) (see above). We combine the embedding vectors of HOSGNS for the classification task as explained for the event reconstruction task.
4.3 Results
In this section we first show downstream task performance on the empirical and synthetic datasets, then we compare the different approaches in terms of training complexity, measuring the number of trainable parameters and the training time for a fixed number of training steps.
Tasks were evaluated using a train-test split. To avoid information leakage from training to test, we randomly split \(\mathcal{V}\) and \(\mathcal{T}\) into train and test sets \((\mathcal{V}_{tr}, \mathcal{V}_{ts})\) and \((\mathcal{T}_{tr}, \mathcal{T}_{ts})\), with proportions 70%–30%. For node classification, only nodes in \(\mathcal{V}_{tr}\) at times in \(\mathcal{T}_{tr}\) were included in the train set, and only nodes in \(\mathcal{V}_{ts}\) at times in \(\mathcal{T}_{ts}\) were included in the test set. For event reconstruction and prediction, only events with \(i,j \in \mathcal{V}_{tr}\) and \(k \in \mathcal{T}_{tr}\) were included in the train set, and only events with \(i,j \in \mathcal{V}_{ts}\) and \(k \in \mathcal{T}_{ts}\) were included in the test set.
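The leakage-free protocol can be sketched as follows (toy graph sizes, illustrative only): nodes and times are split independently, and an event enters a split only if both its endpoints and its time fall on the same side:

```python
import random

# Hedged sketch of the leakage-free train/test split: V and T are split
# independently (70%/30%), and an event (i, j, k) is kept only if i, j
# AND k all belong to the same side. Sizes are illustrative.
random.seed(1)
nodes, times = list(range(20)), list(range(10))
v_tr = set(random.sample(nodes, int(0.7 * len(nodes))))
v_ts = set(nodes) - v_tr
t_tr = set(random.sample(times, int(0.7 * len(times))))
t_ts = set(times) - t_tr

events = [(i, j, k) for _ in range(200)
          for i, j, k in [(random.randrange(20), random.randrange(20),
                           random.randrange(10))] if i != j]

train = [(i, j, k) for (i, j, k) in events
         if i in v_tr and j in v_tr and k in t_tr]
test = [(i, j, k) for (i, j, k) in events
        if i in v_ts and j in v_ts and k in t_ts]
```

Events mixing train and test nodes or times are simply discarded, so no node or time slice is shared across the two sets.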
All approaches were evaluated on the downstream tasks in terms of Macro-F1 scores on all datasets. For each downstream task, 5 different runs of the embedding model are evaluated on 30 different train-test splits, and we report the average score with the standard error over all splits. In node classification, every SIR realization is assigned to a single embedding run to compute prediction scores. In the event reconstruction and prediction tasks, each train-test subset is assigned a different realization of the random time-varying graph \(\mathcal{H}^{*}\) used to produce samples of non-occurring events.
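For reference, the reported statistic is the Macro-F1 (unweighted mean of per-class F1 scores) averaged over splits, with its standard error. A self-contained sketch with toy labels (not results from the paper):

```python
import statistics

# Hedged sketch: Macro-F1 computed from scratch, then aggregated over
# multiple train-test "splits" with mean and standard error.
def macro_f1(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)   # unweighted average over classes

# Toy per-split predictions, purely illustrative.
scores = [macro_f1([0, 0, 1, 1], [0, 1, 1, 1]),
          macro_f1([0, 1, 0, 1], [0, 1, 0, 0])]
mean = statistics.mean(scores)
stderr = statistics.stdev(scores) / len(scores) ** 0.5
```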
4.3.1 Empirical datasets
Results for the classification of nodes in epidemic states are shown in Table 2. We report here a subset of \((\beta ,\mu )\) combinations; others are available in Additional file 1. DynGEM and DynamicTriad have low scores, since they are not devised to learn from graph dynamics. DySAT also performs poorly in this task, since it uses a context-prediction objective that preserves the local structure without properly encoding dynamical patterns. \(\mathrm{HOSGNS}^{(\mathrm{stat})}\) is not able to capture the graph dynamics due to the static nature of \(\boldsymbol{\mathcal{P}}^{(\mathrm{stat})}\). ISGNS, due to its incremental training, performs only marginally better than \(\mathrm{HOSGNS}^{(\mathrm{stat})}\). DyANE, \(\mathrm{HOSGNS}^{(\mathrm{stat\text{-}dyn})}\) and \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\) show good performance, with the two HOSGNS variants outperforming DyANE in most combinations of datasets and SIR parameters.
Results for the temporal event reconstruction task are reported in Table 3. DynGEM performs poorly at temporal event reconstruction. DynamicTriad performs better with the Weighted-L1 and Weighted-L2 operators, while DyANE, DySAT and ISGNS perform better with Hadamard and Weighted-L2. ISGNS has the second-best performance on most datasets. Since the Hadamard product is explicitly used in Eq. (8) to optimize HOSGNS, all HOSGNS variants achieve their best scores with this operator. \(\mathrm{HOSGNS}^{(\mathrm{stat})}\) outperforms all approaches, setting new state-of-the-art results in this task. The \(\boldsymbol{\mathcal{P}}^{(\mathrm{dyn})}\) representation used as input to \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\) focuses on dynamics rather than on events, so its performance on event reconstruction is slightly below DyANE, while \(\mathrm{HOSGNS}^{(\mathrm{stat\text{-}dyn})}\) is comparable to DyANE.
Table 4 outlines the results for the missing event prediction task. In this case \(\mathrm{HOSGNS}^{(\mathrm{stat})}\) has lower performance, comparable with DynGEM and DynamicTriad. DySAT and ISGNS work slightly better with the Hadamard or Weighted-L1/L2 operators, but they are outperformed by DyANE, which performs excellently with Hadamard or Weighted-L2. However, \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\) and \(\mathrm{HOSGNS}^{(\mathrm{stat\text{-}dyn})}\) have the best scores, which emphasizes the importance of leveraging dynamics to learn and predict missing information.
Results for HOSGNS models using other operators are available in Additional file 1. We observe an overall good performance of \(\mathrm{HOSGNS}^{(\mathrm{stat\text{-}dyn})}\) across all downstream tasks, ranking in almost all cases among the two highest scores, whereas the other two HOSGNS variants excel at certain tasks but have lower performance on the others.
4.3.2 Synthetic datasets
Here we report the downstream task performance on the two synthetic datasets only for \(\mathrm{HOSGNS}^{(\mathrm{stat})}\) and \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\), given the similar performance of \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\) and \(\mathrm{HOSGNS}^{(\mathrm{stat\text{-}dyn})}\) in the previous experiments. We also chose DyANE as the only baseline, given its better performance compared to the other baselines on the empirical datasets.
Results for the node classification task for a set of \((\beta ,\mu )\) combinations are reported in Table 5, with other combinations available in Additional file 1. These results mirror those obtained on the empirical datasets, with \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\) performance comparable or superior to DyANE.
Results for the event reconstruction and prediction tasks are reported in Table 6. DyANE performs well with the Hadamard operator, but its scores remain below those of \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\) and \(\mathrm{HOSGNS}^{(\mathrm{stat})}\). Especially with \(\mathrm{HOSGNS}^{(\mathrm{stat})}\), the performance on event reconstruction is not much higher than on event prediction, contrary to the empirical datasets. This difference might be due to the different topological features of synthetic networks with respect to empirical ones.
4.3.3 Training complexity
We report in Table 7 the number of trainable parameters and the training time for each considered algorithm, applied to an empirical graph (LyonSchool) and to the synthetic ones. The proposed HOSGNS model requires a number of trainable parameters that is orders of magnitude smaller than other approaches, with a training time considerably shorter as the number of nodes increases, for a fixed number of training iterations. ISGNS has a comparable number of parameters because it incrementally updates \(\mathcal{O}(\mathcal{V})\) parameters while moving across the \(\mathcal{T}\) snapshots. DySAT's training time is considerably higher due to the computational overhead of the self-attention mechanism.
4.4 Embedding space visualization
One of the main advantages of HOSGNS is that it disentangles the role of nodes and time by learning representations of nodes and time intervals separately. In this section, we include plots with two-dimensional projections of these embeddings, obtained with UMAP [67] for manifold learning and non-linear dimensionality reduction. With these plots, we show that the embedding matrices learned by \(\mathrm{HOSGNS}^{(\mathrm{stat})}\) and \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\) successfully capture both the structure and the dynamics of the time-varying graph.
Dynamical information can be represented by associating each embedding vector with its corresponding time interval \(k \in \mathcal{T}\), and graph structure can be represented by associating each embedding vector with a community membership. While community membership can be estimated by various community detection methods, we choose a dataset with ground-truth node membership information. We consider the LyonSchool dataset as a case study, widely investigated in the literature with respect to structural and spreading properties [68–73]. This dataset spans two days and includes metadata (Table 8) reporting the class of each participant of the school (10 different labels for children and 1 label for teachers), and we identify the community membership of each individual according to these labels (class labels). Moreover, we assign time labels according to the activation of individual nodes in temporal snapshots.
To show how disentangled representations capture different aspects of the evolving graph, in Fig. 3 we plot individual representations of nodes \(i \in \mathcal{V}\) and time slices \(k \in \mathcal{T}\), labeled according to class membership and time snapshot respectively. Both \(\mathrm{HOSGNS}^{(\mathrm{stat})}\) and \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\) capture the community structure (left of each panel), with node embeddings clustered into the ground-truth classes, but the dynamical information expressed by the time embeddings (right of each panel) differs between the two methods. Due to the time-respecting topology of the supra-adjacency graph, \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\) captures the causality of node co-occurrences, encoding temporal slices into a time-ordered one-dimensional manifold. \(\mathrm{HOSGNS}^{(\mathrm{stat})}\) is built on the snapshot representation, which is invariant under time permutations, so its temporal encoding is constrained to the local connectivity structure of graph slices.
In Fig. 4 we visualize representations of temporal nodes \(i^{(k)} \in \mathcal{V}^{(\mathcal{T})}\), computed as Hadamard products of node and time embeddings. \(\mathrm{HOSGNS}^{(\mathrm{stat})}\) projections show clusters of nodes active at multiple times, representing different social situations: interactions during lectures show uniform class labels and heterogeneous time labels, whereas interactions occurring in social spaces with mixed classes show uniform time labels and heterogeneous class labels. This is in line with previous studies [13], where different patterns of interaction are found during school activities, and gatherings in social spaces (such as the canteen and playground) are more concentrated during lunch time. \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\) projected embeddings, due to the causality information encoded in the time representations, display trajectories of social interactions that span over time in the embedding space, with communities interacting and mixing at different points of the day.
In Fig. 5 we show dynamic node embeddings computed with baseline methods, which do not dissociate structure and time. The DyANE embedding space properly encodes the time-aware topology, since the model is based on the supra-adjacency graph like \(\mathrm{HOSGNS}^{(\mathrm{dyn})}\). DynamicTriad also captures significant temporal structures, but it is less effective at expressing the overall dynamics since it is limited to modeling the triadic closure process, while other relevant interaction patterns are instead accounted for by supra-adjacency random walks. The DynGEM, DySAT and ISGNS embedding spaces do not encode any structural or temporal information.
5 Conclusions
In this paper, we introduce Higher-Order Skip-Gram with Negative Sampling (HOSGNS) for time-varying graph representation learning. We generalize the skip-gram embedding approach, which implicitly performs a factorization of the shifted PMI matrix, to the implicit factorization of a shifted PMI tensor. We show how to optimize HOSGNS for the generic nth-order case, and how to apply 3rd-order and 4th-order SGNS to different higher-order representations of time-varying graphs.
The embedding representations learned by HOSGNS outperform other methods in the literature and set new state-of-the-art results on the downstream tasks considered. Learned from empirical time-resolved face-to-face proximity data, such representations can be effectively used to predict the outcome of a SIR spreading process over the time-varying graph, as well as for network reconstruction and link prediction.
HOSGNS learns more compact representations of time-varying graphs thanks to its reduced number of parameters, with a computational complexity comparable to or lower than other state-of-the-art methods. By learning disentangled representations of nodes and time intervals, HOSGNS uses a number of parameters on the order of \(\mathcal{O}(\mathcal{V} + \mathcal{T})\), while models that learn node-time representations need at least \(\mathcal{O}(\mathcal{V}\cdot \mathcal{T})\) parameters.
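This parameter-count argument can be made concrete with a back-of-the-envelope calculation (the sizes below are illustrative, not those of the datasets in the paper, and constant factors such as the number of embedding matrices per entity are ignored):

```python
# Hedged sketch: trainable-parameter counts for d-dimensional embeddings.
def disentangled_params(n_nodes, n_times, d):
    # one embedding per node plus one per time slice: O(|V| + |T|)
    return (n_nodes + n_times) * d

def node_time_params(n_nodes, n_times, d):
    # one embedding per (node, time) pair: O(|V| * |T|)
    return n_nodes * n_times * d

V, T, d = 1000, 500, 128  # illustrative sizes
ratio = node_time_params(V, T, d) / disentangled_params(V, T, d)
assert ratio > 100  # disentangling saves orders of magnitude here
```

The gap widens with the number of time slices, which is why the savings are largest on long, finely resolved recordings.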
While other methods such as DyANE assume that the whole temporal network is known, here we relax this assumption and show that the learned representations can also be used to predict events that are not seen during the representation learning phase. One limitation still holds, however: the transductivity of the model makes it unable to generalize the embedding representations outside the set of observed temporal slices. Future work to tackle this limitation could extend the methodology with prior constraints, such as temporal smoothness and stability of embeddings over consecutive time slices, or equip the model with an inductive framework.
We show that HOSGNS can be naturally applied to time-varying graphs, but this methodology can easily be adapted to other representation learning problems involving multi-modal data and multi-layered graph representations, where the purpose is to factorize higher-order dependencies between elementary units of the system. Beyond these applications, extensions of the model can find use in feature learning on higher-order systems, i.e. hypergraphs and simplicial complexes, where interactions among vertices are intrinsically polyadic.
Availability of data and materials
We use open data, which can be downloaded from http://www.sociopatterns.org or generated from https://github.com/BDI-pathogens/OpenABM-Covid19. The source code is publicly available at https://github.com/simonepiaggesi/hosgns.
References
Newman ME (2003) The structure and function of complex networks. SIAM Rev 45(2):167–256
Albert R, Barabási AL (2002) Statistical mechanics of complex networks. Rev Mod Phys 74(1):47
Cai H, Zheng VW, Chang KCC (2018) A comprehensive survey of graph embedding: problems, techniques, and applications. IEEE Trans Knowl Data Eng 30(9):1616–1637
Goyal P, Ferrara E (2018) Graph embedding techniques, applications, and performance: a survey. Knowl-Based Syst 151:78–94
Holme P, Saramäki J (2012) Temporal networks. Phys Rep 519(3):97–125
Casteigts A, Flocchini P, Quattrociocchi W, Santoro N (2012) Time-varying graphs and dynamic networks. Int J Parallel Emerg Distrib Syst 27(5):387–408
Barrat A, Barthelemy M, Vespignani A (2008) Dynamical processes on complex networks. Cambridge University Press, Cambridge
Sato K, Oka M, Barrat A, Cattuto C (2021) Predicting partially observed processes on temporal networks by dynamics-aware node embeddings (DyANE). EPJ Data Sci 10(1):22
Alsdurf H, Bengio Y, Deleu T, Gupta P, Ippolito D, Janda R, Jarvie M, Kolody T, Krastev S, Maharaj T et al (2020). Covi white paper. arXiv preprint. arXiv:2005.08502
Kapoor A, Ben X, Liu L, Perozzi B, Barnes M, Blais M, O’Banion S (2020) Examining COVID-19 forecasting using spatio-temporal GNNs. In: Proceedings of the 16th international workshop on mining and learning with graphs (MLG)
Gao J, Sharma R, Qian C, Glass LM, Spaeder J, Romberg J, Sun J, Xiao C (2021) Stan: spatio-temporal attention network for pandemic prediction using real-world evidence. J Am Med Inform Assoc 28(4):733–743
Sapienza A, Panisson A, Wu J, Gauvin L, Cattuto C (2015) Detecting anomalies in time-varying networks using tensor decomposition. In: 2015 IEEE international conference on data mining workshop (ICDMW). IEEE, pp 516–523
Gauvin L, Panisson A, Cattuto C (2014) Detecting the community structure and activity patterns of temporal networks: a non-negative tensor factorization approach. PLoS ONE 9(1):e86028
Génois M, Vestergaard CL, Fournet J, Panisson A, Bonmarin I, Barrat A (2015) Data on face-to-face contacts in an office building suggest a low-cost vaccination strategy based on community linkers. Netw Sci 3(3):326–347
Levy O, Goldberg Y (2014) Neural word embedding as implicit matrix factorization. In: Advances in neural information processing systems, pp 2177–2185
Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J (2013) Distributed representations of words and phrases and their compositionality. In: Advances in neural information processing systems, pp 3111–3119
Perozzi B, Al-Rfou R, Skiena S (2014) Deepwalk: online learning of social representations. In: Proc. of the 20th ACM SIGKDD int. conf. on knowledge discovery and data mining. ACM, New York, pp 701–710
Tang J, Qu M, Wang M, Zhang M, Yan J, Mei Q (2015) Line: large-scale information network embedding. In: Proceedings of the 24th international conference on world wide web, pp 1067–1077
Grover A, Leskovec J (2016) node2vec: scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp 855–864
Kolda TG, Bader BW (2009) Tensor decompositions and applications. SIAM Rev 51(3):455–500
Anandkumar A, Ge R, Hsu D, Kakade SM, Telgarsky M (2014) Tensor decompositions for learning latent variable models. J Mach Learn Res 15:2773–2832
Church KW, Hanks P (1990) Word association norms, mutual information, and lexicography. Comput Linguist 16(1):22–29
Yang Z, Ding M, Zhou C, Yang H, Zhou J, Tang J (2020) Understanding negative sampling in graph representation learning. In: Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pp 1666–1676
Assylbekov Z, Takhanov R (2019) Context vectors are reflections of word vectors in half the dimensions. J Artif Intell Res 66:225–242
Allen C, Balazevic I, Hospedales T (2019) What the vec? Towards probabilistically grounded embeddings. In: Advances in neural information processing systems, pp 7465–7475
Melamud O, Goldberger J (2017) Information-theory interpretation of the skip-gram negative-sampling objective function. In: Proceedings of the 55th annual meeting of the association for computational linguistics (volume 2: short papers), pp 167–171
Arora S, Li Y, Liang Y, Ma T, Risteski A (2016) A latent variable model approach to PMI-based word embeddings. Trans Assoc Comput Linguist 4:385–399
Qiu J, Dong Y, Ma H, Li J, Wang K, Tang J (2018) Network embedding as matrix factorization: unifying deepwalk, line, pte, and node2vec. In: Proceedings of the eleventh ACM international conference on web search and data mining. ACM, New York, pp 459–467
Hamilton WL, Ying R, Leskovec J (2017) Representation learning on graphs: methods and applications. arXiv preprint. arXiv:1709.05584
Dunlavy DM, Kolda TG, Acar E (2011) Temporal link prediction using matrix and tensor factorizations. ACM Trans Knowl Discov Data 5(2):1–27
De Domenico M, SoléRibalta A, Cozzo E, Kivelä M, Moreno Y, Porter MA, Gómez S, Arenas A (2013) Mathematical formulation of multilayer networks. Phys Rev X 3(4):041022
Taylor D, Porter MA, Mucha PJ (2019) In: Holme P, Saramäki J (eds) Supracentrality analysis of temporal networks with directed interlayer coupling. Springer, Cham, pp 325–344
Valdano E, Ferreri L, Poletto C, Colizza V (2015) Analytical computation of the epidemic threshold on temporal networks. Phys Rev X 5(2):021005
Kazemi SM, Goel R, Jain K, Kobyzev I, Sethi A, Forsyth P, Poupart P (2020) Representation learning for dynamic graphs: a survey. J Mach Learn Res 21(70):1–73
Barros CDT, Mendonça MRF, Vieira AB, Ziviani A (2021) A survey on embedding dynamic graphs. ACM Comput Surv 55(1):10
Goyal P, Kamra N, He X, Liu Y (2017) Dyngem: deep embedding method for dynamic graphs. In: IJCAI workshop on representation learning for graphs (ReLiG)
Zhang Z, Cui P, Pei J, Wang X, Zhu W (2018) TIMERS: error-bounded SVD restart on dynamic networks. In: Thirty-second AAAI conference on artificial intelligence
Du L, Wang Y, Song G, Lu Z, Wang J (2018) Dynamic network embedding: an extended approach for skip-gram based network embedding. In: IJCAI, pp 2086–2092
Peng H, Li J, Yan H, Gong Q, Wang S, Liu L, Wang L, Ren X (2020) Dynamic network embedding via incremental skip-gram with negative sampling. Sci China Inf Sci 63(10):1–19
Béres F, Kelen DM, Pálovics R, Benczúr AA (2019) Node embeddings in dynamic graphs. Appl Netw Sci 4(1):64
Mahdavi S, Khoshraftar S, An A (2018) dynnode2vec: scalable dynamic network embedding. In: 2018 IEEE international conference on big data (big data). IEEE, pp 3762–3765
Yu W, Cheng W, Aggarwal CC, Zhang K, Chen H, Wang W (2018) Netwalk: a flexible deep embedding approach for anomaly detection in dynamic networks. In: Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, pp 2672–2681
Goyal P, Chhetri SR, Canedo A (2020) dyngraph2vec: capturing network dynamics using dynamic graph representation learning. KnowlBased Syst 187:104816
Li T, Zhang J, Philip SY, Zhang Y, Yan Y (2018) Deep dynamic network embedding for link prediction. IEEE Access 6:29219–29230
Sankar A, Wu Y, Gou L, Zhang W, Yang H (2020) Dysat: deep neural representation learning on dynamic graphs via self-attention networks. In: Proceedings of the 13th international conference on web search and data mining, pp 519–527
Xu D, Ruan C, Korpeoglu E, Kumar S, Achan K (2020) Inductive representation learning on temporal graphs. In: International conference on learning representations
Zhou L, Yang Y, Ren X, Wu F, Zhuang Y (2018) Dynamic network embedding by modeling triadic closure process. In: Thirty-second AAAI conference on artificial intelligence
Zhu L, Guo D, Yin J, Ver Steeg G, Galstyan A (2016) Scalable temporal latent space inference for link prediction in dynamic social networks. IEEE Trans Knowl Data Eng 28(10):2765–2777
Torricelli M, Karsai M, Gauvin L (2020) weg2vec: event embedding for temporal networks. Sci Rep 10(1):1–11
Nguyen GH, Lee JB, Rossi RA, Ahmed NK, Koh E, Kim S (2018) Continuous-time dynamic network embeddings. In: Companion proceedings of the web conference 2018, pp 969–976
Zhan XX, Li Z, Masuda N, Holme P, Wang H (2020) Susceptible-infected-spreading-based network embedding in static and temporal networks. EPJ Data Sci 9(1):30
Kumar S, Zhang X, Leskovec J (2019) Predicting dynamic embedding trajectory in temporal interaction networks. In: Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pp 1269–1278
Malik OA, Ubaru S, Horesh L, Kilmer ME, Avron H (2021) Dynamic graph convolutional networks using the tensor M-product. In: Proceedings of the 2021 SIAM international conference on data mining (SDM). SIAM, Philadelphia, pp 729–737
Rudolph M, Blei D (2018) Dynamic embeddings for language evolution. In: Proceedings of the 2018 world wide web conference, pp 1003–1011
Liu P, Qiu X, Huang X (2015) Learning context-sensitive word embeddings with neural tensor skip-gram model. In: Twenty-fourth international joint conference on artificial intelligence
Cotterell R, Poliak A, Van Durme B, Eisner J (2017) Explaining and generalizing skipgram through exponential family principal component analysis. In: Proceedings of the 15th conference of the European chapter of the association for computational linguistics: volume 2, short papers, pp 175–181
Xiong L, Chen X, Huang TK, Schneider J, Carbonell JG (2010) Temporal collaborative filtering with Bayesian probabilistic tensor factorization. In: Proceedings of the 2010 SIAM international conference on data mining. SIAM, Philadelphia, pp 211–222
Wu X, Shi B, Dong Y, Huang C, Chawla NV (2019) Neural tensor factorization for temporal interaction learning. In: Proc. of the twelfth ACM int. conf. on web search and data mining, pp 537–545
Lacroix T, Obozinski G, Usunier N (2020) Tensor decompositions for temporal knowledge base completion. In: International conference on learning representations
Ma Y, Tresp V, Daxberger EA (2019) Embedding models for episodic knowledge graphs. J Web Semant 59:100490
Chanpuriya S, Musco C, Sotiropoulos K, Tsourakakis C (2020) Node embeddings and exact low-rank representations of complex networks. In: Advances in neural information processing systems, vol 33
Chanpuriya S, Musco C, Sotiropoulos K, Tsourakakis C (2021) Deepwalking backwards: from embeddings back to graphs. In: Meila M, Zhang T (eds) Proceedings of the 38th international conference on machine learning, vol 139, pp 1473–1483
Cattuto C, Van den Broeck W, Barrat A, Colizza V, Pinton JF, Vespignani A (2010) Dynamics of person-to-person interactions from distributed RFID sensor networks. PLoS ONE 5(7):e11596
Génois M, Barrat A (2018) Can co-location be used as a proxy for face-to-face contacts? EPJ Data Sci 7(1):11
Hinch R, Probert WJ, Nurtay A, Kendall M, Wymant C, Hall M, Lythgoe K, Bulas Cruz A, Zhao L, Stewart A et al. (2021) OpenABM-Covid19—an agent-based model for non-pharmaceutical interventions against COVID-19 including contact tracing. PLoS Comput Biol 17(7):e1009146
Tsitsulin A, Mottin D, Karras P, Müller E (2018) Verse: versatile graph embeddings from similarity measures. In: Proceedings of the 2018 world wide web conference, pp 539–548
McInnes L, Healy J, Melville J (2018) Umap: uniform manifold approximation and projection for dimension reduction. arXiv preprint. arXiv:1802.03426
Stehlé J, Voirin N, Barrat A, Cattuto C, Isella L, Pinton JF, Quaggiotto M, Van den Broeck W, Régis C, Lina B et al. (2011) High-resolution measurements of face-to-face contact patterns in a primary school. PLoS ONE 6(8):e23176
Barrat A, Cattuto C, Colizza V, Gesualdo F, Isella L, Pandolfi E, Pinton JF, Ravà L, Rizzo C, Romano M, Stehlé J, Tozzi AE, Van den Broeck W (2013) Empirical temporal networks of face-to-face human interactions. Eur Phys J Spec Top 222(6):1295–1309
Starnini M, Baronchelli A, Barrat A, PastorSatorras R (2012) Random walks on temporal networks. Phys Rev E 85(5):056115
Panisson A, Gauvin L, Barrat A, Cattuto C (2013) Fingerprinting temporal networks of close-range human proximity. In: 2013 IEEE international conference on pervasive computing and communications workshops (PERCOM workshops). IEEE, pp 261–266
Sapienza A, Barrat A, Cattuto C, Gauvin L (2018) Estimating the outcome of spreading processes on networks with incomplete information: a dimensionality reduction approach. Phys Rev E 98(1):012317
Galimberti E, Barrat A, Bonchi F, Cattuto C, Gullo F (2018) Mining (maximal) span-cores from temporal networks. In: Proceedings of the 27th ACM international conference on information and knowledge management, pp 107–116
Acknowledgements
The authors would like to thank Prof. Ciro Cattuto for the fruitful discussions that helped shaping this manuscript.
Funding
AP acknowledges support from Intesa Sanpaolo Innovation Center. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Author information
Authors and Affiliations
Contributions
AP designed the study, SP performed the experiments. SP and AP discussed the results and wrote the manuscript. All authors read and approved the final version of the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Supplementary Information
Below is the link to the electronic supplementary material.
13688_2022_344_MOESM1_ESM.pdf
Supplementary Material. The Supplementary Material includes formal proofs and additional experiments not shown in the manuscript. (PDF 2.0 MB)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Piaggesi, S., Panisson, A. Time-varying graph representation learning via higher-order skip-gram with negative sampling. EPJ Data Sci. 11, 33 (2022). https://doi.org/10.1140/epjds/s13688-022-00344-8
Keywords
 Representation learning
 Time-varying graphs
 Spreading processes
 Temporal link prediction