Time-varying graph representation learning via higher-order skip-gram with negative sampling

Representation learning models for graphs are a successful family of techniques that project nodes into feature spaces that can be exploited by other machine learning algorithms. Since many real-world networks are inherently dynamic, with interactions among nodes changing over time, these techniques can be defined both for static and for time-varying graphs. Here, we show how the skip-gram embedding approach can be generalized to perform implicit tensor factorization on different tensor representations of time-varying graphs. We show that higher-order skip-gram with negative sampling (HOSGNS) is able to disentangle the role of nodes and time, with a small fraction of the number of parameters needed by other approaches. We empirically evaluate our approach using time-resolved face-to-face proximity data, showing that the learned representations outperform state-of-the-art methods when used to solve downstream tasks such as network reconstruction. Good performance on predicting the outcome of dynamical processes such as disease spreading shows the potential of this method to estimate contagion risk, providing early risk awareness based on contact tracing data.

Supplementary Information: The online version contains supplementary material available at 10.1140/epjds/s13688-022-00344-8.


Introduction
A great variety of natural and artificial systems can be represented as networks of elementary structural entities coupled by relations between them. The abstraction of such systems as networks helps us understand, predict and optimize their behaviour [1,2]. In this sense, node and graph embeddings have been established as standard feature representations in many learning tasks [3,4]. Node embedding methods map nodes into low-dimensional vectors that can be used to solve downstream tasks such as edge prediction, network reconstruction and node classification.
Node embeddings have proven successful in achieving low-dimensional encoding of static network structures, but many real-world networks are inherently dynamic [5,6]. Time-resolved networks are also the support of important dynamical processes, such as epidemic or rumor spreading, cascading failures, consensus formation, etc. [7]. Time-resolved node embeddings have been shown to yield improved performance for predicting the outcome of dynamical processes over networks, such as information diffusion and disease spreading [8], providing estimation of infection and contagion risk when used with contact tracing data.
Since we expect proximity network data to be increasingly used for contact tracing and as a proxy for epidemic risk [9], learning meaningful representations of time-resolved proximity networks can be extremely important when facing events such as epidemic outbreaks [10,11]. The manual and automatic collection of time-resolved proximity graphs for contact tracing purposes presents an opportunity for quick identification of possible infection clusters and infection chains. Even before the COVID-19 pandemic, the use of wearable proximity sensors for collecting time-resolved proximity networks had been widely discussed in the literature, and many approaches have been used to describe patterns of activity and community structure, and to study spreading patterns of infectious diseases [12][13][14].
Here we propose a representation learning model that performs implicit tensor factorization on different higher-order representations of time-varying graphs. The main contributions are as follows:
• Given that the skip-gram embedding approach implicitly performs a factorization of the shifted pointwise mutual information (PMI) matrix [15], we generalize it to perform implicit factorization of a shifted PMI tensor. We then define the steps to achieve this factorization using higher-order skip-gram with negative sampling (HOSGNS) optimization.
• We show how to apply 3rd-order and 4th-order SGNS to different higher-order representations of time-varying graphs.
• We show that time-varying graph representations learned via HOSGNS outperform state-of-the-art methods when used to solve downstream tasks, even using a fraction of the number of embedding parameters.
We report the results of learning embeddings on empirical time-resolved face-to-face proximity data and of using such representations as predictors for solving three different tasks: predicting the outcome of a SIR spreading process over the time-varying graph, network reconstruction, and link prediction. We compare these results with state-of-the-art methods for time-varying graph representation learning.

Preliminaries and related work

2.1 Skip-gram representation learning
The skip-gram model was designed to compute word embeddings in word2vec [16], and afterwards extended to graph node embeddings [17][18][19]. Levy and Goldberg [15] established the relation between skip-gram trained with negative sampling (SGNS) and traditional matrix decomposition methods [20,21], showing the equivalence of SGNS optimization to factorizing a shifted PMI matrix [22].
Starting from a textual corpus of words w_1, w_2, ..., w_m from a vocabulary V, it assigns to each word w_s a context corresponding to the words surrounding w_s in a window of size T, i.e. the multi-set c_T(w_s) = {w_{s−T}, ..., w_{s−1}, w_{s+1}, ..., w_{s+T}}. Training samples D = {(i, j) : i ∈ W, j ∈ C, j ∈ c_T(i)} are built by collecting all the observed word-context pairs, where W and C are the vocabularies of words and contexts respectively (usually W = C = V). Here we denote as #(i, j) the number of times (i, j) appears in D. Similarly we use #i = Σ_j #(i, j) and #j = Σ_i #(i, j) as the number of times each word and each context occur in D, with relative frequencies P_D(i, j) = #(i, j)/|D|, P_D(i) = #i/|D| and P_D(j) = #j/|D|. SGNS computes d-dimensional representations for words and contexts in two matrices W ∈ R^{|W|×d} and C ∈ R^{|C|×d}, performing a binary classification task in which pairs (i, j) ∈ D are positive examples and pairs (i, j_N) with randomly sampled contexts are negative examples. The probability of the positive class is parametrized as the sigmoid (σ(x) = (1 + e^{−x})^{−1}) of the inner product of embedding vectors:

P[(i, j) ∈ D] = σ(w_i · c_j)    (1)

and each word-context pair (i, j) contributes to the loss as follows:

ℓ(i, j) = −log σ(w_i · c_j) − κ · E_{j_N ∼ P_N}[log(1 − σ(w_i · c_{j_N}))] = −log σ(w_i · c_j) − κ · E_{j_N ∼ P_N}[log σ(−w_i · c_{j_N})]    (2)

where the second expression uses the symmetry property σ(−x) = 1 − σ(x) inside the expected value and κ is the number of negative examples, sampled according to the empirical distribution of contexts P_N(j) = P_D(j).
Following results found in [15], the sum of all ℓ(i, j) weighted with the probability that each pair (i, j) appears in D gives the SGNS objective function:

L = −Σ_{i∈W} Σ_{j∈C} [ P_D(i, j) log σ(w_i · c_j) + κ · P_N(i, j) log σ(−w_i · c_j) ]    (3)

where P_N(i, j) = P_D(i) · P_D(j) is the probability of (i, j) under the assumption of statistical independence. Levy and Goldberg [15] demonstrated that, when d is sufficiently high, optimal SGNS embedding matrices satisfy the relation:

w_i · c_j = log [ P_D(i, j) / (P_D(i) P_D(j)) ] − log κ = PMI(i, j) − log κ    (4)

which tells us that SGNS optimization is equivalent to a rank-d matrix decomposition of the word-context pointwise mutual information (PMI) matrix shifted by a constant, i.e. the logarithm of the number of negative samples. In this work, we refer to the shifted PMI matrix as SPMI_κ = PMI − log κ. This equivalence was later derived from diverse assumptions [23][24][25][26][27], and exploited to compute closed-form expressions implicitly approximated by different graph embedding models [28].
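This equivalence can be made concrete in a few lines of numpy; the following sketch (function name and toy counts are ours) builds the SPMI_κ matrix from a word-context co-occurrence count matrix:

```python
import numpy as np

def shifted_pmi(counts, kappa=1):
    """SPMI_k(i, j) = PMI(i, j) - log(kappa) from co-occurrence counts.

    counts[i, j] = #(i, j); unobserved pairs end up at -inf (log of zero).
    """
    total = counts.sum()                   # |D|
    p_ij = counts / total                  # P_D(i, j)
    p_i = p_ij.sum(axis=1, keepdims=True)  # P_D(i)
    p_j = p_ij.sum(axis=0, keepdims=True)  # P_D(j)
    with np.errstate(divide="ignore"):
        pmi = np.log(p_ij / (p_i * p_j))
    return pmi - np.log(kappa)

# toy counts over 2 words x 2 contexts
counts = np.array([[10.0, 0.0],
                   [5.0, 5.0]])
spmi = shifted_pmi(counts, kappa=5)
```

Optimal SGNS embeddings of sufficient rank would then satisfy w_i · c_j ≈ spmi[i, j] for observed pairs.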

Random walk based graph embeddings
Given an undirected, weighted and connected graph G = (V, E) with nodes i, j ∈ V, edges (i, j) ∈ E and adjacency matrix A, graph embedding methods are unsupervised models designed to map nodes into dense d-dimensional representations (d ≪ |V|) [29].
A well known family of approaches based on the skip-gram model consists in sampling random walks from the graph and processing node sequences as textual sentences. In DeepWalk [17] and node2vec [19], the skip-gram model is used to obtain node embeddings from co-occurrences in random walk realizations. Although the original implementation of DeepWalk uses hierarchical softmax to compute embeddings, we will refer to the SGNS formulation given by [28].
Since SGNS can be interpreted as a factorization of the word-context PMI matrix [15], the asymptotic form of the PMI matrix implicitly decomposed in DeepWalk can be derived [28]. Given the 1-step transition matrix P = D^{−1} A, where D = diag(d_1, ..., d_{|V|}) and d_i = Σ_{j∈V} A_{ij} is the (weighted) node degree, the expected PMI for a node-context pair (i, j) occurring in a T-sized window is:

PMI_T(i, j) = log [ ( (1/T) Σ_{r=1}^{T} (d_i / vol(G)) (P^r)_{ij} ) / ( (d_i / vol(G)) · (d_j / vol(G)) ) ]    (5)

where vol(G) = Σ_{i,j∈V} A_{ij} and the numerator, denoted (a), is the probability that the pair (i, j) co-occurs within the window. We will return to this equation in Sect. 3.2, where we use the expression in (a) to build probability tensors from higher-order graph representations.
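For a toy graph, this matrix can be computed explicitly; a minimal numpy sketch (our own variable names), using the simplified form obtained by cancelling d_i/vol(G) in Eq. (5):

```python
import numpy as np

def deepwalk_pmi(A, T):
    """PMI matrix implicitly factorized by DeepWalk (closed form, cf. [28]):
    PMI_T(i, j) = log( vol(G)/T * sum_{r=1..T} (P^r)_ij / d_j ).
    """
    d = A.sum(axis=1)                         # weighted degrees d_i
    vol = A.sum()                             # vol(G)
    P = A / d[:, None]                        # transition matrix D^-1 A
    S = sum(np.linalg.matrix_power(P, r) for r in range(1, T + 1))
    return np.log(vol / T * S / d[None, :])

# 3-node path graph; for T = 2 the PMI happens to vanish exactly here
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
M = deepwalk_pmi(A, T=2)
```

On this toy path graph every entry of M is 0, i.e. co-occurrence within the window is exactly what independence predicts; on less regular graphs the entries are non-trivial.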

Time-varying graphs and their algebraic representations
Time-varying graphs [5,6] are defined as triples H = (V, E, T), i.e. collections of events (i, j, k) ∈ E representing undirected pairwise relations among nodes at discrete times (i, j ∈ V, k ∈ T). H can be seen as a temporal sequence of static graphs {G^(k)}_{k∈T}, each with an adjacency matrix A^(k) such that A^(k)_{ij} = ω(i, j, k) ∈ R is the weight of the event (i, j, k) ∈ E. We can concatenate the list of time-stamped snapshots [A^(1), ..., A^(|T|)] to obtain a single 3rd-order tensor A_stat(H) ∈ R^{|V|×|V|×|T|} which characterizes the evolution of the graph over time. This representation has been used to discover latent community structures of temporal graphs [13] and to perform temporal link prediction [30]. Beyond this stacked graph representation, more exhaustive representations are possible. In particular, the multi-layer approach [31] allows mapping the topology of a time-varying graph H into a static network G_H = (V_H, E_H) (the supra-adjacency graph) such that vertices in V_H correspond to node-time pairs (i, k) ≡ i^(k) ∈ V × T and edges in E_H represent connections (i^(k), j^(l)) among them. Since every link can be arranged in a quadruple (i, j, k, l), the connectivity structure is associated to a 4th-order tensor A_dyn(H) ∈ R^{|V|×|V|×|T|×|T|} that is equivalent, up to an appropriate reshaping, to the adjacency matrix A(G_H) ∈ R^{|V||T|×|V||T|} of G_H. Multi-layer representations of time-varying networks have been used to study time-dependent centrality measures [32] and properties of spreading processes [33].
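The stated equivalence between A_dyn(H) and the supra-adjacency matrix A(G_H) is a pure index reordering, which a short numpy sketch (with hypothetical sizes and random weights) makes explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
V, T = 4, 3                                   # |V| nodes, |T| time slices

# supra-adjacency matrix over node-time pairs i^(k): row/col index = i*T + k
A_supra = rng.random((V * T, V * T))

# reshape to a 4th-order tensor indexed [i, k, j, l], then reorder axes
# to the (i, j, k, l) layout used in the text
A_dyn = A_supra.reshape(V, T, V, T).transpose(0, 2, 1, 3)

# entry check: the edge (i^(k), j^(l)) appears in both representations
i, j, k, l = 2, 1, 0, 2
assert A_dyn[i, j, k, l] == A_supra[i * T + k, j * T + l]
```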
In the same spirit that word2vec refers to the word pairs (i, j) as (word, context), here we refer to the node pairs (i, j) as (node, context), and to the time pairs (k, l) as (time, context-time).

Time-varying graph representation learning
Given a time-varying graph H = (V, E, T), we define as temporal network embedding a model that learns from data, implicitly or explicitly, a mapping function:

f : V × T → R^d,  (i, k) ↦ v_i^(k)    (6)

which projects time-stamped nodes into a latent low-rank vector space that encodes structural and temporal properties of the original evolving graph [34,35]. Many existing methods learn node representations from sequences of static snapshots through incremental updates in a streaming scenario: deep autoencoders [36], SVD [37], skip-gram [38,39] and random walk sampling [40][41][42]. Another class of models learns dynamic node representations by recurrent/attention mechanisms [43][44][45][46] or by imposing temporal stability among adjacent time intervals [47,48]. DyANE [8] and weg2vec [49] project the dynamic graph structure into a static graph, in order to compute embeddings with word2vec. Closely related to these are [50] and [51], which learn node vectors according to time-respecting random walks or spreading trajectory paths. Moreover, [52] proposed an embedding framework for user-item temporal interactions, and [53] suggested a tensor-based convolutional architecture for dynamic graphs. Methods that perform well for predicting outcomes of spreading processes make use of time-respecting supra-adjacency representations such as the one proposed by [33]. In these graph representations, a random walk corresponds to a temporal path in the original time-varying graph, encoding relevant information about the spreading process into its connectivity structure. The supra-adjacency representation G_H that we refer to in Sect. 3.2, also used in DyANE, with adjacency matrix A(G_H), is defined by two rules: 1. For each event (i, j, t_0), if i is also active at time t_1 > t_0 and in no other time-stamp between the two, we add a cross-coupling edge between supra-adjacency nodes j^(t_0) and i^(t_1).
In addition, if the next interaction of j with other nodes happens at t_2 > t_0, we add an edge between i^(t_0) and j^(t_2). The weights of such edges are set to ω(i, j, t_0). 2. For every case described above, we also add self-coupling edges (i^(t_0), i^(t_1)) and (j^(t_0), j^(t_2)), with weights set to 1. Figure 1 shows the differences between a time-varying graph and its time-aware supra-adjacency representation, according to the formulation described above. DyANE computes, given a node i ∈ V, one vector representation for each time-stamped node i^(t) ∈ V^(T) = {(i, t) ∈ V × T : ∃(i, j, t) ∈ E} of this supra-adjacency representation. Similar models that learn time-resolved node representations require O(|V| · |T|) embedding parameters to represent the time-varying graph in the latent space. As we will see, compared with these methods, our approach requires only O(|V| + |T|) embedding parameters for disentangled node and time representations.
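The two supra-adjacency rules above can be sketched in a few lines of Python; the function name and the event format (i, j, t, weight) are our own illustrative choices:

```python
from collections import defaultdict

def supra_adjacency(events):
    """Time-respecting supra-adjacency edges from events (i, j, t, w).

    A sketch of the two rules: cross-coupling edges (j^(t0), i^(t1)) with
    weight w, and self-coupling edges (i^(t0), i^(t1)) with weight 1,
    where t1 is i's next active time after t0.
    """
    active = defaultdict(set)                 # node -> its active times
    for i, j, t, w in events:
        active[i].add(t)
        active[j].add(t)
    nxt = {}                                  # (node, t) -> next active time
    for n, times in active.items():
        ts = sorted(times)
        for a, b in zip(ts, ts[1:]):
            nxt[(n, a)] = b
    edges = {}                                # ((n, t), (m, t')) -> weight
    for i, j, t, w in events:
        for a, b in ((i, j), (j, i)):         # both node roles, symmetric
            if (a, t) in nxt:
                t1 = nxt[(a, t)]
                edges[((b, t), (a, t1))] = w  # cross-coupling edge
                edges[((a, t), (a, t1))] = 1  # self-coupling edge
    return edges

# toy: nodes 0 and 1 meet at t=0; node 0 is active again at t=2
ev = [(0, 1, 0, 3.0), (0, 2, 2, 1.0)]
E = supra_adjacency(ev)
```

On this toy input the only node with a later activation is node 0, so the construction yields one cross-coupling edge (1^(0), 0^(2)) with weight 3.0 and one self-coupling edge (0^(0), 0^(2)) with weight 1.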

Proposed method
Given a time-varying graph H = (V, E, T), we propose a representation learning method that learns disentangled representations for nodes and time slices. More formally, we learn a pair of functions:

f_V : V → R^d,  i ↦ v_i   and   f_T : T → R^d,  k ↦ t_k    (7)

This embedding representation can then be reconciled with the definition in Eq. (6) by combining v and t into a single v^(t) representation using any combination function c : R^d × R^d → R^d. It follows that computing and combining distinct vector embeddings for nodes and time slices needs only O(|V| + |T|) learnable parameters, leading to a more efficient method to obtain time-aware node representations without requiring a much larger number O(|V| · |T|) of learnable parameters.
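A minimal numpy illustration of the parameter saving, using the Hadamard product as one possible choice of the combination function c (the method leaves c generic):

```python
import numpy as np

V, T, d = 1000, 200, 128
rng = np.random.default_rng(0)
W = rng.standard_normal((V, d))    # node embeddings: one row per node
Tm = rng.standard_normal((T, d))   # time embeddings: one row per time slice

def combine(i, k):
    """c(v_i, t_k): here the element-wise (Hadamard) product."""
    return W[i] * Tm[k]

# disentangled factors need (|V| + |T|) * d parameters, against the
# (|V| * |T|) * d needed by one vector per time-stamped node
assert W.size + Tm.size == (V + T) * d
assert (V + T) * d < V * T * d

v_ik = combine(3, 7)               # time-aware representation of node 3 at time 7
```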
The parameters of the embedding representation in Eq. (7) are learned through a higher-order generalization of skip-gram with negative sampling (HOSGNS), based on the existing skip-gram framework for node embeddings, as shown in Sect. 3.1. We show that this generalization allows us to implicitly factorize higher-order relations that characterize tensor representations of time-varying graphs, in the same way that classical SGNS decomposes dyadic relations associated to a static graph.
Similar approaches have been applied in NLP for dynamic word embeddings [54], and higher-order extensions of the skip-gram model have been proposed to learn context-dependent [55] and syntax-aware [56] word representations. Tensor factorization techniques have also been applied to include the temporal dimension in recommender systems [57,58], knowledge graphs [59,60] and face-to-face contact networks [12,13]. But this work is the first to merge SGNS with tensor factorization, and then apply it to learn time-varying graph embeddings. HOSGNS differs from existing temporal network embeddings based on skip-gram [38,39], which are minor adaptations of standard SGNS to the dynamic setting. In fact, in Sect. 3.1 we show how the equations of the skip-gram framework can be completely rewritten to apply naturally to inherently higher-order problems.
In the next sections, we first show the formal steps to the generalization of the skip-gram approach to higher-order data structures, and then we show how to apply HOSGNS on 3rd-order and 4th-order representations of time-varying graphs.

SGNS for higher-order data structures
Here we address the problem of generalizing SGNS to learn embedding representations from higher-order co-occurrences. In Sect. 2.3 we described snapshot-based and multi-layer-based representations of time-varying graphs, which can be seen as 3rd- and 4th-order co-occurrence tensors; therefore in the remainder of this manuscript we focus on 3rd- and 4th-order structures. In this section, we describe in detail the generalization of SGNS to the 3rd-order case. In Additional file 1 we discuss in more detail the derivation of the HOSGNS objective function for any nth-order representation.
We consider a set of training samples D = {(i, j, k) : i ∈ W, j ∈ C, k ∈ T} obtained by collecting co-occurrences among elements from three sets W, C and T. While SGNS is limited to node-context pairs (i, j), here D is constructed with three (or more) variables, e.g. sampling random walks over a higher-order data structure. We denote as #(i, j, k) the number of times the triple (i, j, k) appears in D. Similarly we use #i = Σ_{j,k} #(i, j, k), #j = Σ_{i,k} #(i, j, k) and #k = Σ_{i,j} #(i, j, k) as the number of times each distinct element occurs in D, with relative frequencies P_D(i, j, k) = #(i, j, k)/|D|, P_D(i) = #i/|D|, P_D(j) = #j/|D| and P_D(k) = #k/|D|. Optimization is performed as a binary classification task, where the objective is to discern occurrences actually coming from D from random occurrences. We define the likelihood for a single observation (i, j, k) by applying a sigmoid (σ(x) = (1 + e^{−x})^{−1}) to the higher-order inner product ⟦·,·,·⟧ of the corresponding d-dimensional representations:

P[(i, j, k) ∈ D] = σ(⟦w_i, c_j, t_k⟧),   where  ⟦w_i, c_j, t_k⟧ = Σ_{r=1}^{d} W_{ir} C_{jr} T_{kr}    (8)

where embedding vectors w_i, c_j, t_k ∈ R^d are respectively rows of W ∈ R^{|W|×d}, C ∈ R^{|C|×d} and T ∈ R^{|T|×d}. In the 4th-order case we will also have a fourth embedding matrix S ∈ R^{|S|×d} related to a fourth set S. For negative sampling we fix an observed (i, j, k) ∈ D and independently sample j_N and k_N to generate κ negative examples (i, j_N, k_N). In this way, for a single occurrence (i, j, k) ∈ D, the expected contribution to the loss is:

ℓ(i, j, k) = −log σ(⟦w_i, c_j, t_k⟧) − κ · E_{(j_N, k_N) ∼ P_N}[log σ(−⟦w_i, c_{j_N}, t_{k_N}⟧)]    (9)

where the noise distribution is the product of independent marginal probabilities P_N(j, k) = P_D(j) · P_D(k). Thus the global objective is the sum of all the quantities in Eq. (9) weighted with the corresponding relative frequency P_D(i, j, k). The full loss function can be expressed as:

L = −Σ_{i∈W} Σ_{j∈C} Σ_{k∈T} [ P_D(i, j, k) log σ(⟦w_i, c_j, t_k⟧) + κ · P_N(i, j, k) log σ(−⟦w_i, c_j, t_k⟧) ]    (10)

where P_N(i, j, k) = P_D(i) · P_D(j) · P_D(k). In Additional file 1 we show the formal steps to obtain Eq. (10) for the nth-order case, and that it can be optimized with respect to the embedding parameters, satisfying the low-rank tensor approximation of the multivariate shifted PMI tensor into the factor matrices W, C, T:

⟦w_i, c_j, t_k⟧ = log [ P_D(i, j, k) / (P_D(i) P_D(j) P_D(k)) ] − log κ = SPMI_κ(i, j, k)    (11)

Equation (11), like the analogous Eq. (4) derived by Levy and Goldberg [15], assumes full-rank embedding matrices with d ≈ R = rank(SPMI_κ). For the case d ≪ R, a comprehensive theoretical analysis is missing, although recent works propose the feasibility of exact low-dimensional factorizations of real-world static networks [61,62]. Nevertheless, in Additional file 1, we include an empirical analysis of the effectiveness of HOSGNS for low-rank factorization of time-varying graph representations.
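The 3rd-order objective of Eq. (10) can be written out densely for a toy problem; the following numpy sketch (our own naming) enumerates all triples rather than sampling, which is only feasible at toy scale:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ho_inner(w, c, t):
    # higher-order inner product: sum_r w_r * c_r * t_r
    return float(np.sum(w * c * t))

def hosgns_loss(W, C, T, P_D, kappa):
    """Dense evaluation of the 3rd-order HOSGNS loss (Eq. (10)).

    Positives are weighted by P_D(i, j, k), negatives by the product of
    marginals P_N(i, j, k) = P_D(i) P_D(j) P_D(k). Real training replaces
    this triple loop with sampling.
    """
    p_i, p_j, p_k = P_D.sum((1, 2)), P_D.sum((0, 2)), P_D.sum((0, 1))
    loss = 0.0
    for i in range(P_D.shape[0]):
        for j in range(P_D.shape[1]):
            for k in range(P_D.shape[2]):
                x = ho_inner(W[i], C[j], T[k])
                p_n = p_i[i] * p_j[j] * p_k[k]
                loss -= P_D[i, j, k] * np.log(sigmoid(x))
                loss -= kappa * p_n * np.log(sigmoid(-x))
    return loss

# sanity check: with all-zero embeddings sigma(0) = 1/2 everywhere,
# so the loss reduces to (1 + kappa) * log(2)
P_D = np.full((2, 2, 2), 1.0 / 8)
Z = np.zeros((2, 4))
loss0 = hosgns_loss(Z, Z, Z, P_D, kappa=5)
```

Gradient-based minimization of this quantity with respect to W, C, T (e.g. via autodiff) drives the higher-order inner products toward the shifted PMI entries.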

Time-varying graph embedding via HOSGNS
While a static graph G = (V, E) is uniquely represented by an adjacency matrix A(G) ∈ R^{|V|×|V|}, a time-varying graph H = (V, E, T) admits diverse possible higher-order adjacency relations (Sect. 2.3). Starting from these higher-order relations, we can either use them directly or use random walk realizations to build a dataset of higher-order co-occurrences. In the same spirit that random walk realizations lead to the dyadic co-occurrences used to learn embeddings in SGNS, we use higher-order co-occurrences to learn embeddings via HOSGNS.
As discussed in Sect. 3.1, the statistics of higher-order relations can be summarized in multivariate PMI tensors, which derive from co-occurrence probabilities among elements. Once such PMI tensors are constructed, we can factorize them via HOSGNS. To show the versatility of this approach, we choose probability tensors derived from two different types of higher-order relations:
1. A 3rd-order tensor P^(stat)(H) ∈ R^{|V|×|V|×|T|} which gathers the relative frequencies of node occurrences in temporal edges:

P^(stat)_{ijk} = ω(i, j, k) / vol(H)    (12)

where vol(H) = Σ_{i,j,k} ω(i, j, k) is the total weight of interactions occurring in H. These probabilities are associated to the snapshot sequence representation and contain information about the topological structure of H.
2. A 4th-order tensor P^(dyn)(H) ∈ R^{|V|×|V|×|T|×|T|}, which gathers the occurrence probabilities of time-stamped nodes over random walks of the supra-adjacency graph G_H (as used in DyANE). Using the numerator of Eq. (5), with supra-adjacency indices i^(k) and j^(l) instead of the usual indices i and j, the tensor entries are given by:

P^(dyn)_{ijkl} = (1/T) Σ_{r=1}^{T} (d_{i^(k)} / vol(G_H)) (P^r)_{i^(k) j^(l)}    (13)

These probabilities encode causal dependencies among temporal nodes and are correlated with dynamical properties of spreading processes. Notice that the computation of P^(dyn)(H) requires an undirected supra-adjacency graph, while in DyANE it is directed. We also combined the two representations in a single tensor that is the average of P^(stat) and P^(dyn):

P^(stat|dyn)_{ijkl} = (1/2) [ P^(stat)_{ijk} δ_{kl} + P^(dyn)_{ijkl} ]    (14)

where δ_{kl} = 1[k = l] is the Kronecker delta. Figure 2 summarizes the differences between graph embedding via classical SGNS and time-varying graph embedding via HOSGNS. Here, indices (i, j, k, l) correspond to (node, context, time, context-time) in a 4th-order tensor representation of H.
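As an illustration, the 3rd-order tensor P^(stat) can be assembled directly from the event list; this numpy sketch symmetrizes the node roles before normalizing by the total weight (an assumption of ours about how undirected events are stored):

```python
import numpy as np

def p_stat(events, n_nodes, n_times):
    """3rd-order probability tensor of relative event weights over H.

    Node indices are symmetrized (each undirected event stored once),
    then the tensor is normalized so that its entries sum to 1.
    """
    P = np.zeros((n_nodes, n_nodes, n_times))
    for i, j, k, w in events:
        P[i, j, k] += w
        P[j, i, k] += w
    return P / P.sum()

# toy event list in the format (i, j, k, weight)
ev = [(0, 1, 0, 2.0), (1, 2, 1, 1.0)]
P = p_stat(ev, n_nodes=3, n_times=2)
```

The resulting tensor is a proper joint distribution over (node, context, time), which is exactly what the HOSGNS positive-sampling step draws from.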
The above tensors gather empirical probabilities P_D(i, j, k, ...) corresponding to positive examples of observable higher-order relations. The probabilities of negative examples P_N(i, j, k, ...) can be obtained as the product of the marginal distributions P_D(i), P_D(j), P_D(k), ... Objective functions like Eq. (10) can be computed with a sampling strategy: picking positive tuples according to the data distribution P_D and negative ones according to independent sampling from P_N, the HOSGNS objective can be optimized through the following weighted cross-entropy loss:

L_B = −(1/B) [ Σ_{(i,j,k)∼P_D} log σ(⟦w_i, c_j, t_k⟧) + κ Σ_{(i_N,j_N,k_N)∼P_N} log σ(−⟦w_{i_N}, c_{j_N}, t_{k_N}⟧) ]    (15)

where B is the number of samples drawn in a training step and κ is the negative sampling constant. We additionally apply the warm-up steps explained in Additional file 1 to speed up the main training stage.
(Figure 2 caption, fragment: ... and j^(l) (node j at time l) within a context window of size T, maximizing σ(⟦w_i, c_j, t_k, s_l⟧). In both cases, for each input sample, we fix i and draw κ combinations of j, or of (j, k, l), from a noise distribution, and we maximize σ(−w_i · c_j) (SGNS) or σ(−⟦w_i, c_j, t_k, s_l⟧) (HOSGNS) with their corresponding embedding vectors (negative sampling).)

Experiments
For the experiments we use time-varying graphs collected by the SocioPatterns collaboration (http://www.sociopatterns.org) using wearable proximity sensors that sense the face-to-face proximity relations of the individuals wearing them. After training the proposed models (HOSGNS applied to P^(stat), P^(dyn) or P^(stat|dyn)) on each dataset, the embedding matrices W, C, T (and S, except for P^(stat)) are mapped to embedding vectors w_i, c_j, t_k (and s_l), where i, j ∈ V and k, l ∈ T. In Sect. 4.2, we use the learned representations to solve different downstream tasks: node classification, temporal event reconstruction and missing event prediction. Finally, in Sect. 4.4 we show two-dimensional projections of the embeddings for one of the chosen empirical datasets.

Datasets
We performed experiments with both empirical and synthetic datasets describing face-to-face proximity of individuals. We used publicly available empirical contact data collected by the SocioPatterns collaboration [63], with a temporal resolution of 20 seconds, in a variety of contexts: in a school ("LyonSchool"), a conference ("SFHH"), a hospital ("LH10"), a high school ("Thiers13"), and in offices ("InVS15") [64]. This is currently the largest collection of open datasets sensing proximity in the same range and temporal resolution used by modern contact tracing systems. In addition, we used social interaction data generated by the agent-based model OpenABM-Covid19 [65] to simulate an outbreak of COVID-19 in an urban setting. We built a time-varying graph from each dataset; for the empirical data we performed aggregation on 600-second time windows, neglecting those snapshots without registered interactions at that time scale. The weight of the link (i, j, k) is the number of events recorded between nodes (i, j) within the aggregated window k. For the synthetic data we maintained the original temporal resolution and set link weights to 1. Table 1 shows statistics for each dataset.

Baselines
We compare our approach with several baseline methods from the literature on time-varying graph embeddings, which learn time-stamped node representations: (1) DyANE [8], which learns temporal node embeddings with DeepWalk, mapping a time-varying graph into a supra-adjacency representation; (2) DynGEM [36], a deep autoencoder architecture which dynamically reconstructs each graph snapshot, initializing model weights with parameters learned in previous time frames; (3) DynamicTriad [47], which captures structural information and temporal patterns of nodes, modeling the triadic closure process; (4) DySAT [45], a deep neural model that computes node embeddings by a joint self-attention mechanism applied to the structural neighborhood and temporal dynamics; (5) ISGNS [39], an incremental skip-gram embedding model based on DeepWalk. Details about the hyper-parameters used in each method can be found in Additional file 1.

Node classification
The aim of this task is to classify nodes into epidemic states according to a SIR epidemic process with infection rate β and recovery rate μ. We simulated 30 realizations of the SIR process on top of each empirical graph with different combinations of parameters (β, μ). We used similar combinations of epidemic parameters and the same dynamical process to produce SIR states as described in [8]. Then we trained a logistic regression to classify the epidemic states S-I-R assigned to each active node i^(k) during the unfolding of the spreading process. We combine the embedding vectors of HOSGNS using the Hadamard (element-wise) product w_i • t_k. We compared with dynamic node embeddings learned by the baselines. For a fair comparison, all models produce time-stamped node representations with dimension d = 128 as input to the logistic regression.

Temporal event reconstruction
In this task, we aim to determine whether a generic event (i, j, k) occurred in H = (V, E, T), i.e., whether there is an edge between nodes i and j at time k. We create a random time-varying graph H* = (V, E*, T) with the same active nodes V^(T) and a number |E| of events that are not part of E (i.e. E ∩ E* = ∅). In other words, E* contains random events that may occur only between the nodes that are active in each snapshot, disregarding other possible edges that involve inactive nodes. Embedding representations learned from H are used as features to train a logistic regression to predict whether a given event (i, j, k) is in E or in E*. We combine the embedding vectors of HOSGNS as follows: for HOSGNS^(stat), we use the Hadamard product w_i • c_j • t_k; for HOSGNS^(dyn) and HOSGNS^(stat|dyn), we use w_i • c_j • t_k • s_k. For baseline methods, we aggregate vector embeddings to obtain link-level representations with binary operators (Average, Hadamard, Weighted-L1, Weighted-L2 and Concat), as already used in previous works [19,66]. For a fair comparison, all models are required to produce event representations with dimension d = 192.
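The binary operators used to aggregate baseline node vectors into link-level features can be sketched as follows (a straightforward rendering of the operators named in [19,66]):

```python
import numpy as np

def link_ops(u, v):
    """Binary operators turning two node vectors into one edge-level
    feature vector (cf. [19, 66])."""
    return {
        "Average":     (u + v) / 2.0,
        "Hadamard":    u * v,
        "Weighted-L1": np.abs(u - v),
        "Weighted-L2": (u - v) ** 2,
        "Concat":      np.concatenate([u, v]),
    }

u = np.array([1.0, 2.0])
v = np.array([3.0, -2.0])
ops = link_ops(u, v)
```

Note that Concat doubles the dimension while the other four preserve it, which is why comparisons fix the final event representation size rather than the per-node size.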

Missing event prediction
In this task, we aim to predict the occurrence of an event (i, j, k) previously removed from H = (V, E, T). We create a pruned time-varying graph H† = (V, E†, T) with the same active nodes V^(T) and a number of events |E†| = 70% |E| sampled from H. Embedding representations learned from H† are used as features to train a logistic regression to predict the removed events (i, j, k) ∈ E \ E† against the events E* of a random time-varying graph H* = (V, E*, T) (see above). We combine the embedding vectors of HOSGNS for the classification task as explained for the event reconstruction task.

Results
In this section we first show downstream task performance results for the empirical and synthetic datasets; then we compare the different approaches in terms of training complexity, by measuring the number of trainable parameters and the training time with a fixed number of training steps. Tasks were evaluated using a train-test split. To avoid information leakage from training to test, we randomly split V and T into train and test sets (V_tr, V_ts) and (T_tr, T_ts), with proportions 70%-30%. For node classification, only nodes in V_tr at times in T_tr were included in the train set, and only nodes in V_ts at times in T_ts were included in the test set. For event reconstruction and prediction, only events with i, j ∈ V_tr and k ∈ T_tr were included in the train set, and only events with i, j ∈ V_ts and k ∈ T_ts were included in the test set.
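A sketch of this leakage-avoiding split (the helper names and sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def split(items, frac=0.7):
    """Random partition of an index set into train/test parts."""
    perm = rng.permutation(list(items))
    cut = round(frac * len(perm))
    return set(perm[:cut].tolist()), set(perm[cut:].tolist())

V_tr, V_ts = split(range(100))  # node split
T_tr, T_ts = split(range(50))   # time split

# an event (i, j, k) enters a set only if all of its indices fall in the
# corresponding partitions, so test events are fully held out (no leakage)
def in_partition(i, j, k, V, T):
    return i in V and j in V and k in T

assert V_tr.isdisjoint(V_ts) and T_tr.isdisjoint(T_ts)
```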
All approaches were evaluated on the downstream tasks in terms of Macro-F1 scores on all datasets. Five different runs of the embedding model are evaluated on 30 different train-test splits for every downstream task. We report the average score with standard error over all splits. In node classification, every SIR realization is assigned to a single embedding run to compute prediction scores. In the event reconstruction and prediction tasks, a different random time-varying graph realization H*, used to produce samples of non-occurring events, is assigned to each train-test split.

Empirical datasets
Results for the classification of nodes in epidemic states are shown in Table 2. We report here a subset of (β, μ) combinations; other combinations are available in Additional file 1. DynGEM and DynamicTriad have low scores, since they are not devised to learn from graph dynamics. DySAT also performs poorly in this task, since this method uses a context prediction objective that preserves the local structure without properly encoding dynamical patterns. HOSGNS^(stat) is not able to capture the graph dynamics due to the static nature of P^(stat). ISGNS, due to its incremental training, performs only marginally better than HOSGNS^(stat). DyANE, HOSGNS^(stat|dyn) and HOSGNS^(dyn) show good performance, with the two HOSGNS variants outperforming DyANE for most of the combinations of datasets and SIR parameters.
Results for the temporal event reconstruction task are reported in Table 3. Temporal event reconstruction is not performed well by DynGEM; DynamicTriad performs better. Results for HOSGNS models using other operators are available in Additional file 1. We observe an overall good performance of HOSGNS^(stat|dyn) in all downstream tasks, being in almost all cases among the two highest scores, compared to the other two HOSGNS variants, which excel in certain tasks but have lower performance in the others.

Synthetic datasets
Here we report the performance on downstream tasks with the two synthetic datasets only for HOSGNS^(stat) and HOSGNS^(dyn), given the similar performance of HOSGNS^(dyn) and HOSGNS^(stat|dyn) in the previous experiments. We also chose DyANE as the only baseline, given its better performance compared to the other baselines on empirical datasets. Results for the node classification task for a set of (β, μ) combinations are reported in Table 5, with other combinations available in Additional file 1. These results reflect the previous results on empirical datasets, with HOSGNS^(dyn) performance comparable or superior to DyANE.
Results for the event reconstruction and prediction tasks are reported in Table 6. DyANE performs well with the Hadamard operator, but its scores are nevertheless below those of HOSGNS^(dyn) and HOSGNS^(stat). Especially with HOSGNS^(stat), the performance on event reconstruction is not much higher than on event prediction, contrary to the empirical datasets. This difference might be due to the different topological features of the synthetic networks with respect to the empirical ones.

Training complexity
We report in Table 7 the number of trainable parameters and the training time for each considered algorithm, when applied to an empirical graph (LyonSchool) and to the synthetic ones. The proposed HOSGNS model requires a number of trainable parameters that is orders of magnitude smaller than other approaches, with a training time considerably shorter as the number of nodes increases, given a fixed number of training iterations. ISGNS has a comparable number of parameters because it incrementally updates O(|V|) parameters moving across the |T| snapshots. DySAT's training time is considerably higher due to the computational overhead of the self-attention mechanism.

(Table 4 caption: Macro-F1 scores for missing event prediction in empirical datasets. We highlight in bold the two best scores for each dataset. For baseline models we underline their highest score.)

Embedding space visualization
One of the main advantages of HOSGNS is its ability to disentangle the roles of nodes and time by learning representations of nodes and time intervals separately. In this section, we include plots with two-dimensional projections of these embeddings, made with UMAP [67] for manifold learning and non-linear dimensionality reduction. With these plots, we show that the embedding matrices learned by HOSGNS (stat) and HOSGNS (dyn) successfully capture both the structure and the dynamics of the time-varying graph. Dynamical information can be represented by associating each embedding vector to its corresponding time interval k ∈ T, and graph structure can be represented by associating each embedding vector to a community membership. While community membership can be estimated by different community detection methods, we choose a dataset with ground-truth node membership information. We consider the LyonSchool dataset as a case study, widely investigated in the literature with respect to structural and spreading properties [68][69][70][71][72][73]. This dataset spans two days and includes metadata (Table 8) on the class of each participant of the school (10 different labels for children and 1 label for teachers), and we identify the community membership of each individual according to these labels (class labels). We also assign time labels according to the activation of individual nodes in temporal snapshots. To show how disentangled representations capture different aspects of the evolving graph, in Fig. 3 we plot individual representations of nodes i ∈ V and time slices k ∈ T, labeled according to class membership and time snapshot respectively. Both HOSGNS (stat) and HOSGNS (dyn) capture the community structure (left of each panel), with node embeddings clustered into the ground-truth classes, but the dynamical information expressed by time embeddings (right of each panel) differs between the two methods.
Due to the time-respecting topology of the supra-adjacency graph, HOSGNS (dyn) captures the causality of node co-occurrences, encoding temporal slices into a time-ordered one-dimensional manifold. HOSGNS (stat) is built on the snapshot representation, which is invariant under time permutations, and thus its temporal encoding is constrained to the local connectivity structure of graph slices.
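The projection step described above can be sketched in a few lines. The paper uses UMAP [67]; as a self-contained stand-in that plays the same role (fit a 2D projection of the embedding matrix, then color points by label), the sketch below uses a plain PCA computed via numpy's SVD, on randomly generated stand-in matrices. All array names and sizes here are illustrative, not taken from the paper.

```python
import numpy as np

def project_2d(X):
    """Project d-dimensional embeddings to 2D via PCA (SVD on centered data).
    Illustrative stand-in for the UMAP projection used in the paper."""
    Xc = X - X.mean(axis=0)                      # center the embeddings
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                         # coordinates on top-2 components

rng = np.random.default_rng(0)
node_emb = rng.normal(size=(242, 128))           # |V| x d node embeddings (made up)
time_emb = rng.normal(size=(104, 128))           # |T| x d time embeddings (made up)

node_2d = project_2d(node_emb)                   # color these points by class label
time_2d = project_2d(time_emb)                   # color these points by time interval
print(node_2d.shape, time_2d.shape)              # (242, 2) (104, 2)
```

In the actual figures, each 2D point would then be colored by its ground-truth class label (for nodes) or its snapshot index (for time slices).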
In Fig. 4 we visualize representations of temporal nodes i^(k) ∈ V^(T), computed as Hadamard products of node and time embeddings. HOSGNS (stat) projections show clusters of nodes active at multiple times, representing different social situations: interactions during lectures present uniform class labels and heterogeneous time labels, whereas interactions occurring in social spaces with mixed classes present uniform time labels and heterogeneous class labels. This is in line with previous studies [13], where different patterns of interactions are found during school activities, and gatherings in social spaces (such as the canteen and the playground) are more concentrated during lunch time. HOSGNS (dyn) projected embeddings, due to the causality information encoded in time representations, display trajectories of social interactions that span over time in the embedding space, with communities interacting and mixing at different points of the day.

Figure 4 Two-dimensional projections of the 128-dim embedding manifold spanned by dynamic node embeddings, trained on LyonSchool data and obtained with Hadamard products {w_i ∘ t_k}_(i,k)∈V^(T) between rows of W (node embeddings) and T (time embeddings), from the HOSGNS model trained on: (a) P (stat) and (b) P (dyn). We highlight the temporal participation in communities (left of each panel) and the time interval of activation (right of each panel)

Figure 5 Two-dimensional projections of the 128-dim embedding manifold spanned by dynamic node embeddings for LyonSchool data learned with baseline methods. As in Fig. 4, we highlight the temporal participation in communities (top of each panel) and the time interval of activation (bottom of each panel)

In Fig. 5 we see dynamic node embeddings computed with baseline methods, without dissociating structure and time. The embedding space of DyANE properly encodes the time-aware topology, since the model is based on the supra-adjacency graph, like HOSGNS (dyn).
DynamicTriad also captures significant temporal structures, but it is less effective at expressing the overall dynamics, since it is limited to modeling the triadic closure process; other relevant interaction patterns are instead accounted for by supra-adjacency random walks. The DynGEM, DySAT and ISGNS embedding spaces do not encode any structural or temporal information.
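The dynamic node embeddings visualized above are plain Hadamard (element-wise) products of node and time embedding rows. A minimal numpy sketch, using random stand-in matrices in place of trained HOSGNS factors (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_times, d = 50, 20, 128
W = rng.normal(size=(n_nodes, d))   # node embeddings (rows w_i), stand-in values
T = rng.normal(size=(n_times, d))   # time embeddings (rows t_k), stand-in values

# Dynamic node embedding for the pair (i, k): w_i ∘ t_k.
# Broadcasting builds all |V| x |T| pairs at once.
dyn_emb = W[:, None, :] * T[None, :, :]    # shape (|V|, |T|, d)
dyn_emb_flat = dyn_emb.reshape(-1, d)      # one row per temporal node (i, k)

# Sanity check against a single explicit product:
assert np.allclose(dyn_emb[3, 7], W[3] * T[7])
print(dyn_emb_flat.shape)                  # (1000, 128)
```

Each row of `dyn_emb_flat` would then be projected to 2D as in the previous figures.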

Conclusions
In this paper, we introduce Higher-Order Skip-Gram with Negative Sampling (HOSGNS) for time-varying graph representation learning. We generalize the skip-gram embedding approach, which implicitly performs a factorization of the shifted PMI matrix, to perform implicit factorization of a shifted PMI tensor. We show how to optimize HOSGNS for the generic nth-order case, and how to apply 3rd-order and 4th-order SGNS to different higher-order representations of time-varying graphs.
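As a sketch of the objective summarized above (notation illustrative, following the standard Levy–Goldberg analysis of SGNS; the paper's Methods section gives the exact formulation), the 3rd-order case scores each triple (i, j, k) with a multilinear form over the factor matrices and optimizes a negative-sampling loss:

$$
\Theta_{ijk}=\sum_{r=1}^{d} W_{ir}\,C_{jr}\,T_{kr},
\qquad
\mathcal{L}=-\sum_{i,j,k}\Big[\,p^{+}_{ijk}\log\sigma(\Theta_{ijk})+\kappa\,p^{-}_{ijk}\log\sigma(-\Theta_{ijk})\Big]
$$

whose optimum satisfies $\Theta_{ijk}\approx \log\frac{p^{+}_{ijk}}{p^{-}_{ijk}}-\log\kappa$, i.e. a shifted PMI tensor when negatives are drawn from the product of marginals $p^{-}_{ijk}=p_i\,p_j\,p_k$.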
The embedding representations learned by HOSGNS outperform other methods in the literature and set new state-of-the-art results on downstream tasks. By learning embeddings on empirical time-resolved face-to-face proximity data, such representations can be effectively used to predict the outcomes of a SIR spreading process over the time-varying graph. They can also be effectively used for network reconstruction and link prediction.
HOSGNS is able to learn more compact representations of time-varying graphs due to its reduced number of parameters, with a computational complexity that is comparable to or lower than that of other state-of-the-art methods. By learning disentangled representations of nodes and time intervals, HOSGNS uses a number of parameters in the order of O(|V| + |T|), while models that learn node-time representations need a number of parameters that is at least O(|V| · |T|).
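The parameter-count argument can be made concrete with back-of-the-envelope arithmetic (the sizes below are illustrative, not those of the paper's datasets):

```python
V, T, d = 1000, 100, 128   # illustrative sizes: nodes, time slices, embedding dim

# Disentangled factors (HOSGNS-style): separate matrices per tensor mode,
# e.g. two |V| x d and two |T| x d matrices for a 4th-order model.
hosgns_params = 2 * V * d + 2 * T * d      # O(|V| + |T|) scaling

# Entangled node-time representations: one d-dim vector per temporal node (i, k).
joint_params = V * T * d                   # O(|V| * |T|) scaling

print(hosgns_params, joint_params)         # 281600 12800000
```

Already at these modest sizes the disentangled parameterization is roughly 45 times smaller, and the gap widens linearly with the number of snapshots.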
While other methods such as DyANE assume that the whole temporal network is known, here we relax this assumption and show that the learned representations can also be used to predict events that are not seen during the representation learning phase. Yet, one limitation still holds: the transductivity of the model makes it unable to generalize embedding representations outside the set of observed temporal slices. Future work to tackle this limitation could extend the methodology to include prior constraints, such as temporal smoothness and stability of embeddings over consecutive time slices, or equip the model with an inductive framework.
We show that HOSGNS can be intuitively applied to time-varying graphs, but this methodology can be easily adapted to other representation learning problems that involve multi-modal data and multi-layered graph representations, where the purpose is to factorize higher-order dependencies among the elementary units of the system. Beyond these applications, extensions of the model can find use in feature learning on higher-order systems, i.e. hypergraphs and simplicial complexes, where interactions among vertices are intrinsically polyadic.