Figure 4. From: Time-varying graph representation learning via higher-order skip-gram with negative sampling. Two-dimensional projections of the 128-dimensional embedding manifold spanned by dynamic node embeddings, trained on the LyonSchool data and obtained as Hadamard products \(\{\mathbf{w}_{i}\circ \mathbf{t}_{k}\}_{(i,k) \in \mathcal{V}^{(\mathcal{T})}}\) between rows of W (node embeddings) and T (time embeddings), from the HOSGNS model trained on: (a) \(\boldsymbol{\mathcal{P}}^{(\text{stat})}\) and (b) \(\boldsymbol{\mathcal{P}}^{(\text{dyn})}\). We highlight the temporal participation in communities (left of each panel) and the time interval of activation (right of each panel).
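The dynamic node embeddings described in the caption are element-wise (Hadamard) products between node-embedding rows of W and time-embedding rows of T, yielding one 128-dimensional vector per (node, time) pair. A minimal NumPy sketch of this construction, with random matrices standing in for the factors learned by HOSGNS (the array sizes other than the 128 embedding dimension are illustrative, not from the paper):

```python
import numpy as np

# Illustrative sizes: n_nodes nodes, n_times time slices, 128-dim embeddings.
rng = np.random.default_rng(0)
n_nodes, n_times, dim = 5, 3, 128

# W: node embeddings (rows w_i); T: time embeddings (rows t_k).
# Here random placeholders; in the paper these are learned by HOSGNS.
W = rng.normal(size=(n_nodes, dim))
T = rng.normal(size=(n_times, dim))

# Dynamic node embeddings: one vector per (node, time) pair,
# formed as the Hadamard product w_i ∘ t_k.
dynamic = np.einsum("id,kd->ikd", W, T).reshape(n_nodes * n_times, dim)
print(dynamic.shape)  # → (15, 128)
```

The resulting (node × time, 128) matrix is what a 2-D projection method (such as the one used to produce the figure's panels) would take as input.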