Figure 2 | EPJ Data Science

From: Time-varying graph representation learning via higher-order skip-gram with negative sampling

Representation of SGNS and HOSGNS with embedding matrices and operations on embedding vectors. Starting from a random walk realization on a static graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), SGNS takes as input nodes i and j within a context window of size T, and maximizes \(\sigma (\mathbf{w}_{i} \cdot \mathbf{c}_{j})\). HOSGNS starts from a random walk realization on a higher-order representation of a time-varying graph \(\mathcal{H}=(\mathcal{V},\mathcal{E},\mathcal{T})\), takes as input nodes \(i^{(k)}\) (node i at time k) and \(j^{(l)}\) (node j at time l) within a context window of size T, and maximizes \(\sigma ([\!\![\mathbf{w}_{i}, \mathbf{c}_{j}, \mathbf{t}_{k}, \mathbf{s}_{l}]\!\!])\). In both cases, for each input sample, we fix i and draw κ negative samples of j (SGNS) or of the triple (j, k, l) (HOSGNS) from a noise distribution, and maximize \(\sigma (-\mathbf{w}_{i} \cdot \mathbf{c}_{j})\) (SGNS) or \(\sigma (-[\!\![\mathbf{w}_{i}, \mathbf{c}_{j}, \mathbf{t}_{k}, \mathbf{s}_{l}]\!\!])\) (HOSGNS) with the corresponding embedding vectors (negative sampling).
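A minimal NumPy sketch of the two scoring functions described in the caption, assuming the bracket \([\!\![\cdot]\!\!]\) denotes the multilinear product, i.e. the sum over embedding dimensions of the element-wise product of the four vectors. Variable names, the embedding dimension, and the random toy vectors are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_score(w_i, c_j):
    # Pairwise SGNS score: dot product of node and context embeddings.
    return sigmoid(np.dot(w_i, c_j))

def hosgns_score(w_i, c_j, t_k, s_l):
    # Higher-order score: multilinear product [[w_i, c_j, t_k, s_l]],
    # taken here as the sum over dimensions of the element-wise product.
    return sigmoid(np.sum(w_i * c_j * t_k * s_l))

# Toy example with embedding dimension d = 8 (illustrative only).
rng = np.random.default_rng(0)
d = 8
w_i, c_j, t_k, s_l = (rng.normal(size=d) for _ in range(4))

pos_sgns = sgns_score(w_i, c_j)                 # maximized for an observed pair (i, j)
pos_hosgns = hosgns_score(w_i, c_j, t_k, s_l)   # maximized for an observed tuple (i, j, k, l)

# Negative sampling: draw kappa noise samples and maximize sigma(-score),
# which is equivalent to minimizing the score of the noise tuples.
kappa = 5
neg_c = rng.normal(size=(kappa, d))
neg_t = rng.normal(size=(kappa, d))
neg_s = rng.normal(size=(kappa, d))
neg_hosgns = [hosgns_score(w_i, neg_c[m], neg_t[m], neg_s[m]) for m in range(kappa)]
```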
