Spectral Gap Rewiring Layer

GAP-Layer is a graph neural network layer that optimizes the spectral gap of a graph by minimizing or maximizing the size of its bottleneck. The goal of GAP-Layer is to create more connected or more separated communities, depending on the mining task at hand.

The Spectral Gap Rewiring

The first step in implementing GAP-Layer is to minimize the spectral gap, which is done by minimizing the following loss function:

$$ L\_{Fiedler} = \|\tilde{\mathbf{A}}-\mathbf{A}\|\_F + \alpha(\lambda\_2)^2 $$

The derivatives of this cost function with respect to each element of A are not trivial. Depending on whether we use the Laplacian, L, or the normalized Laplacian, L~, the derivatives differ. The Laplacian derivatives are presented in Kang and Tong (2019), while the spectral gradients for the normalized Laplacian are presented here. The derivatives are computed using the Fiedler vector, which is the second eigenvector, f2, of L (or g2 of L~). We suggest referring to the original paper for the details of these derivatives.
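For concreteness, here is a minimal PyTorch sketch of the exact computation that the rest of the section approximates: the Fiedler value and vector obtained from a full eigendecomposition of the combinatorial Laplacian. The function name and the dense-tensor setup are our own assumptions for illustration.

```python
import torch

def exact_fiedler(A):
    """Exact Fiedler value/vector via full eigendecomposition, O(n^3).

    A: (n, n) dense symmetric adjacency matrix.
    Returns (lambda_2, f2): the second-smallest eigenvalue of the
    Laplacian L = D - A and its eigenvector.
    """
    L = torch.diag(A.sum(dim=1)) - A         # combinatorial Laplacian
    eigvals, eigvecs = torch.linalg.eigh(L)  # eigenvalues in ascending order
    return eigvals[1], eigvecs[:, 1]
```

The cubic cost of this eigendecomposition at every iteration is exactly the bottleneck discussed next.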

The problem with GAP-Layer is that computing the Fiedler vector and Fiedler value requires $O(n^3)$ operations at every learning iteration, which makes the method impractical. To solve this, we learn an approximation of f2 based on the Dirichlet energy (the energy required for the graph to be "painted" following the Fiedler-vector split):

$$ \mathbf{f}\_2(u) = \left\{\begin{array}{cl} +1/\sqrt{n} & \text{if}\;\; u\;\; \text{belongs to the first cluster} \\ -1/\sqrt{n} & \text{if}\;\; u\;\; \text{belongs to the second cluster} \end{array}\right. $$

With this approximation, the cluster of each node can be computed with a simple MLP.
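One differentiable way to realize this encoding from the MLP's soft cluster assignments is sketched below in PyTorch; the function name and the soft relaxation (taking the difference of the two assignment columns rather than a hard argmax) are our assumptions, not taken verbatim from the paper.

```python
import torch

def approx_fiedler(S):
    """Approximate the Fiedler vector from soft cluster assignments.

    S: (n, 2) soft assignment matrix, rows summing to 1
       (output of Softmax(MLP(X))).
    Returns f2: (n,) vector whose entries approach +1/sqrt(n) for nodes
    assigned to the first cluster and -1/sqrt(n) for the second.
    """
    n = S.shape[0]
    # Soft version of the +-1/sqrt(n) encoding: confident assignments
    # push entries toward the two extremes, keeping everything differentiable.
    return (S[:, 0] - S[:, 1]) / torch.sqrt(torch.tensor(float(n)))
```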

GAP-Layer

GAP-Layer is defined as follows: given the matrix X (n × F) that encodes the features of the nodes after a message-passing (MP) layer, Softmax(MLP(X)) learns the association X → S, where S is optimized according to the loss function:

$$ L\_{Cut} = -\frac{Tr[\mathbf{S}^T\mathbf{A}\mathbf{S}]}{Tr[\mathbf{S}^T\mathbf{D}\mathbf{S}]} + \left\|\frac{\mathbf{S}^T\mathbf{S}}{\|\mathbf{S}^T\mathbf{S}\|\_F} - \frac{\mathbf{I}\_2}{\sqrt{2}}\right\|\_F $$
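A minimal PyTorch sketch of this loss follows, assuming dense tensors and K = 2 clusters (so S^T S is 2 × 2); the function name is ours. The first term rewards keeping edge weight inside clusters, the second pushes S toward a balanced, near-orthogonal partition.

```python
import torch

def cut_loss(S, A):
    """MinCut-style loss L_Cut for a soft 2-way partition.

    S: (n, 2) soft assignments; A: (n, n) dense adjacency matrix.
    """
    D = torch.diag(A.sum(dim=1))        # degree matrix
    num = torch.trace(S.T @ A @ S)      # association kept within clusters
    den = torch.trace(S.T @ D @ S)      # degree normalization
    cut_term = -num / den
    StS = S.T @ S
    K = S.shape[1]                      # number of clusters (here 2)
    I = torch.eye(K, device=S.device)
    ortho_term = torch.norm(StS / torch.norm(StS) - I / K ** 0.5)
    return cut_term + ortho_term
```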

Then, f2 is approximated from S using the equation for f2(u) above. Once f2 and λ2 have been computed, we consider the loss:

$$ L\_{Fiedler} = \|\tilde{\mathbf{A}}-\mathbf{A}\|\_F + \alpha(\lambda\_2)^2 $$
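Below is a sketch of this loss in the same setting, where λ2 is approximated by the Rayleigh quotient of the approximate Fiedler vector. We assume the combinatorial Laplacian L = D − Ã (the normalized variant would be analogous), and the small epsilon in the Frobenius term is our addition to keep the gradient finite when Ã = A at initialization.

```python
import torch

def fiedler_loss(A_tilde, A, f2, alpha=1.0):
    """L_Fiedler = ||A_tilde - A||_F + alpha * lambda_2^2 (sketch).

    A_tilde: (n, n) rewired adjacency being optimized;
    A: (n, n) original adjacency; f2: (n,) approximate Fiedler vector;
    alpha: trade-off hyperparameter (assumed).
    """
    # Frobenius distance to the original adjacency; eps keeps the
    # gradient finite at A_tilde == A.
    frob = torch.sqrt(((A_tilde - A) ** 2).sum() + 1e-12)
    # Rayleigh-quotient approximation: lambda_2 ~= f2^T L f2 / f2^T f2.
    L = torch.diag(A_tilde.sum(dim=1)) - A_tilde
    lam2 = (f2 @ L @ f2) / (f2 @ f2)
    return frob + alpha * lam2 ** 2
```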

We then return the updated value of A (the original A minus the derivative of λ2 times the step size), so that the GAP diffusion is given by:

$$ \mathbf{T}^{GAP} = \tilde{\mathbf{A}}(\mathbf{S}) \odot \mathbf{A} $$
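Putting the pieces together, here is a sketch of one rewiring step under the assumptions above, reusing the hypothetical approx_fiedler and fiedler_loss helpers. Note that f2 is detached here to keep the sketch focused on the adjacency update; in end-to-end training one would let gradients also flow back into S.

```python
import torch

def gap_rewire(A, S, step_size=0.1, alpha=1.0):
    """One GAP rewiring step: gradient step on A, then mask with A.

    At initialization the Frobenius term has zero gradient, so the step
    moves A along the derivative of the lambda_2 term, as described above.
    """
    A_tilde = A.clone().requires_grad_(True)
    f2 = approx_fiedler(S).detach()     # treat f2 as fixed for this step
    loss = fiedler_loss(A_tilde, A, f2, alpha)
    (grad,) = torch.autograd.grad(loss, A_tilde)
    A_tilde = (A - step_size * grad).detach()
    # GAP diffusion T^GAP = A_tilde(S) ⊙ A: keep weights only on edges
    # present in the original graph.
    return A_tilde * A
```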

GAP-Layer is thus a differentiable rewiring layer: by optimizing the spectral gap, it widens or narrows the graph's bottleneck so that message passing is better suited to the mining task at hand.

References

Kang, J., & Tong, H. (2019). N2N: Network derivative mining. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM) (pp. 861-870).
