doi: 10.17586/2226-1494-2026-26-2-331-336


Ego-net link prediction with GNN

E. I. Zamyatin



For citation:
Zamyatin E.I. Ego-net link prediction with GNN. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2026, vol. 26, no. 2, pp. 331–336. doi: 10.17586/2226-1494-2026-26-2-331-336


Abstract
Link prediction is one of the key challenges in social network analysis. A common way to build such systems is to decompose the task into two levels: at the first level, links within ego-nets are predicted; at the second, the results are aggregated into the final predictions. The accuracy of such systems depends largely on the first-level model, where heuristic methods are typically used. This work focuses on developing a new supervised model to improve the quality of link prediction within ego-nets. The heterogeneity of edge attributes, the absence of node features, and the dynamic nature of ego-nets distinguish this task from related ones. The proposed method belongs to the class of graph neural networks. Its key feature is the ability to effectively exploit the topology of the graph together with edge attributes, without relying on node features. This is achieved by modeling the hidden state of node pairs rather than the state of each node individually. The iterative nature of the model propagates knowledge about the relationships between nodes, increasing the complexity of the structures considered at each step. Model accuracy was measured on the Ego-VK dataset, which consists of ego-nets drawn from a subsample of users of the VKontakte social network. The model is compared with the classical Adamic-Adar method as well as modern approaches based on graph neural networks. Experiments show that the proposed model significantly outperforms the baselines with respect to the NDCG@5 ranking quality metric. The results demonstrate the high effectiveness of the proposed model, and its suitability for integration into distributed systems makes it widely applicable in industry.
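The Adamic-Adar baseline and the NDCG@5 metric mentioned above can be illustrated with a minimal sketch. The toy ego-net below is a hypothetical example (not the Ego-VK data); it scores non-adjacent neighbor pairs by the Adamic-Adar heuristic, sums of inverse log-degrees over common neighbors, and evaluates the resulting ranking with binary-relevance NDCG@5:

```python
import math

# Hypothetical toy ego-net: node 0 is the ego, nodes 1-4 are its friends.
# Adjacency is stored as sets of neighbors.
adj = {
    0: {1, 2, 3, 4},
    1: {0, 2, 3},
    2: {0, 1, 4},
    3: {0, 1},
    4: {0, 2},
}

def adamic_adar(u, v):
    """Adamic-Adar score: sum of 1/log(deg(w)) over common neighbors w."""
    common = adj[u] & adj[v]
    return sum(1.0 / math.log(len(adj[w])) for w in common if len(adj[w]) > 1)

def ndcg_at_k(ranked, relevant, k=5):
    """NDCG@k with binary relevance: DCG of the top-k ranking over ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, pair in enumerate(ranked[:k]) if pair in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal else 0.0

# Candidate links: non-adjacent pairs among the ego's neighbors.
candidates = [(u, v) for u in adj for v in adj
              if u < v and u != 0 and v != 0 and v not in adj[u]]
ranked = sorted(candidates, key=lambda pair: -adamic_adar(*pair))
```

A supervised first-level model would replace `adamic_adar` as the scoring function while the candidate generation and NDCG@5 evaluation stay the same.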

Keywords: information retrieval, recommender systems, graph neural networks, social network analysis, ego-nets

Acknowledgements. The research was carried out under Project No. 625135 “Development of software modules for effective training and inference of deep learning models”.


