Please use this identifier to cite or link to this item: http://hdl.handle.net/10071/36914
Author(s): Awais, M.
Postolache, O. A.
Oliveira, S. M.
Date: 2026
Title: Quantum-enhanced learning: Leveraging von Neumann entropy for enhanced graph neural network performance
Journal title: Neural Networks
Volume: 201
Reference: Awais, M., Postolache, O. A., & Oliveira, S. M. (2026). Quantum-enhanced learning: Leveraging von Neumann entropy for enhanced graph neural network performance. Neural Networks, 201, Article 108958. https://doi.org/10.1016/j.neunet.2026.108958
ISSN: 0893-6080
DOI (Digital Object Identifier): 10.1016/j.neunet.2026.108958
Keywords: Graph neural networks
von Neumann entropy
Quantum information theory
Over-squashing
Over-smoothing
Abstract: Graph Neural Networks (GNNs) have established themselves as powerful tools for learning from graph-structured data. However, their reliance on local message-passing mechanisms leads to over-squashing—the compression of exponentially growing neighborhood information into fixed-size vectors—which severely limits long-range dependency modeling. We introduce the Quantum-Inspired Graph Neural Network (QGNN) with a novel Quantum Entanglement Loss (QEL) function that addresses this challenge through a fundamentally different mechanism than existing approaches. Unlike spectral regularization (which enforces smoothness) or maximum entropy methods (which encourage representation diversity), QEL minimizes the von Neumann entropy of the node embedding correlation matrix, thereby concentrating eigenvalues in dominant eigenmodes that preserve global structural patterns. This entropy minimization creates direct information pathways between distant but functionally related nodes, effectively bypassing multi-hop bottlenecks. We evaluate QGNN on both standard benchmarks (Cora, Citeseer, PPI, Electronic Circuits) and the Long Range Graph Benchmark (LRGB) suite, which features graphs with average diameters up to 56.99 (Peptides). On LRGB datasets, QGNN achieves substantial improvements: 37.6% relative MAE reduction on Peptides-struct compared to GCN, 4.0% improvement over Graph Transformers (GraphGPS), and notably, 97% better performance than GCN on node pairs separated by 7+ hops. Despite these gains, QGNN requires only 20–30% additional computational overhead compared to standard GCN, while being 5–6× faster than Graph Transformer approaches. Our results establish entropy-based regularization as a principled and efficient approach for long-range dependency modeling in graphs.
Peer reviewed: yes
Access type: Open Access
Appears in Collections: IT-RI - Articles in peer-reviewed international scientific journals
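
Illustrative note: the entropy-minimization mechanism described in the abstract can be sketched in a few lines of code. The snippet below is a minimal, hypothetical PyTorch illustration of a von Neumann entropy penalty on the trace-normalized correlation matrix of node embeddings; the function name, the Gram-matrix construction, and the regularization weight are assumptions made for illustration and are not taken from the article.

import torch

def von_neumann_entropy(H, eps=1e-10):
    # H: (num_nodes, dim) node embedding matrix produced by a GNN layer.
    # Form a symmetric positive semi-definite correlation (Gram) matrix and
    # normalize it to unit trace, so it plays the role of a density matrix.
    C = H.t() @ H
    rho = C / C.trace().clamp_min(eps)
    # Eigenvalues of the density-matrix analogue; floor them before the log.
    evals = torch.linalg.eigvalsh(rho).clamp_min(eps)
    # S(rho) = -sum_i lambda_i * log(lambda_i); minimizing this concentrates
    # the spectrum in a few dominant eigenmodes.
    return -(evals * evals.log()).sum()

# Hypothetical training-step usage: add the entropy term to the task loss.
H = torch.randn(2708, 64, requires_grad=True)   # e.g. Cora-sized embeddings
task_loss = torch.tensor(0.0)                   # placeholder for cross-entropy / MAE
lam = 0.1                                       # assumed regularization weight
loss = task_loss + lam * von_neumann_entropy(H)
loss.backward()

Minimizing S(rho) drives the eigenvalue distribution toward a concentrated, low-entropy spectrum, which is the behaviour the abstract attributes to QEL; the loss used in the paper may differ in normalization and weighting.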

Files in This Item:
File: article_117842.pdf | Size: 3.19 MB | Format: Adobe PDF


