Volume 18, No. 12
Graph Compression for Interpretable Graph Neural Network Inference At Scale
Abstract
We demonstrate ExGIS, a parallel inference query engine that supports explainable Graph Neural Network (GNN) inference analysis over large graphs. (1) For a class of GNNs M_L with at most L layers and a graph G, ExGIS performs an offline, once-for-all compression of G to a small graph G_c, such that for any inference query Q that requests the output of any GNN M ∈ M_L on any node v in G, G_c can be queried directly to yield the correct output without decompression. (2) Given a workload W of inference queries that request the outputs of GNNs from M_L over G, ExGIS performs fast online GNN inference and interpretation in parallel. It dynamically partitions W to balance workloads, and, in parallel, (a) executes inference that consults only the compressed graph G_c without decompression, and (b) directly derives concise, explanatory subgraphs from G_c that clarify the query output with high fidelity. Moreover, ExGIS integrates visual, interactive interfaces for query performance analysis and a Large Language Model (LLM)-enabled interpreter that supports user-friendly, natural-language explanation of query outputs. We demonstrate the compression rate and scalability of ExGIS, and its application in interpretable anomaly detection over Bitcoin transaction networks and academic networks.
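The sketch below is not the ExGIS algorithm; it only illustrates, under standard assumptions, the locality property such compression schemes rely on: the output of an L-layer message-passing GNN at a node v depends only on v's L-hop neighborhood, so nodes that a message-passing layer cannot distinguish can in principle be collapsed. The toy graph, labels, and merge rule are hypothetical examples.

```python
# Illustrative sketch only -- not the ExGIS compression. Shows (a) L-hop
# locality of message-passing GNN inference and (b) a toy merge of nodes
# with identical signatures (one round of color refinement).
from collections import deque

def l_hop_neighborhood(adj, v, L):
    """Return the set of nodes within L hops of v (BFS over an adjacency dict)."""
    seen = {v}
    frontier = deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == L:
            continue
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

def merge_equivalent_nodes(adj, labels):
    """Toy compression: merge nodes whose label and sorted neighbor labels
    coincide. Real query-preserving compression is more involved; this only
    illustrates collapsing nodes that one message-passing layer cannot tell apart."""
    signature = {
        u: (labels[u], tuple(sorted(labels[w] for w in adj.get(u, []))))
        for u in adj
    }
    groups = {}
    for u, sig in signature.items():
        groups.setdefault(sig, []).append(u)
    rep = {u: min(group) for group in groups.values() for u in group}
    compressed = {}
    for u, nbrs in adj.items():
        compressed.setdefault(rep[u], set()).update(rep[w] for w in nbrs)
    return compressed, rep

if __name__ == "__main__":
    # Hypothetical toy graph and node labels.
    adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
    labels = {1: "a", 2: "b", 3: "b", 4: "a"}
    print(l_hop_neighborhood(adj, 1, L=1))      # {1, 2, 3}
    print(merge_equivalent_nodes(adj, labels))  # nodes 1/4 and 2/3 collapse
```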