Explaining graph neural networks (GNNs), and in particular the edges and interactions among vertices that drive their predictions, is challenging mainly because of the dynamics and groupings between vertices. Existing graph explainability methods ignore how the weights of the downstream task distribute over subgraphs and instead analyze sample-level explainability alone. Such sample-level explainability limits generalizability, since the explaining behaviour is searched for directly in the input dataset. In this study, we propose a novel Orbit-based GNN explainer (OExplainer), which integrates sample-level and method-level approaches over a predetermined set of subgraphs. Through this subgraph analysis, our goal is to interpret graphs more comprehensively and intelligibly while providing an explainability score for each vertex of a particular graph instance.
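To make the notion of a predetermined subgraph set concrete, the sketch below (our own illustration, not code from the paper) enumerates per-vertex counts for the four automorphism orbits of the 2- and 3-node graphlets; the function name `orbit_counts` and the use of NetworkX are assumptions of this sketch.

```python
# Hedged illustration: per-vertex counts for graphlet orbits 0-3
# (edge endpoint, path end, path centre, triangle vertex).
import numpy as np
import networkx as nx

def orbit_counts(G):
    """Return an (n_vertices x 4) matrix of orbit counts and the node order."""
    tri = nx.triangles(G)            # triangles through each vertex
    nodes = list(G.nodes)
    feats = np.zeros((len(nodes), 4))
    for i, v in enumerate(nodes):
        deg = G.degree(v)
        feats[i, 0] = deg                                  # orbit 0: edge endpoint
        wedges = sum(G.degree(u) - 1 for u in G[v])        # 2-step walks from v
        feats[i, 1] = wedges - 2 * tri[v]                  # orbit 1: end of induced path
        feats[i, 2] = deg * (deg - 1) // 2 - tri[v]        # orbit 2: centre of induced path
        feats[i, 3] = tri[v]                               # orbit 3: triangle vertex
    return feats, nodes
```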
Our OExplainer decomposes the downstream graph neural network weights into explaining subgraph bases while identifying and characterizing particular predictions. This characterization lets us interpret accurately the role a predetermined graph orbit plays in determining vertex representations, and it also clarifies the method's behaviour globally over the whole input dataset. Moreover, we introduce novel vertex-specific scores into our subgraph-based approach over non-isomorphic graphlets. These vertex-specific scores promote sample-level vertex improvement, which in turn is tied to the GNN's vertex classification task.
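As one way to picture the decomposition, the following sketch (again our illustration; regressing onto orbit counts is an assumption standing in for the paper's exact decomposition) projects a trained GNN's per-vertex outputs onto the orbit-count basis from the previous snippet, yielding method-level orbit weights and sample-level vertex scores. It reuses `orbit_counts` defined above.

```python
import numpy as np

def explain_vertices(G, vertex_logits):
    """Decompose per-vertex GNN outputs over the orbit basis.

    `vertex_logits`: one scalar per vertex from the trained model, e.g. the
    logit of the predicted class (an assumption of this sketch).
    """
    O, nodes = orbit_counts(G)
    # Method-level view: a least-squares weight for each orbit basis element.
    beta, *_ = np.linalg.lstsq(O, np.asarray(vertex_logits), rcond=None)
    # Sample-level view: each vertex's score under those orbit weights.
    scores = O @ beta
    return beta, dict(zip(nodes, scores))
```

In this toy reading, `beta` indicates how strongly, say, triangle membership versus path centrality drives the model's vertex predictions across the dataset, while the per-vertex scores rank vertices within a single graph instance.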
Our experiments on simulated datasets confirm that method weights, and their decomposition, are critical for explaining vertex classification. Detailed experiments on multiple real protein-protein interaction datasets and metabolic interaction networks likewise show improved vertex classification performance.
On both the simulated and the biological protein-protein interaction datasets, our approach outperforms competing explanation methods.