Spectral Neural Networks: Unlocking AI Explainability with Automatic Input Relevance & Sparse Models

By Lorenzo Chicchi, Lorenzo Buffoni, Diego Febbe, Lorenzo Giambagli, Raffaele Marino, Duccio Fanelli


Published on November 24, 2025 | Vol. 1, Issue No. 1

Summary

This paper introduces a novel method for estimating the relative importance of input features in Deep Neural Networks (DNNs). The technique relies on a spectral re-parametrization of the optimization process, in which the eigenvalues associated with input nodes serve as a robust proxy for input relevance. A key advantage is that this feature ranking emerges automatically during network training, with no additional post-processing required. Furthermore, by regularizing these eigenvalues, the method can enforce sparse input representations, enhancing explainability by identifying and using only a minimal subset of relevant input components. The authors compare their technique against existing methods on both synthetic and real-world datasets, demonstrating its effectiveness.
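To make the idea concrete, here is a minimal, simplified sketch in PyTorch, not the authors' exact spectral construction: each input node is gated by a trainable "eigenvalue", an L1 penalty on these eigenvalues pushes irrelevant inputs toward zero during training, and their magnitudes give the feature ranking for free. All names, layer sizes, and the regularization weight below are illustrative assumptions.

```python
# Minimal sketch (assumption: a per-input eigenvalue gate stands in for the
# paper's full spectral re-parametrization). |lambda_i| acts as the relevance
# score for input i; an L1 penalty enforces a sparse input representation.

import torch
import torch.nn as nn

class EigenGatedNet(nn.Module):
    def __init__(self, n_inputs, n_hidden, n_outputs):
        super().__init__()
        # One trainable eigenvalue per input node, initialized to 1 (gate fully open).
        self.eigenvalues = nn.Parameter(torch.ones(n_inputs))
        self.body = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, n_outputs),
        )

    def forward(self, x):
        # Scale each input component by its eigenvalue before the dense layers.
        return self.body(x * self.eigenvalues)

    def relevance(self):
        # Ranking proxy: magnitude of the input eigenvalues.
        return self.eigenvalues.detach().abs()

model = EigenGatedNet(n_inputs=20, n_hidden=64, n_outputs=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
l1_strength = 1e-3  # hypothetical regularization weight

x = torch.randn(128, 20)            # placeholder batch of 20-dimensional inputs
y = torch.randint(0, 2, (128,))     # placeholder binary labels

for _ in range(100):
    optimizer.zero_grad()
    # Task loss plus L1 penalty on the eigenvalues: sparsity and ranking
    # are learned jointly with the network weights, not computed afterwards.
    loss = criterion(model(x), y) + l1_strength * model.eigenvalues.abs().sum()
    loss.backward()
    optimizer.step()

print(model.relevance())  # per-feature relevance, available as soon as training ends
```

After training, sorting the inputs by these magnitudes yields the relevance ranking, and features whose eigenvalues have collapsed to near zero can simply be dropped.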

Why It Matters

This research offers a significant leap forward in the crucial field of Explainable AI (XAI). For AI professionals, the ability to automatically identify and rank input feature relevance during model training, without separate post-processing steps, is a game-changer. Current XAI methods often add computational overhead or sit outside the training pipeline, which complicates deployment. By leveraging spectral re-parametrization, this technique not only demystifies the 'black box' nature of Deep Neural Networks but also actively promotes model simplicity. Enforcing sparse representations means models can operate on a minimal set of truly relevant inputs, leading to more efficient, robust, and interpretable AI systems. This is particularly vital in regulated industries (e.g., healthcare, finance), where model transparency is paramount for regulatory compliance, ethical deployment, and user trust. Moreover, understanding feature importance can guide better data collection, improve feature engineering, and help debug models that might otherwise rely on spurious correlations. In essence, this approach turns explainability from a post-hoc analysis into an intrinsic property of the model's learning process, potentially accelerating the development and adoption of trustworthy AI across diverse applications.
