Explainability and Interpretability Concepts for Edge AI Systems
Vermesan, Ovidiu; Piuri, Vincenzo; Scotti, Fabio; Genovese, Angelo; Donida Labati, Ruggero; Coscia, Pasquale
Chapter
Published version
Permanent link
https://hdl.handle.net/11250/3136233
Publication date
2023
Collections
- Publications from CRIStin - SINTEF AS [5801]
- SINTEF Digital [2501]
Original version
Advancing Edge Artificial Intelligence: System Contexts. 2023, 197-227. 10.13052/rp-9788770041010
Abstract
The increased complexity of artificial intelligence (AI), machine learning (ML), and deep learning (DL) methods, models, and training data needed to satisfy industrial application requirements has emphasised the need for AI models that provide explainability and interpretability. Model explainability aims to communicate the reasoning of AI/ML/DL technology to end users, while model interpretability focuses on improving model transparency so that users understand precisely why and how a model generates its results.
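The explainability notion described above can be illustrated with one common post-hoc technique, permutation feature importance: shuffle a single input feature across samples and measure how much the model's per-sample outputs shift; features causing larger shifts matter more to the model's decision. The toy linear model, feature names, and data below are illustrative assumptions for this sketch, not material from the chapter:

```python
import random

# Toy "model": a fixed linear scorer over three sensor features.
# The weights are illustrative only; a real edge AI model would be
# a trained network deployed on-device.
WEIGHTS = {"temperature": 0.7, "vibration": 0.25, "humidity": 0.05}

def model(sample):
    """Return a score for one sample (dict of feature -> value)."""
    return sum(WEIGHTS[f] * v for f, v in sample.items())

def permutation_importance(samples, n_rounds=10, seed=0):
    """Post-hoc explanation: shuffle one feature at a time across
    samples and average the absolute change in each sample's score.
    Larger average shifts indicate a more influential feature."""
    rng = random.Random(seed)
    originals = [model(s) for s in samples]
    importance = {}
    for feat in WEIGHTS:
        total = 0.0
        for _ in range(n_rounds):
            values = [s[feat] for s in samples]
            rng.shuffle(values)
            for s, orig, v in zip(samples, originals, values):
                total += abs(model({**s, feat: v}) - orig)
        importance[feat] = total / (n_rounds * len(samples))
    return importance

# Hypothetical sensor readings (temperature, vibration, humidity).
samples = [
    {"temperature": t, "vibration": v, "humidity": h}
    for t, v, h in [(20, 1, 50), (35, 3, 40), (50, 2, 60), (65, 4, 55)]
]

imp = permutation_importance(samples)
# temperature dominates here, since it carries both the largest
# weight and the widest value spread in this toy data
```

Because the technique only perturbs inputs and reads outputs, it treats the model as a black box, which makes it attractive on edge devices where the deployed model's internals may be quantised or otherwise inaccessible.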
Edge AI, which combines AI, the Internet of Things (IoT), and edge computing to enable real-time collection, processing, analytics, and decision-making, introduces new challenges to achieving explainable and interpretable methods. This is due to the compromises among performance, constrained resources, model complexity, and power consumption, and to the lack of benchmarking and standardisation in edge environments.
This chapter presents the state of play of AI explainability and interpretability methods and techniques, discussing different benchmarking approaches and highlighting state-of-the-art development directions.