Towards Explainable Artificial Intelligence: Interpretable Models and Techniques

Sanjay Reddy
Amelia Walker

Abstract

The rapid advancement of artificial intelligence (AI) has led to its integration into critical domains such as healthcare, finance, and autonomous systems, where understanding and trust in AI decisions are paramount. While deep learning models often achieve state-of-the-art performance, their complex, black-box nature limits their interpretability. This paper explores the growing field of explainable AI (XAI), focusing on techniques for enhancing the interpretability of AI models. We examine various approaches, including model-specific techniques such as decision trees and rule-based systems, and model-agnostic methods such as feature importance, local explanations, and surrogate models. Furthermore, we discuss the trade-offs between accuracy and interpretability, providing a comprehensive review of the current landscape and future challenges. By promoting transparency in AI, this research aims to improve user trust, ensure fairness, and facilitate the deployment of AI systems in safety-critical applications.
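As a concrete illustration of one model-agnostic method named in the abstract, the sketch below computes permutation feature importance: each feature column is shuffled in turn and the resulting drop in model accuracy measures how much the model relies on that feature. The dataset and model here (Iris, a random forest) are illustrative choices, not drawn from the paper itself.

```python
# Minimal sketch of permutation feature importance, a model-agnostic
# XAI technique. Dataset and model are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature column and measure the drop in accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Because the technique only queries the fitted model's predictions, the same code works unchanged for any classifier, which is precisely what makes such methods "model-agnostic."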

Article Details

How to Cite
Reddy, S., & Walker, A. (2025). Towards Explainable Artificial Intelligence: Interpretable Models and Techniques. International Journal on Advanced Electrical and Computer Engineering, 13(1), 14–20. Retrieved from https://journals.mriindia.com/index.php/ijaece/article/view/85
Section
Articles
