Image Processing-Enhanced Explainable Deep Learning for Skin Disease Detection
Abstract
This paper introduces a framework for improving both the accuracy and transparency of deep learning models for skin disease detection. It addresses the well-known "black box" problem of deep neural networks by integrating explainable artificial intelligence (XAI) techniques with image processing methods. The proposed framework begins with a preprocessing stage in which image processing techniques such as noise reduction, contrast enhancement, and lesion segmentation are applied to skin lesion images. These techniques improve the quality of the input data, thereby boosting the model's ability to extract relevant features. The processed images are then fed into a convolutional neural network (CNN) fine-tuned to classify different skin conditions. To make the model's decisions transparent, the framework incorporates XAI methods such as Grad-CAM (Gradient-weighted Class Activation Mapping), which generates heatmaps highlighting the regions of the image the model focuses on when making a prediction. This two-part strategy, which utilises image processing to enhance the input and XAI to explain the output, yields a more reliable and trustworthy system. Experimental results show that the proposed method not only achieves high diagnostic accuracy but also provides clinicians with a visual explanation of the model's reasoning. This transparency is crucial for clinical adoption, as it builds confidence and facilitates the integration of AI tools into dermatological practice.
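The Grad-CAM heatmap described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the standard Grad-CAM formulation, in which each convolutional feature map is weighted by the global average of the class score's gradient over that map, the weighted maps are summed, and a ReLU keeps only regions with a positive influence on the predicted class. Here the feature maps and gradients are supplied as NumPy arrays, as if extracted from a CNN's last convolutional layer:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap from one convolutional layer.

    feature_maps : array of shape (K, H, W), the K activation maps.
    gradients    : array of shape (K, H, W), d(class score)/d(activation).
    Returns a (H, W) heatmap normalised to [0, 1].
    """
    # alpha_k: global-average-pool the gradients over spatial dimensions
    weights = gradients.mean(axis=(1, 2))               # shape (K,)
    # Weighted sum of feature maps over the channel axis
    cam = np.tensordot(weights, feature_maps, axes=1)   # shape (H, W)
    # ReLU: keep only features with a positive influence on the class
    cam = np.maximum(cam, 0)
    # Normalise to [0, 1] for display as a heatmap overlay
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: two 4x4 feature maps with uniform positive gradients
fmaps = np.ones((2, 4, 4))
grads = np.ones((2, 4, 4))
heatmap = grad_cam(fmaps, grads)   # uniform map, normalised to all ones
```

In practice the heatmap would be upsampled to the input image size and overlaid on the lesion photograph so a clinician can see which region drove the prediction.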
Article Details

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.