This study aims to improve the interpretability of brain tumour detection by applying explainable AI techniques, namely Grad-CAM and SHAP, to an Xception-based convolutional neural network (CNN). The model classifies brain MRI images into four categories (glioma, meningioma, pituitary tumour, and non-tumour), with the explanation methods providing the transparency and reliability needed for potential clinical applications. An Xception-based CNN was trained on a labelled dataset of brain MRI images. Grad-CAM then provided region-based visual explanations by highlighting the areas of each MRI scan that were most important for tumour classification, while SHAP quantified feature importance, offering a more detailed understanding of individual model decisions. These complementary methods enhance model transparency and help expose potential biases. The model achieved accuracies of 99.95%, 99.08%, and 98.78% on the training, validation, and test sets, respectively. Grad-CAM effectively identified regions significant for the different tumour types, and SHAP analysis provided insights into the importance of individual features; together, these approaches confirmed the reliability and interpretability of the model and addressed key challenges in AI-driven medical diagnostics. Integrating Grad-CAM and SHAP with a high-performing CNN therefore enhances the interpretability and trustworthiness of brain tumour detection systems, and the findings underscore the potential of explainable AI to improve diagnostic accuracy and encourage the adoption of AI technologies in clinical practice.
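The abstract does not report implementation details, so the following is a minimal sketch of the kind of pipeline it describes, assuming a TensorFlow/Keras Xception backbone with a four-class softmax head and Grad-CAM computed from the final convolutional activation layer (`block14_sepconv2_act`). The input size, classifier head, and layer choice are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow import keras

# Assumed setup: Xception feature extractor with a 4-class softmax head
# (glioma, meningioma, pituitary tumour, non-tumour), as in the abstract.
NUM_CLASSES = 4
base = keras.applications.Xception(
    weights="imagenet", include_top=False,
    input_shape=(299, 299, 3), pooling="avg",
)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
model = keras.Model(base.input, outputs)


def grad_cam(model, image, last_conv_layer="block14_sepconv2_act", class_index=None):
    """Return a Grad-CAM heatmap for one preprocessed MRI image of shape (H, W, 3)."""
    # Model mapping the input image to (last conv activations, class predictions).
    grad_model = keras.Model(
        model.input,
        [model.get_layer(last_conv_layer).output, model.output],
    )
    img = tf.expand_dims(tf.convert_to_tensor(image, dtype=tf.float32), 0)
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img)
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # explain the predicted class
        class_score = preds[:, class_index]
    # Gradients of the class score w.r.t. the last convolutional feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Channel importance weights: global-average-pooled gradients.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, then ReLU and normalisation to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()  # upsample to the image size to overlay on the MRI scan
```

In this sketch the heatmap would be resized to the scan's resolution and overlaid on the MRI image to visualise the regions driving each tumour-type prediction; SHAP values could be computed on the same trained model with a separate explainer to quantify per-feature contributions.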