Explainable Artificial Intelligence (XAI) in Diagnosing Neurodevelopmental Disorders: From Black Boxes to Clinical Transparency

Ifeanyi Kingsley Egbuna 1,*, Precious Airebanmen Otoibhili 2, Rofiat Oyiza Abdulkareem 3, Festus Ikechukwu Ogbozor 4, Praise Etinosa Oyegue 5, Nnaemeka Kelsey Azih 6 and Oluwaseyi Blessing Akomolafe 7

1 Department of Supply Chain Management, Marketing, and Management, Wright State University, United States.
2 Department of Anatomy, Ambrose Alli University, Ekpoma, Edo State, Nigeria.
3 Department of Pharmacology, Bayero University, Kano, Nigeria.
4 Department of Biochemistry, Biophysics and Biotechnology, Jagiellonian University, Poland.
5 Department of Anatomy, Ambrose Alli University, Ekpoma, Edo State, Nigeria.
6 Department of Paediatrics, Margaret Lawrence University Teaching Hospital, Abuja, Nigeria.
7 Master of Arts in Psychology Program, School of Health Sciences, Mapúa University, Manila, Philippines.
 
Review
International Journal of Biological and Pharmaceutical Sciences Archive, 2025, 10(01), 031-057.
Article DOI: 10.53771/ijbpsa.2025.10.1.0052
Publication history: 
Received on 18 May 2025; revised on 30 June 2025; accepted on 03 July 2025
 
Abstract: 
Neurodevelopmental disorders (NDDs), including autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD), affect millions of children globally, imposing substantial clinical, social, and economic burdens. While early diagnosis remains critical for improving long-term outcomes, traditional assessment methods often suffer from subjectivity, late detection, and limited scalability. The advent of artificial intelligence (AI) has brought data-driven precision to mental health diagnostics, yet the opaque, "black-box" nature of most AI models has hindered their acceptance in high-stakes clinical settings. In response, explainable AI (XAI) has emerged as a crucial bridge between computational performance and clinical interpretability. This review critically explores the foundations, applications, and limitations of XAI in the early detection and diagnosis of NDDs. We examine core XAI paradigms, from SHAP and LIME to attention-based and saliency-driven techniques, and illustrate their capacity to make AI decision-making legible in real-world diagnostic workflows. Case studies applying XAI to fMRI, EEG, and behavioral data for ASD and ADHD illustrate its diagnostic potential. Yet challenges persist, including inconsistent explanation reliability, trade-offs between model transparency and accuracy, and the risks posed by data bias, particularly in underrepresented pediatric populations. Looking forward, we chart future directions involving the fusion of XAI with digital biomarkers, federated learning for multicenter collaboration, and clinician-in-the-loop systems to ensure ethical, trustworthy, and context-sensitive deployment. By integrating interpretability into the very fabric of AI systems, this review advocates for a future where transparency and technological innovation coalesce to advance pediatric neuropsychiatric care.
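To make the SHAP paradigm named above concrete, the sketch below shows how a per-prediction feature attribution might be produced for a screening classifier. It is illustrative only: the model, the behavioral feature names (eye_contact_score and the rest), and the data are synthetic assumptions, not drawn from any study discussed in this review, and it presumes the scikit-learn and shap libraries are installed.

```python
# Minimal sketch: attributing one hypothetical ASD-screening prediction
# to its input features with SHAP. Features and labels are synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical behavioral features, for illustration only
feature_names = [
    "eye_contact_score",
    "response_latency_s",
    "repetitive_motion_rate",
    "joint_attention_index",
]

rng = np.random.default_rng(42)
X = rng.normal(size=(300, len(feature_names)))
y = (X[:, 0] - 0.7 * X[:, 2] > 0).astype(int)  # toy labeling rule

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# shap.Explainer dispatches to TreeExplainer for tree ensembles
explanation = shap.Explainer(model)(X[:1])

# In recent shap versions, attributions for a binary classifier carry a
# trailing class axis; index the positive ("screen-positive") class.
contributions = explanation.values[0, :, 1]
for name, value in sorted(
    zip(feature_names, contributions), key=lambda pair: -abs(pair[1])
):
    print(f"{name:>24}: {value:+.4f}")
```

The printed ranking is the kind of per-case evidence summary a clinician-facing XAI tool would surface: each feature's signed contribution to this one prediction, ordered by magnitude.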
 
Keywords: 
Explainable AI; Neurodevelopmental Disorders; Autism Diagnosis; ADHD Clustering; Interpretable Models; Pediatric Neuroscience; Clinical Transparency; Digital Biomarkers
 