{"id":1326,"date":"2025-09-29T07:54:06","date_gmt":"2025-09-29T07:54:06","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/"},"modified":"2025-12-28T22:05:40","modified_gmt":"2025-12-28T22:05:40","slug":"explainable-ai-illuminating-the-black-box-across-domains-2","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/","title":{"rendered":"Explainable AI: Illuminating the Black Box Across Domains"},"content":{"rendered":"<h3>Latest 50 papers on explainable ai: Sep. 29, 2025<\/h3>\n<h2 id=\"explainable-ai-illuminating-the-black-box-across-domains\">Explainable AI: Illuminating the Black Box Across Domains<\/h2>\n<p>In the rapidly evolving landscape of AI and Machine Learning, the demand for transparency and trustworthiness has become paramount. As AI systems become more powerful and ubiquitous, particularly in high-stakes fields like healthcare, cybersecurity, and autonomous systems, understanding <em>why<\/em> a model makes a certain decision is no longer a luxury but a necessity. This drive for clarity forms the core of Explainable AI (XAI), a field dedicated to making complex AI models more interpretable and accessible. Recent research, synthesized from a collection of innovative papers, showcases significant strides in moving XAI from a theoretical concept to practical, impactful applications across diverse domains.<\/p>\n<h2 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h2>\n<p>At its heart, recent XAI research grapples with two central challenges: achieving high performance without sacrificing interpretability and ensuring that explanations are truly useful and understandable to human users. A common thread throughout these papers is the integration of XAI <em>into<\/em> the model design or decision-making process, rather than as a post-hoc add-on. 
For instance, the groundbreaking work by <strong>Xiuqi Ge et al.<\/strong> from the University of Electronic Science and Technology of China, in their paper \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.17924\">Medical Priority Fusion: Achieving Dual Optimization of Sensitivity and Interpretability in NIPT Anomaly Detection<\/a>\u201d, introduces Medical Priority Fusion (MPF), a framework that simultaneously optimizes diagnostic accuracy and interpretability in non-invasive prenatal testing (NIPT). This represents a crucial step towards clinically deployable solutions that balance performance with transparency, especially in critical medical applications.<\/p>\n<p>Another significant innovation comes from <strong>Patrick Wienholt et al.<\/strong>, whose \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.07477\">MedicalPatchNet: A Patch-Based Self-Explainable AI Architecture for Chest X-ray Classification<\/a>\u201d introduces an inherently self-explainable deep learning architecture. Instead of relying on post-hoc methods, MedicalPatchNet provides transparent attribution of decisions to specific image regions, effectively mitigating risks like \u2018shortcut learning\u2019 and enhancing clinical trust. Similarly, <strong>Ahad, M. T. et al.<\/strong> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.16251\">R-Net: A Reliable and Resource-Efficient CNN for Colorectal Cancer Detection with XAI Integration<\/a>\u201d and <strong>Saifuddin Sagor et al.<\/strong> with \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.16250\">A study on Deep Convolutional Neural Networks, transfer learning, and Mnet model for Cervical Cancer Detection<\/a>\u201d both integrate XAI techniques like LIME, SHAP, and Grad-CAM directly into their medical imaging models to provide high accuracy with clear diagnostic insights, moving beyond black-box predictions.<\/p>\n<p>The push for interpretability extends beyond healthcare. 
<strong>Tiouti Mohammed et al.<\/strong>, from Universit\u00e9 d\u2019Evry-Val-d\u2019Essonne and Audensiel Conseil, present \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.09387\">MetaLLMix : An XAI Aided LLM-Meta-learning Based Approach for Hyper-parameters Optimization<\/a>\u201d, a zero-shot hyperparameter optimization framework. This innovative approach uses meta-learning, XAI (specifically SHAP), and efficient LLM reasoning to significantly reduce optimization time while providing transparent, natural language explanations for chosen configurations. This not only speeds up development but also offers clear rationales, a major leap for AutoML.<\/p>\n<p>Moreover, the concept of XAI is being refined to address specific human needs and contexts. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2506.14775\">See What I Mean? CUE: A Cognitive Model of Understanding Explanations<\/a>\u201d by <strong>Tobias Labarta et al.<\/strong> from Fraunhofer Heinrich Hertz Institute, delves into the cognitive aspects of understanding explanations, particularly for visually impaired users. Their CUE model validates how explanation properties like legibility impact user comprehension, emphasizing that \u2018accessibility by design\u2019 doesn\u2019t always translate to \u2018accessibility in practice.\u2019 This highlights a crucial area for user-centered XAI design.<\/p>\n<p>In the realm of security, <strong>Md. Alamgir Hossain et al.<\/strong>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.20391\">A Comparative Analysis of Ensemble-Based Machine Learning Approaches with Explainable AI for Multi-Class Intrusion Detection in Drone Networks<\/a>\u201d leverages XAI for robust intrusion detection in drone networks, using SHAP and LIME to enhance transparency and trust in critical operations. 
Similarly, the authors of \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.09989\">rCamInspector: Building Reliability and Trust on IoT (Spy) Camera Detection using XAI<\/a>\u201d achieve over 99% accuracy in IoT camera detection while ensuring interpretability through SHAP and LIME, proving XAI\u2019s value in practical cybersecurity.<\/p>\n<h2 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h2>\n<p>These advancements are often powered by novel architectures, curated datasets, and rigorous benchmarking, enabling both performance and interpretability:<\/p>\n<ul>\n<li><strong>MedicalPatchNet<\/strong>: A self-explainable patch-based architecture for chest X-ray classification, evaluated on the large-scale <strong>CheXpert<\/strong> and <strong>CheXlocalize<\/strong> datasets, with code and model weights publicly available on <a href=\"https:\/\/huggingface.co\/patrick-w\/MedicalPatchNet\">Hugging Face<\/a> and <a href=\"https:\/\/github.com\/TruhnLab\/MedicalPatchNet\">GitHub<\/a>.<\/li>\n<li><strong>R-Net &amp; S-Net<\/strong>: Lightweight CNN models for colorectal and cervical cancer detection, respectively. They leverage transfer learning from state-of-the-art CNNs (InceptionV3, VGG16, MobileNet, ResNet50, DenseNet121, Xception) and integrate XAI techniques (LIME, SHAP, Grad-CAM). R-Net achieves 99.37% accuracy in CRC detection (<a href=\"https:\/\/arxiv.org\/pdf\/2509.16251\">https:\/\/arxiv.org\/pdf\/2509.16251<\/a>), while S-Net reaches 99.99% accuracy in cervical cancer detection, making it ideal for resource-constrained settings (<a href=\"https:\/\/arxiv.org\/pdf\/2509.16250\">https:\/\/arxiv.org\/pdf\/2509.16250<\/a>).<\/li>\n<li><strong>MetaLLMiX<\/strong>: A zero-shot hyperparameter optimization framework that utilizes smaller, open-source LLMs (under 8B parameters) and SHAP for explanations. 
It constructs a comprehensive meta-dataset from medical image classification tasks, demonstrating competitive performance without relying on external APIs.<\/li>\n<li><strong>MPF (Medical Priority Fusion)<\/strong>: A novel framework for NIPT anomaly detection, validated on 1,687 real-world NIPT samples, achieving 89.3% sensitivity and 80% interpretability simultaneously. It integrates probabilistic reasoning and rule-based logic (<a href=\"https:\/\/arxiv.org\/pdf\/2509.17924\">https:\/\/arxiv.org\/pdf\/2509.17924<\/a>).<\/li>\n<li><strong>L-XAIDS<\/strong>: An explainable AI framework for Intrusion Detection Systems that employs LIME and ELI5, achieving 85% accuracy on the <strong>UNSW-NB15 dataset<\/strong> while providing feature significance rankings (<a href=\"https:\/\/arxiv.org\/pdf\/2508.17244\">https:\/\/arxiv.org\/pdf\/2508.17244<\/a>).<\/li>\n<li><strong>Causal SHAP<\/strong>: A novel method integrating causal discovery with feature attribution to account for feature dependencies, offering more accurate and context-aware explanations than traditional SHAP methods (<a href=\"https:\/\/arxiv.org\/pdf\/2509.00846\">https:\/\/arxiv.org\/pdf\/2509.00846<\/a>). Its code is available on <a href=\"https:\/\/github.com\/your-organization\/CausalSHAP\">GitHub<\/a>.<\/li>\n<li><strong>Obz AI<\/strong>: A full-stack software ecosystem for explainability and observability in computer vision, integrating data inspection, feature extraction, outlier detection, and real-time monitoring. The Python library is available on <a href=\"https:\/\/pypi.org\/project\/obzai\">PyPI<\/a> and its website is <a href=\"https:\/\/obz.ai\">https:\/\/obz.ai<\/a>.<\/li>\n<li><strong>GenBuster-200K &amp; BusterX<\/strong>: <strong>Haiquan Wen et al.<\/strong> introduce <strong>GenBuster-200K<\/strong>, a large-scale, high-quality AI-generated video dataset, and <strong>BusterX<\/strong>, an MLLM-based framework for explainable video forgery detection with reinforcement learning. 
Code is publicly available on <a href=\"https:\/\/github.com\/l8cv\/BusterX\">GitHub<\/a>.<\/li>\n<li><strong>PASTA-dataset &amp; PASTA-score<\/strong>: Introduced by <strong>R\u00e9mi Kazmierczak et al.<\/strong> for human-aligned evaluation of XAI in computer vision, providing a benchmark that predicts human preferences for explanations (<a href=\"https:\/\/arxiv.org\/pdf\/2411.02470\">https:\/\/arxiv.org\/pdf\/2411.02470<\/a>).<\/li>\n<\/ul>\n<h2 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h2>\n<p>The implications of this wave of XAI research are profound. In healthcare, the ability to explain diagnoses for conditions like colorectal cancer, cervical cancer, autism spectrum disorder, and NIPT anomalies fosters unparalleled trust, moving AI systems from mere predictive tools to trusted clinical partners. The drive for inherently explainable models like MedicalPatchNet, as demonstrated by <strong>Patrick Wienholt et al.<\/strong>, is critical for real-world adoption, especially for non-AI experts.<\/p>\n<p>Beyond medicine, XAI is enhancing the security and reliability of critical infrastructure, from drone networks and IoT devices to multi-agent robotic systems. The work on human-AI trust in maritime decision support by <strong>D. Jirak et al.<\/strong>, described in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.15084\">From Sea to System: Exploring User-Centered Explainable AI for Maritime Decision Support<\/a>\u201d, underlines the necessity of aligning AI explanations with human factors and domain-specific regulations, highlighting that effective XAI is deeply user-centric. 
Similarly, <strong>Fischer et al.<\/strong>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2504.12830\">A Taxonomy of Questions for Critical Reflection in Machine-Assisted Decision-Making<\/a>\u201d offers a structured approach to prevent overreliance on AI, fostering critical thinking in high-stakes environments.<\/p>\n<p>Further, research into clarifying the fundamental concepts of XAI, such as the distinction between interpretability and explainability examined in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.10929\">Clarifying Model Transparency: Interpretability versus Explainability in Deep Learning with MNIST and IMDB Examples<\/a>\u201d, provides the theoretical bedrock for future development. The investigation of the Rashomon effect in autonomous driving by <strong>Helge Spieker et al.<\/strong> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.03169\">Rashomon in the Streets: Explanation Ambiguity in Scene Understanding<\/a>\u201d challenges the notion of a single \u2018ground truth\u2019 explanation, suggesting that embracing explanation diversity might be key to building more robust and transparent AI for complex, real-world scenarios.<\/p>\n<p>The future of XAI is bright, moving towards systems that are not only high-performing but also deeply transparent, accountable, and understandable to a wide range of users. As these papers demonstrate, the integration of XAI into every stage of AI development, from model design to user interaction, will be crucial for building trust and realizing the full potential of AI in an ethical and responsible manner.<\/p>\n
29, 2025<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,439,63],"tags":[87,321,1603,322,320,287],"class_list":["post-1326","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-human-computer-interaction","category-machine-learning","tag-deep-learning","tag-explainable-ai","tag-main_tag_explainable_ai","tag-explainable-ai-xai","tag-interpretability","tag-zero-shot-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Explainable AI: Illuminating the Black Box Across Domains<\/title>\n<meta name=\"description\" content=\"Latest 50 papers on explainable ai: Sep. 29, 2025\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Explainable AI: Illuminating the Black Box Across Domains\" \/>\n<meta property=\"og:description\" content=\"Latest 50 papers on explainable ai: Sep. 
29, 2025\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-09-29T07:54:06+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-28T22:05:40+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/explainable-ai-illuminating-the-black-box-across-domains-2\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/explainable-ai-illuminating-the-black-box-across-domains-2\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Explainable AI: Illuminating the Black Box Across Domains\",\"datePublished\":\"2025-09-29T07:54:06+00:00\",\"dateModified\":\"2025-12-28T22:05:40+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/explainable-ai-illuminating-the-black-box-across-domains-2\\\/\"},\"wordCount\":1316,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"deep learning\",\"explainable ai\",\"explainable ai\",\"explainable ai (xai)\",\"interpretability\",\"zero-shot learning\"],\"articleSection\":[\"Artificial Intelligence\",\"Human-Computer Interaction\",\"Machine Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/explainable-ai-illuminating-the-black-box-across-domains-2\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/explainable-ai-illuminating-the-black-box-across-domains-2\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/explainable-ai-illuminating-the-black-box-across-domains-2\\\/\",\"name\":\"Explainable AI: Illuminating the Black Box Across 
Domains\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2025-09-29T07:54:06+00:00\",\"dateModified\":\"2025-12-28T22:05:40+00:00\",\"description\":\"Latest 50 papers on explainable ai: Sep. 29, 2025\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/explainable-ai-illuminating-the-black-box-across-domains-2\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/explainable-ai-illuminating-the-black-box-across-domains-2\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/explainable-ai-illuminating-the-black-box-across-domains-2\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Explainable AI: Illuminating the Black Box Across Domains\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest 
research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot 
is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Explainable AI: Illuminating the Black Box Across Domains","description":"Latest 50 papers on explainable ai: Sep. 29, 2025","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/","og_locale":"en_US","og_type":"article","og_title":"Explainable AI: Illuminating the Black Box Across Domains","og_description":"Latest 50 papers on explainable ai: Sep. 29, 2025","og_url":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2025-09-29T07:54:06+00:00","article_modified_time":"2025-12-28T22:05:40+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Explainable AI: Illuminating the Black Box Across Domains","datePublished":"2025-09-29T07:54:06+00:00","dateModified":"2025-12-28T22:05:40+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/"},"wordCount":1316,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["deep learning","explainable ai","explainable ai","explainable ai (xai)","interpretability","zero-shot learning"],"articleSection":["Artificial Intelligence","Human-Computer Interaction","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/","url":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/","name":"Explainable AI: Illuminating the Black Box Across Domains","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2025-09-29T07:54:06+00:00","dateModified":"2025-12-28T22:05:40+00:00","description":"Latest 50 papers on explainable ai: Sep. 
29, 2025","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/explainable-ai-illuminating-the-black-box-across-domains-2\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Explainable AI: Illuminating the Black Box Across Domains"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipa
permill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language 
models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":227,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-lo","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1326","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=1326"}],"version-history":[{"count":1,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1326\/revisions"}],"predecessor-version":[{"id":3724,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1326\/revisions\/3724"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=1326"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=1326"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=1326"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}