{"id":857,"date":"2025-08-17T19:21:36","date_gmt":"2025-08-17T19:21:36","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/"},"modified":"2025-12-28T22:39:37","modified_gmt":"2025-12-28T22:39:37","slug":"explainable-ai-beyond-transparency-to-true-understanding-2","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/","title":{"rendered":"Explainable AI: Beyond Transparency to True Understanding"},"content":{"rendered":"<h3>Latest 78 papers on explainable ai: Aug. 17, 2025<\/h3>\n<p>The quest for transparent and understandable AI has never been more critical. As AI systems permeate every facet of our lives\u2014from healthcare diagnostics and industrial safety to educational platforms and corporate governance\u2014the demand for \u2018Explainable AI\u2019 (XAI) intensifies. But what does it truly mean for AI to be \u2018explainable\u2019? Recent research pushes the boundaries beyond mere transparency, aiming for human-centered explanations that foster genuine understanding, trust, and accountability. This digest dives into a collection of cutting-edge papers that address this evolving landscape, from novel interpretability frameworks to practical applications across diverse, high-stakes domains.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>At the heart of the latest XAI advancements is a shift from simply revealing model internals to providing explanations tailored to human cognitive processes and specific contextual needs. 
A groundbreaking theoretical work, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.06352\">From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI<\/a>\u201d by Christian Meske et al.\u00a0from Ruhr University Bochum, proposes \u2018Explanatory AI,\u2019 arguing that traditional XAI often falls short in supporting real-world decision-making. Their framework leverages generative AI to deliver context-sensitive, narrative-driven explanations, moving beyond algorithmic transparency to foster human comprehension and action. Complementing this, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.09231\">Beyond Technocratic XAI: The Who, What &amp; How in Explanation Design<\/a>\u201d from the University of Copenhagen emphasizes a sociotechnical approach to explanation design, highlighting ethical considerations and user needs by proposing a \u2018Who, What, How\u2019 framework for accessible explanations.<\/p>\n<p>Several papers tackle the challenge of making explanations more faithful and robust. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.03586\">DeepFaith: A Domain-Free and Model-Agnostic Unified Framework for Highly Faithful Explanations<\/a>\u201d by Yuhan Guo et al.\u00a0from Beijing Institute of Technology introduces a unified optimization objective for faithfulness, achieving superior performance across diverse tasks. Similarly, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2507.13090\">MUPAX: Multidimensional Problem\u2013Agnostic eXplainable AI<\/a>\u201d by Vincenzo Dentamaro et al.\u00a0from the University of Bari \u201cAldo Moro\u201d and University of Oxford proposes a novel deterministic, model-agnostic XAI method with formal convergence guarantees, ensuring reliable explanations across any data modality and dimension.<\/p>\n<p>The practical implications are vast. 
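The faithfulness these frameworks formalize can be probed with a simple deletion test: remove features in order of attributed importance and check that the model's score drops accordingly. A minimal self-contained sketch of that idea (the scorer, feature names, and attribution values are invented for illustration; this is not DeepFaith's or MUPAX's actual procedure):

```python
def deletion_curve(f, x, attribution, baseline):
    """Replace features with baseline values in order of attributed
    importance; a faithful attribution yields a fast-dropping curve."""
    order = sorted(attribution, key=attribution.get, reverse=True)
    z = dict(x)
    scores = [f(z)]
    for k in order:
        z[k] = baseline[k]
        scores.append(f(z))
    return scores

def score(x):
    # toy linear model standing in for any black-box scorer
    weights = {"a": 0.7, "b": 0.2, "c": 0.1}
    return sum(weights[k] * x[k] for k in weights)

x = {"a": 1.0, "b": 1.0, "c": 1.0}
attribution = {"a": 0.7, "b": 0.2, "c": 0.1}  # pretend explainer output
curve = deletion_curve(score, x, attribution, {k: 0.0 for k in x})
# the curve falls from score(x) toward the baseline score as features are removed
```

Benchmarks in this space typically summarize such curves (e.g., area under the deletion curve) so that competing attribution methods can be compared on equal footing.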
In cybersecurity, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.10652\">A Novel Study on Intelligent Methods and Explainable AI for Dynamic Malware Analysis<\/a>\u201d by University of Cybersecurity Research demonstrates how SHAP and LIME enhance the transparency of deep learning models in malware detection, with MLP outperforming other models when analyzing API calls. For critical infrastructure, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.09162\">An Unsupervised Deep XAI Framework for Localization of Concurrent Replay Attacks in Nuclear Reactor Signals<\/a>\u201d by Konstantinos Vasili et al.\u00a0from Purdue University, showcases an unsupervised XAI framework combining autoencoders and a modified windowSHAP for high-accuracy attack localization, vital for cyber-physical system security.<\/p>\n<p>Healthcare is a significant beneficiary, with multiple papers focusing on medical imaging. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.06891\">Fusion-Based Brain Tumor Classification Using Deep Learning and Explainable AI, and Rule-Based Reasoning<\/a>\u201d by Melika Filvantorkaman et al.\u00a0from the University of Rochester combines ensemble CNNs with Grad-CAM++ for interpretable brain tumor classification, aligning AI decisions with clinical rules. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.06137\">Transformer-Based Explainable Deep Learning for Breast Cancer Detection in Mammography: The MammoFormer Framework<\/a>\u201d by Ojonugwa Oluwafemi Ejiga Peter et al.\u00a0from Morgan State University introduces MammoFormer, a transformer-based model with multi-feature enhancement and XAI for breast cancer detection, demonstrating performance comparable to CNNs while offering crucial interpretability. 
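Several of the systems above report SHAP-based attributions. The quantity SHAP approximates — the Shapley value of each feature — can be computed exactly for small toy models by enumerating subsets. The sketch below shows the idea on a mock API-call risk scorer (feature names and weights are invented, and the exact enumeration stands in for the `shap` library's sampling-based approximations):

```python
from itertools import combinations
from math import factorial

def model(x):
    # toy malware "risk score": linear weights over three API-call counts
    w = {"CreateRemoteThread": 0.6, "WriteProcessMemory": 0.3, "RegSetValue": 0.1}
    return sum(w[k] * x[k] for k in w)

def shapley_values(f, x, baseline):
    """Exact Shapley values: weighted marginal contribution of each
    feature over all subsets of the remaining features."""
    feats = list(x)
    n = len(feats)

    def value(subset):
        z = {k: (x[k] if k in subset else baseline[k]) for k in feats}
        return f(z)

    phi = {k: 0.0 for k in feats}
    for k in feats:
        others = [f2 for f2 in feats if f2 != k]
        for r in range(len(others) + 1):
            for s in combinations(others, r):
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[k] += weight * (value(set(s) | {k}) - value(set(s)))
    return phi

x = {"CreateRemoteThread": 5, "WriteProcessMemory": 2, "RegSetValue": 0}
baseline = {k: 0 for k in x}
phi = shapley_values(model, x, baseline)
# for a linear model, phi[k] reduces to weight[k] * (x[k] - baseline[k])
```

The efficiency property — attributions summing to the gap between the prediction and the baseline prediction — is what makes such scores directly readable by an analyst triaging a flagged sample.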
\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.04534\">No Masks Needed: Explainable AI for Deriving Segmentation from Classification<\/a>\u201d presents ExplainSeg, a novel method for generating segmentation masks from classification models in medical imaging, addressing limited annotated data and enhancing interpretability. Moreover, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.02560\">Explainable AI Methods for Neuroimaging: Systematic Failures of Common Tools, the Need for Domain-Specific Validation, and a Proposal for Safe Application<\/a>\u201d by Nys Tjade Siegel et al.\u00a0from Charit\u00e9 \u2013 Universit\u00e4tsmedizin Berlin, critically assesses common XAI tools, revealing systematic failures in neuroimaging and advocating for domain-specific validation.<\/p>\n<p>Beyond technical breakthroughs, the human element in XAI is paramount. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2411.04855\">Clinicians\u2019 Voice: Fundamental Considerations for XAI in Healthcare<\/a>\u201d by Tabea E. R\u00f6ber et al.\u00a0from Amsterdam Business School highlights clinicians\u2019 preferences for feature importance and counterfactual explanations, stressing multidisciplinary collaboration. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.10806\">Who Benefits from AI Explanations? Towards Accessible and Interpretable Systems<\/a>\u201d from Ontario Tech University addresses accessibility gaps, revealing that simplified, multimodal explanations are more comprehensible for visually impaired users. 
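The counterfactual explanations clinicians favor answer the question "what minimal change would flip the decision?" A brute-force sketch on an invented two-feature risk rule (the feature names, threshold, and perturbation grid are all hypothetical; real counterfactual methods use gradient- or optimization-based search):

```python
import itertools

def predict(x):
    # toy risk rule (invented): flag as high risk above a fixed threshold
    return x["bmi"] + x["glucose"] > 2.1

def counterfactual(x, steps=(-0.2, -0.1, 0.0, 0.1, 0.2)):
    """Exhaustively try small perturbations and keep the least-change
    input (L1 distance) whose prediction flips."""
    target = not predict(x)
    best, best_cost = None, float("inf")
    for deltas in itertools.product(steps, repeat=len(x)):
        z = {k: v + d for (k, v), d in zip(x.items(), deltas)}
        if predict(z) == target:
            cost = sum(abs(d) for d in deltas)
            if cost < best_cost:
                best, best_cost = z, cost
    return best

x = {"bmi": 1.15, "glucose": 1.1}
cf = counterfactual(x)  # smallest perturbation that flips the risk label
```

The output is directly actionable in a way a raw feature-importance vector is not, which is one reason clinicians in the study above ranked counterfactuals so highly.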
In education, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.00665\">Transparent Adaptive Learning via Data-Centric Multimodal Explainable AI<\/a>\u201d from Newcastle University proposes a hybrid framework for personalized, multimodal explanations in adaptive learning systems, while \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.00785\">Explainable AI and Machine Learning for Exam-based Student Evaluation: Causal and Predictive Analysis of Socio-academic and Economic Factors<\/a>\u201d identifies study hours and scholarship status as key predictors of academic performance, made actionable through XAI.<\/p>\n<p>Further innovations include \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2508.07183\">Explainability-in-Action: Enabling Expressive Manipulation and Tacit Understanding by Bending Diffusion Models in ComfyUI<\/a>\u201d by Ahmed M. Abuzuraiq and Philippe Pasquier, which enables artists to gain intuitive understanding of diffusion models through hands-on interaction. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2507.21158\">Adaptive XAI in High Stakes Environments: Modeling Swift Trust with Multimodal Feedback in Human AI Teams<\/a>\u201d proposes a framework that uses implicit feedback (EEG, ECG, eye tracking) to dynamically adjust explanations, fostering swift trust in high-pressure scenarios like emergency response. 
In a philosophical turn, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2507.21067\">SynLang and Symbiotic Epistemology: A Manifesto for Conscious Human-AI Collaboration<\/a>\u201d introduces SynLang, a formal protocol for transparent human-AI collaboration by aligning human confidence with AI reliability.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>The innovations discussed are powered by a range of advanced models, new datasets, and rigorous benchmarks:<\/p>\n<ul>\n<li><strong>Deep Learning Models<\/strong>: MLP, CNN, RNN, CNN-LSTM, MobileNetV2, DenseNet121, Vision Transformers (DETR, DDETR, DINO), BiLSTM with attention mechanisms, Extra Trees Regressor, CatBoost, LightGBM, TabNet, Ridge, Lasso, and diffusion models.<\/li>\n<li><strong>XAI Techniques<\/strong>: SHAP, LIME, Grad-CAM++, Integrated Gradients, Layer-wise Relevance Propagation (LRP), counterfactual explanations, and concept-based models (CBM, CEM).<\/li>\n<li><strong>Key Datasets<\/strong>: NEMAD database for magnetic materials, Purdue\u2019s nuclear reactor PUR-1 data, Figshare dataset for brain tumors, BDD-A dataset with human-in-the-loop caption refinement for driver attention, HydroChronos for surface water dynamics (first comprehensive dataset with remote sensing, climate, and elevation data), Italian Pathological Voice (IPV) for voice disorders, OCTDL and Eye Disease Image Dataset for eye diagnosis, XML100 instances for VRP, and Kaggle datasets for bone fracture, cardiovascular disease, and deepfake detection.<\/li>\n<li><strong>Code &amp; Frameworks<\/strong>: Many papers provide public code repositories, encouraging reproducibility and further research. 
Notable examples include:\n<ul>\n<li><strong>Nuclear-replay-attack-detection<\/strong> (<a href=\"https:\/\/github.com\/Kvasili\/Nuclear-replay-attack-detection\">https:\/\/github.com\/Kvasili\/Nuclear-replay-attack-detection<\/a>)<\/li>\n<li><strong>x2x<\/strong> (<a href=\"https:\/\/github.com\/nki-ai\/x2x\">https:\/\/github.com\/nki-ai\/x2x<\/a>) for medical imaging explanations.<\/li>\n<li><strong>ComfyUI<\/strong> (<a href=\"https:\/\/github.com\/comfyanonymous\/ComfyUI\">https:\/\/github.com\/comfyanonymous\/ComfyUI<\/a>) for creative AI.<\/li>\n<li><strong>Curie-Prediction<\/strong> (<a href=\"https:\/\/github.com\/dradeelajaib\/Curie-Prediction\">https:\/\/github.com\/dradeelajaib\/Curie-Prediction<\/a>) for materials science.<\/li>\n<li><strong>hydro-chronos<\/strong> (<a href=\"https:\/\/github.com\/DarthReca\/hydro-chronos\">https:\/\/github.com\/DarthReca\/hydro-chronos<\/a>) for environmental forecasting.<\/li>\n<li><strong>DualXDA<\/strong> (<a href=\"https:\/\/github.com\/gumityolcu\/DualXDA\">https:\/\/github.com\/gumityolcu\/DualXDA<\/a>) for efficient data attribution.<\/li>\n<li><strong>I-CEE<\/strong> (<a href=\"https:\/\/github.com\/yaorong0921\/I-CEE\">https:\/\/github.com\/yaorong0921\/I-CEE<\/a>) for user-expertise tailored explanations.<\/li>\n<li><strong>CGPA-Prediction<\/strong> (<a href=\"https:\/\/github.com\/mfarhadhossain\/CGPA-Prediction\">https:\/\/github.com\/mfarhadhossain\/CGPA-Prediction<\/a>) for education analytics.<\/li>\n<li><strong>DeepDissect<\/strong> (<a href=\"https:\/\/github.com\/deepdissect\/DeepDissect\">https:\/\/github.com\/deepdissect\/DeepDissect<\/a>) for neuroscience-inspired ablations.<\/li>\n<li><strong>synlang-protocol<\/strong> (<a href=\"https:\/\/github.com\/synlang\/synlang-protocol\">https:\/\/github.com\/synlang\/synlang-protocol<\/a>) for human-AI collaboration.<\/li>\n<li><strong>concept-based-voice-disorder-detection<\/strong> (<a 
href=\"https:\/\/github.com\/davideghia\/concept-based-voice-disorder-detection\">https:\/\/github.com\/davideghia\/concept-based-voice-disorder-detection<\/a>) for voice disorder detection.<\/li>\n<li><strong>PLEX<\/strong> (<a href=\"https:\/\/github.com\/rahulay1\/PLEX\">https:\/\/github.com\/rahulay1\/PLEX<\/a>) for perturbation-free LLM explanations.<\/li>\n<li><strong>XpertAI<\/strong> (<a href=\"https:\/\/github.com\/sltzgs\/XpertAI\">https:\/\/github.com\/sltzgs\/XpertAI<\/a>) for explaining regression models.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>The collective impact of this research is profound, shifting XAI from a purely technical pursuit to a human-centric, context-aware discipline. It emphasizes that explainability is not a one-size-fits-all solution but a nuanced process involving diverse stakeholders and their specific needs, particularly in high-stakes domains like healthcare, cybersecurity, and critical infrastructure. The emergence of <code>X-hacking<\/code> identified in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2401.08513\">X Hacking: The Threat of Misguided AutoML<\/a>\u201d (by Rahul Sharma et al.\u00a0from Deutsches Forschungszentrum f\u00fcr K\u00fcnstliche Intelligenz GmbH), where AutoML can be used to generate misleading explanations, underscores the vital importance of continued rigorous validation and ethical oversight in XAI development. The paper \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2507.14744\">Beyond the Single-Best Model: Rashomon Partial Dependence Profile for Trustworthy Explanations in AutoML<\/a>\u201d reinforces this by advocating for explanations that capture model multiplicity and uncertainty, moving beyond reliance on a single \u2018best\u2019 model.<\/p>\n<p>Future work will likely continue to explore adaptive, personalized, and multimodal explanations, driven by deeper insights into human cognition and emotional states. 
The legal and ethical implications of AI are gaining prominence, with papers like \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2507.15981\">Implications of Current Litigation on the Design of AI Systems for Healthcare Delivery<\/a>\u201d and \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2507.15996\">Understanding the Impact of Physicians\u2019 Legal Considerations on XAI Systems<\/a>\u201d from Georgia Institute of Technology calling for patient-centered accountability and frameworks that integrate legal considerations directly into XAI design. This holistic approach promises not just more transparent AI, but truly trustworthy and beneficial AI that genuinely serves humanity.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 78 papers on explainable ai: Aug. 17, 2025<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,63],"tags":[433,321,1603,322,355,434],"class_list":["post-857","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-machine-learning","tag-counterfactual-explanations","tag-explainable-ai","tag-main_tag_explainable_ai","tag-explainable-ai-xai","tag-shap-values","tag-transparency-in-ai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Explainable AI: Beyond Transparency to True Understanding<\/title>\n<meta name=\"description\" content=\"Latest 78 papers on 
explainable ai: Aug. 17, 2025\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Explainable AI: Beyond Transparency to True Understanding\" \/>\n<meta property=\"og:description\" content=\"Latest 78 papers on explainable ai: Aug. 17, 2025\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-08-17T19:21:36+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-28T22:39:37+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/08\\\/17\\\/explainable-ai-beyond-transparency-to-true-understanding-2\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/08\\\/17\\\/explainable-ai-beyond-transparency-to-true-understanding-2\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Explainable AI: Beyond Transparency to True Understanding\",\"datePublished\":\"2025-08-17T19:21:36+00:00\",\"dateModified\":\"2025-12-28T22:39:37+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/08\\\/17\\\/explainable-ai-beyond-transparency-to-true-understanding-2\\\/\"},\"wordCount\":1363,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"counterfactual explanations\",\"explainable ai\",\"explainable ai\",\"explainable ai (xai)\",\"shap values\",\"transparency in ai\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Machine Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/08\\\/17\\\/explainable-ai-beyond-transparency-to-true-understanding-2\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/08\\\/17\\\/explainable-ai-beyond-transparency-to-true-understanding-2\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/08\\\/17\\\/explainable-ai-beyond-transparency-to-true-understanding-2\\\/\",\"name\":\"Explainable AI: Beyond Transparency to True 
Understanding\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2025-08-17T19:21:36+00:00\",\"dateModified\":\"2025-12-28T22:39:37+00:00\",\"description\":\"Latest 78 papers on explainable ai: Aug. 17, 2025\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/08\\\/17\\\/explainable-ai-beyond-transparency-to-true-understanding-2\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/08\\\/17\\\/explainable-ai-beyond-transparency-to-true-understanding-2\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/08\\\/17\\\/explainable-ai-beyond-transparency-to-true-understanding-2\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Explainable AI: Beyond Transparency to True Understanding\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest 
research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot 
is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Explainable AI: Beyond Transparency to True Understanding","description":"Latest 78 papers on explainable ai: Aug. 17, 2025","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/","og_locale":"en_US","og_type":"article","og_title":"Explainable AI: Beyond Transparency to True Understanding","og_description":"Latest 78 papers on explainable ai: Aug. 17, 2025","og_url":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2025-08-17T19:21:36+00:00","article_modified_time":"2025-12-28T22:39:37+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Explainable AI: Beyond Transparency to True Understanding","datePublished":"2025-08-17T19:21:36+00:00","dateModified":"2025-12-28T22:39:37+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/"},"wordCount":1363,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["counterfactual explanations","explainable ai","explainable ai","explainable ai (xai)","shap values","transparency in ai"],"articleSection":["Artificial Intelligence","Computer Vision","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/","url":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/","name":"Explainable AI: Beyond Transparency to True Understanding","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2025-08-17T19:21:36+00:00","dateModified":"2025-12-28T22:39:37+00:00","description":"Latest 78 papers on explainable ai: Aug. 
17, 2025","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2025\/08\/17\/explainable-ai-beyond-transparency-to-true-understanding-2\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Explainable AI: Beyond Transparency to True Understanding"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipa
permill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language 
models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":61,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-dP","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/857","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=857"}],"version-history":[{"count":1,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/857\/revisions"}],"predecessor-version":[{"id":4114,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/857\/revisions\/4114"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=857"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=857"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=857"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}