{"id":5685,"date":"2026-02-14T06:24:43","date_gmt":"2026-02-14T06:24:43","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/"},"modified":"2026-02-14T06:24:43","modified_gmt":"2026-02-14T06:24:43","slug":"explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/","title":{"rendered":"Explainable AI: Unveiling the &#8220;Why&#8221; in AI&#8217;s Decisions, from Edge to Agentic Systems"},"content":{"rendered":"<h3>Latest 16 papers on explainable ai: Feb. 14, 2026<\/h3>\n<p>The quest for transparent and trustworthy AI has never been more pressing. As AI models permeate every facet of our lives, from critical medical diagnostics to autonomous vehicles and even sophisticated software agents, the demand to understand <em>how<\/em> these systems arrive at their decisions has skyrocketed. This digest dives into a fascinating collection of recent breakthroughs in Explainable AI (XAI), showcasing innovations that push the boundaries of interpretability across diverse applications.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>The overarching theme in recent XAI research is a concerted effort to make AI less of a black box, adapting explanations to varying contexts and user needs. A crucial insight, highlighted by <strong>Benedict Clark et al.<\/strong> from <strong>University of Cambridge<\/strong> in their paper, <a href=\"https:\/\/arxiv.org\/pdf\/2602.09238\">\u201cFeature salience \u2013 not task-informativeness \u2013 drives machine learning model explanations\u201d<\/a>, challenges a fundamental assumption: that feature importance is driven by task-relevance. 
Instead, they demonstrate that <em>feature salience<\/em> (such as visually prominent image structures) can be the primary driver of model explanations, calling for a critical re-evaluation of existing XAI methods to avoid spurious correlations.<\/p>\n<p>Building on the need for more relevant explanations, <strong>Muhammad Rashid et al.<\/strong> from <strong>University of Torino<\/strong> in <a href=\"https:\/\/arxiv.org\/pdf\/2602.07047\">\u201cShapBPT: Image Feature Attributions Using Data-Aware Binary Partition Trees\u201d<\/a> introduce ShapBPT. This novel method uses hierarchical Shapley values tailored for image data, aligning attributions with intrinsic image morphology for more semantically meaningful visual explanations. Similarly, <strong>Vasileios Arampatzakis et al.<\/strong> from <strong>Democritus University of Thrace<\/strong> introduce SVDA in <a href=\"https:\/\/arxiv.org\/pdf\/2602.10994\">\u201cInterpretable Vision Transformers in Image Classification via SVDA\u201d<\/a>, a geometrically grounded attention mechanism for Vision Transformers. SVDA enhances interpretability by introducing spectral and directional constraints, providing structured attention patterns without sacrificing accuracy.<\/p>\n<p>Beyond visual explanations, <strong>Lars H. B. Olsen and Danniel Christensen<\/strong> from the <strong>University of Bergen, Norway<\/strong>, in <a href=\"https:\/\/arxiv.org\/pdf\/2602.09489\">\u201cComputing Conditional Shapley Values Using Tabular Foundation Models\u201d<\/a>, demonstrate how tabular foundation models like TabPFN can efficiently and accurately compute conditional Shapley values, particularly for smooth predictive models. This opens new avenues for interpreting complex models, especially with diverse datasets. In a more practical agricultural context, <strong>Alam, B. M. S. 
et al.<\/strong> propose in <a href=\"https:\/\/doi.org\/10.1109\/iccct63501.2025.11019090\">\u201cToward Reliable Tea Leaf Disease Diagnosis Using Deep Learning Model: Enhancing Robustness With Explainable AI and Adversarial Training\u201d<\/a> that integrating XAI techniques like Grad-CAM with adversarial training not only improves interpretability in tea leaf disease diagnosis but also significantly enhances model robustness against noise.<\/p>\n<p>Crucially, the scope of XAI is expanding beyond static models. <strong>S. Chaduvula et al.<\/strong> from the <strong>Vector Institute<\/strong> address a significant gap in <a href=\"https:\/\/arxiv.org\/pdf\/2602.06841\">\u201cFrom Features to Actions: Explainability in Traditional and Agentic AI Systems\u201d<\/a>, arguing that traditional XAI methods are insufficient for understanding complex, multi-step <em>agentic AI systems<\/em> (like LLM-based agents). They propose a shift towards <em>trajectory-level analysis<\/em>, emphasizing the need to explain sequences of decisions rather than just static feature attributions.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>Recent XAI research leverages and develops a variety of models, datasets, and benchmarks to validate and demonstrate innovations:<\/p>\n<ul>\n<li><strong>Grad-CAM, LRP, SHAP, and GSA:<\/strong> These widely used XAI techniques are central to various studies. <strong>Patrick McGonagle et al.<\/strong> from <strong>Ulster University<\/strong> in <a href=\"https:\/\/arxiv.org\/pdf\/2602.05240\">\u201cExplainable AI: A Combined XAI Framework for Explaining Brain Tumour Detection Models\u201d<\/a> combine Grad-CAM, LRP, and SHAP for layered explanations in brain tumor detection, achieving superior interpretability. 
Similarly, <strong>Laxmi Pandey et al.<\/strong> in <a href=\"https:\/\/arxiv.org\/pdf\/2602.07478\">\u201cAI-Driven Predictive Modelling for Groundwater Salinization in Israel\u201d<\/a> use SHAP and GSA to interpret groundwater salinization models. Their code for this project is available at <a href=\"https:\/\/github.com\/laxmipandey\/AI-Driven-Groundwater-Salinization-Modeling\">https:\/\/github.com\/laxmipandey\/AI-Driven-Groundwater-Salinization-Modeling<\/a>.<\/li>\n<li><strong>TabPFN:<\/strong> Featured in <a href=\"https:\/\/arxiv.org\/pdf\/2602.09489\">\u201cComputing Conditional Shapley Values Using Tabular Foundation Models\u201d<\/a>, TabPFN (Tabular Prior-data Fitted Network) is highlighted as a powerful tabular foundation model for computing conditional Shapley values. The accompanying code repository is found at <a href=\"https:\/\/github.com\/lars-holm-olsen\/tabPFN-shapley-values\">https:\/\/github.com\/lars-holm-olsen\/tabPFN-shapley-values<\/a>.<\/li>\n<li><strong>Binary Partition Trees (BPT):<\/strong> Integrated into ShapBPT (<a href=\"https:\/\/arxiv.org\/pdf\/2602.07047\">\u201cShapBPT: Image Feature Attributions Using Data-Aware Binary Partition Trees\u201d<\/a>), BPTs enable multiscale image partitioning, crucial for generating semantically meaningful visual explanations. Code for ShapBPT is available at <a href=\"https:\/\/github.com\/amparore\/shap_bpt\">https:\/\/github.com\/amparore\/shap_bpt<\/a> and <a href=\"https:\/\/github.com\/rashidrao-pk\/shap_bpt_tests\">https:\/\/github.com\/rashidrao-pk\/shap_bpt_tests<\/a>.<\/li>\n<li><strong>ERI-Bench:<\/strong> A significant contribution by <strong>Poushali Sengupta et al.<\/strong> from the <strong>University of Oslo<\/strong> in <a href=\"https:\/\/arxiv.org\/pdf\/2602.05082\">\u201cReliable Explanations or Random Noise? A Reliability Metric for XAI\u201d<\/a>, ERI-Bench is the first benchmark designed to systematically stress-test explanation reliability across diverse datasets. 
The code for ERI-Bench is accessible at <a href=\"https:\/\/anonymous.4open.science\/r\/ERI-C316\/\">https:\/\/anonymous.4open.science\/r\/ERI-C316\/<\/a>.<\/li>\n<li><strong>Hierarchical Neural Models:<\/strong> Employed by <strong>S M Rakib Ul Karim et al.<\/strong> from the <strong>University of Missouri<\/strong> in <a href=\"https:\/\/arxiv.org\/pdf\/2602.09064\">\u201cPredicting Open Source Software Sustainability with Deep Temporal Neural Hierarchical Architectures and Explainable AI\u201d<\/a>, these models combine Transformer-based temporal processing with feedforward neural networks to predict open-source software sustainability, outperforming flat baselines with high accuracy.<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>The implications of these advancements are profound. By improving the interpretability and reliability of AI, we enhance trust and enable better decision-making in high-stakes environments like medical diagnosis, as seen in the work on brain tumor detection and PCOS diagnosis (<a href=\"https:\/\/arxiv.org\/pdf\/2602.04944\">\u201cSmart Diagnosis and Early Intervention in PCOS: A Deep Learning Approach to Women\u2019s Reproductive Health\u201d<\/a>). The integration of XAI into no-code platforms (as explored by <strong>Natalia Abarca et al.<\/strong> in <a href=\"https:\/\/arxiv.org\/pdf\/2602.11159\">\u201cExplaining AI Without Code: A User Study on Explainable AI\u201d<\/a>) democratizes AI, making it accessible and understandable to a broader audience, from novices to experts.<\/p>\n<p>Moreover, the development of scalable XaaS for edge AI systems (from <strong>John Doe et al.<\/strong> in <a href=\"https:\/\/arxiv.org\/pdf\/2602.04120\">\u201cScalable Explainability-as-a-Service (XaaS) for Edge AI Systems\u201d<\/a>) promises real-time transparency in critical applications like autonomous vehicles. 
The challenges posed by agentic AI systems underscore a fascinating next frontier for XAI: moving beyond static feature importance to dynamic, trajectory-level explanations that reflect the evolving nature of AI decision-making. As the field matures, the emphasis on robust evaluation metrics like ERI will be paramount in ensuring that our explanations are not just plausible, but truly reliable. The future of AI is not just about intelligence, but about <em>intelligible intelligence<\/em>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 16 papers on explainable ai: Feb. 14, 2026<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,63],"tags":[380,321,1603,322,2732,2733],"class_list":["post-5685","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-machine-learning","tag-adversarial-training","tag-explainable-ai","tag-main_tag_explainable_ai","tag-explainable-ai-xai","tag-shapley-values","tag-tea-leaf-disease-diagnosis"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Explainable AI: Unveiling the &quot;Why&quot; in AI&#039;s Decisions, from Edge to Agentic Systems<\/title>\n<meta name=\"description\" content=\"Latest 16 papers on explainable ai: Feb. 
14, 2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Explainable AI: Unveiling the &quot;Why&quot; in AI&#039;s Decisions, from Edge to Agentic Systems\" \/>\n<meta property=\"og:description\" content=\"Latest 16 papers on explainable ai: Feb. 14, 2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-14T06:24:43+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Explainable AI: Unveiling the &#8220;Why&#8221; in AI&#8217;s Decisions, from Edge to Agentic Systems\",\"datePublished\":\"2026-02-14T06:24:43+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/\"},\"wordCount\":979,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/scipapermill.com\/#organization\"},\"keywords\":[\"adversarial training\",\"explainable ai\",\"explainable ai\",\"explainable ai (xai)\",\"shapley values\",\"tea leaf disease diagnosis\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Machine Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/\",\"url\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/\",\"name\":\"Explainable AI: Unveiling the \\\"Why\\\" in AI's Decisions, 
from Edge to Agentic Systems\",\"isPartOf\":{\"@id\":\"https:\/\/scipapermill.com\/#website\"},\"datePublished\":\"2026-02-14T06:24:43+00:00\",\"description\":\"Latest 16 papers on explainable ai: Feb. 14, 2026\",\"breadcrumb\":{\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/scipapermill.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Explainable AI: Unveiling the &#8220;Why&#8221; in AI&#8217;s Decisions, from Edge to Agentic Systems\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/scipapermill.com\/#website\",\"url\":\"https:\/\/scipapermill.com\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest 
research\",\"publisher\":{\"@id\":\"https:\/\/scipapermill.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/scipapermill.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/scipapermill.com\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\/\/scipapermill.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\",\"https:\/\/www.linkedin.com\/company\/scipapermill\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. 
Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\/\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Explainable AI: Unveiling the \"Why\" in AI's Decisions, from Edge to Agentic Systems","description":"Latest 16 papers on explainable ai: Feb. 14, 2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/","og_locale":"en_US","og_type":"article","og_title":"Explainable AI: Unveiling the \"Why\" in AI's Decisions, from Edge to Agentic Systems","og_description":"Latest 16 papers on explainable ai: Feb. 14, 2026","og_url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2026-02-14T06:24:43+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Explainable AI: Unveiling the &#8220;Why&#8221; in AI&#8217;s Decisions, from Edge to Agentic Systems","datePublished":"2026-02-14T06:24:43+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/"},"wordCount":979,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["adversarial training","explainable ai","explainable ai","explainable ai (xai)","shapley values","tea leaf disease diagnosis"],"articleSection":["Artificial Intelligence","Computer Vision","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/","url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/","name":"Explainable AI: Unveiling the \"Why\" in AI's Decisions, from Edge to Agentic Systems","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2026-02-14T06:24:43+00:00","description":"Latest 16 papers on explainable ai: Feb. 
14, 2026","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/explainable-ai-unveiling-the-why-in-ais-decisions-from-edge-to-agentic-systems\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Explainable AI: Unveiling the &#8220;Why&#8221; in AI&#8217;s Decisions, from Edge to Agentic Systems"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/6158273143
1910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. 
Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":55,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-1tH","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5685","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=5685"}],"version-history":[{"count":0,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5685\/revisions"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=5685"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=5685"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=5685"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}