{"id":5880,"date":"2026-02-28T03:33:06","date_gmt":"2026-02-28T03:33:06","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/"},"modified":"2026-02-28T03:33:06","modified_gmt":"2026-02-28T03:33:06","slug":"explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/","title":{"rendered":"Explainable AI Unpacked: Bridging Trust, Transparency, and Performance Across Domains"},"content":{"rendered":"<h3>Latest 8 papers on explainable ai: Feb. 28, 2026<\/h3>\n<p>The quest for AI models that are not only powerful but also understandable is more critical than ever. As AI permeates high-stakes domains from healthcare to cybersecurity, the demand for transparency and trust grows exponentially. Recent breakthroughs in Explainable AI (XAI) are pushing the boundaries, offering novel ways to demystify complex models, enhance human-AI collaboration, and unlock new insights. This blog post dives into a collection of cutting-edge research, revealing how XAI is evolving to meet these challenges.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>The overarching theme in recent XAI research is a concerted effort to move beyond mere performance metrics, focusing on <em>how<\/em> models arrive at their conclusions. This shift is vital for fostering trust and enabling human-centered AI systems.<\/p>\n<p>One significant innovation comes from the <strong>Shanghai Jiao Tong University<\/strong> in their paper, \u201c<a href=\"https:\/\/arxiv.org\/abs\/2309.13411\">Towards Attributions of Input Variables in a Coalition<\/a>\u201d. 
They tackle the crucial problem of understanding <em>group-level<\/em> explanations, revealing that conflicts between individual variable attributions and coalition attributions often arise from complex AND-OR interactions. Their new metrics for coalition faithfulness provide a theoretically grounded way to evaluate how well an explanation represents the collective impact of features.<\/p>\n<p>Building on the need for human-centered design, researchers from the <strong>University of Saskatchewan, Canada<\/strong>, in their work \u201c<a href=\"https:\/\/doi.org\/10.1145\/3793655.3793736\">XMENTOR: A Rank-Aware Aggregation Approach for Human-Centered Explainable AI in Just-in-Time Software Defect Prediction<\/a>\u201d, introduce XMENTOR. The method addresses the conflicting explanations generated by different post-hoc XAI techniques (e.g., LIME, SHAP, BreakDown). By aggregating these into a single, coherent view, XMENTOR reduces cognitive load and enhances developer trust and usability when predicting software defects.<\/p>\n<p>Another exciting frontier is the integration of Large Language Models (LLMs) to enhance explainability. The <strong>University of Health Sciences<\/strong> and colleagues present \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.21178\">XMorph: Explainable Brain Tumor Analysis Via LLM-Assisted Hybrid Deep Intelligence<\/a>\u201d. XMorph combines the symbolic reasoning power of LLMs with the precision of deep learning, delivering improvements in both accuracy and interpretability for medical image analysis. 
This hybrid approach offers more transparent reasoning, crucial for high-stakes medical diagnosis.<\/p>\n<p>In the realm of model design itself, researchers from the <strong>University of Technology, Jordan<\/strong>, in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.19253\">Alternating Bi-Objective Optimization for Explainable Neuro-Fuzzy Systems<\/a>\u201d, propose an alternating bi-objective optimization framework that balances predictive performance with model interpretability in neuro-fuzzy systems, yielding inherently more transparent models without sacrificing accuracy.<\/p>\n<p>The demand for explainability extends to critical infrastructure like maritime transport and cybersecurity. A comprehensive review by <strong>Simula Research Laboratory, Oslo, Norway<\/strong>, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.21959\">Estimation and Optimization of Ship Fuel Consumption in Maritime: Review, Challenges and Future Directions<\/a>\u201d, emphasizes that XAI is vital for transparent decision-making in maritime operations, especially when integrating diverse data sources. Similarly, the paper \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.19087\">Detecting Cybersecurity Threats by Integrating Explainable AI with SHAP Interpretability and Strategic Data Sampling<\/a>\u201d shows how pairing SHAP attributions with strategic data sampling improves the interpretability and trustworthiness of cybersecurity threat detection systems, enabling better understanding of high-risk scenarios.<\/p>\n<p>Finally, moving beyond traditional message passing, the \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.16947\">Beyond Message Passing: A Symbolic Alternative for Expressive and Interpretable Graph Learning<\/a>\u201d paper from <strong>McGill University<\/strong> and <strong>University of Toronto<\/strong> introduces SYMGRAPH. 
By replacing complex GNN operations with symbolic logic, this framework for graph learning breaks the 1-Weisfeiler-Lehman expressivity barrier while offering superior interpretability and efficiency, making it well suited to high-stakes scientific discovery such as drug design.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>These advancements are powered by innovative architectural choices, strategic data utilization, and robust evaluation metrics:<\/p>\n<ul>\n<li><strong>XMENTOR<\/strong>: Aggregates explanations from established post-hoc XAI methods (LIME, SHAP, BreakDown) and integrates them into a <strong>VS Code plugin<\/strong> for real-time developer feedback.<\/li>\n<li><strong>XMorph<\/strong>: A hybrid framework combining <strong>Large Language Models (LLMs)<\/strong> with <strong>deep learning models<\/strong> for brain tumor analysis. While specific datasets aren\u2019t detailed in the summary, its <strong>code repository<\/strong> (<a href=\"https:\/\/github.com\/xmorph-team\/XMorph\">https:\/\/github.com\/xmorph-team\/XMorph<\/a>) likely provides implementation details.<\/li>\n<li><strong>Towards Attributions of Input Variables in a Coalition<\/strong>: Proposes new attribution metrics for coalitions and validates them across diverse tasks, including <strong>NLP, image classification, and Go<\/strong>, emphasizing theoretical foundations. Code is available at <a href=\"https:\/\/github.com\/xinhaozheng\/attributions-in-coalitions\">https:\/\/github.com\/xinhaozheng\/attributions-in-coalitions<\/a>.<\/li>\n<li><strong>Alternating Bi-Objective Optimization for Explainable Neuro-Fuzzy Systems<\/strong>: Focuses on <strong>T-S fuzzy systems<\/strong> and offers a novel optimization framework. 
The <strong>code repository<\/strong> (<a href=\"https:\/\/github.com\/QusaiKhaled\/XANFIS\">https:\/\/github.com\/QusaiKhaled\/XANFIS<\/a>) provides further implementation details.<\/li>\n<li><strong>Detecting Cybersecurity Threats<\/strong>: Integrates <strong>SHAP-based interpretability<\/strong> with <strong>strategic data sampling<\/strong> to improve threat detection models. Its <strong>code repository<\/strong> (<a href=\"https:\/\/github.com\/yourusername\/cybersecurity-xai\">https:\/\/github.com\/yourusername\/cybersecurity-xai<\/a>) is available for exploration.<\/li>\n<li><strong>The Sound of Death<\/strong>: Leverages <strong>VideoMAE<\/strong> as a deep learning framework to extract vascular features from <strong>carotid ultrasound videos<\/strong> from the <strong>Gutenberg Health Study<\/strong>. It uses XAI methods from <strong>Captum.ai<\/strong> (<a href=\"https:\/\/captum.ai\/\">https:\/\/captum.ai\/<\/a>) to reveal insights. Dataset access is available via <a href=\"https:\/\/www.unimedizin-mainz.de\/ghs\/en\/informationen-for-scientists\/access-to-study-data-and-biomaterial.html\">https:\/\/www.unimedizin-mainz.de\/ghs\/en\/informationen-for-scientists\/access-to-study-data-and-biomaterial.html<\/a>.<\/li>\n<li><strong>SYMGRAPH<\/strong>: A symbolic framework for graph learning that replaces message passing. While specific code is not linked, its symbolic design allows for <strong>CPU-only execution<\/strong>, achieving 10x to 100x speedups.<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>These advancements signify a pivotal shift in AI development, moving towards models that are not just intelligent but also <em>intelligible<\/em>. The immediate impact is profound: developers can build more trustworthy software, clinicians can make more informed diagnostic decisions, and cybersecurity analysts can better understand and mitigate threats. 
The maritime industry can optimize fuel consumption with greater confidence in the underlying models.<\/p>\n<p>The future of XAI will undoubtedly involve further integration of human-in-the-loop approaches, refining techniques to resolve conflicting explanations, and developing novel hybrid architectures that marry the strengths of symbolic reasoning with data-driven learning. As models become more complex, the need for robust, faithful, and user-friendly explanations will only intensify. The work on symbolic graph learning and bi-objective optimization points towards a future where interpretability is not an afterthought but an inherent design principle. This exciting trajectory promises an era of AI where transparency and performance go hand-in-hand, truly empowering human decision-makers across all domains.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 8 papers on explainable ai: Feb. 28, 2026<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,63],"tags":[3072,321,1603,322,2462,1334],"class_list":["post-5880","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-machine-learning","tag-defect-prediction","tag-explainable-ai","tag-main_tag_explainable_ai","tag-explainable-ai-xai","tag-explanable-ai","tag-xai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Explainable 
AI Unpacked: Bridging Trust, Transparency, and Performance Across Domains<\/title>\n<meta name=\"description\" content=\"Latest 8 papers on explainable ai: Feb. 28, 2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Explainable AI Unpacked: Bridging Trust, Transparency, and Performance Across Domains\" \/>\n<meta property=\"og:description\" content=\"Latest 8 papers on explainable ai: Feb. 28, 2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-28T03:33:06+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/28\\\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/28\\\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Explainable AI Unpacked: Bridging Trust, Transparency, and Performance Across Domains\",\"datePublished\":\"2026-02-28T03:33:06+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/28\\\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\\\/\"},\"wordCount\":979,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"defect prediction\",\"explainable ai\",\"explainable ai\",\"explainable ai (xai)\",\"explanable ai\",\"xai\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Machine 
Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/28\\\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/28\\\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/28\\\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\\\/\",\"name\":\"Explainable AI Unpacked: Bridging Trust, Transparency, and Performance Across Domains\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2026-02-28T03:33:06+00:00\",\"description\":\"Latest 8 papers on explainable ai: Feb. 28, 2026\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/28\\\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/28\\\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/28\\\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Explainable AI Unpacked: Bridging Trust, Transparency, and Performance Across 
Domains\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem 
Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Explainable AI Unpacked: Bridging Trust, Transparency, and Performance Across Domains","description":"Latest 8 papers on explainable ai: Feb. 28, 2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/","og_locale":"en_US","og_type":"article","og_title":"Explainable AI Unpacked: Bridging Trust, Transparency, and Performance Across Domains","og_description":"Latest 8 papers on explainable ai: Feb. 
28, 2026","og_url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2026-02-28T03:33:06+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Explainable AI Unpacked: Bridging Trust, Transparency, and Performance Across Domains","datePublished":"2026-02-28T03:33:06+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/"},"wordCount":979,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["defect prediction","explainable ai","explainable ai","explainable ai (xai)","explanable ai","xai"],"articleSection":["Artificial Intelligence","Computer Vision","Machine 
Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/","url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/","name":"Explainable AI Unpacked: Bridging Trust, Transparency, and Performance Across Domains","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2026-02-28T03:33:06+00:00","description":"Latest 8 papers on explainable ai: Feb. 28, 2026","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/28\/explainable-ai-unpacked-bridging-trust-transparency-and-performance-across-domains\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Explainable AI Unpacked: Bridging Trust, Transparency, and Performance Across Domains"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest 
research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. 
Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":101,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-1wQ","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5880","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=5880"}],"version-history":[{"count":0,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5880\/revisions"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=5880"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=5880"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=5880"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}