{"id":6432,"date":"2026-04-11T07:58:02","date_gmt":"2026-04-11T07:58:02","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/"},"modified":"2026-04-11T07:58:02","modified_gmt":"2026-04-11T07:58:02","slug":"deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/","title":{"rendered":"Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous Systems"},"content":{"rendered":"<h3>Latest 27 papers on deep neural networks: Apr. 11, 2026<\/h3>\n<p>Deep neural networks continue to push the boundaries of AI, but as their capabilities grow, so does the imperative for transparency, robustness, and efficiency. Recent research delves into these critical areas, offering innovative solutions ranging from understanding internal mechanisms to securing real-world deployments. This blog post synthesizes breakthroughs across various domains, revealing a concerted effort to build more reliable and intelligent AI.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>At the heart of recent advancements lies a drive to make DNNs more interpretable and robust. A significant theme is improving <em>explainability<\/em>, moving beyond simplistic post-hoc justifications. The paper, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.08039\">LINE: LLM-based Iterative Neuron Explanations for Vision Models<\/a>\u201d by Vladimir Zaigrajew et al.\u00a0(Warsaw University of Technology, University of Warsaw, Centre for Credible AI), proposes a novel training-free, black-box iterative framework that uses LLMs and text-to-image generators to automatically label and explain individual vision model neurons. 
Their iterative refinement discovers high-level concepts missed by predefined vocabularies, offering more accurate and natural visual explanations.<\/p>\n<p>However, the reliability of explanations itself is under scrutiny. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.07254\">Non-identifiability of Explanations from Model Behavior in Deep Networks of Image Authenticity Judgments<\/a>\u201d by Icaro Re Depaolini and Uri Hasson (The University of Trento) reveals that high predictive performance doesn\u2019t guarantee consistent or valid attribution maps across different models, often relying on proxies like image quality rather than authentic cues. This work underscores the need for caution when interpreting these explanations as reflections of cognitive mechanisms.<\/p>\n<p>Another major focus is enhancing <em>robustness and generalization<\/em>, especially in the face of spurious correlations and dynamic environments. The \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.04518\">Reproducibility study on how to find Spurious Correlations, Shortcut Learning, Clever Hans or Group-Distributional non-robustness and how to fix them<\/a>\u201d by Ole Delzer and Sidney Bender (Technische Universit\u00e4t Berlin) unifies terminology and finds that XAI-based methods like Counterfactual Knowledge Distillation (CFKD) are effective, but are hindered by the scarcity of group labels. Complementing this, \u201c<a href=\"https:\/\/arxiv.org\/abs\/2603.29313\">HSFM: Hard-Set-Guided Feature-Space Meta-Learning for Robust Classification under Spurious Correlations<\/a>\u201d by A. Yazdan Parast et al.\u00a0tackles spurious correlations by optimizing support embeddings in the feature space using hard validation examples, achieving significant improvements in worst-group accuracy without needing explicit group annotations. 
This provides a computationally efficient way to build more robust classifiers.<\/p>\n<p>For continuous learning in dynamic systems, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.06958\">ELC: Evidential Lifelong Classifier for Uncertainty Aware Radar Pulse Classification<\/a>\u201d by M. Rabie et al.\u00a0(NC State University, Wireless Advanced Research Lab) introduces an Evidential Lifelong Classifier that combines evidential deep learning with lifelong learning regularization to address catastrophic forgetting and provide reliable uncertainty estimates, both crucial for radar signal processing.<\/p>\n<p>Bridging theory and practice, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.06774\">Sparse-Aware Neural Networks for Nonlinear Functionals: Mitigating the Exponential Dependence on Dimension<\/a>\u201d by Jianfei Li et al.\u00a0(LMU Munich, IIT, City University of Hong Kong) develops a theoretical framework showing how sparse-aware CNNs can learn nonlinear functionals in high dimensions by mitigating the curse of dimensionality, offering rigorous mathematical backing for their empirical success.<\/p>\n<p>In autonomous systems, efficiency and safety are paramount. \u201c<a href=\"https:\/\/arxiv.org\/abs\/2105.15105\">NaviSplit: Dynamic Multi-Branch Split DNNs for Efficient Distributed Autonomous Navigation<\/a>\u201d by D. Callegaro et al.\u00a0(University of Milano-Bicocca) uses dynamic multi-branch split DNNs with adaptive routing to distribute computation between edge devices and the cloud, improving energy efficiency and reducing latency. Similarly, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.07286\">CADENCE: Context-Adaptive Depth Estimation for Navigation and Computational Efficiency<\/a>\u201d introduces a context-adaptive depth estimation framework that dynamically adjusts computational resources based on scene complexity, providing real-time depth perception in resource-constrained environments. 
These advancements are critical for embedded AI, but they also highlight vulnerabilities, as demonstrated by \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.03753\">Spatiotemporal-Aware Bit-Flip Injection on DNN-based Advanced Driver Assistance Systems<\/a>\u201d, which shows how targeted bit-flips can cause catastrophic ADAS failures, demanding more robust hardware-software defenses.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>The innovations discussed rely on a mix of novel architectures, rigorous theoretical frameworks, and large-scale empirical evaluation. Here\u2019s a glimpse:<\/p>\n<ul>\n<li><strong>LINE Framework<\/strong>: Leverages <strong>LLMs<\/strong> (e.g., GPT-3.5, Gemini, Llama-2) for iterative concept refinement and <strong>text-to-image generators<\/strong> (e.g., Stable Diffusion) for visual explanations. Evaluated on the <strong>CoSy benchmark<\/strong>, <strong>ImageNet-1K<\/strong>, and <strong>Places365 datasets<\/strong>.<\/li>\n<li><strong>SAVED Framework<\/strong>: Introduced in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.07868\">On the Decompositionality of Neural Networks<\/a>\u201d by Junyong Lee et al.\u00a0(Yonsei University, University of Seoul) for evaluating \u2018neural decompositionality,\u2019 showing that <strong>Transformers (NLP)<\/strong> decompose more readily than <strong>CNNs\/ViTs (Vision)<\/strong>, with implications for verification scalability. 
Code available at <a href=\"https:\/\/zenodo.org\/records\/19049545\">https:\/\/zenodo.org\/records\/19049545<\/a>.<\/li>\n<li><strong>XShapeEnc<\/strong>: Proposed in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.07522\">Training-free Spatially Grounded Geometric Shape Encoding (Technical Report)<\/a>\u201d by Yuhang He (Microsoft Research), this encoding strategy utilizes <strong>orthogonal Zernike bases<\/strong> and <strong>frequency-propagation<\/strong> for 2D geometric shapes, creating the <strong>XShapeCorpus<\/strong> for validation. Code available at <a href=\"https:\/\/github.com\/yuhanghe01\/XShapeEnc\">https:\/\/github.com\/yuhanghe01\/XShapeEnc<\/a>.<\/li>\n<li><strong>ELC Architecture<\/strong>: An <strong>Evidential Lifelong Classifier<\/strong> designed for <strong>radar pulse classification<\/strong>, integrating evidential deep learning and lifelong learning regularization. Utilizes <strong>Drone remote controller RF signal<\/strong> and <strong>Radio frequency fingerprint LoRa datasets<\/strong>. Code: <a href=\"https:\/\/github.com\/mrabie9\/elc\">https:\/\/github.com\/mrabie9\/elc<\/a>.<\/li>\n<li><strong>OmniTabBench<\/strong>: The largest tabular benchmark to date, featuring <strong>3,030 datasets<\/strong> from UCI, OpenML, and Kaggle, categorized using <strong>LLMs<\/strong>. Used to evaluate <strong>GBDTs, Neural Networks, and Foundation Models<\/strong>. Code: <a href=\"https:\/\/github.com\/yandex-research\/rtdl-revisiting-models\">https:\/\/github.com\/yandex-research\/rtdl-revisiting-models<\/a> and <a href=\"https:\/\/github.com\/PriorLabs\/TabPFN\">https:\/\/github.com\/PriorLabs\/TabPFN<\/a>.<\/li>\n<li><strong>GCE Loss Function<\/strong>: A novel <strong>Generative Cross-Entropy<\/strong> loss function for <strong>calibrated classification<\/strong>, validated on <strong>CIFAR-10\/100<\/strong> and <strong>Tiny-ImageNet<\/strong> datasets. 
Introduced in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.06689\">Towards Accurate and Calibrated Classification: Regularizing Cross-Entropy From A Generative Perspective<\/a>\u201d by Qipeng Zhan et al.\u00a0(University of Pennsylvania).<\/li>\n<li><strong>Hierarchical Pruning Framework<\/strong>: A two-phase evolutionary framework for <strong>Deep Neural Network Pruning<\/strong>, demonstrating significant parameter reduction on <strong>ResNet architectures<\/strong> (up to ResNet-152) on <strong>CIFAR-10<\/strong> and <strong>CIFAR-100<\/strong> datasets. Presented in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.01076\">A Hierarchical Importance-Guided Multi-objective Evolutionary Framework for Deep Neural Network Pruning<\/a>\u201d by Zak Khan and Azam Asilian Bidgoli (Wilfrid Laurier University).<\/li>\n<li><strong>PINN Framework for Two-Phase Flow<\/strong>: A <strong>meshfree Physics-Informed Neural Network (PINN)<\/strong> framework employing <strong>piecewise deep neural networks<\/strong> to solve two-phase flow problems with moving interfaces. Detailed in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.00948\">Physics-informed neural networks for solving two-phase flow problems with moving interfaces<\/a>\u201d by Qijia Zhai et al.\u00a0(Sichuan University, University of Nevada Las Vegas).<\/li>\n<li><strong>SAAP (Adversarial Attenuation Patch)<\/strong>: A novel adversarial attack for <strong>SAR (Synthetic Aperture Radar) Object Detection<\/strong> systems. Code: <a href=\"https:\/\/github.com\/boremycin\/SAAP\">https:\/\/github.com\/boremycin\/SAAP<\/a>.<\/li>\n<li><strong>Side-Channel Cryptanalytic Extraction<\/strong>: A framework combining <strong>side-channel attacks<\/strong> with <strong>cryptanalytic methods<\/strong> to extract DNN weights in hard-label settings, validated on <strong>STM32F767ZI<\/strong> embedded devices. 
Code: <a href=\"https:\/\/github.com\/bcoqueret\/Side_channel_cryptanalytic_extraction_of_DNN\">https:\/\/github.com\/bcoqueret\/Side_channel_cryptanalytic_extraction_of_DNN<\/a>.<\/li>\n<li><strong>SISA Architecture<\/strong>: A <strong>Scale-In Systolic Array<\/strong> for <strong>GEMM Acceleration<\/strong> in <strong>LLMs<\/strong>, tested with models like <strong>Qwen<\/strong> and <strong>Llama-3.2-3B-Instruct<\/strong>. From \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2603.29913\">SISA: A Scale-In Systolic Array for GEMM Acceleration<\/a>\u201d by Altamura et al.\u00a0(Swedish Foundation for Strategic Research).<\/li>\n<li><strong>DGP (Disentangled Graph Prompting)<\/strong>: A novel approach for <strong>Out-Of-Distribution (OOD) Detection in Graph Data<\/strong>, showing state-of-the-art performance on <strong>ten benchmark datasets<\/strong>. Code: <a href=\"https:\/\/github.com\/BUPT-GAMMA\/DGP\">https:\/\/github.com\/BUPT-GAMMA\/DGP<\/a>.<\/li>\n<li><strong>VGNN (Variational Graph Neural Network)<\/strong>: A <strong>Variational Graph Neural Network<\/strong> for <strong>Uncertainty Quantification in Inverse Problems<\/strong>, validated on solid mechanics cases. Code: <a href=\"https:\/\/github.com\/NASA\/pigans-material-ID\">https:\/\/github.com\/NASA\/pigans-material-ID<\/a>.<\/li>\n<li><strong>Fragility Index (FI)<\/strong>: A new performance metric and <strong>Robust Satisficing<\/strong> training framework for <strong>Fragility-aware Classification<\/strong>, validated on datasets like <strong>UCI Heart Failure Prediction<\/strong>.<\/li>\n<li><strong>SGD Dynamics<\/strong>: Analyzed in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.06366\">Stochastic Gradient Descent in the Saddle-to-Saddle Regime of Deep Linear Networks<\/a>\u201d by Guillaume Corlouer et al.\u00a0(Moirai, University of Oxford, UC Berkeley), modeling <strong>SGD training dynamics<\/strong> in <strong>deep linear networks<\/strong> as stochastic Langevin dynamics. 
<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>These advancements collectively pave the way for a new generation of AI systems that are not only powerful but also trustworthy and efficient. Enhanced interpretability, even with its current caveats, allows developers to better diagnose model behavior and biases. The drive for robustness against spurious correlations and dynamic threats, coupled with reliable uncertainty quantification, means AI can be deployed with greater confidence in high-stakes environments like autonomous navigation and medical diagnosis. Theoretical strides in sparse-aware networks and SGD dynamics provide a foundational understanding for building more efficient architectures, while novel hardware designs like SISA promise to unlock the full potential of large models.<\/p>\n<p>The path ahead involves continuing to bridge the gap between theoretical guarantees and practical deployment. Future research will likely focus on developing unified benchmarks for continual learning, as highlighted by \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2506.21872\">A Survey of Continual Reinforcement Learning<\/a>\u201d, on improving automated data annotation for robustness methods, and on designing secure-by-design hardware and software to counter sophisticated attacks. The ultimate goal remains to create intelligent systems that are not just accurate, but also resilient, transparent, and capable of learning continuously in an ever-changing world.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 27 papers on deep neural networks: Apr. 
11, 2026<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,63],"tags":[179,399,139,1656,401,100],"class_list":["post-6432","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-machine-learning","tag-catastrophic-forgetting","tag-deep-neural-networks","tag-graph-neural-networks","tag-main_tag_deep_neural_networks","tag-spurious-correlations","tag-uncertainty-quantification"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous Systems<\/title>\n<meta name=\"description\" content=\"Latest 27 papers on deep neural networks: Apr. 11, 2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous Systems\" \/>\n<meta property=\"og:description\" content=\"Latest 27 papers on deep neural networks: Apr. 
11, 2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-04-11T07:58:02+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/11\\\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/11\\\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous Systems\",\"datePublished\":\"2026-04-11T07:58:02+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/11\\\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\\\/\"},\"wordCount\":1380,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"catastrophic forgetting\",\"deep neural networks\",\"graph neural networks\",\"main_tag_deep_neural_networks\",\"spurious correlations\",\"uncertainty quantification\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Machine 
Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/11\\\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/11\\\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/11\\\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\\\/\",\"name\":\"Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous Systems\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2026-04-11T07:58:02+00:00\",\"description\":\"Latest 27 papers on deep neural networks: Apr. 11, 2026\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/11\\\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/11\\\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/11\\\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous 
Systems\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem 
Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous Systems","description":"Latest 27 papers on deep neural networks: Apr. 11, 2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/","og_locale":"en_US","og_type":"article","og_title":"Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous Systems","og_description":"Latest 27 papers on deep neural networks: Apr. 
11, 2026","og_url":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2026-04-11T07:58:02+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous Systems","datePublished":"2026-04-11T07:58:02+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/"},"wordCount":1380,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["catastrophic forgetting","deep neural networks","graph neural networks","main_tag_deep_neural_networks","spurious correlations","uncertainty quantification"],"articleSection":["Artificial Intelligence","Computer Vision","Machine 
Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/","url":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/","name":"Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous Systems","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2026-04-11T07:58:02+00:00","description":"Latest 27 papers on deep neural networks: Apr. 11, 2026","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/11\/deep-neural-networks-from-trustworthy-explanations-to-robust-autonomous-systems\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Deep Neural Networks: From Trustworthy Explanations to Robust Autonomous Systems"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest 
research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. 
Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":38,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-1FK","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/6432","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=6432"}],"version-history":[{"count":0,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/6432\/revisions"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=6432"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=6432"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=6432"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}