{"id":6125,"date":"2026-03-14T08:58:35","date_gmt":"2026-03-14T08:58:35","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/"},"modified":"2026-03-14T08:58:35","modified_gmt":"2026-03-14T08:58:35","slug":"autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/","title":{"rendered":"Autonomous Driving&#8217;s Leap Forward: From Robust Perception to Intelligent Planning"},"content":{"rendered":"<h3>Latest 89 papers on autonomous driving: Mar. 14, 2026<\/h3>\n<p>Autonomous driving is hurtling towards a future where intelligent vehicles seamlessly navigate complex, dynamic environments. This journey, however, is fraught with challenges, from ensuring robust perception in adverse conditions to orchestrating safe and intelligent decision-making in unforeseen scenarios. Recent advancements in AI\/ML are providing groundbreaking solutions, pushing the boundaries of what\u2019s possible. Let\u2019s dive into some of the latest breakthroughs that are accelerating us towards this self-driving future.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>The overarching theme in recent research is a multi-pronged attack on autonomous driving\u2019s hardest problems: enhancing perception, making decisions more robust and explainable, and generating realistic testing scenarios. Several papers spotlight the critical role of <strong>multi-modal fusion<\/strong> and <strong>robust feature learning<\/strong>. For instance, researchers behind <a href=\"https:\/\/arxiv.org\/pdf\/2603.11566\">R4Det: 4D Radar-Camera Fusion for High-Performance 3D Object Detection<\/a> from <strong>Peking University<\/strong> introduce a Panoramic Depth Fusion module, significantly improving depth estimation by combining absolute and relative depth understanding. This is crucial for precise 3D object detection, a cornerstone of safe navigation. Complementing this, <a href=\"https:\/\/arxiv.org\/pdf\/2505.20967\">RF4D: Neural Radar Fields for Novel View Synthesis in Outdoor Dynamic Scenes<\/a> by <strong>Nanyang Technological University<\/strong> presents a radar-based neural field that integrates temporal modeling and physics-based rendering, offering robust novel view synthesis even in challenging outdoor dynamics. This physical consistency in radar data is a game-changer for understanding complex scenes.<\/p>\n<p>Addressing the challenge of <strong>adverse conditions<\/strong>, <a href=\"https:\/\/arxiv.org\/pdf\/2603.11380\">DriveXQA: Cross-modal Visual Question Answering for Adverse Driving Scene Understanding<\/a> from a collaboration including <strong>TU Darmstadt<\/strong> and <strong>Tsinghua University<\/strong> introduces the MVX-LLM architecture. This model excels at cross-modal visual question answering, fusing RGB, depth, LiDAR, and event camera data to tackle foggy conditions and sensor failures, a critical step towards all-weather autonomy. 
Similarly, <a href=\"https:\/\/arxiv.org\/pdf\/2603.10128\">HG-Lane: High-Fidelity Generation of Lane Scenes under Adverse Weather and Lighting Conditions without Re-annotation<\/a> by <strong>Shanghai Jiao Tong University<\/strong> and <strong>Nanyang Technological University<\/strong> uses a dual-stage generation strategy with ControlNet to create realistic lane scenes in extreme weather without costly re-annotation, boosting detection accuracy in conditions where traditional models falter.<\/p>\n<p><strong>Intelligent planning and decision-making<\/strong> are also seeing massive leaps. The survey <a href=\"https:\/\/arxiv.org\/pdf\/2603.11093\">A Survey of Reasoning in Autonomous Driving Systems: Open Challenges and Emerging Paradigms<\/a> from <strong>Tsinghua University<\/strong> and <strong>MIT<\/strong> proposes a novel cognitive hierarchy for driving, emphasizing the integration of Large Language Models (LLMs) and Multimodal Models (MLLMs) to enhance reasoning in complex social scenarios. This is echoed by <a href=\"https:\/\/arxiv.org\/pdf\/2603.04222\">PRAM-R: A Perception-Reasoning-Action-Memory Framework with LLM-Guided Modality Routing for Adaptive Autonomous Driving<\/a> by <strong>Tsinghua University<\/strong> and <strong>Baidu Inc.<\/strong>, which dynamically selects the most relevant sensory inputs using LLMs for adaptive decision-making. Moreover, <a href=\"https:\/\/arxiv.org\/pdf\/2603.10441\">KnowDiffuser: A Knowledge-Guided Diffusion Planner with LM Reasoning and Prior-Informed Trajectory Initialization<\/a> integrates LM reasoning and prior knowledge into diffusion models for improved trajectory generation, pushing the frontier of complex task planning.<\/p>\n<p>Crucially, <strong>safety and robustness<\/strong> are paramount. <a href=\"https:\/\/arxiv.org\/pdf\/2603.10940\">STADA: Specification-based Testing for Autonomous Driving Agents<\/a> from a multi-institutional team including <strong>Goldman Sachs<\/strong> and <strong>UC Berkeley<\/strong> introduces a framework leveraging formal specifications to generate targeted test scenarios, significantly improving the detection of edge-case failures. 
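<\/p>\n<p>STADA\u2019s actual machinery builds on formal specifications; purely as a flavor-of-the-idea sketch, the snippet below checks one safety property (always keep a minimum gap to the lead vehicle) against randomly sampled scenario parameters and keeps the violating samples as candidate edge-case tests. The toy rollout, parameter ranges, and function names are invented for illustration, not drawn from the paper.<\/p>\n<pre><code>import random

def smallest_gap(ego_speed, lead_speed, init_gap, horizon=50, dt=0.1):
    # Toy constant-speed rollout: track the smallest gap to the lead vehicle.
    gap, smallest = init_gap, init_gap
    for _ in range(horizon):
        gap += (lead_speed - ego_speed) * dt
        smallest = min(smallest, gap)
    return smallest

def spec_safe_distance(trace_min_gap, d_safe=5.0):
    # Specification-style requirement: always(gap &gt;= d_safe).
    return trace_min_gap &gt;= d_safe

# Falsification by sampling: search scenario space for spec violations.
violations = []
for _ in range(1000):
    params = dict(ego_speed=random.uniform(10, 35),
                  lead_speed=random.uniform(5, 30),
                  init_gap=random.uniform(5, 60))
    if not spec_safe_distance(smallest_gap(**params)):
        violations.append(params)  # candidate edge-case scenario
print(len(violations), 'violating scenarios found')
<\/code><\/pre>\n<p>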
On the perception front, <a href=\"https:\/\/arxiv.org\/pdf\/2603.09529\">RESBev: Making BEV Perception More Robust<\/a> from <strong>Tsinghua University<\/strong> and <strong>MIT CSAIL<\/strong> enhances Bird\u2019s-Eye-View (BEV) perception against anomalies and adversarial attacks by incorporating latent world modeling, creating a more reliable perception foundation.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>The innovations above are underpinned by advancements in models, the creation of specialized datasets, and rigorous benchmarking:<\/p>\n<ul>\n<li><strong>R4Det<\/strong> utilizes the <code>TJ4DRadSet<\/code> and <code>VoD<\/code> datasets, showcasing a Panoramic Depth Fusion module for improved depth estimation.<\/li>\n<li><strong>RiskMV-DPO<\/strong> (<a href=\"https:\/\/github.com\/venshow-w\/RiskMV-DPO\">Code<\/a>) uses the <code>nuScenes<\/code> dataset to generate diverse, high-stakes driving scenarios, demonstrating improvements in 3D detection mAP and FID.<\/li>\n<li><strong>DriveXQA<\/strong> introduces <code>DRIVEXQA<\/code>, a comprehensive cross-modal VQA dataset with 102k QA pairs covering diverse weather and sensor failure scenarios, along with the <code>MVX-LLM<\/code> architecture for robust sensor fusion.<\/li>\n<li><strong>RF4D<\/strong> (<a href=\"https:\/\/zhan0618.github.io\/RF4D\">Code<\/a>) is a radar-based neural field framework for novel view synthesis, validated on public radar datasets.<\/li>\n<li><strong>PRF<\/strong> (<a href=\"https:\/\/github.com\/zhouhao94\/PRF\">Code<\/a>) for variable-length trajectory prediction uses a Progressive Retrospective Framework (PRF) and a Rolling-Start Training Strategy (RSTS), enhancing data efficiency.<\/li>\n<li><strong>KnowDiffuser<\/strong> (<a href=\"https:\/\/github.com\/your-repo-knowdiffuser\">Code<\/a>) integrates Language Model (LM) reasoning and prior-informed trajectories into a diffusion planner for trajectory generation.<\/li>\n<li><strong>Motion Forcing<\/strong> (<a href=\"https:\/\/github.com\/Tianshuo-Xu\/Motion-Forcing\">Code<\/a>) employs a <code>Point-Shape-Appearance<\/code> paradigm for physically consistent video generation, evaluated on autonomous driving benchmarks.<\/li>\n<li><strong>HG-Lane<\/strong> (<a href=\"https:\/\/github.com\/zdc233\/HG-Lane\">Code<\/a>) leverages <code>ControlNet<\/code> with Canny and InstructPix2Pix guidance and introduces a new benchmark with 30,000 images across six adverse categories for high-fidelity lane scene generation.<\/li>\n<li><strong><span class=\"math inline\"><em>M<\/em><sup>2<\/sup><\/span>-Occ<\/strong> (<a href=\"https:\/\/github.com\/qixi7up\/M2-Occ\">Code<\/a>) enhances 3D semantic occupancy prediction with incomplete camera data, achieving higher IoU.<\/li>\n<li><strong>OccTrack360<\/strong> (<a href=\"https:\/\/github.com\/YouthZest-Lin\/OccTrack360\">Code<\/a>) provides a framework for 4D panoptic occupancy tracking from surround-view fisheye cameras, with a publicly available benchmark.<\/li>\n<li><strong>ALOOD<\/strong> (<a href=\"https:\/\/github.com\/uulm-mrm\/mmood3d\">Code<\/a>) uses language representations for LiDAR-based out-of-distribution object detection on the <code>nuScenes OOD benchmark<\/code>.<\/li>\n<li><strong>RLPR<\/strong> (<a href=\"https:\/\/github.com\/QiZS-BIT\/\">Code<\/a>) proposes a Two-Stage Asymmetric Cross-Modal Alignment (TACMA) framework for radar-to-LiDAR place recognition.<\/li>\n<li><strong>NaviDriveVLM<\/strong> (<a 
href=\"https:\/\/github.com\/TAMU-CVRL\/NaviDrive\">Code<\/a>) decouples high-level reasoning and motion planning, showing superior performance on the <code>nuScenes benchmark<\/code>.<\/li>\n<li><strong>ScenePilot-Bench<\/strong> (<a href=\"https:\/\/github.com\/yjwangtj\/ScenePilot-Bench\">Code<\/a>) is a large-scale dataset for evaluating vision-language models in autonomous driving, focusing on spatially grounded reasoning.<\/li>\n<li><strong>ELYTRA<\/strong> (<a href=\"https:\/\/github.com\/Elytra-Project\/ELYTRA\">Code<\/a>) uses <code>LoRA<\/code> for securing large vision systems against adversarial attacks, validating on traffic sign datasets.<\/li>\n<li><strong>RAG-Driver<\/strong> uses <code>Retrieval-Augmented In-Context Learning<\/code> in multi-modal LLMs for interpretable driving explanations.<\/li>\n<li><strong>CARLA-OOD<\/strong> is a new synthetic multimodal dataset for OOD segmentation tasks, introduced by <strong>Feature Mixing<\/strong> (<a href=\"https:\/\/github.com\/mona4399\/FeatureMixing\">Code<\/a>).<\/li>\n<li><strong>BEVLM<\/strong> (<a href=\"https:\/\/github.com\/BEVLM\">Code<\/a>) distills semantic knowledge from LLMs into <code>BEV<\/code> representations, improving safety in closed-loop scenarios.<\/li>\n<li><strong>TaPD<\/strong> (<a href=\"https:\/\/github.com\/zhouhao94\/TaPD\">Code<\/a>) is a plug-and-play temporal-adaptive progressive distillation method for trajectory prediction, particularly beneficial for models like <code>HiVT<\/code>.<\/li>\n<li><strong>EIMC<\/strong> (<a href=\"https:\/\/github.com\/sidiangongyuan\/EIMC\">Code<\/a>) efficiently improves multi-modal collaborative perception with reduced bandwidth for 3D object detection.<\/li>\n<li><strong>ModalPatch<\/strong> (<a href=\"https:\/\/github.com\/Castiel\">Code<\/a>) is a plug-and-play module for robust multi-modal 3D object detection under modality drop.<\/li>\n<li><strong>TruckDrive<\/strong> is a new large-scale multi-modal dataset for long-range, high-speed highway autonomous driving, with annotations up to 1 km in 2D and 400m in 3D.<\/li>\n<li><strong>SceneStreamer<\/strong> uses an autoregressive model for continuous traffic scenario generation, supporting closed-loop training for autonomous driving.<\/li>\n<li><strong>AnchorDrive<\/strong> (<a href=\"https:\/\/github.com\/AnchorDrive\/AnchorDrive\">Code<\/a>) combines LLMs and diffusion models with anchor-guided regeneration for safety-critical scenario generation.<\/li>\n<li><strong>RoadLogic<\/strong> (<a href=\"https:\/\/anonymous.4open.science\/r\/roadlogic-03F1\/\">Code<\/a>) is an open-source framework that instantiates <code>OpenSCENARIO DSL (OS2)<\/code> specifications into realistic simulations using <code>Answer Set Programming (ASP)<\/code> and motion planning.<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>These advancements herald a new era for autonomous driving, promising safer, more reliable, and adaptable systems. The ability to fuse diverse sensor data more intelligently (R4Det, DriveXQA), generate realistic and challenging test scenarios (RiskMV-DPO, STADA, SceneStreamer), and infuse human-like reasoning into planning (PRAM-R, KnowDiffuser) are critical steps towards full autonomy. The emphasis on robustness against adverse conditions and adversarial attacks (RESBev, ELYTRA, GAN-Based Defense) directly addresses key safety concerns for real-world deployment. 
Moreover, the creation of specialized datasets like TruckDrive and DRIVEXQA will fuel future research, pushing models to generalize better across diverse environments and long-tail events.<\/p>\n<p>As we look ahead, the integration of large language models for nuanced reasoning and the development of adaptable, data-efficient learning frameworks will continue to be pivotal. Emerging paradigms such as <code>Open-World Motion Forecasting<\/code> and <code>Zero-Shot Cross-City Generalization<\/code> suggest a future where autonomous vehicles can continually learn and adapt to unseen scenarios without extensive re-training. This collective progress paints a picture of autonomous driving not just as a technological feat, but as a robust, intelligent, and inherently safer mode of transport, ready to redefine our roads.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 89 papers on autonomous driving: Mar. 14, 2026<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,123],"tags":[157,124,1556,335,353],"class_list":["post-6125","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-robotics","tag-adversarial-attacks","tag-autonomous-driving","tag-main_tag_autonomous_driving","tag-scene-understanding","tag-trajectory-prediction"],
14, 2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-03-14T08:58:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/03\\\/14\\\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/03\\\/14\\\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Autonomous Driving&#8217;s Leap Forward: From Robust Perception to Intelligent Planning\",\"datePublished\":\"2026-03-14T08:58:35+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/03\\\/14\\\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\\\/\"},\"wordCount\":1174,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"adversarial attacks\",\"autonomous driving\",\"autonomous driving\",\"scene understanding\",\"trajectory prediction\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Robotics\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/03\\\/14\\\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/03\\\/14\\\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/03\\\/14\\\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\\\/\",\"name\":\"Autonomous Driving's Leap Forward: From Robust Perception to Intelligent Planning\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2026-03-14T08:58:35+00:00\",\"description\":\"Latest 89 papers on autonomous driving: Mar. 
14, 2026\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/03\\\/14\\\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/03\\\/14\\\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/03\\\/14\\\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Autonomous Driving&#8217;s Leap Forward: From Robust Perception to Intelligent Planning\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. 
Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Autonomous Driving's Leap Forward: From Robust Perception to Intelligent Planning","description":"Latest 89 papers on autonomous driving: Mar. 14, 2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/","og_locale":"en_US","og_type":"article","og_title":"Autonomous Driving's Leap Forward: From Robust Perception to Intelligent Planning","og_description":"Latest 89 papers on autonomous driving: Mar. 14, 2026","og_url":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2026-03-14T08:58:35+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Autonomous Driving&#8217;s Leap Forward: From Robust Perception to Intelligent Planning","datePublished":"2026-03-14T08:58:35+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/"},"wordCount":1174,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["adversarial attacks","autonomous driving","autonomous driving","scene understanding","trajectory prediction"],"articleSection":["Artificial Intelligence","Computer Vision","Robotics"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/","url":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/","name":"Autonomous Driving's Leap Forward: From Robust Perception to Intelligent Planning","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2026-03-14T08:58:35+00:00","description":"Latest 89 papers on autonomous driving: Mar. 
14, 2026","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2026\/03\/14\/autonomous-drivings-leap-forward-from-robust-perception-to-intelligent-planning\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Autonomous Driving&#8217;s Leap Forward: From Robust Perception to Intelligent Planning"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. 
"views":87,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-1AN","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/6125","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=6125"}],"version-history":[{"count":0,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/6125\/revisions"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=6125"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=6125"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=6125"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}