Autonomous Systems: Navigating Complexity and Enhancing Safety with AI Innovations

Latest 50 papers on autonomous systems: Oct. 12, 2025

Autonomous systems are at the forefront of AI/ML innovation, promising to revolutionize everything from transportation and logistics to space exploration and industrial automation. However, realizing their full potential hinges on addressing significant challenges in reliability, safety, efficiency, and adaptability in complex, real-world environments. Recent research highlights a concerted effort to tackle these issues head-on, leveraging novel approaches in perception, control, ethics, and system architecture.

The Big Idea(s) & Core Innovations

At the heart of these advancements is a drive to imbue autonomous systems with human-like capabilities and robustness. For instance, in robotics, the paper “Vi-TacMan: Articulated Object Manipulation via Vision and Touch” from Tsinghua University and Stanford University, among others, demonstrates how integrating visual and tactile feedback significantly boosts the accuracy of articulated object manipulation. This multi-modal approach enables robots to interact with complex objects in dynamic human environments, a crucial step toward general-purpose robotics. Complementing this, research on “FlowAct: A Proactive Multimodal Human-robot Interaction System with Continuous Flow of Perception and Modular Action Sub-systems” delves into proactive human-robot interaction (HRI), showcasing how continuous perception and modular actions can lead to more context-aware and engaging social robots, particularly useful in settings like hospitals.
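Multi-modal integration of this kind is often implemented as a late fusion of per-sensor estimates. The sketch below is not Vi-TacMan's architecture; it is a minimal, hypothetical illustration of precision-weighted fusion, where a coarse visual estimate of an articulation parameter is refined by a more certain tactile reading:

```python
import numpy as np

def fuse_estimates(mu_vision, var_vision, mu_touch, var_touch):
    """Precision-weighted (inverse-variance) fusion of two noisy
    estimates of the same quantity, e.g. a hinge angle in radians.
    The more certain modality dominates the fused estimate."""
    w_v = 1.0 / var_vision
    w_t = 1.0 / var_touch
    mu = (w_v * mu_vision + w_t * mu_touch) / (w_v + w_t)
    var = 1.0 / (w_v + w_t)
    return mu, var

# Vision is coarse (high variance); touch refines the estimate on contact.
mu, var = fuse_estimates(0.50, 0.04, 0.42, 0.01)
```

The fused variance is always smaller than either input variance, which is one formal sense in which adding a second modality "boosts accuracy."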

Safety and reliability are paramount, especially in autonomous driving. The paper “A Scalable Framework for Safety Assurance of Self-Driving Vehicles based on Assurance 2.0” proposes a structured approach to managing complexity and uncertainty using Assurance 2.0 principles. Meanwhile, “Calibrating the Full Predictive Class Distribution of 3D Object Detectors for Autonomous Driving” by researchers from the Technical University of Munich and Daimler AG improves the reliability of 3D object detectors by enhancing uncertainty estimation across all classes, a vital ingredient of trustworthy perception. Further probing these systems, “FuncPoison: Poisoning Function Library to Hijack Multi-agent Autonomous Driving Systems” from the University of California and others uncovers critical vulnerabilities in software supply chains, demonstrating how subtle code manipulations can compromise autonomous vehicle decision-making.
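The simplest way to see what calibrating a full class distribution means is temperature scaling of the softmax over all classes. The paper's actual scheme is more involved; this is a generic sketch, with the temperature `T` assumed to be fit on held-out data:

```python
import numpy as np

def temperature_scaled_softmax(logits, T):
    """Soften (T > 1) or sharpen (T < 1) a predicted class
    distribution; T is typically chosen to minimize negative
    log-likelihood on a validation set."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                 # numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [4.0, 1.0, 0.5]         # e.g. car, pedestrian, cyclist scores
p_raw = temperature_scaled_softmax(logits, T=1.0)
p_cal = temperature_scaled_softmax(logits, T=2.0)  # less overconfident
```

With `T > 1` the top-class probability drops and the tail classes gain mass, so reported confidences track empirical accuracy more closely across all classes, not just the winning one.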

Addressing the dynamic nature of real-world scenarios, “Beyond Collision Cones: Dynamic Obstacle Avoidance for Nonholonomic Robots via Dynamic Parabolic Control Barrier Functions” by Taekyung Lee and Dimitra Panagou of the University of Michigan presents DPCBF, a method that offers more accurate and flexible obstacle avoidance for nonholonomic robots. Similarly, “From Space to Time: Enabling Adaptive Safety with Learned Value Functions via Disturbance Recasting” from the University of California San Diego introduces SPACE2TIME, a framework that reparameterizes spatial disturbances as temporal variations, enabling adaptive safety filters that significantly improve robustness in dynamic, uncertain environments. For real-time performance, “UrgenGo: Urgency-Aware Transparent GPU Kernel Launching for Autonomous Driving” by the University of Science and Technology of China and the Institute of Artificial Intelligence introduces a non-intrusive GPU scheduling system that drastically reduces deadline misses by prioritizing urgent tasks, which is vital for autonomous vehicle operations.
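Control barrier functions of the family DPCBF generalizes keep a safe set h(x) ≥ 0 forward-invariant by filtering a nominal control. A minimal discrete-time sketch for a circular obstacle and a single-integrator robot (deliberately simpler than the paper's parabolic formulation and nonholonomic dynamics) looks like:

```python
import numpy as np

def cbf_filter(x, u_nominal, obstacle, radius, dt=0.1, alpha=0.5):
    """Accept u_nominal if the discrete barrier condition
    h(x_next) >= (1 - alpha) * h(x) holds, otherwise brake.
    h(x) = ||x - obstacle||^2 - radius^2  (>= 0 means safe)."""
    def h(state):
        d = state - obstacle
        return d @ d - radius**2
    x_next = x + dt * u_nominal           # single-integrator model
    if h(x_next) >= (1 - alpha) * h(x):
        return u_nominal                  # nominal control is safe
    return np.zeros_like(u_nominal)       # fallback: stop

x = np.array([0.0, 0.0])
obstacle = np.array([1.0, 0.0])
u_block = cbf_filter(x, np.array([5.0, 0.0]), obstacle, radius=0.5)
u_pass = cbf_filter(x, np.array([-1.0, 0.0]), obstacle, radius=0.5)
```

A practical filter would solve a small quadratic program for the closest safe control rather than braking outright; the binary accept/stop rule here only illustrates the invariance condition.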

Large Language Models (LLMs) are also playing a transformative role. The survey “Trajectory Prediction Meets Large Language Models: A Survey” from Northeastern University highlights how LLMs are reshaping trajectory prediction by leveraging their semantic and reasoning capabilities. In the realm of network efficiency, “Agentic AI for Low-Altitude Semantic Wireless Networks: An Energy Efficient Design” explores how agentic AI can optimize resource allocation and improve network performance through intelligent decision-making, showcasing a promising synergy between AI and communication infrastructures.

Under the Hood: Models, Datasets, & Benchmarks

Recent research heavily relies on and contributes to critical tools and resources:

  • Vi-TacMan System: An integrated vision and touch system for articulated object manipulation, demonstrating improved performance over unimodal approaches. Code available at https://github.com/VI-TACMAN/VI-TACMAN.
  • BC-MPPI: A probabilistic constraint layer for Model Predictive Path Integral (MPPI) control, using Bayesian Neural Networks (BNNs) for uncertainty-aware sampling. Code available at https://github.com/BC-MPPI.
  • TGPO: A hierarchical RL-STL framework for complex, long-horizon tasks, achieving significant improvements in task success rate. Code available at https://github.com/mengyuest/TGPO.
  • UrgenGo System: A non-intrusive GPU scheduling system that prioritizes urgent tasks for autonomous driving, evaluated on a self-driving bus with TensorRT and ROS2. No public code provided yet.
  • FuncPoison Framework: A method for adversarial poisoning of function libraries in multi-agent autonomous driving systems. Code available at https://github.com/FuncPoison.
  • CrossI2P Framework: A self-supervised, unified end-to-end approach for image-to-point cloud registration that bridges semantic-geometric gaps without manual annotations, reporting 23.7% better performance on KITTI Odometry and 37.9% on nuScenes. Paper at https://arxiv.org/pdf/2509.15882; no public code provided yet.
  • TeleOpBench: A simulator-centric benchmark for dual-arm dexterous teleoperation, supporting multiple modalities like motion capture and VR controllers. Resources at https://gorgeous2002.github.io/TeleOpBench/.
  • DECIDE-SIM: A systematic simulation framework for evaluating LLM decision-making in multi-agent survival scenarios involving third-party harm dilemmas. Code available at https://github.com/alirezamohamadiam/DECIDE-SIM.
  • Watson Framework: A cognitive observability framework for LLM-powered agents to enhance transparency and traceability. Code available at https://github.com/IBM/watson.
  • SoK: How Sensor Attacks Disrupt Autonomous Vehicles: Introduces the System Error Propagation Graph (SEPG) to model error propagation in autonomous systems. Paper at https://arxiv.org/pdf/2509.11120.
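Several of these systems, UrgenGo in particular, revolve around deadline-aware prioritization of GPU work. As a rough illustration of the underlying idea (not UrgenGo's actual mechanism, which operates at the kernel-launch layer), an earliest-deadline-first queue can be sketched with a heap:

```python
import heapq

def edf_schedule(tasks):
    """Run tasks in earliest-deadline-first order and report deadline
    misses. Each task is (name, deadline, duration); time starts at 0."""
    heap = [(deadline, name, duration) for name, deadline, duration in tasks]
    heapq.heapify(heap)          # orders by deadline (first tuple field)
    t, order, missed = 0.0, [], []
    while heap:
        deadline, name, duration = heapq.heappop(heap)
        t += duration
        order.append(name)
        if t > deadline:
            missed.append(name)
    return order, missed

# An urgent perception kernel jumps ahead of a long logging job.
order, missed = edf_schedule([("logging", 10.0, 4.0),
                              ("perception", 2.0, 1.0),
                              ("planning", 5.0, 2.0)])
```

Submitted in arrival order, the logging job would push perception past its 2-unit deadline; deadline-first ordering meets all three, which is the effect UrgenGo reports at the GPU level.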

Impact & The Road Ahead

These research efforts collectively push the boundaries of autonomous systems, promising safer, more efficient, and more intelligent interactions with the world. The integration of multi-modal sensing, advanced control strategies, and ethical considerations is crucial for real-world deployment. Developments like multi-resolution approximate bisimulations (“Existence and Synthesis of Multi-Resolution Approximate Bisimulations for Continuous-State Dynamical Systems”) and trajectory encryption for cooperative guidance (“Trajectory Encryption Cooperative Salvo Guidance”) highlight the ongoing advancements in both theoretical foundations and practical applications. The ethical implications of AI, as reviewed in “Reinforcement Learning and Machine Ethics: A Systematic Review” and explored in “Survival at Any Cost? LLMs and the Choice Between Self-Preservation and Human Harm”, underscore the growing imperative for trustworthy and morally aligned AI.

The road ahead involves continued efforts in understanding and mitigating adversarial attacks (“Time-Constrained Intelligent Adversaries for Automation Vulnerability Testing: A Multi-Robot Patrol Case Study”), refining perception systems for out-of-distribution scenarios (“HD-OOD3D: Supervised and Unsupervised Out-of-Distribution object detection in LiDAR data”), and making LLM-powered agents more observable and transparent (“Watson: A Cognitive Observability Framework for the Reasoning of LLM-Powered Agents”). These advancements are not isolated; they build upon each other, creating a synergistic ecosystem where improvements in one area ripple across the entire field. As autonomous systems become more integrated into our lives, these ongoing innovations will be critical in ensuring their safe, reliable, and ethical operation.

The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.
