{"id":6670,"date":"2026-04-25T05:19:16","date_gmt":"2026-04-25T05:19:16","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/"},"modified":"2026-04-25T05:19:16","modified_gmt":"2026-04-25T05:19:16","slug":"adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/","title":{"rendered":"Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!"},"content":{"rendered":"<h3>Latest 18 papers on adversarial attacks: Apr. 25, 2026<\/h3>\n<p>The landscape of AI is rapidly evolving, bringing with it incredible capabilities but also significant vulnerabilities. Adversarial attacks \u2013 subtle, often imperceptible manipulations designed to fool AI models \u2013 represent a critical challenge to the reliability and safety of these systems. This post dives into recent breakthroughs across various domains, exploring novel attack vectors, fundamental theoretical insights, and cutting-edge defense strategies that are shaping the future of trustworthy AI.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>Recent research highlights a crucial shift: attacks are becoming more sophisticated, targeting the very foundations of AI decision-making and even long-term planning. For instance, in the realm of multimodal AI, a paper from <strong>Nanyang Technological University<\/strong> and its collaborators, titled \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.18867\">Hierarchically Robust Zero-shot Vision-language Models<\/a>\u201d, reveals that Vision-Language Models (VLMs) can be made hierarchically robust by exploiting hyperbolic embeddings. 
Their key insight is that hyperbolic classifiers achieve theoretically infinite margin sizes, making them more resilient, and critically, that adversarial perturbations generated at <em>superclass<\/em> levels (e.g., \u2018mammal\u2019) transfer effectively to attack <em>base classes<\/em> (e.g., \u2018cat\u2019), but not vice versa. This asymmetry presents a unique vulnerability that their Hierarchical Adversarial Fine-tuning (HITA) framework addresses.<\/p>\n<p>Expanding on multimodal vulnerabilities, <strong>Beihang University<\/strong> and its partners introduce \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2411.18275\">Visual Adversarial Attack on Vision-Language Models for Autonomous Driving<\/a>\u201d (ADvLM). This groundbreaking work is the first to specifically target VLMs in autonomous driving, demonstrating how semantic-invariant textual prompts and scenario-associated visual enhancements can lead to dangerous real-world vehicle deviations. The researchers found that carefully crafted visual perturbations can cause attention maps to dramatically shift, disrupting the model\u2019s focus and causing significant safety risks.<\/p>\n<p>The threats extend to critical medical applications. In \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.17318\">When Background Matters: Breaking Medical Vision Language Models by Transferable Attack<\/a>\u201d, researchers from <strong>Indian Institute of Technology Patna<\/strong> and <strong>MBZUAI<\/strong> propose MedFocusLeak. This attack shows that injecting subtle, visually imperceptible perturbations into <em>non-diagnostic background regions<\/em> of medical images can redirect a VLM\u2019s attention away from pathological areas, leading to clinically incorrect\u2014and potentially life-threatening\u2014diagnoses. 
These multimodal perturbations proved twice as effective as unimodal ones, underscoring the unique security challenges of medical AI.<\/p>\n<p>Beyond perception, attacks are now targeting the long-term memory and reasoning of AI agents. <strong>City University of Hong Kong<\/strong>\u2019s work on \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.16966\">Visual Inception: Compromising Long-term Planning in Agentic Recommenders via Multimodal Memory Poisoning<\/a>\u201d introduces the chilling concept of \u2018sleeper agents.\u2019 Adversarial visual triggers, hidden in user-uploaded images, lie dormant in memory until retrieved for future planning, then hijack the agent\u2019s reasoning towards adversary-defined goals. Their proposed COGNITIVEGUARD defense, inspired by human cognition, offers a dual-process approach to detect and mitigate these stealthy attacks.<\/p>\n<p>On the theoretical front, a paper from the <strong>University of Chinese Academy of Sciences<\/strong> titled \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.17384\">Towards a Data-Parameter Correspondence for LLMs: A Preliminary Discussion<\/a>\u201d provides a unified geometric framework. It posits that data-centric and parameter-centric operations in LLM optimization are dual manifestations of the same geometric structure. Crucially, it reveals that adversarial attacks exhibit <em>cooperative amplification<\/em> between data poisoning and parameter backdoors across this data-parameter boundary, suggesting new avenues for both attack and defense. 
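<\/p>\n<p>To make the shared mechanics concrete: most of the perturbation attacks surveyed here descend from the same gradient-based recipe, exemplified by the Fast Gradient Sign Method (FGSM) \u2013 compute the gradient of a loss with respect to the <em>input<\/em> rather than the weights, then move every input dimension by a small budget in the direction of that gradient\u2019s sign. The sketch below is purely illustrative: a toy linear \u201cmodel\u201d with made-up numbers, not code from any of the papers above.<\/p>\n<pre><code>import numpy as np\n\ndef fgsm(x, grad, eps=0.03):\n    # one-step attack: nudge each input dimension by +\/-eps along the gradient sign\n    x_adv = x + eps * np.sign(grad)\n    return np.clip(x_adv, 0.0, 1.0)  # keep the result in a valid pixel range\n\n# toy case: for a linear score w.x, the gradient with respect to x is simply w\nw = np.array([0.5, -1.0, 2.0])\nx = np.array([0.2, 0.8, 0.4])\nx_adv = fgsm(x, grad=w, eps=0.05)  # each entry moves by exactly 0.05\n<\/code><\/pre>\n<p>In practice the loss gradient comes from backpropagation through the full model, and iterative variants such as PGD repeat this step under a norm constraint. 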
Another paper, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.16037\">Stochasticity in Tokenisation Improves Robustness<\/a>\u201d from <strong>Graz University of Technology<\/strong> and its collaborators, demonstrates that training with uniform stochastic tokenization significantly improves LLM robustness against random and adversarial tokenization attacks without increasing inference costs \u2013 a simple yet powerful technique.<\/p>\n<p>Defenses are also rapidly advancing. For 3D point clouds, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.15708\">APC: Transferable and Efficient Adversarial Point Counterattack for Robust 3D Point Cloud Recognition<\/a>\u201d from the <strong>University of Seoul<\/strong> introduces a lightweight input-level purification module. APC generates per-point counter-perturbations using hybrid training and dual consistency losses (geometric and semantic), achieving state-of-the-art defense and strong transferability across unseen models. Similarly, for Graph Neural Networks, <strong>Chinese Academy of Sciences<\/strong> and its partners propose \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.15370\">TopFeaRe: Locating Critical State of Adversarial Resilience for Graphs Regarding Topology-Feature Entanglement<\/a>\u201d. This method leverages equilibrium-point theory from complex dynamic systems to identify an \u2018asymptotically-stable equilibrium point\u2019 that guides graph purification, addressing the intertwined nature of topology and features in graph attacks.<\/p>\n<p>Biological inspiration also plays a role. In \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.14200\">Retina gap junctions support the robust perception by warping neural representational geometries along the visual hierarchy<\/a>\u201d, researchers from <strong>Peking University<\/strong> show that retina gap junctions create unique, stable circular decision boundaries, making deep neural networks robust to attacks by warping neural representational geometries. 
Their parameter-free G-filter, inspired by biological visual systems, outperforms traditional preprocessing defenses.<\/p>\n<p>Finally, for generative AI, researchers from <strong>Renmin University of China<\/strong> expose a critical flaw in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.12781\">Fragile Reconstruction: Adversarial Vulnerability of Reconstruction-Based Detectors for Diffusion-Generated Images<\/a>\u201d. Their work reveals that reconstruction-based detectors for AI-generated images (like deepfakes) are severely vulnerable to imperceptible adversarial perturbations, causing detection accuracy to collapse and demonstrating strong cross-generator and cross-method transferability. The low signal-to-noise ratio in reconstruction residuals is identified as the root cause, rendering standard defenses ineffective.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>These advancements are underpinned by a rich ecosystem of models, datasets, and benchmarks:<\/p>\n<ul>\n<li><strong>Vision-Language Models &amp; Autonomous Driving:<\/strong> The ADvLM framework utilized models like DriveLM (<a href=\"https:\/\/github.com\/OpenDriveLab\/DriveLM\">https:\/\/github.com\/OpenDriveLab\/DriveLM<\/a>), Dolphins (<a href=\"https:\/\/github.com\/CharlesZheYuan\/Dolphins\">https:\/\/github.com\/CharlesZheYuan\/Dolphins<\/a>), and LMDrive, along with the CARLA simulator (<a href=\"https:\/\/carla.org\/\">https:\/\/carla.org\/<\/a>) for realistic evaluation. New datasets like DriveLM-ADvLM and Dolphins-ADvLM were introduced.<\/li>\n<li><strong>Medical VLMs:<\/strong> MedFocusLeak evaluated models like GPT-5 and Gemini 2.5 Pro and utilized datasets such as MIMIC-CXR, SkinCAP, and MedTrinity, with MedSAM for image segmentation and various CLIP variants (<a href=\"https:\/\/huggingface.co\/openai\/clip-vit-large-patch14-336\">openai\/clip-vit-large-patch14-336<\/a>, etc.) as surrogate models. 
Public code is available at <a href=\"https:\/\/github.com\/MedFocusLeak\">https:\/\/github.com\/MedFocusLeak<\/a>.<\/li>\n<li><strong>Agentic Recommender Systems:<\/strong> \u201cVisual Inception\u201d used ShopBench-Agent benchmark (e-commerce, interior design, travel planning), LLaMA-3.2-Vision-90B, GPT-4V, Qwen-VL-Max, and Claude-3.5-Sonnet, alongside CLIP and SigLIP models.<\/li>\n<li><strong>LLM Robustness:<\/strong> \u201cStochasticity in Tokenisation\u201d experimented with models like GPT-2 XL (<a href=\"https:\/\/huggingface.co\/openai-community\/gpt2-xl\">https:\/\/huggingface.co\/openai-community\/gpt2-xl<\/a>), Llama-3.2-1B, and Qwen3-0.6B, using datasets like LANGUAGE GAME and CUTE. Code is available at <a href=\"https:\/\/github.com\/stegsoph\/stochastic-tokenisation-robustness\">https:\/\/github.com\/stegsoph\/stochastic-tokenisation-robustness<\/a>.<\/li>\n<li><strong>LLM Trustworthiness (Training-Free):<\/strong> The systematic study evaluated four LLM families (7B to 70B parameters) against HarmBench, TruthfulQA, BBQ, and XSTest datasets.<\/li>\n<li><strong>Android Malware Detection:<\/strong> \u201cUnraveling the Key\u201d developed FrameDroid (<a href=\"https:\/\/github.com\/ljiahao\/FrameDroid\">https:\/\/github.com\/ljiahao\/FrameDroid<\/a>), a comprehensive framework, and collected the largest dataset to date with 221,310 apps from AndroZoo and VirusTotal.<\/li>\n<li><strong>3D Point Cloud Robustness:<\/strong> APC was evaluated on ModelNet40 and ScanObjectNN, with code available at <a href=\"https:\/\/github.com\/gyjung975\/APC\">https:\/\/github.com\/gyjung975\/APC<\/a>.<\/li>\n<li><strong>Graph Neural Networks:<\/strong> TopFeaRe was validated on Cora_ML, Citeseer, Amazon Photo, and PubMed datasets using attacks like Metattack and Nettack, with code at <a href=\"https:\/\/doi.org\/10.5281\/zenodo.17920431\">https:\/\/doi.org\/10.5281\/zenodo.17920431<\/a>.<\/li>\n<li><strong>Remote Sensing &amp; Physically Plausible Attacks:<\/strong> 
FogFool used the UC Merced Land Use (UCM) and NWPU-RESISC45 datasets. No public code was mentioned for FogFool.<\/li>\n<li><strong>AI-Generated Content (AIGC) Detection:<\/strong> \u201cFragile Reconstruction\u201d assessed models like DIRE, LaRE2, and AEROBLADE across generative backbones including ADM, SDv1.5, FLUX, and VQDM. Code is at <a href=\"https:\/\/github.com\/atrijhy\/Fragile-Reconstruction\">https:\/\/github.com\/atrijhy\/Fragile-Reconstruction<\/a>.<\/li>\n<li><strong>Time-Series Regression Attacks:<\/strong> INTARG utilized the UCI Individual Household Electric Power Consumption Dataset (<a href=\"https:\/\/doi.org\/10.24432\/C58K54\">https:\/\/doi.org\/10.24432\/C58K54<\/a>) and Pecan Street Dataport (<a href=\"https:\/\/www.pecanstreet.org\/dataport\/\">https:\/\/www.pecanstreet.org\/dataport\/<\/a>).<\/li>\n<li><strong>Certified Robustness &amp; Fairness:<\/strong> The GF-Score uses RobustBench (<a href=\"https:\/\/robustbench.github.io\/\">https:\/\/robustbench.github.io\/<\/a>) and standard datasets like CIFAR-10 and ImageNet.<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>The implications of these studies are profound. From ensuring the safety of autonomous vehicles and the accuracy of medical diagnoses to safeguarding the integrity of recommender systems and detecting malicious AI-generated content, adversarial robustness is no longer a niche research area but a fundamental requirement for deploying AI in the real world. Attacks like <code>Visual Inception<\/code> and <code>MedFocusLeak<\/code> demonstrate that adversarial manipulations are becoming stealthier and more deeply integrated into the data stream, bypassing traditional defenses.<\/p>\n<p>The theoretical understanding of data-parameter correspondence and the geometric properties of manifolds opens new avenues for designing more robust models from the ground up, moving beyond reactive patching. 
The effectiveness of biologically inspired defenses like the G-filter further underscores the potential of interdisciplinary approaches. Moreover, the systematic evaluations of training-free methods for LLM trustworthiness and the comprehensive study of Android malware detection highlight the need for standardized benchmarks and a deeper understanding of underlying vulnerabilities.<\/p>\n<p>Looking ahead, the focus will undoubtedly shift towards proactive, integrated defense strategies that account for multimodal, hierarchical, and long-term vulnerabilities. The challenge remains to develop AI systems that are not just intelligent but also inherently resilient, trustworthy, and fair in the face of increasingly sophisticated adversaries. The journey toward truly robust AI is complex, but these recent breakthroughs offer a compelling glimpse into a more secure future.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 18 papers on adversarial attacks: Apr. 25, 2026<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,63],"tags":[157,1621,1042,158,59,361],"class_list":["post-6670","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-machine-learning","tag-adversarial-attacks","tag-main_tag_adversarial_attacks","tag-adversarial-defense","tag-adversarial-robustness","tag-vision-language-models","tag-zero-shot-classification"],"yoast_head":"<!-- This site is optimized 
with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!<\/title>\n<meta name=\"description\" content=\"Latest 18 papers on adversarial attacks: Apr. 25, 2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!\" \/>\n<meta property=\"og:description\" content=\"Latest 18 papers on adversarial attacks: Apr. 25, 2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-04-25T05:19:16+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/25\\\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/25\\\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!\",\"datePublished\":\"2026-04-25T05:19:16+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/25\\\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\\\/\"},\"wordCount\":1399,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"adversarial attacks\",\"adversarial attacks\",\"adversarial defense\",\"adversarial robustness\",\"vision-language models\",\"zero-shot classification\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Machine 
Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/25\\\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/25\\\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/25\\\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\\\/\",\"name\":\"Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2026-04-25T05:19:16+00:00\",\"description\":\"Latest 18 papers on adversarial attacks: Apr. 25, 2026\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/25\\\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/25\\\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/04\\\/25\\\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest 
research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot 
is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!","description":"Latest 18 papers on adversarial attacks: Apr. 25, 2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/","og_locale":"en_US","og_type":"article","og_title":"Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!","og_description":"Latest 18 papers on adversarial attacks: Apr. 25, 2026","og_url":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2026-04-25T05:19:16+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!","datePublished":"2026-04-25T05:19:16+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/"},"wordCount":1399,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["adversarial attacks","adversarial attacks","adversarial defense","adversarial robustness","vision-language models","zero-shot classification"],"articleSection":["Artificial Intelligence","Computer Vision","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/","url":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/","name":"Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2026-04-25T05:19:16+00:00","description":"Latest 18 papers on adversarial attacks: Apr. 
25, 2026","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2026\/04\/25\/adversarial-attacks-on-ai-from-deepfakes-to-self-driving-cars-and-beyond\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Adversarial Attacks on AI: From Deepfakes to Self-Driving Cars and Beyond!"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\
/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. 
Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":30,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-1JA","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/6670","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=6670"}],"version-history":[{"count":0,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/6670\/revisions"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=6670"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=6670"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=6670"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}