{"id":6794,"date":"2026-05-02T03:43:01","date_gmt":"2026-05-02T03:43:01","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/"},"modified":"2026-05-02T03:43:01","modified_gmt":"2026-05-02T03:43:01","slug":"adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/","title":{"rendered":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024"},"content":{"rendered":"<h3>Latest 14 papers on adversarial attacks: May. 2, 2026<\/h3>\n<p>The world of AI\/ML is advancing at breakneck speed, but with every leap forward, new security challenges emerge. Adversarial attacks \u2013 subtle, often imperceptible perturbations designed to fool AI models \u2013 remain a persistent and evolving threat. From autonomous vehicles to quantum computing and large language models, researchers are unearthing novel vulnerabilities and devising ingenious countermeasures. This post dives into recent breakthroughs, exploring how the community is tackling these sophisticated threats.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>Recent research highlights a crucial shift: understanding the <em>structure<\/em> of adversarial vulnerabilities and leveraging <em>generative AI<\/em> for defense. A groundbreaking insight from <strong>Washington University in St.\u00a0Louis<\/strong> in their paper, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.27487\">Low Rank Adaptation for Adversarial Perturbation<\/a>\u201d, reveals that adversarial perturbations inherently exhibit a low-rank structure, much like LoRA model updates. This discovery is a game-changer for black-box attacks, drastically reducing query requirements by up to 90% by constraining the search to this low-rank subspace.<\/p>\n<p>Parallel to understanding attack structures, a new paradigm for defense is emerging: <em>adversarial disillusion<\/em> through generative AI. Researchers from the <strong>National Institute of Informatics, Tokyo<\/strong>, in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2501.19143\">Imitation Game for Adversarial Disillusion with Chain-of-Thought Reasoning in Generative AI<\/a>\u201d, propose an \u201cimitation game\u201d where multimodal generative AI (like ChatGPT with DALL-E) reconstructs the <em>semantic essence<\/em> of samples, effectively neutralizing both inference-time and learning-time attacks without needing pixel-perfect restoration. This semantic-preserving approach achieves remarkable 94-97% accuracy against diverse attacks.<\/p>\n<p>In the critical domain of autonomous driving, the threat of <em>transferable<\/em> and <em>universal<\/em> adversarial attacks is intensifying. Work from <strong>Clemson University<\/strong> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.27414\">Understanding Adversarial Transferability in Vision-Language Models for Autonomous Driving: A Cross-Architecture Analysis<\/a>\u201d shows alarmingly high cross-architecture transferability (73-91%) for adversarial patches against Vision-Language Models (VLMs), indicating that architectural diversity alone offers limited protection. Further, <strong>Huazhong University of Science and Technology<\/strong>\u2019s AdvAD framework, detailed in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.23105\">Transferable Physical-World Adversarial Patches Against Object Detection in Autonomous Driving<\/a>\u201d, introduces a detection-aware dynamic weighting strategy and realistic deployment augmentation, significantly improving transferability and physical robustness for patches against object detectors. Complementing this, <strong>Beihang University<\/strong>\u2019s ADvLM framework, presented in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2411.18275\">Visual Adversarial Attack on Vision-Language Models for Autonomous Driving<\/a>\u201d, is the first to specifically target VLMs in autonomous driving by tackling textual instruction variability and time-series visual scenarios, even achieving a 70% vehicle deviation rate in real-world physical tests. And <strong>City University of Hong Kong<\/strong>\u2019s UniAda, from \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.23362\">UniAda: Universal Adaptive Multi-objective Adversarial Attack for End-to-End Autonomous Driving Systems<\/a>\u201d, showcases multi-objective universal perturbations that simultaneously affect both steering and speed controls, achieving significant deviations in both parameters.<\/p>\n<p>The theoretical underpinnings of robustness are also being refined. <strong>Tsinghua University<\/strong>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.25965\">Adversarial Robustness of NTK Neural Networks<\/a>\u201d provides a theoretical analysis of Neural Tangent Kernel (NTK) networks, proving that early stopping is crucial for achieving minimax optimal adversarial risk, while overfitting leads to divergent risk. This is echoed by <strong>Anhui University<\/strong>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.24350\">Unveiling the Backdoor Mechanism Hidden Behind Catastrophic Overfitting in Fast Adversarial Training<\/a>\u201d, which unifies catastrophic overfitting with backdoor attacks, demonstrating that CO arises from \u201ctrigger overfitting\u201d and can be mitigated with backdoor-inspired defenses.<\/p>\n<p>Beyond vision, the security implications extend to quantum computing and LLMs. The <strong>University of Florida<\/strong>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.28176\">Defending Quantum Classifiers against Adversarial Perturbations through Quantum Autoencoders<\/a>\u201d introduces QAE++, an adversarial training-free defense using quantum autoencoders to purify adversarial samples, achieving up to 68% better accuracy than classical defenses. For LLMs, <strong>Carnegie Mellon University<\/strong>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.27093\">Useless but Safe? Benchmarking Utility Recovery with User Intent Clarification in Multi-Turn Conversations<\/a>\u201d introduces CARRYONBENCH, revealing that LLMs struggle to recover helpfulness across turns, often exhibiting \u201cutility lock-in\u201d or \u201cunsafe recovery,\u201d highlighting the challenge of balancing safety and utility. This is further underscored by <strong>University of Toronto<\/strong>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.23341\">Evaluating Jailbreaking Vulnerabilities in LLMs Deployed as Assistants for Smart Grid Operations: A Benchmark Against NERC Standards<\/a>\u201d, which found a 33.1% overall Attack Success Rate for jailbreaking LLMs assisting in smart grid operations, with DeepInception attacks exploiting psychological manipulation. And <strong>Czech Technical University in Prague<\/strong>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2604.22639\">Adversarial Malware Generation in Linux ELF Binaries via Semantic-Preserving Transformations<\/a>\u201d demonstrates a genetic algorithm workflow that achieves a 67.74% evasion rate against malware classifiers by subtly modifying Linux ELF binaries.<\/p>\n<p>Finally, for a new type of stealth attack, <strong>Beihang University<\/strong>\u2019s \u201c<a href=\"https:\/\/arxiv.com\/pdf\/2505.12009\">LatentStealth: Unnoticeable and Efficient Adversarial Attacks on Expressive Human Pose and Shape Estimation<\/a>\u201d proposes perturbing the latent space of VAEs rather than pixel space to generate visually imperceptible yet highly effective attacks against Expressive Human Pose and Shape Estimation systems.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>These advancements are built upon a foundation of robust experimental setups and theoretical frameworks:<\/p>\n<ul>\n<li><strong>Key Models Utilized\/Introduced:<\/strong>\n<ul>\n<li><strong>Quantum Autoencoders (QAE++):<\/strong> Introduced for purifying adversarial samples in quantum machine learning (<a href=\"https:\/\/arxiv.org\/pdf\/2604.28176\">Defending Quantum Classifiers against Adversarial Perturbations through Quantum Autoencoders<\/a>).<\/li>\n<li><strong>Generative AI (ChatGPT, DALL-E):<\/strong> Leveraged as a multimodal generative agent for adversarial disillusion defense (<a href=\"https:\/\/arxiv.org\/pdf\/2501.19143\">Imitation Game for Adversarial Disillusion with Chain-of-Thought Reasoning in Generative AI<\/a>).<\/li>\n<li><strong>Vision-Language Models (Dolphins, OmniDrive, LeapVAD, DriveLM, LMDrive):<\/strong> Core targets for adversarial transferability and attack frameworks in autonomous driving (<a href=\"https:\/\/arxiv.org\/pdf\/2604.27414\">Understanding Adversarial Transferability in Vision-Language Models for Autonomous Driving: A Cross-Architecture Analysis<\/a>, <a href=\"https:\/\/arxiv.org\/pdf\/2411.18275\">Visual Adversarial Attack on Vision-Language Models for Autonomous Driving<\/a>).<\/li>\n<li><strong>NTK Neural Networks:<\/strong> Subject of theoretical analysis for adversarial robustness properties (<a href=\"https:\/\/arxiv.org\/pdf\/2604.25965\">Adversarial Robustness of NTK Neural Networks<\/a>).<\/li>\n<li><strong>LLMs (GPT-4o mini, Gemini 2.0 Flash-Lite, Claude 3.5 Haiku):<\/strong> Evaluated for jailbreaking vulnerabilities in critical infrastructure contexts (<a href=\"https:\/\/arxiv.org\/pdf\/2604.23341\">Evaluating Jailbreaking Vulnerabilities in LLMs Deployed as Assistants for Smart Grid Operations: A Benchmark Against NERC Standards<\/a>).<\/li>\n<li><strong>MalConv Classifier:<\/strong> Target for adversarial malware generation in Linux ELF binaries (<a href=\"https:\/\/arxiv.org\/pdf\/2604.22639\">Adversarial Malware Generation in Linux ELF Binaries via Semantic-Preserving Transformations<\/a>).<\/li>\n<li><strong>EHPS Models (SMPLer-X, OSX, Hand4Whole):<\/strong> Targeted by latent space attacks for pose and shape estimation (<a href=\"https:\/\/arxiv.com\/pdf\/2505.12009\">LatentStealth: Unnoticeable and Efficient Adversarial Attacks on Expressive Human Pose and Shape Estimation<\/a>).<\/li>\n<\/ul>\n<\/li>\n<li><strong>Significant Datasets &amp; Benchmarks:<\/strong>\n<ul>\n<li><strong>MNIST, FashionMNIST:<\/strong> Used for evaluating quantum autoencoder defenses (<a href=\"https:\/\/arxiv.org\/pdf\/2604.28176\">Defending Quantum Classifiers against Adversarial Perturbations through Quantum Autoencoders<\/a>).<\/li>\n<li><strong>ImageNet, CUB-200, Stanford Cars, Caltech-101, CelebA:<\/strong> Employed in demonstrating low-rank adversarial perturbations (<a href=\"https:\/\/arxiv.org\/pdf\/2604.27487\">Low Rank Adaptation for Adversarial Perturbation<\/a>).<\/li>\n<li><strong>CARLA Simulator:<\/strong> Crucial for evaluating adversarial attacks in autonomous driving, including physical-world transferability (<a href=\"https:\/\/arxiv.org\/pdf\/2604.27414\">Understanding Adversarial Transferability in Vision-Language Models for Autonomous Driving: A Cross-Architecture Analysis<\/a>, <a href=\"https:\/\/arxiv.org\/pdf\/2411.18275\">Visual Adversarial Attack on Vision-Language Models for Autonomous Driving<\/a>, <a href=\"https:\/\/arxiv.org\/pdf\/2604.23362\">UniAda: Universal Adaptive Multi-objective Adversarial Attack for End-to-End Autonomous Driving Systems<\/a>, <a href=\"https:\/\/arxiv.org\/pdf\/2604.23105\">Transferable Physical-World Adversarial Patches Against Object Detection in Autonomous Driving<\/a>).<\/li>\n<li><strong>CARRYONBENCH:<\/strong> A novel interactive multi-turn benchmark for LLM utility recovery and safety (<a href=\"https:\/\/arxiv.org\/pdf\/2604.27093\">Useless but Safe? Benchmarking Utility Recovery with User Intent Clarification in Multi-Turn Conversations<\/a>).<\/li>\n<li><strong>NERC Reliability Standards:<\/strong> Used as the basis for evaluating jailbreaking vulnerabilities in LLMs for smart grid operations (<a href=\"https:\/\/arxiv.org\/pdf\/2604.23341\">Evaluating Jailbreaking Vulnerabilities in LLMs Deployed as Assistants for Smart Grid Operations: A Benchmark Against NERC Standards<\/a>).<\/li>\n<li><strong>Imagenette:<\/strong> A subset of ImageNet used for evaluating generative AI defense frameworks (<a href=\"https:\/\/arxiv.org\/pdf\/2501.19143\">Imitation Game for Adversarial Disillusion with Chain-of-Thought Reasoning in Generative AI<\/a>).<\/li>\n<li><strong>CIFAR-10, CIFAR-100:<\/strong> Used for experiments on catastrophic overfitting and backdoor mechanisms (<a href=\"https:\/\/arxiv.org\/pdf\/2604.24350\">Unveiling the Backdoor Mechanism Hidden Behind Catastrophic Overfitting in Fast Adversarial Training<\/a>).<\/li>\n<li><strong>3DPW, UBody:<\/strong> Datasets for evaluating attacks on Expressive Human Pose and Shape Estimation (<a href=\"https:\/\/arxiv.com\/pdf\/2505.12009\">LatentStealth: Unnoticeable and Efficient Adversarial Attacks on Expressive Human Pose and Shape Estimation<\/a>).<\/li>\n<\/ul>\n<\/li>\n<li><strong>Public Code Repositories:<\/strong>\n<ul>\n<li><strong>UniAda:<\/strong> <a href=\"https:\/\/github.com\/UniAdaRepo\/UniAda\/\">https:\/\/github.com\/UniAdaRepo\/UniAda\/<\/a><\/li>\n<li><strong>MalConv-Pytorch:<\/strong> <a href=\"https:\/\/github.com\/Alexander-H-Liu\/MalConv-Pytorch\">https:\/\/github.com\/Alexander-H-Liu\/MalConv-Pytorch<\/a><\/li>\n<li><strong>Labeled-Elfs:<\/strong> <a href=\"https:\/\/github.com\/nimrodpar\/Labeled-Elfs\">https:\/\/github.com\/nimrodpar\/Labeled-Elfs<\/a><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>These advancements paint a vivid picture of the ongoing arms race in AI security. The ability to identify low-rank structures in adversarial perturbations could lead to more efficient attacks, but also more targeted defenses. The rise of generative AI for defense suggests a paradigm shift: instead of trying to perfectly restore corrupted inputs, we might focus on preserving their semantic meaning, a concept that could revolutionize robustness. The alarming findings in autonomous driving and smart grids underscore the urgent need for robust, real-world deployment of AI, moving beyond digital-only evaluations to account for physical and cross-architectural transferability. The theoretical insights into overfitting and adversarial risk provide foundational guidance for designing more stable and secure models. As AI systems become more ubiquitous and multimodal, understanding and mitigating these sophisticated adversarial attacks will be paramount to ensuring trust and safety across all applications, from critical infrastructure to personal digital assistants. The road ahead demands collaborative, interdisciplinary research to build AI that is not just intelligent, but also resilient and secure against an ever-evolving threat landscape.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 14 papers on adversarial attacks: May. 2, 2026<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[55,113,63],"tags":[157,1621,124,402,457,59],"class_list":["post-6794","post","type-post","status-publish","format-standard","hentry","category-computer-vision","category-cryptography-security","category-machine-learning","tag-adversarial-attacks","tag-main_tag_adversarial_attacks","tag-autonomous-driving","tag-backdoor-attacks","tag-vision-transformer","tag-vision-language-models"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024<\/title>\n<meta name=\"description\" content=\"Latest 14 papers on adversarial attacks: May. 2, 2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024\" \/>\n<meta property=\"og:description\" content=\"Latest 14 papers on adversarial attacks: May. 2, 2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-05-02T03:43:01+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/05\\\/02\\\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/05\\\/02\\\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024\",\"datePublished\":\"2026-05-02T03:43:01+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/05\\\/02\\\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\\\/\"},\"wordCount\":1370,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"adversarial attacks\",\"adversarial attacks\",\"autonomous driving\",\"backdoor attacks\",\"vision transformer\",\"vision-language models\"],\"articleSection\":[\"Computer Vision\",\"Cryptography and Security\",\"Machine Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/05\\\/02\\\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/05\\\/02\\\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/05\\\/02\\\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\\\/\",\"name\":\"Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2026-05-02T03:43:01+00:00\",\"description\":\"Latest 14 papers on adversarial attacks: May. 2, 2026\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/05\\\/02\\\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/05\\\/02\\\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/05\\\/02\\\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024","description":"Latest 14 papers on adversarial attacks: May. 2, 2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/","og_locale":"en_US","og_type":"article","og_title":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024","og_description":"Latest 14 papers on adversarial attacks: May. 2, 2026","og_url":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2026-05-02T03:43:01+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024","datePublished":"2026-05-02T03:43:01+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/"},"wordCount":1370,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["adversarial attacks","adversarial attacks","autonomous driving","backdoor attacks","vision transformer","vision-language models"],"articleSection":["Computer Vision","Cryptography and Security","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/","url":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/","name":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2026-05-02T03:43:01+00:00","description":"Latest 14 papers on adversarial attacks: May. 2, 2026","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-in-2024\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security in 2024"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":7,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-1LA","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/6794","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=6794"}],"version-history":[{"count":0,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/6794\/revisions"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=6794"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=6794"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=6794"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}