{"id":2148,"date":"2025-11-30T13:02:22","date_gmt":"2025-11-30T13:02:22","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/"},"modified":"2025-12-28T21:07:18","modified_gmt":"2025-12-28T21:07:18","slug":"adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/","title":{"rendered":"Adversarial Attacks: Navigating the Shifting Sands of AI Security"},"content":{"rendered":"<h3>Latest 50 papers on adversarial attacks: Nov. 30, 2025<\/h3>\n<p>The world of AI\/ML is a double-edged sword: powerful, transformative, and increasingly vital to our daily lives. Yet, beneath its gleaming surface lies a complex landscape of vulnerabilities, where sophisticated \u201cadversarial attacks\u201d relentlessly challenge the robustness and trustworthiness of our most advanced models. This isn\u2019t just a theoretical concern; from manipulating financial forecasts to hijacking autonomous robots, these attacks pose tangible threats to real-world applications. This post dives into a recent collection of groundbreaking research, revealing the latest advancements in both offensive and defensive adversarial machine learning.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>Recent research highlights a crucial shift: attacks are becoming more precise, multi-modal, and transferable, while defenses are evolving to be more efficient, interpretable, and context-aware. A significant theme across several papers is the exploitation of <em>cross-modal interactions<\/em> and <em>semantic nuances<\/em> in complex AI systems. 
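Before diving into specifics, it helps to pin down the gradient-sign primitive that nearly all of these attacks generalize. The NumPy sketch below implements the classic one-step FGSM perturbation on a toy linear model with a hinge-style loss; the model, loss, and budget here are illustrative assumptions, not the method of any paper surveyed in this post.<\/p>

```python
import numpy as np

# Toy linear classifier: score = w . x with label y in {-1, +1} and a
# hinge-style loss L = max(0, 1 - y * (w . x)).  On a not-yet-confident
# example the input gradient is simply dL/dx = -y * w.
rng = np.random.default_rng(0)
w = rng.normal(size=8)         # fixed model weights (illustrative)
x = rng.normal(size=8)         # clean input
y = 1.0                        # true label

grad_x = -y * w                # analytic input gradient of the loss
eps = 0.1                      # L-infinity perturbation budget
delta = eps * np.sign(grad_x)  # FGSM: one signed-gradient step
x_adv = x + delta

# By construction each coordinate of the perturbation is exactly +/- eps.
print(np.max(np.abs(delta)))   # prints 0.1
```

<p>The cross-modal and patch attacks that follow go far beyond this single-step, single-modality primitive, but the underlying recipe recurs: move the input along the sign of the loss gradient. 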
For instance, the <strong>Nanyang Technological University<\/strong> in their paper, <a href=\"https:\/\/arxiv.org\/pdf\/2511.21192\">When Robots Obey the Patch: Universal Transferable Patch Attacks on Vision-Language-Action Models<\/a>, introduces UPA-RFAS, a universal framework for crafting adversarial patches that can trick Vision-Language-Action (VLA) driven robots. This work shows how even subtle patches can hijack text-to-vision attention and misground instructions, demonstrating widespread vulnerabilities. Complementing this, research from <strong>Westlake University<\/strong> and <strong>City University of Hong Kong<\/strong> in <a href=\"https:\/\/arxiv.org\/pdf\/2511.19257\">Medusa: Cross-Modal Transferable Adversarial Attacks on Multimodal Medical Retrieval-Augmented Generation<\/a> reveals how medical AI systems are susceptible to cross-modal attacks that manipulate retrieval processes and distort medical outputs, achieving over 90% attack success rates. Similarly, <a href=\"https:\/\/arxiv.org\/pdf\/2511.20223\">V-Attack: Targeting Disentangled Value Features for Controllable Adversarial Attacks on LVLMs<\/a> by researchers from the <strong>Chinese Academy of Sciences<\/strong> shows how targeting disentangled \u2018value features\u2019 can enable precise, controllable attacks on Large Vision-Language Models (LVLMs), boosting success rates by 36% compared to existing methods. Further emphasizing this cross-modal vulnerability, the paper <a href=\"https:\/\/arxiv.org\/pdf\/2511.20002\">On the Feasibility of Hijacking MLLMs\u2019 Decision Chain via One Perturbation<\/a> from <strong>The Chinese University of Hong Kong, Shenzhen<\/strong> reveals how a single, semantic-aware perturbation can hijack the decision chain of MLLMs to manipulate outputs toward multiple predefined outcomes.<\/p>\n<p>Defensively, innovations are focusing on architectural robustness and efficient training. 
<strong>Nanjing University of Science and Technology<\/strong>\u2019s <a href=\"https:\/\/arxiv.org\/pdf\/2511.21574\">Multimodal Robust Prompt Distillation for 3D Point Cloud Models<\/a> introduces MRPD, a teacher-student framework that distills robustness into lightweight prompts for 3D point cloud models, achieving robust defense without additional inference costs. For large language models, the paper <a href=\"https:\/\/arxiv.org\/pdf\/2511.19523\">EAGER: Edge-Aligned LLM Defense for Robust, Efficient, and Accurate Cybersecurity Question Answering<\/a> by <strong>University of California, San Diego<\/strong> presents EAGER, a co-design framework that integrates quantization-aware fine-tuning with domain-specific preference alignment, reducing adversarial attack success rates by up to 7.3x. Also in the realm of LLM defense, <strong>Tel Aviv University<\/strong>\u2019s <a href=\"https:\/\/arxiv.org\/pdf\/2511.12217\">AlignTree: Efficient Defense Against LLM Jailbreak Attacks<\/a> introduces a lightweight classifier combining linear and non-linear signals for robust detection of harmful prompts. Meanwhile, <strong>University of Science &amp; Technology of China<\/strong> and <strong>University of North Carolina at Chapel Hill<\/strong> tackle the multi-modal challenge with <a href=\"https:\/\/arxiv.org\/pdf\/2511.18138\">Vulnerability-Aware Robust Multimodal Adversarial Training<\/a>, demonstrating VARMAT, a method that identifies and mitigates modality-specific vulnerabilities to significantly improve robustness. 
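The common engine behind such defenses is min-max adversarial training: an inner step perturbs each batch toward higher loss, and an outer step updates the model on the perturbed batch. A minimal NumPy sketch on a toy logistic model follows; the data, loss, and hyperparameters are illustrative assumptions, not the setup of VARMAT or any other paper above.<\/p>

```python
import numpy as np

# Minimal adversarial-training loop (illustrative only): the inner step
# perturbs inputs with a signed-gradient (FGSM-style) move, the outer
# step runs gradient descent on logistic loss over the perturbed batch.
rng = np.random.default_rng(1)
n, d = 200, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)             # labels in {-1, +1}

w = np.zeros(d)
eps, lr = 0.05, 0.1
for _ in range(100):
    # Inner maximization: for logistic loss, the input-gradient sign of
    # example i in coordinate j is sign(-y_i * w_j).
    X_adv = X + eps * np.sign(-y[:, None] * w[None, :])
    margins = y * (X_adv @ w)
    # Outer minimization: mean gradient of log(1 + exp(-margin)) in w.
    coef = y / (1.0 + np.exp(margins))
    grad_w = -(X_adv * coef[:, None]).mean(axis=0)
    w = w - lr * grad_w

clean_acc = np.mean(np.sign(X @ w) == y)
```

<p>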
Finally, <strong>Manipal Institute of Technology<\/strong>\u2019s <a href=\"https:\/\/arxiv.org\/pdf\/2511.15807\">TopoReformer: Mitigating Adversarial Attacks Using Topological Purification in OCR Models<\/a> presents a model-agnostic framework that uses topological features to purify adversarial images in OCR systems, providing a novel defense against various attacks.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>The advancements in adversarial ML are heavily reliant on tailored resources:<\/p>\n<ul>\n<li><strong>MRPD<\/strong> (<a href=\"https:\/\/github.com\/eminentgu\/MRPD\">https:\/\/github.com\/eminentgu\/MRPD<\/a>) utilizes multimodal knowledge from vision, text, and 3D teachers for 3D point cloud model defense.<\/li>\n<li><strong>UPA-RFAS<\/strong> (<a href=\"https:\/\/github.com\/huilu-ntu\/UPA-RFAS\">https:\/\/github.com\/huilu-ntu\/UPA-RFAS<\/a>) focuses on VLA-driven robots, demonstrating its attacks across diverse VLA models and sim-to-real settings.<\/li>\n<li><strong>EAGER<\/strong> (<a href=\"https:\/\/github.com\/onatgungor\/EAGER\">https:\/\/github.com\/onatgungor\/EAGER<\/a>) leverages a self-generated cybersecurity preference dataset for LLM alignment on edge devices like Jetson Orin.<\/li>\n<li>The paper <a href=\"https:\/\/arxiv.org\/pdf\/2511.20002\">On the Feasibility of Hijacking MLLMs\u2019 Decision Chain via One Perturbation<\/a> introduces <strong>RIST<\/strong>, a real-world image dataset with fine-grained semantic annotations to evaluate MLLM attack performance.<\/li>\n<li><strong>TopoReformer<\/strong> (<a href=\"https:\/\/github.com\/invi-bhagyesh\/TopoReformer\">https:\/\/github.com\/invi-bhagyesh\/TopoReformer<\/a>) is evaluated against various OCR models and attacks (FGSM, PGD, Carlini\u2013Wagner, EOT, BDPA, FAWA) on datasets like EMNIST and MNIST.<\/li>\n<li><strong>Q-MLLM<\/strong> (<a 
href=\"https:\/\/github.com\/Amadeuszhao\/QMLLM\">https:\/\/github.com\/Amadeuszhao\/QMLLM<\/a>) is a novel architecture using vector quantization for robust defense against adversarial attacks on MLLMs.<\/li>\n<li><strong>AlignTree<\/strong> (<a href=\"https:\/\/github.com\/Gilgo2\/AlignTree\">https:\/\/github.com\/Gilgo2\/AlignTree<\/a>) uses a random forest classifier and non-linear SVMs for efficient LLM jailbreak defense.<\/li>\n<li><strong>V-Attack<\/strong> (<a href=\"https:\/\/github.com\/Summu77\/V-Attack\">https:\/\/github.com\/Summu77\/V-Attack<\/a>) extensively experiments across multiple open-source and commercial LVLMs.<\/li>\n<li><strong>MOS-Attack<\/strong> (<a href=\"https:\/\/github.com\/pgg3\/MOS-Attack\">https:\/\/github.com\/pgg3\/MOS-Attack<\/a>) demonstrates superior performance on benchmark datasets like CIFAR-10 and ImageNet.<\/li>\n<li><strong>Cutter<\/strong> (<a href=\"https:\/\/github.com\/Qisenne\/Cutter\">https:\/\/github.com\/Qisenne\/Cutter<\/a>) uses real-world graphs for robustness evaluation, with potential for GCN training.<\/li>\n<li><strong>MedFedPure<\/strong> (<a href=\"https:\/\/arxiv.org\/pdf\/2511.11625\">https:\/\/arxiv.org\/pdf\/2511.11625<\/a>) introduces MAE-based detection and diffusion purification for medical federated systems.<\/li>\n<li><a href=\"https:\/\/arxiv.org\/pdf\/2408.10901\">A Gray-box Attack against Latent Diffusion Model-based Image Editing by Posterior Collapse<\/a> from <strong>University of California, Berkeley<\/strong>, <strong>Stanford University<\/strong>, and <strong>Google Research<\/strong> provides code for replication and evaluation at <a href=\"https:\/\/github.com\/ZhongliangGuo\/PosteriorCollapseAttack\">https:\/\/github.com\/ZhongliangGuo\/PosteriorCollapseAttack<\/a>.<\/li>\n<li><strong>PSM<\/strong> (<a href=\"https:\/\/github.com\/psm-defense\/psm\">https:\/\/github.com\/psm-defense\/psm<\/a>) is a black-box compatible defense, working with any API-accessible 
LLM.<\/li>\n<li><strong>TRAP<\/strong> (<a href=\"https:\/\/github.com\/uiuc-focal-lab\/TRAP\">https:\/\/github.com\/uiuc-focal-lab\/TRAP<\/a>) is evaluated on leading multimodal models like LLaVA-34B, Gemma3, GPT-4o, and Mistral-3.2.<\/li>\n<li><strong>VLA-Fool<\/strong> from <strong>Westlake University<\/strong> and others (<a href=\"https:\/\/arxiv.org\/pdf\/2511.16203\">https:\/\/arxiv.org\/pdf\/2511.16203<\/a>) is a comprehensive framework for VLA models.<\/li>\n<li><strong>Multi-Faceted Attack<\/strong> (<a href=\"https:\/\/github.com\/cure-lab\/MultiFacetedAttack\">https:\/\/github.com\/cure-lab\/MultiFacetedAttack<\/a>) targets leading commercial and open-source VLMs like GPT-4o and LlaMA 4.<\/li>\n<li><strong>MPD-SGR<\/strong> (<a href=\"https:\/\/github.com\/runhaojiang\/mpd-sgr\">https:\/\/github.com\/runhaojiang\/mpd-sgr<\/a>) is validated across multiple SNN architectures and datasets.<\/li>\n<li><strong>Robust Bidirectional Associative Memory<\/strong> (<a href=\"https:\/\/github.com\/Developer2046\/Bidirectional_Associative_Memory_SRA\">https:\/\/github.com\/Developer2046\/Bidirectional_Associative_Memory_SRA<\/a>) introduces the B-SRA algorithm.<\/li>\n<li><a href=\"https:\/\/arxiv.org\/pdf\/2406.19622\">Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness<\/a> and <a href=\"https:\/\/arxiv.org\/pdf\/2111.02331\">LTD: Low Temperature Distillation for Gradient Masking-free Adversarial Training<\/a> from <strong>National Tsing Hua University<\/strong> and <strong>IBM Research<\/strong> provide code bases at <a href=\"https:\/\/github.com\/IBMResearch\/data-driven-lipschitz-robustness\">https:\/\/github.com\/IBMResearch\/data-driven-lipschitz-robustness<\/a> and <a href=\"https:\/\/github.com\/MadryLab\/robustness\">https:\/\/github.com\/MadryLab\/robustness<\/a> respectively.<\/li>\n<li><strong>Instant Concept Erasure (ICE)<\/strong> by <strong>Purdue University<\/strong> (<a 
href=\"https:\/\/github.com\/sdasbisw\/InstantConceptErasure\">https:\/\/github.com\/sdasbisw\/InstantConceptErasure<\/a>) is applicable to T2I and T2V models.<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>These advancements have profound implications. The increasing sophistication of cross-modal and semantic attacks on systems from autonomous robots to medical AI necessitates a paradigm shift in our approach to AI security. We can no longer solely rely on pixel-level defenses; understanding and protecting against <em>semantic hijacking<\/em> and <em>decision chain manipulation<\/em> is paramount. The focus on efficient, interpretable, and context-aware defenses, such as prompt distillation, quantization-aware fine-tuning, and topological purification, signals a move towards more practical and deployable solutions. The development of new benchmarks and analytical frameworks, like RIST for MLLMs and the uniform number scale for transferability attacks, is crucial for rigorously evaluating model robustness.<\/p>\n<p>Looking ahead, we can expect continued escalation in this AI arms race. The insights into vulnerabilities in high-dimensional distributed learning, financial time-series predictions, and even neuro-inspired SNNs highlight that no domain is truly safe. Future research will likely converge on adaptive, self-learning defense mechanisms that can anticipate and neutralize novel attack vectors, possibly leveraging meta-learning and real-time threat detection as explored in <a href=\"https:\/\/arxiv.org\/pdf\/2506.21127\">Meta Policy Switching for Secure UAV Deconfliction in Adversarial Airspace<\/a>. The quest for truly robust and trustworthy AI continues, driven by these relentless challenges and the innovative solutions they inspire. 
The future of AI security lies in proactive, multi-faceted approaches that mirror the complexity of the attacks themselves, ensuring that our intelligent systems remain reliable, safe, and aligned with human intent. The journey to truly secure AI is just beginning, and these papers illuminate critical steps along the way.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 50 papers on adversarial attacks: Nov. 30, 2025<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,113,63],"tags":[157,1621,158,1293,64,548],"class_list":["post-2148","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-cryptography-security","category-machine-learning","tag-adversarial-attacks","tag-main_tag_adversarial_attacks","tag-adversarial-robustness","tag-decision-making-bias","tag-diffusion-models","tag-spiking-neural-networks"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Adversarial Attacks: Navigating the Shifting Sands of AI Security<\/title>\n<meta name=\"description\" content=\"Latest 50 papers on adversarial attacks: Nov. 
30, 2025\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Adversarial Attacks: Navigating the Shifting Sands of AI Security\" \/>\n<meta property=\"og:description\" content=\"Latest 50 papers on adversarial attacks: Nov. 30, 2025\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-30T13:02:22+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-28T21:07:18+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/30\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/30\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Adversarial Attacks: Navigating the Shifting Sands of AI Security\",\"datePublished\":\"2025-11-30T13:02:22+00:00\",\"dateModified\":\"2025-12-28T21:07:18+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/30\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\\\/\"},\"wordCount\":1239,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"adversarial attacks\",\"adversarial attacks\",\"adversarial robustness\",\"decision-making bias\",\"diffusion models\",\"spiking neural networks\"],\"articleSection\":[\"Artificial Intelligence\",\"Cryptography and Security\",\"Machine 
Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/30\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/30\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/30\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\\\/\",\"name\":\"Adversarial Attacks: Navigating the Shifting Sands of AI Security\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2025-11-30T13:02:22+00:00\",\"dateModified\":\"2025-12-28T21:07:18+00:00\",\"description\":\"Latest 50 papers on adversarial attacks: Nov. 30, 2025\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/30\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/30\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/30\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Adversarial Attacks: Navigating the Shifting Sands of AI Security\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest 
research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot 
is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Adversarial Attacks: Navigating the Shifting Sands of AI Security","description":"Latest 50 papers on adversarial attacks: Nov. 30, 2025","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/","og_locale":"en_US","og_type":"article","og_title":"Adversarial Attacks: Navigating the Shifting Sands of AI Security","og_description":"Latest 50 papers on adversarial attacks: Nov. 30, 2025","og_url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2025-11-30T13:02:22+00:00","article_modified_time":"2025-12-28T21:07:18+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Adversarial Attacks: Navigating the Shifting Sands of AI Security","datePublished":"2025-11-30T13:02:22+00:00","dateModified":"2025-12-28T21:07:18+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/"},"wordCount":1239,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["adversarial attacks","adversarial attacks","adversarial robustness","decision-making bias","diffusion models","spiking neural networks"],"articleSection":["Artificial Intelligence","Cryptography and Security","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/","url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/","name":"Adversarial Attacks: Navigating the Shifting Sands of AI Security","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2025-11-30T13:02:22+00:00","dateModified":"2025-12-28T21:07:18+00:00","description":"Latest 50 papers on adversarial attacks: Nov. 
30, 2025","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/30\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-2\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Adversarial Attacks: Navigating the Shifting Sands of AI Security"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type"
:"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. 
Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":44,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-yE","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/2148","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=2148"}],"version-history":[{"count":1,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/2148\/revisions"}],"predecessor-version":[{"id":3075,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/2148\/revisions\/3075"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=2148"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=2148"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=2148"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}