{"id":1982,"date":"2025-11-23T08:18:43","date_gmt":"2025-11-23T08:18:43","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/"},"modified":"2025-12-28T21:17:46","modified_gmt":"2025-12-28T21:17:46","slug":"adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/","title":{"rendered":"Adversarial Attacks: Navigating the Shifting Sands of AI Security and Robustness"},"content":{"rendered":"<h3>Latest 50 papers on adversarial attacks: Nov. 23, 2025<\/h3>\n<p>The landscape of Artificial Intelligence is evolving at breakneck speed, but with every advancement comes a new frontier of security challenges. Adversarial attacks, those insidious attempts to trick AI models with subtle, often imperceptible perturbations, remain a paramount concern across every domain, from computer vision to large language models and multi-agent systems. Recent research is not only uncovering novel attack vectors but also pioneering sophisticated defense mechanisms, pushing the boundaries of what it means to build truly robust and trustworthy AI. This post dives into a collection of recent breakthroughs, exploring how researchers are both breaking and fortifying our most advanced AI systems.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>At the heart of these recent papers lies a continuous cat-and-mouse game between attackers and defenders, with innovation stemming from both sides. A significant theme is the move beyond simple pixel-level perturbations to more sophisticated, semantically aware, and multi-modal attacks. 
For instance, the <strong>Q-MLLM<\/strong> framework from researchers at <a href=\"https:\/\/dx.doi.org\/10.14722\/ndss.2026.230407\">University of California, San Diego<\/a> pioneers a novel quantization-based defense against adversarial attacks on multimodal large language models (MLLMs). Their key insight involves introducing discrete bottlenecks in visual features via vector quantization, effectively blocking adversarial gradient paths and achieving impressive defense rates against jailbreak and toxic image attacks.<\/p>\n<p>On the attack front, works like \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.16203\">When Alignment Fails: Multimodal Adversarial Attacks on Vision-Language-Action Models<\/a>\u201d by researchers including Yuping Yan from <a href=\"https:\/\/arxiv.org\/pdf\/2511.16203\">TGAI Lab, School of Engineering, Westlake University<\/a>, introduce <strong>VLA-Fool<\/strong>. This framework systematically reveals that even minor, cross-modal perturbations can significantly disrupt Vision-Language-Action (VLA) models, leading to substantial behavioral deviations. Similarly, the \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.16110\">Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models<\/a>\u201d from institutions like <a href=\"https:\/\/github.com\/cure-lab\/MultiFacetedAttack\">The Chinese University of Hong Kong<\/a> demonstrates how the <strong>MFA<\/strong> framework can bypass multiple layers of VLM defenses by exploiting shared visual representations, achieving a 58.5% success rate against leading models.<\/p>\n<p>The realm of Large Language Models (LLMs) is particularly active. \u201c<a href=\"https:\/\/github.com\/psm-defense\/psm\">PSM: Prompt Sensitivity Minimization via LLM-Guided Black-Box Optimization<\/a>\u201d by Hussein Jawad and Nicolas J-B. 
Brunel from <a href=\"https:\/\/github.com\/psm-defense\/psm\">Capgemini Invent, Paris, France<\/a> presents a lightweight, black-box method for shielding system prompts from extraction attacks by minimizing leakage while preserving utility. Complementing this, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.13771\">ExplainableGuard: Interpretable Adversarial Defense for Large Language Models Using Chain-of-Thought Reasoning<\/a>\u201d by Shaowei Guan and colleagues at <a href=\"https:\/\/arxiv.org\/pdf\/2511.13771\">The Hong Kong Polytechnic University<\/a> introduces a defense mechanism that not only detects attacks but also provides transparent, step-by-step explanations, enhancing trustworthiness.<\/p>\n
<p>Beyond perception and language, multi-agent systems and robotics are also in the crosshairs. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.15292\">Adversarial Attack on Black-Box Multi-Agent by Adaptive Perturbation<\/a>\u201d introduces <strong>AdapAM<\/strong>, a stealthy black-box attack leveraging proxy agents and adaptive selection policies. In robotics, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2507.08303\">Keep on Going: Learning Robust Humanoid Motion Skills via Selective Adversarial Training<\/a>\u201d introduces <strong>SA2RT<\/strong>, a novel selective adversarial training method that dramatically improves the robustness of humanoid motion policies in real-world environments.<\/p>\n
<p>Other notable innovations include:<\/p>\n<ul>\n
<li><strong>TopoReformer<\/strong> (<a href=\"https:\/\/github.com\/invi-bhagyesh\/TopoReformer\">Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal<\/a>) for OCR models, which uses topological purification to filter adversarial noise without adversarial training.<\/li>\n
<li><strong>MedFedPure<\/strong> (<a href=\"https:\/\/arxiv.org\/pdf\/2511.11625\">Institute of Medical AI, University X<\/a>) for federated medical AI, integrating MAE-based detection and diffusion purification against inference-time attacks.<\/li>\n
<li><strong>MPD-SGR<\/strong> (<a href=\"https:\/\/github.com\/runhaojiang\/mpd-sgr\">Zhejiang University<\/a>), which enhances the adversarial robustness of Spiking Neural Networks (SNNs) by regulating the membrane potential distribution.<\/li>\n
<li><strong>SHIFT<\/strong> (<a href=\"https:\/\/arxiv.org\/pdf\/2511.07701\">Tulane University<\/a>), a diffusion-based attack for RL that generates semantically different yet realistic state perturbations.<\/li>\n
<li><strong>CD-MTA<\/strong> (<a href=\"https:\/\/github.com\/tgoncalv\/CD-MTA\">Tohoku University<\/a>) for cross-domain multi-targeted adversarial attacks without victim-model access.<\/li>\n<\/ul>\n
<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>The innovations above are underpinned by a rich array of models, datasets, and benchmarks. Researchers are moving towards more complex, real-world-relevant evaluations, often building new tools to achieve this:<\/p>\n<ul>\n
<li><strong>Q-MLLM<\/strong>: Leverages state-of-the-art MLLMs and a custom zero-shot classification setup for evaluating defense against jailbreak and toxic visual content. Publicly available code: <a href=\"https:\/\/github.com\/Amadeuszhao\/QMLLM\">https:\/\/github.com\/Amadeuszhao\/QMLLM<\/a><\/li>\n
<li><strong>PSM<\/strong>: Demonstrates black-box compatibility with any API-accessible LLM and uses an LLM-as-optimizer for guided search.
Code available at <a href=\"https:\/\/github.com\/psm-defense\/psm\">https:\/\/github.com\/psm-defense\/psm<\/a>.<\/li>\n
<li><strong>VLA-Fool<\/strong>: Evaluates the robustness of multimodal Vision-Language-Action (VLA) models in both white-box and black-box settings, highlighting their fragility to cross-modal misalignment.<\/li>\n
<li><strong>MFA<\/strong>: Targets leading commercial and open-source VLMs like GPT-4o and Llama 4, with code at <a href=\"https:\/\/github.com\/cure-lab\/MultiFacetedAttack\">https:\/\/github.com\/cure-lab\/MultiFacetedAttack<\/a>.<\/li>\n
<li><strong>TopoReformer<\/strong>: A model-agnostic OCR defense tested on EMNIST and MNIST against a suite of attacks (FGSM, PGD, Carlini\u2013Wagner, EOT, BDPA, FAWA). Code available at <a href=\"https:\/\/github.com\/invi-bhagyesh\/TopoReformer\">https:\/\/github.com\/invi-bhagyesh\/TopoReformer<\/a>.<\/li>\n
<li><strong>DiffProtect<\/strong>: Utilizes diffusion models for generating adversarial examples, validated on the CelebA-HQ and FFHQ datasets. Paper: <a href=\"https:\/\/arxiv.org\/pdf\/2305.13625\">https:\/\/arxiv.org\/pdf\/2305.13625<\/a><\/li>\n
<li><strong>SEBA<\/strong>: A two-stage framework for black-box attacks on visual reinforcement learning agents, tested on continuous-control (MuJoCo) and discrete-action (Atari) domains. Paper: <a href=\"https:\/\/arxiv.org\/pdf\/2511.09681\">https:\/\/arxiv.org\/pdf\/2511.09681<\/a><\/li>\n
<li><strong>MOS-Attack<\/strong>: A multi-objective adversarial attack framework evaluated on CIFAR-10 and ImageNet, discovering synergistic patterns among loss functions. Code: <a href=\"https:\/\/github.com\/pgg3\/MOS-Attack\">https:\/\/github.com\/pgg3\/MOS-Attack<\/a><\/li>\n
<li><strong>AlignTree<\/strong>: A lightweight classifier for LLM jailbreak defense, combining linear refusal directions with non-linear SVM-based signals.
Code: <a href=\"https:\/\/github.com\/Gilgo2\/AlignTree\">https:\/\/github.com\/Gilgo2\/AlignTree<\/a><\/li>\n<li><strong>UDora<\/strong>: A red-teaming framework for LLM agents leveraging adversarial string optimization, achieving high attack success rates on InjecAgent, WebShop, and AgentHarm. Code: <a href=\"https:\/\/github.com\/AI-secure\/UDora\">https:\/\/github.com\/AI-secure\/UDora<\/a><\/li>\n<li><strong>AdvRoad<\/strong>: A generative approach for creating naturalistic road-style adversarial posters to attack visual 3D detection in autonomous driving, evaluated on realistic scenarios. Code: <a href=\"https:\/\/github.com\/WangJian981002\/AdvRoad\">https:\/\/github.com\/WangJian981002\/AdvRoad<\/a><\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>These advancements have profound implications for the trustworthiness and deployment of AI systems. The revelations of vulnerabilities in multimodal, language, and robotic systems underscore the urgent need for proactive defense strategies. The shift towards black-box, stealthy, and semantically meaningful attacks means that traditional defenses are becoming obsolete, compelling researchers to develop more sophisticated, often geometry-aware or topologically-informed, countermeasures.<\/p>\n<p>The focus on interpretability in defenses like ExplainableGuard, or provable repair mechanisms like ProRepair (<a href=\"https:\/\/arxiv.org\/pdf\/2511.07741\">Hangzhou Dianzi University, Zhejiang University<\/a>), signals a maturing field prioritizing not just robustness, but also transparency and reliability. 
Furthermore, the exploration of new paradigms like \u2018engineered forgetting\u2019 (<a href=\"https:\/\/arxiv.org\/pdf\/2511.09855\">Institute for Artificial Intelligence, University of X<\/a>) suggests a future where AI models can dynamically adapt and unlearn harmful information, aligning with ethical AI principles.<\/p>\n<p>The road ahead will undoubtedly involve a continued arms race. However, by combining theoretical rigor with empirical validation, and by fostering open research with shared code and benchmarks, the AI community is better equipped than ever to build systems that are not only powerful but also fundamentally secure and resilient against the ever-evolving landscape of adversarial threats. The ongoing pursuit of robust, transparent, and aligned AI is crucial as these technologies become increasingly embedded in critical real-world applications.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 50 papers on adversarial attacks: Nov. 23, 2025<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,113,63],"tags":[157,1621,158,64,1145,74],"class_list":["post-1982","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-cryptography-security","category-machine-learning","tag-adversarial-attacks","tag-main_tag_adversarial_attacks","tag-adversarial-robustness","tag-diffusion-models","tag-neural-network-security","tag-reinforcement-learning"],"yoast_head":"<!-- This site is optimized 
with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Adversarial Attacks: Navigating the Shifting Sands of AI Security and Robustness<\/title>\n<meta name=\"description\" content=\"Latest 50 papers on adversarial attacks: Nov. 23, 2025\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Adversarial Attacks: Navigating the Shifting Sands of AI Security and Robustness\" \/>\n<meta property=\"og:description\" content=\"Latest 50 papers on adversarial attacks: Nov. 23, 2025\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-23T08:18:43+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-28T21:17:46+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" 
content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Adversarial Attacks: Navigating the Shifting Sands of AI Security and Robustness\",\"datePublished\":\"2025-11-23T08:18:43+00:00\",\"dateModified\":\"2025-12-28T21:17:46+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\\\/\"},\"wordCount\":1074,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"adversarial attacks\",\"adversarial attacks\",\"adversarial robustness\",\"diffusion models\",\"neural network security\",\"reinforcement learning\"],\"articleSection\":[\"Artificial Intelligence\",\"Cryptography and Security\",\"Machine 
Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\\\/\",\"name\":\"Adversarial Attacks: Navigating the Shifting Sands of AI Security and Robustness\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2025-11-23T08:18:43+00:00\",\"dateModified\":\"2025-12-28T21:17:46+00:00\",\"description\":\"Latest 50 papers on adversarial attacks: Nov. 23, 2025\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Adversarial Attacks: Navigating the Shifting Sands of AI Security and 
Robustness\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem 
Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Adversarial Attacks: Navigating the Shifting Sands of AI Security and Robustness","description":"Latest 50 papers on adversarial attacks: Nov. 23, 2025","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/","og_locale":"en_US","og_type":"article","og_title":"Adversarial Attacks: Navigating the Shifting Sands of AI Security and Robustness","og_description":"Latest 50 papers on adversarial attacks: Nov. 
23, 2025","og_url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2025-11-23T08:18:43+00:00","article_modified_time":"2025-12-28T21:17:46+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Adversarial Attacks: Navigating the Shifting Sands of AI Security and Robustness","datePublished":"2025-11-23T08:18:43+00:00","dateModified":"2025-12-28T21:17:46+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/"},"wordCount":1074,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["adversarial attacks","adversarial attacks","adversarial robustness","diffusion models","neural network security","reinforcement learning"],"articleSection":["Artificial Intelligence","Cryptography and Security","Machine 
Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/","url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/","name":"Adversarial Attacks: Navigating the Shifting Sands of AI Security and Robustness","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2025-11-23T08:18:43+00:00","dateModified":"2025-12-28T21:17:46+00:00","description":"Latest 50 papers on adversarial attacks: Nov. 23, 2025","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/adversarial-attacks-navigating-the-shifting-sands-of-ai-security-and-robustness-4\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Adversarial Attacks: Navigating the Shifting Sands of AI Security and Robustness"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest 
research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. 
Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":30,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-vY","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1982","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=1982"}],"version-history":[{"count":1,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1982\/revisions"}],"predecessor-version":[{"id":3193,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1982\/revisions\/3193"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=1982"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=1982"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=1982"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}