{"id":5667,"date":"2026-02-14T06:04:33","date_gmt":"2026-02-14T06:04:33","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/"},"modified":"2026-02-14T06:04:33","modified_gmt":"2026-02-14T06:04:33","slug":"adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/","title":{"rendered":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security and Robustness"},"content":{"rendered":"<h3>Latest 26 papers on adversarial attacks: Feb. 14, 2026<\/h3>\n<p>The world of AI\/ML is constantly evolving, pushing the boundaries of what\u2019s possible. Yet, with every leap forward, new challenges emerge, particularly in the realm of security and robustness. Adversarial attacks \u2013 subtle, often imperceptible manipulations designed to fool AI models \u2013 represent a formidable threat, demanding innovative defenses and deeper understanding. This post dives into recent breakthroughs, exploring how researchers are tackling these challenges across diverse AI applications, from autonomous vehicles to large language models.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>Recent research highlights a multi-faceted approach to both executing and defending against adversarial attacks. A central theme is the move towards more sophisticated, context-aware attacks and equally intelligent, robust defenses. For instance, in time series forecasting, traditional adversarial attacks often suffer from temporal inconsistency, rendering them impractical. 
Researchers from <strong>Huazhong University of Science and Technology<\/strong> address this in their paper, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.11940\">Temporally Unified Adversarial Perturbations for Time Series Forecasting<\/a>\u201d, by introducing <strong>Temporally Unified Adversarial Perturbations (TUAPs)<\/strong>. These enforce temporal unification constraints, ensuring attacks remain consistent across overlapping samples, thus significantly outperforming existing baselines in both white-box and black-box scenarios.<\/p>\n<p>Similarly, the burgeoning field of multimodal AI presents new attack vectors. <strong>Yu Yan et al.<\/strong> from <strong>Institute of Computing Technology, Chinese Academy of Sciences<\/strong>, unveil \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.10148\">Red-teaming the Multimodal Reasoning: Jailbreaking Vision-Language Models via Cross-modal Entanglement Attacks<\/a>\u201d (COMET). This framework exploits cross-modal reasoning weaknesses in Vision-Language Models (VLMs), achieving over 94% jailbreak success by creating adversarial examples that entangle modalities. Following this, <strong>Hefei Mei et al.<\/strong> from <strong>City University of Hong Kong<\/strong> introduce <strong>VEAttack<\/strong> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2505.17440\">VEAttack: Downstream-agnostic Vision Encoder Attack against Large Vision Language Models<\/a>\u201d, a gray-box attack targeting only the vision encoder of LVLMs. This approach efficiently degrades performance across tasks like image captioning and VQA by focusing on patch tokens, significantly reducing computational overhead.<\/p>\n<p>Protecting critical systems like autonomous vehicles (AVs) is another focal point. 
Researchers from <strong>Johns Hopkins University<\/strong> and <strong>University of California, Santa Cruz<\/strong> present a chilling new threat in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.07249\">Beyond Crash: Hijacking Your Autonomous Vehicle for Fun and Profit<\/a>\u201d. Their <strong>JACKZEBRA<\/strong> framework enables long-horizon route hijacking of vision-based AVs through adaptive, stealthy visual patches, subtly steering the vehicle without immediate safety failures. This highlights a shift from single-instance disruptions to persistent, goal-oriented attacks. In the realm of 3D perception, <strong>Haoran Li et al.<\/strong> from <strong>Northeastern University, China<\/strong>, tackle imperceptible attacks on point clouds with <strong>PWaveP<\/strong> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.03333\">PWAVEP: Purifying Imperceptible Adversarial Perturbations in 3D Point Clouds via Spectral Graph Wavelets<\/a>\u201d. This novel non-invasive defense purifies high-frequency adversarial noise in the spectral domain, significantly improving robustness.<\/p>\n<p>For language models, vulnerabilities extend beyond direct attacks. <strong>Google Research et al.<\/strong>, in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.09343\">Not-in-Perspective: Towards Shielding Google\u2019s Perspective API Against Adversarial Negation Attacks<\/a>\u201d, address how toxic sentences can evade detection by simply adding \u2018not\u2019. They propose a formal reasoning wrapper to enhance robustness against such adversarial negation attacks. To proactively counter harmful outputs, <strong>Google<\/strong> and <strong>Virginia Tech<\/strong> introduce <strong>RLBF<\/strong> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.08377\">Reinforcement Learning with Backtracking Feedback<\/a>\u201d, an RL framework for LLMs that allows dynamic self-correction by \u2018backtracking\u2019 from harmful generations. 
Meanwhile, <strong>Suyu Ma et al.<\/strong> from <strong>CSIRO\u2019s Data61<\/strong> present <strong>SOGPTSpotter<\/strong> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.04185\">SOGPTSpotter: Detecting ChatGPT-Generated Answers on Stack Overflow<\/a>\u201d, a Siamese network-based method that leverages the Q&amp;A structure of platforms to detect AI-generated content, proving robust against adversarial inputs and aiding content moderation.<\/p>\n<p>Finally, the theoretical underpinnings of robustness are being refined. <strong>Ziv Bar-Joseph et al.<\/strong> from <strong>University of M\u00fcnster<\/strong> explore \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.06578\">Exploring Sparsity and Smoothness of Arbitrary \u2113p Norms in Adversarial Attacks<\/a>\u201d, revealing that higher \u2113p norms lead to smoother, less sparse perturbations that are more effective. This insight is complemented by the work of <strong>Sofia Ivolgina et al.<\/strong> from <strong>University of Florida<\/strong> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2507.08261\">Admissibility of Stein Shrinkage for BN in the Presence of Adversarial Attacks<\/a>\u201d, which demonstrates that <strong>James\u2013Stein (JS) shrinkage<\/strong> improves Batch Normalization (BN) robustness by reducing local Lipschitz constants, enhancing stability and accuracy.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>This wave of research is underpinned by innovative tools and resources:<\/p>\n<ul>\n<li><strong>Temporally Unified Adversarial Perturbations for Time Series Forecasting<\/strong>: Employs the <strong>Timestamp-wise Gradient Accumulation Method (TGAM)<\/strong> for efficient perturbation generation and demonstrates superior performance on benchmark datasets. 
Code available at <a href=\"https:\/\/github.com\/Simonnop\/time\">https:\/\/github.com\/Simonnop\/time<\/a>.<\/li>\n<li><strong>Brain Tumor Classifiers Under Attack<\/strong>: Evaluates <strong>ResNet-based (BrainNet)<\/strong>, <strong>ResNeXt-based (BrainNeXt)<\/strong>, and <strong>Dilation-based<\/strong> models for MRI-based brain tumor classification under FGSM and PGD attacks.<\/li>\n<li><strong>Poly-Guard<\/strong>: Introduces <strong>POLY-GUARD<\/strong>, the first massive multi-domain safety policy-grounded guardrail dataset. It features policy-aligned risk construction and diverse interaction formats. Data &amp; Dataset Card: <a href=\"https:\/\/huggingface.co\/datasets\/AI-Secure\/PolyGuard\">huggingface.co\/datasets\/AI-Secure\/PolyGuard<\/a>, Code: <a href=\"https:\/\/github.com\/AI-secure\/PolyGuard\">github.com\/AI-secure\/PolyGuard<\/a>.<\/li>\n<li><strong>A Low-Rank Defense Method for Adversarial Attack on Diffusion Models<\/strong>: Proposes <strong>LoRD<\/strong>, a low-rank defense leveraging the <strong>LoRA framework<\/strong> to protect diffusion models. Tested against PGD and ACE attacks. Related code: <a href=\"https:\/\/github.com\/cloneofsimo\/lora\">https:\/\/github.com\/cloneofsimo\/lora<\/a>, <a href=\"https:\/\/github.com\/VinAIResearch\/Anti-DreamBooth\">https:\/\/github.com\/VinAIResearch\/Anti-DreamBooth<\/a>.<\/li>\n<li><strong>ATEX-CF: Attack-Informed Counterfactual Explanations for Graph Neural Networks<\/strong>: Introduces <strong>ATEX-CF<\/strong>, a hybrid framework for GNNs combining edge additions and deletions for explanations. Code available at <a href=\"https:\/\/github.com\/zhangyuo\/ATEX_CF\">https:\/\/github.com\/zhangyuo\/ATEX_CF<\/a>.<\/li>\n<li><strong>GUARDIAN<\/strong>: A novel safety filtering framework for perception systems, comprehensively evaluated across diverse scenarios and datasets. 
See <a href=\"https:\/\/arxiv.org\/pdf\/2602.06026\">https:\/\/arxiv.org\/pdf\/2602.06026<\/a>.<\/li>\n<li><strong>ShapePuri<\/strong>: Achieves state-of-the-art 81.64% robust accuracy on <strong>ImageNet<\/strong> under the <strong>AutoAttack benchmark<\/strong> by utilizing a <strong>Shape Encoding Module (SEM)<\/strong> and <strong>Global Appearance Debiasing (GAD)<\/strong>.<\/li>\n<li><strong>Laws of Learning Dynamics and the Core of Learners<\/strong>: Proposes a <strong>logifold architecture<\/strong> and entropy-based lifelong ensemble learning, demonstrated on the <strong>CIFAR-10<\/strong> dataset. Code: <a href=\"https:\/\/github.com\/inkeejung\/logifold\">https:\/\/github.com\/inkeejung\/logifold<\/a>.<\/li>\n<li><strong>When and Where to Attack? Stage-wise Attention-Guided Adversarial Attack on Large Vision Language Models<\/strong>: Introduces <strong>SAGA<\/strong>, an attention-guided attack framework evaluated on ten LVLMs. Code: <a href=\"https:\/\/github.com\/jackwaky\/SAGA\">https:\/\/github.com\/jackwaky\/SAGA<\/a>.<\/li>\n<li><strong>SOGPTSpotter<\/strong>: A <strong>BigBird-based Siamese Neural Network<\/strong> for detecting ChatGPT content on Stack Overflow, trained on a new, high-quality dataset.<\/li>\n<li><strong>Someone Hid It!: Query-Agnostic Black-Box Attacks on LLM-Based Retrieval<\/strong>: Establishes a theoretical framework and adversarial learning method with zero-shot transferability across various LLM retrievers. Code: <a href=\"https:\/\/github.com\/JetRichardLee\/DQA-Learning\">https:\/\/github.com\/JetRichardLee\/DQA-Learning<\/a>.<\/li>\n<li><strong>VEAttack<\/strong>: Targets the vision encoder of LVLMs, with code available at <a href=\"https:\/\/github.com\/hefeimei06\/VEAttack-LVLM\">https:\/\/github.com\/hefeimei06\/VEAttack-LVLM<\/a>.<\/li>\n<li><strong>PWAVEP<\/strong>: A non-invasive purification framework for 3D point clouds using spectral graph wavelets and hybrid saliency scores. 
Code: <a href=\"https:\/\/github.com\/a772316182\/pwavep\">https:\/\/github.com\/a772316182\/pwavep<\/a>.<\/li>\n<li><strong>Time Is All It Takes: Spike-Retiming Attacks on Event-Driven Spiking Neural Networks<\/strong>: Uses <strong>projected-in-the-loop (PIL) optimization<\/strong> to generate timing-only adversarial attacks. Code: <a href=\"https:\/\/github.com\/yuyi-sd\/Spike-Retiming-Attacks\">https:\/\/github.com\/yuyi-sd\/Spike-Retiming-Attacks<\/a>.<\/li>\n<li><strong>Learning Better Certified Models from Empirically-Robust Teachers<\/strong>: Proposes the <strong>CC-Dist algorithm<\/strong> and feature-space distillation for ReLU networks on TinyImageNet and downscaled ImageNet. Supplementary code for CC-Dist available.<\/li>\n<li><strong>SPGCL<\/strong>: A graph contrastive learning method leveraging <strong>SVD-guided structural perturbations<\/strong> for node representation on various graph-based tasks. Code: <a href=\"https:\/\/github.com\/SPGCL-Team\/SPGCL\">https:\/\/github.com\/SPGCL-Team\/SPGCL<\/a>.<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>These advancements have profound implications for AI safety and reliability. The development of sophisticated attacks, from long-horizon AV hijacking to cross-modal jailbreaks, underscores the urgent need for robust defense mechanisms. Simultaneously, innovative defenses like TUAPs, PWaveP, LoRD, and ShapePuri are pushing the boundaries of what\u2019s possible in securing AI systems against these threats. The introduction of large-scale benchmarks like POLY-GUARD and the theoretical insights into learning dynamics and \u2113p norms provide crucial foundations for future research.<\/p>\n<p>Looking ahead, the focus will likely shift towards more adaptive, proactive defenses that can learn and evolve alongside new attack strategies. The concept of self-correction in LLMs, as demonstrated by RLBF, could be a game-changer for content moderation and safety. 
Similarly, applying theoretical guarantees, as seen with Stein shrinkage for Batch Normalization and formal verification frameworks like VScan (from \u201cVerifying DNN-based Semantic Communication Against Generative Adversarial Noise\u201d), will be vital for deploying AI in safety-critical applications. The increasing complexity of AI models, particularly multimodal and graph-based systems, demands integrated security-by-design principles rather than reactive patches. As AI becomes more ubiquitous, ensuring its resilience against adversarial attacks will be paramount to its trustworthy integration into our world.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 26 papers on adversarial attacks: Feb. 14, 2026<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,63],"tags":[157,1621,2699,158,2700],"class_list":["post-5667","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-machine-learning","tag-adversarial-attacks","tag-main_tag_adversarial_attacks","tag-adversarial-attacks-on-time-series-forecasting","tag-adversarial-robustness","tag-temporal-consistency-in-adversarial-perturbations"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Adversarial Attacks: Navigating the Shifting Landscape of AI Security and 
Robustness<\/title>\n<meta name=\"description\" content=\"Latest 26 papers on adversarial attacks: Feb. 14, 2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Adversarial Attacks: Navigating the Shifting Landscape of AI Security and Robustness\" \/>\n<meta property=\"og:description\" content=\"Latest 26 papers on adversarial attacks: Feb. 14, 2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-14T06:04:33+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Adversarial Attacks: Navigating the Shifting Landscape of AI Security and Robustness\",\"datePublished\":\"2026-02-14T06:04:33+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/\"},\"wordCount\":1302,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/scipapermill.com\/#organization\"},\"keywords\":[\"adversarial attacks\",\"adversarial attacks\",\"adversarial attacks on time series forecasting\",\"adversarial robustness\",\"temporal consistency in adversarial perturbations\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Machine 
Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/\",\"url\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/\",\"name\":\"Adversarial Attacks: Navigating the Shifting Landscape of AI Security and Robustness\",\"isPartOf\":{\"@id\":\"https:\/\/scipapermill.com\/#website\"},\"datePublished\":\"2026-02-14T06:04:33+00:00\",\"description\":\"Latest 26 papers on adversarial attacks: Feb. 14, 2026\",\"breadcrumb\":{\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/scipapermill.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Adversarial Attacks: Navigating the Shifting Landscape of AI Security and Robustness\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/scipapermill.com\/#website\",\"url\":\"https:\/\/scipapermill.com\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest 
research\",\"publisher\":{\"@id\":\"https:\/\/scipapermill.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/scipapermill.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/scipapermill.com\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\/\/scipapermill.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\",\"https:\/\/www.linkedin.com\/company\/scipapermill\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. 
Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\/\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security and Robustness","description":"Latest 26 papers on adversarial attacks: Feb. 14, 2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/","og_locale":"en_US","og_type":"article","og_title":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security and Robustness","og_description":"Latest 26 papers on adversarial attacks: Feb. 14, 2026","og_url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2026-02-14T06:04:33+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security and Robustness","datePublished":"2026-02-14T06:04:33+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/"},"wordCount":1302,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["adversarial attacks","adversarial attacks","adversarial attacks on time series forecasting","adversarial robustness","temporal consistency in adversarial perturbations"],"articleSection":["Artificial Intelligence","Computer Vision","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/","url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/","name":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security and Robustness","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2026-02-14T06:04:33+00:00","description":"Latest 
26 papers on adversarial attacks: Feb. 14, 2026","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/adversarial-attacks-navigating-the-shifting-landscape-of-ai-security-and-robustness-6\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Adversarial Attacks: Navigating the Shifting Landscape of AI Security and Robustness"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest 
research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. 
Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":63,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-1tp","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5667","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=5667"}],"version-history":[{"count":0,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5667\/revisions"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=5667"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=5667"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=5667"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}