{"id":5671,"date":"2026-02-14T06:07:59","date_gmt":"2026-02-14T06:07:59","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/"},"modified":"2026-02-14T06:07:59","modified_gmt":"2026-02-14T06:07:59","slug":"catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/","title":{"rendered":"Catastrophic Forgetting No More: Recent Breakthroughs in Continual and Adaptive AI"},"content":{"rendered":"<h3>Latest 50 papers on catastrophic forgetting: Feb. 14, 2026<\/h3>\n<p>Catastrophic forgetting \u2013 the dreaded tendency of neural networks to forget previously learned information when acquiring new knowledge \u2013 has long been a formidable foe in the quest for truly intelligent, adaptive AI. Imagine a robot that learns to identify a cat, only to forget what a dog looks like after being trained on new images. This inherent instability has hindered the development of systems capable of continuous learning in dynamic, real-world environments. However, a flurry of recent research offers exciting breakthroughs, moving us closer to AI that learns and evolves gracefully.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>These recent papers tackle catastrophic forgetting from multiple angles, demonstrating a clear shift towards more robust, efficient, and biologically inspired continual learning paradigms. A prominent theme is the move beyond simple regularization or replay towards more nuanced approaches that understand and manage <em>how<\/em> knowledge is stored and adapted.<\/p>\n<p>One significant innovation lies in <strong>parameter-efficient fine-tuning (PEFT) with shared representations<\/strong>. 
For instance, in their work \u201cModular Multi-Task Learning for Chemical Reaction Prediction\u201d [https:\/\/arxiv.org\/pdf\/2602.10404], authors from <strong>University of Greenwich<\/strong> and <strong>University of Cambridge<\/strong> show that Low-Rank Adaptation (LoRA) can achieve comparable accuracy to full fine-tuning while significantly mitigating catastrophic forgetting in chemical reaction prediction. Building on this, <strong>Johns Hopkins University<\/strong> researchers in \u201cShared LoRA Subspaces for almost Strict Continual Learning\u201d [https:\/\/arxiv.org\/pdf\/2602.06043] introduce <em>Share<\/em>, a method leveraging shared low-rank subspaces. This drastically reduces parameter and memory usage (up to 100x and 281x respectively) by allowing a single model to flexibly integrate knowledge across hundreds of tasks.<\/p>\n<p>Another critical area of progress involves <strong>geometric and thermodynamic perspectives<\/strong> on learning. The paper \u201cBeyond Optimization: Intelligence as Metric-Topology Factorization under Geometric Incompleteness\u201d [https:\/\/arxiv.org\/pdf\/2602.07974] by <strong>Xin Li from the University at Albany<\/strong> posits that intelligence involves adapting metric structures to topological changes, introducing Metric-Topology Factorization (MTF) to decouple stable topological structure from plastic metric control. This theoretical grounding underpins architectures like the Topological Urysohn Machine (TUM) that enable rapid adaptation without forgetting. Complementing this, \u201cA Thermodynamic Theory of Learning Part II: Critical Period Closure and Continual Learning Failure\u201d [https:\/\/arxiv.org\/pdf\/2602.07950] by <strong>Daisuke Okanohara from Preferred Networks, Inc.<\/strong> reframes catastrophic forgetting as an irreversible loss of representational freedom due to finite-time dissipation, offering a deeper understanding of its fundamental limits. 
This suggests that instead of fighting forgetting directly, we need to design systems that minimize this \u2018critical period closure\u2019.<\/p>\n<p><strong>Adaptive control and selective modification<\/strong> are also proving effective. The <strong>KAIST<\/strong> team behind \u201cModel-Dowser: Data-Free Importance Probing to Mitigate Catastrophic Forgetting in Multimodal Large Language Models\u201d [https:\/\/arxiv.org\/pdf\/2602.04509] introduces a sparse fine-tuning method that uses data-free importance probing to preserve crucial parameters, maintaining generalization without task-specific data. Similarly, \u201cAttention Retention for Continual Learning with Vision Transformers\u201d [https:\/\/arxiv.org\/pdf\/2602.05454] by <strong>Northwestern Polytechnical University<\/strong> identifies attention drift as a key culprit in Vision Transformer forgetting and proposes <em>ARCL-ViT<\/em>, an attention-retaining framework using gradient masking. These methods selectively update parts of the model, minimizing interference with existing knowledge.<\/p>\n<p>For LLMs, <strong>robust policy optimization and data rewriting<\/strong> are critical. The paper \u201cRobust Policy Optimization to Prevent Catastrophic Forgetting\u201d [https:\/\/arxiv.org\/pdf\/2602.08813] from <strong>University of Pennsylvania<\/strong> and <strong>University of Southern California<\/strong> introduces <em>FRPO<\/em>, an RLHF framework that optimizes reward stability within a KL-bounded neighborhood, preserving safety guardrails during fine-tuning. 
Meanwhile, the <strong>Beijing Institute of Technology<\/strong> in \u201cPatch the Distribution Mismatch: RL Rewriting Agent for Stable Off-Policy SFT\u201d [https:\/\/arxiv.org\/pdf\/2602.11220] tackles distribution mismatch by using an RL-based rewriting agent to generate data closer to the model\u2019s natural generation style, significantly reducing forgetting.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>The innovations discussed are often validated and enabled by specific architectural choices and rigorous benchmarking:<\/p>\n<ul>\n<li><strong>PEFT Techniques<\/strong>: LoRA (Low-Rank Adaptation) and its variants are foundational. <em>Share<\/em> builds on this by extending shared low-rank subspaces for broader applicability.<\/li>\n<li><strong>SNNs and Neuromorphic Vision<\/strong>: \u201cEnergy-Aware Spike Budgeting for Continual Learning in Spiking Neural Networks for Neuromorphic Vision\u201d [https:\/\/arxiv.org\/pdf\/2602.12236] from the <strong>University of Liberal Arts Bangladesh<\/strong> and <strong>Pennsylvania State University<\/strong> utilizes learnable Leaky Integrate-and-Fire (LIF) neuron parameters and adaptive spike scheduling, demonstrating modality-dependent behavior on frame-based and event-based datasets. 
This pushes the boundaries of energy-efficient continual learning.<\/li>\n<li><strong>Curriculum Learning and Reinforcement Alignment<\/strong>: Frameworks like AC-MASAC (\u201cAC-MASAC: An Attentive Curriculum Learning Framework for Heterogeneous UAV Swarm Coordination\u201d [https:\/\/arxiv.org\/pdf\/2602.11735] from <strong>Guangdong University of Technology<\/strong>) for UAV swarms, <em>RCPA<\/em> (\u201cReinforced Curriculum Pre-Alignment for Domain-Adaptive VLMs\u201d [https:\/\/arxiv.org\/pdf\/2602.10740] from <strong>Tencent<\/strong> and <strong>The University of Hong Kong<\/strong>) for Vision-Language Models (VLMs), and <em>ACuRL<\/em> (\u201cAutonomous Continual Learning of Computer-Use Agents for Environment Adaptation\u201d [https:\/\/arxiv.org\/pdf\/2602.10356] from <strong>The Ohio State University<\/strong> and <strong>University of California, Berkeley<\/strong>) for computer-use agents, all employ structured curricula and RL to prevent forgetting, often introducing novel attention mechanisms or automated evaluators like <em>CUAJudge<\/em>.<\/li>\n<li><strong>Memory Modules<\/strong>: <em>TS-Memory<\/em> (\u201cTS-Memory: Plug-and-Play Memory for Time Series Foundation Models\u201d [https:\/\/arxiv.org\/pdf\/2602.11550] by <strong>HKUST<\/strong> and <strong>Tencent<\/strong>) is a lightweight plug-and-play memory adapter for Time Series Foundation Models, improving performance on domain shifts without retraining through parametric memory distillation. 
Similarly, <em>Locas<\/em> (\u201cLocas: Your Models are Principled Initializers of Locally-Supported Parametric Memories\u201d [https:\/\/arxiv.org\/pdf\/2602.05085] from <strong>Stanford<\/strong>, <strong>Google Research<\/strong>, <em>et al.<\/em>) introduces locally-supported parametric memories for test-time training, designed to minimize forgetting.<\/li>\n<li><strong>Novel Training Data Paradigms<\/strong>: \u201cData Repetition Beats Data Scaling in Long-CoT Supervised Fine-Tuning\u201d [https:\/\/arxiv.org\/pdf\/2602.11149] by <strong>University of Technology Nuremberg<\/strong> and <strong>Mistral AI<\/strong> highlights that repetition on smaller datasets can outperform larger datasets, challenging traditional scaling laws. <em>TDScaling<\/em> (\u201cBeyond Quantity: Trajectory Diversity Scaling for Code Agents\u201d [https:\/\/arxiv.org\/pdf\/2602.03219] from <strong>Southern University of Science and Technology<\/strong> and <strong>Alibaba Group<\/strong>) focuses on <em>trajectory diversity<\/em> rather than quantity for code agents, improving generalization and mitigating forgetting of coding skills. The GitHub repository for TDScaling is expected post-publication.<\/li>\n<li><strong>Model Merging and Unlearning<\/strong>: <em>OrthoMerge<\/em> (\u201cOrthogonal Model Merging\u201d [https:\/\/arxiv.org\/pdf\/2602.05943] by <strong>The Chinese University of Hong Kong<\/strong>) uses orthogonal transformations on a Riemannian manifold to merge models, preserving geometric structure and reducing forgetting. For unlearning, <em>CATNIP<\/em> (\u201cCATNIP: LLM Unlearning via Calibrated and Tokenized Negative Preference Alignment\u201d [https:\/\/arxiv.org\/pdf\/2602.02824] from <strong>George Mason University<\/strong> and <strong>University of Texas at Austin<\/strong>) uses calibrated and tokenized negative preference alignment to remove undesirable knowledge without retention data, offering a more robust approach. 
<em>TFER<\/em> (\u201cDon\u2019t Break the Boundary: Continual Unlearning for OOD Detection Based on Free Energy Repulsion\u201d [https:\/\/arxiv.org\/pdf\/2602.06331] by <strong>Nanjing Normal University<\/strong> <em>et al.<\/em>) introduces a Push-Pull game mechanism for boundary-preserving class unlearning, transforming forgotten classes into OOD samples.<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>The implications of these advancements are profound. The ability of AI systems to learn continually, adapt to new data, and even <em>unlearn<\/em> specific information without suffering catastrophic forgetting is critical for real-world deployment across diverse domains. From making robots truly \u201clong-lived\u201d and adaptable (as explored in \u201cTowards Long-Lived Robots: Continual Learning VLA Models via Reinforcement Fine-Tuning\u201d [https:\/\/arxiv.org\/pdf\/2602.10503] by <strong>NVIDIA Isaac Robotics Team<\/strong>) to ensuring the security of LLM-generated code (\u201cGoodVibe: Security-by-Vibe for LLM-Based Code Generation\u201d [https:\/\/arxiv.org\/pdf\/2602.10778] by <strong>Technical University of Darmstadt<\/strong> <em>et al.<\/em>), these innovations pave the way for more robust, efficient, and ethical AI.<\/p>\n<p>Looking ahead, the convergence of theoretical insights (like geometric incompleteness and thermodynamic constraints) with practical, parameter-efficient methods (such as advanced LoRA and adaptive prompt tuning) promises to unlock new levels of continual learning. The development of self-amplified learning frameworks like <em>SAIL<\/em> (\u201cSAIL: Self-Amplified Iterative Learning for Diffusion Model Alignment with Minimal Human Feedback\u201d [https:\/\/arxiv.org\/pdf\/2602.05380] by <strong>Zhejiang University<\/strong> and <strong>WeChat Vision, Tencent Inc<\/strong>) that require minimal human feedback suggests a future where AI systems can autonomously refine their capabilities. 
The challenge remains to bridge the gap between theoretical understanding and scalable, real-world implementations, but the progress is undeniable. The era of truly adaptive and continually learning AI is no longer a distant dream, but an exciting, unfolding reality.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 50 papers on catastrophic forgetting: Feb. 14, 2026<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,57,63],"tags":[179,1617,178,1018,237,1576],"class_list":["post-5671","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-cs-cl","category-machine-learning","tag-catastrophic-forgetting","tag-main_tag_catastrophic_forgetting","tag-continual-learning","tag-curriculum-learning","tag-parameter-efficient-fine-tuning","tag-main_tag_reinforcement_learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Catastrophic Forgetting No More: Recent Breakthroughs in Continual and Adaptive AI<\/title>\n<meta name=\"description\" content=\"Latest 50 papers on catastrophic forgetting: Feb. 
14, 2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Catastrophic Forgetting No More: Recent Breakthroughs in Continual and Adaptive AI\" \/>\n<meta property=\"og:description\" content=\"Latest 50 papers on catastrophic forgetting: Feb. 14, 2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-14T06:07:59+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Catastrophic Forgetting No More: Recent Breakthroughs in Continual and Adaptive AI\",\"datePublished\":\"2026-02-14T06:07:59+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/\"},\"wordCount\":1324,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/scipapermill.com\/#organization\"},\"keywords\":[\"catastrophic forgetting\",\"catastrophic forgetting\",\"continual learning\",\"curriculum learning\",\"parameter-efficient fine-tuning\",\"reinforcement learning\"],\"articleSection\":[\"Artificial Intelligence\",\"Computation and Language\",\"Machine Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/\",\"url\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/\",\"name\":\"Catastrophic Forgetting 
No More: Recent Breakthroughs in Continual and Adaptive AI\",\"isPartOf\":{\"@id\":\"https:\/\/scipapermill.com\/#website\"},\"datePublished\":\"2026-02-14T06:07:59+00:00\",\"description\":\"Latest 50 papers on catastrophic forgetting: Feb. 14, 2026\",\"breadcrumb\":{\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/scipapermill.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Catastrophic Forgetting No More: Recent Breakthroughs in Continual and Adaptive AI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/scipapermill.com\/#website\",\"url\":\"https:\/\/scipapermill.com\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest 
research\",\"publisher\":{\"@id\":\"https:\/\/scipapermill.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/scipapermill.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/scipapermill.com\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\/\/scipapermill.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\",\"https:\/\/www.linkedin.com\/company\/scipapermill\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. 
Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\/\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Catastrophic Forgetting No More: Recent Breakthroughs in Continual and Adaptive AI","description":"Latest 50 papers on catastrophic forgetting: Feb. 14, 2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/","og_locale":"en_US","og_type":"article","og_title":"Catastrophic Forgetting No More: Recent Breakthroughs in Continual and Adaptive AI","og_description":"Latest 50 papers on catastrophic forgetting: Feb. 14, 2026","og_url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2026-02-14T06:07:59+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Catastrophic Forgetting No More: Recent Breakthroughs in Continual and Adaptive AI","datePublished":"2026-02-14T06:07:59+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/"},"wordCount":1324,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["catastrophic forgetting","catastrophic forgetting","continual learning","curriculum learning","parameter-efficient fine-tuning","reinforcement learning"],"articleSection":["Artificial Intelligence","Computation and Language","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/","url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/","name":"Catastrophic Forgetting No More: Recent Breakthroughs in Continual and Adaptive AI","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2026-02-14T06:07:59+00:00","description":"Latest 50 papers on catastrophic 
forgetting: Feb. 14, 2026","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/14\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-and-adaptive-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Catastrophic Forgetting No More: Recent Breakthroughs in Continual and Adaptive AI"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/615
82731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. 
Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":63,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-1tt","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5671","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=5671"}],"version-history":[{"count":0,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5671\/revisions"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=5671"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=5671"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=5671"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}