{"id":6798,"date":"2026-05-02T03:46:06","date_gmt":"2026-05-02T03:46:06","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-learning-across-ai\/"},"modified":"2026-05-02T03:46:06","modified_gmt":"2026-05-02T03:46:06","slug":"catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-learning-across-ai","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/05\/02\/catastrophic-forgetting-no-more-recent-breakthroughs-in-continual-learning-across-ai\/","title":{"rendered":"Catastrophic Forgetting No More: Recent Breakthroughs in Continual Learning Across AI"},"content":{"rendered":"<h3>Latest 16 papers on catastrophic forgetting: May. 2, 2026<\/h3>\n<p>The dream of truly intelligent AI systems that can learn continuously from new experiences without forgetting old ones has long been a holy grail in machine learning. However, this dream is often thwarted by a persistent foe: <strong>catastrophic forgetting<\/strong>. This phenomenon, where a model rapidly loses previously acquired knowledge when learning new tasks, remains a significant bottleneck for achieving robust, adaptive, and lifelong AI. Fortunately, recent research is pushing the boundaries, offering ingenious solutions across diverse AI domains, from robotics and language models to formal theorem proving and beyond.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>The latest wave of research tackles catastrophic forgetting with a blend of architectural ingenuity, smart parameter management, and domain-specific insights. A recurring theme is the move away from traditional replay-based methods towards more efficient, architectural, or knowledge-aware strategies.<\/p>\n<p>In <strong>continual offline reinforcement learning (CORL)<\/strong>, a team from <strong>AGH University of Krakow and American University<\/strong> introduced <a href=\"https:\/\/arxiv.org\/pdf\/2604.25898\">TSN-Affinity: Similarity-Driven Parameter Reuse for Continual Offline Reinforcement Learning<\/a>. This groundbreaking work demonstrates that sparse task-specific subnetworks can completely eliminate catastrophic forgetting in offline RL. Their Affinity Routing mechanism leverages action and latent similarity to dynamically reuse frozen model parameters, showing that architectural solutions can significantly outperform replay-based approaches, especially in heterogeneous continuous-control settings. The core idea is to avoid re-learning and instead route tasks to the most compatible existing knowledge structures.<\/p>\n<p>Similarly, in <strong>computer vision<\/strong>, researchers from the <strong>University of Hong Kong<\/strong> in their paper <a href=\"https:\/\/arxiv.org\/pdf\/2407.19001\">Effective Prompt Pool Learning for Continual Category Discovery<\/a> present PromptCCD++. They\u2019ve found that the <em>number of known categories<\/em> is far more critical than sample size for novel category discovery. Their key innovation lies in learning finer-grained, part-level representations via PromptCCD++\u2019s Part-Level Prompting (PLP) module. 
<p>Similarly, in <strong>computer vision</strong>, researchers from the <strong>University of Hong Kong</strong> in their paper <a href="https://arxiv.org/pdf/2407.19001">Effective Prompt Pool Learning for Continual Category Discovery</a> present PromptCCD++. They’ve found that the <em>number of known categories</em> is far more critical than sample size for novel category discovery. Their key innovation lies in learning finer-grained, part-level representations via PromptCCD++’s Part-Level Prompting (PLP) module. This allows models to leverage transferable visual primitives, making them more resilient to the “category-count bottleneck” and effectively mitigating forgetting during continuous discovery of new classes.</p>
<p>For <strong>human activity recognition (HAR)</strong> on mobile devices, where data streams are temporally correlated and non-i.i.d., a consortium including <strong>Great Bay University and Shenzhen University</strong> proposed <a href="https://arxiv.org/pdf/2604.25435">PI-TTA: Physics-Informed Source-Free Test-Time Adaptation for Robust Human Activity Recognition on Mobile Devices</a>. They highlight that traditional vision-style test-time adaptation (TTA) methods often suffer from catastrophic forgetting and “low-entropy traps.” Their solution, PI-TTA, injects physics-informed constraints (gravity consistency, temporal continuity, spectral stability) to stabilize online updates. This prevents models from drifting into physically implausible states, anchoring adaptation and preserving knowledge without needing access to source data or labels.</p>
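<p>As a rough illustration of how such constraints can anchor online updates, the sketch below implements two stand-ins, a gravity-consistency term and a temporal-continuity term, alongside the usual entropy objective. The specific formulations and the 0.1 weights are assumptions for illustration; PI-TTA's actual losses (including its spectral-stability term) may differ.</p>
<pre><code>import torch
import torch.nn.functional as F

G = 9.81  # standard gravity, m/s^2

def physics_informed_loss(logits, prev_logits, accel_window):
    """Hypothetical PI-TTA-style objective for one adaptation step.

    logits:       class logits for the current window, shape (B, C)
    prev_logits:  logits for the preceding window, shape (B, C)
    accel_window: raw accelerometer samples, shape (B, T, 3)
    """
    # Gravity consistency: mean accelerometer magnitude over a window
    # should stay near 1 g for physically plausible human motion.
    accel_norm = accel_window.norm(dim=-1).mean(dim=-1)  # (B,)
    l_gravity = ((accel_norm - G) ** 2).mean()

    # Temporal continuity: predictions on adjacent windows should not
    # jump abruptly, since activities change on a scale of seconds.
    p, p_prev = logits.softmax(-1), prev_logits.softmax(-1)
    l_smooth = F.kl_div(p.clamp_min(1e-8).log(), p_prev,
                        reduction="batchmean")

    # Entropy minimization, anchored by the physics terms so the model
    # cannot collapse into a degenerate low-entropy solution.
    l_entropy = -(p * p.clamp_min(1e-8).log()).sum(-1).mean()

    return l_entropy + 0.1 * l_gravity + 0.1 * l_smooth
</code></pre>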
<p>Across <strong>language models and robotics</strong>, new strategies focus on preserving the vast knowledge of pre-trained models. For <strong>Vision-Language-Action (VLA) models</strong> in robotics, a collaboration from <strong>Tsinghua University and Peng Cheng Laboratory</strong> presented <a href="https://arxiv.org/pdf/2604.24182">M^2-VLA: Boosting Vision-Language Models for Generalizable Manipulation via Layer Mixture and Meta-Skills</a>. Their work reveals that fine-tuning VLM backbones for robotic control can degrade their generalization capabilities. M^2-VLA tackles this by <em>freezing</em> the VLM backbone and introducing a Mixture of Layers (MoL) to extract manipulation-critical information, along with a Meta Skill Module (MSM) for efficient trajectory learning. This ensures strong generalization to novel instructions and objects while completely circumventing catastrophic forgetting of the VLM’s core knowledge.</p>
<p>In the realm of <strong>lifelong knowledge editing for LLMs</strong>, a team from <strong>Korea University</strong> introduced <a href="https://arxiv.org/pdf/2604.19089">Towards Scalable Lifelong Knowledge Editing with Selective Knowledge Suppression</a>. Their LightEdit framework ingeniously avoids retraining model parameters entirely. Instead, it uses an edit-aware selector and an in-context decoding strategy to <em>suppress outdated knowledge probabilities</em> at inference time. This offers a highly scalable and computationally efficient way to update LLM knowledge without causing forgetting or compromising locality.</p>
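<p>The suppression idea can be pictured as a filter over next-token logits at each decoding step. The edit table, trigger logic, and fixed penalty below are hypothetical simplifications; LightEdit's edit-aware selector and in-context decoding strategy are more sophisticated.</p>
<pre><code>import torch

def suppress_outdated(logits, context_ids, edit_table, penalty=10.0):
    """Down-weight tokens that would continue an outdated fact.

    logits:      next-token logits, shape (vocab_size,)
    context_ids: token ids generated so far
    edit_table:  maps a trigger token id (say, the last token of an
                 edited subject) to the token ids of the stale answer
    """
    last = int(context_ids[-1])
    for stale_id in edit_table.get(last, []):
        # A large subtraction makes the stale continuation unlikely
        # without touching unrelated tokens, which preserves locality.
        logits[stale_id] -= penalty
    return logits

# Example: after subject token 1234, suppress stale answer token 987.
logits = suppress_outdated(torch.randn(50_000), [42, 1234], {1234: [987]})
</code></pre>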
<p><strong>Formal theorem proving</strong> also benefits from continual learning. Researchers from <strong>Peking University and Huawei Technologies</strong> developed OptProver in their paper <a href="https://arxiv.org/pdf/2604.23712">OptProver: Bridging Olympiad and Optimization through Continual Training in Formal Theorem Proving</a>. They discovered that naive continual training on optimization problems leads to fragility and forgetting of Olympiad-level math. Their solution combines a verifier-driven, utility-aware preference learning method with perplexity-weighted optimization. This explicitly penalizes strategically unhelpful tactics, allowing the model to adapt to new domains without sacrificing general proving capabilities.</p>
<p>Other notable advancements include:</p>
<ul>
<li><strong>Functional Task Networks (FTN)</strong> from the <strong>Astera Institute</strong> (<a href="https://arxiv.org/pdf/2604.24637">Cortex-Inspired Continual Learning: Unsupervised Instantiation and Recovery of Functional Task Networks</a>) use a parallel-neuron backbone with a cortex-inspired mask configurer, enabling parameter isolation with a structural no-forgetting guarantee and even unsupervised recovery of prior task subnetworks.</li>
<li><strong>IntentVLM</strong> by <strong>Sorbonne University</strong> (<a href="https://arxiv.org/pdf/2604.24002">IntentVLM: Open-Vocabulary Intention Recognition through Forward-Inverse Modeling with Video-Language Models</a>) targets open-vocabulary human intention recognition. This two-stage video-language framework, inspired by cognitive science, decomposes intention understanding into goal candidate generation and structured selection, effectively reducing hallucinations and showing <em>no catastrophic forgetting</em> during training.</li>
<li><strong>RefEvo</strong>, a multi-agent framework by <strong>Southeast University and the National Center of Technology Innovation for EDA</strong> (<a href="https://arxiv.org/pdf/2604.24218">RefEvo: Agentic Design with Co-Evolutionary Verification for Agile Reference Model Generation</a>), tackles ‘coupled validation failure’ in LLM-based hardware verification. It uses a novel co-evolutionary verification and ‘Spec Anchoring’ context-management strategy that prevents catastrophic forgetting of specifications by pinning them as immutable anchors, dramatically cutting token usage.</li>
<li>In <strong>multi-user semantic communication</strong>, researchers from <strong>Kyung Hee University and the University of Houston</strong> proposed <a href="https://arxiv.org/pdf/2604.19808">Anchor-Aided Multi-User Semantic Communication with Adaptive Decoders</a>. Their two-stage training framework addresses forgetting when a base-station encoder serves diverse deep learning decoders: the encoder is first trained with a symmetric decoder (self-reflective learning) and then frozen as an anchor, enabling scalable deployment without forgetting issues.</li>
<li><a href="https://arxiv.org/pdf/2406.11354">Preserving Knowledge in Large Language Model with Model-Agnostic Self-Decompression</a> from <strong>Zhejiang University</strong> introduces Tree Generation (TG), a model-agnostic self-decompression method for LLMs and MLLMs. It extracts knowledge into synthetic training data using a tree-structured dialogue, effectively preserving original model capabilities during fine-tuning without manual prompt engineering.</li>
<li>In <strong>Generative Information Retrieval (GenIR)</strong>, a team from the <strong>University of Amsterdam</strong> presented <a href="https://arxiv.org/pdf/2604.23388">A Parametric Memory Head for Continual Generative Retrieval</a>. Their Post-Adaptation Memory Tuning (PAMT) framework freezes the adapted backbone and uses a modular parametric memory head (PMH) for sparse, value-only calibration. This improves retention on legacy document slices while preserving plasticity for new ones, demonstrating that interference from parameter updates is the dominant source of forgetting.</li>
<li>For <strong>hybrid language models</strong>, researchers from <strong>VRAIN – Universitat Politècnica de València</strong> explored <a href="https://arxiv.org/pdf/2604.22127">Where Should LoRA Go? Component-Type Placement in Hybrid Language Models</a>. They discovered that targeting the <em>attention pathway</em> with LoRA consistently outperforms full-model adaptation with significantly fewer parameters. Crucially, the hybrid topology (sequential vs. parallel) dictates adaptation behavior: parallel hybrids show positive cross-task transfer, while sequential ones suffer from forgetting.</li>
<li>Addressing <strong>safe continual reinforcement learning (RL)</strong>, a team from <strong>Vanderbilt University</strong> published <a href="https://arxiv.org/pdf/2604.19737">Safe Continual Reinforcement Learning in Non-stationary Environments</a>. They highlight a fundamental tension between maintaining safety and preventing catastrophic forgetting. Their Safe EWC algorithm (reward shaping with Elastic Weight Consolidation; see the sketch after this list) offers a promising direction, balancing safety-forgetting trade-offs, though complex environments remain challenging.</li>
<li>A crucial re-evaluation comes from <strong>Universitat Politècnica de Catalunya</strong> with <a href="https://arxiv.org/pdf/2604.19401">Revisiting Catastrophic Forgetting in Continual Knowledge Graph Embedding</a>. They identify a previously overlooked source of forgetting, <em>entity interference</em>, where new entity embeddings degrade performance on existing knowledge. They show that current evaluation protocols overestimate performance by up to 25% and propose a corrected protocol and unified metric, urging a rethink of CKGE research.</li>
<li>Bridging <strong>complex-systems dynamics and continual learning</strong>, the <strong>Emergence Transformer</strong> from <strong>Fudan University</strong> (<a href="https://arxiv.org/pdf/2604.19816">Emergence Transformer: Dynamical Temporal Attention Matters</a>) introduces Dynamical Temporal Attention (DTA) to modulate emergent coherence in coupled phase oscillators. This framework enables <em>emergent continual learning in Hopfield neural networks without catastrophic forgetting</em> by using separate attention networks to suppress old patterns while memorizing new ones.</li>
</ul>
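<p>The Elastic Weight Consolidation penalty at the core of Safe EWC is worth seeing in code. Below is the standard EWC regularizer (Kirkpatrick et al., 2017), which anchors weights that were important to earlier tasks; how the Vanderbilt paper couples it with safety-aware reward shaping is specific to their method and not shown here.</p>
<pre><code>import torch

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Standard EWC term: 0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2.

    fisher:     dict name -> diagonal Fisher estimate (how important
                each weight was to previously learned tasks)
    old_params: dict name -> weight values saved after the old tasks
    """
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        # Weights with large Fisher values are pulled strongly back
        # toward their old values; unimportant weights stay plastic.
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# During training on a new task:
#   total_loss = task_loss + ewc_penalty(model, fisher, old_params)
</code></pre>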
<h3 id="under-the-hood-models-datasets-benchmarks">Under the Hood: Models, Datasets, &amp; Benchmarks</h3>
<p>These innovations are often powered by advancements in model architectures, specialized datasets, and robust benchmarks:</p>
<ul>
<li><strong>Decision Transformer</strong>: the backbone of TSN-Affinity, enabling architectural parameter reuse in CORL; evaluated on Atari discrete-control and Panda continuous robotic-manipulation benchmarks. Code is available at <a href="https://github.com/anonymized-for-submission123/tsn-affinity">https://github.com/anonymized-for-submission123/tsn-affinity</a>.</li>
<li><strong>DINO/DINOv2 pretrained Vision Transformers</strong>: used by PromptCCD++ for robust feature extraction, evaluated on CIFAR100, ImageNet-100, TinyImageNet, and fine-grained datasets such as CUB. Code at <a href="https://visual-ai.github.io/promptccd">https://visual-ai.github.io/promptccd</a>.</li>
<li><strong>USCHAD, PAMAP2, mHealth</strong>: key benchmarks for evaluating PI-TTA in mobile human activity recognition, stressing temporally correlated inertial streams.</li>
<li><strong>Qwen3.5-0.8B, Falcon-H1-0.5B</strong>: hybrid language models used to analyze LoRA placement in <a href="https://arxiv.org/pdf/2604.22127">Where Should LoRA Go? Component-Type Placement in Hybrid Language Models</a> (a minimal LoRA-placement sketch follows this list). Code is available at <a href="https://github.com/hecboar/lora-placement-hybrid">https://github.com/hecboar/lora-placement-hybrid</a>.</li>
<li><strong>LLaMA-3 (8B), GPT-J (6B)</strong>: large language models used with the ZSRE, Counterfact, and RIPE datasets to evaluate LightEdit's lifelong knowledge editing. Code is available at <a href="https://github.com/ekgus9/LightEdit">https://github.com/ekgus9/LightEdit</a>.</li>
<li><strong>Qwen3-VL</strong>: the base model for IntentVLM, fine-tuned with LoRA adapters on the IntentQA and Inst-IT Bench datasets for open-vocabulary intention recognition.</li>
<li><strong>OptBench</strong>: a novel benchmark of 400 problems based on Optlib for evaluating formal optimization proofs, introduced by OptProver; leverages Lean 4 and Mathlib. Code references LeanDojo v2.1.3 and BFS-Prover-V2.</li>
<li><strong>MS MARCO, Natural Questions</strong>: datasets used by PAMT to characterize catastrophic forgetting in generative information retrieval with T5-base and E5-Mistral-7B-Instruct backbones. Code references the DSI-transformers implementation at <a href="https://github.com/ArvinZhuang/DSI-transformers">https://github.com/ArvinZhuang/DSI-transformers</a>.</li>
<li><strong>MuJoCo HalfCheetah/Ant, Meta-World/Continual World</strong>: new robotic benchmarks developed for safe continual RL, alongside the Safe EWC and CF-EWC algorithms. Code available at <a href="https://github.com/MACS-Research-Lab/safe-crl">https://github.com/MACS-Research-Lab/safe-crl</a>.</li>
<li><strong>FB15K-237, ENTITY, RELATION, FACT, HYBRID, GraphEqual, GraphHigher, GraphLower, PS-CKGE</strong>: datasets used to analyze catastrophic forgetting and entity interference in continual knowledge graph embedding. Code is available at <a href="https://github.com/gerardponsrecasens/RevisitingCKGE">https://github.com/gerardponsrecasens/RevisitingCKGE</a>.</li>
</ul>
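<p>Restricting LoRA to the attention pathway is easy to express with the Hugging Face peft library. The sketch below shows the general pattern; the base-model checkpoint is a stand-in, the module names (q_proj, v_proj, and so on) vary by architecture, and the paper's exact configuration for hybrid models may differ.</p>
<pre><code>from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Stand-in base model; the paper uses hybrid checkpoints instead.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

# Adapt only the attention projections, leaving MLP blocks and any
# state-space components of a hybrid architecture untouched.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% trainable
</code></pre>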
<h3 id="impact-the-road-ahead">Impact &amp; The Road Ahead</h3>
<p>The implications of these advancements are profound. By mitigating catastrophic forgetting, these papers pave the way for more robust, adaptive, and truly lifelong learning AI systems. Imagine robots that continuously learn new manipulation skills without forgetting old ones, LLMs that stay up to date with evolving knowledge without retraining, or mobile devices that adapt to new user activities while maintaining knowledge of past behaviors.</p>
<p>The shift toward architectural solutions, parameter-efficient fine-tuning, and domain-informed regularization is a clear indicator of the field’s maturity. We’re seeing a move from brute-force memory replay to more intelligent, biologically inspired, or mathematically grounded approaches that intrinsically resist forgetting. The emphasis on practical concerns such as computational efficiency, real-world deployment constraints, and scalable knowledge editing promises to accelerate the adoption of continual learning in production AI systems.</p>
<p>However, challenges remain. The fundamental tension between safety and plasticity in continual RL, the need for better understanding and measurement of forgetting (as highlighted in the CKGE research), and the optimal integration of diverse continual learning strategies across complex multi-modal systems are ripe areas for future exploration. The road ahead is exciting, promising an era in which AI systems don’t just learn, but <em>grow</em> their intelligence over time, much as humans do.</p>