{"id":1315,"date":"2025-09-29T07:46:32","date_gmt":"2025-09-29T07:46:32","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/"},"modified":"2025-12-28T22:06:37","modified_gmt":"2025-12-28T22:06:37","slug":"catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/","title":{"rendered":"Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning"},"content":{"rendered":"<h3>Latest 50 papers on catastrophic forgetting: Sep. 29, 2025<\/h3>\n<h2 id=\"catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning\">Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning<\/h2>\n<p>Imagine an AI that learns like us humans, continually adapting to new information without forgetting what it learned yesterday. This seemingly intuitive ability has long been a monumental challenge in AI\/ML, known as <em>catastrophic forgetting<\/em>. When models are trained on new tasks, they often overwrite previously acquired knowledge, leading to a significant drop in performance on older tasks. This limitation cripples the development of truly intelligent, adaptive systems, from self-evolving language models to lifelong robotic agents and personalized healthcare AI.<\/p>\n<p>But the tide is turning! Recent research has brought forth a wave of innovative solutions, tackling catastrophic forgetting from various angles. 
This digest explores some of these exciting breakthroughs, offering a glimpse into a future where AI systems can learn and evolve seamlessly.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>The central theme across these papers is the pursuit of <strong>stability-plasticity balance<\/strong>: enabling models to adapt to new tasks (plasticity) while retaining old knowledge (stability). Researchers are employing diverse strategies, often drawing inspiration from biological learning or leveraging modern architectural advancements.<\/p>\n<p>Several papers focus on <strong>parameter-efficient adaptation<\/strong> for large models. For instance, the <em>Beijing University of Posts and Telecommunications<\/em> and <em>Tencent AI Lab<\/em> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.18133\">Self-Evolving LLMs via Continual Instruction Tuning<\/a>\u201d propose MoE-CL, an adversarial Mixture of LoRA Experts. This framework uses dedicated LoRA experts for task-specific knowledge retention and shared experts with a GAN-based discriminator to transfer knowledge across tasks. Similarly, <em>The Ohio State University<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.11414\">Continually Adding New Languages to Multilingual Language Models<\/a>\u201d introduces LayRA (Layer-Selective LoRA) to selectively update transformer layers, preserving previously learned languages while efficiently acquiring new ones. Continuing this thread, <em>The Hong Kong University of Science and Technology (Guangzhou)<\/em> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.16882\">Dynamic Expert Specialization: Towards Catastrophic Forgetting-Free Multi-Domain MoE Adaptation<\/a>\u201d presents DES-MoE, which dynamically routes inputs to domain-specific experts in Mixture-of-Experts models, significantly reducing forgetting. 
Further, <em>University of Pisa<\/em> et al.\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.13211\">HAM: Hierarchical Adapter Merging for Scalable Continual Learning<\/a>\u201d dynamically merges adapters, improving scalability and knowledge transfer.<\/p>\n<p>Another prominent approach involves <strong>memory-augmented and replay-based mechanisms<\/strong>. The independent researcher Justin Arndt, in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.10518\">Holographic Knowledge Manifolds: A Novel Pipeline for Continual Learning Without Catastrophic Forgetting in Large Language Models<\/a>\u201d, introduces HKM, a holographic knowledge manifold pipeline that reports 0% catastrophic forgetting alongside significant compression. For generative models, <em>MIT<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.10529\">Mitigating Catastrophic Forgetting and Mode Collapse in Text-to-Image Diffusion via Latent Replay<\/a>\u201d uses Latent Replay, storing compact feature representations instead of raw data to enable continual learning without excessive memory. In recommendation systems, <em>University of Technology Sydney<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.07319\">MEGG: Replay via Maximally Extreme GGscore in Incremental Learning for Neural Recommendation Models<\/a>\u201d selectively replays samples with extreme GGscores to maintain predictive performance. For few-shot incremental learning, <em>Guilin University of Electronic Technology<\/em> et al.\u00a0in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.19664\">MoTiC: Momentum Tightness and Contrast for Few-Shot Class-Incremental Learning<\/a>\u201d combine Bayesian analysis with contrastive learning to reduce estimation bias and improve robustness.<\/p>\n<p><strong>Biologically inspired methods<\/strong> are also gaining traction. 
<em>Zhejiang University<\/em> et al.\u00a0in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.17439\">SPICED: A Synaptic Homeostasis-Inspired Framework for Unsupervised Continual EEG Decoding<\/a>\u201d propose a neuromorphic framework mimicking synaptic homeostasis to adapt to new individuals while preserving old knowledge in EEG decoding. Similarly, <em>Beijing Jiaotong University<\/em> et al.\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.14544\">MemEvo: Memory-Evolving Incremental Multi-view Clustering<\/a>\u201d draws inspiration from hippocampus-prefrontal cortex memory to balance plasticity and stability in multi-view clustering.<\/p>\n<p>For specialized applications, strategies like <strong>cross-modal knowledge transfer<\/strong> are key. <em>Nankai University<\/em> and <em>Tencent Ethereal Audio Lab<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.14930\">Cross-Modal Knowledge Distillation for Speech Large Language Models<\/a>\u201d uses distillation to preserve textual knowledge while adding speech capabilities to LLMs, combating modality inequivalence. <em>CAS ICT<\/em> and <em>University of Chinese Academy of Sciences<\/em> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.15642\">UNIV: Unified Foundation Model for Infrared and Visible Modalities<\/a>\u201d introduce a dual-knowledge preservation mechanism to fuse infrared and visible modalities, enhancing performance in adverse conditions.<\/p>\n<p>Even in the absence of explicit task boundaries, <strong>adaptive mechanisms<\/strong> are emerging. <em>Goethe University Frankfurt<\/em> et al.\u00a0in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.21161\">DATS: Distance-Aware Temperature Scaling for Calibrated Class-Incremental Learning<\/a>\u201d improve calibration by adapting temperature scaling based on task proximity without explicit task information. 
<em>South China University of Technology<\/em> et al.\u00a0in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.15523\">AFT: An Exemplar-Free Class Incremental Learning Method for Environmental Sound Classification<\/a>\u201d use Acoustic Feature Transformation to align old and new features, mitigating forgetting in environmental sound classification without storing historical data.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>These innovations are supported by new benchmarks, robust models, and clever utilization of existing resources:<\/p>\n<ul>\n<li><strong>Language Models &amp; Continual Fine-Tuning<\/strong>: Many papers leverage <strong>Large Language Models (LLMs)<\/strong> (Llama, Qwen, etc.) and fine-tuning techniques like <strong>LoRA (Low-Rank Adaptation)<\/strong>. Notably, <em>Zhejiang University<\/em> and <em>Inclusion AI, Ant Group<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.08814\">Merge-of-Thought Distillation<\/a>\u201d uses just 200 high-quality Chain-of-Thought (CoT) samples to distill reasoning from multiple teachers into compact student models. <em>Ant Group, China<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.08255\">Mitigating Catastrophic Forgetting in Large Language Models with Forgetting-aware Pruning<\/a>\u201d introduces the <strong>Forgetting-Aware Pruning Metric (FAPM)<\/strong> to prune LLMs without architectural changes. <em>Appier Research<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2501.14315\">Mitigating Forgetting in LLM Fine-Tuning via Low-Perplexity Token Learning<\/a>\u201d proposes <strong>Selective Token Masking (STM)<\/strong> to preserve general capabilities.<\/li>\n<li><strong>Robotics &amp; Embodied AI<\/strong>: Several works utilize LLMs for code generation in robotics. 
<em>Technical University of Munich<\/em> et al.\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.18597\">Growing with Your Embodied Agent: A Human-in-the-Loop Lifelong Code Generation Framework for Long-Horizon Manipulation Skills<\/a>\u201d achieves complex tasks by combining LLM-generated code with human feedback. <em>National University of Singapore<\/em> et al.\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2410.02995\">Task-agnostic Lifelong Robot Learning with Retrieval-based Weighted Local Adaptation<\/a>\u201d uses a plug-and-play framework for skill recovery. <em>Alejandro Mllo<\/em>\u2019s \u201c<a href=\"https:\/\/github.com\/AlejandroMllo\/action\">Action Flow Matching for Continual Robot Learning<\/a>\u201d demonstrates record success rates.<\/li>\n<li><strong>Vision &amp; Multi-modal<\/strong>: Papers often build on established vision models like <strong>CLIP<\/strong>. <em>Nanjing University of Science and Technology<\/em> et al.\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.11264\">Cross-Domain Attribute Alignment with CLIP: A Rehearsal-Free Approach for Class-Incremental Unsupervised Domain Adaptation<\/a>\u201d uses CLIP for attribute alignment. <em>South China University of Technology<\/em> et al.\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.14958\">Seeing 3D Through 2D Lenses: 3D Few-Shot Class-Incremental Learning via Cross-Modal Geometric Rectification<\/a>\u201d leverages CLIP\u2019s spatial semantics for 3D few-shot learning. 
<em>Seoul National University<\/em> et al.\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2212.08328\">MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields<\/a>\u201d uses the NeRF network itself as memory with a ray generator network.<\/li>\n<li><strong>Continual Learning Benchmarks &amp; Frameworks<\/strong>: <em>Tsinghua University<\/em> et al.\u00a0introduce <strong>CL2GEC<\/strong> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.13672\">CL<span class=\"math inline\"><sup>2<\/sup><\/span>GEC: A Multi-Discipline Benchmark for Continual Learning in Chinese Literature Grammatical Error Correction<\/a>\u201d for evaluating GEC in dynamic academic writing. <em>Kyoto University<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.15703\">SONAR: Self-Distilled Continual Pre-training for Domain Adaptive Audio Representation<\/a>\u201d uses self-distillation and dynamic tokenizers. <em>Shiv Nadar Institution of Eminence<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.09935\">SCoDA: Self-supervised Continual Domain Adaptation<\/a>\u201d uses self-supervised initialization for better domain adaptation.<\/li>\n<li><strong>Medical &amp; Edge AI<\/strong>: <em>Carnegie Mellon University<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.18457\">GluMind: Multimodal Parallel Attention and Knowledge Retention for Robust Cross-Population Blood Glucose Forecasting<\/a>\u201d uses a Transformer-based model with distillation for blood glucose forecasting. <em>Universidad Polit\u00e9cnica de Madrid<\/em> et al.\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.13974\">Personalization on a Budget: Minimally-Labeled Continual Learning for Resource-Efficient Seizure Detection<\/a>\u201d focuses on resource-efficient seizure detection on wearable devices. 
<em>Chinese Academy of Sciences<\/em>\u2019 \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.15785\">CBPNet: A Continual Backpropagation Prompt Network for Alleviating Plasticity Loss on Edge Devices<\/a>\u201d targets plasticity loss on edge devices with minimal parameter overhead.<\/li>\n<li><strong>Theoretical Foundations<\/strong>: <em>The University of Sydney<\/em> et al.\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.12727\">Unbiased Online Curvature Approximation for Regularized Graph Continual Learning<\/a>\u201d proposes a regularization framework based on the <strong>Fisher Information Matrix (FIM)<\/strong>, showing that Elastic Weight Consolidation (EWC) is a special case. <em>University of Electronic Science and Technology of China<\/em>\u2019s \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.06100\">Orthogonal Low-rank Adaptation in Lie Groups for Continual Learning of Large Language Models<\/a>\u201d introduces OLieRA, leveraging <strong>Lie group theory<\/strong> and orthogonality constraints to preserve LLM parameter geometry.<\/li>\n<li><strong>Novel Architectures &amp; Mechanisms<\/strong>: <em>INFLY TECH<\/em> et al.\u00a0in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2509.07430\">The Choice of Divergence: A Neglected Key to Mitigating Diversity Collapse in Reinforcement Learning with Verifiable Reward<\/a>\u201d introduce <strong>DPH-RL<\/strong>, which leverages mass-covering f-divergences as a \u201crehearsal mechanism\u201d to maintain broad solution coverage and address diversity collapse in LLM fine-tuning.<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>The implications of these advancements are profound. Overcoming catastrophic forgetting means we can build AI systems that are truly adaptive, robust, and sustainable. Imagine <strong>large language models<\/strong> that continually learn from new information, adapting to evolving human preferences and linguistic nuances without needing expensive retraining. 
Think of <strong>robots<\/strong> that acquire new skills throughout their operational lifespan, seamlessly integrating human feedback and adapting to novel environments. In <strong>healthcare<\/strong>, personalized AI can continually monitor and adapt to individual patient data, offering more accurate predictions and interventions over time.<\/p>\n<p>This research opens doors to more efficient and trustworthy AI. The focus on memory-efficient strategies, parameter-efficient fine-tuning, and biologically inspired approaches promises a future of AI that is not only powerful but also resource-conscious and resilient. As we move forward, the challenge lies in scaling these solutions, developing unified frameworks that span diverse modalities and tasks, and ensuring responsible deployment in real-world scenarios. The journey to truly lifelong learning AI is still long, but these breakthroughs show we are on the right path, bringing us closer to intelligent systems that grow and evolve with us.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 50 papers on catastrophic forgetting: Sep. 
29, 2025<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,63],"tags":[179,1617,786,178,78,74],"class_list":["post-1315","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-machine-learning","tag-catastrophic-forgetting","tag-main_tag_catastrophic_forgetting","tag-class-incremental-learning","tag-continual-learning","tag-large-language-models-llms","tag-reinforcement-learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning<\/title>\n<meta name=\"description\" content=\"Latest 50 papers on catastrophic forgetting: Sep. 29, 2025\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning\" \/>\n<meta property=\"og:description\" content=\"Latest 50 papers on catastrophic forgetting: Sep. 
29, 2025\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-09-29T07:46:32+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-28T22:06:37+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning\",\"datePublished\":\"2025-09-29T07:46:32+00:00\",\"dateModified\":\"2025-12-28T22:06:37+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\\\/\"},\"wordCount\":1471,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"catastrophic forgetting\",\"catastrophic forgetting\",\"class-incremental learning\",\"continual learning\",\"large language models (llms)\",\"reinforcement learning\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Machine 
Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\\\/\",\"name\":\"Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2025-09-29T07:46:32+00:00\",\"dateModified\":\"2025-12-28T22:06:37+00:00\",\"description\":\"Latest 50 papers on catastrophic forgetting: Sep. 29, 2025\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/09\\\/29\\\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Catastrophic Forgetting No More: The Latest Breakthroughs in Continual 
Learning\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem 
Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning","description":"Latest 50 papers on catastrophic forgetting: Sep. 29, 2025","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/","og_locale":"en_US","og_type":"article","og_title":"Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning","og_description":"Latest 50 papers on catastrophic forgetting: Sep. 
29, 2025","og_url":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2025-09-29T07:46:32+00:00","article_modified_time":"2025-12-28T22:06:37+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning","datePublished":"2025-09-29T07:46:32+00:00","dateModified":"2025-12-28T22:06:37+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/"},"wordCount":1471,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["catastrophic forgetting","catastrophic forgetting","class-incremental learning","continual learning","large language models (llms)","reinforcement learning"],"articleSection":["Artificial Intelligence","Computer Vision","Machine 
Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/","url":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/","name":"Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2025-09-29T07:46:32+00:00","dateModified":"2025-12-28T22:06:37+00:00","description":"Latest 50 papers on catastrophic forgetting: Sep. 29, 2025","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2025\/09\/29\/catastrophic-forgetting-no-more-the-latest-breakthroughs-in-continual-learning-3\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Catastrophic Forgetting No More: The Latest Breakthroughs in Continual Learning"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest 
research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. 
Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":67,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-ld","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1315","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=1315"}],"version-history":[{"count":1,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1315\/revisions"}],"predecessor-version":[{"id":3735,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1315\/revisions\/3735"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=1315"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=1315"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=1315"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}