{"id":1984,"date":"2025-11-23T08:20:24","date_gmt":"2025-11-23T08:20:24","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/"},"modified":"2025-12-28T21:17:36","modified_gmt":"2025-12-28T21:17:36","slug":"catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/","title":{"rendered":"Catastrophic Forgetting: Recent Breakthroughs in Lifelong AI"},"content":{"rendered":"<h3>Latest 50 papers on catastrophic forgetting: Nov. 23, 2025<\/h3>\n<p>Catastrophic forgetting, the notorious tendency of neural networks to rapidly lose previously acquired knowledge when learning new tasks, remains one of the most significant hurdles in achieving truly intelligent, adaptive AI. Imagine an autonomous vehicle that forgets how to drive in the rain after learning to navigate snow, or a medical AI that forgets how to detect one disease after being updated for another. This fundamental challenge prevents models from continually learning and adapting in dynamic, real-world environments. Fortunately, recent research is pushing the boundaries, offering exciting new paradigms and practical solutions to make our AI systems more resilient and \u2018forget-less\u2019. This post dives into some of these groundbreaking advancements.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>Many recent breakthroughs converge on a central theme: intelligently managing the interplay between old and new knowledge, often through selective parameter updates, memory mechanisms, or structured architectural designs. 
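Before turning to individual papers, the most common memory mechanism in this literature, experience replay, is worth making concrete: keep a small buffer of past examples and mix them into every new-task batch so gradients from old data keep shaping the model. The sketch below is purely illustrative (the reservoir-sampling buffer and all names are ours, not from any paper surveyed here):

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples, filled by reservoir sampling
    so every example ever seen is retained with equal probability."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0  # total examples offered so far

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            # Keep the new example with probability capacity/seen,
            # evicting a uniformly random slot when it is kept.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        """Draw up to k stored examples to mix into the current batch."""
        return random.sample(self.items, min(k, len(self.items)))

# During training on a new task, each batch combines fresh data with
# replayed examples, which is what counteracts forgetting.
random.seed(0)
buffer = ReplayBuffer(capacity=100)
for step in range(1000):
    buffer.add(("x", step))  # stand-in for a real (input, label) pair
old_batch = buffer.sample(16)
```

Variants of this idea recur throughout the papers below: replaying in a particular order (IOR), replaying generated rather than stored data (CLTS), or compressing the buffer into sufficient statistics (Compact Memory).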
For instance, the <strong>GResilience<\/strong> framework, introduced by Diaeddin Rimawi from Fraunhofer Italia Research and the University of Bologna in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.16593\">Green Resilience of Cyber-Physical Systems: Doctoral Dissertation<\/a>\u201d, tackles catastrophic forgetting in Online Collaborative AI Systems (OL-CAIS) by balancing \u2018greenness\u2019 and \u2018resilience\u2019 through multi-agent policies and containerization, notably reducing CO2 emissions by up to 50% while maintaining performance. This highlights how robustness can be achieved alongside sustainability.<\/p>\n<p>In computer vision, the challenge of incremental object detection (IOD) finds a novel solution in <strong>IOR: Inversed Objects Replay for Incremental Object Detection<\/strong> by Zhulin An et al.\u00a0from the Institute of Computing Technology, Chinese Academy of Sciences (<a href=\"https:\/\/arxiv.org\/pdf\/2406.04829\">https:\/\/arxiv.org\/pdf\/2406.04829<\/a>). IOR ingeniously reuses old objects in reverse order, effectively reducing forgetting without requiring the storage of old-class data. Similarly, for class-incremental learning, <strong>HASTEN<\/strong> (Hierarchical Semantic Tree Anchoring), presented by Tao Hu et al.\u00a0from Nanjing University in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.15633\">Hierarchical Semantic Tree Anchoring for CLIP-Based Class-Incremental Learning<\/a>\u201d, leverages hyperbolic space and external knowledge graphs to preserve hierarchical semantic structures, ensuring stable feature representations during updates.<\/p>\n<p>Large Language Models (LLMs) are also a major focus. 
The <strong>PIECE<\/strong> method from Lingxiang Wang et al.\u00a0at Beihang University, described in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.15375\">Parameter Importance-Driven Continual Learning for Foundation Models<\/a>\u201d, addresses forgetting by selectively updating only a tiny fraction (0.1%) of parameters deemed most critical. This allows foundation models to gain domain-specific knowledge without losing their general capabilities. Complementary to this, the <strong>MetaGDPO<\/strong> approach by Lanxue Zhang et al.\u00a0from the Institute of Information Engineering, Chinese Academy of Sciences, detailed in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.12113\">MetaGDPO: Alleviating Catastrophic Forgetting with Metacognitive Knowledge through Group Direct Preference Optimization<\/a>\u201d, integrates metacognitive knowledge into both data and training to improve reasoning in smaller LLMs.<\/p>\n<p>Multimodal systems present an even greater challenge. <strong>CKDA<\/strong> (Cross-modality Knowledge Disentanglement and Alignment), introduced by Zhenyu Cui et al.\u00a0from Peking University, in their work on \u201c<a href=\"https:\/\/github.com\/PKU-ICST-MIPL\/CKDA-AAAI2026\">CKDA: Cross-modality Knowledge Disentanglement and Alignment for Visible-Infrared Lifelong Person Re-identification<\/a>\u201d, disentangles modality-specific and common knowledge to prevent forgetting in Visible-Infrared Lifelong Person Re-IDentification. For multimodal LLMs, Songze Li et al.\u00a0from Harbin Institute of Technology propose a \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.15164\">Multimodal Continual Instruction Tuning with Dynamic Gradient Guidance<\/a>\u201d, framing forgetting as a missing gradient problem and approximating old task gradients using parameter space geometry. 
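The selective-update idea behind PIECE can be made concrete with a toy sketch. Everything here is an illustrative stand-in rather than the paper's actual method: we score each parameter by accumulated squared gradient (a Fisher-style importance proxy), keep only the top 0.1%, and let the new-task update touch nothing else, so the frozen majority preserves prior knowledge.

```python
import numpy as np

def importance_mask(grad_history, keep_frac=0.001):
    """Boolean mask selecting the top `keep_frac` of parameters by
    accumulated squared gradient (an illustrative importance score)."""
    importance = np.sum(grad_history ** 2, axis=0)  # one score per parameter
    k = max(1, int(keep_frac * importance.size))
    threshold = np.partition(importance.ravel(), -k)[-k]
    return importance >= threshold

def masked_update(params, new_task_grad, mask, lr=0.01):
    """Apply the new-task gradient only where the mask allows;
    all other parameters are left exactly as they were."""
    return params - lr * new_task_grad * mask

rng = np.random.default_rng(0)
params = rng.normal(size=10_000)
grads = rng.normal(size=(5, 10_000))  # gradients logged on the new domain
mask = importance_mask(grads, keep_frac=0.001)
updated = masked_update(params, grads[-1], mask)
```

With 10,000 parameters and `keep_frac=0.001`, only about 10 entries of `updated` differ from `params`; the rest are bit-for-bit unchanged, which is the mechanism by which general capabilities survive domain-specific fine-tuning.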
A similar approach for multimodal food analysis, <strong>Dual-LoRA<\/strong> and quality-enhanced pseudo replay, is presented by Jingjing Chen et al.\u00a0(<a href=\"https:\/\/arxiv.org\/pdf\/2511.13351\">https:\/\/arxiv.org\/pdf\/2511.13351<\/a>), separating task-specific and shared knowledge for efficient adaptation.<\/p>\n<p>Beyond specialized models, foundational theory is evolving. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.12828\">Catastrophic Forgetting in Kolmogorov-Arnold Networks<\/a>\u201d by Mohammad Marufur Rahman et al.\u00a0from Wake Forest University provides a theoretical framework linking forgetting in KANs to activation support overlap and intrinsic data dimension, even proposing <strong>KAN-LoRA<\/strong> as a KAN-based adapter for continual fine-tuning of LMs.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>The innovations highlighted above are often built upon or validated by significant resources, pushing the capabilities of continual learning across various domains:<\/p>\n<ul>\n<li><strong>IOR:<\/strong> Demonstrated on standard benchmarks like <strong>COCO 2017<\/strong>, with code available at <a href=\"https:\/\/github.com\/JiaJia075\/IOR\">https:\/\/github.com\/JiaJia075\/IOR<\/a>.<\/li>\n<li><strong>HASTEN:<\/strong> Uses external <strong>knowledge graphs<\/strong> and <strong>CLIP-based learning<\/strong> to embed features in hyperbolic space.<\/li>\n<li><strong>PIECE:<\/strong> Evaluated across diverse <strong>language models<\/strong> and <strong>multimodal models<\/strong>, with code at <a href=\"https:\/\/github.com\/wanglingxiang0717\/PIECE\">https:\/\/github.com\/wanglingxiang0717\/PIECE<\/a>.<\/li>\n<li><strong>Multimodal Continual Instruction Tuning:<\/strong> Focuses on <strong>multimodal large language models<\/strong> and their parameter space geometry.<\/li>\n<li><strong>DGS-Net:<\/strong> Fine-tunes <strong>CLIP<\/strong> for AI-generated image 
detection, with code at <a href=\"https:\/\/github.com\/haofanwang\/inswapper\">https:\/\/github.com\/haofanwang\/inswapper<\/a>.<\/li>\n<li><strong>MergeSlide:<\/strong> Leverages <strong>vision-language foundation models<\/strong> and <strong>class-aware prompts<\/strong> for lifelong learning on Whole Slide Images (WSIs), with code at <a href=\"https:\/\/github.com\/caodoanh2001\/MergeSlide\">https:\/\/github.com\/caodoanh2001\/MergeSlide<\/a>.<\/li>\n<li><strong>ConSurv:<\/strong> Introduces the <strong>MSAIL (Multimodal Survival Analysis Incremental Learning) benchmark<\/strong> using four integrated datasets for survival analysis, with code at <a href=\"https:\/\/github.com\/LucyDYu\/ConSurv\">https:\/\/github.com\/LucyDYu\/ConSurv<\/a>.<\/li>\n<li><strong>PANDA:<\/strong> Utilizes a <strong>CLIP encoder<\/strong> for patch transfer and is integratable with existing PTM-based continual learning frameworks, with code at <a href=\"https:\/\/gitlab.com\/viper-purdue\/panda\">https:\/\/gitlab.com\/viper-purdue\/panda<\/a>.<\/li>\n<li><strong>Hydra:<\/strong> A mitigation method for Split Federated Learning tested on non-IID data distributions, with code at <a href=\"https:\/\/github.com\/jtirana98\/Hydra-CF-in-SFL\">https:\/\/github.com\/jtirana98\/Hydra-CF-in-SFL<\/a>.<\/li>\n<li><strong>CLTS:<\/strong> Leverages <strong>BLIP<\/strong> and <strong>Stable Diffusion<\/strong> for generative replay, significantly reducing memory, with code at <a href=\"https:\/\/github.com\/iiitb-nlpir\/CLTS\">https:\/\/github.com\/iiitb-nlpir\/CLTS<\/a>.<\/li>\n<li><strong>FSC-Net:<\/strong> Evaluated on <strong>Split-MNIST<\/strong> and <strong>Split-CIFAR-10<\/strong>, with code at <a href=\"https:\/\/github.com\/MedGm\/FSC-Net\">https:\/\/github.com\/MedGm\/FSC-Net<\/a>.<\/li>\n<li><strong>AnaCP:<\/strong> Achieves upper-bound continual learning with <strong>pre-trained models (PTMs)<\/strong> via an analytic contrastive projection layer, code at <a 
href=\"https:\/\/github.com\/SalehMomeni\/AnaCP\">https:\/\/github.com\/SalehMomeni\/AnaCP<\/a>.<\/li>\n<li><strong>Compact Memory for Continual Logistic Regression:<\/strong> Demonstrates efficient memory building with <strong>Hessian matching<\/strong> and <strong>probabilistic PCA<\/strong>, code at <a href=\"https:\/\/github.com\/team-approx-bayes\/compact_memory_code\">https:\/\/github.com\/team-approx-bayes\/compact_memory_code<\/a>.<\/li>\n<li><strong>R-Tuning:<\/strong> Uses <strong>wavelet decomposition<\/strong> for continual adaptation of pre-trained time-series models, with code at <a href=\"https:\/\/github.com\/Ivan-YinTY\/R-Tuning\">https:\/\/github.com\/Ivan-YinTY\/R-Tuning<\/a>.<\/li>\n<li><strong>LwP:<\/strong> Evaluated across <strong>image and time-series benchmarks<\/strong>, with code at <a href=\"https:\/\/github.com\/AICPS-Lab\/lwp\">https:\/\/github.com\/AICPS-Lab\/lwp<\/a>.<\/li>\n<li><strong>R2-Seg:<\/strong> Uses <strong>LLM-guided anatomical reasoning<\/strong> for training-free OOD medical tumor segmentation, with code at <a href=\"https:\/\/github.com\/Eurekashen\/R2Seg\">https:\/\/github.com\/Eurekashen\/R2Seg<\/a>.<\/li>\n<li><strong>CoSO:<\/strong> Optimizes <strong>pre-trained models<\/strong> in continuous gradient-derived subspaces, with code at <a href=\"https:\/\/github.com\/lamda-nju\/CoSO\">https:\/\/github.com\/lamda-nju\/CoSO<\/a>.<\/li>\n<li><strong>MoETTA:<\/strong> Uses a <strong>Mixture-of-Experts (MoE) paradigm<\/strong> and introduces <strong>potpourri and potpourri+ benchmarks<\/strong> for test-time adaptation, with code at <a href=\"https:\/\/github.com\/AnikiFan\/MoETTA\">https:\/\/github.com\/AnikiFan\/MoETTA<\/a>.<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>The collective impact of this research is profound, promising more robust, adaptable, and efficient AI systems across diverse applications. 
From enabling ethical, green AI in cyber-physical systems to enhancing the longevity of medical imaging diagnostics and making LLMs continuously fresh for evolving codebases, the mitigation of catastrophic forgetting is critical. The move towards training-free frameworks like R2-Seg for medical segmentation or analytic methods like AnaCP for continual learning hints at a future where models can adapt without extensive retraining, saving computational resources and reducing their environmental footprint.<\/p>\n<p>We\u2019re seeing a trend towards more sophisticated memory mechanisms, parameter-efficient tuning, and architecturally-aware designs that prevent knowledge interference. The integration of meta-learning, hierarchical structures, and even biologically-inspired mechanisms (like in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2505.14125\">Contrastive Consolidation of Top-Down Modulations Achieves Sparsely Supervised Continual Learning<\/a>\u201d) points to a future where AI models learn more like humans \u2013 continuously and adaptively. The emphasis on new benchmarks and evaluation protocols that better simulate real-world, dynamic conditions is also vital, bridging the gap between theoretical advancements and practical deployment.<\/p>\n<p>While challenges remain, especially concerning hyperparameter sensitivity (as highlighted in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.15652\">Continual Reinforcement Learning for Cyber-Physical Systems: Lessons Learned and Open Challenges<\/a>\u201d by Kim N. Nolle et al.\u00a0from Trinity College Dublin), these breakthroughs paint a clear picture: the era of truly lifelong learning AI is closer than ever, ushering in a new generation of intelligent systems that learn, evolve, and <em>remember<\/em>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 50 papers on catastrophic forgetting: Nov. 
23, 2025<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,63],"tags":[179,1617,786,178,430,134],"class_list":["post-1984","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-machine-learning","tag-catastrophic-forgetting","tag-main_tag_catastrophic_forgetting","tag-class-incremental-learning","tag-continual-learning","tag-continual-learning-cl","tag-knowledge-distillation"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Catastrophic Forgetting: Recent Breakthroughs in Lifelong AI<\/title>\n<meta name=\"description\" content=\"Latest 50 papers on catastrophic forgetting: Nov. 23, 2025\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Catastrophic Forgetting: Recent Breakthroughs in Lifelong AI\" \/>\n<meta property=\"og:description\" content=\"Latest 50 papers on catastrophic forgetting: Nov. 
23, 2025\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-23T08:20:24+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-28T21:17:36+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Catastrophic Forgetting: Recent Breakthroughs in Lifelong AI\",\"datePublished\":\"2025-11-23T08:20:24+00:00\",\"dateModified\":\"2025-12-28T21:17:36+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\\\/\"},\"wordCount\":1212,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"catastrophic forgetting\",\"catastrophic forgetting\",\"class-incremental learning\",\"continual learning\",\"continual learning (cl)\",\"knowledge distillation\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Machine Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\\\/\",\"name\":\"Catastrophic Forgetting: 
Recent Breakthroughs in Lifelong AI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2025-11-23T08:20:24+00:00\",\"dateModified\":\"2025-12-28T21:17:36+00:00\",\"description\":\"Latest 50 papers on catastrophic forgetting: Nov. 23, 2025\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Catastrophic Forgetting: Recent Breakthroughs in Lifelong AI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest 
research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot 
is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Catastrophic Forgetting: Recent Breakthroughs in Lifelong AI","description":"Latest 50 papers on catastrophic forgetting: Nov. 23, 2025","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/","og_locale":"en_US","og_type":"article","og_title":"Catastrophic Forgetting: Recent Breakthroughs in Lifelong AI","og_description":"Latest 50 papers on catastrophic forgetting: Nov. 23, 2025","og_url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2025-11-23T08:20:24+00:00","article_modified_time":"2025-12-28T21:17:36+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Catastrophic Forgetting: Recent Breakthroughs in Lifelong AI","datePublished":"2025-11-23T08:20:24+00:00","dateModified":"2025-12-28T21:17:36+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/"},"wordCount":1212,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["catastrophic forgetting","catastrophic forgetting","class-incremental learning","continual learning","continual learning (cl)","knowledge distillation"],"articleSection":["Artificial Intelligence","Computer Vision","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/","url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/","name":"Catastrophic Forgetting: Recent Breakthroughs in Lifelong AI","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2025-11-23T08:20:24+00:00","dateModified":"2025-12-28T21:17:36+00:00","description":"Latest 50 papers on catastrophic forgetting: Nov. 
23, 2025","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/catastrophic-forgetting-recent-breakthroughs-in-lifelong-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Catastrophic Forgetting: Recent Breakthroughs in Lifelong AI"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\
/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language 
models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":65,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-w0","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1984","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=1984"}],"version-history":[{"count":1,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1984\/revisions"}],"predecessor-version":[{"id":3191,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1984\/revisions\/3191"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=1984"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=1984"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=1984"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}