{"id":1874,"date":"2025-11-16T10:23:16","date_gmt":"2025-11-16T10:23:16","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/"},"modified":"2025-12-28T21:21:51","modified_gmt":"2025-12-28T21:21:51","slug":"semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/","title":{"rendered":"Semantic Segmentation: A Panorama of Progress from Pixels to Planets"},"content":{"rendered":"<h3>Latest 50 papers on semantic segmentation: Nov. 16, 2025<\/h3>\n<p>Semantic segmentation, the art of pixel-level image understanding, continues to be a cornerstone of AI\/ML, driving advancements across diverse fields from autonomous vehicles to medical diagnostics and environmental monitoring. Recent research showcases an exhilarating blend of innovation, tackling challenges like data scarcity, real-time processing, and the nuances of complex, dynamic environments. This digest delves into the latest breakthroughs, offering a glimpse into how researchers are pushing the boundaries of this critical technology.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>One dominant theme emerging from recent papers is the pursuit of <strong>label efficiency and robust generalization<\/strong> in semantic segmentation. Traditional methods often demand vast, painstakingly annotated datasets, a bottleneck that several new approaches are directly addressing. 
For instance, the <strong>Dual-Branch Point Grouping (DBGroup)<\/strong> framework from researchers at <a href=\"https:\/\/arxiv.org\/pdf\/2511.10003\">Shenzhen University<\/a> demonstrates how scene-level annotations can significantly reduce labeling costs in 3D instance segmentation while maintaining strong performance, offering a more scalable alternative to dense point-wise supervision. Similarly, for real-time applications, the University of Illinois Urbana-Champaign\u2019s work on <strong>REN: Fast and Efficient Region Encodings from Patch-Based Image Encoders<\/strong> (<a href=\"https:\/\/arxiv.org\/pdf\/2505.18153\">https:\/\/arxiv.org\/pdf\/2505.18153<\/a>) eliminates the need for expensive explicit segmentation steps, generating high-quality region tokens directly from patch features, achieving a remarkable 60x speedup.<\/p>\n<p>Another critical innovation lies in <strong>enhancing model robustness under challenging conditions<\/strong> and leveraging multi-modality. Researchers from Beihang University and Beijing Institute of Technology, in their paper \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.08269\">Re-coding for Uncertainties: Edge-awareness Semantic Concordance for Resilient Event-RGB Segmentation<\/a>,\u201d introduce a novel framework leveraging semantic edge information to unify heterogeneous event and RGB data, leading to more resilient segmentation in extreme scenarios. For autonomous driving, the \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2505.03539\">Panoramic Out-of-Distribution Segmentation for Autonomous Driving<\/a>\u201d from the University of Technology and Research Institute for AI pioneers a framework specifically designed to enhance perception in unseen, real-world environments. 
This is further supported by the \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2510.27047\">AD-SAM: Fine-Tuning the Segment Anything Vision Foundation Model for Autonomous Driving Perception<\/a>\u201d by Tsinghua University and Nanyang Technological University, among others, demonstrating how fine-tuning large vision foundation models like SAM can improve their performance under domain shifts.<\/p>\n<p>In specialized domains, such as medical imaging, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.10432\">Histology-informed tiling of whole tissue sections improves the interpretability and predictability of cancer relapse and genetic alterations<\/a>\u201d by a collaborative team including the <a href=\"https:\/\/arxiv.org\/pdf\/2511.10432\">University of Oxford<\/a> utilizes histology-informed tiling (HIT) and semantic segmentation to extract biologically meaningful patches, significantly boosting accuracy and interpretability for cancer prognosis. Similarly, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2510.27646\">VessShape: Few-shot 2D blood vessel segmentation by leveraging shape priors from synthetic images<\/a>\u201d from the University of S\u00e3o Paulo introduces a synthetic dataset emphasizing geometric shape to achieve robust few-shot and zero-shot blood vessel segmentation, crucial for diverse imaging modalities.<\/p>\n<p>Addressing foundational aspects of efficiency and interpretation, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.05449\">How Many Tokens Do 3D Point Cloud Transformer Architectures Really Need?<\/a>\u201d by the German Research Centre for Artificial Intelligence (DFKI) and ETH Zurich, among others, reveals significant token redundancy in 3D point cloud transformers, proposing a 3D-specific token merging strategy that reduces tokens by 90-95% without performance loss. 
For explainable AI (XAI) in segmentation, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2510.24414\">XAI Evaluation Framework for Semantic Segmentation<\/a>\u201d by the American University of Beirut provides a comprehensive pixel-level evaluation strategy, highlighting Score-CAM as a top performer for accurate and reliable explanations.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>The innovations highlighted above are built upon significant advancements in models, datasets, and benchmarks. Here\u2019s a quick look at some key resources:<\/p>\n<ul>\n<li><strong>Models &amp; Frameworks:<\/strong>\n<ul>\n<li><strong>LBMamba (<a href=\"https:\/\/github.com\/cvlab-stonybrook\/LBMamba\">https:\/\/github.com\/cvlab-stonybrook\/LBMamba<\/a>)<\/strong>: A novel State Space Model (SSM) architecture from <a href=\"https:\/\/arxiv.org\/pdf\/2506.15976\">Stony Brook University<\/a> that improves efficiency by integrating local backward scans, showing superior accuracy-throughput for various vision tasks, including semantic segmentation.<\/li>\n<li><strong>FlowFeat (<a href=\"https:\/\/github.com\/tum-vision\/flowfeat\">https:\/\/github.com\/tum-vision\/flowfeat<\/a>)<\/strong>: A pixel-dense embedding of motion profiles from <a href=\"https:\/\/arxiv.org\/pdf\/2511.07696\">TU Munich<\/a> that enhances segmentation and other dense prediction tasks through self-supervised learning without manual annotations.<\/li>\n<li><strong>SpecAware<\/strong>: A foundation model for hyperspectral remote sensing from <a href=\"https:\/\/arxiv.org\/pdf\/2510.27219\">East China Normal University<\/a> that unifies multi-sensor learning using meta-information and hypernetwork architecture.<\/li>\n<li><strong>MSDNet (<a href=\"https:\/\/github.com\/amirrezafateh\/MSDNet\">https:\/\/github.com\/amirrezafateh\/MSDNet<\/a>)<\/strong>: A few-shot semantic segmentation framework from the <a 
href=\"https:\/\/arxiv.org\/pdf\/2409.11316\">Institute of Computing Technology, University of Science and Technology of China<\/a> that leverages multi-scale decoding and Transformer-guided prototyping.<\/li>\n<li><strong>UMCFuse (<a href=\"https:\/\/github.com\/ixilai\/UMCFuse\">https:\/\/github.com\/ixilai\/UMCFuse<\/a>)<\/strong>: A unified framework for infrared and visible image fusion in complex scenes by <a href=\"https:\/\/arxiv.org\/pdf\/2402.02096\">Nanjing University<\/a>, achieving state-of-the-art performance across multiple tasks.<\/li>\n<li><strong>LangHOPS<\/strong>: An MLLM-based framework for open-vocabulary object-part instance segmentation from <a href=\"https:\/\/arxiv.org\/pdf\/2510.25263\">INSAIT, Sofia University<\/a> that grounds object-part hierarchies in language space.<\/li>\n<li><strong>RadZero (<a href=\"https:\/\/github.com\/deepnoid-ai\/RadZero\">https:\/\/github.com\/deepnoid-ai\/RadZero<\/a>)<\/strong>: A framework from <a href=\"https:\/\/arxiv.org\/pdf\/2504.07416\">DEEPNOID Inc.<\/a> for explainable vision-language alignment in chest X-rays, enabling zero-shot multi-task performance in classification, grounding, and segmentation.<\/li>\n<li><strong>LHT-CLIP<\/strong>: A training-free framework from <a href=\"https:\/\/arxiv.org\/pdf\/2510.23894\">The Ohio State University<\/a> that enhances the visual discriminability of CLIP models for open-vocabulary semantic segmentation.<\/li>\n<li><strong>WaveMAE<\/strong>: A self-supervised learning framework from the <a href=\"https:\/\/arxiv.org\/pdf\/2510.22697\">Universit\u00e0 di Parma<\/a> for remote sensing data that combines wavelet decomposition with masked autoencoding.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Notable Datasets &amp; Benchmarks:<\/strong>\n<ul>\n<li><strong>ACDC (<a href=\"https:\/\/acdc.vision.ee.ethz.ch\">https:\/\/acdc.vision.ee.ethz.ch<\/a>)<\/strong>: The first large-scale labeled driving segmentation dataset specifically for adverse conditions from <a 
href=\"https:\/\/arxiv.org\/pdf\/2104.13395\">ETH Z\u00fcrich<\/a>, supporting uncertainty-aware segmentation.<\/li>\n<li><strong>EIDSeg (<a href=\"https:\/\/github.com\/HUILIHUANG413\/EIDSeg\">https:\/\/github.com\/HUILIHUANG413\/EIDSeg<\/a>)<\/strong>: A large-scale pixel-level semantic segmentation dataset for post-earthquake damage assessment from social media images, developed by <a href=\"https:\/\/arxiv.org\/pdf\/2511.06456\">Georgia Institute of Technology<\/a>.<\/li>\n<li><strong>Coralscapes (<a href=\"https:\/\/huggingface.co\/datasets\/EPFL-ECEO\/coralscapes\">https:\/\/huggingface.co\/datasets\/EPFL-ECEO\/coralscapes<\/a>)<\/strong>: The first general-purpose dense semantic segmentation dataset for coral reefs, introduced by <a href=\"https:\/\/arxiv.org\/pdf\/2503.20000\">\u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne<\/a>, critical for marine conservation.<\/li>\n<li><strong>MLPerf Automotive (<a href=\"https:\/\/github.com\/mlcommons\/mlperf_automotive\">https:\/\/github.com\/mlcommons\/mlperf_automotive<\/a>)<\/strong>: The first standardized public benchmark for evaluating ML systems in automotive applications, including 2D semantic segmentation, by a consortium of industry and academic leaders.<\/li>\n<li><strong>Hyper-400K<\/strong>: A new large-scale high-resolution airborne HSI benchmark dataset for remote sensing, accompanying the SpecAware framework. (<a href=\"https:\/\/arxiv.org\/pdf\/2510.27219\">https:\/\/arxiv.org\/pdf\/2510.27219<\/a>)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>The collective impact of this research is profound. We\u2019re seeing semantic segmentation evolve from a data-hungry task into a more <strong>adaptable, efficient, and robust technology<\/strong>. 
The drive towards label-efficient and training-free methods, as seen in papers like \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.03004\">Learning with less: label-efficient land cover classification at very high spatial resolution using self-supervised deep learning<\/a>\u201d from <a href=\"https:\/\/arxiv.org\/pdf\/2511.03004\">Mississippi State University<\/a> and \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.08248\">NERVE: Neighbourhood &amp; Entropy-guided Random-walk for training free open-Vocabulary sEgmentation<\/a>\u201d by <a href=\"https:\/\/arxiv.org\/pdf\/2511.08248\">LIVIA, \u00c9TS Montr\u00e9al<\/a>, makes advanced AI accessible to domains where data annotation is prohibitively expensive, such as environmental monitoring and rare disease detection. Integrating physical properties and human cognitive laws, as explored in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2411.16800\">Phys4DGen: Physics-Compliant 4D Generation with Multi-Material Composition Perception<\/a>\u201d from <a href=\"https:\/\/arxiv.org\/pdf\/2411.16800\">Xiamen University<\/a> and \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2510.26268\">Revisiting Generative Infrared and Visible Image Fusion Based on Human Cognitive Laws<\/a>\u201d by <a href=\"https:\/\/arxiv.org\/pdf\/2510.26268\">Jiangnan University<\/a>, promises more realistic and interpretable AI systems.<\/p>\n<p>The advancements in robustness for autonomous systems under adverse conditions, exemplified by \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.01434\">Terrain-Enhanced Resolution-aware Refinement Attention for Off-Road Segmentation<\/a>\u201d and \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.01250\">Source-Only Cross-Weather LiDAR via Geometry-Aware Point Drop<\/a>\u201d, are critical for real-world deployment of self-driving cars and robots. 
Furthermore, the development of new evaluation frameworks and benchmarks, like MLPerf Automotive and the XAI Evaluation Framework, ensures that progress is measured against rigorous standards, fostering trust and accelerating adoption.<\/p>\n<p>Looking ahead, the synergy between semantic segmentation and other AI paradigms, like Large Language Models (LLMs) and 3D Gaussian Splatting, as explored in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.05747\">CoT-X: An Adaptive Framework for Cross-Model Chain-of-Thought Transfer and Optimization<\/a>\u201d by <a href=\"https:\/\/arxiv.org\/pdf\/2511.05747\">Purdue University<\/a> and \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.09397\">OUGS: Active View Selection via Object-aware Uncertainty Estimation in 3DGS<\/a>\u201d by the <a href=\"https:\/\/arxiv.org\/pdf\/2511.09397\">University of Adelaide<\/a>, will undoubtedly unlock even more sophisticated capabilities. As models become more efficient, interpretable, and adaptable, semantic segmentation is poised to continue its trajectory as a pivotal technology for intelligent systems, shaping a future where machines perceive and interact with our world with unprecedented understanding.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 50 papers on semantic segmentation: Nov. 
16, 2025<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,63],"tags":[167,96,183,190,165,1595],"class_list":["post-1874","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-machine-learning","tag-domain-adaptation","tag-few-shot-learning","tag-object-detection","tag-remote-sensing","tag-semantic-segmentation","tag-main_tag_semantic_segmentation"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Semantic Segmentation: A Panorama of Progress from Pixels to Planets<\/title>\n<meta name=\"description\" content=\"Latest 50 papers on semantic segmentation: Nov. 16, 2025\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Semantic Segmentation: A Panorama of Progress from Pixels to Planets\" \/>\n<meta property=\"og:description\" content=\"Latest 50 papers on semantic segmentation: Nov. 
16, 2025\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-16T10:23:16+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-28T21:21:51+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/16\\\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/16\\\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Semantic Segmentation: A Panorama of Progress from Pixels to Planets\",\"datePublished\":\"2025-11-16T10:23:16+00:00\",\"dateModified\":\"2025-12-28T21:21:51+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/16\\\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\\\/\"},\"wordCount\":1248,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"domain adaptation\",\"few-shot learning\",\"object detection\",\"remote sensing\",\"semantic segmentation\",\"semantic segmentation\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Machine 
Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/16\\\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/16\\\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/16\\\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\\\/\",\"name\":\"Semantic Segmentation: A Panorama of Progress from Pixels to Planets\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2025-11-16T10:23:16+00:00\",\"dateModified\":\"2025-12-28T21:21:51+00:00\",\"description\":\"Latest 50 papers on semantic segmentation: Nov. 16, 2025\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/16\\\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/16\\\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/16\\\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Semantic Segmentation: A Panorama of Progress from Pixels to Planets\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest 
research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot 
is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Semantic Segmentation: A Panorama of Progress from Pixels to Planets","description":"Latest 50 papers on semantic segmentation: Nov. 16, 2025","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/","og_locale":"en_US","og_type":"article","og_title":"Semantic Segmentation: A Panorama of Progress from Pixels to Planets","og_description":"Latest 50 papers on semantic segmentation: Nov. 16, 2025","og_url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2025-11-16T10:23:16+00:00","article_modified_time":"2025-12-28T21:21:51+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Semantic Segmentation: A Panorama of Progress from Pixels to Planets","datePublished":"2025-11-16T10:23:16+00:00","dateModified":"2025-12-28T21:21:51+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/"},"wordCount":1248,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["domain adaptation","few-shot learning","object detection","remote sensing","semantic segmentation","semantic segmentation"],"articleSection":["Artificial Intelligence","Computer Vision","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/","url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/","name":"Semantic Segmentation: A Panorama of Progress from Pixels to Planets","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2025-11-16T10:23:16+00:00","dateModified":"2025-12-28T21:21:51+00:00","description":"Latest 50 papers on semantic segmentation: Nov. 
16, 2025","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/16\/semantic-segmentation-a-panorama-of-progress-from-pixels-to-planets\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Semantic Segmentation: A Panorama of Progress from Pixels to Planets"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"
@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. 
Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":35,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-ue","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1874","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=1874"}],"version-history":[{"count":1,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1874\/revisions"}],"predecessor-version":[{"id":3237,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1874\/revisions\/3237"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=1874"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=1874"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=1874"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}