{"id":1978,"date":"2025-11-23T08:16:20","date_gmt":"2025-11-23T08:16:20","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/"},"modified":"2025-12-28T21:18:11","modified_gmt":"2025-12-28T21:18:11","slug":"feature-extraction-from-quantum-sensors-to-semantic-insights","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/","title":{"rendered":"Feature Extraction: From Quantum Sensors to Semantic Insights"},"content":{"rendered":"<h3>Latest 50 papers on feature extraction: Nov. 23, 2025<\/h3>\n<p>The landscape of AI\/ML is constantly evolving, with recent breakthroughs pushing the boundaries of what\u2019s possible in diverse fields from robotics to medical diagnostics. At the heart of many of these advancements lies sophisticated feature extraction\u2014the art and science of identifying meaningful patterns in data that enable models to learn and predict with uncanny accuracy. This blog post dives into a collection of recent research papers, unveiling novel approaches to feature extraction that promise to reshape how we interact with and understand complex data.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>Many of these papers address the fundamental challenge of extracting robust and interpretable features from increasingly complex and diverse data types. A common thread is the move towards more specialized and context-aware feature extraction, often integrating domain-specific knowledge or hybrid architectures.<\/p>\n<p>In the realm of <strong>multimodal and contextual understanding<\/strong>, we see significant progress. 
The paper, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.10935\">CAT-Net: A Cross-Attention Tone Network for Cross-Subject EEG-EMG Fusion Tone Decoding<\/a>\u201d by Yifan Zhuang et al.\u00a0from Sony Interactive Entertainment and others, introduces a cross-attention mechanism for EEG-EMG fusion, crucial for tone classification even in silent speech. This sophisticated interaction between modalities allows for capturing nuanced neural-muscular coordination, a key insight for practical Brain-Computer Interface (BCI) applications. Similarly, the survey \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2312.05735\">A Comprehensive Survey on Multi-modal Conversational Emotion Recognition with Deep Learning<\/a>\u201d by Yuntao Shou et al., underscores the importance of integrating textual, audio, and visual modalities for robust emotion recognition, highlighting how multimodal feature spaces offer better inter-class separation for subtle emotions.<\/p>\n<p><strong>Leveraging prior knowledge and interpretability<\/strong> is another major theme. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.11750\">IDOL: Meeting Diverse Distribution Shifts with Prior Physics for Tropical Cyclone Multi-Task Estimation<\/a>\u201d by Hanting Yan et al.\u00a0from Zhejiang University of Technology, proposes a framework that uses prior physical knowledge to learn invariant features, crucial for robust tropical cyclone estimation under distribution shifts. For enhancing interpretability, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.12880\">Simple Lines, Big Ideas: Towards Interpretable Assessment of Human Creativity from Drawings<\/a>\u201d by Zihao Lin et al.\u00a0from South China Normal University, decomposes drawings into content and style components, providing interpretable creativity assessments. 
This decomposition allows models to dynamically adapt to different drawing styles and content types, offering a more nuanced understanding of creative output.<\/p>\n<p><strong>Addressing resource constraints and data scarcity<\/strong> is paramount. \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.12528\">D<span class=\"math inline\"><sup>2<\/sup><\/span>-VPR: A Parameter-efficient Visual-foundation-model-based Visual Place Recognition Method via Knowledge Distillation and Deformable Aggregation<\/a>\u201d by Zheyuan Zhang et al.\u00a0from Beijing University of Posts and Telecommunications, introduces a parameter-efficient visual place recognition method. It achieves significant reductions in parameters and FLOPs while maintaining performance, vital for deploying large foundation models on edge devices. For medical applications with scarce labels, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.10432\">Histology-informed tiling of whole tissue sections improves the interpretability and predictability of cancer relapse and genetic alterations<\/a>\u201d by Willem Bonnaff\u00e9 et al.\u00a0from the University of Oxford, uses semantic segmentation to extract biologically meaningful patches, improving cancer relapse prediction and interpretability by focusing on glandular structures.<\/p>\n<p>Finally, <strong>specialized architectures and quantum advancements<\/strong> are emerging. The \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.08349\">Hybrid Quantum-Classical Selective State Space Artificial Intelligence<\/a>\u201d paper by Amin Ebrahimi and Farzan Haddadi from Iran University of Science &amp; Technology, proposes a hybrid quantum-classical selection mechanism for the Mamba architecture, using Variational Quantum Circuits (VQCs) to enhance feature extraction and improve information suppression in deep learning models. 
This groundbreaking work shows how quantum gating can boost model efficiency and performance in NLP tasks.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>Recent research highlights a drive towards more efficient, accurate, and robust models, often facilitated by novel datasets and specialized benchmarks. Here\u2019s a glimpse into the key resources enabling these innovations:<\/p>\n<ul>\n<li><strong>For Robust Feature Learning &amp; Localization:<\/strong>\n<ul>\n<li>The novel \u201c<a href=\"https:\/\/mertcookimg.github.io\/bi-aqua\">Bi-AQUA: Bilateral Control-Based Imitation Learning for Underwater Robot Arms via Lighting-Aware Action Chunking with Transformers<\/a>\u201d from Mert Cook et al.\u00a0(University of Tokyo) introduces a transformer-based framework for precise, force-sensitive underwater robotic manipulation, robust to dynamic lighting conditions.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.15077\">MambaTrack3D: A State Space Model Framework for LiDAR-Based Object Tracking under High Temporal Variation<\/a>\u201d by Y. Xia et al., utilizes state space models for robust LiDAR-based object tracking, reducing reliance on traditional detection modules. <\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.14076\">Meta-SimGNN: Adaptive and Robust WiFi Localization Across Dynamic Configurations and Diverse Scenarios<\/a>\u201d hints at a meta-learning Graph Neural Network (GNN) approach for adaptive WiFi localization.<\/li>\n<\/ul>\n<\/li>\n<li><strong>For Efficient Image\/Signal Processing:<\/strong>\n<ul>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.15692\">SS-MixNet: Hyperspectral Image Classification using Spectral-Spatial Mixer Network<\/a>\u201d by Mohammed Q. 
Alkhatib (University of Dubai) introduces a lightweight deep learning model combining 3D convolutions with MLP-style mixer blocks for hyperspectral image classification, achieving SOTA on QUH-Tangdaowan and QUH-Qingyun datasets with only 1% labeled data. Code is available at <a href=\"https:\/\/github.com\/mqalkhatib\/SS-MixNet\">https:\/\/github.com\/mqalkhatib\/SS-MixNet<\/a>.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.13607\">ICLR: Inter-Chrominance and Luminance Interaction for Natural Color Restoration in Low-Light Image Enhancement<\/a>\u201d by Xin Xu et al.\u00a0(Wuhan University of Science and Technology) introduces a Dual-stream Interaction Enhancement Module (DIEM) and Covariance Correction Loss (CCL) to address color restoration in low-light images.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.13110\">NeDR-Dehaze: Learning Implicit Neural Degradation Representation for Unpaired Image Dehazing<\/a>\u201d by Shuaibin Fan et al.\u00a0(Chongqing University of Technology) uses implicit neural representations and a KAN-CID mechanism for unsupervised image dehazing. Code is at <a href=\"https:\/\/github.com\/Fan-pixel\/NeDR-Dehaze\">https:\/\/github.com\/Fan-pixel\/NeDR-Dehaze<\/a>.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.13115\">A Lightweight 3D Anomaly Detection Method with Rotationally Invariant Features<\/a>\u201d by Hanzhe Liang et al.\u00a0(Shenzhen University) proposes the RIF framework, Point Coordinate Mapping (PCM), and CTF-Net for robust 3D anomaly detection on datasets like Anomaly-ShapeNet and Real3D-AD. 
Code: <a href=\"https:\/\/github.com\/hzzzzzhappy\/RIF\">https:\/\/github.com\/hzzzzzhappy\/RIF<\/a>.<\/li>\n<\/ul>\n<\/li>\n<li><strong>For Enhanced Interpretability &amp; Automation:<\/strong>\n<ul>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.15074\">Rogue One: Knowledge-Informed Automatic Feature Extraction via Collaborative Large Language Model Agents<\/a>\u201d by Henrik Br\u00e5dland et al.\u00a0(University of Pittsburgh, University of Agder) presents a multi-agent LLM framework for automated, interpretable feature extraction, outperforming SOTA on 19 classification and 9 regression datasets. Code: <a href=\"https:\/\/github.com\/henrikbradland\/Rogue-One-Codebase\">https:\/\/github.com\/henrikbradland\/Rogue-One-Codebase<\/a>.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.15389\">Unveiling Inference Scaling for Difference-Aware User Modeling in LLM Personalization<\/a>\u201d by Suyu Chen et al.\u00a0(University of Science and Technology of China) introduces DRP, a framework enhancing LLM personalization using inference scaling for deeper analysis of user differences. 
Code: <a href=\"https:\/\/github.com\/ustc-NLP\/DRP\">https:\/\/github.com\/ustc-NLP\/DRP<\/a>.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.10394\">LLM-YOLOMS: Large Language Model-based Semantic Interpretation and Fault Diagnosis for Wind Turbine Components<\/a>\u201d by Yaru Li et al.\u00a0(Beijing University of Civil Engineering and Architecture) integrates YOLOMS with LLMs for wind turbine fault diagnosis, enhancing interpretability.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.09947\">EEGAgent: A Unified Framework for Automated EEG Analysis Using Large Language Models<\/a>\u201d by Sha Zhao et al.\u00a0(Zhejiang University) proposes the first agent-based LLM framework for unified, multi-task EEG analysis.<\/li>\n<\/ul>\n<\/li>\n<li><strong>For Medical &amp; Scientific Applications:<\/strong>\n<ul>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.08663\">3D-TDA: Topological feature extraction from 3D images for Alzheimer\u2019s disease classification<\/a>\u201d by Faisal Ahmed et al.\u00a0(University of Texas at Dallas) uses persistent homology for 3D MRI topological feature extraction, achieving high accuracy in AD diagnosis.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.12386\">Leveraging Quantum-Based Architectures for Robust Diagnostics<\/a>\u201d by Shabnam Sodagari and Tommy Long (California State University, Long Beach) uses a hybrid ResNet50-QCNN for kidney CT image classification, demonstrating 99% test accuracy with a 12-qubit QCNN.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.11032\">MPCGNet: A Multiscale Feature Extraction and Progressive Feature Aggregation Network Using Coupling Gates for Polyp Segmentation<\/a>\u201d introduces a novel network architecture for accurate polyp segmentation using multiscale feature extraction and progressive aggregation.
<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.13752\">Motor Imagery Classification Using Feature Fusion of Spatially Weighted Electroencephalography<\/a>\u201d proposes spatial weighting and feature fusion in EEG for improved motor imagery classification on BCI competition datasets.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.15151\">DCL-SE: Dynamic Curriculum Learning for Spatiotemporal Encoding of Brain Imaging<\/a>\u201d introduces Dynamic Curriculum Learning for spatiotemporal encoding in brain imaging, improving accuracy and efficiency on neuroimaging datasets.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Other Noteworthy Contributions:<\/strong>\n<ul>\n<li>\u201c<a href=\"https:\/\/peilinwu.site\/looping-sim-and-real.github.io\/\">LoopSR: Looping Sim-and-Real for Lifelong Policy Adaptation of Legged Robots<\/a>\u201d by Peilin Wu and Xiaoxuan Zhang (Peking University) enhances lifelong policy adaptation for legged robots with superior data efficiency.
Project page: <a href=\"https:\/\/peilinwu.site\/looping-sim-and-real.github.io\/\">https:\/\/peilinwu.site\/looping-sim-and-real.github.io\/<\/a>.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.11700\">EPSegFZ: Efficient Point Cloud Semantic Segmentation for Few- and Zero-Shot Scenarios with Language Guidance<\/a>\u201d by Jiahui Wang et al.\u00a0(National University of Singapore) achieves SOTA in 3D few- and zero-shot semantic segmentation without pre-training.<\/li>\n<li>\u201c<a href=\"https:\/\/arxiv.org\/pdf\/2511.13904\">SAE-MCVT: A Real-Time and Scalable Multi-Camera Vehicle Tracking Framework Powered by Edge Computing<\/a>\u201d by Yuqiang Lin et al.\u00a0(University of Bath, Starwit Technologies GmbH) introduces the first real-time, city-scale multi-camera vehicle tracking system, alongside the RoundaboutHD dataset. Code for BoxMOT: <a href=\"https:\/\/github.com\/mikel-brostrom\/boxmot\">https:\/\/github.com\/mikel-brostrom\/boxmot<\/a> and SAE-Engine: <a href=\"https:\/\/github.com\/starwit\/starwit-awareness-engine\">https:\/\/github.com\/starwit\/starwit-awareness-engine<\/a>.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>These advancements in feature extraction are poised to have a profound impact across numerous AI\/ML domains. The ability to automatically generate interpretable features (as seen in <a href=\"https:\/\/arxiv.org\/pdf\/2511.15074\">Rogue One<\/a>) will make AI systems more transparent and trustworthy, especially in critical applications like medical diagnostics (<a href=\"https:\/\/arxiv.org\/pdf\/2511.10432\">Histology-informed tiling<\/a>, <a href=\"https:\/\/arxiv.org\/pdf\/2511.08663\">3D-TDA<\/a>) and smart contract security (<a href=\"https:\/\/arxiv.org\/pdf\/2511.11411\">SCRUTINEER<\/a>). 
The push for parameter-efficient models (<a href=\"https:\/\/arxiv.org\/pdf\/2511.12528\">D<span class=\"math inline\"><sup>2<\/sup><\/span>-VPR<\/a>, <a href=\"https:\/\/arxiv.org\/pdf\/2511.14322\">LSP-YOLO<\/a>) signals a future where sophisticated AI can be deployed on resource-constrained edge devices, democratizing access to powerful intelligence in real-time. This is particularly exciting for autonomous systems and smart cities, enabling real-time decision-making without constant cloud connectivity.<\/p>\n<p>The integration of quantum computing (<a href=\"https:\/\/arxiv.org\/pdf\/2511.08349\">Hybrid Quantum-Classical Selective State Space Artificial Intelligence<\/a>) opens up tantalizing possibilities for supercharging feature extraction with capabilities beyond classical computation, potentially unlocking new frontiers in complex problem-solving. Furthermore, the explicit modeling of domain-specific knowledge, whether physics-based (<a href=\"https:\/\/arxiv.org\/pdf\/2511.11750\">IDOL<\/a>, <a href=\"https:\/\/arxiv.org\/pdf\/2511.15204\">Physics-Based Benchmarking Metrics<\/a>) or human-interpretable concepts (<a href=\"https:\/\/arxiv.org\/pdf\/2511.12880\">Simple Lines, Big Ideas<\/a>), promises to build more robust and generalizable AI. The future of feature extraction will likely involve an even deeper synthesis of AI techniques with domain expertise, creating intelligent systems that are not just accurate, but also insightful, adaptable, and deployable everywhere.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 50 papers on feature extraction: Nov. 
23, 2025<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,55,63],"tags":[410,1623,139,1142,1143,943],"class_list":["post-1978","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computer-vision","category-machine-learning","tag-feature-extraction","tag-main_tag_feature_extraction","tag-graph-neural-networks","tag-hyperspectral-image-classification","tag-point-cloud-processing","tag-state-space-models"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Feature Extraction: From Quantum Sensors to Semantic Insights<\/title>\n<meta name=\"description\" content=\"Latest 50 papers on feature extraction: Nov. 23, 2025\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Feature Extraction: From Quantum Sensors to Semantic Insights\" \/>\n<meta property=\"og:description\" content=\"Latest 50 papers on feature extraction: Nov. 
23, 2025\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-23T08:16:20+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-28T21:18:11+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/feature-extraction-from-quantum-sensors-to-semantic-insights\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/feature-extraction-from-quantum-sensors-to-semantic-insights\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Feature Extraction: From Quantum Sensors to Semantic Insights\",\"datePublished\":\"2025-11-23T08:16:20+00:00\",\"dateModified\":\"2025-12-28T21:18:11+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/feature-extraction-from-quantum-sensors-to-semantic-insights\\\/\"},\"wordCount\":1559,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"feature extraction\",\"feature extraction\",\"graph neural networks\",\"hyperspectral image classification\",\"point cloud processing\",\"state space models\"],\"articleSection\":[\"Artificial Intelligence\",\"Computer Vision\",\"Machine Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/feature-extraction-from-quantum-sensors-to-semantic-insights\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/feature-extraction-from-quantum-sensors-to-semantic-insights\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/feature-extraction-from-quantum-sensors-to-semantic-insights\\\/\",\"name\":\"Feature Extraction: 
From Quantum Sensors to Semantic Insights\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2025-11-23T08:16:20+00:00\",\"dateModified\":\"2025-12-28T21:18:11+00:00\",\"description\":\"Latest 50 papers on feature extraction: Nov. 23, 2025\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/feature-extraction-from-quantum-sensors-to-semantic-insights\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/feature-extraction-from-quantum-sensors-to-semantic-insights\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/feature-extraction-from-quantum-sensors-to-semantic-insights\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Feature Extraction: From Quantum Sensors to Semantic Insights\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest 
research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot 
is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Feature Extraction: From Quantum Sensors to Semantic Insights","description":"Latest 50 papers on feature extraction: Nov. 23, 2025","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/","og_locale":"en_US","og_type":"article","og_title":"Feature Extraction: From Quantum Sensors to Semantic Insights","og_description":"Latest 50 papers on feature extraction: Nov. 23, 2025","og_url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2025-11-23T08:16:20+00:00","article_modified_time":"2025-12-28T21:18:11+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Feature Extraction: From Quantum Sensors to Semantic Insights","datePublished":"2025-11-23T08:16:20+00:00","dateModified":"2025-12-28T21:18:11+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/"},"wordCount":1559,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["feature extraction","feature extraction","graph neural networks","hyperspectral image classification","point cloud processing","state space models"],"articleSection":["Artificial Intelligence","Computer Vision","Machine Learning"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/","url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/","name":"Feature Extraction: From Quantum Sensors to Semantic Insights","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2025-11-23T08:16:20+00:00","dateModified":"2025-12-28T21:18:11+00:00","description":"Latest 50 papers on feature extraction: Nov. 
23, 2025","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/feature-extraction-from-quantum-sensors-to-semantic-insights\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Feature Extraction: From Quantum Sensors to Semantic Insights"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https
:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. 
Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":50,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-vU","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1978","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=1978"}],"version-history":[{"count":1,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1978\/revisions"}],"predecessor-version":[{"id":3197,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1978\/revisions\/3197"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=1978"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=1978"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=1978"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}