{"id":5793,"date":"2026-02-21T03:51:34","date_gmt":"2026-02-21T03:51:34","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/"},"modified":"2026-02-21T03:51:34","modified_gmt":"2026-02-21T03:51:34","slug":"zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/","title":{"rendered":"Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!"},"content":{"rendered":"<h3>Latest 1 papers on zero-shot learning: Feb. 21, 2026<\/h3>\n<h2 id=\"zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\">Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!<\/h2>\n<p>Imagine an AI that can recognize objects it\u2019s never seen before, simply by understanding their descriptions. This isn\u2019t science fiction anymore; it\u2019s the exciting frontier of <strong>zero-shot learning (ZSL)<\/strong>. In a world awash with raw data but short on labels, the ability of AI to generalize to novel categories without explicit training examples is a game-changer, directly addressing the perennial problem of labeled-data scarcity. Recent breakthroughs are pushing the boundaries of what\u2019s possible, and we\u2019re diving into some of the most compelling advancements that promise to reshape how we build intelligent systems.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>The central challenge in ZSL lies in bridging the semantic gap between seen and unseen classes. How can a model leverage its understanding of existing categories to infer knowledge about entirely new ones? 
A key theme emerging from recent research is the profound impact of <strong>visual-semantic correlation<\/strong>. The paper \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.12401\">ZeroDiff++: Substantial Unseen Visual-semantic Correlation in Zero-shot Learning<\/a>\u201d, by Chen Li, Zhang Wei, and Wang Jun of the University of Technology, the Research Institute on AI, and the National Lab for Machine Learning, proposes ZeroDiff++, a novel framework that significantly enhances ZSL performance. Their core insight is that by explicitly incorporating <em>substantial unseen visual-semantic correlation<\/em>, models can better capture the underlying relationships between visual features and class labels, even for categories never encountered during training. This approach leverages the richness of semantic information to inform visual understanding, leading to more robust generalization across diverse datasets.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>Advancements in ZSL are often fueled by innovative model architectures, specialized datasets, and rigorous benchmarks that push the limits of performance. Here\u2019s a look at the resources driving these breakthroughs:<\/p>\n<ul>\n<li><strong>ZeroDiff++ Framework:<\/strong> As introduced in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.12401\">ZeroDiff++: Substantial Unseen Visual-semantic Correlation in Zero-shot Learning<\/a>\u201d, this framework is designed to directly incorporate unseen visual-semantic correlations, and its generalizability has been demonstrated across multiple benchmark datasets and ZSL tasks.<\/li>\n<li><strong>Benchmark Datasets:<\/strong> The effectiveness of ZSL models is rigorously tested on standard benchmark datasets, which often include a mix of seen and unseen classes with varying degrees of visual and semantic similarity. 
The ZeroDiff++ framework has shown promising results across several such datasets, indicating its broad applicability.<\/li>\n<\/ul>\n<p>While specific new datasets or models beyond the ZeroDiff++ framework aren\u2019t detailed in the provided summaries, the emphasis on robust evaluation across existing benchmarks underscores the practical impact of these innovations.<\/p>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>The implications of these advancements are far-reaching. By significantly improving ZSL performance, particularly on unseen classes, ZeroDiff++ and similar approaches pave the way for more adaptable and robust AI systems. Imagine autonomous vehicles that can identify unexpected obstacles, medical diagnostic tools that can flag rare conditions, or recommendation systems that can suggest truly novel products \u2013 all without being explicitly trained on those specific instances. This research brings us closer to a future where AI can truly learn and adapt in dynamic, real-world environments.<\/p>\n<p>Looking ahead, the exploration of even more sophisticated ways to model and leverage visual-semantic correlations will be crucial. Further research might delve into how to <em>automatically discover<\/em> strong visual-semantic connections or how to handle increasingly abstract and complex semantic descriptions. The continued drive to enhance generalization capabilities promises to unlock new applications and push the boundaries of what AI can achieve, making the future of machine intelligence incredibly exciting and full of potential. The ability to learn from the unseen is no longer a distant dream, but a rapidly evolving reality.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 1 papers on zero-shot learning: Feb. 
21, 2026<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[55],"tags":[2913,2912,2911,2910,287,1593],"class_list":["post-5793","post","type-post","status-publish","format-standard","hentry","category-computer-vision","tag-benchmark-datasets","tag-framework","tag-unseen-classes","tag-visual-semantic-correlation","tag-zero-shot-learning","tag-main_tag_zero-shot_learning"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!<\/title>\n<meta name=\"description\" content=\"Latest 1 papers on zero-shot learning: Feb. 21, 2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!\" \/>\n<meta property=\"og:description\" content=\"Latest 1 papers on zero-shot learning: Feb. 
21, 2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-21T03:51:34+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!\",\"datePublished\":\"2026-02-21T03:51:34+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\\\/\"},\"wordCount\":602,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"benchmark datasets\",\"framework\",\"unseen classes\",\"visual-semantic correlation\",\"zero-shot learning\",\"zero-shot learning\"],\"articleSection\":[\"Computer 
Vision\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\\\/\",\"name\":\"Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2026-02-21T03:51:34+00:00\",\"description\":\"Latest 1 papers on zero-shot learning: Feb. 21, 2026\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and 
Beyond!\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem 
Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!","description":"Latest 1 papers on zero-shot learning: Feb. 21, 2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/","og_locale":"en_US","og_type":"article","og_title":"Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!","og_description":"Latest 1 papers on zero-shot learning: Feb. 
21, 2026","og_url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2026-02-21T03:51:34+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!","datePublished":"2026-02-21T03:51:34+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/"},"wordCount":602,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["benchmark datasets","framework","unseen classes","visual-semantic correlation","zero-shot learning","zero-shot learning"],"articleSection":["Computer 
Vision"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/","url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/","name":"Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2026-02-21T03:51:34+00:00","description":"Latest 1 papers on zero-shot learning: Feb. 21, 2026","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/zero-shot-learning-unlocked-unveiling-unseen-visual-semantic-correlations-and-beyond\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Zero-Shot Learning Unlocked: Unveiling Unseen Visual-Semantic Correlations and Beyond!"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest 
research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. 
Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":67,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-1vr","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5793","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=5793"}],"version-history":[{"count":0,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5793\/revisions"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=5793"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=5793"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=5793"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}