{"id":1985,"date":"2025-11-23T08:21:05","date_gmt":"2025-11-23T08:21:05","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/"},"modified":"2025-12-28T21:17:30","modified_gmt":"2025-12-28T21:17:30","slug":"physics-informed-neural-networks-unlocking-next-gen-scientific-discovery","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/","title":{"rendered":"Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery"},"content":{"rendered":"<h3>Latest 50 papers on physics-informed neural networks: Nov. 23, 2025<\/h3>\n<p>Physics-Informed Neural Networks (PINNs) are revolutionizing how we approach scientific computing, blending the power of deep learning with the fundamental laws of physics. They promise to solve complex partial differential equations (PDEs), predict system dynamics, and enable real-time optimization across diverse fields, from medicine to manufacturing. Recent research continues to push the boundaries of what PINNs can achieve, tackling long-standing challenges like accuracy, efficiency, and robustness. This post dives into the latest breakthroughs from a collection of cutting-edge papers, revealing how these innovations are shaping the future of AI-driven scientific discovery.<\/p>\n<h2 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h2>\n<p>One of the most exciting trends is the focus on enhancing PINN accuracy and stability, particularly for complex and high-dimensional problems. 
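<\/p>
<p>Each of these innovations is best understood against the baseline recipe it modifies: a PINN minimizes a composite loss that combines the PDE residual at collocation points with boundary-condition error. The sketch below shows that objective for a toy 1D Poisson problem; the architecture, sampling, and hyperparameters are illustrative choices, not taken from any of the papers surveyed here.<\/p>

```python
import torch

# Baseline PINN sketch for u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
# All choices (network size, collocation counts, optimizer) are illustrative.
torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def f(x):
    # Manufactured source term whose exact solution is u(x) = sin(pi * x).
    return -(torch.pi ** 2) * torch.sin(torch.pi * x)

def pinn_loss(net):
    x = torch.rand(128, 1, requires_grad=True)   # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde_term = (d2u - f(x)).pow(2).mean()        # physics residual
    xb = torch.tensor([[0.0], [1.0]])            # boundary points
    bc_term = net(xb).pow(2).mean()              # boundary-condition error
    return pde_term + bc_term

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
losses = []
for _ in range(500):
    opt.zero_grad()
    loss = pinn_loss(net)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

<p>Most of the work below intervenes on exactly one piece of this loop: how the loss terms are weighted, which features the network sees, or how the domain over which the residual is enforced gets decomposed.<\/p>
<p>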
For instance, <a href=\"https:\/\/arxiv.org\/pdf\/2511.14348\">\u201cEnforcing hidden physics in physics-informed neural networks\u201d<\/a> by Chen et al.\u00a0(Tongji University, Yale University, University of Oxford) introduces an <em>irreversibility-regularized approach<\/em> that drastically improves accuracy by incorporating fundamental physical principles, like the Second Law of Thermodynamics, directly into the training process. This moves beyond simply fitting data to enforcing underlying physical consistency.<\/p>\n<p>Another significant thrust involves tackling the \u201ccurse of dimensionality\u201d and spectral bias, a known challenge where PINNs struggle with high-frequency components or complex geometries. <a href=\"https:\/\/arxiv.org\/pdf\/2511.12055\">\u201cFG-PINNs: A neural network method for solving nonhomogeneous PDEs with high frequency components\u201d<\/a> by J. Zheng et al.\u00a0(Xiangtan University) proposes a <em>dual subnetwork architecture<\/em> to handle high- and low-frequency components separately, demonstrating superior convergence. Similarly, <a href=\"https:\/\/arxiv.org\/pdf\/2510.19399\">\u201cIterative Training of Physics-Informed Neural Networks with Fourier-enhanced Features\u201d<\/a> by Wu et al.\u00a0(KTH Royal Institute of Technology) introduces <em>IFeF-PINN<\/em>, which uses Random Fourier Features to explicitly inject high-frequency information, significantly mitigating spectral bias. The theoretical underpinnings are further explored in <a href=\"https:\/\/arxiv.org\/pdf\/2510.27658\">\u201cWhat Can One Expect When Solving PDEs Using Shallow Neural Networks?\u201d<\/a> by He et al.\u00a0(City University of Hong Kong, Duke University), which analyzes the impact of activation functions on frequency bias in shallow networks.<\/p>\n<p>Domain decomposition is also emerging as a powerful strategy to scale PINNs for complex problems. 
In <a href=\"https:\/\/arxiv.org\/pdf\/2511.15445\">\u201cNeural network-driven domain decomposition for efficient solutions to the Helmholtz equation\u201d<\/a>, Dolean et al.\u00a0(Eindhoven University of Technology, Inria) present <em>FBPINNs<\/em> (Finite Basis PINNs) combined with Perfectly Matched Layers (PML) for solving the Helmholtz equation more accurately, especially at high frequencies. This concept extends to other architectures with <a href=\"https:\/\/arxiv.org\/pdf\/2406.19662\">\u201cFinite basis Kolmogorov-Arnold networks: domain decomposition for data-driven and physics-informed problems\u201d<\/a> by Howard et al.\u00a0(Pacific Northwest National Laboratory), which uses partition-of-unity functions to combine smaller KAN models, improving accuracy in multiscale and noisy scenarios. Building on this, <a href=\"https:\/\/arxiv.org\/pdf\/2511.11228\">\u201cThe modified Physics-Informed Hybrid Parallel Kolmogorov\u2013Arnold and Multilayer Perceptron Architecture with domain decomposition\u201d<\/a> by Huang et al.\u00a0(Beijing University of Technology) introduces <em>HPKM-PINN<\/em>, a hybrid KAN-MLP architecture with overlapping domain decomposition for high-frequency and multiscale PDEs, demonstrating improved efficiency.<\/p>\n<p>Beyond accuracy and scalability, efficiency and uncertainty quantification are crucial. <a href=\"https:\/\/arxiv.org\/pdf\/2511.15530v1\">\u201cConvergence and Sketching-Based Efficient Computation of Neural Tangent Kernel Weights in Physics-Based Loss\u201d<\/a> by Hirsch and Pichi (University of California, Berkeley, SISSA) shows how <em>adaptive Neural Tangent Kernel (NTK) weights<\/em> can converge and proposes a randomized sketching algorithm for efficient computation. 
For uncertainty, <a href=\"https:\/\/arxiv.org\/pdf\/2503.19333\">\u201cE-PINNs: Epistemic Physics-Informed Neural Networks\u201d<\/a> by Jacob et al.\u00a0(Pacific Northwest National Laboratory, University of Notre Dame) introduces <em>E-PINNs<\/em>, an efficient framework for quantifying epistemic uncertainty at a significantly lower computational cost than traditional Bayesian methods, making them more practical for real-world applications. The <a href=\"https:\/\/arxiv.org\/pdf\/2510.26121\">Physics-Informed Log Evidence (PILE) score<\/a> by Daniels et al.\u00a0(MIT, University of Melbourne, University of California at Berkeley) provides an uncertainty-aware metric for hyperparameter selection and model diagnostics, even in data-free scenarios.<\/p>\n<p>Several papers also delve into novel applications and specific problem types. <a href=\"https:\/\/arxiv.org\/pdf\/2511.15543\">\u201cA Physics Informed Machine Learning Framework for Optimal Sensor Placement and Parameter Estimation\u201d<\/a> by Venianakis et al.\u00a0(National Technical University of Athens, University of Manchester) innovates with a PINN framework integrating D-optimal sensor placement for improved parameter estimation. For fast control of robots, <a href=\"https:\/\/arxiv.org\/pdf\/2502.01916\">\u201cGeneralizable and Fast Surrogates: Model Predictive Control of Articulated Soft Robots using Physics-Informed Neural Networks\u201d<\/a> demonstrates PINN surrogates fast and general enough to run inside a model predictive control loop for articulated soft robots. In medical imaging, <a href=\"https:\/\/arxiv.org\/pdf\/2511.11048\">\u201cPINGS-X: Physics-Informed Normalized Gaussian Splatting with Axes Alignment for Efficient Super-Resolution of 4D Flow MRI\u201d<\/a> by Jo et al.\u00a0(Hanyang University, Nanyang Technological University) uses <em>Normalized Gaussian Splatting<\/em> for efficient super-resolution of 4D flow MRI, bridging explicit representations with physics-informed learning. 
Similarly, <a href=\"https:\/\/arxiv.org\/pdf\/2511.03876v1\">\u201cComputed Tomography (CT)-derived Cardiovascular Flow Estimation Using Physics-Informed Neural Networks Improves with Sinogram-based Training: A Simulation Study\u201d<\/a> by Guo et al.\u00a0(University of California San Diego) introduces <em>SinoFlow<\/em>, a sinogram-based PINN framework that bypasses image reconstruction errors for more accurate cardiovascular flow estimation.<\/p>\n<h2 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h2>\n<p>The recent surge in PINN research has introduced and refined several key models and methodologies:<\/p>\n<ul>\n<li><strong>FG-PINNs<\/strong> (J. Zheng et al., [https:\/\/arxiv.org\/pdf\/2511.12055]): A dual network architecture for nonhomogeneous PDEs, specifically addressing high-frequency components by separating their learning from low-frequency ones. It uses frequency-guided training to leverage source terms and boundary conditions.<\/li>\n<li><strong>FBPINNs (Finite Basis PINNs)<\/strong> (Dolean et al., [https:\/\/arxiv.org\/pdf\/2511.15445]): An extension of PINNs for Helmholtz equations, incorporating domain decomposition and Perfectly Matched Layers (PML) for enhanced accuracy. 
Benefits from <strong>Energy Natural Gradient Descent (ENGD)<\/strong> optimization.<\/li>\n<li><strong>HPKM-PINN (Hybrid Parallel Kolmogorov\u2013Arnold Network and Multilayer Perceptron PINN)<\/strong> (Huang et al., [https:\/\/arxiv.org\/pdf\/2511.11228]): A hybrid KAN and MLP architecture combined with overlapping domain decomposition and trainable weighting parameters to handle high-frequency and multiscale PDEs efficiently.<\/li>\n<li><strong>IFeF-PINN (Iterative Fourier-enhanced Features PINN)<\/strong> (Wu et al., [https:\/\/arxiv.org\/pdf\/2510.19399]): Mitigates spectral bias using Random Fourier Features in an iterative two-stage training algorithm, improving high-frequency PDE approximation.<\/li>\n<li><strong>E-PINNs (Epistemic PINNs)<\/strong> (Jacob et al., [https:\/\/arxiv.org\/pdf\/2503.19333]): Integrates a small \u2018epinet\u2019 into PINNs for efficient epistemic uncertainty quantification, providing calibrated estimates with low computational overhead.<\/li>\n<li><strong>PINN-ACS (PINN Alternating Convex Search)<\/strong> (Banderwaar &amp; Gupta, [https:\/\/arxiv.org\/pdf\/2511.00792]): Reformulates differential eigenvalue problems as biconvex optimization, using alternating convex search for up to 500x speedups. 
Code available at <a href=\"https:\/\/github.com\/NeurIPS-ML4PS-2025\/PINN_ACS_CODES\">https:\/\/github.com\/NeurIPS-ML4PS-2025\/PINN_ACS_CODES<\/a>.<\/li>\n<li><strong>PINGS-X (Physics-Informed Normalized Gaussian Splatting with Axes Alignment)<\/strong> (Jo et al., [https:\/\/arxiv.org\/pdf\/2511.11048]): Utilizes normalized Gaussian splatting and axes-aligned representations for efficient super-resolution of 4D flow MRI data, with code at <a href=\"https:\/\/github.com\/SpatialAILab\/PINGS-X\">https:\/\/github.com\/SpatialAILab\/PINGS-X<\/a>.<\/li>\n<li><strong>SinoFlow<\/strong> (Guo et al., [https:\/\/arxiv.org\/pdf\/2511.03876v1]): A PINN framework for CT-derived cardiovascular flow estimation, trained directly on sinograms to avoid image reconstruction errors.<\/li>\n<li><strong>PINN-Proj<\/strong> (Baez et al., [https:\/\/arxiv.org\/pdf\/2511.09048]): A projection method that guarantees conservation of integral quantities (linear and quadratic) in PINNs by enforcing hard constraints via constrained non-linear optimization. Code at <a href=\"https:\/\/github.com\/antbaez9\/pinn-proj\">github.com\/antbaez9\/pinn-proj<\/a>.<\/li>\n<li><strong>HEATNETs<\/strong> (Georgiou et al., [https:\/\/arxiv.org\/pdf\/2511.00886]): Explainable random feature neural networks for high-dimensional parabolic PDEs, leveraging randomized heat-kernels for accuracy up to 2000 dimensions.<\/li>\n<li><strong>SSTODE (Sea Surface Temperature Neural ODE)<\/strong> (Jiang et al., [https:\/\/arxiv.org\/pdf\/2511.05629]): A physics-informed Neural ODE framework for SST prediction that models coupled advection-diffusion processes and incorporates surface heat fluxes via an Energy Exchanges Integrator (EEI). 
Code at <a href=\"https:\/\/github.com\/nicezheng\/SSTODE-code\">https:\/\/github.com\/nicezheng\/SSTODE-code<\/a>.<\/li>\n<li><strong>XPINN (Extended PINN)<\/strong> (Rehman &amp; Yousuf, [https:\/\/arxiv.org\/pdf\/2511.13734]): For hyperbolic PDEs like the Buckley-Leverett equation, it dynamically partitions the computational domain into sub-networks and uses Rankine-Hugoniot jump conditions for coupling. Code at <a href=\"https:\/\/github.com\/saifkhanengr\/XPINN-for-Buckley-Leverett\">github.com\/saifkhanengr\/XPINN-for-Buckley-Leverett<\/a>.<\/li>\n<\/ul>\n<p>Many of these advancements also highlight the importance of <strong>adaptive sampling<\/strong> and <strong>weighting strategies<\/strong>, as seen in <a href=\"https:\/\/arxiv.org\/pdf\/2511.05452\">\u201cSelf-adaptive weighting and sampling for physics-informed neural networks\u201d<\/a> by Chen et al.\u00a0(Pacific Northwest National Laboratory) and <a href=\"https:\/\/arxiv.org\/pdf\/2510.24026\">\u201cEfficient Global-Local Fusion Sampling for Physics-Informed Neural Networks\u201d<\/a> by Luo et al.\u00a0(Soochow University, Duke Kunshan University). The latter\u2019s <em>Global\u2013Local Fusion (GLF)<\/em> combines residual-adaptive sampling with lightweight approximations for superior accuracy and efficiency.<\/p>\n<h2 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h2>\n<p>These advancements in PINNs are poised to profoundly impact various sectors. In <strong>semiconductor manufacturing<\/strong>, <a href=\"https:\/\/arxiv.org\/pdf\/2511.12788\">\u201cPhysics-Constrained Adaptive Neural Networks Enable Real-Time Semiconductor Manufacturing Optimization with Minimal Training Data\u201d<\/a> by Uerman et al.\u00a0(NeuroTechNet S.A.S.) shows how physics-constrained adaptive learning can achieve sub-nanometer precision with 90% fewer training samples, paving the way for sustainable and efficient production. 
<strong>Medical imaging<\/strong> is seeing breakthroughs with efficient 4D flow MRI super-resolution (PINGS-X) and improved cardiovascular flow estimation (SinoFlow), offering non-invasive diagnostic tools. In <strong>energy systems<\/strong>, real-time gas crossover prediction in PEM electrolyzers using PINNs (<a href=\"https:\/\/arxiv.org\/pdf\/2511.05879\">Kim et al., Jeju National University, https:\/\/arxiv.org\/pdf\/2511.05879<\/a>) promises safer and more efficient green hydrogen production.<\/p>\n<p>Looking forward, the integration of <strong>Lie group symmetries<\/strong> with PINNs, as explored by Jiao and Xiong (<a href=\"https:\/\/arxiv.org\/pdf\/2407.20155\">Tsinghua University, Beijing Institute of Mathematical Sciences and Applications, https:\/\/arxiv.org\/pdf\/2407.20155<\/a>), and by Klausen et al.\u00a0with <a href=\"https:\/\/arxiv.org\/pdf\/2510.25731\">LieSolver<\/a> (Fraunhofer Heinrich Hertz Institute, Technische Universit\u00e4t Berlin), offers a powerful paradigm for embedding exact physical symmetries, leading to more robust, interpretable, and computationally efficient solvers. The focus on <strong>structure-preserving PINNs<\/strong> for enforcing conservation laws, exemplified by Obiekev and Oguadime\u2019s work on the KdV equation (<a href=\"https:\/\/arxiv.org\/pdf\/2511.00418\">Oregon State University, https:\/\/arxiv.org\/pdf\/2511.00418<\/a>), will ensure long-term stability and physical fidelity in complex simulations. 
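<\/p>
<p>A toy version of the conservation idea makes it concrete: after sampling the learned solution, correct it so a discrete estimate of the conserved integral matches its known value exactly. The constant-shift projection below is purely illustrative; PINN-Proj itself enforces such constraints through constrained non-linear optimization during training.<\/p>

```python
import numpy as np

def project_mass(u, x, target_mass):
    # Trapezoidal estimate of the integral of u over the grid x.
    mass = ((u[:-1] + u[1:]) / 2.0 * np.diff(x)).sum()
    # A constant shift c changes that estimate by c * (x[-1] - x[0]),
    # so this restores the conserved quantity exactly.
    return u + (target_mass - mass) / (x[-1] - x[0])

x = np.linspace(0.0, 1.0, 101)
u = 0.95 * np.sin(np.pi * x)          # imperfect learned field on a 1-D slice
u_fixed = project_mass(u, x, target_mass=2.0 / np.pi)
```

<p>Hard projections like this guarantee the invariant by construction, at the cost of an extra post-processing or, as in the cited work, optimization step.<\/p>
<p>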
The development of <strong>differentiable spiking neurons<\/strong> like QIF (<a href=\"https:\/\/arxiv.org\/pdf\/2511.06614\">Wan et al., Brown University, Pacific Northwest National Laboratory, https:\/\/arxiv.org\/pdf\/2511.06614<\/a>) also hints at more biologically plausible and stable scientific machine learning models.<\/p>\n<p>While challenges remain, especially with the \u201ccurse of dimensionality\u201d in truly high-dimensional scenarios (<a href=\"https:\/\/arxiv.org\/pdf\/2511.08561\">Salvaire et al., Universit\u00e9 de Lorraine, https:\/\/arxiv.org\/pdf\/2511.08561<\/a>), the continuous innovation in domain decomposition, adaptive methods, and novel architectures like Neural Operators for cardiac electrophysiology (<a href=\"https:\/\/arxiv.org\/pdf\/2511.08418\">Lydon et al., King\u2019s College London, https:\/\/arxiv.org\/pdf\/2511.08418<\/a> and <a href=\"https:\/\/arxiv.org\/pdf\/2511.05216\">Radiakos et al., MIT, https:\/\/arxiv.org\/pdf\/2511.05216<\/a> for power systems) demonstrates a vibrant and rapidly advancing field. The journey towards highly accurate, efficient, and reliable physics-informed AI is well underway, promising to unlock new frontiers in scientific understanding and technological application.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 50 papers on physics-informed neural networks: Nov. 
23, 2025<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[63,1147,280],"tags":[824,1148,282,286,1616,281],"class_list":["post-1985","post","type-post","status-publish","format-standard","hentry","category-machine-learning","category-math-na","category-numerical-analysis","tag-domain-decomposition","tag-neural-operators","tag-partial-differential-equations-pdes","tag-physics-informed-neural-networks","tag-main_tag_physics-informed_neural_networks","tag-physics-informed-neural-networks-pinns"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery<\/title>\n<meta name=\"description\" content=\"Latest 50 papers on physics-informed neural networks: Nov. 23, 2025\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery\" \/>\n<meta property=\"og:description\" content=\"Latest 50 papers on physics-informed neural networks: Nov. 
23, 2025\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-23T08:21:05+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-28T21:17:30+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery\",\"datePublished\":\"2025-11-23T08:21:05+00:00\",\"dateModified\":\"2025-12-28T21:17:30+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\\\/\"},\"wordCount\":1611,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"domain decomposition\",\"neural operators\",\"partial differential equations (pdes)\",\"physics-informed neural networks\",\"physics-informed neural networks\",\"physics-informed neural networks (pinns)\"],\"articleSection\":[\"Machine Learning\",\"math.NA\",\"Numerical 
Analysis\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\\\/\",\"name\":\"Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2025-11-23T08:21:05+00:00\",\"dateModified\":\"2025-12-28T21:17:30+00:00\",\"description\":\"Latest 50 papers on physics-informed neural networks: Nov. 23, 2025\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2025\\\/11\\\/23\\\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Physics-Informed Neural Networks: Unlocking Next-Gen Scientific 
Discovery\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem 
Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery","description":"Latest 50 papers on physics-informed neural networks: Nov. 23, 2025","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/","og_locale":"en_US","og_type":"article","og_title":"Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery","og_description":"Latest 50 papers on physics-informed neural networks: Nov. 
23, 2025","og_url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2025-11-23T08:21:05+00:00","article_modified_time":"2025-12-28T21:17:30+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery","datePublished":"2025-11-23T08:21:05+00:00","dateModified":"2025-12-28T21:17:30+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/"},"wordCount":1611,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["domain decomposition","neural operators","partial differential equations (pdes)","physics-informed neural networks","physics-informed neural networks","physics-informed neural networks (pinns)"],"articleSection":["Machine Learning","math.NA","Numerical 
Analysis"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/","url":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/","name":"Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2025-11-23T08:21:05+00:00","dateModified":"2025-12-28T21:17:30+00:00","description":"Latest 50 papers on physics-informed neural networks: Nov. 23, 2025","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2025\/11\/23\/physics-informed-neural-networks-unlocking-next-gen-scientific-discovery\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest 
research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. 
Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":50,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-w1","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1985","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=1985"}],"version-history":[{"count":1,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1985\/revisions"}],"predecessor-version":[{"id":3190,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/1985\/revisions\/3190"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=1985"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=1985"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=1985"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}