Physics-Informed Neural Networks: Navigating the Future of Scientific Machine Learning

Latest 50 papers on physics-informed neural networks: Sep. 21, 2025

Physics-Informed Neural Networks (PINNs) have rapidly emerged as a powerful paradigm in scientific machine learning, fusing the predictive power of deep learning with the rigorous principles of physics. By embedding governing equations directly into neural network architectures, PINNs promise to accelerate discovery, simulate complex systems, and solve challenging problems across diverse scientific and engineering domains. Recent research is pushing the boundaries of PINNs, addressing critical limitations and expanding their capabilities, from robust uncertainty quantification to solving previously intractable physical phenomena.

The Big Idea(s) & Core Innovations

The core challenge in many scientific and engineering problems lies in accurately and efficiently modeling complex systems governed by differential equations, often with sparse data or demanding computational costs. Recent advancements in PINNs tackle these issues through ingenious architectural designs, enhanced training strategies, and novel theoretical foundations.

One significant theme is improving PINN stability and accuracy through smarter architectural and optimization choices. Researchers at the Pacific Northwest National Laboratory in their paper, “Self-adaptive weights based on balanced residual decay rate for physics-informed neural networks and deep operator networks”, address the common problem of uneven residual decay, proposing a self-adaptive weighting method that significantly boosts convergence and accuracy by balancing residual decay rates. Complementing this, work from Istanbul Technical University and Rutherford Appleton Laboratory in “Multi-Objective Loss Balancing in Physics-Informed Neural Networks for Fluid Flow Applications” explores how trainable activation functions and multi-objective loss balancing, particularly with LRA (Learning Rate Annealing), can yield up to 95.2% improvement in complex fluid flow scenarios. Similarly, John M. Hanna et al. from Inria introduce an “Improved Physics-informed neural networks loss function regularization with a variance-based term” to reduce localized high-error regions by incorporating error variance into the loss function, offering an easy-to-implement regularization. For particularly challenging high-order PDEs, Yujia Huang et al. from The University of Queensland introduce “Fourier heuristic PINNs to solve the biharmonic equations based on its coupled scheme”, leveraging Fourier features to achieve an optimal trade-off between speed and accuracy by decomposing high-order equations into Poisson equations. Further enhancing stability, Milos Babic et al. from Know Center Research GmbH present “Stabilizing PINNs: A regularization scheme for PINN training to avoid unstable fixed points of dynamical systems”, using stability theory to prevent PINNs from converging to physically incorrect solutions.
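The variance-based regularization idea lends itself to a compact illustration. The sketch below is a minimal toy version, assuming a precomputed vector of pointwise PDE residuals and a hypothetical weighting constant `lam`; it is not the paper's exact formulation, but it shows why penalizing the variance of the squared residuals discourages localized error pockets:

```python
import numpy as np

def pinn_loss_with_variance(residuals, lam=0.1):
    """Toy PINN loss: mean squared PDE residual plus a variance penalty.

    The variance term pushes the pointwise squared errors toward a
    uniform distribution, discouraging localized high-error regions.
    (`lam` is a hypothetical weighting constant.)
    """
    sq = residuals ** 2
    return sq.mean() + lam * sq.var()

# Two residual fields with the same mean squared error (0.01):
uniform = np.full(100, 0.1)   # error spread evenly
localized = np.zeros(100)
localized[0] = 1.0            # all error concentrated in one spot
assert np.isclose((uniform ** 2).mean(), (localized ** 2).mean())

print(pinn_loss_with_variance(uniform) < pinn_loss_with_variance(localized))  # True
```

The two fields have identical mean squared error, but the localized one pays an extra variance penalty, which is exactly the behavior such a regularizer rewards.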

Another crucial area of innovation is enhancing PINNs’ ability to handle complex physical phenomena and specific domain challenges. Antonin Sulc from Lawrence Berkeley National Lab proposes “Quantum Noise Tomography with Physics-Informed Neural Networks”, which uses PINNs to efficiently characterize quantum system dynamics from sparse data, learning both system evolution and dissipation parameters. In the realm of medical applications, C. D. Wickramasinghe and P. Ahire from Wayne State University present “PBPK-iPINNs : Inverse Physics-Informed Neural Networks for Physiologically Based Pharmacokinetic Brain Models” to estimate unknown parameters in pharmacokinetic brain models with minimal data, supporting improved drug development. Furthermore, Aiping Zhong et al. from South China University of Technology and Purdue University introduce a novel “Physics-Informed Neural Networks-Based Model Predictive Control Framework for SIR Epidemics”, which uses log-scaled and split-integral PINNs to estimate SIR dynamics and transmission rates in real-time with noisy data. For materials science, Liya Gaynutdinova et al. from Czech Technical University in Prague develop “Homogenization with Guaranteed Bounds via Primal-Dual Physically Informed Neural Networks”, introducing a dual formulation for PINNs to provide guaranteed error bounds in homogenization of thermo-conductive composites with discontinuous coefficients. Meanwhile, Yanpeng Gong et al. from Beijing University of Technology propose “Physics-Informed Kolmogorov-Arnold Networks for multi-material elasticity problems in electronic packaging”, leveraging KANs’ inherent ability to handle material discontinuities without domain decomposition.
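The log-scaled SIR formulation can be made concrete with a toy consistency check. Under the SIR equations, dI/dt = (βS − γ)I, so d(log I)/dt − (βS − γ) is the residual a log-scaled PINN would drive to zero for the infected compartment. The sketch below verifies this on an Euler-integrated trajectory; the rates `beta` and `gamma` are hypothetical, not taken from the paper:

```python
import numpy as np

beta, gamma = 0.3, 0.1   # hypothetical transmission / recovery rates

def sir_rhs(y):
    """Right-hand side of the SIR model (population normalized to 1)."""
    S, I, R = y
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

# Generate a reference trajectory with explicit Euler.
dt, steps = 0.01, 1000
y = np.array([0.99, 0.01, 0.0])
traj = [y]
for _ in range(steps):
    y = y + dt * sir_rhs(y)
    traj.append(y)
traj = np.array(traj)

# Log-scaled residual for the infected compartment:
# dI/dt = (beta*S - gamma) * I  =>  d(log I)/dt - (beta*S - gamma) = 0
logI = np.log(traj[:, 1])
residual = np.gradient(logI, dt) - (beta * traj[:, 0] - gamma)
print(np.abs(residual).max())  # near zero for a consistent trajectory
```

Working in log space keeps the residual well scaled even when I spans orders of magnitude, which is the practical motivation for log-scaled variants.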

Addressing data sparsity and computational efficiency is also paramount. Shalev Manor and Mohammad Kohandel from the University of Waterloo introduce “IP-Basis PINNs: Efficient Multi-Query Inverse Parameter Estimation”, a meta-learning approach that significantly reduces computational overhead for inverse problems through an offline-online decomposition. For ecological modeling, Julian Evan Chrisnanto et al. from Universitas Padjadjaran present “Unified Spatiotemporal Physics-Informed Learning (USPIL): A Framework for Modeling Complex Predator-Prey Dynamics”, offering a unified PINN solution for ODEs and PDEs that achieves 10-50x speedups over traditional solvers while maintaining high accuracy. In a fascinating comparison, Riyaadh Gani from University College London in “Physics-Informed Neural Networks vs. Physics Models for Non-Invasive Glucose Monitoring: A Comparative Study Under Realistic Synthetic Conditions” finds that a physics-engineered Beer–Lambert model can surprisingly outperform deep learning approaches in accuracy and efficiency for non-invasive glucose monitoring under ultra-realistic conditions.
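For context on that physics-engineered baseline: the Beer–Lambert law maps concentration to absorbance as A = εcl and inverts in closed form, with no training loop at all. The sketch below uses hypothetical calibration constants (`epsilon`, `path_length`), not values from the study:

```python
# Hypothetical calibration constants (not the study's values).
epsilon = 0.15      # absorptivity, AU per (mmol/L * cm)
path_length = 0.2   # optical path length, cm

def absorbance(concentration):
    """Beer-Lambert law: A = epsilon * c * l."""
    return epsilon * concentration * path_length

def concentration_from(A):
    """Closed-form inversion -- no training required."""
    return A / (epsilon * path_length)

c_true = 5.5  # mmol/L, a typical fasting glucose level
print(concentration_from(absorbance(c_true)))  # recovers ~5.5
```

The closed-form inversion is one reason a well-calibrated physics model can beat a learned one on accuracy per unit of compute in this setting.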

Under the Hood: Models, Datasets, & Benchmarks

Recent research introduces or heavily relies on several key models, datasets, and benchmarks to validate and advance PINN capabilities:

  • FCPINN (Fourier heuristic PINNs): Proposed by Yujia Huang et al. for solving biharmonic equations, integrating Fourier spectral theory for improved accuracy and convergence. (Code: N/A, Paper: https://arxiv.org/pdf/2509.15004)
  • CT-MARL (Continuous-Time Multi-Agent Reinforcement Learning): Introduced by Xuefeng Wang et al. from Purdue University, this framework uses PINNs and a Value Gradient Iteration (VGI) module to approximate differential value functions for multi-agent reinforcement learning. Custom continuous-time versions of MPE and multi-agent MuJoCo benchmarks were developed. (Code: N/A, Paper: https://arxiv.org/pdf/2509.09135)
  • PBPK-iPINN: A novel inverse PINN framework by C. D. Wickramasinghe and P. Ahire for physiologically based pharmacokinetic brain models, validated using real-world data from the Simcyp simulator. (Code: https://github.com/deepxde/deepxde, Paper: https://arxiv.org/pdf/2509.12666)
  • ReBaNO (Reduced Basis Neural Operator): Developed by Haolan Zheng et al. from University of Massachusetts Dartmouth, this algorithm achieves discretization invariance and reduces generalization gaps for PDE solving using knowledge distillation and physics embedding. (Code: https://github.com/haolanzheng/rebano, Paper: https://arxiv.org/pdf/2509.09611)
  • NeuSA (Neuro-Spectral Architectures): Introduced by Arthur Bizzi et al. from EPFL, this PINN architecture integrates spectral methods to address spectral bias and causality in PDEs, demonstrating superior performance on complex problems like nonlinear wave equations. (Code: Will be released, Paper: https://arxiv.org/pdf/2509.04966)
  • HyPINO (Multi-Physics Neural Operators): A novel multi-physics neural operator by Rafael Bischof et al. from ETH Zurich, combining hypernetwork-based architecture with mixed supervision and an iterative refinement procedure for zero-shot generalization across parametric PDEs. (Code: N/A, Paper: https://arxiv.org/pdf/2509.05117)
  • KKT-Hardnet+: Presented by Ashfaq Iftakher et al. from Texas A&M University, this architecture strictly enforces nonlinear equality and inequality constraints in PINNs using a differentiable projection layer based on KKT conditions. (Code: https://github.com/SOULS-TAMU/kkt-hardnet, Paper: https://arxiv.org/pdf/2507.08124)
  • EEMS-PINNs (Energy-Equidistributed Moving Sampling PINNs): Developed by Qinjiao Gao et al. from Zhejiang Gongshang University, this framework for conservative PDEs integrates energy-based adaptive mesh optimization with physics-informed learning to ensure accurate and stable long-time simulations. (Code: https://github.com/sufe-Ran-Zhang/EMMPDE, Paper: https://arxiv.org/pdf/2508.19561)
  • PinnDE: An open-source Python library by Jason Matthews and Alex Bihlo from Memorial University of Newfoundland that democratizes PINN and DeepONet applications for solving differential equations. (Code: https://github.com/JB55Matthews/PinnDE, Paper: https://arxiv.org/pdf/2408.10011)
  • RAMS (Residual-based adversarial-gradient moving sample method): Proposed by Weihang Ouyang et al. from Hong Kong Polytechnic University and Yale University, this adaptive sampling strategy uses gradient-based optimization to maximize PDE residuals, improving accuracy and reducing computational costs. (Code: N/A, Paper: https://arxiv.org/pdf/2509.01234)
  • PIKAN (Physics-Informed Kolmogorov-Arnold Networks): Introduced by Yanpeng Gong et al. from Beijing University of Technology, this method utilizes KANs to solve multi-material elasticity problems by naturally handling material discontinuities. (Code: https://github.com/yanpeng-gong/PIKAN-MultiMaterial, Paper: https://arxiv.org/pdf/2508.16999)
  • SciML Agents: Investigated by Saarth Gaonkar et al. from UC Berkeley, this uses LLMs to generate scientifically appropriate code for ODEs, evaluated with the new ODE-1000 benchmark dataset which includes adversarial ODE problems. (Code: https://github.com/SqueezeAILab/sciml-agent, Paper: https://arxiv.org/pdf/2509.09936)
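Several entries above hinge on where collocation points are placed. The sketch below illustrates the simplest residual-ranking form of adaptive sampling, with a synthetic residual field standing in for a real PDE residual; note that RAMS itself refines points by adversarial gradient ascent on the residual rather than by ranking alone:

```python
import numpy as np

rng = np.random.default_rng(0)

def pde_residual(x):
    """Synthetic stand-in for a PDE residual: a sharp peak near x = 0.8,
    mimicking a localized region the current solution resolves poorly."""
    return np.exp(-((x - 0.8) ** 2) / 0.005)

# Draw uniform candidates, then keep the highest-residual points
# as new collocation points for the next training round.
candidates = rng.uniform(0.0, 1.0, size=1000)
scores = np.abs(pde_residual(candidates))
top_k = candidates[np.argsort(scores)[-50:]]

print(top_k.min(), top_k.max())  # clustered around the peak at 0.8
```

Concentrating collocation points where the residual is largest spends the training budget on the regions the network currently gets most wrong, which is the shared intuition behind these adaptive-sampling schemes.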

Impact & The Road Ahead

The breakthroughs in PINNs showcased by these papers are poised to revolutionize scientific computing and engineering. The enhanced accuracy, stability, and computational efficiency mean that PINNs are moving closer to becoming indispensable tools for tackling real-world problems. From personalized drug development and smart grid optimization to climate modeling and quantum system characterization, the potential impact is immense.

The integration of uncertainty quantification, as demonstrated by Yifan Yu et al. from National University of Singapore in “A Conformal Prediction Framework for Uncertainty Quantification in Physics-Informed Neural Networks”, is a critical step towards building trust in AI-driven scientific predictions. The theoretical advancements, like those by Ronald Katende from Kabale University in “Non-Asymptotic Stability and Consistency Guarantees for Physics-Informed Neural Networks via Coercive Operator Analysis”, provide the necessary mathematical rigor for PINNs to be adopted in safety-critical applications.

Looking ahead, several exciting avenues are emerging: hybrid approaches that combine PINNs with traditional numerical methods (e.g., “A Hybrid Discontinuous Galerkin Neural Network Method for Solving Hyperbolic Conservation Laws with Temporal Progressive Learning” by Yan Shen et al. from University of Science and Technology of China), adaptive sampling techniques, and the exploration of new neural architectures like Kolmogorov-Arnold Networks (KANs) promise even greater performance. The growing focus on user-friendly open-source tools like PinnDE will democratize access to these powerful methods, enabling a broader community of researchers to leverage physics-informed machine learning. The future of scientific discovery will undoubtedly be shaped by these intelligent, physics-aware algorithms.

The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.
