Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery and Engineering Solutions
Latest 50 papers on physics-informed neural networks: Oct. 27, 2025
In the dynamic landscape of AI and ML, Physics-Informed Neural Networks (PINNs) have emerged as a powerful paradigm, blending data-driven learning with the rigor of physical laws. This exciting convergence addresses critical challenges in scientific computing, from solving complex partial differential equations (PDEs) to enabling robust control systems and accurate biomedical modeling. Recent research highlights a surge in innovative approaches, pushing PINNs beyond their initial limitations and opening new frontiers for real-world applications.
The Big Idea(s) & Core Innovations
One of the central themes in recent PINN advancements is the relentless pursuit of enhanced accuracy and efficiency, particularly in tackling complex, high-dimensional, or stiff problems. Researchers are creatively engineering new architectures and optimization strategies to address common PINN pain points such as spectral bias, slow convergence, and the need for robust uncertainty quantification.
Several papers tackle the core optimization challenges. Andrés Guzmán-Cordero et al. from the Vector Institute, Mila – Quebec AI Institute, Université de Montréal, in their paper “Improving Energy Natural Gradient Descent through Woodbury, Momentum, and Randomization”, introduce techniques to accelerate Energy Natural Gradient Descent (ENGD). By leveraging Woodbury’s matrix identity, a momentum scheme (SPRING), and Nyström approximation, they drastically cut computational costs for kernel matrix inversion, making PINN training more efficient, especially for large datasets.
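To make the speedup concrete, here is a minimal sketch of the linear-algebra trick at its core (illustrative names and damping, not the authors' code): a damped natural-gradient step that naively requires inverting a p × p matrix in parameter space can be computed exactly through an n × n solve in sample space via Woodbury's (push-through) identity.

```python
import numpy as np

def engd_step_woodbury(J, r, lam=1e-6):
    """Damped natural-gradient step computed in sample space via Woodbury.

    J   : (n, p) Jacobian of the PINN residuals w.r.t. the parameters
    r   : (n,)   residual vector
    lam : damping strength

    The naive step (J^T J + lam*I_p)^{-1} J^T r costs O(p^3). Woodbury's
    push-through identity gives the identical step as
    J^T (J J^T + lam*I_n)^{-1} r, an O(n^3) solve that is far cheaper when
    the batch size n is much smaller than the parameter count p.
    """
    n = J.shape[0]
    K = J @ J.T + lam * np.eye(n)   # n x n Gram (kernel) matrix
    alpha = np.linalg.solve(K, r)   # solve in sample space
    return J.T @ alpha              # lift the step back to parameter space
```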
Complementing this, Kang An et al. from Rice University and The Chinese University of Hong Kong, Shenzhen, introduce “AutoBalance: An Automatic Balancing Framework for Training Physics-Informed Neural Networks”. They identify a flaw in traditional gradient balancing methods due to heterogeneous Hessian spectra and propose a ‘post-combine’ approach where independent optimizers handle each loss component, significantly improving stability and performance. Further deepening our understanding of PINN optimization, Sifan Wang et al. from the Institute for Foundations of Data Science, Yale University and the Penn Institute for Computational Science, University of Pennsylvania, explore “Gradient Alignment in Physics-informed Neural Networks: A Second-Order Optimization Perspective”. They show that second-order optimizers like SOAP implicitly mitigate directional gradient conflicts, leading to state-of-the-art results, even on challenging turbulent flow problems. This work highlights that understanding and managing gradient dynamics is crucial for robust PINN training.
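As a rough sketch of the ‘post-combine’ idea (our reading of the paper; the names and the choice of Adam are illustrative): each loss term keeps its own optimizer state, and the per-loss preconditioned directions are summed only afterwards, so components with very different Hessian spectra stop competing for a single set of moment buffers.

```python
import torch

def init_states(params, n_losses):
    """One (step, m, v) Adam buffer set per loss term."""
    return [[0,
             [torch.zeros_like(p) for p in params],
             [torch.zeros_like(p) for p in params]] for _ in range(n_losses)]

def post_combine_step(params, losses, states, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    """Precondition each loss term with its own Adam state, then sum the updates."""
    updates = [torch.zeros_like(p) for p in params]
    for state, loss in zip(states, losses):
        grads = torch.autograd.grad(loss, params, retain_graph=True)
        state[0] += 1
        step, ms, vs = state
        for j, g in enumerate(grads):
            ms[j].mul_(betas[0]).add_(g, alpha=1 - betas[0])
            vs[j].mul_(betas[1]).addcmul_(g, g, value=1 - betas[1])
            m_hat = ms[j] / (1 - betas[0] ** step)
            v_hat = vs[j] / (1 - betas[1] ** step)
            updates[j] += m_hat / (v_hat.sqrt() + eps)  # this term's Adam direction
    with torch.no_grad():
        for p, u in zip(params, updates):
            p -= lr * u  # combine the per-loss directions only at the end
```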
The challenge of spectral bias, which hinders PINNs from capturing high-frequency components, is addressed by Yulun Wu et al. from KTH Royal Institute of Technology in “Iterative Training of Physics-Informed Neural Networks with Fourier-enhanced Features”. Their IFeF-PINN framework uses Random Fourier Features and a two-stage iterative training algorithm to mitigate this bias, demonstrating superior performance on high-frequency PDEs. Similarly, Rohan Arni and Carlos Blanco from High Technology High School and The Pennsylvania State University introduce the S-Pformer in “Physics-Informed Neural Networks with Fourier Features and Attention-Driven Decoding”, leveraging Fourier feature embeddings and attention mechanisms to mitigate spectral bias and reduce parameter count in PDE solving.
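The shared ingredient in these remedies is a Fourier feature embedding. A minimal, generic version is below (neither paper's exact architecture); the `scale` hyperparameter controls how strongly the network is pushed toward high frequencies.

```python
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Random Fourier feature embedding, the standard remedy for spectral bias.

    Inputs are projected through a fixed random Gaussian matrix B and mapped
    to [sin(2*pi*Bx), cos(2*pi*Bx)]. Feeding these features into an MLP lets
    the PINN represent high-frequency solution components much earlier in
    training than raw coordinates would.
    """
    def __init__(self, in_dim, n_features=128, scale=10.0):
        super().__init__()
        # Fixed (non-trainable) frequencies; `scale` sets their typical magnitude.
        self.register_buffer("B", torch.randn(in_dim, n_features) * scale)

    def forward(self, x):
        proj = 2.0 * torch.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
```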
Adaptive strategies are also proving pivotal. Coen Visser et al. from Delft University of Technology present “PACMANN: Point Adaptive Collocation Method for Artificial Neural Networks”. This method dynamically adjusts collocation points based on residual gradients, yielding improved accuracy and efficiency, especially in high-dimensional scenarios. For complex multiscale PDEs, Jonah Botvinick-Greenhouse et al. from Cornell University and Mitsubishi Electric Research Laboratories propose “AB-PINNs: Adaptive-Basis Physics-Informed Neural Networks for Residual-Driven Domain Decomposition”. AB-PINNs dynamically adapt subdomains and add new ones based on residuals, enhancing expressiveness and preventing convergence to poor local minima.
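A stripped-down version of residual-driven adaptation in the spirit of PACMANN (illustrative only; the paper adds safeguards such as keeping points inside the domain) looks like this: each collocation point takes a few gradient-ascent steps on the squared PDE residual, so the sample set drifts toward poorly resolved regions.

```python
import torch

def adapt_collocation(points, residual_fn, step_size=1e-2, n_steps=5):
    """Move collocation points toward regions of large PDE residual.

    points      : (n, d) tensor of current collocation points
    residual_fn : maps points to their pointwise PDE residual
    """
    pts = points.detach().clone().requires_grad_(True)
    for _ in range(n_steps):
        loss = residual_fn(pts).pow(2).sum()
        (grad,) = torch.autograd.grad(loss, pts)
        with torch.no_grad():
            pts += step_size * grad  # gradient *ascent* on the residual
    return pts.detach()
```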
The robustness and reliability of PINNs are also undergoing significant improvements. Frank Shih et al. from Memorial Sloan Kettering Cancer Center and Purdue University introduce “Uncertainty Quantification for Physics-Informed Neural Networks with Extended Fiducial Inference”. This novel method provides rigorous, distribution-free confidence sets, overcoming limitations of Bayesian and dropout-based UQ. Yifan Yu et al. from the National University of Singapore and the University of British Columbia further develop “A Conformal Prediction Framework for Uncertainty Quantification in Physics-Informed Neural Networks”, offering finite-sample coverage guarantees and localized conformal quantile estimation for better adaptability.
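For intuition, here is the generic split-conformal recipe such frameworks build on (a sketch of the textbook procedure, not the paper's localized variant): calibrate absolute residuals on held-out points, then pad test predictions with the finite-sample-corrected quantile.

```python
import numpy as np

def split_conformal_interval(pred_cal, y_cal, pred_test, alpha=0.1):
    """Prediction intervals with finite-sample coverage >= 1 - alpha.

    pred_cal, y_cal : PINN predictions and targets on a held-out calibration set
    pred_test       : predictions at new points
    Requires n large enough that ceil((n + 1) * (1 - alpha)) / n <= 1.
    """
    scores = np.abs(y_cal - pred_cal)                 # nonconformity scores
    n = len(scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n        # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return pred_test - q, pred_test + q               # symmetric interval
```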
Beyond PDEs, PINNs are finding homes in diverse applications. Jostein Barry-Straume et al. from Virginia Tech present “Ensemble based Closed-Loop Optimal Control using Physics-Informed Neural Networks” to solve optimal control problems via the Hamilton-Jacobi-Bellman equation, showing robustness in noisy, nonlinear systems. In a critical clinical application, Kayode Olumoyin and Katarzyna Rejniak from the H. Lee Moffitt Cancer Center and Research Institute use “Modeling Adoptive Cell Therapy in Bladder Cancer from Sparse Biological Data using PINNs” to capture unmodeled biological effects from sparse data, highlighting PINNs’ ability to encode prior knowledge as regularization. And in a timely application, P. Rothenbeck et al. from the University of Cologne, Germany, deploy “Modeling COVID-19 Dynamics in German States Using Physics-Informed Neural Networks” for spatio-temporal analysis of the pandemic, demonstrating how PINNs can estimate epidemiological parameters and track the impact of interventions.
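To illustrate how a PINN encodes such prior structure, here is a hypothetical compartmental sketch with a learnable, time-varying transmission rate β(t); the cited papers' compartment models and parameterizations differ, so treat this purely as the shape of the approach.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SIRPinn(nn.Module):
    """Toy SIR-style PINN: the network outputs S, I, R and beta(t) jointly."""
    def __init__(self, gamma=0.1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 4))
        self.gamma = gamma  # fixed recovery rate (an assumption of this sketch)

    def forward(self, t):
        out = self.net(t)
        S, I, R = torch.sigmoid(out[:, :3]).unbind(-1)  # population fractions
        beta = F.softplus(out[:, 3])                    # time-varying beta(t) > 0
        return S, I, R, beta

    def ode_residual(self, t):
        t = t.requires_grad_(True)
        S, I, R, beta = self(t)
        dS = torch.autograd.grad(S.sum(), t, create_graph=True)[0].squeeze(-1)
        dI = torch.autograd.grad(I.sum(), t, create_graph=True)[0].squeeze(-1)
        dR = torch.autograd.grad(R.sum(), t, create_graph=True)[0].squeeze(-1)
        # Penalize deviations from dS = -beta*S*I, dI = beta*S*I - gamma*I, dR = gamma*I.
        return ((dS + beta * S * I) ** 2
                + (dI - beta * S * I + self.gamma * I) ** 2
                + (dR - self.gamma * I) ** 2).mean()
```

A data-fit loss on observed case counts would be trained alongside `ode_residual`; that combination is exactly how the sparse-data regularization described above enters.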
Under the Hood: Models, Datasets, & Benchmarks
The recent breakthroughs in PINNs are underpinned by innovative models, novel architectures, and rigorous benchmarking. These resources are critical for both advancing research and facilitating real-world deployment.
- Optimizers & Architectures: Many papers focus on enhancing existing architectures. The work by Andrés Guzmán-Cordero et al. (Improving Energy Natural Gradient Descent through Woodbury, Momentum, and Randomization) introduces techniques like Woodbury’s matrix identity and the SPRING momentum scheme to improve ENGD, with an assumed code repository at https://github.com/VectorInstitute/ENGD-optimizers. Kang An et al. (AutoBalance: An Automatic Balancing Framework for Training Physics-Informed Neural Networks) propose a ‘post-combine’ framework for loss balancing. Sifan Wang et al. (Gradient Alignment in Physics-informed Neural Networks: A Second-Order Optimization Perspective) highlight the SOAP optimizer for its effectiveness in mitigating gradient conflicts, with code available at https://github.com/PredictiveIntelligenceLab/jaxpi/tree/pirate.
- Spectral Bias Mitigation: The IFeF-PINN framework by Yulun Wu et al. (Iterative Training of Physics-Informed Neural Networks with Fourier-enhanced Features) uses Random Fourier Features. Rohan Arni and Carlos Blanco’s “Physics-Informed Neural Networks with Fourier Features and Attention-Driven Decoding” introduces S-Pformer, an encoder-free Transformer-based PINN. Yujia Huang et al. (Fourier heuristic PINNs to solve the biharmonic equations based on its coupled scheme) propose FCPINN, another Fourier-enhanced PINN for high-order PDEs.
- Adaptive & Decomposition Methods: PACMANN by Coen Visser et al. (Point Adaptive Collocation Method for Artificial Neural Networks) is an adaptive sampling method, with its code at https://github.com/CoenVisser/PACMANN. AB-PINNs from Jonah Botvinick-Greenhouse et al. (Adaptive-Basis Physics-Informed Neural Networks for Residual-Driven Domain Decomposition) dynamically adapt subdomains for multiscale PDEs. Vikas Dwivedi et al. from the CREATIS Biomedical Imaging Laboratory introduce Gated X-TFC (Gated X-TFC: Soft Domain Decomposition for Forward and Inverse Problems in Sharp-Gradient PDEs) for sharp-gradient PDEs using differentiable logistic gates (a minimal gating sketch appears after this list), with code at https://github.com/GatedX-TFC.
- Novel Network Architectures: Junyi Wu and Guang Lin from Purdue University introduce PO-CKAN (PO-CKAN: Physics Informed Deep Operator Kolmogorov Arnold Networks with Chunk Rational Structure), leveraging rational KAN modules for efficient PDE solving. Kürsat Tekbıyık and Anil Gurses from Bilkent University present PIKAN (PIKAN: Physics-Inspired Kolmogorov-Arnold Networks for Explainable UAV Channel Modelling) for explainable UAV channel modeling, with code at https://github.com/anilgurses/. Mahdi Movahedian Moghaddam et al. introduce RISN (Advanced Physics-Informed Neural Network with Residuals for Solving Complex Integral Equations), integrating residual connections for integral equations.
- Faster Training & Scalability: Frozen-PINN by Chinmay Datar et al. (Fast training of accurate physics-informed neural networks without gradient descent) achieves remarkable speedups without gradient descent. “Nyström-Accelerated Primal LS-SVMs: Breaking the O(an³) Complexity Bottleneck for Scalable ODEs Learning” by Weikuo Wang et al. offers massive speedups for ODEs, with code at https://github.com/AI4SciCompLab/NLS-SVMs. Chunyang Liao from UCLA presents a random feature-based framework for PDEs (Solving Partial Differential Equations with Random Feature Models), with code at https://github.com/liaochunyang/RF_PDE; a toy random-feature solve is sketched after this list.
- Physical Consistency & Interpretability: Kim Bente et al. from The University of Sydney introduce divergence-free neural networks (dfNNs) in “Mass Conservation on Rails – Rethinking Physics-Informed Learning of Ice Flow Vector Fields” to enforce exact mass conservation (see the stream-function sketch after this list), with code at https://github.com/kimbente/mass_conservation_on_rails. Javier Castro and Benjamin Gess from Technische Universität Berlin and the Max Planck Institute for Mathematics in the Sciences propose THINNs (THINNs: Thermodynamically Informed Neural Networks), a thermodynamically consistent PINN extension. Matteo Scialpi et al. from Università di Ferrara introduce APRIL (APRIL: Auxiliary Physically-Redundant Information in Loss – A physics-informed framework for parameter estimation with a gravitational-wave case study), which adds an auxiliary loss term for physical redundancies, with code at https://github.com/scialpi/APRIL.
- Benchmarks: “Morpheus: Benchmarking Physical Reasoning of Video Generative Models with Real Physical Experiments” by Chenyu Zhang et al. provides a novel benchmark for video generative models, with resources and code at https://physics-from-video.github.io/morpheus-bench/. “Neural PDE Solvers with Physics Constraints: A Comparative Study of PINNs, DRM, and WANs” by Jiakang Chen from UCL offers a comparative analysis of different neural PDE solvers, with code at https://github.com/JiakangC/Neural-Network-Based-PDE-Solver.git.
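First, the soft domain-decomposition idea from the Gated X-TFC entry above, reduced to its core (an illustrative blend, not the paper's exact functional form): a logistic gate smoothly hands off between two subdomain experts across an interface, keeping the composite solution differentiable near a sharp gradient.

```python
import torch

def soft_domain_blend(x, u_left, u_right, x0=0.5, eps=0.05):
    """Blend two subdomain models with a differentiable logistic gate.

    u_left, u_right : callables mapping x to each expert's prediction
    x0, eps         : interface location and gate width (hypothetical values)
    """
    g = torch.sigmoid((x - x0) / eps)           # ramps from 0 to 1 near x0
    return (1 - g) * u_left(x) + g * u_right(x)
```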
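Next, a toy random-feature PDE solve in the spirit of the gradient-descent-free approaches above: with fixed random tanh features (an assumed choice), fitting the 1D Poisson problem −u″ = f with zero boundary conditions collapses to a single linear least-squares solve.

```python
import numpy as np

def features(x, W, b):
    """phi(x) = tanh(w*x + b) for each random frequency/offset pair."""
    return np.tanh(np.outer(x, W) + b)

def features_xx(x, W, b):
    """Second x-derivative of tanh(w*x + b): -2*w^2*t*(1 - t^2)."""
    t = np.tanh(np.outer(x, W) + b)
    return (W**2) * (-2.0 * t * (1.0 - t**2))

rng = np.random.default_rng(0)
W, b = rng.normal(0.0, 5.0, 200), rng.uniform(-np.pi, np.pi, 200)
x = np.linspace(0.0, 1.0, 100)
f = np.pi**2 * np.sin(np.pi * x)              # exact solution: u(x) = sin(pi*x)

# Stack PDE rows (-u'' = f) and boundary rows (u(0) = u(1) = 0), then solve.
A = np.vstack([-features_xx(x, W, b), features(np.array([0.0, 1.0]), W, b)])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # one linear solve, no training loop
u = features(x, W, b) @ c                     # approximate solution on the grid
```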
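Finally, the exact-conservation construction behind the dfNN entry, in generic form (our sketch, not necessarily the authors' parameterization): model a 2D flow as the rotated gradient of a learned stream function ψ, so divergence-freeness holds by construction rather than through a penalty term.

```python
import torch
import torch.nn as nn

class DivergenceFreeField2D(nn.Module):
    """Exactly divergence-free 2D vector field via a scalar stream function.

    With u = (d(psi)/dy, -d(psi)/dx), div u = psi_yx - psi_xy = 0 identically,
    so mass conservation is guaranteed for every parameter setting.
    """
    def __init__(self, hidden=64):
        super().__init__()
        self.psi = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy):
        xy = xy.requires_grad_(True)
        psi = self.psi(xy).sum()
        (grad,) = torch.autograd.grad(psi, xy, create_graph=True)
        return torch.stack([grad[:, 1], -grad[:, 0]], dim=-1)  # rotate 90 degrees
```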
Impact & The Road Ahead
The innovations in physics-informed neural networks signal a transformative shift across scientific and engineering disciplines. We’re moving beyond mere curve-fitting towards models that inherently understand and respect the underlying physics. This leads to more robust, generalizable, and interpretable AI systems, especially critical in fields where data is scarce or expensive, and physical consistency is paramount.
The potential impact is vast:
- Scientific Discovery: From more accurate climate models (ice flow dynamics by Kim Bente et al.) to understanding complex biological processes (bladder cancer modeling by Kayode Olumoyin and Katarzyna Rejniak), PINNs are accelerating scientific inquiry. The capability to learn time-varying parameters and unmodeled effects from sparse data is a game-changer for fields like epidemiology (P. Rothenbeck et al. and G. Dimarco et al.) and material science.
- Engineering & Design: The integration of PINNs with CAD domains (Moritz von Tresckow et al.) and finite element methods (Hélène Barucq et al., “Enriching continuous Lagrange finite element approximation spaces using neural networks”) promises more efficient design cycles and simulations. Real-time control systems (Jostein Barry-Straume et al., and “Data-Driven Adaptive PID Control Based on Physics-Informed Neural Networks”) and engine health monitoring (Kamaljyoti Nath et al., “A Digital Twin for Diesel Engines: Operator-infused Physics-Informed Neural Networks with Transfer Learning for Engine Health Monitoring”) are becoming smarter and more resilient. The development of specialized PINNs for battery modeling (Khoa Tran et al., “SeqBattNet: A Discrete-State Physics-Informed Neural Network with Aging Adaptation for Battery Modeling”) will improve battery life and safety.
- Broader AI/ML Advancements: The focus on efficient optimization, robust uncertainty quantification, and interpretable models benefits the wider machine learning community. Techniques like temporal lifting by Jeffrey Camlin of Red Dawn Academic Press (Temporal Lifting as Latent-Space Regularization for Continuous-Time Flow Models in AI Systems) for stabilizing continuous-time flow models, or randomized matrix sketching for memory-efficient gradient monitoring from Harbir Antil and Deepanshu Verma at the University of Maryland, College Park (Randomized Matrix Sketching for Neural Network Training and Gradient Monitoring), have implications far beyond PINNs. Furthermore, StruSR by Yunpeng Gong et al. (StruSR: Structure-Aware Symbolic Regression with Physics-Informed Taylor Guidance) uses PINNs to guide symbolic regression, bridging the gap between deep learning and interpretable mathematical expressions.
- Automation of Scientific Workflows: The emergence of LLM-driven frameworks like Lang-PINN by Xin He et al. from the Agency for Science, Technology and Research (A*STAR), Singapore, which automatically generates PINNs from natural language descriptions (Lang-PINN: From Language to Physics-Informed Neural Networks via a Multi-Agent Framework), promises to democratize complex scientific modeling, making it accessible to a broader audience.
The road ahead involves further enhancing the robustness of PINNs to noise, as highlighted by Aleksandra Jekica et al. from the Norwegian University of Science and Technology (Examining the robustness of Physics-Informed Neural Networks to noise for Inverse Problems), and exploring more sophisticated evolutionary optimization methods to automatically discover optimal architectures and hyperparameters, as discussed in “Evolutionary Optimization of Physics-Informed Neural Networks: Evo-PINN Frontiers and Opportunities”. The detailed review of PIML in biomedical science and engineering by Nazanin Ahmadi et al. from Brown University (Physics-Informed Machine Learning in Biomedical Science and Engineering) points towards integration with large language models for automated problem translation and model discovery. As PINNs continue to mature, they will not only solve existing problems more effectively but also unlock the potential to tackle previously intractable challenges, fundamentally changing how we understand and interact with the physical world. The future of scientific machine learning is incredibly bright, and PINNs are at its forefront!