Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery and Engineering Solutions

Latest 50 papers on physics-informed neural networks: Sep. 29, 2025

Physics-Informed Neural Networks (PINNs) are rapidly transforming how we solve complex scientific and engineering problems. By embedding the laws of physics directly into the training objective of a neural network, PINNs offer a powerful approach to challenges ranging from fluid dynamics to quantum mechanics, often with sparse or noisy data. This blog post dives into recent breakthroughs, highlighting how researchers are enhancing PINN robustness, accuracy, and efficiency to push the boundaries of scientific machine learning.

## The Big Idea(s) & Core Innovations

A central theme across recent research is the continuous drive to make PINNs more reliable, adaptable, and computationally efficient. A key challenge addressed by several papers is the inherent difficulty PINNs have with noise and discontinuities. For instance, in their paper “Examining the robustness of Physics-Informed Neural Networks to noise for Inverse Problems”, Aleksandra Jekica and colleagues from the Norwegian University of Science and Technology reveal that traditional PINNs often underperform in noisy inverse problems compared to classical methods such as the Finite Element Method. They show that the physics loss can fail to correct for noise, leading to biased parameter estimates, and underscore the need for improved early-stopping strategies.

Tackling a similar challenge from a thermodynamic perspective, Javier Castro and Benjamin Gess from Technische Universität Berlin and the Max Planck Institute for Mathematics in the Sciences introduce “THINNs: Thermodynamically Informed Neural Networks”. This groundbreaking work replaces the ad-hoc L2 penalization in PINNs with a rate functional derived from large-deviation principles of non-equilibrium thermodynamics.
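Most of the work surveyed here starts from the same standard recipe: a composite loss that sums a PDE residual term and a boundary/data term under plain L2 penalization (the very term THINNs seeks to replace). A minimal, self-contained sketch of that recipe — a toy ODE, PyTorch assumed, illustrative only and not any cited paper's method:

```python
import torch

# Minimal PINN sketch: solve u'(x) + u(x) = 0 on [0, 1] with u(0) = 1
# (exact solution: exp(-x)). Illustrative toy problem only.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)  # random collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    physics_loss = ((du + u) ** 2).mean()      # ODE residual term
    bc_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # boundary term
    loss = physics_loss + bc_loss              # standard L2 penalization
    opt.zero_grad()
    loss.backward()
    opt.step()

# the trained network should approximate exp(-x)
err = (net(torch.tensor([[0.5]])) - torch.exp(torch.tensor(-0.5))).abs().item()
```

Everything past the loss definition is ordinary gradient-based training, which is what makes the approach so portable — and why so much of the research below focuses on reshaping that loss, the optimizer, or the collocation points.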
For THINNs, the result is a more physically consistent loss function, particularly for systems with discontinuities and shock formation, with superior accuracy over classical PINNs.

Another innovative approach to PINN robustness comes from Haozhe Jia and collaborators from HKUST (GZ) and Shandong University in their work “Physics-Informed Representation Alignment for Sparse Radio-Map Reconstruction”. They propose PhyRMDM, a dual U-Net architecture that integrates physical principles (the Helmholtz equation) with diffusion models to significantly improve radio-map reconstruction from ultra-sparse data. Similarly, Feilong Jiang and colleagues from Lancaster University and the University of Western Ontario tackle internal covariate shift in PINNs with “Mask-PINNs: Mitigating Internal Covariate Shift in Physics-Informed Neural Networks”. Their learnable mask function regulates feature distributions while preserving physical constraints, dramatically improving accuracy and training stability across various PDE benchmarks.

Other papers push the boundaries of PINN applications and theoretical guarantees. E. Abdo et al. from the University of California, Santa Barbara provide a rigorous error analysis for PINNs approximating the Boltzmann equation in “Error estimates of physics-informed neural networks for approximating Boltzmann equation”, demonstrating their effectiveness for high-dimensional, nonlocal PDEs. For multi-query inverse problems, Shalev Manor and Mohammad Kohandel from the University of Waterloo introduce “IP-Basis PINNs: Efficient Multi-Query Inverse Parameter Estimation”, leveraging meta-learning and forward-mode automatic differentiation for faster inference with sparse and noisy data. This is complemented by “PBPK-iPINNs: Inverse Physics-Informed Neural Networks for Physiologically Based Pharmacokinetic Brain Models” by Wickramasinghe, C. D. and Ahire, P.
from Wayne State University, which applies PINNs to accurately estimate parameters in drug-delivery models with minimal data.

PINN training itself has also seen significant advances. Sifan Wang and collaborators from Yale University and the University of Pennsylvania introduce a gradient alignment score in “Gradient Alignment in Physics-informed Neural Networks: A Second-Order Optimization Perspective”. They demonstrate that second-order optimizers like SOAP outperform first-order methods, achieving 2-10x accuracy improvements on challenging PDEs, including turbulent flows. Complementing this, Coen Visser et al. from Delft University of Technology present “PACMANN: Point Adaptive Collocation Method for Artificial Neural Networks”, an adaptive sampling method that dynamically moves collocation points based on residual gradients, yielding state-of-the-art accuracy/efficiency trade-offs for high-dimensional problems.

## Under the Hood: Models, Datasets, & Benchmarks

Recent research has not only introduced novel architectures but also advanced the tools and benchmarks for PINN development:

- THINNs (Thermodynamically Informed Neural Networks): A novel extension of PINNs that replaces ad-hoc L2 penalization with a rate functional from large-deviation principles, tested on the viscous Burgers’ and Navier-Stokes equations.
- SeqBattNet: A discrete-state PINN for battery modeling that incorporates aging adaptation, validated on the TRI, RT-Batt, and NASA benchmark datasets. Code available: https://aiware.website/.
- RAMS (Residual-based Adversarial-Gradient Moving Sample): An adaptive sampling strategy for PINNs and operator learning that uses adversarial gradients to maximize PDE residuals, tested across various SciML tasks.
- PACMANN (Point Adaptive Collocation Method for Artificial Neural Networks): An adaptive collocation method that moves points based on residual gradients, showing state-of-the-art performance on a variety of PDEs.
Code available: https://github.com/CoenVisser/PACMANN.
- HyPINO (Multi-Physics Neural Operators): Combines hypernetworks with mixed supervision for zero-shot generalization across parametric PDEs, evaluated on multiple benchmark problems.
- MasconCube: A self-supervised method for gravity inversion of celestial bodies using an explicit mass distribution, outperforming GeodesyNets and PINN-GM III. Code available: https://github.com/esa/masconCube.
- PBPK-iPINNs: An inverse PINN framework for physiologically based pharmacokinetic brain models, validated with real-world data from the Simcyp simulator. Code available: https://github.com/deepxde/deepxde.
- PinnDE: An open-source Python library leveraging PINNs and DeepONets for solving ODEs and PDEs, simplifying the workflow with adaptive point sampling. Code available: https://github.com/JB55Matthews/PinnDE.
- Spectral-Prior Guided Multistage PINNs (SI-MSPINNs & RFF-MSPINNs): Methods that reduce spectral bias and improve accuracy in PINNs using spectrum-informed multistage networks and random Fourier features, applied to the Burgers and Helmholtz equations. Code available: https://github.com/liyuzhen0201/MPINN.
- D3PINNs: Integrates PINNs with domain decomposition and numerical methods to solve time-dependent PDEs by dynamically converting them to ODEs.
- Mask-PINNs: A learnable mask function that regulates internal feature distributions in PINNs, improving accuracy and stability across PDE benchmarks. Code will be released.
- PhyRMDM: A physics-informed diffusion model for sparse radio-map reconstruction using a dual U-Net architecture. Code available.
- PIR (Physics-Informed Regression): A hybrid parameter-estimation method leveraging regularized ordinary least squares, compared against PINNs on ODE and PDE models, including public Danish pandemic data.
Code available: https://github.com/MEGAjosni/Physics-Informed-Regression and https://github.com/MarcusGalea/PhysicsInformedRegression.jl.
- EEMS-PINNs (Energy-Equidistributed Moving Sampling PINNs): A framework for conservative PDEs integrating energy-based adaptive mesh optimization. Code available: https://github.com/sufe-Ran-Zhang/EMMPDE.
- Neural Spline Operators (NeSO): A framework for risk quantification in stochastic systems, with an open-source implementation: https://github.com/jacobwang925/NeSO.

## Impact & The Road Ahead

These advancements herald a new era for scientific computing, moving beyond traditional numerical methods to leverage the power of deep learning with physical fidelity. Improved robustness to noise, better accuracy in discontinuous systems, and significant computational efficiencies are critical for real-world applications. Imagine more precise drug-delivery models for brain cancer with PBPK-iPINNs, better battery-lifetime predictions with SeqBattNet, or more stable and interpretable simulations of turbulent flows driven by second-order optimizers.

The push towards hybrid approaches, combining PINNs with adaptive sampling, numerical solvers, and advanced optimization techniques, suggests a future where AI-driven scientific discovery is both faster and more trustworthy. The development of robust uncertainty quantification methods, as seen in the conformal prediction framework by Yifan Yu and colleagues from the National University of Singapore in “A Conformal Prediction Framework for Uncertainty Quantification in Physics-Informed Neural Networks”, is crucial for deploying PINNs in safety-critical domains. Furthermore, the burgeoning field of LLMs as SciML agents, explored in “SciML Agents: Write the Solver, Not the Solution” by Saarth Gaonkar et al. from UC Berkeley, promises to democratize complex scientific programming.
While studies such as “Limitations of Physics-Informed Neural Networks: a Study on Smart Grid Surrogation” (https://arxiv.org/pdf/2508.21559) show that PINNs still fall short in complex real-world systems, the rapid pace of innovation suggests these limitations are being actively addressed. The journey to fully realize the potential of PINNs is ongoing, promising to unlock new capabilities in modeling, simulation, and discovery across the sciences. The future of physics-informed AI is an exciting one, with these papers laying the foundations for a more robust, accurate, and intelligent scientific landscape.
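As a closing illustration of one recurring theme above — residual-driven adaptive sampling, as in RAMS and PACMANN — here is a toy NumPy sketch. It is heavily simplified and entirely hypothetical (the residual function is a stand-in, not any trained PINN's, and the update is plain gradient ascent rather than the papers' exact algorithms):

```python
import numpy as np

# Toy sketch of residual-gradient adaptive sampling: move collocation
# points uphill on the squared-residual surface so they concentrate
# where the PDE error is large.
def residual(x):
    # hypothetical stand-in for a PINN's PDE residual magnitude
    return np.sin(5 * x) ** 2

def grad_residual(x, h=1e-4):
    # central finite difference of the residual surface
    return (residual(x + h) - residual(x - h)) / (2 * h)

rng = np.random.default_rng(0)
pts0 = rng.uniform(0.0, 1.0, size=200)   # initial uniform collocation points
pts = pts0.copy()
for _ in range(50):                      # gradient-ascent point movement
    pts = np.clip(pts + 0.01 * grad_residual(pts), 0.0, 1.0)

mean_before = residual(pts0).mean()      # avg residual at uniform points
mean_after = residual(pts).mean()        # avg residual after adaptation
```

After a few dozen steps the points cluster near the residual's peaks, so subsequent training spends its budget where the error is largest; in a real PINN loop the residual surface (and hence the gradient field) is recomputed as the network trains.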


The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.
