Physics-Informed Neural Networks: Unlocking Next-Gen Scientific Discovery
Latest 50 papers on physics-informed neural networks: Nov. 23, 2025
Physics-Informed Neural Networks (PINNs) are revolutionizing how we approach scientific computing, blending the power of deep learning with the fundamental laws of physics. They promise to solve complex partial differential equations (PDEs), predict system dynamics, and enable real-time optimization across diverse fields, from medicine to manufacturing. Recent research continues to push the boundaries of what PINNs can achieve, tackling long-standing challenges like accuracy, efficiency, and robustness. This post dives into the latest breakthroughs from a collection of cutting-edge papers, revealing how these innovations are shaping the future of AI-driven scientific discovery.
The Big Idea(s) & Core Innovations
One of the most exciting trends is the focus on enhancing PINN accuracy and stability, particularly for complex and high-dimensional problems. For instance, “Enforcing hidden physics in physics-informed neural networks” by Chen et al. (Tongji University, Yale University, University of Oxford) introduces an irreversibility-regularized approach that drastically improves accuracy by incorporating fundamental physical principles, like the Second Law of Thermodynamics, directly into the training process. This moves beyond simply fitting data to enforcing underlying physical consistency.
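The mechanics of such a regularizer are simple to sketch: alongside the usual PDE-residual loss, one penalizes collocation points where a thermodynamic quantity evolves the wrong way. The sketch below is a minimal illustration of the idea, not the authors' implementation; the function names and the choice of entropy production as the penalized quantity are assumptions for illustration.

```python
import numpy as np

def irreversibility_penalty(ds_dt):
    """Penalize collocation points where the predicted entropy production
    rate is negative, i.e. where the Second Law would be violated.
    (Illustrative sketch, not the paper's exact formulation.)"""
    violation = np.maximum(-ds_dt, 0.0)  # positive only where dS/dt < 0
    return float(np.mean(violation ** 2))

def total_loss(pde_residual, ds_dt, lam=1.0):
    """Standard PINN residual loss plus the irreversibility regularizer,
    weighted by a hyperparameter lam."""
    return float(np.mean(pde_residual ** 2)) + lam * irreversibility_penalty(ds_dt)
```

A physically consistent prediction (entropy production everywhere non-negative) incurs zero extra penalty, so the regularizer only steers training when the hidden physics is violated.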
Another significant thrust involves tackling the “curse of dimensionality” and spectral bias, the well-documented tendency of neural networks to fit low-frequency components first, which leaves PINNs struggling with high-frequency solutions and complex geometries. “FG-PINNs: A neural network method for solving nonhomogeneous PDEs with high frequency components” by J. Zheng et al. (Xiangtan University) proposes a dual subnetwork architecture to handle high- and low-frequency components separately, demonstrating superior convergence. Similarly, “Iterative Training of Physics-Informed Neural Networks with Fourier-enhanced Features” by Wu et al. (KTH Royal Institute of Technology) introduces IFeF-PINN, which uses Random Fourier Features to explicitly inject high-frequency information, significantly mitigating spectral bias. The theoretical underpinnings are further explored in “What Can One Expect When Solving PDEs Using Shallow Neural Networks?” by He et al. (City University of Hong Kong, Duke University), which analyzes the impact of activation functions on frequency bias in shallow networks.
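Random Fourier Features are a standard recipe for injecting high-frequency content at the input layer: project the coordinates through a random Gaussian matrix and pass the result through sines and cosines before the MLP. A minimal sketch (IFeF-PINN's actual scheme is iterative and two-stage; the parameter names here are illustrative):

```python
import numpy as np

def random_fourier_features(x, num_features=64, sigma=10.0, seed=0):
    """Map inputs x of shape (n, d) to [cos(xB), sin(xB)] with
    B ~ N(0, sigma^2). Larger sigma emphasizes higher frequencies,
    counteracting the spectral bias of a plain MLP input layer."""
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, sigma, size=(x.shape[1], num_features))
    proj = x @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)
```

The bandwidth sigma effectively sets which frequencies the downstream network can resolve easily, which is why such features help on high-frequency PDE solutions.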
Domain decomposition is also emerging as a powerful strategy to scale PINNs for complex problems. In “Neural network-driven domain decomposition for efficient solutions to the Helmholtz equation”, Dolean et al. (Eindhoven University of Technology, Inria) present FBPINNs (Finite Basis PINNs) combined with Perfectly Matched Layers (PML) for solving the Helmholtz equation more accurately, especially at high frequencies. This concept extends to other architectures with “Finite basis Kolmogorov-Arnold networks: domain decomposition for data-driven and physics-informed problems” by Howard et al. (Pacific Northwest National Laboratory), which uses partition-of-unity functions to combine smaller KAN models, improving accuracy in multiscale and noisy scenarios. Building on this, “The modified Physics-Informed Hybrid Parallel Kolmogorov–Arnold and Multilayer Perceptron Architecture with domain decomposition” by Huang et al. (Beijing University of Technology) introduces HPKM-PINN, a hybrid KAN-MLP architecture with overlapping domain decomposition for high-frequency and multiscale PDEs, demonstrating improved efficiency.
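The common thread in FBPINNs and the KAN-based variants is a partition of unity: each subdomain gets its own small network, and smooth window functions blend the local predictions into one global solution. A minimal 1D sketch of that blending step, under the assumption of compactly supported bump windows (the window shape and function names are illustrative, not taken from the papers):

```python
import numpy as np

def window(x, center, width):
    """Smooth, compactly supported bump function; one per subdomain."""
    z = (x - center) / width
    inside = np.abs(z) < 1
    return np.exp(-1.0 / np.maximum(1e-12, 1.0 - z ** 2)) * inside

def pou_combine(x, centers, width, local_models):
    """Partition-of-unity blend: u(x) = sum_i w_i(x) u_i(x) / sum_i w_i(x),
    so overlapping local networks agree smoothly across subdomain seams."""
    ws = np.stack([window(x, c, width) for c in centers])   # (n_sub, n)
    us = np.stack([m(x) for m in local_models])             # (n_sub, n)
    return (ws * us).sum(axis=0) / np.maximum(ws.sum(axis=0), 1e-12)
```

Because the windows are normalized pointwise, any function that all local models represent exactly (e.g. a constant) is reproduced exactly by the blend.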
Beyond accuracy and scalability, efficiency and uncertainty quantification are crucial. “Convergence and Sketching-Based Efficient Computation of Neural Tangent Kernel Weights in Physics-Based Loss” by Hirsch and Pichi (University of California, Berkeley, SISSA) shows how adaptive Neural Tangent Kernel (NTK) weights can converge and proposes a randomized sketching algorithm for efficient computation. For uncertainty, “E-PINNs: Epistemic Physics-Informed Neural Networks” by Jacob et al. (Pacific Northwest National Laboratory, University of Notre Dame) introduces E-PINNs, an efficient framework for quantifying epistemic uncertainty at a significantly lower computational cost than traditional Bayesian methods, making them more practical for real-world applications. The Physics-Informed Log Evidence (PILE) score by Daniels et al. (MIT, University of Melbourne, University of California at Berkeley) provides an uncertainty-aware metric for hyperparameter selection and model diagnostics, even in data-free scenarios.
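For intuition on NTK-based loss weighting: a widely used scheme assigns each loss term a weight proportional to the total NTK trace divided by that term's own trace, so all terms train at comparable rates; the cost of forming those traces is exactly what sketching methods like the one by Hirsch and Pichi aim to reduce. A minimal sketch of the weighting rule itself (assuming the traces are already estimated):

```python
import numpy as np

def ntk_loss_weights(traces):
    """Given per-term NTK traces tr(K_i), return weights
    lambda_i = sum_j tr(K_j) / tr(K_i), which equalize the effective
    learning rates of the loss terms. Estimating tr(K_i) cheaply
    (e.g. via randomized sketching) is the hard part in practice."""
    traces = np.asarray(traces, dtype=float)
    return traces.sum() / traces
```

A term whose kernel trace is small (slow to train) receives a proportionally larger weight, which is the balancing behavior the convergence analysis studies.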
Several papers also delve into novel applications and specific problem types. “A Physics Informed Machine Learning Framework for Optimal Sensor Placement and Parameter Estimation” by Venianakis et al. (National Technical University of Athens, University of Manchester) innovates with a PINN framework integrating D-optimal sensor placement for improved parameter estimation. For fast control of robots, “Generalizable and Fast Surrogates: Model Predictive Control of Articulated Soft Robots using Physics-Informed Neural Networks” by Author A et al. highlights the power of PINN surrogates. In medical imaging, “PINGS-X: Physics-Informed Normalized Gaussian Splatting with Axes Alignment for Efficient Super-Resolution of 4D Flow MRI” by Jo et al. (Hanyang University, Nanyang Technological University) uses Normalized Gaussian Splatting for efficient super-resolution of 4D flow MRI, bridging explicit representations with physics-informed learning. Similarly, “Computed Tomography (CT)-derived Cardiovascular Flow Estimation Using Physics-Informed Neural Networks Improves with Sinogram-based Training: A Simulation Study” by Guo et al. (University of California San Diego) introduces SinoFlow, a sinogram-based PINN framework that bypasses image reconstruction errors for more accurate cardiovascular flow estimation.
Under the Hood: Models, Datasets, & Benchmarks
The recent surge in PINN research has introduced and refined several key models and methodologies:
- FG-PINNs (J. Zheng et al., [https://arxiv.org/pdf/2511.12055]): A dual network architecture for nonhomogeneous PDEs, specifically addressing high-frequency components by separating their learning from low-frequency ones. It uses frequency-guided training to leverage source terms and boundary conditions.
- FBPINNs (Finite Basis PINNs) (Dolean et al., [https://arxiv.org/pdf/2511.15445]): An extension of PINNs for Helmholtz equations, incorporating domain decomposition and Perfectly Matched Layers (PML) for enhanced accuracy. Benefits from Energy Natural Gradient Descent (ENGD) optimization.
- HPKM-PINN (Hybrid Parallel Kolmogorov–Arnold Network and Multilayer Perceptron PINN) (Huang et al., [https://arxiv.org/pdf/2511.11228]): A hybrid KAN and MLP architecture combined with overlapping domain decomposition and trainable weighting parameters to handle high-frequency and multiscale PDEs efficiently.
- IFeF-PINN (Iterative Fourier-enhanced Features PINN) (Wu et al., [https://arxiv.org/pdf/2510.19399]): Mitigates spectral bias using Random Fourier Features in an iterative two-stage training algorithm, improving high-frequency PDE approximation.
- E-PINNs (Epistemic PINNs) (Jacob et al., [https://arxiv.org/pdf/2503.19333]): Integrates a small ‘epinet’ into PINNs for efficient epistemic uncertainty quantification, providing calibrated estimates with low computational overhead.
- PINN-ACS (PINN Alternating Convex Search) (Banderwaar & Gupta, [https://arxiv.org/pdf/2511.00792]): Reformulates differential eigenvalue problems as biconvex optimization, using alternating convex search for up to 500x speedups. Code available at https://github.com/NeurIPS-ML4PS-2025/PINN_ACS_CODES.
- PINGS-X (Physics-Informed Normalized Gaussian Splatting with Axes Alignment) (Jo et al., [https://arxiv.org/pdf/2511.11048]): Utilizes normalized Gaussian splatting and axes-aligned representations for efficient super-resolution of 4D flow MRI data, with code at https://github.com/SpatialAILab/PINGS-X.
- SinoFlow (Guo et al., [https://arxiv.org/pdf/2511.03876v1]): A PINN framework for CT-derived cardiovascular flow estimation, trained directly on sinograms to avoid image reconstruction errors.
- PINN-Proj (Baez et al., [https://arxiv.org/pdf/2511.09048]): A projection method that guarantees conservation of integral quantities (linear and quadratic) in PINNs by enforcing hard constraints via constrained non-linear optimization. Code at https://github.com/antbaez9/pinn-proj.
- HEATNETs (Georgiou et al., [https://arxiv.org/pdf/2511.00886]): Explainable random feature neural networks for high-dimensional parabolic PDEs, leveraging randomized heat-kernels for accuracy up to 2000 dimensions.
- SSTODE (Sea Surface Temperature Neural ODE) (Jiang et al., [https://arxiv.org/pdf/2511.05629]): A physics-informed Neural ODE framework for SST prediction that models coupled advection-diffusion processes and incorporates surface heat fluxes via an Energy Exchanges Integrator (EEI). Code at https://github.com/nicezheng/SSTODE-code.
- XPINN (Extended PINN) (Rehman & Yousuf, [https://arxiv.org/pdf/2511.13734]): For hyperbolic PDEs like the Buckley-Leverett equation, it dynamically partitions the computational domain into sub-networks and uses Rankine-Hugoniot jump conditions for coupling. Code at https://github.com/saifkhanengr/XPINN-for-Buckley-Leverett.
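Of the methods above, the conservation-enforcing projection is especially easy to illustrate: for a linear invariant, the closest (in L2) correction that restores the conserved quantity has a closed form. A minimal sketch under that linear-constraint assumption (PINN-Proj handles quadratic invariants via constrained nonlinear optimization, which has no such closed form; names and quadrature weights here are illustrative):

```python
import numpy as np

def project_linear_conservation(u, w, target):
    """Minimally adjust nodal values u (in the L2 sense) so that the
    quadrature sum(w * u) equals the conserved target value.
    Solves: min ||u' - u||^2  subject to  w . u' = target,
    whose Lagrangian solution is u' = u + defect * w / (w . w)."""
    defect = target - np.dot(w, u)
    return u + defect * w / np.dot(w, w)
```

Applied after each forward pass, such a projection makes the conservation law a hard constraint rather than a soft penalty term in the loss.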
Many of these advancements also highlight the importance of adaptive sampling and weighting strategies, as seen in “Self-adaptive weighting and sampling for physics-informed neural networks” by Chen et al. (Pacific Northwest National Laboratory) and “Efficient Global-Local Fusion Sampling for Physics-Informed Neural Networks” by Luo et al. (Soochow University, Duke Kunshan University). The latter’s Global–Local Fusion (GLF) combines residual-adaptive sampling with lightweight approximations for superior accuracy and efficiency.
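The core move in residual-adaptive sampling is to concentrate collocation points where the PDE residual is currently largest. A minimal sketch of that resampling step (the sampling distribution and function names are illustrative; GLF additionally fuses this with a global, lightweight approximation):

```python
import numpy as np

def residual_adaptive_resample(candidates, residuals, n_keep, seed=0):
    """Draw n_keep collocation points from a candidate pool, with
    probability proportional to the squared PDE residual, so that
    training effort concentrates where the current error is largest."""
    rng = np.random.default_rng(seed)
    p = residuals ** 2
    p = p / p.sum()
    idx = rng.choice(len(candidates), size=n_keep, replace=False, p=p)
    return candidates[idx]
```

Alternating this resampling with a few optimizer steps is the usual adaptive-sampling loop; the weighting schemes discussed above play the analogous role on the loss side.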
Impact & The Road Ahead
These advancements in PINNs are poised to profoundly impact various sectors. In semiconductor manufacturing, “Physics-Constrained Adaptive Neural Networks Enable Real-Time Semiconductor Manufacturing Optimization with Minimal Training Data” by Uerman et al. (NeuroTechNet S.A.S.) shows how physics-constrained adaptive learning can achieve sub-nanometer precision with 90% fewer training samples, paving the way for sustainable and efficient production. Medical imaging is seeing breakthroughs with efficient 4D flow MRI super-resolution (PINGS-X) and improved cardiovascular flow estimation (SinoFlow), offering non-invasive diagnostic tools. In energy systems, real-time gas crossover prediction in PEM electrolyzers using PINNs (Kim et al., Jeju National University, https://arxiv.org/pdf/2511.05879) promises safer and more efficient green hydrogen production.
Looking forward, the integration of Lie group symmetries with PINNs, as explored by Jiao and Xiong (Tsinghua University, Beijing Institute of Mathematical Sciences and Applications, https://arxiv.org/pdf/2407.20155), and by Klausen et al. with LieSolver (Fraunhofer Heinrich Hertz Institute, Technische Universität Berlin), offers a powerful paradigm for embedding exact physical symmetries, leading to more robust, interpretable, and computationally efficient solvers. The focus on structure-preserving PINNs for enforcing conservation laws, exemplified by Obiekev and Oguadime’s work on the KdV equation (Oregon State University, https://arxiv.org/pdf/2511.00418), will ensure long-term stability and physical fidelity in complex simulations. The development of differentiable spiking neurons like QIF (Wan et al., Brown University, Pacific Northwest National Laboratory, https://arxiv.org/pdf/2511.06614) also hints at more biologically plausible and stable scientific machine learning models.
While challenges remain, especially with the “curse of dimensionality” in truly high-dimensional scenarios (Salvaire et al., Université de Lorraine, https://arxiv.org/pdf/2511.08561), the continuous innovation in domain decomposition, adaptive methods, and novel architectures such as Neural Operators for cardiac electrophysiology (Lydon et al., King’s College London, https://arxiv.org/pdf/2511.08418) and for power systems (Radiakos et al., MIT, https://arxiv.org/pdf/2511.05216) demonstrates a vibrant and rapidly advancing field. The journey towards highly accurate, efficient, and reliable physics-informed AI is well underway, promising to unlock new frontiers in scientific understanding and technological application.
Discover more from SciPapermill