ICLR 2026

Extending Fourier Neural Operators for Modeling Parameterized and Coupled PDEs

* Equal Contribution   † Corresponding Author: [email protected]
¹Arizona State University   ²Applied Materials
Uvini is seeking Summer 2026 internship opportunities (Uvini's CV).
Plasma simulation and Gray-Scott reaction-diffusion patterns

Left: 1D capacitively coupled plasma (CCP) — electron density and electric field dynamics.  Right: Gray–Scott reaction-diffusion — labyrinthine pattern formation.

Abstract

Parameterized and coupled partial differential equations (PDEs) are central to modeling phenomena in science and engineering, yet neural operator methods that address both aspects remain limited. We extend Fourier neural operators (FNOs) with minimal architectural modifications along two directions. For parameterized dynamics, we propose a hypernetwork-based modulation that conditions the operator on physical parameters. For coupled systems, we conduct a systematic exploration of architectural choices, examining how operator components can be adapted to balance shared structure with cross-variable interactions while retaining the efficiency of standard FNOs. Evaluations on benchmark PDEs, including the one-dimensional capacitively coupled plasma equations and the Gray–Scott system, show that our methods achieve 55–72% lower errors than strong baselines, demonstrating the effectiveness of principled modulation and systematic design exploration.

55–72%
Error reduction over baselines
hpFNOₓ
Best model on both benchmarks
2
Principled architectural extensions
1D CCP
New plasma benchmark dataset

Method

Two principled extensions to the standard Fourier Neural Operator (FNO) architecture.

1

hpFNO — Hypernetwork-based Parameter Modulation

For parameterized PDEs: conditioning the operator on physical parameters at every Fourier layer.

$$v_{\ell+1}(x) = \sigma\!\left( W v_\ell(x) + \bigl(\mathcal{K}(a;\varphi)\, v_\ell\bigr)(x) + \mathbf{s}_\ell(x, \mu) \right)$$

A compact hypernetwork $f_{\mathrm{hyper}}$ takes the physical parameters $\mu$ and position $x$ and produces layer-specific shift terms $\mathbf{s}_\ell(x, \mu)$. Unlike HyperFNO, which infers all core weights, hpFNO infers only the shift biases, adding parameter-dependent modulation with negligible overhead.

2

FNOₓ — Spectral Coupling for Multi-variable PDEs

For coupled systems: cross-variable interaction entirely in Fourier space.

$$\tilde{Z}' = f_{\mathrm{enc}}\!\left(\mathcal{F}v^\alpha,\, \mathcal{F}v^\beta\right), \qquad \tilde{Z}'' = R_\varphi \cdot \tilde{Z}', \qquad (\tilde{v}^\alpha, \tilde{v}^\beta) = f_{\mathrm{dec}}(\tilde{Z}'')$$

Variables $v^\alpha$ and $v^\beta$ are independently transformed to Fourier space, jointly encoded by $f_{\mathrm{enc}}$ into a shared latent, filtered by a single spectral kernel $R_\varphi$, then decoded by $f_{\mathrm{dec}}$ back into per-variable representations. This captures cross-variable interactions while preserving FNO's efficiency.
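The pipeline above can be sketched in NumPy for one spatial dimension. The linear forms chosen for $f_{\mathrm{enc}}$ and $f_{\mathrm{dec}}$, the random weights, and the mode count are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, modes = 64, 12                    # grid size and retained Fourier modes

v_a = rng.normal(size=N)             # coupled variable alpha on a 1D grid
v_b = rng.normal(size=N)             # coupled variable beta

# Independent Fourier transforms of each variable
Va, Vb = np.fft.rfft(v_a), np.fft.rfft(v_b)
Z = np.stack([Va, Vb])               # (2, N//2 + 1)

# f_enc: toy joint encoder, a 2x2 complex mix applied at every mode
E = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Z_enc = E @ Z                        # shared latent Z'

# Single shared spectral kernel R_phi acting on the retained low modes
R = rng.normal(size=(2, modes)) + 1j * rng.normal(size=(2, modes))
Z_filt = np.zeros_like(Z_enc)
Z_filt[:, :modes] = R * Z_enc[:, :modes]   # Z'' = R_phi . Z'

# f_dec: toy linear decoder back to per-variable spectra
D = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Z_dec = D @ Z_filt
out_a = np.fft.irfft(Z_dec[0], n=N)
out_b = np.fft.irfft(Z_dec[1], n=N)
print(out_a.shape, out_b.shape)      # (64,) (64,)
```

Even in this toy form, the output for $\alpha$ depends on the input for $\beta$ through the shared latent, while only one spectral kernel is stored, which is the efficiency argument in the text.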

Combined FNOx + hpFNO architecture diagram

Combined architecture: two coupled variables ($\alpha$, $\beta$) pass through shared spectral coupling ($f_{\mathrm{enc}} \to R_\varphi \to f_{\mathrm{dec}}$) while a hypernetwork injects parameter-conditioned shifts $s(\mu, x)$ at every layer.

Results

Models are evaluated on two benchmark PDEs. Performance is measured in normalized RMSE (nRMSE; lower is better). All experiments are repeated 5× with different random seeds.
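For concreteness, one common convention for nRMSE in neural-operator benchmarks is the relative L2 error, sketched below; the paper's exact normalization may differ:

```python
import numpy as np

def nrmse(pred, true):
    """Relative L2 error ||pred - true||_2 / ||true||_2, a common
    nRMSE convention for neural-operator benchmarks (assumed here)."""
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

true = np.array([1.0, 2.0, 3.0])
pred = np.array([1.1, 1.9, 3.2])
print(round(nrmse(pred, true), 4))  # 0.0655
```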

1D Capacitively Coupled Plasma (CCP)

A novel benchmark introduced in this work. Models are tested on three physical parameters independently: reaction rate $R_0$, driving voltage $V_0$, and ion mass $m_i$. Results are reported as mean ± std.

| Model | $R_0$ | $V_0$ | $m_i$ |
|---|---|---|---|
| FNOm | 0.0403 | 0.0791 | 0.0363 |
| FNOc | 0.0375 | 0.0873 | 0.0299 |
| CFNO | 0.0315 | 0.0428 | 0.0333 |
| HyperFNOc | 0.0278 | 0.0355 | 0.0253 |
| MWTc | 0.0409 | 0.0639 | 0.0403 |
| CMWNO | 0.0312 | 0.0526 | 0.0241 |
| DONc | 0.0844 | 0.2147 | 0.1035 |
| U-NETc | 0.1084 | 0.0844 | 0.1719 |
| FNOx | 0.0193 | 0.0345 | 0.0212 |
| pFNOx | 0.0194 | 0.0278 | 0.0142 |
| hpFNOx ★ | 0.0154 | 0.0192 | 0.0128 |

nRMSE mean (±std omitted for space). Source: Table 1, Jing et al., ICLR 2026.

Governing Equations

$$\partial_t n_e = -\partial_x \Gamma_e + R$$
$$\partial_{xx} \phi = -\frac{e}{\epsilon_0}\,(n_e - n_{i0})$$

Parameters: $V_0 \in [100, 300]$, $R_0 \in [2.7{\times}10^{19},\, 2.7{\times}10^{20}]$, $m_i \in [1.67{\times}10^{-26},\, 6.68{\times}10^{-26}]$. 100 trajectories, 9:1 train/test split.

hpFNOₓ achieves best nRMSE on all three parameters

e.g. $V_0$: 0.0192 vs. 0.0428 for the best baseline (CFNO), a 55% lower error
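The headline reduction follows directly from the table numbers:

```python
# Relative error reduction implied by the V_0 column of Table 1
best_ours, best_baseline = 0.0192, 0.0428
reduction = (best_baseline - best_ours) / best_baseline
print(f"{reduction:.0%}")  # 55%
```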

Robustness to Input History Length $T_{\mathrm{in}}$

Performance on the reaction-rate case as the number of input time steps decreases. Parameterized variants (hp) remain stable even with minimal history ($T_{\mathrm{in}} = 1$), while non-parameterized models diverge (†).

| Model | $T_{\mathrm{in}}=10$ | $T_{\mathrm{in}}=5$ | $T_{\mathrm{in}}=2$ | $T_{\mathrm{in}}=1$ |
|---|---|---|---|---|
| FNOc | 0.0375 ±0.0055 | 0.1324 ±0.0304 | 1.0048† | 1.4484† |
| pFNOc | 0.0334 ±0.0101 | 0.0923 ±0.0276 | 0.2151‡ ±0.0374 | 0.3757‡ ±0.0679 |
| hpFNOc | 0.0196 ±0.0021 | 0.0804 ±0.0237 | 0.1515‡ ±0.0070 | 0.1609‡ ±0.0043 |
| FNOx | 0.0193 ±0.0059 | 0.0406 ±0.0093 | 1.2832† | 1.8143† |
| pFNOx | 0.0194 ±0.0075 | 0.0464 ±0.0158 | 0.1640‡ ±0.0155 | 0.2522‡ ±0.1376 |
| hpFNOx ★ | 0.0154 ±0.0029 | 0.0317 ±0.0022 | 0.1324‡ ±0.0242 | 0.1372‡ ±0.0100 |

† Model fails to learn the dynamics. ‡ Averaged over the best 5 of 10 runs due to training instability. Source: Table 2, Jing et al., ICLR 2026.

Efficiency Analysis

hpFNOₓ achieves the best accuracy at a model size and training time comparable to the standard FNO, a Pareto-optimal solution. Results are from the CCP varying-driving-voltage ($V_0$) scenario. Source: Figure 4, Jing et al., ICLR 2026.

[Figure 4: scatter plots. Legend: FNO Baselines, HyperFNO, Wavelet-based, Other NOs, Ours.]

(a) Model Size vs. Accuracy: our methods cluster at low parameter count with the best accuracy.

(b) Training Time vs. Accuracy: our methods achieve the best accuracy without sacrificing training speed.

Citation

If you find this work useful, please cite:

@inproceedings{jingextending,
  title={Extending Fourier Neural Operators for Modeling
         Parameterized and Coupled PDEs},
  author={Jing, Cheng and Mudiyanselage, Uvini Balasuriya
          and Verma, Abhishek and Bera, Kallol
          and Rauf, Shahid and Lee, Kookjin},
  booktitle={The Fourteenth International Conference
             on Learning Representations},
  year={2026}
}