
Commit 32ff5de

ndem0 and benv123 committed
tutorial validation (#185)
Co-authored-by: Ben Volokh <89551265+benv123@users.noreply.github.com>
1 parent 2e2fe93 commit 32ff5de

38 files changed

Lines changed: 1066 additions & 1000 deletions

docs/source/_rst/tutorial1/tutorial.rst

Lines changed: 139 additions & 204 deletions
Large diffs are not rendered by default.

docs/source/_rst/tutorial2/tutorial.rst

Lines changed: 53 additions & 62 deletions
@@ -8,12 +8,18 @@ This tutorial presents how to solve with Physics-Informed Neural
 Networks a 2D Poisson problem with Dirichlet boundary conditions. Using
 extrafeatures.
 
-The problem is written as: :raw-latex:`\begin{equation}
-\begin{cases}
-\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
-u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-\end{cases}
-\end{equation}` where :math:`D` is a square domain :math:`[0,1]^2`, and
+The problem is written as:
+
+.. raw:: latex
+
+   \begin{equation}
+   \begin{cases}
+   \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
+   u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+   \end{cases}
+   \end{equation}
+
+where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
 square.
 
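(Editorial note, not part of the diff: the tutorial later compares the network against the exact solution, *truth_solution*. For this forcing term and zero Dirichlet data, the closed-form solution is u(x, y) = -sin(pi x) sin(pi y) / (2 pi^2). A minimal standalone sketch, independent of PINA, that verifies this with finite differences:)

```python
import math

# Closed-form solution of  Laplacian(u) = sin(pi x) sin(pi y)  on [0, 1]^2
# with u = 0 on the boundary:
def u_exact(x, y):
    return -math.sin(math.pi * x) * math.sin(math.pi * y) / (2 * math.pi**2)

def forcing(x, y):
    return math.sin(math.pi * x) * math.sin(math.pi * y)

def laplacian_fd(f, x, y, h=1e-4):
    # Second-order central differences for u_xx + u_yy
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

# The PDE residual vanishes (up to finite-difference error) at an interior
# point, and the Dirichlet condition holds on the boundary.
print(abs(laplacian_fd(u_exact, 0.3, 0.7) - forcing(0.3, 0.7)) < 1e-5)  # True
print(abs(u_exact(0.0, 0.5)))  # 0.0
```

Each second derivative of u contributes a factor pi^2 sin(pi x) sin(pi y) / (2 pi^2), so the two terms sum exactly to the forcing.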

@@ -37,8 +43,8 @@ First of all, some useful imports.
 
 Now, the Poisson problem is written in PINA code as a class. The
 equations are written as *conditions* that should be satisfied in the
-corresponding domains. *truth_solution* is the exact solution which will
-be compared with the predicted one.
+corresponding domains. *truth\_solution* is the exact solution which
+will be compared with the predicted one.
 
 .. code:: ipython3
 
@@ -107,12 +113,20 @@ of 0.006. These parameters can be modified as desired.
 
 .. parsed-literal::
 
-    GPU available: False, used: False
+    /u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
+      warnings.warn("Can't initialize NVML")
+    GPU available: True (cuda), used: True
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
-    /Users/dariocoscia/anaconda3/envs/pina/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py:67: UserWarning: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `lightning.pytorch` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
-      warning_cache.warn(
+    Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial2/lightning_logs
+    2023-10-17 10:09:18.208459: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
+    2023-10-17 10:09:18.235849: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+    To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
+    2023-10-17 10:09:20.462393: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+    /opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
+      warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
+    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
 
     | Name | Type | Params
     ----------------------------------------
@@ -125,21 +139,18 @@ of 0.006. These parameters can be modified as desired.
 0.001 Total estimated model params size (MB)
 
 
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 129.50it/s, v_num=45, mean_loss=0.00196, gamma1_loss=0.0093, gamma2_loss=0.000146, gamma3_loss=8.16e-5, gamma4_loss=0.000201, D_loss=8.44e-5]
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Training: 0it [00:00, ?it/s]
 
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 101.25it/s, v_num=45, mean_loss=0.00196, gamma1_loss=0.0093, gamma2_loss=0.000146, gamma3_loss=8.16e-5, gamma4_loss=0.000201, D_loss=8.44e-5]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
-Now the *Plotter* class is used to plot the results. The solution
+Now the ``Plotter`` class is used to plot the results. The solution
 predicted by the neural network is plotted on the left, the exact one is
 represented at the center and on the right the error between the exact
 and the predicted solutions is showed.
@@ -151,7 +162,7 @@ and the predicted solutions is showed.
 
 
 
-.. image:: tutorial_files/tutorial_11_0.png
+.. image:: output_11_0.png
 
 
 The problem solution with extra-features
@@ -162,9 +173,11 @@ is now defined, with an additional input variable, named extra-feature,
 which coincides with the forcing term in the Laplace equation. The set
 of input variables to the neural network is:
 
-:raw-latex:`\begin{equation}
-[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
-\end{equation}`
+.. raw:: latex
+
+   \begin{equation}
+   [x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
+   \end{equation}
 
 where :math:`x` and :math:`y` are the spatial coordinates and
 :math:`k(x, y)` is the added feature.
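(Editorial note, not part of the diff: the input enrichment described here — feeding the network [x, y, k(x, y)] instead of [x, y] — can be sketched without any framework. The helper names below, `k` and `augment`, are illustrative and not part of PINA's API:)

```python
import math

def k(x, y):
    # The extra feature coincides with the forcing term of the Poisson problem
    return math.sin(math.pi * x) * math.sin(math.pi * y)

def augment(point):
    # Map a 2D input [x, y] to the enriched 3D input [x, y, k(x, y)]
    x, y = point
    return [x, y, k(x, y)]

enriched = augment([0.5, 0.5])
print(len(enriched))  # 3 -- the network now sees three input variables
```

Because k vanishes on the boundary exactly like the solution, the network only has to learn a smooth correction of this feature.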
@@ -210,10 +223,11 @@ new extra feature.
 
 .. parsed-literal::
 
-    GPU available: False, used: False
+    GPU available: True (cuda), used: True
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
+    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
 
     | Name | Type | Params
     ----------------------------------------
@@ -226,18 +240,15 @@ new extra feature.
 0.001 Total estimated model params size (MB)
 
 
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 112.55it/s, v_num=46, mean_loss=2.73e-7, gamma1_loss=1.13e-6, gamma2_loss=7.1e-8, gamma3_loss=4.69e-8, gamma4_loss=6.81e-8, D_loss=4.65e-8]
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Training: 0it [00:00, ?it/s]
 
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 92.69it/s, v_num=46, mean_loss=2.73e-7, gamma1_loss=1.13e-6, gamma2_loss=7.1e-8, gamma3_loss=4.69e-8, gamma4_loss=6.81e-8, D_loss=4.65e-8]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 The predicted and exact solutions and the error between them are
@@ -251,7 +262,7 @@ of magnitudes in accuracy.
 
 
 
-.. image:: tutorial_files/tutorial_16_0.png
+.. image:: output_16_0.png
 
 
 The problem solution with learnable extra-features
@@ -263,9 +274,11 @@ Another way to exploit the extra features is the addition of learnable
 parameter inside them. In this way, the added parameters are learned
 during the training phase of the neural network. In this case, we use:
 
-:raw-latex:`\begin{equation}
-k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
-\end{equation}`
+.. raw:: latex
+
+   \begin{equation}
+   k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
+   \end{equation}
 
 where :math:`\alpha` and :math:`\beta` are the abovementioned
 parameters. Their implementation is quite trivial: by using the class
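(Editorial note, not part of the diff: a minimal stand-alone sketch of such a learnable feature in plain PyTorch — the class name and usage below are illustrative, not PINA's API. Registering alpha and beta as ``torch.nn.Parameter`` is enough for autograd to differentiate through them, exactly as the tutorial text remarks:)

```python
import torch

class LearnableSinFeature(torch.nn.Module):
    # k(x, y) = beta * sin(alpha x) * sin(alpha y), with alpha and beta trainable
    def __init__(self):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.tensor(1.0))
        self.beta = torch.nn.Parameter(torch.tensor(1.0))

    def forward(self, pts):  # pts: tensor of shape (N, 2) holding [x, y]
        x, y = pts[:, 0], pts[:, 1]
        return self.beta * torch.sin(self.alpha * x) * torch.sin(self.alpha * y)

feature = LearnableSinFeature()
pts = torch.rand(8, 2)
out = feature(pts)
out.sum().backward()  # autograd provides gradients w.r.t. alpha and beta
print(out.shape)                       # torch.Size([8])
print(feature.alpha.grad is not None)  # True
```

An optimizer stepping over ``feature.parameters()`` would then update alpha and beta jointly with the network weights.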
@@ -306,10 +319,11 @@ need, and they are managed by ``autograd`` module!
 
 .. parsed-literal::
 
-    GPU available: False, used: False
+    GPU available: True (cuda), used: True
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
+    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
 
     | Name | Type | Params
     ----------------------------------------
@@ -322,18 +336,15 @@ need, and they are managed by ``autograd`` module!
 0.001 Total estimated model params size (MB)
 
 
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 91.07it/s, v_num=47, mean_loss=2.11e-6, gamma1_loss=1.03e-5, gamma2_loss=4.17e-8, gamma3_loss=4.28e-8, gamma4_loss=5.65e-8, D_loss=6.21e-8]
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Training: 0it [00:00, ?it/s]
 
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 76.19it/s, v_num=47, mean_loss=2.11e-6, gamma1_loss=1.03e-5, gamma2_loss=4.17e-8, gamma3_loss=4.28e-8, gamma4_loss=5.65e-8, D_loss=6.21e-8]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 Umh, the final loss is not appreciabily better than previous model (with
@@ -365,10 +376,11 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
 
 .. parsed-literal::
 
-    GPU available: False, used: False
+    GPU available: True (cuda), used: True
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
+    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
 
     | Name | Type | Params
     ----------------------------------------
@@ -381,18 +393,15 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
 0.000 Total estimated model params size (MB)
 
 
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 149.45it/s, v_num=48, mean_loss=1.34e-16, gamma1_loss=6.66e-16, gamma2_loss=2.6e-18, gamma3_loss=4.84e-19, gamma4_loss=2.59e-18, D_loss=4.84e-19]
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Training: 0it [00:00, ?it/s]
 
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 117.81it/s, v_num=48, mean_loss=1.34e-16, gamma1_loss=6.66e-16, gamma2_loss=2.6e-18, gamma3_loss=4.84e-19, gamma4_loss=2.59e-18, D_loss=4.84e-19]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 In such a way, the model is able to reach a very high accuracy! Of
@@ -413,23 +422,5 @@ features.
 
 
 
-.. image:: tutorial_files/tutorial_23_0.png
-
-
-.. code:: ipython3
-
-    import matplotlib.pyplot as plt
-
-    plt.figure(figsize=(16, 6))
-    plotter.plot_loss(trainer, label='Standard')
-    plotter.plot_loss(trainer_feat, label='Static Features')
-    plotter.plot_loss(trainer_learn, label='Learnable Features')
-
-    plt.grid()
-    plt.legend()
-    plt.show()
-
-
-
-.. image:: tutorial_files/tutorial_24_0.png
+.. image:: output_23_0.png
 
Binary image assets changed: three output PNGs added (42.8 KB, 35.4 KB, 58.1 KB) and four removed (41.7 KB, 40.3 KB, 44.7 KB, 51.7 KB); binary files are not shown.

docs/source/_rst/tutorial3/tutorial.rst

Lines changed: 42 additions & 28 deletions
@@ -1,22 +1,24 @@
 Tutorial 3: resolution of wave equation with hard constraint PINNs.
 ===================================================================
 
-The problem solution
-~~~~~~~~~~~~~~~~~~~~
+The problem definition
+----------------------
 
 In this tutorial we present how to solve the wave equation using hard
 constraint PINNs. For doing so we will build a costum torch model and
 pass it to the ``PINN`` solver.
 
 The problem is written in the following form:
 
-:raw-latex:`\begin{equation}
-\begin{cases}
-\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
-u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
-u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-\end{cases}
-\end{equation}`
+.. raw:: latex
+
+   \begin{equation}
+   \begin{cases}
+   \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
+   u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
+   u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+   \end{cases}
+   \end{equation}
 
 where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
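(Editorial note, not part of the diff: a classical separable solution consistent with the conditions above — assuming, as the tutorial leaves implicit, zero initial velocity — is u(x, y, t) = sin(pi x) sin(pi y) cos(sqrt(2) pi t). A quick finite-difference sketch confirms it satisfies the wave equation:)

```python
import math

def u(x, y, t):
    # Separable solution: zero on the square's boundary and equal to
    # sin(pi x) sin(pi y) at t = 0
    return (math.sin(math.pi * x) * math.sin(math.pi * y)
            * math.cos(math.sqrt(2) * math.pi * t))

def dd(f, v, h=1e-4):
    # Central second difference along one variable
    return (f(v + h) - 2 * f(v) + f(v - h)) / h**2

x0, y0, t0 = 0.3, 0.6, 0.2
laplacian = dd(lambda s: u(s, y0, t0), x0) + dd(lambda s: u(x0, s, t0), y0)
u_tt = dd(lambda s: u(x0, y0, s), t0)
print(abs(laplacian - u_tt) < 1e-4)  # True: the wave residual vanishes
```

Both the spatial Laplacian and the second time derivative equal -2 pi^2 u, so the residual is identically zero.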
@@ -80,21 +82,24 @@ predicted one.
 
     problem = Wave()
 
+Hard Constraint Model
+---------------------
+
 After the problem, a **torch** model is needed to solve the PINN.
-Usually many models are already implemented in ``PINA``, but the user
-has the possibility to build his/her own model in ``pyTorch``. The hard
-constraint we impose are on the boundary of the spatial domain.
-Specificly our solution is written as:
+Usually, many models are already implemented in ``PINA``, but the user
+has the possibility to build his/her own model in ``PyTorch``. The hard
+constraint we impose is on the boundary of the spatial domain.
+Specifically, our solution is written as:
 
 .. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t),
 
 where :math:`NN` is the neural net output. This neural network takes as
 input the coordinates (in this case :math:`x`, :math:`y` and :math:`t`)
-and provides the unkwown field of the Wave problem. By construction it
-is zero on the boundaries. The residual of the equations are evaluated
-at several sampling points (which the user can manipulate using the
-method ``discretise_domain``) and the loss minimized by the neural
-network is the sum of the residuals.
+and provides the unknown field :math:`u`. By construction, it is zero on
+the boundaries. The residuals of the equations are evaluated at several
+sampling points (which the user can manipulate using the method
+``discretise_domain``) and the loss minimized by the neural network is
+the sum of the residuals.
 
 .. code:: ipython3
 
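(Editorial note, not part of the diff: the hard-constraint construction above can be illustrated without any framework — multiplying any network output by x y (1-x)(1-y) forces the result to be exactly zero on all four edges of the unit square. ``dummy_net`` below is a hypothetical stand-in for NN(x, y, t):)

```python
def hard_multiplier(x, y):
    # Vanishes whenever x or y equals 0 or 1, i.e. on the whole boundary
    return x * y * (1 - x) * (1 - y)

def u_pinn(x, y, t, net):
    # The network output is modulated so the Dirichlet BC holds by construction
    return hard_multiplier(x, y) * net(x, y, t)

dummy_net = lambda x, y, t: 1.0  # stand-in for the trained NN(x, y, t)
print(u_pinn(0.0, 0.7, 0.3, dummy_net))  # 0.0 -- exactly zero on the edge x = 0
print(u_pinn(0.5, 0.5, 0.3, dummy_net))  # 0.0625 -- unconstrained in the interior
```

This is why the boundary losses reported below are exactly zero: no training is needed to satisfy them.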
@@ -114,6 +119,9 @@ network is the sum of the residuals.
         hard = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
         return hard*self.layers(x)
 
+Train and Inference
+-------------------
+
 In this tutorial, the neural network is trained for 3000 epochs with a
 learning rate of 0.001 (default in ``PINN``). Training takes
 approximately 1 minute.
@@ -128,10 +136,20 @@ approximately 1 minute.
 
 .. parsed-literal::
 
-    GPU available: False, used: False
+    /u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
+      warnings.warn("Can't initialize NVML")
+    GPU available: True (cuda), used: True
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
+    Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial3/lightning_logs
+    2023-10-17 10:24:02.163746: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
+    2023-10-17 10:24:02.218849: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+    To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
+    2023-10-17 10:24:07.063047: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
+    /opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
+      warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
+    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
 
     | Name | Type | Params
     ----------------------------------------
@@ -144,18 +162,15 @@ approximately 1 minute.
 0.002 Total estimated model params size (MB)
 
 
-.. parsed-literal::
-
-    Epoch 2999: : 1it [00:00, 79.33it/s, v_num=5, mean_loss=0.00119, D_loss=0.00542, t0_loss=0.0017, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000]
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=3000` reached.
+    Training: 0it [00:00, ?it/s]
 
 
 .. parsed-literal::
 
-    Epoch 2999: : 1it [00:00, 68.62it/s, v_num=5, mean_loss=0.00119, D_loss=0.00542, t0_loss=0.0017, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000]
+    `Trainer.fit` stopped: `max_epochs=3000` reached.
 
 
 Notice that the loss on the boundaries of the spatial domain is exactly
@@ -177,14 +192,13 @@ results using the ``Plotter`` class of **PINA**.
 
 
 
-
-.. image:: tutorial_files/tutorial_12_0.png
+.. image:: output_14_0.png
 
 
 
-.. image:: tutorial_files/tutorial_12_1.png
+.. image:: output_14_1.png
 
 
 
-.. image:: tutorial_files/tutorial_12_2.png
+.. image:: output_14_2.png
 