Commit 0d38de5

Dario Coscia authored and ndem0 committed

update plotter

1 parent 934ae40 commit 0d38de5

21 files changed: 171 additions & 165 deletions
docs/source/_rst/tutorials/tutorial1/tutorial.rst

Lines changed: 32 additions & 32 deletions
@@ -28,8 +28,8 @@ Build a PINA problem
 Problem definition in the **PINA** framework is done by building a
 python ``class``, which inherits from one or more problem classes
 (``SpatialProblem``, ``TimeDependentProblem``, ``ParametricProblem``, …)
-depending on the nature of the problem. Below is an example. Consider the following
-simple Ordinary Differential Equation:
+depending on the nature of the problem. Below is an example: ### Simple
+Ordinary Differential Equation Consider the following:

 .. math::

@@ -49,7 +49,7 @@ our ``Problem`` class is going to be inherited from the
 .. code:: python

     from pina.problem import SpatialProblem
-    from pina import CartesianProblem
+    from pina.geometry import CartesianProblem

     class SimpleODE(SpatialProblem):

@@ -73,7 +73,7 @@ What about if our equation is also time dependent? In this case, our
 .. code:: ipython3

     from pina.problem import SpatialProblem, TimeDependentProblem
-    from pina import CartesianDomain
+    from pina.geometry import CartesianDomain

     class TimeSpaceODE(SpatialProblem, TimeDependentProblem):

@@ -215,26 +215,26 @@ calling the attribute ``input_pts`` of the problem

 .. parsed-literal::

-    Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.8633]],
-                   [[0.4009]],
-                   [[0.6489]],
-                   [[0.9278]],
-                   [[0.3975]],
-                   [[0.1484]],
-                   [[0.9632]],
-                   [[0.5485]],
-                   [[0.2984]],
-                   [[0.5643]],
-                   [[0.0368]],
-                   [[0.7847]],
-                   [[0.4741]],
-                   [[0.6957]],
-                   [[0.3281]],
-                   [[0.0958]],
-                   [[0.1847]],
-                   [[0.2232]],
-                   [[0.8099]],
-                   [[0.7304]]])}
+    Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.7644]],
+                   [[0.2028]],
+                   [[0.1789]],
+                   [[0.4294]],
+                   [[0.3239]],
+                   [[0.6531]],
+                   [[0.1406]],
+                   [[0.6062]],
+                   [[0.4969]],
+                   [[0.7429]],
+                   [[0.8681]],
+                   [[0.3800]],
+                   [[0.5357]],
+                   [[0.0152]],
+                   [[0.9679]],
+                   [[0.8101]],
+                   [[0.0662]],
+                   [[0.9095]],
+                   [[0.2503]],
+                   [[0.5580]]])}
     Input points labels: ['x']

@@ -271,7 +271,8 @@ If you want to track the metric by yourself without a logger, use

 .. code:: ipython3

-    from pina import PINN, Trainer
+    from pina import Trainer
+    from pina.solvers import PINN
     from pina.model import FeedForward
     from pina.callbacks import MetricTracker

@@ -300,12 +301,11 @@ If you want to track the metric by yourself without a logger, use
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
-    Missing logger folder: /Users/dariocoscia/Desktop/PINA/tutorials/tutorial1/lightning_logs


 .. parsed-literal::

-    Epoch 1499: : 1it [00:00, 316.24it/s, v_num=0, mean_loss=5.39e-5, x0_loss=1.26e-6, D_loss=0.000106]
+    Epoch 1499: : 1it [00:00, 272.55it/s, v_num=3, x0_loss=7.71e-6, D_loss=0.000734, mean_loss=0.000371]

 .. parsed-literal::

@@ -314,7 +314,7 @@ If you want to track the metric by yourself without a logger, use

 .. parsed-literal::

-    Epoch 1499: : 1it [00:00, 166.89it/s, v_num=0, mean_loss=5.39e-5, x0_loss=1.26e-6, D_loss=0.000106]
+    Epoch 1499: : 1it [00:00, 167.14it/s, v_num=3, x0_loss=7.71e-6, D_loss=0.000734, mean_loss=0.000371]


 After the training we can inspect trainer logged metrics (by default

@@ -332,9 +332,9 @@ loss can be accessed by ``trainer.logged_metrics``

 .. parsed-literal::

-    {'mean_loss': tensor(5.3852e-05),
-     'x0_loss': tensor(1.2636e-06),
-     'D_loss': tensor(0.0001)}
+    {'x0_loss': tensor(7.7149e-06),
+     'D_loss': tensor(0.0007),
+     'mean_loss': tensor(0.0004)}

@@ -362,7 +362,7 @@ indistinguishable. We can also plot easily the loss:

 .. code:: ipython3

-    pl.plot_loss(trainer=trainer, label = 'mean_loss', logy=True)
+    pl.plot_loss(trainer=trainer, label = 'mean_loss', logy=True)
3 binary files changed (-145 Bytes, 1.09 KB, -960 Bytes)

docs/source/_rst/tutorials/tutorial2/tutorial.rst

Lines changed: 20 additions & 26 deletions
@@ -31,16 +31,12 @@ The problem definition
 ----------------------

 The two-dimensional Poisson problem is mathematically written as:
-
-.. math::
-    \begin{equation}
-    \begin{cases}
-    \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
-    u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-    \end{cases}
-    \end{equation}
-
-where :math:`D` is a square domain :math:`[0,1]^2`, and
+:raw-latex:`\begin{equation}
+\begin{cases}
+\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
+u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+\end{cases}
+\end{equation}` where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
 square.
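Editor's note on the hunk above: the Poisson problem it states has the closed-form solution u(x, y) = -sin(pi x) sin(pi y) / (2 pi^2), which is useful for validating a trained PINN. A small self-contained check in plain Python (no PINA dependency; the helper names `u_exact` and `laplacian` are ours, not part of the tutorial):

```python
import math

def u_exact(x, y):
    # closed-form solution of  Δu = sin(pi x) sin(pi y)  on [0,1]^2
    # with u = 0 on all four boundaries Γ1..Γ4
    return -math.sin(math.pi * x) * math.sin(math.pi * y) / (2 * math.pi ** 2)

def laplacian(f, x, y, h=1e-3):
    # central finite-difference approximation of Δf at (x, y)
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h ** 2

x, y = 0.3, 0.7
forcing = math.sin(math.pi * x) * math.sin(math.pi * y)
assert abs(laplacian(u_exact, x, y) - forcing) < 1e-4   # PDE satisfied
assert abs(u_exact(0.0, 0.5)) < 1e-12                   # boundary condition
```

Differentiating twice in each direction multiplies the solution by -pi^2 per direction, so the -1/(2 pi^2) prefactor makes the Laplacian reproduce the forcing term exactly.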

@@ -127,7 +123,7 @@ These parameters can be modified as desired. We use the

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 152.98it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
+    Epoch 999: : 1it [00:00, 158.53it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304]

 .. parsed-literal::

@@ -136,7 +132,7 @@ These parameters can be modified as desired. We use the

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 119.21it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
+    Epoch 999: : 1it [00:00, 105.33it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304]


 Now the ``Plotter`` class is used to plot the results. The solution

@@ -162,10 +158,9 @@ is now defined, with an additional input variable, named extra-feature,
 which coincides with the forcing term in the Laplace equation. The set
 of input variables to the neural network is:

-.. math::
-    \begin{equation}
-    [x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
-    \end{equation}
+:raw-latex:`\begin{equation}
+[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
+\end{equation}`

 where :math:`x` and :math:`y` are the spatial coordinates and
 :math:`k(x, y)` is the added feature.

@@ -219,7 +214,7 @@ new extra feature.

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 119.36it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
+    Epoch 999: : 1it [00:00, 111.88it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6]

 .. parsed-literal::

@@ -228,7 +223,7 @@ new extra feature.

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 95.23it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
+    Epoch 999: : 1it [00:00, 85.62it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6]


 The predicted and exact solutions and the error between them are

@@ -254,10 +249,9 @@ Another way to exploit the extra features is the addition of learnable
 parameter inside them. In this way, the added parameters are learned
 during the training phase of the neural network. In this case, we use:

-.. math::
-    \begin{equation}
-    k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
-    \end{equation}
+:raw-latex:`\begin{equation}
+k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
+\end{equation}`

 where :math:`\alpha` and :math:`\beta` are the abovementioned
 parameters. Their implementation is quite trivial: by using the class
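Editor's note: the functional form of the learnable feature in the hunk above is easy to sanity-check outside PINA. A minimal sketch in plain Python (in the tutorial, alpha and beta would be trainable torch parameters updated by autograd; here they are fixed floats, and `make_feature` is our hypothetical helper name):

```python
import math

def make_feature(alpha, beta):
    # learnable extra feature  k(x, y) = beta * sin(alpha x) * sin(alpha y);
    # in PINA these would be trainable tensors, not plain floats
    def k(x, y):
        return beta * math.sin(alpha * x) * math.sin(alpha * y)
    return k

# with alpha = pi and beta = 1 the feature reduces to the fixed
# forcing-term feature sin(pi x) sin(pi y) used earlier in the tutorial
k = make_feature(alpha=math.pi, beta=1.0)
assert abs(k(0.5, 0.5) - 1.0) < 1e-12
```

This makes the point of the change concrete: if training recovers alpha close to pi and beta close to 1, the learned feature converges to the known forcing term.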
@@ -306,7 +300,7 @@ need, and they are managed by ``autograd`` module!

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 103.14it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
+    Epoch 999: : 1it [00:00, 119.29it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7]

 .. parsed-literal::

@@ -315,7 +309,7 @@ need, and they are managed by ``autograd`` module!

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 84.50it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
+    Epoch 999: : 1it [00:00, 85.94it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7]


 Umh, the final loss is not appreciabily better than previous model (with

@@ -355,7 +349,7 @@ removing all the hidden layers in the ``FeedForward``, keeping only the

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 130.55it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
+    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 131.20it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14]

 .. parsed-literal::

@@ -364,7 +358,7 @@ removing all the hidden layers in the ``FeedForward``, keeping only the

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 104.91it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
+    Epoch 999: : 1it [00:00, 98.81it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14]


 In such a way, the model is able to reach a very high accuracy! Of
In such a way, the model is able to reach a very high accuracy! Of
4 binary files changed (-6.58 KB, -16.4 KB, -2.62 KB, 2.78 KB)

docs/source/_rst/tutorials/tutorial3/tutorial.rst

Lines changed: 11 additions & 12 deletions
@@ -25,14 +25,13 @@ The problem definition

 The problem is written in the following form:

-.. math::
-    \begin{equation}
-    \begin{cases}
-    \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
-    u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
-    u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-    \end{cases}
-    \end{equation}
+:raw-latex:`\begin{equation}
+\begin{cases}
+\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
+u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
+u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+\end{cases}
+\end{equation}`

 where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
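Editor's note on the hunk above: assuming zero initial velocity (du/dt = 0 at t = 0, which the hunk does not state explicitly), the wave problem admits the separable solution u(x, y, t) = sin(pi x) sin(pi y) cos(sqrt(2) pi t). A quick finite-difference check in plain Python (helper names are ours):

```python
import math

def u(x, y, t):
    # candidate separable solution of  Δu = u_tt  on [0,1]^2 with
    # u(x, y, 0) = sin(pi x) sin(pi y), zero boundary values, and
    # (assumed) zero initial velocity
    return (math.sin(math.pi * x) * math.sin(math.pi * y)
            * math.cos(math.sqrt(2.0) * math.pi * t))

def second_derivative(f, h=1e-3):
    # central second difference of a scalar function at 0
    return (f(h) + f(-h) - 2.0 * f(0.0)) / h ** 2

x, y, t = 0.3, 0.6, 0.4
lap = (second_derivative(lambda e: u(x + e, y, t))
       + second_derivative(lambda e: u(x, y + e, t)))
u_tt = second_derivative(lambda e: u(x, y, t + e))
assert abs(lap - u_tt) < 1e-3                                        # PDE holds
assert abs(u(x, y, 0.0)
           - math.sin(math.pi * x) * math.sin(math.pi * y)) < 1e-12  # initial condition
assert abs(u(0.0, y, t)) < 1e-12                                     # boundary condition
```

The temporal frequency sqrt(2) pi appears because each spatial direction contributes -pi^2 u to the Laplacian, so u_tt must equal -2 pi^2 u.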
@@ -149,7 +148,7 @@ approximately 3 minutes.

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 62.13it/s, v_num=0, mean_loss=0.0268, D_loss=0.0397, t0_loss=0.121, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000]
+    Epoch 999: : 1it [00:00, 84.47it/s, v_num=0, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000, t0_loss=0.0419, D_loss=0.0307, mean_loss=0.0121]

 .. parsed-literal::

@@ -158,7 +157,7 @@ approximately 3 minutes.

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 53.88it/s, v_num=0, mean_loss=0.0268, D_loss=0.0397, t0_loss=0.121, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000]
+    Epoch 999: : 1it [00:00, 68.69it/s, v_num=0, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000, t0_loss=0.0419, D_loss=0.0307, mean_loss=0.0121]


 Notice that the loss on the boundaries of the spatial domain is exactly

@@ -263,7 +262,7 @@ Now let’s train with the same configuration as thre previous test

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 48.54it/s, v_num=1, mean_loss=1.48e-8, D_loss=8.89e-8, t0_loss=0.000, gamma1_loss=2.06e-15, gamma2_loss=0.000, gamma3_loss=2.1e-15, gamma4_loss=0.000]
+    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 52.10it/s, v_num=1, gamma1_loss=1.97e-15, gamma2_loss=0.000, gamma3_loss=2.14e-15, gamma4_loss=0.000, t0_loss=0.000, D_loss=1.25e-7, mean_loss=2.09e-8]

 .. parsed-literal::

@@ -272,7 +271,7 @@ Now let’s train with the same configuration as thre previous test

 .. parsed-literal::

-    Epoch 999: : 1it [00:00, 43.25it/s, v_num=1, mean_loss=1.48e-8, D_loss=8.89e-8, t0_loss=0.000, gamma1_loss=2.06e-15, gamma2_loss=0.000, gamma3_loss=2.1e-15, gamma4_loss=0.000]
+    Epoch 999: : 1it [00:00, 45.78it/s, v_num=1, gamma1_loss=1.97e-15, gamma2_loss=0.000, gamma3_loss=2.14e-15, gamma4_loss=0.000, t0_loss=0.000, D_loss=1.25e-7, mean_loss=2.09e-8]


 We can clearly see that the loss is way lower now. Let’s plot the
