
Commit d556c59

Dario Coscia authored and ndem0 committed
modify tutorials for plotter compatibility
1 parent 5336f36 commit d556c59

36 files changed: 284 additions & 254 deletions

docs/source/_rst/tutorials/tutorial1/tutorial.rst

Lines changed: 34 additions & 31 deletions
@@ -215,26 +215,26 @@ calling the attribute ``input_pts`` of the problem
 
 .. parsed-literal::
 
-    Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.8569]],
-                [[0.9478]],
-                [[0.3030]],
-                [[0.8182]],
-                [[0.4116]],
-                [[0.6687]],
-                [[0.5394]],
-                [[0.9927]],
-                [[0.6082]],
-                [[0.4605]],
-                [[0.2859]],
-                [[0.7321]],
-                [[0.5624]],
-                [[0.1303]],
-                [[0.2402]],
-                [[0.0182]],
-                [[0.0714]],
-                [[0.3697]],
-                [[0.7770]],
-                [[0.1784]]])}
+    Input points: {'x0': LabelTensor([[[0.]]]), 'D': LabelTensor([[[0.8633]],
+                [[0.4009]],
+                [[0.6489]],
+                [[0.9278]],
+                [[0.3975]],
+                [[0.1484]],
+                [[0.9632]],
+                [[0.5485]],
+                [[0.2984]],
+                [[0.5643]],
+                [[0.0368]],
+                [[0.7847]],
+                [[0.4741]],
+                [[0.6957]],
+                [[0.3281]],
+                [[0.0958]],
+                [[0.1847]],
+                [[0.2232]],
+                [[0.8099]],
+                [[0.7304]]])}
     Input points labels: ['x']
 
 
@@ -296,19 +296,16 @@ If you want to track the metric by yourself without a logger, use
 
 .. parsed-literal::
 
-    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
-      warnings.warn("Can't initialize NVML")
-    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:651: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
-      return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
     GPU available: False, used: False
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
+    Missing logger folder: /Users/dariocoscia/Desktop/PINA/tutorials/tutorial1/lightning_logs
 
 
 .. parsed-literal::
 
-    Epoch 1499: : 1it [00:00, 143.58it/s, v_num=5, mean_loss=1.09e-5, x0_loss=1.33e-7, D_loss=2.17e-5]
+    Epoch 1499: : 1it [00:00, 316.24it/s, v_num=0, mean_loss=5.39e-5, x0_loss=1.26e-6, D_loss=0.000106]
 
 .. parsed-literal::
 
@@ -317,7 +314,7 @@ If you want to track the metric by yourself without a logger, use
 
 .. parsed-literal::
 
-    Epoch 1499: : 1it [00:00, 65.39it/s, v_num=5, mean_loss=1.09e-5, x0_loss=1.33e-7, D_loss=2.17e-5]
+    Epoch 1499: : 1it [00:00, 166.89it/s, v_num=0, mean_loss=5.39e-5, x0_loss=1.26e-6, D_loss=0.000106]
 
 
After the training we can inspect trainer logged metrics (by default
@@ -335,9 +332,9 @@ loss can be accessed by ``trainer.logged_metrics``
 
 .. parsed-literal::
 
-    {'mean_loss': tensor(1.0938e-05),
-     'x0_loss': tensor(1.3328e-07),
-     'D_loss': tensor(2.1743e-05)}
+    {'mean_loss': tensor(5.3852e-05),
+     'x0_loss': tensor(1.2636e-06),
+     'D_loss': tensor(0.0001)}
 
 
 
@@ -347,19 +344,25 @@ quatitative plots of the solution.
 .. code:: ipython3
 
     # plotting the solution
-    pl.plot(trainer=trainer)
+    pl.plot(solver=pinn)
 
 
 
 .. image:: tutorial_files/tutorial_23_0.png
 
 
+
+.. parsed-literal::
+
+    <Figure size 640x480 with 0 Axes>
+
+
 The solution is overlapped with the actual one, and they are barely
 indistinguishable. We can also plot easily the loss:
 
 .. code:: ipython3
 
-    pl.plot_loss(trainer=trainer, metric='mean_loss', log_scale=True)
+    pl.plot_loss(trainer=trainer, label = 'mean_loss', logy=True)
 
 
 
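The hunk above renames the ``plot_loss`` keywords (``metric`` → ``label``, ``log_scale`` → ``logy``), which would break notebooks written against the old signature. As a hedged sketch only (not part of this commit, and ``plot_loss_compat`` is a hypothetical name, not PINA API), a deprecation shim could map the old keywords to the new ones:

```python
import warnings

def plot_loss_compat(trainer, label=None, logy=False,
                     metric=None, log_scale=None):
    """Hypothetical shim: translate the deprecated keywords
    (metric, log_scale) into the renamed ones (label, logy)."""
    if metric is not None:
        warnings.warn("'metric' is deprecated, use 'label'", DeprecationWarning)
        if label is None:
            label = metric
    if log_scale is not None:
        warnings.warn("'log_scale' is deprecated, use 'logy'", DeprecationWarning)
        logy = log_scale
    # A real shim would now forward to Plotter().plot_loss(trainer,
    # label=label, logy=logy); here we just return the resolved kwargs.
    return {"trainer": trainer, "label": label, "logy": logy}
```

A call written against the old API, such as ``plot_loss_compat(trainer, metric='mean_loss', log_scale=True)``, then resolves to the new keyword values while warning the user.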

docs/source/_rst/tutorials/tutorial2/tutorial.rst

Lines changed: 27 additions & 20 deletions
@@ -115,26 +115,24 @@ These parameters can be modified as desired. We use the
 
 .. parsed-literal::
 
-    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
-      warnings.warn("Can't initialize NVML")
-    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:651: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
-      return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
     GPU available: False, used: False
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
-    Missing logger folder: /u/d/dcoscia/PINA/tutorials/tutorial2/lightning_logs
 
 
+.. parsed-literal::
+
+    Epoch 999: : 1it [00:00, 152.98it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
 
 .. parsed-literal::
 
-    Training: 0it [00:00, ?it/s]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Epoch 999: : 1it [00:00, 119.21it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
 
 
 
Now the ``Plotter`` class is used to plot the results. The solution
@@ -145,7 +143,7 @@ and the predicted solutions is showed.
 .. code:: ipython3
 
     plotter = Plotter()
-    plotter.plot(trainer)
+    plotter.plot(solver=pinn)
 
 
 
@@ -214,15 +212,18 @@ new extra feature.
     HPU available: False, using: 0 HPUs
 
 
+.. parsed-literal::
+
+    Epoch 999: : 1it [00:00, 119.36it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
 
 .. parsed-literal::
 
-    Training: 0it [00:00, ?it/s]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Epoch 999: : 1it [00:00, 95.23it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
 
 
 
The predicted and exact solutions and the error between them are
@@ -232,7 +233,7 @@ of magnitudes in accuracy.
 
 .. code:: ipython3
 
-    plotter.plot(trainer_feat)
+    plotter.plot(solver=pinn_feat)
 
 
 
@@ -297,15 +298,18 @@ need, and they are managed by ``autograd`` module!
     HPU available: False, using: 0 HPUs
 
 
+.. parsed-literal::
+
+    Epoch 999: : 1it [00:00, 103.14it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
 
 .. parsed-literal::
 
-    Training: 0it [00:00, ?it/s]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Epoch 999: : 1it [00:00, 84.50it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
 
 
 
Umh, the final loss is not appreciabily better than previous model (with
@@ -328,7 +332,7 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
         output_dimensions=len(problem.output_variables),
         input_dimensions=len(problem.input_variables)+1
     )
-    pinn_learn = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
+    pinn_learn = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.01, 'weight_decay':1e-8})
     trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
 
     # train
@@ -343,15 +347,18 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
     HPU available: False, using: 0 HPUs
 
 
+.. parsed-literal::
+
+    Epoch 999: : 1it [00:00, 130.55it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
 
 .. parsed-literal::
 
-    Training: 0it [00:00, ?it/s]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Epoch 999: : 1it [00:00, 104.91it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
 
 
 
In such a way, the model is able to reach a very high accuracy! Of
@@ -368,7 +375,7 @@ features.
 
 .. code:: ipython3
 
-    plotter.plot(trainer_learn)
+    plotter.plot(solver=pinn_learn)
 
 
 
@@ -379,9 +386,9 @@ Let us compare the training losses for the various types of training
 
 .. code:: ipython3
 
-    plotter.plot_loss(trainer, label='Standard')
-    plotter.plot_loss(trainer_feat, label='Static Features')
-    plotter.plot_loss(trainer_learn, label='Learnable Features')
+    plotter.plot_loss(trainer, logy=True, label='Standard')
+    plotter.plot_loss(trainer_feat, logy=True,label='Static Features')
+    plotter.plot_loss(trainer_learn, logy=True, label='Learnable Features')
 
 
 
Binary image files changed (not shown).

0 commit comments