@@ -115,26 +115,24 @@ These parameters can be modified as desired. We use the
 
 .. parsed-literal::
 
-    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
-      warnings.warn("Can't initialize NVML")
-    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:651: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
-      return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
     GPU available: False, used: False
     TPU available: False, using: 0 TPU cores
     IPU available: False, using: 0 IPUs
     HPU available: False, using: 0 HPUs
-    Missing logger folder: /u/d/dcoscia/PINA/tutorials/tutorial2/lightning_logs
 
 
+.. parsed-literal::
+
+    Epoch 999: : 1it [00:00, 152.98it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
 
 .. parsed-literal::
 
-    Training: 0it [00:00, ?it/s]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Epoch 999: : 1it [00:00, 119.21it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
 
 
 Now the ``Plotter`` class is used to plot the results. The solution
@@ -145,7 +143,7 @@ and the predicted solutions is showed.
 .. code:: ipython3
 
     plotter = Plotter()
-    plotter.plot(trainer)
+    plotter.plot(solver=pinn)
 
 
 
@@ -214,15 +212,18 @@ new extra feature.
     HPU available: False, using: 0 HPUs
 
 
+.. parsed-literal::
+
+    Epoch 999: : 1it [00:00, 119.36it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
 
 .. parsed-literal::
 
-    Training: 0it [00:00, ?it/s]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Epoch 999: : 1it [00:00, 95.23it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
 
 
 The predicted and exact solutions and the error between them are
@@ -232,7 +233,7 @@ of magnitudes in accuracy.
 
 .. code:: ipython3
 
-    plotter.plot(trainer_feat)
+    plotter.plot(solver=pinn_feat)
 
 
 
@@ -297,15 +298,18 @@ need, and they are managed by ``autograd`` module!
     HPU available: False, using: 0 HPUs
 
 
+.. parsed-literal::
+
+    Epoch 999: : 1it [00:00, 103.14it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
 
 .. parsed-literal::
 
-    Training: 0it [00:00, ?it/s]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Epoch 999: : 1it [00:00, 84.50it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
 
 
 Umh, the final loss is not appreciably better than the previous model (with
@@ -328,7 +332,7 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
         output_dimensions=len(problem.output_variables),
         input_dimensions=len(problem.input_variables)+1
     )
-    pinn_learn = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.006, 'weight_decay':1e-8})
+    pinn_learn = PINN(problem, model_lean, extra_features=[SinSinAB()], optimizer_kwargs={'lr':0.01, 'weight_decay':1e-8})
     trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
 
     # train
@@ -343,15 +347,18 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
     HPU available: False, using: 0 HPUs
 
 
+.. parsed-literal::
+
+    Epoch 999: : 1it [00:00, 130.55it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
 
 .. parsed-literal::
 
-    Training: 0it [00:00, ?it/s]
+    `Trainer.fit` stopped: `max_epochs=1000` reached.
 
 
 .. parsed-literal::
 
-    `Trainer.fit` stopped: `max_epochs=1000` reached.
+    Epoch 999: : 1it [00:00, 104.91it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
 
 
357364 In such a way, the model is able to reach a very high accuracy! Of
@@ -368,7 +375,7 @@ features.
 
 .. code:: ipython3
 
-    plotter.plot(trainer_learn)
+    plotter.plot(solver=pinn_learn)
 
 
 
@@ -379,9 +386,9 @@ Let us compare the training losses for the various types of training
 
 .. code:: ipython3
 
-    plotter.plot_loss(trainer, label='Standard')
-    plotter.plot_loss(trainer_feat, label='Static Features')
-    plotter.plot_loss(trainer_learn, label='Learnable Features')
+    plotter.plot_loss(trainer, logy=True, label='Standard')
+    plotter.plot_loss(trainer_feat, logy=True, label='Static Features')
+    plotter.plot_loss(trainer_learn, logy=True, label='Learnable Features')
 
 
 