
Commit ada9643

fix tutorials latex and links (#261)
1 parent 4f911f8 commit ada9643

15 files changed: 84 additions & 284 deletions


docs/source/_rst/tutorials/tutorial1/tutorial.rst

Lines changed: 12 additions & 44 deletions
@@ -28,8 +28,12 @@ Build a PINA problem
 Problem definition in the **PINA** framework is done by building a
 python ``class``, which inherits from one or more problem classes
 (``SpatialProblem``, ``TimeDependentProblem``, ``ParametricProblem``, …)
-depending on the nature of the problem. Below is an example: ### Simple
-Ordinary Differential Equation Consider the following:
+depending on the nature of the problem. Below is an example:
+
+Simple Ordinary Differential Equation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+Consider the following:
 
 .. math::
 
@@ -83,25 +87,19 @@ will inherit from both ``SpatialProblem`` and ``TimeDependentProblem``:
         # other stuff ...
 
 
-.. parsed-literal::
-
-    Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
-    Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
-
-
 where we have included the ``temporal_domain`` variable, indicating the
 time domain wanted for the solution.
 
 In summary, using **PINA**, we can initialize a problem with a class
 which inherits from different base classes: ``SpatialProblem``,
 ``TimeDependentProblem``, ``ParametricProblem``, and so on depending on
 the type of problem we are considering. Here are some examples (more on
-the official documentation): \* ``SpatialProblem`` :math:`\rightarrow` a
-differential equation with spatial variable(s) \*
-``TimeDependentProblem`` :math:`\rightarrow` a time-dependent
-differential equation \* ``ParametricProblem`` :math:`\rightarrow` a
-parametrized differential equation \* ``AbstractProblem``
-:math:`\rightarrow` any **PINA** problem inherits from here
+the official documentation):
+
+* ``SpatialProblem`` :math:`\rightarrow` a differential equation with spatial variable(s) ``spatial_domain``
+* ``TimeDependentProblem`` :math:`\rightarrow` a time-dependent differential equation with temporal variable(s) ``temporal_domain``
+* ``ParametricProblem`` :math:`\rightarrow` a parametrized differential equation with parametric variable(s) ``parameter_domain``
+* ``AbstractProblem`` :math:`\rightarrow` any **PINA** problem inherits from here
 
 Write the problem class
 ~~~~~~~~~~~~~~~~~~~~~~~
@@ -300,31 +298,6 @@ If you want to track the metric by yourself without a logger, use
     # train
     trainer.train()
 
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-    /Users/alessio/opt/anaconda3/envs/pina/lib/python3.11/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:67: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `pytorch_lightning` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
-    Missing logger folder: /Users/alessio/Downloads/lightning_logs
-
-
-.. parsed-literal::
-
-    Epoch 1499: | | 1/? [00:00<00:00, 167.08it/s, v_num=0, x0_loss=1.07e-5, D_loss=0.000792, mean_loss=0.000401]
-
-.. parsed-literal::
-
-    `Trainer.fit` stopped: `max_epochs=1500` reached.
-
-
-.. parsed-literal::
-
-    Epoch 1499: | | 1/? [00:00<00:00, 102.49it/s, v_num=0, x0_loss=1.07e-5, D_loss=0.000792, mean_loss=0.000401]
-
-
 After the training we can inspect trainer logged metrics (by default
 **PINA** logs mean square error residual loss). The logged metrics can
 be accessed online using one of the ``Lightinig`` loggers. The final
@@ -355,11 +328,6 @@ quatitative plots of the solution.
     pl.plot(solver=pinn)
 
 
-.. parsed-literal::
-
-    Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
-
-
 
 .. image:: tutorial_files/tutorial_23_1.png
 
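The bullet list restored in this file describes PINA's problem-class hierarchy. A library-agnostic sketch of that multiple-inheritance pattern is below; the class names mirror the tutorial's, but the bodies are hypothetical illustrations, not PINA's actual implementation:

```python
from abc import ABC, abstractmethod

class AbstractProblem(ABC):
    """Any problem inherits from here (mirrors the tutorial's last bullet)."""

class SpatialProblem(AbstractProblem):
    # A spatial problem must declare where its spatial variables live.
    @property
    @abstractmethod
    def spatial_domain(self):
        ...

class TimeDependentProblem(AbstractProblem):
    # A time-dependent problem must declare its temporal domain.
    @property
    @abstractmethod
    def temporal_domain(self):
        ...

class Wave(SpatialProblem, TimeDependentProblem):
    # A space-time problem mixes in both traits and supplies both domains.
    @property
    def spatial_domain(self):
        return {"x": (0.0, 1.0)}

    @property
    def temporal_domain(self):
        return {"t": (0.0, 1.0)}

problem = Wave()
print(problem.spatial_domain, problem.temporal_domain)
```

The point of the pattern is that forgetting a domain is a hard error: instantiating a subclass that omits `temporal_domain` raises `TypeError` at construction time.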

docs/source/_rst/tutorials/tutorial2/tutorial.rst

Lines changed: 21 additions & 64 deletions
@@ -31,12 +31,17 @@ The problem definition
 ----------------------
 
 The two-dimensional Poisson problem is mathematically written as:
-:raw-latex:`\begin{equation}
-\begin{cases}
-\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
-u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-\end{cases}
-\end{equation}` where :math:`D` is a square domain :math:`[0,1]^2`, and
+
+.. math::
+
+    \begin{equation}
+    \begin{cases}
+    \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
+    u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+    \end{cases}
+    \end{equation}
+
+where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
 square.
 
@@ -112,19 +117,6 @@ These parameters can be modified as desired. We use the
     # train
     trainer.train()
 
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 158.53it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
@@ -158,9 +150,11 @@ is now defined, with an additional input variable, named extra-feature,
 which coincides with the forcing term in the Laplace equation. The set
 of input variables to the neural network is:
 
-:raw-latex:`\begin{equation}
-[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
-\end{equation}`
+.. math::
+
+    \begin{equation}
+    [x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
+    \end{equation}
 
 where :math:`x` and :math:`y` are the spatial coordinates and
 :math:`k(x, y)` is the added feature.
@@ -203,19 +197,6 @@ new extra feature.
     # train
     trainer_feat.train()
 
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 111.88it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
@@ -249,9 +230,11 @@ Another way to exploit the extra features is the addition of learnable
 parameter inside them. In this way, the added parameters are learned
 during the training phase of the neural network. In this case, we use:
 
-:raw-latex:`\begin{equation}
-k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
-\end{equation}`
+.. math::
+
+    \begin{equation}
+    k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
+    \end{equation}
 
 where :math:`\alpha` and :math:`\beta` are the abovementioned
 parameters. Their implementation is quite trivial: by using the class
@@ -289,19 +272,6 @@ need, and they are managed by ``autograd`` module!
     # train
     trainer_learn.train()
 
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 119.29it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
@@ -338,19 +308,6 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
     # train
     trainer_learn.train()
 
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 131.20it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
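The extra-feature idea in this file reduces to one mapping: every network input :math:`[x, y]` is augmented with :math:`k(x, y) = \sin(\pi x)\sin(\pi y)`, the forcing term of the Poisson problem. A minimal plain-Python sketch of that augmentation follows; it is illustrative only (PINA wires this through its own feature classes, not through a helper like `augment` below):

```python
import math

def extra_feature(x, y):
    # k(x, y) = sin(pi x) * sin(pi y): the forcing term of the Poisson
    # problem, fed to the network as a third input next to the coordinates.
    return math.sin(math.pi * x) * math.sin(math.pi * y)

def augment(points):
    # Map every collocation point [x, y] -> [x, y, k(x, y)].
    return [(x, y, extra_feature(x, y)) for x, y in points]

pts = [(0.5, 0.5), (0.25, 0.75)]
aug = augment(pts)
print(aug[0])  # at (0.5, 0.5) the feature is sin(pi/2)^2 = 1.0
```

Because the feature already contains the structure of the forcing term, the network only has to learn a much simpler residual mapping, which is why the tutorial's losses drop by several orders of magnitude.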

docs/source/_rst/tutorials/tutorial3/tutorial.rst

Lines changed: 9 additions & 34 deletions
@@ -25,13 +25,14 @@ The problem definition
 
 The problem is written in the following form:
 
-:raw-latex:`\begin{equation}
-\begin{cases}
-\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
-u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
-u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-\end{cases}
-\end{equation}`
+.. math::
+    \begin{equation}
+    \begin{cases}
+    \Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
+    u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
+    u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+    \end{cases}
+    \end{equation}
 
 where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
@@ -136,20 +137,6 @@ approximately 3 minutes.
     trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
     trainer.train()
 
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-    Missing logger folder: /Users/dariocoscia/Desktop/PINA/tutorials/tutorial3/lightning_logs
-
-
-.. parsed-literal::
-
-    Epoch 999: : 1it [00:00, 84.47it/s, v_num=0, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000, t0_loss=0.0419, D_loss=0.0307, mean_loss=0.0121]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
@@ -214,7 +201,7 @@ progress the solution get worse…. Can we do better?
 A valid option is to impose the initial condition as hard constraint as
 well. Specifically, our solution is written as:
 
-.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)\cdot t + \cos(\sqrt{2}\pi t)sin(\pi x)\sin(\pi y),
+.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t)\cdot t + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y),
 
 Let us build the network first
 
@@ -252,18 +239,6 @@ Now let’s train with the same configuration as thre previous test
     trainer.train()
 
 
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-.. parsed-literal::
-
-    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 52.10it/s, v_num=1, gamma1_loss=1.97e-15, gamma2_loss=0.000, gamma3_loss=2.14e-15, gamma4_loss=0.000, t0_loss=0.000, D_loss=1.25e-7, mean_loss=2.09e-8]
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=1000` reached.
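The hard-constraint ansatz fixed in this file, :math:`u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN \cdot t + \cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y)`, satisfies the boundary and initial conditions by construction, whatever the network outputs: the first term carries factors of :math:`t` and :math:`xy(1-x)(1-y)`, and at :math:`t=0` the second term reduces to the initial datum. A quick numerical check of that reasoning (the constant `nn=3.7` stands in for an arbitrary network output):

```python
import math

def u_pinn(x, y, t, nn=3.7):
    # Hard-constrained ansatz: the first term vanishes on the spatial
    # boundary (x or y in {0, 1}) and at t = 0; the second term carries
    # the initial condition sin(pi x) sin(pi y).
    hard = x * y * (1 - x) * (1 - y) * nn * t
    ic = (math.cos(math.sqrt(2) * math.pi * t)
          * math.sin(math.pi * x) * math.sin(math.pi * y))
    return hard + ic

x, y = 0.3, 0.8
# Initial condition holds for any nn: at t = 0 the network term is zeroed.
assert abs(u_pinn(x, y, 0.0) - math.sin(math.pi * x) * math.sin(math.pi * y)) < 1e-12
# Boundary condition: u = 0 whenever x or y hits 0 or 1 (up to float error).
assert abs(u_pinn(0.0, y, 0.5)) < 1e-12
assert abs(u_pinn(x, 1.0, 0.5)) < 1e-12
```

This is exactly why the corresponding `t0_loss` and `gamma*_loss` terms sit at (numerical) zero in the training log this commit removes: those residuals are identically zero by construction.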

docs/source/_rst/tutorials/tutorial4/tutorial.rst

Lines changed: 0 additions & 24 deletions
@@ -415,15 +415,6 @@ juts 1 epoch using Adam optimizer with a :math:`0.001` learning rate.
         running_loss = 0.0
 
 
-
-.. parsed-literal::
-
-    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/autograd/__init__.py:200: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
-      Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
-    /u/d/dcoscia/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
-      warnings.warn("Can't initialize NVML")
-
-
 .. parsed-literal::
 
     batch [50/750] loss[0.161]
@@ -637,21 +628,6 @@ and the problem is a simple problem created by inheriting from
     trainer.train()
 
 
-
-.. parsed-literal::
-
-    GPU available: False, used: False
-    TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
-    HPU available: False, using: 0 HPUs
-
-
-
-.. parsed-literal::
-
-    Training: 0it [00:00, ?it/s]
-
-
 .. parsed-literal::
 
     `Trainer.fit` stopped: `max_epochs=150` reached.

docs/source/_rst/tutorials/tutorial7/tutorial.rst

Lines changed: 17 additions & 14 deletions
@@ -1,4 +1,4 @@
-Tutorial 7: Resolution of an inverse problem
+Tutorial: Resolution of an inverse problem
 ============================================
 
 Introduction to the inverse problem
@@ -7,26 +7,29 @@ Introduction to the inverse problem
 This tutorial shows how to solve an inverse Poisson problem with
 Physics-Informed Neural Networks. The problem definition is that of a
 Poisson problem with homogeneous boundary conditions and it reads:
-:raw-latex:`\begin{equation}
-\begin{cases}
-\Delta u = e^{-2(x-\mu_1)^2-2(y-\mu_2)^2} \text{ in } \Omega\, ,\\
-u = 0 \text{ on }\partial \Omega,\\
-u(\mu_1, \mu_2) = \text{ data}
-\end{cases}
-\end{equation}` where :math:`\Omega` is a square domain
+
+.. math::
+
+    \begin{equation}
+    \begin{cases}
+    \Delta u = e^{-2(x-\mu_1)^2-2(y-\mu_2)^2} \text{ in } \Omega\, ,\\
+    u = 0 \text{ on }\partial \Omega,\\
+    u(\mu_1, \mu_2) = \text{ data}
+    \end{cases}
+    \end{equation}
+
+where :math:`\Omega` is a square domain
 :math:`[-2, 2] \times [-2, 2]`, and
 :math:`\partial \Omega=\Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4`
 is the union of the boundaries of the domain.
 
 This kind of problem, namely the “inverse problem”, has two main goals:
-- find the solution :math:`u` that satisfies the Poisson equation; -
-find the unknown parameters (:math:`\mu_1`, :math:`\mu_2`) that better
-fit some given data (third equation in the system above).
 
-In order to achieve both the goals we will need to define an
-``InverseProblem`` in PINA.
+* find the solution :math:`u` that satisfies the Poisson equation
+* find the unknown parameters (:math:`\mu_1`, :math:`\mu_2`) that better fit some given data (third equation in the system above).
 
-Let’s start with useful imports.
+In order to achieve both the goals we will need to define an
+``InverseProblem`` in PINA. Let’s start with useful imports.
 
 .. code:: ipython3

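Stripped of the PINN machinery, the second goal of the inverse problem is plain parameter estimation: choose :math:`(\mu_1, \mu_2)` so that the source term matches the data. A brute-force least-squares sketch is below; the sampling grid, candidate set, and "true" parameters are all invented for this demo, and the real tutorial instead learns the parameters jointly with the solution via ``InverseProblem`` and gradient descent:

```python
import math

def source(x, y, mu1, mu2):
    # Forcing term of the inverse Poisson problem:
    # e^{-2 (x - mu1)^2 - 2 (y - mu2)^2}
    return math.exp(-2 * (x - mu1) ** 2 - 2 * (y - mu2) ** 2)

# Synthetic "data": the source sampled on a grid of [-2, 2]^2
# with the (to-be-recovered) true parameters.
true_mu = (0.5, -0.25)
grid = [(i / 4 - 2, j / 4 - 2) for i in range(17) for j in range(17)]
data = [source(x, y, *true_mu) for x, y in grid]

# Brute-force least squares over candidate (mu1, mu2) pairs.
candidates = [(a / 4 - 2, b / 4 - 2) for a in range(17) for b in range(17)]

def loss(mu):
    return sum((source(x, y, *mu) - d) ** 2 for (x, y), d in zip(grid, data))

best = min(candidates, key=loss)
print(best)  # recovers (0.5, -0.25)
```

Grid search only works because there are two parameters and a cheap forward model; the PINN formulation scales the same fitting idea to cases where evaluating the model means solving the PDE itself.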