@@ -31,16 +31,12 @@ The problem definition
 ----------------------
 
 The two-dimensional Poisson problem is mathematically written as:
-
-.. math::
-   \begin{equation}
-   \begin{cases}
-   \Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
-   u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
-   \end{cases}
-   \end{equation}
-
-where :math:`D` is a square domain :math:`[0,1]^2`, and
+:raw-latex:`\begin{equation}
+\begin{cases}
+\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
+u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
+\end{cases}
+\end{equation}` where :math:`D` is a square domain :math:`[0,1]^2`, and
 :math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
 square.
 
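An editorial aside, not part of the tutorial source: the problem above has the closed-form solution :math:`u(x, y) = -\sin(\pi x)\sin(\pi y)/(2\pi^2)`, since applying the Laplacian returns the forcing term and the function vanishes on all four boundaries. A minimal sketch, assuming only NumPy (no PINA), checks this with second-order finite differences:

```python
import numpy as np

# Closed-form solution of the Poisson problem above: its Laplacian
# gives back sin(pi x) sin(pi y), and it is zero on the boundary.
def u_exact(x, y):
    return -np.sin(np.pi * x) * np.sin(np.pi * y) / (2 * np.pi**2)

def forcing(x, y):
    return np.sin(np.pi * x) * np.sin(np.pi * y)

# Uniform grid on [0, 1]^2 and the 5-point discrete Laplacian.
n = 201
h = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u = u_exact(x, y)
lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
       - 4.0 * u[1:-1, 1:-1]) / h**2

# The discrete Laplacian matches the forcing term up to O(h^2) truncation error.
err = np.max(np.abs(lap - forcing(x, y)[1:-1, 1:-1]))
print(err < 1e-3)  # True
```

This exact solution is the reference against which the trained networks are compared later in the tutorial.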
@@ -127,7 +123,7 @@ These parameters can be modified as desired. We use the
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 152.98it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
+    Epoch 999: : 1it [00:00, 158.53it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304]
 
 .. parsed-literal::
 
@@ -136,7 +132,7 @@ These parameters can be modified as desired. We use the
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 119.21it/s, v_num=9, mean_loss=0.000239, D_loss=0.000793, gamma1_loss=8.51e-5, gamma2_loss=0.000103, gamma3_loss=0.000122, gamma4_loss=9.14e-5]
+    Epoch 999: : 1it [00:00, 105.33it/s, v_num=3, gamma1_loss=5.29e-5, gamma2_loss=4.09e-5, gamma3_loss=4.73e-5, gamma4_loss=4.18e-5, D_loss=0.00134, mean_loss=0.000304]
 
 
 Now the ``Plotter`` class is used to plot the results. The solution
@@ -162,10 +158,9 @@ is now defined, with an additional input variable, named extra-feature,
 which coincides with the forcing term in the Laplace equation. The set
 of input variables to the neural network is:
 
-.. math::
-   \begin{equation}
-   [x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
-   \end{equation}
+:raw-latex:`\begin{equation}
+[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
+\end{equation}`
 
 where :math:`x` and :math:`y` are the spatial coordinates and
 :math:`k(x, y)` is the added feature.
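To make the augmented input concrete, here is a hedged sketch in plain NumPy (the function names are invented for illustration and are not PINA's actual API) of stacking the extra feature :math:`k(x, y)` onto the spatial coordinates before they enter the network:

```python
import numpy as np

def extra_feature(xy):
    """Given an (N, 2) array of [x, y] points, return k(x, y) as an (N, 1) column."""
    x, y = xy[:, 0], xy[:, 1]
    return (np.sin(np.pi * x) * np.sin(np.pi * y))[:, None]

def augment(xy):
    """Stack the feature onto the coordinates: [x, y] -> [x, y, k(x, y)]."""
    return np.hstack([xy, extra_feature(xy)])

pts = np.array([[0.5, 0.5], [0.25, 0.75]])
print(augment(pts).shape)  # (2, 3)
```

At the domain centre :math:`(0.5, 0.5)` the feature evaluates to 1, the maximum of the forcing term.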
@@ -219,7 +214,7 @@ new extra feature.
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 119.36it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
+    Epoch 999: : 1it [00:00, 111.88it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6]
 
 .. parsed-literal::
 
@@ -228,7 +223,7 @@ new extra feature.
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 95.23it/s, v_num=10, mean_loss=8.97e-7, D_loss=4.43e-6, gamma1_loss=1.37e-8, gamma2_loss=1.68e-8, gamma3_loss=1.22e-8, gamma4_loss=1.77e-8]
+    Epoch 999: : 1it [00:00, 85.62it/s, v_num=4, gamma1_loss=2.54e-7, gamma2_loss=2.17e-7, gamma3_loss=1.94e-7, gamma4_loss=2.69e-7, D_loss=9.2e-6, mean_loss=2.03e-6]
 
 
 The predicted and exact solutions and the error between them are
@@ -254,10 +249,9 @@ Another way to exploit the extra features is the addition of learnable
 parameters inside them. In this way, the added parameters are learned
 during the training phase of the neural network. In this case, we use:
 
-.. math::
-   \begin{equation}
-   k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
-   \end{equation}
+:raw-latex:`\begin{equation}
+k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
+\end{equation}`
 
 where :math:`\alpha` and :math:`\beta` are the aforementioned
 parameters. Their implementation is quite trivial: by using the class
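As an illustration only (plain NumPy, not the tutorial's implementation; in the tutorial :math:`\alpha` and :math:`\beta` are registered as trainable parameters so that autograd updates them during training), the parametric feature can be written as an ordinary function. With :math:`\alpha = \pi` and :math:`\beta = 1` it reduces to the fixed feature of the previous section:

```python
import numpy as np

def k(xy, alpha, beta):
    """Parametric extra feature k = beta * sin(alpha * x) * sin(alpha * y)."""
    x, y = xy[:, 0], xy[:, 1]
    return beta * np.sin(alpha * x) * np.sin(alpha * y)

# With alpha = pi and beta = 1 we recover the fixed feature sin(pi x) sin(pi y).
pts = np.array([[0.5, 0.5]])
print(k(pts, alpha=np.pi, beta=1.0))  # [1.]
```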
@@ -306,7 +300,7 @@ need, and they are managed by ``autograd`` module!
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 103.14it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
+    Epoch 999: : 1it [00:00, 119.29it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7]
 
 .. parsed-literal::
 
@@ -315,7 +309,7 @@ need, and they are managed by ``autograd`` module!
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 84.50it/s, v_num=14, mean_loss=1.39e-6, D_loss=6.04e-6, gamma1_loss=4.19e-7, gamma2_loss=2.8e-8, gamma3_loss=4.05e-7, gamma4_loss=3.49e-8]
+    Epoch 999: : 1it [00:00, 85.94it/s, v_num=5, gamma1_loss=3.26e-8, gamma2_loss=7.84e-8, gamma3_loss=1.13e-7, gamma4_loss=3.02e-8, D_loss=2.66e-6, mean_loss=5.82e-7]
 
 
 Umh, the final loss is not appreciably better than the previous model (with
@@ -355,7 +349,7 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 130.55it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
+    Epoch 0: : 0it [00:00, ?it/s]Epoch 999: : 1it [00:00, 131.20it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14]
 
 .. parsed-literal::
 
@@ -364,7 +358,7 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
 
 .. parsed-literal::
 
-    Epoch 999: : 1it [00:00, 104.91it/s, v_num=17, mean_loss=1.34e-14, D_loss=6.7e-14, gamma1_loss=5.13e-17, gamma2_loss=9.68e-18, gamma3_loss=5.14e-17, gamma4_loss=9.75e-18]
+    Epoch 999: : 1it [00:00, 98.81it/s, v_num=6, gamma1_loss=2.55e-16, gamma2_loss=4.76e-17, gamma3_loss=2.55e-16, gamma4_loss=4.76e-17, D_loss=1.74e-13, mean_loss=3.5e-14]
 
 
 In such a way, the model is able to reach a very high accuracy! Of
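A short editorial aside on why the linear-only model can reach near machine-precision loss: since :math:`u(x, y) = -k(x, y)/(2\pi^2)` with :math:`k(x, y) = \sin(\pi x)\sin(\pi y)`, a single linear layer acting on the augmented input :math:`[x, y, k(x, y)]` can represent the exact solution. A NumPy sketch (the weights below are the hypothetical exact ones, not the trained model's):

```python
import numpy as np

def k(x, y):
    return np.sin(np.pi * x) * np.sin(np.pi * y)

def u_exact(x, y):
    return -k(x, y) / (2 * np.pi**2)

# Random collocation points in [0, 1]^2.
rng = np.random.default_rng(0)
x, y = rng.random(1000), rng.random(1000)

# A single linear layer on [x, y, k(x, y)] with weights [0, 0, -1/(2 pi^2)]
# and zero bias reproduces the exact solution, so a hidden-layer-free
# network only has to learn one scaling coefficient.
inputs = np.stack([x, y, k(x, y)], axis=1)
w = np.array([0.0, 0.0, -1.0 / (2 * np.pi**2)])
pred = inputs @ w

print(np.allclose(pred, u_exact(x, y)))  # True
```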