Commit 0b7a307

annaivagnes and Dario Coscia authored and committed

Inverse problem implementation (#177)

* inverse problem implementation
* add tutorial7 for inverse Poisson problem
* fix doc in equation, equation_interface, system_equation

Co-authored-by: Dario Coscia <dariocoscia@dhcp-015.eduroam.sissa.it>
1 parent a9f14ac commit 0b7a307

21 files changed: 967 additions & 40 deletions

docs/source/_rst/_tutorial.rst

Lines changed: 4 additions & 2 deletions
@@ -1,7 +1,7 @@
 PINA Tutorials
 ==============

-In this folder we collect useful tutorials in order to understand the principles and the potential of **PINA**.
+In this folder we collect useful tutorials in order to understand the principles and the potential of **PINA**.

 Getting started with PINA
 -------------------------
@@ -20,6 +20,7 @@ Physics Informed Neural Networks

     Two dimensional Poisson problem using Extra Features Learning<tutorials/tutorial2/tutorial.rst>
     Two dimensional Wave problem with hard constraint<tutorials/tutorial3/tutorial.rst>
+    Resolution of a 2D Poisson inverse problem<tutorials/tutorial7/tutorial.rst>


 Neural Operator Learning
@@ -36,4 +37,5 @@ Supervised Learning
     :maxdepth: 3
     :titlesonly:

-    Unstructured convolutional autoencoder via continuous convolution<tutorials/tutorial4/tutorial.rst>
+    Unstructured convolutional autoencoder via continuous convolution<tutorials/tutorial4/tutorial.rst>
+
Lines changed: 217 additions & 0 deletions
@@ -0,0 +1,217 @@ (new file)

Tutorial 7: Resolution of an inverse problem
============================================

Introduction to the inverse problem
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This tutorial shows how to solve an inverse Poisson problem with
Physics-Informed Neural Networks. The problem definition is that of a
Poisson problem with homogeneous boundary conditions, and it reads:

:raw-latex:`\begin{equation}
\begin{cases}
\Delta u = e^{-2(x-\mu_1)^2-2(y-\mu_2)^2} \text{ in } \Omega\, ,\\
u = 0 \text{ on }\partial \Omega,\\
u(\mu_1, \mu_2) = \text{ data}
\end{cases}
\end{equation}`

where :math:`\Omega` is the square domain
:math:`[-2, 2] \times [-2, 2]`, and
:math:`\partial \Omega = \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4`
is the union of its boundaries.

This kind of problem, namely the "inverse problem", has two main goals:

- find the solution :math:`u` that satisfies the Poisson equation;
- find the unknown parameters (:math:`\mu_1`, :math:`\mu_2`) that best fit
  some given data (third equation in the system above).

To achieve both goals, we need to define an ``InverseProblem`` in PINA.

Let's start with some useful imports.

.. code:: ipython3

    import matplotlib.pyplot as plt
    import torch
    from pytorch_lightning.callbacks import Callback
    from pina.problem import SpatialProblem, InverseProblem
    from pina.operators import laplacian
    from pina.model import FeedForward
    from pina.equation import Equation, FixedValue
    from pina import Condition, Trainer
    from pina.solvers import PINN
    from pina.geometry import CartesianDomain

Then, we import the pre-saved data for
(:math:`\mu_1`, :math:`\mu_2`) = (:math:`0.5`, :math:`0.5`). These two
values are the optimal parameters that we want to recover through the
neural network training. In particular, we import the ``input_points``
(the spatial coordinates) and the ``output_points`` (the corresponding
:math:`u` values evaluated at the ``input_points``).

.. code:: ipython3

    data_output = torch.load('data/pinn_solution_0.5_0.5').detach()
    data_input = torch.load('data/pts_0.5_0.5')

Moreover, let's also plot the data points and the reference solution:
this is the expected output of the neural network.

.. code:: ipython3

    points = data_input.extract(['x', 'y']).detach().numpy()
    truth = data_output.detach().numpy()

    plt.scatter(points[:, 0], points[:, 1], c=truth, s=8)
    plt.axis('equal')
    plt.colorbar()
    plt.show()

.. image:: tutorial_files/output_8_0.png

Inverse problem definition in PINA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Then, we initialize the Poisson problem, which inherits from both the
``SpatialProblem`` and the ``InverseProblem`` classes. Here we have to
define all the variables, as well as the domain where the unknown
parameters (:math:`\mu_1`, :math:`\mu_2`) live. Notice that the Laplace
equation also takes the unknown variables as input; these are treated as
parameters that the neural network optimizes during training.

.. code:: ipython3

    ### Define ranges of variables
    x_min = -2
    x_max = 2
    y_min = -2
    y_max = 2

    class Poisson(SpatialProblem, InverseProblem):
        '''
        Problem definition for the Poisson equation.
        '''
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [x_min, x_max], 'y': [y_min, y_max]})
        # define the ranges for the parameters
        unknown_parameter_domain = CartesianDomain({'mu1': [-1, 1], 'mu2': [-1, 1]})

        def laplace_equation(input_, output_, params_):
            '''
            Laplace equation with a force term.
            '''
            force_term = torch.exp(
                - 2*(input_.extract(['x']) - params_['mu1'])**2
                - 2*(input_.extract(['y']) - params_['mu2'])**2)
            delta_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])

            return delta_u - force_term

        # define the conditions for the loss (boundary conditions, equation, data)
        conditions = {
            'gamma1': Condition(
                location=CartesianDomain({'x': [x_min, x_max], 'y': y_max}),
                equation=FixedValue(0.0, components=['u'])),
            'gamma2': Condition(
                location=CartesianDomain({'x': [x_min, x_max], 'y': y_min}),
                equation=FixedValue(0.0, components=['u'])),
            'gamma3': Condition(
                location=CartesianDomain({'x': x_max, 'y': [y_min, y_max]}),
                equation=FixedValue(0.0, components=['u'])),
            'gamma4': Condition(
                location=CartesianDomain({'x': x_min, 'y': [y_min, y_max]}),
                equation=FixedValue(0.0, components=['u'])),
            'D': Condition(
                location=CartesianDomain({'x': [x_min, x_max], 'y': [y_min, y_max]}),
                equation=Equation(laplace_equation)),
            'data': Condition(
                input_points=data_input.extract(['x', 'y']),
                output_points=data_output)
        }

    problem = Poisson()

Then, we define the model of the neural network we want to use. Here we
use a model which imposes hard constraints on the boundary conditions,
as also done in the Wave tutorial!

.. code:: ipython3

    model = FeedForward(
        layers=[20, 20, 20],
        func=torch.nn.Softplus,
        output_dimensions=len(problem.output_variables),
        input_dimensions=len(problem.input_variables)
    )

After that, we discretize the spatial domain.

.. code:: ipython3

    problem.discretise_domain(20, 'grid', locations=['D'], variables=['x', 'y'])
    problem.discretise_domain(
        1000, 'random',
        locations=['gamma1', 'gamma2', 'gamma3', 'gamma4'],
        variables=['x', 'y'])

Here, we define a simple callback for the trainer. We use this callback
to save the parameters predicted by the neural network during training.
The parameters are saved every 100 epochs as ``torch`` tensors in a
specified directory (``tmp_dir`` in our case). The goal is to read the
saved parameters after training and plot their trend across the epochs.

.. code:: ipython3

    # temporary directory for saving logs of training
    tmp_dir = "tmp_poisson_inverse"

    class SaveParameters(Callback):
        '''
        Callback to save the parameters of the model every 100 epochs.
        '''
        def on_train_epoch_end(self, trainer, __):
            if trainer.current_epoch % 100 == 99:
                torch.save(
                    trainer.solver.problem.unknown_parameters,
                    '{}/parameters_epoch{}'.format(tmp_dir, trainer.current_epoch))

Then, we define the ``PINN`` object and train the solver using the
``Trainer``.

.. code:: ipython3

    ### train the problem with PINN
    max_epochs = 5000
    pinn = PINN(problem, model, optimizer_kwargs={'lr': 0.005})
    # define the trainer for the solver
    trainer = Trainer(solver=pinn, accelerator='cpu', max_epochs=max_epochs,
                      default_root_dir=tmp_dir, callbacks=[SaveParameters()])
    trainer.train()

One can now see how the parameters vary during training by reading the
saved values and plotting them. The plot shows that the parameters
stabilize at their true values before epoch :math:`1000`!

.. code:: ipython3

    epochs_saved = range(99, max_epochs, 100)
    parameters = torch.empty((int(max_epochs/100), 2))
    for i, epoch in enumerate(epochs_saved):
        params_torch = torch.load('{}/parameters_epoch{}'.format(tmp_dir, epoch))
        for e, var in enumerate(pinn.problem.unknown_variables):
            parameters[i, e] = params_torch[var].data

    # Plot parameters
    plt.close()
    plt.plot(epochs_saved, parameters[:, 0], label='mu1', marker='o')
    plt.plot(epochs_saved, parameters[:, 1], label='mu2', marker='s')
    plt.ylim(-1, 1)
    plt.grid()
    plt.legend()
    plt.xlabel('Epoch')
    plt.ylabel('Parameter value')
    plt.show()

.. image:: tutorial_files/output_21_0.png
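Conceptually, the ``data`` condition adds a supervised data-misfit term to the training loss alongside the physics and boundary residuals; this extra term is what drives the unknown parameters toward the measured values. A minimal pure-Python sketch of that weighted-sum idea (the function name and the uniform weighting are illustrative assumptions, not the actual PINA internals):

```python
def total_loss(residuals, data_pred, data_true):
    """Sum of a physics loss and a supervised data loss (uniform weights)."""
    # physics loss: mean squared residual of the PDE/boundary conditions
    physics = sum(r**2 for r in residuals) / len(residuals)
    # data loss: mean squared error against the measured solution values
    data = sum((p - t)**2 for p, t in zip(data_pred, data_true)) / len(data_true)
    return physics + data
```

Minimizing both terms simultaneously is why the network can recover :math:`\mu_1` and :math:`\mu_2`: the physics term alone is satisfied by any admissible parameter pair, while the data term singles out the pair consistent with the observations.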

pina/condition.py

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ class Condition:
 >>> example_input_pts = LabelTensor(
 >>>     torch.tensor([[0, 0, 0]]), ['x', 'y', 'z'])
 >>> example_output_pts = LabelTensor(torch.tensor([[1, 2]]), ['a', 'b'])
->>>
+>>>
 >>> Condition(
 >>>     input_points=example_input_pts,
 >>>     output_points=example_output_pts)

pina/equation/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -9,4 +9,4 @@

 from .equation import Equation
 from .equation_factory import FixedFlux, FixedGradient, Laplace, FixedValue
-from .system_equation import SystemEquation
+from .system_equation import SystemEquation

pina/equation/equation.py

Lines changed: 16 additions & 4 deletions
@@ -8,7 +8,7 @@ def __init__(self, equation):
         """
         Equation class for specifing any equation in PINA.
         Each ``equation`` passed to a ``Condition`` object
-        must be an ``Equation`` or ``SystemEquation``.
+        must be an ``Equation`` or ``SystemEquation``.

         :param equation: A ``torch`` callable equation to
             evaluate the residual.
@@ -20,14 +20,26 @@ def __init__(self, equation):
                 f'{equation}')
         self.__equation = equation

-    def residual(self, input_, output_):
+    def residual(self, input_, output_, params_=None):
         """
         Residual computation of the equation.

         :param LabelTensor input_: Input points to evaluate the equation.
-        :param LabelTensor output_: Output vectors given my a model (e.g,
+        :param LabelTensor output_: Output vectors given by a model (e.g,
             a ``FeedForward`` model).
+        :param dict params_: Dictionary of parameters related to the inverse
+            problem (if any).
+            If the equation is not related to an ``InverseProblem``, the
+            parameters are initialized to ``None`` and the residual is
+            computed as ``equation(input_, output_)``.
+            Otherwise, the parameters are automatically initialized in the
+            ranges specified by the user.

         :return: The residual evaluation of the specified equation.
         :rtype: LabelTensor
         """
-        return self.__equation(input_, output_)
+        if params_ is None:
+            result = self.__equation(input_, output_)
+        else:
+            result = self.__equation(input_, output_, params_)
+        return result
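The new optional ``params_`` argument keeps forward problems working unchanged while letting inverse problems thread their unknown parameters through to the wrapped callable. A stripped-down, torch-free sketch of this dispatch pattern (``MiniEquation`` is a hypothetical stand-in for illustration, not the real PINA class):

```python
class MiniEquation:
    """Minimal stand-in for pina.equation.Equation's residual dispatch."""

    def __init__(self, equation):
        if not callable(equation):
            raise ValueError('equation must be a callable function')
        self._equation = equation

    def residual(self, input_, output_, params_=None):
        # forward problem: no parameters, call the two-argument form
        if params_ is None:
            return self._equation(input_, output_)
        # inverse problem: forward the parameter dictionary as a third argument
        return self._equation(input_, output_, params_)


# forward-style equation: residual of u - x
forward_eq = MiniEquation(lambda x, u: u - x)
# inverse-style equation: residual of u - mu * x
inverse_eq = MiniEquation(lambda x, u, p: u - p['mu'] * x)

print(forward_eq.residual(2.0, 5.0))               # 3.0
print(inverse_eq.residual(2.0, 5.0, {'mu': 2.0}))  # 1.0
```

Because ``params_`` defaults to ``None``, every existing two-argument equation keeps its old behavior, which is the backward-compatibility point of this diff.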

pina/equation/equation_interface.py

Lines changed: 14 additions & 0 deletions
@@ -11,3 +11,17 @@ class EquationInterface(metaclass=ABCMeta):
     the output variables, the condition(s), and the domain(s) where the
     conditions are applied.
     """
+
+    @abstractmethod
+    def residual(self, input_, output_, params_):
+        """
+        Residual computation of the equation.
+
+        :param LabelTensor input_: Input points to evaluate the equation.
+        :param LabelTensor output_: Output vectors given by a model (e.g.,
+            a ``FeedForward`` model).
+        :param dict params_: Dictionary of unknown parameters, possibly
+            related to an ``InverseProblem``.
+        :return: The residual evaluation of the specified equation.
+        :rtype: LabelTensor
+        """
+        pass

pina/equation/system_equation.py

Lines changed: 18 additions & 9 deletions
@@ -11,14 +11,14 @@ def __init__(self, list_equation, reduction='mean'):
         System of Equation class for specifing any system
         of equations in PINA.
         Each ``equation`` passed to a ``Condition`` object
-        must be an ``Equation`` or ``SystemEquation``.
-        A ``SystemEquation`` is specified by a list of
+        must be an ``Equation`` or ``SystemEquation``.
+        A ``SystemEquation`` is specified by a list of
         equations.

         :param Callable equation: A ``torch`` callable equation to
             evaluate the residual
         :param str reduction: Specifies the reduction to apply to the output:
-            ``none`` | ``mean`` | ``sum`` | ``callable``. ``none``: no reduction
+            ``none`` | ``mean`` | ``sum`` | ``callable``. ``none``: no reduction
             will be applied, ``mean``: the sum of the output will be divided
             by the number of elements in the output, ``sum``: the output will
             be summed. ``callable`` a callable function to perform reduction,
@@ -43,19 +43,28 @@ def __init__(self, list_equation, reduction='mean'):
             raise NotImplementedError(
                 'Only mean and sum reductions implemented.')

-    def residual(self, input_, output_):
+    def residual(self, input_, output_, params_=None):
         """
-        Residual computation of the equation.
+        Residual computation for the equations of the system.

-        :param LabelTensor input_: Input points to evaluate the equation.
-        :param LabelTensor output_: Output vectors given my a model (e.g,
+        :param LabelTensor input_: Input points to evaluate the system of
+            equations.
+        :param LabelTensor output_: Output vectors given by a model (e.g,
             a ``FeedForward`` model).
-        :return: The residual evaluation of the specified equation,
+        :param dict params_: Dictionary of parameters related to the inverse
+            problem (if any).
+            If the equation is not related to an ``InverseProblem``, the
+            parameters are initialized to ``None`` and the residual is
+            computed as ``equation(input_, output_)``.
+            Otherwise, the parameters are automatically initialized in the
+            ranges specified by the user.
+
+        :return: The residual evaluation of the specified system of equations,
             aggregated by the ``reduction`` defined in the ``__init__``.
         :rtype: LabelTensor
         """
         residual = torch.hstack(
-            [equation.residual(input_, output_) for equation in self.equations])
+            [equation.residual(input_, output_, params_) for equation in self.equations])

         if self.reduction == 'none':
             return residual
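``SystemEquation`` simply forwards the same (possibly ``None``) parameter dictionary to every sub-equation, stacks the per-equation residuals, and applies the chosen reduction. A torch-free sketch of that pipeline, using Python lists in place of tensors (``Eq`` and ``MiniSystemEquation`` are hypothetical illustration classes, not the PINA implementation):

```python
class Eq:
    """Single-equation wrapper exposing the optional params_ signature."""

    def __init__(self, fn):
        self.fn = fn

    def residual(self, input_, output_, params_=None):
        # dispatch on whether inverse-problem parameters were supplied
        if params_ is None:
            return self.fn(input_, output_)
        return self.fn(input_, output_, params_)


class MiniSystemEquation:
    """Stack per-equation residuals, then reduce ('none' | 'mean' | 'sum')."""

    def __init__(self, equations, reduction='mean'):
        if reduction not in ('none', 'mean', 'sum'):
            raise NotImplementedError('Only none, mean and sum reductions implemented.')
        self.equations = equations
        self.reduction = reduction

    def residual(self, input_, output_, params_=None):
        # every sub-equation receives the same parameter dictionary
        stacked = [eq.residual(input_, output_, params_) for eq in self.equations]
        if self.reduction == 'none':
            return stacked
        if self.reduction == 'sum':
            return sum(stacked)
        return sum(stacked) / len(stacked)  # 'mean'


# two toy equations: residuals u - x and u + x
system = MiniSystemEquation([Eq(lambda x, u: u - x), Eq(lambda x, u: u + x)])
print(system.residual(2.0, 5.0))  # mean of [3.0, 7.0] -> 5.0
```

Passing ``params_`` positionally to each sub-equation mirrors the one-line change in the diff: the list comprehension gains a third argument, and each ``Eq`` decides internally whether it is solving a forward or an inverse problem.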

pina/plotter.py

Lines changed: 1 addition & 0 deletions
@@ -205,6 +205,7 @@ def plot(self,
             plt.savefig(filename)
         else:
             plt.show()
+        plt.close()

     def plot_loss(self,
                   trainer,
