Magnetic field mapping of inaccessible regions using physics-informed neural networks

In this work we are interested in predicting the magnetic field components within a three-dimensional space enclosed by an external surface S, using knowledge of the magnetic field at selected locations on the surface S. Assuming there are no free currents, \(\varvec{J}=0\), and no magnetization, \(\varvec{M}=0\), in the region of interest, the partial differential equations governing the static magnetic field are quite concise. They are

$$\begin{aligned} \varvec{\nabla \cdot B}= 0, \end{aligned}$$

(1)

and

$$\begin{aligned} \varvec{\nabla \times B}= 0. \end{aligned}$$

(2)

A magnetic field can therefore be found by requiring that it satisfy (1) and (2) together with known values of the magnetic field at certain locations. We select these locations on the surface of a closed region S inside which we wish to approximate the magnetic field (Fig. 1). Of course, according to the uniqueness theorem of electromagnetism, a finite number of data points on a surface does not guarantee a unique solution of equations (1) and (2). However, as we show later in this paper, our results demonstrate that one can successfully approximate the true solution given a sufficient amount of data scattered over the surface.

Review of the multipole expansion method

In this section we review the field-monitoring method described in Ref. 9. Equation (2) states that the magnetic field vector can be written as the gradient of a magnetic scalar potential

$$\begin{aligned} \varvec{B}= -\nabla \Phi _M(\varvec{r}). \end{aligned}$$

(3)

Substituting (3) into Eq. (1) tells us that the magnetic scalar potential satisfies Laplace’s equation

$$\begin{aligned} \nabla ^2\Phi _M=0, \end{aligned}$$

(4)

and the solution of Laplace’s equation in spherical coordinates, regular at the origin, is given by

$$\begin{aligned} \Phi _M(r,\theta ,\phi ) = \sum _{l=0}^{\infty } \sum _{m=0}^{l} r^l P^m_l(\cos \theta ) \left[ a_{lm}\cos (m\phi )+b_{lm}\sin (m\phi )\right] , \end{aligned}$$

(5)

where \(P_l^m\) are the associated Legendre polynomials, and \(a_{lm}\) and \(b_{lm}\) are expansion coefficients. The magnetic field can be obtained by calculating the gradient of the magnetic scalar potential, \(\varvec{B}= -\nabla \Phi _M(\varvec{r})\). Absorbing \(a_{lm}\) and \(b_{lm}\) into coefficients \(c_n\), we can write the magnetic field in the compact form

$$\begin{aligned} \varvec{B}(x,y,z) = \sum _n c_n \varvec{f}_n(x,y,z), \end{aligned}$$

(6)

where the vector basis functions \(\varvec{f}_n(x,y,z)\) satisfy \(\nabla \cdot \varvec{f}_n = 0\) and \(\nabla \times \varvec{f}_n = 0\).

Table 1 The x, y and z components of the basis vector functions \(\varvec{f}_n\) up to order \(n=9\).

For illustration, the first 10 basis vector functions \(\varvec{f}_n\) are listed in Table 1. The right-hand side of Eq. (6) is truncated at a finite order \(n=N\), and the magnetic field vector within the volume can then be interpolated using linear regression methods.
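
To make the interpolation step concrete, the following is a minimal numpy sketch of a least-squares fit of the coefficients \(c_n\) in Eq. (6). The eight basis functions used here (three uniform-field terms and five linear-gradient terms) are divergence- and curl-free, but they are a low-order choice of our own for illustration rather than the Table 1 listing, and the function names are hypothetical.

```python
import numpy as np

def basis(r):
    """Evaluate 8 divergence- and curl-free basis functions f_n at the
    points r (shape (N, 3)); returns an array of shape (N, 8, 3)."""
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    o, e = np.zeros_like(x), np.ones_like(x)
    comps = [
        (e, o, o), (o, e, o), (o, o, e),       # uniform-field terms
        (x, y, -2 * z), (z, o, x), (o, z, y),  # linear-gradient terms
        (x, -y, o), (y, x, o),
    ]
    return np.stack([np.stack(c, axis=1) for c in comps], axis=1)

def fit_coefficients(r_sensors, B_measured):
    """Least-squares estimate of the c_n in Eq. (6) from sensor readings."""
    F = basis(r_sensors)                      # (N, 8, 3)
    A = F.transpose(0, 2, 1).reshape(-1, 8)   # stack the 3 components: (3N, 8)
    b = B_measured.reshape(-1)                # (3N,)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

# Prediction inside the volume:
#   B_pred = np.einsum("n,pnc->pc", c, basis(r_query))
```

Each sensor reading contributes three rows to the design matrix, one per field component, so all vector components are fitted jointly.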

Magnetic field prediction using PINNs

Figure 1

Magnetic field sensors (red dots) placed on a surface S to predict the magnetic field \(\varvec{B}\) inside.

The exact values of the partial derivatives in (1) and (2) can be calculated by automatic differentiation11, implemented in popular machine learning libraries such as TensorFlow15 and PyTorch16. The neural network that we train to approximate the magnetic field within the region has the structure shown in Fig. 2. The hyperbolic tangent is used as the activation of each hidden layer; the other activation functions tested did not perform as well as the hyperbolic tangent for this network architecture. The number of hidden layers is chosen to be 4 or 8, each layer having 32 or 64 neurons. The performance of these 4 differently sized networks is discussed later.

Figure 2

The network takes 3 inputs, the (x, y, z) coordinates, and outputs the magnetic field \(\varvec{B}\). Automatic differentiation is used to compute the exact derivatives of the output \(\varvec{B}\) with respect to the input coordinates.
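
As a concrete illustration of this architecture and of the automatic-differentiation step in Fig. 2, here is a minimal PyTorch sketch; `FieldNet` and `div_and_curl` are illustrative names of our own rather than code from the paper, and the depth and width defaults are just two of the sizes mentioned above.

```python
import torch
import torch.nn as nn

class FieldNet(nn.Module):
    """MLP mapping (x, y, z) -> (Bx, By, Bz); tanh on every hidden layer."""
    def __init__(self, hidden_layers=4, width=32):
        super().__init__()
        layers, in_dim = [], 3
        for _ in range(hidden_layers):
            layers += [nn.Linear(in_dim, width), nn.Tanh()]
            in_dim = width
        layers.append(nn.Linear(in_dim, 3))  # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, r):          # r: (N, 3) coordinates
        return self.net(r)         # (N, 3) field components

def div_and_curl(model, r):
    """Exact divergence and curl of the network output at points r (N, 3),
    computed by automatic differentiation of each output component."""
    r = r.detach().requires_grad_(True)
    B = model(r)
    # Row i of g[k] holds dB_k/d(x, y, z) at sample i.
    g = [torch.autograd.grad(B[:, k].sum(), r, create_graph=True)[0]
         for k in range(3)]
    div = g[0][:, 0] + g[1][:, 1] + g[2][:, 2]
    curl = torch.stack([g[2][:, 1] - g[1][:, 2],
                        g[0][:, 2] - g[2][:, 0],
                        g[1][:, 0] - g[0][:, 1]], dim=1)
    return div, curl

# Usage: model = FieldNet(hidden_layers=8, width=64)
#        div, curl = div_and_curl(model, torch.rand(100, 3))
```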

The network can then be trained with a loss function combining data, curl and divergence losses,

$$\begin{aligned} \text {loss} = \text {loss}_{\varvec{B}} + \lambda (\text {loss}_{\varvec{\nabla \cdot B}} + \text {loss}_{\varvec{\nabla \times B}}), \end{aligned}$$

(7)

where

$$\begin{aligned} \text {loss}_{\varvec{B}} := \frac{1}{N_{\varvec{B}}} \sum _{i=1}^{N_{\varvec{B}}} \Vert \varvec{B}(\varvec{r}_{\varvec{B}}^{i})-\varvec{B}_{s}(\varvec{r}_{\varvec{B}}^{i})\Vert ^2, \end{aligned}$$

(8)

$$\begin{aligned} \text {loss}_{\varvec{\nabla \cdot B}} := \frac{1}{N_f} \sum _{i=1}^{N_f} |\varvec{\nabla \cdot B}(\varvec{r}_{d}^{i})|^2, \end{aligned}$$

(9)

and

$$\begin{aligned} \text {loss}_{\varvec{\nabla \times B}} := \frac{1}{N_f} \sum _{i=1}^{N_f} \Vert \varvec{\nabla \times B}(\varvec{r}_{d}^{i})\Vert ^2, \end{aligned}$$

(10)

where the points \(\varvec{r}_{\varvec{B}}^{i}\) and \(\varvec{r}_{d}^{i}\) designate the positions of the magnetic sensors and the collocation points, respectively. \(N_{\varvec{B}}\) is the number of magnetic field sensors, \(N_f\) is the number of collocation points in the domain, and \(\varvec{B}_{s}\) is the measured magnetic field vector at \(\varvec{r}_{\varvec{B}}^{i}\). The parameter \(\lambda\) in Eq. (7) can be adjusted according to the performance of the network. The collocation points \(\varvec{r}_{d}^{i}\) in Eqs. (9) and (10) are taken from the volume enclosed by the surface S (Fig. 1) and can be kept fixed throughout the training process11. However, randomly selecting new collocation points in each epoch leads to faster convergence as well as more accurate results. This is partly because fewer collocation points are needed per iteration, and because they are drawn at random in each iteration they represent the domain better than any fixed collocation-point scheme. We use the Adam optimizer17, an adaptive method for first-order gradient-based optimization, to minimize the loss function (7). The general procedure for training is given in Algorithm 1.
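
Putting the pieces together, the following is a sketch of the training procedure as we read it from Eqs. (7)–(10) and the description above (it is not the paper's Algorithm 1 verbatim). It reuses `FieldNet` and `div_and_curl` from the sketch above; sampling collocation points uniformly in a \([-1,1]^3\) box is an assumption standing in for the volume enclosed by S, and all hyperparameter values are placeholders.

```python
import torch

def train(model, r_B, B_s, n_colloc=4096, lam=1.0, epochs=20000, lr=1e-3):
    """Training sketch: r_B (N_B, 3) sensor positions on S, B_s (N_B, 3)
    measured field vectors; div_and_curl as defined in the earlier sketch."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        # Fresh random collocation points every epoch (assumed [-1, 1]^3
        # box standing in for the volume enclosed by S).
        r_d = 2.0 * torch.rand(n_colloc, 3) - 1.0

        loss_B = ((model(r_B) - B_s) ** 2).sum(dim=1).mean()  # Eq. (8)
        div, curl = div_and_curl(model, r_d)
        loss_div = (div ** 2).mean()                          # Eq. (9)
        loss_curl = (curl ** 2).sum(dim=1).mean()             # Eq. (10)
        loss = loss_B + lam * (loss_div + loss_curl)          # Eq. (7)

        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```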

Algorithm 1
