Nonlinear problems in GetDP

$ \newcommand{\vec}[1]{\mathbf{#1}} \newcommand{\mat}[1]{\mathbf{#1}} \newcommand{\curl}{\mathrm{curl}\;} \newcommand{\B}{\mathfrak{B}} \newcommand{\Ja}{\boldsymbol{\mathsf{J}}} \newcommand{\JacNL}{\boldsymbol{\mathsf{JacNL}}} $

This page explains how a system of nonlinear partial differential equations is written and solved in GetDP. A summary of the Picard and Newton-Raphson methods is recalled first, before proceeding to the GetDP implementation. The \verb|JacNL| function is then explained in detail with a practical example.

Resolution of nonlinear problems

The finite element method makes it possible to approximate a system of partial differential equations by a system of algebraic equations which can be written in the following matrix form: \begin{equation} \mat{A}\;.\;\vec{x}=\vec{b}, \end{equation} where $\mat{A}$ is the system matrix of the problem, $\vec{b}$ accounts for possible source terms and $\vec{x}$ represents the field of unknowns that has to be calculated.

In the presence of nonlinear media in the finite element model, the matrix $\mat{A}$ depends on the unknown field $\vec{x}$, and the system of equations becomes nonlinear: \begin{equation} \mat{A}(\vec{x})\;.\;\vec{x}=\vec{b}. \label{Sys} \end{equation} Such a system must be solved iteratively. Starting from an initial vector $\vec{x}_0$ (a zero vector for example), successive approximations $\vec{x}_1$, $\vec{x}_2$, ... are computed, which are expected to come closer and closer to the exact solution. For the exact solution, the residual defined by $\vec{r}(\vec{x})=\mat{A}(\vec{x})\;.\;\vec{x}-\vec{b}$ equals zero. If a satisfactory convergence is obtained after $p$ iterations, the iterative process is stopped. The convergence criterion can be based on some norm of the residual $\vec{r}(\vec{x}_p)$ or on the $p^\text{th}$ increment $\vec{\delta x}_p=\vec{x}_p-\vec{x}_{p-1}$. For example, it could be: \begin{equation} \frac{||\vec{\delta x}_p||_\infty}{||\vec{x}_p||_\infty}\;<\;\varepsilon, \end{equation} with $\varepsilon$ a small dimensionless tolerance ($10^{-6}$, ..., $10^{-3}$).
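
As a minimal sketch of this stopping test (an illustration only, not part of GetDP; the function name is hypothetical and NumPy is assumed), the relative increment criterion can be written as:

<syntaxhighlight lang="python" enclose="div">

import numpy as np

def increment_converged(x_new, x_old, eps=1e-6):
    # Relative increment test: ||dx||_inf / ||x||_inf < eps,
    # written in product form to avoid dividing by a zero norm.
    dx = x_new - x_old
    return np.linalg.norm(dx, np.inf) < eps * np.linalg.norm(x_new, np.inf)

</syntaxhighlight>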

Picard's method

Picard iteration is an easy way of handling nonlinear partial differential equations. In the Picard iterative process, a new approximation $\vec{x}_i$ is calculated by using the known previous solution $\vec{x}_{i-1}$ in the nonlinear terms, so that these terms become linear in the unknown $\vec{x}_i$. The problem then becomes: \begin{equation} \mat{A}(\vec{x}_{i-1}).\;\vec{x}_{i} = \vec{b}. \label{Picard} \end{equation}

The following iterative loop summarizes the Picard method:

$\vec{x}_0=\vec{0}$; // Initialization
for $i=1,2,3,...$ {
   $\vec{x}_{i} = \Big( \mat{A}(\vec{x}_{i-1}) \Big)^{-1} \vec{b}$; // Find the new $\vec{x}$
}
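
As a sketch of this loop outside of GetDP (illustrative only; \verb|assemble_A| is a hypothetical user-supplied function returning the matrix $\mat{A}(\vec{x})$ as a NumPy array, and the stopping test is the relative increment criterion given above):

<syntaxhighlight lang="python" enclose="div">

import numpy as np

def picard(assemble_A, b, eps=1e-6, max_iter=50):
    """Picard (fixed-point) iteration: solve A(x_{i-1}) x_i = b until convergence."""
    x = np.zeros_like(b)                             # initialization x_0 = 0
    for i in range(1, max_iter + 1):
        x_new = np.linalg.solve(assemble_A(x), b)    # find the new x
        if np.linalg.norm(x_new - x, np.inf) < eps * np.linalg.norm(x_new, np.inf):
            return x_new, i                          # converged after i iterations
        x = x_new
    return x, max_iter                               # stopped without convergence

</syntaxhighlight>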

Newton-Raphson method

Usually, the Newton-Raphson method (NR-method) is used. In that case, a new approximation $\vec{x}_i=\vec{x}_{i-1}+\vec{\delta x}_i$ is obtained through the linearization of the residual vector $\vec{r}(\vec{x}_i)$ around the previous approximation $\vec{x}_{i-1}$: \begin{equation} \vec{r}(\vec{x}_i)=\vec{r}(\vec{x}_{i-1}+\vec{\delta x}_i)=0, \end{equation} \begin{equation} \vec{r}(\vec{x}_{i-1})+(\frac{\partial \vec{r}}{\partial \vec{x}})_{i-1} \vec{\delta x}_i + o(\vec{\delta x}_i^2)= 0. \end{equation} By neglecting the higher-order terms $o(\vec{\delta x}_i^2)$, a system of linear algebraic equations in $\vec{\delta x}_i$ is deduced: \begin{equation} (\frac{\partial \vec{r}}{\partial \vec{x}})_{i-1} \vec{\delta x}_i = -\vec{r}(\vec{x}_{i-1}). \label{NR} \end{equation}


Remark. The derivative of an $m \times 1$-column vector $\vec{y}$ with respect to an $n \times 1$-column vector $\vec{x}$ is an $m \times n$-matrix whose $(j,k)^\text{th}$ element is given by: \begin{equation*} (\frac{\partial \vec{y}}{\partial \vec{x}})_{j,k}=\frac{\partial \vec{y}_j}{\partial \vec{x}_k}. \end{equation*}

For the system written in \eqref{Sys}, with the right-hand side $\vec{b}$ assumed known, the formulation \eqref{NR} becomes : \begin{equation} (\frac{\partial \big( \mat{A}(\vec{x}).\;\vec{x} \big)}{\partial \vec{x}})_{i-1} \vec{\delta x}_i = \vec{b}-\mat{A}(\vec{x}_{i-1})\;.\;\vec{x}_{i-1}. \label{SysNR} \end{equation}
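
As a toy scalar illustration (not from the original formulation, only to fix ideas), take $\mat{A}(\vec{x})\;.\;\vec{x} = x\cdot x = b$ with a single unknown $x$. The Jacobian is $\partial(x\cdot x)/\partial x = 2x$, so \eqref{SysNR} reads \begin{equation*} 2x_{i-1}\,\delta x_i = b - x_{i-1}^2 \quad\Rightarrow\quad x_i = x_{i-1}+\delta x_i = \tfrac{1}{2}\Big(x_{i-1}+\frac{b}{x_{i-1}}\Big), \end{equation*} i.e. the classical Heron iteration for $\sqrt{b}$, which converges quadratically. The Picard iteration \eqref{Picard} would instead give $x_i = b/x_{i-1}$, which in this particular example merely oscillates between two values.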

Each NR-iteration provides a unique increment $\vec{\delta x}_i$ which depends on values taken from the previous iteration step $i-1$. Nevertheless, convergence is by no means guaranteed: in case of strong nonlinearity and/or an unfavorable initial guess, divergence is not unlikely. Sufficiently close to the solution, the convergence of the NR-method is quadratic. In order to ensure or accelerate convergence, relaxation techniques may be applied: \begin{equation} \vec{x}_i=\vec{x}_{i-1}+\gamma_i \;\vec{\delta x}_i, \end{equation} with $\gamma_i$ the so-called relaxation factor, which should be chosen judiciously (typically between $0$ and $1$).

The matrix which appears in the system \eqref{SysNR} is called the Jacobian matrix (or tangent stiffness matrix) of the system and is denoted $\Ja$: \begin{equation} \Ja=(\frac{\partial \big( \mat{A}\;.\;\vec{x} \big)}{\partial \vec{x}}). \label{Jac} \end{equation}

The NR-method can be summarized by the following iterative loop :

$\vec{x}_0=\vec{0}$; // Initialization
for $i=1,2,3,...$ { 
   $\vec{\delta x}_i = \Big( \Ja(\vec{x}_{i-1}) \Big)^{-1} \big( \vec{b}-\mat{A}(\vec{x}_{i-1})\;.\;\vec{x}_{i-1} \big)$; // Find the new $\vec{\delta x}$
   $\vec{x}_i=\vec{x}_{i-1}+\gamma_i \;\vec{\delta x}_i$; // Update the value of $\vec{x}$
}
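
A minimal Python sketch of this damped Newton loop (an illustration only; \verb|assemble_A| and \verb|jacobian| are hypothetical user-supplied callbacks returning $\mat{A}(\vec{x})$ and $\Ja(\vec{x})$ as NumPy arrays):

<syntaxhighlight lang="python" enclose="div">

import numpy as np

def newton_raphson(assemble_A, jacobian, b, gamma=1.0, eps=1e-6, max_iter=50):
    """Damped Newton-Raphson: solve J(x_{i-1}) dx_i = b - A(x_{i-1}) x_{i-1}."""
    x = np.zeros_like(b)                          # initialization x_0 = 0
    for i in range(1, max_iter + 1):
        rhs = b - assemble_A(x) @ x               # minus the residual, -r(x_{i-1})
        dx = np.linalg.solve(jacobian(x), rhs)    # increment dx_i
        x = x + gamma * dx                        # relaxed update of x
        if np.linalg.norm(dx, np.inf) < eps * np.linalg.norm(x, np.inf):
            return x, i                           # converged after i iterations
    return x, max_iter                            # stopped without convergence

</syntaxhighlight>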

Implementation of nonlinear problems in GetDP

As illustrated above, iterative loops are essential for nonlinear analysis. In GetDP, such a loop is started with the \verb|IterativeLoop| command, which has to be placed at the appropriate level of the (possibly nested) resolution operations:

<syntaxhighlight lang="cpp" enclose="div">

IterativeLoop [exp-cst,exp,exp-cst<,exp-cst>] {resolution-op }

</syntaxhighlight>

The parameters are: the maximum number of iterations $i_{max}$ (in case convergence is not reached), the relative error $\varepsilon$ to achieve, and the relaxation factor $\gamma_i$ that multiplies the iterative correction (the optional fourth parameter is a flag for testing purposes). The \verb|resolution-op| field contains the operations that have to be performed at each iteration.

During a Newton-Raphson iteration, the system \eqref{SysNR} has to be generated and then solved. These steps are performed by the \verb|GenerateJac| and \verb|SolveJac| functions inside the \verb|resolution-op| field of an \verb|IterativeLoop| command. These two functions are briefly described hereafter:

<syntaxhighlight lang="cpp" enclose="div">

GenerateJac [system-id]

</syntaxhighlight>

Generate the system of equations \emph{system-id} (see \eqref{SysNR}) using a \emph{Jacobian matrix} $\Ja$ (whose unknowns are the corrections $\vec{\delta x}$ of the current solution $\vec{x}$).

<syntaxhighlight lang="cpp" enclose="div">

SolveJac [system-id]

</syntaxhighlight>

Solve the system of equations \emph{system-id} (see \eqref{SysNR}) using a \emph{Jacobian matrix} $\Ja$ (whose unknowns are the corrections $\vec{\delta x}$ of the current solution $\vec{x}$). Then increment the solution ($\vec{x}=\vec{x}+\vec{\delta x}$) and compute the relative error $||\vec{\delta x}||/||\vec{x}||$.

In GetDP, the system of equations that has to be generated and solved is expressed in an "Equation"-block of a `.pro' file. When calling \verb|GenerateJac|, the \emph{Jacobian matrix} $\Ja$ is built by assembling the \emph{stiffness matrix} $\mat{A}$ with the additional terms incorporated in a \verb|JacNL| function in the formulation of a nonlinear problem. In other words, by definition: \begin{equation} \Ja:= \mat{A} + \JacNL, \label{J=A+JacNL} \end{equation} where $\JacNL$ stands for the elements included in a \verb|JacNL| function, while $\mat{A}$ gathers the classical terms that are not attached to a \verb|JacNL| group. In case the problem is linear, i.e. the $\mat{A}$-matrix does not depend on the unknowns $\vec{x}$, the $\JacNL$ part vanishes and the \emph{Jacobian matrix} reduces to $\Ja = \mat{A}$. In the light of this, $\JacNL$ represents the nonlinear part of the \emph{Jacobian matrix}.

Assuming that all the terms of the formulation are integrated over the complete domain $\Omega$ and knowing that $\vec{x}_i=\vec{x}_{i-1}+\vec{\delta x}_i$, the system \eqref{SysNR} combined with the notations \eqref{Jac} and \eqref{J=A+JacNL} can be rewritten as: \begin{equation*} \Big( \mat{A}(\vec{x}_{i-1}) + \JacNL (\vec{x}_{i-1}) \Big) .\;\vec{\delta x}_i = \vec{b}-\mat{A}(\vec{x}_{i-1})\;.\;\vec{x}_{i-1}, \end{equation*} \begin{equation} \mat{A}(\vec{x}_{i-1}).\;\vec{x}_{i} + \JacNL (\vec{x}_{i-1}) \;.\; \vec{\delta x}_i = \vec{b}, \label{Formu} \end{equation} where $\vec{x}_i$ and $\vec{\delta x}_i$ are both discrete unknown quantities while $\vec{x}_{i-1}$ is already known from the previous iteration.
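
Conceptually, the split \eqref{J=A+JacNL} amounts to assembling two contributions and adding them before solving \eqref{SysNR} for the increment. The following Python sketch mimics this bookkeeping (hypothetical helper names; it is not how GetDP is implemented internally, only an illustration of what \verb|GenerateJac| and \verb|SolveJac| are described to do above):

<syntaxhighlight lang="python" enclose="div">

import numpy as np

def generate_jac(assemble_A, assemble_JacNL, b, x_prev):
    """Build the Newton system J dx = b - A(x_prev) x_prev, with J = A + JacNL."""
    A = assemble_A(x_prev)              # classical terms (Dof terms outside JacNL)
    J = A + assemble_JacNL(x_prev)      # plus the nonlinear correction (JacNL terms)
    rhs = b - A @ x_prev                # right-hand side built from the previous iterate
    return J, rhs

def solve_jac(J, rhs, x_prev, gamma=1.0):
    """Solve for the increment dx and update the solution x."""
    dx = np.linalg.solve(J, rhs)
    return x_prev + gamma * dx

</syntaxhighlight>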

Thanks to this last rewriting of the system, it is possible to interpret how the degrees of freedom have to be understood in an "Equation"-block of a `.pro' file. In GetDP, a \verb|Dof| symbol in front of a discrete quantity indicates that this quantity is an unknown, and should therefore not be considered as already computed. When calling \verb|GenerateJac|, the generated unknowns are the increments $\vec{\delta x}_i$ of the \verb|Dof| elements. However, as the reformulation \eqref{Formu} highlights, the \verb|Dof| terms that are not in a \verb|JacNL| function ("$\mat{A}(\vec{x}_{i-1}).\; \vec{\delta x}_i$") can be grouped with the previous values placed on the right-hand side ("$\mat{A}(\vec{x}_{i-1}).\; \vec{x}_{i-1}$") built by \verb|GenerateJac|, so that the unknowns linked to these terms can be seen as classical \verb|Dof| quantities: \begin{equation*} \verb|Dof{x}| \rightarrow \vec{x}_i. \end{equation*} By contrast, for the elements inside a \verb|JacNL| function, no suitable additional previous terms are generated on the right-hand side by \verb|GenerateJac|, so that a \verb|Dof| quantity is understood by GetDP as the unknown increment of this quantity at the current iteration. As is visible in the rewriting \eqref{Formu}, the $\JacNL$ operator acts specifically on the increment $\vec{\delta x}_i$. With this interpretation: \begin{equation*} \verb|JacNL[Dof{x}...]| \rightarrow \vec{\delta x}_i. \end{equation*} Finally, if a quantity is not distinguished by a \verb|Dof| symbol, GetDP uses the last computed value (the result of the previous iteration): \begin{equation*} \verb|{x}| \rightarrow \vec{x}_{i-1}. \end{equation*}

Thanks to these interpretations, any system in the form \eqref{Formu} can be naturally expressed in an "Equation"-block of a `.pro' file and solved with the NR-method.

At this point, only the implementation of the NR-method in GetDP has been discussed. Actually, the only difference between the NR-method and Picard's method lies in the presence or absence of the $\JacNL$ operator. If the $\JacNL$ term is removed from the Newton-Raphson formulation \eqref{Formu}, the Picard system \eqref{Picard} is retrieved. So, from the implementation point of view, simply removing the \verb|JacNL| content in the "Equation"-block of a `.pro' file transforms a Newton-Raphson iteration into a Picard iteration.

The next section presents in concrete terms how to write nonlinear partial differential equations (PDE) in GetDP using a practical example.

An example

PDE in strong form

\begin{equation} \curl \left(\frac{1}{\sigma} \curl \vec{h} \right) + \frac{\partial}{\partial t} \left( \mu(\vec{h})\;.\;\vec{h} \right) = 0. \end{equation}

PDE in weak form

\begin{equation} \left( \frac{1}{\sigma} \curl \vec{h} \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\partial}{\partial t} \left( \mu(\vec{h})\;.\;\vec{h} \right)\;.\; \vec{h'} \right)_\Omega = 0, \end{equation}

with $\vec{h'}$ a test function; in GetDP, $\vec{h'}$ is chosen as $\vec{h'}=\vec{h}$ (Galerkin approach).

Backward Euler: \begin{equation} \frac{\partial}{\partial t} \left( \mu(\vec{h})\;.\;\vec{h} \right) \quad \rightarrow \quad \frac{\mu\left(\vec{h}^n\right)\;.\;\vec{h}^n - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t}, \end{equation} where the superscript $n$ denotes the time step and the subscript $i$ the iteration number: $n \rightarrow$ current time step, $n-1 \rightarrow$ known previous time step, $i \rightarrow$ current iteration, $i-1 \rightarrow$ known previous iteration.

In GetDP
$\vec{h}^n_i \rightarrow$ the unknown we are looking for: $\verb|Dof{H}|$
$\vec{h}^n_{i-1} \rightarrow$ current time step, but value from the last iteration: $\verb|{H}|$
$\vec{h}^{n-1} \rightarrow$ value from the last time step: $\verb|{H}[1]|$
$\Delta t \rightarrow$ time step size: $\verb|DTime|$

Picard's Method: $\mu$-version

For the nonlinear part ($\mu$ in our case), use $\vec{h}$ from the last iteration : \begin{equation} \boxed{ \left( \frac{1}{\sigma} \curl \vec{h}^n_i \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\mu\left(\vec{h}^n_{i-1}\right)\;.\;\vec{h}^n_i - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t}\;.\; \vec{h'} \right)_\Omega = 0. }\end{equation}

Formulation in GetDP :

<syntaxhighlight lang="cpp" enclose="div">

Formulation {
  { Name Eddy_Formulation_H_Pic; Type FemEquation;
    Quantity {
      { Name H; Type Local; NameOfSpace Hcurl_hphi; }
    }
    Equation {
      Galerkin { [ 1/sig[] * Dof{Curl H} , {Curl H} ];
        In Domain; Jacobian J; Integration I; }
      Galerkin { [ 1.0/$DTime * mu_NL[{H}] * Dof{H} , {H} ];
        In Domain; Jacobian J; Integration I; }
      Galerkin { [ -1.0/$DTime * mu_NL[{H}[1]] * {H}[1] , {H} ];
        In Domain; Jacobian J; Integration I; }
    }
  }
}

Resolution {
  { Name Solution_Nonlinear_H_Pic;
    System {
      { Name A; NameOfFormulation Eddy_Formulation_H_Pic; }
    }
    Operation {
      InitSolution[A];
      TimeLoopTheta[T_Time0, T_TimeMax, T_DTime[], T_Theta[]] {
        IterativeLoop[maxit, eps, relax] {
          GenerateJac[A];
          SolveJac[A];
        }
        SaveSolution[A];
      }
    }
  }
}

</syntaxhighlight>

Newton-Raphson Method: $\mu$-version

We want to set the following function $f$ equal to zero: \begin{equation} f\left(\vec{h}^n\right)=\left( \frac{1}{\sigma} \curl \vec{h}^n \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\mu\left(\vec{h}^n\right)\;.\;\vec{h}^n - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t}\;.\; \vec{h'} \right)_\Omega = 0. \end{equation}

Algorithm : \begin{equation} \vec{h}^n_i=\vec{h}^n_{i-1}-\frac{f(\vec{h}^n_{i-1})}{\frac{\partial}{\partial \; \vec{h}^n_{i-1}} f(\vec{h}^n_{i-1})}. \label{Algo} \end{equation}

Defining: \begin{equation} \delta\vec{h}=\vec{h}^n_i-\vec{h}^n_{i-1} = \text{change of } \vec{h} \text{ per iteration}, \label{def} \end{equation} equation \eqref{Algo} can be rewritten as \begin{equation} \boxed{ \frac{\partial}{\partial \; \vec{h}^n_{i-1}} f(\vec{h}^n_{i-1}) \;.\; \delta\vec{h} + f(\vec{h}^n_{i-1}) = 0. } \label{Algo2} \end{equation} \underline{Note}: in GetDP, the keyword \verb|JacNL| indicates that \verb|Dof{H}| here means the change of $\vec{h}$ in the iteration: \begin{equation*} \delta\vec{h} \; \rightarrow \; \verb|JacNL[Dof{H}...]|. \end{equation*}

Equation \eqref{Algo2} has to be implemented in the "Equation"-block, with \begin{equation} \begin{split} f\left(\vec{h}^n_{i-1}\right) = & \; \left( \frac{1}{\sigma} \curl \vec{h}^n_{i-1} \;.\; \curl \vec{h'} \right)_\Omega \\ &+ ( \frac{\mu\left(\vec{h}^n_{i-1}\right)\;.\;\vec{h}^n_{i-1} - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t}\;.\; \vec{h'} )_\Omega, \end{split} \label{f} \end{equation}

\begin{equation} \begin{split} \frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} = & \; \left( \frac{1}{\sigma} \curl \delta\vec{h} \;.\; \curl \vec{h'} \right)_\Omega \\ &+ ( \frac{1}{\Delta t} \Bigg( \mu ( \vec{h}^n_{i-1}) + \frac{\partial \mu}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\;\vec{h}^n_{i-1}\Bigg)\;.\;\delta\vec{h}\;.\;\vec{h'} )_\Omega. \end{split} \label{df} \end{equation}

Inserting \eqref{f} and \eqref{df} in \eqref{Algo2} and simplifying with \eqref{def} yields: \begin{equation} \boxed{ \begin{split} \frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} + f&\left(\vec{h}^n_{i-1}\right) = \; \left( \frac{1}{\sigma} \curl \vec{h}^n_i \;.\; \curl \vec{h'} \right)_\Omega \\ &+ ( \frac{1}{\Delta t} \Bigg( \mu ( \vec{h}^n_{i-1})\;.\;\vec{h}^n_i - \mu(\vec{h}^{n-1})\;.\;\vec{h}^{n-1}\Bigg)\;.\;\vec{h'} )_\Omega \\ &+ ( \frac{1}{\Delta t} \;.\; \frac{\partial \mu}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\;\vec{h}^n_{i-1} \;.\; \delta\vec{h} \;.\; \vec{h'})_\Omega = 0. \end{split} } \end{equation}

Formulation in GetDP :

<syntaxhighlight lang="cpp" enclose="div">

Formulation {
  { Name Eddy_Formulation_H_NR; Type FemEquation;
    Quantity {
      { Name H; Type Local; NameOfSpace Hcurl_hphi; }
    }
    Equation {
      Galerkin { [ 1/sig[] * Dof{Curl H} , {Curl H} ];
        In Domain; Jacobian J; Integration I; }
      Galerkin { [ 1.0/$DTime * mu_NL[{H}] * Dof{H} , {H} ];
        In Domain; Jacobian J; Integration I; }
      Galerkin { [ -1.0/$DTime * mu_NL[{H}[1]] * {H}[1] , {H} ];
        In Domain; Jacobian J; Integration I; }
      Galerkin { JacNL [ 1.0/$DTime * dmudH[{H}] * Norm[{H}] * Dof{H} , {H} ];
        In Domain; Jacobian J; Integration I; }
    }
  }
}

Resolution {
  { Name Solution_Nonlinear_H_NR;
    System {
      { Name A; NameOfFormulation Eddy_Formulation_H_NR; }
    }
    Operation {
      InitSolution[A];
      TimeLoopTheta[T_Time0, T_TimeMax, T_DTime[], T_Theta[]] {
        IterativeLoop[maxit, eps, relax] {
          GenerateJac[A];
          SolveJac[A];
        }
        SaveSolution[A];
      }
    }
  }
}

</syntaxhighlight>

Newton-Raphson Method: $b$-version

Following the same procedure with \begin{equation*} f\left(\vec{h}^n\right)= \left( \frac{1}{\sigma} \curl \vec{h}^n \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\B(\vec{h}^n)-\B(\vec{h}^{n-1} ) } {\Delta t}\;.\; \vec{h'} \right)_\Omega = 0, \end{equation*}

gives \begin{equation*} f\left(\vec{h}^n_{i-1}\right) = \; \left( \frac{1}{\sigma} \curl \vec{h}^n_{i-1} \;.\; \curl \vec{h'} \right)_\Omega + ( \frac{\B(\vec{h}^n_{i-1})-\B(\vec{h}^{n-1}) }{\Delta t}\;.\; \vec{h'} )_\Omega, \end{equation*}

\begin{equation*} \frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} = \; \left( \frac{1}{\sigma} \curl \delta\vec{h} \;.\; \curl \vec{h'} \right)_\Omega + ( \frac{1}{\Delta t} \frac{\partial \B}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\;\delta\vec{h}\;.\;\vec{h'} )_\Omega. \end{equation*}

\begin{equation*} \boxed{ \begin{split} \frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} + f\left(\vec{h}^n_{i-1}\right) &= \; \left( \frac{1}{\sigma} \curl \vec{h}^n_i \;.\; \curl \vec{h'} \right)_\Omega \\ &+ ( \frac{1}{\Delta t} \Bigg( \B(\vec{h}^n_{i-1} ) - \B(\vec{h}^{n-1}) \Bigg)\;.\;\vec{h'} )_\Omega \\ &+ ( \frac{1}{\Delta t} \;.\; \frac{\partial \B}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\; \delta\vec{h} \;.\; \vec{h'})_\Omega = 0. \end{split} } \end{equation*}

Formulation in GetDP :

<syntaxhighlight lang="cpp" enclose="div">

Formulation {
  { Name Eddy_Formulation_B_NR; Type FemEquation;
    Quantity {
      { Name H; Type Local; NameOfSpace Hcurl_hphi; }
    }
    Equation {
      Galerkin { [ 1/sig[] * Dof{Curl H} , {Curl H} ];
        In Domain; Jacobian J; Integration I; }
      Galerkin { [ 1.0/$DTime * B[{H}] , {H} ];
        In Domain; Jacobian J; Integration I; }
      Galerkin { [ -1.0/$DTime * B[{H}[1]] , {H} ];
        In Domain; Jacobian J; Integration I; }
      Galerkin { JacNL [ 1.0/$DTime * dBdH[{H}] * Dof{H} , {H} ];
        In Domain; Jacobian J; Integration I; }
    }
  }
}

Resolution {
  { Name Solution_Nonlinear_B_NR;
    System {
      { Name A; NameOfFormulation Eddy_Formulation_B_NR; }
    }
    Operation {
      InitSolution[A];
      TimeLoopTheta[T_Time0, T_TimeMax, T_DTime[], T_Theta[]] {
        IterativeLoop[maxit, eps, relax] {
          GenerateJac[A];
          SolveJac[A];
        }
        SaveSolution[A];
      }
    }
  }
}

</syntaxhighlight>