Nonlinear problems in GetDP

From ONELAB
Latest revision as of 19:44, 6 November 2016

$
\newcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\mat}[1]{\mathbf{#1}}
\newcommand{\curl}{\mathrm{curl}\;}
\newcommand{\B}{\mathfrak{B}}
\newcommand{\Ja}{\boldsymbol{\mathsf{J}}}
\newcommand{\JacNL}{\boldsymbol{\mathsf{JacNL}}}
$

This page explains how a system of ''nonlinear'' partial differential equations is written and solved in GetDP. A summary of the Picard and Newton-Raphson methods is recalled first, before proceeding to the GetDP implementation. The <code>JacNL</code> function is explained in detail on a practical example.

== Resolution of nonlinear problems ==

For linear problems, the finite element method leads to the solution of linear systems of algebraic equations of the form:
\begin{equation}
\mat{A} \; \vec{x}=\vec{b},
\end{equation}
where $\mat{A}$ is a square matrix (e.g. the stiffness matrix of the problem), $\vec{b}$ takes into account potential source terms and $\vec{x}$ is the vector of unknowns to be calculated.

In the presence of material nonlinearities, the matrix $\mat{A}$ depends on the unknown field $\vec{x}$, and the system of equations becomes nonlinear:
\begin{equation}
\mat{A}(\vec{x}) \; \vec{x}=\vec{b}.
\label{Sys}
\end{equation}
Therefore, the system must be solved iteratively. Starting from an initial guess vector $\vec{x}_0$ (e.g. a zero vector), the successive approximations $\vec{x}_1$, $\vec{x}_2$, ... are hoped to converge to the correct solution. For the exact solution, the residual defined by $\vec{r}(\vec{x})=\mat{A}(\vec{x}) \; \vec{x}-\vec{b}$ is zero. If a satisfactory convergence is obtained after $p$ iterations, the iterative process is stopped. The convergence criterion can be based on some norm of the residual $\vec{r}(\vec{x}_p)$ or of the $p^\text{th}$ increment $\vec{\delta x}_p=\vec{x}_p-\vec{x}_{p-1}$. For example, it could be:
\begin{equation}
\frac{||\vec{\delta x}_p||_\infty}{||\vec{x}_p||_\infty} < \varepsilon,
\end{equation}
with $\varepsilon$ a small dimensionless number (e.g. $10^{-6}$).

=== Picard's method ===

Picard's iteration provides an easy way to handle the nonlinearity. In the Picard iterative process, a new approximation $\vec{x}_i$ is calculated by using a known, previous solution $\vec{x}_{i-1}$ in the nonlinear terms, so that these terms become linear in the unknown $\vec{x}_i$. Therefore, the problem becomes:
\begin{equation}
\mat{A}(\vec{x}_{i-1}) \; \vec{x}_{i} = \vec{b}.
\label{Picard}
\end{equation}

The following iterative process summarizes the Picard method:

  $\vec{x}_0=\vec{0}$; // Initialization
  for $i=1,2,3,...$ {
     $\vec{x}_{i} = \Big( \mat{A}(\vec{x}_{i-1}) \Big)^{-1} \vec{b}$; // Find the new $\vec{x}$
  }
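The Picard loop above can also be sketched numerically. The following Python snippet (a minimal illustration, not GetDP code) applies the Picard iteration to a hypothetical scalar problem $(2+x^2)\,x = 3$, whose exact solution is $x=1$, using the relative infinity-norm stopping criterion introduced earlier:

```python
import numpy as np

def A(x):
    # hypothetical nonlinear "material" matrix: A depends on the unknown x
    return np.array([[2.0 + x[0]**2]])

b = np.array([3.0])

def picard(A, b, eps=1e-10, imax=100):
    """Solve A(x) x = b by Picard iteration: at each step the nonlinear
    matrix is evaluated at the previous iterate, A(x_{i-1}) x_i = b."""
    x = np.zeros_like(b)                      # x_0 = 0 (initialization)
    for i in range(1, imax + 1):
        x_new = np.linalg.solve(A(x), b)      # x_i = A(x_{i-1})^{-1} b
        dx = x_new - x
        x = x_new
        # stop when ||dx||_inf / ||x||_inf < eps
        if np.linalg.norm(dx, np.inf) < eps * np.linalg.norm(x, np.inf):
            return x, i
    return x, imax

x, n_it = picard(A, b)   # converges to x = 1
```

With the mild nonlinearity chosen here the fixed-point map is contracting, so the iteration converges (linearly); for stronger nonlinearities Picard's method may stagnate or diverge, which motivates the Newton-Raphson method discussed next.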

=== Newton-Raphson method ===

Usually, the Newton-Raphson method (NR-method) is used. In that case, a new approximation $\vec{x}_i=\vec{x}_{i-1}+\vec{\delta x}_i$ is obtained through the linearization of the residual vector $\vec{r}(\vec{x}_i)$ around the previous approximated value $\vec{x}_{i-1}$:
\begin{equation}
\vec{r}(\vec{x}_i)=\vec{r}(\vec{x}_{i-1}+\vec{\delta x}_i)=0,
\end{equation}
\begin{equation}
\vec{r}(\vec{x}_{i-1})+(\frac{\partial \vec{r}}{\partial \vec{x}})_{i-1} \vec{\delta x}_i + o(\vec{\delta x}_i^2)= 0.
\end{equation}
By neglecting the higher-order terms $o(\vec{\delta x}_i^2)$, a system of linear algebraic equations in $\vec{\delta x}_i$ is deduced:
\begin{equation}
(\frac{\partial \vec{r}}{\partial \vec{x}})_{i-1} \vec{\delta x}_i = -\vec{r}(\vec{x}_{i-1}).
\label{NR}
\end{equation}

Remark: The derivative of an $m \times 1$-column vector $\vec{y}$ with respect to an $n \times 1$-column vector $\vec{x}$ is an $m \times n$-matrix whose $(j,k)^\text{th}$ element is given by:
\begin{equation*}
(\frac{\partial \vec{y}}{\partial \vec{x}})_{j,k}=\frac{\partial \vec{y}_j}{\partial \vec{x}_k}.
\end{equation*}

For the system written in \eqref{Sys}, with the right-hand side $\vec{b}$ assumed known, the formulation \eqref{NR} becomes:
\begin{equation}
(\frac{\partial \big( \mat{A}(\vec{x}) \; \vec{x} \big)}{\partial \vec{x}})_{i-1} \vec{\delta x}_i = \vec{b}-\mat{A}(\vec{x}_{i-1}) \; \vec{x}_{i-1}.
\label{SysNR}
\end{equation}

Each NR-iteration provides a unique increment solution $\vec{\delta x}_i$ which depends on values taken from the previous iteration step $i-1$. Nevertheless, convergence is by no means guaranteed: in case of strong nonlinearity and/or unfavorable initialization conditions, divergence is not unlikely. Sufficiently close to the solution, the convergence of the NR-method is quadratic. In order to ensure or accelerate convergence, relaxation techniques may be applied:
\begin{equation}
\vec{x}_i=\vec{x}_{i-1}+\gamma_i \; \vec{\delta x}_i,
\end{equation}
with $\gamma_i$ the so-called ''relaxation factor'', which should be chosen judiciously (typically between $0$ and $1$).

The matrix which appears in the system \eqref{SysNR} is called the ''Jacobian matrix'' (or ''tangent stiffness matrix'') of the system and is noted $\Ja$:
\begin{equation}
\Ja=(\frac{\partial \big( \mat{A}(\vec{x}) \; \vec{x} \big)}{\partial \vec{x}}).
\label{Jac}
\end{equation}
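To make the structure of \eqref{Jac} concrete, consider the case of a single unknown $x$ (a scalar sketch, added here purely for illustration). The product rule then gives
\begin{equation*}
\Ja = \frac{\partial \big( \mat{A}(x) \; x \big)}{\partial x} = \mat{A}(x) + \frac{\partial \mat{A}}{\partial x} \; x,
\end{equation*}
i.e. the Jacobian is the original matrix $\mat{A}$ plus an extra term involving the derivative of the material law. This extra term is precisely what the <code>JacNL</code> function introduced below provides.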

The NR-method can be summarized by the following iterative loop:

  $\vec{x}_0=\vec{0}$; // Initialization
  for $i=1,2,3,...$ {
     $\vec{\delta x}_i = \Big( \Ja(\vec{x}_{i-1}) \Big)^{-1} \big( \vec{b}-\mat{A}(\vec{x}_{i-1}) \; \vec{x}_{i-1} \big)$; // Find the new $\vec{\delta x}$
     $\vec{x}_i=\vec{x}_{i-1}+\gamma_i \; \vec{\delta x}_i$; // Update the value of $\vec{x}$
  }
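As for Picard's method, the NR loop above can be sketched numerically. The Python snippet below (a minimal illustration, not GetDP code) solves the same hypothetical scalar problem $(2+x^2)\,x = 3$, with exact solution $x=1$; the Jacobian is assembled explicitly from \eqref{Jac} and the relaxation factor $\gamma_i$ is kept constant:

```python
import numpy as np

def A(x):
    # hypothetical nonlinear "material" matrix
    return np.array([[2.0 + x[0]**2]])

def Jac(x):
    # tangent matrix J = A(x) + (dA/dx) x, with dA/dx = 2x here
    return A(x) + np.array([[2.0 * x[0] * x[0]]])

b = np.array([3.0])

def newton_raphson(A, Jac, b, gamma=1.0, eps=1e-12, imax=50):
    x = np.zeros_like(b)                          # x_0 = 0 (initialization)
    for i in range(1, imax + 1):
        r = A(x) @ x - b                          # residual r(x_{i-1})
        dx = np.linalg.solve(Jac(x), -r)          # J(x_{i-1}) dx = -r
        x = x + gamma * dx                        # relaxed update
        if np.linalg.norm(dx, np.inf) < eps * max(np.linalg.norm(x, np.inf), 1.0):
            return x, i
    return x, imax

x, n_it = newton_raphson(A, Jac, b)
```

Starting from the same initial guess, convergence is reached in far fewer iterations than with the Picard iteration, reflecting the quadratic convergence of the NR-method near the solution.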

== Implementation of nonlinear problems in GetDP ==

As illustrated above, iterative loops are essential for nonlinear analysis. In GetDP, a loop is launched thanks to the <code>IterativeLoop</code> command and has to be defined at an appropriate level of the recursive resolution operations:

<pre>
IterativeLoop [exp-cst,exp,exp-cst<,exp-cst>] { resolution-op }
</pre>

The parameters are: the maximum number of iterations $i_{max}$ (if no convergence), the relative error $\varepsilon$ to achieve and the relaxation factor $\gamma_i$ that multiplies the iterative correction. (The optional parameter is a flag for testing purposes.) The <code>resolution-op</code> field contains the operations that have to be performed at each iteration.

During a Newton-Raphson iteration, the system \eqref{SysNR} has to be generated and then solved. These steps are carried out by the <code>GenerateJac</code> and <code>SolveJac</code> functions inside the <code>resolution-op</code> field of an <code>IterativeLoop</code> command. These two functions are briefly described hereafter:

<pre>
GenerateJac [system-id]
</pre>

Generate the system of equations <code>system-id</code> (see \eqref{SysNR}) using a Jacobian matrix $\Ja$ (of which the unknowns are corrections $\vec{\delta x}$ of the current solution $\vec{x}$).

<pre>
SolveJac [system-id]
</pre>

Solve the system of equations <code>system-id</code> (see \eqref{SysNR}) using a Jacobian matrix $\Ja$ (a system whose unknowns are the corrections $\vec{\delta x}$ of the current solution $\vec{x}$). Then, increment the solution ($\vec{x}=\vec{x}+\vec{\delta x}$) and compute the relative error $\vec{\delta x}/\vec{x}$.

In GetDP, the system of equations that has to be generated and solved is expressed in an "Equation"-block of a `.pro' file. When calling <code>GenerateJac</code>, the Jacobian matrix $\Ja$ is built by assembling the matrix $\mat{A}$ with the additional terms incorporated in a <code>JacNL</code> function in the formulation of a nonlinear problem. In other words, by definition:
\begin{equation}
\Ja:= \mat{A} + \JacNL,
\label{J=A+JacNL}
\end{equation}
where $\JacNL$ stands for the elements included in a <code>JacNL</code> equation block, while $\mat{A}$ gathers the classical terms that are not attached to a <code>JacNL</code> equation block. In case the problem is linear, i.e. when the $\mat{A}$-matrix does not depend on the unknowns $\vec{x}$, the Jacobian matrix reduces to $\Ja = \mat{A}$ and the $\JacNL$ part vanishes. In the light of this, $\JacNL$ represents the nonlinear part of the Jacobian matrix.

By supposing that all the terms in the formulation are integrated on the complete domain $\Omega$ and knowing that $\vec{x}_i=\vec{x}_{i-1}+\vec{\delta x}_i$, the system \eqref{SysNR} combined with the notations \eqref{Jac} and \eqref{J=A+JacNL} can be rewritten as:
\begin{equation*}
\Big( \mat{A}(\vec{x}_{i-1}) + \JacNL (\vec{x}_{i-1}) \Big) \; \vec{\delta x}_i = \vec{b}-\mat{A}(\vec{x}_{i-1}) \; \vec{x}_{i-1},
\end{equation*}
\begin{equation}
\mat{A}(\vec{x}_{i-1}) \; \vec{x}_{i} + \JacNL (\vec{x}_{i-1}) \; \vec{\delta x}_i = \vec{b},
\label{Formu}
\end{equation}
where $\vec{x}_i$ and $\vec{\delta x}_i$ are both discrete unknown quantities while $\vec{x}_{i-1}$ is already known from the previous iteration.

Thanks to this last rewriting of the system, it is possible to interpret the way the degrees of freedom have to be understood in an "Equation"-block of a `.pro' file. In GetDP, a <code>Dof</code> symbol in front of a discrete quantity indicates that this quantity is an unknown, and should therefore not be considered as already computed. When calling <code>GenerateJac</code>, the generated unknowns are the increments $\vec{\delta x}_i$ of the <code>Dof</code> elements. However, as the reformulation \eqref{Formu} highlights, the <code>Dof</code> terms that are not in a <code>JacNL</code> function ("$\mat{A}(\vec{x}_{i-1}) \; \vec{\delta x}_i$") can be grouped with the previous values on the right-hand side ("$\mat{A}(\vec{x}_{i-1}) \; \vec{x}_{i-1}$") built by <code>GenerateJac</code>, so that the unknowns linked to these terms can be seen as classical <code>Dof</code> quantities:
\begin{equation*}
\verb|Dof{x}| \rightarrow \vec{x}_i.
\end{equation*}
On the contrary, for the elements inside a <code>JacNL</code> equation block, when using <code>GenerateJac</code>, no additional suitable previous terms are generated on the right-hand side, so that a <code>Dof</code> quantity will be understood by GetDP as being the unknown increment of this quantity at the current iteration. As is visible in the last rewriting of the system \eqref{Formu}, the $\JacNL$ operator acts specifically on the increment $\vec{\delta x}_i$. According to this interpretation:
\begin{equation*}
\verb|JacNL[Dof{x}...]| \rightarrow \vec{\delta x}_i.
\end{equation*}
Finally, if a quantity is not distinguished by a <code>Dof</code> symbol, GetDP will use the last computed value (the result of the last iteration):
\begin{equation*}
\verb|{x}| \rightarrow \vec{x}_{i-1}.
\end{equation*}

Thanks to these interpretations, any system in the form of \eqref{Formu} can be naturally expressed in an "Equation"-block of a `.pro' file and solved by the NR method.

At this point, only the implementation of the NR method in GetDP has been discussed. In fact, the only difference between the NR method and Picard's method lies in the presence or absence of the $\JacNL$ operator. If the $\JacNL$ term is removed from the Newton-Raphson formulation \eqref{Formu}, the Picard system \eqref{Picard} is recovered. From the implementation point of view, simply removing the \verb|JacNL| term in the "Equation"-block of a `.pro' file thus turns a Newton-Raphson iteration into a Picard iteration.
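As a scalar illustration of this difference, consider the hypothetical one-unknown problem $a(x)\,x = b$ with $a(x) = 2 + x^2$ and $b = 3$ (exact solution $x = 1$). The Python sketch below implements both iterations; the only difference is the extra derivative term $a'(x)\,x$, the scalar counterpart of the $\JacNL$ contribution:

```python
# Scalar analogue of A(x) x = b with a hypothetical "material law"
# a(x) = 2 + x**2 and b = 3, whose exact solution is x = 1.

def a(x):
    return 2.0 + x**2

def da(x):          # derivative of the material law
    return 2.0 * x

b = 3.0

def picard(x, steps):
    for _ in range(steps):
        x = b / a(x)                    # solve a(x_{i-1}) * x_i = b
    return x

def newton(x, steps):
    for _ in range(steps):
        f = a(x) * x - b
        df = a(x) + da(x) * x           # a(x) plus the "JacNL" term
        x -= f / df
    return x

# Newton reaches machine precision in a few iterations;
# Picard also converges, but only linearly.
assert abs(newton(0.0, 8) - 1.0) < 1e-12
assert abs(picard(0.0, 60) - 1.0) < 1e-8
```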

The next section presents in concrete terms how to write nonlinear partial differential equations in GetDP using a practical example.

== An example ==

Let us consider the following PDE in strong form, describing a magnetodynamic problem in terms of the magnetic field $\vec{h}$, where for clarity we omitted the boundary conditions and the source terms: \begin{equation} \curl \left(\frac{1}{\sigma} \curl \vec{h} \right) + \frac{\partial}{\partial t} \left( \mu(\vec{h})\;.\;\vec{h} \right) = 0. \end{equation}

Using a Galerkin approach, the weak form of this PDE reads: find $\vec{h}$ in a suitable function space such that \begin{equation} \left( \frac{1}{\sigma} \curl \vec{h} \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\partial}{\partial t} \left( \mu(\vec{h})\;.\;\vec{h} \right)\;.\; \vec{h'} \right)_\Omega = 0 \end{equation} holds for all test functions $\vec{h'}$ in the same space.

Let us analyse the nonlinear implementation when a backward Euler scheme is used for the time discretization: \begin{equation} \frac{\partial}{\partial t} \left( \mu(\vec{h})\;.\;\vec{h} \right) \quad \rightarrow \quad \frac{\mu\left(\vec{h}^n\right)\;.\;\vec{h}^n - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t} , \end{equation} where the upper index $n$ denotes the current time step (and $n-1$ the previous time step). In what follows the lower index $i$ will denote the current iteration number (and the index $i-1$ the previous, known one).

In GetDP, these quantities map as follows:
* $\vec{h}^n_i \rightarrow$ <code>Dof{H}</code> (the unknown we are solving for)
* $\vec{h}^n_{i-1} \rightarrow$ <code>{H}</code> (current time step, value from the last iteration)
* $\vec{h}^{n-1} \rightarrow$ <code>{H}[1]</code> (value from the last time step)
* $\Delta t \rightarrow$ <code>DTime</code> (time step size)
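These correspondences can be illustrated with a scalar stand-in for the magnetodynamic problem. The sketch below (Python, with a hypothetical law $\mu(h) = 2 + h^2$ and a decay equation $k\,h + \partial_t(\mu(h)\,h) = 0$) performs backward-Euler time stepping with an inner Picard loop; the variable names mirror the GetDP quantities listed above:

```python
# Scalar sketch of the time discretization, mirroring the GetDP names:
#   h_new <-> Dof{H}   (unknown at the current iteration)
#   h_it  <-> {H}      (current time step, value from the last iteration)
#   h_old <-> {H}[1]   (value from the previous time step)
#   dt    <-> DTime    (time step size)
# Hypothetical law mu(h) = 2 + h**2, equation k*h + d(mu(h)*h)/dt = 0.

def mu(h):
    return 2.0 + h**2

k, dt = 1.0, 0.1
h_old = 1.0                        # initial condition

for step in range(10):             # time loop (TimeLoopTheta analogue)
    h_it = h_old                   # initial guess: previous step value
    for it in range(50):           # nonlinear loop (IterativeLoop analogue)
        # Picard: mu is frozen at h_it, the Dof{H} factor is solved for
        h_new = mu(h_old) * h_old / (mu(h_it) + dt * k)
        if abs(h_new - h_it) < 1e-12:
            break
        h_it = h_new
    h_old = h_new                  # SaveSolution analogue

assert 0.0 < h_old < 1.0           # the solution decays toward zero
```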

=== Picard's Method: $\mu$-version ===

For the nonlinear part ($\mu$ in our case), use $\vec{h}$ from the last iteration: \begin{equation} \boxed{ \left( \frac{1}{\sigma} \curl \vec{h}^n_i \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\mu\left(\vec{h}^n_{i-1}\right)\;.\;\vec{h}^n_i - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t}\;.\; \vec{h'} \right)_\Omega = 0. }\end{equation}

Formulation in GetDP:

Formulation{
  { Name Eddy_Formulation_H_Pic; Type FemEquation;
    Quantity {
      { Name H; Type Local; NameOfSpace Hcurl_hphi; }
    }
    Equation {
      Galerkin { [ 1/sig[] * Dof{Curl H} , {Curl H} ];
                 In Domain; Jacobian J; Integration I; }
      Galerkin { [ 1.0/$DTime * mu_NL[{H}] * Dof{H} , {H} ];
                 In Domain; Jacobian J; Integration I; }
      Galerkin { [ -1.0/$DTime * mu_NL[{H}[1]] * {H}[1] , {H} ];
                 In Domain; Jacobian J; Integration I; }
    }
  }
}

Resolution{
  { Name Solution_Nonlinear_H_Pic;
    System {
      { Name A; NameOfFormulation Eddy_Formulation_H_Pic; }
    }
    Operation {
      InitSolution[A];
      TimeLoopTheta[T_Time0, T_TimeMax, T_DTime[], T_Theta[]] {
        IterativeLoop[maxit, eps, relax] {
          GenerateJac[A]; SolveJac[A];
        }
        SaveSolution[A];
      }
    }
  }
}

=== Newton-Raphson Method: $\mu$-version ===

We want to set the following function $f$ equal to zero: \begin{equation} f\left(\vec{h}^n\right)=\left( \frac{1}{\sigma} \curl \vec{h}^n \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\mu\left(\vec{h}^n\right)\;.\;\vec{h}^n - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t}\;.\; \vec{h'} \right)_\Omega = 0. \end{equation}

Algorithm: \begin{equation} \vec{h}^n_i=\vec{h}^n_{i-1}-\frac{f(\vec{h}^n_{i-1})}{\frac{\partial}{\partial \; \vec{h}^n_{i-1}} f(\vec{h}^n_{i-1})}. \label{Algo} \end{equation}

Defining: \begin{equation} \delta\vec{h}=\vec{h}^n_i-\vec{h}^n_{i-1} = \text{change of } \vec{h} \text{ per iteration}, \label{def} \end{equation} equation \eqref{Algo} can be rewritten as \begin{equation} \boxed{ \frac{\partial}{\partial \; \vec{h}^n_{i-1}} f(\vec{h}^n_{i-1}) \;.\; \delta\vec{h} + f(\vec{h}^n_{i-1}) = 0. } \label{Algo2} \end{equation} Note: as explained above, in GetDP the JacNL function implies that Dof{H} here means the change of $\vec{h}$ per iteration: \begin{equation*} \delta\vec{h} \; \rightarrow \; \verb|JacNL[Dof{H}...]|. \end{equation*}

Equation \eqref{Algo2} has to be implemented in the "Equation"-block, with \begin{equation} \begin{split} f\left(\vec{h}^n_{i-1}\right) = & \; \left( \frac{1}{\sigma} \curl \vec{h}^n_{i-1} \;.\; \curl \vec{h'} \right)_\Omega \\ &+ \left( \frac{\mu\left(\vec{h}^n_{i-1}\right)\;.\;\vec{h}^n_{i-1} - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t}\;.\; \vec{h'} \right)_\Omega, \end{split} \label{f} \end{equation}

\begin{equation} \begin{split} \frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} = & \; \left( \frac{1}{\sigma} \curl \delta\vec{h} \;.\; \curl \vec{h'} \right)_\Omega \\ &+ \left( \frac{1}{\Delta t} \left( \mu\left(\vec{h}^n_{i-1}\right) + \frac{\partial \mu}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\;\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h}\;.\;\vec{h'} \right)_\Omega. \end{split} \label{df} \end{equation}

Inserting \eqref{f} and \eqref{df} in \eqref{Algo2} and simplifying with \eqref{def} yields: \begin{equation} \boxed{ \begin{split} \frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} + f&\left(\vec{h}^n_{i-1}\right) = \; \left( \frac{1}{\sigma} \curl \vec{h}^n_i \;.\; \curl \vec{h'} \right)_\Omega \\ &+ \left( \frac{1}{\Delta t} \left( \mu\left(\vec{h}^n_{i-1}\right)\;.\;\vec{h}^n_i - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1}\right)\;.\;\vec{h'} \right)_\Omega \\ &+ \left( \frac{1}{\Delta t} \;.\; \frac{\partial \mu}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\;\vec{h}^n_{i-1} \;.\; \delta\vec{h} \;.\; \vec{h'}\right)_\Omega = 0. \end{split} } \end{equation}
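The only nontrivial ingredient here is the chain-rule derivative $\partial(\mu(\vec{h})\,\vec{h})/\partial\vec{h} = \mu + (\partial\mu/\partial\vec{h})\,\vec{h}$ that feeds the $\JacNL$ term. In the scalar case it can be validated against a finite difference, e.g. with a hypothetical law $\mu(h) = 2 + h^2$:

```python
# Scalar check of the chain rule used for the JacNL term:
#   d/dh [ mu(h) * h ] = mu(h) + dmu/dh * h
# with the hypothetical law mu(h) = 2 + h**2.

def mu(h):
    return 2.0 + h**2

def dmu(h):
    return 2.0 * h

h, eps = 1.3, 1e-6
# central finite difference of mu(h)*h
fd = (mu(h + eps) * (h + eps) - mu(h - eps) * (h - eps)) / (2 * eps)
analytic = mu(h) + dmu(h) * h
assert abs(fd - analytic) < 1e-6
```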

Formulation in GetDP:

Formulation{
  { Name Eddy_Formulation_H_NR; Type FemEquation;
    Quantity {
      { Name H; Type Local; NameOfSpace Hcurl_hphi; }
    }
    Equation {
      Galerkin { [ 1/sig[] * Dof{Curl H} , {Curl H} ];
                 In Domain; Jacobian J; Integration I; }
      Galerkin { [ 1.0/$DTime * mu_NL[{H}] * Dof{H} , {H} ];
                 In Domain; Jacobian J; Integration I; }
      Galerkin { [ -1.0/$DTime * mu_NL[{H}[1]] * {H}[1] , {H} ];
                 In Domain; Jacobian J; Integration I; }
      Galerkin { JacNL [ 1.0/$DTime * dmudH[{H}] * Norm[{H}] * Dof{H} , {H} ];
                 In Domain; Jacobian J; Integration I; }
    }
  }
}

Resolution{
  { Name Solution_Nonlinear_H_NR;
    System {
      { Name A; NameOfFormulation Eddy_Formulation_H_NR; }
    }
    Operation {
      InitSolution[A];
      TimeLoopTheta[T_Time0, T_TimeMax, T_DTime[], T_Theta[]] {
        IterativeLoop[maxit, eps, relax] {
          GenerateJac[A]; SolveJac[A];
        }
        SaveSolution[A];
      }
    }
  }
}

=== Newton-Raphson Method: $b$-version ===

Following the same procedure with \begin{equation*} f\left(\vec{h}^n\right)= \left( \frac{1}{\sigma} \curl \vec{h}^n \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\B(\vec{h}^n)-\B(\vec{h}^{n-1} ) } {\Delta t}\;.\; \vec{h'} \right)_\Omega = 0, \end{equation*}

gives \begin{equation*} f\left(\vec{h}^n_{i-1}\right) = \; \left( \frac{1}{\sigma} \curl \vec{h}^n_{i-1} \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\B\left(\vec{h}^n_{i-1}\right)-\B\left(\vec{h}^{n-1}\right) }{\Delta t}\;.\; \vec{h'} \right)_\Omega, \end{equation*}

\begin{equation*} \frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} = \; \left( \frac{1}{\sigma} \curl \delta\vec{h} \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{1}{\Delta t} \frac{\partial \B}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\;\delta\vec{h}\;.\;\vec{h'} \right)_\Omega, \end{equation*}

\begin{equation*} \boxed{ \begin{split} \frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} + f\left(\vec{h}^n_{i-1}\right) &= \; \left( \frac{1}{\sigma} \curl \vec{h}^n_i \;.\; \curl \vec{h'} \right)_\Omega \\ &+ \left( \frac{1}{\Delta t} \left( \B\left(\vec{h}^n_{i-1}\right) - \B\left(\vec{h}^{n-1}\right) \right)\;.\;\vec{h'} \right)_\Omega \\ &+ \left( \frac{1}{\Delta t} \;.\; \frac{\partial \B}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\; \delta\vec{h} \;.\; \vec{h'}\right)_\Omega = 0. \end{split} } \end{equation*}
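A scalar Newton iteration for one backward-Euler step of this $b$-version can be sketched in Python, assuming a hypothetical saturation-like law $B(h) = \tanh(h) + 0.1\,h$ and a scalar stand-in $k\,h + (B(h)-B(h^{n-1}))/\Delta t = 0$ for the weak form:

```python
import math

# Scalar b-version sketch: solve k*h + (B(h) - B(h_prev))/dt = 0 for one
# backward-Euler step with Newton-Raphson, using the exact dB/dh.
# B(h) = tanh(h) + 0.1*h is a hypothetical saturation-like law.

def B(h):
    return math.tanh(h) + 0.1 * h

def dBdH(h):
    return 1.0 - math.tanh(h)**2 + 0.1

k, dt, h_prev = 1.0, 0.1, 1.0

h = h_prev                         # initial guess: previous time step
for it in range(20):
    f = k * h + (B(h) - B(h_prev)) / dt
    df = k + dBdH(h) / dt          # dBdH plays the role of the JacNL term
    dh = -f / df                   # Newton increment (delta h)
    h += dh
    if abs(dh) < 1e-12:
        break

# the residual of the time-stepping equation vanishes at convergence
assert abs(k * h + (B(h) - B(h_prev)) / dt) < 1e-10
```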

Formulation in GetDP:

Formulation{
  { Name Eddy_Formulation_B_NR; Type FemEquation;
    Quantity {
      { Name H; Type Local; NameOfSpace Hcurl_hphi; }
    }
    Equation {
      Galerkin { [ 1/sig[] * Dof{Curl H} , {Curl H} ];
                 In Domain; Jacobian J; Integration I; }
      Galerkin { [ 1.0/$DTime * B[{H}] , {H} ];
                 In Domain; Jacobian J; Integration I; }
      Galerkin { [ -1.0/$DTime * B[{H}[1]] , {H} ];
                 In Domain; Jacobian J; Integration I; }
      Galerkin { JacNL [ 1.0/$DTime * dBdH[{H}] * Dof{H} , {H} ];
                 In Domain; Jacobian J; Integration I; }
    }
  }
}

Resolution{
  { Name Solution_Nonlinear_B_NR;
    System {
      { Name A; NameOfFormulation Eddy_Formulation_B_NR; }
    }
    Operation {
      InitSolution[A];
      TimeLoopTheta[T_Time0, T_TimeMax, T_DTime[], T_Theta[]] {
        IterativeLoop[maxit, eps, relax] {
          GenerateJac[A]; SolveJac[A];
        }
        SaveSolution[A];
      }
    }
  }
}