[Getdp] large problems with getdp

Guillaume Demesy gdemesy at physics.utoronto.ca
Tue Dec 14 14:31:46 CET 2010


Hello Helmut,

8 GB should be more than enough to solve this 200,000 DOF problem.
The GetDP binaries are compiled with PETSc, which includes the direct
solver UMFPACK. Have you tried it, or are you using GMRES?
getdp myfile.pro -solve myresolution -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_package umfpack
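
If you do want to stay iterative, the GMRES+SOR combination mentioned
in the mail you quote below can be selected the same way. This is just
a sketch using the standard PETSc option names, not something I have
tested on your problem:

getdp myfile.pro -solve myresolution -ksp_type gmres -pc_type sor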

But the best solution when tackling big problems is probably to:
1- recompile your own PETSc with OpenMPI support and MUMPS
2- compile GetDP against your PETSc
...which will give you a parallel 'solve' (see the run sketch after the 
configure example below). The pre-processing will remain serial.

see:
https://geuz.org/trac/getdp/wiki/PETScCompile
./configure --CC=/opt/local/bin/openmpicc \
    --CXX=/opt/local/bin/openmpicxx --FC=/opt/local/bin/openmpif90 \
    --with-debugging=0 --with-clanguage=cxx --with-shared=0 --with-x=0 \
    --download-mumps=1 --download-parmetis=1 --download-scalapack=1 \
    --download-blacs=1 --with-scalar-type=complex
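
Once that PETSc is built, step 2 and a parallel run could look roughly
like this. A sketch only: the paths, the arch name and the process
count are placeholders, not values from my setup:

export PETSC_DIR=/path/to/your/petsc   # placeholder: your PETSc install
export PETSC_ARCH=your-arch            # placeholder: the name chosen by PETSc's configure
# ...then build getdp against this PETSc as described on the wiki page above...
mpirun -np 4 getdp myfile.pro -solve myresolution \
    -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps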

Good luck!

Guillaume



On 14/12/2010 06:28, Helmut Müller wrote:
> Hi all,
> first I'd like to thank you for this impressive software!
> I use it for (quite simple) simulations regarding building physics; I just solve heat equations. Therefore I have quite large models (2 m by 2 m by 2 m) with some small parts or thin layers (10 mm).
> Unfortunately I have to spend a lot of time adjusting the mesh and/or simplifying the geometry, because I didn't manage to solve problems with more than approx. 220,000 DOF on my Mac (8 GB RAM, quad-core). Those problems are solved within seconds or a few minutes.
> From working with other FEM simulations I know that it is really important to have a "good" mesh, but I'd like to spend less time optimizing the geometry and/or the mesh at the price of longer calculation times on larger models. A longer calculation time would cost me far less than optimization.
> In this context I have read a mail on the list:
> > This has been successfully tested with both iterative (GMRES+SOR) and
> > direct (MUMPS) parallel solvers on up to 128 CPUs, for test cases up to
> > 10 million DOFs.
> With which trick or procedure has this been done? On which platform? How can I at least use the available memory to perform such calculations? My GetDP 2.1 on Mac OS (binary download) uses only a small part (ca. 1 GB) of the available memory, PETSc fails with a malloc error message, and the new GetDP release uses all cores but with no benefit for the maximum possible model size in terms of DOF. So I assume that with 8 GB it should be possible to handle calculations of at least 500,000 DOF.
> So, what am I missing? Could partitioning the mesh and doing separate, iterative calculations be a solution?
> Thanks in advance for any suggestion. I assume that other people are interested in this too.
> Helmut Müller
