<div dir="ltr">Hi. <div><br><div class="gmail_extra"><div class="gmail_extra">First a couple of tips on improving performance.</div><div class="gmail_extra">1) Generic BLAS slow enough. Try using ATLAS, OpenBLAS or MKL (preferably for INTEL) or etc.</div><div class="gmail_extra">2) The use of MPI ("mpirun") can have many nuances (for example, multiple performance degradation in the "Generate" operation with some large unsymmetrical matrices). Use it with caution or use the OpenMP instead (it is only possible for the factorization phase). See, for example, OpenBLAS or ACML or <...> documentation.</div><div class="gmail_extra">3) For symmetric matrices, use the "cholesky" instead of "lu" (also see MUMPS and PETSc user manual).</div><div class="gmail_extra"><br></div><div class="gmail_extra">Concerning your problem. Preprocessing ("-pre") and postprocessing ("-pos") use only one thread. Using "mpirun" on these operations leads to a decrease in performance. Use of MPI ("mpirun") allows to obtain a gain only in the processing operation ("-cal").</div><div class="gmail_extra">In this case, different combinations of implementations of the BLAS and hardware (various CPUs) can give a sharp decrease in performance if the number of threads ("-np N") exceeds Cores / 2.</div><div class="gmail_extra">Try to fulfill this condition, for example "mpirun -np 2 getdp magnet -cal -cpu". Compare the time between "SaveSolution [A]" and "Solve [A]" ("Wall" time) with this time on 1 thread ("getdp magnet -cal -cpu").</div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">P.S.</div><div class="gmail_extra"><div class="gmail_extra">I have no relationship to the development of this software (or libraries used by it) and I have only little knowledge of higher mathematics. Almost all the information that I have given is received, mainly empirically and from the documentation for this software. I hope that my information will help you.</div><div class="gmail_extra">In my case, I do not use MPI in its pure form. I use OpenBLAS or ACML (for different hardware) that use OpenMP, which allows several threads to be used for the factorization phase (I use "mpirun" only in some cases).</div></div></div><div class="gmail_extra"><br></div><div class="gmail_extra">And,</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span style="color:rgb(51,0,102);font-family:arial,helvetica,sans-serif;font-size:16px">Looking at your mail it seems that I might have to recompile BLAS, MUMPS and PETSC with particular options ?</span></blockquote><div>In Ubuntu you can use "sudo apt install openblas*" (for example) and recompilation is not required due to "update-alternatives" (but in some cases it not working). After that you can use "OMP_NUM_THREADS=" environment variable to set number of threads (work only in factorization phase of "Solve[A]" operation). 
</div><div class="gmail_extra"><br></div><div class="gmail_extra"><br><div class="gmail_quote">2017-08-09 17:00 GMT+03:00 gilles quemener <span dir="ltr"><<a href="mailto:quemener@lpccaen.in2p3.fr" target="_blank">quemener@lpccaen.in2p3.fr</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:12pt;color:rgb(51,0,102)"><div>Hi,</div><div><br></div><div>For BLAS, MUMPS and PETSC, I am using standard packages from Ubuntu 16.04 with the following versions:<br></div><div>- BLAS : libblas 3.6.0<br></div><div>- MUMPS: libmumps 4.10.0<br></div><div>- PETSC: libpetsc 3.6.2<br></div><div>I do not know the standard compiling options from Ubuntu 16.04 for these packages.<br></div><div><br></div><div>For the processors, from /proc/cpuinfo:<br></div><div>processor : 0<br>vendor_id : GenuineIntel<br>cpu family : 6<br>model : 63<br>model name : Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz<br>stepping : 2<br>microcode : 0x2d<br>cpu MHz : 1199.988<br>cache size : 15360 KB<br><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Looking at your mail it seems that I might have to recompile BLAS, MUMPS and PETSC with particular options ?</blockquote><div>If yes, which versions of these programs (especially for BLAS which can be found in many libraries) do you suggest to use ?<br></div><div><br></div><div>Results from running getdp w/ options -cpu -v8 is in attached file singleCPU.txt.<br></div><div><br></div><div>Thanks a lot for your help.<br></div><div><br></div><div>Gilles<br></div><div><br></div><div><br></div><hr id="gmail-m_7351383978045122691zwchr"><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><b>De: </b>"Артем Хорошев" <<a href="mailto:vskych@gmail.com" target="_blank">vskych@gmail.com</a>><br><b>À: </b>"Gilles Quéméner" <<a href="mailto:quemener@lpccaen.in2p3.fr" target="_blank">quemener@lpccaen.in2p3.fr</a>><br><b>Cc: </b>"getdp" <<a href="mailto:getdp@onelab.info" target="_blank">getdp@onelab.info</a>><br><b>Envoyé: </b>Mercredi 9 Août 2017 15:19:26<br><b>Objet: </b>Re: [Getdp] GetDP MPI versus single CPU<br></blockquote></div><div><div class="gmail-h5"><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><div dir="ltr"><div>What versions of BLAS, MUMPS do you use?</div><div>With what options was PETSc configured?</div><div>Run getdp with "-cpu -v 8" options. 
If hang that repeat without "-cpu"</div><div>Which processors do you use?</div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-08-09 10:45 GMT+03:00 gilles quemener <span dir="ltr"><<a href="mailto:quemener@lpccaen.in2p3.fr" target="_blank">quemener@lpccaen.in2p3.fr</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:12pt;color:rgb(51,0,102)"><div>Hi,<br></div><br><div>I have compiled GetDP with MPI option under Linux Ubuntu 16.04 on a machine w/ 6x2 CPUs and 32 MB of RAM.</div><div>When running the <a href="http://magnet.pro" target="_blank">magnet.pro</a> file given in the GetDP demos folder, I was expecting to get faster results with MPI<br></div><div>than w/o and was quite surprise by the comparison as shown by the following outputs :<br></div><br><div>1) MPI run:<br></div><div>***********<br></div><br><div>mpirun -np 12 /home/quemener/local/OneLab_<wbr>gq/GetDP/bin/getdp magnet -solve MagSta_phi -pos MagSta_phi<br>Info : Running '/home/quemener/local/OneLab_<wbr>gq/GetDP/bin/getdp magnet -solve MagSta_phi -pos MagSta_phi' [GetDP 2.11.1, 12 nodes]<br>Info : Started (Wed Aug 9 09:20:20 2017, Wall = 0.277839s, CPU = 0.884s [0.04s,0.116s], Mem = 287.254Mb [23.4609Mb,27.6289Mb])<br>Info : Increasing process stack size (8192 kB < 16 MB)<br>Info : Loading problem definition '<a href="http://magnet.pro" target="_blank">magnet.pro</a>'<br>Info : Loading problem definition '<a href="http://magnet_data.pro" target="_blank">magnet_data.pro</a>'<br>Info : Loading problem definition '../templates/<wbr>MaterialDatabase.pro'<br>Info : Loading problem definition '../templates/MaterialMacros.<wbr>pro'<br>Info : Loading problem definition '../templates/Magnetostatics.<wbr>pro'<br>Info : Selected Resolution 'MagSta_phi'<br>Info : Loading Geometric data 'magnet.msh'<br>Info : System 'A' : Real<br>P r e - P r o c e s s i n g . . .<br>Info : Treatment Formulation 'MagSta_phi'<br>Info : Generate ExtendedGroup '_CO_Entity_15' (NodesOf) <br>Info : [rank 8] System 1/1: 1658225 Dofs <br>Info : [rank 7] System 1/1: 1658225 Dofs<br>Info : [rank 3] System 1/1: 1658225 Dofs<br>Info : [rank 4] System 1/1: 1658225 Dofs<br>Info : [rank 1] System 1/1: 1658225 Dofs<br>Info : [rank 2] System 1/1: 1658225 Dofs<br>Info : [rank 10] System 1/1: 1658225 Dofs<br>Info : [rank 6] System 1/1: 1658225 Dofs<br>Info : [rank 5] System 1/1: 1658225 Dofs<br>Info : [rank 11] System 1/1: 1658225 Dofs<br>Info : [rank 0] System 1/1: 1658225 Dofs<br>Info : [rank 9] System 1/1: 1658225 Dofs<br>Info : (Wall = 30.3178s, CPU = 181.936s [14.804s,16.964s], Mem = 8228.93Mb [685.133Mb,689.43Mb])<br>E n d P r e - P r o c e s s i n g<br>P r o c e s s i n g . . .<br>Info : CreateDir[../templates/res/]<br>Info : Generate[A]<br>Info : Solve[A] <wbr> <wbr> <br>Info : N: 1658225 - preonly lu mumps<br>Info : SaveSolution[A]<br>Info : (Wall = 87.9995s, CPU = 489.904s [38.456s,48.752s], Mem = 14249.9Mb [1109.46Mb,1297.96Mb])<br>E n d P r o c e s s i n g<br>P o s t - P r o c e s s i n g . . 
.<br>Info : NameOfSystem not set in PostProcessing: selected 'A'<br>Info : Selected PostProcessing 'MagSta_phi'<br>Info : Selected Mesh 'magnet.msh'<br>Info : PostOperation 1/4 <br> > 'res/MagSta_phi_hc.pos'<br>Info : PostOperation 2/4 <wbr> <br> > 'res/MagSta_phi_phi.pos'<br>Info : PostOperation 3/4 <wbr> <br> > 'res/MagSta_phi_h.pos'<br>Info : PostOperation 4/4 <wbr> <br> > 'res/MagSta_phi_b.pos'<br>Info : (Wall = 377.562s, CPU = 1118.17s [80.556s,207.128s], Mem = 14254.1Mb [1109.95Mb,1297.96Mb])<br>E n d P o s t - P r o c e s s i n g<br>Info : Stopped (Wed Aug 9 09:26:37 2017, Wall = 377.851s, CPU = 2012.12s [162.712s,207.26s], Mem = 14254.1Mb [1109.95Mb,1297.96Mb])<br><br></div><br><div>2) Single CPU run:<br></div><div>******************<br></div><br><div>/home/quemener/local/OneLab_<wbr>gq/GetDP/bin/getdp magnet -solve MagSta_phi -pos MagSta_phi<br>Info : Running '/home/quemener/local/OneLab_<wbr>gq/GetDP/bin/getdp magnet -solve MagSta_phi -pos MagSta_phi' [GetDP 2.11.1, 1 node]<br>Info : Started (Wed Aug 9 09:27:06 2017, Wall = 0.146171s, CPU = 0.136s, Mem = 25.9609Mb)<br>Info : Increasing process stack size (8192 kB < 16 MB)<br>Info : Loading problem definition '<a href="http://magnet.pro" target="_blank">magnet.pro</a>'<br>Info : Loading problem definition '<a href="http://magnet_data.pro" target="_blank">magnet_data.pro</a>'<br>Info : Loading problem definition '../templates/<wbr>MaterialDatabase.pro'<br>Info : Loading problem definition '../templates/MaterialMacros.<wbr>pro'<br>Info : Loading problem definition '../templates/Magnetostatics.<wbr>pro'<br>Info : Selected Resolution 'MagSta_phi'<br>Info : Loading Geometric data 'magnet.msh'<br>Info : System 'A' : Real<br>P r e - P r o c e s s i n g . . .<br>Info : Treatment Formulation 'MagSta_phi'<br>Info : Generate ExtendedGroup '_CO_Entity_15' (NodesOf) <br>Info : System 1/1: 1658225 Dofs <wbr> <br>Info : (Wall = 10.7856s, CPU = 7.42s, Mem = 688.309Mb)<br>E n d P r e - P r o c e s s i n g<br>P r o c e s s i n g . . .<br>Info : CreateDir[../templates/res/]<br>Info : Generate[A]<br>Info : Solve[A] <wbr> <wbr> <br>Info : N: 1658225 - preonly lu mumps<br>Info : SaveSolution[A]<br>Info : (Wall = 40.7256s, CPU = 47.028s, Mem = 4496.32Mb)<br>E n d P r o c e s s i n g<br>P o s t - P r o c e s s i n g . . 
.<br>Info : NameOfSystem not set in PostProcessing: selected 'A'<br>Info : Selected PostProcessing 'MagSta_phi'<br>Info : Selected Mesh 'magnet.msh'<br>Info : PostOperation 1/4 <br> > 'res/MagSta_phi_hc.pos'<br>Info : PostOperation 2/4 <wbr> <br> > 'res/MagSta_phi_phi.pos'<br>Info : PostOperation 3/4 <wbr> <br> > 'res/MagSta_phi_h.pos'<br>Info : PostOperation 4/4 <wbr> <br> > 'res/MagSta_phi_b.pos'<br>Info : (Wall = 324.758s, CPU = 132.924s, Mem = 4496.32Mb) <br>E n d P o s t - P r o c e s s i n g<br>Info : Stopped (Wed Aug 9 09:32:31 2017, Wall = 324.946s, CPU = 133.032s, Mem = 4496.32Mb)<br></div><br><div>Does any one has clues to explain such a behaviour ?<span class="gmail-m_7351383978045122691HOEnZb"><span style="color:rgb(136,136,136)" color="#888888"><br></span></span></div><span class="gmail-m_7351383978045122691HOEnZb"><span style="color:rgb(136,136,136)" color="#888888"><br><div><div><div> <span style="color:rgb(0,51,102)"> Gilles <br></span></div><br></div><div><table style="width:792px;table-layout:fixed;height:145px" cellpadding="0" border="0"><tbody><tr><td style="width:200px;text-align:left"></td><td><span style="color:rgb(0,0,128);font-size:10pt"></span><span style="color:rgb(0,0,128);font-size:10pt"></span></td></tr></tbody></table></div></div></span></span></div></div><br>______________________________<wbr>_________________<br>
>>> getdp mailing list
>>> getdp@onelab.info
>>> http://onelab.info/mailman/listinfo/getdp