[Gmsh] Hdf5 files and gmsh

Christophe Geuzaine cgeuzaine at uliege.be
Sun Mar 22 19:16:02 CET 2020

> On 22 Mar 2020, at 16:50, paul francedixhuit <paul18fr at gmail.com> wrote:
>  Hi All
> Recently I found interesting discussions about HDF5 and Gmsh (see https://gitlab.onelab.info/gmsh/gmsh/issues/552). On my side, I had to look at the XDMF format two years ago, which seemed interesting for my applications (mechanical engineering). I finally gave up because of difficulties, and because XDMF development is not really active.
> I'm using HDF5 files through the h5py Python library to deal with huge amounts of data, in either static or transient conditions.
> In the meantime the Gmsh team has developed the 4.1 format and I have to dig into it. I saw the Python API, which I'm not familiar with, but I'm not sure it's what I'm looking for. In my opinion it would be very useful and powerful to be able to import datasets from HDF5 files directly from the native Gmsh file, especially for post-processing purposes; let me elaborate:
> - by import I mean calling the data, not writing it out in an ASCII format; similar to "h5.get" for people who are familiar with the h5py library

The Gmsh Python API is one way to go here: it allows you to both get and set the mesh and post-processing data, so you could easily use the data store of your choice, e.g. HDF5 through h5py. 
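For instance, here is a minimal sketch of the HDF5 side of such a workflow, assuming h5py and NumPy are installed. The small arrays stand in for what the API calls (e.g. gmsh.model.mesh.getNodes()) would return, and the file and dataset names are made up:

```python
import numpy as np
import h5py

# In a real workflow these arrays would come from the Gmsh Python API, e.g.
#   tags, coords, _ = gmsh.model.mesh.getNodes()
# Here we use small stand-in arrays so the sketch is self-contained.
node_tags = np.array([1, 2, 3, 4], dtype=np.uint64)
coords = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])

# Store the mesh data in an HDF5 file (file/dataset names are arbitrary).
with h5py.File("mesh.h5", "w") as f:
    f.create_dataset("mesh/node_tags", data=node_tags)
    f.create_dataset("mesh/coords", data=coords, compression="gzip")

# Later, load only the dataset you need -- the whole file is not parsed.
with h5py.File("mesh.h5", "r") as f:
    first_two = f["mesh/coords"][:2]  # partial (sliced) read
```

The same pattern applies to post-processing data: arrays retrieved with the API can be pushed into HDF5 datasets, and sliced subsets can be fed back through the API when needed.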

> - I'm not speaking about HPC models where huge meshes can be post-processed using dozens or hundreds of cores, but about a common PC or workstation with limited resources compared to HPC servers,
> - When working on models with a million second-order elements, dozens of time steps, and different variables (up to 3 displacement components, up to 6 stress components, and so on), all the results cannot be loaded in a single native .pos file due to obvious computing limitations

You indeed shouldn't use parsed .pos files. But you can certainly use .msh files, which can contain post-processing data and have nice features for very large datasets:

- the mesh can be stored in one or more files
- post-processing data can be stored along with the mesh, or separately
- each step (e.g. time step) in a multi-step post-processing view can be stored in a separate file
- moreover, each step can be split into arbitrary partitions, each in a separate file
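As a self-contained sketch of the one-file-per-step idea (standard library only; the view name, node tags and values are made up), each step of a scalar nodal view can be written to its own ASCII .msh file as a $NodeData section:

```python
# Write each step of a multi-step scalar view to a separate .msh file,
# using the ASCII $NodeData section of the MSH format.
values_per_step = {
    0: {1: 0.0, 2: 0.1, 3: 0.2},  # step index -> {node tag: value}
    1: {1: 0.5, 2: 0.6, 3: 0.7},
}

def write_step(filename, step, time, data, name="displacement"):
    with open(filename, "w") as f:
        f.write("$MeshFormat\n2.2 0 8\n$EndMeshFormat\n")
        f.write("$NodeData\n")
        f.write('1\n"%s"\n' % name)       # one string tag: the view name
        f.write("1\n%g\n" % time)         # one real tag: the time value
        # three integer tags: step index, 1 component (scalar), number of nodes
        f.write("3\n%d\n1\n%d\n" % (step, len(data)))
        for tag, v in sorted(data.items()):
            f.write("%d %g\n" % (tag, v))
        f.write("$EndNodeData\n")

for step, data in values_per_step.items():
    write_step("view_step%d.msh" % step, step, 0.1 * step, data)
```

Merging these files on top of the mesh file (File > Merge, or gmsh.merge() through the API) should then reconstruct the multi-step view, with the mesh itself living in its own separate .msh file as described above.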

Hope this helps,


> While thinking about using HDF5 in the native Gmsh file, I believe it could address this, because:
> - HDF5 files are optimized for I/O in terms of speed and memory; for example, only the datasets you need are loaded, not the entire file, so access is fast
> - data can be compressed in hdf5 file
> - a dataset is a block of data (typically a matrix)
> - in practice in Gmsh, only some data are plotted at a time, not all: for example it is not relevant to plot the von Mises stress and the equivalent strain at the same time, or the initial and the final time steps
> - in other words, only what you need is loaded, through something like import "myhdf5\dataset129" (would slicing be possible?)
> - each dataset is built following gmsh format
> - as many imports as necessary would be done, due to the different dataset sizes: one per element type (triangles, quads, tets, hexes, wedges, and so on)
> - one dataset per element type: element ordering might be a limitation I think (slowdown), unless it can be anticipated
> - if slicing is not possible, the data must be duplicated (larger file size), but reading is fast in practice
> - and so on
> I've been using HDF5 files for some time, and they are now an essential part of my data management: I'm sure Gmsh would benefit greatly from integrating them.
> I'll be happy to discuss further about it
> Thanks to the Gmsh team for their work
> Paul
> _______________________________________________
> gmsh mailing list
> gmsh at onelab.info
> http://onelab.info/mailman/listinfo/gmsh

Prof. Christophe Geuzaine
University of Liege, Electrical Engineering and Computer Science 
