Hi there!
I am using the structural plasticity module to simulate large-scale
networks on an HPC system. The way I run these simulations with MPI
creates a large number of small data files (.npy), which I then organize
into an HDF5 file post hoc. This has not been a problem so far; however,
I recently increased the size of my simulations, and the number of files
I create now exceeds my HPC inode quota. So I need to change either how
I handle all these files or how I export them in the first place.
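For context, the post hoc aggregation is essentially just this (a rough
sketch with hypothetical paths and dataset names, not my actual script):

    import glob
    import numpy as np
    import h5py

    # Collect all per-process .npy files and copy them into one HDF5 file.
    # "output/*.npy" and the dataset naming are placeholders.
    with h5py.File("combined.h5", "w") as f:
        for path in sorted(glob.glob("output/*.npy")):
            name = path.split("/")[-1].replace(".npy", "")
            f.create_dataset(name, data=np.load(path))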
Ideally, I would like to write to a single file (e.g. zarr) concurrently
from multiple processes. However, parallel zarr writes are only optimal
when the chunk sizes are uniform. In my case they are not, so I
abandoned the idea of using zarr.
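For reference, this is roughly the parallel-write pattern I had in mind
before giving up on it (a sketch assuming uniform, chunk-aligned slices
per rank, with made-up shapes and file names):

    from mpi4py import MPI
    import numpy as np
    import zarr

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    nprocs = comm.Get_size()

    rows_per_rank = 1000  # uniform chunk size, exactly what my data does not have

    # Rank 0 creates the store; every rank then writes its own chunk-aligned slice.
    if rank == 0:
        zarr.open("results.zarr", mode="w", shape=(nprocs * rows_per_rank, 3),
                  chunks=(rows_per_rank, 3), dtype="f8")
    comm.Barrier()

    z = zarr.open("results.zarr", mode="r+")
    start = rank * rows_per_rank
    z[start:start + rows_per_rank] = np.random.rand(rows_per_rank, 3)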
Currently, I am trying to use mpi4py to gather the data I want on a root
MPI process and save it from there. I ran some test simulations and
everything worked fine (see attached script; a minimal sketch of the
gather step is below).
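The gather step looks roughly like this (a minimal sketch with
placeholder data, not the attached script itself):

    from mpi4py import MPI
    import numpy as np
    import h5py

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Placeholder for the per-rank results; in my case the sizes differ per rank.
    local_data = np.arange(rank + 1, dtype="f8")

    # Gather the per-rank arrays on rank 0 and write a single HDF5 file there.
    gathered = comm.gather(local_data, root=0)
    if rank == 0:
        with h5py.File("connectivity.h5", "w") as f:
            for r, arr in enumerate(gathered):
                f.create_dataset(f"rank_{r}", data=arr)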
However, when I use structural plasticity together with mpi4py, I get
the following error:
"nest.lib.hl_api_exceptions.DimensionMismatch: ('DimensionMismatch in
SetStatus_id: Expected dimension size: 1\nProvided dimension size: 20',
'DimensionMismatch', <SLILiteral: SetStatus_id>, ': Expected dimension
size: 1\nProvided dimension size: 20".
I believe that mpi4py somehow interferes with NEST's internal MPI
mechanism. Perhaps it signals the NEST kernel to use 1 process rather
than the 20 I launch with. If I remove the mpi4py import, everything
works fine. So, as it stands, I cannot run the simulation with this
mpi4py workaround.
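To make the suspicion concrete, this is the kind of check I have in mind
(a sketch, not my actual script):

    from mpi4py import MPI   # removing this import is what makes everything work again
    import nest

    # Launched with something like: mpirun -np 20 python check.py
    print("mpi4py world size:", MPI.COMM_WORLD.Get_size())
    print("NEST processes:   ", nest.NumProcesses())
    print("NEST rank:        ", nest.Rank())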
Do you notice something wrong with my approach? I'm just beginning to
use mpi4py, so I may be missing something here.
Alternatively, could you recommend a way to reduce the number of
exported files, for example via zarr or another file format that
supports parallel writes?
I'd appreciate any suggestions. Thanks!
pyNEST version: 2.20.2
python version: Python 3.7.10
mpiexec/mpirun version: MPI Library for Linux* OS, Version 2021.3
Best,
Ady