Hi all!
In the world of biophysically detailed modelling, it is commonplace for the
connectome to be generated by algorithms that emit connections as dense
tabular data, with each row specifying a synaptic location on a pair of
cells (SONATA, for example).
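For concreteness, a fragment of such a table might look like the sketch
below (the column names are mine, for illustration, not SONATA's exact
dataset names):

    import pandas as pd

    # One row per synapse; note that cell IDs repeat on both sides.
    table = pd.DataFrame({
        "source_id": [101, 101, 102],    # presynaptic cell
        "target_id": [2043, 2044, 2043], # postsynaptic cell
        "weight":    [0.4, 0.3, 0.5],
        "delay":     [1.5, 1.2, 1.5],
    })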
A) In NEST I can't really find a way to fit this data into any of the
connection rules: I want to specify pairwise connections from a multiset A
to a multiset B (see the sketch after these questions).
- Is this possible with `pairwise_bernoulli`, or do the inputs have to be
strict sets (no repeated elements)?
- The probability step is superfluous in my case; can it be skipped?
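To show the semantics I'm after, here is a sketch of a row-by-row pairwise
connect; whether `one_to_one` with plain ID lists (or `pairwise_bernoulli`
with p = 1.0) is the right vehicle for this is exactly my question, and the
IDs and parameters are illustrative:

    import nest

    nest.ResetKernel()
    nest.Create("iaf_psc_alpha", 10)  # stand-ins for the real cells

    # Row-aligned multisets taken from the table: note the repeated IDs.
    pre  = [1, 1, 2]
    post = [5, 6, 5]

    # Desired: connect row i of `pre` to row i of `post`, nothing more.
    nest.Connect(pre, post,
                 conn_spec={"rule": "one_to_one"},
                 syn_spec={"weight": [0.4, 0.3, 0.5],
                           "delay":  [1.5, 1.2, 1.5]})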
B) Then there's the fact that NEST parallelizes transparently. Since this
data was generated in parallel by tiling the biological volume, each node
of the distributed cluster already holds a neatly fragmented piece of it.
It would be a waste to communicate all of the data to every node, only for
NEST to redistribute it another way.
The data is also too big to allgather: it would not fit into the memory of
any single node.
Not only would this be a lot of overhead to implement, but NEST would then
throw away all but `1 / Nnodes` of the data on each node again, leaving me
with a reshuffled version of my starting data.
Is there a way to bypass this transparency and imperatively declare the
cells and connections on each machine?
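To make the mismatch concrete, here is roughly what I would want each rank
to do (a sketch only: `load_fragment` is a placeholder for my own reader,
and the file layout is illustrative):

    import nest

    nest.ResetKernel()
    nest.Create("iaf_psc_alpha", 10)  # stand-ins for this tile's cells

    def load_fragment(path):
        # Placeholder for my reader: row-aligned (pre, post) ID lists.
        return [1, 1, 2], [5, 6, 5]

    # Each rank already owns exactly one tile of the connectome.
    pre, post = load_fragment("connectome_part_%d.h5" % nest.Rank())

    # What I would like: instantiate this tile's cells and connections on
    # this rank, as-is. As I understand it, NEST instead assigns cells to
    # ranks in its own round-robin fashion, so this call would keep only
    # the rows whose targets happen to be local here and drop the rest.
    nest.Connect(pre, post, conn_spec={"rule": "one_to_one"})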
--
Robin De Schepper, MSc (they/them)
Department of Brain and Behavioral Sciences
Unit of Neurophysiology
University of Pavia, Italy
Via Forlanini 6, 27100 Pavia - Italy
Tel: (+39) 038298-7607
http://www-5.unipv.it/dangelo/
Interested in large scale network modelling?
Discover our framework <https://bsb.readthedocs.io/en/latest/>:
<https://github.com/dbbs-lab/bsb>