Hi Tom,
As a DIY workaround, you can use the RunManager context to simulate in small steps and
break if it gets too slow. I haven't tested the code, just sketching from memory.
Instead of calling nest.Simulate(1000), use
import time

with nest.RunManager():
    for _ in range(100):
        t = time.time()            # wall-clock time before the chunk
        nest.Run(10.0)             # simulate 10 ms
        if time.time() - t > 5:    # chunk took more than 5 s of wall time
            break
The logic is as follows: you split the 1000 ms into 100 chunks of 10 ms each. Calling
Run() inside a RunManager() context keeps the repeated short runs fast. You then use
Python's time module to measure how long each 10 ms chunk takes on the wall clock and
break if it takes too long, here with a 5 s limit. Afterwards you can use
GetKernelStatus to read off how far the simulation actually got.
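In the same untested spirit, the pieces could be wrapped into a small watchdog helper.
The function name run_with_watchdog and the min_rtf threshold are made up for
illustration, and note that the kernel status key for the current simulation time is
'time' in NEST 2.x but 'biological_time' in NEST 3.x:

import time

import nest

def run_with_watchdog(total_ms=1000.0, chunk_ms=10.0, min_rtf=0.01):
    """Simulate total_ms in chunks of chunk_ms; stop early if the realtime
    factor (simulated time / wall-clock time) of a chunk drops below
    min_rtf. Returns the simulation time reached. Untested sketch."""
    with nest.RunManager():
        for _ in range(int(total_ms / chunk_ms)):
            tic = time.time()
            nest.Run(chunk_ms)
            wall_s = max(time.time() - tic, 1e-9)   # avoid division by zero
            if (chunk_ms / 1000.0) / wall_s < min_rtf:
                break                               # simulation got too slow
    return nest.GetKernelStatus('time')   # 'biological_time' in NEST 3.x

Because the break happens between chunks, the program keeps running afterwards and you
can inspect or tear down the network as usual.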
It would be interesting to add this as a kernel feature. Let me know if it works!
Best,
Hans Ekkehard
--
Prof. Dr. Hans Ekkehard Plesser
Head, Department of Data Science
Faculty of Science and Technology
Norwegian University of Life Sciences
PO Box 5003, 1432 Aas, Norway
Phone +47 6723 1560
Email hans.ekkehard.plesser@nmbu.no
Home http://arken.nmbu.no/~plesser
On 06/05/2021, 16:08, "TOM BUGNON"
<bugnon@wisc.edu> wrote:
Hi all,
Under some circumstances, simulations can slow down to the point where nest.Simulate()
no longer advances and stays stuck at a given virtual time, with a "realtime factor" of
0. I suppose this can happen, for instance, when a network falls into a regime of
runaway excitation in which a massive number of spikes are being exchanged.
I'm looking for a way to stop the simulation in such a case (say, when the realtime
factor drops below a set threshold, or when the output files have not been updated for
a certain duration), ideally so that the program can continue running rather than
crashing. If anyone has a suggestion about how to work around this issue, I'd be happy
to hear it.
Thanks in advance! Best, Tom