Dear NEST Users and Developers!
I would like to thank you all for your commitment to high-quality computational neuroscience and research software in 2021. We have made some major steps forward with the releases of NEST 3, NEST Desktop 3 and NESTML 4. A few days ago, PyNN 0.10 also brought support for NEST 3, nicely wrapping up the year of NEST 3. With the release of NEST 3.0 we also moved to quarterly releases. If you are wondering what happened to NEST 3.2 in that scheme of things, don't worry: it will come in January. The combination of the holiday season, new COVID restrictions and an important reporting deadline in the Human Brain Project (the major source of funding for NEST development in recent years) unfortunately left too little time to wrap everything up in time.
2022 promises to be an exciting year for NEST, including the deeper integration of NEST Desktop (so far mainly developed by Sebastian Spreizer) and NEST GPU (so far mainly developed by Bruno Golosio as NeuronGPU) into the NEST development process and community.
Don't forget to block out 23/24 June in your calendars for the NEST Conference 2022 (this time on a Thursday and Friday)!
On behalf of the NEST Initiative, I wish you happy holidays and all the best for 2022!
Hans Ekkehard
--
Prof. Dr. Hans Ekkehard Plesser
Head, Department of Data Science
Faculty of Science and Technology
Norwegian University of Life Sciences
PO Box 5003, 1432 Aas, Norway
Phone +47 6723 1560
Email hans.ekkehard.plesser@nmbu.no
Home http://arken.nmbu.no/~plesser
Hi all,
we will organise another edition of the in situ visualization workshop (https://woiv.gitlab.io) at ISC (https://isc-hpc.com). If you are working on visualizing results of neural simulations while these are still running, you might want to consider submitting your work to the workshop …
Please find the CfP below.
Cheers,
Tom
-----
# WOIV'22: 6th International Workshop on In Situ Visualization
* Held in conjunction with ISC 2022
* Hamburg, Germany, June 2, 2022
## Scope
Large-scale HPC simulations with their inherent I/O bottleneck have
made in situ visualization an essential approach for data analysis,
although the idea of in situ visualization dates back to the golden
era of coprocessing in the 1990s. In situ coupling of analysis and
visualization to a live simulation circumvents writing raw data to
disk for post-mortem analysis – an approach that is already
inefficient for today’s very large simulation codes. Instead, with in
situ visualization, data abstracts are generated that provide a much
higher level of expressiveness per byte. Therefore, more details can
be computed and stored for later analysis, providing more insight than
traditional methods.
We encourage contributed talks on methods and workflows that have been
used for large-scale parallel visualization, with a particular focus
on the in situ case. Presentations on codes that closely couple
numerical methods and visualization are particularly welcome. Speakers
should detail the frameworks used and the data reductions applied,
whether and how the application drove abstractions or other kinds of
data reduction, and how these impacted the expressiveness and
flexibility of the visualization for exploratory analysis.
Of particular interest to WOIV and its attendees are recent
developments for in situ libraries and software. Submissions
documenting recent additions to existing in situ software or new in
situ platforms are highly encouraged. WOIV is an excellent place to
connect providers of in situ solutions with potential customers.
For the submissions we are not only looking for success stories; we
are also particularly interested in experiments that started with a
certain goal or idea in mind but were later shattered by reality or
insufficient hardware/software.
Areas of interest for WOIV include, but are not limited to:
* Techniques and paradigms for in situ visualization.
* Algorithms relevant to in situ visualization. These could include
algorithms empowered by in situ visualization or algorithms that
overcome limitations of in situ visualization.
* Systems and software implementing in situ visualization. These
include both general purpose and bespoke implementations. This also
includes updates to existing software as well as new software.
* Workflow management.
* Use of in situ visualization for application science or other
examples of using in situ visualization.
* Performance studies of in situ systems. Comparisons between in situ
systems or techniques or comparisons between in situ and
alternatives (such as post hoc) are particularly encouraged.
* The impact of hardware changes on in situ visualization.
* The online visualization of experimental data.
* Reports of in situ visualization failures.
* Emerging issues with in situ visualization.
## Submissions
We accept submissions of short papers (6 to 8 pages) and full papers
(10 to 12 pages) in Springer single column LNCS style. Please find
LaTeX and Word templates at https://woiv.gitlab.io/woiv22/template.
Submissions are exclusively handled via EasyChair:
https://woiv.gitlab.io/woiv22/submit. The review process is single- or
double-blind; we leave it to the discretion of the authors whether
they want to disclose their identity in their submissions.
All submissions will be peer-reviewed by experts in the field and
will be evaluated according to their relevance to the workshop theme,
technical soundness, thoroughness of the success/failure comparison,
and the impact of the method and results. Accepted papers will appear
as post-conference workshop proceedings in the Springer Lecture Notes
in Computer Science (LNCS) series. The submitted versions will be made
available to workshop participants during ISC.
## Important Dates
* Submission deadline: February 13, 2022, anywhere on earth
* Notification of acceptance: April 15, 2022
* Final presentation slides due: May 10, 2022, anywhere on earth
(subject to change)
* Workshop: June 2, 2022
* Camera-ready version due: July 1, 2022 (subject to change,
extrapolated from previous years)
## Chairs
* Peter Messmer, NVIDIA
* Tom Vierjahn, Westphalian University of Applied Sciences, Bocholt,
Germany
## Steering Committee
* Steffen Frey, University of Groningen, The Netherlands
* Kenneth Moreland, Sandia National Labs, USA
* Thomas Theussl, KAUST, Saudi Arabia
* Guido Reina, University of Stuttgart, Germany
* Tom Vierjahn, Westphalian University of Applied Sciences, Bocholt,
Germany
## Website, Venue, Registration
* Website: https://woiv.gitlab.io
* Submission system: https://woiv.gitlab.io/woiv22/submit
* Template: https://woiv.gitlab.io/woiv22/template
* Venue: https://www.isc-hpc.com (ISC 2022)
* Workshop registration: https://woiv.gitlab.io/woiv22/register
## Contact
E-Mail: woiv@googlegroups.com
Dear NEST Users & Developers!
I would like to invite you to our next fortnightly Open NEST Developer
Video Conference, today
Monday December 20, 11.30-12.30 CET (UTC+1).
Feel free to join the meeting even if you only want to bring your own
questions for direct discussion in the in-depth section.
As usual, in the Project team round, a contact person from each team
will give a short statement summarizing ongoing work in the team and
cross-cutting points that need discussion among the teams. The
remainder of the meeting will be devoted to a more in-depth discussion
of topics that came up on the mailing list or that are suggested by
the teams.
Agenda
* Welcome
* Review of NEST User Mailing List
* Project team round
* In-depth discussion
The agenda for this meeting is also available online, see
https://github.com/nest/nest-simulator/wiki/2021-12-20-Open-NEST-Developer-…
Looking forward to seeing you soon!
Cheers,
Jochen Martin Eppler!
------------------
Log-in information
------------------
We use a virtual conference room provided by DFN (Deutsches Forschungsnetz).
You can use the web client to connect. However, we encourage everyone
to use a headset for better audio quality, or even a proper video
conferencing system or software (see below) when available.
Web client
* Visit https://conf.dfn.de/webapp/conference/97938800
* Enter your name and allow your browser to use camera and microphone
* The conference does not need a PIN to join, just click join and you're in.
In case you see a dfnconf logo and the phrase "Auf den
Meetingveranstalter warten" ("Waiting for the meeting host"), just be
patient; the meeting host needs to join first (a voice will tell you).
VC system/software
How to log in with a video conferencing system depends on your VC
system or software.
- Using the H.323 protocol (e.g. Polycom): vc.dfn.net##97938800 or
194.95.240.2##97938800
- Using the SIP protocol: 97938800@vc.dfn.de
- By telephone: +49-30-200-97938800
For those who do not have a video conference system or suitable
software, Polycom provides a pretty good free app for iOS and Android,
so you can join from your tablet (Polycom RealPresence Mobile, available
from AppStore/PlayStore). Note that firewalls may interfere with
videoconferencing in various and sometimes confusing ways.
For more technical information on logging in from various VC systems,
please see http://vcc.zih.tu-dresden.de/index.php?linkid=1.1.3.4
--
Dr. Jochen Martin Eppler
Phone: +49(2461)61-96653
----------------------------------
Simulation Laboratory Neuroscience
Jülich Supercomputing Centre
Institute for Advanced Simulation
------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Volker Rieke
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr. Astrid Lambrecht,
Prof. Dr. Frauke Melchior
------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
Dear all,
I am a new NEST user. I have a question concerning the range of
neuron/synapse models that are possible in NEST.
I would like to implement my own neuron/synapse model with NESTML, but
I am unsure whether it is possible.
Indeed, in my model, synaptic currents do not rely on pre-synaptic
spikes alone. Computing the synaptic currents also requires the
opening probabilities of the pre-synaptic channel receptors.
These opening probabilities evolve according to differential equations
with second-order dynamics and specific decay constants, taking into
account the arrival times of the pre-synaptic spikes at the specific
synapse.
The parameters of these differential equations depend on the
neurotransmitter type (GABA-A, GABA-B, NMDA, AMPA).
Furthermore, in addition to the input spikes and the pre-synaptic
channel receptor opening probabilities, the current membrane potential
of the post-synaptic neuron is also required to compute the synaptic
currents.
Do you know whether one of the NEST models implements similar
dynamics? Is it possible to compute such synaptic dynamics with NESTML
by creating a synapse and/or a neuron model? Or is it not, due to
specific limitations?
Thank you,
Best regards,
JB
Dear NEST Developers,
You may have observed that GitHub checks failed on macOS in recent days. This is due to even more pedantic checking of standards compliance by the newest version of Clang. The issue is fixed in master now (#2231), so if you experience trouble with failing macOS tests, you should update your branch.
Best,
Hans Ekkehard
--
Prof. Dr. Hans Ekkehard Plesser
Head, Department of Data Science
Faculty of Science and Technology
Norwegian University of Life Sciences
PO Box 5003, 1432 Aas, Norway
Phone +47 6723 1560
Email hans.ekkehard.plesser@nmbu.no
Home http://arken.nmbu.no/~plesser
Dear all,
in a previous version of NEST I used to set the RNG seeds like this:
nest.SetKernelStatus({'grng_seed' : value_grngseed})
nest.SetKernelStatus({'rng_seeds' : value_rngseed})
I already found that in newer NEST versions I can set the RNG seed like:
nest.rng_seed = value_rngseed
Is there also a possibility to set the Global RNG seed 'grng' with newer
NEST versions?
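To keep my scripts running under both APIs, I currently use a small helper along these lines. This is a hypothetical sketch of my own, not part of NEST; in particular, the NEST 2.x seed offsets below are just one common convention, and you may want different ones:

```python
def set_master_seed(nest_module, master_seed):
    """Seed NEST's RNGs across API versions (hypothetical helper).

    In NEST 3.x a single kernel property ``rng_seed`` replaces the old
    split between the global RNG seed ('grng_seed') and the per-thread
    seeds ('rng_seeds').
    """
    if hasattr(nest_module, "rng_seed"):
        # NEST 3.x style: one seed drives all RNGs
        nest_module.rng_seed = master_seed
    else:
        # NEST 2.x style: seed global and per-virtual-process RNGs
        # separately; the offsets are one common convention.
        n_vp = nest_module.GetKernelStatus("total_num_virtual_procs")
        nest_module.SetKernelStatus({"grng_seed": master_seed + n_vp})
        nest_module.SetKernelStatus(
            {"rng_seeds": [master_seed + i for i in range(n_vp)]}
        )
```

But it would of course be nicer to know the officially intended way.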
Thanks!
Benedikt
--
Benedikt Feldotto M.Sc.
Research Assistant
Human Brain Project - Neurorobotics
Technical University of Munich
Department of Informatics
Chair of Robotics, Artificial Intelligence and Real-Time Systems
Room HB 2.02.20
Parkring 13
D-85748 Garching b. München
Tel.: +49 89 289 17628
Mail: feldotto@in.tum.de
https://www6.in.tum.de/en/people/benedikt-feldotto-msc/
www.neurorobotics.net
Dear NEST Users & Developers!
I would like to invite you to our next fortnightly Open NEST Developer Video Conference, today
Monday 06 December, 11.30-12.30 CET (UTC+1).
Feel free to join the meeting even if you only want to bring your own questions for direct discussion in the in-depth section.
As usual, in the Project team round, a contact person from each team will give a short statement summarizing ongoing work in the team and cross-cutting points that need discussion among the teams. The remainder of the meeting will be devoted to a more in-depth discussion of topics that came up on the mailing list or that are suggested by the teams.
Agenda
* Welcome
* Review of NEST User Mailing List
* Project team round
* In-depth discussion
The agenda for this meeting is also available online, see https://github.com/nest/nest-simulator/wiki/2021-12-06-Open-NEST-Developer-…
Looking forward to seeing you soon!
best,
Dennis Terhorst