Dear NEST Users & Developers!
I would like to invite you to our next fortnightly Open NEST Developer
Video Conference, today
Monday, February 28, 11.30-12.30 CET (UTC+1).
Feel free to join the meeting even if you just want to bring your own
questions for direct discussion in the in-depth section.
As usual, in the Project team round, a contact person from each team will
give a short statement summarizing ongoing work in the team and
cross-cutting points that need discussion among the teams. In the remainder
of the meeting we will go into a more in-depth discussion of topics
that came up on the mailing list or that are suggested by the teams.
Agenda
* Welcome
* Review of NEST User Mailing List
* Project team round
* In-depth discussion
The agenda for this meeting is also available online, see
https://github.com/nest/nest-simulator/wiki/2022-02-28-Open-NEST-Developer-…
Looking forward to seeing you soon!
Cheers,
Dennis Terhorst
------------------
Log-in information
------------------
We use a virtual conference room provided by DFN (Deutsches Forschungsnetz).
You can use the web client to connect. However, we encourage everyone to
use a headset for better audio quality, or even a proper video
conferencing system or software (see below) when available.
Web client
* Visit https://conf.dfn.de/webapp/conference/97938800
* Enter your name and allow your browser to use camera and microphone
* The conference does not need a PIN to join; just click join and you're in.
In case you see a dfnconf logo and the phrase "Auf den
Meetingveranstalter warten" ("waiting for the meeting host"), just be
patient; the meeting host needs to join first (a voice will tell you).
VC system/software
How to log in with a video conferencing system depends on your VC system
or software.
- Using the H.323 protocol (e.g. Polycom): vc.dfn.net##97938800 or
194.95.240.2##97938800
- Using the SIP protocol: 97938800@vc.dfn.de
- By telephone: +49-30-200-97938800
For those who do not have a video conference system or suitable
software, Polycom provides a pretty good free app for iOS and Android,
so you can join from your tablet (Polycom RealPresence Mobile, available
from AppStore/PlayStore). Note that firewalls may interfere with
videoconferencing in various and sometimes confusing ways.
For more technical information on logging in from various VC systems,
please see http://vcc.zih.tu-dresden.de/index.php?linkid=1.1.3.4
Hey NEST Team,
Calling "NodeCollection.get" returns a dictionary containing the user-declared "States" and "Parameters" together with other keys such as "global_id", "recordables", and "synaptic_elements". In this context, I want to ask whether there is a function in "nest" that returns exactly the model-declared "States" or "Parameters" without these extra keys, and of course without prior knowledge of the NESTML file from which the model originated.
Best,
Ayssar
Hi,
Am I correct in thinking that all random number generators provided by NEST
use std::thread? If I am not mistaken, it is an extension of POSIX threads,
and I would like to avoid it for my target-offloading work.
Thanks,
Itaru.
Hello,
Documentation for `tsodyks_synapse` says it is only compatible with the `iaf_psc_exp` or `iaf_psc_exp_htum` neuron models. Would it be possible to use it with other `_exp`-type models with postsynaptic currents or conductances with exponential decay (for example, `aeif_cond_exp`)? If not, what could be a workaround?
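For reference, a sketch of what one might try (assuming NEST 3 syntax; whether the resulting dynamics are meaningful for a conductance-based model is exactly the open question, and matching `tau_psc` to the neuron's synaptic time constant is my reading of the documented restriction, not a verified recipe):

```python
import nest

pre = nest.Create("iaf_psc_exp")
post = nest.Create("aeif_cond_exp")

# tsodyks_synapse assumes its tau_psc matches the postsynaptic decay
# time constant; here we copy the neuron's excitatory synaptic tau.
nest.Connect(pre, post, syn_spec={
    "synapse_model": "tsodyks_synapse",
    "tau_psc": post.get("tau_syn_ex"),
    "weight": 100.0,
})
nest.Simulate(100.0)
```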
With best regards,
Alexander Kozlov,
CST EECS KTH.
Dear Colleagues,
The NEST Initiative is excited to invite everyone interested in Neural Simulation Technology and the NEST Simulator to the NEST Conference 2022. The NEST Conference provides an opportunity for the NEST Community to meet, exchange success stories, swap advice, and learn about current developments in and around NEST spiking network simulation and its applications. We particularly encourage young scientists to participate in the conference!
This year's conference will again take place as a virtual event on Thursday/Friday 23/24 June 2022.
Register now!
For more information please visit the conference website
https://nest-simulator.org/conference
We are looking forward to seeing you all in June!
Hans Ekkehard Plesser and colleagues
--
Prof. Dr. Hans Ekkehard Plesser
Head, Department of Data Science
Faculty of Science and Technology
Norwegian University of Life Sciences
PO Box 5003, 1432 Aas, Norway
Phone +47 6723 1560
Email hans.ekkehard.plesser@nmbu.no
Home http://arken.nmbu.no/~plesser
Hello,
We would like to use neuron models with NMDA channels in our spiking neuron model. We're still unsure whether we will use a native neuron model in NEST or implement our own in NESTML. My understanding is that the only model in NEST which does that is the Hill-Tononi model, which seems rather complex. How fast would you roughly expect a Hill-Tononi neuron network to run compared to a network made of aeif_cond_exp neurons?
Do you know by any chance of any examples of AdEx NESTML models which implement NMDA and GABA_B channels?
Also, one last question not really related to the previous ones: is there any way to model synaptic reliability in NEST?
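On the last point, a hedged sketch: if "reliability" means probabilistic spike transmission, NEST's `bernoulli_synapse` (assuming it is available in the version used) delivers each presynaptic spike independently with probability `p_transmit`:

```python
import nest

pre = nest.Create("aeif_cond_exp")
post = nest.Create("aeif_cond_exp")

# Each presynaptic spike is transmitted with probability p_transmit,
# a simple stand-in for synaptic unreliability.
nest.Connect(pre, post, syn_spec={
    "synapse_model": "bernoulli_synapse",
    "p_transmit": 0.5,
    "weight": 10.0,
})
```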
Thanks a lot,
Remy
Hi all,
we have extended the submission deadline for WOIV'22 until March 7.
We have also added a third submission type: lightning presentations (2 to 4 pages, published via Zenodo).
Please find the updated CfP below and on the webpage https://woiv.gitlab.io
Cheers,
Tom
-----
# WOIV'22: 6th International Workshop on In Situ Visualization
* Held in conjunction with ISC 2022
* Hamburg, Germany, June 2, 2022
## Scope
Large-scale HPC simulations with their inherent I/O bottleneck have
made in situ visualization an essential approach for data analysis,
although the idea of in situ visualization dates back to the golden
era of coprocessing in the 1990s. In situ coupling of analysis and
visualization to a live simulation circumvents writing raw data to
disk for post-mortem analysis – an approach that is already
inefficient for today’s very large simulation codes. Instead, with in
situ visualization, data abstracts are generated that provide a much
higher level of expressiveness per byte. Therefore, more details can
be computed and stored for later analysis, providing more insight than
traditional methods.
We encourage contributed talks on methods and workflows that have been
used for large-scale parallel visualization, with a particular focus
on the in situ case. Presentations on codes that closely couple
numerical methods and visualization are particularly welcome. Speakers
should detail if and how the application drove abstractions or other
kinds of data reductions, which frameworks were used, and how these
interacted with the expressiveness and flexibility of the visualization
for exploratory analysis.
Of particular interest to WOIV and its attendees are recent
developments for in situ libraries and software. Submissions
documenting recent additions to existing in situ software or new in
situ platforms are highly encouraged. WOIV is an excellent place to
connect providers of in situ solutions with potential customers.
For the submissions we are not only looking for success stories; we are
also particularly interested in experiments that started with a certain
goal or idea in mind but were later thwarted by reality or by
insufficient hardware/software.
Areas of interest for WOIV include, but are not limited to:
* Techniques and paradigms for in situ visualization.
* Algorithms relevant to in situ visualization. These could include
algorithms empowered by in situ visualization or algorithms that
overcome limitations of in situ visualization.
* Systems and software implementing in situ visualization. These
include both general purpose and bespoke implementations. This also
includes updates to existing software as well as new software.
* Workflow management.
* Use of in situ visualization for application science or other
examples of using in situ visualization.
* Performance studies of in situ systems. Comparisons between in situ
systems or techniques or comparisons between in situ and
alternatives (such as post hoc) are particularly encouraged.
* The impact of hardware changes on in situ visualization.
* The online visualization of experimental data.
* Reports of in situ visualization failures.
* Emerging issues with in situ visualization.
## Submissions
We accept submissions of short papers (6 to 8 pages), full papers (10
to 12 pages) and lightning presentations (2 to 4 pages) in Springer
single column LNCS style. Please find LaTeX and Word templates at
https://woiv.gitlab.io/woiv22/template.
Submissions are exclusively handled via EasyChair:
https://woiv.gitlab.io/woiv22/submit. The review process is single- or
double-blind; we leave it to the authors' discretion whether to
disclose their identity in their submissions.
All submissions will be peer-reviewed by experts in the field and will
be evaluated according to relevance to the workshop theme, technical
soundness, thoroughness of the success/failure comparison, and impact
of the method/results. Accepted short and full papers will
appear as post-conference workshop proceedings in the Springer Lecture
Notes in Computer Science (LNCS) series; lightning presentations will
be published via Zenodo. The submitted versions will be made available
to workshop participants during ISC.
## Important Dates
* Submission deadline (extended): March 7, 2022, anywhere on earth
* Notification of acceptance: April 15, 2022
* Final presentation slides due: May 10, 2022, anywhere on earth
(subject to change)
* Workshop: June 2, 2022
* Camera-ready version due: July 1, 2022 (subject to change,
extrapolated from previous years)
## Chairs
* Peter Messmer, NVIDIA
* Tom Vierjahn, Westphalian University of Applied Sciences, Bocholt,
Germany
## Steering Committee
* Steffen Frey, University of Groningen, The Netherlands
* Kenneth Moreland, Oak Ridge National Laboratory, USA
* Thomas Theussl, KAUST, Saudi Arabia
* Guido Reina, University of Stuttgart, Germany
* Tom Vierjahn, Westphalian University of Applied Sciences, Bocholt,
Germany
## Website, Venue, Registration
* Website: https://woiv.gitlab.io
* Submission system: https://woiv.gitlab.io/woiv22/submit
* Template: https://woiv.gitlab.io/woiv22/template
* Venue: https://www.isc-hpc.com (ISC 2022)
* Workshop registration: https://woiv.gitlab.io/woiv22/register
## Contact
E-Mail: woiv@googlegroups.com
Dear NEST Users & Developers!
I would like to invite you to our next fortnightly Open NEST Developer
Video Conference, today
Monday, February 14, 11.30-12.30 CET (UTC+1).
Feel free to join the meeting even if you just want to bring your own
questions for direct discussion in the in-depth section.
As usual, in the Project team round, a contact person from each team will
give a short statement summarizing ongoing work in the team and
cross-cutting points that need discussion among the teams. In the remainder
of the meeting we will go into a more in-depth discussion of topics
that came up on the mailing list or that are suggested by the teams.
Agenda
* Welcome
* Review of NEST User Mailing List
* Project team round
* In-depth discussion
The agenda for this meeting is also available online, see
https://github.com/nest/nest-simulator/wiki/2022-02-14-Open-NEST-Developer-…
Looking forward to seeing you soon!
Cheers,
Dennis Terhorst
------------------
Log-in information
------------------
We use a virtual conference room provided by DFN (Deutsches Forschungsnetz).
You can use the web client to connect. However, we encourage everyone to
use a headset for better audio quality, or even a proper video
conferencing system or software (see below) when available.
Web client
* Visit https://conf.dfn.de/webapp/conference/97938800
* Enter your name and allow your browser to use camera and microphone
* The conference does not need a PIN to join; just click join and you're in.
In case you see a dfnconf logo and the phrase "Auf den
Meetingveranstalter warten" ("waiting for the meeting host"), just be
patient; the meeting host needs to join first (a voice will tell you).
VC system/software
How to log in with a video conferencing system depends on your VC system
or software.
- Using the H.323 protocol (e.g. Polycom): vc.dfn.net##97938800 or
194.95.240.2##97938800
- Using the SIP protocol: 97938800@vc.dfn.de
- By telephone: +49-30-200-97938800
For those who do not have a video conference system or suitable
software, Polycom provides a pretty good free app for iOS and Android,
so you can join from your tablet (Polycom RealPresence Mobile, available
from AppStore/PlayStore). Note that firewalls may interfere with
videoconferencing in various and sometimes confusing ways.
For more technical information on logging in from various VC systems,
please see http://vcc.zih.tu-dresden.de/index.php?linkid=1.1.3.4
Dear all,
NEST is a powerful tool for simulating biologically derived spiking neural
networks. Besides the ever-improving model details, learning paradigms,
and computational complexity, two complementary components are
necessary in order to simulate a (fully) biologically plausible mammalian
brain: running simulations at large scale and embodying the brain
simulation in a (virtual) body.
As part of the Fenix Infrastructure Webinar series, I am going to
present "Embodied large scale spiking neural networks in the
Neurorobotics Platform", with a focus on deployment on High
Performance Computing infrastructure, next Thursday. The
presentation showcases the joint work of developers from NEST and the
Neurorobotics Platform, the CSCS Swiss National Supercomputing Centre,
and researchers in the RoboBrain project.
Check out the details and registration at:
https://fenix-ri.eu/events/14th-fenix-infrastructure-webinar-ebrains-servic…
Looking forward to seeing you there,
Benedikt
--
Benedikt Feldotto M.Sc.
Research Assistant
Human Brain Project - Neurorobotics
Technical University of Munich
Department of Informatics
Chair of Robotics, Artificial Intelligence and Real-Time Systems
Room HB 2.02.20
Parkring 13
D-85748 Garching b. München
Tel.: +49 89 289 17628
Mail: feldotto@in.tum.de
https://www6.in.tum.de/en/people/benedikt-feldotto-msc/
www.neurorobotics.net