Dear NEST community,
When trying to pass a `synapse_label` parameter in the syn_spec
dictionary, in conjunction with the `one_to_one` rule and node ids
passed as arrays, I get the following error with NEST 3.0:
File
"/home/zbarni/code/projects/lif_sorn_seqlearn/lif-sorn-nest-implementation/remote/test_nestml_multith.py",
line 29, in <module>
nest.Connect([1], [2], conn_spec={'rule': 'one_to_one'},
syn_spec={'synapse_model': 'static_synapse_lbl', 'synapse_label': 1})
File
"/home/zbarni/software/installed/miniconda3/envs/del-lif-sorn-nestml-check_py390/lib/python3.9/site-packages/nest/ll_api.py",
line 228, in stack_checker_func
return f(*args, **kwargs)
File
"/home/zbarni/software/installed/miniconda3/envs/del-lif-sorn-nestml-check_py390/lib/python3.9/site-packages/nest/lib/hl_api_connections.py",
line 255, in Connect
connect_arrays(pre, post, weights, delays, synapse_model,
syn_param_keys, syn_param_values)
File "pynestkernel.pyx", line 360, in
pynestkernel.NESTEngine.connect_arrays
nest.lib.hl_api_exceptions.TypeMismatch: TypeMismatch in SLI function
connect_arrays
It works when using the `all_to_all` rule. Is this the expected
behavior, or is it for some reason not (yet) possible to use
`synapse_label` together with the `one_to_one` rule?
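For context, a possible workaround (an untested sketch; it assumes the error comes from the array-based `connect_arrays` fast path that plain Python lists trigger, and that NodeCollections are routed through the regular connection path, which handles `synapse_label`):

```python
# Untested sketch: requires a NEST 3.x installation.
try:
    import nest
except ImportError:
    nest = None  # NEST not available; treat this purely as a sketch

if nest is not None:
    nest.ResetKernel()
    neurons = nest.Create("iaf_psc_alpha", 2)
    pre, post = neurons[:1], neurons[1:]  # NodeCollections, not plain lists
    # Passing NodeCollections instead of [1], [2] may avoid the
    # connect_arrays fast path that raises the TypeMismatch.
    nest.Connect(pre, post,
                 conn_spec={"rule": "one_to_one"},
                 syn_spec={"synapse_model": "static_synapse_lbl",
                           "synapse_label": 1})
```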
Thank you,
Barna
Dear NEST Users & Developers!
I would like to invite you to our next fortnightly Open NEST Developer
Video Conference, today
Monday January 31, 11.30-12.30 CET (UTC+1).
Feel free to join the meeting just to bring your own questions for
direct discussion in the in-depth section.
As usual, in the project team round, a contact person from each team
will give a short statement summarizing ongoing work in the team and
cross-cutting points that need discussion among the teams. In the
remainder of the meeting, we go into a more in-depth discussion of
topics that came up on the mailing list or were suggested by the teams.
Agenda
* Welcome
* Review of NEST User Mailing List
* Project team round
* In-depth discussion
The agenda for this meeting is also available online, see
https://github.com/nest/nest-simulator/wiki/2022-01-31-Open-NEST-Developer-…
Looking forward to seeing you soon!
Cheers,
Dennis Terhorst
------------------
Log-in information
------------------
We use a virtual conference room provided by DFN (Deutsches Forschungsnetz).
You can use the web client to connect. However, we encourage everyone to
use a headset for better audio quality, or even a proper video
conferencing system or software (see below) when available.
Web client
* Visit https://conf.dfn.de/webapp/conference/97938800
* Enter your name and allow your browser to use camera and microphone
* The conference does not need a PIN; just click Join and you're in.
In case you see a dfnconf logo and the phrase "Auf den
Meetingveranstalter warten" ("waiting for the meeting host"), just be
patient; the meeting host needs to join first (a voice will tell you).
VC system/software
How you log in with a video conferencing system depends on your VC
system or software.
- Using the H.323 protocol (e.g. Polycom): vc.dfn.net##97938800 or
194.95.240.2##97938800
- Using the SIP protocol: 97938800@vc.dfn.de
- By telephone: +49-30-200-97938800
For those who do not have a video conference system or suitable
software, Polycom provides a pretty good free app for iOS and Android,
so you can join from your tablet (Polycom RealPresence Mobile, available
from AppStore/PlayStore). Note that firewalls may interfere with
videoconferencing in various and sometimes confusing ways.
For more technical information on logging in from various VC systems,
please see http://vcc.zih.tu-dresden.de/index.php?linkid=1.1.3.4
Dear Colleagues,
The Department of Data Science at NMBU is currently looking for two full-time permanent associate professors in machine learning and scientific computing, respectively. We are also looking for adjunct professors (20%, 4 years) to cover ethical and legal aspects of data science and data security, respectively.
I'd appreciate it if you would pass this information on to colleagues who may be interested in any of the positions. As our faculty is currently mostly male, we would particularly appreciate applications from women.
* Associate professor in Data Science (Machine Learning) - Deadline: Tuesday, February 15, 2022<https://www.jobbnorge.no/en/available-jobs/job/217579/associate-professor-i…>
* Associate professor in Scientific Computing - Deadline: Tuesday, February 15, 2022<https://www.jobbnorge.no/en/available-jobs/job/217590/associate-professor-i…>
* Professor II/Associate professor II in Ethics and Law of Data Science - Deadline: Tuesday, February 15, 2022<https://www.jobbnorge.no/en/available-jobs/job/217629/professor-ii-associat…>
* Professor II/Associate professor II in Data Security - Deadline: Tuesday, February 15, 2022<https://www.jobbnorge.no/en/available-jobs/job/217607/professor-ii-associat…>
Best regards,
Hans Ekkehard
--
Prof. Dr. Hans Ekkehard Plesser
Head, Department of Data Science
Faculty of Science and Technology
Norwegian University of Life Sciences
PO Box 5003, 1432 Aas, Norway
Phone +47 6723 1560
Email hans.ekkehard.plesser(a)nmbu.no<mailto:hans.ekkehard.plesser@nmbu.no>
Home http://arken.nmbu.no/~plesser
Dear all,
quick question about data logging to a specified folder:
with
nest.SetKernelStatus({"data_path": '/opt/data/log'})
I can specify the path where data is logged. If I do not specify a
path, where does the data get saved? Or is it not saved to file at all
if I do not specify 'data_path'?
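For reference, a minimal sketch of what I am doing (the temporary directory here is just a stand-in for my actual log folder, and 'data_prefix' is optional):

```python
# Sketch: requires a NEST installation.
import tempfile

try:
    import nest
except ImportError:
    nest = None  # NEST not available; sketch only

if nest is not None:
    nest.ResetKernel()
    # The default is an empty string -- where do files go in that case?
    print(nest.GetKernelStatus("data_path"))
    log_dir = tempfile.mkdtemp()  # stand-in for '/opt/data/log'
    nest.SetKernelStatus({"data_path": log_dir, "data_prefix": "run01_"})
```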
Thanks!
Benedikt
--
Benedikt Feldotto M.Sc.
Research Assistant
Human Brain Project - Neurorobotics
Technical University of Munich
Department of Informatics
Chair of Robotics, Artificial Intelligence and Real-Time Systems
Room HB 2.02.20
Parkring 13
D-85748 Garching b. München
Tel.: +49 89 289 17628
Mail: feldotto(a)in.tum.de
https://www6.in.tum.de/en/people/benedikt-feldotto-msc/
www.neurorobotics.net
Dear NEST Users & Developers!
I would like to invite you to our next fortnightly Open NEST Developer
Video Conference, today
Monday January 17, 11.30-12.30 CET (UTC+1).
Feel free to join the meeting just to bring your own questions for
direct discussion in the in-depth section.
As usual, in the project team round, a contact person from each team
will give a short statement summarizing ongoing work in the team and
cross-cutting points that need discussion among the teams. In the
remainder of the meeting, we go into a more in-depth discussion of
topics that came up on the mailing list or were suggested by the teams.
Agenda
* Welcome
* Review of NEST User Mailing List
* Project team round
* In-depth discussion
The agenda for this meeting is also available online, see
https://github.com/nest/nest-simulator/wiki/2022-01-17-Open-NEST-Developer-…
Looking forward to seeing you soon!
Cheers,
Dennis Terhorst
Dear all,
first of all, happy new year!
I am using NEST 2.2 (installed via conda) and I have set up a simple
network of 20-100 neurons based on the ht_neuron model. I am delivering
noise to all neurons with the sinusoidal_poisson_generator (I can switch
to another generator), and the simulation produces the results I am
looking for. However, I would now like to look at the spikes entering
the network, and therefore need to access all events generated by the
noise generator.
A sample of my code is:
import nest

receptors = nest.GetDefaults('ht_neuron')['receptor_types']
nest.CopyModel('sinusoidal_poisson_generator', 'sin_pois',
               {'amplitude': 50., 'rate': 15.0})
nest.CopyModel('static_synapse', 'AMPAnoise',
               {'receptor_type': receptors['AMPA'], 'weight': 10.})

neurons = nest.Create('ht_neuron', 1)
noise = nest.Create('sin_pois', 1)

# current pulse
dc_gen = nest.Create('dc_generator')
nest.SetStatus(dc_gen, {'amplitude': 20., 'start': 400., 'stop': 1050.})
nest.Connect(dc_gen, neurons, 'one_to_one')

# recorders
detect_noise = nest.Create('spike_detector', 1)
nest.Connect(noise, neurons, 'one_to_one', syn_spec='AMPAnoise')
nest.Connect(noise, detect_noise, 'one_to_one')
''' the last Connect does not work:
Creation of connection is not possible because:
All outgoing connections from a device must use the same synapse type.
'''
Is there a way to record those spikes?
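One approach I am considering (an untested sketch): since a device may use only one synapse type for its outgoing connections, route the generator through `parrot_neuron`s, which repeat every incoming spike, and record the parrots instead. Would that be the right way?

```python
# Untested sketch: requires a NEST installation.
try:
    import nest
except ImportError:
    nest = None  # NEST not available; sketch only

if nest is not None:
    nest.ResetKernel()
    noise = nest.Create("sinusoidal_poisson_generator", 1,
                        {"rate": 15.0, "amplitude": 50.0})
    parrots = nest.Create("parrot_neuron", 1)
    # spike_detector was renamed to spike_recorder in NEST 3
    rec_model = ("spike_recorder" if "spike_recorder" in nest.Models()
                 else "spike_detector")
    detect_noise = nest.Create(rec_model, 1)
    nest.Connect(noise, parrots, "one_to_one")
    nest.Connect(parrots, detect_noise)  # record the repeated spikes
    # and connect the parrots (instead of the generator) to the network:
    # nest.Connect(parrots, neurons, 'one_to_one', syn_spec='AMPAnoise')
    nest.Simulate(100.0)
    print(nest.GetStatus(detect_noise, "n_events"))
```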
Thanks in advance for any help
Best
Dear NEST Users and Developers!
I would like to thank you all for your engagement for high-quality computational neuroscience and research software in 2021. We have made some major steps forward with the release of NEST 3, NEST Desktop 3 and NESTML 4. A few days ago, PyNN 0.10 also brought support for NEST 3, nicely wrapping up the year of NEST 3. We also moved to quarterly releases with the release of NEST 3.0. If you wonder what happened to NEST 3.2 in that scheme of things, don't worry: it will come in January. The combination of the holiday season, new COVID restrictions and an important reporting deadline in the Human Brain Project—the major source of funding for NEST development in recent years—unfortunately left too little time to wrap everything up in time.
2022 promises to be an exciting year for NEST, including the deeper integration of NEST Desktop (so far mainly developed by Sebastian Spreizer) and NEST GPU (so far mainly developed by Bruno Golosio as NeuronGPU) into the NEST development process and community.
Don't forget to block out 23/24 June in your calendars for the NEST Conference 2022 (this time on a Thursday and Friday)!
On behalf of the NEST Initiative, I wish you happy holidays and all the best for 2022!
Hans Ekkehard
--
Prof. Dr. Hans Ekkehard Plesser
Head, Department of Data Science
Faculty of Science and Technology
Norwegian University of Life Sciences
PO Box 5003, 1432 Aas, Norway
Phone +47 6723 1560
Email hans.ekkehard.plesser(a)nmbu.no<mailto:hans.ekkehard.plesser@nmbu.no>
Home http://arken.nmbu.no/~plesser
Hi all,
we will organise another edition of the in situ visualization workshop (https://woiv.gitlab.io) at ISC (https://isc-hpc.com). If you are working on visualizing results of neural simulations while these are still running, you might want to consider submitting your work to the workshop …
Please find the CfP below.
Cheers,
Tom
-----
# WOIV'22: 6th International Workshop on In Situ Visualization
* Held in conjunction with ISC 2022
* Hamburg, Germany, June 2, 2022
## Scope
Large-scale HPC simulations with their inherent I/O bottleneck have
made in situ visualization an essential approach for data analysis,
although the idea of in situ visualization dates back to the golden
era of coprocessing in the 1990s. In situ coupling of analysis and
visualization to a live simulation circumvents writing raw data to
disk for post-mortem analysis – an approach that is already
inefficient for today’s very large simulation codes. Instead, with in
situ visualization, data abstracts are generated that provide a much
higher level of expressiveness per byte. Therefore, more details can
be computed and stored for later analysis, providing more insight than
traditional methods.
We encourage contributed talks on methods and workflows that have been
used for large-scale parallel visualization, with a particular focus
on the in situ case. Presentations on codes that closely couple
numerical methods and visualization are particularly welcome. Speakers
should detail if and how the application drove abstractions or other
kinds of data reductions, which frameworks they used, and how these
choices affected the expressiveness and flexibility of the
visualization for exploratory analysis.
Of particular interest to WOIV and its attendees are recent
developments for in situ libraries and software. Submissions
documenting recent additions to existing in situ software or new in
situ platforms are highly encouraged. WOIV is an excellent place to
connect providers of in situ solutions with potential customers.
For the submissions we are not only looking for success stories; we
are also particularly interested in experiments that started with a
certain goal or idea in mind but were later thwarted by reality or by
insufficient hardware/software.
Areas of interest for WOIV include, but are not limited to:
* Techniques and paradigms for in situ visualization.
* Algorithms relevant to in situ visualization. These could include
algorithms empowered by in situ visualization or algorithms that
overcome limitations of in situ visualization.
* Systems and software implementing in situ visualization. These
include both general purpose and bespoke implementations. This also
includes updates to existing software as well as new software.
* Workflow management.
* Use of in situ visualization for application science or other
examples of using in situ visualization.
* Performance studies of in situ systems. Comparisons between in situ
systems or techniques or comparisons between in situ and
alternatives (such as post hoc) are particularly encouraged.
* The impact of hardware changes on in situ visualization.
* The online visualization of experimental data.
* Reports of in situ visualization failures.
* Emerging issues with in situ visualization.
## Submissions
We accept submissions of short papers (6 to 8 pages) and full papers
(10 to 12 pages) in Springer single column LNCS style. Please find
LaTeX and Word templates at https://woiv.gitlab.io/woiv22/template.
Submissions are exclusively handled via EasyChair:
https://woiv.gitlab.io/woiv22/submit. The review process is single- or
double-blind; we leave it to the discretion of the authors whether
they want to disclose their identity in their submissions.
All submissions will be peer-reviewed by experts in the field, and
will be evaluated according to relevance to the workshop theme,
technical soundness, thoroughness of success/failure comparison, and
impactfulness of method/results. Accepted papers will appear as
post-conference workshop proceedings in the Springer Lecture Notes in
Computer Science (LNCS) series. The submitted versions will be made
available to workshop participants during ISC.
## Important Dates
* Submission deadline: February 13, 2022, anywhere on earth
* Notification of acceptance: April 15, 2022
* Final presentation slides due: May 10, 2022, anywhere on earth
(subject to change)
* Workshop: June 2, 2022
* Camera-ready version due: July 1, 2022 (subject to change,
extrapolated from previous years)
## Chairs
* Peter Messmer, NVIDIA
* Tom Vierjahn, Westphalian University of Applied Sciences, Bocholt,
Germany
## Steering Committee
* Steffen Frey, University of Groningen, The Netherlands
* Kenneth Moreland, Sandia National Labs, USA
* Thomas Theussl, KAUST, Saudi Arabia
* Guido Reina, University of Stuttgart, Germany
* Tom Vierjahn, Westphalian University of Applied Sciences, Bocholt,
Germany
## Website, Venue, Registration
* Website: https://woiv.gitlab.io
* Submission system: https://woiv.gitlab.io/woiv22/submit
* Template: https://woiv.gitlab.io/woiv22/template
* Venue: https://www.isc-hpc.com (ISC 2022)
* Workshop registration: https://woiv.gitlab.io/woiv22/register
## Contact
E-Mail: woiv(a)googlegroups.com
Dear NEST Users & Developers!
I would like to invite you to our next fortnightly Open NEST Developer
Video Conference, today
Monday December 20, 11.30-12.30 CET (UTC+1).
Feel free to join the meeting just to bring your own questions for
direct discussion in the in-depth section.
As usual, in the project team round, a contact person from each team
will give a short statement summarizing ongoing work in the team and
cross-cutting points that need discussion among the teams. In the
remainder of the meeting, we go into a more in-depth discussion of
topics that came up on the mailing list or were suggested by the teams.
Agenda
* Welcome
* Review of NEST User Mailing List
* Project team round
* In-depth discussion
The agenda for this meeting is also available online, see
https://github.com/nest/nest-simulator/wiki/2021-12-20-Open-NEST-Developer-…
Looking forward to seeing you soon!
Cheers,
Jochen Martin Eppler
--
Dr. Jochen Martin Eppler
Phone: +49(2461)61-96653
----------------------------------
Simulation Laboratory Neuroscience
Jülich Supercomputing Centre
Institute for Advanced Simulation
------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Volker Rieke
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr. Astrid Lambrecht,
Prof. Dr. Frauke Melchior
------------------------------------------------------------------------------------------------
Dear all,
I am a new NEST user. I have a question concerning the range of
neuron/synapse models that are possible in NEST.
I would like to implement my own neuron/synapse model with NESTML, but I
am unsure whether it is possible.
Indeed, in my model, synaptic currents do not depend on pre-synaptic
spikes alone. Computing the synaptic currents requires the opening
probabilities of the pre-synaptic receptor channels.
These opening probabilities evolve according to differential equations
with second-order dynamics and specific decay constants, taking into
account the arrival times of pre-synaptic spikes at the specific
synapse. The parameters of these equations depend on the
neurotransmitter type (GABA_A, GABA_B, NMDA, AMPA).
Furthermore, in addition to the input spikes and the receptor opening
probabilities, the current membrane potential of the post-synaptic
neuron is also required to compute the synaptic currents.
Do you know if one of the NEST models implements similar dynamics? Is
it possible to compute such synaptic dynamics with NESTML by creating a
synapse and/or a neuron model? Or is it not, due to specific
limitations?
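To make the description concrete, the kind of dynamics I have in mind could be written as follows (an illustrative sketch with hypothetical parameters, not the exact model): a second-order kernel for the opening probability p(t), driven by the pre-synaptic spike times t_k, and a conductance-like current that depends on the post-synaptic membrane potential V_m, with the parameters (tau_r, tau_d, g-bar, E_rev) differing per receptor type:

```latex
% Illustrative sketch only: second-order receptor kinetics driven by
% pre-synaptic spike arrival times t_k, and a synaptic current that
% depends on the post-synaptic membrane potential V_m.
\tau_r \tau_d \,\ddot{p}(t) + (\tau_r + \tau_d)\,\dot{p}(t) + p(t)
    = \sum_k \delta(t - t_k), \qquad
I_{\mathrm{syn}}(t) = \bar{g}\, p(t)\,\bigl(V_m(t) - E_{\mathrm{rev}}\bigr)
```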
Thank you,
Best regards,
JB