Abstract
Understanding the relation between cortical neuronal network structure and neuronal activity is a fundamental unresolved question in neuroscience, with implications for our understanding of the mechanism by which neuronal networks evolve over time, spontaneously or under stimulation. It requires a method for inferring the structure and composition of a network from neuronal activities. Tracking the evolution of networks and their changing functionality will provide invaluable insight into the occurrence of plasticity and the underlying learning process. We devise a probabilistic method for inferring the effective network structure by integrating techniques from Bayesian statistics, statistical physics, and principled machine learning. The method and resulting algorithm allow one to infer the effective network structure, identify the excitatory and inhibitory types of its constituents, and predict neuronal spiking activity by employing the inferred structure. We validate the performance of the method and algorithm using synthetic data, spontaneous activity of an in silico emulator, and realistic in vitro neuronal networks of modular and homogeneous connectivity, demonstrating excellent structure inference and activity prediction. We also show that our method outperforms commonly used existing methods for inferring neuronal network structure. Inferring the evolving effective structure of neuronal networks will provide new insight into the learning process due to stimulation in general and will facilitate the development of neuron-based circuits with computing capabilities.
Inferring effective connectivity from firing patterns in neuronal networks is a long-standing problem, crucial for understanding how connectivity changes due to stimulation and how plasticity occurs. We introduce a probabilistic method to infer the excitatory/inhibitory type of neurons, the existence of links, and the effective coupling strengths in cortical neuronal networks from neuronal spiking recordings. We validate the efficacy of our method using both in silico and in vitro experiments, demonstrating that the effective structure revealed agrees with the true values and with key experimental observables such as modular organization. Additionally, the neuronal activity predicted using the inferred structures aligns with the observed data. The method will have a significant impact on both fundamental studies in neuroscience and the development of neuron-based computing devices.
Introduction
Revealing how cortical neuronal network connectivity evolves in time, spontaneously or under stimulation, is a foundational question in neuroscience, in particular understanding how cortical neurons learn from repeated stimulation through changes in topology and synaptic strengths (1–7). While there are many tools for investigating macroscopic brain activities and changes, such as functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG) (8–10), investigating the microscopic changes which occur in neuronal tissues noninvasively remains a challenge. While in vivo interrogation of neuronal networks at the microscopic level remains difficult, in vitro techniques such as multielectrode arrays (MEA) (11) and calcium imaging (12, 13) facilitate the monitoring and stimulation of neuronal tissues at cellular resolution and open the way to greater understanding of the learning process. New developments in the application of neuron-based circuits with computing capabilities make the need to understand the exact relationship between learning and stimulation more urgent and relevant (14–17).
Machine learning (ML) and artificial intelligence (AI) play an increasingly crucial role in our daily lives. However, training ML and AI systems requires unsustainable computing power and energy consumption (18), and such systems mostly lack the ability to adapt their structure in response to a changing situation. This has given rise to the search for alternative computing paradigms, in particular the emerging field of biological computation, which aims at employing human neuronal networks (hNNs) as processing units in biological computing devices. These developments, such as using cortical brain organoids for nonlinear curve prediction (19) and employing cortical neuronal networks for decision-making in simulated gaming environments (20), have recently drawn the attention of both researchers and the general public. These breakthroughs point to the immense potential of hNNs in biological machine learning. The common belief is that plasticity and learning occur through appropriate stimulation in neuron-based computing devices, such that both network topology and synaptic strengths evolve to structures that can carry out specific data/stimulation-driven tasks. However, the tools needed to investigate the evolving structure and support the understanding of how these hNN-based devices operate are currently lacking. Hence, it is crucial to develop a principled inference tool that can reveal the effective cortical network structure, to better comprehend the mechanism that gives rise to task learning from stimulation.
Various methods have been employed to infer the effective neuronal network from its firing patterns. Commonly used techniques include generalized transfer entropy (GTE) (21), dynamic causal modeling, Granger causality (22), the maximum entropy model (23), and the generalized linear model (24). However, these methods have significant limitations. They can only measure directional causation between neurons and identify the existence of an effective connection by setting an appropriate but somewhat arbitrary threshold. These methods cannot find the excitatory or inhibitory type of neurons without manipulating the network through stimulation or channel blocking (21, 25), nor can they determine the model's effective synaptic strengths. As such, they are less suitable for studies requiring long-term monitoring and careful consideration of stimulation protocols, since such interventions may affect network development, as observed in the training of cortical neuron-based learning machines (19, 20).
Moreover, these methods often overlook the activities of nearby neurons and fail to capture interactions between multiple neurons, resulting in inaccurate inference. Additionally, methods such as GTE do not provide a probabilistic model for neuronal activity, making it difficult to predict or reproduce network activity using the inferred effective connectivity structure for validation or further investigations.
To fill these gaps, here we advocate mapping neuronal activities onto the kinetic Ising model of statistical physics as they share some common features, such as binary state of activity, unidirectional nonequilibrium and nonlinear dynamics, and multineuron interactions. Mapping neuronal activities onto the kinetic Ising model facilitates the inference of interneuron interactions (26, 27). Yet, inferring the kinetic Ising model structure and properties is challenging, and probabilistic methods have been developed in the statistical physics community for inferring the underlying directional interaction strengths from observation sequences (28–30). However, most methods have been derived for a simple model, where coupling strengths are Gaussian distributed with mean zero and small variance, which does not hold in biological neuronal networks, with the exception of (31) under specific conditions (see Footnotes). Thus, a principled probabilistic method that can identify connectivity, synaptic strengths, and the excitatory/inhibitory type of each neuron is required.
In this paper, we introduce an algorithm that combines models from statistical physics, Bayesian inference, and probabilistic machine learning to infer the effective architecture of biological neuronal networks from firing patterns. Our proposed algorithm overcomes some of the limitations of existing methods and infers not only the effective connectivity from neuronal firing but also the neuronal characteristics (inhibitory/excitatory) and the existence of connections. Furthermore, unlike conventional methods, our algorithm provides a probabilistic model that facilitates the simulation of neuronal activities using the inferred architecture, which can be used for structure validation, prediction, and further investigations. We evaluate the performance of our algorithm using synthetically generated data, in silico neuronal network emulator data, and calcium imaging recordings of real in vitro cortical networks with patterned (32) and unpatterned substrates, demonstrating excellent agreement between the effective inferred model and the corresponding data.
Model
Kinetic Ising model
The kinetic Ising model (33) in statistical physics studies the activity of spins in a system with asymmetric coupling strengths. Sharing the common feature that the spin (neuron) configuration is influenced by directional coupling (synaptic) strengths and the state of its neighbors, the kinetic Ising model is suitable for describing neuronal spiking activities. Here, we map the binary neuronal spiking activity onto the kinetic Ising model, which is a discrete-time nonequilibrium probabilistic structure. Consider a system of N neurons within an interacting neuronal network. We denote a discrete variable $s_i(t) \in \{+1, -1\}$, indicating whether neuron $i$ is active ($+1$) or silent ($-1$) at time step $t$, where the probability of each neuron's state at the next time step is determined by the directional coupling strengths from its neighbors and by an external field.
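As an illustration of this mapping, the standard kinetic Ising (Glauber-type) parallel dynamics can be simulated in a few lines. The sigmoid transition probability below is the textbook form rather than the paper's exact parameterization, and the coupling matrix `J` and external fields `h` are illustrative placeholders, not inferred quantities:

```python
import numpy as np

def kinetic_ising_step(s, J, h, rng):
    """One parallel update of the kinetic Ising model.

    Each neuron i takes state +1 at time t+1 with probability
    1 / (1 + exp(-2 * H_i)), where H_i = sum_j J[i, j] * s_j(t) + h_i.
    This is the standard Glauber-style transition probability.
    """
    H = J @ s + h                            # local fields
    p_up = 1.0 / (1.0 + np.exp(-2.0 * H))    # P(s_i(t+1) = +1 | s(t))
    return np.where(rng.random(len(s)) < p_up, 1, -1)

def simulate(J, h, T, rng):
    """Generate T steps of +1/-1 spiking activity from the model."""
    N = len(h)
    s = rng.choice([-1, 1], size=N)          # random initial state
    traj = np.empty((T, N), dtype=int)
    for t in range(T):
        s = kinetic_ising_step(s, J, h, rng)
        traj[t] = s
    return traj
```

Trajectories generated this way play the role of the synthetic validation data used to test the inference algorithm.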
Adapting Ising model to living neuronal networks
Based on the mathematical model defined, we introduce prior distributions to align with real-world neuronal network properties. For clarity, we use
where
In principle, the related probability of Eq. 6 should be integrated over
Connectivity network inference
We extend the kinetic Ising model by introducing two sets of latent variables and prior distributions for all variables to align with the nature of realistic biological neuronal networks. The Methods section and the Supplementary material provide details on how the (latent) variables and hyperparameters can be evaluated in a principled way, enabling us to interpret the neuronal type from
Results
To validate the efficacy of our model and inference algorithm, we perform tests using three types of data: synthetic data generated by the kinetic Ising model, in silico data from computational models, and in vitro data from biological cortical neuronal activity recordings.
In the Supplementary material, we discuss and examine the properties of different algorithms for kinetic Ising model inference, comparing them with our proposed method in detail, and show that the proposed method performs significantly better than the naïve mean-field and maximum-likelihood approaches. In our generalized maximum-likelihood (GML) inference for the kinetic Ising model, coupling strengths are neither zero-mean nor of small variance, and more importantly, the method allows one to infer the existence of connections between neurons. Here, we focus on the results obtained from the in silico model with patterned substrates as well as the two realistic in vitro cortical neuronal activities. One of these in vitro recordings is of a homogeneous network with potassium stimulation, while the other focuses on spontaneous activity in a neuronal network with patterned substrates.
In silico experimental data
Since validating the accuracy of neuronal-type classification and link existence is very costly and extremely difficult for in vitro experiments, we first test our algorithm on emulated in silico data, for which the ground-truth topology is known. We generate an in silico neuronal network in which the culture lies over a striped topographical patterned substrate (32), as illustrated in Fig. 1A. The patterned substrate facilitates high connection density between neurons located on the same stripe but allows for lower connection density across stripes, an interesting feature that can be expressed and tested using our method. Spontaneous neuronal activity is then generated on this network. We note that connectivity within a stripe is so strong that each one effectively forms a "module" of highly interacting neurons. The detailed methodology of how the data are generated is described in the Methods section.
A) A three-dimensional sketch illustrating the in silico experimental setup. The neurons (
Figure 1B shows representative fluorescence traces of spontaneous neuronal activity generated in silico, and Fig. 1C shows the raster plot of activity for the entire network, with colors indicating the respective module neurons belong to. Using the activity as input, we infer the effective structure
The inferred connectivity matrix of
The most appropriate measures of success when contrasting inferred (“infer”) and ground-truth (“true”) topologies are the positive predictive value (PPV) and negative predictive value (NPV), or precision of neuronal type
Success measures in identifying neuron type, TPR (sensitivity), TNR (specificity), and PPV of the in silico model study.
Measure | TPR/TNR | PPV/NPV | Random
---|---|---|---
Excitatory | 0.88 | 0.96 | 0.8
Inhibitory | 0.84 | 0.64 | 0.2
Overall | n/a | 0.87 | 0.68
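The success measures reported in the table follow from the standard confusion-matrix counts. The sketch below is a generic implementation of these measures, with the neuron-type labels as hypothetical inputs:

```python
import numpy as np

def classification_measures(true_labels, pred_labels, positive):
    """TPR (sensitivity), TNR (specificity), PPV, and NPV for a binary
    classification, treating `positive` as the positive class
    (e.g. 'excitatory' vs. 'inhibitory' neuron type)."""
    t = np.asarray(true_labels) == positive
    p = np.asarray(pred_labels) == positive
    tp = np.sum(t & p)        # true positives
    tn = np.sum(~t & ~p)      # true negatives
    fp = np.sum(~t & p)       # false positives
    fn = np.sum(t & ~p)       # false negatives
    return {
        "TPR": tp / (tp + fn),   # sensitivity
        "TNR": tn / (tn + fp),   # specificity
        "PPV": tp / (tp + fp),   # precision
        "NPV": tn / (tn + fn),
    }
```

For instance, with three true excitatory neurons of which two are recovered, and one inhibitory neuron correctly identified, TPR is 2/3 while PPV is 1.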
Figure 1D reveals a strong clustering effect for connectivity within modules, indicating that two neurons within the same module have a higher probability of being connected compared with two neurons from different modules. This suggests that GML captures the structure of the topographical substrate. We then test the performance of inferring existing links using MAP, GML, and GTE (21). Since GTE identifies links by thresholding an assigned transfer entropy value, we plot the complete receiver operating characteristic (ROC) curve of TPR against FPR for identifying nonzero links, spanning all threshold values, as shown in Fig. 2A. The TPR and FPR of finding nonzero links using GML are
A) The ROC curve plotting the TPR against the false positive rate (FPR) of identifying effective links for in silico experiments using GTE (continuous blue line). The TPR and FPR using our MAP and GML are marked as green and red dots, respectively. The dashed diagonal line in A corresponds to a random guess. B and C) The scatter plot of inferred coupling strength
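A threshold-sweep ROC of the kind shown in Fig. 2A can be computed directly from any score matrix (e.g. transfer-entropy values) and a ground-truth adjacency. The following is a generic sketch, not the GTE implementation of (21); the score and adjacency matrices are hypothetical inputs:

```python
import numpy as np

def roc_curve_links(scores, true_links):
    """ROC points (FPR, TPR) for identifying nonzero links by
    thresholding a score matrix. `true_links` is a boolean matrix of
    ground-truth connections; diagonal (self) entries are excluded."""
    N = scores.shape[0]
    off = ~np.eye(N, dtype=bool)           # ignore self-couplings
    s, y = scores[off], true_links[off]
    order = np.argsort(-s)                 # sweep threshold from high to low
    tp = np.cumsum(y[order])               # true positives at each threshold
    fp = np.cumsum(~y[order])              # false positives at each threshold
    tpr = tp / max(y.sum(), 1)
    fpr = fp / max((~y).sum(), 1)
    return fpr, tpr
```

Point estimates such as those produced by MAP and GML correspond to single (FPR, TPR) markers on the same axes, which is how the comparison in Fig. 2A is drawn.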
A key feature that our method offers is the inference of the effective synaptic (coupling) strength between neurons, which is crucial for understanding the learning process in both neuroscience and cortical neuron-based learning machines. For example, observing changes in synaptic strengths over extended periods allows one to study neuronal network plasticity due to stimulation and facilitates the design of efficient stimulation protocols in cortical neuron-based learning devices. We note that the neuronal spiking mechanism of the in silico emulator follows the Izhikevich model (37), which is distinct from the kinetic Ising model, so there is no direct mapping between the inferred (
Figure 2C shows the mean value of GTE in the inhibitory group to be
Another advantage of our method is that one can adopt the inferred structure
Our GML approach demonstrates strong performance in inferring neuronal types and link existence, as well as in predicting activity. Notably, by comparing the results between MLE and our GML approach, we observe that GML exhibits a gentler slope and deviates more from the perfect prediction manifested by the
The above results show that our GML approach performs very well on data generated by the in silico model with patterned substrates. In the Supplementary material, we study the performance of our GML approach on another in silico homogeneous neuronal network with no patterned substrates. We show that even in a homogeneous network, which is harder for structure inference, as there are fewer constraints for the solution space of
Sensitivity analysis of in silico experiments
To further investigate the performance of our algorithms in different scenarios, we performed a sensitivity analysis on in silico neuronal networks where synaptic strengths follow different distributions, varying the degree-connectivity from sparse to dense. Results for networks with patterned substrates are shown below, while results for homogeneous networks are provided in the Supplementary material.
We examine two different cases where the synaptic strengths in the in silico networks vary. Specifically, we assume that the synaptic strengths
As no existing method can perform neuronal-type classification, we compare our results with those from biased random guessing, as shown in Fig. 3A. The average overall accuracy of neuronal-type classification for both weak and strong synaptic strengths is significantly higher than random guessing, indicating strong performance. For effective connectivity identification, we compare our method with GTE. As discussed, GTE provides an ROC curve, not specific TPR or FPR values. To facilitate comparison, we fix the FPR for GTE at the same level as MAP to obtain the TPR value, and vice versa. Figure 3B and C shows the resulting TPR and FPR values, respectively. For the strong synaptic weights case, our method achieves higher TPR and lower FPR than GTE, outperforming GTE in both metrics. When synaptic weights are weak, our method shows only a slightly higher TPR and slightly lower FPR than GTE. This is due to weaker spiking activity and lower correlations between neurons, making structural inference more challenging. While MAP considers interactions among multiple neurons, which requires more data, GTE focuses on pairwise interactions only, allowing for quicker inference with less data. Nevertheless, our method outperforms GTE in both cases.
Sensitivity analysis of the performance of MAP and GTE on structural inference in in silico neuronal networks over stripe-patterned substrates with different synaptic weight distributions, varying over the average degree-connectivity. A) Average overall performance in neuronal-type classification, compared with a biased random guess. B) TPR of identifying effective links. The TPR of GTE is obtained from the ROC curve by fixing the FPR at the same value as MAP. C) FPR of identifying effective links. The FPR of GTE is obtained from the ROC curve by fixing the TPR at the same level as MAP. Results are averaged over five samples for a network of
To further assess the predictive capability of our method, we generated predicted activities using Monte Carlo simulations based on the structures inferred by MAP and MLE for each sample, and compared them with the true activities by measuring the correlation between predicted and true delayed time covariances
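A common way to quantify the agreement between predicted and true activity, consistent with the delayed time covariance comparison described above, is to correlate the two covariance matrices. The definition below is one standard convention and may differ from the paper's exact normalization:

```python
import numpy as np

def delayed_covariance(traj):
    """Delayed-time covariance D[i, j] = <s_i(t+1) s_j(t)> - <s_i><s_j>
    from a (T, N) array of binarized +1/-1 activity."""
    m = traj.mean(axis=0)
    a, b = traj[1:], traj[:-1]             # s(t+1) paired with s(t)
    return (a.T @ b) / (len(traj) - 1) - np.outer(m, m)

def covariance_agreement(traj_true, traj_pred):
    """Pearson correlation between true and predicted delayed-time
    covariances, used as a scalar measure of predictability."""
    d_true = delayed_covariance(traj_true).ravel()
    d_pred = delayed_covariance(traj_pred).ravel()
    return np.corrcoef(d_true, d_pred)[0, 1]
```

A correlation close to 1 indicates that activity simulated from the inferred structure reproduces the pairwise temporal statistics of the recorded data.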
Sensitivity analysis of the predictability performance on in silico neuronal networks with stripe-patterned substrates using MAP and MLE. The average correlation between predicted and true delayed time covariance
Our sensitivity analysis demonstrates that our algorithms not only work effectively in single instances but also perform well across various connectivity and synaptic strength scenarios.
Experimental validation—in vitro network with patterned substrates
Having validated the efficacy of our GML approach to both synthetic data (see Supplementary material) and emulator data, we now apply it to an in vitro rat cortical neuronal network grown on topographically patterned substrates. Data consisted of neuronal activity recording from calcium fluorescence imaging, processed to consider regions of interest (ROIs) as nodes in the neuronal network, as shown in Fig. 5A. Details of data acquisition and analysis are provided in the Methods section.
A) The studied rat cortical neuronal network over a striped topographical pattern 2 mm in diameter, grouped into 400 regions of interest (ROIs). ROIs within different stripes are shown as squares with different colors. The scale bar of 200
Using this experimental data, with representative traces in Fig. 5B and complete raster plot in Fig. 5C, we infer the effective structure and recreate the neuronal activity for validation. Similar to the in silico case, we group ROIs into modules according to the stripes patterning (Fig. 5A); the inferred coupling strengths are visualized in the connectivity matrix shown in Fig. 5D. The background colors indicate which module the source ROIs belong to. Entries colored in red (blue) indicate effective excitatory (inhibitory) connections from source (i) to target (j) ROIs. We observe dense connectivity between ROIs from the same stripe and sparse connections across stripes. Furthermore, the likelihood of connection decreases as the distance between ROIs increases. This agrees with the biological understanding that long-range connections are rare to minimize wiring cost, so that neurons preferentially connect to their neighbors, as observed in previous studies using similar patterned substrates (32).
While conventional approaches such as GTE often yield sets of scores, they lack a direct measure of estimation quality. Our probabilistic model-based inference, however, allows for the validation of inferred effective structures by reproducing neuronal activity using Monte Carlo simulation. We thus simulate activity using the reconstructed effective network parameters
In general, the GML inference method shows excellent agreement between the inferred effective network and the underlying biological structure, with predicted activities fitting well with the real data.
Altered in vitro homogeneous network structure by modifying neuronal activity
To validate the algorithm further in a scenario of changing synaptic strengths, we implement experiments in which neuronal connectivity is altered through plasticity, modulated via chemical stimulation.
Arguably, the simplest way to understand how neuronal network plasticity occurs due to stimulation is to compare the effective network structures of a neuronal network before and after stimulation. We apply our method to an in vitro primary homogeneous neuronal network, comparing the control condition with exposure to elevated potassium chloride (KCl) stimulation, which aims to depolarize neurons and thereby increase network activity. Neuronal activity data were also collected through calcium imaging, although here each ROI includes one neuron only. The video of the calcium imaging recording is available from (38). The details of data generation are provided in the Methods section.
We infer the effective structure using the neuronal activity data as input, with representative traces in Fig. 6B and complete raster plot in Fig. 6C, and then recreate the activity measures for validation. We first identify the neuronal type (of single neuron ROIs) as excitatory (red) and inhibitory (blue) as shown in Fig. 6A.
A) The in vitro primary cortical homogeneous neuronal network analyzed, grouped into 131 regions of interest (ROIs). The red and blue dots correspond to ROIs classified as excitatory or inhibitory, respectively, where each includes one neuron only. The scale bar of 25
In particular, we simulate activity using the inferred
In this section, we have studied two in vitro experiments with different scopes. In the experiment with patterned substrates, we studied a spatially extended network, whereas in the homogeneous-network experiment, we studied neuronal activity modulated by chemical stimulation. While the samples and the experimental conditions differ, it is interesting that both covariance matrices show much higher values under stimulation, as the KCl stimulation affects activity across the sample. A detailed study comparing the neuronal structure and synaptic weights before and after stimulation is underway and beyond the scope of this work. Our results demonstrate that our methods not only work effectively on synthetic in silico data but also perform well on more realistic neuronal networks.
Discussion
Cortical neuronal network inference has long been an open question in neuroscience and is crucial for understanding the underlying mechanisms and properties of neuronal systems. Neuronal cultures are regarded as a promising living model to investigate a broad spectrum of technological challenges, from biologically inspired AI (14, 20) to efficient design of treatments for neurological disorders (39). Since the structural blueprint of neuronal connections is not easily accessible in a culture, nor their excitatory/inhibitory type, indirect techniques to infer such a blueprint have jumped into the front-line of computational neuroscience.
Here, we introduced a novel probabilistic algorithm based on statistical physics and Bayesian techniques for the effective structural inference of biological neuronal networks from activity data. The algorithm can not only infer the effective synaptic strengths between neurons but, more importantly, can identify the excitatory and inhibitory type of neurons as well as the effective connections between them in a probabilistic way, a capability that no other existing method offers. This is done in a principled way from single spontaneous recordings, without additional interference with the culture such as stimulation, going beyond what most existing state-of-the-art methods can achieve. Through synthetic, in silico, and realistic in vitro experiments, we demonstrate that our algorithm: (i) outperforms existing methods in both synaptic strength inference and effective connection identification; (ii) achieves high accuracy in neuronal-type classification; and (iii) exhibits good reproducibility in the inferred structure, justifying the reliability of the algorithm. In the Supplementary material, we also show that our algorithm provides good inference results in the absence of patterned substrates, in both in silico and in vitro studies.
The dynamics and functioning of neuronal networks are in large part determined by their connectivity and their evolving synaptic strengths. As such, the method is expected to have a direct impact on neuroscience, cortical neuron-based computing devices, and many other related biological and medical areas. For instance, studying the changes in the effective structure of neuronal cultures over time can lead to new theoretical understandings of how plasticity takes place in response to stimuli. Additionally, gaining insight into information processing and propagation in neuronal networks could greatly impact the development of artificial neuronal networks and neuromorphic computing, e.g. by understanding the importance of the ratio between excitatory and inhibitory neurons on functionality.
Revealing precise information about effective structure and neuronal types is essential for developing biological machine learning as it helps one to accurately represent the network dynamics, make predictions, and test hypotheses. Most importantly, it is critical for the design of stimulation learning protocols. Interdisciplinary research in this direction is underway.
Methods
Preprocessing and optimal time bin determination
Since the mathematical model we introduce is based on discrete time steps, one needs to binarize the neuronal activity before carrying out the inference process. We note that the determination of the time bin τ is crucial for inference, as the binarized firing patterns can vary significantly with τ. For instance, the delayed time covariance between neurons can vanish if τ is either too large or too small. To address this, we employ an information theory-based method to identify the optimal time bin size
where
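The binarization step itself, independent of the optimal τ search, amounts to mapping spike times onto ±1 states per time bin. A minimal sketch, with a hypothetical input format of per-neuron spike-time lists:

```python
import numpy as np

def binarize(spike_times, tau, t_total):
    """Map per-neuron spike-time arrays onto the +1/-1 states of the
    kinetic Ising model using time bins of width tau. A neuron is
    active (+1) in a bin if it fires at least once within it."""
    n_bins = int(np.ceil(t_total / tau))
    s = -np.ones((n_bins, len(spike_times)), dtype=int)  # silent by default
    for i, times in enumerate(spike_times):
        bins = (np.asarray(times) / tau).astype(int)     # bin index per spike
        bins = bins[bins < n_bins]
        s[bins, i] = 1
    return s
```

The resulting (bins × neurons) array is the input to the inference procedure; too small a τ leaves most bins silent, while too large a τ merges distinct firing events, which is why the bin size must be optimized.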
Inference algorithm
Unlike the basic kinetic Ising model of statistical physics, inferring the effective structure from neuronal activities faces two major challenges: (i)
To tackle these challenges based on the defined mathematical model, we utilize the generalized maximum-likelihood (GML) (36) technique, also known as evidence approximation and MAP estimation. This combination enables us to jointly infer optimal hyperparameters, latent variables, and the effective network structure in a principled manner. While the detailed derivation is available in the Supplementary material, we provide a general overview of the method here.
The aim of this algorithm is to infer the effective structure parameters
A sketch of the proposed GML-based approximation algorithm. The effective connectivity and the optimal hyperparameters are jointly evaluated by incorporating two applications of the EM algorithm. The posterior distributions of
Parameters obtained by the macro E step are treated as fixed in the macro M step. In the micro E step, we use the current hyperparameters
While in the micro M step, one uses
In particular, the hyperparameters
In general, the inference and optimization framework consists of iteratively executing the macro E and M steps until convergence. Within each macro M-step, the hyperparameters are evaluated by iteratively conducting the micro E step and M step until convergence. Finally, one decides on the structure parameters, neuronal type, and link existence by choosing the highest probability states. Notably, for a more sensible starting point, the initial conditions of the hyperparameters
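The nested macro/micro EM procedure can be sketched schematically as two coupled fixed-point loops. The four update callables below stand in for the paper's actual update equations (derived in the Supplementary material); they, and the scalar toy example in the test, are purely illustrative placeholders:

```python
def nested_em(init_theta, init_hyper, macro_e, micro_e, micro_m,
              tol=1e-6, max_iter=100):
    """Schematic skeleton of the nested (macro/micro) EM loop.

    macro_e(theta, hyper) -> theta : updates structure parameters with
        hyperparameters held fixed (macro E step).
    micro_e(theta, hyper) -> latent : evaluates latent-variable
        posteriors with structure held fixed (micro E step).
    micro_m(theta, latent) -> hyper : re-estimates hyperparameters
        from the posteriors (micro M step).
    """
    theta, hyper = init_theta, init_hyper
    for _ in range(max_iter):
        theta_new = macro_e(theta, hyper)           # macro E step
        for _ in range(max_iter):                   # macro M step: inner EM
            latent = micro_e(theta_new, hyper)      # micro E step
            hyper_new = micro_m(theta_new, latent)  # micro M step
            if abs(hyper_new - hyper) < tol:
                hyper = hyper_new
                break
            hyper = hyper_new
        if abs(theta_new - theta) < tol:            # outer convergence
            return theta_new, hyper
        theta = theta_new
    return theta, hyper
```

In the actual algorithm, `theta` would be the matrix-valued effective structure and `hyper` the set of prior hyperparameters, with convergence checked element-wise; the scalar version here only illustrates the control flow.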
In silico data generation
Emulated neuronal data have been generated with a spiking neuronal network model previously used to model the network growth and activity of biological neuronal cultures (2, 46). The existing network growth model has been adapted to incorporate the effect of inhomogeneous environments on the network connectivity, thus making the emulated data replicate experimental calcium-recorded results on biological neuronal cultures in inhomogeneous environments (32).
Briefly, network growth is modeled by placing N neurons in a nonoverlapping manner on a surface and modeling axon growth from each neuron by concatenating line segments, in which segment i is placed with a random angle
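A minimal sketch of such segment-based axon growth, with candidate connections formed when an axon passes within a neuron's dendritic radius, is given below. The segment length, angular noise, and dendritic radius are hypothetical parameters, not those of the emulator in (2, 46):

```python
import numpy as np

def grow_axon(start, n_segments, seg_len, sigma_angle, rng):
    """Concatenate line segments, each rotated by a Gaussian random
    angle relative to the previous segment's direction."""
    pts = [np.asarray(start, dtype=float)]
    angle = rng.uniform(0, 2 * np.pi)      # random initial outgrowth direction
    for _ in range(n_segments):
        angle += rng.normal(0, sigma_angle)
        step = seg_len * np.array([np.cos(angle), np.sin(angle)])
        pts.append(pts[-1] + step)
    return np.array(pts)                   # (n_segments + 1, 2) polyline

def connections_from_axon(axon, positions, dendrite_radius):
    """A candidate connection forms when any axon point falls within a
    target neuron's dendritic radius (a crude proximity criterion)."""
    d = np.linalg.norm(positions[None, :, :] - axon[:, None, :], axis=2)
    return np.any(d < dendrite_radius, axis=0)
```

In an inhomogeneous environment, the angular noise or segment placement would additionally depend on the local substrate, biasing axons to stay within a stripe.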
Neuronal dynamics is modeled using the Izhikevich model neuron (37), with added synapse dynamics
The quantity
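For reference, the Izhikevich neuron used by the emulator follows a well-known two-variable form. The sketch below uses the published regular-spiking parameters with simple Euler integration and a constant input current, omitting the synapse dynamics of the full emulator:

```python
def izhikevich(T, dt, I, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich model neuron with regular-spiking parameters.

    Integrates v' = 0.04 v^2 + 5 v + 140 - u + I and u' = a (b v - u)
    with forward Euler; on reaching v >= 30 mV a spike is recorded
    and (v, u) reset to (c, u + d). Returns spike times in ms.
    """
    v, u = c, b * c                 # start at the resting-like state
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:               # spike threshold and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes
```

With zero input the neuron settles to rest and never fires, while a sustained suprathreshold current produces tonic spiking, the regime relevant for the emulated spontaneous activity.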
In vitro data generation—experimental methods
PDMS topographical reliefs
Topographical patterns were generated by pouring liquid polydimethylsiloxane (PDMS) on specially designed printed circuit board molds shaped as parallel tracks
Preparation of in vitro neuronal cultures
Rat primary cultures for patterned networks on PDMS substrates—Sprague–Dawley rat primary neurons (Charles River Laboratories, France) from embryonic cortices at days 18–19 of development were used in all experiments. Manipulation and dissection of the embryonic cortices were carried out under ethical order B-RP-094/15–7125 of the Animal Experimentation Ethics Committee (CEEA) of the University of Barcelona and in accordance with the regulations of the Generalitat de Catalunya (Spain). Dissection was carried out identically as described in (32). Briefly, cortices were dissected in ice-cold L-15 medium (Gibco), enriched with 0.6% glucose and 0.5% gentamicin (Sigma-Aldrich). Brain cortices were first isolated from the meninges and then mechanically dissociated by repeated pipetting. The resulting dissociated neural progenitors were plated on a set of precoated Poly-L-Lysine (PLL, 10 mg/mL, Sigma-Aldrich) PDMS topographical substrates in the presence of plating medium, which ensured both the development of neurons and glial cells. A density of a half cortex per
Mouse primary neuronal cultures for network inference—13 mm glass coverslips (VWR) were sterilized in 70% Ethanol for 30 min. Coverslips were then transferred to a biological safety cabinet and dried completely before coating for 2 h with 0.02% Poly-L-Ornithine (Sigma). This was washed once with sterile ddH2O, and then
Intracellular calcium fluorescence imaging
Rat neuronal cultures shaped as
For mouse neuronal cultures, cells were loaded with
Data analysis
Calcium fluorescence recordings were analyzed with the NETCAL software (49, 50) run in MATLAB in combination with custom-made packages. To analyze the data, and as described in (32), Regions of Interest (ROIs) were first laid on the area covered by each culture. For rat primary cultures, ROIs were shaped as a
Immunocytochemistry
This technique was used to identify the position of neuronal cell bodies in culture and extract their fluorescence trace with precision. Neuronal cultures were fixed for 20 min with 4% PFA (Sigma) at room temperature. After washing with PBS, the samples were incubated with a blocking solution containing 0.03% Triton (Sigma) and 5% normal donkey serum (Jackson ImmunoResearch) in PBS for 45 min at room temperature. To visualize the neuronal nuclei, the samples were incubated with primary antibodies against the neuronal marker NeuN (M1406, Sigma), diluted in blocking solution, and incubated overnight at 4°C. Cy3-conjugated secondary antibody against rabbit (711-165-152, Jackson ImmunoResearch) was diluted in blocking solution and incubated for 90 min at room temperature. Then, cultures were rinsed with PBS and mounted using DAPI-Fluoromount-G (SouthernBiotech). Immunocytochemical images were acquired on a Zeiss confocal microscope.
Footnotes
This method is accurate only when coupling strengths are relatively small or when the activation function is smooth.
Supplementary Material
Supplementary material is available at PNAS Nexus online.
Funding
This research is supported by the European Union Horizon 2020 research and innovation program under Grant No. 964877 (project NEU-CHiP). J.S. also acknowledges support from grants PID2022-137713NB-C22 and PLEC2022-009401, funded by MCIU/AEI/10.13039/501100011033 and by ERDF/EU, and by the Generalitat de Catalunya under grant 2021-SGR-00450. The authors acknowledge the support of Aston University Biomedical Facility and Aston Institute for Membrane Excellence (AIME) for the purpose of providing infrastructure support within the College of Health and Life Sciences.
Author Contributions
H.F.P. and D.S. designed research, derived the algorithm, created synthetic data, and analyzed all data. A.C.H., D.R.J., H.R.P., E.J.H., and J.S. performed in vitro experiments; A.M.H. and J.S. performed in silico experiments. H.F.P., A.M.H., A.C.H., D.R.J., H.R.P., E.J.H., J.S., and D.S. wrote the paper.
Preprints
This manuscript is available as a preprint at: https://arxiv.org/abs/2402.18788.
Data Availability
All data presented in this paper are available from https://doi.org/10.17036/researchdata.aston.ac.uk.00000635. This includes the Python file containing the derived algorithm, as well as both the in silico and in vitro neuronal firing data being studied.
References
Author notes
Competing Interest: The authors declare no competing interests.