- Inverse problems
- Professor Roman Borisyuk
- 6. Wave Dynamics in the Transmission of Neural Signals
- SCITEPRESS - SCIENCE AND TECHNOLOGY PUBLICATIONS
- 15th MASCOT, MASCOT2015 - 14th MEETING ON APPLIED SCIENTIFIC COMPUTING AND TOOLS












Ultimately, however, using empirical methods to represent microscale parameters overlooks the effects of microscale changes on macroscale systems (Weinan and Engquist), which inherently results in measurable errors between empirical and mechanistic representations.

This motivates the creation of an intermediate model, one which does not capture the full detail of the Markov kinetic models with all of their states, but instead uses an alternate representation to replicate the complex nonlinearities that are lost in simpler models such as exponential synapses. In some cases, external modifications were made to the traditional exponential synapse form so that various nonlinearities, including desensitization as well as short-term plasticity characteristics such as facilitation and depression, could be replicated (Tsodyks et al.).
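The facilitation/depression modification can be sketched with a Tsodyks–Markram-style update, where each spike releases a fraction u·x of the available synaptic resources. The function name, parameter names, and default values below are illustrative assumptions, not taken from any specific published fit:

```python
import math

def tm_synapse_responses(spike_times, U=0.5, tau_rec=0.8, tau_facil=0.05):
    """Relative efficacy of each spike in a Tsodyks-Markram-style
    synapse: u (utilization) mediates facilitation, x (resources)
    mediates depression. Each returned value would scale the
    amplitude of one exponential postsynaptic current."""
    u, x = 0.0, 1.0
    last_t = None
    amps = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)  # resources recover toward 1
            u = u * math.exp(-dt / tau_facil)              # facilitation decays toward 0
        u = u + U * (1.0 - u)   # spike increments utilization
        amps.append(u * x)      # released fraction scales the PSC
        x = x - u * x           # deplete available resources
        last_t = t
    return amps
```

With a large release fraction and slow recovery the train depresses; with a small release fraction and slow facilitation decay it facilitates, which is the pair of behaviors the text attributes to these modified exponential synapses.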

These types of models require a priori knowledge of the biological system of interest to calibrate their nonlinear features, which could prove limiting in cases where the sources of nonlinear dynamics are not yet fully known.


Synapses are very complex structures with many different cellular pathways and mechanisms, many of which are still being investigated; defining all possible nonlinear sources beforehand in a model would be difficult, if not impossible, while knowledge of these synaptic mechanisms remains incomplete. Currently, the nonlinear synapse models developed so far can only account for the most commonly noted nonlinear behaviors.

Regardless, they provide an efficient, nonlinear alternative to traditional exponential synapses and have helped demonstrate the impact of nonlinear synapses in neuron population models (Tsodyks et al.). Here, we propose to use the Volterra functional power series (Berger et al.). The Input-Output (IO) model uses kernels to represent the functional properties of the modeled system, effectively replicating the dynamics and behavior of the process without requiring a priori knowledge of its internal structure or underlying mechanisms.

Furthermore, the Volterra-based model requires little computational power. The Input-Output model can therefore reduce complex nonlinear differential systems into input-output transformations, which describe the causal relationship between the input and output properties of the system, while maintaining the nonlinear dynamics of the system model and reducing its computational complexity (Marmarelis and Marmarelis). The generality of this methodology means that it can be applied to various phenomena, including those from mechanical systems (Bharathy et al.).

Figure 1. Synapse models can have various representations, which differ in computational efficiency and model detail. (A) The exponential synapse is a commonly used synapse model that produces a postsynaptic response from a simplistic equation. The result is fast but lacks the more complex dynamics typically seen in an actual synapse.

(B) Markov kinetic state models and other additional mechanisms govern the overall postsynaptic response, resulting in an accurate, nonlinear response more characteristic of what would be observed in an actual glutamatergic synapse. (C) As the IO synapse model characterizes the dynamic relationships between the input events and the corresponding output, much of the computationally intensive calculation is waived through the use of this methodology.

(D) Schematic representation of the computation time required and the detailed accuracy of each model. The IO synapse model can provide a much more accurate representation than the exponential synapse, while remaining computationally lighter than the parametric EONS synapse model. The EONS synapse model is a detailed, parametric model of the glutamatergic synapse that includes kinetic receptor channels and diffusion mechanisms (Bouteiller et al.).

The input-output model, referred to from this point forward as the IO synapse model, is proposed to serve as an extension of the EONS synapse model for multi- and large-scale simulations, fitted to the nonlinear dynamics simulated by the EONS synapse model. This article investigates the validity and performance of the IO synapse model by comparing its simulation results with those obtained with the original EONS synapse model.

To proceed with the validation, we first directly compare the results of the IO synapse model with those of EONS in a standalone simulation of the synapse, then simulate both alongside a hippocampal CA1 pyramidal cell model in a hybrid simulation environment (Allam et al.).

Next, the IO synapse model is provided with random interval train inputs with mean firing rates ranging from 2 to 10 Hz to determine the degree of nonlinearity that the model is capable of capturing. Finally, to further evaluate the efficiency of the IO synapse model, the kinetic receptor models were implemented directly in the NEURON simulation environment and the simulation times of the IO and kinetic models were compared. The results clearly indicate that the IO synapse model is capable of replicating the complex functional dynamics of a detailed glutamatergic synapse model while significantly reducing computational complexity, thereby enabling simulations on larger temporal (seconds to minutes) and spatial (large networks of neurons containing highly elaborate functional synapses) scales.

The synapse model introduced here is a non-parametric representation of the parametric EONS synapse model which simulates a glutamatergic synapse on a CA1 pyramidal cell; details of the model can be found in Bouteiller et al. Briefly, the EONS synapse model is a highly detailed model of a generic glutamatergic synapse, and includes a number of receptor models as well as various mechanistic properties of the synapse, including but not limited to, voltage-dependent presynaptic calcium entry, probabilistic vesicular release, neurotransmitter diffusion and reuptake, and postsynaptic potential induction through ionotropic receptors.

Models used in the EONS synapse model are derived from published experimental results and computational models. The AMPAr model was derived from the model presented in Robert and Howe, and comprises 16 states, each state representing a different conformation of the receptor (open vs. closed or desensitized). The AMPAr current is calculated from the receptor's open-state probability, its conductance, and the driving force. The dynamics of AMPAr desensitization, however, are much slower, and the receptor takes considerably longer to recover from desensitization. The NMDAr kinetic model used in the parametric modeling framework is the 8-state receptor model developed by Erreger et al.
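The AMPAr current equation itself did not survive extraction; the standard ohmic form for a kinetic receptor model, assumed here (with ḡ the maximal conductance, O(t) the open-state probability, V_m the membrane potential, and E_AMPA the reversal potential), is:

```latex
I_{\mathrm{AMPA}}(t) = \bar{g}_{\mathrm{AMPA}}\, O(t)\, \bigl(V_m(t) - E_{\mathrm{AMPA}}\bigr)
```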

The parameter g_NMDA, however, is intrinsically different from the conductance of AMPAr because of the nonlinear voltage dependence introduced by the magnesium block. This feature was separately accounted for in the IO synapse model and is explained in more detail in the next section. Because calculations of both the AMPAr and NMDAr kinetic models are time-consuming, constituting a potential bottleneck for larger-scale simulations, we propose to derive their corresponding Input-Output counterparts.


Using such a modular structure allows additional components to be easily implemented and integrated in the future. This model calculates the probability of vesicle release; a random number generator compared with the calculated release probability determines whether a release event occurs. For NMDAr, due to the additional complexity of the channel's magnesium block, the open-state probability is calculated first; the receptor conductance is then calculated with two further equations, as stated in Ambert et al.

K_0 is the equilibrium constant for magnesium, set at 3. Here we use the open state O(t) as the output data during training of the IO receptor model of the NMDA receptor; the estimated conductance is then calculated from the open-state value predicted by the IO receptor model.
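The two conductance equations referenced above were lost in extraction; a common Jahr–Stevens-style formulation consistent with the surrounding text, assumed here (with [Mg²⁺]ₒ the external magnesium concentration, K_0 the equilibrium constant, and γ an assumed voltage-sensitivity constant; the exact form in Ambert et al. may differ), would read:

```latex
B(V) = \frac{1}{1 + \dfrac{[\mathrm{Mg}^{2+}]_o}{K_0}\, e^{-\gamma V}},
\qquad
g_{\mathrm{NMDA}}(t) = \bar{g}_{\mathrm{NMDA}}\, O(t)\, B(V)
```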

Figure 2. Structure of the EONS synapse model. (A) For every presynaptic event, the EONS synapse calculates the probability of vesicle release based on past release events. (B) For the original EONS synapse model, in the event of a vesicle release, glutamate diffusion is calculated and, depending on postsynaptic receptor location, the result is used to derive the open states in the kinetic models of the receptors. (C) The IO synapse model accounts for both the diffusion and kinetic receptor dynamics to calculate the predicted open states of the receptors.

The open states are used to calculate conductances and the resulting currents based on the postsynaptic potential. The Input-Output model for receptors uses the Volterra functional power series together with Laguerre basis functions (Berger et al.). In the general form of the Volterra functional power series, N denotes the number of basis function sets and L represents the number of basis functions in each set. Each set comprises basis functions with a different decay constant, as elaborated below in the description of the Laguerre basis functions.

Nonlinearities are captured by including higher model orders, represented as higher-order kernels.
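The general form referenced above was lost in extraction; written with the Laguerre expansion used in this framework (x the input, b_j the basis functions, v_j the convolved basis states, c_k the kernel expansion coefficients), the standard expansion up to second order, assumed here, is:

```latex
y(t) = c_0 + \sum_{j=1}^{NL} c_1(j)\, v_j(t)
     + \sum_{j_1=1}^{NL} \sum_{j_2=j_1}^{NL} c_2(j_1, j_2)\, v_{j_1}(t)\, v_{j_2}(t) + \cdots,
\qquad
v_j(t) = \sum_{\tau=0}^{M} b_j(\tau)\, x(t-\tau)
```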


For the 0th-order kernel, the only required value is c_0, which corresponds to the baseline signal in the absence of any event. The values c_1 and v_1 represent the 1st-order kernel and account for responses to a single event. Nonlinearities occur when multiple events interact with each other; they represent the differences between the output of the system and the linear solution given by the 1st-order kernels alone. In the 2nd-order terms, basis functions are cross-multiplied with each other: basis functions within a set are multiplied with each other for the self terms (2s), and basis functions from different sets are multiplied together for the cross terms (2x).
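A minimal sketch of how the kernel coefficients combine with the convolved basis states at a single time step (the function and variable names are illustrative, not from the original implementation):

```python
def volterra_output(c0, c1, c2, v):
    """Second-order Volterra-Laguerre output at one time step.
    v[j]  : basis function j convolved with the input up to now
    c1[j] : 1st-order coefficients (single-event response)
    c2    : upper-triangular 2nd-order coefficients (event interactions)"""
    y = c0                      # 0th order: baseline with no events
    L = len(v)
    for j in range(L):
        y += c1[j] * v[j]       # 1st order: linear response
    for j1 in range(L):
        for j2 in range(j1, L):
            # 2nd order: self (j1 == j2) and cross terms
            y += c2[j1][j2] * v[j1] * v[j2]
    return y
```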

Higher orders involve more cross-multiplications between basis functions. The Laguerre equations are used as basis functions for their orthogonality and convergence properties. Additionally, signals reproduced using Laguerre basis functions closely resemble signals encountered in physiology and biology.

More details of the methodology are described in Ghaderi et al. In brief, the Laguerre basis functions are derived from the Laguerre polynomials and, with proper normalization, form an orthonormal basis. A visualization of the Laguerre basis functions is shown in Figure 3. The basis functions are then scaled by coefficient values fitted to provide the appropriate response when all functions are convolved with the input signal and summed.
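The definitions referenced above did not survive extraction; the textbook continuous-time forms, assumed here (the discrete-time variant used in practice follows the same construction), are:

```latex
L_n(x) = \frac{e^{x}}{n!} \frac{d^{n}}{dx^{n}}\!\left(x^{n} e^{-x}\right),
\qquad
\phi_n(x) = e^{-x/2}\, L_n(x),
\qquad
\int_0^{\infty} \phi_m(x)\, \phi_n(x)\, dx = \delta_{mn}
```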

To capture nonlinear responses, the basis functions are cross-multiplied with each other as described previously. Together, these functions correspond to one set of basis functions with one given decay value, p.


Because of the complexity of receptor responses, two sets of basis functions with different decay values p are used. The first set covers the general response of the system within a short time frame to capture the overall waveform.

The other covers a much longer time frame and accounts for slower mechanisms, such as desensitization. We found that using two basis function sets yields a better approximation of the dynamics seen in the original kinetic models. The p-values were determined via gradient descent to find the optimal decay values with the lowest absolute error when fitting the data.
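The decay-value fit described above can be sketched as a finite-difference gradient descent on a scalar loss; the toy quadratic loss and the function name below are illustrative stand-ins for the absolute fitting error mentioned in the text:

```python
def fit_decay(loss, p0, lr=0.05, steps=200, eps=1e-4):
    """Finite-difference gradient descent on a scalar decay parameter p.
    loss : callable mapping a candidate decay value to a fitting error
    p0   : initial guess for the decay value"""
    p = p0
    for _ in range(steps):
        # central-difference estimate of d(loss)/dp
        grad = (loss(p + eps) - loss(p - eps)) / (2.0 * eps)
        p -= lr * grad
    return p
```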