Dr. Christoph Kolodziejski


    Faghihi, F. and Kolodziejski, C. and Fiala, A. and Wörgötter, F. and Tetzlaff, C. (2013).
    An Information Theoretic Model of Information Processing in the Drosophila Olfactory System: The Role of Inhibitory Neurons for System Efficiency. Frontiers in Computational Neuroscience, 7: 183. DOI: 10.3389/fncom.2013.00183.
    BibTeX:
    @article{faghihikolodziejskifiala2013,
      author = {Faghihi, F. and Kolodziejski, C. and Fiala, A. and Wörgötter, F. and Tetzlaff, C.},
      title = {An Information Theoretic Model of Information Processing in the Drosophila Olfactory System: The Role of Inhibitory Neurons for System Efficiency},
      journal = {Frontiers in Computational Neuroscience},
      year = {2013},
      volume = {7},
      pages = {183},
      url = {http://journal.frontiersin.org/Journal/10.3389/fncom.2013.00183/full},
      doi = {10.3389/fncom.2013.00183}}
    Abstract: Fruit flies (Drosophila melanogaster) rely on their olfactory system to process environmental information. This information has to be transmitted without system-relevant loss by the olfactory system to deeper brain areas for learning. Here we study the role of several parameters of the fly's olfactory system and the environment and how they influence olfactory information transmission. We have designed an abstract model of the antennal lobe, the mushroom body and the inhibitory circuitry. Mutual information between the olfactory environment, simulated in terms of different odor concentrations, and a sub-population of intrinsic mushroom body neurons (Kenyon cells) was calculated to quantify the efficiency of information transmission. With this method we study, on the one hand, the effect of different connectivity rates between olfactory projection neurons and firing thresholds of Kenyon cells. On the other hand, we analyze the influence of inhibition on mutual information between environment and mushroom body. Our simulations show an expected linear relation between the connectivity rate between the antennal lobe and the mushroom body and the firing threshold of the Kenyon cells required to obtain maximum mutual information for both low and high odor concentrations. However, contradicting everyday experience, high odor concentrations cause a drastic, and unrealistic, decrease in mutual information for all connectivity rates compared to low concentrations. But when inhibition on the mushroom body is included, mutual information remains at high levels independent of other system parameters. This finding points to a pivotal role of inhibition in fly information processing, without which the system's efficiency would be substantially reduced.
    Review:
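
    The mutual-information computation summarized above can be illustrated with a toy model. The sketch below is not the paper's model: the network sizes, the connectivity rate P_CONN, the Kenyon-cell threshold THETA, the uniform-noise projection-neuron activity, the two odor concentrations, and the read-out sub-population size are all invented for illustration, and the naive plug-in estimator it uses is biased for small sample counts.

      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(0)

      N_PN, N_KC, N_SUB = 50, 500, 10   # projection neurons, Kenyon cells, read-out subset
      P_CONN, THETA = 0.1, 2.5          # PN->KC connectivity rate and firing threshold
      C = rng.random((N_KC, N_PN)) < P_CONN  # random binary PN->KC connectivity

      def kc_pattern(concentration):
          """Binary response of a small Kenyon-cell sub-population to one odor presentation."""
          pn = concentration * rng.random(N_PN)   # noisy PN rates scale with odor level
          kc = (C @ pn > THETA).astype(int)       # threshold units
          return tuple(kc[:N_SUB])

      def entropy(counter):
          p = np.array(list(counter.values()), dtype=float)
          p /= p.sum()
          return -(p * np.log2(p)).sum()

      # I(C;R) = H(C) + H(R) - H(C,R), estimated from repeated odor presentations
      samples = [(c, kc_pattern(c)) for c in (0.2, 1.0) for _ in range(2000)]
      mi = (entropy(Counter(c for c, _ in samples))
            + entropy(Counter(r for _, r in samples))
            - entropy(Counter(samples)))
      print(f"plug-in MI estimate: {mi:.3f} bits")

    Raising THETA or lowering P_CONN in this toy shifts which concentration range is transmitted well; this is the kind of parameter dependence the paper quantifies.
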
    Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Tsodyks, M. and Wörgötter, F. (2013).
    Synaptic scaling enables dynamically distinct short- and long-term memory formation. PLoS Computational Biology, 9(10): e1003307. DOI: 10.1371/journal.pcbi.1003307.
    BibTeX:
    @article{tetzlaffkolodziejskitimme2013,
      author = {Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Tsodyks, M. and Wörgötter, F.},
      title = {Synaptic scaling enables dynamically distinct short- and long-term memory formation},
      journal = {PLoS Computational Biology},
      year = {2013},
      volume = {9},
      number = {10},
      pages = {e1003307},
      doi = {10.1371/journal.pcbi.1003307}}
    Abstract: Memory storage in the brain relies on mechanisms acting on time scales from minutes, for long-term synaptic potentiation, to days, for memory consolidation. During such processes, neural circuits distinguish synapses relevant for forming a long-term storage, which are consolidated, from synapses of short-term storage, which fade. How time scale integration and synaptic differentiation are simultaneously achieved remains unclear. Here we show that synaptic scaling - a slow process usually associated with the maintenance of activity homeostasis - combined with synaptic plasticity may simultaneously achieve both, thereby providing a natural separation of short- from long-term storage. The interaction between plasticity and scaling also provides an explanation for an established paradox where memory consolidation critically depends on the exact order of learning and recall. These results indicate that scaling may be fundamental for stabilizing memories, providing a dynamic link between early and late memory formation processes.
    Review:
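
    A minimal sketch of the interaction analyzed in this line of work, using a single linear rate neuron and a combined rule of the general form dw/dt = mu*u*v + gamma*(v_T - v)*w^2 (Hebbian growth plus weight-dependent synaptic scaling, as in the Frontiers papers below). All constants, the input protocol, and the Euler step are illustrative, not the papers' values.

      import numpy as np

      # One synapse under Hebbian growth plus synaptic scaling:
      #   dw/dt = mu*u*v + gamma*(v_target - v)*w**2,  with v = u*w (linear neuron)
      mu, gamma, v_target = 0.1, 0.005, 1.0
      dt, steps = 0.1, 5000
      w, trace = 0.5, []

      for step in range(steps):
          u = 1.0 if step < 1000 else 0.1   # strong input early, weak input later
          v = u * w
          dw = mu * u * v + gamma * (v_target - v) * w**2
          w += dt * dw
          trace.append(w)

      print(f"w after stimulation: {trace[999]:.2f}")  # Hebbian growth, capped by scaling
      print(f"w at end: {trace[-1]:.2f}, output v = {0.1 * trace[-1]:.2f}")  # v back near target

    In both phases the scaling term pulls the output toward v_target: it caps Hebbian growth during stimulation and slowly rescales the weight once the input weakens, the homeostatic behavior whose interplay with plasticity the paper exploits.
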
    Manoonpong, P. and Kolodziejski, C. and Wörgötter, F. and Morimoto, J. (2013).
    Combining Correlation-Based and Reward-Based Learning in Neural Control for Policy Improvement. Advances in Complex Systems, 16(2-3): 1350015. DOI: 10.1142/S021952591350015X.
    BibTeX:
    @article{manoonpongkolodziejskiwoergoetter2013,
      author = {Manoonpong, P. and Kolodziejski, C. and Wörgötter, F. and Morimoto, J.},
      title = {Combining Correlation-Based and Reward-Based Learning in Neural Control for Policy Improvement},
      journal = {Advances in Complex Systems},
      year = {2013},
      volume = {16},
      number = {2-3},
      pages = {1350015},
      url = {http://www.worldscientific.com/doi/abs/10.1142/S021952591350015X},
      doi = {10.1142/S021952591350015X}}
    Abstract: Classical conditioning (conventionally modeled as correlation-based learning) and operant conditioning (conventionally modeled as reinforcement learning or reward-based learning) have been found in biological systems. Evidence shows that these two mechanisms strongly involve learning about associations. Based on these biological findings, we propose a new learning model to achieve successful control policies for artificial systems. This model combines correlation-based learning using input correlation learning (ICO learning) and reward-based learning using continuous actor-critic reinforcement learning (RL), thereby working as a dual learner system. The model performance is evaluated by simulations of a cart-pole system as a dynamic motion control problem and a mobile robot system as a goal-directed behavior control problem. Results show that the model can strongly improve the pole balancing control policy, i.e., it allows the controller to learn to stabilize the pole over the largest domain of initial conditions compared to the results obtained when using a single learning mechanism. The model can also find a successful control policy for goal-directed behavior, i.e., the robot can effectively learn to approach a given goal, compared to its individual components. Thus, the study pursued here sharpens our understanding of how two different learning mechanisms can be combined and complement each other for solving complex tasks.
    Review:
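
    The correlation-based half of the combined controller, ICO learning, is easy to sketch in isolation. Below, a predictive cue precedes a reflex signal, both are low-pass filtered, and the predictive weight follows dw/dt = mu * u_p * d(u_r)/dt. The time constants, learning rate, and pulse timing are invented for illustration, and the reward-based actor-critic pathway of the paper is omitted.

      import numpy as np

      # ICO learning on filtered traces: the predictive weight grows because the
      # cue trace u_p is still positive when the reflex trace u_r rises.
      mu, dt, tau = 5.0, 0.01, 0.1
      w_p = 0.0

      for trial in range(5):
          u_p = u_r = u_r_prev = 0.0
          for step in range(300):
              t = step * dt
              x_p = 1.0 if 0.5 <= t < 0.55 else 0.0   # predictive cue
              x_r = 1.0 if 0.7 <= t < 0.75 else 0.0   # reflex event 200 ms after cue onset
              u_p += dt / tau * (x_p - u_p)           # low-pass filtered traces
              u_r += dt / tau * (x_r - u_r)
              w_p += mu * u_p * (u_r - u_r_prev)      # dw = mu * u_p * du_r
              u_r_prev = u_r
          print(f"trial {trial + 1}: w_p = {w_p:.3f}")

    In the paper this correlation-based update runs in parallel with a continuous actor-critic learner, and the two jointly shape the control policy.
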
    Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Wörgötter, F. (2012).
    Analysis of synaptic scaling in combination with Hebbian plasticity in several simple networks. Frontiers in Computational Neuroscience, 6: 36. DOI: 10.3389/fncom.2012.00036.
    BibTeX:
    @article{tetzlaffkolodziejskitimme2012,
      author = {Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Wörgötter, F.},
      title = {Analysis of synaptic scaling in combination with Hebbian plasticity in several simple networks},
      journal = {Frontiers in Computational Neuroscience},
      year = {2012},
      volume = {6},
      pages = {36},
      doi = {10.3389/fncom.2012.00036}}
    Abstract: Conventional synaptic plasticity in combination with synaptic scaling is a biologically plausible plasticity rule that guides the development of synapses toward stability. Here we analyze the development of synaptic connections and the resulting activity patterns in different feed-forward and recurrent neural networks, with plasticity and scaling. We show under which constraints an external input given to a feed-forward network forms an input trace similar to a cell assembly (Hebb, 1949) by enhancing synaptic weights to larger stable values as compared to the rest of the network. For instance, a weak input creates a less strong representation in the network than a strong input, which produces a trace along large parts of the network. These processes are strongly influenced by the underlying connectivity. For example, when embedding recurrent structures (excitatory rings, etc.) into a feed-forward network, the input trace is extended into more distant layers, while inhibition shortens it. These findings provide a better understanding of the dynamics of generic network structures where plasticity is combined with scaling. This also makes it possible to use this rule for constructing an artificial network with certain desired storage properties.
    Review:
    Tetzlaff, C. and Kolodziejski, C. and Markelic, I. and Wörgötter, F. (2012).
    Time scales of memory, learning, and plasticity. Biological Cybernetics, 106(11): 715-726. DOI: 10.1007/s00422-012-0529.
    BibTeX:
    @article{tetzlaffkolodziejskimarkelic2012,
      author = {Tetzlaff, C. and Kolodziejski, C. and Markelic, I. and Wörgötter, F.},
      title = {Time scales of memory, learning, and plasticity},
      journal = {Biological Cybernetics},
      year = {2012},
      volume = {106},
      number = {11},
      pages = {715-726},
      url = {http://dx.doi.org/10.1007/s00422-012-0529},
      doi = {10.1007/s00422-012-0529}}
    Abstract: If we stored every bit of input, the storage capacity of our nervous system would be reached after only about 10 days. The nervous system relies on at least two mechanisms that counteract this capacity limit: compression and forgetting. But the latter mechanism needs to know how long an entity should be stored: some memories are relevant only for the next few minutes, some are important even after the passage of several years. Psychology and physiology have found and described many different memory mechanisms, and these mechanisms indeed use different time scales. In this prospect we review these mechanisms with respect to their time scales and propose relations between mechanisms of learning and memory and their underlying physiological basis.
    Review:
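
    The opening estimate of the abstract can be reproduced as a back-of-the-envelope calculation. The two inputs below are loudly assumed orders of magnitude (raw sensory input rate, total storage capacity), not numbers taken from the paper; the point is only the form of the estimate.

      # capacity / input rate = time until storage is full (all numbers assumed)
      input_rate_bits_per_s = 1e9           # ~10^9 bit/s of raw sensory input
      capacity_bits = 1e15                  # ~10^15 bits of storage
      seconds = capacity_bits / input_rate_bits_per_s
      print(f"{seconds / 86400:.1f} days")  # ~11.6 days, i.e. "about 10 days"
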
    Ren, G. and Chen, W. and Kolodziejski, C. and Wörgötter, F. and Dasgupta, S. and Manoonpong, P. (2012).
    Multiple Chaotic Central Pattern Generators for Locomotion Generation and Leg Damage Compensation in a Hexapod Robot. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). DOI: 10.1109/IROS.2012.6385573.
    BibTeX:
    @inproceedings{renchenkolodziejski2012,
      author = {Ren, G. and Chen, W. and Kolodziejski, C. and Wörgötter, F. and Dasgupta, S. and Manoonpong, P.},
      title = {Multiple Chaotic Central Pattern Generators for Locomotion Generation and Leg Damage Compensation in a Hexapod Robot},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2012},
      doi = {10.1109/IROS.2012.6385573}}
    Abstract: In chaos control, an originally chaotic system is modified so that periodic dynamics arise. One application of this is to use the periodic dynamics of a single chaotic system as walking patterns in legged robots. In our previous work we applied such a controlled chaotic system as a central pattern generator (CPG) to generate different gait patterns of our hexapod robot AMOSII. However, if one or more legs break, its control fails. Specifically, in the scenario presented here, its movement permanently deviates from a desired trajectory. This is in contrast to the movement of real insects, which can compensate for body damage, for instance, by adjusting the frequency of the remaining legs. To achieve this for our hexapod robot, we extend the system from one chaotic system serving as a single CPG to multiple chaotic systems, performing as multiple CPGs. Without damage, the chaotic systems synchronize and their dynamics are identical (similar to a single CPG). With damage, they can lose synchronization, leading to independent dynamics. In both simulations and real experiments, we can tune the oscillation frequency of every CPG manually so that the controller can indeed compensate for leg damage. In comparison to the trajectory of the robot controlled by only a single CPG, the trajectory produced by the multiple chaotic CPG controllers resembles the original trajectory far better. Thus, multiple chaotic systems that synchronize for normal behavior but can stay desynchronized in other circumstances are an effective way to control complex behaviors where, for instance, different body parts have to perform independent movements, as after leg damage.
    Review:
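
    The core idea of the paper, several CPGs that stay synchronized while the robot is intact and run at individually retuned frequencies after leg damage, can be sketched with plain phase oscillators standing in for the controlled chaotic circuits. Every number below (frequencies, coupling strength, integration step) is illustrative.

      import numpy as np

      # Three leg oscillators: coupled, they phase-lock to a common frequency;
      # decoupled (e.g., after damage), each keeps its own, manually tuned frequency.
      dt, T = 0.001, 20.0
      steps = int(T / dt)

      def mean_freqs(omega, k, theta0=np.array([0.0, 0.5, 1.0])):
          th = theta0.copy()
          for _ in range(steps):
              mean_phase = np.angle(np.exp(1j * th).mean())
              th = th + dt * (omega + k * np.sin(mean_phase - th))
          return (th - theta0) / (2 * np.pi * T)   # average frequency in Hz

      omega = 2 * np.pi * np.array([1.00, 1.05, 0.95])              # slightly detuned legs
      print("coupled   :", np.round(mean_freqs(omega, k=5.0), 3))   # lock to a common ~1 Hz
      print("decoupled :", np.round(mean_freqs(omega, k=0.0), 3))   # each keeps its own rate
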
    Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Wörgötter, F. (2011).
    Synaptic Scaling in Combination with Many Generic Plasticity Mechanisms Stabilizes Circuit Connectivity. Frontiers in Computational Neuroscience, 5: 47. DOI: 10.3389/fncom.2011.00047.
    BibTeX:
    @article{tetzlaffkolodziejskitimme2011,
      author = {Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Wörgötter, F.},
      title = {Synaptic Scaling in Combination with Many Generic Plasticity Mechanisms Stabilizes Circuit Connectivity},
      journal = {Frontiers in Computational Neuroscience},
      year = {2011},
      volume = {5},
      pages = {47},
      doi = {10.3389/fncom.2011.00047}}
    Abstract: Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long-term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks.
    Review:
    Porr, B. and McCabe, L. and Kolodziejski, C. and Wörgötter, F. (2011).
    How feedback inhibition shapes spike-timing-dependent plasticity and its implications for recent Schizophrenia models. Neural Networks, 24(6): 560-567. DOI: 10.1016/j.neunet.2011.03.004.
    BibTeX:
    @article{porrmccabekolodziejski2011,
      author = {Porr, B. and McCabe, L. and Kolodziejski, C. and Wörgötter, F.},
      title = {How feedback inhibition shapes spike-timing-dependent plasticity and its implications for recent Schizophrenia models},
      journal = {Neural Networks},
      year = {2011},
      volume = {24},
      number = {6},
      pages = {560-567},
      url = {http://www.sciencedirect.com/science/article/pii/S0893608011000888},
      doi = {10.1016/j.neunet.2011.03.004}}
    Abstract: It has been shown that plasticity is not a fixed property but, in fact, changes depending on the location of the synapse on the neuron and/or changes of biophysical parameters. Here we investigate how plasticity is shaped by feedback inhibition in a cortical microcircuit. We use a differential Hebbian learning rule to model spike-timing-dependent plasticity and show analytically that feedback inhibition shortens the time window for LTD during spike-timing-dependent plasticity but not for LTP. We then use a realistic GENESIS model to test two hypotheses about interneuron hypofunction and conclude that a reduction in GAD67 is the most likely candidate for the cause of the hypofrontality observed in schizophrenia.
    Review:
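
    The differential Hebbian rule underlying this analysis produces an STDP-like window that can be computed numerically: the weight change is the integral of a presynaptic trace times the derivative of the postsynaptic response, dw ~ u(t)*dv/dt. The alpha-function kernels and time constants below are illustrative stand-ins, not the paper's GENESIS model; in the paper, adding feedback inhibition to the postsynaptic response is what shortens the LTD side of this window.

      import numpy as np

      dt = 0.1                                  # ms
      t = np.arange(0.0, 200.0, dt)

      def alpha(t0, tau=10.0):
          """Alpha-function kernel starting at t0 (illustrative trace shape)."""
          s = np.clip(t - t0, 0.0, None)
          return (s / tau) * np.exp(1.0 - s / tau)

      def dw(delta_t, mu=1.0):
          u = alpha(80.0)                       # presynaptic trace
          v = alpha(80.0 + delta_t)             # postsynaptic response, shifted by delta_t
          return mu * np.sum(u * np.gradient(v, dt)) * dt

      for d in (-20, -10, 10, 20):              # post-before-pre vs pre-before-post
          print(f"delta_t = {d:+3d} ms -> dw = {dw(d):+.3f}")

    Negative delta_t (post before pre) gives depression and positive delta_t gives potentiation, reproducing the asymmetric window shape whose LTD branch the paper analyzes.
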
    Kulvicius, T. and Kolodziejski, C. and Tamosiunaite, M. and Porr, B. and Wörgötter, F. (2010).
    Behavioral analysis of differential Hebbian learning in closed-loop systems. Biological Cybernetics, 103(4): 255-271. DOI: 10.1007/s00422-010-0396-4.
    BibTeX:
    @article{kulviciuskolodziejskitamosiunaite2010,
      author = {Kulvicius, T. and Kolodziejski, C. and Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
      title = {Behavioral analysis of differential Hebbian learning in closed-loop systems},
      journal = {Biological Cybernetics},
      year = {2010},
      volume = {103},
      number = {4},
      pages = {255-271},
      publisher = {Springer-Verlag},
      doi = {10.1007/s00422-010-0396-4}}
    Abstract: Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts which take learning into account, mostly measuring information of inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems for relatively simple cases. In the second part of this study we try to answer the following question: How can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy and entropy measures and by investigating their development during learning. This way we can show that within well-specified scenarios there are indeed agents which are optimal with respect to their structure and adaptive properties.
    Review:
    Kolodziejski, C. and Tetzlaff, C. and Wörgötter, F. (2010).
    Closed-form treatment of the interactions between neuronal activity and timing-dependent plasticity in networks of linear neurons. Frontiers in Computational Neuroscience, 4: 1-15. DOI: 10.3389/fncom.2010.00134.
    BibTeX:
    @article{kolodziejskitetzlaffwoergoetter2010,
      author = {Kolodziejski, C. and Tetzlaff, C. and Wörgötter, F.},
      title = {Closed-form treatment of the interactions between neuronal activity and timing-dependent plasticity in networks of linear neurons},
      journal = {Frontiers in Computational Neuroscience},
      year = {2010},
      volume = {4},
      pages = {1-15},
      doi = {10.3389/fncom.2010.00134}}
    Abstract: Network activity and network connectivity mutually influence each other. Especially for fast processes, like spike-timing-dependent plasticity (STDP), which depends on the interaction of a few (two) signals, the question arises how these interactions continuously alter the behavior and structure of the network. To address this question, a time-continuous treatment of plasticity is required. However, this is currently not possible even in simple recurrent network structures. Thus, here we develop, for a linear differential Hebbian learning system, a method by which we can analytically investigate the dynamics and stability of the connections in recurrent networks. We use noisy periodic external input signals, which through the recurrent connections lead to complex actual ongoing inputs, and observe that large stable ranges emerge in these networks without boundaries or weight-normalization. Somewhat counter-intuitively, we find that about 40% of these cases are obtained with an LTP-dominated STDP curve. Noise can reduce stability in some cases, but generally this does not occur. Instead, stable domains are often enlarged. This study is a first step toward a better understanding of the ongoing interactions between activity and plasticity in recurrent networks using STDP. The results suggest that stability of sub-networks should generically be present also in larger structures.
    Review:
    Manoonpong, P. and Pasemann, F. and Kolodziejski, C. and Wörgötter, F. (2010).
    Designing Simple Nonlinear Filters Using Hysteresis of Single Recurrent Neurons for Acoustic Signal Recognition in Robots. Artificial Neural Networks (ICANN 2010), Part I, LNCS 6352: 374-383. DOI: 10.1007/978-3-642-15819-3_50.
    BibTeX:
    @inproceedings{manoonpongpasemannkolodziejski2010,
      author = {Manoonpong, P. and Pasemann, F. and Kolodziejski, C. and Wörgötter, F.},
      title = {Designing Simple Nonlinear Filters Using Hysteresis of Single Recurrent Neurons for Acoustic Signal Recognition in Robots},
      booktitle = {Artificial Neural Networks (ICANN 2010), Part I},
      series = {Lecture Notes in Computer Science},
      year = {2010},
      volume = {6352},
      pages = {374-383},
      doi = {10.1007/978-3-642-15819-3_50}}
    Abstract: In this article we exploit the discrete-time dynamics of a single neuron with self-connection to systematically design simple signal filters. Due to hysteresis effects and transient dynamics, this single neuron behaves as an adjustable low-pass filter for specific parameter configurations. Extending this neuro-module by two more recurrent neurons leads to versatile high- and band-pass filters. The approach presented here helps to understand how the dynamical properties of recurrent neural networks can be used for filter design. Furthermore, it gives guidance to a new way of implementing sensory preprocessing for acoustic signal recognition in autonomous robots.
    Review:
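
    The building block of these filters is a single discrete-time neuron with an excitatory self-connection, o(t+1) = tanh(w_self*o(t) + w_in*i(t)); for w_self > 1 it is bistable, so brief input pulses are rejected while sustained input flips and latches the state. The two weights and the pulse protocol below are illustrative choices, not the parameter domains derived in the paper.

      import numpy as np

      w_self, w_in = 1.5, 0.8   # self-connection > 1 gives hysteresis (illustrative)
      o = -1.0
      inputs = [0.0] * 20 + [1.0] + [0.0] * 20 + [1.0] * 15 + [0.0] * 20
      checkpoints = {19: "settled low", 40: "after 1-step pulse (rejected)",
                     55: "during sustained input", 75: "after input off (latched high)"}

      for step, i_t in enumerate(inputs):
          o = np.tanh(w_self * o + w_in * i_t)  # single recurrent neuron update
          if step in checkpoints:
              print(f"step {step:2d} ({checkpoints[step]}): o = {o:+.2f}")
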
    Kolodziejski, C. and Porr, B. and Wörgötter, F. (2009).
    On the Asymptotic Equivalence Between Differential Hebbian and Temporal Difference Learning. Neural Computation, 21(4): 1173-1202.
    BibTeX:
    @article{kolodziejskiporrwoergoetter2009,
      author = {Kolodziejski, C. and Porr, B. and Wörgötter, F.},
      title = {On the Asymptotic Equivalence Between Differential Hebbian and Temporal Difference Learning},
      journal = {Neural Computation},
      year = {2009},
      volume = {21},
      number = {4},
      pages = {1173-1202}}
    Abstract: In this theoretical contribution, we provide mathematical proof that two of the most important classes of network learning - correlation-based differential Hebbian learning and reward-based temporal difference learning - are asymptotically equivalent when timing the learning with a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective more closely related to the biophysics of neurons.
    Review:
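
    Schematically (simplified here; the paper's full treatment uses a serial-compound stimulus representation and makes the timing of the modulatory signal precise), the rule under study is:

      \dot{\omega}_i(t) = \mu \, u_i(t) \, \dot{v}(t) \, M(t),
      \qquad v(t) = \sum_j \omega_j(t) \, u_j(t),

    where u_i are the input traces, v is the neuron's output, and M is the modulatory third factor. With M active only around transitions, the integrated weight change is proportional to the change of v across the transition plus the reward contribution entering through a dedicated input, i.e., to a temporal-difference error; this is the sense in which the paper proves asymptotic equivalence to TD learning.
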
    Kolodziejski, C. and Porr, B. and Tamosiunaite, M. and Wörgötter, F. (2009).
    On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor. Advances in Neural Information Processing Systems, 21: 857-864.
    BibTeX:
    @inproceedings{kolodziejskiporrtamosiunaite2009,
      author = {Kolodziejski, C. and Porr, B. and Tamosiunaite, M. and Wörgötter, F.},
      title = {On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor},
      booktitle = {Advances in Neural Information Processing Systems},
      year = {2009},
      volume = {21},
      pages = {857-864}}
    Abstract: In this theoretical contribution we provide mathematical proof that two of the most important classes of network learning - correlation-based differential Hebbian learning and reward-based temporal difference learning - are asymptotically equivalent when timing the learning with a local modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective that is more closely related to the biophysics of neurons.
    Review:
    Thompson, A. M. and Porr, B. and Kolodziejski, C. and Wörgötter, F. (2008).
    Second Order Conditioning in the Sub-cortical Nuclei of the Limbic System. From Animals to Animats 10, LNCS 5040: 189-198. DOI: 10.1007/978-3-540-69134-1_19.
    BibTeX:
    @inproceedings{thompsonporrkolodziejski2008,
      author = {Thompson, A. M. and Porr, B. and Kolodziejski, C. and Wörgötter, F.},
      title = {Second Order Conditioning in the Sub-cortical Nuclei of the Limbic System},
      booktitle = {From Animals to Animats 10},
      year = {2008},
      volume = {5040},
      pages = {189-198},
      editor = {Asada, Minoru and Hallam, John C. T. and Meyer, Jean-Arcady and Tani, Jun},
      publisher = {Springer Berlin Heidelberg},
      series = {Lecture Notes in Computer Science},
      url = {http://dx.doi.org/10.1007/978-3-540-69134-1_19},
      doi = {10.1007/978-3-540-69134-1_19}}
    Abstract: Three-factor isotropic sequence order (ISO3) learning is a form of differential Hebbian learning where a third factor switches on learning at relevant moments, for example, after reward retrieval. This switch enables learning only at specific moments and, thus, stabilises the corresponding weights. The concept of using a third factor as a gating signal for learning at relevant moments has been extended in this paper to perform second order conditioning (SOC). We present a biological model of the sub-cortical nuclei of the limbic system that is capable of performing SOC in a food seeking task. The third factor is modelled by dopaminergic neurons of the VTA which are activated via a direct excitatory glutamatergic pathway, and an indirect dis-inhibitory GABAergic pathway. The latter generates an amplification in the number of tonically active DA neurons. This produces an increase in DA outside the event of a primary reward and enables SOC to be accomplished.
    Review:
    Kolodziejski, C. and Porr, B. and Wörgötter, F. (2008).
    Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison. Biological Cybernetics, 98(3): 259-272.
    BibTeX:
    @article{kolodziejskiporrwoergoetter2008,
      author = {Kolodziejski, C. and Porr, B. and Wörgötter, F.},
      title = {Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison},
      journal = {Biological Cybernetics},
      year = {2008},
      volume = {98},
      number = {3},
      pages = {259-272}}
    Abstract: A confusingly wide variety of temporally asymmetric learning rules exists related to reinforcement learning and/or to spike-timing-dependent plasticity, many of which look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability is required. The goal of this article is to review these rules and compare them to provide a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based differential Hebbian rules, plus some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine learning (non-neuronal) context, a solid mathematical theory for TD-learning has existed for several years. This can partly be transferred to a neuronal framework, too. On the other hand, only now has a more complete theory also emerged for differential Hebb rules. In general, rules differ by their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition assuring that the ...
    Review:
    Kolodziejski, C. and Porr, B. and Wörgötter, F. (2007).
    Anticipative adaptive muscle control: forward modeling with self-induced disturbances and recruitment. BMC Neuroscience, 8(Suppl 2): P202. DOI: 10.1186/1471-2202-8-S2-P202.
    BibTeX:
    @article{kolodziejskiporrwoergoetter2007,
      author = {Kolodziejski, C. and Porr, B. and Wörgötter, F.},
      title = {Anticipative adaptive muscle control: forward modeling with self-induced disturbances and recruitment},
      journal = {BMC Neuroscience},
      year = {2007},
      volume = {8},
      number = {Suppl 2},
      pages = {P202},
      publisher = {BioMed Central},
      url = {http://www.biomedcentral.com/1471-2202/8/S2/P202},
      doi = {10.1186/1471-2202-8-S2-P202}}
    Abstract:
    Review:
    Ren, G. and Chen, W. and Dasgupta, S. and Kolodziejski, C. and Wörgötter, F. and Manoonpong, P. (2014).
    Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation. Information Sciences, 294: 666-682. DOI: 10.1016/j.ins.2014.05.001.
    BibTeX:
    @article{renchendasgupta2014,
      author = {Ren, G. and Chen, W. and Dasgupta, S. and Kolodziejski, C. and Wörgötter, F. and Manoonpong, P.},
      title = {Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation},
      journal = {Information Sciences},
      year = {2014},
      volume = {294},
      pages = {666-682},
      publisher = {Elsevier},
      url = {http://www.sciencedirect.com/science/article/pii/S0020025514005192},
      doi = {10.1016/j.ins.2014.05.001}}
    Abstract: An originally chaotic system can be controlled into various periodic dynamics. When it is implemented into a legged robot's locomotion control as a central pattern generator (CPG), sophisticated gait patterns arise so that the robot can perform various walking behaviors. However, such a single chaotic CPG controller has difficulties dealing with leg malfunction. Specifically, in the scenarios presented here, its movement permanently deviates from the desired trajectory. To address this problem, we extend the single chaotic CPG to multiple CPGs with learning. The learning mechanism is based on a simulated annealing algorithm. In a normal situation, the CPGs synchronize and their dynamics are identical. With leg malfunction or disability, the CPGs lose synchronization, leading to independent dynamics. In this case, the learning mechanism is applied to automatically adjust the oscillation frequencies of the remaining legs so that the robot adapts its locomotion to deal with the malfunction. As a consequence, the trajectory produced by the multiple chaotic CPGs resembles the original trajectory far better than the one produced by only a single CPG. The performance of the system is evaluated first in a physical simulation of a quadruped as well as a hexapod robot and finally on a real six-legged walking machine called AMOSII. The experimental results presented here reveal that using multiple CPGs with learning is an effective approach for adaptive locomotion generation where, for instance, different body parts have to perform independent movements for malfunction compensation.
    Review:
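
    The learning component added in this extension can be sketched as a generic simulated annealing loop over one leg's CPG frequency. The error function below is an invented stand-in for the robot's trajectory deviation (assumed here to be minimal near 1.4 Hz), and all constants are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)

      def trajectory_error(freq):
          # stand-in for "deviation from the desired walking trajectory"
          return (freq - 1.4) ** 2 + 0.01 * rng.standard_normal()

      freq, err = 1.0, trajectory_error(1.0)
      temperature = 1.0
      for _ in range(200):
          cand = freq + 0.1 * rng.standard_normal()    # propose a nearby frequency
          cand_err = trajectory_error(cand)
          # accept improvements always, worse moves with Boltzmann probability
          if cand_err < err or rng.random() < np.exp((err - cand_err) / temperature):
              freq, err = cand, cand_err
          temperature *= 0.97                          # geometric cooling schedule
      print(f"learned compensation frequency: {freq:.2f} Hz")
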
