BibTeX:
@article{faghihikolodziejskifiala2013,
  author   = {Faghihi, F. and Kolodziejski, C. and Fiala, A. and Wörgötter, F. and Tetzlaff, C.},
  title    = {An Information Theoretic Model of Information Processing in the Drosophila Olfactory System: the Role of Inhibitory Neurons for System Efficiency},
  journal  = {Frontiers in Computational Neuroscience},
  year     = {2013},
  volume   = {7},
  number   = {183},
  url      = {http://journal.frontiersin.org/Journal/10.3389/fncom.2013.00183/full},
  doi      = {10.3389/fncom.2013.00183},
  abstract = {Fruit flies (Drosophila melanogaster) rely on their olfactory system to process environmental information. This information has to be transmitted without system-relevant loss by the olfactory system to deeper brain areas for learning. Here we study the role of several parameters of the fly's olfactory system and the environment and how they influence olfactory information transmission. We have designed an abstract model of the antennal lobe, the mushroom body, and the inhibitory circuitry. Mutual information between the olfactory environment, simulated in terms of different odor concentrations, and a subpopulation of intrinsic mushroom body neurons (Kenyon cells) was calculated to quantify the efficiency of information transmission. With this method we study, on the one hand, the effect of different connectivity rates between olfactory projection neurons and firing thresholds of Kenyon cells. On the other hand, we analyze the influence of inhibition on mutual information between environment and mushroom body. Our simulations show an expected linear relation between the connectivity rate between the antennal lobe and the mushroom body and the firing threshold of the Kenyon cells to obtain maximum mutual information for both low and high odor concentrations. However, contradicting all-day experiences, high odor concentrations cause a drastic, and unrealistic, decrease in mutual information for all connectivity rates compared to low concentrations. But when inhibition onto the mushroom body is included, mutual information remains at high levels independent of other system parameters. This finding points to a pivotal role of inhibition in fly information processing, without which the system's efficiency would be substantially reduced.}
}

Review:  
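As an aside on the method: the entry above quantifies coding efficiency as the mutual information between odor concentration and Kenyon-cell responses. A minimal sketch of that quantity for a single threshold unit follows; it is not the paper's model — the two-level stimulus, the noise level, and the helper names `mutual_information` and `response_distribution` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def response_distribution(threshold, n_trials=20000):
    """Joint distribution of (odor concentration, cell fired?) for a
    toy Kenyon cell that fires when its noisy drive crosses a threshold."""
    counts = np.zeros((2, 2))  # rows: low/high concentration; cols: silent/fired
    for c, conc in enumerate([0.2, 0.8]):
        drive = conc + 0.1 * rng.standard_normal(n_trials)
        fired = drive > threshold
        counts[c, 0] = (~fired).sum()
        counts[c, 1] = fired.sum()
    return counts / counts.sum()

mi = mutual_information(response_distribution(threshold=0.5))
```

With a threshold placed between the two concentrations, the unit transmits almost the full 1 bit of the binary stimulus; a badly placed threshold transmits nothing. Mapping out this kind of parameter dependence is what the paper does at network scale.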


BibTeX:
@article{tetzlaffkolodziejskitimme2013,
  author   = {Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Tsodyks, M. and Wörgötter, F.},
  title    = {Synaptic scaling enables dynamically distinct short- and long-term memory formation},
  journal  = {PLoS Computational Biology},
  year     = {2013},
  volume   = {9},
  number   = {10},
  pages    = {e1003307},
  doi      = {10.1371/journal.pcbi.1003307},
  abstract = {Memory storage in the brain relies on mechanisms acting on time scales from minutes, for long-term synaptic potentiation, to days, for memory consolidation. During such processes, neural circuits distinguish synapses relevant for forming a long-term storage, which are consolidated, from synapses of short-term storage, which fade. How time scale integration and synaptic differentiation are simultaneously achieved remains unclear. Here we show that synaptic scaling -- a slow process usually associated with the maintenance of activity homeostasis -- combined with synaptic plasticity may simultaneously achieve both, thereby providing a natural separation of short- from long-term storage. The interaction between plasticity and scaling also provides an explanation for an established paradox where memory consolidation critically depends on the exact order of learning and recall. These results indicate that scaling may be fundamental for stabilizing memories, providing a dynamic link between early and late memory formation processes.}
}

Review:  
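The mechanism studied above combines a Hebbian term with a slow scaling term that pulls activity toward a target rate. A single-synapse sketch of this interplay can be written down directly; the rule's exact published form and all constants here are illustrative assumptions, not the paper's model.

```python
def simulate(u, v_target, mu=0.01, gamma=0.005, w0=0.5, steps=20000, dt=0.1):
    """Hebbian growth plus homeostatic scaling for one synapse.

    u: presynaptic rate; postsynaptic rate v = w * u (linear neuron).
    Assumed plasticity rule:  dw/dt = mu*u*v + gamma*(v_target - v)*w**2
    (the quadratic weight dependence of the scaling term is one choice
    used in this line of work; treat it as a sketch, not the reference).
    """
    w = w0
    for _ in range(steps):
        v = w * u
        w += dt * (mu * u * v + gamma * (v_target - v) * w**2)
    return w

w_weak = simulate(u=0.5, v_target=1.0)    # weak input
w_strong = simulate(u=1.5, v_target=1.0)  # strong input
```

The Hebbian term alone would grow without bound; adding the scaling term yields a finite stable weight, and a stronger input yields a stronger output representation — the stabilization and input dependence the scaling entries describe.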


BibTeX:
@article{manoonpongkolodziejskiwoergoetter20,
  author   = {Manoonpong, P. and Kolodziejski, C. and Wörgötter, F. and Morimoto, J.},
  title    = {Combining Correlation-Based and Reward-Based Learning in Neural Control for Policy Improvement},
  journal  = {Advances in Complex Systems},
  year     = {2013},
  volume   = {16},
  number   = {2-3},
  pages    = {1350015},
  url      = {http://www.worldscientific.com/doi/abs/10.1142/S021952591350015X},
  doi      = {10.1142/S021952591350015X},
  abstract = {Classical conditioning (conventionally modeled as correlation-based learning) and operant conditioning (conventionally modeled as reinforcement learning or reward-based learning) have been found in biological systems. Evidence shows that these two mechanisms strongly involve learning about associations. Based on these biological findings, we propose a new learning model to achieve successful control policies for artificial systems. This model combines correlation-based learning, using input correlation learning (ICO learning), and reward-based learning, using continuous actor-critic reinforcement learning (RL), thereby working as a dual learner system. The model performance is evaluated by simulations of a cart-pole system as a dynamic motion control problem and a mobile robot system as a goal-directed behavior control problem. Results show that the model can strongly improve the pole-balancing control policy, i.e., it allows the controller to learn to stabilize the pole over the largest domain of initial conditions compared to the results obtained when using a single learning mechanism. This model can also find a successful control policy for goal-directed behavior, i.e., the robot can effectively learn to approach a given goal, in contrast to its individual components. Thus, the study pursued here sharpens our understanding of how two different learning mechanisms can be combined and complement each other for solving complex tasks.}
}

Review:  
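The dual learner above couples a correlation-based update (ICO) with a reward-based one (actor-critic RL). The two kinds of update can be sketched in miniature — far below the paper's controllers; the Gaussian signal traces, a tabular TD(0) critic standing in for the continuous actor-critic, and all constants are hypothetical.

```python
import numpy as np

def ico_update(w, x_pred, x_reflex, mu=0.05):
    """ICO rule: the predictive input's weight changes with the
    derivative of the reflex input, dw/dt = mu * x_pred * d(x_reflex)/dt."""
    dx0 = np.diff(x_reflex, prepend=x_reflex[0])
    return w + mu * np.sum(x_pred * dx0)

def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """Tabular TD(0) critic step, a stand-in for the reward-based half."""
    V = V.copy()
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

t = np.arange(200.0)
reflex = np.exp(-0.5 * ((t - 120) / 5.0) ** 2)  # late reflex signal
early = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)   # cue preceding the reflex
late = np.exp(-0.5 * ((t - 140) / 5.0) ** 2)    # cue arriving too late

w_early = ico_update(0.0, early, reflex)
w_late = ico_update(0.0, late, reflex)
V = td_update(np.zeros(3), s=0, s_next=1, r=1.0)
```

The ICO weight grows only when the cue precedes the reflex (and shrinks when it follows it), while the TD step moves state values toward reward — the two ingredients the dual learner combines.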


BibTeX:
@article{tetzlaffkolodziejskitimme2012,
  author   = {Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Wörgötter, F.},
  title    = {Analysis of synaptic scaling in combination with Hebbian plasticity in several simple networks},
  journal  = {Front. Comput. Neurosci.},
  year     = {2012},
  volume   = {6},
  pages    = {36},
  doi      = {10.3389/fncom.2012.00036},
  abstract = {Conventional synaptic plasticity in combination with synaptic scaling is a biologically plausible plasticity rule that guides the development of synapses toward stability. Here we analyze the development of synaptic connections and the resulting activity patterns in different feedforward and recurrent neural networks, with plasticity and scaling. We show under which constraints an external input given to a feedforward network forms an input trace similar to a cell assembly (Hebb, 1949) by enhancing synaptic weights to larger stable values as compared to the rest of the network. For instance, a weak input creates a less strong representation in the network than a strong input, which produces a trace along large parts of the network. These processes are strongly influenced by the underlying connectivity. For example, when embedding recurrent structures (excitatory rings, etc.) into a feedforward network, the input trace is extended into more distant layers, while inhibition shortens it. These findings provide a better understanding of the dynamics of generic network structures where plasticity is combined with scaling. This also makes it possible to use this rule for constructing an artificial network with certain desired storage properties.}
}

Review:  


BibTeX:
@article{tetzlaffkolodziejskimarkelic2012,
  author   = {Tetzlaff, C. and Kolodziejski, C. and Markelic, I. and Wörgötter, F.},
  title    = {Time scales of memory, learning, and plasticity},
  journal  = {Biol. Cybern.},
  year     = {2012},
  volume   = {106},
  number   = {11},
  pages    = {715--726},
  url      = {http://dx.doi.org/10.1007/s00422-012-0529},
  doi      = {10.1007/s00422-012-0529},
  abstract = {If we stored every bit of input, the storage capacity of our nervous system would be reached after only about 10 days. The nervous system relies on at least two mechanisms that counteract this capacity limit: compression and forgetting. But the latter mechanism needs to know how long an entity should be stored: some memories are relevant only for the next few minutes, some are important even after the passage of several years. Psychology and physiology have found and described many different memory mechanisms, and these mechanisms indeed use different time scales. In this prospect we review these mechanisms with respect to their time scales and propose relations between mechanisms in learning and memory and their underlying physiological basis.}
}

Review:  


BibTeX:
@inproceedings{renchenkolodziejski2012,
  author    = {Ren, G. and Chen, W. and Kolodziejski, C. and Wörgötter, F. and Dasgupta, S. and Manoonpong, P.},
  title     = {Multiple Chaotic Central Pattern Generators for Locomotion Generation and Leg Damage Compensation in a Hexapod Robot},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2012},
  doi       = {10.1109/IROS.2012.6385573},
  abstract  = {In chaos control, an originally chaotic system is modified so that periodic dynamics arise. One application of this is to use the periodic dynamics of a single chaotic system as walking patterns in legged robots. In our previous work we applied such a controlled chaotic system as a central pattern generator (CPG) to generate different gait patterns of our hexapod robot AMOSII. However, if one or more legs break, its control fails. Specifically, in the scenario presented here, its movement permanently deviates from a desired trajectory. This is in contrast to the movement of real insects, as they can compensate for body damage, for instance, by adjusting the frequency of the remaining legs. To achieve this for our hexapod robot, we extend the system from one chaotic system serving as a single CPG to multiple chaotic systems, performing as multiple CPGs. Without damage, the chaotic systems synchronize and their dynamics are identical (similar to a single CPG). With damage, they can lose synchronization, leading to independent dynamics. In both simulations and real experiments, we can tune the oscillation frequency of every CPG manually so that the controller can indeed compensate for leg damage. The trajectory produced by the multiple chaotic CPG controllers resembles the original trajectory far better than the trajectory of the robot controlled by only a single CPG. Thus, multiple chaotic systems that synchronize for normal behavior but can stay desynchronized in other circumstances are an effective way to control complex behaviors where, for instance, different body parts have to perform independent movements, as after leg damage.}
}

Review:  
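The controller above relies on several CPGs that synchronize when the robot is intact and run independently after damage. A toy version of that sync/desync behavior with two coupled phase oscillators (simple stand-ins for the chaotic CPGs; frequencies and coupling strength are arbitrary choices):

```python
import numpy as np

def simulate_phases(omegas, k, steps=4000, dt=0.01):
    """Two phase oscillators (per-leg CPG stand-ins) with mutual
    coupling k; returns their final wrapped phase difference."""
    th = np.array([0.0, 1.0])  # start out of phase
    for _ in range(steps):
        d0 = omegas[0] + k * np.sin(th[1] - th[0])
        d1 = omegas[1] + k * np.sin(th[0] - th[1])
        th = th + dt * np.array([d0, d1])
    return (th[1] - th[0] + np.pi) % (2 * np.pi) - np.pi

# Intact: identical frequencies and active coupling -> phase lock (diff ~ 0).
locked = simulate_phases([2 * np.pi, 2 * np.pi], k=1.0)
# "Damaged": one CPG retuned and coupling released -> independent phases.
free = simulate_phases([2 * np.pi, 3 * np.pi], k=0.0)
```

With coupling on and equal frequencies the phase difference decays to zero; with coupling off the oscillators drift apart — the desynchronized mode the paper uses to drive the remaining legs at their own frequencies.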


BibTeX:
@article{tetzlaffkolodziejskitimm2011,
  author   = {Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Wörgötter, F.},
  title    = {Synaptic Scaling in Combination with many Generic Plasticity Mechanisms Stabilizes Circuit Connectivity},
  journal  = {Front. Comput. Neurosci.},
  year     = {2011},
  volume   = {5},
  pages    = {47},
  doi      = {10.3389/fncom.2011.00047},
  abstract = {Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long-term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks.}
}

Review:  


BibTeX:
@article{porrmccabekolodziejski2011,
  author   = {Porr, B. and McCabe, L. and Kolodziejski, C. and Wörgötter, F.},
  title    = {How feedback inhibition shapes spike-timing-dependent plasticity and its implications for recent Schizophrenia models},
  journal  = {Neural Networks},
  year     = {2011},
  volume   = {24},
  number   = {6},
  pages    = {560--567},
  url      = {http://www.sciencedirect.com/science/article/pii/S0893608011000888},
  doi      = {10.1016/j.neunet.2011.03.004},
  abstract = {It has been shown that plasticity is not a fixed property but, in fact, changes depending on the location of the synapse on the neuron and/or changes of biophysical parameters. Here we investigate how plasticity is shaped by feedback inhibition in a cortical microcircuit. We use a differential Hebbian learning rule to model spike-timing-dependent plasticity and show analytically that the feedback inhibition shortens the time window for LTD during spike-timing-dependent plasticity but not for LTP. We then use a realistic GENESIS model to test two hypotheses about interneuron hypofunction and conclude that a reduction in GAD67 is the most likely candidate as the cause for hypofrontality as observed in Schizophrenia.}
}

Review:  
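The analysis above starts from a differential Hebbian rule, in which the weight change for one pre/post pairing is the integral of the presynaptic trace times the time-derivative of the postsynaptic potential. The resulting uninhibited baseline STDP window can be sketched directly; the difference-of-exponentials kernel and its time constants are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def psp(t, tau_r=1.0, tau_f=5.0):
    """Difference-of-exponentials postsynaptic potential, zero for t < 0."""
    return np.where(t >= 0, np.exp(-t / tau_f) - np.exp(-t / tau_r), 0.0)

def stdp_window(dt_pre_post, dt_step=0.01, T=60.0):
    """Differential Hebbian weight change for one pre/post pairing:
    dw ~ integral of (pre trace) * d/dt (post trace)."""
    t = np.arange(0.0, T, dt_step)
    pre = psp(t - 20.0)                  # presynaptic event at t = 20
    post = psp(t - 20.0 - dt_pre_post)   # postsynaptic event shifted by dt
    dpost = np.gradient(post, dt_step)
    return np.sum(pre * dpost) * dt_step

ltp = stdp_window(+5.0)   # post follows pre -> potentiation
ltd = stdp_window(-5.0)   # post precedes pre -> depression
```

With identical pre and post kernels this window is antisymmetric and decays with the pairing interval; the paper's point is that feedback inhibition reshapes specifically the LTD half of such a window.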


BibTeX:
@article{kulviciuskolodziejskitamosiunaite20,
  author    = {Kulvicius, T. and Kolodziejski, C. and Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
  title     = {Behavioral analysis of differential Hebbian learning in closed-loop systems},
  journal   = {Biological Cybernetics},
  year      = {2010},
  volume    = {103},
  number    = {4},
  pages     = {255--271},
  publisher = {Springer-Verlag},
  doi       = {10.1007/s00422-010-0396-4},
  abstract  = {Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts which take learning into account, mostly measuring the information of inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems for relatively simple cases. In the second part of this study we try to answer the following question: How can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy and entropy measures and by investigating their development during learning. This way we can show that, within well-specified scenarios, there are indeed agents which are optimal with respect to their structure and adaptive properties.}
}

Review:  


BibTeX:
@article{kolodziejskitetzlaffwoergoetter2010,
  author   = {Kolodziejski, C. and Tetzlaff, C. and Wörgötter, F.},
  title    = {Closed-form treatment of the interactions between neuronal activity and timing-dependent plasticity in networks of linear neurons},
  journal  = {Front. Comput. Neurosci.},
  year     = {2010},
  volume   = {4},
  pages    = {1--15},
  doi      = {10.3389/fncom.2010.00134},
  abstract = {Network activity and network connectivity mutually influence each other. Especially for fast processes, like spike-timing-dependent plasticity (STDP), which depends on the interaction of few (two) signals, the question arises how these interactions are continuously altering the behavior and structure of the network. To address this question a time-continuous treatment of plasticity is required. However, this is, even in simple recurrent network structures, currently not possible. Thus, here we develop for a linear differential Hebbian learning system a method by which we can analytically investigate the dynamics and stability of the connections in recurrent networks. We use noisy periodic external input signals, which through the recurrent connections lead to complex actual ongoing inputs, and observe that large stable ranges emerge in these networks without boundaries or weight normalization. Somewhat counter-intuitively, we find that about 40% of these cases are obtained with an LTP-dominated STDP curve. Noise can reduce stability in some cases, but generally this does not occur. Instead stable domains are often enlarged. This study is a first step towards a better understanding of the ongoing interactions between activity and plasticity in recurrent networks using STDP. The results suggest that stability of subnetworks should generically be present also in larger structures.}
}

Review:  


BibTeX:
@inproceedings{manoonpongpasemannkolodziejski2010,
  author    = {Manoonpong, P. and Pasemann, F. and Kolodziejski, C. and Wörgötter, F.},
  title     = {Designing Simple Nonlinear Filters Using Hysteresis of Single Recurrent Neurons for Acoustic Signal Recognition in Robots},
  booktitle = {ICANN (1)},
  year      = {2010},
  volume    = {6352},
  pages     = {374--383},
  doi       = {10.1007/978-3-642-15819-3_50},
  abstract  = {In this article we exploit the discrete-time dynamics of a single neuron with self-connection to systematically design simple signal filters. Due to hysteresis effects and transient dynamics, this single neuron behaves as an adjustable low-pass filter for specific parameter configurations. Extending this neuromodule by two more recurrent neurons leads to versatile high- and band-pass filters. The approach presented here helps to understand how the dynamical properties of recurrent neural networks can be used for filter design. Furthermore, it gives guidance to a new way of implementing sensory preprocessing for acoustic signal recognition in autonomous robots.}
}

Review:  
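The filters above are built from the hysteresis of a single discrete-time neuron with an excitatory self-connection. That bistability is easy to reproduce; the self-weight, input sweep range, and settling time below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def neuron_response(inputs, w_self=2.0, steps=200):
    """Single discrete-time neuron with self-connection:
    x <- tanh(w_self * x + u). For w_self > 1 the unit is bistable,
    so its settled output depends on the input history (hysteresis)."""
    x, out = 0.0, []
    for u in inputs:
        for _ in range(steps):  # let the state settle at each input value
            x = np.tanh(w_self * x + u)
        out.append(x)
    return np.array(out)

sweep = np.linspace(-1.0, 1.0, 81)
up = neuron_response(sweep)                 # input rising
down = neuron_response(sweep[::-1])[::-1]   # input falling, re-aligned
gap = np.max(np.abs(up - down))             # width of the hysteresis loop
```

On the rising sweep the neuron stays on its low branch until the input crosses a switching point, and on the falling sweep it stays high well past the same input values — the history dependence the paper turns into adjustable filters.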


BibTeX:
@article{kolodziejskiporrwoergoetter2009,
  author   = {Kolodziejski, C. and Porr, B. and Wörgötter, F.},
  title    = {On the Asymptotic Equivalence Between Differential Hebbian and Temporal Difference Learning},
  journal  = {Neural Computation},
  year     = {2009},
  volume   = {21},
  number   = {4},
  pages    = {1173--1202},
  abstract = {In this theoretical contribution, we provide mathematical proof that two of the most important classes of network learning -- correlation-based differential Hebbian learning and reward-based temporal difference learning -- are asymptotically equivalent when timing the learning with a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective more closely related to the biophysics of neurons.}
}

Abstract: In this theoretical contribution, we provide mathematical proof that two of the most important classes of network learning, correlation-based differential Hebbian learning and reward-based temporal difference learning, are asymptotically equivalent when timing the learning with a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective more closely related to the biophysics of neurons.
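The two rule classes compared in this abstract can be sketched side by side. The snippet below is illustrative only (the function names, the discrete derivative v_curr - v_prev, and all constants are assumptions, not the paper's formulation): a differential Hebbian update gated by a modulatory signal next to a standard tabular TD(0) update.

```python
def diff_hebb_update(w, u, v_prev, v_curr, mu=0.01, modulator=1.0):
    """Differential Hebbian update: the weight change is proportional to
    presynaptic activity u times the temporal derivative of postsynaptic
    activity v, timed by a modulatory signal (illustrative discretisation)."""
    return w + mu * modulator * u * (v_curr - v_prev)

def td_update(V, s, s_next, r, alpha=0.1, gamma=1.0):
    """Tabular TD(0) update for comparison: delta = r + gamma*V(s') - V(s)."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta
```

Both updates are driven by a temporal difference of a postsynaptic or value signal, which is the structural similarity the paper's asymptotic-equivalence proof makes precise.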
Review:  


BibTeX:
@inproceedings{kolodziejskiporrtamosiunaite2009, author = {Kolodziejski, C. and Porr, B. and Tamosiunaite, M. and Wörgötter, F.}, title = {On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor}, pages = {857--864}, booktitle = {Advances in Neural Information Processing Systems}, year = {2009}, volume = {21}, abstract = {In this theoretical contribution we provide mathematical proof that two of the most important classes of network learning, correlation-based differential Hebbian learning and reward-based temporal difference learning, are asymptotically equivalent when timing the learning with a local modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective that is more closely related to the biophysics of neurons.}}

Abstract: In this theoretical contribution we provide mathematical proof that two of the most important classes of network learning, correlation-based differential Hebbian learning and reward-based temporal difference learning, are asymptotically equivalent when timing the learning with a local modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective that is more closely related to the biophysics of neurons.
Review:  


BibTeX:
@inproceedings{thompsonporrkolodziejski2008, author = {Thompson, A. M. and Porr, B. and Kolodziejski, C. and Wörgötter, F.}, title = {Second Order Conditioning in the Subcortical Nuclei of the Limbic System}, pages = {189--198}, booktitle = {From Animals to Animats 10}, year = {2008}, volume = {5040}, editor = {Asada, Minoru and Hallam, John C. T. and Meyer, Jean-Arcady and Tani, Jun}, publisher = {Springer Berlin Heidelberg}, series = {Lecture Notes in Computer Science}, url = {http://dx.doi.org/10.1007/978-3-540-69134-1_19}, doi = {10.1007/978-3-540-69134-1_19}, abstract = {Three-factor isotropic sequence order (ISO3) learning is a form of differential Hebbian learning where a third factor switches on learning at relevant moments, for example after reward retrieval. This switch enables learning only at specific moments and, thus, stabilises the corresponding weights. The concept of using a third factor as a gating signal for learning at relevant moments has been extended in this paper to perform second order conditioning (SOC). We present a biological model of the subcortical nuclei of the limbic system that is capable of performing SOC in a food seeking task. The third factor is modelled by dopaminergic neurons of the VTA which are activated via a direct excitatory glutamatergic pathway, and an indirect disinhibitory GABAergic pathway. The latter generates an amplification in the number of tonically active DA neurons. This produces an increase in DA outside the event of a primary reward and enables SOC to be accomplished.}}

Abstract: Three-factor isotropic sequence order (ISO3) learning is a form of differential Hebbian learning where a third factor switches on learning at relevant moments, for example after reward retrieval. This switch enables learning only at specific moments and, thus, stabilises the corresponding weights. The concept of using a third factor as a gating signal for learning at relevant moments has been extended in this paper to perform second order conditioning (SOC). We present a biological model of the subcortical nuclei of the limbic system that is capable of performing SOC in a food seeking task. The third factor is modelled by dopaminergic neurons of the VTA which are activated via a direct excitatory glutamatergic pathway, and an indirect disinhibitory GABAergic pathway. The latter generates an amplification in the number of tonically active DA neurons. This produces an increase in DA outside the event of a primary reward and enables SOC to be accomplished.
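The gating idea described in this abstract can be sketched in a few lines. All names, the threshold, and the learning rate below are hypothetical, not the paper's model: the differential Hebbian term u * dv contributes to the weight only while the dopaminergic third factor is above threshold, so weights are frozen outside relevant moments.

```python
def three_factor_update(w, u, dv, dopamine, mu=0.05, theta=0.5):
    """ISO3-style gated update (illustrative sketch): the differential
    Hebbian term u*dv is applied only while the third factor (here a
    scalar 'dopamine' level with hypothetical threshold theta) is active."""
    gate = 1.0 if dopamine > theta else 0.0
    return w + mu * gate * u * dv
```

Without the gate, the same correlation term would update the weight at every moment; the third factor restricts plasticity to, e.g., the time around reward delivery, which is what stabilises the learned weights.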
Review:  


BibTeX:
@article{kolodziejskiporrwoergoetter2008, author = {Kolodziejski, C. and Porr, B. and Wörgötter, F.}, title = {Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison}, pages = {259--272}, journal = {Biological Cybernetics}, year = {2008}, volume = {98}, number = {3}, abstract = {A confusingly wide variety of temporally asymmetric learning rules exists related to reinforcement learning and/or to spike-timing dependent plasticity, many of which look exceedingly similar, while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability is required. The goal of this article is to review these rules and compare them to provide a better overview over their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based differential Hebbian rules, and some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine learning (non-neuronal) context, a solid mathematical theory for TD learning has existed for several years. This can partly be transferred to a neuronal framework, too. On the other hand, only now a more complete theory has also emerged for differential Hebb rules. In general, rules differ by their convergence conditions and their numerical stability, which can lead to very undesirable behavior when wanting to apply them. For TD, convergence can be enforced with a certain output condition assuring that the}}

Abstract: A confusingly wide variety of temporally asymmetric learning rules exists related to reinforcement learning and/or to spike-timing dependent plasticity, many of which look exceedingly similar, while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability is required. The goal of this article is to review these rules and compare them to provide a better overview over their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based differential Hebbian rules, and some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine learning (non-neuronal) context, a solid mathematical theory for TD learning has existed for several years. This can partly be transferred to a neuronal framework, too. On the other hand, only now a more complete theory has also emerged for differential Hebb rules. In general, rules differ by their convergence conditions and their numerical stability, which can lead to very undesirable behavior when wanting to apply them. For TD, convergence can be enforced with a certain output condition assuring that the
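The abstract's point that rules differ in convergence and numerical stability can be made concrete with a toy contrast. The sketch below is illustrative and not taken from the paper: a plain Hebbian rule on a single weight grows exponentially, while an Oja-style normalisation (a classic textbook stabilisation, named here only for contrast) converges to a fixed point.

```python
def plain_hebb_run(steps, mu=0.1, u=1.0, w0=0.1):
    """Plain Hebbian rule dw = mu*u*v with linear output v = w*u: the
    weight is multiplied by (1 + mu*u**2) each step and diverges."""
    w = w0
    for _ in range(steps):
        v = w * u
        w += mu * u * v
    return w

def oja_run(steps, mu=0.1, u=1.0, w0=0.1):
    """Oja's rule dw = mu*v*(u - v*w): the extra decay term keeps the
    weight bounded; for u = 1 it converges toward w = 1."""
    w = w0
    for _ in range(steps):
        v = w * u
        w += mu * v * (u - v * w)
    return w
```

The two rules differ only by a decay term, yet one diverges and the other converges; this is the kind of behavioral difference between superficially similar rules that the paper analyses rigorously for TD and differential Hebbian learning.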
Review:  


BibTeX:
@article{kolodziejskiporrwoergoetter2007, author = {Kolodziejski, C. and Porr, B. and Wörgötter, F.}, title = {Anticipative adaptive muscle control: forward modeling with self-induced disturbances and recruitment}, pages = {11}, journal = {BMC Neuroscience}, year = {2007}, volume = {8}, number = {Suppl 2}, publisher = {BioMed Central}, url = {http://www.biomedcentral.com/1471-2202/8/S2/P202}, doi = {10.1186/1471-2202-8-S2-P202}}

Abstract:  
Review:  


BibTeX:
@article{renchendasgupta2014, author = {Ren, G. and Chen, W. and Dasgupta, S. and Kolodziejski, C. and Wörgötter, F. and Manoonpong, P.}, title = {Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation}, pages = {666--682}, journal = {Information Sciences}, year = {2014}, volume = {294}, month = {05}, publisher = {Elsevier}, url = {http://www.sciencedirect.com/science/article/pii/S0020025514005192}, doi = {10.1016/j.ins.2014.05.001}, abstract = {An originally chaotic system can be controlled into various periodic dynamics. When it is implemented into a legged robot's locomotion control as a central pattern generator (CPG), sophisticated gait patterns arise so that the robot can perform various walking behaviors. However, such a single chaotic CPG controller has difficulties dealing with leg malfunction. Specifically, in the scenarios presented here, its movement permanently deviates from the desired trajectory. To address this problem, we extend the single chaotic CPG to multiple CPGs with learning. The learning mechanism is based on a simulated annealing algorithm. In a normal situation, the CPGs synchronize and their dynamics are identical. With leg malfunction or disability, the CPGs lose synchronization, leading to independent dynamics. In this case, the learning mechanism is applied to automatically adjust the remaining legs' oscillation frequencies so that the robot adapts its locomotion to deal with the malfunction. As a consequence, the trajectory produced by the multiple chaotic CPGs resembles the original trajectory far better than the one produced by only a single CPG. The performance of the system is evaluated first in a physical simulation of a quadruped as well as a hexapod robot and finally in a real six-legged walking machine called AMOSII. The experimental results presented here reveal that using multiple CPGs with learning is an effective approach for adaptive locomotion generation where, for instance, different body parts have to perform independent movements for malfunction compensation.}}

Abstract: An originally chaotic system can be controlled into various periodic dynamics. When it is implemented into a legged robot's locomotion control as a central pattern generator (CPG), sophisticated gait patterns arise so that the robot can perform various walking behaviors. However, such a single chaotic CPG controller has difficulties dealing with leg malfunction. Specifically, in the scenarios presented here, its movement permanently deviates from the desired trajectory. To address this problem, we extend the single chaotic CPG to multiple CPGs with learning. The learning mechanism is based on a simulated annealing algorithm. In a normal situation, the CPGs synchronize and their dynamics are identical. With leg malfunction or disability, the CPGs lose synchronization, leading to independent dynamics. In this case, the learning mechanism is applied to automatically adjust the remaining legs' oscillation frequencies so that the robot adapts its locomotion to deal with the malfunction. As a consequence, the trajectory produced by the multiple chaotic CPGs resembles the original trajectory far better than the one produced by only a single CPG. The performance of the system is evaluated first in a physical simulation of a quadruped as well as a hexapod robot and finally in a real six-legged walking machine called AMOSII. The experimental results presented here reveal that using multiple CPGs with learning is an effective approach for adaptive locomotion generation where, for instance, different body parts have to perform independent movements for malfunction compensation.
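The simulated-annealing ingredient mentioned in this abstract can be sketched in a few lines. Everything below is a hypothetical toy (the quadratic cost, Gaussian proposals, cooling schedule, and all parameters are assumptions; the paper's actual cost function and CPG coupling are not reproduced): a single oscillation frequency is annealed toward a target frequency.

```python
import math
import random

def anneal_frequency(target, f0=1.0, steps=500, T0=1.0, seed=0):
    """Minimal simulated-annealing sketch: adjust a frequency f to
    minimise the squared mismatch with a target frequency. Illustrative
    only; not the implementation used in the paper."""
    rng = random.Random(seed)
    f, cost = f0, (f0 - target) ** 2
    T = T0
    for _ in range(steps):
        cand = f + rng.gauss(0.0, 0.1)   # local random proposal
        c = (cand - target) ** 2
        # accept improvements always; worse moves with Boltzmann probability
        if c < cost or rng.random() < math.exp(-(c - cost) / T):
            f, cost = cand, c
        T *= 0.97                        # exponential cooling schedule
    return f
```

Early in the run, the high temperature lets the search escape poor regions; as the temperature cools, the acceptance rule becomes greedy and the frequency settles near the target.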
Review: 
© 2011–2016 Dept. of Computational Neuroscience