Prof. Dr. Florentin Wörgötter

Email: worgott@gwdg.de
Phone: +49 551/ 39 10760
Room: E01.107

Florentin Wörgötter studied biology and mathematics at the University of Düsseldorf, Germany. He received his Ph.D. from the University of Essen, Germany, in 1988, for work on the visual cortex. From 1988 to 1990 he did research in computational neuroscience at the California Institute of Technology, Pasadena. In 1990 he became a researcher at the University of Bochum, Germany, where he investigated the experimental and computational neuroscience of the visual system. From 2000 to 2005 he was Professor of Computational Neuroscience in the Psychology Department of the University of Stirling, U.K., where his interests turned strongly towards “Learning and Adaptive Artificial Systems”. Since July 2005 he has been the Head of the Computational Neuroscience Department at the Bernstein Center for Computational Neuroscience, Third Institute of Physics, University of Göttingen, Germany. His current research interests include information processing in closed-loop perception–action systems (animals, robots), sensory processing (vision), motor control, and learning/plasticity, all of which are tested in robotic implementations. This work has increasingly turned towards artificial cognition, addressing how humans understand actions and how such understanding can be transferred to machines.


    Faghihi, F. and Kolodziejski, C. and Fiala, A. and Wörgötter, F. and Tetzlaff, C. (2013).
    An Information Theoretic Model of Information Processing in the Drosophila Olfactory System: the Role of Inhibitory Neurons for System Efficiency. Frontiers in Computational Neuroscience, 7, 183. DOI: 10.3389/fncom.2013.00183.
    BibTeX:
    @article{faghihikolodziejskifiala2013,
      author = {Faghihi, F. and Kolodziejski, C. and Fiala, A. and Wörgötter, F. and Tetzlaff, C.},
      title = {An Information Theoretic Model of Information Processing in the Drosophila Olfactory System: the Role of Inhibitory Neurons for System Efficiency},
      journal = {Frontiers in Computational Neuroscience},
      year = {2013},
      volume= {7},
      number = {183},
      url = {http://journal.frontiersin.org/Journal/10.3389/fncom.2013.00183/full},
      doi = {10.3389/fncom.2013.00183},
      abstract = {Fruit flies Drosophila melanogaster rely on their olfactory system to process environmental information. This information has to be transmitted without system-relevant loss by the olfactory system to deeper brain areas for learning. Here we study the role of several parameters of the flys olfactory system and the environment and how they influence olfactory information transmission. We have designed an abstract model of the antennal lobe, the mushroom body and the inhibitory circuitry. Mutual information between the olfactory environment, simulated in terms of different odor concentrations, and a sub-population of intrinsic mushroom body neurons Kenyon cells was calculated to quantify the efficiency of information transmission. With this method we study, on the one hand, the effect of different connectivity rates between olfactory projection neurons and firing thresholds of Kenyon cells. On the other hand, we analyze the influence of inhibition on mutual information between environment and mushroom body. Our simulations show an expected linear relation between the connectivity rate between the antennal lobe and the mushroom body and firing threshold of the Kenyon cells to obtain maximum mutual information for both low and high odor concentrations. However, contradicting all-day experiences, high odor concentrations cause a drastic, and unrealistic, decrease in mutual information for all connectivity rates compared to low concentration. But when inhibition on the mushroom body is included, mutual information remains at high levels independent of other system parameters. This finding points to a pivotal role of inhibition in fly information processing without which the systems efficiency will be substantially reduced}}
    Abstract: Fruit flies Drosophila melanogaster rely on their olfactory system to process environmental information. This information has to be transmitted without system-relevant loss by the olfactory system to deeper brain areas for learning. Here we study the role of several parameters of the fly's olfactory system and the environment and how they influence olfactory information transmission. We have designed an abstract model of the antennal lobe, the mushroom body and the inhibitory circuitry. Mutual information between the olfactory environment, simulated in terms of different odor concentrations, and a sub-population of intrinsic mushroom body neurons (Kenyon cells) was calculated to quantify the efficiency of information transmission. With this method we study, on the one hand, the effect of different connectivity rates between olfactory projection neurons and firing thresholds of Kenyon cells. On the other hand, we analyze the influence of inhibition on mutual information between environment and mushroom body. Our simulations show an expected linear relation between the connectivity rate between the antennal lobe and the mushroom body and firing threshold of the Kenyon cells to obtain maximum mutual information for both low and high odor concentrations. However, contradicting all-day experiences, high odor concentrations cause a drastic, and unrealistic, decrease in mutual information for all connectivity rates compared to low concentration. But when inhibition on the mushroom body is included, mutual information remains at high levels independent of other system parameters. This finding points to a pivotal role of inhibition in fly information processing without which the system's efficiency will be substantially reduced.
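
A minimal sketch in Python of the kind of mutual-information estimate described above (not the paper's model): odor presentations at two concentrations are passed through a random projection-neuron-to-Kenyon-cell connectivity matrix with a firing threshold, and I(stimulus; KC pattern) is estimated from sample counts. Network sizes, thresholds and the response mapping are illustrative assumptions.

import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_pn, n_kc = 50, 200                 # projection neurons, Kenyon cells (illustrative sizes)
p_connect, threshold = 0.1, 3.0

# Random sparse PN -> KC connectivity as an abstract stand-in for the fly circuit.
W = (rng.random((n_kc, n_pn)) < p_connect).astype(float)

def kc_response(concentration, odor_pattern):
    """Binary Kenyon-cell population response to one odor presentation."""
    pn_rates = concentration * odor_pattern + rng.normal(0.0, 0.1, n_pn)
    return tuple((W @ pn_rates > threshold).astype(int))

# Two stimulus classes: low and high concentration of the same odor pattern.
odor = rng.random(n_pn)
concentrations = [0.5, 2.0]
samples = [(s, kc_response(c, odor))
           for s, c in enumerate(concentrations) for _ in range(500)]

# Plug-in estimate of I(stimulus; response) from joint and marginal counts.
n = len(samples)
joint = Counter(samples)
p_s = Counter(s for s, _ in samples)
p_r = Counter(r for _, r in samples)
mi = sum(c / n * np.log2((c / n) / ((p_s[s] / n) * (p_r[r] / n)))
         for (s, r), c in joint.items())
print(f"estimated mutual information: {mi:.3f} bits")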
    Review:
    Dasgupta, S. and Wörgötter, F. and Manoonpong, P. (2013).
    Information dynamics based self-adaptive reservoir for delay temporal memory tasks. Evolving Systems, 235-249, 4, 4. DOI: 10.1007/s12530-013-9080-y.
    BibTeX:
    @article{dasguptawoergoettermanoonpong2013,
      author = {Dasgupta, S. and Wörgötter, F. and Manoonpong, P.},
      title = {Information dynamics based self-adaptive reservoir for delay temporal memory tasks},
      pages = {235-249},
      journal = {Evolving Systems},
      year = {2013},
      volume= {4},
      number = {4},
      publisher = {Springer Berlin Heidelberg},
      url = {http://dx.doi.org/10.1007/s12530-013-9080-y},
      doi = {10.1007/s12530-013-9080-y},
      abstract = {Recurrent neural networks of the reservoir computing (RC) type have been found useful in various time-series processing tasks with inherent non-linearity and requirements of variable temporal memory. Specifically for delayed response tasks involving the transient memorization of information (temporal memory), self-adaptation in RC is crucial for generalization to varying delays. In this work using information theory, we combine a generalized intrinsic plasticity rule with a local information dynamics based schema of reservoir neuron leak adaptation. This allows the RC network to be optimized in a self-adaptive manner with minimal parameter tuning. Local active information storage, measured as the degree of influence of previous activity on the next time step activity of a neuron, is used to modify its leak-rate. This results in RC network with non-uniform leak rate which depends on the time scales of the incoming input. Intrinsic plasticity (IP) is aimed at maximizing the mutual information between each neurons input and output while maintaining a mean level of activity (homeostasis). Experimental results on two standard benchmark tasks confirm the extended performance of this system as compared to the static RC (fixed leak and no IP) and RC with only IP. In addition, using both a simulated wheeled robot and a more complex physical hexapod robot, we demonstrate the ability of the system to achieve long temporal memory for solving a basic T-shaped maze navigation task with varying delay time scale.}}
    Abstract: Recurrent neural networks of the reservoir computing (RC) type have been found useful in various time-series processing tasks with inherent non-linearity and requirements of variable temporal memory. Specifically for delayed response tasks involving the transient memorization of information (temporal memory), self-adaptation in RC is crucial for generalization to varying delays. In this work using information theory, we combine a generalized intrinsic plasticity rule with a local information dynamics based schema of reservoir neuron leak adaptation. This allows the RC network to be optimized in a self-adaptive manner with minimal parameter tuning. Local active information storage, measured as the degree of influence of previous activity on the next time step activity of a neuron, is used to modify its leak-rate. This results in RC network with non-uniform leak rate which depends on the time scales of the incoming input. Intrinsic plasticity (IP) is aimed at maximizing the mutual information between each neuron's input and output while maintaining a mean level of activity (homeostasis). Experimental results on two standard benchmark tasks confirm the extended performance of this system as compared to the static RC (fixed leak and no IP) and RC with only IP. In addition, using both a simulated wheeled robot and a more complex physical hexapod robot, we demonstrate the ability of the system to achieve long temporal memory for solving a basic T-shaped maze navigation task with varying delay time scale.
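
For orientation, a minimal leaky-integrator reservoir update with a per-neuron (non-uniform) leak vector, which is the quantity the paper adapts online from local active information storage and intrinsic plasticity. The adaptation rules themselves are omitted here; the leak values are simply drawn once at random and all sizes are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 100, 1

# Sparse random reservoir, rescaled to a spectral radius below one (echo-state property).
W = rng.normal(0.0, 1.0, (n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

# Non-uniform leak rates; in the paper each neuron's leak is adapted from its
# local active information storage, here they are just drawn once at random.
leak = rng.uniform(0.1, 1.0, n_res)

def step(x, u):
    return (1.0 - leak) * x + leak * np.tanh(W @ x + W_in @ u)

x = np.zeros(n_res)
for t in range(200):                           # drive the reservoir with a slow sine
    x = step(x, np.array([np.sin(2 * np.pi * t / 50.0)]))
print("reservoir state norm after 200 steps:", round(float(np.linalg.norm(x)), 3))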
    Review:
    Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Tsodyks, M. and Wörgötter, F. (2013).
    Synaptic scaling enables dynamically distinct short- and long-term memory formation. PLoS Computational Biology, e1003307, 9, 10. DOI: 10.1371/journal.pcbi.1003307.
    BibTeX:
    @article{tetzlaffkolodziejskitimme2013,
      author = {Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Tsodyks, M. and Wörgötter, F.},
      title = {Synaptic scaling enables dynamically distinct short- and long-term memory formation},
      pages = {e1003307},
      journal = {PLoS Computational Biology},
      year = {2013},
      volume= {9},
      number = {10},
      doi = {10.1371/journal.pcbi.1003307},
      abstract = {Memory storage in the brain relies on mechanisms acting on time scales from minutes, for long-term synaptic potentiation, to days, for memory consolidation. During such processes, neural circuits distinguish synapses relevant for forming a long-term storage, which are consolidated, from synapses of short-term storage, which fade. How time scale integration and synaptic differentiation is simultaneously achieved remains unclear. Here we show that synaptic scaling - a slow process usually associated with the maintenance of activity homeostasis - combined with synaptic plasticity may simultaneously achieve both, thereby providing a natural separation of short- from long-term storage. The interaction between plasticity and scaling provides also an explanation for an established paradox where memory consolidation critically depends on the exact order of learning and recall. These results indicate that scaling may be fundamental for stabilizing memories, providing a dynamic link between early and late memory formation processes.}}
    Abstract: Memory storage in the brain relies on mechanisms acting on time scales from minutes, for long-term synaptic potentiation, to days, for memory consolidation. During such processes, neural circuits distinguish synapses relevant for forming a long-term storage, which are consolidated, from synapses of short-term storage, which fade. How time scale integration and synaptic differentiation is simultaneously achieved remains unclear. Here we show that synaptic scaling - a slow process usually associated with the maintenance of activity homeostasis - combined with synaptic plasticity may simultaneously achieve both, thereby providing a natural separation of short- from long-term storage. The interaction between plasticity and scaling provides also an explanation for an established paradox where memory consolidation critically depends on the exact order of learning and recall. These results indicate that scaling may be fundamental for stabilizing memories, providing a dynamic link between early and late memory formation processes.
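
The interplay can be illustrated with a generic single-synapse update that combines a Hebbian growth term with a quadratic, activity-dependent scaling term pulling the postsynaptic rate towards a target. The exact model and constants used in the paper differ; everything below is an illustrative assumption.

# dw/dt = mu * u * v + gamma * (v_target - v) * w**2
# Hebbian growth (first term) alone diverges; the scaling term (second term)
# stabilizes the weight at a finite value. u: presynaptic rate, v: postsynaptic rate.
mu, gamma, v_target, dt = 0.01, 0.005, 1.0, 0.1
w, u = 0.5, 1.0

for _ in range(20000):
    v = w * u                                   # trivially linear "neuron" for the sketch
    dw = mu * u * v + gamma * (v_target - v) * w ** 2
    w += dt * dw

print("weight settles at:", round(w, 3), "with postsynaptic rate:", round(w * u, 3))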
    Review:
    Kesper, P. and Grinke, E. and Hesse, F. and Wörgötter, F. and Manoonpong, P. (2013).
    Obstacle/Gap Detection and Terrain Classification of Walking Robots based on a 2D Laser Range Finder. 16th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines CLAWAR, 419-426, 16.
    BibTeX:
    @inproceedings{kespergrinkehesse2013,
      author = {Kesper, P. and Grinke, E. and Hesse, F. and Wörgötter, F. and Manoonpong, P.},
      title = {Obstacle/Gap Detection and Terrain Classification of Walking Robots based on a 2D Laser Range Finder},
      pages = {419-426},
      booktitle = {16th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines CLAWAR},
      year = {2013},
      number = {16},
      location = {Sydney (Australia)},
      month = {July 14-17},
      abstract = {This paper utilizes a 2D laser range finder (LRF) to determine the behavior of a walking robot. The LRF provides information for 1) obstacle/gap detection as well as 2) terrain classification. The obstacle/gap detection is based on an edge detection with increased robustness and accuracy due to customized pre and post processing. Its output is used to drive obstacle/gap avoidance behavior or climbing behavior, depending on the height of obstacles or the depth of gaps. The terrain classification employs terrain roughness to select a proper gait with respect to the current terrain. As a result, the combination of these methods enables the robot to decide if obstacles and gaps can be climbed up/down or have to be avoided while at the same time a terrain specific gait can be chosen.}}
    Abstract: This paper utilizes a 2D laser range finder (LRF) to determine the behavior of a walking robot. The LRF provides information for 1) obstacle/gap detection as well as 2) terrain classification. The obstacle/gap detection is based on an edge detection with increased robustness and accuracy due to customized pre and post processing. Its output is used to drive obstacle/gap avoidance behavior or climbing behavior, depending on the height of obstacles or the depth of gaps. The terrain classification employs terrain roughness to select a proper gait with respect to the current terrain. As a result, the combination of these methods enables the robot to decide if obstacles and gaps can be climbed up/down or have to be avoided while at the same time a terrain specific gait can be chosen.
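
As a toy illustration of the edge-based idea (not the paper's pre- and post-processing), a downward-looking 1-D range scan can be thresholded against the expected ground distance to mark obstacle and gap regions, with edges taken from large jumps between neighbouring beams. The scan and all thresholds below are made up.

import numpy as np

def classify_scan(scan, ground=1.0, height_thresh=0.15):
    """Toy obstacle/gap detection on a downward-looking 1-D range scan (metres):
    much shorter readings than the ground distance -> obstacle, much longer -> gap."""
    deviation = ground - scan                  # > 0: surface above ground, < 0: below it
    obstacle_mask = deviation > height_thresh
    gap_mask = deviation < -height_thresh
    edges = np.flatnonzero(np.abs(np.diff(scan)) > height_thresh)
    return obstacle_mask, gap_mask, edges

scan = np.full(180, 1.0)                       # flat ground 1.0 m below the sensor
scan[60:70] = 0.7                              # an obstacle about 0.3 m high
scan[120:130] = 1.4                            # a gap about 0.4 m deep

obstacles, gaps, edges = classify_scan(scan)
print("obstacle beams:", np.flatnonzero(obstacles))
print("gap beams:", np.flatnonzero(gaps))
print("edge positions (beam index):", edges)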
    Review:
    Manoonpong, P. and Kolodziejski, C. and Wörgötter, F. and Morimoto, J. (2013).
    Combining Correlation-Based and Reward-Based Learning in Neural Control for Policy Improvement. Advances in Complex Systems, 1350015, 16, 2-3. DOI: 10.1142/S021952591350015X.
    BibTeX:
    @article{manoonpongkolodziejskiwoergoetter2013,
      author = {Manoonpong, P. and Kolodziejski, C. and Wörgötter, F. and Morimoto, J.},
      title = {Combining Correlation-Based and Reward-Based Learning in Neural Control for Policy Improvement},
      pages = {1350015},
      journal = {Advances in Complex Systems},
      year = {2013},
      volume= {16},
      number = {2-3},
      url = {http://www.worldscientific.com/doi/abs/10.1142/S021952591350015X},
      doi = {10.1142/S021952591350015X},
      abstract = {Classical conditioning (conventionally modeled as correlation-based learning) and operant conditioning (conventionally modeled as reinforcement learning or reward-based learning) have been found in biological systems. Evidence shows that these two mechanisms strongly involve learning about associations. Based on these biological findings, we propose a new learning model to achieve successful control policies for artificial systems. This model combines correlation-based learning using input correlation learning (ICO learning) and reward-based learning using continuous actor-critic reinforcement learning (RL), thereby working as a dual learner system. The model performance is evaluated by simulations of a cart-pole system as a dynamic motion control problem and a mobile robot system as a goal-directed behavior control problem. Results show that the model can strongly improve pole balancing control policy, i.e., it allows the controller to learn stabilizing the pole in the largest domain of initial conditions compared to the results obtained when using a single learning mechanism. This model can also find a successful control policy for goal-directed behavior, i.e., the robot can effectively learn to approach a given goal compared to its individual components. Thus, the study pursued here sharpens our understanding of how two different learning mechanisms can be combined and complement each other for solving complex tasks.}}
    Abstract: Classical conditioning (conventionally modeled as correlation-based learning) and operant conditioning (conventionally modeled as reinforcement learning or reward-based learning) have been found in biological systems. Evidence shows that these two mechanisms strongly involve learning about associations. Based on these biological findings, we propose a new learning model to achieve successful control policies for artificial systems. This model combines correlation-based learning using input correlation learning (ICO learning) and reward-based learning using continuous actor-critic reinforcement learning (RL), thereby working as a dual learner system. The model performance is evaluated by simulations of a cart-pole system as a dynamic motion control problem and a mobile robot system as a goal-directed behavior control problem. Results show that the model can strongly improve pole balancing control policy, i.e., it allows the controller to learn stabilizing the pole in the largest domain of initial conditions compared to the results obtained when using a single learning mechanism. This model can also find a successful control policy for goal-directed behavior, i.e., the robot can effectively learn to approach a given goal compared to its individual components. Thus, the study pursued here sharpens our understanding of how two different learning mechanisms can be combined and complement each other for solving complex tasks.
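
The two ingredients the paper combines can each be written in a few lines; the sketch below shows them separately on artificial signals: an ICO-style weight update proportional to the predictive input times the temporal change of the reflex input, and a TD(0) critic update on a toy state chain. How the paper merges the two learners and applies them to the cart-pole and robot tasks is not reproduced here.

import numpy as np

# --- ICO-style (input correlation) learning on artificial signals ---------------
# dw ~ eta * u_predictive * d(u_reflex)/dt: the weight of the early, predictive
# input grows while it reliably precedes changes of the late reflex input.
eta_ico, w_ico, u_reflex_prev = 0.5, 0.0, 0.0
for t in range(300):
    u_pred = np.sin(0.1 * t)                   # early predictive signal
    u_reflex = np.sin(0.1 * (t - 5))           # the same event, arriving a bit later
    w_ico += eta_ico * u_pred * (u_reflex - u_reflex_prev)
    u_reflex_prev = u_reflex
print("ICO weight after training:", round(w_ico, 3))

# --- TD(0) critic on a 5-state chain with reward on reaching the last state -----
alpha, gamma = 0.1, 0.9
V = np.zeros(5)
for episode in range(200):
    for s in range(4):
        r = 1.0 if s + 1 == 4 else 0.0
        V[s] += alpha * (r + gamma * V[s + 1] - V[s])
print("learned state values:", np.round(V, 2))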
    Review:
    Kulvicius, T. and Biehl, M. and Aein, M J. and Tamosiunaite, M. and Wörgötter, F. (2013).
    Interaction learning for dynamic movement primitives used in cooperative robotic tasks. Robotics and Autonomous Systems, 1450 - 1459, 61, 12. DOI: 10.1016/j.robot.2013.07.009.
    BibTeX:
    @article{kulviciusbiehlaein2013,
      author = {Kulvicius, T. and Biehl, M. and Aein, M J. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Interaction learning for dynamic movement primitives used in cooperative robotic tasks},
      pages = {1450 - 1459},
      journal = {Robotics and Autonomous Systems},
      year = {2013},
      volume= {61},
      number = {12},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889013001358},
      doi = {10.1016/j.robot.2013.07.009},
      abstract = {Since several years dynamic movement primitives (DMPs) are more and more getting into the center of interest for flexible movement control in robotics. In this study we introduce sensory feedback together with a predictive learning mechanism which allows tightly coupled dual-agent systems to learn an adaptive, sensor-driven interaction based on DMPs. The coupled conventional (no-sensors, no learning) DMP-system automatically equilibrates and can still be solved analytically allowing us to derive conditions for stability. When adding adaptive sensor control we can show that both agents learn to cooperate. Simulations as well as real-robot experiments are shown. Interestingly, all these mechanisms are entirely based on low level interactions without any planning or cognitive component.}}
    Abstract: Since several years dynamic movement primitives (DMPs) are more and more getting into the center of interest for flexible movement control in robotics. In this study we introduce sensory feedback together with a predictive learning mechanism which allows tightly coupled dual-agent systems to learn an adaptive, sensor-driven interaction based on DMPs. The coupled conventional (no-sensors, no learning) DMP-system automatically equilibrates and can still be solved analytically allowing us to derive conditions for stability. When adding adaptive sensor control we can show that both agents learn to cooperate. Simulations as well as real-robot experiments are shown. Interestingly, all these mechanisms are entirely based on low level interactions without any planning or cognitive component.
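
For readers unfamiliar with DMPs, the standard discrete formulation is a critically damped point attractor driven by a phase-dependent forcing term; a coupling term C(t) is the natural entry point for the kind of sensory/interaction feedback studied in the paper. The sketch below integrates the system with Euler steps, with f and C set to zero and the usual textbook constants rather than the paper's.

# Discrete dynamic movement primitive (transformation + canonical system):
#   tau * dz/dt = alpha_z * (beta_z * (g - y) - z) + f(x) + C(t)
#   tau * dy/dt = z
#   tau * dx/dt = -alpha_x * x
# f(x): learned forcing term, C(t): sensory/interaction coupling (both zero here).
alpha_z, beta_z, alpha_x, tau, dt = 25.0, 6.25, 1.0, 1.0, 0.001
y, z, x, g = 0.0, 0.0, 1.0, 1.0                # start, scaled velocity, phase, goal

for _ in range(int(1.0 / dt)):                 # integrate one second
    f_term, coupling = 0.0, 0.0
    dz = (alpha_z * (beta_z * (g - y) - z) + f_term + coupling) / tau
    dy = z / tau
    dx = -alpha_x * x / tau
    z, y, x = z + dz * dt, y + dy * dt, x + dx * dt

print("position after 1 s:", round(y, 3), "(goal:", g, ")")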
    Review:
    Manoonpong, P. and Parlitz, U. and Wörgötter, F. (2013).
    Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines. Frontiers in Neural Circuits, 7, 12. DOI: 10.3389/fncir.2013.00012.
    BibTeX:
    @article{manoonpongparlitzwoergoetter2013,
      author = {Manoonpong, P. and Parlitz, U. and Wörgötter, F.},
      title = {Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines},
      journal = {Frontiers in Neural Circuits},
      year = {2013},
      volume= {7},
      number = {12},
      url = {http://journal.frontiersin.org/Journal/10.3389/fncir.2013.00012/full},
      doi = {10.3389/fncir.2013.00012},
      abstract = {Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements show the impression of elegance including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback while internal models are used for sensory prediction and state estimations. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines.}}
    Abstract: Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements show the impression of elegance including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback while internal models are used for sensory prediction and state estimations. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines.
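
Controllers in this line of work are typically built around a very small recurrent CPG circuit; a common minimal form is a two-neuron SO(2)-type oscillator whose recurrent weights are a slightly up-scaled rotation matrix passed through tanh. Only that oscillator core is sketched below; the neuromodulation, local leg control and adaptive forward models described in the abstract are not included, and all parameter values are illustrative.

import numpy as np

phi, alpha = 0.2, 1.01                         # phase increment and gain slightly above 1
W = alpha * np.array([[np.cos(phi), np.sin(phi)],
                      [-np.sin(phi), np.cos(phi)]])

o = np.array([0.1, 0.0])                       # small kick to start the oscillation
trace = []
for _ in range(400):
    o = np.tanh(W @ o)                         # discrete-time two-neuron update
    trace.append(o[0])

trace = np.array(trace)
print("output range of neuron 0 after the transient:",
      round(float(trace[200:].min()), 2), "to", round(float(trace[200:].max()), 2))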
    Review:
    Nachstedt, T. and Wörgötter, F. and Manoonpong, P. and Ariizumi, R. and Ambe, Y. and Matsuno, F. (2013).
    Adaptive neural oscillators with synaptic plasticity for locomotion control of a snake-like robot with screw-drive mechanism. IEEE International Conference on Robotics and Automation ICRA, 3389-3395. DOI: 10.1109/ICRA.2013.6631050.
    BibTeX:
    @inproceedings{nachstedtwoergoettermanoonpong2013,
      author = {Nachstedt, T. and Wörgötter, F. and Manoonpong, P. and Ariizumi, R. and Ambe, Y. and Matsuno, F.},
      title = {Adaptive neural oscillators with synaptic plasticity for locomotion control of a snake-like robot with screw-drive mechanism},
      pages = {3389-3395},
      booktitle = {IEEE International Conference on Robotics and Automation ICRA},
      year = {2013},
      location = {Karlsruhe (Germany)},
      month = {May 6-10},
      doi = {10.1109/ICRA.2013.6631050},
      abstract = {Central pattern generators (CPGs) play a crucial role for animal locomotion control. They can be entrained by sensory feedback to induce proper rhythmic patterns and even store the entrained patterns through connection weights. Inspired by this biological finding, we use four adaptive neural oscillators with synaptic plasticity as CPGs for locomotion control of our real snake-like robot with screw-drive mechanism. Each oscillator consists of only three neurons and uses adaptive mechanisms based on frequency adaptation and Hebbian-type learning rules. It autonomously generates proper periodic patterns for the robot locomotion and can be entrained by sensory feedback to memorize the patterns. The adaptive CPG system in conjunction with a simple control strategy enables the robot to perform self-tuning behavior which is robust against short-time perturbations. The generated behavior is also energy efficient. In addition, the robot can also cope with corners as well as move through a complex environment with obstacles.}}
    Abstract: Central pattern generators (CPGs) play a crucial role for animal locomotion control. They can be entrained by sensory feedback to induce proper rhythmic patterns and even store the entrained patterns through connection weights. Inspired by this biological finding, we use four adaptive neural oscillators with synaptic plasticity as CPGs for locomotion control of our real snake-like robot with screw-drive mechanism. Each oscillator consists of only three neurons and uses adaptive mechanisms based on frequency adaptation and Hebbian-type learning rules. It autonomously generates proper periodic patterns for the robot locomotion and can be entrained by sensory feedback to memorize the patterns. The adaptive CPG system in conjunction with a simple control strategy enables the robot to perform self-tuning behavior which is robust against short-time perturbations. The generated behavior is also energy efficient. In addition, the robot can also cope with corners as well as move through a complex environment with obstacles.
    Review:
    Zenker, S. and Aksoy, E E. and Goldschmidt, D. and Wörgötter, F. and Manoonpong, P. (2013).
    Visual Terrain Classification for Selecting Energy Efficient Gaits of a Hexapod Robot. IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 577-584. DOI: 10.1109/AIM.2013.6584154.
    BibTeX:
    @inproceedings{zenkeraksoygoldschmidt2013,
      author = {Zenker, S. and Aksoy, E E. and Goldschmidt, D. and Wörgötter, F. and Manoonpong, P.},
      title = {Visual Terrain Classification for Selecting Energy Efficient Gaits of a Hexapod Robot},
      pages = {577-584},
      booktitle = {IEEE/ASME International Conference on Advanced Intelligent Mechatronics},
      year = {2013},
      location = {Wollongong (Australia)},
      month = {Jul 9-12},
      doi = {10.1109/AIM.2013.6584154},
      abstract = {Legged robots need to be able to classify and recognize different terrains to adapt their gait accordingly. Recent works in terrain classification use different types of sensors (like stereovision, 3D laser range, and tactile sensors) and their combination. However, such sensor systems require more computing power, produce extra load to legged robots, and/or might be difficult to install on a small size legged robot. In this work, we present an online terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm which is robust to changes in illumination and view points. For this algorithm, we extract local features of terrains using either Scale Invariant Feature Transform (SIFT) or Speed Up Robust Feature (SURF). We encode the features using the Bag of Words (BoW) technique, and then classify the words using Support Vector Machines (SVMs) with a radial basis function kernel. We compare this feature-based approach with a color-based approach on the Caltech-256 benchmark as well as eight different terrain image sets (grass, gravel, pavement, sand, asphalt, floor, mud, and fine gravel). For terrain images, we observe up to 90% accuracy with the feature-based approach. Finally, this online terrain classification system is successfully applied to our small hexapod robot AMOS II. The output of the system providing terrain information is used as an input to its neural locomotion control to trigger an energy-efficient gait while traversing different terrains.}}
    Abstract: Legged robots need to be able to classify and recognize different terrains to adapt their gait accordingly. Recent works in terrain classification use different types of sensors (like stereovision, 3D laser range, and tactile sensors) and their combination. However, such sensor systems require more computing power, produce extra load to legged robots, and/or might be difficult to install on a small size legged robot. In this work, we present an online terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm which is robust to changes in illumination and view points. For this algorithm, we extract local features of terrains using either Scale Invariant Feature Transform (SIFT) or Speed Up Robust Feature (SURF). We encode the features using the Bag of Words (BoW) technique, and then classify the words using Support Vector Machines (SVMs) with a radial basis function kernel. We compare this feature-based approach with a color-based approach on the Caltech-256 benchmark as well as eight different terrain image sets (grass, gravel, pavement, sand, asphalt, floor, mud, and fine gravel). For terrain images, we observe up to 90% accuracy with the feature-based approach. Finally, this online terrain classification system is successfully applied to our small hexapod robot AMOS II. The output of the system providing terrain information is used as an input to its neural locomotion control to trigger an energy-efficient gait while traversing different terrains.
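
A skeleton of the feature-based pipeline described above (SIFT descriptors, k-means bag-of-words encoding, RBF-kernel SVM), written with OpenCV and scikit-learn. The helper load_terrain_images and the directory names are hypothetical placeholders for the user's own labelled terrain images; vocabulary size and SVM parameters are illustrative.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(gray_images):
    """128-D SIFT descriptors per image (possibly empty if no keypoints are found)."""
    sift = cv2.SIFT_create()
    descs = []
    for img in gray_images:
        _, d = sift.detectAndCompute(img, None)
        descs.append(d if d is not None else np.empty((0, 128), np.float32))
    return descs

def bow_histograms(per_image_descriptors, vocabulary):
    """Normalized bag-of-words histogram for every image."""
    hists = []
    for d in per_image_descriptors:
        hist = np.zeros(vocabulary.n_clusters)
        if len(d):
            np.add.at(hist, vocabulary.predict(d), 1.0)
            hist /= hist.sum()
        hists.append(hist)
    return np.array(hists)

# load_terrain_images is a hypothetical helper returning grayscale images and
# integer terrain labels (grass, gravel, pavement, sand, ...).
train_imgs, train_labels = load_terrain_images("terrain/train")
test_imgs, test_labels = load_terrain_images("terrain/test")

train_desc = sift_descriptors(train_imgs)
vocabulary = KMeans(n_clusters=100, n_init=10).fit(np.vstack(train_desc))
classifier = SVC(kernel="rbf", C=10.0, gamma="scale")
classifier.fit(bow_histograms(train_desc, vocabulary), train_labels)
print("terrain accuracy:",
      classifier.score(bow_histograms(sift_descriptors(test_imgs), vocabulary), test_labels))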
    Review:
    Aein, M J. and Aksoy, E E. and Tamosiunaite, M. and Papon, J. and Ude, A. and Wörgötter, F. (2013).
    Toward a library of manipulation actions based on Semantic Object-Action Relations. IEEE/RSJ International Conference on Intelligent Robots and Systems. DOI: 10.1109/IROS.2013.6697011.
    BibTeX:
    @inproceedings{aeinaksoytamosiunaite2013,
      author = {Aein, M J. and Aksoy, E E. and Tamosiunaite, M. and Papon, J. and Ude, A. and Wörgötter, F.},
      title = {Toward a library of manipulation actions based on Semantic Object-Action Relations},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems},
      year = {2013},
      doi = {10.1109/IROS.2013.6697011},
      abstract = {The goal of this study is to provide an architecture for a generic definition of robot manipulation actions. We emphasize that the representation of actions presented here is procedural. Thus, we will define the structural elements of our action representations as execution protocols. To achieve this, manipulations are defined using three levels. The top- level defines objects, their relations and the actions in an abstract and symbolic way. A mid-level sequencer, with which the action primitives are chained, is used to structure the actual action execution, which is performed via the bottom level. This (lowest) level collects data from sensors and communicates with the control system of the robot. This method enables robot manipulators to execute the same action in different situations i.e. on different objects with different positions and orientations. In addition, two methods of detecting action failure are provided which are necessary to handle faults in system. To demonstrate the effectiveness of the proposed framework, several different actions are performed on our robotic setup and results are shown. This way we are creating a library of human-like robot actions, which can be used by higher-level task planners to execute more complex tasks.}}
    Abstract: The goal of this study is to provide an architecture for a generic definition of robot manipulation actions. We emphasize that the representation of actions presented here is procedural. Thus, we will define the structural elements of our action representations as execution protocols. To achieve this, manipulations are defined using three levels. The top-level defines objects, their relations and the actions in an abstract and symbolic way. A mid-level sequencer, with which the action primitives are chained, is used to structure the actual action execution, which is performed via the bottom level. This (lowest) level collects data from sensors and communicates with the control system of the robot. This method enables robot manipulators to execute the same action in different situations i.e. on different objects with different positions and orientations. In addition, two methods of detecting action failure are provided which are necessary to handle faults in the system. To demonstrate the effectiveness of the proposed framework, several different actions are performed on our robotic setup and results are shown. This way we are creating a library of human-like robot actions, which can be used by higher-level task planners to execute more complex tasks.
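
To make the three-level structure concrete, a deliberately small schematic in code: a symbolic top level naming the action and the objects involved, a mid-level sequencer chaining primitives and reporting failure, and a low level standing in for the sensor and robot-controller interface. All class, method and primitive names are invented for illustration and are not taken from the paper's implementation.

from dataclasses import dataclass
from typing import Callable, List, Tuple

class LowLevel:
    """Stand-in for the sensor/robot-controller interface (bottom level)."""
    def execute(self, primitive: str, **params) -> bool:
        print(f"executing primitive '{primitive}' with {params}")
        return True                                    # pretend every primitive succeeds

@dataclass
class Sequencer:
    """Mid level: chains action primitives and detects failures."""
    low: LowLevel
    def run(self, primitives: List[Tuple[str, dict]]) -> bool:
        for name, params in primitives:
            if not self.low.execute(name, **params):
                print(f"failure detected in primitive '{name}'")
                return False
        return True

@dataclass
class ManipulationAction:
    """Top level: abstract, object-centred description expanded into primitives."""
    name: str
    objects: List[str]
    expand: Callable[[List[str]], List[Tuple[str, dict]]]
    def execute(self, sequencer: Sequencer) -> bool:
        return sequencer.run(self.expand(self.objects))

pick_and_place = ManipulationAction(
    name="pick_and_place",
    objects=["cup", "table"],
    expand=lambda objs: [("reach", {"target": objs[0]}), ("grasp", {"target": objs[0]}),
                         ("move", {"target": objs[1]}), ("release", {})],
)
pick_and_place.execute(Sequencer(LowLevel()))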
    Review:
    Xiong, X. and Wörgötter, F. and Manoonpong, P. (2013).
    A Simplified Variable Admittance Controller Based on a Virtual Agonist-Antagonist Mechanism for Robot Joint Control. Proc. Intl Conf. on Climbing and Walking Robots CLAWAR 2013, 281-288.
    BibTeX:
    @inproceedings{xiongwoergoettermanoonpong2013a,
      author = {Xiong, X. and Wörgötter, F. and Manoonpong, P.},
      title = {A Simplified Variable Admittance Controller Based on a Virtual Agonist-Antagonist Mechanism for Robot Joint Control},
      pages = {281-288},
      booktitle = {Proc. Intl Conf. on Climbing and Walking Robots CLAWAR 2013},
      year = {2013},
      month = {July},
      abstract = {Physiological studies suggest that the integration of neural circuits and biomechanics (e.g., muscles) is a key for animals to achieve robust and efficient locomotion over challenging surfaces. Inspired by these studies, we present a neuromechanical controller of a hexapod robot for walking on soft elastic and loose surfaces. It consists of a modular neural network (MNN) and virtual agonist-antagonist mechanisms (VAAM, i.e., a muscle model). The MNN coordinates 18 joints and generates basic locomotion while variable joint compliance for walking on different surfaces is achieved by the VAAM. The changeable compliance of each joint does not depend on physical compliant mechanisms or joint torque sensing. Instead, the compliance is altered by two internal parameters of the VAAM. The performance of the controller is tested on a physical hexapod robot for walking on soft elastic (e.g., sponge) and loose (e.g., gravel and snow) surfaces. The experimental results show that the controller enables the hexapod robot to achieve variably compliant leg behaviors, thereby leading to more energy-efficient locomotion on different surfaces. In addition, a finding of the experiments complies with the finding of physiological experiments on cockroach locomotion on soft elastic surfaces,}}
    Abstract: Physiological studies suggest that the integration of neural circuits and biomechanics (e.g., muscles) is a key for animals to achieve robust and efficient locomotion over challenging surfaces. Inspired by these studies, we present a neuromechanical controller of a hexapod robot for walking on soft elastic and loose surfaces. It consists of a modular neural network (MNN) and virtual agonist-antagonist mechanisms (VAAM, i.e., a muscle model). The MNN coordinates 18 joints and generates basic locomotion while variable joint compliance for walking on different surfaces is achieved by the VAAM. The changeable compliance of each joint does not depend on physical compliant mechanisms or joint torque sensing. Instead, the compliance is altered by two internal parameters of the VAAM. The performance of the controller is tested on a physical hexapod robot for walking on soft elastic (e.g., sponge) and loose (e.g., gravel and snow) surfaces. The experimental results show that the controller enables the hexapod robot to achieve variably compliant leg behaviors, thereby leading to more energy-efficient locomotion on different surfaces. In addition, a finding of the experiments complies with the finding of physiological experiments on cockroach locomotion on soft elastic surfaces.
    Review:
    Xiong, X. and Wörgötter, F. and Manoonpong, P. (2013).
    A Neuromechanical Controller of a Hexapod Robot for Walking on Sponge, Gravel and Snow Surfaces. Advances in Artificial Life. Proceedings of the 11th European Conference on Artificial Life ECAL, 989-996.
    BibTeX:
    @inproceedings{xiongwoergoettermanoonpong2013,
      author = {Xiong, X. and Wörgötter, F. and Manoonpong, P.},
      title = {A Neuromechanical Controller of a Hexapod Robot for Walking on Sponge, Gravel and Snow Surfaces},
      pages = {989-996},
      booktitle = {Advances in Artificial Life. Proceedings of the 11th European Conference on Artificial Life ECAL},
      year = {2013},
      editor = {Pietro Lio, Orazio Miglino, Giuseppe Nicosia, Stefano Nolfi and Mario Pavone},
      location = {Taormina (Italy)},
      month = {September 2-6},
      publisher = {MIT Press, Cambridge, MA},
      abstract = {Physiological studies suggest that the integration of neural circuits and biomechanics (e.g., muscles) is a key for animals to achieve robust and efficient locomotion over challenging surfaces. Inspired by these studies, we present a neuromechanical controller of a hexapod robot for walking on soft elastic and loose surfaces. It consists of a modular neural network (MNN) and virtual agonist-antagonist mechanisms (VAAM, i.e., a muscle model). The MNN coordinates 18 joints and generates basic locomotion while variable joint compliance for walking on different surfaces is achieved by the VAAM. The changeable compliance of each joint does not depend on physical compliant mechanisms or joint torque sensing. Instead, the compliance is altered by two internal parameters of the VAAM. The performance of the controller is tested on a physical hexapod robot for walking on soft elastic (e.g., sponge) and loose (e.g., gravel and snow) surfaces. The experimental results show that the controller enables the hexapod robot to achieve variably compliant leg behaviors, thereby leading to more energy-efficient locomotion on different surfaces. In addition, a finding of the experiments complies with the finding of physiological experiments on cockroach locomotion on soft elastic surfaces.}}
    Abstract: Physiological studies suggest that the integration of neural circuits and biomechanics (e.g., muscles) is a key for animals to achieve robust and efficient locomotion over challenging surfaces. Inspired by these studies, we present a neuromechanical controller of a hexapod robot for walking on soft elastic and loose surfaces. It consists of a modular neural network (MNN) and virtual agonist-antagonist mechanisms (VAAM, i.e., a muscle model). The MNN coordinates 18 joints and generates basic locomotion while variable joint compliance for walking on different surfaces is achieved by the VAAM. The changeable compliance of each joint does not depend on physical compliant mechanisms or joint torque sensing. Instead, the compliance is altered by two internal parameters of the VAAM. The performance of the controller is tested on a physical hexapod robot for walking on soft elastic (e.g., sponge) and loose (e.g., gravel and snow) surfaces. The experimental results show that the controller enables the hexapod robot to achieve variably compliant leg behaviors, thereby leading to more energy-efficient locomotion on different surfaces. In addition, a finding of the experiments complies with the finding of physiological experiments on cockroach locomotion on soft elastic surfaces.
    Review:
    Hesse, F. and Wörgötter, F. (2013).
    A goal-orientation framework for self-organizing control. Advances in Complex Systems, 1350002, 16, 02n03. DOI: 10.1142/S0219525913500021.
    BibTeX:
    @article{hessewoergoetter2013,
      author = {Hesse, F. and Wörgötter, F.},
      title = {A goal-orientation framework for self-organizing control},
      pages = {1350002},
      journal = {Advances in Complex Systems},
      year = {2013},
      volume= {16},
      number = {02n03},
      url = {http://www.worldscientific.com/doi/abs/10.1142/S0219525913500021},
      doi = {10.1142/S0219525913500021},
      abstract = {Self-organization, especially in the framework of embodiment in biologically inspired robots, allows the acquisition of behavioral primitives by autonomous robots themselves. However, it is an open question how self-organization of basic motor primitives and goal-orientation can be combined, which is a prerequisite for the usefulness of such systems. In the paper at hand we propose a goal-orientation framework allowing the combination of self-organization and goal-orientation for the control of autonomous robots in a mutually independent fashion. Self-organization based motor primitives are employed to achieve a given goal. This requires less initial knowledge about the properties of robot and environment and increases adaptivity of the overall system. A combination of self-organization and reward-based learning seems thus a promising route for the development of adaptive learning systems.}}
    Abstract: Self-organization, especially in the framework of embodiment in biologically inspired robots, allows the acquisition of behavioral primitives by autonomous robots themselves. However, it is an open question how self-organization of basic motor primitives and goal-orientation can be combined, which is a prerequisite for the usefulness of such systems. In the paper at hand we propose a goal-orientation framework allowing the combination of self-organization and goal-orientation for the control of autonomous robots in a mutually independent fashion. Self-organization based motor primitives are employed to achieve a given goal. This requires less initial knowledge about the properties of robot and environment and increases adaptivity of the overall system. A combination of self-organization and reward-based learning seems thus a promising route for the development of adaptive learning systems.
    Review:
    Kulvicius, T. and Markelic, I. and Tamosiunaite, M. and Wörgötter, F. (2013).
    Semantic image search for robotic applications. Proc. of 22nd Int. Workshop on Robotics in Alpe-Adria-Danube Region RAAD 2013, 1-8.
    BibTeX:
    @inproceedings{kulviciusmarkelictamosiunaite2013,
      author = {Kulvicius, T. and Markelic, I. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Semantic image search for robotic applications},
      pages = {1-8},
      booktitle = {Proc. of 22nd Int. Workshop on Robotics in Alpe-Adria-Danube Region RAAD 2013},
      year = {2013},
      location = {Portorož (Slovenia)},
      month = {September 11-13},
      abstract = {Generalization in robotics is one of the most important problems. New generalization approaches use internet databases in order to solve new tasks. Modern search engines can return a large amount of information according to a query within milliseconds. However, not all of the returned information is task relevant, partly due to the problem of polysemes. Here we specifically address the problem of object generalization by using image search. We suggest a bi-modal solution, combining visual and textual information, based on the observation that humans use additional linguistic cues to demarcate intended word meaning. We evaluate the quality of our approach by comparing it to human labelled data and find that, on average, our approach leads to improved results in comparison to Google searches, and that it can treat the problem of polysemes.}}
    Abstract: Generalization in robotics is one of the most important problems. New generalization approaches use internet databases in order to solve new tasks. Modern search engines can return a large amount of information according to a query within milliseconds. However, not all of the returned information is task relevant, partly due to the problem of polysemes. Here we specifically address the problem of object generalization by using image search. We suggest a bi-modal solution, combining visual and textual information, based on the observation that humans use additional linguistic cues to demarcate intended word meaning. We evaluate the quality of our approach by comparing it to human labelled data and find that, on average, our approach leads to improved results in comparison to Google searches, and that it can treat the problem of polysemes.
    Review:
    Reich, S. and Abramov, A. and Papon, J. and Wörgötter, F. and Dellen, B. (2013).
    A Novel Real-time Edge-Preserving Smoothing Filter. International Conference on Computer Vision Theory and Applications, 5 - 14.
    BibTeX:
    @inproceedings{reichabramovpapon2013,
      author = {Reich, S. and Abramov, A. and Papon, J. and Wörgötter, F. and Dellen, B.},
      title = {A Novel Real-time Edge-Preserving Smoothing Filter},
      pages = {5 - 14},
      booktitle = {International Conference on Computer Vision Theory and Applications},
      year = {2013},
      location = {Barcelona (Spain)},
      month = {February 21-24},
      url = {http://www.visapp.visigrapp.org/Abstracts/2013/VISAPP_2013_Abstracts.htm},
      abstract = {The segmentation of textured and noisy areas in images is a very challenging task due to the large variety of objects and materials in natural environments, which cannot be solved by a single similarity measure. In this paper, we address this problem by proposing a novel edge-preserving texture filter, which smudges the color values inside uniformly textured areas, thus making the processed image more workable for color-based image segmentation. Due to the highly parallel structure of the method, the implementation on a GPU runs in real-time, allowing us to process standard images within tens of milliseconds. By preprocessing images with this novel filter before applying a recent real-time color-based image segmentation method, we obtain significant improvements in performance for images from the Berkeley dataset, outperforming an alternative version using a standard bilateral filter for preprocessing. We further show that our combined approach leads to better segmentations in terms of a standard performance measure than graph-based and mean-shift segmentation for the Berkeley image dataset.}}
    Abstract: The segmentation of textured and noisy areas in images is a very challenging task due to the large variety of objects and materials in natural environments, which cannot be solved by a single similarity measure. In this paper, we address this problem by proposing a novel edge-preserving texture filter, which smudges the color values inside uniformly textured areas, thus making the processed image more workable for color-based image segmentation. Due to the highly parallel structure of the method, the implementation on a GPU runs in real-time, allowing us to process standard images within tens of milliseconds. By preprocessing images with this novel filter before applying a recent real-time color-based image segmentation method, we obtain significant improvements in performance for images from the Berkeley dataset, outperforming an alternative version using a standard bilateral filter for preprocessing. We further show that our combined approach leads to better segmentations in terms of a standard performance measure than graph-based and mean-shift segmentation for the Berkeley image dataset.
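
The abstract compares preprocessing with the proposed filter against a standard bilateral filter; for reference, that baseline is essentially a one-liner in OpenCV. The file names and filter parameters below are placeholders, and the paper's own GPU filter is not sketched here.

import cv2

# Standard edge-preserving bilateral filter used as the comparison baseline:
# it smooths within regions of similar colour while keeping strong edges intact.
img = cv2.imread("input.jpg")                       # placeholder path
smoothed = cv2.bilateralFilter(img, 9, 75, 75)      # neighbourhood, colour sigma, space sigma
cv2.imwrite("smoothed.jpg", smoothed)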
    Review:
    Papon, J. and Kulvicius, T. and Aksoy, E E. and Wörgötter, F. (2013).
    Point Cloud Video Object Segmentation using a Persistent Supervoxel World-Model. IEEE/RSJ International Conference on Intelligent Robots and Systems IROS, 3712-3718. DOI: 10.1109/IROS.2013.6696886.
    BibTeX:
    @inproceedings{paponkulviciusaksoy2013,
      author = {Papon, J. and Kulvicius, T. and Aksoy, E E. and Wörgötter, F.},
      title = {Point Cloud Video Object Segmentation using a Persistent Supervoxel World-Model},
      pages = {3712-3718},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems IROS},
      year = {2013},
      location = {Tokyo (Japan)},
      month = {November 3-8},
      doi = {10.1109/IROS.2013.6696886},
      abstract = {Robust visual tracking is an essential precursor to understanding and replicating human actions in robotic systems. In order to accurately evaluate the semantic meaning of a sequence of video frames, or to replicate an action contained therein, one must be able to coherently track and segment all observed agents and objects. This work proposes a novel online point cloud based algorithm which simultaneously tracks 6DoF pose and determines spatial extent of all entities in indoor scenarios. This is accomplished using a persistent supervoxel world-model which is updated, rather than replaced, as new frames of data arrive. Maintenance of a world model enables general object permanence, permitting successful tracking through full occlusions. Object models are tracked using a bank of independent adaptive particle filters which use a supervoxel observation model to give rough estimates of object state. These are united using a novel multi-model RANSAC-like approach, which seeks to minimize a global energy function associating world-model supervoxels to predicted states. We present results on a standard robotic assembly benchmark for two application scenarios - human trajectory imitation and semantic action understanding - demonstrating the usefulness of the tracking in intelligent robotic systems.}}
    Abstract: Robust visual tracking is an essential precursor to understanding and replicating human actions in robotic systems. In order to accurately evaluate the semantic meaning of a sequence of video frames, or to replicate an action contained therein, one must be able to coherently track and segment all observed agents and objects. This work proposes a novel online point cloud based algorithm which simultaneously tracks 6DoF pose and determines spatial extent of all entities in indoor scenarios. This is accomplished using a persistent supervoxel world-model which is updated, rather than replaced, as new frames of data arrive. Maintenance of a world model enables general object permanence, permitting successful tracking through full occlusions. Object models are tracked using a bank of independent adaptive particle filters which use a supervoxel observation model to give rough estimates of object state. These are united using a novel multi-model RANSAC-like approach, which seeks to minimize a global energy function associating world-model supervoxels to predicted states. We present results on a standard robotic assembly benchmark for two application scenarios - human trajectory imitation and semantic action understanding - demonstrating the usefulness of the tracking in intelligent robotic systems.
    Review:
    Dasgupta, S. and Wörgötter, F. and Morimoto, J. and Manoonpong, P. (2013).
    Neural Combinatorial Learning of Goal-directed Behavior with Reservoir Critic and Reward Modulated Hebbian Plasticity. IEEE International Conference on Systems, Man, and Cybernetics SMC, 993-1000. DOI: 10.1109/SMC.2013.174.
    BibTeX:
    @inproceedings{dasguptawoergoettermorimoto2013,
      author = {Dasgupta, S. and Wörgötter, F. and Morimoto, J. and Manoonpong, P.},
      title = {Neural Combinatorial Learning of Goal-directed Behavior with Reservoir Critic and Reward Modulated Hebbian Plasticity},
      pages = {993-1000},
      booktitle = {IEEE International Conference on Systems, Man, and Cybernetics SMC},
      year = {2013},
      location = {Manchester (UK)},
      month = {October 13-16},
      doi = {10.1109/SMC.2013.174},
      abstract = {Learning of goal-directed behaviors in biological systems is broadly based on associations between conditional and unconditional stimuli. This can be further classified as classical conditioning (correlation-based learning) and operant conditioning (reward-based learning). Although traditionally modeled as separate learning systems in artificial agents, numerous animal experiments point towards their co-operative role in behavioral learning. Based on this concept, the recently introduced framework of neural combinatorial learning combines the two systems where both the systems run in parallel to guide the overall learned behavior. Such a combinatorial learning demonstrates a faster and efficient learner. In this work, we further improve the framework by applying a reservoir computing network (RC) as an adaptive critic unit and reward modulated Hebbian plasticity. Using a mobile robot system for goal-directed behavior learning, we clearly demonstrate that the reservoir critic outperforms traditional radial basis function (RBF) critics in terms of stability of convergence and learning time. Furthermore the temporal memory in RC allows the system to learn partially observable markov decision process scenario, in contrast to a memory less RBF critic.}}
    Abstract: Learning of goal-directed behaviors in biological systems is broadly based on associations between conditional and unconditional stimuli. This can be further classified as classical conditioning (correlation-based learning) and operant conditioning (reward-based learning). Although traditionally modeled as separate learning systems in artificial agents, numerous animal experiments point towards their co-operative role in behavioral learning. Based on this concept, the recently introduced framework of neural combinatorial learning combines the two systems where both the systems run in parallel to guide the overall learned behavior. Such a combinatorial learning demonstrates a faster and efficient learner. In this work, we further improve the framework by applying a reservoir computing network (RC) as an adaptive critic unit and reward modulated Hebbian plasticity. Using a mobile robot system for goal-directed behavior learning, we clearly demonstrate that the reservoir critic outperforms traditional radial basis function (RBF) critics in terms of stability of convergence and learning time. Furthermore the temporal memory in RC allows the system to learn a partially observable Markov decision process scenario, in contrast to a memoryless RBF critic.
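
The reward-modulated Hebbian idea itself fits in a few lines: weight changes are the product of presynaptic activity, an exploratory perturbation of the output, and a reward term taken relative to a running baseline. The toy regression task, sizes and learning rates below are made up and unrelated to the paper's robot experiments.

import numpy as np

rng = np.random.default_rng(3)
n_pre, eta = 50, 0.02
w, reward_baseline = np.zeros(n_pre), 0.0

for trial in range(5000):
    x = rng.normal(0.0, 1.0, n_pre)            # presynaptic (e.g. reservoir) activity
    noise = rng.normal(0.0, 0.5)               # exploratory perturbation of the output
    y = w @ x + noise
    reward = -abs(y - x[0])                    # toy task: the output should track x[0]
    # Reward-modulated Hebbian update: correlate exploration with reward advantage.
    w += eta * (reward - reward_baseline) * noise * x
    reward_baseline += 0.05 * (reward - reward_baseline)

print("weight on the task-relevant input (drifts towards 1):", round(float(w[0]), 2))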
    Review:
    Aksoy, E E. and Tamosiunaite, M. and Vuga, R. and Ude, A. and Geib, C. and Steedman, M. and Wörgötter, F. (2013).
    Structural bootstrapping at the sensorimotor level for the fast acquisition of action knowledge for cognitive robots. IEEE International Conference on Development and Learning and Epigenetic Robotics ICDL-EPIROB, 1--8. DOI: 10.1109/DevLrn.2013.6652537.
    BibTeX:
    @inproceedings{aksoytamosiunaitevuga2013,
      author = {Aksoy, E E. and Tamosiunaite, M. and Vuga, R. and Ude, A. and Geib, C. and Steedman, M. and Wörgötter, F.},
      title = {Structural bootstrapping at the sensorimotor level for the fast acquisition of action knowledge for cognitive robots},
      pages = {1--8},
      booktitle = {IEEE International Conference on Development and Learning and Epigenetic Robotics ICDL-EPIROB},
      year = {2013},
      location = {Osaka (Japan)},
      month = {08},
      doi = {10.1109/DevLrn.2013.6652537},
      abstract = {Autonomous robots are faced with the problem of encoding complex actions (e.g. complete manipulations) in a generic and generalizable way. Recently we introduced the Semantic Event Chains (SECs) as a new representation which can be computed directly from a stream of 3D images and is based on changes in the relationships between objects involved in a manipulation. Here we show that the SEC framework can be extended (called the extended SEC) with action-related information and used to achieve and encode two important cognitive properties relevant for advanced autonomous robots: the extended SEC enables us to determine whether an action representation (1) needs to be newly created and stored in its entirety in the robot's memory or whether (2) one of the already known and memorized action representations just needs to be refined. In human cognition these two processes (1 and 2) are known as accommodation and assimilation. Thus, here we show that the extended SEC representation can be used to realize, for the first time in a robotic application, these processes originally defined by Piaget. This is of fundamental importance for any cognitive agent, as it allows observed actions to be categorized into new versus known ones, storing only the relevant aspects.}}
    Abstract: Autonomous robots are faced with the problem of encoding complex actions (e.g. complete manipulations) in a generic and generalizable way. Recently we introduced the Semantic Event Chains (SECs) as a new representation which can be computed directly from a stream of 3D images and is based on changes in the relationships between objects involved in a manipulation. Here we show that the SEC framework can be extended (called the extended SEC) with action-related information and used to achieve and encode two important cognitive properties relevant for advanced autonomous robots: the extended SEC enables us to determine whether an action representation (1) needs to be newly created and stored in its entirety in the robot's memory or whether (2) one of the already known and memorized action representations just needs to be refined. In human cognition these two processes (1 and 2) are known as accommodation and assimilation. Thus, here we show that the extended SEC representation can be used to realize, for the first time in a robotic application, these processes originally defined by Piaget. This is of fundamental importance for any cognitive agent, as it allows observed actions to be categorized into new versus known ones, storing only the relevant aspects.
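    The accommodation/assimilation decision described above can be pictured with a small Python sketch: an observed event chain, encoded here simply as a sequence of relation tuples, is compared against memorized chains and either refines an existing model or is stored as a new one. The encoding, the similarity measure and the threshold are illustrative assumptions, not the paper's extended-SEC machinery.

def sec_similarity(sec_a, sec_b):
    """Fraction of matching relation tuples between two event chains of possibly different
    length (a crude stand-in for the paper's SEC comparison)."""
    n = min(len(sec_a), len(sec_b))
    if n == 0:
        return 0.0
    matches = sum(1 for i in range(n) if sec_a[i] == sec_b[i])
    return matches / max(len(sec_a), len(sec_b))

def accommodate_or_assimilate(memory, observed_sec, threshold=0.8):
    """Assimilation: the observed chain is close enough to a memorized one, which is refined.
    Accommodation: nothing similar is known, so the chain is stored in its entirety."""
    best_name, best_sim = None, 0.0
    for name, known in memory.items():
        sim = sec_similarity(known, observed_sec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_name is not None and best_sim >= threshold:
        return ("assimilate", best_name)
    new_name = "action_%d" % len(memory)
    memory[new_name] = list(observed_sec)
    return ("accommodate", new_name)

# Toy relation sequences: 'N' = objects not touching, 'T' = touching.
memory = {"pick": [("N",), ("T",), ("T",), ("N",)]}
print(accommodate_or_assimilate(memory, [("N",), ("T",), ("T",), ("N",)]))   # -> assimilate
print(accommodate_or_assimilate(memory, [("T",), ("N",), ("T",)]))           # -> accommodate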
    Review:
    Papon, J. and Abramov, A. and Schoeler, M. and Wörgötter, F. (2013).
    Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds. IEEE Conference on Computer Vision and Pattern Recognition CVPR, 2027 - 2034. DOI: 10.1109/CVPR.2013.264.
    BibTeX:
    @inproceedings{paponabramovschoeler2013,
      author = {Papon, J. and Abramov, A. and Schoeler, M. and Wörgötter, F.},
      title = {Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds},
      pages = {2027 - 2034},
      booktitle = {IEEE Conference on Computer Vision and Pattern Recognition CVPR},
      year = {2013},
      location = {Portland, OR, USA},
      month = {06},
      doi = {10.1109/CVPR.2013.264},
      abstract = {Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as super pixels, is a widely used preprocessing step in segmentation algorithms. Super pixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that super pixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider three dimensional geometric relationships between observed data points which can be used to prevent super pixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations which are fully consistent with the spatial geometry of the scene in three dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries which might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large data set of human annotated RGB+D images demonstrate a significant reduction in occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.}}
    Abstract: Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as super pixels, is a widely used preprocessing step in segmentation algorithms. Super pixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that super pixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider three dimensional geometric relationships between observed data points which can be used to prevent super pixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations which are fully consistent with the spatial geometry of the scene in three dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries which might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large data set of human annotated RGB+D images demonstrate a significant reduction in occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.
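    The sketch below is not the proposed supervoxel algorithm; it only demonstrates, in Python, the single constraint the abstract emphasizes: clusters are grown exclusively through occupied, spatially adjacent voxels, so that no segment can bridge empty 3D space (unlike 2D superpixels computed in projective space). Colour and normal similarity and the seeding procedure, which the full method also uses, are omitted.

import numpy as np
from collections import deque

def connected_voxel_clusters(points, voxel_size=0.02):
    """Group 3D points into clusters that are connected through occupied, adjacent voxels.
    Only the spatial-connectivity ingredient of supervoxel methods, not the full algorithm."""
    keys = np.floor(points / voxel_size).astype(int)
    occupied = {}
    for i, k in enumerate(map(tuple, keys)):
        occupied.setdefault(k, []).append(i)

    labels = np.full(len(points), -1, dtype=int)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
               if (dx, dy, dz) != (0, 0, 0)]
    visited, current = set(), 0
    for seed in occupied:
        if seed in visited:
            continue
        queue = deque([seed])          # flood fill over occupied 26-neighbours only
        visited.add(seed)
        while queue:
            v = queue.popleft()
            labels[occupied[v]] = current
            for d in offsets:
                nb = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
                if nb in occupied and nb not in visited:
                    visited.add(nb)
                    queue.append(nb)
        current += 1
    return labels

# Two surfaces separated by empty space end up in different clusters, even though their
# projections onto the image plane would overlap.
pts = np.vstack([np.random.rand(100, 3) * [0.1, 0.1, 0.01],
                 np.random.rand(100, 3) * [0.1, 0.1, 0.01] + [0, 0, 0.5]])
print(np.unique(connected_voxel_clusters(pts)))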
    Review:
    Nachstedt, T. and Wörgötter, F. and Manoonpong, P. (2012).
    Adaptive Neural Oscillator with Synaptic Plasticity Enabling Fast Resonance Tuning. Artificial Neural Networks and Machine Learning ICANN, 451-458, 7552. DOI: 10.1007/978-3-642-33269-2_57.
    BibTeX:
    @incollection{nachstedtwoergoettermanoonpong2012,
      author = {Nachstedt, T. and Wörgötter, F. and Manoonpong, P.},
      title = {Adaptive Neural Oscillator with Synaptic Plasticity Enabling Fast Resonance Tuning},
      pages = {451-458},
      booktitle = {Artificial Neural Networks and Machine Learning ICANN},
      year = {2012},
      volume= {7552},
      editor = {Villa, Alessandro E. P. and Duch, Włodzisław and Érdi, Péter and Masulli, Francesco and Palm, Günther},
      publisher = {Springer Berlin Heidelberg},
      series = {Lecture Notes in Computer Science},
      url = {http://dx.doi.org/10.1007/978-3-642},
      doi = {10.1007/978-3-642-33269-2_57},
      abstract = {Rhythmic neural circuits play an important role in biological systems in particular in motion generation. They can be entrained by sensory feedback to induce rhythmic motion at a natural frequency, leading to energy-efficient motion. In addition, such circuits can even store the entrained rhythmical patterns through connection weights. Inspired by this, we introduce an adaptive discrete-time neural oscillator system with synaptic plasticity. The system consists of only three neurons and uses adaptive mechanisms based on frequency adaptation and Hebbian-type learning rules. As a result, it autonomously generates periodic patterns and can be entrained by sensory feedback to memorize a pattern. Using numerical simulations we show that this neural system possesses fast and precise convergence behaviour within a wide target frequency range. We use resonant tuning of a pendulum as a simple system for demonstrating possible applications of the adaptive oscillator network.}}
    Abstract: Rhythmic neural circuits play an important role in biological systems in particular in motion generation. They can be entrained by sensory feedback to induce rhythmic motion at a natural frequency, leading to energy-efficient motion. In addition, such circuits can even store the entrained rhythmical patterns through connection weights. Inspired by this, we introduce an adaptive discrete-time neural oscillator system with synaptic plasticity. The system consists of only three neurons and uses adaptive mechanisms based on frequency adaptation and Hebbian-type learning rules. As a result, it autonomously generates periodic patterns and can be entrained by sensory feedback to memorize a pattern. Using numerical simulations we show that this neural system possesses fast and precise convergence behaviour within a wide target frequency range. We use resonant tuning of a pendulum as a simple system for demonstrating possible applications of the adaptive oscillator network.
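    The three-neuron discrete-time network itself is not reproduced here; instead, the Python sketch below uses a closely related and well-known mechanism, an adaptive-frequency Hopf oscillator, to show the core idea the abstract describes: a periodic input pulls the oscillator's intrinsic frequency toward the input frequency, and the adapted value is retained in a state variable (here omega) after learning. All parameter values are arbitrary.

import math

def adaptive_hopf(f_target=1.3, f0=1.0, eps=1.0, mu=1.0, dt=0.001, T=500.0):
    """Adaptive-frequency Hopf oscillator entrained by a periodic teaching signal
    (a related textbook mechanism, not the paper's three-neuron network)."""
    x, y = 1.0, 0.0
    omega = 2.0 * math.pi * f0                               # intrinsic frequency, adapted online
    for step in range(int(T / dt)):
        F = math.cos(2.0 * math.pi * f_target * step * dt)   # sensory/teaching input
        r = math.sqrt(x * x + y * y) + 1e-9
        dx = (mu - r * r) * x - omega * y + eps * F
        dy = (mu - r * r) * y + omega * x
        domega = -eps * F * y / r                            # frequency adaptation rule
        x, y, omega = x + dt * dx, y + dt * dy, omega + dt * domega
    return omega / (2.0 * math.pi)

print("adapted frequency (Hz):", round(adaptive_hopf(), 3), "-- teaching signal: 1.3 Hz")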
    Review:
    Xiong, X. and Wörgötter, F. and Manoonpong, P. (2012).
    An Adaptive Neuromechanical Model for Muscle Impedance Modulations of Legged Robots. International Conference on Dynamic Walking 2012, 1-3.
    BibTeX:
    @conference{xiongwoergoettermanoonpong2012,
      author = {Xiong, X. and Wörgötter, F. and Manoonpong, P.},
      title = {An Adaptive Neuromechanical Model for Muscle Impedance Modulations of Legged Robots},
      pages = {1-3},
      booktitle = {International Conference on Dynamic Walking 2012},
      year = {2012},
      month = {05},
      url = {http://www.bccn-goettingen.de/Publications/articlereference.2012-06-13.4632442521},
      abstract = {Recently, an integrative view of neural circuits and mechanical components has been developed by neuroscientists and biomechanicians [11, 8]. This view argues that mechanical components cannot be isolated from neural circuits in the context of substantially perturbed locomotion. Note that mechanical passive walkers with no neural circuits only show stable locomotion on flat terrain or small slopes [2]. The argument of the integrative view has been supported by a cockroach experiment, which demonstrated that more modulations of neural activities are detected when cockroaches run over a highly complex terrain with larger obstacles (more than three times cockroach hip height). Normally, cockroaches are able to rely solely on passive mechanical properties for rapid stabilization when confronted with moderate obstacles (less than three times cockroach hip height) [10]. In addition, neural circuits and leg muscle activities tend to be entrained by mechanical feedback [11, 12, 14]. It is also well known that neural activities modulate muscle impedance, such as stiffness and damping [7, 9, 15]; such modulations can be utilized for stabilization in posture and locomotion [3].}}
    Abstract: Recently, an integrative view of neural circuits and mechanical components has been developed by neuroscientists and biomechanicians [11, 8]. This view argues that mechanical components cannot be isolated from neural circuits in the context of substantially perturbed locomotion. Note that mechanical passive walkers with no neural circuits only show stable locomotion on flat terrain or small slopes [2]. The argument of the integrative view has been supported by a cockroach experiment, which demonstrated that more modulations of neural activities are detected when cockroaches run over a highly complex terrain with larger obstacles (more than three times cockroach hip height). Normally, cockroaches are able to rely solely on passive mechanical properties for rapid stabilization when confronted with moderate obstacles (less than three times cockroach hip height) [10]. In addition, neural circuits and leg muscle activities tend to be entrained by mechanical feedback [11, 12, 14]. It is also well known that neural activities modulate muscle impedance, such as stiffness and damping [7, 9, 15]; such modulations can be utilized for stabilization in posture and locomotion [3].
    Review:
    Goldschmidt, D. and Hesse, F. and Wörgötter, F. and Manoonpong, P. (2012).
    Biologically inspired reactive climbing behavior of hexapod robots. IEEE/RSJ International Conference on Intelligent Robots and Systems IROS, 4632-4637. DOI: 10.1109/IROS.2012.6386135.
    BibTeX:
    @inproceedings{goldschmidthessewoergoetter2012,
      author = {Goldschmidt, D. and Hesse, F. and Wörgötter, F. and Manoonpong, P.},
      title = {Biologically inspired reactive climbing behavior of hexapod robots},
      pages = {4632-4637},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems IROS},
      year = {2012},
      doi = {10.1109/IROS.2012.6386135},
      abstract = {Insects, e.g. cockroaches and stick insects, have found fascinating solutions for the problem of locomotion, especially climbing over a large variety of obstacles. Research on behavioral neurobiology has identified key behavioral patterns of these animals (i.e., body flexion, center of mass elevation, and local leg reflexes) necessary for climbing. Inspired by this finding, we develop a neural control mechanism for hexapod robots which generates basic walking behavior and especially enables them to effectively perform reactive climbing behavior. The mechanism is composed of three main neural circuits: locomotion control, reactive backbone joint control, and local leg reflex control. It was developed and tested using a physical simulation environment, and was then successfully transferred to a physical six-legged walking machine, called AMOS II. Experimental results show that the controller allows the robot to overcome obstacles of various heights (e.g., 75% of its leg length, which are higher than those that other comparable legged robots have achieved so far). The generated climbing behavior is also comparable to the one observed in cockroaches.}}
    Abstract: Insects, e.g. cockroaches and stick insects, have found fascinating solutions for the problem of locomotion, especially climbing over a large variety of obstacles. Research on behavioral neurobiology has identified key behavioral patterns of these animals (i.e., body flexion, center of mass elevation, and local leg reflexes) necessary for climbing. Inspired by this finding, we develop a neural control mechanism for hexapod robots which generates basic walking behavior and especially enables them to effectively perform reactive climbing behavior. The mechanism is composed of three main neural circuits: locomotion control, reactive backbone joint control, and local leg reflex control. It was developed and tested using a physical simulation environment, and was then successfully transferred to a physical six-legged walking machine, called AMOS II. Experimental results show that the controller allows the robot to overcome obstacles of various heights (e.g., 75% of its leg length, which are higher than those that other comparable legged robots have achieved so far). The generated climbing behavior is also comparable to the one observed in cockroaches.
    Review:
    Papon, J. and Abramov, A. and Aksoy, E. and Wörgötter, F. (2012).
    A modular system architecture for online parallel vision pipelines. Applications of Computer Vision WACV, 2012 IEEE Workshop on, 361-368. DOI: 10.1109/WACV.2012.6163002.
    BibTeX:
    @inproceedings{paponabramovaksoy2012,
      author = {Papon, J. and Abramov, A. and Aksoy, E. and Wörgötter, F.},
      title = {A modular system architecture for online parallel vision pipelines},
      pages = {361-368},
      booktitle = {Applications of Computer Vision WACV, 2012 IEEE Workshop on},
      year = {2012},
      month = {jan},
      doi = {10.1109/WACV.2012.6163002},
      abstract = {We present an architecture for real-time, online vision systems which enables development and use of complex vision pipelines integrating any number of algorithms. Individual algorithms are implemented using modular plugins, allowing integration of independently developed algorithms and rapid testing of new vision pipeline configurations. The architecture exploits the parallelization of graphics processing units (GPUs) and multi-core systems to speed processing and achieve real-time performance. Additionally, the use of a global memory management system for frame buffering permits complex algorithmic flow (e.g. feedback loops) in online processing setups, while maintaining the benefits of threaded asynchronous operation of separate algorithms. To demonstrate the system, a typical real-time system setup is described which incorporates plugins for video and depth acquisition, GPU-based segmentation and optical flow, semantic graph generation, and online visualization of output. Performance numbers are shown which demonstrate the insignificant overhead cost of the architecture as well as speed-up over strictly CPU and single threaded implementations.}}
    Abstract: We present an architecture for real-time, online vision systems which enables development and use of complex vision pipelines integrating any number of algorithms. Individual algorithms are implemented using modular plugins, allowing integration of independently developed algorithms and rapid testing of new vision pipeline configurations. The architecture exploits the parallelization of graphics processing units (GPUs) and multi-core systems to speed processing and achieve real-time performance. Additionally, the use of a global memory management system for frame buffering permits complex algorithmic flow (e.g. feedback loops) in online processing setups, while maintaining the benefits of threaded asynchronous operation of separate algorithms. To demonstrate the system, a typical real-time system setup is described which incorporates plugins for video and depth acquisition, GPU-based segmentation and optical flow, semantic graph generation, and online visualization of output. Performance numbers are shown which demonstrate the insignificant overhead cost of the architecture as well as speed-up over strictly CPU and single threaded implementations.
    Review:
    Papon, J. and Abramov, A. and Wörgötter, F. (2012).
    Occlusion Handling in Video Segmentation via Predictive Feedback. Computer Vision ECCV 2012. Workshops and Demonstrations, 233-242, 7585. DOI: 10.1007/978-3-642-33885-4_24.
    BibTeX:
    @incollection{paponabramovwoergoetter2012,
      author = {Papon, J. and Abramov, A. and Wörgötter, F.},
      title = {Occlusion Handling in Video Segmentation via Predictive Feedback},
      pages = {233-242},
      booktitle = {Computer Vision ECCV 2012. Workshops and Demonstrations},
      year = {2012},
      volume= {7585},
      publisher = {Springer Berlin Heidelberg},
      series = {Lecture Notes in Computer Science},
      doi = {10.1007/978-3-642-33885-4_24},
      abstract = {We present a method for unsupervised on-line dense video segmentation which utilizes sequential Bayesian estimation techniques to resolve partial and full occlusions. Consistent labeling through occlusions is vital for applications which move from low-level object labels to high-level semantic knowledge - tasks such as activity recognition or robot control. The proposed method forms a predictive loop between segmentation and tracking, with tracking predictions used to seed the segmentation kernel, and segmentation results used to update tracked models. All segmented labels are tracked, without the use of a-priori models, using parallel color-histogram particle filters. Predictions are combined into a probabilistic representation of image labels, a realization of which is used to seed segmentation. A simulated annealing relaxation process allows the realization to converge to a minimal energy segmented image. Found segments are subsequently used to repopulate the particle sets, closing the loop. Results on the Cranfield benchmark sequence demonstrate that the prediction mechanism allows on-line segmentation to maintain temporally consistent labels through partial & full occlusions, significant appearance changes, and rapid erratic movements. Additionally, we show that tracking performance matches state-of-the art tracking methods on several challenging benchmark sequences.}}
    Abstract: We present a method for unsupervised on-line dense video segmentation which utilizes sequential Bayesian estimation techniques to resolve partial and full occlusions. Consistent labeling through occlusions is vital for applications which move from low-level object labels to high-level semantic knowledge - tasks such as activity recognition or robot control. The proposed method forms a predictive loop between segmentation and tracking, with tracking predictions used to seed the segmentation kernel, and segmentation results used to update tracked models. All segmented labels are tracked, without the use of a-priori models, using parallel color-histogram particle filters. Predictions are combined into a probabilistic representation of image labels, a realization of which is used to seed segmentation. A simulated annealing relaxation process allows the realization to converge to a minimal energy segmented image. Found segments are subsequently used to repopulate the particle sets, closing the loop. Results on the Cranfield benchmark sequence demonstrate that the prediction mechanism allows on-line segmentation to maintain temporally consistent labels through partial & full occlusions, significant appearance changes, and rapid erratic movements. Additionally, we show that tracking performance matches state-of-the art tracking methods on several challenging benchmark sequences.
    Review:
    Dasgupta, S. and Wörgötter, F. and Manoonpong, P. (2012).
    Information Theoretic Self-organised Adaptation in Reservoirs for Temporal Memory Tasks. Engineering Applications of Neural Networks, 31-40, 311. DOI: 10.1007/978-3-642-32909-8_4.
    BibTeX:
    @incollection{dasguptawoergoettermanoonpong2012,
      author = {Dasgupta, S. and Wörgötter, F. and Manoonpong, P.},
      title = {Information Theoretic Self-organised Adaptation in Reservoirs for Temporal Memory Tasks},
      pages = {31-40},
      booktitle = {Engineering Applications of Neural Networks},
      year = {2012},
      volume= {311},
      editor = {Jayne, Chrisina and Yue, Shigang and Iliadis, Lazaros},
      publisher = {Springer Berlin Heidelberg},
      series = {Communications in Computer and Information Science},
      url = {http://dx.doi.org/10.1007/978-3-642-32909-8_4},
      doi = {10.1007/978-3-642-32909-8_4},
      abstract = {Recurrent neural networks of the Reservoir Computing (RC) type have been found useful in various time-series processing tasks with inherent non-linearity and requirements of temporal memory. Here with the aim to obtain extended temporal memory in generic delayed response tasks, we combine a generalised intrinsic plasticity mechanism with an information storage based neuron leak adaptation rule in a self-organised manner. This results in adaptation of neuron local memory in terms of leakage along with inherent homeostatic stability. Experimental results on two benchmark tasks confirm the extended performance of this system as compared to a static RC and RC with only intrinsic plasticity. Furthermore, we demonstrate the ability of the system to solve long temporal memory tasks via a simulated T-shaped maze navigation scenario.}}
    Abstract: Recurrent neural networks of the Reservoir Computing (RC) type have been found useful in various time-series processing tasks with inherent non-linearity and requirements of temporal memory. Here with the aim to obtain extended temporal memory in generic delayed response tasks, we combine a generalised intrinsic plasticity mechanism with an information storage based neuron leak adaptation rule in a self-organised manner. This results in adaptation of neuron local memory in terms of leakage along with inherent homeostatic stability. Experimental results on two benchmark tasks confirm the extended performance of this system as compared to a static RC and RC with only intrinsic plasticity. Furthermore, we demonstrate the ability of the system to solve long temporal memory tasks via a simulated T-shaped maze navigation scenario.
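    To make the role of the adapted quantities concrete, here is a minimal Python/numpy reservoir (echo state network) update with per-neuron leak rates, the parameters whose self-organised adaptation the paper proposes. The intrinsic-plasticity and information-storage-based leak-adaptation rules themselves are not reproduced; network sizes and constants are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
N, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (N, n_in))
W = rng.standard_normal((N, N)) * 0.1
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1 (echo state property)
leak = np.full(N, 0.3)                            # per-neuron leak rates; the paper adapts these online

def reservoir_step(x, u, leak):
    """Leaky-integrator reservoir update: small leak = long local memory, large leak = fast neuron."""
    pre = np.tanh(W @ x + W_in @ u)
    return (1.0 - leak) * x + leak * pre

x = np.zeros(N)
for t in range(200):
    u = np.array([np.sin(0.1 * t)])
    x = reservoir_step(x, u, leak)
print("reservoir state norm:", round(float(np.linalg.norm(x)), 3))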
    Review:
    Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Wörgötter, F. (2012).
    Analysis of synaptic scaling in combination with Hebbian plasticity in several simple networks. Front Comput. Neurosci, 36, 6. DOI: 10.3389/fncom.2012.00036.
    BibTeX:
    @article{tetzlaffkolodziejskitimme2012,
      author = {Tetzlaff, C. and Kolodziejski, C. and Timme, M. and Wörgötter, F.},
      title = {Analysis of synaptic scaling in combination with Hebbian plasticity in several simple networks},
      pages = {36},
      journal = {Front Comput. Neurosci},
      year = {2012},
      volume= {6},
      doi = {10.3389/fncom.2012.00036},
      abstract = {Conventional synaptic plasticity in combination with synaptic scaling is a biologically plausible plasticity rule that guides the development of synapses toward stability. Here we analyze the development of synaptic connections and the resulting activity patterns in different feed-forward and recurrent neural networks, with plasticity and scaling. We show under which constraints an external input given to a feed-forward network forms an input trace similar to a cell assembly (Hebb, 1949) by enhancing synaptic weights to larger stable values as compared to the rest of the network. For instance, a weak input creates a less strong representation in the network than a strong input, which produces a trace along large parts of the network. These processes are strongly influenced by the underlying connectivity. For example, when embedding recurrent structures (excitatory rings, etc.) into a feed-forward network, the input trace is extended into more distant layers, while inhibition shortens it. These findings provide a better understanding of the dynamics of generic network structures where plasticity is combined with scaling. This makes it also possible to use this rule for constructing an artificial network with certain desired storage properties.}}
    Abstract: Conventional synaptic plasticity in combination with synaptic scaling is a biologically plausible plasticity rule that guides the development of synapses toward stability. Here we analyze the development of synaptic connections and the resulting activity patterns in different feed-forward and recurrent neural networks, with plasticity and scaling. We show under which constraints an external input given to a feed-forward network forms an input trace similar to a cell assembly (Hebb, 1949) by enhancing synaptic weights to larger stable values as compared to the rest of the network. For instance, a weak input creates a less strong representation in the network than a strong input, which produces a trace along large parts of the network. These processes are strongly influenced by the underlying connectivity. For example, when embedding recurrent structures (excitatory rings, etc.) into a feed-forward network, the input trace is extended into more distant layers, while inhibition shortens it. These findings provide a better understanding of the dynamics of generic network structures where plasticity is combined with scaling. This makes it also possible to use this rule for constructing an artificial network with certain desired storage properties.
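    The interplay the abstract analyses can be caricatured in a few lines of Python: a Hebbian term that on its own would grow without bound, combined with a scaling term that reverses sign once the output rate exceeds a target and therefore stabilizes the weight. The specific functional form below is a generic stand-in, not the exact rule analysed in the paper.

def plasticity_step(w, u, v, v_target=0.5, mu=0.01, gamma=0.005, dt=0.1):
    """One Euler step of Hebbian growth plus activity-dependent synaptic scaling:
    dw/dt = mu*u*v + gamma*(v_target - v)*w.
    The Hebbian term alone grows without bound; the scaling term becomes negative
    once v exceeds v_target and stabilizes the weight at a finite value.
    (Generic illustrative form; the paper analyses a specific rule.)"""
    return w + dt * (mu * u * v + gamma * (v_target - v) * w)

# Toy feed-forward unit: one input with rate u drives the output v = w * u.
w, u = 0.1, 1.0
for _ in range(20000):
    v = w * u
    w = plasticity_step(w, u, v)
print("weight settles near", round(w, 3), "-> output rate", round(w * u, 3))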
    Review:
    Tetzlaff, C. and Kolodziejski, C. and Markelic, I. and Wörgötter, F. (2012).
    Time scales of memory, learning, and plasticity. Biol. Cybern, 715-726, 106, 11. DOI: 10.1007/s00422-012-0529.
    BibTeX:
    @article{tetzlaffkolodziejskimarkelic2012,
      author = {Tetzlaff, C. and Kolodziejski, C. and Markelic, I. and Wörgötter, F.},
      title = {Time scales of memory, learning, and plasticity},
      pages = {715-726},
      journal = {Biol. Cybern},
      year = {2012},
      volume= {106},
      number = {11},
      url = {http://dx.doi.org/10.1007/s00422-012-0529},
      doi = {10.1007/s00422-012-0529},
      abstract = {If we stored every bit of input, the storage capacity of our nervous system would be reached after only about 10 days. The nervous system relies on at least two mechanisms that counteract this capacity limit: compression and forgetting. But the latter mechanism needs to know how long an entity should be stored: some memories are relevant only for the next few minutes, some are important even after the passage of several years. Psychology and physiology have found and described many different memory mechanisms, and these mechanisms indeed use different time scales. In this prospect we review these mechanisms with respect to their time scale and propose relations between mechanisms in learning and memory and their underlying physiological basis}}
    Abstract: If we stored every bit of input, the storage capacity of our nervous system would be reached after only about 10 days. The nervous system relies on at least two mechanisms that counteract this capacity limit: compression and forgetting. But the latter mechanism needs to know how long an entity should be stored: some memories are relevant only for the next few minutes, some are important even after the passage of several years. Psychology and physiology have found and described many different memory mechanisms, and these mechanisms indeed use different time scales. In this prospect we review these mechanisms with respect to their time scale and propose relations between mechanisms in learning and memory and their underlying physiological basis
    Review:
    Ren, G. and Chen, W. and Kolodziejski, C. and Wörgötter, F. and Dasgupta, S. and Manoonpong, P. (2012).
    Multiple Chaotic Central Pattern Generators for Locomotion Generation and Leg Damage Compensation in a Hexapod Robot. IEEE/RSJ International Conference on Intelligent Robots and Systems IROS. DOI: 10.1109/IROS.2012.6385573.
    BibTeX:
    @inproceedings{renchenkolodziejski2012,
      author = {Ren, G. and Chen, W. and Kolodziejski, C. and Wörgötter, F. and Dasgupta, S. and Manoonpong, P.},
      title = {Multiple Chaotic Central Pattern Generators for Locomotion Generation and Leg Damage Compensation in a Hexapod Robot},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems IROS},
      year = {2012},
      doi = {10.1109/IROS.2012.6385573},
      abstract = {In chaos control, an originally chaotic system is modified so that periodic dynamics arise. One application of this is to use the periodic dynamics of a single chaotic system as walking patterns in legged robots. In our previous work we applied such a controlled chaotic system as a central pattern generator (CPG) to generate different gait patterns of our hexapod robot AMOSII. However, if one or more legs break, its control fails. Specifically, in the scenario presented here, its movement permanently deviates from a desired trajectory. This is in contrast to the movement of real insects as they can compensate for body damages, for instance, by adjusting the remaining legs frequency. To achieve this for our hexapod robot, we extend the system from one chaotic system serving as a single CPG to multiple chaotic systems, performing as multiple CPGs. Without damage, the chaotic systems synchronize and their dynamics is identical (similar to a single CPG). With damage, they can lose synchronization leading to independent dynamics. In both simulations and real experiments, we can tune the oscillation frequency of every CPG manually so that the controller can indeed compensate for leg damage. In comparison to the trajectory of the robot controlled by only a single CPG, the trajectory produced by multiple chaotic CPG controllers resembles the original trajectory by far better. Thus, multiple chaotic systems that synchronize for normal behavior but can stay desynchronized in other circumstances are an effective way to control complex behaviors where, for instance, different body parts have to do independent movements like after leg damage.}}
    Abstract: In chaos control, an originally chaotic system is modified so that periodic dynamics arise. One application of this is to use the periodic dynamics of a single chaotic system as walking patterns in legged robots. In our previous work we applied such a controlled chaotic system as a central pattern generator (CPG) to generate different gait patterns of our hexapod robot AMOSII. However, if one or more legs break, its control fails. Specifically, in the scenario presented here, its movement permanently deviates from a desired trajectory. This is in contrast to the movement of real insects as they can compensate for body damages, for instance, by adjusting the remaining legs frequency. To achieve this for our hexapod robot, we extend the system from one chaotic system serving as a single CPG to multiple chaotic systems, performing as multiple CPGs. Without damage, the chaotic systems synchronize and their dynamics is identical (similar to a single CPG). With damage, they can lose synchronization leading to independent dynamics. In both simulations and real experiments, we can tune the oscillation frequency of every CPG manually so that the controller can indeed compensate for leg damage. In comparison to the trajectory of the robot controlled by only a single CPG, the trajectory produced by multiple chaotic CPG controllers resembles the original trajectory by far better. Thus, multiple chaotic systems that synchronize for normal behavior but can stay desynchronized in other circumstances are an effective way to control complex behaviors where, for instance, different body parts have to do independent movements like after leg damage.
    Review:
    Abramov, A. and Papon, J. and Pauwels, K. and Wörgötter, F. and Dellen, B. (2012).
    Depth-supported real-time video segmentation with the Kinect. IEEE workshop on the Applications of Computer Vision WACV. DOI: 10.1109/WACV.2012.6163000.
    BibTeX:
    @inproceedings{abramovpaponpauwels2012,
      author = {Abramov, A. and Papon, J. and Pauwels, K. and Wörgötter, F. and Dellen, B.},
      title = {Depth-supported real-time video segmentation with the Kinect},
      booktitle = {IEEE workshop on the Applications of Computer Vision WACV},
      year = {2012},
      doi = {10.1109/WACV.2012.6163000},
      abstract = {We present a real-time technique for the spatiotemporal segmentation of color/depth movies. Images are segmented using a parallel Metropolis algorithm implemented on a GPU utilizing both color and depth information, acquired with the Microsoft Kinect. Segments represent the equilibrium states of a Potts model, where tracking of segments is achieved by warping obtained segment labels to the next frame using real-time optical flow, which reduces the number of iterations required for the Metropolis method to encounter the new equilibrium state. By including depth information into the framework, true objects boundaries can be found more easily, improving also the temporal coherency of the method. The algorithm has been tested for videos of medium resolutions showing human manipulations of objects. The framework provides an inexpensive visual front end for visual preprocessing of videos in industrial settings and robot labs which can potentially be used in various applications.}}
    Abstract: We present a real-time technique for the spatiotemporal segmentation of color/depth movies. Images are segmented using a parallel Metropolis algorithm implemented on a GPU utilizing both color and depth information, acquired with the Microsoft Kinect. Segments represent the equilibrium states of a Potts model, where tracking of segments is achieved by warping obtained segment labels to the next frame using real-time optical flow, which reduces the number of iterations required for the Metropolis method to encounter the new equilibrium state. By including depth information into the framework, true objects boundaries can be found more easily, improving also the temporal coherency of the method. The algorithm has been tested for videos of medium resolutions showing human manipulations of objects. The framework provides an inexpensive visual front end for visual preprocessing of videos in industrial settings and robot labs which can potentially be used in various applications.
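    For readers unfamiliar with the segmentation core, the Python sketch below runs a plain single-threaded Metropolis sampler on a toy Potts-style energy (a grey-value data term plus a label-smoothness term). It is only meant to illustrate what "segments as equilibrium states of a Potts model" means; the paper's GPU implementation, its colour+depth data term and the optical-flow label warping are not reproduced.

import numpy as np

def metropolis_potts(image, n_labels=3, beta=2.0, iters=20, temp=0.5, seed=0):
    """Toy Metropolis sampler for a Potts labeling of a grey image.
    Energy = sum_p (I_p - mu_label)^2 + beta * (number of unequal neighbour pairs)."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    mu = np.linspace(image.min(), image.max(), n_labels)   # fixed label "colours"
    labels = rng.integers(0, n_labels, (H, W))

    def local_energy(yy, xx, lab):
        e = (image[yy, xx] - mu[lab]) ** 2
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = yy + dy, xx + dx
            if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != lab:
                e += beta
        return e

    for _ in range(iters):
        for yy in range(H):
            for xx in range(W):
                prop = rng.integers(0, n_labels)
                dE = local_energy(yy, xx, prop) - local_energy(yy, xx, labels[yy, xx])
                if dE < 0 or rng.random() < np.exp(-dE / temp):
                    labels[yy, xx] = prop
    return labels

# Tiny synthetic image: two flat regions plus noise settle into two coherent labels.
img = np.hstack([np.zeros((20, 10)), np.ones((20, 10))]) + 0.1 * np.random.randn(20, 20)
print(metropolis_potts(img, n_labels=2)[::5, ::5])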
    Review:
    Liu, G. and Wörgötter, F. and Markelic, I. (2012).
    Stochastic Lane Shape Estimation Using Local Image Descriptors. IEEE Transactions on Intelligent Transportation Systems, 13 - 21. DOI: 10.1109/TITS.2012.2205146.
    BibTeX:
    @article{liuwoergoettermarkelic2012,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Stochastic Lane Shape Estimation Using Local Image Descriptors},
      pages = {13 - 21},
      journal = {IEEE Transactions on Intelligent Transportation Systems},
      year = {2012},
      month = {07},
      doi = {10.1109/TITS.2012.2205146},
      abstract = {In this paper, we present a novel measurement model for particle-filter-based lane shape estimation. Recently, the particle filter has been widely used to solve lane detection and tracking problems, due to its simplicity, robustness, and efficiency. The key part of the particle filter is the measurement model, which describes how well a generated hypothesis (a particle) fits current visual cues in the image. Previous methods often simply combine multiple visual cues in a likelihood function without considering the uncertainties of local visual cues and the accurate probability relationship between visual cues and the lane model. In contrast, this paper derives a new measurement model by utilizing multiple kernel density to precisely estimate this probability relationship. The uncertainties of local visual cues are considered and modeled by Gaussian kernels. Specifically, we use a linear-parabolic model to describe the shape of lane boundaries on a top-view image and a partitioned particle filter (PPF), integrating it with our novel measurement model to estimate lane shapes in consecutive frames. Finally, the robustness of the proposed algorithm with the new measurement model is demonstrated on the DRIVSCO data sets.}}
    Abstract: In this paper, we present a novel measurement model for particle-filter-based lane shape estimation. Recently, the particle filter has been widely used to solve lane detection and tracking problems, due to its simplicity, robustness, and efficiency. The key part of the particle filter is the measurement model, which describes how well a generated hypothesis (a particle) fits current visual cues in the image. Previous methods often simply combine multiple visual cues in a likelihood function without considering the uncertainties of local visual cues and the accurate probability relationship between visual cues and the lane model. In contrast, this paper derives a new measurement model by utilizing multiple kernel density to precisely estimate this probability relationship. The uncertainties of local visual cues are considered and modeled by Gaussian kernels. Specifically, we use a linear-parabolic model to describe the shape of lane boundaries on a top-view image and a partitioned particle filter (PPF), integrating it with our novel measurement model to estimate lane shapes in consecutive frames. Finally, the robustness of the proposed algorithm with the new measurement model is demonstrated on the DRIVSCO data sets.
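    The measurement-model idea can be sketched as follows in Python: each particle is a candidate set of lane-shape parameters, and its weight is obtained by evaluating, row by row, a sum of Gaussian kernels centred on the observed lane-marking positions at the particle's predicted position. The quadratic lane model, the clutter, and all constants below are simplifying assumptions, not the paper's exact formulation.

import numpy as np

def lane_points(params, ys):
    """Simplified lane boundary x = a + b*y + c*y^2 on a top-view grid
    (stand-in for the paper's linear-parabolic model)."""
    a, b, c = params
    return a + b * ys + c * ys ** 2

def kernel_likelihood(params, obs_xy, ys, sigma=2.0):
    """Kernel-density measurement model: for each row y, sum Gaussian kernels centred on the
    observed lane-marking x positions and evaluate them at the particle's predicted x."""
    xs_pred = lane_points(params, ys)
    logl = 0.0
    for y, x_pred in zip(ys, xs_pred):
        obs_x = obs_xy.get(int(y), [])
        if not obs_x:
            continue                       # no visual cue in this row
        k = np.exp(-0.5 * ((np.array(obs_x) - x_pred) / sigma) ** 2)
        logl += np.log(k.sum() / len(obs_x) + 1e-12)
    return np.exp(logl)

# Toy example: noisy detections of a gently curving lane, plus one clutter point per row.
ys = np.arange(0, 40, 2)
true = (10.0, 0.3, 0.01)
obs = {int(y): [float(lane_points(true, np.array([y]))[0] + np.random.randn()),
                float(np.random.uniform(0, 60))] for y in ys}
particles = [true, (5.0, 0.0, 0.0), (20.0, -0.2, 0.0)]
weights = np.array([kernel_likelihood(p, obs, ys) for p in particles])
print("normalized particle weights:", np.round(weights / weights.sum(), 3))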
    Review:
    Liu, G. and Wörgötter, F. and Markelic, I. (2012).
    Square-Root Sigma-Point Information Filtering. IEEE Transactions on Automatic Control. DOI: 10.1109/TAC.2012.2193708.
    BibTeX:
    @article{liuwoergoettermarkelic2012a,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Square-Root Sigma-Point Information Filtering},
      journal = {IEEE Transactions on Automatic Control},
      year = {2012},
      doi = {10.1109/TAC.2012.2193708},
      abstract = {The sigma-point information filters employ a number of deterministic sigma-points to calculate the mean and covariance of a random variable which undergoes a nonlinear transformation. These sigma-points can be generated by the unscented transform or Stirling's interpolation, which corresponds to the unscented information filter (UIF) and the central difference information filter (CDIF), respectively. In this technical note, we develop the square-root extensions of UIF and CDIF, which have better numerical properties than the original versions, e.g., improved numerical accuracy, double order precision and preservation of symmetry. We also show that the square-root unscented information filter (SRUIF) might lose the positive-definiteness due to the negative Cholesky update, whereas the square-root central difference information filter (SRCDIF) has only positive Cholesky updates. Therefore, the SRCDIF is preferable to the SRUIF concerning numerical stability.}}
    Abstract: The sigma-point information filters employ a number of deterministic sigma-points to calculate the mean and covariance of a random variable which undergoes a nonlinear transformation. These sigma-points can be generated by the unscented transform or Stirling's interpolation, which corresponds to the unscented information filter (UIF) and the central difference information filter (CDIF), respectively. In this technical note, we develop the square-root extensions of UIF and CDIF, which have better numerical properties than the original versions, e.g., improved numerical accuracy, double order precision and preservation of symmetry. We also show that the square-root unscented information filter (SRUIF) might lose the positive-definiteness due to the negative Cholesky update, whereas the square-root central difference information filter (SRCDIF) has only positive Cholesky updates. Therefore, the SRCDIF is preferable to the SRUIF concerning numerical stability.
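    As background, the Python sketch below shows a generic unscented transform in which the sigma points are spread using a Cholesky (square-root) factor of the covariance, which is the numerical idea behind square-root sigma-point filters. The full information-filter recursions and the QR/Cholesky-update bookkeeping of the SRUIF and SRCDIF are beyond this sketch; the weights and scaling parameters follow the common textbook convention.

import numpy as np

def unscented_transform(f, mean, cov, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate (mean, cov) through a nonlinear function f using 2n+1 sigma points drawn
    from a Cholesky (square-root) factor of the covariance. Textbook scaled UT, not the
    SRUIF/SRCDIF recursions themselves."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)            # square-root factor spreads the points
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # rows of S.T are the columns of S
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigma])                # propagate each sigma point
    mean_y = wm @ Y
    dY = Y - mean_y
    cov_y = (wc[:, None] * dY).T @ dY
    return mean_y, cov_y

# Example: push a Gaussian (range, bearing) estimate through polar-to-Cartesian conversion.
f = lambda s: np.array([s[0] * np.cos(s[1]), s[0] * np.sin(s[1])])
m, P = unscented_transform(f, np.array([5.0, 0.5]), np.diag([0.1, 0.01]))
print("transformed mean:", np.round(m, 3))
print("transformed covariance:", np.round(P, 4))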
    Review:
    Liu, G. and Wörgötter, F. and Markelic, I. (2012).
    The Square-Root Unscented Information Filter for State Estimation and Sensor Fusion. International Conference on Sensor Networks SENSORNETS.
    BibTeX:
    @inproceedings{liuwoergoettermarkelic2012b,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {The Square-Root Unscented Information Filter for State Estimation and Sensor Fusion},
      booktitle = {International Conference on Sensor Networks SENSORNETS},
      year = {2012},
      abstract = {This paper presents a new recursive Bayesian estimation method, which is the square-root unscented information filter (SRUIF). The unscented information filter (UIF) has been introduced recently for nonlinear system estimation and sensor fusion. In the UIF framework, a number of sigma points are sampled from the probability distribution of the prior state by the unscented transform and then propagated through the nonlinear dynamic function and measurement function. The new state is estimated from the propagated sigma points. In this way, the UIF can achieve higher estimation accuracies and faster convergence rates than the extended information filter (EIF). As the extension of the original UIF, we propose to use the square-root of the covariance in the SRUIF instead of the full covariance in the UIF for estimation. The new SRUIF has better numerical properties than the original UIF, e.g., improved numerical accuracy, double order precision and preservation of symmetry.}}
    Abstract: This paper presents a new recursive Bayesian estimation method, which is the square-root unscented information filter (SRUIF). The unscented information filter (UIF) has been introduced recently for nonlinear system estimation and sensor fusion. In the UIF framework, a number of sigma points are sampled from the probability distribution of the prior state by the unscented transform and then propagated through the nonlinear dynamic function and measurement function. The new state is estimated from the propagated sigma points. In this way, the UIF can achieve higher estimation accuracies and faster convergence rates than the extended information filter (EIF). As the extension of the original UIF, we propose to use the square-root of the covariance in the SRUIF instead of the full covariance in the UIF for estimation. The new SRUIF has better numerical properties than the original UIF, e.g., improved numerical accuracy, double order precision and preservation of symmetry.
    Review:
    Wörgötter, F. and Aksoy, E. E. and Krüger, N. and Piater, J. and Ude, A. and Tamosiunaite, M. (2013).
    A Simple Ontology of Manipulation Actions based on Hand-Object Relations. IEEE Transactions on Autonomous Mental Development, 117 - 134, 05, 02. DOI: 10.1109/TAMD.2012.2232291.
    BibTeX:
    @article{woergoetteraksoykrueger2013,
      author = {Wörgötter, F. and Aksoy, E. E. and Krüger, N. and Piater, J. and Ude, A. and Tamosiunaite, M.},
      title = {A Simple Ontology of Manipulation Actions based on Hand-Object Relations},
      pages = {117 - 134},
      journal = {IEEE Transactions on Autonomous Mental Development},
      year = {2013},
      volume= {05},
      number = {02},
      month = {06},
      doi = {10.1109/TAMD.2012.2232291},
      abstract = {Humans can perform a multitude of different actions with their hands (manipulations). In spite of this, so far there have been only a few attempts to represent manipulation types with the aim of understanding the underlying principles. Here we first discuss how manipulation actions are structured in space and time. For this we use as temporal anchor points those moments where two objects (or hand and object) touch or un-touch each other during a manipulation. We show that in this way one can define a relatively small tree-like manipulation ontology. We find fewer than 30 fundamental manipulations. The temporal anchors also provide us with information about when to pay attention to additional important information, for example when to consider trajectory shapes and relative poses between objects. As a consequence, a highly condensed representation emerges by which different manipulations can be recognized and encoded. Examples of manipulation recognition and execution by a robot based on this representation are given at the end of this study.}}
    Abstract: Humans can perform a multitude of different actions with their hands (manipulations). In spite of this, so far there have been only a few attempts to represent manipulation types with the aim of understanding the underlying principles. Here we first discuss how manipulation actions are structured in space and time. For this we use as temporal anchor points those moments where two objects (or hand and object) touch or un-touch each other during a manipulation. We show that in this way one can define a relatively small tree-like manipulation ontology. We find fewer than 30 fundamental manipulations. The temporal anchors also provide us with information about when to pay attention to additional important information, for example when to consider trajectory shapes and relative poses between objects. As a consequence, a highly condensed representation emerges by which different manipulations can be recognized and encoded. Examples of manipulation recognition and execution by a robot based on this representation are given at the end of this study.
    Review:
    Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F. (2012).
    Joining Movement Sequences: Modified Dynamic Movement Primitives for Robotics Applications Exemplified on Handwriting. IEEE Transactions on Robotics, 145 - 157, 28, 1. DOI: 10.1109/TRO.2011.2163863.
    BibTeX:
    @article{kulviciusningtamosiunaite2012,
      author = {Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Joining Movement Sequences: Modified Dynamic Movement Primitives for Robotics Applications Exemplified on Handwriting},
      pages = {145 - 157},
      journal = {IEEE Transactions on Robotics},
      year = {2012},
      volume= {28},
      number = {1},
      doi = {10.1109/TRO.2011.2163863},
      abstract = {The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories in a dynamic way, is an important problem in robotics. This paper presents a novel joining method based on a modification of the original dynamic movement primitive (DMP) formulation. The new method can reproduce the target trajectory with high accuracy regarding both the position and the velocity profile, and produces smooth and natural transitions in position as well as velocity space. The properties of the method are demonstrated by applying it to simulated handwriting generation, also shown on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for joining movement sequences, which has high potential for all robotics applications where trajectory joining is required.}}
    Abstract: The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories in a dynamic way, is an important problem in robotics. This paper presents a novel joining method based on a modification of the original dynamic movement primitive (DMP) formulation. The new method can reproduce the target trajectory with high accuracy regarding both the position and the velocity profile, and produces smooth and natural transitions in position as well as velocity space. The properties of the method are demonstrated by applying it to simulated handwriting generation, also shown on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for joining movement sequences, which has high potential for all robotics applications where trajectory joining is required.
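    For context, here is a standard (unmodified) discrete dynamic movement primitive in Python: a critically damped attractor toward the goal plus a phase-dependent forcing term built from Gaussian basis functions. The paper's contribution, the modified formulation that joins several such primitives smoothly in position and velocity, is not reproduced; all constants are conventional choices.

import numpy as np

def dmp_rollout(y0, goal, weights, tau=1.0, dt=0.001, alpha=25.0, beta=6.25, alpha_x=3.0):
    """Integrate a standard discrete DMP (transformation + canonical system):
    tau*dz = alpha*(beta*(goal - y) - z) + f(x)*(goal - y0),  tau*dy = z,  tau*dx = -alpha_x*x,
    with forcing f(x) a normalised sum of Gaussian basis functions weighted by 'weights'."""
    n_bf = len(weights)
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_bf))       # basis centres in phase space
    widths = 1.0 / (np.diff(centers, append=centers[-1] * 0.5) ** 2 + 1e-6)
    y, z, x = float(y0), 0.0, 1.0
    traj = []
    while x > 1e-3:
        psi = np.exp(-widths * (x - centers) ** 2)
        forcing = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        z += dt * (alpha * (beta * (goal - y) - z) + forcing) / tau
        y += dt * z / tau
        x += dt * (-alpha_x * x) / tau                             # canonical (phase) system
        traj.append(y)
    return np.array(traj)

# With zero weights the DMP simply converges smoothly from y0 to the goal;
# learned weights would shape the transient path without changing the endpoint.
path = dmp_rollout(y0=0.0, goal=1.0, weights=np.zeros(10))
print("trajectory end point:", round(path[-1], 4))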
    Review:
    Ainge, A. and Tamosiunaite, M. and Wörgötter, F. and Dudchenko, P A. (2012).
    Hippocampal place cells encode intended destination, and not a discriminative stimulus, in a conditional T-maze task. Hippocampus, 534-543, 22.
    BibTeX:
    @article{aingetamosiunaitewoergoetter2012,
      author = {Ainge, A. and Tamosiunaite, M. and Wörgötter, F. and Dudchenko, P A.},
      title = {Hippocampal place cells encode intended destination, and not a discriminative stimulus, in a conditional T-maze task},
      pages = {534-543},
      journal = {Hippocampus},
      year = {2012},
      volume= {22},
      abstract = {The firing of hippocampal place cells encodes instantaneous location but can also reflect where the animal is heading (prospective firing) or where it has just come from (retrospective firing). The current experiment sought to explicitly control the prospective firing of place cells with a visual discriminandum in a T-maze. Rats were trained to associate a specific visual stimulus (e.g. a flashing light) with the occurrence of reward in a specific location (e.g. the left arm of the T). A different visual stimulus (e.g. a constant light) signalled the availability of reward in the opposite arm of the T. After this discrimination had been acquired, rats were implanted with electrodes in the CA1 layer of the hippocampus. Place cells were then identified and recorded as the animals performed the discrimination task, and the presentation of the visual stimulus was manipulated. A subset of CA1 place cells fired at different rates on the central stem of the T depending on the animal's intended destination, but this conditional or prospective firing was independent of the visual discriminative stimulus. The firing rate of some place cells was, however, modulated by changes in the timing of presentation of the visual stimulus. Thus, place cells fired prospectively, but this firing did not appear to be controlled, directly, by a salient visual stimulus that controlled behaviour.}}
    Abstract: The firing of hippocampal place cells encodes instantaneous location but can also reflect where the animal is heading (prospective firing) or where it has just come from (retrospective firing). The current experiment sought to explicitly control the prospective firing of place cells with a visual discriminandum in a T-maze. Rats were trained to associate a specific visual stimulus (e.g. a flashing light) with the occurrence of reward in a specific location (e.g. the left arm of the T). A different visual stimulus (e.g. a constant light) signalled the availability of reward in the opposite arm of the T. After this discrimination had been acquired, rats were implanted with electrodes in the CA1 layer of the hippocampus. Place cells were then identified and recorded as the animals performed the discrimination task, and the presentation of the visual stimulus was manipulated. A subset of CA1 place cells fired at different rates on the central stem of the T depending on the animal's intended destination, but this conditional or prospective firing was independent of the visual discriminative stimulus. The firing rate of some place cells was, however, modulated by changes in the timing of presentation of the visual stimulus. Thus, place cells fired prospectively, but this firing did not appear to be controlled, directly, by a salient visual stimulus that controlled behaviour.
    Review:
    Abramov, A. and Papon, J. and Pauwels, K. and Wörgötter, F. and Dellen, B. (2012).
    Real-time Segmentation of Stereo Videos on a Resource-limited System with a Mobile GPU. IEEE Transactions on Circuits and Systems for Video Technology, 1292 - 1305, 22, 9. DOI: 10.1109/TCSVT.2012.2199389.
    BibTeX:
    @article{abramovpaponpauwels2012a,
      author = {Abramov, A. and Papon, J. and Pauwels, K. and Wörgötter, F. and Dellen, B.},
      title = {Real-time Segmentation of Stereo Videos on a Resource-limited System with a Mobile GPU},
      pages = {1292 - 1305},
      journal = {IEEE Transactions on Circuits and Systems for Video Technology},
      year = {2012},
      volume= {22},
      number = {9},
      month = {09},
      doi = {10.1109/TCSVT.2012.2199389},
      abstract = {In mobile robotic applications, visual information needs to be processed fast despite resource limitations of the mobile system. Here, a novel real-time framework for model-free spatiotemporal segmentation of stereo videos is presented. It combines real-time optical flow and stereo with image segmentation and runs on a portable system with an integrated mobile graphics processing unit. The system performs online, automatic, and dense segmentation of stereo videos and serves as a visual front end for preprocessing in mobile robots, providing a condensed representation of the scene that can potentially be utilized in various applications, e.g., object manipulation, manipulation recognition, visual servoing. The method was tested on real-world sequences with arbitrary motions, including videos acquired with a moving camera.}}
    Abstract: In mobile robotic applications, visual information needs to be processed fast despite resource limitations of the mobile system. Here, a novel real-time framework for model-free spatiotemporal segmentation of stereo videos is presented. It combines real-time optical flow and stereo with image segmentation and runs on a portable system with an integrated mobile graphics processing unit. The system performs online, automatic, and dense segmentation of stereo videos and serves as a visual front end for preprocessing in mobile robots, providing a condensed representation of the scene that can potentially be utilized in various applications, e.g., object manipulation, manipulation recognition, visual servoing. The method was tested on real-world sequences with arbitrary motions, including videos acquired with a moving camera.
    Review:
    Dellen, B. and Wörgötter, F. (2010).
    A Local Algorithm for the Computation of Image Velocity via Constructive Interference of Global Fourier Components. International Journal of Computer Vision, 53-70, 92. DOI: 10.1007/s11263-010-0402-2.
    BibTeX:
    @article{dellenwoergoetter2010,
      author = {Dellen, B. and Wörgötter, F.},
      title = {A Local Algorithm for the Computation of Image Velocity via Constructive Interference of Global Fourier Components},
      pages = {53-70},
      journal = {International Journal of Computer Vision},
      year = {2010},
      volume= {92},
      url = {http://dx.doi.org/10.1007/s11263-010-0402-2},
      doi = {10.1007/s11263-010-0402-2},
      abstract = {A novel Fourier-based technique for local motion detection from image sequences is proposed. In this method, the instantaneous velocities of local image points are inferred directly from the global 3D Fourier components of the image sequence. This is done by selecting those velocities for which the superposition of the corresponding Fourier gratings leads to constructive interference at the image point. Hence, image velocities can be assigned locally even though position is computed from the phases and amplitudes of global Fourier components spanning the whole image sequence that have been filtered based on the motion-constraint equation, reducing certain aperture effects typically arising from windowing in other methods. Regularization is introduced for sequences having smooth flow fields. Aperture effects and their effect on optic-flow regularization are investigated in this context. The algorithm is tested on both synthetic and real image sequences and the results are compared to those of other local methods. Finally, we show that other motion features, i.e. motion direction, can be computed using the same algorithmic framework without requiring an intermediate representation of local velocity, which is an important characteristic of the proposed method.}}
    Abstract: A novel Fourier-based technique for local motion detection from image sequences is proposed. In this method, the instantaneous velocities of local image points are inferred directly from the global 3D Fourier components of the image sequence. This is done by selecting those velocities for which the superposition of the corresponding Fourier gratings leads to constructive interference at the image point. Hence, image velocities can be assigned locally even though position is computed from the phases and amplitudes of global Fourier components spanning the whole image sequence that have been filtered based on the motion-constraint equation, reducing certain aperture effects typically arising from windowing in other methods. Regularization is introduced for sequences having smooth flow fields. Aperture effects and their effect on optic-flow regularization are investigated in this context. The algorithm is tested on both synthetic and real image sequences and the results are compared to those of other local methods. Finally, we show that other motion features, i.e. motion direction, can be computed using the same algorithmic framework without requiring an intermediate representation of local velocity, which is an important characteristic of the proposed method.
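    A toy sketch of the velocity-selection step described above; the tolerance for the motion-constraint plane, the candidate-velocity list, and other conventions are assumptions made for illustration, not the authors' implementation:

    import numpy as np

    def local_velocity(seq, x, y, t, candidates, tol=0.2):
        """Pick the candidate velocity whose motion-constraint plane in the
        3D Fourier domain gives the strongest constructive interference of
        the global Fourier gratings at the image point (x, y, t)."""
        T, H, W = seq.shape
        F = np.fft.fftn(seq)                           # global 3D Fourier components
        wt = np.fft.fftfreq(T)[:, None, None]          # temporal frequencies
        wy = np.fft.fftfreq(H)[None, :, None]          # vertical spatial frequencies
        wx = np.fft.fftfreq(W)[None, None, :]          # horizontal spatial frequencies
        phase = np.exp(2j * np.pi * (wt * t + wy * y + wx * x))   # gratings at the point
        best_v, best_score = None, -np.inf
        for vx, vy in candidates:
            # keep only components consistent with the motion-constraint equation
            on_plane = np.abs(wt + vx * wx + vy * wy) < tol
            score = np.abs(np.sum(F[on_plane] * phase[on_plane]))  # superposition magnitude
            if score > best_score:
                best_v, best_score = (vx, vy), score
        return best_v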
    Review:
    Tamosiunaite, M. and Markelic, I. and Kulvicius, T. and Wörgötter, F. (2011).
    Generalizing objects by analyzing language. 11th IEEE-RAS International Conference on Humanoid Robots Humanoids, 557-563. DOI: 10.1109/Humanoids.2011.6100812.
    BibTeX:
    @inproceedings{tamosiunaitemarkelickulvicius2011,
      author = {Tamosiunaite, M. and Markelic, I. and Kulvicius, T. and Wörgötter, F.},
      title = {Generalizing objects by analyzing language},
      pages = {557-563},
      booktitle = {11th IEEE-RAS International Conference on Humanoid Robots Humanoids},
      year = {2011},
      month = {10},
      doi = {10.1109/Humanoids.2011.6100812},
      abstract = {Generalizing objects in an action-context by a robot, for example addressing the problem: "Which items can be cut with which tools?", is an unresolved and difficult problem. Answering such a question defines a complete action class and robots cannot do this so far. We use a bootstrapping mechanism similar to that known from human language acquisition, and combine language with image-analysis to create action classes built around the verb (action) in an utterance. A human teaches the robot a certain sentence, for example: "Cut a sausage with a knife", from which point on the machine generalizes the arguments (nouns) that the verb takes and searches for possible alternative nouns. Then, by way of an internet-based image search and a classification algorithm, image classes for the alternative nouns are extracted, by which a large "picture book" of the possible objects involved in an action is created. This concludes the generalization step. Using the same classifier, the machine can now also perform a recognition procedure. Without having seen the objects before, it can analyze a visual scene, discovering, for example, a cucumber and a mandolin, which match the earlier found nouns, allowing it to suggest actions like: "I could cut a cucumber with a mandolin". The algorithm for generalizing objects by analyzing language (GOAL) presented here thus allows generalization and recognition of objects in an action-context. It can then be combined with methods for action execution (e.g. action generation based on human demonstration) to execute so far unknown actions.}}
    Abstract: Generalizing objects in an action-context by a robot, for example addressing the problem: "Which items can be cut with which tools?", is an unresolved and difficult problem. Answering such a question defines a complete action class and robots cannot do this so far. We use a bootstrapping mechanism similar to that known from human language acquisition, and combine language with image-analysis to create action classes built around the verb (action) in an utterance. A human teaches the robot a certain sentence, for example: "Cut a sausage with a knife", from which point on the machine generalizes the arguments (nouns) that the verb takes and searches for possible alternative nouns. Then, by way of an internet-based image search and a classification algorithm, image classes for the alternative nouns are extracted, by which a large "picture book" of the possible objects involved in an action is created. This concludes the generalization step. Using the same classifier, the machine can now also perform a recognition procedure. Without having seen the objects before, it can analyze a visual scene, discovering, for example, a cucumber and a mandolin, which match the earlier found nouns, allowing it to suggest actions like: "I could cut a cucumber with a mandolin". The algorithm for generalizing objects by analyzing language (GOAL) presented here thus allows generalization and recognition of objects in an action-context. It can then be combined with methods for action execution (e.g. action generation based on human demonstration) to execute so far unknown actions.
    Review:
    Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F. (2011).
    A Novel Trajectory Generation Method for Robot Control. Journal of Intelligent Robotic Systems, 165-184, 68, 2. DOI: 10.1007/s10846-012-9683-8.
    BibTeX:
    @article{ningkulviciustamosiunaite2011,
      author = {Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F.},
      title = {A Novel Trajectory Generation Method for Robot Control},
      pages = {165-184},
      journal = {Journal of Intelligent Robotic Systems},
      year = {2011},
      volume= {68},
      number = {2},
      doi = {10.1007/s10846-012-9683-8},
      abstract = {This paper presents a novel trajectory generator based on Dynamic Movement Primitives DMP. The key ideas from the original DMP formalism are extracted, reformulated and extended from a control-theoretical viewpoint. This method can generate smooth trajectories, satisfy position and velocity boundary conditions at start- and endpoint with high precision, and accurately follow geometrical paths as desired. Paths can be complex and processed as a whole, and smooth transitions can be generated automatically. This novel trajectory-generation technology therefore appears to be a viable alternative to existing solutions, not only for service robotics but possibly also in industry.}}
    Abstract: This paper presents a novel trajectory generator based on Dynamic Movement Primitives DMP. The key ideas from the original DMP formalism are extracted, reformulated and extended from a control-theoretical viewpoint. This method can generate smooth trajectories, satisfy position and velocity boundary conditions at start- and endpoint with high precision, and accurately follow geometrical paths as desired. Paths can be complex and processed as a whole, and smooth transitions can be generated automatically. This novel trajectory-generation technology therefore appears to be a viable alternative to existing solutions, not only for service robotics but possibly also in industry.
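    For readers unfamiliar with the original DMP formalism that the paper reformulates, a minimal sketch of integrating a standard one-dimensional DMP is given below; parameter values and the zero forcing term are placeholders, and the paper's control-theoretic reformulation is not reproduced here:

    import numpy as np

    def dmp_rollout(y0, g, tau=1.0, dt=0.001, a_z=25.0, b_z=6.25, a_x=8.0,
                    forcing=lambda x: 0.0):
        """Integrate a standard one-dimensional DMP from y0 towards goal g.
        `forcing` shapes the trajectory; the default (zero) yields a smooth
        point-to-point motion. All constants are illustrative."""
        y, z, x = float(y0), 0.0, 1.0            # position, scaled velocity, phase
        traj = []
        for _ in range(int(tau / dt)):
            f = forcing(x) * x * (g - y0)         # phase-gated, goal-scaled forcing term
            dz = (a_z * (b_z * (g - y) - z) + f) / tau
            dy = z / tau
            dx = -a_x * x / tau
            z, y, x = z + dz * dt, y + dy * dt, x + dx * dt
            traj.append(y)
        return np.array(traj)

    # Example: converge from 0 to 1 within one time unit
    path = dmp_rollout(0.0, 1.0)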
    Review:
    Tetzlaff, C. and Kolodziejski, C. and Timm, M. and Wörgötter, F. (2011).
    Synaptic Scaling in Combination with many Generic Plasticity Mechanisms Stabilizes Circuit Connectivity. Front. Comput. Neurosci, 47, 5. DOI: 10.3389/fncom.2011.00047.
    BibTeX:
    @article{tetzlaffkolodziejskitimm2011,
      author = {Tetzlaff, C. and Kolodziejski, C. and Timm, M. and Wörgötter, F.},
      title = {Synaptic Scaling in Combination with many Generic Plasticity Mechanisms Stabilizes Circuit Connectivity},
      pages = {47},
      journal = {Front. Comput. Neurosci},
      year = {2011},
      volume= {5},
      doi = {10.3389/fncom.2011.00047},
      abstract = {Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks}}
    Abstract: Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks
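    As a rough illustration of how a Hebbian term and a homeostatic scaling term can be combined in a single weight update (the specific rule formalized and analyzed in the paper may differ), consider:

    def combined_update(w, u, v, v_target, mu=1e-3, gamma=1e-4, dt=1.0):
        """One Euler step of a generic 'plasticity plus scaling' weight update:
        a Hebbian growth term (mu * u * v) and a scaling term that
        multiplicatively pushes postsynaptic activity v towards v_target."""
        dw = mu * u * v + gamma * (v_target - v) * w
        return w + dt * dw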
    Review:
    Tamosiunaite, M. and Nemec, B. and Ude, A. and Wörgötter, F. (2011).
    Learning to pour with a robot arm combining goal and shape learning for dynamic movement primitives. Robotics and Autonomous Systems RAS, 910-922, 59, 11. DOI: 10.1016/j.robot.2011.07.004.
    BibTeX:
    @article{tamosiunaitenemecude2011,
      author = {Tamosiunaite, M. and Nemec, B. and Ude, A. and Wörgötter, F.},
      title = {Learning to pour with a robot arm combining goal and shape learning for dynamic movement primitives},
      pages = {910-922},
      journal = {Robotics and Autonomous Systems RAS},
      year = {2011},
      volume= {59},
      number = {11},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889011001254},
      doi = {10.1016/j.robot.2011.07.004},
      abstract = {When describing robot motion with dynamic movement primitives DMPs, goal (trajectory endpoint), shape, and temporal scaling parameters are used. In reinforcement learning with DMPs, usually goals and temporal scaling parameters are pre-defined and only the weights for shaping a DMP are learned. Many tasks, however, exist where the best goal position is not known a priori, requiring it to be learned. Thus, here we specifically address the question of how to simultaneously combine goal and shape parameter learning. This is a difficult problem because both parameters could easily interfere in a destructive way. We apply value function approximation techniques for goal learning and policy gradient methods for shape learning. Specifically, we use policy improvement with path integrals and natural actor-critic for the policy gradient approach. Methods are analyzed with simulations and implemented on a real robot setup. Results for learning from scratch, learning initialized by human demonstration, as well as for modifying the tool for the learned DMPs are presented. We observe that the combination of goal and shape learning is stable and robust within large parameter regimes. Learning converges quickly even in the presence of large disturbances, which makes this combined method suitable for robotic applications.}}
    Abstract: When describing robot motion with dynamic movement primitives DMPs, goal (trajectory endpoint), shape, and temporal scaling parameters are used. In reinforcement learning with DMPs, usually goals and temporal scaling parameters are pre-defined and only the weights for shaping a DMP are learned. Many tasks, however, exist where the best goal position is not known a priori, requiring it to be learned. Thus, here we specifically address the question of how to simultaneously combine goal and shape parameter learning. This is a difficult problem because both parameters could easily interfere in a destructive way. We apply value function approximation techniques for goal learning and policy gradient methods for shape learning. Specifically, we use policy improvement with path integrals and natural actor-critic for the policy gradient approach. Methods are analyzed with simulations and implemented on a real robot setup. Results for learning from scratch, learning initialized by human demonstration, as well as for modifying the tool for the learned DMPs are presented. We observe that the combination of goal and shape learning is stable and robust within large parameter regimes. Learning converges quickly even in the presence of large disturbances, which makes this combined method suitable for robotic applications.
    Review:
    Porr, B. and McCabe, L. and Kolodziejski, C. and Wörgötter, F. (2011).
    How feedback inhibition shapes spike-timing-dependent plasticity and its implications for recent Schizophrenia models. Neural Networks, 560-567, 24, 6. DOI: 10.1016/j.neunet.2011.03.004.
    BibTeX:
    @article{porrmccabekolodziejski2011,
      author = {Porr, B. and McCabe, L. and Kolodziejski, C. and Wörgötter, F.},
      title = {How feedback inhibition shapes spike-timing-dependent plasticity and its implications for recent Schizophrenia models},
      pages = {560-567},
      journal = {Neural Networks},
      year = {2011},
      volume= {24},
      number = {6},
      url = {http://www.sciencedirect.com/science/article/pii/S0893608011000888},
      doi = {10.1016/j.neunet.2011.03.004},
      abstract = {It has been shown that plasticity is not a fixed property but, in fact, changes depending on the location of the synapse on the neuron and/or changes of biophysical parameters. Here we investigate how plasticity is shaped by feedback inhibition in a cortical microcircuit. We use a differential Hebbian learning rule to model spike-timing-dependent plasticity and show analytically that feedback inhibition shortens the time window for LTD during spike-timing-dependent plasticity but not for LTP. We then use a realistic GENESIS model to test two hypotheses about interneuron hypofunction and conclude that a reduction in GAD67 is the most likely candidate for the cause of hypofrontality as observed in Schizophrenia.}}
    Abstract: It has been shown that plasticity is not a fixed property but, in fact, changes depending on the location of the synapse on the neuron and/or changes of biophysical parameters. Here we investigate how plasticity is shaped by feedback inhibition in a cortical microcircuit. We use a differential Hebbian learning rule to model spike-timing-dependent plasticity and show analytically that feedback inhibition shortens the time window for LTD during spike-timing-dependent plasticity but not for LTP. We then use a realistic GENESIS model to test two hypotheses about interneuron hypofunction and conclude that a reduction in GAD67 is the most likely candidate for the cause of hypofrontality as observed in Schizophrenia.
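    The differential Hebbian rule mentioned in the abstract correlates presynaptic activity with the temporal derivative of the postsynaptic signal, which produces an STDP-like timing window; a minimal discrete-time sketch (trace shapes and the learning rate are assumptions):

    import numpy as np

    def differential_hebb_dw(pre_trace, post_trace, mu=1e-3, dt=1.0):
        """Weight change of a differential Hebbian rule: integrate the product
        of the presynaptic trace and the time derivative of the postsynaptic
        trace. Pre-before-post pairings overlap mostly with the rising phase
        of the postsynaptic trace (positive change, LTP-like); post-before-pre
        pairings overlap with the falling phase (negative change, LTD-like)."""
        d_post = np.gradient(post_trace, dt)
        return mu * np.sum(pre_trace * d_post) * dt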
    Review:
    Pechrach, K. and Manoonpong, P. and Wörgötter, F. and Tungpimolrut, K. and Hatti, N. and Phontip, J. and Komoljindakul, K. (2011).
    Piezoelectric Energy Harvesting for Self Power Generation of Upper and Lower Prosthetic Legs. International Conference on Piezo 2011-Electroceramics for End-Users VI.
    BibTeX:
    @inproceedings{pechrachmanoonpongwoergoetter2011,
      author = {Pechrach, K. and Manoonpong, P. and Wörgötter, F. and Tungpimolrut, K. and Hatti, N. and Phontip, J. and Komoljindakul, K.},
      title = {Piezoelectric Energy Harvesting for Self Power Generation of Upper and Lower Prosthetic Legs},
      booktitle = {International Conference on Piezo 2011-Electroceramics for End-Users VI},
      year = {2011},
      abstract = {This work presents the design of an energy harvesting system using smart materials for self power generation of upper and lower prosthetic legs. Smart materials such as Piezo-Composites, Piezo Flexible Film, Macro Fiber Composites, and PZT have been employed and modified to be appropriately embedded in the prosthesis. The movements of the prosthesis extract and transfer energy directly from the piezoelectric elements via a converter to a power management system. Afterward, the power management system manages and accumulates the generated electrical energy until it is sufficient to power the electronic components of the prosthesis. Here we show our preliminary experimental results on energy harvesting and efficiency in terms of peak piezoelectric voltages during step-up and continuous walking for a period of time.}}
    Abstract: This work presents the design of an energy harvesting system using smart materials for self power generation of upper and lower prosthetic legs. Smart materials such as Piezo-Composites, Piezo Flexible Film, Macro Fiber Composites, and PZT have been employed and modified to be appropriately embedded in the prosthesis. The movements of the prosthesis extract and transfer energy directly from the piezoelectric elements via a converter to a power management system. Afterward, the power management system manages and accumulates the generated electrical energy until it is sufficient to power the electronic components of the prosthesis. Here we show our preliminary experimental results on energy harvesting and efficiency in terms of peak piezoelectric voltages during step-up and continuous walking for a period of time.
    Review:
    Ning, K. and Wörgötter, F. (2011).
    Control System Development for a Novel Wire-Driven Hyper-Redundant Chain Robot, 3D-Trunk. IEEE-ASME, 949-959, 17, 5. DOI: 10.1109/TMECH.2011.2151202.
    BibTeX:
    @article{ningwoergoetter2011,
      author = {Ning, K. and Wörgötter, F.},
      title = {Control System Development for a Novel Wire-Driven Hyper-Redundant Chain Robot, 3D-Trunk},
      pages = {949-959},
      journal = {IEEE-ASME},
      year = {2011},
      volume= {17},
      number = {5},
      month = {10},
      doi = {10.1109/TMECH.2011.2151202},
      abstract = {This paper presents the control system for our novel hyper-redundant chain robot HRCR system 3D-Trunk, demonstrating an operational principle that differs substantially from traditional solutions. Its main features are that all the joints are passive, state controllable, and share common inputs introduced by wire-driven control. For this unique design, a force-oriented method is employed to control the driving wires. The mechanical analysis, as well as an analysis of the differentially driven mechanism of this design, is formulated. The design of a novel wire tension state sensing component and its operation are also described. The system is controlled by distributed embedded controllers. The actuator coordination mechanism and the Bang-Bang controller-based closed-loop control implementation of this novel prototype are discussed at the mechatronic system level. Thus, this paper, together with a predecessor [1], presents all required details for building and controlling 3D-Trunk.}}
    Abstract: This paper presents the control system for our novel hyper-redundant chain robot HRCR system 3D-Trunk, demonstrating an operational principle that differs substantially from traditional solutions. Its main features are that all the joints are passive, state controllable, and share common inputs introduced by wire-driven control. For this unique design, a force-oriented method is employed to control the driving wires. The mechanical analysis, as well as an analysis of the differentially driven mechanism of this design, is formulated. The design of a novel wire tension state sensing component and its operation are also described. The system is controlled by distributed embedded controllers. The actuator coordination mechanism and the Bang-Bang controller-based closed-loop control implementation of this novel prototype are discussed at the mechatronic system level. Thus, this paper, together with a predecessor [1], presents all required details for building and controlling 3D-Trunk.
    Review:
    Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F. (2011).
    Accurate Position and Velocity Control for Trajectories Based on Dynamic Movement Primitives. IEEE International Conference on Robotics and Automation ICRA, 5006-5011. DOI: 10.1109/ICRA.2011.5979668.
    BibTeX:
    @inproceedings{ningkulviciustamosiunaite2011a,
      author = {Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Accurate Position and Velocity Control for Trajectories Based on Dynamic Movement Primitives},
      pages = {5006-5011},
      booktitle = {IEEE International Conference on Robotics and Automation ICRA},
      year = {2011},
      doi = {10.1109/ICRA.2011.5979668},
      abstract = {This paper presents a novel method for trajectory generation based on dynamic movement primitives DMPs treated from a control theoretical perspective. We extend the key ideas from the original DMP formalism by introducing a velocity convergence mechanism in the reformulated system. A theoretical proof is given to guarantee its validity. The new method can deal with complex paths as a whole. Based on this, we can generate smooth trajectories with automatically generated transition zones, satisfy position and velocity boundary conditions at start and endpoint with high precision, and support multiple via-point applications. A theoretical proof of this method and experiments are presented.}}
    Abstract: This paper presents a novel method for trajectory generation based on dynamic movement primitives DMPs treated from a control theoretical perspective. We extend the key ideas from the original DMP formalism by introducing a velocity convergence mechanism in the reformulated system. A theoretical proof is given to guarantee its validity. The new method can deal with complex paths as a whole. Based on this, we can generate smooth trajectories with automatically generated transition zones, satisfy position and velocity boundary conditions at start and endpoint with high precision, and support multiple via-point applications. A theoretical proof of this method and experiments are presented.
    Review:
    Markelic, I. and Kjaer-Nielsen, A. and Pauwels, K. and Baunegaardand Jensen, L. and Chumerin, N. and Vidugiriene, A. and Tamosiunaite M.and Hulle, M V. and Krüger, N. and Rotter, A. and Wörgötter, F. (2011).
    The Driving School System: Learning Automated Basic Driving Skills from a Teacher in a Real Car. IEEE Trans. Intelligent Transportation Systems, 1-12, PP99. DOI: 10.1109/TITS.2011.2157690.
    BibTeX:
    @article{markelickjaernielsenpauwels2011,
      author = {Markelic, I. and Kjaer-Nielsen, A. and Pauwels, K. and Baunegaardand Jensen, L. and Chumerin, N. and Vidugiriene, A. and Tamosiunaite M.and Hulle, M V. and Krüger, N. and Rotter, A. and Wörgötter, F.},
      title = {The Driving School System: Learning Automated Basic Driving Skills from a Teacher in a Real Car},
      pages = {1-12},
      journal = {IEEE Trans. Intelligent Transportation Systems},
      year = {2011},
      volume= {PP99},
      doi = {10.1109/TITS.2011.2157690},
      abstract = {We present a system that learns basic vision-based driving skills from a human teacher. In contrast to much other work in this area, which is based on simulation or data obtained from simulation, our system is implemented as a multi-threaded, parallel CPU/GPU architecture in a real car and trained with real driving data to generate steering and acceleration control for road following. In addition, it uses a novel algorithm for detecting independently moving objects IMOs for spotting obstacles. Both the learning and the IMO detection algorithms are data driven and thus improve upon the limitations of model-based approaches. The system's ability to imitate the teacher's behavior is analyzed on known and unknown streets; the results suggest its use for steering assistance but limit the use of the acceleration signal to curve negotiation. We propose that this ability to adapt to the driver has high potential for future intelligent driver assistance systems, since it can serve to increase the driver's security as well as comfort, an important sales argument in the car industry.}}
    Abstract: We present a system that learns basic vision-based driving skills from a human teacher. In contrast to much other work in this area, which is based on simulation or data obtained from simulation, our system is implemented as a multi-threaded, parallel CPU/GPU architecture in a real car and trained with real driving data to generate steering and acceleration control for road following. In addition, it uses a novel algorithm for detecting independently moving objects IMOs for spotting obstacles. Both the learning and the IMO detection algorithms are data driven and thus improve upon the limitations of model-based approaches. The system's ability to imitate the teacher's behavior is analyzed on known and unknown streets; the results suggest its use for steering assistance but limit the use of the acceleration signal to curve negotiation. We propose that this ability to adapt to the driver has high potential for future intelligent driver assistance systems, since it can serve to increase the driver's security as well as comfort, an important sales argument in the car industry.
    Review:
    Manoonpong, P. and Wörgötter, F. and Pechrach, K. and Tungpimolrut, K. and Hatti, N. and Phontip, J. and Komol, K. (2011).
    Using Neural Networks for Modelling Piezoelectric Energy Harvesting Systems in a Prosthetic Leg. International Conference on Piezo 2011-Electroceramics for End-Users VI.
    BibTeX:
    @inproceedings{manoonpongwoergoetterpechrach2011,
      author = {Manoonpong, P. and Wörgötter, F. and Pechrach, K. and Tungpimolrut, K. and Hatti, N. and Phontip, J. and Komol, K.},
      title = {Using Neural Networks for Modelling Piezoelectric Energy Harvesting Systems in a Prosthetic Leg},
      booktitle = {International Conference on Piezo 2011-Electroceramics for End-Users VI},
      year = {2011},
      abstract = {In this paper, we present energy harvesting systems in a prosthetic leg using piezoceramic Macro Fiber Composites MFCs and their models using artificial neural networks. The piezoceramic MFCs are implemented at the sole and heel of the leg and transform impact forces into electrical power during walking. The neural model of the energy harvesting system installed at the sole is developed on the basis of a standard feedforward backpropagation neural network. On the other hand, the neural model of the energy harvesting system installed at the heel is manually synthesized from different neural modules networks. Experimental results show that these neural models can appropriately transform the impact forces detected by force sensing resistors FSRs into the electrical responses of the piezoceramic MFCs. The models will be used to study and analyze dynamical behaviors of the piezoelectric materials with respect to walking}}
    Abstract: In this paper, we present energy harvesting systems in a prosthetic leg using piezoceramic Macro Fiber Composites MFCs and their models using artificial neural networks. The piezoceramic MFCs are implemented at the sole and heel of the leg and transform impact forces into electrical power during walking. The neural model of the energy harvesting system installed at the sole is developed on the basis of a standard feedforward backpropagation neural network. On the other hand, the neural model of the energy harvesting system installed at the heel is manually synthesized from different neural modules networks. Experimental results show that these neural models can appropriately transform the impact forces detected by force sensing resistors FSRs into the electrical responses of the piezoceramic MFCs. The models will be used to study and analyze dynamical behaviors of the piezoelectric materials with respect to walking
    Review:
    Manoonpong, P. and Wörgötter, F. and Pasemann, F. (2011).
    Biological Inspiration for Mechanical Design and Control of Autonomous Walking Robots: Towards Life-Like Robots. Int. J. Appl. Biomed. Eng. IJABME, 1-12, 31.
    BibTeX:
    @article{manoonpongwoergoetterpasemann2011,
      author = {Manoonpong, P. and Wörgötter, F. and Pasemann, F.},
      title = {Biological Inspiration for Mechanical Design and Control of Autonomous Walking Robots: Towards Life-Like Robots},
      pages = {1-12},
      journal = {Int. J. Appl. Biomed. Eng. IJABME},
      year = {2011},
      volume= {31},
      abstract = {Nature apparently has succeeded in evolving biomechanics and creating neural mechanisms that allow living systems like walking animals to perform various sophisticated behaviors, e.g., different gaits, climbing, turning, orienting, obstacle avoidance, attraction, and anticipation. This shows that general principles of nature can provide biological inspiration for robotic designs or give useful hints of what is possible and design ideas that may have escaped our consideration. Instead of starting from scratch, this article presents how these biological principles can be used for mechanical design and control of walking robots, in order to approach living creatures in their level of performance. Employing this strategy allows us to successfully develop versatile, adaptive, and autonomous walking robots. Versatility in this sense means a variety of reactive behaviors including memory guidance, while adaptivity implies online learning capabilities. Autonomy is the ability to function without continuous human guidance. These three key elements are achieved under modular neural control and learning. In addition, the presented neural control technique is shown to be a powerful method for solving sensor-motor coordination problems of highly complex systems.}}
    Abstract: Nature apparently has succeeded in evolving biomechanics and creating neural mechanisms that allow living systems like walking animals to perform various sophisticated behaviors, e.g., different gaits, climbing, turning, orienting, obstacle avoidance, attraction, and anticipation. This shows that general principles of nature can provide biological inspiration for robotic designs or give useful hints of what is possible and design ideas that may have escaped our consideration. Instead of starting from scratch, this article presents how these biological principles can be used for mechanical design and control of walking robots, in order to approach living creatures in their level of performance. Employing this strategy allows us to successfully develop versatile, adaptive, and autonomous walking robots. Versatility in this sense means a variety of reactive behaviors including memory guidance, while adaptivity implies online learning capabilities. Autonomy is the ability to function without continuous human guidance. These three key elements are achieved under modular neural control and learning. In addition, the presented neural control technique is shown to be a powerful method for solving sensor-motor coordination problems of highly complex systems.
    Review:
    Manoonpong, P. and Kulvicius, T. and Wörgötter, F. and Kunze, L. and Renjewski, D. and Seyfarth, A. (2011).
    Compliant Ankles and Flat Feet for Improved Self-Stabilization and Passive Dynamics of the Biped Robot RunBot. The 2011 IEEE-RAS International Conference on Humanoid Robots, 276 - 281. DOI: 10.1109/Humanoids.2011.6100804.
    BibTeX:
    @inproceedings{manoonpongkulviciuswoergoetter2011,
      author = {Manoonpong, P. and Kulvicius, T. and Wörgötter, F. and Kunze, L. and Renjewski, D. and Seyfarth, A.},
      title = {Compliant Ankles and Flat Feet for Improved Self-Stabilization and Passive Dynamics of the Biped Robot RunBot},
      pages = {276 - 281},
      booktitle = {The 2011 IEEE-RAS International Conference on Humanoid Robots},
      year = {2011},
      month = {10},
      doi = {10.1109/Humanoids.2011.6100804},
      abstract = {Biomechanical studies of human walking reveal that compliance plays an important role at least in natural and smooth motions as well as for self-stabilization. Inspired by this, we present here the development of a new lower leg segment of the dynamic biped robot "RunBot". This new lower leg segment features a compliant ankle connected to a flat foot. It is mainly employed to realize robust self-stabilization in a passive manner. In general, such self-stabilization is achieved through mechanical feedback due to elasticity. Using real-time walking experiments, this study shows that the new lower leg segment improves dynamic walking behavior of the robot in two main respects compared to an old lower leg segment consisting of rigid ankle and curved foot: 1) it provides better self-stabilization after stumbling and 2) it increases passive dynamics during some stages of the gait cycle of the robot i.e., when the whole robot moves unactuated. As a consequence, a combination of compliance (i.e., the new lower leg segment) and active components (i.e., actuated hip and knee joints) driven by a neural mechanism (i.e., reflexive neural control) enables RunBot to perform robust self stabilization and at the same time natural, smooth, and energy efficient walking behavior without high control effort.}}
    Abstract: Biomechanical studies of human walking reveal that compliance plays an important role at least in natural and smooth motions as well as for self-stabilization. Inspired by this, we present here the development of a new lower leg segment of the dynamic biped robot "RunBot". This new lower leg segment features a compliant ankle connected to a flat foot. It is mainly employed to realize robust self-stabilization in a passive manner. In general, such self-stabilization is achieved through mechanical feedback due to elasticity. Using real-time walking experiments, this study shows that the new lower leg segment improves dynamic walking behavior of the robot in two main respects compared to an old lower leg segment consisting of rigid ankle and curved foot: 1) it provides better self-stabilization after stumbling and 2) it increases passive dynamics during some stages of the gait cycle of the robot i.e., when the whole robot moves unactuated. As a consequence, a combination of compliance (i.e., the new lower leg segment) and active components (i.e., actuated hip and knee joints) driven by a neural mechanism (i.e., reflexive neural control) enables RunBot to perform robust self stabilization and at the same time natural, smooth, and energy efficient walking behavior without high control effort.
    Review:
    Liu, G. and Wörgötter, F. and Markelic, I. (2011).
    Nonlinear Estimation Using Central Difference Information Filter. IEEE International Workshop on Statistical Signal Processing, 593-596, 28-30. DOI: 10.1109/SSP.2011.5967768.
    BibTeX:
    @inproceedings{liuwoergoettermarkelic2011b,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Nonlinear Estimation Using Central Difference Information Filter},
      pages = {593-596},
      booktitle = {IEEE International Workshop on Statistical Signal Processing},
      year = {2011},
      volume= {28-30},
      doi = {10.1109/SSP.2011.5967768},
      abstract = {In this contribution, we introduce a new state estimation filter for nonlinear estimation and sensor fusion, which we call the central difference information filter CDIF. As is well known, the extended information filter EIF has two shortcomings: one is the limited accuracy of the Taylor series linearization method, the other is the calculation of the Jacobians. These shortcomings can be compensated by utilizing sigma point information filters SPIFs, e.g., the unscented information filter UIF, which uses deterministic sigma points to approximate the distribution of Gaussian random variables and does not require the calculation of Jacobians. As an alternative to the UIF, the CDIF is derived by using Stirling's interpolation to generate sigma points in the SPIF architecture; it uses fewer parameters, has lower computational cost, and achieves the same accuracy as the UIF. To demonstrate the performance of our algorithm, a classic space vehicle reentry tracking simulation is used.}}
    Abstract: In this contribution, we introduce a new state estimation filter for nonlinear estimation and sensor fusion, which we call the central difference information filter CDIF. As is well known, the extended information filter EIF has two shortcomings: one is the limited accuracy of the Taylor series linearization method, the other is the calculation of the Jacobians. These shortcomings can be compensated by utilizing sigma point information filters SPIFs, e.g., the unscented information filter UIF, which uses deterministic sigma points to approximate the distribution of Gaussian random variables and does not require the calculation of Jacobians. As an alternative to the UIF, the CDIF is derived by using Stirling's interpolation to generate sigma points in the SPIF architecture; it uses fewer parameters, has lower computational cost, and achieves the same accuracy as the UIF. To demonstrate the performance of our algorithm, a classic space vehicle reentry tracking simulation is used.
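    The central-difference (Stirling interpolation) sigma points that distinguish the CDIF from Jacobian-based filters can be constructed as below; this sketch covers only the sigma-point and mean-weight construction under standard central-difference conventions, not the full information-filter recursion:

    import numpy as np

    def cd_sigma_points(mean, cov, h=np.sqrt(3.0)):
        """Central-difference sigma points and mean weights (Stirling's
        interpolation, interval length h): the mean plus the mean shifted by
        +/- h times each column of a Cholesky factor of the covariance."""
        n = mean.size
        S = np.linalg.cholesky(cov)
        pts = np.vstack([mean] +
                        [mean + h * S[:, i] for i in range(n)] +
                        [mean - h * S[:, i] for i in range(n)])
        w0 = (h**2 - n) / h**2              # weight of the central point
        wi = 1.0 / (2.0 * h**2)             # weight of each shifted point
        weights = np.array([w0] + [wi] * (2 * n))
        return pts, weights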
    Review:
    Liu, G. and Wörgötter, F. and Markelic, I. (2011).
    Lane Shape Estimation Using a Partitioned Particle Filter for Autonomous Driving. IEEE International Conference on Robotics and Automation ICRA, 1627-1633, 9-13. DOI: 10.1109/ICRA.2011.5979753.
    BibTeX:
    @inproceedings{liuwoergoettermarkelic2011,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Lane Shape Estimation Using a Partitioned Particle Filter for Autonomous Driving},
      pages = {1627-1633},
      booktitle = {IEEE International Conference on Robotics and Automation ICRA},
      year = {2011},
      volume= {9-13},
      doi = {10.1109/ICRA.2011.5979753},
      abstract = {This paper presents a probabilistic algorithm for lane shape estimation in an urban environment, which is important, for example, for driver assistance systems and autonomous driving. For the first time, we bring together the so-called Partitioned Particle filter, an improvement of the traditional Particle filter, and the linear-parabolic lane model, which alleviates many shortcomings of traditional lane models. The former improves the traditional Particle filter by subdividing the whole state space of particles into several subspaces and estimating those subspaces in a hierarchical structure, such that the number of particles for each subspace is flexible and the robustness of the whole system is increased. Furthermore, we introduce a new statistical observation model, an important part of the Particle filter, where we use multi-kernel density to model the probability distribution of lane parameters. Our observation model considers not only color and position information as image cues, but also the image gradient. Our experimental results illustrate the robustness and efficiency of our algorithm even when confronted with challenging scenes.}}
    Abstract: This paper presents a probabilistic algorithm for lane shape estimation in an urban environment, which is important, for example, for driver assistance systems and autonomous driving. For the first time, we bring together the so-called Partitioned Particle filter, an improvement of the traditional Particle filter, and the linear-parabolic lane model, which alleviates many shortcomings of traditional lane models. The former improves the traditional Particle filter by subdividing the whole state space of particles into several subspaces and estimating those subspaces in a hierarchical structure, such that the number of particles for each subspace is flexible and the robustness of the whole system is increased. Furthermore, we introduce a new statistical observation model, an important part of the Particle filter, where we use multi-kernel density to model the probability distribution of lane parameters. Our observation model considers not only color and position information as image cues, but also the image gradient. Our experimental results illustrate the robustness and efficiency of our algorithm even when confronted with challenging scenes.
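    The linear-parabolic lane model referred to above describes a lane boundary as a straight line in the near field and a parabola beyond a boundary row, joined with continuous value and slope; a small sketch (symbol names are assumptions):

    def lane_boundary_x(y, a, b, c, y_m):
        """Lateral position of a lane boundary at image row y: linear (a + b*y)
        in the near field, with an added parabolic term beyond the boundary
        row y_m, so that value and slope stay continuous at y_m."""
        x = a + b * y
        if y > y_m:
            x += c * (y - y_m) ** 2
        return x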
    Review:
    Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F. (2011).
    Modified dynamic movement primitives for joining movement sequences. IEEE International Conference on Robotics and Automation, 2275-2280. DOI: 10.1109/ICRA.2011.5979716.
    BibTeX:
    @inproceedings{kulviciusningtamosiunaite2011,
      author = {Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Modified dynamic movement primitives for joining movement sequences},
      pages = {2275-2280},
      booktitle = {IEEE International Conference on Robotics and Automation},
      year = {2011},
      month = {05},
      doi = {10.1109/ICRA.2011.5979716},
      abstract = {The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories, is still a difficult problem in robotics. This paper presents a novel approach for joining of several dynamic movement primitives DMPs based on a modification of the original formulation for DMPs. The new method produces smooth and natural transitions in position as well as velocity space. The properties of the method are demonstrated by applying it to simulated handwriting generation implemented on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for trajectory learning and generation and its accuracy and modular character has potential for various robotics applications}}
    Abstract: The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories, is still a difficult problem in robotics. This paper presents a novel approach for joining of several dynamic movement primitives DMPs based on a modification of the original formulation for DMPs. The new method produces smooth and natural transitions in position as well as velocity space. The properties of the method are demonstrated by applying it to simulated handwriting generation implemented on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for trajectory learning and generation and its accuracy and modular character has potential for various robotics applications
    Review:
    Krüger, N. and Piater, J. and Geib, C. and Petrick, R. and M, S. and Wörgötter, F. and Ude, A. and Asfour, T. and Kraft, D. and Omrcen, D. and Agostini, A. and Dillmann, R. (2011).
    Object-Action Complexes: Grounded Abstractions of Sensorimotor Processes. Robotics and Autonomous Systems RAS, 740 - 757, 59, 10. DOI: 10.1016/j.robot.2011.05.009.
    BibTeX:
    @article{kruegerpiatergeib2011,
      author = {Krüger, N. and Piater, J. and Geib, C. and Petrick, R. and M, S. and Wörgötter, F. and Ude, A. and Asfour, T. and Kraft, D. and Omrcen, D. and Agostini, A. and Dillmann, R.},
      title = {Object-Action Complexes: Grounded Abstractions of Sensorimotor Processes},
      pages = {740 - 757},
      journal = {Robotics and Autonomous Systems RAS},
      year = {2011},
      volume= {59},
      number = {10},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889011000935},
      doi = {10.1016/j.robot.2011.05.009},
      abstract = {Autonomous cognitive robots must be able to interact with the world and reason about their interactions. On the one hand, physical interactions are inherently continuous, noisy, and require feedback. On the other hand, the knowledge needed for reasoning about high-level objectives and plans is more conveniently expressed as symbolic predictions about state changes. Bridging this gap between control knowledge and abstract reasoning has been a fundamental concern of autonomous robotics. This paper proposes a formalism called an Object-Action Complex (OAC) as the basis for symbolic representations of sensorimotor experience. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. This paper defines a formalism for describing object-action relations and their use for autonomous cognitive robots, and describes how OACs can be learned. We also demonstrate how OACs interact across different levels of abstraction in the context of two tasks: the grounding of objects and grasping affordances, and the execution of plans using grounded representations.}}
    Abstract: Autonomous cognitive robots must be able to interact with the world and reason about their interactions. On the one hand, physical interactions are inherently continuous, noisy, and require feedback. On the other hand, the knowledge needed for reasoning about high-level objectives and plans is more conveniently expressed as symbolic predictions about state changes. Bridging this gap between control knowledge and abstract reasoning has been a fundamental concern of autonomous robotics. This paper proposes a formalism called an Object-Action Complex (OAC) as the basis for symbolic representations of sensorimotor experience. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. This paper defines a formalism for describing object-action relations and their use for autonomous cognitive robots, and describes how OACs can be learned. We also demonstrate how OACs interact across different levels of abstraction in the context of two tasks: the grounding of objects and grasping affordances, and the execution of plans using grounded representations.
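    Conceptually, an OAC couples an execution specification with a symbolic prediction of the resulting state change and a statistic of how reliable that prediction has been; a hypothetical data-structure sketch (field names are illustrative, not the paper's formal definition):

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class ObjectActionComplex:
        """Hypothetical sketch of an OAC-like record."""
        name: str                              # identifier, e.g. "grasp-cylinder"
        predict: Callable[[Dict], Dict]        # symbolic prediction: state -> expected state
        successes: int = 0                     # outcome statistics gathered during execution
        attempts: int = 0

        def reliability(self) -> float:
            """Empirical success rate of the prediction (0.5 prior when untried)."""
            return (self.successes + 1) / (self.attempts + 2)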
    Review:
    Hesse, F. and Manoonpong, P. and Wörgötter, F. (2011).
    A Neural Pre- and Post-Processing Framework for Goal Directed Behavior in Self-Organizing Robots. The 21st Annual Conference of the Japanese Neural Network Society.
    BibTeX:
    @inproceedings{hessemanoonpongwoergoetter2011,
      author = {Hesse, F. and Manoonpong, P. and Wörgötter, F.},
      title = {A Neural Pre- and Post-Processing Framework for Goal Directed Behavior in Self-Organizing Robots},
      booktitle = {The 21st Annual Conference of the Japanese Neural Network Society},
      year = {2011},
      abstract = {In the work at hand, we introduce a neural pre- and post-processing framework whose parameters can be adapted by any learning mechanism, e.g. reinforcement learning. The framework allows the generation of goal-directed behaviors while at the same time exploiting the beneficial properties, e.g. robustness, of self-organization-based primitive behaviors.}}
    Abstract: In the work at hand, we introduce a neural pre- and post-processing framework whose parameters can be adapted by any learning mechanism, e.g. reinforcement learning. The framework allows the generation of goal-directed behaviors while at the same time exploiting the beneficial properties, e.g. robustness, of self-organization-based primitive behaviors.
    Review:
    Aksoy, E E. and Dellen, B. and Tamosiunaite, M. and Wörgötter, F. (2011).
    Execution of a Dual-Object Pushing Action with Semantic Event Chains. IEEE-RAS Int. Conf. on Humanoid Robots, 576-583. DOI: 10.1109/Humanoids.2011.6100833.
    BibTeX:
    @inproceedings{aksoydellentamosiunaite2011,
      author = {Aksoy, E E. and Dellen, B. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Execution of a Dual-Object Pushing Action with Semantic Event Chains},
      pages = {576-583},
      booktitle = {IEEE-RAS Int. Conf. on Humanoid Robots},
      year = {2011},
      doi = {10.1109/Humanoids.2011.6100833},
      abstract = {Execution of a manipulation after learning from demonstration many times requires intricate planning and control systems or some form of manual guidance for a robot. Here we present a framework for manipulation execution based on the so called "Semantic Event Chain" which is an abstract description of relations between the objects in the scene. It captures the change of those relations during a manipulation and thereby provides the decisive temporal anchor points by which a manipulation is critically defined. Using semantic event chains a model of a manipulation can be learned. We will show that it is possible to add the required control parameters (the spatial anchor points) to this model, which can then be executed by a robot in a fully autonomous way. The process of learning and execution of semantic event chains is explained using a box pushing example.}}
    Abstract: Execution of a manipulation after learning from demonstration many times requires intricate planning and control systems or some form of manual guidance for a robot. Here we present a framework for manipulation execution based on the so called "Semantic Event Chain" which is an abstract description of relations between the objects in the scene. It captures the change of those relations during a manipulation and thereby provides the decisive temporal anchor points by which a manipulation is critically defined. Using semantic event chains a model of a manipulation can be learned. We will show that it is possible to add the required control parameters (the spatial anchor points) to this model, which can then be executed by a robot in a fully autonomous way. The process of learning and execution of semantic event chains is explained using a box pushing example.
    Review:
    Aksoy, E E. and Abramov, A. and Dörr, J. and Kejun, N. and Dellen, B. and Wörgötter, F. (2011).
    Learning the semantics of object-action relations by observation. The International Journal of Robotics Research September, 1229-1249, 30.
    BibTeX:
    @article{aksoyabramovdoerr2011,
      author = {Aksoy, E E. and Abramov, A. and Dörr, J. and Kejun, N. and Dellen, B. and Wörgötter, F.},
      title = {Learning the semantics of object-action relations by observation},
      pages = {1229-1249},
      journal = {The International Journal of Robotics Research September},
      year = {2011},
      volume= {30},
      url = {http://ijr.sagepub.com/content/30/10/1229.abstract},
      abstract = {Recognizing manipulations performed by a human and the transfer and execution of this by a robot is a difficult problem. We address this in the current study by introducing a novel representation of the relations between objects at decisive time points during a manipulation. Thereby, we encode the essential changes in a visual scenery in a condensed way such that a robot can recognize and learn a manipulation without prior object knowledge. To achieve this we continuously track image segments in the video and construct a dynamic graph sequence. Topological transitions of those graphs occur whenever a spatial relation between some segments has changed in a discontinuous way and these moments are stored in a transition matrix called the semantic event chain (SEC). We demonstrate that these time points are highly descriptive for distinguishing between different manipulations. Employing simple sub-string search algorithms, SECs can be compared and type-similar manipulations can be recognized with high confidence. As the approach is generic, statistical learning can be used to find the archetypal SEC of a given manipulation class. The performance of the algorithm is demonstrated on a set of real videos showing hands manipulating various objects and performing different actions. In experiments with a robotic arm, we show that the SEC can be learned by observing human manipulations, transferred to a new scenario, and then reproduced by the machine.}}
    Abstract: Recognizing manipulations performed by a human and the transfer and execution of this by a robot is a difficult problem. We address this in the current study by introducing a novel representation of the relations between objects at decisive time points during a manipulation. Thereby, we encode the essential changes in a visual scenery in a condensed way such that a robot can recognize and learn a manipulation without prior object knowledge. To achieve this we continuously track image segments in the video and construct a dynamic graph sequence. Topological transitions of those graphs occur whenever a spatial relation between some segments has changed in a discontinuous way and these moments are stored in a transition matrix called the semantic event chain (SEC). We demonstrate that these time points are highly descriptive for distinguishing between different manipulations. Employing simple sub-string search algorithms, SECs can be compared and type-similar manipulations can be recognized with high confidence. As the approach is generic, statistical learning can be used to find the archetypal SEC of a given manipulation class. The performance of the algorithm is demonstrated on a set of real videos showing hands manipulating various objects and performing different actions. In experiments with a robotic arm, we show that the SEC can be learned by observing human manipulations, transferred to a new scenario, and then reproduced by the machine.
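    The row-wise comparison of semantic event chains by sub-string search can be illustrated as follows; the relation symbols and the exact similarity measure are assumptions made for this sketch:

    def sec_row_similarity(row_a, row_b):
        """Slide the shorter relation sequence over the longer one and return
        the best fraction of matching entries. Rows are sequences of spatial
        relation symbols (e.g. 'A' absent, 'N' no contact, 'T' touching)."""
        short, long_ = sorted((list(row_a), list(row_b)), key=len)
        best = 0
        for i in range(len(long_) - len(short) + 1):
            window = long_[i:i + len(short)]
            best = max(best, sum(s == w for s, w in zip(short, window)))
        return best / len(short)

    # Example: two rows from different demonstrations of the same manipulation
    print(sec_row_similarity("NNTTN", "NTTN"))   # 1.0, since the shorter row appears contiguously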
    Review:
    Abramov, A. and Kulvicius, T. and Wörgötter, F. and Dellen, B. (2011).
    Real-Time Image Segmentation on a GPU. Facing the Multicore-Challenge, 131-142, 6310. DOI: 10.1007/978-3-642-16233-6_14.
    BibTeX:
    @inproceedings{abramovkulviciuswoergoetter2011,
      author = {Abramov, A. and Kulvicius, T. and Wörgötter, F. and Dellen, B.},
      title = {Real-Time Image Segmentation on a GPU},
      pages = {131-142},
      booktitle = {Facing the Multicore-Challenge},
      year = {2011},
      volume= {6310},
      doi = {10.1007/978-3-642-16233-6_14},
      abstract = {Efficient segmentation of color images is important for many applications in computer vision. Non-parametric solutions are required in situations where little or no prior knowledge about the data is available. In this paper, we present a novel parallel image segmentation algorithm which segments images in real-time in a non-parametric way. The algorithm finds the equilibrium states of a Potts model in the superparamagnetic phase of the system. Our method maps perfectly onto the Graphics Processing Unit (GPU) architecture and has been implemented using the NVIDIA Compute Unified Device Architecture (CUDA) framework. For images of 256 x 320 pixels we obtained a frame rate of 30 Hz, which demonstrates the applicability of the algorithm to video-processing tasks in real time.}}
    Abstract: Efficient segmentation of color images is important for many applications in computer vision. Non-parametric solutions are required in situations where little or no prior knowledge about the data is available. In this paper, we present a novel parallel image segmentation algorithm which segments images in real-time in a non-parametric way. The algorithm finds the equilibrium states of a Potts model in the superparamagnetic phase of the system. Our method maps perfectly onto the Graphics Processing Unit (GPU) architecture and has been implemented using the NVIDIA Compute Unified Device Architecture (CUDA) framework. For images of 256 x 320 pixels we obtained a frame rate of 30 Hz, which demonstrates the applicability of the algorithm to video-processing tasks in real time.
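    For orientation, the segmentation principle referred to above can be sketched on the CPU in a few lines: pixels carry Potts spins, couplings grow with colour similarity, and Metropolis updates at a moderate temperature let homogeneous regions align their spins. Grid size, couplings, temperature and iteration count below are illustrative assumptions; the paper's GPU/CUDA implementation is not reproduced.
    # CPU sketch of Potts-model segmentation in the superparamagnetic-clustering spirit.
    import numpy as np

    rng = np.random.default_rng(0)
    H, W, Q = 32, 32, 4                       # image size and number of spin states
    img = np.zeros((H, W))
    img[:, W // 2:] = 1.0                     # toy image: two flat regions
    spins = rng.integers(0, Q, size=(H, W))

    def coupling(a, b, sigma=0.1):
        """Ferromagnetic coupling, strong for similar pixel values."""
        return np.exp(-((a - b) ** 2) / (2 * sigma ** 2))

    T = 0.5                                   # temperature (assumed to lie in the superparamagnetic range)
    for _ in range(50_000):
        y, x = rng.integers(0, H), rng.integers(0, W)
        new = rng.integers(0, Q)
        dE = 0.0
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                J = coupling(img[y, x], img[ny, nx])
                dE += J * (int(spins[ny, nx] == spins[y, x]) - int(spins[ny, nx] == new))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[y, x] = new

    # After equilibration, pixels of a homogeneous region tend to share a spin label
    # far more often than chance; the labels then act as segment identities.
    print(np.unique(spins[:, : W // 2]).size, np.unique(spins[:, W // 2:]).size)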
    Review:
    Agostini, A. and Torras, C. and Wörgötter, F. (2011).
    Integrating Task Planning and Interactive Learning for Robots to Work in Human Environments. 22nd International Joint Conference on Artificial Intelligence IJCAI11. Barcelona, Spain, 2386-2391.
    BibTeX:
    @inproceedings{agostinitorraswoergoetter2011,
      author = {Agostini, A. and Torras, C. and Wörgötter, F.},
      title = {Integrating Task Planning and Interactive Learning for Robots to Work in Human Environments},
      pages = {2386-2391},
      booktitle = {22nd International Joint Conference on Artificial Intelligence IJCAI11. Barcelona, Spain},
      year = {2011},
      url = {http://www.iri.upc.edu/publications/show/1247},
      abstract = {Human environments are challenging for robots, which need to be trainable by lay people and learn new behaviours rapidly without disrupting much the ongoing activity. A system that integrates AI techniques for planning and learning is here proposed to satisfy these strong demands. The approach rapidly learns planning operators from few action experiences using a competitive strategy where many alternatives of cause-effect explanations are evaluated in parallel, and the most successful ones are used to generate the operators. The success of a cause-effect explanation is evaluated by a probabilistic estimate that compensates the lack of experience, producing more confident estimations and speeding up the learning in relation to other known estimates. The system operates without task interruption by integrating in the planning-learning loop a human teacher that supports the planner in making decisions. All the mechanisms are integrated and synchronized in the robot using a general decision-making framework. The feasibility and scalability of the architecture are evaluated in two different robot platforms: a Stäubli arm, and the humanoid ARMAR III.}}
    Abstract: Human environments are challenging for robots, which need to be trainable by lay people and learn new behaviours rapidly without disrupting much the ongoing activity. A system that integrates AI techniques for planning and learning is here proposed to satisfy these strong demands. The approach rapidly learns planning operators from few action experiences using a competitive strategy where many alternatives of cause-effect explanations are evaluated in parallel, and the most successful ones are used to generate the operators. The success of a cause-effect explanation is evaluated by a probabilistic estimate that compensates the lack of experience, producing more confident estimations and speeding up the learning in relation to other known estimates. The system operates without task interruption by integrating in the planning-learning loop a human teacher that supports the planner in making decisions. All the mechanisms are integrated and synchronized in the robot using a general decision-making framework. The feasibility and scalability of the architecture are evaluated in two different robot platforms: a Stäubli arm, and the humanoid ARMAR III.
    Review:
    Liu, G. and Wörgötter, F. and Markelic, I. (2011).
    Square-Root Sigma-Point Information Filter for Nonlinear Estimation and Sensor Fusion. IEEE Transactions on Automatic Control, 2945 - 2950, 57, 11. DOI: 10.1109/TAC.2012.2193708.
    BibTeX:
    @article{liuwoergoettermarkelic2011a,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Square-Root Sigma-Point Information Filter for Nonlinear Estimation and Sensor Fusion},
      pages = {2945 - 2950},
      journal = {IEEE Transactions on Automatic Control},
      year = {2011},
      volume= {57},
      number = {11},
      month = {11},
      doi = {10.1109/TAC.2012.2193708},
      abstract = {The sigma-point information filters employ a number of deterministic sigma-points to calculate the mean and covariance of a random variable which undergoes a nonlinear transformation. These sigma-points can be generated by the unscented transform or Stirling's interpolation, which corresponds to the unscented information filter (UIF) and the central difference information filter (CDIF), respectively. In this technical note, we develop the square-root extensions of UIF and CDIF, which have better numerical properties than the original versions, e.g., improved numerical accuracy, double order precision and preservation of symmetry. We also show that the square-root unscented information filter (SRUIF) might lose the positive-definiteness due to the negative Cholesky update, whereas the square-root central difference information filter (SRCDIF) has only positive Cholesky updates. Therefore, the SRCDIF is preferable to the SRUIF concerning numerical stability.}}
    Abstract: The sigma-point information filters employ a number of deterministic sigma-points to calculate the mean and covariance of a random variable which undergoes a nonlinear transformation. These sigma-points can be generated by the unscented transform or Stirling's interpolation, which corresponds to the unscented information filter (UIF) and the central difference information filter (CDIF), respectively. In this technical note, we develop the square-root extensions of UIF and CDIF, which have better numerical properties than the original versions, e.g., improved numerical accuracy, double order precision and preservation of symmetry. We also show that the square-root unscented information filter (SRUIF) might lose the positive-definiteness due to the negative Cholesky update, whereas the square-root central difference information filter (SRCDIF) has only positive Cholesky updates. Therefore, the SRCDIF is preferable to the SRUIF concerning numerical stability.
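    The sigma-point machinery the note builds on can be summarised by the unscented transform: a Cholesky factor of the covariance (the quantity that square-root variants propagate directly) generates 2n+1 deterministic points that are pushed through the nonlinearity and recombined. The sketch below uses common textbook scaling parameters and does not reproduce the square-root information-filter updates of the paper.
    # Textbook unscented transform (not the paper's SRUIF/SRCDIF recursions).
    import numpy as np

    def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
        n = mean.size
        lam = alpha ** 2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)            # matrix square root
        sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + (1 - alpha ** 2 + beta)
        y = np.array([f(p) for p in sigma])                # propagate through nonlinearity
        y_mean = wm @ y
        diff = y - y_mean
        y_cov = (wc[:, None] * diff).T @ diff
        return y_mean, y_cov

    # Example: propagate a Gaussian through a polar-to-Cartesian conversion.
    f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
    m, P = np.array([1.0, 0.5]), np.diag([0.01, 0.05])
    print(unscented_transform(m, P, f))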
    Review:
    Agostini, A. and Wörgötter, F. and Torras, C. (2010).
    Quick Learning of Cause-Effects Relevant for Robot Action. Technical Report IRI-TR-10-01, Institut de Robotica i Informatica Industrial, CSIC-UPC, 1-18.
    BibTeX:
    @techreport{agostiniwoergoettertorras2010,
      author = {Agostini, A. and Wörgötter, F. and Torras, C.},
      title = {Quick Learning of Cause-Effects Relevant for Robot Action},
      pages = {1 -- 18},
      year = {2010},
      number = {IRI-TR-10-01},
      institution = {Institut de Robotica i Informatica Industrial, CSIC-UPC},
      url = {http://www.iri.upc.edu/download/scidoc/1222},
      abstract = {In this work we propose a new paradigm for the rapid learning of cause-effect relations relevant for task execution. Learning occurs automatically from action experiences by means of a novel constructive learning approach designed for applications where there is no previous knowledge of the task or world model, examples are provided on-line during run time, and the number of examples is small compared to the number of incoming experiences. These limitations pose obstacles for the existing constructive learning methods, where on-line learning is either not considered, a significant amount of prior knowledge has to be provided, or a large number of experiences or training streams are required. The system is implemented and evaluated in a humanoid robot platform using a decision-making framework that integrates a planner, the proposed learning mechanism, and a human teacher that supports the planner in the action selection. Results demonstrate the feasibility of the system for decision making in robotic applications.}}
    Abstract: In this work we propose a new paradigm for the rapid learning of cause-effect relations relevant for task execution. Learning occurs automatically from action experiences by means of a novel constructive learning approach designed for applications where there is no previous knowledge of the task or world model, examples are provided on-line during run time, and the number of examples is small compared to the number of incoming experiences. These limitations pose obstacles for the existing constructive learning methods, where on-line learning is either not considered, a significant amount of prior knowledge has to be provided, or a large number of experiences or training streams are required. The system is implemented and evaluated in a humanoid robot platform using a decision-making framework that integrates a planner, the proposed learning mechanism, and a human teacher that supports the planner in the action selection. Results demonstrate the feasibility of the system for decision making in robotic applications.
    Review:
    Thompson, A M. and Porr, B. and Wörgötter, F. (2010).
    Learning and Reversal Learning in the Subcortical Limbic System: A Computational Model. Adaptive Behavior, 211-236, 18, 3-4. DOI: 10.1177/1059712309353612.
    BibTeX:
    @article{thompsonporrwoergoetter2010,
      author = {Thompson, A M. and Porr, B. and Wörgötter, F.},
      title = {Learning and Reversal Learning in the Subcortical Limbic System: A Computational Model},
      pages = {211-236},
      journal = {Adaptive Behavior},
      year = {2010},
      volume= {18},
      number = {3-4},
      url = {http://adb.sagepub.com/content/18/3-4/211},
      doi = {10.1177/1059712309353612},
      abstract = {We present a biologically inspired model of the subcortical nuclei of the limbic system that is capable of performing reversal learning in a food-seeking task. In contrast to previous models, the reversal is modeled by the inhibition of the previously learned behavior. This allows the reinstatement of behavior to occur quickly, as observed in animal behavior. In this model learning is achieved by implementing isotropic sequence order learning and a third factor (ISO-3) that triggers learning at relevant moments. This third factor is modeled by phasic and tonic dopaminergic activity which respectively enable long-term potentiation (LTP) to occur during acquisition, and long-term depression (LTD) to occur when adjustments in learned behaviors are required. It will be shown how the nucleus accumbens core uses conditioned reinforcers to invigorate instrumental responding while relatively strong LTD in the shell influences the core through a shell-ventral pallido-mediodorsal pathway. This pathway functions as a feed-forward switching mechanism and enables behavioral flexibility.}}
    Abstract: We present a biologically inspired model of the subcortical nuclei of the limbic system that is capable of performing reversal learning in a food-seeking task. In contrast to previous models, the reversal is modeled by the inhibition of the previously learned behavior. This allows the reinstatement of behavior to occur quickly, as observed in animal behavior. In this model learning is achieved by implementing isotropic sequence order learning and a third factor (ISO-3) that triggers learning at relevant moments. This third factor is modeled by phasic and tonic dopaminergic activity which respectively enable long-term potentiation (LTP) to occur during acquisition, and long-term depression (LTD) to occur when adjustments in learned behaviors are required. It will be shown how the nucleus accumbens core uses conditioned reinforcers to invigorate instrumental responding while relatively strong LTD in the shell influences the core through a shell-ventral pallido-mediodorsal pathway. This pathway functions as a feed-forward switching mechanism and enables behavioral flexibility.
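    The rule family referred to here, differential Hebbian (ISO) learning gated by a third factor, can be illustrated in a few lines: a predictive input's weight changes in proportion to that input times the derivative of the output, but only while a relevance (dopamine-like) signal is active. Signal shapes, learning rate and gating window below are toy assumptions, not the published limbic model.
    # Toy three-factor differential Hebbian (ISO-3-style) weight update.
    import numpy as np

    T, dt, mu = 2000, 1.0, 0.01
    t = np.arange(T)
    u_pred   = np.exp(-0.5 * ((t - 800) / 30.0) ** 2)    # early, predictive input
    u_reflex = np.exp(-0.5 * ((t - 900) / 30.0) ** 2)    # later, reflex-eliciting input
    third    = (np.abs(t - 900) < 120).astype(float)     # relevance signal around the reflex

    w = 0.0
    for k in range(1, T):
        v      = w * u_pred[k] + 1.0 * u_reflex[k]        # output: learned + fixed reflex path
        v_prev = w * u_pred[k - 1] + 1.0 * u_reflex[k - 1]
        dv = (v - v_prev) / dt
        w += mu * third[k] * u_pred[k] * dv * dt          # gated differential Hebbian update

    print(f"final predictive weight: {w:.4f}")            # grows because u_pred precedes u_reflex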
    Review:
    Tetzlaff, C. and Okujeni, S. and Egert, U. and Wörgötter, F. and Butz, M. (2010).
    Self-Organized Criticality in Developing Neuronal Networks. PLoS Comput Biol, 6, 12. DOI: 10.1371/journal.pcbi.1001013.
    BibTeX:
    @article{tetzlaffokujeniegert2010,
      author = {Tetzlaff, C. and Okujeni, S. and Egert, U. and Wörgötter, F. and Butz, M.},
      title = {Self-Organized Criticality in Developing Neuronal Networks},
      journal = {PLoS Comput Biol},
      year = {2010},
      volume= {6},
      number = {12},
      url = {http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1001013},
      doi = {10.1371/journal.pcbi.1001013},
      abstract = {Recently evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales and this is beneficial with respect to information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks will reach and stabilize criticality. Here we monitor the development between 13 and 95 days in vitro of cortical cell cultures (n = 20) and find four different phases, related to their morphological maturation: an initial low-activity state is followed by a supercritical and then a subcritical one until the network finally reaches stable criticality. Using network modeling and mathematical analysis we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, the synaptic development in the model is determined by the drive of the neurons to adjust their connectivity for reaching, on average, firing rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro.}}
    Abstract: Recently evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales and this is beneficial with respect to information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks will reach and stabilize criticality. Here we monitor the development between 13 and 95 days in vitro of cortical cell cultures (n = 20) and find four different phases, related to their morphological maturation: an initial low-activity state is followed by a supercritical and then a subcritical one until the network finally reaches stable criticality. Using network modeling and mathematical analysis we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, the synaptic development in the model is determined by the drive of the neurons to adjust their connectivity for reaching, on average, firing rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro.
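    The sub-, super- and near-critical regimes discussed in the abstract can be illustrated with a generic branching process: each active unit activates on average sigma units in the next time step, and only near sigma = 1 do avalanche sizes span many scales. This is a textbook-style illustration, not the developmental network model of the paper.
    # Branching-process toy model of avalanche-size statistics around criticality.
    import numpy as np

    rng = np.random.default_rng(1)

    def avalanche_size(sigma, max_size=10_000):
        """One avalanche: each active unit triggers Poisson(sigma) units in the next step."""
        active, size = 1, 1
        while active > 0 and size < max_size:
            active = rng.poisson(sigma, size=active).sum()
            size += active
        return size

    for sigma in (0.8, 1.0, 1.2):                # sub-, near-, super-critical
        sizes = [avalanche_size(sigma) for _ in range(1000)]
        print(f"sigma={sigma}: mean size {np.mean(sizes):8.1f}, max size {max(sizes)}")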
    Review:
    Steingrube, S. and Timme, M. and Wörgötter, F. and Manoonpong, P. (2010).
    Self-Organized Adaptation of Simple Neural Circuits Enables Complex Robot Behavior. Nature Physics, 224-230, 6. DOI: 10.1038/nphys1508.
    BibTeX:
    @article{steingrubetimmewoergoetter2010,
      author = {Steingrube, S. and Timme, M. and Wörgötter, F. and Manoonpong, P.},
      title = {Self-Organized Adaptation of Simple Neural Circuits Enables Complex Robot Behavior},
      pages = {224-230},
      journal = {Nature Physics},
      year = {2010},
      volume= {6},
      doi = {10.1038/nphys1508},
      abstract = {Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Present robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors by means of a simple neural control circuit, thereby generating 11 basic behavioural patterns (for example, orienting, taxis, self-protection and various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and also enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.}}
    Abstract: Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Present robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors by means of a simple neural control circuit, thereby generating 11 basic behavioural patterns (for example, orienting, taxis, self-protection and various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and also enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.
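    As a generic illustration of the chaos-control principle invoked above (not the paper's two-neuron circuit), the sketch below lets a chaotic map wander until it approaches an unstable fixed point and then stabilizes it with a small proportional feedback; the map, gain and threshold are arbitrary choices.
    # Generic chaos control demo on the logistic map.
    r = 4.0                        # logistic map in its chaotic regime
    x_star = 1.0 - 1.0 / r         # unstable fixed point (x* = 0.75)
    K = 2.0                        # gain chosen so the controlled slope vanishes at x*
    x, switched_on = 0.1234, None

    for n in range(600):
        u = 0.0
        if switched_on is not None or abs(x - x_star) < 0.02:
            if switched_on is None:
                switched_on = n          # the orbit has come close enough to the target
            u = K * (x - x_star)         # small corrective feedback ("kick")
        x = r * x * (1.0 - x) + u

    print(f"control engaged at step {switched_on}, final state x = {x:.6f} (target {x_star})")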
    Review:
    Schröder-Schetelig, J. and Manoonpong, P. and Wörgötter, F. (2010).
    Using efference copy and a forward internal model for adaptive biped walking. Autonomous Robots, 357-366, 29, 3-4. DOI: 10.1007/s10514-010-9199-7.
    BibTeX:
    @article{schroederscheteligmanoonpongwoergoe,
      author = {Schröder-Schetelig, J. and Manoonpong, P. and Wörgötter, F.},
      title = {Using efference copy and a forward internal model for adaptive biped walking},
      pages = {357-366},
      journal = {Autonomous Robots},
      year = {2010},
      volume= {29},
      number = {3-4},
      language = {English},
      publisher = {Springer US},
      doi = {10.1007/s10514-010-9199-7},
      abstract = {To behave properly in an unknown environment, animals or robots must distinguish external from self-generated stimuli on their sensors. The biologically inspired concepts of efference copy and internal model have been successfully applied to a number of robot control problems. Here we present an application of this for our dynamic walking robot RunBot. We use efference copies of the motor commands with a simple forward internal model to predict the expected self-generated acceleration during walking. The difference to the actually measured acceleration is then used to stabilize the walking on terrains with changing slopes through its upper body component controller. As a consequence, the controller drives the upper body component (UBC) to lean forwards/backwards as soon as an error occurs, resulting in dynamically stable walking. We have evaluated the performance of the system on four different track configurations. Furthermore, we believe that the experimental studies pursued here will sharpen our understanding of how the efference copies influence dynamic locomotion control to the benefit of modern neural control strategies in robots.}}
    Abstract: To behave properly in an unknown environment, animals or robots must distinguish external from self-generated stimuli on their sensors. The biologically inspired concepts of efference copy and internal model have been successfully applied to a number of robot control problems. Here we present an application of this for our dynamic walking robot RunBot. We use efference copies of the motor commands with a simple forward internal model to predict the expected self-generated acceleration during walking. The difference to the actually measured acceleration is then used to stabilize the walking on terrains with changing slopes through its upper body component controller. As a consequence, the controller drives the upper body component (UBC) to lean forwards/backwards as soon as an error occurs, resulting in dynamically stable walking. We have evaluated the performance of the system on four different track configurations. Furthermore, we believe that the experimental studies pursued here will sharpen our understanding of how the efference copies influence dynamic locomotion control to the benefit of modern neural control strategies in robots.
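    The efference-copy scheme described above can be condensed into a few lines: a copy of the motor command is passed through a forward model to predict the expected self-generated acceleration, and the residual between measured and predicted acceleration (the externally caused part, e.g. a slope) drives the upper-body lean. The forward model, gains and disturbance below are toy assumptions, not RunBot's controller.
    # Toy efference-copy / forward-model error signal driving an upper-body lean.
    import numpy as np

    gain_model = 0.8          # assumed forward model: acc_expected = gain * command
    k_lean = 0.5              # assumed mapping from error to lean angle

    def prediction_error(command, slope_acc, noise=0.02):
        acc_measured = gain_model * command + slope_acc + noise * np.random.randn()
        acc_expected = gain_model * command            # efference copy -> forward model
        return acc_measured - acc_expected             # externally caused part only

    lean = 0.0
    for t in range(300):
        slope = 0.3 if 100 <= t < 200 else 0.0         # robot walks onto a ramp
        err = prediction_error(command=1.0, slope_acc=slope)
        lean = 0.9 * lean + k_lean * err               # leaky integration of the error
        if t in (50, 150, 250):
            print(f"t={t:3d}  slope={slope:.1f}  lean angle ~ {lean:+.2f}")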
    Review:
    Pugeault, N. and Wörgötter, F. and Krüger, N. (2010).
    Visual Primitives: Local, Condensed, Semantically Rich Visual Descriptors and their Applications in Robotics. International Journal of Humanoid Robotics, 379-405, 7, 3. DOI: 10.1142/S0219843610002209.
    BibTeX:
    @article{pugeaultwoergoetterkrueger2010,
      author = {Pugeault, N. and Wörgötter, F. and Krüger, N.},
      title = {Visual Primitives: Local, Condensed, Semantically Rich Visual Descriptors and their Applications in Robotics},
      pages = {379-405},
      journal = {International Journal of Humanoid Robotics},
      year = {2010},
      volume= {7},
      number = {3},
      url = {http://www.worldscientific.com/doi/abs/10.1142/S0219843610002209},
      doi = {10.1142/S0219843610002209},
      abstract = {We present a novel representation of visual information, based on local symbolic descriptors, that we call visual primitives. These primitives: (1) combine different visual modalities, (2) associate semantics with local scene information, and (3) reduce the bandwidth while increasing the predictability of the information exchanged across the system. This representation leads to the concept of early cognitive vision, which we define as an intermediate level between dense, signal-based early vision and high-level cognitive vision. The framework's potential is demonstrated in several applications, in particular in the area of robotics and humanoid robotics, which are briefly outlined.}}
    Abstract: We present a novel representation of visual information, based on local symbolic descriptors, that we call visual primitives. These primitives: (1) combine different visual modalities, (2) associate semantics with local scene information, and (3) reduce the bandwidth while increasing the predictability of the information exchanged across the system. This representation leads to the concept of early cognitive vision, which we define as an intermediate level between dense, signal-based early vision and high-level cognitive vision. The framework's potential is demonstrated in several applications, in particular in the area of robotics and humanoid robotics, which are briefly outlined.
    Review:
    Pugeault, N. and Wörgötter, F. and Krüger, N. (2010).
    Disambiguating Multi-Modal Scene Representations Using Perceptual Grouping Constraints. PLoS ONE, 1-16, 5. DOI: 10.1371/journal.pone.0010663.
    BibTeX:
    @article{pugeaultwoergoetterkrueger2010a,
      author = {Pugeault, N. and Wörgötter, F. and Krüger, N.},
      title = {Disambiguating Multi-Modal Scene Representations Using Perceptual Grouping Constraints},
      pages = {1-16},
      journal = {PLoS ONE},
      year = {2010},
      volume= {5},
      doi = {10.1371/journal.pone.0010663},
      abstract = {In its early stages, the visual system suffers from a lot of ambiguity and noise that severely limits the performance of early vision algorithms. This article presents feedback mechanisms between early visual processes, such as perceptual grouping, stereopsis and depth reconstruction, that allow the system to reduce this ambiguity and improve the early representation of visual information. In the first part, the article proposes a local perceptual grouping algorithm that, in addition to commonly used geometric information, makes use of a novel multi-modal measure between local edge/line features. The grouping information is then used to: (1) disambiguate stereopsis by enforcing that stereo matches preserve groups; and (2) correct the reconstruction error due to the image pixel sampling using a linear interpolation over the groups. The integration of mutual feedback between early vision processes is shown to considerably reduce ambiguity and noise without the need for global constraints.}}
    Abstract: In its early stages, the visual system suffers from a lot of ambiguity and noise that severely limits the performance of early vision algorithms. This article presents feedback mechanisms between early visual processes, such as perceptual grouping, stereopsis and depth reconstruction, that allow the system to reduce this ambiguity and improve the early representation of visual information. In the first part, the article proposes a local perceptual grouping algorithm that, in addition to commonly used geometric information, makes use of a novel multi-modal measure between local edge/line features. The grouping information is then used to: (1) disambiguate stereopsis by enforcing that stereo matches preserve groups; and (2) correct the reconstruction error due to the image pixel sampling using a linear interpolation over the groups. The integration of mutual feedback between early vision processes is shown to considerably reduce ambiguity and noise without the need for global constraints.
    Review:
    Pauwels, K. and Krüger, N. and Lappe, M. and Wörgötter, F. and Hulle, M V. (2010).
    A cortical architecture on parallel hardware for motion processing in real time. Journal of Vision, 1-21, 10, 10. DOI: 10.1167/10.10.18.
    BibTeX:
    @article{pauwelskruegerlappe2010,
      author = {Pauwels, K. and Krüger, N. and Lappe, M. and Wörgötter, F. and Hulle, M V.},
      title = {A cortical architecture on parallel hardware for motion processing in real time},
      pages = {1-21},
      journal = {Journal of Vision},
      year = {2010},
      volume= {10},
      number = {10},
      url = {http://www.journalofvision.org/content/10/10/18},
      doi = {10.1167/10.10.18},
      abstract = {Walking through a crowd or driving on a busy street requires monitoring your own movement and that of others. The segmentation of these other, independently moving, objects is one of the most challenging tasks in vision, as it requires fast and accurate computations for the disentangling of independent motion from egomotion, often in cluttered scenes. This is accomplished in our brain by the dorsal visual stream, relying on heavy parallel-hierarchical processing across many areas. This study is the first to utilize the potential of such a design in an artificial vision system. We emulate large parts of the dorsal stream in an abstract way and implement an architecture with six interdependent feature extraction stages (e.g., edges, stereo, optical flow). The computationally highly demanding combination of these features is used to reliably extract moving objects in real time. This way, utilizing the advantages of parallel-hierarchical design, we arrive at a novel and powerful artificial vision system that approaches the richness, speed, and accuracy of visual processing in biological systems.}}
    Abstract: Walking through a crowd or driving on a busy street requires monitoring your own movement and that of others. The segmentation of these other, independently moving, objects is one of the most challenging tasks in vision, as it requires fast and accurate computations for the disentangling of independent motion from egomotion, often in cluttered scenes. This is accomplished in our brain by the dorsal visual stream, relying on heavy parallel-hierarchical processing across many areas. This study is the first to utilize the potential of such a design in an artificial vision system. We emulate large parts of the dorsal stream in an abstract way and implement an architecture with six interdependent feature extraction stages (e.g., edges, stereo, optical flow). The computationally highly demanding combination of these features is used to reliably extract moving objects in real time. This way, utilizing the advantages of parallel-hierarchical design, we arrive at a novel and powerful artificial vision system that approaches the richness, speed, and accuracy of visual processing in biological systems.
    Review:
    Ning, K. and Wörgötter, F. (2010).
    To Paint What Is Seen: A System Implementation of a Novel Conceptual Hyper-Redundant Chain Robot With Monocular Vision. ISR/ROBOTIK 2010 41st International Symposium on Robotics, 722-727.
    BibTeX:
    @inproceedings{ningwoergoetter2010,
      author = {Ning, K. and Wörgötter, F.},
      title = {To Paint What Is Seen: A System Implementation of a Novel Conceptual Hyper-Redundant Chain Robot With Monocular Vision},
      pages = {722-727},
      booktitle = {ISR/ROBOTIK 2010 41st International Symposium on Robotics},
      year = {2010},
      abstract = {This paper presents our system-level implementation on operating a hyper-redundant chain robot (HRCR) with monocular vision. The task is to let our original HRCR prototype, 3D-Trunk, paint what it sees. The involved processes, technical issues, and solutions for achieving this task are described in this paper. This work also represents an implementation experiment on a vision-based human-machine interface, potentially useful for future research and applications.}}
    Abstract: This paper presents our system-level implementation on operating a hyper-redundant chain robot (HRCR) with monocular vision. The task is to let our original HRCR prototype, 3D-Trunk, paint what it sees. The involved processes, technical issues, and solutions for achieving this task are described in this paper. This work also represents an implementation experiment on a vision-based human-machine interface, potentially useful for future research and applications.
    Review:
    Liu, G. and Wörgötter, F. and Markelic, I. (2010).
    Combining Statistical Hough Transform and Particle Filter for robust lane detection and tracking. Intelligent Vehicles Symposium IV, 2010 IEEE, 993 -997. DOI: 10.1109/IVS.2010.5548021.
    BibTeX:
    @inproceedings{liuwoergoettermarkelic2010,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Combining Statistical Hough Transform and Particle Filter for robust lane detection and tracking},
      pages = {993 -997},
      booktitle = {Intelligent Vehicles Symposium IV, 2010 IEEE},
      year = {2010},
      doi = {10.1109/IVS.2010.5548021},
      abstract = {Lane detection and tracking is still a challenging task. Here, we combine the recently introduced Statistical Hough Transform (SHT) with a Particle Filter (PF) and show its application for robust lane tracking. The SHT improves the standard Hough Transform (HT), which was shown to work well for lane detection. We use the local descriptors of the SHT as measurement for the PF, and show how a new three-kernel-density-based observation model can be built from the SHT and used with the PF. The application of the former becomes feasible by the reduced computations achieved with the tracking algorithm. We demonstrate the use of the resulting algorithm for lane detection and tracking by applying it to images freed from the perspective effect, achieved by applying Inverse Perspective Mapping (IPM). The presented results show the robustness of the presented algorithm.}}
    Abstract: Lane detection and tracking is still a challenging task. Here, we combine the recently introduced Statistical Hough Transform (SHT) with a Particle Filter (PF) and show its application for robust lane tracking. The SHT improves the standard Hough Transform (HT), which was shown to work well for lane detection. We use the local descriptors of the SHT as measurement for the PF, and show how a new three-kernel-density-based observation model can be built from the SHT and used with the PF. The application of the former becomes feasible by the reduced computations achieved with the tracking algorithm. We demonstrate the use of the resulting algorithm for lane detection and tracking by applying it to images freed from the perspective effect, achieved by applying Inverse Perspective Mapping (IPM). The presented results show the robustness of the presented algorithm.
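    A minimal particle-filter skeleton for this kind of lane tracking, with the lane line parameterised in the Hough domain by an offset rho and an angle theta, is sketched below. The "detector" simply returns a noisy (rho, theta) pair standing in for the statistical Hough transform measurement; the motion model, noise levels and particle count are illustrative assumptions.
    # Toy particle filter tracking a single lane line in (rho, theta) space.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 500                                                  # number of particles
    particles = np.column_stack([rng.uniform(-50, 50, N),    # rho   [pixels]
                                 rng.uniform(-0.3, 0.3, N)]) # theta [rad]

    def detect(true_state):
        """Stand-in for the SHT measurement: noisy rho/theta of the true lane."""
        return true_state + rng.normal(0, [3.0, 0.02])

    true = np.array([10.0, 0.05])
    for t in range(30):
        true += np.array([0.5, 0.002])                       # lane drifts slowly (gentle curve)
        particles += rng.normal(0, [1.0, 0.01], particles.shape)   # predict: random walk
        z = detect(true)                                     # update: Gaussian likelihood
        d = (particles - z) / np.array([3.0, 0.02])
        logw = -0.5 * (d ** 2).sum(axis=1)
        weights = np.exp(logw - logw.max())
        weights /= weights.sum()
        idx = rng.choice(N, size=N, p=weights)               # resample
        particles = particles[idx]

    print("true lane:", true, " estimate:", particles.mean(axis=0))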
    Review:
    Kulvicius, T. and Kolodziejski, C. and Tamosiunaite, M. and Porr, B. and Wörgötter, F. (2010).
    Behavioral analysis of differential hebbian learning in closed-loop systems. Biological Cybernetics, 255-271, 103, 4. DOI: 10.1007/s00422-010-0396-4.
    BibTeX:
    @article{kulviciuskolodziejskitamosiunaite20,
      author = {Kulvicius, T. and Kolodziejski, C. and Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
      title = {Behavioral analysis of differential hebbian learning in closed-loop systems},
      pages = {255-271},
      journal = {Biological Cybernetics},
      year = {2010},
      volume= {103},
      number = {4},
      publisher = {Springer-Verlag},
      doi = {10.1007/s00422-010-0396-4},
      abstract = {Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts which take learning into account, mostly measuring information of inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems for relatively simple cases. In the second part of this study we try to answer the following question: How can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy and entropy measures and investigating their development during learning. This way we can show that within well-specified scenarios there are indeed agents which are optimal with respect to their structure and adaptive properties.}}
    Abstract: Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts which take learning into account, mostly measuring information of inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems for relatively simple cases. In the second part of this study we try to answer the following question: How can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy and entropy measures and investigating their development during learning. This way we can show that within well-specified scenarios there are indeed agents which are optimal with respect to their structure and adaptive properties.
    Review:
    Krüger, N. and Pugeault N. and, E. and Baunegaard With Jensen, L. and Kalkan, S. and Kraft, D. and Jessen, J B. and Pilz, F. and Kjaer-Nielsen, A. and Popovic, M. and Asfour, T. and Piater, J. and Kragic, D. and Wörgötter, F. (2010).
    Early Cognitive Vision as a Front-end for Cognitive Systems. In proceedings of the Workshop of Vision for Cognitive Tasks, at ECCV2010, 1-15.
    BibTeX:
    @inproceedings{kruegerpugeaultnandbaunegaardwithje,
      author = {Krüger, N. and Pugeault N. and, E. and Baunegaard With Jensen, L. and Kalkan, S. and Kraft, D. and Jessen, J B. and Pilz, F. and Kjaer-Nielsen, A. and Popovic, M. and Asfour, T. and Piater, J. and Kragic, D. and Wörgötter, F.},
      title = {Early Cognitive Vision as a Front-end for Cognitive Systems},
      pages = {1-15},
      booktitle = {In proceedings of the Workshop of Vision for Cognitive Tasks, at ECCV2010},
      year = {2010},
      url = {http://workshops.acin.tuwien.ac.at/eccv10/Papers/W06.009.pdf},
      abstract = {We discuss the need for an elaborated in-between stage bridging early vision and cognitive vision, which we call Early Cognitive Vision (ECV). This stage provides semantically rich, disambiguated and largely task-independent scene representations which can be used in many contexts. In addition, the ECV stage is important for generalization processes across objects and actions. We exemplify this with a concrete realisation of an ECV system that has already been used in a variety of application domains.}}
    Abstract: We discuss the need for an elaborated in-between stage bridging early vision and cognitive vision, which we call Early Cognitive Vision (ECV). This stage provides semantically rich, disambiguated and largely task-independent scene representations which can be used in many contexts. In addition, the ECV stage is important for generalization processes across objects and actions. We exemplify this with a concrete realisation of an ECV system that has already been used in a variety of application domains.
    Review:
    Kolodziejski, C. and Tetzlaff, C. and Wörgötter, F. (2010).
    Closed-form treatment of the interactions between neuronal activity and timing-dependent plasticity in networks of linear neurons. Front. Comput. Neurosci, 1-15, 4. DOI: 10.3389/fncom.2010.00134.
    BibTeX:
    @article{kolodziejskitetzlaffwoergoetter2010,
      author = {Kolodziejski, C. and Tetzlaff, C. and Wörgötter, F.},
      title = {Closed-form treatment of the interactions between neuronal activity and timing-dependent plasticity in networks of linear neurons},
      pages = {1-15},
      journal = {Front. Comput. Neurosci},
      year = {2010},
      volume= {4},
      doi = {10.3389/fncom.2010.00134},
      abstract = {Network activity and network connectivity mutually influence each other. Especially for fast processes, like spike-timing-dependent plasticity (STDP), which depends on the interaction of few (two) signals, the question arises how these interactions are continuously altering the behavior and structure of the network. To address this question a time-continuous treatment of plasticity is required. However, this is currently not possible even in simple recurrent network structures. Thus, here we develop for a linear differential Hebbian learning system a method by which we can analytically investigate the dynamics and stability of the connections in recurrent networks. We use noisy periodic external input signals, which through the recurrent connections lead to complex actual ongoing inputs, and observe that large stable ranges emerge in these networks without boundaries or weight-normalization. Somewhat counter-intuitively, we find that about 40% of these cases are obtained with an LTP-dominated STDP curve. Noise can reduce stability in some cases, but generally this does not occur. Instead stable domains are often enlarged. This study is a first step towards a better understanding of the ongoing interactions between activity and plasticity in recurrent networks using STDP. The results suggest that stability of sub-networks should generically be present also in larger structures.}}
    Abstract: Network activity and network connectivity mutually influence each other. Especially for fast processes, like spike-timing-dependent plasticity (STDP), which depends on the interaction of few (two) signals, the question arises how these interactions are continuously altering the behavior and structure of the network. To address this question a time-continuous treatment of plasticity is required. However, this is currently not possible even in simple recurrent network structures. Thus, here we develop for a linear differential Hebbian learning system a method by which we can analytically investigate the dynamics and stability of the connections in recurrent networks. We use noisy periodic external input signals, which through the recurrent connections lead to complex actual ongoing inputs, and observe that large stable ranges emerge in these networks without boundaries or weight-normalization. Somewhat counter-intuitively, we find that about 40% of these cases are obtained with an LTP-dominated STDP curve. Noise can reduce stability in some cases, but generally this does not occur. Instead stable domains are often enlarged. This study is a first step towards a better understanding of the ongoing interactions between activity and plasticity in recurrent networks using STDP. The results suggest that stability of sub-networks should generically be present also in larger structures.
    Review:
    Di Prodi, P. and Porr, B. and Wörgötter, F. (2010).
    A novel information measure for predictive learning in a social system setting. Proceedings of the 11th international conference on Simulation of adaptive behavior: from animals to animats, 511-522. DOI: 10.1007/978-3-642-15193-4_48.
    BibTeX:
    @inproceedings{diprodiporrwoergoetter2010,
      author = {Di Prodi, P. and Porr, B. and Wörgötter, F.},
      title = {A novel information measure for predictive learning in a social system setting},
      pages = {511-522},
      booktitle = {Proceedings of the 11th international conference on Simulation of adaptive behavior: from animals to animats},
      year = {2010},
      url = {http://dx.doi.org/10.1007/978-3-642-15193-4_48},
      doi = {10.1007/978-3-642-15193-4_48},
      abstract = {We introduce a new theoretical framework, based on Shannon's communication theory and on Ashby's law of requisite variety, suitable for artificial agents using predictive learning. The framework quantifies the performance constraints of a predictive adaptive controller as a function of its learning stage. In addition, we formulate a practical measure, based on information flow, that can be applied to adaptive controllers which use Hebbian learning, input correlation learning (ICO/ISO) and temporal difference learning. The framework is also useful in quantifying the social division of tasks in a social group of honest, cooperative food foraging, communicating agents. Simulations are in accordance with Luhmann, who suggested that adaptive agents self-organise by reducing the amount of sensory information or, equivalently, reducing the complexity of the perceived environment from the agent's perspective.}}
    Abstract: We introduce a new theoretical framework, based on Shannon's communication theory and on Ashby's law of requisite variety, suitable for artificial agents using predictive learning. The framework quantifies the performance constraints of a predictive adaptive controller as a function of its learning stage. In addition, we formulate a practical measure, based on information flow, that can be applied to adaptive controllers which use Hebbian learning, input correlation learning (ICO/ISO) and temporal difference learning. The framework is also useful in quantifying the social division of tasks in a social group of honest, cooperative food foraging, communicating agents. Simulations are in accordance with Luhmann, who suggested that adaptive agents self-organise by reducing the amount of sensory information or, equivalently, reducing the complexity of the perceived environment from the agent's perspective.
    Review:
    Dellen, B. and Wessel, R. and Clark, J. W. and Wörgötter, F. (2010).
    Motion processing with wide-field neurons in the retino-tecto-rotundal pathway. Journal of Computational Neuroscience, 47-64, 28, 1. DOI: 10.1007/s10827-009-0186-y.
    BibTeX:
    @article{dellenwesselclark2010,
      author = {Dellen, B. and Wessel, R. and Clark, J. W. and Wörgötter, F.},
      title = {Motion processing with wide-field neurons in the retino-tecto-rotundal pathway},
      pages = {47-64},
      journal = {Journal of Computational Neuroscience},
      year = {2010},
      volume= {28},
      number = {1},
      language = {English},
      publisher = {Springer US},
      doi = {10.1007/s10827-009-0186-y},
      abstract = {The retino-tecto-rotundal pathway is the main visual pathway in non-mammalian vertebrates and has been found to be highly involved in visual processing. Despite the extensive receptive fields of tectal and rotundal wide-field neurons, pattern discrimination tasks suggest a system with high spatial resolution. In this paper, we address the problem of how global processing performed by motion-sensitive wide-field neurons can be brought into agreement with the concept of a local analysis of visual stimuli. As a solution to this problem, we propose a firing-rate model of the retino-tecto-rotundal pathway which describes how spatiotemporal information can be organized and retained by tectal and rotundal wide-field neurons while processing Fourier-based motion in absence of periodic receptive-field structures.}}
    Abstract: The retino-tecto-rotundal pathway is the main visual pathway in non-mammalian vertebrates and has been found to be highly involved in visual processing. Despite the extensive receptive fields of tectal and rotundal wide-field neurons, pattern discrimination tasks suggest a system with high spatial resolution. In this paper, we address the problem of how global processing performed by motion-sensitive wide-field neurons can be brought into agreement with the concept of a local analysis of visual stimuli. As a solution to this problem, we propose a firing-rate model of the retino-tecto-rotundal pathway which describes how spatiotemporal information can be organized and retained by tectal and rotundal wide-field neurons while processing Fourier-based motion in absence of periodic receptive-field structures.
    Review:
    Abramov, A. and Aksoy, E E. and Dörr, J. and Wörgötter, F. and Pauwels, K. and Dellen, B. (2010).
    3d semantic representation of actions from efficient stereo-image-sequence segmentation on GPUs. 5th International Symposium 3D Data Processing, Visualization and Transmission.
    BibTeX:
    @inproceedings{abramovaksoydoerr2010,
      author = {Abramov, A. and Aksoy, E E. and Dörr, J. and Wörgötter, F. and Pauwels, K. and Dellen, B.},
      title = {3d semantic representation of actions from efficient stereo-image-sequence segmentation on GPUs},
      booktitle = {5th International Symposium 3D Data Processing, Visualization and Transmission},
      year = {2010},
      abstract = {A novel real-time framework for model-free stereo-video segmentation and stereo-segment tracking is presented, combining real-time optical flow and stereo with image segmentation running separately on two GPUs. The stereo-segment tracking algorithm achieves a frame rate of 23 Hz for regular videos with a frame size of 256x320 pixels and nearly real time for stereo videos. The computed stereo segments are used to construct 3D segment graphs, from which main graphs, representing a relevant change in the scene, are extracted, which allow us to represent a movie of e.g. 396 original frames by only 12 graphs, each containing only a small number of nodes, providing a condensed description of the scene while preserving data-intrinsic semantics. Using this method, human activities, e.g. handling of objects, can be encoded in an efficient way. The method has potential applications for manipulation action recognition and learning, and provides a vision front-end for applications in cognitive robotics.}}
    Abstract: A novel real-time framework for model-free stereo-video segmentation and stereo-segment tracking is presented, combining real-time optical flow and stereo with image segmentation running separately on two GPUs. The stereo-segment tracking algorithm achieves a frame rate of 23 Hz for regular videos with a frame size of 256x320 pixels and nearly real time for stereo videos. The computed stereo segments are used to construct 3D segment graphs, from which main graphs, representing a relevant change in the scene, are extracted, which allow us to represent a movie of e.g. 396 original frames by only 12 graphs, each containing only a small number of nodes, providing a condensed description of the scene while preserving data-intrinsic semantics. Using this method, human activities, e.g. handling of objects, can be encoded in an efficient way. The method has potential applications for manipulation action recognition and learning, and provides a vision front-end for applications in cognitive robotics.
    Review:
    Manoonpong, P. and Wörgötter, F. and Morimoto, J. (2010).
    Extraction of Reward-Related Feature Space Using Correlation-Based and Reward-Based Learning Methods. ICONIP 1, 414-421, 6443. DOI: 10.1007/978-3-642-17537-4_51.
    BibTeX:
    @inproceedings{manoonpongwoergoettermorimoto2010,
      author = {Manoonpong, P. and Wörgötter, F. and Morimoto, J.},
      title = {Extraction of Reward-Related Feature Space Using Correlation-Based and Reward-Based Learning Methods},
      pages = {414-421},
      booktitle = {ICONIP 1},
      year = {2010},
      volume= {6443},
      doi = {10.1007/978-3-642-17537-4_51},
      abstract = {The purpose of this article is to present a novel learning paradigm that extracts reward-related low-dimensional state space by combining correlation-based learning like Input Correlation Learning (ICO learning) and reward-based learning like Reinforcement Learning (RL). Since ICO learning can quickly find a correlation between a state and an unwanted condition (e.g., failure), we use it to extract low-dimensional feature space in which we can find a failure avoidance policy. Then, the extracted feature space is used as a prior for RL. If we can extract proper feature space for a given task, a model of the policy can be simple and the policy can be easily improved. The performance of this learning paradigm is evaluated through simulation of a cart-pole system. As a result, we show that the proposed method can enhance the feature extraction process to find the proper feature space for a pole balancing policy. That is, it allows a policy to effectively stabilize the pole in the largest domain of initial conditions compared to only using ICO learning or only using RL without any prior knowledge.}}
    Abstract: The purpose of this article is to present a novel learning paradigm that extracts reward-related low-dimensional state space by combining correlation-based learning like Input Correlation Learning (ICO learning) and reward-based learning like Reinforcement Learning (RL). Since ICO learning can quickly find a correlation between a state and an unwanted condition (e.g., failure), we use it to extract low-dimensional feature space in which we can find a failure avoidance policy. Then, the extracted feature space is used as a prior for RL. If we can extract proper feature space for a given task, a model of the policy can be simple and the policy can be easily improved. The performance of this learning paradigm is evaluated through simulation of a cart-pole system. As a result, we show that the proposed method can enhance the feature extraction process to find the proper feature space for a pole balancing policy. That is, it allows a policy to effectively stabilize the pole in the largest domain of initial conditions compared to only using ICO learning or only using RL without any prior knowledge.
    Review:
    Manoonpong, P. and Pasemann, F. and Kolodziejski, C. and Wörgötter, F. (2010).
    Designing Simple Nonlinear Filters Using Hysteresis of Single Recurrent Neurons for Acoustic Signal Recognition in Robots. ICANN 1, 374-383, 6352. DOI: 10.1007/978-3-642-15819-3_50.
    BibTeX:
    @inproceedings{manoonpongpasemannkolodziejski2010,
      author = {Manoonpong, P. and Pasemann, F. and Kolodziejski, C. and Wörgötter, F.},
      title = {Designing Simple Nonlinear Filters Using Hysteresis of Single Recurrent Neurons for Acoustic Signal Recognition in Robots},
      pages = {374-383},
      booktitle = {ICANN 1},
      year = {2010},
      volume= {6352},
      doi = {10.1007/978-3-642-15819-3_50},
      abstract = {In this article we exploit the discrete-time dynamics of a single neuron with self-connection to systematically design simple signal filters. Due to hysteresis effects and transient dynamics, this single neuron behaves as an adjustable low-pass filter for specific parameter configurations. Extending this neuro-module by two more recurrent neurons leads to versatile high- and band-pass filters. The approach presented here helps to understand how the dynamical properties of recurrent neural networks can be used for filter design. Furthermore, it gives guidance to a new way of implementing sensory preprocessing for acoustic signal recognition in autonomous robots.}}
    Abstract: In this article we exploit the discrete-time dynamics of a single neuron with self-connection to systematically design simple signal filters. Due to hysteresis effects and transient dynamics, this single neuron behaves as an adjustable low-pass filter for specific parameter configurations. Extending this neuro-module by two more recurrent neurons leads to versatile high- and band-pass filters. The approach presented here helps to understand how the dynamical properties of recurrent neural networks can be used for filter design. Furthermore, it gives guidance to a new way of implementing sensory preprocessing for acoustic signal recognition in autonomous robots.
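    The hysteresis effect exploited here can be demonstrated numerically: a discrete-time neuron with a strong excitatory self-connection switches on and off at different input levels, so brief input fluctuations do not flip its state. The weight, input sweep and relaxation time below are illustrative, not the parameter sets derived in the paper.
    # Hysteresis of a single discrete-time neuron with self-connection.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    w_self = 8.0                          # strong self-connection -> bistable regime

    def run_sweep(inputs, o):
        """Slowly sweep the external input and record where the output state flips."""
        switches, prev_state = [], o > 0.5
        for I in inputs:
            for _ in range(200):          # let the neuron relax at this input level
                o = sigmoid(w_self * o + I)
            state = o > 0.5
            if state != prev_state:
                switches.append((round(float(I), 2), "up" if state else "down"))
                prev_state = state
        return switches, o

    up, o_end = run_sweep(np.linspace(-8.0, 0.0, 81), o=0.0)   # increasing input
    down, _   = run_sweep(np.linspace(0.0, -8.0, 81), o=o_end) # decreasing input
    print("switch points:", up + down)    # 'up' and 'down' occur at different input levels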
    Review:
    Aksoy, E E. and Abramov, A. and Wörgötter, F. and Dellen, B. (2010).
    Categorizing object-action relations from semantic scene graphs. IEEE International Conference on Robotics and Automation ICRA, 398-405. DOI: 10.1109/ROBOT.2010.5509319.
    BibTeX:
    @inproceedings{aksoyabramovwoergoetter2010,
      author = {Aksoy, E E. and Abramov, A. and Wörgötter, F. and Dellen, B.},
      title = {Categorizing object-action relations from semantic scene graphs},
      pages = {398-405},
      booktitle = {IEEE International Conference on Robotics and Automation ICRA},
      year = {2010},
      month = {05},
      doi = {10.1109/ROBOT.2010.5509319},
      abstract = {In this work we introduce a novel approach for detecting spatiotemporal object-action relations, leading to both action recognition and object categorization. Semantic scene graphs are extracted from image sequences and used to find the characteristic main graphs of the action sequence via an exact graph-matching technique, thus providing an event table of the action scene, which allows extracting object-action relations. The method is applied to several artificial and real action scenes containing limited context. The central novelty of this approach is that it is model free and needs a priori representation neither for objects nor actions. Essentially actions are recognized without requiring prior object knowledge and objects are categorized solely based on their exhibited role within an action sequence. Thus, this approach is grounded in the affordance principle, which has recently attracted much attention in robotics and provides a way forward for trial-and-error learning of object-action relations through repeated experimentation. It may therefore be useful for recognition and categorization tasks, for example in imitation learning in developmental and cognitive robotics}}
    Abstract: In this work we introduce a novel approach for detecting spatiotemporal object-action relations, leading to both action recognition and object categorization. Semantic scene graphs are extracted from image sequences and used to find the characteristic main graphs of the action sequence via an exact graph-matching technique, thus providing an event table of the action scene, which allows extracting object-action relations. The method is applied to several artificial and real action scenes containing limited context. The central novelty of this approach is that it is model free and needs a priori representation neither for objects nor actions. Essentially actions are recognized without requiring prior object knowledge and objects are categorized solely based on their exhibited role within an action sequence. Thus, this approach is grounded in the affordance principle, which has recently attracted much attention in robotics and provides a way forward for trial-and-error learning of object-action relations through repeated experimentation. It may therefore be useful for recognition and categorization tasks, for example in imitation learning in developmental and cognitive robotics
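    To make the event-table idea concrete, the sketch below reduces a scene graph to the set of touching segment pairs and keeps a frame only when that relational structure changes; the hand/cup/table sequence is a hypothetical toy example, not data from the paper.

        def main_graphs(relation_sets):
            # Keep only the frames where the segment relations change (exact comparison),
            # which compresses the sequence into a compact event table of the action.
            keyframes, prev = [], None
            for t, rels in enumerate(relation_sets):
                if rels != prev:
                    keyframes.append((t, sorted(tuple(sorted(r)) for r in rels)))
                    prev = rels
            return keyframes

        # Hypothetical toy sequence with segments: hand (H), cup (C), table (T).
        frames = [
            {frozenset({"C", "T"})},                         # cup rests on table
            {frozenset({"C", "T"}), frozenset({"H", "C"})},  # hand touches cup
            {frozenset({"H", "C"})},                         # cup lifted off the table
            {frozenset({"H", "C"})},                         # unchanged frame is dropped
        ]
        print(main_graphs(frames))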
    Review:
    Butz, M. and Wörgötter, F. and Ooyen, A. (2009).
    Activity-dependent structural plasticity. Brain Research Reviews, 287-305, 60, 2. DOI: 10.1016/j.brainresrev.2008.12.023.
    BibTeX:
    @article{butzworgotterooyen2009,
      author = {Butz, M. and Wörgötter, F. and Ooyen, A.},
      title = {Activity-dependent structural plasticity},
      pages = {287-305},
      journal = {Brain Research Reviews},
      year = {2009},
      volume= {60},
      number = {2},
      url = {http://www.sciencedirect.com/scienc},
      doi = {10.1016/j.brainresrev.2008.12.023}}
    Abstract:
    Review:
    Markelic, I. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F. (2009).
    Anticipatory Driving for a Robot-Car Based on Supervised Learning. Lecture Notes in Computer Science: Anticipatory Behavior in Adaptive Learning Systems, 267-282.
    BibTeX:
    @article{markelickulviciustamosiunaite2009,
      author = {Markelic, I. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Anticipatory Driving for a Robot-Car Based on Supervised Learning},
      pages = {267-282},
      journal = {Lecture Notes in Computer Science: Anticipatory Behavior in Adaptive Learning Systems},
      year = {2009},
      abstract = {Using look-ahead information and plan making improves human driving. We therefore propose that autonomously driving systems should also have such abilities. We adapt a machine learning approach, where the system, a car-like robot, is trained by an experienced driver by correlating visual input to human driving actions. The heart of the system is a database where look-ahead sensory information is stored together with action sequences issued by the human supervisor. The result is a robot that runs in real time and issues steering and velocity control in a human-like way. For steering we adopt a two-level approach, where the result of the database is combined with an additional reactive controller for robust behavior. Concerning velocity control, this paper makes a novel contribution, which is the ability of the system to react adequately to upcoming curves}}
    Abstract: Using look-ahead information and plan making improves human driving. We therefore propose that autonomously driving systems should also have such abilities. We adapt a machine learning approach, where the system, a car-like robot, is trained by an experienced driver by correlating visual input to human driving actions. The heart of the system is a database where look-ahead sensory information is stored together with action sequences issued by the human supervisor. The result is a robot that runs in real time and issues steering and velocity control in a human-like way. For steering we adopt a two-level approach, where the result of the database is combined with an additional reactive controller for robust behavior. Concerning velocity control, this paper makes a novel contribution, which is the ability of the system to react adequately to upcoming curves
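    The database-centred control loop can be pictured with a toy nearest-neighbour look-up; the two-number descriptors and the stored steering/velocity sequences below are invented for illustration and are not the features or actions used in the paper.

        import numpy as np

        class DrivingDatabase:
            # Toy look-up table pairing look-ahead descriptors with recorded action
            # sequences (steering, velocity), queried by nearest neighbour at run time.
            def __init__(self):
                self.keys, self.actions = [], []

            def store(self, descriptor, action_sequence):
                self.keys.append(np.asarray(descriptor, float))
                self.actions.append(action_sequence)

            def retrieve(self, descriptor):
                d = np.asarray(descriptor, float)
                dists = [np.linalg.norm(d - k) for k in self.keys]
                return self.actions[int(np.argmin(dists))]

        db = DrivingDatabase()
        db.store([0.0, 0.1], [("steer", 0.0), ("vel", 1.0)])   # straight road ahead
        db.store([0.8, 0.5], [("steer", -0.4), ("vel", 0.6)])  # left curve coming up
        print(db.retrieve([0.7, 0.4]))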
    Review:
    Wörgötter, F. and Agostini, A. and Krüger, N. and Shylo, N. and Porr, B. (2009).
    Cognitive agents - a procedural perspective relying on the predictability of Object-Action-Complexes OACs. Robotics and Autonomous Systems, 420-432, 57, 4.
    BibTeX:
    @article{woergoetteragostinikrueger2009,
      author = {Wörgötter, F. and Agostini, A. and Krüger, N. and Shylo, N. and Porr, B.},
      title = {Cognitive agents - a procedural perspective relying on the predictability of Object-Action-Complexes OACs},
      pages = {420-432},
      journal = {Robotics and Autonomous Systems},
      year = {2009},
      volume= {57},
      number = {4},
      abstract = {Embodied cognition suggests that complex cognitive traits can only arise when agents have a body situated in the world. The aspects of embodiment and situatedness are being discussed here from the perspective of linear systems theory. This perspective treats bodies as dynamic, temporally variable entities, which can be extended or curtailed at their boundaries. We show how acting agents can, for example, actively extend their body for some time by incorporating predictably behaving parts of the world and how this affects the transfer functions. We suggest that primates have mastered this to a large degree, increasingly splitting their world into predictable and unpredictable entities. We argue that temporary body extension may have been instrumental in paving the way for the development of higher cognitive complexity as it is reliably widening the cause-effect horizon about the actions of the agent. A first robot experiment is sketched to support these ideas. We continue discussing the concept of Object-Action Complexes (OACs) introduced by the European PACO-PLUS consortium to emphasize the notion that, for a cognitive agent, objects and actions are inseparably intertwined. In another robot experiment we devise a semi-supervised procedure using the OAC-concept to demonstrate how an agent can acquire knowledge about its world. Here the notion of predicting changes fundamentally underlies the implemented procedure and we try to show how this concept can be used to improve the robot's inner model and behaviour. Hence, in this article we have tried to show how predictability can be used to augment the agent's body and to acquire knowledge about the external world, possibly leading to more advanced cognitive traits}}
    Abstract: Embodied cognition suggests that complex cognitive traits can only arise when agents have a body situated in the world. The aspects of embodiment and situatedness are being discussed here from the perspective of linear systems theory. This perspective treats bodies as dynamic, temporally variable entities, which can be extended or curtailed at their boundaries. We show how acting agents can, for example, actively extend their body for some time by incorporating predictably behaving parts of the world and how this affects the transfer functions. We suggest that primates have mastered this to a large degree, increasingly splitting their world into predictable and unpredictable entities. We argue that temporary body extension may have been instrumental in paving the way for the development of higher cognitive complexity as it is reliably widening the cause-effect horizon about the actions of the agent. A first robot experiment is sketched to support these ideas. We continue discussing the concept of Object-Action Complexes (OACs) introduced by the European PACO-PLUS consortium to emphasize the notion that, for a cognitive agent, objects and actions are inseparably intertwined. In another robot experiment we devise a semi-supervised procedure using the OAC-concept to demonstrate how an agent can acquire knowledge about its world. Here the notion of predicting changes fundamentally underlies the implemented procedure and we try to show how this concept can be used to improve the robot's inner model and behaviour. Hence, in this article we have tried to show how predictability can be used to augment the agent's body and to acquire knowledge about the external world, possibly leading to more advanced cognitive traits
    Review:
    Tamosiunaite, M. and Asfour, T. and Wörgötter, F. (2009).
    Learning to reach by reinforcement learning using a receptive field based function approximation approach with continuous actions. Biological Cybernetics, 249-260, 100, 3.
    BibTeX:
    @article{tamosiunaiteasfourwoergoetter2009,
      author = {Tamosiunaite, M. and Asfour, T. and Wörgötter, F.},
      title = {Learning to reach by reinforcement learning using a receptive field based function approximation approach with continuous actions},
      pages = {249-260},
      journal = {Biological Cybernetics},
      year = {2009},
      volume= {100},
      number = {3},
      abstract = {Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems, for example the reward-based recalibration of goal directed actions. To this end, still relatively large and continuous state-action spaces need to be efficiently handled. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward-strategies for solving such problems. For the testing of our method, we use a four degree-of-freedom reaching problem in 3D-space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields) and the state-action space contains about 10,000 of these. Different types of reward structures are being compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of a rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult}}
    Abstract: Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems, for example the reward-based recalibration of goal directed actions. To this end, still relatively large and continuous state-action spaces need to be efficiently handled. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward-strategies for solving such problems. For the testing of our method, we use a four degree-of-freedom reaching problem in 3D-space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields) and the state-action space contains about 10,000 of these. Different types of reward structures are being compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of a rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult
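    A minimal sketch of receptive-field-based function approximation with a continuous action, assuming Gaussian kernels tiling a single state variable; the reward-weighted update shown is generic and only illustrates the flavour of the method, not the paper's exact algorithm.

        import numpy as np

        # Gaussian receptive fields tile a 1-D state space; the continuous action is the
        # activation-weighted mean of per-field action preferences.
        centres = np.linspace(0.0, 1.0, 11)
        sigma = 0.08
        w = np.zeros_like(centres)           # per-field action preference

        def phi(s):
            a = np.exp(-(s - centres) ** 2 / (2 * sigma ** 2))
            return a / a.sum()

        def action(s):
            return float(phi(s) @ w)         # continuous action from overlapping fields

        def update(s, advantage, lr=0.1):
            w[:] += lr * advantage * phi(s)  # generic reward-weighted correction (illustrative)

        print(action(0.3))
        update(0.3, advantage=1.0)
        print(action(0.3))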
    Review:
    Renjewski, D. and Seyfarth, A. and Manoonpong, P. and Wörgötter, F. (2009).
    The development of a biomechanical leg system and its neural control. IEEE International Conference on Robotics and Biomimetics ROBIO, 1894-1899. DOI: 10.1109/ROBIO.2009.5420535.
    BibTeX:
    @inproceedings{renjewskiseyfarthmanoonpong2009,
      author = {Renjewski, D. and Seyfarth, A. and Manoonpong, P. and Wörgötter, F.},
      title = {The development of a biomechanical leg system and its neural control},
      pages = {1894-1899},
      booktitle = {IEEE International Conference on Robotics and Biomimetics ROBIO},
      year = {2009},
      doi = {10.1109/ROBIO.2009.5420535},
      abstract = {The function of the locomotor system in human gait is still an open question. Today robot bipeds are not able to reproduce the versatility of human locomotion. In this article a robotic knee joint and an experimental setup are proposed. The leg function is tested and the acquired data is compared to human leg behaviour in running observed in experiments}}
    Abstract: The function of the locomotor system in human gait is still an open question. Today robot bipeds are not able to reproduce the versatility of human locomotion. In this article a robotic knee joint and an experimental setup are proposed. The leg function is tested and the acquired data is compared to human leg behaviour in running observed in experiments
    Review:
    Ning, K. and Wörgötter, F. (2009).
    A Novel Concept for Building a Hyper-Redundant Chain Robot. IEEE Transactions on Robotics, 1237-1248, 25, 6.
    BibTeX:
    @article{ningwoergoetter2009,
      author = {Ning, K. and Wörgötter, F.},
      title = {A Novel Concept for Building a Hyper-Redundant Chain Robot},
      pages = {1237-1248},
      journal = {IEEE Transactions on Robotics},
      year = {2009},
      volume= {25},
      number = {6},
      abstract = {This paper puts forward a novel design concept for building a 3-D hyper-redundant chain robot HRCR system, consisting of linked, identical modules and one base module. All the joints of this HRCR are passive and state controllable and share common inputs introduced by wire-driven control. The original prototype developed here, named 3D-Trunk, is used as a proof of concept. We will present its whole mechanical design and controller architecture. The key components of 3D-Trunk, its operational principles, and all implementation issues are exhibited and described in detail. Basic robotics analyses, dynamics simulations, and some experiments are also shown. This novel design concept is highly modular and scalable, no matter how many degrees of freedom are implemented and, thus, provides an affordable solution for constructing an HRCR}}
    Abstract: This paper puts forward a novel design concept for building a 3-D hyper-redundant chain robot HRCR system, consisting of linked, identical modules and one base module. All the joints of this HRCR are passive and state controllable and share common inputs introduced by wire-driven control. The original prototype developed here, named 3D-Trunk, is used as a proof of concept. We will present its whole mechanical design and controller architecture. The key components of 3D-Trunk, its operational principles, and all implementation issues are exhibited and described in detail. Basic robotics analyses, dynamics simulations, and some experiments are also shown. This novel design concept is highly modular and scalable, no matter how many degrees of freedom are implemented and, thus, provides an affordable solution for constructing an HRCR
    Review:
    Nemec, B. and Tamosiunaite, M. and Wörgötter, F. and Ude, A. (2009).
    Task adaptation through exploration and action sequencing. 9th IEEE-RAS International Conference on Humanoid Robots, 2009, 610-616. DOI: 10.1109/ICHR.2009.5379568.
    BibTeX:
    @inproceedings{nemectamosiunaitewoergoetter2009,
      author = {Nemec, B. and Tamosiunaite, M. and Wörgötter, F. and Ude, A.},
      title = {Task adaptation through exploration and action sequencing},
      pages = {610-616},
      booktitle = {9th IEEE-RAS International Conference on Humanoid Robots, 2009},
      year = {2009},
      doi = {10.1109/ICHR.2009.5379568},
      abstract = {General-purpose autonomous robots need to have the ability to sequence and adapt the available sensorimotor knowledge, which is often given in the form of movement primitives. In order to solve a given task in situations that were not considered during the initial learning, it is necessary to adapt trajectories contained in the library of primitive motions to new situations. In this paper we explore how to apply reinforcement learning to modify the subgoals of primitive movements involved in the given task. As the underlying sensorimotor representation we selected nonlinear dynamic systems, which provide a powerful machinery for the modification of motion trajectories. We propose a new formulation for dynamic systems, which ensures that consecutive primitive movements can be splined together in a continuous way up to second order derivatives}}
    Abstract: General-purpose autonomous robots need to have the ability to sequence and adapt the available sensorimotor knowledge, which is often given in the form of movement primitives. In order to solve a given task in situations that were not considered during the initial learning, it is necessary to adapt trajectories contained in the library of primitive motions to new situations. In this paper we explore how to apply reinforcement learning to modify the subgoals of primitive movements involved in the given task. As the underlying sensorimotor representation we selected nonlinear dynamic systems, which provide a powerful machinery for the modification of motion trajectories. We propose a new formulation for dynamic systems, which ensures that consecutive primitive movements can be splined together in a continuous way up to second order derivatives
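    The continuity requirement (splining consecutive primitives up to second-order derivatives) can be illustrated with a generic fifth-order polynomial blend that matches position, velocity and acceleration at both ends; this is a textbook construction, not the paper's dynamic-systems formulation.

        import numpy as np

        def quintic_blend(x0, v0, a0, x1, v1, a1, T):
            # Coefficients c0..c5 of p(t) = c0 + c1*t + ... + c5*t^5 connecting two movement
            # segments so that position, velocity and acceleration match at t = 0 and t = T.
            A = np.array([
                [1, 0,   0,      0,       0,        0],
                [0, 1,   0,      0,       0,        0],
                [0, 0,   2,      0,       0,        0],
                [1, T,   T**2,   T**3,    T**4,     T**5],
                [0, 1,   2*T,    3*T**2,  4*T**3,   5*T**4],
                [0, 0,   2,      6*T,     12*T**2,  20*T**3],
            ], dtype=float)
            b = np.array([x0, v0, a0, x1, v1, a1], dtype=float)
            return np.linalg.solve(A, b)

        c = quintic_blend(x0=0.0, v0=0.2, a0=0.0, x1=1.0, v1=0.0, a1=0.0, T=1.0)
        print(np.polyval(c[::-1], 1.0))   # reaches x1 = 1.0 at t = T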
    Review:
    Manoonpong, P. and Wörgötter, F. (2009).
    Efference copies in neural control of dynamic biped walking. Robotics and Autonomous Systems, 1140-1153, 57, 11.
    BibTeX:
    @article{manoonpongwoergoetter2009,
      author = {Manoonpong, P. and Wörgötter, F.},
      title = {Efference copies in neural control of dynamic biped walking},
      pages = {1140-1153},
      journal = {Robotics and Autonomous Systems},
      year = {2009},
      volume= {57},
      number = {11},
      abstract = {In the early 1950s, von Holst and Mittelstaedt proposed that motor commands copied within the central nervous system (efference copy) help to distinguish reafference activity (afference activity due to self-generated motion) from exafference activity (afference activity due to an external stimulus). In addition, an efference copy can also be compared with the actual sensory feedback in order to suppress self-generated sensations. Based on these biological findings, we conduct here two experimental studies on our biped RunBot where such principles together with neural forward models are applied to RunBot's dynamic locomotion control. The main purpose of this article is to present the modular design of RunBot's control architecture and discuss how the inherent dynamic properties of the different modules lead to the required signal processing. We believe that the experimental studies pursued here will sharpen our understanding of how efference copies influence dynamic locomotion control to the benefit of modern neural control strategies in robots}}
    Abstract: In the early 1950s, von Holst and Mittelstaedt proposed that motor commands copied within the central nervous system (efference copy) help to distinguish reafference activity (afference activity due to self-generated motion) from exafference activity (afference activity due to an external stimulus). In addition, an efference copy can also be compared with the actual sensory feedback in order to suppress self-generated sensations. Based on these biological findings, we conduct here two experimental studies on our biped RunBot where such principles together with neural forward models are applied to RunBot's dynamic locomotion control. The main purpose of this article is to present the modular design of RunBot's control architecture and discuss how the inherent dynamic properties of the different modules lead to the required signal processing. We believe that the experimental studies pursued here will sharpen our understanding of how efference copies influence dynamic locomotion control to the benefit of modern neural control strategies in robots
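    A small sketch of the efference-copy principle, assuming a known (here linear) forward model purely for illustration: the predicted reafference is subtracted from the measured signal so that mainly the externally caused part remains.

        import numpy as np

        def split_afference(motor_commands, sensed, forward_model):
            # The forward model predicts the sensory consequence of the motor command
            # (the efference copy); subtracting it leaves an estimate of the exafference.
            predicted = np.array([forward_model(m) for m in motor_commands])
            exafference = np.asarray(sensed) - predicted
            return predicted, exafference

        forward_model = lambda m: 0.8 * m                    # assumed linear model
        motor = np.sin(np.linspace(0, 6.0, 200))
        external = (np.linspace(0, 6.0, 200) > 4.0) * 0.5    # an external disturbance
        sensed = 0.8 * motor + external + 0.02 * np.random.randn(200)
        _, exaff = split_afference(motor, sensed, forward_model)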
    Review:
    Kolodziejski, C. and Porr, B. and Wörgötter, F. (2009).
    On the Asymptotic Equivalence Between Differential Hebbian and Temporal Difference Learning. Neural Computation, 1173-1202, 21, 4.
    BibTeX:
    @article{kolodziejskiporrwoergoetter2009,
      author = {Kolodziejski, C. and Porr, B. and Wörgötter, F.},
      title = {On the Asymptotic Equivalence Between Differential Hebbian and Temporal Difference Learning},
      pages = {1173-1202},
      journal = {Neural Computation},
      year = {2009},
      volume= {21},
      number = {4},
      abstract = {In this theoretical contribution, we provide mathematical proof that two of the most important classes of network learning - correlation-based differential Hebbian learning and reward-based temporal difference learning - are asymptotically equivalent when timing the learning with a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective more closely related to the biophysics of neurons}}
    Abstract: In this theoretical contribution, we provide mathematical proof that two of the most important classes of network learning - correlation-based differential Hebbian learning and reward-based temporal difference learning - are asymptotically equivalent when timing the learning with a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective more closely related to the biophysics of neurons
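    For orientation, the two rule families compared in the proof can be written next to each other; the code is only a schematic restatement with illustrative constants, not the paper's derivation.

        import numpy as np

        def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
            # Temporal difference (TD(0)) update of a tabular value estimate.
            delta = r + gamma * V[s_next] - V[s]
            V[s] += alpha * delta
            return delta

        def diff_hebbian_update(w, u, dv_dt, third_factor, mu=0.1):
            # Differential Hebbian rule gated by a modulatory third factor:
            # dw_i/dt = mu * M(t) * u_i(t) * dv(t)/dt  (learning only while M is on).
            return w + mu * third_factor * u * dv_dt

        V = np.zeros(3)
        print(td_update(V, s=0, s_next=1, r=1.0))
        w = np.zeros(2)
        print(diff_hebbian_update(w, u=np.array([1.0, 0.0]), dv_dt=0.5, third_factor=1.0))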
    Review:
    Kolodziejski, C. and Porr, B. and Tamosiunaite, M. and Wörgötter, F. (2009).
    On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor. Advances in Neural Information Processing Systems, 857-864, 21.
    BibTeX:
    @inproceedings{kolodziejskiporrtamosiunaite2009,
      author = {Kolodziejski, C. and Porr, B. and Tamosiunaite, M. and Wörgötter, F.},
      title = {On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor},
      pages = {857-864},
      booktitle = {Advances in Neural Information Processing Systems},
      year = {2009},
      volume= {21},
      abstract = {In this theoretical contribution we provide mathematical proof that two of the most important classes of network learning - correlation-based differential Hebbian learning and reward-based temporal difference learning - are asymptotically equivalent when timing the learning with a local modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation based perspective that is more closely related to the biophysics of neurons}}
    Abstract: In this theoretical contribution we provide mathematical proof that two of the most important classes of network learning - correlation-based differential Hebbian learning and reward-based temporal difference learning - are asymptotically equivalent when timing the learning with a local modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation based perspective that is more closely related to the biophysics of neurons
    Review:
    Dellen, B. and Wörgötter, F. (2009).
    Disparity from stereo-segment silhouettes of weakly-textured images. British Machine Vision Conference.
    BibTeX:
    @inproceedings{dellenwoergoetter2009,
      author = {Dellen, B. and Wörgötter, F.},
      title = {Disparity from stereo-segment silhouettes of weakly-textured images},
      booktitle = {British Machine Vision Conference},
      year = {2009},
      abstract = {We propose a novel robust stereo algorithm for weakly-textured scenes. Unique correspondences existing between the silhouettes of corresponding image segments allow assigning accurate disparities to segment boundary points. This information as well as stereo from the weak texture inside segments, which is extracted using a region-constrained window-based matching algorithm, are fused and disparities are interpolated inside segments while considering potentially occluded areas derived from the depth-ordering of segments. The algorithm is applied to a set of weakly-textured images and it is demonstrated that stereo from segment silhouettes often provides sufficient information to reconstruct disparities in weakly- and non-textured image areas. The algorithm is applied to several real stereo images and its performance is evaluated quantitatively using images from the 2006 Middlebury dataset}}
    Abstract: We propose a novel robust stereo algorithm for weakly-textured scenes. Unique correspondences existing between the silhouettes of corresponding image segments allow assigning accurate disparities to segment boundary points. This information as well as stereo from the weak texture inside segments, which is extracted using a region-constrained window-based matching algorithm, are fused and disparities are interpolated inside segments while considering potentially occluded areas derived from the depth-ordering of segments. The algorithm is applied to a set of weakly-textured images and it is demonstrated that stereo from segment silhouettes often provides sufficient information to reconstruct disparities in weakly- and non-textured image areas. The algorithm is applied to several real stereo images and its performance is evaluated quantitatively using images from the 2006 Middlebury dataset
    Review:
    Dellen, B. and Aksoy, E E. and Wörgötter, F. (2009).
    Segment Tracking via a Spatiotemporal Linking Process including Feedback Stabilization in an n-D Lattice Model. Sensors, 9355-9379, 9, 11. DOI: 10.3390/s91109355.
    BibTeX:
    @article{dellenaksoywoergoetter2009,
      author = {Dellen, B. and Aksoy, E E. and Wörgötter, F.},
      title = {Segment Tracking via a Spatiotemporal Linking Process including Feedback Stabilization in an n-D Lattice Model},
      pages = {9355-9379},
      journal = {Sensors},
      year = {2009},
      volume= {9},
      number = {11},
      url = {http://www.mdpi.com/1424-8220/9/11/9355},
      doi = {10.3390/s91109355},
      abstract = {Model-free tracking is important for solving tasks such as moving-object tracking and action recognition in cases where no prior object knowledge is available. For this purpose, we extend the concept of spatially synchronous dynamics in spin-lattice models to the spatiotemporal domain to track segments within an image sequence. The method is related to synchronization processes in neural networks and based on superparamagnetic clustering of data. Spin interactions result in the formation of clusters of correlated spins, providing an automatic labeling of corresponding image regions. The algorithm obeys detailed balance. This is an important property as it allows for consistent spin-transfer across subsequent frames, which can be used for segment tracking. Therefore, in the tracking process the correct equilibrium will always be found, which is an important advance as compared with other more heuristic tracking procedures. In the case of long image sequences, i.e., movies, the algorithm is augmented with a feedback mechanism, further stabilizing segment tracking}}
    Abstract: Model-free tracking is important for solving tasks such as moving-object tracking and action recognition in cases where no prior object knowledge is available. For this purpose, we extend the concept of spatially synchronous dynamics in spin-lattice models to the spatiotemporal domain to track segments within an image sequence. The method is related to synchronization processes in neural networks and based on superparamagnetic clustering of data. Spin interactions result in the formation of clusters of correlated spins, providing an automatic labeling of corresponding image regions. The algorithm obeys detailed balance. This is an important property as it allows for consistent spin-transfer across subsequent frames, which can be used for segment tracking. Therefore, in the tracking process the correct equilibrium will always be found, which is an important advance as compared with other more heuristic tracking procedures. In the case of long image sequences, i.e., movies, the algorithm is augmented with a feedback mechanism, further stabilizing segment tracking
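    A bare-bones Metropolis sweep over Potts spins on a pixel lattice gives a feel for the superparamagnetic-clustering machinery; the similarity-dependent couplings and the temperature are assumptions for illustration, and the actual method additionally transfers spins across frames and adds feedback stabilization.

        import numpy as np

        rng = np.random.default_rng(0)

        def metropolis_sweep(spins, image, q=10, T=0.5):
            # One Metropolis sweep of a Potts spin per pixel; couplings favour equal spins
            # for similar neighbouring pixel values. The Metropolis acceptance rule keeps
            # detailed balance, the property the paper exploits for consistent spin transfer.
            h, w = spins.shape
            for _ in range(h * w):
                i, j = rng.integers(h), rng.integers(w)
                new = rng.integers(q)
                dE = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        Jij = np.exp(-abs(image[i, j] - image[ni, nj]))   # similarity coupling
                        dE += Jij * ((spins[ni, nj] == spins[i, j]) - (spins[ni, nj] == new))
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    spins[i, j] = new
            return spins

        img = rng.random((16, 16))
        spins = rng.integers(10, size=(16, 16))
        spins = metropolis_sweep(spins, img)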
    Review:
    Butz, M. and Wörgötter, F. (2009).
    A Model for Cortical Rewiring Following Deafferentation and Focal Stroke. Front. Comput. Neurosci, 1-15, 3, 10. DOI: 10.3389/neuro.10.010.2009.
    BibTeX:
    @article{butzwoergoetter2009,
      author = {Butz, M. and Wörgötter, F.},
      title = {A Model for Cortical Rewiring Following Deafferentation and Focal Stroke},
      pages = {1-15},
      journal = {Front. Comput. Neurosci},
      year = {2009},
      volume= {3},
      number = {10},
      doi = {10.3389/neuro.10.010.2009},
      abstract = {It is still unclear to what extent structural plasticity in terms of synaptic rewiring is the cause for cortical remapping after a lesion. Recent two-photon laser imaging studies demonstrate that synaptic rewiring is persistent in the adult brain and is dramatically increased following brain lesions or after a loss of sensory input (cortical deafferentation). We use a recurrent neural network model to study the time course of synaptic rewiring following a peripheral lesion. For this, we represent axonal and dendritic elements of cortical neurons to model synapse formation, pruning and synaptic rewiring. Neurons increase and decrease the number of axonal and dendritic elements in an activity-dependent fashion in order to maintain their activity in a homeostatic equilibrium. In this study we demonstrate that synaptic rewiring contributes to neuronal homeostasis during normal development as well as following lesions. We show that networks in homeostasis, which can therefore be considered as adult networks, are much less able to compensate for a loss of input. Interestingly, we found that paused stimulation of the networks is much more effective in promoting reorganization than continuous stimulation. This can be explained as neurons quickly adapt to this stimulation, whereas pauses prevent a saturation of the positive stimulation effect. These findings may suggest strategies for improving therapies in neurologic rehabilitation}}
    Abstract: It is still unclear to what extent structural plasticity in terms of synaptic rewiring is the cause for cortical remapping after a lesion. Recent two-photon laser imaging studies demonstrate that synaptic rewiring is persistent in the adult brain and is dramatically increased following brain lesions or after a loss of sensory input (cortical deafferentation). We use a recurrent neural network model to study the time course of synaptic rewiring following a peripheral lesion. For this, we represent axonal and dendritic elements of cortical neurons to model synapse formation, pruning and synaptic rewiring. Neurons increase and decrease the number of axonal and dendritic elements in an activity-dependent fashion in order to maintain their activity in a homeostatic equilibrium. In this study we demonstrate that synaptic rewiring contributes to neuronal homeostasis during normal development as well as following lesions. We show that networks in homeostasis, which can therefore be considered as adult networks, are much less able to compensate for a loss of input. Interestingly, we found that paused stimulation of the networks is much more effective in promoting reorganization than continuous stimulation. This can be explained as neurons quickly adapt to this stimulation, whereas pauses prevent a saturation of the positive stimulation effect. These findings may suggest strategies for improving therapies in neurologic rehabilitation
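    A very reduced sketch of homeostatic structural plasticity under stated assumptions: units grow connection elements while their activity is below a set-point and prune incoming synapses when above it, with randomly paired free elements forming new synapses. All parameters are invented and the network is far simpler than the model in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        n, target, lr = 50, 0.5, 0.05
        W = np.zeros((n, n))                      # W[post, pre]
        drive = rng.uniform(0.2, 0.8, n)          # external input (a lesion would zero part of it)

        for step in range(500):
            rate = np.tanh(drive + 0.5 * W.sum(axis=1))
            error = target - rate                 # >0: too quiet, grow; <0: too active, prune
            low = np.where(error > 0.05)[0]
            high = np.where(error < -0.05)[0]
            if low.size >= 2:                     # pair two randomly chosen growing units
                pre, post = rng.choice(low, size=2, replace=False)
                W[post, pre] += lr
            for i in high:                        # weaken a random incoming synapse
                nz = np.flatnonzero(W[i])
                if nz.size:
                    j = rng.choice(nz)
                    W[i, j] = max(0.0, W[i, j] - lr)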
    Review:
    Ning, K. and Wörgötter, F. (2009).
    A DOF state controllable and driving shared solution for building a hyper-redundant chain robot. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5880-5885. DOI: 10.1109/IROS.2009.5354795.
    BibTeX:
    @inproceedings{ningwoergoetter2009a,
      author = {Ning, K. and Wörgötter, F.},
      title = {A DOF state controllable and driving shared solution for building a hyper-redundant chain robot},
      pages = {5880-5885},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2009},
      month = {10},
      doi = {10.1109/IROS.2009.5354795},
      abstract = {This paper puts forward a novel design solution for building a 3D hyper-redundant chain robot HRCR system, which consists of linked, identical modules and one base module. All the joints of this HRCR are passive and state controllable, and share common inputs introduced by wire-driven control, no matter how many degrees of freedom DOF are implemented using different numbers of modules. The prototype developed here, named 3D-Trunk, is used as a proof of concept. We will present here its concept, mechanical and embedded controller design and the implementation}}
    Abstract: This paper puts forward a novel design solution for building a 3D hyper-redundant chain robot HRCR system, which consists of linked, identical modules and one base module. All the joints of this HRCR are passive and state controllable, and share common inputs introduced by wire-driven control, no matter how many degrees of freedom DOF are implemented using different numbers of modules. The prototype developed here, named 3D-Trunk, is used as a proof of concept. We will present here its concept, mechanical and embedded controller design and the implementation
    Review:
    Manoonpong, P. and Wörgötter, F. (2009).
    Adaptive Sensor-Driven Neural Control for Learning in Walking Machines. Neural Information Processing, 47-55, 5864. DOI: 10.1007/978-3-642-10684-2_6.
    BibTeX:
    @inproceedings{manoonpongwoergoetter2009a,
      author = {Manoonpong, P. and Wörgötter, F.},
      title = {Adaptive Sensor-Driven Neural Control for Learning in Walking Machines},
      pages = {47-55},
      booktitle = {Neural Information Processing},
      year = {2009},
      volume= {5864},
      editor = {Leung, ChiSing and Lee, Minho and Chan, Jonathan},
      publisher = {Springer Berlin Heidelberg},
      series = {Lecture Notes in Computer Science},
      doi = {10.1007/978-3-642-10684-2_6},
      abstract = {Wild rodents learn the danger-predicting meaning of predator bird calls through the pairing of cues which are an aversive stimulus (immediate danger signal or unconditioned stimulus, US) and the acoustic stimulus (predator signal or conditioned stimulus, CS). This learning is a form of Pavlovian conditioning. In analogy, in this article a setup is described where adaptive sensor-driven neural control is used to simulate biologically-inspired acoustic predator-recognition learning for a safe escape on a six-legged walking machine. As a result, the controller allows the walking machine to learn the association of a predictive acoustic signal (predator signal, CS) and a reflex infrared signal (immediate danger signal, US), such that after learning the machine performs fast walking behavior when hearing an approaching predator from behind, allowing it to safely escape from the attack}}
    Abstract: Wild rodents learn the danger-predicting meaning of predator bird calls through the pairing of cues which are an aversive stimulus (immediate danger signal or unconditioned stimulus, US) and the acoustic stimulus (predator signal or conditioned stimulus, CS). This learning is a form of Pavlovian conditioning. In analogy, in this article a setup is described where adaptive sensor-driven neural control is used to simulate biologically-inspired acoustic predator-recognition learning for a safe escape on a six-legged walking machine. As a result, the controller allows the walking machine to learn the association of a predictive acoustic signal (predator signal, CS) and a reflex infrared signal (immediate danger signal, US), such that after learning the machine performs fast walking behavior when hearing an approaching predator from behind, allowing it to safely escape from the attack
    Review:
    Dellen, B. and Wörgötter, F. (2009).
    Simulating Dynamical Systems for Early Vision. VISAPP, 525-528, 2.
    BibTeX:
    @inproceedings{dellenwoergoetter2009a,
      author = {Dellen, B. and Wörgötter, F.},
      title = {Simulating Dynamical Systems for Early Vision},
      pages = {525-528},
      booktitle = {VISAPP},
      year = {2009},
      volume= {2},
      abstract = {We propose a novel algorithm for stereo matching using a dynamical systems approach. The stereo correspondence problem is first formulated as an energy minimization problem. From the energy function, we derive a system of differential equations describing the corresponding dynamical system of interacting elements, which we solve using numerical integration. Optimization is introduced by means of a damping term and a noise term, an idea similar to simulated annealing. The algorithm is tested on the Middlebury stereo benchmark}}
    Abstract: We propose a novel algorithm for stereo matching using a dynamical systems approach. The stereo correspondence problem is first formulated as an energy minimization problem. From the energy function, we derive a system of differential equations describing the corresponding dynamical system of interacting elements, which we solve using numerical integration. Optimization is introduced by means of a damping term and a noise term, an idea similar to simulated annealing. The algorithm is tested on the Middlebury stereo benchmark
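    The optimisation scheme sketched in the abstract (gradient dynamics with damping plus a decaying noise term) can be illustrated on a toy energy; the quadratic energy and all constants below are placeholders, not the stereo energy of the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def descend(grad_E, d0, steps=2000, dt=0.01, damping=0.5, noise0=1.0):
            # Langevin-style descent: the state follows the negative energy gradient with a
            # damping term and noise that decays over time (simulated-annealing-like).
            d = d0.copy()
            v = np.zeros_like(d)
            for k in range(steps):
                noise = noise0 * (1.0 - k / steps) * rng.standard_normal(d.shape)
                v += dt * (-grad_E(d) - damping * v + noise)
                d += dt * v
            return d

        grad_E = lambda d: 2.0 * (d - 3.0)     # toy quadratic energy with minimum at 3
        print(descend(grad_E, d0=np.zeros(5)))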
    Review:
    Baseski, E. and Baunegaard With Jensen, L. and Pugeault, N. and Pilz, F. and Pauwels, K. and Hulle, M V. and Wörgötter, F. and Krüger, N. (2009).
    Road Interpretation for Driver Assistance based on an Early Cognitive Vision System. VISAPP, 496-505, 1.
    BibTeX:
    @inproceedings{baseskibaunegaardwithjensenpugeault,
      author = {Baseski, E. and Baunegaard With Jensen, L. and Pugeault, N. and Pilz, F. and Pauwels, K. and Hulle, M V. and Wörgötter, F. and Krüger, N.},
      title = {Road Interpretation for Driver Assistance based on an Early Cognitive Vision System},
      pages = {496-505},
      booktitle = {VISAPP},
      year = {2009},
      volume= {1},
      abstract = {In this work, we address the problem of road interpretation for driver assistance based on an early cognitive vision system. The structure of a road and the relevant traffic are interpreted in terms of ego-motion estimation of the car, independently moving objects on the road, lane markers and large scale maps of the road. We make use of temporal and spatial disambiguation mechanisms to increase the reliability of visually extracted 2D and 3D information. This information is then used to interpret the layout of the road by using lane markers that are detected via Bayesian reasoning. We also estimate the ego-motion of the car which is used to create large scale maps of the road and also to detect independently moving objects. Sample results for the presented algorithms are shown on a stereo image sequence that has been collected from a structured road}}
    Abstract: In this work, we address the problem of road interpretation for driver assistance based on an early cognitive vision system. The structure of a road and the relevant traffic are interpreted in terms of ego-motion estimation of the car, independently moving objects on the road, lane markers and large scale maps of the road. We make use of temporal and spatial disambiguation mechanisms to increase the reliability of visually extracted 2D and 3D information. This information is then used to interpret the layout of the road by using lane markers that are detected via Bayesian reasoning. We also estimate the ego-motion of the car which is used to create large scale maps of the road and also to detect independently moving objects. Sample results for the presented algorithms are shown on a stereo image sequence that has been collected from a structured road
    Review:
    Wörgötter, F. and Porr, B. (2008).
    Reinforcement learning. Scholarpedia, 1448, 3, 3. DOI: 10.4249/scholarpedia.1448.
    BibTeX:
    @article{woergoetterporr2008,
      author = {Wörgötter, F. and Porr, B.},
      title = {Reinforcement learning},
      pages = {1448},
      journal = {Scholarpedia},
      year = {2008},
      volume= {3},
      number = {3},
      doi = {10.4249/scholarpedia.1448}}
    Abstract:
    Review:
    Wörgötter, F. (2008).
    Lernende Systeme. In Georgia Augusta Wissenschaftsmagazin der Universität Göttingen, 55-60.
    BibTeX:
    @article{woergoetter2008,
      author = {Wörgötter, F.},
      title = {Lernende Systeme},
      pages = {55-60},
      journal = {In Georgia Augusta Wissenschaftsmagazin der Universität Göttingen},
      year = {2008}}
    Abstract:
    Review:
    Thompson, A M. and Porr, B. and Kolodziejski, C. and Wörgötter, F. (2008).
    Second Order Conditioning in the Sub-cortical Nuclei of the Limbic System. From Animals to Animats 10, 189-198, 5040. DOI: 10.1007/978-3-540-69134-1_19.
    BibTeX:
    @inproceedings{thompsonporrkolodziejski2008,
      author = {Thompson, A M. and Porr, B. and Kolodziejski, C. and Wörgötter, F.},
      title = {Second Order Conditioning in the Sub-cortical Nuclei of the Limbic System},
      pages = {189-198},
      booktitle = {From Animals to Animats 10},
      year = {2008},
      volume= {5040},
      editor = {Asada, Minoru and Hallam, JohnC.T. and Meyer, Jean-Arcady and Tani, Jun},
      publisher = {Springer Berlin Heidelberg},
      series = {Lecture Notes in Computer Science},
      url = {http://dx.doi.org/10.1007/978-3-540-69134-1_19},
      doi = {10.1007/978-3-540-69134-1_19},
      abstract = {Three-factor isotropic sequence order (ISO3) learning is a form of differential Hebbian learning where a third factor switches on learning at relevant moments, for example after reward retrieval. This switch enables learning only at specific moments and, thus, stabilises the corresponding weights. The concept of using a third factor as a gating signal for learning at relevant moments has been extended in this paper to perform second order conditioning (SOC). We present a biological model of the sub-cortical nuclei of the limbic system that is capable of performing SOC in a food seeking task. The third factor is modelled by dopaminergic neurons of the VTA which are activated via a direct excitatory glutamatergic pathway, and an indirect dis-inhibitory GABAergic pathway. The latter generates an amplification in the number of tonically active DA neurons. This produces an increase in DA outside the event of a primary reward and enables SOC to be accomplished}}
    Abstract: Three-factor isotropic sequence order (ISO3) learning is a form of differential Hebbian learning where a third factor switches on learning at relevant moments, for example after reward retrieval. This switch enables learning only at specific moments and, thus, stabilises the corresponding weights. The concept of using a third factor as a gating signal for learning at relevant moments has been extended in this paper to perform second order conditioning (SOC). We present a biological model of the sub-cortical nuclei of the limbic system that is capable of performing SOC in a food seeking task. The third factor is modelled by dopaminergic neurons of the VTA which are activated via a direct excitatory glutamatergic pathway, and an indirect dis-inhibitory GABAergic pathway. The latter generates an amplification in the number of tonically active DA neurons. This produces an increase in DA outside the event of a primary reward and enables SOC to be accomplished
    Review:
    Tamosiunaite, M. and Ainge, J. and Kulvicius, T. and Porr, B. and Dudchenko, P. and Wörgötter, F. (2008).
    Path-finding in real and simulated rats: assessing the influence of path characteristics on navigation learning. Journal of Computational Neuroscience, 562-582, 25, 3.
    BibTeX:
    @article{tamosiunaiteaingekulvicius2008,
      author = {Tamosiunaite, M. and Ainge, J. and Kulvicius, T. and Porr, B. and Dudchenko, P. and Wörgötter, F.},
      title = {Path-finding in real and simulated rats: assessing the influence of path characteristics on navigation learning},
      pages = {562-582},
      journal = {Journal of Computational Neuroscience},
      year = {2008},
      volume= {25},
      number = {3},
      abstract = {A large body of experimental evidence suggests that the hippocampal place field system is involved in reward-based navigation learning in rodents. Reinforcement learning (RL) mechanisms have been used to model this, associating the state space in an RL-algorithm to the place-field map in a rat. The convergence properties of RL-algorithms are affected by the exploration patterns of the learner. Therefore, we first analyzed the path characteristics of freely exploring rats in a test arena. We found that straight path segments with mean length 23 cm up to a maximal length of 80 cm take up a significant proportion of the total paths. Thus, rat paths are biased as compared to random exploration. Next we designed a RL system that reproduces these specific path characteristics. Our model arena is covered by overlapping, probabilistically firing place fields (PF) of realistic size and coverage. Because convergence of RL-algorithms is also influenced by the state space characteristics, different PF-sizes and densities, leading to a different degree of overlap, were also investigated. The model rat learns to find a reward opposite to its starting point. We observed that the combination of biased straight exploration, overlapping coverage and probabilistic firing will strongly impair the convergence of learning. When the degree of randomness in the exploration is increased, convergence improves, but the distribution of straight path segments becomes unrealistic and paths become wiggly. To mend this situation without affecting the path characteristics, two additional mechanisms are implemented: a gradual drop of the learned weights (weight decay) and path length limitation, which prevents learning if the reward is not found after some expected time. Both mechanisms limit the memory of the system and thereby counteract effects of getting trapped on a wrong path. When using these strategies individually, divergent cases are substantially reduced and for some parameter settings no divergence was found anymore at all. Using weight decay and path length limitation at the same time, convergence is not much improved but instead time to convergence increases as the memory limiting effect becomes too strong. The degree of improvement also relies on the size and degree of overlap (coverage density) in the place field system. The chosen combination of these two parameters leads to a trade-off between convergence and speed to convergence. Thus, this study suggests that the role of the PF-system in navigation learning cannot be considered independently from the animal's exploration pattern}}
    Abstract: A large body of experimental evidence suggests that the hippocampal place field system is involved in reward-based navigation learning in rodents. Reinforcement learning (RL) mechanisms have been used to model this, associating the state space in an RL-algorithm to the place-field map in a rat. The convergence properties of RL-algorithms are affected by the exploration patterns of the learner. Therefore, we first analyzed the path characteristics of freely exploring rats in a test arena. We found that straight path segments with mean length 23 cm up to a maximal length of 80 cm take up a significant proportion of the total paths. Thus, rat paths are biased as compared to random exploration. Next we designed a RL system that reproduces these specific path characteristics. Our model arena is covered by overlapping, probabilistically firing place fields (PF) of realistic size and coverage. Because convergence of RL-algorithms is also influenced by the state space characteristics, different PF-sizes and densities, leading to a different degree of overlap, were also investigated. The model rat learns to find a reward opposite to its starting point. We observed that the combination of biased straight exploration, overlapping coverage and probabilistic firing will strongly impair the convergence of learning. When the degree of randomness in the exploration is increased, convergence improves, but the distribution of straight path segments becomes unrealistic and paths become wiggly. To mend this situation without affecting the path characteristics, two additional mechanisms are implemented: a gradual drop of the learned weights (weight decay) and path length limitation, which prevents learning if the reward is not found after some expected time. Both mechanisms limit the memory of the system and thereby counteract effects of getting trapped on a wrong path. When using these strategies individually, divergent cases are substantially reduced and for some parameter settings no divergence was found anymore at all. Using weight decay and path length limitation at the same time, convergence is not much improved but instead time to convergence increases as the memory limiting effect becomes too strong. The degree of improvement also relies on the size and degree of overlap (coverage density) in the place field system. The chosen combination of these two parameters leads to a trade-off between convergence and speed to convergence. Thus, this study suggests that the role of the PF-system in navigation learning cannot be considered independently from the animal's exploration pattern
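    The biased exploration statistics described above can be mimicked with a simple generator of straight segments separated by random turns; the mean and maximum segment lengths follow the numbers quoted in the abstract, while everything else (step size, number of segments, turn distribution) is an assumption.

        import numpy as np

        rng = np.random.default_rng(3)

        def biased_path(n_segments=50, mean_len=23.0, max_len=80.0, step=1.0):
            # Straight path segments (mean ~23 cm, capped at 80 cm) joined by random turns,
            # in contrast to a step-wise random walk.
            pos = np.zeros(2)
            heading = rng.uniform(0, 2 * np.pi)
            pts = [pos.copy()]
            for _ in range(n_segments):
                length = min(rng.exponential(mean_len), max_len)
                for _ in range(int(length / step)):
                    pos += step * np.array([np.cos(heading), np.sin(heading)])
                    pts.append(pos.copy())
                heading = rng.uniform(0, 2 * np.pi)   # turn, then walk straight again
            return np.array(pts)

        path = biased_path()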
    Review:
    Shylo, N. and Wörgötter, F. and Dellen, B. (2008).
    Ascertaining relevant changes in visual data by interfacing AI reasoning and low-level visual information via temporally stable image segments. In Proceedings of the International Conference on Cognitive Systems.
    BibTeX:
    @inproceedings{shylowoergoetterdellen2008,
      author = {Shylo, N. and Wörgötter, F. and Dellen, B.},
      title = {Ascertaining relevant changes in visual data by interfacing AI reasoning and low-level visual information via temporally stable image segments},
      booktitle = {In Proceedings of the International Conference on Cognitive Systems},
      year = {2008},
      abstract = {Action planning and robot control require logical operations to be performed on sensory information, i.e., images of the world as seen by a camera, consisting of continuous pixel values. Artificial intelligence (AI) planning algorithms, however, use symbolic descriptors such as objects and actions to define logic rules and future actions. The representational differences at these distinct processing levels have to be bridged in order to allow communication between both levels. In this paper, we suggest a novel framework for interfacing AI planning with low-level visual processing by transferring the visual data into a discrete symbolic representation of temporally stable image segments. At the AI planning level, action-relevant changes in the configuration of image segments are inferred from a set of experiments using the Group Method of Data Handling. We apply the method to a data set obtained by repeating an action in an abstract scenario for varying initial conditions, determining the success or failure of the action. From the set of experiments, joint representations of actions and objects are extracted, which capture the rules of the given scenario}}
    Abstract: Action planning and robot control require logical operations to be performed on sensory information, i.e., images of the world as seen by a camera, consisting of continuous pixel values. Artificial intelligence (AI) planning algorithms, however, use symbolic descriptors such as objects and actions to define logic rules and future actions. The representational differences at these distinct processing levels have to be bridged in order to allow communication between both levels. In this paper, we suggest a novel framework for interfacing AI planning with low-level visual processing by transferring the visual data into a discrete symbolic representation of temporally stable image segments. At the AI planning level, action-relevant changes in the configuration of image segments are inferred from a set of experiments using the Group Method of Data Handling. We apply the method to a data set obtained by repeating an action in an abstract scenario for varying initial conditions, determining the success or failure of the action. From the set of experiments, joint representations of actions and objects are extracted, which capture the rules of the given scenario
    Review:
    Renjewski, D. and Manoonpong, P. and Seyfarth, A. and Wörgötter, F. (2008).
    From Biomechanical Concepts Towards Fast And Robust Robots. Advances in Mobile Robotics: Proc. of 11th CLAWAR, Marques L, Almeida A, Tokhi MO, Virk GS Eds., World Scientific, 630-637.
    BibTeX:
    @inproceedings{renjewskimanoonpongseyfarth2008,
      author = {Renjewski, D. and Manoonpong, P. and Seyfarth, A. and Wörgötter, F.},
      title = {From Biomechanical Concepts Towards Fast And Robust Robots},
      pages = {630-637},
      booktitle = {Advances in Mobile Robotics: Proc. of 11th CLAWAR, Marques L, Almeida A, Tokhi MO, Virk GS Eds., World Scientific},
      year = {2008},
      abstract = {Robots of any kind, highly integrated mechatronic systems, are smart combinations of mechanics, electronics and information technology. The development of bipedal robots in particular, which perform human-like locomotion, challenges scientists on even higher levels. Facing this challenge, this article presents a biomimetic bottom-up approach to use knowledge of biomechanical experiments on human walking and running, computer simulation and neuronal control concepts to sequentially design highly adaptable and compliant walking machines}}
    Abstract: Robots of any kind, highly integrated mechatronic systems, are smart combinations of mechanics, electronics and information technology. The development of bipedal robots in particular, which perform human-like locomotion, challenges scientists on even higher levels. Facing this challenge, this article presents a biomimetic bottom-up approach to use knowledge of biomechanical experiments on human walking and running, computer simulation and neuronal control concepts to sequentially design highly adaptable and compliant walking machines
    Review:
    Pugeault, N. and Kalkan, S. and Baseski, E. and Wörgötter, F. and Krüger, N. (2008).
    Relations Between Reconstructed 3D Entities. In Int. Conf. on Computer Vision Theory and Applications VISAPP08, 186-193.
    BibTeX:
    @inproceedings{pugeaultkalkanbaeski2008,
      author = {Pugeault, N. and Kalkan, S. and Baseski, E. and Wörgötter, F. and Krüger, N.},
      title = {Relations Between Reconstructed 3D Entities},
      pages = {186-193},
      booktitle = {In Int. Conf. on Computer Vision Theory and Applications VISAPP08},
      year = {2008},
      abstract = {In this paper, we first propose an analytic formulation for the position and orientation uncertainty of local 3D line descriptors reconstructed by stereo. We evaluate these predicted uncertainties with Monte Carlo simulations, and study their dependency on different parameters (position and orientation). In a second part, we use this definition to derive a new formulation for inter-feature distance and coplanarity. These new formulations take into account the predicted uncertainty, allowing for better robustness. We demonstrate the positive effect of the modified definitions on some simple scenarios}}
    Abstract: In this paper, we first propose an analytic formulation for the position and orientation uncertainty of local 3D line descriptors reconstructed by stereo. We evaluate these predicted uncertainties with Monte Carlo simulations, and study their dependency on different parameters (position and orientation). In a second part, we use this definition to derive a new formulation for inter-feature distance and coplanarity. These new formulations take into account the predicted uncertainty, allowing for better robustness. We demonstrate the positive effect of the modified definitions on some simple scenarios
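    The Monte Carlo style of uncertainty evaluation can be illustrated for the simplest case, depth from disparity in a rectified stereo pair; focal length, baseline and pixel noise below are assumed values, and the paper itself treats full 3D line descriptors rather than single points.

        import numpy as np

        rng = np.random.default_rng(4)

        # Perturb the measured image positions with pixel noise, re-triangulate, and
        # look at the spread of the reconstructed depth.
        f, b = 700.0, 0.12                 # illustrative focal length (px) and baseline (m)
        xl, xr = 350.0, 330.0              # matched horizontal image coordinates (px)
        sigma_px = 0.5

        samples = []
        for _ in range(10000):
            d = (xl + rng.normal(0, sigma_px)) - (xr + rng.normal(0, sigma_px))  # noisy disparity
            samples.append(f * b / d)      # depth from disparity
        samples = np.array(samples)
        print(samples.mean(), samples.std())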
    Review:
    Manoonpong, P. and Wörgötter, F. (2008).
    Using efference copy for external and self-generated sensory noise cancellation. Proceedings of 4th International Symposium on Adaptive Motion of Animals and Machines, Case Western Reserve University, Cleveland OH-USA, 227-228.
    BibTeX:
    @inproceedings{manoonpongwoergoetter2008,
      author = {Manoonpong, P. and Wörgötter, F.},
      title = {Using efference copy for external and self-generated sensory noise cancellation},
      pages = {227-228},
      booktitle = {Proceedings of 4th International Symposium on Adaptive Motion of Animals and Machines, Case Western Reserve University, Cleveland OH-USA},
      year = {2008},
      url = {http://amam.case.edu/AMAM2008}}
    Abstract:
    Review:
    Manoonpong, P. and Wörgötter, F. (2008).
    Biologically-Inspired Reactive Walking Machine AMOS-WD06. In Proceedings of 4th International Symposium on Adaptive Motion of Animals and Machines AMAM2008, 240-241.
    BibTeX:
    @inproceedings{manoonpongwoergoetter2008a,
      author = {Manoonpong, P. and Wörgötter, F.},
      title = {Biologically-Inspired Reactive Walking Machine AMOS-WD06},
      pages = {240-241},
      booktitle = {In Proceedings of 4th International Symposium on Adaptive Motion of Animals and Machines AMAM2008},
      year = {2008},
      abstract = {The six-legged walking machine AMOS-WD06 (see Fig. 1A) is a hardware platform for studying the coordination of many degrees of freedom, for performing experiments with neural controllers, and for the development of artificial perception-action systems}}
    Abstract: The six-legged walking machine AMOS-WD06 (see Fig. 1A) is a hardware platform for studying the coordination of many degrees of freedom, for performing experiments with neural controllers, and for the development of artificial perception-action systems
    Review:
    Manoonpong, P. and Wörgötter, F. (2008).
    Neural Control for Locomotion of Walking Machines. In Proceedings of 4th International Symposium on Adaptive Motion of Animals and Machines AMAM2008, 54-55.
    BibTeX:
    @inproceedings{manoonpongwoergoetter2008b,
      author = {Manoonpong, P. and Wörgötter, F.},
      title = {Neural Control for Locomotion of Walking Machines},
      pages = {54-55},
      booktitle = {In Proceedings of 4th International Symposium on Adaptive Motion of Animals and Machines AMAM2008},
      year = {2008},
      abstract = {The basic locomotion and rhythm of stepping in walking animals like cockroaches mostly relies on a central pattern generator (CPG) [1], while their peripheral sensors are used to control walking behaviors [2]. By contrast, in stick insects, sensory feedback serving as a reflexive mechanism plays a critical role in shaping the motor pattern for adaptivity and robustness of walking gaits [2]. Inspired by the principles of biological locomotion control, two different types of neural mechanisms for locomotion control of walking machines are presented: one is called modular reactive neural control and the other adaptive reflex neural control}}
    Abstract: The basic locomotion and rhythm of stepping in walking animals like cockroaches mostly relies on a central pattern generator (CPG) [1], while their peripheral sensors are used to control walking behaviors [2]. By contrast, in stick insects, sensory feedback serving as a reflexive mechanism plays a critical role in shaping the motor pattern for adaptivity and robustness of walking gaits [2]. Inspired by the principles of biological locomotion control, two different types of neural mechanisms for locomotion control of walking machines are presented: one is called modular reactive neural control and the other adaptive reflex neural control
    Review:
    Manoonpong, P. and Pasemann, F. and Wörgötter, F. (2008).
    Sensor-driven neural control for omnidirectional locomotion and versatile reactive behaviors of walking machines. Robotics and Autonomous Systems, 265-288, 56, 3.
    BibTeX:
    @article{manoonpongpasemannwoergoetter2008,
      author = {Manoonpong, P. and Pasemann, F. and Wörgötter, F.},
      title = {Sensor-driven neural control for omnidirectional locomotion and versatile reactive behaviors of walking machines},
      pages = {265-288},
      journal = {Robotics and Autonomous Systems},
      year = {2008},
      volume= {56},
      number = {3},
      abstract = {This article describes modular neural control structures for different walking machines utilizing discrete-time neurodynamics. A simple neural oscillator network serves as a central pattern generator producing the basic rhythmic leg movements. Other modules, like the velocity-regulating and the phase-switching networks, enable the machines to perform omnidirectional walking as well as reactive behaviors, like obstacle avoidance and different types of tropisms. These behaviors are generated in a sensori-motor loop with respect to appropriate sensor inputs, to which a neural preprocessing is applied. The neuromodules presented are small so that their structure-function relationship can be analysed. The complete controller is general in the sense that it can be easily adapted to different types of even-legged walking machines without changing its internal structure and parameters}}
    Abstract: This article describes modular neural control structures for different walking machines utilizing discrete-time neurodynamics. A simple neural oscillator network serves as a central pattern generator producing the basic rhythmic leg movements. Other modules, like the velocity-regulating and the phase-switching networks, enable the machines to perform omnidirectional walking as well as reactive behaviors, like obstacle avoidance and different types of tropisms. These behaviors are generated in a sensori-motor loop with respect to appropriate sensor inputs, to which a neural preprocessing is applied. The neuromodules presented are small so that their structure-function relationship can be analysed. The complete controller is general in the sense that it can be easily adapted to different types of even-legged walking machines without changing its internal structure and parameters
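    As an illustration of the kind of small neural oscillator used as a central pattern generator here, the sketch below implements a generic two-neuron discrete-time oscillator with an SO(2)-type weight matrix; the parameter values (PHI, ALPHA) are assumptions, not the ones used on the walking machines.

    import math

    PHI, ALPHA = 0.2, 1.01                        # assumed oscillation speed and gain
    W = [[ ALPHA * math.cos(PHI), ALPHA * math.sin(PHI)],
         [-ALPHA * math.sin(PHI), ALPHA * math.cos(PHI)]]

    def step(o):
        # One discrete-time update: o(t+1) = tanh(W o(t)).
        return [math.tanh(W[0][0] * o[0] + W[0][1] * o[1]),
                math.tanh(W[1][0] * o[0] + W[1][1] * o[1])]

    outputs = [0.1, 0.0]                          # small initial activity starts the rhythm
    for t in range(200):
        outputs = step(outputs)
        # outputs[0] and outputs[1] are two phase-shifted rhythmic signals that a
        # velocity-regulating module could scale before they drive the leg joints.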
    Review:
    Kulvicius, T. and Tamosiunaite, M. and Ainge, J. and Dudchenko, P. and Wörgötter, F. (2008).
    Odor supported place cell model and goal navigation in rodents. Journal of Computational Neuroscience, 481-500, 25.
    BibTeX:
    @article{kulviciustamosiunaiteainge2008,
      author = {Kulvicius, T. and Tamosiunaite, M. and Ainge, J. and Dudchenko, P. and Wörgötter, F.},
      title = {Odor supported place cell model and goal navigation in rodents},
      pages = {481-500},
      journal = {Journal of Computational Neuroscience},
      year = {2008},
      volume= {25},
      abstract = {Experiments with rodents demonstrate that visual cues play an important role in the control of hippocampal place cells and spatial navigation. Nevertheless, rats may also rely on auditory, olfactory and somatosensory stimuli for orientation. It is also known that rats can track odors or self-generated scent marks to find a food source. Here we model odor supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal directed behavior when switching between two environments. We emphasize the importance of olfactory cues in place cell formation and show that the utility of environmental and self-generated olfactory cues, together with a mixed navigation strategy, improves goal directed navigation}}
    Abstract: Experiments with rodents demonstrate that visual cues play an important role in the control of hippocampal place cells and spatial navigation. Nevertheless, rats may also rely on auditory, olfactory and somatosensory stimuli for orientation. It is also known that rats can track odors or self-generated scent marks to find a food source. Here we model odor supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal directed behavior when switching between two environments. We emphasize the importance of olfactory cues in place cell formation and show that the utility of environmental and self-generated olfactory cues, together with a mixed navigation strategy, improves goal directed navigation
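    The goal-navigation mechanism combines place-cell-like state representations with Q-learning; the sketch below shows the standard tabular Q-learning update on which such a mechanism builds. The state and action names and the parameters are placeholders, not those of the model.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1       # assumed learning parameters
    ACTIONS = ["north", "south", "east", "west"]
    Q = defaultdict(float)                      # Q[(state, action)]

    def choose_action(state):
        # Epsilon-greedy policy over the current Q-values.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        # One-step Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])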
    Review:
    Kraft, D. and Pugeault, N. and Baseski, E. and Popovic, M. and Kragic, D. and Kalkan, S. and Wörgötter, F. and Krüger, N. (2008).
    Birth of the Object: Detection of Objectness and Extraction of Object Shape through Object-Action complexes. Int. J. Humanoid Robotics, 247-265, 5, 2.
    BibTeX:
    @article{kraftpugeaultbaseski2008,
      author = {Kraft, D. and Pugeault, N. and Baseski, E. and Popovic, M. and Kragic, D. and Kalkan, S. and Wörgötter, F. and Krüger, N.},
      title = {Birth of the Object: Detection of Objectness and Extraction of Object Shape through Object-Action complexes},
      pages = {247-265},
      journal = {Int. J. Humanoid Robotics},
      year = {2008},
      volume= {5},
      number = {2},
      abstract = {We describe a process in which the segmentation of objects as well as the extraction of the object shape becomes realized through active exploration of a robot vision system. In the exploration process, two behavioral modules that link robot actions to the visual and haptic perception of objects interact. First, by making use of an object independent grasping mechanism, physical control over potential objects can be gained. Having evaluated the initial grasping mechanism as being successful, a second behavior extracts the object shape by making use of prediction based on the motion induced by the robot. This also leads to the concept of an object as a set of features that change predictably over different frames. The system is equipped with a certain degree of generic prior knowledge about the world in terms of a sophisticated visual feature extraction process in an early cognitive vision system, knowledge about its own embodiment as well as knowledge about geometric relationships such as rigid body motion. This prior knowledge allows the extraction of representations that are semantically richer compared to many other approaches}}
    Abstract: We describe a process in which the segmentation of objects as well as the extraction of the object shape becomes realized through active exploration of a robot vision system. In the exploration process, two behavioral modules that link robot actions to the visual and haptic perception of objects interact. First, by making use of an object independent grasping mechanism, physical control over potential objects can be gained. Having evaluated the initial grasping mechanism as being successful, a second behavior extracts the object shape by making use of prediction based on the motion induced by the robot. This also leads to the concept of an object as a set of features that change predictably over different frames. The system is equipped with a certain degree of generic prior knowledge about the world in terms of a sophisticated visual feature extraction process in an early cognitive vision system, knowledge about its own embodiment as well as knowledge about geometric relationships such as rigid body motion. This prior knowledge allows the extraction of representations that are semantically richer compared to many other approaches
    Review:
    Kraft, D. and Baseski, E. and Popovic, M. and Batog, A M. and Kjaer-Nielsen, A. and Krüger, N. and Petrick, R. and Geib, C. and Pugeault, N. and Steedman, M. and Asfour, T. and Dillmann, R. and Kalkan, S. and Wörgötter, F. and Hommel, B. and Detry, R. and Piater, J. (2008).
    Exploration and Planning in a Three-Level Cognitive Architecture. International Conference on Cognitive Systems COGSYS.
    BibTeX:
    @inproceedings{kraftbaseskipopovic2008,
      author = {Kraft, D. and Baseski, E. and Popovic, M. and Batog, A M. and Kjaer-Nielsen, A. and Krüger, N. and Petrick, R. and Geib, C. and Pugeault, N. and Steedman, M. and Asfour, T. and Dillmann, R. and Kalkan, S. and Wörgötter, F. and Hommel, B. and Detry, R. and Piater, J.},
      title = {Exploration and Planning in a Three-Level Cognitive Architecture},
      booktitle = {International Conference on Cognitive Systems COGSYS},
      year = {2008},
      abstract = {We describe an embodied cognitive system based on a three-level architecture that includes a sensorimotor layer, a mid-level layer that stores and reasons about object-action episodes, and a high-level symbolic planner that creates abstract action plans to be realised and possibly further specified by the lower levels. The system works in two modes, exploration and plan execution, that both make use of the same architecture. We give results of different sub-processes as well as their interaction. In particular, we describe the generation and execution of plans as well as a set of learning processes that take place independently of, or in parallel with, plan execution}}
    Abstract: We describe an embodied cognitive system based on a three-level architecture that includes a sensorimotor layer, a mid-level layer that stores and reasons about object-action episodes, and a high-level symbolic planner that creates abstract action plans to be realised and possibly further specified by the lower levels. The system works in two modes, exploration and plan execution, that both make use of the same architecture. We give results of different sub-processes as well as their interaction. In particular, we describe the generation and execution of plans as well as a set of learning processes that take place independently of, or in parallel with, plan execution
    Review:
    Kolodziejski, C. and Porr, B. and Wörgötter, F. (2008).
    Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison. Biological Cybernetics, 259-272, 98, 3.
    BibTeX:
    @article{kolodziejskiporrwoergoetter2008,
      author = {Kolodziejski, C. and Porr, B. and Wörgötter, F.},
      title = {Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison},
      pages = {259-272},
      journal = {Biological Cybernetics},
      year = {2008},
      volume= {98},
      number = {3},
      abstract = {A confusingly wide variety of temporally asymmetric learning rules exists related to reinforcement learning and/or to spike-timing dependent plasticity, many of which look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them to provide a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based differential Hebbian rules, as well as some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine-learning, non-neuronal context, a solid mathematical theory for TD-learning has existed for several years. This can partly be transferred to a neuronal framework, too. On the other hand, only now has a more complete theory also emerged for differential Hebb rules. In general, rules differ by their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition assuring that the}}
    Abstract: A confusingly wide variety of temporally asymmetric learning rules exists related to reinforcement learning and/or to spike-timing dependent plasticity, many of which look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them to provide a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based differential Hebbian rules, as well as some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine-learning, non-neuronal context, a solid mathematical theory for TD-learning has existed for several years. This can partly be transferred to a neuronal framework, too. On the other hand, only now has a more complete theory also emerged for differential Hebb rules. In general, rules differ by their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition assuring that the
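    In their simplest textbook forms, the two rule families compared here can be written as a TD(0) value update and a differential Hebbian weight change driven by the temporal derivative of the postsynaptic signal; the discretisation below is only a sketch and not the paper's neuronal implementation.

    def td0_update(v, s, s_next, reward, alpha=0.1, gamma=0.9):
        # TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
        delta = reward + gamma * v[s_next] - v[s]
        v[s] += alpha * delta
        return delta

    def diff_hebb_update(w, u, v_trace, dt=0.01, mu=0.001):
        # Differential Hebb: dw_i/dt = mu * u_i * dv/dt, discretised with step dt.
        # u        -- presynaptic activities at this time step
        # v_trace  -- last two values of the postsynaptic activity (v[t-1], v[t])
        dv = (v_trace[1] - v_trace[0]) / dt
        return [wi + mu * ui * dv * dt for wi, ui in zip(w, u)]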
    Review:
    Kalkan, S. and Wörgötter, F. and Krüger, N. (2008).
    Depth Prediction at Homogeneous Image Structures. In Int. Conf. on Computer Vision Theory and Applications VISAPP08, 520-527.
    BibTeX:
    @inproceedings{kalkanwoergoetterkrueger2008,
      author = {Kalkan, S. and Wörgötter, F. and Krüger, N.},
      title = {Depth Prediction at Homogeneous Image Structures},
      pages = {520-527},
      booktitle = {In Int. Conf. on Computer Vision Theory and Applications VISAPP08},
      year = {2008},
      abstract = {This paper proposes a voting-based model that predicts depth at weakly-structured image areas from the depth that is extracted using a feature-based stereo method. We provide results, on both real and artificial scenes, that show the accuracy and robustness of our approach. Moreover, we compare our method to different dense stereo algorithms to investigate the effect of texture on performance of the two different approaches. The results confirm the expectation that dense stereo methods are suited better for textured image areas and our method for weakly-textured image areas}}
    Abstract: This paper proposes a voting-based model that predicts depth at weakly-structured image areas from the depth that is extracted using a feature-based stereo method. We provide results, on both real and artificial scenes, that show the accuracy and robustness of our approach. Moreover, we compare our method to different dense stereo algorithms to investigate the effect of texture on performance of the two different approaches. The results confirm the expectation that dense stereo methods are suited better for textured image areas and our method for weakly-textured image areas
    Review:
    Kalkan, S. and Wörgötter, F. and Krüger, N. (2008).
    A Signal-Symbol Loop Mechanism for Enhanced Edge Extraction. In Int. Conf. on Computer Vision Theory and Applications VISAPP08, 214-221.
    BibTeX:
    @inproceedings{kalkanwoergoetterkrueger2008a,
      author = {Kalkan, S. and Wörgötter, F. and Krüger, N.},
      title = {A Signal-Symbol Loop Mechanism for Enhanced Edge Extraction},
      pages = {214-221},
      booktitle = {In Int. Conf. on Computer Vision Theory and Applications VISAPP08},
      year = {2008},
      abstract = {The transition to symbolic information from images involves in general the loss or misclassification of information. One way to deal with this missing or wrong information is to get feedback from concrete hypotheses derived at a symbolic level to the sub-symbolic signal stage to amplify weak information or correct misclassifications. This paper proposes such a feedback mechanism between the symbolic level and the signal level, which we call signal-symbol loop. We apply this framework for the detection of low contrast edges making use of predictions based on Rigid Body Motion. Once the Rigid Body Motion is known, the location and the properties of edges at a later frame can be predicted. We use these predictions as feedback to the signal level at a later frame to improve the detection of low contrast edges. We demonstrate our mechanism on a real example, and evaluate the results using an artificial scene, where the ground truth data is available}}
    Abstract: The transition to symbolic information from images involves in general the loss or misclassification of information. One way to deal with this missing or wrong information is to get feedback from concrete hypotheses derived at a symbolic level to the sub-symbolic signal stage to amplify weak information or correct misclassifications. This paper proposes such a feedback mechanism between the symbolic level and the signal level, which we call signal-symbol loop. We apply this framework for the detection of low contrast edges making use of predictions based on Rigid Body Motion. Once the Rigid Body Motion is known, the location and the properties of edges at a later frame can be predicted. We use these predictions as feedback to the signal level at a later frame to improve the detection of low contrast edges. We demonstrate our mechanism on a real example, and evaluate the results using an artificial scene, where the ground truth data is available
    Review:
    Jensen, L. and Baseski, E. and Kalkan, S. and Pugeault, N. and Wörgötter, F. and Krüger, N. (2008).
    Cognitive Vision. Cognitive Vision, 121-134, 5329. DOI: 10.1007/978-3-540-92781-5_10.
    BibTeX:
    @incollection{jensenbaseskikalkan2008,
      author = {Jensen, L. and Baseski, E. and Kalkan, S. and Pugeault, N. and Wörgötter, F. and Krüger, N.},
      title = {Cognitive Vision},
      pages = {121--134},
      booktitle = {Cognitive Vision},
      year = {2008},
      volume= {5329},
      chapter = {Semantic Reasoning for Scene Interpretation},
      editor = {Caputo, Barbara and Vincze, Markus},
      publisher = {Springer-Verlag},
      url = {http://dx.doi.org/10.1007/978-3-540-92781-5_10},
      doi = {10.1007/978-3-540-92781-5_10},
      abstract = {In this paper, we propose a hierarchical architecture for representing scenes, covering 2D and 3D aspects of visual scenes as well as the semantic relations between the different aspects. We argue that labeled graphs are a suitable representational framework for this representation and demonstrate its potential by two applications. As a first application, we localize lane structures by the semantic descriptors and their relations in a Bayesian framework. As the second application, which is in the context of vision based grasping, we show how the semantic relations can be associated to actions that allow for grasping without using any object knowledge}}
    Abstract: In this paper, we propose a hierarchical architecture for representing scenes, covering 2D and 3D aspects of visual scenes as well as the semantic relations between the different aspects. We argue that labeled graphs are a suitable representational framework for this representation and demonstrate its potential by two applications. As a first application, we localize lane structures by the semantic descriptors and their relations in a Bayesian framework. As the second application, which is in the context of vision based grasping, we show how the semantic relations can be associated to actions that allow for grasping without using any object knowledge
    Review:
    Agostini, A. and Celaya, E. and Torras, C. and Wörgötter, F. (2008).
    Action Rule Induction from Cause-Effect Pairs Learned through Robot-Teacher Interaction. International Conference on Cognitive Systems COGSYS.
    BibTeX:
    @inproceedings{agostinicelayatorras2008,
      author = {Agostini, A. and Celaya, E. and Torras, C. and Wörgötter, F.},
      title = {Action Rule Induction from Cause-Effect Pairs Learned through Robot-Teacher Interaction},
      booktitle = {International Conference on Cognitive Systems COGSYS},
      year = {2008},
      abstract = {In this work we propose a decision-making system that efficiently learns behaviors in the form of rules using natural human instructions about cause-effect relations in currently observed situations, avoiding complicated instructions and explanations of long-run action sequences and complete world dynamics. The learned rules are represented in a way suitable to both reactive and deliberative approaches, which are thus smoothly integrated. Simple and repetitive tasks are resolved reactively, while complex tasks would be faced in a more deliberative manner using a planner module. Human interaction is only required if the system fails to obtain the expected results when applying a rule, or fails to resolve the task with the knowledge acquired so far}}
    Abstract: In this work we propose a decision-making system that efficiently learns behaviors in the form of rules using natural human instructions about cause-effect relations in currently observed situations, avoiding complicated instructions and explanations of long-run action sequences and complete world dynamics. The learned rules are represented in a way suitable to both reactive and deliberative approaches, which are thus smoothly integrated. Simple and repetitive tasks are resolved reactively, while complex tasks would be faced in a more deliberative manner using a planner module. Human interaction is only required if the system fails to obtain the expected results when applying a rule, or fails to resolve the task with the knowledge acquired so far
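    A minimal sketch of the kind of rule store with teacher fallback that the abstract describes; the rule fields, the matching by set inclusion and the helper ask_teacher are all placeholders chosen for illustration.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        precondition: frozenset      # attribute-value pairs that must hold
        action: str
        expected_effect: frozenset   # attribute-value pairs expected afterwards

    rules = []

    def decide(observation, ask_teacher):
        # Reuse a matching rule; otherwise ask the teacher for a cause-effect pair.
        for rule in rules:
            if rule.precondition <= observation:
                return rule
        action, effect = ask_teacher(observation)
        rule = Rule(precondition=frozenset(observation), action=action,
                    expected_effect=frozenset(effect))
        rules.append(rule)
        return rule

    def verify(rule, outcome, observation, ask_teacher):
        # If the expected effect did not occur, drop the rule and consult the teacher again.
        if not rule.expected_effect <= outcome:
            rules.remove(rule)
            return decide(observation, ask_teacher)
        return rule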
    Review:
    Kolodziejski, C. and Porr, B. and Wörgötter, F. (2007).
    Anticipative adaptive muscle control: forward modeling with self-induced disturbances and recruitment. BMC Neuroscience, 1-1, 8, Suppl 2. DOI: 10.1186/1471-2202-8-S2-P202.
    BibTeX:
    @article{kolodziejskiporrwoergoetter2007,
      author = {Kolodziejski, C. and Porr, B. and Wörgötter, F.},
      title = {Anticipative adaptive muscle control: forward modeling with self-induced disturbances and recruitment},
      pages = {1-1},
      journal = {BMC Neuroscience},
      year = {2007},
      volume= {8},
      number = {Suppl 2},
      publisher = {BioMed Central},
      url = {http://www.biomedcentral.com/1471-2202/8/S2/P202},
      doi = {10.1186/1471-2202-8-S2-P202}}
    Abstract:
    Review:
    Wischmann, S. and Stamm, K. and Wörgötter, F. (2007).
    Embodied evolution and learning: The neglected timing of maturation. Advances in Artificial Life: 9th European Conference on Artificial Life, ECAL, Almeida e Costa, F. and Rocha, L. M. and Costa, E. and Harvey, I. and Coutinho, A. eds. Springer Series: LNAI, 4648, 284-293.
    BibTeX:
    @inproceedings{wischmannstammwoergoetter2007,
      author = {Wischmann, S. and Stamm, K. and Wörgötter, F.},
      title = {Embodied evolution and learning: The neglected timing of maturation},
      pages = {284-293},
      booktitle = {Advances in Artificial Life: 9th European Conference on Artificial Life, ECAL, Almeida e Costa, F. and Rocha, L. M. and Costa, E. and Harvey, I. and Coutinho, A. eds. Springer Series: LNAI, 4648},
      year = {2007}}
    Abstract:
    Review:
    Wischmann, S. and Pasemann, F. and Wörgötter, F. (2007).
    Cooperation and competition: Neural mechanisms of evolved communication systems. Workshop on The Emergence of Social Behaviour / CD-ROM : From Cooperation to Language. Lisbon, Portugal.
    BibTeX:
    @inproceedings{wischmannpasemannwoergoetter2007,
      author = {Wischmann, S. and Pasemann, F. and Wörgötter, F.},
      title = {Cooperation and competition: Neural mechanisms of evolved communication systems},
      booktitle = {Workshop on The Emergence of Social Behaviour / CD-ROM : From Cooperation to Language. Lisbon, Portugal},
      year = {2007}}
    Abstract:
    Review:
    Tamosiunaite, M. and Porr, B. and Wörgötter, F. (2007).
    Developing velocity sensitivity in a model neuron by local synaptic plasticity. Biol. Cybern, 507-518, 96.
    BibTeX:
    @article{tamosiunaiteporrwoergoetter2007,
      author = {Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
      title = {Developing velocity sensitivity in a model neuron by local synaptic plasticity},
      pages = {507-518},
      journal = {Biol. Cybern},
      year = {2007},
      volume= {96}}
    Abstract:
    Review:
    Tamosiunaite, M. and Porr, B. and Wörgötter, F. (2007).
    Self-influencing synaptic plasticity: Recurrent changes of synaptic weights can lead to specific functional properties. Journal of Computational Neuroscience, 113-127, 23, 1. DOI: 10.1007/s10827-007-0021-2.
    BibTeX:
    @article{tamosiunaiteporrwoergoetter2007a,
      author = {Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
      title = {Self-influencing synaptic plasticity: Recurrent changes of synaptic weights can lead to specific functional properties},
      pages = {113-127},
      journal = {Journal of Computational Neuroscience},
      year = {2007},
      volume= {23},
      number = {1},
      doi = {10.1007/s10827-007-0021-2},
      abstract = {Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways (Holthoff, 2004; Holthoff et al., 2005). In this study we investigate how these signals could interact at dendrites in space and time, leading to changing plasticity properties at local synapse clusters. Similar to a previous study (Saudargiene et al., 2004), we employ a differential Hebbian learning rule to emulate spike-timing dependent plasticity and investigate how the interaction of dendritic and back-propagating spikes, as the post-synaptic signals, could influence plasticity. Specifically, we will show that local synaptic plasticity driven by spatially confined dendritic spikes can lead to the emergence of synaptic clusters with different properties. If one of these clusters can drive the neuron into spiking, plasticity may change and the now arising global influence of a back-propagating spike can lead to a further segregation of the clusters and possibly the dying-off of some of them, leading to more functional specificity. These results suggest that, through plasticity being a spatially and temporally local process, the computational properties of dendrites or complete neurons can be substantially augmented}}
    Abstract: Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways (Holthoff, 2004; Holthoff et al., 2005). In this study we investigate how these signals could interact at dendrites in space and time, leading to changing plasticity properties at local synapse clusters. Similar to a previous study (Saudargiene et al., 2004), we employ a differential Hebbian learning rule to emulate spike-timing dependent plasticity and investigate how the interaction of dendritic and back-propagating spikes, as the post-synaptic signals, could influence plasticity. Specifically, we will show that local synaptic plasticity driven by spatially confined dendritic spikes can lead to the emergence of synaptic clusters with different properties. If one of these clusters can drive the neuron into spiking, plasticity may change and the now arising global influence of a back-propagating spike can lead to a further segregation of the clusters and possibly the dying-off of some of them, leading to more functional specificity. These results suggest that, through plasticity being a spatially and temporally local process, the computational properties of dendrites or complete neurons can be substantially augmented
    Review:
    Porr, B. and Wörgötter, F. (2007).
    Fast heterosynaptic learning in a robot food retrieval task inspired by the limbic system. BioSystems, 294-299, 89, 1-3.
    BibTeX:
    @article{porrwoergoetter2007,
      author = {Porr, B. and Wörgötter, F.},
      title = {Fast heterosynaptic learning in a robot food retrieval task inspired by the limbic system},
      pages = {294-299},
      journal = {BioSystems},
      year = {2007},
      volume= {89},
      number = {1-3}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2007).
    Learning with Relevance: Using a third factor to stabilize Hebbian learning. Neural Comp, 2694-2719, 19, 10. DOI: 10.1162/neco.2007.19.10.2694.
    BibTeX:
    @article{porrwoergoetter2007a,
      author = {Porr, B. and Wörgötter, F.},
      title = {Learning with Relevance: Using a third factor to stabilize Hebbian learning},
      pages = {2694-2719},
      journal = {Neural Comp},
      year = {2007},
      volume= {19},
      number = {10},
      doi = {10.1162/neco.2007.19.10.2694}}
    Abstract:
    Review:
    Porr, B. and Kulvicius, T. and Wörgötter, F. (2007).
    Improved stability and convergence with three factor learning. Neurocomputing, 2005-2008, 70.
    BibTeX:
    @article{porrkulviciuswoergoetter2007,
      author = {Porr, B. and Kulvicius, T. and Wörgötter, F.},
      title = {Improved stability and convergence with three factor learning},
      pages = {2005-2008},
      journal = {Neurocomputing},
      year = {2007},
      volume= {70}}
    Abstract:
    Review:
    Manoonpong, P. and Pasemann, F. and Wörgötter, F. (2007).
    Reactive Neural Control for Phototaxis and Obstacle Avoidance Behavior of Walking Machines. Proceedings of World Academy of Science, Engineering and Technology PWASET, International conference on Intelligent systems ICIS 07, Bangkok, Thailand, December, 14-16.
    BibTeX:
    @inproceedings{manoonpongpasemannwoergoetter2007,
      author = {Manoonpong, P. and Pasemann, F. and Wörgötter, F.},
      title = {Reactive Neural Control for Phototaxis and Obstacle Avoidance Behavior of Walking Machines},
      pages = {14-16},
      booktitle = {Proceedings of World Academy of Science, Engineering and Technology PWASET, International conference on Intelligent systems ICIS 07, Bangkok, Thailand, December},
      year = {2007}}
    Abstract:
    Review:
    Manoonpong, P. and Geng, T. and Porr, B. and Wörgötter, F. (2007).
    The RunBot architecture for adaptive, fast, dynamic walking. IEEE International Symposium on Circuits and Systems ISCAS, New Orleans, USA, 1181-1184.
    BibTeX:
    @inproceedings{manoonponggengporr2007,
      author = {Manoonpong, P. and Geng, T. and Porr, B. and Wörgötter, F.},
      title = {The RunBot architecture for adaptive, fast, dynamic walking},
      pages = {1181-1184},
      booktitle = {IEEE International Symposium on Circuits and Systems ISCAS, New Orleans, USA},
      year = {2007}}
    Abstract:
    Review:
    Manoonpong, P. and Geng, T. and Porr, B. and Kulvicius, T. and Wörgötter, F. (2007).
    Adaptive, Fast Walking in a Biped Robot under Neuronal Control and Learning. PLoS Computational Biology, e134, 3, 7. DOI: 10.1371/journal.pcbi.0030134.
    BibTeX:
    @article{manoonponggengporr2007a,
      author = {Manoonpong, P. and Geng, T. and Porr, B. and Kulvicius, T. and Wörgötter, F.},
      title = {Adaptive, Fast Walking in a Biped Robot under Neuronal Control and Learning},
      journal = {PLoS Computational Biology},
      pages = {e134},
      volume= {3},
      number = {7},
      year = {2007},
      doi = {10.1371/journal.pcbi.0030134},
      abstract = {Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested, sensori-motor loops where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot, which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk with a high speed (3.0 leg length/s), self-adapting to minor disturbances, and reacting in a robust way to abruptly induced gait changes. At the same time, it can learn walking on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself, combined with synaptic learning may be a way forward to better understand and solve coordination problems in other complex motor tasks.}}
    Abstract: Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested, sensori-motor loops where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot, which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk with a high speed (3.0 leg length/s), self-adapting to minor disturbances, and reacting in a robust way to abruptly induced gait changes. At the same time, it can learn walking on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself, combined with synaptic learning may be a way forward to better understand and solve coordination problems in other complex motor tasks.
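    The online learning mentioned in the abstract is based on correlation-driven synaptic plasticity in the sensori-motor loop; the sketch below shows a generic correlation-based update of this flavour, in which a predictive input is correlated with the temporal derivative of a later reflex signal. The variable names and constants are assumptions and this is not RunBot's actual controller.

    MU, DT = 0.01, 0.002           # assumed learning rate and time step

    def ico_step(w_pred, x_pred, reflex_now, reflex_prev):
        # dw/dt = mu * x_pred * d(reflex)/dt, discretised with step DT.
        d_reflex = (reflex_now - reflex_prev) / DT
        w_pred += MU * x_pred * d_reflex * DT
        # The motor command mixes the fixed reflex pathway with the learned
        # predictive one, so that over time the controller acts before the
        # reflex is triggered.
        motor = reflex_now + w_pred * x_pred
        return w_pred, motor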
    Review:
    Xiong, X. and Wörgötter, F. and Manoonpong, P. (2014).
    Virtual Agonist-antagonist Mechanisms Produce Biological Muscle-like Functions: An Application for Robot Joint Control. Industrial Robot: An International Journal, 340 - 346, 41, 4. DOI: 10.1108/IR-11-2013-421.
    BibTeX:
    @article{xiongwoergoettermanoonpong2014,
      author = {Xiong, X. and Wörgötter, F. and Manoonpong, P.},
      title = {Virtual Agonist-antagonist Mechanisms Produce Biological Muscle-like Functions: An Application for Robot Joint Control},
      pages = {340 - 346},
      journal = {Industrial Robot: An International Journal},
      year = {2014},
      volume= {41},
      number = {4},
      publisher = {Emerald Group Publishing Ltd.},
      url = {http://www.emeraldinsight.com/doi/abs/10.1108/IR-11-2013-421},
      doi = {10.1108/IR-11-2013-421},
      abstract = {Purpose - Biological muscles of animals have a surprising variety of functions, i.e., struts, springs, and brakes. According to this, the purpose of this paper is to apply virtual agonist-antagonist mechanisms to robot joint control allowing for muscle-like functions and variably compliant joint motions. Design/methodology/approach - Each joint is driven by a pair of virtual agonist-antagonist mechanisms (VAAM, i.e., passive components). The muscle-like functions as well as the variable joint compliance are simply achieved by tuning the damping coefficient of the VAAM. Findings - With the VAAM, variably compliant joint motions can be produced without mechanically bulky and complex mechanisms or complex force/torque sensing at each joint. Moreover, through tuning the damping coefficient of the VAAM, the functions of the VAAM are comparable to biological muscles. Originality/value - The model (i.e., VAAM) provides a way forward to emulate muscle-like functions that are comparable to those found in physiological experiments of biological muscles. Based on these muscle-like functions, the robotic joints can easily achieve variable compliance that does not require complex physical components or torque-sensing systems, which makes it possible to implement the model on small legged robots driven by, e.g., standard servo motors. Thus, the VAAM minimizes hardware and reduces system complexity. From this point of view, the model opens up another way of simulating muscle behaviors on artificial machines. Executive summary: The VAAM can be applied to produce variable compliant motions of a high-DOF robot. Relying only on force sensing at the end effector, this application is easily achieved by changing coefficients of the VAAM. Therefore, the VAAM can reduce the economic cost of mechanical and sensing components of the robot, compared to traditional methods (e.g., artificial muscles).}}
    Abstract: Purpose - Biological muscles of animals have a surprising variety of functions, i.e., struts, springs, and brakes. According to this, the purpose of this paper is to apply virtual agonist-antagonist mechanisms to robot joint control allowing for muscle-like functions and variably compliant joint motions. Design/methodology/approach - Each joint is driven by a pair of virtual agonist-antagonist mechanisms (VAAM, i.e., passive components). The muscle-like functions as well as the variable joint compliance are simply achieved by tuning the damping coefficient of the VAAM. Findings - With the VAAM, variably compliant joint motions can be produced without mechanically bulky and complex mechanisms or complex force/torque sensing at each joint. Moreover, through tuning the damping coefficient of the VAAM, the functions of the VAAM are comparable to biological muscles. Originality/value - The model (i.e., VAAM) provides a way forward to emulate muscle-like functions that are comparable to those found in physiological experiments of biological muscles. Based on these muscle-like functions, the robotic joints can easily achieve variable compliance that does not require complex physical components or torque-sensing systems, which makes it possible to implement the model on small legged robots driven by, e.g., standard servo motors. Thus, the VAAM minimizes hardware and reduces system complexity. From this point of view, the model opens up another way of simulating muscle behaviors on artificial machines. Executive summary: The VAAM can be applied to produce variable compliant motions of a high-DOF robot. Relying only on force sensing at the end effector, this application is easily achieved by changing coefficients of the VAAM. Therefore, the VAAM can reduce the economic cost of mechanical and sensing components of the robot, compared to traditional methods (e.g., artificial muscles).
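    To illustrate how tuning a single damping coefficient can change joint compliance, the sketch below integrates a generic spring-damper ("virtual muscle") joint model; the constants are assumed values and the code is not the published VAAM equations.

    STIFFNESS = 5.0        # assumed virtual spring constant (Nm/rad)
    INERTIA   = 0.02       # assumed joint inertia (kg m^2)
    DT        = 0.001      # integration step (s)

    def joint_step(theta, omega, theta_target, damping, load_torque=0.0):
        # Integrate one step of theta'' = (tau_spring + tau_damper + tau_load) / I.
        tau = (STIFFNESS * (theta_target - theta)   # spring pulls toward target angle
               - damping * omega                    # damping term sets the compliance
               + load_torque)
        omega += tau / INERTIA * DT
        theta += omega * DT
        return theta, omega

    # Lower damping gives softer, more compliant motion; higher damping gives a
    # stiffer, brake-like joint, mirroring the strut/spring/brake distinction above.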
    Review:
    Kulvicius, T. and Porr, B. and Wörgötter, F. (2007).
    Development of Receptive Fields in a Closed-loop Behavioural System. Neurocomput, 2046--2049, 70, 10-12. DOI: 10.1016/j.neucom.2006.10.132.
    BibTeX:
    @article{kulviciusporrwoergoetter2007,
      author = {Kulvicius, T. and Porr, B. and Wörgötter, F.},
      title = {Development of Receptive Fields in a Closed-loop Behavioural System},
      pages = {2046--2049},
      journal = {Neurocomput},
      year = {2007},
      volume= {70},
      number = {10-12},
      publisher = {Elsevier Science Publishers B. V},
      url = {http://www.sciencedirect.com/science/article/pii/S0925231206004127},
      doi = {10.1016/j.neucom.2006.10.132}}
    Abstract:
    Review:
    Kulvicius, T. and Porr, B. and Wörgötter, F. (2007).
    Chained learning architectures in a simple closed-loop behavioural context. Biol. Cybern, 363-378, 97. DOI: 10.1007/s00422-007-0176-y.
    BibTeX:
    @article{kulviciusberndwoergoetter2007,
      author = {Kulvicius, T. and Porr, B. and Wörgötter, F.},
      title = {Chained learning architectures in a simple closed-loop behavioural context},
      pages = {363-378},
      journal = {Biol. Cybern},
      year = {2007},
      volume= {97},
      doi = {10.1007/s00422-007-0176-y}}
    Abstract:
    Review:
    Kalkan, S. and Wörgötter, F. and Krüger, N. (2007).
    First-order and Second-order Statistical Analysis of 3D and 2D Structure. Network: Computation in Neural Systems, 129-160, 18, 2. DOI: 10.1080/09548980701580444.
    BibTeX:
    @article{kalkanwoergoetterkrueger2007,
      author = {Kalkan, S. and Wörgötter, F. and Krüger, N.},
      title = {First-order and Second-order Statistical Analysis of 3D and 2D Structure},
      pages = {129-160},
      journal = {Network: Computation in Neural Systems},
      year = {2007},
      volume= {18},
      number = {2},
      doi = {10.1080/09548980701580444}}
    Abstract:
    Review:
    Kalkan, S. and Wörgötter, F. and Krüger, N. (2007).
    Statistical Analysis of Second-order Relations of 3D Structures. International Conference on Computer Vision Theory and Applications VISAPP, 13-20.
    BibTeX:
    @inproceedings{kalkanwoergoetterkrueger2007a,
      author = {Kalkan, S. and Wörgötter, F. and Krüger, N.},
      title = {Statistical Analysis of Second-order Relations of 3D Structures},
      pages = {13-20},
      booktitle = {International Conference on Computer Vision Theory and Applications VISAPP},
      year = {2007}}
    Abstract:
    Review:
    Hennig, M H. and Wörgötter, F. (2007).
    Effects of fixational eye movements on retinal ganglion cell responses: a modelling study. Front. Comput. Neurosci, 1:2. DOI: 10.3389/neuro.10.002.2007.
    BibTeX:
    @article{hennigwoergoetter2007,
      author = {Hennig, M H. and Wörgötter, F.},
      title = {Effects of fixational eye movements on retinal ganglion cell responses: a modelling study},
      journal = {Front. Comput. Neurosci},
      year = {2007},
      volume= {1:2},
      doi = {10.3389/neuro.10.002.2007}}
    Abstract:
    Review:
    Funke, K. and Kerscher, N J. and Wörgötter, F. (2007).
    Noise-improved signal detection in cat primary visual cortex via a well balanced stochastic resonance like process. Europ. J. Neurosci, 1322-1332, 26, 5. DOI: 10.1111/j.1460-9568.2007.05735.
    BibTeX:
    @article{funkekerscherwoergoetter2007,
      author = {Funke, K. and Kerscher, N J. and Wörgötter, F.},
      title = {Noise-improved signal detection in cat primary visual cortex via a well balanced stochastic resonance like process},
      pages = {1322-1332},
      journal = {Europ. J. Neurosci},
      year = {2007},
      volume= {26},
      number = {5},
      doi = {10.1111/j.1460-9568.2007.05735}}
    Abstract:
    Review:
    Baseski, E. and Pugeault, N. and Kalkan, S. and Kraft, D. and Wörgötter, F. and Krüger, N. (2007).
    A Scene Representation Based on Multi-Modal 2D and 3D Features. ICCV 2007 Workshop on 3D Representation for Recognition 3dRR-07.
    BibTeX:
    @inproceedings{baseskipugeaultkalkan2007,
      author = {Baseski, E. and Pugeault, N. and Kalkan, S. and Kraft, D. and Wörgötter, F. and Krüger, N.},
      title = {A Scene Representation Based on Multi-Modal 2D and 3D Features},
      booktitle = {ICCV 2007 Workshop on 3D Representation for Recognition 3dRR-07},
      year = {2007}}
    Abstract:
    Review:
    Ainge, J. and Tamosiunaite, M. and Wörgötter, F. and Dudchenko, P. (2007).
    Hippocampal CA1 place cells encode intended destination on a maze with multiple choice points. J. Neurosci, 9769-9779, 27, 36. DOI: 10.1523/JNEUROSCI.2011-07.2007.
    BibTeX:
    @article{aingetamosiunaitewoergoetter2007,
      author = {Ainge, J. and Tamosiunaite, M. and Wörgötter, F. and Dudchenko, P.},
      title = {Hippocampal CA1 place cells encode intended destination on a maze with multiple choice points},
      pages = {9769-9779},
      journal = {J. Neurosci},
      year = {2007},
      volume= {27},
      number = {36},
      doi = {10.1523/JNEUROSCI.2011-07.2007}}
    Abstract:
    Review:
    Hennig, M. and Wörgötter, F. (2006).
    Influence of retinal ganglion cell nonlinearities on visual perception. Journal of Vision.
    BibTeX:
    @article{henningwoergoetter2006,
      author = {Hennig, M. and Wörgötter, F.},
      title = {Influence of retinal ganglion cell nonlinearities on visual perception},
      journal = {Journal of Vision},
      year = {2006},
      url = {http://www.bccn-goettingen.de/Publications/Henning_Retinal}}
    Abstract:
    Review:
    Yang, Z. M. and Wörgötter, F. and Cameron, K. and Boonsobhak, V. (2006).
    A neuromorphic depth-from-motion vision model with STDP adaptation. IEEE Trans. Neural Networks, 482-495, 17, 2.
    BibTeX:
    @article{yangzwoergoettercameron2006,
      author = {Yang, Z. M. and Wörgötter, F. and Cameron, K. and Boonsobhak, V.},
      title = {A neuromorphic depth-from-motion vision model with STDP adaptation},
      pages = {482-495},
      journal = {IEEE Trans. Neural Networks},
      year = {2006},
      volume= {17},
      number = {2}}
    Abstract:
    Review:
    Thompson, A M. and Porr, B. and Wörgötter, F. (2006).
    Stabilising Hebbian learning with a third factor in a food retrieval task. SAB 2006, Lecture Notes in Computer Science, 313-322, 4095. DOI: 10.1007/11840541.
    BibTeX:
    @inproceedings{thompsonporrwoergoetter2006,
      author = {Thompson, A M. and Porr, B. and Wörgötter, F.},
      title = {Stabilising Hebbian learning with a third factor in a food retrieval task},
      pages = {313-322},
      booktitle = {SAB 2006, Lecture Notes in Computer Science},
      year = {2006},
      volume= {4095},
      doi = {10.1007/11840541}}
    Abstract:
    Review:
    Tamosiunaite, M. and Porr, B. and Wörgötter, F. (2006).
    Temporally changing synaptic plasticity. Advances in Neural Information Processing Systems, 18, 1337-1344.
    BibTeX:
    @inproceedings{tamosiunaiteporrwoergoetter2006,
      author = {Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
      title = {Temporally changing synaptic plasticity},
      pages = {1337-1344},
      booktitle = {Advances in Neural Information Processing Systems, 18},
      year = {2006},
      publisher = {MIT Press, Cambridge, MA}}
    Abstract:
    Review:
    Pugeault, N. and Wörgötter, F. and Krüger, N. (2006).
    Multi-modal Primitives: Local, Condensed, and Semantically Rich Visual Descriptors and the Formalisation of Contextual Information. International Journal of Humanoid Robotics, Special Issue on Cognitive Humanoid Vision, 379-405, 7, 3. DOI: 10.1142/S0219843610002209.
    BibTeX:
    @article{pugeaultwoergoetterkrueger2006,
      author = {Pugeault, N. and Wörgötter, F. and Krüger, N.},
      title = {Multi-modal Primitives: Local, Condensed, and Semantically Rich Visual Descriptors and the Formalisation of Contextual Information},
      pages = {379-405},
      journal = {International Journal of Humanoid Robotics, Special Issue on Cognitive Humanoid Vision},
      year = {2006},
      volume= {7},
      number = {3},
      url = {http://www.worldscientific.com/doi/abs/10.1142/S0219843610002209},
      doi = {10.1142/S0219843610002209},
      abstract = {We present a novel representation of visual information, based on local symbolic descriptors, that we call visual primitives. These primitives: (1) combine different visual modalities, (2) associate semantics with local scene information, and (3) reduce the bandwidth while increasing the predictability of the information exchanged across the system. This representation leads to the concept of early cognitive vision, which we define as an intermediate level between dense, signal-based early vision and high-level cognitive vision. The framework's potential is demonstrated in several applications, in particular in the area of robotics and humanoid robotics, which are briefly outlined.}}
    Abstract: We present a novel representation of visual information, based on local symbolic descriptors, that we call visual primitives. These primitives: (1) combine different visual modalities, (2) associate semantics with local scene information, and (3) reduce the bandwidth while increasing the predictability of the information exchanged across the system. This representation leads to the concept of early cognitive vision, which we define as an intermediate level between dense, signal-based early vision and high-level cognitive vision. The framework's potential is demonstrated in several applications, in particular in the area of robotics and humanoid robotics, which are briefly outlined.
    Review:
    Pugeault, N. and Wörgötter, F. and Krüger, N. (2006).
    Multi-modal scene reconstruction using perceptual grouping constraints. Computer Vision and Pattern Recognition Workshop, CVPRW, 195-195.
    BibTeX:
    @inproceedings{pugeaultwoergoetterkrueger2006a,
      author = {Pugeault, N. and Wörgötter, F. and Krüger, N.},
      title = {Multi-modal scene reconstruction using perceptual grouping constraints},
      pages = {195-195},
      booktitle = {Computer Vision and Pattern Recognition Workshop, CVPRW},
      year = {2006}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2006).
    Strongly improved stability and faster convergence of temporal sequence learning by utilising input correlations only. Neural Comp, 1380-1412, 18, 6.
    BibTeX:
    @article{porrwoergoetter2006,
      author = {Porr, B. and Wörgötter, F.},
      title = {Strongly improved stability and faster convergence of temporal sequence learning by utilising input correlations only},
      pages = {1380-1412},
      journal = {Neural Comp},
      year = {2006},
      volume= {18},
      number = {6}}
    Abstract:
    Review:
    Porr, B. and Egerton, A. and Wörgötter, F. (2006).
    Towards closed loop information: Predictive information. Constructivist Foundat, 83-90, 1, 2.
    BibTeX:
    @article{porregertonwoergoetter2006,
      author = {Porr, B. and Egerton, A. and Wörgötter, F.},
      title = {Towards closed loop information: Predictive information},
      pages = {83-90},
      journal = {Constructivist Foundat},
      year = {2006},
      volume= {1},
      number = {2}}
    Abstract:
    Review:
    Manoonpong, P. and Geng, T. and Wörgötter, F. (2006).
    Exploring the dynamic walking range of the biped robot Runbot with an active upper-body component. IEEE-RAS International Conference on Humanoid Robots Humanoids 2006, 418-424.
    BibTeX:
    @inproceedings{manoonponggengwoergoetter2006,
      author = {Manoonpong, P. and Geng, T. and Wörgötter, F.},
      title = {Exploring the dynamic walking range of the biped robot Runbot with an active upper-body component},
      pages = {418-424},
      booktitle = {IEEE-RAS International Conference on Humanoid Robots Humanoids 2006},
      year = {2006}}
    Abstract:
    Review:
    Krüger, N. and Wörgötter, F. and Hulle, M V. (2006).
    Editorial: ECOVISION: Challenges in Early-Cognitive Vision. Int. J. Comp. Vision, 5-7, 72, 1.
    BibTeX:
    @article{kruegerwoergoetterhulle2006,
      author = {Krüger, N. and Wörgötter, F. and Hulle, M V.},
      title = {Editorial: ECOVISION: Challenges in Early-Cognitive Vision},
      pages = {5-7},
      journal = {Int. J. Comp. Vision},
      year = {2006},
      volume= {72},
      number = {1}}
    Abstract:
    Review:
    Kulvicius, T. and Geng, T. and Porr, B. and Wörgötter, F. (2006).
    Speed Optimization of a 2D Walking Robot through STDP. Dynamical principles for neuroscience and intelligent biomimetic devices: EPFL LATSIS Symposium 2006, 99-100.
    BibTeX:
    @inproceedings{kulviciusgengporr2006,
      author = {Kulvicius, T. and Geng, T. and Porr, B. and Wörgötter, F.},
      title = {Speed Optimization of a 2D Walking Robot through STDP},
      pages = {99-100},
      booktitle = {Dynamical principles for neuroscience and intelligent biomimetic devices: EPFL LATSIS Symposium 2006},
      year = {2006}}
    Abstract:
    Review:
    Krüger, N. and Wörgötter, F. (2006).
    Symbolic Pointillism: Computer art motivated by human brain structures. Leonardo, 337-340, 38:4.
    BibTeX:
    @article{kruegerwoergoetter2006,
      author = {Krüger, N. and Wörgötter, F.},
      title = {Symbolic Pointillism: Computer art motivated by human brain structures},
      pages = {337-340},
      journal = {Leonardo},
      year = {2006},
      volume= {38},
      number = {4}}
    Abstract:
    Review:
    Kalkan, S. and Wörgötter, F. and Krüger, N. (2006).
    Statistical Analysis of Local 3D Structure in 2D Images. IEEE Computer Vision and Pattern Recognition CVPR, 1114-1121.
    BibTeX:
    @inproceedings{kalkanswoergoetterkrueger2006,
      author = {Kalkan, S. and Wörgötter, F. and Krüger, N.},
      title = {Statistical Analysis of Local 3D Structure in 2D Images},
      pages = {1114-1121},
      booktitle = {IEEE Computer Vision and Pattern Recognition CVPR},
      year = {2006}}
    Abstract:
    Review:
    Geib, C. and Mourao, K. and Petrick, R. and Pugeault, N. and Steedman, M. and Krüger, N. and Wörgötter, F. (2006).
    Object Action Complexes as an Interface for Planning and Robot Control. IEEE-RAS International Conference on Humanoid Robots Workshop at the Humanoids 2006, Genova, Italy.
    BibTeX:
    @inproceedings{geibmouraopetrick2006,
      author = {Geib, C. and Mourao, K. and Petrick, R. and Pugeault, N. and Steedman, M. and Krüger, N. and Wörgötter, F.},
      title = {Object Action Complexes as an Interface for Planning and Robot Control},
      booktitle = {IEEE-RAS International Conference on Humanoid Robots Workshop at the Humanoids 2006, Genova, Italy},
      year = {2006}}
    Abstract:
    Review:
    Geng, T. and Porr, B. and Wörgötter, F. (2006).
    Coupling of neural computation with physical computation for stable dynamic biped walking control. Neural Comp, 1156-1196, 18, 5.
    BibTeX:
    @article{gengporrwoergoetter2006,
      author = {Geng, T. and Porr, B. and Wörgötter, F.},
      title = {Coupling of neural computation with physical computation for stable dynamic biped walking control},
      pages = {1156-1196},
      journal = {Neural Comp},
      year = {2006},
      volume= {18},
      number = {5}}
    Abstract:
    Review:
    Geng, T. and Porr, B. and Wörgötter, F. (2006).
    Fast biped walking with a reflexive controller and real-time policy searching. Advances in Neural Information Processing, 427-434, 18.
    BibTeX:
    @inproceedings{gengporrwoergoetter2006a,
      author = {Geng, T. and Porr, B. and Wörgötter, F.},
      title = {Fast biped walking with a reflexive controller and real-time policy searching},
      pages = {427-434},
      booktitle = {Advances in Neural Information Processing},
      year = {2006},
      volume= {18}}
    Abstract:
    Review:
    Geng, T. and Porr, B. and Wörgötter, F. (2006).
    Fast biped walking with a reflexive neuronal controller and real-time online learning. Int. Journal of Robotics Res, 243-261, 25, 3.
    BibTeX:
    @article{gengporrwoergoetter2006b,
      author = {Geng, T. and Porr, B. and Wörgötter, F.},
      title = {Fast biped walking with a reflexive neuronal controller and real-time online learning},
      pages = {243-261},
      journal = {Int. Journal of Robotics Res},
      year = {2006},
      volume= {25},
      number = {3}}
    Abstract:
    Review:
    Wörgötter, F. and Porr, B. (2005).
    Temporal sequence learning, prediction and control - A review of different models and their relation to biological mechanisms. Neural Comp, 245-319, 17.
    BibTeX:
    @article{woergoetterporr2005,
      author = {Wörgötter, F. and Porr, B.},
      title = {Temporal sequence learning, prediction and control - A review of different models and their relation to biological mechanisms},
      pages = {245-319},
      journal = {Neural Comp},
      year = {2005},
      volume= {17}}
    Abstract:
    Review:
    Saudargiene, A. and Porr, B. and Wörgötter, F. (2005).
    Synaptic modifications depend on synapse location and activity: a biophysical model of STDP. Biosystems, 3-10, 79.
    BibTeX:
    @article{saudargieneporrwoergoetter2005,
      author = {Saudargiene, A. and Porr, B. and Wörgötter, F.},
      title = {Synaptic modifications depend on synapse location and activity: a biophysical model of STDP},
      pages = {3-10},
      journal = {Biosystems},
      year = {2005},
      volume= {79}}
    Abstract:
    Review:
    Saudargiene, A. and Porr, B. and Wörgötter, F. (2005).
    Local learning rules: Predicted influence of dendritic location on synaptic modification in spike timing dependent plasticity. Biol. Cybern, 128-138, 92, 2.
    BibTeX:
    @article{saudargieneporrwoergoetter2005a,
      author = {Saudargiene, A. and Porr, B. and Wörgötter, F.},
      title = {Local learning rules: Predicted influence of dendritic location on synaptic modification in spike timing dependent plasticity},
      pages = {128-138},
      journal = {Biol. Cybern},
      year = {2005},
      volume= {92},
      number = {2}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2005).
    Inside Embodiment: What means Embodiment for Radical Constructivists? Kybernetes, 105-117, 34.
    BibTeX:
    @article{porrwoergoetter2005,
      author = {Porr, B. and Wörgötter, F.},
      title = {Inside Embodiment: What means Embodiment for Radical Constructivists?},
      pages = {105-117},
      journal = {Kybernetes},
      year = {2005},
      volume= {34}}
    Abstract:
    Review:
    Krüger, N. and Wörgötter, F. (2005).
    Multi-modal primitives as functional models of hyper-columns and their use for contextual integration. Proc. 1st Int. Symp. Brain, Vision and Artificial Intelligence, Lecture Notes in Computer Science, 157-166, 3704. DOI: 10.1007/11565123.
    BibTeX:
    @inproceedings{kruegerwoergoetter2005,
      author = {Krüger, N. and Wörgötter, F.},
      title = {Multi-modal primitives as functional models of hyper-columns and their use for contextual integration},
      pages = {157-166},
      booktitle = {Proc. 1st Int. Symp. Brain, Vision and Artificial Intelligence, Lecture Notes in Computer Science},
      year = {2005},
      volume= {3704},
      doi = {10.1007/11565123}}
    Abstract:
    Review:
    Kalkan, S. and Calow, D. and Wörgötter, F. and Lappe, M. and Krüger, N. (2005).
    Local image structures and optic flow estimation. Network, 341-356, 16, 4.
    BibTeX:
    @article{kalkancalowwoergoetter2005,
      author = {Kalkan, S. and Calow, D. and Wörgötter, F. and Lappe, M. and Krüger, N.},
      title = {Local image structures and optic flow estimation},
      pages = {341-356},
      journal = {Network},
      year = {2005},
      volume= {16},
      number = {4}}
    Abstract:
    Review:
    Calow, D. and Krüger, N. and Wörgötter, F. and Lappe, M. (2005).
    Biologically motivated space-variant filtering for robust optic flow processing. Network, 323-340, 16, 4. DOI: 10.1080/09548980600563962.
    BibTeX:
    @article{calowkruegerwoergoetter2005,
      author = {Calow, D. and Krüger, N. and Wörgötter, F. and Lappe, M.},
      title = {Biologically motivated space-variant filtering for robust optic flow processing},
      pages = {323-340},
      journal = {Network},
      year = {2005},
      volume= {16},
      number = {4},
      doi = {10.1080/09548980600563962}}
    Abstract:
    Review:
    Wörgötter, F. and Krüger, N. and Pugeault, N. and Calow, D. and Lappe, M. and Pauwels, K. and Hulle, M V. and Tan, S. and Johnston, A. (2004).
    Early cognitive vision: Using Gestalt laws for task-dependent, active image processing. Natural Computing, 293-321, 3.
    BibTeX:
    @article{woergoetterkruegerpugeault2004,
      author = {Wörgötter, F. and Krüger, N. and Pugeault, N. and Calow, D. and Lappe, M. and Pauwels, K. and Hulle, M V. and Tan, S. and Johnston, A.},
      title = {Early cognitive vision: Using Gestalt laws for task-dependent, active image processing},
      pages = {293-321},
      journal = {Natural Computing},
      year = {2004},
      volume= {3}}
    Abstract:
    Review:
    Wörgötter, F. (2004).
    Actor-Critic Models of Animal Control - A critique of reinforcement learning. Fourth International ICSC Symposium on Engineering of Intelligent Systems, Madeira, Portugal.
    BibTeX:
    @inproceedings{woergoetter2004,
      author = {Wörgötter, F.},
      title = {Actor-Critic Models of Animal Control - A critique of reinforcement learning},
      booktitle = {Fourth International ICSC Symposium on Engineering of Intelligent Systems, Madeira, Portugal},
      year = {2004}}
    Abstract:
    Review:
    Saudargiene, A. and Porr, B. and Wörgötter, F. (2004).
    Biologically inspired artificial neural network algorithm which implements local learning rules. ISCAS, Vancouver.
    BibTeX:
    @inproceedings{saudargieneporrwoergoetter2004,
      author = {Saudargiene, A. and Porr, B. and Wörgötter, F.},
      title = {Biologically inspired artificial neural network algorithm which implements local learning rules},
      booktitle = {ISCAS, Vancouver},
      year = {2004}}
    Abstract:
    Review:
    Saudargiene, A. and Porr, B. and Wörgötter, F. (2004).
    How the shape of pre- and postsynaptic signals can influence STDP: A biophysical model. Neural Comp, 595-626, 16.
    BibTeX:
    @article{saudargieneporrwoergoetter2004a,
      author = {Saudargiene, A. and Porr, B. and Wörgötter, F.},
      title = {How the shape of pre- and postsynaptic signals can influence STDP: A biophysical model},
      pages = {595-626},
      journal = {Neural Comp},
      year = {2004},
      volume= {16}}
    Abstract:
    Review:
    Pugeault, N. and Wörgötter, F. and Krüger, N. (2004).
    A non-local stereo similarity based on collinear groups. Fourth International ICSC Symposium on Engineering of Intelligent Systems, Madeira, Portugal.
    BibTeX:
    @inproceedings{pugeaultwoergoetterkrueger2004,
      author = {Pugeault, N. and Wörgötter, F. and Krüger, N.},
      title = {A non-local stereo similarity based on collinear groups},
      booktitle = {Fourth International ICSC Symposium on Engineering of Intelligent Systems, Madeira, Portugal},
      year = {2004}}
    Abstract:
    Review:
    Porr, B. and Saudargiene, A. and Wörgötter, F. (2004).
    Analytical solution of spike-timing dependent plasticity based on synaptic biophysics. Advances in Neural Information Processing Systems 16. Thrun, S. and Saul, L. and Schölkopf, B. eds. MIT Press.
    BibTeX:
    @inproceedings{porrsaudargienewoergoetter2004,
      author = {Porr, B. and Saudargiene, A. and Wörgötter, F.},
      title = {Analytical solution of spike-timing dependent plasticity based on synaptic biophysics},
      booktitle = {Advances in Neural Information Processing Systems 16. Thrun, S. and Saul, L. and Schölkopf, B. eds. MIT Press},
      year = {2004}}
    Abstract:
    Review:
    Krüger, N. and Wörgötter, F. (2004).
    Statistical and deterministic regularities: Utilization of motion and grouping in biological and artificial visual systems. Advances in Imaging and Electron Physics, 82-147, 131.
    BibTeX:
    @article{kruegerwoergoetter2004,
      author = {Krüger, N. and Wörgötter, F.},
      title = {Statistical and deterministic regularities: Utilization of motion and grouping in biological and artificial visual systems},
      pages = {82-147},
      journal = {Advances in Imaging and Electron Physics},
      year = {2004},
      volume= {131}}
    Abstract:
    Review:
    Krüger, N. and Lappe, M. and Wörgötter, F. (2004).
    Biologically motivated multi-modal processing of visual primitives. The Interdisciplinary Journal of Artificial Intelligence and the Simulation of Behaviour, 417-427, 15.
    BibTeX:
    @article{kruegerlappewoergoetter2004,
      author = {Krüger, N. and Lappe, M. and Wörgötter, F.},
      title = {Biologically motivated multi-modal processing of visual primitives},
      pages = {417-427},
      journal = {The Interdisciplinary Journal of Artificial Intelligence and the Simulation of Behaviour},
      year = {2004},
      volume= {15}}
    Abstract:
    Review:
    Krüger, N. and Felsberg, M. and Wörgötter, F. (2004).
    Processing multi-modal primitives from image sequences. Fourth International ICSC Symposium on Engineering of Intelligent Systems, Madeira, Portugal.
    BibTeX:
    @inproceedings{kruegerfelsbergwoergoetter2004,
      author = {Krüger, N. and Felsberg, M. and Wörgötter, F.},
      title = {Processing multi-modal primitives from image sequences},
      booktitle = {Fourth International ICSC Symposium on Engineering of Intelligent Systems, Madeira, Portugal},
      year = {2004}}
    Abstract:
    Review:
    Kalkan, S. and Calow, D. and Felsberg, M. and Wörgötter, F. and Lappe, M. and Krüger, N. (2004).
    Optic flow statistics and intrinsic dimensionality. Brain Inspired Cognitive Systems.
    BibTeX:
    @inproceedings{kalkancalowfelsberg2004,
      author = {Kalkan, S. and Calow, D. and Felsberg, M. and Wörgötter, F. and Lappe, M. and Krüger, N.},
      title = {Optic flow statistics and intrinsic dimensionality},
      booktitle = {Brain Inspired Cognitive Systems},
      year = {2004}}
    Abstract:
    Review:
    Hennig, M. and Wörgötter, F. (2004).
    Eye micro-movements improve stimulus detection beyond the Nyquist limit in the peripheral retina. Advances in Neural Information Processing Systems, 16.
    BibTeX:
    @inproceedings{hennigwoergoetter2004,
      author = {Hennig, M. and Wörgötter, F.},
      title = {Eye micro-movements improve stimulus detection beyond the Nyquist limit in the peripheral retina},
      booktitle = {Advances in Neural Information Processing Systems},
      year = {2004},
      volume= {16}}
    Abstract:
    Review:
    Calow, D. and Krüger, N. and Wörgötter, F. and Lappe, M. (2004).
    Space variant filtering of optic flow for robust three dimensional motion estimation. Fourth International ICSC Symposium on Engineering of Intelligent Systems, Madeira, Portugal.
    BibTeX:
    @inproceedings{calowkruegerwoergoetter2004,
      author = {Calow, D. and Krüger, N. and Wörgötter, F. and Lappe, M.},
      title = {Space variant filtering of optic flow for robust three dimensional motion estimation},
      booktitle = {Fourth International ICSC Symposium on Engineering of Intelligent Systems, Madeira, Portugal},
      year = {2004}}
    Abstract:
    Review:
    Wörgötter, F. and Suder, K. and Funke, K. (2003).
    Response characteristics in the lateral geniculate nucleus and their primary afferent influences on the visual cortex of cat. In: Modulation of Neuronal Responses: Implications for Active Vision. G.T. Buracas, O. Ruksenas, G.M. Boyton and T.D. Albright, eds. NATO Science, 165-188.
    BibTeX:
    @incollection{woergoettersuderfunke2003,
      author = {Wörgötter, F. and Suder, K. and Funke, K.},
      title = {Response characteristics in the lateral geniculate nucleus and their primary afferent influences on the visual cortex of cat},
      booktitle = {Modulation of Neuronal Responses: Implications for Active Vision},
      editor = {Buracas, G.T. and Ruksenas, O. and Boyton, G.M. and Albright, T.D.},
      series = {NATO Science},
      pages = {165-188},
      year = {2003}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2003).
    Learning a forward model of a reflex. Advances in Neural Information Processing Systems 15, MIT Press.
    BibTeX:
    @inproceedings{porrwoergoetter2003,
      author = {Porr, B. and Wörgötter, F.},
      title = {Learning a forward model of a reflex},
      booktitle = {Advances in Neural Information Processing Systems 15},
      publisher = {MIT Press},
      year = {2003}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2003).
    Interaction, self-reference and contingency in computational neuroscience: analytical descriptions and information theoretic consequences. Rethinking Communicative Interaction: New interdisciplinary horizons, Colin B. Grant, Ed., John Benjamins Publ. Co. and Amsterdam, Philadelphia, 145-162.
    BibTeX:
    @inproceedings{porrwoergoetter2003a,
      author = {Porr, B. and Wörgötter, F.},
      title = {Interaction, self-reference and contingency in computational neuroscience: analytical descriptions and information theoretic consequences},
      pages = {145-162},
      booktitle = {Rethinking Communicative Interaction: New interdisciplinary horizons, Colin B. Grant, Ed., John Benjamins Publ. Co. and Amsterdam, Philadelphia},
      year = {2003}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2003).
    Isotropic sequence order learning in a closed loop behavioural system. Roy. Soc. Phil. Trans. Mathematical, Physical and Engineering Sciences, 2225-2244, 361, 1811.
    BibTeX:
    @article{porrwoergoetter2003b,
      author = {Porr, B. and Wörgötter, F.},
      title = {Isotropic sequence order learning in a closed loop behavioural system},
      pages = {2225-2244},
      journal = {Roy. Soc. Phil. Trans. Mathematical, Physical and Engineering Sciences},
      year = {2003},
      volume= {361},
      number = {1811}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2003).
    Isotropic sequence order learning. Neural Comp, 831-864, 15.
    BibTeX:
    @article{porrwoergoetter2003c,
      author = {Porr, B. and Wörgötter, F.},
      title = {Isotropic sequence order learning},
      pages = {831-864},
      journal = {Neural Comp},
      year = {2003},
      volume= {15}}
    Abstract:
    Review:
    Porr, B. and Ferber, C. and Wörgötter, F. (2003).
    ISO-learning approximates a solution to the inverse controller problem in an unsupervised behavioural paradigm. Neural Comp, 865-884, 15.
    BibTeX:
    @article{porrferberwoergoetter2003,
      author = {Porr, B. and Ferber, C. and Wörgötter, F.},
      title = {ISO-learning approximates a solution to the inverse controller problem in an unsupervised behavioural paradigm},
      pages = {865-884},
      journal = {Neural Comp},
      year = {2003},
      volume= {15}}
    Abstract:
    Review:
    Krüger, N. and Wörgötter, F. (2003).
    Symbolic Pointillism: Computer art motivated by human perception. Proceedings of the AISB 2003 Symposium on Biologically inspired and Machine Vision, Theory and Application, Wales, 63-69.
    BibTeX:
    @inproceedings{kruegerwoergoetter2003,
      author = {Krüger, N. and Wörgötter, F.},
      title = {Symbolic Pointillism: Computer art motivated by human perception},
      pages = {63-69},
      booktitle = {Proceedings of the AISB 2003 Symposium on Biologically inspired and Machine Vision, Theory and Application, Wales},
      year = {2003}}
    Abstract:
    Review:
    Eyding, D. and Macklis, J D. and Neubacher, U. and Funke, K. and Wörgötter, F. (2003).
    Selective elimination of corticogeniculate feedback abolishes the electroencephalogram dependence of primary visual cortical receptive fields and reduces their spatial specificity. J. Neurosci, 7021-7033, 23, 18.
    BibTeX:
    @article{eydingmacklisneubacher2003,
      author = {Eyding, D. and Macklis, J D. and Neubacher, U. and Funke, K. and Wörgötter, F.},
      title = {Selective elimination of corticogeniculate feedback abolishes the electroencephalogram dependence of primary visual cortical receptive fields and reduces their spatial specificity},
      pages = {7021-7033},
      journal = {J. Neurosci},
      year = {2003},
      volume= {23},
      number = {18}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2003).
    Inside Embodiment - Control from the Organism's Point of View.
    BibTeX:
    @misc{porrwoergoetter2003d,
      author = {Porr, B. and Wörgötter, F.},
      title = {Inside Embodiment - Control from the Organism's Point of View},
      year = {2003}}
    Abstract:
    Review:
    Krüger, N. and Wörgötter, F. (2002).
    Multi-Modal Statistics of Edges in Natural Image Sequences. Proc. WS on Dynamic Perception, Bochum.
    BibTeX:
    @misc{kruegerwoergoetter2002,
      author = {Krüger, N. and Wörgötter, F.},
      title = {Multi-Modal Statistics of Edges in Natural Image Sequences},
      journal = {Proc. WS on Dynamic Perception, Bochum},
      year = {2002},
      url = {http://www.bccn-goettingen.de/Publications/A14},
      abstract = {In this work we investigate the multi-modal statistics of natural image sequences looking at the modalities orientation, color, optic flow and contrast transition. It turns out the statistical interdependencies corresponding to the Gestalt law collinearity increase significantly when we look not at orientation only.}}
    Abstract: In this work we investigate the multi-modal statistics of natural image sequences looking at the modalities orientation, color, optic flow and contrast transition. It turns out the statistical interdependencies corresponding to the Gestalt law collinearity increase significantly when we look not at orientation only.
    Review:
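    As a hedged illustration of the statistic summarized in the abstract above: the sketch below estimates how much more often two edge primitives are collinear when their color and optic-flow modalities also agree. The collinearity test, thresholds and ratio measure are assumptions for clarity, not the method of the paper.

    # Hedged sketch of the kind of second-order statistic discussed in the abstract
    # above: how much more likely two local edge primitives are to be collinear when
    # their other modalities (color, optic flow) also agree. The collinearity test,
    # thresholds and ratio measure are illustrative assumptions, not the published method.
    import math
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class Edge:
        x: float
        y: float
        theta: float                       # orientation in radians
        color: Tuple[float, float, float]  # mean RGB at the edge
        flow: Tuple[float, float]          # local optic flow vector

    def angle_diff(p: float, q: float) -> float:
        d = abs(p - q) % math.pi
        return min(d, math.pi - d)

    def collinear(a: Edge, b: Edge, tol: float = 0.2) -> bool:
        """Both edges roughly aligned with the line joining them."""
        link = math.atan2(b.y - a.y, b.x - a.x)
        return angle_diff(a.theta, link) < tol and angle_diff(b.theta, link) < tol

    def similar(a: Edge, b: Edge, c_tol: float = 0.1, f_tol: float = 0.5) -> bool:
        dc = sum((p - q) ** 2 for p, q in zip(a.color, b.color)) ** 0.5
        df = sum((p - q) ** 2 for p, q in zip(a.flow, b.flow)) ** 0.5
        return dc < c_tol and df < f_tol

    def collinearity_gain(edges):
        """P(collinear | color and flow agree) / P(collinear); values above 1 mean
        the additional modalities strengthen the collinearity statistics."""
        pairs = [(a, b) for i, a in enumerate(edges) for b in edges[i + 1:]]
        if not pairs:
            return float("nan")
        base = sum(collinear(a, b) for a, b in pairs) / len(pairs)
        cond_pairs = [(a, b) for a, b in pairs if similar(a, b)]
        if not cond_pairs or base == 0:
            return float("nan")
        return sum(collinear(a, b) for a, b in cond_pairs) / len(cond_pairs) / base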
    Wörgötter, F. and Eyding, D. and Macklis, J D. and Funke, K. (2002).
    The influence of the corticothalamic projection on responses in thalamus and cortex. Phil. Trans. R. Soc. Lond. B, 1823-1834.
    BibTeX:
    @article{woergoettereydingmacklis2002,
      author = {Wörgötter, F. and Eyding, D. and Macklis, J D. and Funke, K.},
      title = {The influence of the corticothalamic projection on responses in thalamus and cortex},
      pages = {1823-1834},
      journal = {Phil. Trans. R. Soc. Lond. B},
      year = {2002}}
    Abstract:
    Review:
    Suder, K. and Funke, K. and Zhao, Y. and Kerscher, N. and Eysel, U. and Wennekers, T. and Wörgötter, F. (2002).
    Spatial dynamics of receptive fields in cat primary visual cortex related to the temporal structure of thalamo-cortical feed-forward activity experiments and models. Exp. Brain Res, 430-444, 144.
    BibTeX:
    @article{suderfunkezhao2002,
      author = {Suder, K. and Funke, K. and Zhao, Y. and Kerscher, N. and Eysel, U. and Wennekers, T. and Wörgötter, F.},
      title = {Spatial dynamics of receptive fields in cat primary visual cortex related to the temporal structure of thalamo-cortical feed-forward activity experiments and models},
      pages = {430-444},
      journal = {Exp. Brain Res},
      year = {2002},
      volume= {144}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2002).
    Isotropic sequence order learning using a novel linear algorithm in a closed loop behavioural system. BioSystems, 195-202, 67, 1-3.
    BibTeX:
    @article{porrwoergoetter2002,
      author = {Porr, B. and Wörgötter, F.},
      title = {Isotropic sequence order learning using a novel linear algorithm in a closed loop behavioural system},
      pages = {195-202},
      journal = {BioSystems},
      year = {2002},
      volume= {67},
      number = {1-3}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2002).
    Predictive learning in rate-coded neural networks: A theoretical approach towards classical conditioning. Neurocomputing, 585-590, 44-46.
    BibTeX:
    @article{porrwoergoetter2002a,
      author = {Porr, B. and Wörgötter, F.},
      title = {Predictive learning in rate-coded neural networks: A theoretical approach towards classical conditioning},
      pages = {585-590},
      journal = {Neurocomputing},
      year = {2002},
      volume= {44-46}}
    Abstract:
    Review:
    Porr, B. and Nürenberg, B. and Wörgötter, F. (2002).
    A VLSI-Compatible Computer Vision Algorithm for Stereoscopic Depth analysis in Real-Time. Int. J. Comp. Vis. IJCV, 39-55, 49, 1.
    BibTeX:
    @article{porrnuerenbergwoergoetter2002,
      author = {Porr, B. and Nürenberg, B. and Wörgötter, F.},
      title = {A VLSI-Compatible Computer Vision Algorithm for Stereoscopic Depth analysis in Real-Time},
      pages = {39-55},
      journal = {Int. J. Comp. Vis. IJCV},
      year = {2002},
      volume= {49},
      number = {1}}
    Abstract:
    Review:
    Krüger, N. and Wörgötter, F. (2002).
    Multi-modal feature statistics and self-emergence of feature constellations. Workshop on Dynamic Perception, Bochum.
    BibTeX:
    @inproceedings{kruegerwoergoetter2002a,
      author = {Krüger, N. and Wörgötter, F.},
      title = {Multi-modal feature statistics and self-emergence of feature constellations},
      booktitle = {Workshop on Dynamic Perception, Bochum},
      year = {2002},
      url = {http://www.bccn-goettingen.de/Publications/A14},
      abstract = {In this work we investigate the multi-modal statistics of natural image sequences looking at the modalities orientation, color, optic flow and contrast transition. It turns out the statistical interdependencies corresponding to the Gestalt law collinearity increase significantly when we look not at orientation only.}}
    Abstract: In this work we investigate the multi-modal statistics of natural image sequences looking at the modalities orientation, color, optic flow and contrast transition. It turns out the statistical interdependencies corresponding to the Gestalt law collinearity increase significantly when we look not at orientation only.
    Review:
    Krüger, N. and Wörgötter, F. (2002).
    Statistics of second order multi-modal feature events and their exploitation in biological and artificial visual systems. Workshop on Biologically Motivated Computer Vision, BMCV, Tübingen.
    BibTeX:
    @inproceedings{kruegerwoergoetter2002b,
      author = {Krüger, N. and Wörgötter, F.},
      title = {Statistics of second order multi-modal feature events and their exploitation in biological and artificial visual systems},
      booktitle = {Workshop on Biologically Motivated Computer Vision, BMCV, Tübingen},
      year = {2002}}
    Abstract:
    Review:
    Krüger, N. and Wörgötter, F. (2002).
    The Gestalt principle collinearity and the multi-modal statistics of image sequences. Workshop on Cognitive Vision, DAGM, Zürich.
    BibTeX:
    @inproceedings{kruegerwoergoetter2002c,
      author = {Krüger, N. and Wörgötter, F.},
      title = {The Gestalt principle collinearity and the multi-modal statistics of image sequences},
      booktitle = {Workshop on Cognitive Vision, DAGM, Zürich},
      year = {2002}}
    Abstract:
    Review:
    Krüger, N. and Wörgötter, F. (2002).
    Multi-modal estimation of collinearity and parallelism in natural image sequences. Network: Comput. Neural Syst, 553-576, 13.
    BibTeX:
    @article{kruegerwoergoetter2002d,
      author = {Krüger, N. and Wörgötter, F.},
      title = {Multi-modal estimation of collinearity and parallelism in natural image sequences},
      pages = {553-576},
      journal = {Network: Comput. Neural Syst},
      year = {2002},
      volume= {13}}
    Abstract:
    Review:
    Krüger, N. and Wörgötter, F. (2002).
    Different Degree of Genetical Prestructuring in the Ontogenesis of Visual Abilities based on Deterministic and Statistical Regularities. SAB, Edinburgh.
    BibTeX:
    @inproceedings{kruegerwoergoetter2002e,
      author = {Krüger, N. and Wörgötter, F.},
      title = {Different Degree of Genetical Prestructuring in the Ontogenesis of Visual Abilities based on Deterministic and Statistical Regularities},
      booktitle = {SAB, Edinburgh},
      year = {2002}}
    Abstract:
    Review:
    Hennig, M. and Funke, K. and Wörgötter, F. (2002).
    The influence of different retinal sub-circuits on the non-linearity of ganglion cell behavior. J. Neurosci, 8726-8738, 22.
    BibTeX:
    @article{hennigfunkewoergoetter2002,
      author = {Hennig, M. and Funke, K. and Wörgötter, F.},
      title = {The influence of different retinal sub-circuits on the non-linearity of ganglion cell behavior},
      pages = {8726-8738},
      journal = {J. Neurosci},
      year = {2002},
      volume= {22}}
    Abstract:
    Review:
    Hennig, M H. and Kerscher, N J. and Funke, K. and Wörgötter, F. (2002).
    Stochastic resonance in visual cortical neurons: Does the eye-tremor actually improve visual acuity. Neurocomputing, 115-120, 44-46.
    BibTeX:
    @article{hennigkerscherfunke2002,
      author = {Hennig, M H. and Kerscher, N J. and Funke, K. and Wörgötter, F.},
      title = {Stochastic resonance in visual cortical neurons: Does the eye-tremor actually improve visual acuity},
      pages = {115-120},
      journal = {Neurocomputing},
      year = {2002},
      volume= {44-46}}
    Abstract:
    Review:
    Funke, K. and Kisvarday, Z. and Volgushev, M. and Wörgötter, F. (2002).
    Integrating anatomy and physiology of the primary visual pathway: from LGN to Cortex. Models of Neural Networks IV, 97-171. DOI: 10.1007/978-0-387-21703-1_3.
    BibTeX:
    @inproceedings{funkekisvardayvolgushev2002,
      author = {Funke, K. and Kisvarday, Z. and Volgushev, M. and Wörgötter, F.},
      title = {Integrating anatomy and physiology of the primary visual pathway: from LGN to Cortex},
      pages = {97-171},
      booktitle = {Models of Neural Networks IV},
      year = {2002},
      publisher = {Springer New York},
      doi = {10.1007/978-0-387-21703-1_3},
      abstract = {This chapter deals with the structure and function of the visual thalamus (lateral geniculate nucleus, LGN) and the primary visual cortex and aims to put this system into a computational perspective. We start with an overview of the basic structures of the primary visual pathway and the terminology used. Next, the organization of the LGN and its main functions are described: receptive field structure of LGN cells, excitatory and inhibitory influences, contrast gain- control, spatial summation, temporal structure of activity and influence of extra-retinal inputs. The section closes with models on three functional aspects of the LGN: 1) Switching between burst firing and tonic transmission modes of LGN cells, 2) Control of LGN function during the sleep-wake cycle, and 3) Involvement of LGN in gating visual signals. The section on the visual cortex starts with details of its morphological organisation: cortical layers, cell types, columnar structure and horizontal connections. This is followed by a description of the basic response characteristics of neurons, the organisation of receptive fields and their dynamic behavior. Here, mechanisms of establishing cortical orientation selectivity are considered in detail. Next, we focus on functional maps, e.g. distribution of orientation preferences of cells. The chapter closes with a section on basic models of the primary visual cortex, concerning: 1) Temporal firing patterns of neuronal assemblies, i.e. oscillations and synchronization, 2) Cortical cell characteristics, e.g. orientation specificity, and 3) Formation of functional maps, e.g. orientation map.}}
    Abstract: This chapter deals with the structure and function of the visual thalamus (lateral geniculate nucleus, LGN) and the primary visual cortex and aims to put this system into a computational perspective. We start with an overview of the basic structures of the primary visual pathway and the terminology used. Next, the organization of the LGN and its main functions are described: receptive field structure of LGN cells, excitatory and inhibitory influences, contrast gain- control, spatial summation, temporal structure of activity and influence of extra-retinal inputs. The section closes with models on three functional aspects of the LGN: 1) Switching between burst firing and tonic transmission modes of LGN cells, 2) Control of LGN function during the sleep-wake cycle, and 3) Involvement of LGN in gating visual signals. The section on the visual cortex starts with details of its morphological organisation: cortical layers, cell types, columnar structure and horizontal connections. This is followed by a description of the basic response characteristics of neurons, the organisation of receptive fields and their dynamic behavior. Here, mechanisms of establishing cortical orientation selectivity are considered in detail. Next, we focus on functional maps, e.g. distribution of orientation preferences of cells. The chapter closes with a section on basic models of the primary visual cortex, concerning: 1) Temporal firing patterns of neuronal assemblies, i.e. oscillations and synchronization, 2) Cortical cell characteristics, e.g. orientation specificity, and 3) Formation of functional maps, e.g. orientation map.
    Review:
    Dahlem, M. and Wörgötter, F. (2002).
    Dynamical retino-cortical mapping. Workshop on Dynamic Perception, Bochum.
    BibTeX:
    @inproceedings{dahlemwoergoetter2002,
      author = {Dahlem, M. and Wörgötter, F.},
      title = {Dynamical retino-cortical mapping},
      booktitle = {Workshop on Dynamic Perception, Bochum},
      year = {2002}}
    Abstract:
    Review:
    Dahlem, M. and Wörgötter, F. (2002).
    Rotation-invariant optical flow by gaze-depending retino-cortical mapping. Lecture Notes in Comp. Sci, 137-145, 2525.
    BibTeX:
    @article{dahlemwoergoetter2002a,
      author = {Dahlem, M. and Wörgötter, F.},
      title = {Rotation-invariant optical flow by gaze-depending retino-cortical mapping},
      pages = {137-145},
      journal = {Lecture Notes in Comp. Sci},
      year = {2002},
      volume= {2525}}
    Abstract:
    Review:
    Wörgötter, F. (2001).
    Bad design and good performance: Strategies of the visual system for enhanced image analysis. In Proc. Artificial Neural Networks, ICANN 2001 Vienna Dorffner, Bischof and Hornik, eds., Springer, 13-15.
    BibTeX:
    @inproceedings{woergoetter2001,
      author = {Wörgötter, F.},
      title = {Bad design and good performance: Strategies of the visual system for enhanced image analysis},
      pages = {13-15},
      booktitle = {In Proc. Artificial Neural Networks, ICANN 2001 Vienna Dorffner, Bischof and Hornik, eds., Springer},
      year = {2001}}
    Abstract:
    Review:
    Suder, K. and Wörgötter, F. (2001).
    Modeling motion induction to analyze connectivity in the early visual system. Neurocomputing, 612-618, 34-35.
    BibTeX:
    @article{suderwoergoetter2001,
      author = {Suder, K. and Wörgötter, F.},
      title = {Modeling motion induction to analyze connectivity in the early visual system},
      pages = {612-618},
      journal = {Neurocomputing},
      year = {2001},
      volume= {34-35}}
    Abstract:
    Review:
    Porr, B. and Wörgötter, F. (2001).
    Temporal Hebbian learning in rate-coded neural networks: A theoretical approach towards classical conditioning. Proc. Artificial Neural Networks, ICANN 2001 Vienna Dorffner, Bischof and Hornik, eds., Springer, 1115-1120.
    BibTeX:
    @inproceedings{porrwoergoetter2001,
      author = {Porr, B. and Wörgötter, F.},
      title = {Temporal Hebbian learning in rate-coded neural networks: A theoretical approach towards classical conditioning},
      pages = {1115-1120},
      booktitle = {Proc. Artificial Neural Networks, ICANN 2001 Vienna Dorffner, Bischof and Hornik, eds., Springer},
      year = {2001}}
    Abstract:
    Review:
    Cozzi, A. and Wörgötter, F. (2001).
    COMVIS, a communication framework for computer vision. Int. J. Comp. Vis. IJCV, 183-194, 41, 3.
    BibTeX:
    @article{cozziwoergoetter2001,
      author = {Cozzi, A. and Wörgötter, F.},
      title = {COMVIS, a communication framework for computer vision},
      pages = {183-194},
      journal = {Int. J. Comp. Vis. IJCV},
      year = {2001},
      volume= {41},
      number = {3}}
    Abstract:
    Review:
    Wörgötter, F. and Eysel, U T. (2000).
    The effects of context and state on receptive fields in the striate cortex. TINS, 497-503, 23.
    BibTeX:
    @article{woergoettereysel2000,
      author = {Wörgötter, F. and Eysel, U T.},
      title = {The effects of context and state on receptive fields in the striate cortex},
      pages = {497-503},
      journal = {TINS},
      year = {2000},
      volume= {23}}
    Abstract:
    Review:
    Wörgötter, F. (2000).
    Stereoscopic depth analysis in video real-time based on visual cortical cell behavior and an FPGA solution. Proc. NC2000, Second International ICSC Symposium on Neural Computation Berlin, H.H. Bothe, ed. CD publication only.
    BibTeX:
    @inproceedings{woergoetter2000,
      author = {Wörgötter, F.},
      title = {Stereoscopic depth analysis in video real-time based on visual cortical cell behavior and an FPGA solution},
      booktitle = {Proc. NC2000, Second International ICSC Symposium on Neural Computation Berlin, H.H. Bothe, ed. CD publication only},
      year = {2000}}
    Abstract:
    Review:
    Suder, K. and Wörgötter, F. (2000).
    The control of low level information flow in the visual system. Rev. Neurosci, 127-146, 11.
    BibTeX:
    @article{suderwoergoetter2000,
      author = {Suder, K. and Wörgötter, F.},
      title = {The control of low level information flow in the visual system},
      pages = {127-146},
      journal = {Rev. Neurosci},
      year = {2000},
      volume= {11}}
    Abstract:
    Review:
    Suder, K. and Wennekers, T. and Wörgötter, F. (2000).
    Neural field model of receptive field restructuring in primary visual cortex. Neural Comp, 1-21, 12.
    BibTeX:
    @article{suderwennekerswoergoetter2000,
      author = {Suder, K. and Wennekers, T. and Wörgötter, F.},
      title = {Neural field model of receptive field restructuring in primary visual cortex},
      pages = {1-21},
      journal = {Neural Comp},
      year = {2000},
      volume= {12}}
    Abstract:
    Review:
    Quill, U. and Funke, K. and Wörgötter, F. (2000).
    Investigations on emergent spatio-temporal neural response characteristics in a small network model. Biol. Cybern, 461-470, 83.
    BibTeX:
    @article{quillfunkewoergoetter2000,
      author = {Quill, U. and Funke, K. and Wörgötter, F.},
      title = {Investigations on emergent spatio-temporal neural response characteristics in a small network model},
      pages = {461-470},
      journal = {Biol. Cybern},
      year = {2000},
      volume= {83}}
    Abstract:
    Review:
    Ferber, C. and Wörgötter, F. (2000).
    Cluster update algorithm and recognition. Phys. Rev. E (Rapid Comm.), 1461-1464, 62.
    BibTeX:
    @article{ferberwoergoetter2000,
      author = {Ferber, C. and Wörgötter, F.},
      title = {Cluster update algorithm and recognition},
      pages = {1461-1464},
      journal = {Phys. Rev. E (Rapid Comm.)},
      year = {2000},
      volume= {62}}
    Abstract:
    Review:
    Wörgötter, F. and Suder, K. and Funke, K. (1999).
    The dynamic spatio-temporal behavior of visual responses in thalamus and cortex. Restor Neurol Neuroscience, 137-152, 15, 2-3.
    BibTeX:
    @article{woergoettersuderfunke1999,
      author = {Wörgötter, F. and Suder, K. and Funke, K.},
      title = {The dynamic spatio-temporal behavior of visual responses in thalamus and cortex},
      pages = {137-152},
      journal = {Restor Neurol Neuroscience},
      year = {1999},
      volume= {15},
      number = {2-3},
      url = {http://www.bccn-goettingen.de/Publi}}
    Abstract:
    Review:
    Wörgötter, F. (1999).
    Comparing different modeling approaches of visual cortical cell characteristics. Models of Cortical Circuits, Cerebral Cortex, 201-249, 13. DOI: 10.1007/978-1-4615-4903-1_4.
    BibTeX:
    @inproceedings{woergoetter1999,
      author = {Wörgötter, F.},
      title = {Comparing different modeling approaches of visual cortical cell characteristics},
      pages = {201-249},
      booktitle = {Models of Cortical Circuits, Cerebral Cortex},
      year = {1999},
      volume= {13},
      publisher = {Springer US},
      doi = {10.1007/978-1-4615-4903-1_4},
      abstract = {The goal of a modeling study should be to arrive at a better analytical and intuitive understanding of the underlying system. To achieve this, every model needs to represent an abstract reflection of reality, and the level of abstraction largely depends on the complexity of the system which is to be modeled. As soon as this complexity exceeds a certain degree, a multitude of possible model levels can be designed and these models can coexist without mutually contradicting each other. A hopeful assumption at this point would be to believe that this diversity could be reduced if all models were required to represent only one common aspect of the system. The complexity of the visual cortex of vertebrates, however, thwarts this hope, because even a single aspect of cortical cell behavior can lead to quite different successful modeling approaches. In this chapter visual cortical orientation specificity shall be the common aspect on which a comparison of different models will be based. The goal shall be to try to develop cross-links between quite different approaches in order to arrive at a }}
    Abstract: The goal of a modeling study should be to arrive at a better analytical and intuitive understanding of the underlying system. To achieve this, every model needs to represent an abstract reflection of reality, and the level of abstraction largely depends on the complexity of the system which is to be modeled. As soon as this complexity exceeds a certain degree, a multitude of possible model levels can be designed and these models can coexist without mutually contradicting each other. A hopeful assumption at this point would be to believe that this diversity could be reduced if all models were required to represent only one common aspect of the system. The complexity of the visual cortex of vertebrates, however, thwarts this hope, because even a single aspect of cortical cell behavior can lead to quite different successful modeling approaches. In this chapter visual cortical orientation specificity shall be the common aspect on which a comparison of different models will be based. The goal shall be to try to develop cross-links between quite different approaches in order to arrive at a
    Review:
    Wörgötter, F. and Cozzi, A. and Gerdes, V. (1999).
    A parallel noise-robust algorithm to recover depth information from radial flow fields. Neural Comp, 381-416, 11.
    BibTeX:
    @article{woergoettercozzigerdes1999,
      author = {Wörgötter, F. and Cozzi, A. and Gerdes, V.},
      title = {A parallel noise-robust algorithm to recover depth information from radial flow fields},
      pages = {381-416},
      journal = {Neural Comp},
      year = {1999},
      volume= {11}}
    Abstract:
    Review:
    Suder, K. and Wörgötter, F. and Wennekers, T. (1999).
    Neural field description suggests feedforward mechanism of state-dependent visual receptive field changes. Proc. ICANN99, Edinburgh, 67-72.
    BibTeX:
    @inproceedings{suderwoergoetterwenneckers1999,
      author = {Suder, K. and Wörgötter, F. and Wennekers, T.},
      title = {Neural field description suggests feedforward mechanism of state-dependent visual receptive field changes},
      pages = {67-72},
      booktitle = {Proc. ICANN99, Edinburgh},
      year = {1999}}
    Abstract:
    Review:
    Suder, K. and Wennekers, T. and Wörgötter, F. (1999).
    Neural Field description of state-dependent receptive field changes in the visual cortex. Proc. ESANN99, M. Verleysen, ed, 171-176.
    BibTeX:
    @inproceedings{suderwenneckerswoergoetter1999,
      author = {Suder, K. and Wennekers, T. and Wörgötter, F.},
      title = {Neural Field description of state-dependent receptive field changes in the visual cortex},
      pages = {171-176},
      booktitle = {Proc. ESANN99, M. Verleysen, ed},
      year = {1999}}
    Abstract:
    Review:
    Li, B. and Funke, K. and Wörgötter, F. and Eysel, U T. (1999).
    Correlated variations in EEG pattern and visual responsiveness of cat lateral geniculate relay cells. J. Physiol, 857-874, 514.3.
    BibTeX:
    @article{lifunkewoergoetter1999,
      author = {Li, B. and Funke, K. and Wörgötter, F. and Eysel, U T.},
      title = {Correlated variations in EEG pattern and visual responsiveness of cat lateral geniculate relay cells},
      pages = {857-874},
      journal = {J. Physiol},
      year = {1999},
      volume= {514.3}}
    Abstract:
    Review:
    Cozzi, A. and Wörgötter, F. (1999).
    Computing stereoscopic disparity with binocular cortical simple and complex cells. Proc. ICANN99 Edinburgh, 269-273.
    BibTeX:
    @inproceedings{cozziwoergoetter1999,
      author = {Cozzi, A. and Wörgötter, F.},
      title = {Computing stereoscopic disparity with binocular cortical simple and complex cells},
      pages = {269-273},
      booktitle = {Proc. ICANN99 Edinburgh},
      year = {1999}}
    Abstract:
    Review:
    Wörgötter, F. and Nelle, E. and Li, B. and Wang, L. and Diao, Y. (1998).
    A possible basic cortical microcircuit called cascaded inhibition: Results from cortical network models and recording experiments from striate simple cells. Experimental Brain Research, 318-332, 122, 3. DOI: 10.1007/s002210050520.
    BibTeX:
    @article{woergoetternelleli1998,
      author = {Wörgötter, F. and Nelle, E. and Li, B. and Wang, L. and Diao, Y.},
      title = {A possible basic cortical microcircuit called cascaded inhibition: Results from cortical network models and recording experiments from striate simple cells},
      pages = {318-332},
      journal = {Experimental Brain Research},
      year = {1998},
      volume= {122},
      number = {3},
      publisher = {Springer-Verlag},
      url = {http://dx.doi.org/10.1007/s002210050520},
      doi = {10.1007/s002210050520}}
    Abstract:
    Review:
    Wörgötter, F. and Suder, K. and Zhao, Y. and Kerscher, N. and Eysel, U. and Funke, K. (1998).
    State-dependent receptive field restructuring in the visual cortex. Nature, 165-168, 396.
    BibTeX:
    @article{woergoettersuderzhao1998,
      author = {Wörgötter, F. and Suder, K. and Zhao, Y. and Kerscher, N. and Eysel, U. and Funke, K.},
      title = {State-dependent receptive field restructuring in the visual cortex},
      pages = {165-168},
      journal = {Nature},
      year = {1998},
      volume= {396}}
    Abstract:
    Review:
    Wörgötter, F. and Nelle, E. and Li, B. and Wang, L. and Diao, Y. (1998).
    Linearity of spatial summation in simple cells: Experiments and Models. Experimental Brain Research, 318-332, 122.
    BibTeX:
    @article{woergoetternelleli1998a,
      author = {Wörgötter, F. and Nelle, E. and Li, B. and Wang, L. and Diao, Y.},
      title = {Linearity of spatial summation in simple cells: Experiments and Models},
      pages = {318-332},
      journal = {Experimental Brain Research},
      year = {1998},
      volume= {122}}
    Abstract:
    Review:
    Wörgötter, F. and Nelle, E. and Li, B. and Funke, K. (1998).
    The influence of corticofugal feedback on the temporal structure of visual responses of cat thalamic relay cells. J. Physiol, 797-815, 509.
    BibTeX:
    @article{woergoetternelleli1998b,
      author = {Wörgötter, F. and Nelle, E. and Li, B. and Funke, K.},
      title = {The influence of corticofugal feedback on the temporal structure of visual responses of cat thalamic relay cells},
      pages = {797-815},
      journal = {J. Physiol},
      year = {1998},
      volume= {509}}
    Abstract:
    Review:
    Suder, K. and Funke, K. and Wörgötter, F. (1998).
    State-dependent spatio-temporal restructuring of receptive fields in the primary visual pathway. ICANN 98, 331-336. DOI: 10.1007/978-1-4471-1599-1_48.
    BibTeX:
    @inproceedings{suderfunkewoergoetter1998,
      author = {Suder, K. and Funke, K. and Wörgötter, F.},
      title = {State-dependent spatio-temporal restructuring of receptive fields in the primary visual pathway},
      pages = {331-336},
      booktitle = {ICANN 98},
      year = {1998},
      editor = {Niklasson, Lars and Bod},
      publisher = {Springer London},
      series = {Perspectives in Neural Computing},
      doi = {10.1007/978-1-4471-1599-1_48},
      abstract = {Changing patterns in the EEG reflect changing states of attentiveness and are correlated to changes in the firing behavior of single cells. From experiments it is known that LGN cells of the Thalamus exhibit a tonic firing pattern during desynchronized EEG reflecting faithfully properties of a stimulus whereas they are in a burst mode during synchronized EEG, which leads to a stereotype stimulus response. We introduce a model in which these changes in the neural temporal behavior lead to changes in the spatial characteristics of cortical receptive fields through variations in the effective connectivity between thalamic and cortical cells. This spatio-temporal receptive field restructuring reflects different modes of information processing and might be controlled by selective attention.}}
    Abstract: Changing patterns in the EEG reflect changing states of attentiveness and are correlated to changes in the firing behavior of single cells. From experiments it is known that LGN cells of the Thalamus exhibit a tonic firing pattern during desynchronized EEG reflecting faithfully properties of a stimulus whereas they are in a burst mode during synchronized EEG, which leads to a stereotype stimulus response. We introduce a model in which these changes in the neural temporal behavior lead to changes in the spatial characteristics of cortical receptive fields through variations in the effective connectivity between thalamic and cortical cells. This spatio-temporal receptive field restructuring reflects different modes of information processing and might be controlled by selective attention.
    Review:
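    As a toy illustration of the mechanism summarized in the abstract above (and not the published model): the sketch below builds a cortical receptive field from a handful of thalamic inputs and gates out the weaker inputs in burst mode, which narrows the spatial receptive field. All profiles, weights and thresholds are illustrative assumptions.

    # Toy sketch, not the published neural field model: a cortical receptive field
    # (RF) built from nine thalamic inputs whose effective contribution depends on
    # the thalamic firing mode. In "tonic" mode all inputs contribute; in "burst"
    # mode a hard gate passes only the strongest, central inputs, so the cortical
    # RF narrows. All parameters are illustrative assumptions.
    import numpy as np

    def lgn_rf(center: float, width: float, x: np.ndarray) -> np.ndarray:
        """Gaussian spatial RF of one thalamic relay cell."""
        return np.exp(-0.5 * ((x - center) / width) ** 2)

    def cortical_rf(x: np.ndarray, mode: str = "tonic") -> np.ndarray:
        centers = np.linspace(-2.0, 2.0, 9)    # thalamic inputs along space
        weights = np.exp(-0.5 * centers ** 2)  # strongest input at the RF center
        rf = np.zeros_like(x)
        for c, w in zip(centers, weights):
            eff = w if (mode == "tonic" or w > 0.6) else 0.0  # burst-mode gate
            rf += eff * lgn_rf(c, 0.5, x)
        return rf / rf.max()

    if __name__ == "__main__":
        x = np.linspace(-4.0, 4.0, 801)
        width_at_half_max = lambda rf: np.ptp(x[rf > 0.5])
        print("tonic RF width:", round(width_at_half_max(cortical_rf(x, "tonic")), 2))
        print("burst RF width:", round(width_at_half_max(cortical_rf(x, "burst")), 2))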
    Quill, U. and Wörgötter, F. and Funke, K. and Lansner, A. (1998).
    The role of spatio-temporal structures in the formation of synchrony. ICANN 98, 997-1002. DOI: 10.1007/978-1-4471-1599-1_156.
    BibTeX:
    @inproceedings{quillwoergoetterfunke1998,
      author = {Quill, U. and Wörgötter, F. and Funke, K. and Lansner, A.},
      title = {The role of spatio-temporal structures in the formation of synchrony},
      pages = {997-1002},
      booktitle = {ICANN 98},
      year = {1998},
      editor = {Niklasson, Lars and Bod},
      publisher = {Springer London},
      series = {Perspectives in Neural Computing},
      doi = {10.1007/978-1-4471-1599-1_156},
      abstract = {Although the functional role synchronous oscillations may play has been investigated in depth, the underlying processes and spatio-temporal aspects that establish the synchrony are still not thoroughly understood. Experimental studies suggest the existence of two kinds of oscillations: stimulus-locked and stimulus-induced. While stimulus-locked oscillations are systematically dependent on the stimulus, stimulus-induced oscillations (occurring in the frequency range) show only little stimulus dependency. We propose a unifying approach which employs very generic connection structures. Different degrees of synchrony on different time scales are observed as an emergent feature of the network structure. Our model demonstrates that both, stimulus-locked and stimulus-induced oscillations are just two different states of the same system. A transition from one state to the other is observed, and the synchronous activity provides the basis for binding visual features.}}
    Abstract: Although the functional role synchronous oscillations may play has been investigated in depth, the underlying processes and spatio-temporal aspects that establish the synchrony are still not thoroughly understood. Experimental studies suggest the existence of two kinds of oscillations: stimulus-locked and stimulus-induced. While stimulus-locked oscillations are systematically dependent on the stimulus, stimulus-induced oscillations (occurring in the frequency range) show only little stimulus dependency. We propose a unifying approach which employs very generic connection structures. Different degrees of synchrony on different time scales are observed as an emergent feature of the network structure. Our model demonstrates that both, stimulus-locked and stimulus-induced oscillations are just two different states of the same system. A transition from one state to the other is observed, and the synchronous activity provides the basis for binding visual features.
    Review:
    Porr, B. and Wörgötter, F. (1998).
    How to hear visual disparities: Real-time stereoscopic spatial depth analysis using temporal resonance. Biol. Cybern, 329-336, 78.
    BibTeX:
    @article{porrwoergoetter1998,
      author = {Porr, B. and Wörgötter, F.},
      title = {How to hear visual disparities: Real-time stereoscopic spatial depth analysis using temporal resonance},
      pages = {329-336},
      journal = {Biol. Cybern},
      year = {1998},
      volume= {78}}
    Abstract:
    Review:
    Opara, R. and Wörgötter, F. (1998).
    A fast and robust cluster update algorithm for image segmentation in spin-lattice models without annealing - visual latencies revisited. Neural Comp, 1547-1566, 10.
    BibTeX:
    @article{oparawoergoetter1998,
      author = {Opara, R. and Wörgötter, F.},
      title = {A fast and robust cluster update algorithm for image segmentation in spin-lattice models without annealing - visual latencies revisited},
      pages = {1547-1566},
      journal = {Neural Comp},
      year = {1998},
      volume= {10}}
    Abstract:
    Review:
    Köhn, J. and Wörgötter, F. (1998).
    Employing the Z-Transform to optimize the calculation of the synaptic conductance of NMDA- and other channels in network simulations. Neural Comp, 1639-1651, 10.
    BibTeX:
    @article{koehnwoergoetter1998,
      author = {Köhn, J. and Wörgötter, F.},
      title = {Employing the Z-Transform to optimize the calculation of the synaptic conductance of NMDA- and other channels in network simulations},
      pages = {1639-1651},
      journal = {Neural Comp},
      year = {1998},
      volume= {10}}
    Abstract:
    Review:
    Cozzi, A. and Wörgötter, F. (1998).
    Reclustering techniques improve early vision feature maps. Patt. Anal. Appl, 42-51, 1.
    BibTeX:
    @article{cozziwoergoetter1998,
      author = {Cozzi, A. and Wörgötter, F.},
      title = {Reclustering techniques improve early vision feature maps},
      pages = {42-51},
      journal = {Patt. Anal. Appl},
      year = {1998},
      volume= {1}}
    Abstract:
    Review:
    Funke, K. and Wörgötter, F. (1997).
    On the significance of temporally structured activity in the dorsal lateral geniculate nucleus LGN. Progress in Neurobiology, 67 - 119, 53, 1. DOI: 10.1016/S0301-0082(97)00032-4.
    BibTeX:
    @article{funkewoergoetter1997a,
      author = {Funke, K. and Wörgötter, F.},
      title = {On the significance of temporally structured activity in the dorsal lateral geniculate nucleus LGN},
      pages = {67 - 119},
      journal = {Progress in Neurobiology},
      year = {1997},
      volume= {53},
      number = {1},
      url = {https://www.sciencedirect.com/science/article/pii/S0301008297000324},
      doi = {10.1016/S0301-0082(97)00032-4}}
    Abstract:
    Review:
    Opara, R. and Wörgötter, F. (1997).
    Introducing Visual Latencies into Spin-Lattice Models for Image Segmentation: A Neuromorphic Approach to a Computer Vision Problem. ICNN97 Houston, Texas, 2300-2303.
    BibTeX:
    @inproceedings{oparawoergoetter1997,
      author = {Opara, R. and Wörgötter, F.},
      title = {Introducing Visual Latencies into Spin-Lattice Models for Image Segmentation: A Neuromorphic Approach to a Computer Vision Problem},
      pages = {2300-2303},
      booktitle = {ICNN97 Houston, Texas},
      year = {1997}}
    Abstract:
    Review:
    Funke, K. and Wörgötter, F. (1997).
    The temporal dynamics of cell responses in the lateral geniculate nucleus. Prog. in Neurobiol, 67-119, 53.
    BibTeX:
    @article{funkewoergoetter1997,
      author = {Funke, K. and Wörgötter, F.},
      title = {The temporal dynamics of cell responses in the lateral geniculate nucleus},
      pages = {67-119},
      journal = {Prog. in Neurobiol},
      year = {1997},
      volume= {53}}
    Abstract:
    Review:
    Cozzi, A. and Crespi, B. and Valentinotti, F. and Wörgötter, F. (1997).
    Performance of phase-based algorithms for disparity estimation. Mach. Vis. Appl, 334-340, 9.
    BibTeX:
    @article{cozzicrespivalentinotti1997,
      author = {Cozzi, A. and Crespi, B. and Valentinotti, F. and Wörgötter, F.},
      title = {Performance of phase-based algorithms for disparity estimation},
      pages = {334-340},
      journal = {Mach. Vis. Appl},
      year = {1997},
      volume= {9}}
    Abstract:
    Review:
    Wörgötter, F. and Opara, R. and Funke, K. and Eysel, T. (1996).
    Utilizing latency for object recognition in real and artificial neural networks. NeuroReport, 741-744, 7.
    BibTeX:
    @article{woergoetteroparafunke1996,
      author = {Wörgötter, F. and Opara, R. and Funke, K. and Eysel, T.},
      title = {Utilizing latency for object recognition in real and artificial neural networks},
      pages = {741-744},
      journal = {NeuroReport},
      year = {1996},
      volume= {7}}
    Abstract:
    Review:
    Vogelgesang, J. and Cozzi, A. and Wörgötter, F. (1996).
    A parallel algorithm for depth perception from radial optical flow fields. Lect. Notes in Comp. Sci. and Proc. of ICANN96, C.v.d.Malsburg, W.v.Seelen, J.C. Vorbrüggen, B. Sendhoff, eds., Springer, 721-725, 1112.
    BibTeX:
    @inproceedings{vogelgesangcozziwoergoetter1996,
      author = {Vogelgesang, J. and Cozzi, A. and Wörgötter, F.},
      title = {A parallel algorithm for depth perception from radial optical flow fields},
      pages = {721-725},
      booktitle = {Lect. Notes in Comp. Sci. and Proc. of ICANN96, C.v.d.Malsburg, W.v.Seelen, J.C. Vorbrüggen, B. Sendhoff, eds., Springer},
      year = {1996},
      volume= {1112}}
    Abstract:
    Review:
    Opara, R. and Wörgötter, F. (1996).
    A novel algorithm for image segmentation using time dependent interaction probabilities. Artificial Neural Networks, 839-843, 1112. DOI: 10.1007/3-540-61510-5_141.
    BibTeX:
    @inproceedings{oparawoergoetter1996,
      author = {Opara, R. and Wörgötter, F.},
      title = {A novel algorithm for image segmentation using time dependent interaction probabilities},
      pages = {839-843},
      booktitle = {Artificial Neural Networks},
      year = {1996},
      volume= {1112},
      editor = {von der Malsburg, Christoph and von Seelen, Werner and Vorbrüggen, Jan C. and Sendhoff, Bernhard},
      publisher = {Springer Berlin Heidelberg},
      series = {Lecture Notes in Computer Science},
      doi = {10.1007/3-540-61510-5_141},
      abstract = {For a consistent analysis of a visual scene the different features of an individual object have to be recognized as belonging together and separated from other objects and the background. Classical algorithms to segment a visual scene have an implicit representation of the image in the connection structure. We propose a new model that uses an image representation in the time domain, operating on stimulus dependent latencies. Such stimulus dependent temporal differences are observed in biological sensory systems. In our system they will be used to define the interaction probability between the different image parts. The gradually changing pattern of active image parts will thereby lead to the assignment of the different labels to different regions which leads to the segmentation of the scene.}}
    Abstract: For a consistent analysis of a visual scene the different features of an individual object have to be recognized as belonging together and separated from other objects and the background. Classical algorithms to segment a visual scene have an implicit representation of the image in the connection structure. We propose a new model that uses an image representation in the time domain, operating on stimulus dependent latencies. Such stimulus dependent temporal differences are observed in biological sensory systems. In our system they will be used to define the interaction probability between the different image parts. The gradually changing pattern of active image parts will thereby lead to the assignment of the different labels to different regions which leads to the segmentation of the scene.
    Review:
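    A minimal sketch of the latency idea described in the abstract above, written in Python/NumPy rather than as the published spin-lattice model: pixels become active at stimulus-dependent latencies (stronger features earlier), and an activating pixel may only interact with neighbours that are already active. The probabilistic interaction of the paper is replaced here by a deterministic similarity test, and n_steps and sim_thresh are illustrative parameters, not values from the model.
      import numpy as np

      def latency_segmentation(image, n_steps=32, sim_thresh=0.1):
          """Toy latency-driven label spreading on a grey-level image."""
          img = np.asarray(image, dtype=float)
          img = (img - img.min()) / (np.ptp(img) + 1e-12)        # normalise to [0, 1]
          latency = np.round((1.0 - img) * (n_steps - 1)).astype(int)
          h, w = img.shape
          labels = -np.ones((h, w), dtype=int)                    # -1 = not yet labelled
          next_label = 0
          for t in range(n_steps):
              for y in range(h):
                  for x in range(w):
                      if latency[y, x] > t or labels[y, x] >= 0:
                          continue                                # not active yet / already labelled
                      for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                          ny, nx = y + dy, x + dx
                          if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] >= 0
                                  and abs(img[y, x] - img[ny, nx]) < sim_thresh):
                              labels[y, x] = labels[ny, nx]       # join an active, similar region
                              break
                      else:
                          labels[y, x] = next_label               # found a new region
                          next_label += 1
          return labels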
    Opara, R. and Wörgötter, F. (1996).
    Utilizing latency for object recognition in artificial neural networks. Neural Comp, 1493-1520, 8.
    BibTeX:
    @article{oparawoergoetter1996a,
      author = {Opara, R. and Wörgötter, F.},
      title = {Utilizing latency for object recognition in artificial neural networks},
      pages = {1493-1520},
      journal = {Neural Comp},
      year = {1996},
      volume= {8}}
    Abstract:
    Review:
    Köhn, J. and Wörgötter, F. (1996).
    Corticofugal feedback can reduce the visual latency of responses to antagonistic stimuli. Biol. Cybern, 199-209, 75.
    BibTeX:
    @article{koehnwoergoetter1996,
      author = {Köhn, J. and Wörgötter, F.},
      title = {Corticofugal feedback can reduce the visual latency of responses to antagonistic stimuli},
      pages = {199-209},
      journal = {Biol. Cybern},
      year = {1996},
      volume= {75}}
    Abstract:
    Review:
    Funke, K. and Nelle, E. and Li, B. and Wörgötter, F. (1996).
    Corticofugal feedback improves the timing of retino-geniculate signal transmission. NeuroReport, 2130-2134, 7.
    BibTeX:
    @article{funkenelleli1996,
      author = {Funke, K. and Nelle, E. and Li, B. and Wörgötter, F.},
      title = {Corticofugal feedback improves the timing of retino-geniculate signal transmission},
      pages = {2130-2134},
      journal = {NeuroReport},
      year = {1996},
      volume= {7}}
    Abstract:
    Review:
    Wörgötter, F. and Funke, K. (1995).
    Fine structure analysis of temporal patterns in the light response of cells in the lateral geniculate nucleus of cat. Visual Neurosci, 469-484, 12.
    BibTeX:
    @article{woergoetterfunke1995,
      author = {Wörgötter, F. and Funke, K.},
      title = {Fine structure analysis of temporal patterns in the light response of cells in the lateral geniculate nucleus of cat},
      pages = {469-484},
      journal = {Visual Neurosci},
      year = {1995},
      volume= {12}}
    Abstract:
    Review:
    Wörgötter, F. and Nelle, E. and Li, B. and Wang, L. and Diao, Y-C. (1995).
    Spatial summation in simple cells: Computational and experimental results. Europ. Symp. Artif. Neural Networks M. Verleysen, ed. D-Facto, Brussels, 81-86.
    BibTeX:
    @inproceedings{woergoetternelleli1995,
      author = {Wörgötter, F. and Nelle, E. and Li, B. and Wang, L. and Diao, Y-C.},
      title = {Spatial summation in simple cells: Computational and experimental results},
      pages = {81-86},
      booktitle = {Europ. Symp. Artif. Neural Networks M. Verleysen, ed. D-Facto, Brussels},
      year = {1995}}
    Abstract:
    Review:
    Opara, R. and Wörgötter, F. (1995).
    Improving object recognition by using a visual latency mechanism. Europ. Symp. Artif. Neural Networks M. Verleysen, ed. D-Facto, Brussels, 187-192.
    BibTeX:
    @inproceedings{oparawoergoetter1995,
      author = {Opara, R. and Wörgötter, F.},
      title = {Improving object recognition by using a visual latency mechanism},
      pages = {187-192},
      booktitle = {Europ. Symp. Artif. Neural Networks M. Verleysen, ed. D-Facto, Brussels},
      year = {1995}}
    Abstract:
    Review:
    Köhn, J. and Wörgötter, F. (1995).
    Latency-reduction in antagonistic visual channels as the result of corticofugal feedback. Europ. Symp. Artif. Neural Networks M. Verleysen, ed, 205-210.
    BibTeX:
    @inproceedings{koehnwoergoetter1995,
      author = {Köhn, J. and Wörgötter, F.},
      title = {Latency-reduction in antagonistic visual channels as the result of corticofugal feedback},
      pages = {205-210},
      booktitle = {Europ. Symp. Artif. Neural Networks M. Verleysen, ed},
      year = {1995}}
    Abstract:
    Review:
    Funke, K. and Wörgötter, F. (1995).
    Temporal structure in the light response of relay cells in the dorsal lateral geniculate nucleus of cat. J. Physiol. Lond, 715-737, 485.3.
    BibTeX:
    @article{funkewoergoetter1995,
      author = {Funke, K. and Wörgötter, F.},
      title = {Temporal structure in the light response of relay cells in the dorsal lateral geniculate nucleus of cat},
      pages = {715-737},
      journal = {J. Physiol. Lond},
      year = {1995},
      volume= {485.3}}
    Abstract:
    Review:
    Funke, K. and Wörgötter, F. (1995).
    Differences in the temporal dynamics of the visual ON- and OFF-pathways. Exp. Brain Res, 171-176, 104.
    BibTeX:
    @article{funkewoergoetter1995a,
      author = {Funke, K. and Wörgötter, F.},
      title = {Differences in the temporal dynamics of the visual ON- and OFF-pathways},
      pages = {171-176},
      journal = {Exp. Brain Res},
      year = {1995},
      volume= {104}}
    Abstract:
    Review:
    Opara, R. and Wörgötter, F. (1994).
    A Multi-Modular hardware-software development for visual scene analysis. Proc. of CNS94, Monterey, California.
    BibTeX:
    @inproceedings{oparawoergoetter1994,
      author = {Opara, R. and Wörgötter, F.},
      title = {A Multi-Modular hardware-software development for visual scene analysis},
      booktitle = {Proc. of CNS94, Monterey, California},
      year = {1994}}
    Abstract:
    Review:
    Niebur, E. and Wörgötter, F. (1994).
    Design principles of columnar organization in visual cortex. Neural Comp, 602-614, 6.
    BibTeX:
    @article{nieburwoergoetter1994,
      author = {Niebur, E. and Wörgötter, F.},
      title = {Design principles of columnar organization in visual cortex},
      pages = {602-614},
      journal = {Neural Comp},
      year = {1994},
      volume= {6}}
    Abstract:
    Review:
    Eysel, U. and Kisvarday, Z F. and Wörgötter, F. and Crook, J. (1994).
    Large basket cells and lateral inhibition in cat visual cortex. Structural and Functional Organization of the Neocortex. Experimental Brain Research Series 24, Springer, Heidelberg, 212-220.
    BibTeX:
    @inproceedings{eyselkisvardaywoergoetter1994,
      author = {Eysel, U. and Kisvarday, Z F. and Wörgötter, F. and Crook, J.},
      title = {Large basket cells and lateral inhibition in cat visual cortex},
      pages = {212-220},
      booktitle = {Structural and Functional Organization of the Neocortex. Experimental Brain Research Series 24},
      publisher = {Springer, Heidelberg},
      year = {1994}}
    Abstract:
    Review:
    Crook, J. and Wörgötter, F. and Eysel, U. (1994).
    Velocity invariance of preferred axis of motion for single spot stimuli in simple cells of cat striate cortex. Exp. Brain Res, 175-180, 102.
    BibTeX:
    @article{crookwoergoettereysel1994,
      author = {Crook, J. and Wörgötter, F. and Eysel, U.},
      title = {Velocity invariance of preferred axis of motion for single spot stimuli in simple cells of cat striate cortex},
      pages = {175-180},
      journal = {Exp. Brain Res},
      year = {1994},
      volume= {102}}
    Abstract:
    Review:
    Wörgötter, F. and Niebur, E. (1993).
    Cortical column design: A link between the maps of preferred orientation and orientation tuning strength. Biological Cybernetics, 1-13, 70, 1. DOI: 10.1007/BF00202561.
    BibTeX:
    @article{woergoetterniebur1993,
      author = {Wörgötter, F. and Niebur, E.},
      title = {Cortical column design: A link between the maps of preferred orientation and orientation tuning strength},
      pages = {1-13},
      journal = {Biological Cybernetics},
      year = {1993},
      volume= {70},
      number = {1},
      publisher = {Springer-Verlag},
      doi = {10.1007/BF00202561},
      abstract = {We demonstrate that the map of the preferred orientations and the corresponding map of the orientation tuning strengths as measured with optical imaging are not independent, but that band-pass filtering of the preferred orientation map at each location yields a good approximation of the orientation tuning strength. Band-pass filtering is performed by convolving the map of orientation preference with its own autocorrelation function. We suggest an interpretation of the autocorrelation function of the preferred orientations as synaptic coupling function, i.e., synaptic strength as a function of intracortical distance between cortical cells. In developmental models it has been shown previously that a }}
    Abstract: We demonstrate that the map of the preferred orientations and the corresponding map of the orientation tuning strengths as measured with optical imaging are not independent, but that band-pass filtering of the preferred orientation map at each location yields a good approximation of the orientation tuning strength. Band-pass filtering is performed by convolving the map of orientation preference with its own autocorrelation function. We suggest an interpretation of the autocorrelation function of the preferred orientations as synaptic coupling function, i.e., synaptic strength as a function of intracortical distance between cortical cells. In developmental models it has been shown previously that a
    Review:
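    The band-pass step described in the abstract above (convolving the preferred-orientation map with its own autocorrelation to approximate the tuning-strength map) can be written compactly with FFTs. The sketch below is an assumption-laden Python/NumPy illustration: it works on the doubled-angle representation exp(2i*theta), uses circular boundary conditions, and the final normalisation is arbitrary rather than taken from the paper.
      import numpy as np

      def tuning_strength_from_orientation_map(theta):
          """theta: 2-D array of preferred orientations in radians (0..pi).
          Returns a map whose magnitude approximates orientation tuning strength,
          obtained by convolving the map with its own spatial autocorrelation."""
          z = np.exp(2j * theta)                          # doubled angle -> proper circular variable
          Z = np.fft.fft2(z)
          acf = np.fft.ifft2(np.abs(Z) ** 2)              # autocorrelation via Wiener-Khinchin
          acf /= np.abs(acf).max()                        # normalised coupling function
          filtered = np.fft.ifft2(Z * np.fft.fft2(acf))   # circular convolution of map and ACF
          return np.abs(filtered) / filtered.size

      # Example call on a random (hence structureless) orientation map:
      strength = tuning_strength_from_orientation_map(np.random.uniform(0, np.pi, (64, 64)))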
    Niebur, E. and Wörgötter, F. (1993).
    Orientation columns from first principles. Computation and Neural Systems 92. F. Eeckman, Ed., Kluwer Academic, 409-414.
    BibTeX:
    @inproceedings{nieburwoergoetter1993,
      author = {Niebur, E. and Wörgötter, F.},
      title = {Orientation columns from first principles},
      pages = {409-414},
      booktitle = {Computation and Neural Systems 92. F. Eeckman, Ed., Kluwer Academic},
      year = {1993}}
    Abstract:
    Review:
    Nelle, E. and Wörgötter, F. (1993).
    Cascaded intracortical inhibition: Modelling connection schemes on a large scale simulator. ICANN 93, Proc. Int. Conf. Artificial Neural Networks, Amsterdam, 157-160.
    BibTeX:
    @inproceedings{nellewoergoetter1993,
      author = {Nelle, E. and Wörgötter, F.},
      title = {Cascaded intracortical inhibition: Modelling connection schemes on a large scale simulator},
      pages = {157-160},
      booktitle = {ICANN 93, Proc. Int. Conf. Artificial Neural Networks, Amsterdam},
      year = {1993}}
    Abstract:
    Review:
    Wörgötter, F. and Niebur, E. and Koch, C. (1992).
    Generation of direction selectivity by isotropic intracortical connections. Neural Comp, 332-340, 4. DOI: 10.1162/neco.1992.4.3.332.
    BibTeX:
    @article{woergoetternieburkoch1992,
      author = {Wörgötter, F. and Niebur, E. and Koch, C.},
      title = {Generation of direction selectivity by isotropic intracortical connections},
      pages = {332-340},
      journal = {Neural Comp},
      year = {1992},
      volume= {4},
      doi = {10.1162/neco.1992.4.3.332},
      abstract = {To what extent do the mechanisms generating different receptive field properties of neurons depend on each other? We investigated this question theoretically within the context of orientation and direction tuning of simple cells in the mammalian visual cortex. In our model a cortical cell of the "simple" type receives its orientation tuning by afferent convergence of aligned receptive fields of the lateral geniculate nucleus (Hubel and Wiesel 1962). We sharpen this orientation bias by postulating a special type of radially symmetric long-range lateral inhibition called circular inhibition. Surprisingly, this isotropic mechanism leads to the emergence of a strong bias for the direction of motion of a bar. We show that this directional anisotropy is neither caused by the probabilistic nature of the connections nor is it a consequence of the specific columnar structure chosen but that it is an inherent feature of the architecture of visual cortex.}}
    Abstract: To what extent do the mechanisms generating different receptive field properties of neurons depend on each other? We investigated this question theoretically within the context of orientation and direction tuning of simple cells in the mammalian visual cortex. In our model a cortical cell of the "simple" type receives its orientation tuning by afferent convergence of aligned receptive fields of the lateral geniculate nucleus (Hubel and Wiesel 1962). We sharpen this orientation bias by postulating a special type of radially symmetric long-range lateral inhibition called circular inhibition. Surprisingly, this isotropic mechanism leads to the emergence of a strong bias for the direction of motion of a bar. We show that this directional anisotropy is neither caused by the probabilistic nature of the connections nor is it a consequence of the specific columnar structure chosen but that it is an inherent feature of the architecture of visual cortex.
    Review:
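    The "circular inhibition" named in the abstract above is a radially symmetric, long-range inhibitory interaction: each unit is inhibited by activity on a ring of fixed radius around it. A minimal Python/NumPy sketch, assuming a Gaussian ring profile and a subtractive, rectified interaction (radius, width and strength are illustrative parameters, not values from the model; scipy is used only for the 2-D convolution).
      import numpy as np
      from scipy.signal import fftconvolve

      def circular_inhibition(activity, radius=8, width=1.5, strength=0.5):
          """Subtract ring-shaped (isotropic) lateral inhibition from a 2-D activity map."""
          half = int(radius + 3 * width)
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          r = np.hypot(x, y)
          ring = np.exp(-((r - radius) ** 2) / (2 * width ** 2))    # annular kernel
          ring /= ring.sum()
          inhibition = fftconvolve(activity, ring, mode='same')
          return np.maximum(activity - strength * inhibition, 0.0)  # rectified output

      # Example call on a random cortical activity map:
      sharpened = circular_inhibition(np.random.rand(64, 64))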
    Huck, M. and Wörgötter, F. and Eysel, U T. (1992).
    A processor ring for the implementation of neural networks with cortical architecture. Artificial Neural Networks 2, I.Aleksander and J.Taylor eds, 1435-1438.
    BibTeX:
    @inproceedings{huckwoergoettereysel1992,
      author = {Huck, M. and Wörgötter, F. and Eysel, U T.},
      title = {A processor ring for the implementation of neural networks with cortical architecture},
      pages = {1435-1438},
      booktitle = {Artificial Neural Networks 2, I.Aleksander and J.Taylor eds},
      year = {1992}}
    Abstract:
    Review:
    Eysel, U T. and Wörgötter, F. (1992).
    Horizontal intracortical contributions to functional specificity in cat visual cortex. Information Processing in the Cortex. Experiment and Theory. A. Aertsen and V. Braitenberg, eds., 301-323. DOI: 10.1007/978-3-642-49967-8_20.
    BibTeX:
    @inproceedings{eyselwoergoetter1992,
      author = {Eysel, U T. and Wörgötter, F.},
      title = {Horizontal intracortical contributions to functional specificity in cat visual cortex},
      pages = {301-323},
      booktitle = {Information Processing in the Cortex. Experiment and Theory. A. Aertsen and V. Braitenberg, eds.},
      year = {1992},
      doi = {10.1007/978-3-642-49967-8_20},
      abstract = {Intracortical lateral connections have initially been shown in anatomical studies by degeneration methods in monkey (Fisken et al. 1975) and cat (Creutzfeldt et al. 1977). However, only the use of modern tracer techniques has disclosed details like the periodic spatial pattern of such intracortical connections (Rockland and Lund 1982, 1983 Gilbert and Wiesel 1983 Gilbert 1985) or the axonal and dendritic distribution of single excitatory or inhibitory cells (Gilbert and Wiesel 1979 Somogyi et al. 1983 Kisvarday et al. 1985). Intracortical excitation can be laterally transmitted by a long-range axonal network of excitatory connections between pyramidal cells spanning up to about 5 mm in the adult cat visual cortex (Kisvarday and Eysel 1991 see also the contribution of Kisvarday in this volume). Intracortical inhibition can be mediated by GABAergic cells which comprise on the average 20% of the cortical neurons in rat, cat and monkey (Ribak 1978 Hendrickson et al. 1981 Gabbott and Somogyi 1986 Fitz Patrick et al. 1987 Hendry et al. 1987). These nonpyramidal cells are a heterogenous group which show a variety of morphological specializations (Somogyi 1986). Some of the GABAergic cells give rise to locally restricted axonal systems (Kisvarday et al. 1985), others send axons over distances from 0.5 to 2 mm (Somogyi et al. 1983 Matsubara et al. 1987b).}}
    Abstract: Intracortical lateral connections have initially been shown in anatomical studies by degeneration methods in monkey (Fisken et al. 1975) and cat (Creutzfeldt et al. 1977). However, only the use of modern tracer techniques has disclosed details like the periodic spatial pattern of such intracortical connections (Rockland and Lund 1982, 1983 Gilbert and Wiesel 1983 Gilbert 1985) or the axonal and dendritic distribution of single excitatory or inhibitory cells (Gilbert and Wiesel 1979 Somogyi et al. 1983 Kisvarday et al. 1985). Intracortical excitation can be laterally transmitted by a long-range axonal network of excitatory connections between pyramidal cells spanning up to about 5 mm in the adult cat visual cortex (Kisvarday and Eysel 1991 see also the contribution of Kisvarday in this volume). Intracortical inhibition can be mediated by GABAergic cells which comprise on the average 20% of the cortical neurons in rat, cat and monkey (Ribak 1978 Hendrickson et al. 1981 Gabbott and Somogyi 1986 Fitz Patrick et al. 1987 Hendry et al. 1987). These nonpyramidal cells are a heterogenous group which show a variety of morphological specializations (Somogyi 1986). Some of the GABAergic cells give rise to locally restricted axonal systems (Kisvarday et al. 1985), others send axons over distances from 0.5 to 2 mm (Somogyi et al. 1983 Matsubara et al. 1987b).
    Review:
    Wörgötter, F. and Niebur, E. and Koch, C. (1991).
    Isotropic connections generate functional asymmetrical behavior in visual cortical cells. Journal of Neurophysiology, 444-459, 66.
    BibTeX:
    @article{woergoetternieburkoch1991,
      author = {Wörgötter, F. and Niebur, E. and Koch, C.},
      title = {Isotropic connections generate functional asymmetrical behavior in visual cortical cells},
      pages = {444-459},
      journal = {Journal of Neurophysiology},
      year = {1991},
      volume= {66},
      publisher = {American Physiological Society},
      abstract = {We study the relationship between structure and function in inhibitory long-range interactions in visual cortex. The sharpening of orientation tuning with "cross-orientation inhibition" is used as an example to discuss anisotropies that are generated by long-range connections. 2. In this study, as opposed to the detailed cortex model described in a previous report, a model of the cortical orientation column structure is proposed in which cortical cells are described only by their orientation preference. 3. We present results using different geometric arrangements of orientation columns. In the simplest case, straight parallel orientation columns were used. We also utilized more realistic, curved columns generated by a simple algorithm. The results were confirmed by the study of a patch of real column structure, determined experimentally by Swindale et al. 4. A given cell receives functionally defined cross-orientation inhibition if the cell receives inhibitory input that is strongest along its nonpreferred orientation. On the other hand, a cell is said to receive structurally defined cross-orientation inhibition if the inhibition arises from source cells with an orientation preference orthogonal to that of the target cell. Even though those definitions seem to describe similar situations, we show that, in the general case, structurally defined cross-orientation inhibition does not efficiently sharpen orientation selectivity. In particular, for straight and parallel columns, structurally defined cross-orientation inhibition results in unequal amounts of inhibition for whole cell populations with different preferred orientations. 5. In more realistic column structures, we studied the question of whether structural cross-orientation inhibition could be implemented in a more efficient way. However, for the majority of cells, it is demonstrated that their nonpreferred stimulus will not preferably excite "cross-oriented" cells. Thus structural cross-orientation inhibition is not efficient in real cortical columns. 6. We propose a new mechanism called circular inhibition. In this connection scheme, a target cell receives inhibitory input from source cells that are located at a given distance (the same for all cells) from the target cell. Circular inhibition can be regarded as two-dimensional long-range lateral inhibition. As opposed to structural cross-orientation inhibition, this mechanism does not introduce unwanted anisotropies in the orientation tuning of the target cells. It is also conceptually much simpler and developmentally advantageous. It is shown that this connection scheme results in a net functional cross-orientation inhibition in all realistic column geometries. The inhibitory tuning strength obtained with circular inhibition is weak and similar to that measured in reality.}}
    Abstract: We study the relationship between structure and function in inhibitory long-range interactions in visual cortex. The sharpening of orientation tuning with "cross-orientation inhibition" is used as an example to discuss anisotropies that are generated by long-range connections. 2. In this study, as opposed to the detailed cortex model described in a previous report, a model of the cortical orientation column structure is proposed in which cortical cells are described only by their orientation preference. 3. We present results using different geometric arrangements of orientation columns. In the simplest case, straight parallel orientation columns were used. We also utilized more realistic, curved columns generated by a simple algorithm. The results were confirmed by the study of a patch of real column structure, determined experimentally by Swindale et al. 4. A given cell receives functionally defined cross-orientation inhibition if the cell receives inhibitory input that is strongest along its nonpreferred orientation. On the other hand, a cell is said to receive structurally defined cross-orientation inhibition if the inhibition arises from source cells with an orientation preference orthogonal to that of the target cell. Even though those definitions seem to describe similar situations, we show that, in the general case, structurally defined cross-orientation inhibition does not efficiently sharpen orientation selectivity. In particular, for straight and parallel columns, structurally defined cross-orientation inhibition results in unequal amounts of inhibition for whole cell populations with different preferred orientations. 5. In more realistic column structures, we studied the question of whether structural cross-orientation inhibition could be implemented in a more efficient way. However, for the majority of cells, it is demonstrated that their nonpreferred stimulus will not preferably excite "cross-oriented" cells. Thus structural cross-orientation inhibition is not efficient in real cortical columns. 6. We propose a new mechanism called circular inhibition. In this connection scheme, a target cell receives inhibitory input from source cells that are located at a given distance (the same for all cells) from the target cell. Circular inhibition can be regarded as two-dimensional long-range lateral inhibition. As opposed to structural cross-orientation inhibition, this mechanism does not introduce unwanted anisotropies in the orientation tuning of the target cells. It is also conceptually much simpler and developmentally advantageous. It is shown that this connection scheme results in a net functional cross-orientation inhibition in all realistic column geometries. The inhibitory tuning strength obtained with circular inhibition is weak and similar to that measured in reality.
    Review:
    Wörgötter, F. and Koch, C. (1991).
    A detailed model of the primary visual pathway in the cat. Comparison of afferent excitatory and intracortical inhibitory connection schemes for orientation selectivity. J. Neurosci, 1959-1979, 11.
    BibTeX:
    @article{woergoetterkoch1991,
      author = {Wörgötter, F. and Koch, C.},
      title = {A detailed model of the primary visual pathway in the cat. Comparison of afferent excitatory and intracortical inhibitory connection schemes for orientation selectivity},
      pages = {1959-1979},
      journal = {J. Neurosci},
      year = {1991},
      volume= {11}}
    Abstract:
    Review:
    Wörgötter, F. and Muche, T. and Eysel, U T. (1991).
    Correlations between directional and orientational tuning of cells in cat striate cortex. Exp. Brain Res, 665-669, 83.
    BibTeX:
    @article{woergoettermucheeysel1991,
      author = {Wörgötter, F. and Muche, T. and Eysel, U T.},
      title = {Correlations between directional and orientational tuning of cells in cat striate cortex},
      pages = {665-669},
      journal = {Exp. Brain Res},
      year = {1991},
      volume= {83}}
    Abstract:
    Review:
    Wörgötter, F. and Holt, G. (1991).
    Spatio-temporal mechanisms in receptive fields of visual cortical simple cells: A model. J. Neurophysiol, 494-510, 65.
    BibTeX:
    @article{woergoetterholt1991,
      author = {Wörgötter, F. and Holt, G.},
      title = {Spatio-temporal mechanisms in receptive fields of visual cortical simple cells: A model},
      pages = {494-510},
      journal = {J. Neurophysiol},
      year = {1991},
      volume= {65}}
    Abstract:
    Review:
    Wörgötter, F. and Eysel, U T. (1991).
    Axial responses in visual cortical cells. Spatio-temporal mechanisms quantified by Fourier components of cortical tuning curves. Exp. Brain Res, 656-664, 83.
    BibTeX:
    @article{woergoettereysel1991,
      author = {Wörgötter, F. and Eysel, U T.},
      title = {Axial responses in visual cortical cells. Spatio-temporal mechanisms quantified by Fourier components of cortical tuning curves},
      pages = {656-664},
      journal = {Exp. Brain Res},
      year = {1991},
      volume= {83}}
    Abstract:
    Review:
    Wörgötter, F. and Eysel, U. (1991).
    Topographical aspects of the contributions of intracortical excitation and inhibition to orientation specificity in area 17 of the cat visual cortex. Europ. J. Neurosci, 1232-1244, 3.
    BibTeX:
    @article{woergoettereysel1991a,
      author = {Wörgötter, F. and Eysel, U.},
      title = {Topographical aspects of the contributions of intracortical excitation and inhibition to orientation specificity in area 17 of the cat visual cortex},
      pages = {1232-1244},
      journal = {Europ. J. Neurosci},
      year = {1991},
      volume= {3},
      abstract = {Intracortical mechanisms contributing to orientation and direction specificity were investigated with a method of local cortical inactivation. Single-unit activity was recorded in area 17 of the anaesthetized cat while a small volume of cortical tissue 400 - 2900 microm lateral to the recorded cell was inactivated by gamma-aminobutyric acid (GABA) microiontophoresis. Cells were stimulated with moving bars of variable orientation and changes of the response were monitored. Recording and inactivation sites were histologically verified. Statistically significant changes in orientation tuning during GABA-induced remote inactivation were observed in 80 of 145 cells (55%), and consisted in a reduced orientation specificity due to either increased (36%) or decreased (19%) responses. Increases of responses were more pronounced for the non-optimal orientations. This effect mainly occurred with GABA application at distances around 500 microm and is interpreted as loss of inhibition. Reduced orientation specificity as a result of decreasing response mainly to the optimal orientation was interpreted as loss of excitation. This effect most frequently occurred with inactivation at distances around 1000 microm. Loss of inhibition was also elicited from a distance of 1000 microm such inhibition, however, affected only directionality, without inducing changes in orientation tuning. For several cells at distances 1000 microm from the inactivation site a temporal sequence consisting of a change in direction specificity followed by a reduction of orientation specificity, and finally by direct GABAergic inhibition of the cell under study, could be induced with gradually increasing ejecting currents. The results indicate that excitation and inhibition originating from populations of neurons at different horizontal distances differentially contribute to direction and orientation specificity of a given visual cortical cell.}}
    Abstract: Intracortical mechanisms contributing to orientation and direction specificity were investigated with a method of local cortical inactivation. Single-unit activity was recorded in area 17 of the anaesthetized cat while a small volume of cortical tissue 400 - 2900 microm lateral to the recorded cell was inactivated by gamma-aminobutyric acid (GABA) microiontophoresis. Cells were stimulated with moving bars of variable orientation and changes of the response were monitored. Recording and inactivation sites were histologically verified. Statistically significant changes in orientation tuning during GABA-induced remote inactivation were observed in 80 of 145 cells (55%), and consisted in a reduced orientation specificity due to either increased (36%) or decreased (19%) responses. Increases of responses were more pronounced for the non-optimal orientations. This effect mainly occurred with GABA application at distances around 500 microm and is interpreted as loss of inhibition. Reduced orientation specificity as a result of decreasing response mainly to the optimal orientation was interpreted as loss of excitation. This effect most frequently occurred with inactivation at distances around 1000 microm. Loss of inhibition was also elicited from a distance of 1000 microm such inhibition, however, affected only directionality, without inducing changes in orientation tuning. For several cells at distances 1000 microm from the inactivation site a temporal sequence consisting of a change in direction specificity followed by a reduction of orientation specificity, and finally by direct GABAergic inhibition of the cell under study, could be induced with gradually increasing ejecting currents. The results indicate that excitation and inhibition originating from populations of neurons at different horizontal distances differentially contribute to direction and orientation specificity of a given visual cortical cell.
    Review:
    Niebur, E. and Wörgötter, F. and Koch, C. (1991).
    Influence of the column structure on intracortical long-range interactions. Proc. 3rd Mid-Western Conf. on Neural Networks, S. Samir ed., Purdue Univ. Press, W. Lafayette.
    BibTeX:
    @inproceedings{nieburwoergoetterc1991,
      author = {Niebur, E. and Wörgötter, F. and Koch, C.},
      title = {Influence of the column structure on intracortical long-range interactions},
      booktitle = {Proc. 3rd Mid-Western Conf. on Neural Networks, S. Samir ed., Purdue Univ. Press, W. Lafayette},
      year = {1991}}
    Abstract:
    Review:
    Niebur, E. and Wörgötter, F. and Koch, C. (1991).
    The Caltech computer cat: A fine-grain simulator of the primary visual system of cat. Annales du Carnac, 1-7, 4.
    BibTeX:
    @inproceedings{nieburwoergoetterkoch1991,
      author = {Niebur, E. and Wörgötter, F. and Koch, C.},
      title = {The Caltech computer cat: A fine-grain simulator of the primary visual system of cat},
      pages = {1-7},
      booktitle = {Annales du Carnac},
      year = {1991},
      volume= {4}}
    Abstract:
    Review:
    Wörgötter, F. and Niebur, E. and Koch, C. (1990).
    Modeling visual cortex: Hidden anisotropies in an isotropic inhibitory connection scheme. Advanced Neural Computers, R. Eckmiller ed. Elsevier Amsterdam, 87-95.
    BibTeX:
    @inproceedings{woergoetternieburkoch1990,
      author = {Wörgötter, F. and Niebur, E. and Koch, C.},
      title = {Modeling visual cortex: Hidden anisotropies in an isotropic inhibitory connection scheme},
      pages = {87-95},
      booktitle = {Advanced Neural Computers, R. Eckmiller ed. Elsevier Amsterdam},
      year = {1990},
      abstract = {A detailed cortex model (15,000 cells) of the adult cat is presented and it is shown that a combination of unspecific inhibitory mechanisms together with input of aligned receptive fields from the LGN with little elongation (1:1.5) is able to reproduce cortical orientation selectivity and other features of cortical cell behavior. We introduce a novel isotropic intracortical connection scheme ("circular inhibition") and demonstrate analytically that this mechanism results in two anisotropies: orientation tuning and a directional bias. Thus, our network shows that structurally unspecific isotropic connections can result in functionally specific behavior. Directional anisotropy introduced in this way could be the starting point for the development of the true direction specificity found in cortical cells.}}
    Abstract: A detailed cortex model (15,000 cells) of the adult cat is presented and it is shown that a combination of unspecific inhibitory mechanisms together with input of aligned receptive fields from the LGN with little elongation (1:1.5) is able to reproduce cortical orientation selectivity and other features of cortical cell behavior. We introduce a novel isotropic intracortical connection scheme ("circular inhibition") and demonstrate analytically that this mechanism results in two anisotropies: orientation tuning and a directional bias. Thus, our network shows that structurally unspecific isotropic connections can result in functionally specific behavior. Directional anisotropy introduced in this way could be the starting point for the development of the true direction specificity found in cortical cells.
    Review:
    Wörgötter, F. and Gründel, O. and Eysel, U T. (1990).
    Quantification and comparison of cell properties in cat's striate cortex determined by different types of stimuli. Europ. J. Neurosci, 928-941, 2.
    BibTeX:
    @article{woergoettergruendeleysel1990,
      author = {Wörgötter, F. and Gründel, O. and Eysel, U T.},
      title = {Quantification and comparison of cell properties in cat's striate cortex determined by different types of stimuli},
      pages = {928-941},
      journal = {Europ. J. Neurosci},
      year = {1990},
      volume= {2}}
    Abstract:
    Review:
    Wörgötter, F. and Kammen, D. M. and Brandt, B. (1990).
    Temporal dynamics in neuronal microcircuitry. Parallel Processing in Neural Systems and Computers, R. Eckmiller, G. Hartmann and G. Hauske eds, 147-150.
    BibTeX:
    @inproceedings{woergoetterkammenbrandt1990,
      author = {Wörgötter, F. and Kammen, D. M. and Brandt, B.},
      title = {Temporal dynamics in neuronal microcircuitry},
      pages = {147-150},
      booktitle = {Parallel Processing in Neural Systems and Computers, R. Eckmiller, G. Hartmann and G. Hauske eds},
      year = {1990},
      editor = {R. Eckmiller, G. Hartmann and G. Hauske},
      publisher = {Elsevier, Netherlands},
      url = {http://www.bccn-goettingen.de/Publications/WorgotterTemporaldynamics}}
    Abstract:
    Review:
    Niebur, E. and Wörgötter, F. (1990).
    Sharpening of orientation selective receptive fields in the mammalian visual cortex by long-range interactions. Proc. of the German Workshop on Artificial Intelligence GWAI-90, 125-133, 251. DOI: 10.1007/978-3-642-76071-6_14.
    BibTeX:
    @inproceedings{nieburwoergoetter1990,
      author = {Niebur, E. and Wörgötter, F.},
      title = {Sharpening of orientation selective receptive fields in the mammalian visual cortex by long-range interactions},
      pages = {125-133},
      booktitle = {Proc. of the German Workshop on Artificial Intelligence GWAI-90},
      year = {1990},
      volume= {251},
      editor = {Marburger, Heinz},
      month = {09},
      doi = {10.1007/978-3-642-76071-6_14},
      abstract = {Lateral intracortical interactions are believed to be responsible for the sharpening of the receptive field profiles of visual cortical cells. This study demonstrates a structurally imposed limitation of long range interactions on the frequently invoked cross orientation inhibition scheme: it leads to inhomogeneous input for different cell populations which is experimentally not observed. We propose a novel connection scheme called }}
    Abstract: Lateral intracortical interactions are believed to be responsible for the sharpening of the receptive field profiles of visual cortical cells. This study demonstrates a structurally imposed limitation of long range interactions on the frequently invoked cross orientation inhibition scheme: it leads to inhomogeneous input for different cell populations which is experimentally not observed. We propose a novel connection scheme called
    Review:
    Wörgötter, F. and Eysel, U T. (1989).
    Axis of preferred motion is a function of bar length in visual cortical receptive fields. Exp. Brain Res, 307-314, 76, 2. DOI: 10.1007/BF00247890.
    BibTeX:
    @article{woergoettereysel1989,
      author = {Wörgötter, F. and Eysel, U T.},
      title = {Axis of preferred motion is a function of bar length in visual cortical receptive fields},
      pages = {307-314},
      journal = {Exp. Brain Res},
      year = {1989},
      volume= {76},
      number = {2},
      doi = {10.1007/BF00247890},
      abstract = {The responses of 82 simple cells and 41 complex cells in area 17 of anesthetized and paralysed cats were examined with light bars of different length. For 84% of the simple cells and 66% of the complex cells the preferred axis of orientation of a stationary flashing long bar (orientational selectivity) and the preferred axis of movement of a small spot were parallel. As a consequence, the axis of maximal response to a moving light spot was mostly orthogonal to the optimal axis of a moving bar. Thus, a single cell responds to two perpendicular axes of preferred movement one for a long bar and one for a light spot, respectively. For both axes independent direction preferences could be distinguished. Additional preferred axes of movement between the two orthogonal extremes could be found with moving bars of intermediate lengths. This can be explained by the fact that cells with a pronounced response to a moving spot showed a strong tendency for intermediate bar length to elicit responses consisting of a superposition of both components. Therefore, decreasing bar length resulted in a gradual rotation of the preferred direction of movement from orthogonal to parallel with respect to the orientational axis, rather than to a mere widening of the tuning curve. Accordingly, the change in orientation selectivity with decreasing bar length is a regular transition from the orientation dependent response to a response type that depends only on the movement axis of the spot. Thus, in a simple model, the resulting response characteristic can be interpreted as an average of both components weighted according to the length of the stimulus.}}
    Abstract: The responses of 82 simple cells and 41 complex cells in area 17 of anesthetized and paralysed cats were examined with light bars of different length. For 84% of the simple cells and 66% of the complex cells the preferred axis of orientation of a stationary flashing long bar (orientational selectivity) and the preferred axis of movement of a small spot were parallel. As a consequence, the axis of maximal response to a moving light spot was mostly orthogonal to the optimal axis of a moving bar. Thus, a single cell responds to two perpendicular axes of preferred movement one for a long bar and one for a light spot, respectively. For both axes independent direction preferences could be distinguished. Additional preferred axes of movement between the two orthogonal extremes could be found with moving bars of intermediate lengths. This can be explained by the fact that cells with a pronounced response to a moving spot showed a strong tendency for intermediate bar length to elicit responses consisting of a superposition of both components. Therefore, decreasing bar length resulted in a gradual rotation of the preferred direction of movement from orthogonal to parallel with respect to the orientational axis, rather than to a mere widening of the tuning curve. Accordingly, the change in orientation selectivity with decreasing bar length is a regular transition from the orientation dependent response to a response type that depends only on the movement axis of the spot. Thus, in a simple model, the resulting response characteristic can be interpreted as an average of both components weighted according to the length of the stimulus.
    Review:
    Wörgötter, F. and Eysel, U. T. (1988).
    A simple glass coated, fire-polished tungsten electrode with impedance adjustment using hydrofluoric acid. Journal of Neuroscience Methods, 135-138, 25, 2. DOI: 10.1016/0165-0270(88)90150-1.
    BibTeX:
    @article{woergoettereysel1988,
      author = {Wörgötter, F. and Eysel, U. T.},
      title = {A simple glass coated, fire-polished tungsten electrode with impedance adjustment using hydrofluoric acid},
      pages = {135-138},
      journal = {Journal of Neuroscience Methods},
      year = {1988},
      volume= {25},
      number = {2},
      url = {http://www.sciencedirect.com/science/article/pii/0165027088901501},
      doi = {10.1016/0165-0270(88)90150-1},
      abstract = {A method is described to produce glass-coated tungsten microelectrodes in 4 simple steps: (1) etching of the wire, (2) coating with glass, (3) fire-polishing, and (4) reopening with hydrofluoric acid to adjust the conductance to a final value. Continuous conductance control is provided during the reopening process by means of an admittance meter to guarantee an exact final adjustment of the conductance required. The complete process yields electrodes of high reliability within a few minutes and the quality of the electrodes remains largely unaffected by any of the manufacturing parameters involved, so that high-performance electrodes are produced without sophisticated procedures. The electrodes have been tested successfully over several years recording from cells in the striate visual pathway of the cat.}}
    Abstract: A method is described to produce glass-coated tungsten microelectrodes in 4 simple steps: (1) etching of the wire, (2) coating with glass, (3) fire-polishing, and (4) reopening with hydrofluoric acid to adjust the conductance to a final value. Continuous conductance control is provided during the reopening process by means of an admittance meter to guarantee an exact final adjustment of the conductance required. The complete process yields electrodes of high reliability within a few minutes and the quality of the electrodes remains largely unaffected by any of the manufacturing parameters involved, so that high-performance electrodes are produced without sophisticated procedures. The electrodes have been tested successfully over several years recording from cells in the striate visual pathway of the cat.
    Review:
    Eysel, U. T. and Muche, T. and Wörgötter, F. (1988).
    Lateral interactions at direction-selective striate neurones in the cat demonstrated by local cortical inactivation. The Journal of Physiology, 657-675, 399. DOI: 10.1113/jphysiol.1988.sp017102.
    BibTeX:
    @article{eyselmuchewoergoetter1988,
      author = {Eysel, U. T. and Muche, T. and Wörgötter, F.},
      title = {Lateral interactions at direction-selective striate neurones in the cat demonstrated by local cortical inactivation},
      pages = {657-675},
      journal = {The Journal of Physiology},
      year = {1988},
      volume= {399},
      doi = {10.1113/jphysiol.1988.sp017102},
      abstract = {1. Single neurones were recorded with glass-coated tungsten electrodes from area 17 of the cats visual cortex. The cats were anaesthetized and artificially respirated with a mixture of halothane, nitrous oxide and oxygen. 2. For local cortical inactivation a multibarrel pipette was placed 0.5-2.5 mm posterior (or anterior) to the recording site, at a depth of 400-600 micron. Four separate barrels of the pipette were filled with gamma-aminobutyric acid (GABA) the fifth was filled with Pontamine Sky Blue for labelling of the centre of the inactivation site. 3. Direction-selective cells, of differing optimal orientations and preferred directions of motion, were classified as simple or complex and tested with computer-controlled stimuli presented on an oscilloscope. 4. During continuous recording GABA was microionophoretically applied for different durations and with different ejection currents. The effectiveness of GABA microionophoresis was evident from the direct GABAergic effects (strong overall inhibition of the recorded cells) observed with high ejection currents and prolonged application. 5. Two discrete effects could be observed during local inactivation distant from the cortical cell under study: an increase of the response in either the non-preferred or the preferred direction or a decrease of the response in the preferred direction. All GABA-induced changes were reversible. 6. The depressant action of GABA was independent of the relative topography between recording and inactivation site and affected mainly the response to the preferred direction of stimulus motion. 7. Disinhibition was only observed when the stimulus-evoked response moved on the cortical map in a direction from the GABA pipette towards the recording electrode. It is concluded that GABA reversibly silences inhibitory interneurones that are situated in the vicinity of the micropipette tip and are involved in generation of direction selectivity. 8. No fundamental differences between cells from different cortical layers were observed. The disinhibitory effects of GABA inactivation were more pronounced and more frequently seen in simple cells (61%) than in complex cells (38%), while the opposite was true for reduced excitation during lateral GABA inactivation (observed in 62% of the complex vs. 39% of the simple cells). Accordingly, lateral inhibition statistically prevails in simple cells and lateral excitation in complex cells. 9. Among the inhibitory and excitatory mechanisms affected by lateral GABA inactivation, inhibition is organized with a higher topographic specificity.}}
    Abstract: 1. Single neurones were recorded with glass-coated tungsten electrodes from area 17 of the cats visual cortex. The cats were anaesthetized and artificially respirated with a mixture of halothane, nitrous oxide and oxygen. 2. For local cortical inactivation a multibarrel pipette was placed 0.5-2.5 mm posterior (or anterior) to the recording site, at a depth of 400-600 micron. Four separate barrels of the pipette were filled with gamma-aminobutyric acid (GABA) the fifth was filled with Pontamine Sky Blue for labelling of the centre of the inactivation site. 3. Direction-selective cells, of differing optimal orientations and preferred directions of motion, were classified as simple or complex and tested with computer-controlled stimuli presented on an oscilloscope. 4. During continuous recording GABA was microionophoretically applied for different durations and with different ejection currents. The effectiveness of GABA microionophoresis was evident from the direct GABAergic effects (strong overall inhibition of the recorded cells) observed with high ejection currents and prolonged application. 5. Two discrete effects could be observed during local inactivation distant from the cortical cell under study: an increase of the response in either the non-preferred or the preferred direction or a decrease of the response in the preferred direction. All GABA-induced changes were reversible. 6. The depressant action of GABA was independent of the relative topography between recording and inactivation site and affected mainly the response to the preferred direction of stimulus motion. 7. Disinhibition was only observed when the stimulus-evoked response moved on the cortical map in a direction from the GABA pipette towards the recording electrode. It is concluded that GABA reversibly silences inhibitory interneurones that are situated in the vicinity of the micropipette tip and are involved in generation of direction selectivity. 8. No fundamental differences between cells from different cortical layers were observed. The disinhibitory effects of GABA inactivation were more pronounced and more frequently seen in simple cells (61%) than in complex cells (38%), while the opposite was true for reduced excitation during lateral GABA inactivation (observed in 62% of the complex vs. 39% of the simple cells). Accordingly, lateral inhibition statistically prevails in simple cells and lateral excitation in complex cells. 9. Among the inhibitory and excitatory mechanisms affected by lateral GABA inactivation, inhibition is organized with a higher topographic specificity.
    Review:
    Wörgötter, F. and Eysel, U T. (1987).
    Quantitative determination of orientational and directional components in the response of visual cortical cells to moving stimuli. Biol. Cybern, 349-355, 57, 6. DOI: 10.1007/BF00354980.
    BibTeX:
    @article{woergoettereysel1987,
      author = {Wörgötter, F. and Eysel, U T.},
      title = {Quantitative determination of orientational and directional components in the response of visual cortical cells to moving stimuli},
      pages = {349-355},
      journal = {Biol. Cybern},
      year = {1987},
      volume= {57},
      number = {6},
      url = {http://link.springer.com/article/10.1007%2FBF00354980},
      doi = {10.1007/BF00354980},
      abstract = {The response characteristic of visual cortical cells to moving oriented stimuli consists mainly of directional (D) and orientational (O) components superimposed to a spontaneous activity (S). Commonly used polar plot diagrams reflect the maximal responses for different orientations and directions of stimulus movement with a periodicity of 360 degrees in the visual field. Fast Fourier analysis (FFT) is applied to polar plot data in order to determine the intermingled S, D, and O components. The zero order gain component of the spectrum corresponds to a (virtual) spontaneous activity. The first order component is interpreted as the strength of the direction selectivity and the second order component as the strength of the orientation specificity. The axes of the preferred direction and optimal orientation are represented by the respective phase values. Experimental data are well described with these parameters and relative changes of the shape of a polar plot can be detected with an accuracy better than 1%. The results are compatible with a model of converging excitatory and inhibitory inputs weighted according to the zero to second order components of the Fourier analysis. The easily performed quantitative determination of the S, D, and O components allows the study of pharmacologically induced changes in the dynamic response characteristics of single visual cortical cells.}}
    Abstract: The response characteristic of visual cortical cells to moving oriented stimuli consists mainly of directional (D) and orientational (O) components superimposed to a spontaneous activity (S). Commonly used polar plot diagrams reflect the maximal responses for different orientations and directions of stimulus movement with a periodicity of 360 degrees in the visual field. Fast Fourier analysis (FFT) is applied to polar plot data in order to determine the intermingled S, D, and O components. The zero order gain component of the spectrum corresponds to a (virtual) spontaneous activity. The first order component is interpreted as the strength of the direction selectivity and the second order component as the strength of the orientation specificity. The axes of the preferred direction and optimal orientation are represented by the respective phase values. Experimental data are well described with these parameters and relative changes of the shape of a polar plot can be detected with an accuracy better than 1%. The results are compatible with a model of converging excitatory and inhibitory inputs weighted according to the zero to second order components of the Fourier analysis. The easily performed quantitative determination of the S, D, and O components allows the study of pharmacologically induced changes in the dynamic response characteristics of single visual cortical cells.
    Review:
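    The Fourier method described in the abstract above maps directly onto a discrete Fourier transform of the polar plot sampled at equally spaced directions: the zero-order term gives the (virtual) spontaneous activity S, the first harmonic the directional component D and its preferred direction, and the second harmonic the orientational component O and the optimal orientation axis. A Python/NumPy sketch; the amplitude and phase conventions are chosen here for convenience and are not taken from the paper.
      import numpy as np

      def sdo_components(responses):
          """Decompose a tuning curve sampled over 0..360 deg into S, D and O components."""
          r = np.asarray(responses, dtype=float)
          n = r.size
          c = np.fft.fft(r) / n
          S = c[0].real                                         # zero order: spontaneous activity
          D = 2 * np.abs(c[1])                                  # first order: direction selectivity
          O = 2 * np.abs(c[2])                                  # second order: orientation specificity
          pref_dir = np.degrees(-np.angle(c[1])) % 360.0        # phase -> preferred direction
          pref_ori = (np.degrees(-np.angle(c[2])) / 2) % 180.0  # phase -> optimal orientation axis
          return S, D, O, pref_dir, pref_ori

      # Synthetic cell: S = 10, D = 4 at 120 deg, O = 3 along the 30 deg axis
      angles = np.arange(0, 360, 15.0)
      resp = 10 + 4 * np.cos(np.deg2rad(angles - 120)) + 3 * np.cos(2 * np.deg2rad(angles - 30))
      print(sdo_components(resp))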
    Eysel, U T. and Wörgötter, F. and Pape, H C. (1987).
    Local cortical lesions abolish lateral inhibition at direction selective cells in cat visual cortex. Exp. Brain Res, 606-612, 68, 3. DOI: 10.1007/BF00249803.
    BibTeX:
    @article{eyselwoergoetterpape1987,
      author = {Eysel, U T. and Wörgötter, F. and Pape, H C.},
      title = {Local cortical lesions abolish lateral inhibition at direction selective cells in cat visual cortex},
      pages = {606-612},
      journal = {Exp. Brain Res},
      year = {1987},
      volume= {68},
      number = {3},
      doi = {10.1007/BF00249803},
      abstract = {Many cells in the cat visual cortex display a strong selectivity for the direction of motion of an optimally oriented stimulus. Postsynaptic inhibition has been suggested to generate this direction selectivity in simple cells, but the intracortical pathways involved have not been identified. While continuously recording from simple cells in layers 4 and 6, we have inactivated the superficial cortical layers in small regions 0.4-2.5 mm from the cortical column under study by using heat lesions, localized cooling or gamma-aminobutyric acid (GABA) microiontophoresis. When inactivation affected cortical regions retinotopically representing motion in the non-preferred direction towards the receptive field, the responses to movement in this direction increased, and the recorded cells lost direction selectivity due to loss of inhibition. Our results indicate that direction selectivity of simple cells involves asymmetric inhibition of predictable cortical topography.}}
    Abstract: Many cells in the cat visual cortex display a strong selectivity for the direction of motion of an optimally oriented stimulus. Postsynaptic inhibition has been suggested to generate this direction selectivity in simple cells, but the intracortical pathways involved have not been identified. While continuously recording from simple cells in layers 4 and 6, we have inactivated the superficial cortical layers in small regions 0.4-2.5 mm from the cortical column under study by using heat lesions, localized cooling or gamma-aminobutyric acid (GABA) microiontophoresis. When inactivation affected cortical regions retinotopically representing motion in the non-preferred direction towards the receptive field, the responses to movement in this direction increased, and the recorded cells lost direction selectivity due to loss of inhibition. Our results indicate that direction selectivity of simple cells involves asymmetric inhibition of predictable cortical topography.
    Review:
    Wörgötter, F. and Daunicht, W J. and Eckmiller, R. (1986).
    An on-line spike form discriminator based on an analog correlation technique. J. Neurosci. Meth, 141-151, 17. DOI: 10.1016/0165-0270(86)90067-1.
    BibTeX:
    @article{woergoetterdaunichteckmiller1986,
      author = {Wörgötter, F. and Daunicht, W J. and Eckmiller, R.},
      title = {An on-line spike form discriminator based on an analog correlation technique},
      pages = {141-151},
      journal = {J. Neurosci. Meth},
      year = {1986},
      volume= {17},
      url = {http://www.sciencedirect.com/science/article/pii/0165027086900671},
      doi = {10.1016/0165-0270(86)90067-1},
      abstract = {The discrimination of single unit activity in extracellular recordings presents a serious problem when the signal-to-noise ratio is low or when the amplitudes of interspersed spikes are similar. By exploiting spike form, the system described here performs discrimination using on-line hardware template matching. Using analog delay lines, the combined deviation of 8 input signal values from 8 stored template values is calculated simultaneously. The 8 template values are selected by adjusting 8 cursors to the desired spike trace on a CRT; the spike form discriminator (SPIFODIS) then generates a deviation function which steeply drops to zero whenever form similarity occurs, allowing for easy triggering. The performance of SPIFODIS was compared quantitatively with that of a conventional amplitude trigger in two cases: (a) when detecting a single unit with varied signal-to-noise ratios and (b) when separating double units of equal amplitude. (a) At signal-to-noise ratios between 2 and 1, the error rate for SPIFODIS was only 15-50% of that of an amplitude trigger. (b) In double-unit recordings showing only form differences, spikes are discriminated with very low error rate, while an amplitude trigger fails completely.}}
    Abstract: The discrimination of single unit activity in extracellular recordings presents a serious problem when the signal-to-noise ratio is low or when the amplitudes of interspersed spikes are similar. By exploiting spike form, the system described here performs discrimination using on-line hardware template matching. Using analog delay lines, the combined deviation of 8 input signal values from 8 stored template values is calculated simultaneously. The 8 template values are selected by adjusting 8 cursors to the desired spike trace on a CRT; the spike form discriminator (SPIFODIS) then generates a deviation function which steeply drops to zero whenever form similarity occurs, allowing for easy triggering. The performance of SPIFODIS was compared quantitatively with that of a conventional amplitude trigger in two cases: (a) when detecting a single unit with varied signal-to-noise ratios and (b) when separating double units of equal amplitude. (a) At signal-to-noise ratios between 2 and 1, the error rate for SPIFODIS was only 15-50% of that of an amplitude trigger. (b) In double-unit recordings showing only form differences, spikes are discriminated with very low error rate, while an amplitude trigger fails completely.
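    Code sketch (Python, illustrative only): a software analogue of the template-matching idea described above, i.e. summing the deviation between 8 stored template values and a sliding 8-sample window and triggering when it becomes small. This is not the SPIFODIS hardware; the synthetic spike shape and the threshold are assumptions.
      import numpy as np

      def spike_template_trigger(signal, template, threshold):
          """Return indices where the 8-point window matches the stored template."""
          n = len(template)                      # the 8 cursor/template values
          hits = []
          for i in range(len(signal) - n + 1):
              window = signal[i:i + n]
              deviation = np.sum(np.abs(window - template))  # combined deviation
              if deviation < threshold:          # small deviation = form similarity
                  hits.append(i)
          return hits

      rng = np.random.default_rng(0)
      template = np.array([0.0, 0.5, 2.0, 1.0, -1.5, -0.5, 0.2, 0.0])
      trace = rng.normal(0.0, 0.3, 200)          # noisy background activity
      trace[100:108] += template                 # embed one spike-like event
      print(spike_template_trigger(trace, template, threshold=3.0))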
    Review:
    Goldschmidt, D. and Wörgötter, F. and Manoonpong, P. (2014).
    Biologically-Inspired Adaptive Obstacle Negotiation Behavior of Hexapod Robots. Frontiers in Neurorobotics, 1 -- 16, 8, 3. DOI: 10.3389/fnbot.2014.00003.
    BibTeX:
    @article{goldschmidtwoergoettermanoonpong2014,
      author = {Goldschmidt, D. and Wörgötter, F. and Manoonpong, P.},
      title = {Biologically-Inspired Adaptive Obstacle Negotiation Behavior of Hexapod Robots},
      pages = {1 -- 16},
      journal = {Frontiers in Neurorobotics},
      year = {2014},
      volume= {8},
      number = {3},
      url = {http://journal.frontiersin.org/Journal/10.3389/fnbot.2014.00003/abstract},
      doi = {10.3389/fnbot.2014.00003},
      abstract = {Neurobiological studies have shown that insects are able to adapt leg movements and posture for obstacle negotiation in changing environments. Moreover, the distance to an obstacle where an insect begins to climb is found to be a major parameter for successful obstacle negotiation. Inspired by these findings, we present an adaptive neural control mechanism for obstacle negotiation behavior in hexapod robots. It combines locomotion control, backbone joint control, local leg reflexes, and neural learning. While the first three components generate locomotion including walking and climbing, the neural learning mechanism allows the robot to adapt its behavior for obstacle negotiation with respect to changing conditions, e.g., variable obstacle heights and different walking gaits. By successfully learning the association of an early, predictive signal (conditioned stimulus, CS) and a late, reflex signal (unconditioned stimulus, UCS), both provided by ultrasonic sensors at the front of the robot, the robot can autonomously find an appropriate distance from an obstacle to initiate climbing. The adaptive neural control was developed and tested first on a physical robot simulation, and was then successfully transferred to a real hexapod robot, called AMOS II. The results show that the robot can efficiently negotiate obstacles with a height up to 85% of the robot's leg length in simulation and 75% in a real environment.}}
    Abstract: Neurobiological studies have shown that insects are able to adapt leg movements and posture for obstacle negotiation in changing environments. Moreover, the distance to an obstacle where an insect begins to climb is found to be a major parameter for successful obstacle negotiation. Inspired by these findings, we present an adaptive neural control mechanism for obstacle negotiation behavior in hexapod robots. It combines locomotion control, backbone joint control, local leg reflexes, and neural learning. While the first three components generate locomotion including walking and climbing, the neural learning mechanism allows the robot to adapt its behavior for obstacle negotiation with respect to changing conditions, e.g., variable obstacle heights and different walking gaits. By successfully learning the association of an early, predictive signal (conditioned stimulus, CS) and a late, reflex signal (unconditioned stimulus, UCS), both provided by ultrasonic sensors at the front of the robot, the robot can autonomously find an appropriate distance from an obstacle to initiate climbing. The adaptive neural control was developed and tested first on a physical robot simulation, and was then successfully transferred to a real hexapod robot, called AMOS II. The results show that the robot can efficiently negotiate obstacles with a height up to 85% of the robot's leg length in simulation and 75% in a real environment.
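    Code sketch (Python, illustrative only): a correlation-based (ICO-type) learning rule in the spirit of the CS/UCS association described above, strengthening the weight of the early "predictive" ultrasonic signal while the late "reflex" signal still occurs, so that over trials the climbing behavior is triggered earlier, i.e. further from the obstacle. Signal shapes, time constants and the learning rate are assumptions and do not reproduce the AMOS II controller.
      import numpy as np

      def approach_trial(w_cs, dt=0.01, duration=3.0, mu=0.5):
          """One simulated approach to an obstacle; returns the updated CS weight."""
          t = np.arange(0.0, duration, dt)
          cs = np.exp(-((t - 1.0) ** 2) / 0.05)   # early, predictive sensor signal
          ucs = np.exp(-((t - 1.5) ** 2) / 0.02)  # late, reflex sensor signal
          # ICO-style correlation learning: dw/dt ~ CS * d(UCS)/dt
          w_cs += mu * np.sum(cs * np.gradient(ucs, dt)) * dt
          return w_cs

      w = 0.0
      for trial in range(20):
          w = approach_trial(w)
      print(f"learned CS weight after 20 trials: {w:.3f}")
      # Once w is large enough, the weighted CS alone can drive climbing earlier.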
    Review:
    Stein, S. and Schoeler, M. and Papon, J. and Wörgötter, F. (2014).
    Object Partitioning using Local Convexity. Conference on Computer Vision and Pattern Recognition CVPR, 304-311. DOI: 10.1109/CVPR.2014.46.
    BibTeX:
    @inproceedings{steinschoelerpapon2014,
      author = {Stein, S. and Schoeler, M. and Papon, J. and Wörgötter, F.},
      title = {Object Partitioning using Local Convexity},
      pages = {304-311},
      booktitle = {Conference on Computer Vision and Pattern Recognition CVPR},
      year = {2014},
      location = {Columbus, OH, USA},
      month = {06},
      doi = {10.1109/CVPR.2014.46},
      abstract = {The problem of how to arrive at an appropriate 3D-segmentation of a scene remains difficult. While current state-of-the-art methods continue to gradually improve in benchmark performance, they also grow more and more complex, for example by incorporating chains of classifiers, which require training on large manually annotated datasets. As an alternative to this, we present a new, efficient learning- and model-free approach for the segmentation of 3D point clouds into object parts. The algorithm begins by decomposing the scene into an adjacency-graph of surface patches based on a voxel grid. Edges in the graph are then classified as either convex or concave using a novel combination of simple criteria which operate on the local geometry of these patches. This way the graph is divided into locally convex connected subgraphs, which - with high accuracy - represent object parts. Additionally, we propose a novel depth-dependent voxel grid to deal with the decreasing point-density at far distances in the point clouds. This improves segmentation, allowing the use of fixed parameters for vastly different scenes. The algorithm is straightforward to implement and requires no training data, while nevertheless producing results that are comparable to state-of-the-art methods which incorporate high-level concepts involving classification, learning and model fitting.}}
    Abstract: The problem of how to arrive at an appropriate 3D-segmentation of a scene remains difficult. While current state-of-the-art methods continue to gradually improve in benchmark performance, they also grow more and more complex, for example by incorporating chains of classifiers, which require training on large manually annotated datasets. As an alternative to this, we present a new, efficient learning- and model-free approach for the segmentation of 3D point clouds into object parts. The algorithm begins by decomposing the scene into an adjacency-graph of surface patches based on a voxel grid. Edges in the graph are then classified as either convex or concave using a novel combination of simple criteria which operate on the local geometry of these patches. This way the graph is divided into locally convex connected subgraphs, which - with high accuracy - represent object parts. Additionally, we propose a novel depth-dependent voxel grid to deal with the decreasing point-density at far distances in the point clouds. This improves segmentation, allowing the use of fixed parameters for vastly different scenes. The algorithm is straightforward to implement and requires no training data, while nevertheless producing results that are comparable to state-of-the-art methods which incorporate high-level concepts involving classification, learning and model fitting.
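    Code sketch (Python, illustrative only): a generic local convexity test for two adjacent surface patches described by a centroid and a unit normal, in the spirit of the edge classification described above; the exact criteria and thresholds of the paper are not reproduced, and the margin is an assumption.
      import numpy as np

      def edge_is_convex(x1, n1, x2, n2, margin_deg=5.0):
          """Classify the edge between two patches (centroid x, unit normal n)."""
          d = x1 - x2
          d = d / (np.linalg.norm(d) + 1e-12)
          # Angle of each normal against the direction connecting the centroids
          a1 = np.arccos(np.clip(np.dot(n1, d), -1.0, 1.0))
          a2 = np.arccos(np.clip(np.dot(n2, d), -1.0, 1.0))
          # Convex if the normals open away from each other along that direction
          return a1 < a2 + np.deg2rad(margin_deg)

      # Two patches on the outside of a sphere -> convex edge expected
      x1, n1 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
      x2, n2 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0])
      print(edge_is_convex(x1, n1, x2, n2))   # True; flip both normals for a concave edge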
    Review:
    Stein, S. and Wörgötter, F. and Schoeler, M. and Papon, J. and Kulvicius, T. (2014).
    Convexity based object partitioning for robot applications. IEEE International Conference on Robotics and Automation (ICRA), 3213-3220. DOI: 10.1109/ICRA.2014.6907321.
    BibTeX:
    @inproceedings{steinwoergoetterschoeler2014,
      author = {Stein, S. and Wörgötter, F. and Schoeler, M. and Papon, J. and Kulvicius, T.},
      title = {Convexity based object partitioning for robot applications},
      pages = {3213-3220},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2014},
      month = {05},
      doi = {10.1109/ICRA.2014.6907321},
      abstract = {The idea that connected convex surfaces, separated by concave boundaries, play an important role for the perception of objects and their decomposition into parts has been discussed for a long time. Based on this idea, we present a new bottom-up approach for the segmentation of 3D point clouds into object parts. The algorithm approximates a scene using an adjacency-graph of spatially connected surface patches. Edges in the graph are then classified as either convex or concave using a novel, strictly local criterion. Region growing is employed to identify locally convex connected subgraphs, which represent the object parts. We show quantitatively that our algorithm, although conceptually easy to grasp and fast to compute, produces results that are comparable to far more complex state-of-the-art methods which use classification, learning and model fitting. This suggests that convexity/concavity is a powerful feature for object partitioning using 3D data. Furthermore, we demonstrate that for many objects a natural decomposition into}}
    Abstract: The idea that connected convex surfaces, separated by concave boundaries, play an important role for the perception of objects and their decomposition into parts has been discussed for a long time. Based on this idea, we present a new bottom-up approach for the segmentation of 3D point clouds into object parts. The algorithm approximates a scene using an adjacency-graph of spatially connected surface patches. Edges in the graph are then classified as either convex or concave using a novel, strictly local criterion. Region growing is employed to identify locally convex connected subgraphs, which represent the object parts. We show quantitatively that our algorithm, although conceptually easy to grasp and fast to compute, produces results that are comparable to far more complex state-of-the-art methods which use classification, learning and model fitting. This suggests that convexity/concavity is a powerful feature for object partitioning using 3D data. Furthermore, we demonstrate that for many objects a natural decomposition into
    Review:
    Schoeler, M. and Stein, S. and Papon, J. and Abramov, A. and Wörgötter, F. (2014).
    Fast Self-supervised On-line Training for Object Recognition Specifically for Robotic Applications. International Conference on Computer Vision Theory and Applications VISAPP, 1 - 10.
    BibTeX:
    @inproceedings{schoelersteinpapon2014,
      author = {Schoeler, M. and Stein, S. and Papon, J. and Abramov, A. and Wörgötter, F.},
      title = {Fast Self-supervised On-line Training for Object Recognition Specifically for Robotic Applications},
      pages = {1 - 10},
      booktitle = {International Conference on Computer Vision Theory and Applications VISAPP},
      year = {2014},
      month = {January},
      abstract = {Today most recognition pipelines are trained at an off-line stage, providing systems with pre-segmented images and predefined objects, or at an on-line stage, which requires a human supervisor to tediously control the learning. Self-supervised on-line training of recognition pipelines without human intervention is a highly desirable goal, as it allows systems to learn unknown, environment-specific objects on-the-fly. We propose a fast and automatic system, which can extract and learn unknown objects with minimal human intervention by employing a two-level pipeline combining the advantages of RGB-D sensors for object extraction and high-resolution cameras for object recognition. Furthermore, we significantly improve recognition results with local features by implementing a novel keypoint orientation scheme, which leads to highly invariant but discriminative object signatures. Using only one image per object for training, our system is able to achieve a recognition rate of 79% for 18 objects, benchmarked on 42 scenes with random poses, scales and occlusion, while only taking 7 seconds for the training. Additionally, we evaluate our orientation scheme on the state-of-the-art 56-object SDU-dataset, boosting accuracy for one training view per object by +37% to 78% and peaking at a performance of 98% for 11 training views.}}
    Abstract: Today most recognition pipelines are trained at an off-line stage, providing systems with pre-segmented images and predefined objects, or at an on-line stage, which requires a human supervisor to tediously control the learning. Self-supervised on-line training of recognition pipelines without human intervention is a highly desirable goal, as it allows systems to learn unknown, environment-specific objects on-the-fly. We propose a fast and automatic system, which can extract and learn unknown objects with minimal human intervention by employing a two-level pipeline combining the advantages of RGB-D sensors for object extraction and high-resolution cameras for object recognition. Furthermore, we significantly improve recognition results with local features by implementing a novel keypoint orientation scheme, which leads to highly invariant but discriminative object signatures. Using only one image per object for training, our system is able to achieve a recognition rate of 79% for 18 objects, benchmarked on 42 scenes with random poses, scales and occlusion, while only taking 7 seconds for the training. Additionally, we evaluate our orientation scheme on the state-of-the-art 56-object SDU-dataset, boosting accuracy for one training view per object by +37% to 78% and peaking at a performance of 98% for 11 training views.
    Review:
    Manoonpong, P. and Dasgupta, S. and Goldschmidt, D. and Wörgötter, F. (2014).
    Reservoir-based online adaptive forward models with neural control for complex locomotion in a hexapod robot. International Joint Conference on Neural Networks (IJCNN), 3295-3302. DOI: 10.1109/IJCNN.2014.6889405.
    BibTeX:
    @inproceedings{manoonpongdasguptagoldschmidt2014,
      author = {Manoonpong, P. and Dasgupta, S. and Goldschmidt, D. and Wörgötter, F.},
      title = {Reservoir-based online adaptive forward models with neural control for complex locomotion in a hexapod robot},
      pages = {3295-3302},
      booktitle = {International Joint Conference on Neural Networks (IJCNN)},
      year = {2014},
      month = {July},
      doi = {10.1109/IJCNN.2014.6889405},
      abstract = {Walking animals show fascinating locomotor abilities and complex behaviors. Biological study has revealed that such complex behaviors are the result of a combination of biomechanics and neural mechanisms. While biomechanics allows for flexibility and a variety of movements, neural mechanisms generate locomotion, make predictions, and provide adaptation. Inspired by this finding, we present here an artificial bio-inspired walking system which combines biomechanics (in terms of its body and leg structures) and neural mechanisms. The neural mechanisms consist of 1) central pattern generator-based control for generating basic rhythmic patterns and coordinated movements, 2) reservoir-based adaptive forward models with efference copies for sensory prediction as well as state estimation, and 3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Simulation results show that this bio-inspired approach allows the walking robot to perform complex locomotor abilities including walking on undulated terrains, crossing a large gap, as well as climbing over a high obstacle and a flight of stairs.}}
    Abstract: Walking animals show fascinating locomotor abilities and complex behaviors. Biological study has revealed that such complex behaviors are the result of a combination of biomechanics and neural mechanisms. While biomechanics allows for flexibility and a variety of movements, neural mechanisms generate locomotion, make predictions, and provide adaptation. Inspired by this finding, we present here an artificial bio-inspired walking system which combines biomechanics (in terms of its body and leg structures) and neural mechanisms. The neural mechanisms consist of 1) central pattern generator-based control for generating basic rhythmic patterns and coordinated movements, 2) reservoir-based adaptive forward models with efference copies for sensory prediction as well as state estimation, and 3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Simulation results show that this bio-inspired approach allows the walking robot to perform complex locomotor abilities including walking on undulated terrains, crossing a large gap, as well as climbing over a high obstacle and a flight of stairs.
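    Code sketch (Python, illustrative only): a minimal two-neuron (SO(2)-type) oscillator of the kind commonly used as a central pattern generator for rhythmic leg movements; the weights and parameter values are assumptions, not those of the cited controller.
      import numpy as np

      def cpg_step(state, phi=0.06 * np.pi, alpha=1.01):
          """One discrete-time update of a two-neuron recurrent oscillator."""
          w = alpha * np.array([[np.cos(phi), np.sin(phi)],
                                [-np.sin(phi), np.cos(phi)]])   # rotation-like weights
          return np.tanh(w @ state)

      state = np.array([0.2, 0.2])
      outputs = []
      for _ in range(200):
          state = cpg_step(state)
          outputs.append(state.copy())
      outputs = np.asarray(outputs)   # two quasi-sinusoidal, phase-shifted signals
      print(outputs[-5:])             # such signals can drive e.g. hip and knee joints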
    Review:
    Zeidan, B. and Dasgupta, S. and Wörgötter, F. and Manoonpong, P. (2014).
    Adaptive Landmark-Based Navigation System Using Learning Techniques. From Animals to Animats 13, 121-131, 8575. DOI: 10.1007/978-3-319-08864-8_12.
    BibTeX:
    @inproceedings{zeidandasguptawoergoetter2014,
      author = {Zeidan, B. and Dasgupta, S. and Wörgötter, F. and Manoonpong, P.},
      title = {Adaptive Landmark-Based Navigation System Using Learning Techniques},
      pages = {121-131},
      booktitle = {From Animals to Animats 13},
      year = {2014},
      volume= {8575},
      editor = {del Pobil, AngelP. and Chinellato, Eris and Martinez-Martin, Ester and Hallam, John and Cervera, Enric and Morales, Antonio},
      language = {English},
      month = {July},
      publisher = {Springer International Publishing},
      series = {Lecture Notes in Computer Science},
      url = {http://dx.doi.org/10.1007/978-3-319-08864-8_12},
      doi = {10.1007/978-3-319-08864-8_12},
      abstract = {The goal-directed navigational ability of animals is an essential prerequisite for them to survive. They can learn to navigate to a distal goal in a complex environment. During this long-distance navigation, they exploit environmental features, like landmarks, to guide them towards their goal. Inspired by this, we develop an adaptive landmark-based navigation system based on sequential reinforcement learning. In addition, correlation-based learning is also integrated into the system to improve learning performance. The proposed system has been applied to simulated simple wheeled and more complex hexapod robots. As a result, it allows the robots to successfully learn to navigate to distal goals in complex environments.}}
    Abstract: The goal-directed navigational ability of animals is an essential prerequisite for them to survive. They can learn to navigate to a distal goal in a complex environment. During this long-distance navigation, they exploit environmental features, like landmarks, to guide them towards their goal. Inspired by this, we develop an adaptive landmark-based navigation system based on sequential reinforcement learning. In addition, correlation-based learning is also integrated into the system to improve learning performance. The proposed system has been applied to simulated simple wheeled and more complex hexapod robots. As a result, it allows the robots to successfully learn to navigate to distal goals in complex environments.
    Review:
    Ren, G. and Chen, W. and Dasgupta, S. and Kolodziejski, C. and Wörgötter, F. and Manoonpong, P. (2014).
    Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation. Information Sciences, 666 - 682, 294. DOI: 10.1016/j.ins.2014.05.001.
    BibTeX:
    @article{renchendasgupta2014,
      author = {Ren, G. and Chen, W. and Dasgupta, S. and Kolodziejski, C. and Wörgötter, F. and Manoonpong, P.},
      title = {Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation},
      pages = {666 - 682},
      journal = {Information Sciences},
      year = {2014},
      volume= {294},
      month = {05},
      publisher = {Elseiver},
      url = {http://www.sciencedirect.com/science/article/pii/S0020025514005192},
      doi = {10.1016/j.ins.2014.05.001},
      abstract = {An originally chaotic system can be controlled into various periodic dynamics. When it is implemented in a legged robot's locomotion control as a central pattern generator (CPG), sophisticated gait patterns arise so that the robot can perform various walking behaviors. However, such a single chaotic CPG controller has difficulties dealing with leg malfunction. Specifically, in the scenarios presented here, its movement permanently deviates from the desired trajectory. To address this problem, we extend the single chaotic CPG to multiple CPGs with learning. The learning mechanism is based on a simulated annealing algorithm. In a normal situation, the CPGs synchronize and their dynamics are identical. With leg malfunction or disability, the CPGs lose synchronization leading to independent dynamics. In this case, the learning mechanism is applied to automatically adjust the remaining legs' oscillation frequencies so that the robot adapts its locomotion to deal with the malfunction. As a consequence, the trajectory produced by the multiple chaotic CPGs resembles the original trajectory far better than the one produced by only a single CPG. The performance of the system is evaluated first in a physical simulation of a quadruped as well as a hexapod robot and finally in a real six-legged walking machine called AMOSII. The experimental results presented here reveal that using multiple CPGs with learning is an effective approach for adaptive locomotion generation where, for instance, different body parts have to perform independent movements for malfunction compensation.}}
    Abstract: An originally chaotic system can be controlled into various periodic dynamics. When it is implemented in a legged robot's locomotion control as a central pattern generator (CPG), sophisticated gait patterns arise so that the robot can perform various walking behaviors. However, such a single chaotic CPG controller has difficulties dealing with leg malfunction. Specifically, in the scenarios presented here, its movement permanently deviates from the desired trajectory. To address this problem, we extend the single chaotic CPG to multiple CPGs with learning. The learning mechanism is based on a simulated annealing algorithm. In a normal situation, the CPGs synchronize and their dynamics are identical. With leg malfunction or disability, the CPGs lose synchronization leading to independent dynamics. In this case, the learning mechanism is applied to automatically adjust the remaining legs' oscillation frequencies so that the robot adapts its locomotion to deal with the malfunction. As a consequence, the trajectory produced by the multiple chaotic CPGs resembles the original trajectory far better than the one produced by only a single CPG. The performance of the system is evaluated first in a physical simulation of a quadruped as well as a hexapod robot and finally in a real six-legged walking machine called AMOSII. The experimental results presented here reveal that using multiple CPGs with learning is an effective approach for adaptive locomotion generation where, for instance, different body parts have to perform independent movements for malfunction compensation.
    Review:
    Agostini, A. and Torras, C. and Wörgötter, F. (2014).
    Learning Weakly-Correlated Cause-Effects for Gardening with a Cognitive System. Engineering Applications of Artificial Intelligence, 178--194, 36. DOI: 10.1016/j.engappai.2014.07.017.
    BibTeX:
    @article{agostinitorraswoergoetter2014,
      author = {Agostini, A. and Torras, C. and Wörgötter, F.},
      title = {Learning Weakly-Correlated Cause-Effects for Gardening with a Cognitive System},
      pages = {178--194},
      journal = {Engineering Applications of Artificial Intelligence},
      year = {2014},
      volume= {36},
      doi = {10.1016/j.engappai.2014.07.017},
      abstract = {We propose a cognitive system that combines artificial intelligence techniques for planning and learning to execute tasks involving delayed and variable correlations between the actions executed and their expected effects. The system is applied to the task of controlling the growth of plants, where the evolution of the plant attributes strongly depends on different events taking place in the temporally distant past history of the plant. The main problem to tackle is how to efficiently detect these past events. This is very challenging since the inclusion of time could make the dimensionality of the search space extremely large and the collected training instances may only provide very limited information about the relevant combinations of events. To address this problem we propose a learning method that progressively identifies those events that are more likely to produce a sequence of changes under a plant treatment. Since the number of experiences is very limited compared to the size of the event space, we use a probabilistic estimate that takes into account the lack of experience to prevent biased estimations. Planning operators are generated from most accurately predicted sequences of changes. Planning and learning are integrated in a decision-making framework that operates without task interruptions by allowing a human gardener to instruct the treatments when the knowledge acquired so far is not enough to make a decision.}}
    Abstract: We propose a cognitive system that combines artificial intelligence techniques for planning and learning to execute tasks involving delayed and variable correlations between the actions executed and their expected effects. The system is applied to the task of controlling the growth of plants, where the evolution of the plant attributes strongly depends on different events taking place in the temporally distant past history of the plant. The main problem to tackle is how to efficiently detect these past events. This is very challenging since the inclusion of time could make the dimensionality of the search space extremely large and the collected training instances may only provide very limited information about the relevant combinations of events. To address this problem we propose a learning method that progressively identifies those events that are more likely to produce a sequence of changes under a plant treatment. Since the number of experiences is very limited compared to the size of the event space, we use a probabilistic estimate that takes into account the lack of experience to prevent biased estimations. Planning operators are generated from most accurately predicted sequences of changes. Planning and learning are integrated in a decision-making framework that operates without task interruptions by allowing a human gardener to instruct the treatments when the knowledge acquired so far is not enough to make a decision.
    Review:
    Dasgupta, S. and Wörgötter, F. and Manoonpong, P. (2013).
    Active Memory in Input Driven Recurrent Neural Networks. Bernstein Conference 2013. DOI: 10.12751/nncn.bc2013.0151.
    BibTeX:
    @inproceedings{dasguptawoergoettermanoonpong2013a,
      author = {Dasgupta, S. and Wörgötter, F. and Manoonpong, P.},
      title = {Active Memory in Input Driven Recurrent Neural Networks},
      booktitle = {Bernstein Conference 2013},
      year = {2013},
      doi = {10.12751/nncn.bc2013.0151},
      abstract = {Understanding the exact mechanism of learning and memory emerging from complex dynamical systems like neural networks serves as a challenging field of research. Traditionally the neural mechanisms underlying memory and cognition in these systems are described by steady-state or stable fixed point attractor dynamics. However an alternative and refined understanding of the neuronal dynamics can be achieved through the idea of transient dynamics [1] (reservoir computing paradigm), i.e., computation through input specific trajectories in neural space without stable equilibrium. Mathematical analysis of the underlying memory through such transient dynamics is difficult. As such, information theory provides tools to quantify the dynamics of memory in such networks. One such popular measure of memory capacity in reservoir networks is the linear memory capacity [2]. It provides an indication of how well the network can reconstruct delayed versions of the input signal. However it assumes a linear retrieval of the input signal and deteriorates with neuron non-linearity. Alternatively, active information storage [3] provides a measure of local neuron memory by quantifying the degree of influence of past activity on the next time step activity of a neuron, independent of neuronal non-linearity. In this work we further extend this quantity by calculating the mutual information between a neuron's past activity and its immediate future activity while conditioning out delayed versions of the input signal. Summing over different delays of the input signal, it provides a suitable measure of the total input driven active memory in the network. Intuitively, active memory calculates the actual memory in use, i.e., the influence of input history on local neuron memory. We compare memory capacity and active memory (AM) with different network parameters for networks driven with statistically different inputs and justify AM as an appropriate means to quantify the dynamics of memory in input driven neural networks.}}
    Abstract: Understanding the exact mechanism of learning and memory emerging from complex dynamical systems like neural networks serves as a challenging field of research. Traditionally the neural mechanisms underlying memory and cognition in these systems are described by steady-state or stable fixed point attractor dynamics. However an alternative and refined understanding of the neuronal dynamics can be achieved through the idea of transient dynamics [1] (reservoir computing paradigm), i.e., computation through input specific trajectories in neural space without stable equilibrium. Mathematical analysis of the underlying memory through such transient dynamics is difficult. As such, information theory provides tools to quantify the dynamics of memory in such networks. One such popular measure of memory capacity in reservoir networks is the linear memory capacity [2]. It provides an indication of how well the network can reconstruct delayed versions of the input signal. However it assumes a linear retrieval of the input signal and deteriorates with neuron non-linearity. Alternatively, active information storage [3] provides a measure of local neuron memory by quantifying the degree of influence of past activity on the next time step activity of a neuron, independent of neuronal non-linearity. In this work we further extend this quantity by calculating the mutual information between a neuron's past activity and its immediate future activity while conditioning out delayed versions of the input signal. Summing over different delays of the input signal, it provides a suitable measure of the total input driven active memory in the network. Intuitively, active memory calculates the actual memory in use, i.e., the influence of input history on local neuron memory. We compare memory capacity and active memory (AM) with different network parameters for networks driven with statistically different inputs and justify AM as an appropriate means to quantify the dynamics of memory in input driven neural networks.
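    Code sketch (Python, illustrative only): the standard linear memory capacity measure mentioned above, estimated for a small random echo state network with ridge-regression readouts. Network size, spectral radius, washout length and the ridge parameter are assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      N, T, washout, max_delay = 50, 2000, 100, 20
      W = rng.normal(0, 1, (N, N))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # set spectral radius to 0.9
      w_in = rng.uniform(-0.1, 0.1, N)

      u = rng.uniform(-1, 1, T)                           # i.i.d. input signal
      x = np.zeros(N)
      states = np.zeros((T, N))
      for t in range(T):
          x = np.tanh(W @ x + w_in * u[t])
          states[t] = x

      mc = 0.0
      for k in range(1, max_delay + 1):
          X, y = states[washout:, :], np.roll(u, k)[washout:]   # target: u(t - k)
          w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
          r = np.corrcoef(X @ w, y)[0, 1]
          mc += r ** 2                                          # capacity for delay k
      print(f"linear memory capacity (delays 1..{max_delay}): {mc:.2f}")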
    Review:
    Kuhlemann, I. and Braun, J -M. and Wörgötter, F. and Manoonpong, P. (2014).
    Comparing Arc-shaped Feet and Rigid Ankles with Flat Feet and Compliant Ankles for a Dynamic Walker. Mobile Service Robotics, WSPC Proceedings, 353-360, 17. DOI: 10.1142/9789814623353_0041.
    BibTeX:
    @inproceedings{kuhlemannbraunwoergoetter2014,
      author = {Kuhlemann, I. and Braun, J -M. and Wörgötter, F. and Manoonpong, P.},
      title = {Comparing Arc-shaped Feet and Rigid Ankles with Flat Feet and Compliant Ankles for a Dynamic Walker},
      pages = {353-360},
      booktitle = {Mobile Service Robotics},
      journal = {WSPC Proceedings},
      year = {2014},
      number = {17},
      language = {English},
      location = {Poznan, Poland},
      series = {Proceedings of the International Conference on Climbing and Walking Robots},
      url = {http://www.bfnt-goettingen.de/Publications/articlereference.2014-10-23.5545515863},
      doi = {10.1142/9789814623353_0041},
      abstract = {In this paper we show that exchanging curved feet and rigid ankles for flat feet and compliant ankles improves the range of gait parameters for a bipedal dynamic walker. The new lower legs were designed such that they fit to the old set-up, allowing for a direct and quantitative comparison. The dynamic walking robot RunBot, controlled by a reflexive neural network, uses only a few sensors for generating its stable gait. The results show that flat feet and compliant ankles extend RunBot's parameter range, especially to more leaning-back postures. They also allow the robot to stably walk over obstacles with low height.}}
    Abstract: In this paper we show that exchanging curved feet and rigid ankles for flat feet and compliant ankles improves the range of gait parameters for a bipedal dynamic walker. The new lower legs were designed such that they fit to the old set-up, allowing for a direct and quantitative comparison. The dynamic walking robot RunBot, controlled by a reflexive neural network, uses only a few sensors for generating its stable gait. The results show that flat feet and compliant ankles extend RunBot's parameter range, especially to more leaning-back postures. They also allow the robot to stably walk over obstacles with low height.
    Review:
    Braun, J -M. and Wörgötter, F. and Manoonpong, P. (2014).
    Internal Models Support Specific Gaits in Orthotic Devices. Mobile Service Robotics, 539-546, 17. DOI: 10.1142/9789814623353_0063.
    BibTeX:
    @inproceedings{braunwoergoettermanoonpong2014a,
      author = {Braun, J -M. and Wörgötter, F. and Manoonpong, P.},
      title = {Internal Models Support Specific Gaits in Orthotic Devices},
      pages = {539-546},
      booktitle = {Mobile Service Robotics},
      year = {2014},
      number = {17},
      language = {English},
      location = {Poznan, Poland},
      series = {Proceedings of the International Conference on Climbing and Walking Robots},
      url = {http://www.worldscientific.com/doi/abs/10.1142/9789814623353_0063},
      doi = {10.1142/9789814623353_0063},
      abstract = {Patients use orthoses and prostheses for the lower limbs to support and enable movements they cannot perform themselves, or can perform only with difficulty. Because traditional devices support only a limited set of movements, patients are restricted in their mobility. A possible approach to overcome such limitations is to supply the patient via the orthosis with situation-dependent gait models. To achieve this, we present a method for gait recognition using model invalidation. We show that these models are capable of predicting the individual patient's movements and supplying the correct gait. We investigate the system's accuracy and robustness on a Knee-Ankle-Foot-Orthosis, introducing behaviour changes depending on the patient's current walking situation. We conclude that the model-based support of different gaits presented here has the power to enhance the patient's mobility.}}
    Abstract: Patients use orthoses and prostheses for the lower limbs to support and enable movements they cannot perform themselves, or can perform only with difficulty. Because traditional devices support only a limited set of movements, patients are restricted in their mobility. A possible approach to overcome such limitations is to supply the patient via the orthosis with situation-dependent gait models. To achieve this, we present a method for gait recognition using model invalidation. We show that these models are capable of predicting the individual patient's movements and supplying the correct gait. We investigate the system's accuracy and robustness on a Knee-Ankle-Foot-Orthosis, introducing behaviour changes depending on the patient's current walking situation. We conclude that the model-based support of different gaits presented here has the power to enhance the patient's mobility.
    Review:
    Braun, J. and Wörgötter, F. and Manoonpong, P. (2014).
    Orthosis Controller with Internal Models Supports Individual Gaits. Proceedings of the 9th Annual Dynamic Walking Conference, 1 -- 2, 9.
    BibTeX:
    @inproceedings{braunwoergoettermanoonpong2014,
      author = {Braun, J. and Wörgötter, F. and Manoonpong, P.},
      title = {Orthosis Controller with Internal Models Supports Individual Gaits},
      pages = {1 -- 2},
      booktitle = {Proceedings of the 9th Annual Dynamic Walking Conference},
      year = {2014},
      number = {9},
      language = {English},
      location = {Zürich, Switzerland},
      month = {06},
      series = {Proceedings of the 9th Annual Dynamic Walking Conference}}
    Abstract:
    Review:
    Dasgupta, S. and Wörgötter, F. and Manoonpong, P. (2014).
    Neuromodulatory Adaptive Combination of Correlation-based Learning in Cerebellum and Reward-based Learning in Basal Ganglia for Goal-directed Behavior Control. Frontiers in Neural Circuits, 1 -- 21, 8, 00126. DOI: 10.3389/fncir.2014.00126.
    BibTeX:
    @article{dasguptawoergoettermanoonpong2014,
      author = {Dasgupta, S. and Wörgötter, F. and Manoonpong, P.},
      title = {Neuromodulatory Adaptive Combination of Correlation-based Learning in Cerebellum and Reward-based Learning in Basal Ganglia for Goal-directed Behavior Control},
      pages = {1 -- 21},
      journal = {Frontiers in Neural Circuits},
      year = {2014},
      volume= {8},
      number = {00126},
      url = {http://journal.frontiersin.org/Journal/10.3389/fncir.2014.00126/abstract},
      doi = {10.3389/fncir.2014.00126},
      abstract = {Goal-directed decision making in biological systems is broadly based on associations between conditional and unconditional stimuli. This can be further classified as classical conditioning (correlation-based learning) and operant conditioning (reward-based learning). A number of computational and experimental studies have well established the role of the basal ganglia in reward-based learning, whereas the cerebellum plays an important role in developing specific conditioned responses. Although viewed as distinct learning systems, recent animal experiments point towards their complementary role in behavioral learning, and also show the existence of substantial two-way communication between these two brain structures. Based on this notion of co-operative learning, in this paper we hypothesize that the basal ganglia and cerebellar learning systems work in parallel and interact with each other. We envision that such an interaction is influenced by a reward-modulated heterosynaptic plasticity (RMHP) rule at the thalamus, guiding the overall goal-directed behavior. Using a recurrent neural network actor-critic model of the basal ganglia and a feed-forward correlation-based learning model of the cerebellum, we demonstrate that the RMHP rule can effectively balance the outcomes of the two learning systems. This is tested using simulated environments of increasing complexity with a four-wheeled robot in a foraging task in both static and dynamic configurations. Although modeled with a simplified level of biological abstraction, we clearly demonstrate that such an RMHP-induced combinatorial learning mechanism leads to more stable and faster learning of goal-directed behaviors, in comparison to the individual systems. Thus, in this paper we provide a computational model for the adaptive combination of the basal ganglia and cerebellum learning systems by way of neuromodulated plasticity for goal-directed decision making in biological and bio-mimetic organisms.}}
    Abstract: Goal-directed decision making in biological systems is broadly based on associations between conditional and unconditional stimuli. This can be further classified as classical conditioning (correlation-based learning) and operant conditioning (reward-based learning). A number of computational and experimental studies have well established the role of the basal ganglia in reward-based learning, whereas the cerebellum plays an important role in developing specific conditioned responses. Although viewed as distinct learning systems, recent animal experiments point towards their complementary role in behavioral learning, and also show the existence of substantial two-way communication between these two brain structures. Based on this notion of co-operative learning, in this paper we hypothesize that the basal ganglia and cerebellar learning systems work in parallel and interact with each other. We envision that such an interaction is influenced by a reward-modulated heterosynaptic plasticity (RMHP) rule at the thalamus, guiding the overall goal-directed behavior. Using a recurrent neural network actor-critic model of the basal ganglia and a feed-forward correlation-based learning model of the cerebellum, we demonstrate that the RMHP rule can effectively balance the outcomes of the two learning systems. This is tested using simulated environments of increasing complexity with a four-wheeled robot in a foraging task in both static and dynamic configurations. Although modeled with a simplified level of biological abstraction, we clearly demonstrate that such an RMHP-induced combinatorial learning mechanism leads to more stable and faster learning of goal-directed behaviors, in comparison to the individual systems. Thus, in this paper we provide a computational model for the adaptive combination of the basal ganglia and cerebellum learning systems by way of neuromodulated plasticity for goal-directed decision making in biological and bio-mimetic organisms.
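    Code sketch (Python, illustrative only): a reward-modulated, heterosynaptic-style combination of two learning signals at a shared output, loosely following the idea summarized above. The update form, the normalization and all numbers are assumptions and do not reproduce the published model.
      import numpy as np

      def combine(o_cerebellum, o_basal_ganglia, w, reward, lr=0.01):
          """Weighted combination of the two systems with a reward-gated weight update."""
          o = w[0] * o_cerebellum + w[1] * o_basal_ganglia
          # Reward-gated update: credit each pathway by how strongly it co-varied
          # with the combined output when reward arrived.
          w = w + lr * reward * np.array([o_cerebellum, o_basal_ganglia]) * o
          return o, w / (np.sum(np.abs(w)) + 1e-12)   # keep the weights normalized

      w = np.array([0.5, 0.5])
      for step in range(100):
          o, w = combine(o_cerebellum=0.8, o_basal_ganglia=0.2, w=w, reward=1.0)
      print(w)   # the pathway contributing more to rewarded output gains weight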
    Review:
    Xiong, X. and Wörgötter, F. and Manoonpong, P. (2014).
    Neuromechanical control for hexapedal robot walking on challenging surfaces and surface classification. Robotics and Autonomous Systems, 1777 - 1789, 62, 12. DOI: 10.1016/j.robot.2014.07.008.
    BibTeX:
    @article{xiongwoergoettermanoonpong2014a,
      author = {Xiong, X. and Wörgötter, F. and Manoonpong, P.},
      title = {Neuromechanical control for hexapedal robot walking on challenging surfaces and surface classification},
      pages = {1777 - 1789},
      journal = {Robotics and Autonomous Systems},
      year = {2014},
      volume= {62},
      number = {12},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889014001353},
      doi = {10.1016/j.robot.2014.07.008},
      abstract = {The neuromechanical control principles of animal locomotion provide good insights for the development of bio-inspired legged robots for walking on challenging surfaces. Based on such principles, we developed a neuromechanical controller consisting of a modular neural network (MNN) and virtual agonist-antagonist muscle mechanisms (VAAMs). The controller allows for variable compliant leg motions of a hexapod robot, thereby leading to energy-efficient walking on different surfaces. Without any passive mechanisms or torque and position feedback at each joint, the variable compliant leg motions are achieved by only changing the stiffness parameters of the VAAMs. In addition, six surfaces can also be classified by observing the motor signals generated by the controller. The performance of the controller is tested on a physical hexapod robot. Experimental results show that it can effectively walk on six different surfaces with specific resistances between 9.1 and 25.0, and also classify them with high accuracy.}}
    Abstract: The neuromechanical control principles of animal locomotion provide good insights for the development of bio-inspired legged robots for walking on challenging surfaces. Based on such principles, we developed a neuromechanical controller consisting of a modular neural network (MNN) and virtual agonist-antagonist muscle mechanisms (VAAMs). The controller allows for variable compliant leg motions of a hexapod robot, thereby leading to energy-efficient walking on different surfaces. Without any passive mechanisms or torque and position feedback at each joint, the variable compliant leg motions are achieved by only changing the stiffness parameters of the VAAMs. In addition, six surfaces can also be classified by observing the motor signals generated by the controller. The performance of the controller is tested on a physical hexapod robot. Experimental results show that it can effectively walk on six different surfaces with specific resistances between 9.1 and 25.0, and also classify them with high accuracy.
    Review:
    Krüger, N. and Ude, A. and Petersen, H. and Nemec, B. and Ellekilde, L. and Savarimuthu, T. and Rytz, J. and Fischer, K. and Buch, A. and Kraft, D. and Mustafa, W. and Aksoy, E. and Papon, J. and Kramberger, A. and Wörgötter, F. (2014).
    Technologies for the Fast Set-Up of Automated Assembly Processes. KI - Künstliche Intelligenz, 1-9. DOI: 10.1007/s13218-014-0329-9.
    BibTeX:
    @article{kruegerudepetersen2014,
      author = {Krüger, N. and Ude, A. and Petersen, H. and Nemec, B. and Ellekilde, L. and Savarimuthu, T. and Rytz, J. and Fischer, K. and Buch, A. and Kraft, D. and Mustafa, W. and Aksoy, E. and Papon, J. and Kramberger, A. and Wörgötter, F.},
      title = {Technologies for the Fast Set-Up of Automated Assembly Processes},
      pages = {1-9},
      journal = {KI - Künstliche Intelligenz},
      year = {2014},
      language = {English},
      publisher = {Springer Berlin Heidelberg},
      url = {http://dx.doi.org/10.1007/s13218-014-0329-9},
      doi = {10.1007/s13218-014-0329-9},
      abstract = {In this article, we describe technologies facilitating the set-up of automated assembly solutions which have been developed in the context of the IntellAct project (2011-2014). Tedious procedures are currently still required to establish such robot solutions. This especially hinders the automation of so-called few-of-a-kind production. Therefore, most production of this kind is done manually and thus often performed in low-wage countries. In the IntellAct project, we have developed a set of methods which facilitate the set-up of a complex automatic assembly process, and here we present our work on tele-operation, dexterous grasping, pose estimation and learning of control strategies. The prototype developed in IntellAct is at a TRL4 (corresponding to demonstration in lab environment).}}
    Abstract: In this article, we describe technologies facilitating the set-up of automated assembly solutions which have been developed in the context of the IntellAct project (2011-2014). Tedious procedures are currently still required to establish such robot solutions. This especially hinders the automation of so-called few-of-a-kind production. Therefore, most production of this kind is done manually and thus often performed in low-wage countries. In the IntellAct project, we have developed a set of methods which facilitate the set-up of a complex automatic assembly process, and here we present our work on tele-operation, dexterous grasping, pose estimation and learning of control strategies. The prototype developed in IntellAct is at a TRL4 (corresponding to demonstration in lab environment).
    Review:
    Schlette, C. and Buch, A. and Aksoy, E. and Steil, T. and Papon, J. and Savarimuthu, T. and Wörgötter, F. and Krüger, N. and Roßmann, J. (2014).
    A new benchmark for pose estimation with ground truth from virtual reality. Production Engineering, 745-754, 8, 6. DOI: 10.1007/s11740-014-0552-0.
    BibTeX:
    @article{schlettebuchaksoy2014,
      author = {Schlette, C. and Buch, A. and Aksoy, E. and Steil, T. and Papon, J. and Savarimuthu, T. and Wörgötter, F. and Krüger, N. and Roßmann, J.},
      title = {A new benchmark for pose estimation with ground truth from virtual reality},
      pages = {745-754},
      journal = {Production Engineering},
      year = {2014},
      volume= {8},
      number = {6},
      language = {English},
      publisher = {Springer Berlin Heidelberg},
      url = {http://dx.doi.org/10.1007/s11740-014-0552-0},
      doi = {10.1007/s11740-014-0552-0},
      abstract = {The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation, stereo reconstruction and action recognition. As a basis for the machine vision and learning involved, pose estimation is used for deriving object positions and orientations and thus target frames for robot execution. Our contribution introduces and applies a novel benchmark for typical multi-sensor setups and algorithms in the field of demonstration-based automated assembly. The benchmark platform is equipped with a multi-sensor setup consisting of stereo cameras and depth scanning devices (see Fig. 1). The dimensions and abilities of the platform have been chosen in order to reflect typical manual assembly tasks. Following the eRobotics methodology, a simulatable 3D representation of this platform was modelled in virtual reality. Based on a detailed camera and sensor simulation, we generated a set of benchmark images and point clouds with controlled levels of noise as well as ground truth data such as object positions and time stamps. We demonstrate the application of the benchmark to evaluate our latest developments in pose estimation, stereo reconstruction and action recognition and publish the benchmark data for objective comparison of sensor setups and algorithms in industry.}}
    Abstract: The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation, stereo reconstruction and action recognition. As a basis for the machine vision and learning involved, pose estimation is used for deriving object positions and orientations and thus target frames for robot execution. Our contribution introduces and applies a novel benchmark for typical multi-sensor setups and algorithms in the field of demonstration-based automated assembly. The benchmark platform is equipped with a multi-sensor setup consisting of stereo cameras and depth scanning devices (see Fig. 1). The dimensions and abilities of the platform have been chosen in order to reflect typical manual assembly tasks. Following the eRobotics methodology, a simulatable 3D representation of this platform was modelled in virtual reality. Based on a detailed camera and sensor simulation, we generated a set of benchmark images and point clouds with controlled levels of noise as well as ground truth data such as object positions and time stamps. We demonstrate the application of the benchmark to evaluate our latest developments in pose estimation, stereo reconstruction and action recognition and publish the benchmark data for objective comparison of sensor setups and algorithms in industry.
    Review:
    Agostini, A. and Torras, C. and Wörgötter, F. (2015).
    Efficient interactive decision-making framework for robotic applications. Artificial Intelligence. DOI: 10.1016/j.artint.2015.04.004.
    BibTeX:
    @article{agostinitorraswoergoetter2015,
      author = {Agostini, A. and Torras, C. and Wörgötter, F.},
      title = {Efficient interactive decision-making framework for robotic applications},
      journal = {Artificial Intelligence},
      year = {2015},
      url = {http://www.sciencedirect.com/science/article/pii/S0004370215000661},
      doi = {10.1016/j.artint.2015.04.004},
      abstract = {The inclusion of robots, such as service robots, in our society is imminent. Robots are now capable of reliably manipulating objects in our daily lives, but only when combined with artificial intelligence (AI) techniques for planning and decision-making, which allow a machine to determine how a task can be completed successfully. To perform decision making, AI planning methods use a set of planning operators to code the state changes in the environment produced by a robotic action. Given a specific goal, the planner then searches for the best sequence of planning operators, i.e., the best plan that leads through the state space to satisfy the goal. In principle, planning operators can be hand-coded, but this is impractical for applications that involve many possible state transitions. An alternative is to learn them automatically from experience, which is most efficient when there is a human teacher. In this study, we propose a simple and efficient decision-making framework for this purpose. The robot executes its plan in a step-wise manner and any planning impasse produced by missing operators is resolved online by asking a human teacher for the next action to execute. Based on the observed state transitions, this approach rapidly generates the missing operators by evaluating the relevance of several cause-effect alternatives in parallel using a probability estimate, which compensates for the high uncertainty that is inherent when learning from a small number of samples. We evaluated the validity of our approach in simulated and real environments, where it was benchmarked against previous methods. Humans learn in the same incremental manner, so we consider that our approach may be a better alternative to existing learning paradigms, which require offline learning, a significant amount of previous knowledge, or a large number of samples.}}
    Abstract: The inclusion of robots, such as service robots, in our society is imminent. Robots are now capable of reliably manipulating objects in our daily lives, but only when combined with artificial intelligence (AI) techniques for planning and decision-making, which allow a machine to determine how a task can be completed successfully. To perform decision making, AI planning methods use a set of planning operators to code the state changes in the environment produced by a robotic action. Given a specific goal, the planner then searches for the best sequence of planning operators, i.e., the best plan that leads through the state space to satisfy the goal. In principle, planning operators can be hand-coded, but this is impractical for applications that involve many possible state transitions. An alternative is to learn them automatically from experience, which is most efficient when there is a human teacher. In this study, we propose a simple and efficient decision-making framework for this purpose. The robot executes its plan in a step-wise manner and any planning impasse produced by missing operators is resolved online by asking a human teacher for the next action to execute. Based on the observed state transitions, this approach rapidly generates the missing operators by evaluating the relevance of several cause-effect alternatives in parallel using a probability estimate, which compensates for the high uncertainty that is inherent when learning from a small number of samples. We evaluated the validity of our approach in simulated and real environments, where it was benchmarked against previous methods. Humans learn in the same incremental manner, so we consider that our approach may be a better alternative to existing learning paradigms, which require offline learning, a significant amount of previous knowledge, or a large number of samples.
    Review:
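    To make the probability estimate over cause-effect alternatives concrete, the following minimal Python sketch (hypothetical predicates and counts, not the authors' implementation) generates candidate planning operators from a single teacher-demonstrated transition and scores them with a Laplace-smoothed success estimate:

from dataclasses import dataclass
from itertools import combinations

@dataclass
class Operator:
    action: str
    precondition: frozenset      # facts that must hold before the action
    add_effects: frozenset       # facts that become true
    del_effects: frozenset       # facts that become false
    successes: int = 0           # times the predicted effect was observed
    trials: int = 0              # times the operator was applied

    def probability(self) -> float:
        # Laplace-smoothed success estimate: stays near 0.5 for few samples.
        return (self.successes + 1) / (self.trials + 2)

def candidate_operators(action, state_before, state_after, max_precond=2):
    """Generate cause-effect alternatives from one observed transition."""
    adds = frozenset(state_after - state_before)
    dels = frozenset(state_before - state_after)
    for k in range(1, max_precond + 1):
        for pre in combinations(sorted(state_before), k):
            yield Operator(action, frozenset(pre), adds, dels)

# One teacher-demonstrated transition: "pick(cup)" while the hand is free.
before = {"ontable(cup)", "handfree"}
after = {"holding(cup)"}
candidates = list(candidate_operators("pick(cup)", before, after))

# Later executions update the counts; the best-scoring alternative is kept.
for op in candidates:
    op.trials, op.successes = 3, 3 if "handfree" in op.precondition else 1
best = max(candidates, key=lambda op: op.probability())
print(best.precondition, f"p={best.probability():.2f}")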
    Aksoy, E. E. and Tamosiunaite, M. and Wörgötter, F. (2014).
    Model-free incremental learning of the semantics of manipulation actions. Robotics and Autonomous Systems, 1-42. DOI: 10.1016/j.robot.2014.11.003.
    BibTeX:
    @article{aksoytamosiunaitewoergoetter2014,
      author = {Aksoy, E. E. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Model-free incremental learning of the semantics of manipulation actions},
      pages = {1-42},
      journal = {Robotics and Autonomous Systems},
      year = {2014},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889014002450},
      doi = {10.1016/j.robot.2014.11.003},
      abstract = {Understanding and learning the semantics of complex manipulation actions are intriguing and non-trivial issues for the development of autonomous robots. In this paper, we present a novel method for an on-line, incremental learning of the semantics of manipulation actions by observation. Recently, we had introduced the Semantic Event Chains (SECs) as a new generic representation for manipulations, which can be directly computed from a stream of images and is based on the changes in the relationships between objects involved in a manipulation. We here show that the SEC concept can be used to bootstrap the learning of the semantics of manipulation actions without using any prior knowledge about actions or objects. We create a new manipulation action benchmark with 8 different manipulation tasks including in total 120 samples to learn an archetypal SEC model for each manipulation action. We then evaluate the learned SEC models with 20 long and complex chained manipulation sequences including in total 103 manipulation samples. Thereby we put the event chains to a decisive test asking how powerful is action classification when using this framework. We find that we reach up to 100 % and 87 % average precision and recall values in the validation phase and 99 % and 92 % in the testing phase. This supports the notion that SECs are a useful tool for classifying manipulation actions in a fully automatic way.}}
    Abstract: Understanding and learning the semantics of complex manipulation actions are intriguing and non-trivial issues for the development of autonomous robots. In this paper, we present a novel method for an on-line, incremental learning of the semantics of manipulation actions by observation. Recently, we had introduced the Semantic Event Chains (SECs) as a new generic representation for manipulations, which can be directly computed from a stream of images and is based on the changes in the relationships between objects involved in a manipulation. We here show that the SEC concept can be used to bootstrap the learning of the semantics of manipulation actions without using any prior knowledge about actions or objects. We create a new manipulation action benchmark with 8 different manipulation tasks including in total 120 samples to learn an archetypal SEC model for each manipulation action. We then evaluate the learned SEC models with 20 long and complex chained manipulation sequences including in total 103 manipulation samples. Thereby we put the event chains to a decisive test asking how powerful is action classification when using this framework. We find that we reach up to 100 % and 87 % average precision and recall values in the validation phase and 99 % and 92 % in the testing phase. This supports the notion that SECs are a useful tool for classifying manipulation actions in a fully automatic way.
    Review:
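    As an illustration of the event-chain idea described in the abstract, the short Python sketch below (toy relations, not the authors' code) reduces a sequence of pairwise spatial relations to the frames in which at least one relation changes:

import numpy as np

pairs = ["hand-cup", "cup-table", "hand-table"]
# One column per video frame, one row per object pair
# ("N" = not touching, "T" = touching).
relations = np.array([
    ["N", "N", "T", "T", "T", "N"],   # hand-cup
    ["T", "T", "T", "N", "N", "N"],   # cup-table
    ["N", "N", "N", "N", "N", "N"],   # hand-table
])

def event_chain(rel):
    """Keep the first frame and every frame where some relation changed."""
    keep = [0] + [t for t in range(1, rel.shape[1])
                  if not np.array_equal(rel[:, t], rel[:, t - 1])]
    return rel[:, keep], keep

chain, key_frames = event_chain(relations)
for name, row in zip(pairs, chain):
    print(f"{name:11s}", " ".join(row))
print("key frames:", key_frames)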
    Aksoy, E. E. and Abramov, A. and Wörgötter, F. and Scharr, H. and Fischbach, A. and Dellen, B. (2015).
    Modeling leaf growth of rosette plants using infrared stereo image sequences. Computers and Electronics in Agriculture, 78 - 90, 110. DOI: 10.1016/j.compag.2014.10.020.
    BibTeX:
    @article{aksoyabramovwoergoetter2015,
      author = {Aksoy, E. E. and Abramov, A. and Wörgötter, F. and Scharr, H. and Fischbach, A. and Dellen, B.},
      title = {Modeling leaf growth of rosette plants using infrared stereo image sequences},
      pages = {78 - 90},
      journal = {Computers and Electronics in Agriculture},
      year = {2015},
      volume= {110},
      url = {http://www.sciencedirect.com/science/article/pii/S0168169914002816},
      doi = {10.1016/j.compag.2014.10.020},
      abstract = {In this paper, we present a novel multi-level procedure for finding and tracking leaves of a rosette plant, in our case up to 3 weeks old tobacco plants, during early growth from infrared-image sequences. This allows measuring important plant parameters, e.g. leaf growth rates, in an automatic and non-invasive manner. The procedure consists of three main stages: preprocessing, leaf segmentation, and leaf tracking. Leaf-shape models are applied to improve leaf segmentation, and further used for measuring leaf sizes and handling occlusions. Leaves typically grow radially away from the stem, a property that is exploited in our method, reducing the dimensionality of the tracking task. We successfully tested the method on infrared image sequences showing the growth of tobacco-plant seedlings up to an age of about 30 days, which allows measuring relevant plant growth parameters such as leaf growth rate. By robustly fitting a suitably modified autocatalytic growth model to all growth curves from plants under the same treatment, average plant growth models could be derived. Future applications of the method include plant-growth monitoring for optimizing plant production in greenhouses or plant phenotyping for plant research.}}
    Abstract: In this paper, we present a novel multi-level procedure for finding and tracking leaves of a rosette plant, in our case up to 3 weeks old tobacco plants, during early growth from infrared-image sequences. This allows measuring important plant parameters, e.g. leaf growth rates, in an automatic and non-invasive manner. The procedure consists of three main stages: preprocessing, leaf segmentation, and leaf tracking. Leaf-shape models are applied to improve leaf segmentation, and further used for measuring leaf sizes and handling occlusions. Leaves typically grow radially away from the stem, a property that is exploited in our method, reducing the dimensionality of the tracking task. We successfully tested the method on infrared image sequences showing the growth of tobacco-plant seedlings up to an age of about 30 days, which allows measuring relevant plant growth parameters such as leaf growth rate. By robustly fitting a suitably modified autocatalytic growth model to all growth curves from plants under the same treatment, average plant growth models could be derived. Future applications of the method include plant-growth monitoring for optimizing plant production in greenhouses or plant phenotyping for plant research.
    Review:
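    The growth-model fitting step can be illustrated with a generic logistic ("autocatalytic"-type) curve fit; the sketch below uses synthetic data and SciPy's curve_fit and is not the paper's model or data:

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, k, t0):
    """Leaf area at time t: carrying capacity A, rate k, inflection time t0."""
    return A / (1.0 + np.exp(-k * (t - t0)))

days = np.arange(0, 30)
true = logistic(days, A=12.0, k=0.35, t0=15.0)
area = true + np.random.default_rng(0).normal(0, 0.3, days.size)  # noisy "measurements"

params, _ = curve_fit(logistic, days, area, p0=(10.0, 0.1, 10.0))
A, k, t0 = params
print(f"fitted area A={A:.1f} cm^2, rate k={k:.2f}/day, inflection day t0={t0:.1f}")
# The maximum absolute growth rate of a logistic curve is k*A/4, reached at t0.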
    Vuga, R. and Aksoy, E. E. and Wörgötter, F. and Ude, A. (2015).
    Probabilistic semantic models for manipulation action representation and extraction. Robotics and Autonomous Systems, 40 - 56, 65. DOI: 10.1016/j.robot.2014.11.012.
    BibTeX:
    @article{vugaaksoywoergoetter2015,
      author = {Vuga, R. and Aksoy, E. E. and Wörgötter, F. and Ude, A.},
      title = {Probabilistic semantic models for manipulation action representation and extraction},
      pages = {40 - 56},
      journal = {Robotics and Autonomous Systems},
      year = {2015},
      volume= {65},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889014002851},
      doi = {10.1016/j.robot.2014.11.012},
      abstract = {In this paper we present a hierarchical framework for representation of manipulation actions and its applicability to the problem of top down action extraction from observation. The framework consists of novel probabilistic semantic models, which encode contact relations as probability distributions over the action phase. The models are action descriptive and can be used to provide probabilistic similarity scores for newly observed action sequences. The lower level of the representation consists of parametric hidden Markov models, which encode trajectory information.}}
    Abstract: In this paper we present a hierarchical framework for representation of manipulation actions and its applicability to the problem of top down action extraction from observation. The framework consists of novel probabilistic semantic models, which encode contact relations as probability distributions over the action phase. The models are action descriptive and can be used to provide probabilistic similarity scores for newly observed action sequences. The lower level of the representation consists of parametric hidden Markov models, which encode trajectory information.
    Review:
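    A much-simplified picture of "contact relations as probability distributions over the action phase" is sketched below (my own toy discretisation, not the paper's probabilistic semantic models or its HMM layer):

import numpy as np

N_BINS = 10             # normalised action phase discretised into bins
RELATIONS = ["N", "T"]  # not-touching / touching

def phase_model(demos):
    """P(relation | phase bin) with Laplace smoothing, from training sequences."""
    counts = np.ones((N_BINS, len(RELATIONS)))                   # Laplace prior
    for seq in demos:
        bins = np.linspace(0, N_BINS, len(seq), endpoint=False).astype(int)
        for b, r in zip(bins, seq):
            counts[b, RELATIONS.index(r)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def score(model, seq):
    """Mean log-likelihood of an observed relation sequence under the model."""
    bins = np.linspace(0, N_BINS, len(seq), endpoint=False).astype(int)
    return np.mean([np.log(model[b, RELATIONS.index(r)]) for b, r in zip(bins, seq)])

# Training demos of one manipulation (hand-object contact switches on mid-action).
demos = [list("NNNNTTTTTN"), list("NNNTTTTTTN"), list("NNNNNTTTTN")]
model = phase_model(demos)
print("match   :", round(score(model, list("NNNNTTTTTN")), 2))
print("mismatch:", round(score(model, list("TTTTTNNNNN")), 2))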
    Schoeler, M. and Wörgötter, F. and Aein, M. and Kulvicius, T. (2014).
    Automated generation of training sets for object recognition in robotic applications. 23rd International Conference on Robotics in Alpe-Adria-Danube Region (RAAD), 1-7. DOI: 10.1109/RAAD.2014.7002247.
    BibTeX:
    @inproceedings{schoelerwoergoetteraein2014,
      author = {Schoeler, M. and Wörgötter, F. and Aein, M. and Kulvicius, T.},
      title = {Automated generation of training sets for object recognition in robotic applications},
      pages = {1-7},
      booktitle = {23rd International Conference on Robotics in Alpe-Adria-Danube Region (RAAD)},
      year = {2014},
      month = {Sept},
      doi = {10.1109/RAAD.2014.7002247},
      abstract = {Object recognition plays an important role in robotics, since objects/tools first have to be identified in the scene before they can be manipulated/used. The performance of object recognition largely depends on the training dataset. Usually such training sets are gathered manually by a human operator, a tedious procedure, which ultimately limits the size of the dataset. One reason for manual selection of samples is that results returned by search engines often contain irrelevant images, mainly due to the problem of homographs (words spelled the same but with different meanings). In this paper we present an automated and unsupervised method, coined Trainingset Cleaning by Translation ( TCT ), for generation of training sets which are able to deal with the problem of homographs. For disambiguation, it uses the context provided by a command like "tighten the nut" together with a combination of public image searches, text searches and translation services. We compare our approach against plain Google image search qualitatively as well as in a classification task and demonstrate that our method indeed leads to a task-relevant training set, which results in an improvement of 24.1% in object recognition for 12 ambiguous classes. In addition, we present an application of our method to a real robot scenario.}}
    Abstract: Object recognition plays an important role in robotics, since objects/tools first have to be identified in the scene before they can be manipulated/used. The performance of object recognition largely depends on the training dataset. Usually such training sets are gathered manually by a human operator, a tedious procedure, which ultimately limits the size of the dataset. One reason for manual selection of samples is that results returned by search engines often contain irrelevant images, mainly due to the problem of homographs (words spelled the same but with different meanings). In this paper we present an automated and unsupervised method, coined Trainingset Cleaning by Translation ( TCT ), for generation of training sets which are able to deal with the problem of homographs. For disambiguation, it uses the context provided by a command like "tighten the nut" together with a combination of public image searches, text searches and translation services. We compare our approach against plain Google image search qualitatively as well as in a classification task and demonstrate that our method indeed leads to a task-relevant training set, which results in an improvement of 24.1% in object recognition for 12 ambiguous classes. In addition, we present an application of our method to a real robot scenario.
    Review:
    Papon, J. and Schoeler, M. and Wörgötter, F. (2015).
    Spatially Stratified Correspondence Sampling for Real-Time Point Cloud Tracking. IEEE Winter Conference on Applications of Computer Vision (WACV), 124-131. DOI: 10.1109/WACV.2015.24.
    BibTeX:
    @inproceedings{paponschoelerwoergoetter2015,
      author = {Papon, J. and Schoeler, M. and Wörgötter, F.},
      title = {Spatially Stratified Correspondence Sampling for Real-Time Point Cloud Tracking},
      pages = {124-131},
      booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
      year = {2015},
      month = {Jan},
      doi = {10.1109/WACV.2015.24},
      abstract = {In this paper we propose a novel spatially stratified sampling technique for evaluating the likelihood function in particle filters. In particular, we show that in the case where the measurement function uses spatial correspondence, we can greatly reduce computational cost by exploiting spatial structure to avoid redundant computations. We present results which quantitatively show that the technique permits equivalent, and in some cases, greater accuracy, as a reference point cloud particle filter at significantly faster run-times. We also compare to a GPU implementation, and show that we can exceed their performance on the CPU. In addition, we present results on a multi-target tracking appli- cation, demonstrating that the increases in efficiency permit online 6DoF multi-target tracking on standard hardware.}}
    Abstract: In this paper we propose a novel spatially stratified sampling technique for evaluating the likelihood function in particle filters. In particular, we show that in the case where the measurement function uses spatial correspondence, we can greatly reduce computational cost by exploiting spatial structure to avoid redundant computations. We present results which quantitatively show that the technique permits equivalent, and in some cases, greater accuracy, as a reference point cloud particle filter at significantly faster run-times. We also compare to a GPU implementation, and show that we can exceed their performance on the CPU. In addition, we present results on a multi-target tracking appli- cation, demonstrating that the increases in efficiency permit online 6DoF multi-target tracking on standard hardware.
    Review:
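    For orientation, the sketch below shows a plain point-cloud particle-filter likelihood step with random correspondence subsampling; it is a generic baseline, not the spatially stratified sampling proposed in the paper:

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
model = rng.uniform(-0.05, 0.05, size=(500, 3))            # object model points (m)
true_pos = np.array([0.40, 0.10, 0.00])
scene = model + true_pos + rng.normal(0, 0.002, (500, 3))  # observed cloud
tree = cKDTree(scene)

def weights(particles, n_samples=50, sigma=0.005):
    """Likelihood weights from a random subset of model-to-scene correspondences."""
    idx = rng.choice(len(model), n_samples, replace=False)
    w = np.empty(len(particles))
    for i, t in enumerate(particles):
        d, _ = tree.query(model[idx] + t)                  # nearest scene point
        w[i] = np.exp(-0.5 * np.mean(d**2) / sigma**2)
    return w / w.sum()

particles = true_pos + rng.normal(0, 0.02, (200, 3))       # translation hypotheses
w = weights(particles)
estimate = (w[:, None] * particles).sum(axis=0)            # weighted mean pose
print("estimated position:", np.round(estimate, 3))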
    Schoeler, M. and Wörgötter, F. and Papon, J. and Kulvicius, T. (2015).
    Unsupervised generation of context-relevant training-sets for visual object recognition employing multilinguality. IEEE Winter Conference on Applications of Computer Vision (WACV), 805-812. DOI: 10.1109/WACV.2015.112.
    BibTeX:
    @inproceedings{schoelerwoergoetterpapon2015,
      author = {Schoeler, M. and Wörgötter, F. and Papon, J. and Kulvicius, T.},
      title = {Unsupervised generation of context-relevant training-sets for visual object recognition employing multilinguality},
      pages = {805-812},
      booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
      year = {2015},
      month = {Jan},
      doi = {10.1109/WACV.2015.112},
      abstract = {Image based object classification requires clean training data sets. Gathering such sets is usually done manually by humans, which is time-consuming and laborious. On the other hand, directly using images from search engines creates very noisy data due to ambiguous noun-focused indexing. However, in daily speech nouns and verbs are always coupled. We use this for the automatic generation of clean data sets by the here-presented TRANSCLEAN algorithm, which through the use of multiple languages also solves the problem of polysemes (a single spelling with multiple meanings). Thus, we use the implicit knowledge contained in verbs, e.g. in an imperative such as}}
    Abstract: Image based object classification requires clean training data sets. Gathering such sets is usually done manually by humans, which is time-consuming and laborious. On the other hand, directly using images from search engines creates very noisy data due to ambiguous noun-focused indexing. However, in daily speech nouns and verbs are always coupled. We use this for the automatic generation of clean data sets by the here-presented TRANSCLEAN algorithm, which through the use of multiple languages also solves the problem of polysemes (a single spelling with multiple meanings). Thus, we use the implicit knowledge contained in verbs, e.g. in an imperative such as
    Review:
    Manoonpong, P. and Goldschmidt, D. and Wörgötter, F. and Kovalev, A. and Heepe, L. and Gorb, S. (2013).
    Using a Biological Material to Improve Locomotion of Hexapod Robots. Biomimetic and Biohybrid Systems, 402-404, 8064. DOI: 10.1007/978-3-642-39802-5_48.
    BibTeX:
    @incollection{manoonponggoldschmidtwoergoetter201,
      author = {Manoonpong, P. and Goldschmidt, D. and Wörgötter, F. and Kovalev, A. and Heepe, L. and Gorb, S.},
      title = {Using a Biological Material to Improve Locomotion of Hexapod Robots},
      pages = {402-404},
      booktitle = {Biomimetic and Biohybrid Systems},
      year = {2013},
      volume= {8064},
      editor = {Lepora, Nathan F. and Mura, Anna and Krapp, Holger G. and Verschure, Paul F. M. J. and Prescott, Tony J.},
      language = {English},
      publisher = {Springer Berlin Heidelberg},
      series = {Lecture Notes in Computer Science},
      url = {http://dx.doi.org/10.1007/978-3-642-39802-5_48},
      doi = {10.1007/978-3-642-39802-5_48},
      abstract = {Animals can move in not only elegant but also energy efficient ways. Their skin is one of the key components for this achievement. It provides a proper friction for forward motion and can protect them from slipping on a surface during locomotion. Inspired by this, we applied real shark skin to the foot soles of our hexapod robot AMOS. The material is formed to cover each foot of AMOS. Due to shark skin texture which has asymmetric profile inducing frictional anisotropy, this feature allows AMOS to grip specific surfaces and effectively locomote without slipping. Using real-time walking experiments, this study shows that implementing the biological material on the robot can reduce energy consumption while walking up a steep slope covered by carpets or other felt-like or rough substrates.}}
    Abstract: Animals can move in not only elegant but also energy efficient ways. Their skin is one of the key components for this achievement. It provides a proper friction for forward motion and can protect them from slipping on a surface during locomotion. Inspired by this, we applied real shark skin to the foot soles of our hexapod robot AMOS. The material is formed to cover each foot of AMOS. Due to shark skin texture which has asymmetric profile inducing frictional anisotropy, this feature allows AMOS to grip specific surfaces and effectively locomote without slipping. Using real-time walking experiments, this study shows that implementing the biological material on the robot can reduce energy consumption while walking up a steep slope covered by carpets or other felt-like or rough substrates.
    Review:
    Fauth, M. and Wörgötter, F. and Tetzlaff, C. (2015).
    The Formation of Multi-synaptic Connections by the Interaction of Synaptic and Structural Plasticity and Their Functional Consequences. PLoS Comput Biol, e1004031, 11, 1. DOI: 10.1371/journal.pcbi.1004031.
    BibTeX:
    @article{fauthwoergoettertetzlaff2015,
      author = {Fauth, M. and Wörgötter, F. and Tetzlaff, C.},
      title = {The Formation of Multi-synaptic Connections by the Interaction of Synaptic and Structural Plasticity and Their Functional Consequences},
      pages = {e1004031},
      journal = {PLoS Comput Biol},
      year = {2015},
      volume= {11},
      number = {1},
      month = {01},
      publisher = {Public Library of Science},
      url = {http://dx.doi.org/10.1371%2Fjournal.pcbi.1004031},
      doi = {10.1371/journal.pcbi.1004031},
      abstract = {Author Summary: The connectivity between neurons is modified by different mechanisms. On a time scale of minutes to hours one finds synaptic plasticity, whereas mechanisms for structural changes at axons or dendrites may take days. One main factor determining structural changes is the weight of a connection, which, in turn, is adapted by synaptic plasticity. Both mechanisms, synaptic and structural plasticity, are influenced and determined by the activity pattern in the network. Hence, it is important to understand how activity and the different plasticity mechanisms influence each other. Especially how activity influences rewiring in adult networks is still an open question. We present a model, which captures these complex interactions by abstracting structural plasticity with weight-dependent probabilities. This allows for calculating the distribution of the number of synapses between two neurons analytically. We report that biologically realistic connection patterns for different cortical layers generically arise with synaptic plasticity rules in which the synaptic weights grow with postsynaptic activity. The connectivity patterns also lead to different activity levels resembling those found in the different cortical layers. Interestingly such a system exhibits a hysteresis by which connections remain stable longer than expected, which may add to the stability of information storage in the network.}}
    Abstract: Author Summary: The connectivity between neurons is modified by different mechanisms. On a time scale of minutes to hours one finds synaptic plasticity, whereas mechanisms for structural changes at axons or dendrites may take days. One main factor determining structural changes is the weight of a connection, which, in turn, is adapted by synaptic plasticity. Both mechanisms, synaptic and structural plasticity, are influenced and determined by the activity pattern in the network. Hence, it is important to understand how activity and the different plasticity mechanisms influence each other. Especially how activity influences rewiring in adult networks is still an open question. We present a model, which captures these complex interactions by abstracting structural plasticity with weight-dependent probabilities. This allows for calculating the distribution of the number of synapses between two neurons analytically. We report that biologically realistic connection patterns for different cortical layers generically arise with synaptic plasticity rules in which the synaptic weights grow with postsynaptic activity. The connectivity patterns also lead to different activity levels resembling those found in the different cortical layers. Interestingly such a system exhibits a hysteresis by which connections remain stable longer than expected, which may add to the stability of information storage in the network.
    Review:
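    A toy illustration of "structural plasticity abstracted with weight-dependent probabilities" is given below; the creation rate, weight dynamics, and deletion probability are assumptions of this sketch, not the paper's equations:

import numpy as np

rng = np.random.default_rng(0)
p_create = 0.05                 # chance per step of forming a new synapse

def p_delete(w):
    """Strong synapses are deleted less often (illustrative choice)."""
    return 0.05 / (1.0 + 5.0 * w)

weights = []                    # weights of all synapses between one neuron pair
counts = np.zeros(20, dtype=int)
for step in range(50_000):
    if rng.random() < p_create:
        weights.append(rng.uniform(0.0, 0.2))              # new weak synapse
    # crude synaptic plasticity: weights drift upward with shared activity
    weights = [min(max(w + rng.normal(0.01, 0.02), 0.0), 1.0) for w in weights]
    # structural plasticity: weight-dependent pruning
    weights = [w for w in weights if rng.random() > p_delete(w)]
    if len(weights) < len(counts):
        counts[len(weights)] += 1

dist = counts / counts.sum()    # empirical distribution of the synapse number
print("P(number of synapses):", np.round(dist, 3))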
    Dasgupta, S. and Wörgötter, F. and Manoonpong, P. (2014).
    Goal-directed Learning with Reward Modulated Interaction between Striatal and Cerebellar Systems. Bernstein Conference 2014, 1 -- 1. DOI: 10.12751/nncn.bc2014.0177.
    BibTeX:
    @inproceedings{dasguptawoergoettermanoonpong2014a,
      author = {Dasgupta, S. and Wörgötter, F. and Manoonpong, P.},
      title = {Goal-directed Learning with Reward Modulated Interaction between Striatal and Cerebellar Systems},
      pages = {1 -- 1},
      booktitle = {Bernstein Conference 2014},
      year = {2014},
      month = {Sept},
      doi = {10.12751/nncn.bc2014.0177},
      abstract = {Goal-directed decision making in biological systems is broadly based on associations between conditional and unconditional stimuli. This can be further classified as classical conditioning (correlation based learning) and operant conditioning (reward-based learning). A number of computational and experimental studies have well established the role of the basal ganglia (striatal system) towards reward-based learning, whereas the cerebellum evidently plays an important role in developing specific conditioned responses. Although they are viewed as distinct learning systems [1], recent animal experiments point towards their complementary role in behavioral learning, and also show the existence of substantial two-way communication between the two structures [2]. Based on this notion of co-operative learning, in this work we hypothesize that the basal ganglia and cerebellar learning systems work in parallel and compete with each other (Figure 1). We envision such an interaction being driven by a simple reward modulated heterosynaptic plasticity (RMHP) rule [3], in order to guide the overall goal-directed behavior. Using a recurrent neural network actor-critic model of the basal ganglia and feed-forward correlation learning model of the cerebellum (input correlation learning-ICO) [4], we demonstrate that the RMHP rule can effectively combine the outcomes of the two learning systems. This is tested using simulated environments of increasing complexity with a four-wheeled animat in a dynamic foraging task. Although they are modeled within a highly simplified level of biological abstraction, we clearly demonstrate that such a combined learning mechanism leads to much more stable and faster learning of goal-directed behaviors in comparison to the individual systems.}}
    Abstract: Goal-directed decision making in biological systems is broadly based on associations between conditional and unconditional stimuli. This can be further classified as classical conditioning (correlation based learning) and operant conditioning (reward-based learning). A number of computational and experimental studies have well established the role of the basal ganglia (striatal system) towards reward-based learning, whereas the cerebellum evidently plays an important role in developing specific conditioned responses. Although they are viewed as distinct learning systems [1], recent animal experiments point towards their complementary role in behavioral learning, and also show the existence of substantial two-way communication between the two structures [2]. Based on this notion of co-operative learning, in this work we hypothesize that the basal ganglia and cerebellar learning systems work in parallel and compete with each other (Figure 1). We envision such an interaction being driven by a simple reward modulated heterosynaptic plasticity (RMHP) rule [3], in order to guide the overall goal-directed behavior. Using a recurrent neural network actor-critic model of the basal ganglia and feed-forward correlation learning model of the cerebellum (input correlation learning-ICO) [4], we demonstrate that the RMHP rule can effectively combine the outcomes of the two learning systems. This is tested using simulated environments of increasing complexity with a four-wheeled animat in a dynamic foraging task. Although they are modeled within a highly simplified level of biological abstraction, we clearly demonstrate that such a combined learning mechanism leads to much more stable and faster learning of goal-directed behaviors in comparison to the individual systems.
    Review:
    Sutterlütti, R. and Stein, S. C. and Tamosiunaite, M. and Wörgötter, F. (2014).
    Object names correspond to convex entities. Cognitive Processing, 69 -- 71, 15, 1. DOI: 10.1007/s10339-013-0597-6.
    BibTeX:
    @article{sutterluettisteintamosiunaite2014,
      author = {Sutterlütti, R. and Stein, S. C. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Object names correspond to convex entities},
      pages = {69 -- 71},
      journal = {Cognitive Processing},
      year = {2014},
      volume= {15},
      number = {1},
      language = {English},
      organization = {Springer Berlin Heidelberg},
      publisher = {Springer Berlin Heidelberg},
      url = {http://download.springer.com/static/pdf/411/art%253A10.1007%252Fs10339-014-0632-2.pdf?auth661427114510_ae25a34c4f91888c7fea13ea0fd0da15&ext.pdf},
      doi = {10.1007/s10339-013-0597-6}}
    Abstract:
    Review:
    Aksoy, E. E. and Schoeler, M. and Wörgötter, F. (2014).
    Testing Piaget's ideas on robots: Assimilation and accommodation using the semantics of actions. IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-Epirob), 107-108. DOI: 10.1109/DEVLRN.2014.6982962.
    BibTeX:
    @inproceedings{aksoyschoelerwoergoetter2014,
      author = {Aksoy, E. E. and Schoeler, M. and Wörgötter, F.},
      title = {Testing Piaget's ideas on robots: Assimilation and accommodation using the semantics of actions},
      pages = {107-108},
      booktitle = {IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-Epirob)},
      year = {2014},
      month = {Oct},
      doi = {10.1109/DEVLRN.2014.6982962},
      abstract = {The proposed framework addresses the problem of implementing a high level}}
    Abstract: The proposed framework addresses the problem of implementing a high level
    Review:
    Strub, C. and Wörgötter, F. and Ritter, H. and Sandamirskaya, Y. (2014).
    Correcting pose estimates during tactile exploration of object shape: a neuro-robotic study. Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-Epirob), 26-33. DOI: 10.1109/DEVLRN.2014.6982950.
    BibTeX:
    @inproceedings{strubwoergoetterritter2014a,
      author = {Strub, C. and Wörgötter, F. and Ritter, H. and Sandamirskaya, Y.},
      title = {Correcting pose estimates during tactile exploration of object shape: a neuro-robotic study},
      pages = {26-33},
      booktitle = {Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-Epirob)},
      year = {2014},
      month = {Oct},
      doi = {10.1109/DEVLRN.2014.6982950},
      abstract = {Robots are expected to operate autonomously in unconstrained, real-world environments. Therefore, they cannot rely on access to models of all objects in their environment, in order to parameterize object-directed actions. The robot must estimate the shape of objects in such environments, based on their perception. How to estimate an object's shape based on distal sensors, such as color- or depth cameras, has been extensively studied. Using haptic sensors for this purpose, however, has not been considered in a comparable depth. Humans, to the contrary, are able to improve object manipulation capabilities by using tactile stimuli, acquired from an active haptic exploration of an object. In this paper we introduce a neural-dynamic model which allows to build an object shape representation based on haptic exploration. Acquiring this representation during object manipulation requires the robot to autonomously detect and correct errors in the localization of tactile features with respect to the object. We have implemented an architecture for haptic exploration of an object's shape on a physical robotic hand in a simple exemplary scenario, in which the geometrical models of two different n-gons are learned from tactile data while rotating them with the robotic hand.}}
    Abstract: Robots are expected to operate autonomously in unconstrained, real-world environments. Therefore, they cannot rely on access to models of all objects in their environment, in order to parameterize object-directed actions. The robot must estimate the shape of objects in such environments, based on their perception. How to estimate an object's shape based on distal sensors, such as color- or depth cameras, has been extensively studied. Using haptic sensors for this purpose, however, has not been considered in a comparable depth. Humans, to the contrary, are able to improve object manipulation capabilities by using tactile stimuli, acquired from an active haptic exploration of an object. In this paper we introduce a neural-dynamic model which allows to build an object shape representation based on haptic exploration. Acquiring this representation during object manipulation requires the robot to autonomously detect and correct errors in the localization of tactile features with respect to the object. We have implemented an architecture for haptic exploration of an object's shape on a physical robotic hand in a simple exemplary scenario, in which the geometrical models of two different n-gons are learned from tactile data while rotating them with the robotic hand.
    Review:
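    The neural-dynamic substrate referred to above can be illustrated with a generic one-dimensional Amari-type dynamic neural field (textbook form and parameters, not the authors' architecture):

import numpy as np

N, dt, tau, h = 100, 1.0, 20.0, -5.0   # field size, time step, time constant, resting level
x = np.arange(N)
# local-excitation / global-inhibition interaction kernel on a circular field
dx = np.minimum(np.abs(x[:, None] - x[None, :]), N - np.abs(x[:, None] - x[None, :]))
kernel = 6.0 * np.exp(-dx**2 / (2 * 4.0**2)) - 1.0

def f(u, beta=4.0):
    """Sigmoidal output nonlinearity of the field."""
    return 1.0 / (1.0 + np.exp(-beta * u))

u = np.full(N, h)                                        # field activation
stimulus = 6.5 * np.exp(-(x - 30)**2 / (2 * 3.0**2))     # localised tactile input

for _ in range(300):                                     # Euler integration of the field
    u = u + dt / tau * (-u + h + stimulus + kernel @ f(u))

print("self-stabilised peak at position", int(np.argmax(u)))   # expected near 30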
    Savarimuthu, R. and Papon, J. and Buch, A. G. and Aksoy, E. and Mustafa, W. and Wörgötter, F. and Krüger, N. (2015).
    An Online Vision System for Understanding Complex Assembly Tasks. International Conference on Computer Vision Theory and Applications, 1 - 8. DOI: 10.5220/0005260804540461.
    BibTeX:
    @inproceedings{savarimuthupaponbuch2015,
      author = {Savarimuthu, R. and Papon, J. and Buch, A. G. and Aksoy, E. and Mustafa, W. and Wörgötter, F. and Krüger, N.},
      title = {An Online Vision System for Understanding Complex Assembly Tasks},
      pages = {1 - 8},
      booktitle = {International Conference on Computer Vision Theory and Applications},
      year = {2015},
      location = {Berlin (Germany)},
      month = {March 11 - 14},
      doi = {10.5220/0005260804540461},
      abstract = {We present an integrated system for the recognition, pose estimation and simultaneous tracking of multiple objects in 3D scenes. Our target application is a complete semantic representation of dynamic scenes which requires three essential steps recognition of objects, tracking their movements, and identification of interactions between them. We address this challenge with a complete system which uses object recognition and pose estimation to initiate object models and trajectories, a dynamic sequential octree structure to allow for full 6DOF tracking through occlusions, and a graph-based semantic representation to distil interactions. We evaluate the proposed method on real scenarios by comparing tracked outputs to ground truth trajectories and we compare the results to Iterative Closest Point and Particle Filter based trackers.}}
    Abstract: We present an integrated system for the recognition, pose estimation and simultaneous tracking of multiple objects in 3D scenes. Our target application is a complete semantic representation of dynamic scenes which requires three essential steps recognition of objects, tracking their movements, and identification of interactions between them. We address this challenge with a complete system which uses object recognition and pose estimation to initiate object models and trajectories, a dynamic sequential octree structure to allow for full 6DOF tracking through occlusions, and a graph-based semantic representation to distil interactions. We evaluate the proposed method on real scenarios by comparing tracked outputs to ground truth trajectories and we compare the results to Iterative Closest Point and Particle Filter based trackers.
    Review:
    Strub, C. and Wörgötter, F. and Ritter, H. and Sandamirskaya, Y. (2014).
    Using haptics to extract object shape from rotational manipulations. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2179-2186. DOI: 10.1109/IROS.2014.6942856.
    BibTeX:
    @inproceedings{strubwoergoetterritter2014,
      author = {Strub, C. and Wörgötter, F. and Ritter, H. and Sandamirskaya, Y.},
      title = {Using haptics to extract object shape from rotational manipulations},
      pages = {2179-2186},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2014},
      month = {Sept},
      doi = {10.1109/IROS.2014.6942856},
      abstract = {Increasingly widespread available haptic sensors mounted on articulated hands offer new sensory channels that can complement shape extraction from vision to enable a more robust handling of objects in cases when vision is restricted or even unavailable. However, to estimate object shape from haptic interaction data is a difficult challenge due to the complexity of the contact interaction between the movable object and sensor surfaces, leading to a coupled estimation problem of shape and object pose. While for vision efficient solutions to the underlying SLAM problem are known, the available information is much sparser in the tactile case, posing great difficulties for a straightforward adoption of standard SLAM algorithms. In the present paper, we thus explore whether a biologically inspired model based on dynamic neural fields can offer a route towards a practical algorithm for tactile SLAM. Our study is focused on a restricted scenario where a two-fingered robot hand manipulates an n-gon with a fixed rotational axis. We demonstrate that our model can accumulate shape information from reasonably short interaction sequences and autonomously build a representation despite significant ambiguity of the tactile data due to the rotational periodicity of the object. We conclude that the presented framework may be a suitable basis to solve the tactile SLAM problem also in more general settings which will be the focus of subsequent work.}}
    Abstract: Increasingly widespread available haptic sensors mounted on articulated hands offer new sensory channels that can complement shape extraction from vision to enable a more robust handling of objects in cases when vision is restricted or even unavailable. However, to estimate object shape from haptic interaction data is a difficult challenge due to the complexity of the contact interaction between the movable object and sensor surfaces, leading to a coupled estimation problem of shape and object pose. While for vision efficient solutions to the underlying SLAM problem are known, the available information is much sparser in the tactile case, posing great difficulties for a straightforward adoption of standard SLAM algorithms. In the present paper, we thus explore whether a biologically inspired model based on dynamic neural fields can offer a route towards a practical algorithm for tactile SLAM. Our study is focused on a restricted scenario where a two-fingered robot hand manipulates an n-gon with a fixed rotational axis. We demonstrate that our model can accumulate shape information from reasonably short interaction sequences and autonomously build a representation despite significant ambiguity of the tactile data due to the rotational periodicity of the object. We conclude that the presented framework may be a suitable basis to solve the tactile SLAM problem also in more general settings which will be the focus of subsequent work.
    Review:
    Chatterjee, S. and Nachstedt, T. and Wörgötter, F. and Tamosiunaite, M. and Manoonpong, P. and Enomoto, Y. and Ariizumi, R. and Matsuno, F. (2014).
    Reinforcement learning approach to generate goal-directed locomotion of a snake-like robot with screw-drive units. 23rd International Conference on Robotics in Alpe-Adria-Danube Region (RAAD), 1-7. DOI: 10.1109/RAAD.2014.7002234.
    BibTeX:
    @inproceedings{chatterjeenachstedtwoergoetter2014,
      author = {Chatterjee, S. and Nachstedt, T. and Wörgötter, F. and Tamosiunaite, M. and Manoonpong, P. and Enomoto, Y. and Ariizumi, R. and Matsuno, F.},
      title = {Reinforcement learning approach to generate goal-directed locomotion of a snake-like robot with screw-drive units},
      pages = {1-7},
      booktitle = {23rd International Conference on Robotics in Alpe-Adria-Danube Region (RAAD)},
      year = {2014},
      month = {Sept},
      doi = {10.1109/RAAD.2014.7002234},
      abstract = {In this paper we apply a policy improvement algorithm called Policy Improvement with Path Integrals (PI2) to generate goal-directed locomotion of a complex snake-like robot with screw-drive units. PI2 is numerically simple and has an ability to deal with high dimensional systems. Here, this approach is used to find proper locomotion control parameters, like joint angles and screw-drive velocities, of the robot. The learning process was achieved using a simulated robot and the learned parameters were successfully transferred to the real one. As a result the robot can locomote toward a given goal.}}
    Abstract: In this paper we apply a policy improvement algorithm called Policy Improvement with Path Integrals (PI2) to generate goal-directed locomotion of a complex snake-like robot with screw-drive units. PI2 is numerically simple and has an ability to deal with high dimensional systems. Here, this approach is used to find proper locomotion control parameters, like joint angles and screw-drive velocities, of the robot. The learning process was achieved using a simulated robot and the learned parameters were successfully transferred to the real one. As a result the robot can locomote toward a given goal.
    Review:
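    The PI2 update itself is compact; the sketch below shows a stripped-down episodic variant on a toy cost function (the paper applies the full time-indexed algorithm to the robot's joint angles and screw-drive velocities):

import numpy as np

rng = np.random.default_rng(0)
goal = np.array([1.0, -0.5, 0.8])              # stand-in for "reach this configuration"

def rollout_cost(theta):
    """Toy cost: distance of the behaviour produced by theta to the goal."""
    return np.sum((theta - goal) ** 2)

theta = np.zeros(3)                            # control parameters to be learned
K, sigma, lam = 20, 0.3, 0.05                  # rollouts per update, noise, temperature

for update in range(50):
    eps = sigma * rng.standard_normal((K, theta.size))        # exploration noise
    costs = np.array([rollout_cost(theta + e) for e in eps])
    s = (costs - costs.min()) / (costs.max() - costs.min() + 1e-12)
    P = np.exp(-s / lam)
    P /= P.sum()                               # softmax over rollouts (low cost = high weight)
    theta += P @ eps                           # probability-weighted noise

print("learned parameters:", np.round(theta, 2), "cost:", round(rollout_cost(theta), 4))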
    Aksoy, E. E. and Aein, M. J. and Tamosiunaite, M. and Wörgötter, F. (2015).
    Semantic parsing of human manipulation activities using on-line learned models for robot imitation. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2875-2882. DOI: 10.1109/IROS.2015.7353773.
    BibTeX:
    @inproceedings{aksoyaeintamosiunaite2015,
      author = {Aksoy, E. E. and Aein, M. J. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Semantic parsing of human manipulation activities using on-line learned models for robot imitation},
      pages = {2875-2882},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2015},
      location = {Hamburg, Germany},
      month = {Sept},
      doi = {10.1109/IROS.2015.7353773},
      abstract = {Human manipulation activity recognition is an important yet challenging task in robot imitation. In this paper, we introduce, for the first time, a novel method for semantic decomposition and recognition of continuous human manipulation activities by using on-line learned individual manipulation models. Solely based on the spatiotemporal interactions between objects and hands in the scene, the proposed framework can parse not only sequential and concurrent (overlapping) manipulation streams but also basic primitive elements of each detected manipulation. Without requiring any prior object knowledge, the framework can furthermore extract object-like scene entities that are performing the same role in the detected manipulations. The framework was evaluated on our new egocentric activity dataset which contains 120 different samples of 8 single atomic manipulations (e.g. Cutting and Stirring) and 20 long and complex activity demonstrations such as}}
    Abstract: Human manipulation activity recognition is an important yet challenging task in robot imitation. In this paper, we introduce, for the first time, a novel method for semantic decomposition and recognition of continuous human manipulation activities by using on-line learned individual manipulation models. Solely based on the spatiotemporal interactions between objects and hands in the scene, the proposed framework can parse not only sequential and concurrent (overlapping) manipulation streams but also basic primitive elements of each detected manipulation. Without requiring any prior object knowledge, the framework can furthermore extract object-like scene entities that are performing the same role in the detected manipulations. The framework was evaluated on our new egocentric activity dataset which contains 120 different samples of 8 single atomic manipulations (e.g. Cutting and Stirring) and 20 long and complex activity demonstrations such as
    Review:
    Aksoy, E. E. and Wörgötter, F. (2014).
    Piaget ve Robotlar : Özümseme ve Uyumsama. Türkiye Otonom Robotlar Konferans (TORK), 1 -- 2.
    BibTeX:
    @inproceedings{aksoyeeandwoergoetter2014,
      author = {Aksoy, E. E. and Wörgötter, F.},
      title = {Piaget ve Robotlar : Özümseme ve Uyumsama},
      pages = {1 -- 2},
      booktitle = {Türkiye Otonom Robotlar Konferans (TORK)},
      year = {2014},
      location = {Ankara, Turkey},
      month = {November 6 - 7}}
    Abstract:
    Review:
    Schoeler, M. and Papon, J. and Wörgötter, F. (2015).
    Constrained planar cuts - Object partitioning for point clouds. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5207-5215. DOI: 10.1109/CVPR.2015.7299157.
    BibTeX:
    @inproceedings{schoelerpaponwoergoetter2015,
      author = {Schoeler, M. and Papon, J. and Wörgötter, F.},
      title = {Constrained planar cuts - Object partitioning for point clouds},
      pages = {5207-5215},
      booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      year = {2015},
      location = {Boston, MA, USA},
      month = {June},
      doi = {10.1109/CVPR.2015.7299157},
      abstract = {While humans can easily separate unknown objects into meaningful parts, recent segmentation methods can only achieve similar partitionings by training on human-annotated ground-truth data. Here we introduce a bottom-up method for segmenting 3D point clouds into functional parts which does not require supervision and achieves equally good results. Our method uses local concavities as an indicator for inter-part boundaries. We show that this criterion is efficient to compute and generalizes well across different object classes. The algorithm employs a novel locally constrained geometrical boundary model which proposes greedy cuts through a local concavity graph. Only planar cuts are considered and evaluated using a cost function, which rewards cuts orthogonal to concave edges. Additionally, a local clustering constraint is applied to ensure the partitioning only affects relevant locally concave regions. We evaluate our algorithm on recordings from an RGB-D camera as well as the Princeton Segmentation Benchmark, using a fixed set of parameters across all object classes. This stands in stark contrast to most reported results which require either knowing the number of parts or annotated ground-truth for learning. Our approach outperforms all existing bottom-up methods (reducing the gap to human performance by up to 50 %) and achieves scores similar to top-down data-driven approaches.}}
    Abstract: While humans can easily separate unknown objects into meaningful parts, recent segmentation methods can only achieve similar partitionings by training on human-annotated ground-truth data. Here we introduce a bottom-up method for segmenting 3D point clouds into functional parts which does not require supervision and achieves equally good results. Our method uses local concavities as an indicator for inter-part boundaries. We show that this criterion is efficient to compute and generalizes well across different object classes. The algorithm employs a novel locally constrained geometrical boundary model which proposes greedy cuts through a local concavity graph. Only planar cuts are considered and evaluated using a cost function, which rewards cuts orthogonal to concave edges. Additionally, a local clustering constraint is applied to ensure the partitioning only affects relevant locally concave regions. We evaluate our algorithm on recordings from an RGB-D camera as well as the Princeton Segmentation Benchmark, using a fixed set of parameters across all object classes. This stands in stark contrast to most reported results which require either knowing the number of parts or annotated ground-truth for learning. Our approach outperforms all existing bottom-up methods (reducing the gap to human performance by up to 50 %) and achieves scores similar to top-down data-driven approaches.
    Review:
    Goldschmidt, D. and Dasgupta, S. and Wörgötter, F. and Manoonpong, P. (2015).
    A Neural Path Integration Mechanism for Adaptive Vector Navigation in Autonomous Agents. International Joint Conference on Neural Networks (IJCNN), 1-8. DOI: 10.1109/IJCNN.2015.7280400.
    BibTeX:
    @inproceedings{goldschmidtdasguptawoergoetter2015,
      author = {Goldschmidt, D. and Dasgupta, S. and Wörgötter, F. and Manoonpong, P.},
      title = {A Neural Path Integration Mechanism for Adaptive Vector Navigation in Autonomous Agents},
      pages = {1-8},
      booktitle = {International Joint Conference on Neural Networks (IJCNN)},
      year = {2015},
      month = {July},
      doi = {10.1109/IJCNN.2015.7280400},
      abstract = {Animals show remarkable capabilities in navigating their habitat in a fully autonomous and energy-efficient way. In many species, these capabilities rely on a process called path integration, which enables them to estimate their current location and to find their way back home after long-distance journeys. Path integration is achieved by integrating compass and odometric cues. Here we introduce a neural path integration mechanism that interacts with a neural locomotion control to simulate homing behavior and path integration-related behaviors observed in animals. The mechanism is applied to a simulated six-legged artificial agent. Input signals from an allothetic compass and odometry are sustained through leaky neural integrator circuits, which are then used to compute the home vector by local excitation-global inhibition interactions. The home vector is computed and represented in circular arrays of neurons, where compass directions are population-coded and linear displacements are rate-coded. The mechanism allows for robust homing behavior in the presence of external sensory noise. The emergent behavior of the controlled agent does not only show a robust solution for the problem of autonomous agent navigation, but it also reproduces various aspects of animal navigation. Finally, we discuss how the proposed path integration mechanism may be used as a scaffold for spatial learning in terms of vector navigation.}}
    Abstract: Animals show remarkable capabilities in navigating their habitat in a fully autonomous and energy-efficient way. In many species, these capabilities rely on a process called path integration, which enables them to estimate their current location and to find their way back home after long-distance journeys. Path integration is achieved by integrating compass and odometric cues. Here we introduce a neural path integration mechanism that interacts with a neural locomotion control to simulate homing behavior and path integration-related behaviors observed in animals. The mechanism is applied to a simulated six-legged artificial agent. Input signals from an allothetic compass and odometry are sustained through leaky neural integrator circuits, which are then used to compute the home vector by local excitation-global inhibition interactions. The home vector is computed and represented in circular arrays of neurons, where compass directions are population-coded and linear displacements are rate-coded. The mechanism allows for robust homing behavior in the presence of external sensory noise. The emergent behavior of the controlled agent does not only show a robust solution for the problem of autonomous agent navigation, but it also reproduces various aspects of animal navigation. Finally, we discuss how the proposed path integration mechanism may be used as a scaffold for spatial learning in terms of vector navigation.
    Review:
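    A minimal (non-neural) path-integration sketch is given below: noisy compass and odometry signals are leakily integrated into a Cartesian home vector; the circuit details and parameters of the paper are not reproduced:

import numpy as np

rng = np.random.default_rng(3)
dt, leak = 0.1, 1e-4                   # time step, integrator leak per step
true_pos = np.zeros(2)
position_est = np.zeros(2)             # accumulated outbound displacement estimate

for step in range(2000):               # slowly turning outbound walk
    heading = 0.002 * step             # true heading
    speed = 0.3                        # true speed
    true_pos += speed * dt * np.array([np.cos(heading), np.sin(heading)])
    # noisy compass and odometry readings drive the (leaky) integrator
    h_sense = heading + rng.normal(0, 0.05)
    s_sense = speed + rng.normal(0, 0.02)
    step_vec = s_sense * dt * np.array([np.cos(h_sense), np.sin(h_sense)])
    position_est = (1 - leak) * position_est + step_vec

home_vector = -position_est            # points from the agent back to the nest
print("true outbound position:", np.round(true_pos, 2))
print("estimated home vector :", np.round(home_vector, 2))
print("homing error          :", round(float(np.linalg.norm(true_pos + home_vector)), 2))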
    Geng, T. and Porr, B. and Wörgötter, F. (2006).
    A Reflexive Neural Network for Dynamic Biped Walking Control. Neural Computation, 1156 - 1196, 18, 5. DOI: 10.1162/neco.2006.18.5.1156.
    BibTeX:
    @article{gengporrwoergoetter2006c,
      author = {Geng, T. and Porr, B. and Wörgötter, F.},
      title = {A Reflexive Neural Network for Dynamic Biped Walking Control},
      pages = {1156 - 1196},
      journal = {Neural Computation},
      year = {2006},
      volume= {18},
      number = {5},
      month = {05},
      publisher = {MIT Press},
      url = {http://www.mitpressjournals.org/doi/abs/10.1162/neco.2006.18.5.1156#.VZ90f_mBswE},
      doi = {10.1162/neco.2006.18.5.1156},
      abstract = {Biped walking remains a difficult problem, and robot models can greatly facilitate our understanding of the underlying biomechanical principles as well as their neuronal control. The goal of this study is to specifically demonstrate that stable biped walking can be achieved by combining the physical properties of the walking robot with a small, reflex-based neuronal network governed mainly by local sensor signals. Building on earlier work (Taga, 1995 Cruse, Kindermann, Schumm, Dean, & Schmitz, 1998), this study shows that human-like gaits emerge without specific position or trajectory control and that the walker is able to compensate small disturbances through its own dynamical properties. The reflexive controller used here has the following characteristics, which are different from earlier approaches: (1) Control is mainly local. Hence, it uses only two signals (anterior extreme angle and ground contact), which operate at the interjoint level. All other signals operate only at single joints. (2) Neither position control nor trajectory tracking control is used. Instead, the approximate nature of the local reflexes on each joint allows the robot mechanics itself (e.g., its passive dynamics) to contribute substantially to the overall gait trajectory computation. (3) The motor control scheme used in the local reflexes of our robot is more straightforward and has more biological plausibility than that of other robots, because the outputs of the motor neurons in our reflexive controller are directly driving the motors of the joints rather than working as references for position or velocity control. As a consequence, the neural controller and the robot mechanics are closely coupled as a neuromechanical system, and this study emphasizes that dynamically stable biped walking gaits emerge from the coupling between neural computation and physical computation. This is demonstrated by different walking experiments using a real robot as well as by a Poincare map analysis applied on a model of the robot in order to assess its stability.}}
    Abstract: Biped walking remains a difficult problem, and robot models can greatly facilitate our understanding of the underlying biomechanical principles as well as their neuronal control. The goal of this study is to specifically demonstrate that stable biped walking can be achieved by combining the physical properties of the walking robot with a small, reflex-based neuronal network governed mainly by local sensor signals. Building on earlier work (Taga, 1995 Cruse, Kindermann, Schumm, Dean, & Schmitz, 1998), this study shows that human-like gaits emerge without specific position or trajectory control and that the walker is able to compensate small disturbances through its own dynamical properties. The reflexive controller used here has the following characteristics, which are different from earlier approaches: (1) Control is mainly local. Hence, it uses only two signals (anterior extreme angle and ground contact), which operate at the interjoint level. All other signals operate only at single joints. (2) Neither position control nor trajectory tracking control is used. Instead, the approximate nature of the local reflexes on each joint allows the robot mechanics itself (e.g., its passive dynamics) to contribute substantially to the overall gait trajectory computation. (3) The motor control scheme used in the local reflexes of our robot is more straightforward and has more biological plausibility than that of other robots, because the outputs of the motor neurons in our reflexive controller are directly driving the motors of the joints rather than working as references for position or velocity control. As a consequence, the neural controller and the robot mechanics are closely coupled as a neuromechanical system, and this study emphasizes that dynamically stable biped walking gaits emerge from the coupling between neural computation and physical computation. This is demonstrated by different walking experiments using a real robot as well as by a Poincare map analysis applied on a model of the robot in order to assess its stability.
    Review:
    Quack, B. and Wörgötter, F. and Agostini, A. (2015).
    Simultaneously Learning at Different Levels of Abstraction. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 4600-4607. DOI: 10.1109/IROS.2015.7354032.
    BibTeX:
    @inproceedings{quackwoergoetteragostini2015,
      author = {Quack, B. and Wörgötter, F. and Agostini, A.},
      title = {Simultaneously Learning at Different Levels of Abstraction},
      pages = {4600-4607},
      booktitle = {IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
      year = {2015},
      month = {Sept},
      doi = {10.1109/IROS.2015.7354032},
      abstract = {Robotic applications in human environments are usually implemented using a cognitive architecture that integrates techniques of different levels of abstraction, ranging from artificial intelligence techniques for making decisions at a symbolic level to robotic techniques for grounding symbolic actions. In this work we address the problem of simultaneous learning at different levels of abstraction in such an architecture. This problem is important since human environments are highly variable, and many unexpected situations may arise during the execution of a task. The usual approach under this circumstance is to train each level individually to learn how to deal with the new situations. However, this approach is limited since it implies long task interruptions every time a new situation needs to be learned. We propose an architecture where learning takes place simultaneously at all the levels of abstraction. To achieve this, we devise a method that permits higher levels to guide the learning at the levels below for the correct execution of the task. The architecture is instantiated with a logic-based planner and an online planning operator learner, at the highest level, and with online reinforcement learning units that learn action policies for the grounding of the symbolic actions, at the lowest one. A human teacher is involved in the decision-making loop to facilitate learning. The framework is tested in a physically realistic simulation of the Sokoban game.}}
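    As a toy illustration of the lowest level of the architecture described above (online reinforcement-learning units that ground symbolic actions), the sketch below uses a plain tabular Q-learner on a made-up one-dimensional task. The states, actions, reward, and the class name GroundingUnit are invented for illustration and are not taken from the paper.

# One low-level grounding unit as a tabular Q-learner; the higher level would
# select which symbolic action (and hence which unit) to run and reward success.
import random
from collections import defaultdict

class GroundingUnit:
    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = defaultdict(float)     # Q-values, keyed by (state, action)
        self.actions = actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        td_error = r + self.gamma * best_next - self.q[(s, a)]
        self.q[(s, a)] += self.alpha * td_error


if __name__ == "__main__":
    # Tiny demo: learn to reach position 3 on a line, a stand-in for grounding
    # one symbolic action; the reward would come from the level above.
    unit = GroundingUnit(actions=["left", "right"])
    for _ in range(200):
        s = 0
        for _ in range(10):
            a = unit.act(s)
            s_next = max(0, min(3, s + (1 if a == "right" else -1)))
            r = 1.0 if s_next == 3 else 0.0
            unit.learn(s, a, r, s_next)
            s = s_next
    print(unit.act(0))  # typically "right" after learning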
    Agostini, A. and Aein, M. J. and Szedmak, S. and Aksoy, E. E. and Piater, J. and Wörgötter, F. (2015).
    Using Structural Bootstrapping for Object Substitution in Robotic Executions of Human-like Manipulation Tasks. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 6479-6486. DOI: 10.1109/IROS.2015.7354303.
    BibTeX:
    @inproceedings{agostiniaeinszedmak2015,
      author = {Agostini, A. and Aein, M. J. and Szedmak, S. and Aksoy, E. E. and Piater, J. and Wörgötter, F.},
      title = {Using Structural Bootstrapping for Object Substitution in Robotic Executions of Human-like Manipulation Tasks},
      pages = {6479-6486},
      booktitle = {IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
      year = {2015},
      location = {Hamburg, Germany},
      month = {Sept},
      doi = {10.1109/IROS.2015.7354303},
      abstract = {In this work we address the problem of finding replacements of missing objects that are needed for the execution of human-like manipulation tasks. This is a common problem that humans solve easily, given their natural ability to find object substitutions: using a knife as a screwdriver or a book as a cutting board. On the other hand, in robotic applications, objects required in the task should be included in advance in the problem definition. If any of these objects is missing from the scenario, the conventional approach is to manually redefine the problem according to the available objects in the scene. In this work we propose an automatic way of finding object substitutions for the execution of manipulation tasks. The approach uses a logic-based planner to generate a plan from a prototypical problem definition and searches for replacements in the scene when some of the objects involved in the plan are missing. This is done by means of a repository of objects and attributes with roles, which is used to identify the affordances of the unknown objects in the scene. Planning actions are grounded using a novel approach that encodes the semantic structure of manipulation actions. The system was evaluated on a KUKA arm platform for the task of preparing a salad, with successful results.}}
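    The following toy sketch mimics the repository-based substitution idea described above: when an object required by the plan is missing, a scene object whose attributes cover the required role is proposed instead. The repository contents, role definitions, and attribute names are invented for illustration.

# Toy role-based object substitution: find a scene object whose attributes
# satisfy the role of the missing object. All entries below are made up.

REPOSITORY = {
    "cutting_board": {"flat", "rigid", "supports_cutting"},
    "book":          {"flat", "rigid"},
    "knife":         {"rigid", "has_edge"},
    "screwdriver":   {"rigid", "has_tip"},
}

ROLE_REQUIREMENTS = {
    "cut_support":  {"flat", "rigid"},   # what a cutting-board substitute must afford
    "screw_driver": {"rigid", "has_tip"},
}

def find_substitute(missing_object_role, scene_objects):
    """Return a scene object whose attributes satisfy the role, or None."""
    required = ROLE_REQUIREMENTS[missing_object_role]
    for obj in scene_objects:
        if required <= REPOSITORY.get(obj, set()):
            return obj
    return None

if __name__ == "__main__":
    # Cutting board missing; a book in the scene covers the 'cut_support' role.
    print(find_substitute("cut_support", ["book", "knife"]))  # -> "book"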
    Tetzlaff, C. and Dasgupta, S. and Kulvicius, T. and Wörgötter, F. (2015).
    The Use of Hebbian Cell Assemblies for Nonlinear Computation. Scientific Reports, 5. DOI: 10.1038/srep12866.
    BibTeX:
    @article{tetzlaffdasguptakulvicius2015,
      author = {Tetzlaff, C. and Dasgupta, S. and Kulvicius, T. and Wörgötter, F.},
      title = {The Use of Hebbian Cell Assemblies for Nonlinear Computation},
      journal = {Scientific Reports},
      year = {2015},
      volume= {5},
      publisher = {Nature Publishing Group},
      url = {http://www.nature.com/articles/srep12866},
      doi = {10.1038/srep12866},
      abstract = {When learning a complex task our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns. During this, a network with multiple, simultaneously active, and computationally powerful cell assemblies is created. How such ordered structures are formed while preserving a rich diversity of neural dynamics needed for computation is still unknown. Here we show that the combination of synaptic plasticity with the slower process of synaptic scaling achieves (i) the formation of cell assemblies and (ii) an enhanced diversity of neural dynamics, facilitating the learning of complex calculations. Due to synaptic scaling the dynamics of different cell assemblies do not interfere with each other. As a consequence, this type of self-organization allows executing a difficult six-degrees-of-freedom manipulation task with a robot, where assemblies need to learn to compute complex non-linear transforms and - for execution - must cooperate with each other without interference. This mechanism, thus, permits the self-organization of computationally powerful sub-structures in dynamic networks for behavior control.}}
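    A generic sketch of the mechanism named above, i.e. combining a fast Hebbian term with a slower homeostatic scaling term in a single weight update. The functional forms and constants below are illustrative assumptions, not the exact rule from the paper.

# Generic combination of Hebbian plasticity with slower synaptic scaling.
import numpy as np

def update_weight(w, pre, post, target_rate=0.1,
                  mu=0.01, gamma=0.001, dt=1.0):
    """One Euler step of dw/dt = Hebbian growth + homeostatic scaling."""
    hebb = mu * pre * post                      # fast, correlation-driven growth
    scaling = gamma * (target_rate - post) * w  # slow drive toward a target activity
    return w + dt * (hebb + scaling)

if __name__ == "__main__":
    w = 0.5
    rng = np.random.default_rng(0)
    for _ in range(1000):
        pre, post = rng.random(), rng.random()
        w = update_weight(w, pre, post)
    print(round(w, 3))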
    Dasgupta, S. and Goldschmidt, D. and Wörgötter, F. and Manoonpong, P. (2015).
    Distributed Recurrent Neural Forward Models with Synaptic Adaptation for Complex Behaviors of Walking Robots. arXiv preprint arXiv:1506.03599.
    BibTeX:
    @article{dasguptagoldschmidtwoergoetter2015,
      author = {Dasgupta, S. and Goldschmidt, D. and Wörgötter, F. and Manoonpong, P.},
      title = {Distributed Recurrent Neural Forward Models with Synaptic Adaptation for Complex Behaviors of Walking Robots},
      journal = {arXiv preprint arXiv:1506.03599},
      year = {2015},
      url = {http://arxiv.org/abs/1506.03599},
      abstract = {Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological studies have revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanisms, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of 1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, 2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and 3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps as well as climbing over high obstacles.}}
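    One building block mentioned above is central-pattern-generator-based control for basic rhythmic patterns. Below is a minimal sketch of a discrete-time two-neuron recurrent oscillator of the SO(2) type commonly used for this purpose; the parameter values are illustrative and not those used on the robot.

# Two-neuron recurrent oscillator (SO(2)-type weights) as a minimal CPG sketch.
import math

def cpg_step(a1, a2, alpha=1.01, phi=0.2 * math.pi):
    """One update of the two activations; alpha slightly above 1 and the
    rotation angle phi set the (quasi-)oscillation frequency."""
    w11 = w22 = alpha * math.cos(phi)
    w12 = alpha * math.sin(phi)
    w21 = -w12
    o1, o2 = math.tanh(a1), math.tanh(a2)
    return w11 * o1 + w12 * o2, w21 * o1 + w22 * o2

if __name__ == "__main__":
    a1, a2 = 0.1, 0.0
    for t in range(20):
        a1, a2 = cpg_step(a1, a2)
        print(f"{t:2d}  {math.tanh(a1):+.3f}  {math.tanh(a2):+.3f}")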
    Fauth, M. and Wörgötter, F. and Tetzlaff, C. (2015).
    The Formation of Multi-synaptic Connections by the Interaction of Synaptic and Structural Plasticity and Their Functional Consequences. PLoS Comput Biol, e1004031, 11, 1. DOI: 10.1371/journal.pcbi.1004031.
    BibTeX:
    @article{fauthwoergoettertetzlaff2015a,
      author = {Fauth, M. and Wörgötter, F. and Tetzlaff, C.},
      title = {The Formation of Multi-synaptic Connections by the Interaction of Synaptic and Structural Plasticity and Their Functional Consequences},
      pages = {e1004031},
      journal = {PLoS Comput Biol},
      year = {2015},
      volume= {11},
      number = {1},
      institution = {Georg-August University Göttingen, Third Institute of Physics, Bernstein Center for Computational Neuroscience, Göttingen, Germany.},
      language = {english},
      month = {Jan},
      doi = {10.1371/journal.pcbi.1004031},
      abstract = {Cortical connectivity emerges from the permanent interaction between neuronal activity and synaptic as well as structural plasticity. An important experimentally observed feature of this connectivity is the distribution of the number of synapses from one neuron to another, which has been measured in several cortical layers. All of these distributions are bimodal with one peak at zero and a second one at a small number (3-8) of synapses. In this study, using a probabilistic model of structural plasticity, which depends on the synaptic weights, we explore how these distributions can emerge and which functional consequences they have. We find that bimodal distributions arise generically from the interaction of structural plasticity with synaptic plasticity rules that fulfill the following biologically realistic constraints: First, the synaptic weights have to grow with the postsynaptic activity. Second, this growth curve and/or the input-output relation of the postsynaptic neuron have to change sub-linearly (negative curvature). As most neurons show such input-output relations, these constraints can be fulfilled by many biologically reasonable systems. Given such a system, we show that the different activities, which can explain the layer-specific distributions, correspond to experimentally observed activities. Considering these activities as the working point of the system and varying the pre- or postsynaptic stimulation reveals a hysteresis in the number of synapses. As a consequence of this, the connectivity between two neurons can be controlled by activity but is also safeguarded against overly fast changes. These results indicate that the complex dynamics between activity and plasticity will, already between a pair of neurons, induce a variety of possible stable synaptic distributions, which could support memory mechanisms.}}
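    The paper above studies a probabilistic model in which synapse creation and removal interact with activity-dependent synaptic weights. The toy birth-and-death simulation below only captures the flavour of such a process (removal becomes less likely when the postsynaptic neuron is active); the rates and the sigmoidal activity function are invented, and whether a clear bimodal synapse-number distribution emerges depends on how they are chosen.

# Toy birth-and-death process for the number of synapses between one neuron pair.
import math
import random
from collections import Counter

def postsyn_activity(n_synapses, slope=1.5, theta=3.0):
    # Sub-linear (sigmoidal) input-output relation of the postsynaptic neuron.
    return 1.0 / (1.0 + math.exp(-slope * (n_synapses - theta)))

def simulate(steps=50000, p_create=0.04, p_delete_max=0.05, seed=1):
    rng = random.Random(seed)
    n, counts = 0, Counter()
    for _ in range(steps):
        if rng.random() < p_create:          # a new synapse is formed
            n += 1
        # Each existing synapse is removed with a probability that shrinks
        # when the postsynaptic neuron is active (activity-dependent stability).
        p_del = p_delete_max * (1.0 - 0.9 * postsyn_activity(n))
        n -= sum(rng.random() < p_del for _ in range(n))
        counts[n] += 1
    return counts

if __name__ == "__main__":
    hist = simulate()
    for k in sorted(hist):
        print(k, hist[k])   # histogram of synapse numbers over time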
    Fauth, M. and Wörgötter, F. and Tetzlaff, C. (2015).
    Formation and Maintenance of Robust Long-Term Information Storage in the Presence of Synaptic Turnover. PLoS Comput Biol, e1004684, 11, 12. DOI: 10.1371/journal.pcbi.1004684.
    BibTeX:
    @article{fauthwoergoettertetzlaff2015b,
      author = {Fauth, M. and Wörgötter, F. and Tetzlaff, C.},
      title = {Formation and Maintenance of Robust Long-Term Information Storage in the Presence of Synaptic Turnover},
      pages = {e1004684},
      journal = {PLoS Comput Biol},
      year = {2015},
      volume= {11},
      number = {12},
      institution = {Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel.},
      language = {eng},
      month = {Dec},
      doi = {10.1371/journal.pcbi.1004684},
      abstract = {A long-standing problem is how memories can be stored for very long times despite the volatility of the underlying neural substrate, most notably the high turnover of dendritic spines and synapses. To address this problem, here we use a generic and simple probabilistic model for the creation and removal of synapses. We show that information can be stored for several months when utilizing the intrinsic dynamics of multi-synapse connections. In such systems, single synapses can still show high turnover, which enables fast learning of new information, but this will not perturb prior stored information (slow forgetting), which is represented by the compound state of the connections. The model matches the time course of recent experimental spine data during learning and memory in mice, supporting the assumption of multi-synapse connections as the basis for long-term storage.}}
    Schoeler, M. and Wörgötter, F. (2015).
    Bootstrapping the Semantics of Tools: Affordance analysis of real world objects on a per-part basis. IEEE Transactions on Autonomous Mental Development (TAMD), 84-98, 8, 2. DOI: 10.1109/TAMD.2015.2488284.
    BibTeX:
    @article{schoelerwoergoetter2015,
      author = {Schoeler, M. and Wörgötter, F.},
      title = {Bootstrapping the Semantics of Tools: Affordance analysis of real world objects on a per-part basis},
      pages = {84-98},
      journal = {IEEE Transactions on Autonomous Mental Development (TAMD)},
      year = {2015},
      volume= {8},
      number = {2},
      month = {06},
      doi = {10.1109/TAMD.2015.2488284},
      abstract = {This study shows how understanding of object functionality arises by analyzing objects at the level of their parts where we focus here on primary tools. First, we create a set of primary tool functionalities, which we speculate is related to the possible functions of the human hand. The function of a tool is found by comparing it to this set. For this, the unknown tool is segmented, using a data-driven method, into its parts and evaluated using the geometrical part constellations against the training set. We demonstrate that various tools and even uncommon tool-versions can be recognized. The system }}
    Faghihi, F. and Moustafa, A. and Heinrich, R. and Wörgötter, F. (2017).
    A computational model of conditioning inspired by Drosophila olfactory system. Neural Networks, 96 - 108, 87. DOI: 10.1016/j.neunet.2016.11.002.
    BibTeX:
    @article{faghihimoustafaheinrich2017,
      author = {Faghihi, F. and Moustafa, A. and Heinrich, R. and Wörgötter, F.},
      title = {A computational model of conditioning inspired by Drosophila olfactory system},
      pages = {96 - 108},
      journal = {Neural Networks},
      year = {2017},
      volume= {87},
      url = {http://www.sciencedirect.com/science/article/pii/S0893608016301666},
      doi = {10.1016/j.neunet.2016.11.002},
      abstract = {Recent studies have demonstrated that Drosophila melanogaster (briefly Drosophila) can successfully perform higher cognitive processes including second order olfactory conditioning. Understanding the neural mechanism of this behavior can help neuroscientists to unravel the principles of information processing in complex neural systems (e.g. the human brain) and to create efficient and robust robotic systems. In this work, we have developed a biologically-inspired spiking neural network which is able to execute both first and second order conditioning. Experimental studies demonstrated that volume signaling (e.g. by the gaseous transmitter nitric oxide) contributes to memory formation in vertebrates and invertebrates including insects. Based on the existing knowledge of odor encoding in Drosophila, the role of retrograde signaling in memory function, and the integration of synaptic and non-synaptic neural signaling, a neural system is implemented as Simulated fly. Simulated fly navigates in a two-dimensional environment in which it receives odors and electric shocks as sensory stimuli. The model suggests some experimental research on retrograde signaling to investigate neural mechanisms of conditioning in insects and other animals. Moreover, it illustrates a simple strategy to implement higher cognitive capabilities in machines including robots.}}
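    As a very rough, non-spiking illustration of first- and second-order conditioning as discussed above: a stimulus acquires a positive association either by coinciding with the shock (first order) or by coinciding with an already predictive stimulus (second order). The learning rule, rates, and trial structure below are made up and are not the paper's network.

# Toy associative-weight sketch of first- and second-order conditioning.
weights = {"odor_A": 0.0, "odor_B": 0.0}
LR = 0.2

def trial(present, shock):
    """present: set of stimuli shown this trial; shock: bool."""
    for s in sorted(present):
        # Reinforcement is the shock itself, or the prediction carried by the
        # other stimuli present (this crude substitution yields second-order
        # conditioning and partial extinction of the first-order stimulus).
        others = sum(weights[o] for o in present if o != s)
        reinforcement = 1.0 if shock else others
        weights[s] += LR * (reinforcement - weights[s])

if __name__ == "__main__":
    for _ in range(10):                       # phase 1: odor A paired with shock
        trial({"odor_A"}, shock=True)
    for _ in range(10):                       # phase 2: odor B paired with odor A
        trial({"odor_A", "odor_B"}, shock=False)
    print(weights)  # odor_B becomes positive although never paired with shock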
    Agostini, A. and Alenya, G. and Fischbach, A. and Scharr, H. and Wörgötter, F. and Torras, C. (2017).
    A Cognitive Architecture for Automatic Gardening. Computers and Electronics in Agriculture, 69--79, 138. DOI: 10.1016/j.compag.2017.04.015.
    BibTeX:
    @article{agostinialenyafischbach2017,
      author = {Agostini, A. and Alenya, G. and Fischbach, A. and Scharr, H. and Wörgötter, F. and Torras, C.},
      title = {A Cognitive Architecture for Automatic Gardening},
      pages = {69--79},
      journal = {Computers and Electronics in Agriculture},
      year = {2017},
      volume= {138},
      publisher = {Elsevier},
      url = {http://www.sciencedirect.com/science/article/pii/S0168169916304768},
      doi = {10.1016/j.compag.2017.04.015},
      abstract = {In large industrial greenhouses, plants are usually treated following well-established protocols for watering, nutrients, and shading/light. While this is practical for the automation of the process, it does not tap the full potential for optimal plant treatment. To grow plants more efficiently, specific treatments according to the plants' individual needs should be applied. Experienced human gardeners are very good at treating plants individually. Unfortunately, hiring a crew of gardeners to carry out this task in large greenhouses is not cost effective. In this work we present a cognitive system that integrates artificial intelligence (AI) techniques for decision-making with robotics techniques for sensing and acting to autonomously treat plants using a real-robot platform. Artificial intelligence techniques are used to decide the amount of water and nutrients each plant needs according to the history of the plant. Robotic techniques for sensing measure plant attributes (e.g. leaves) from visual information using 3D model representations. These attributes are used by the AI system to make decisions about the treatment to apply. Acting techniques execute robot movements to supply the plants with the specified amount of water and nutrients.}}
    Herzog, S. and Wörgötter, F. and Kulvicius, T. (2017).
    Generation of movements with boundary conditions based on optimal control theory. Robotics and Autonomous Systems, 1 - 11, 94. DOI: 10.1016/j.robot.2017.04.006.
    BibTeX:
    @article{herzogwoergoetterkulvicius2017,
      author = {Herzog, S. and Wörgötter, F. and Kulvicius, T.},
      title = {Generation of movements with boundary conditions based on optimal control theory},
      pages = {1 - 11},
      journal = {Robotics and Autonomous Systems},
      year = {2017},
      volume= {94},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889016300963},
      doi = {10.1016/j.robot.2017.04.006},
      abstract = {Trajectory generation methods play an important role in robotics since they are essential for the execution of actions. In this paper we present a novel trajectory generation method for generalization of accurate movements with boundary conditions. Our approach originates from optimal control theory and is based on a second order dynamic system. We evaluate our method and compare it to state-of-the-art movement generation methods in both simulations and real robot experiments. We show that the new method is very compact in its representation and can reproduce reference trajectories with zero error. Moreover, it has most of the features of the state-of-the-art movement generation methods such as robustness to perturbations and generalization to new position and velocity boundary conditions. We believe that, due to these features, our method may have potential for robotic applications where high accuracy is required paired with flexibility, for example, in modern industrial robotic applications, where more flexibility will be demanded as well as in medical robotics.}}
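    The method above generates movements that satisfy position and velocity boundary conditions. As a generic illustration of that idea only (not the paper's optimal-control formulation), the sketch below fits a cubic polynomial to the four boundary constraints.

# Smooth point-to-point trajectory meeting position and velocity boundary
# conditions, solved from the four constraints of a cubic polynomial.
import numpy as np

def cubic_trajectory(x0, v0, x1, v1, T, n=100):
    """Return times t and positions x(t) with x(0)=x0, x'(0)=v0,
    x(T)=x1, x'(T)=v1."""
    A = np.array([[1, 0,    0,       0],
                  [0, 1,    0,       0],
                  [1, T,  T**2,    T**3],
                  [0, 1,  2 * T, 3 * T**2]], dtype=float)
    b = np.array([x0, v0, x1, v1], dtype=float)
    a0, a1, a2, a3 = np.linalg.solve(A, b)
    t = np.linspace(0.0, T, n)
    return t, a0 + a1 * t + a2 * t**2 + a3 * t**3

if __name__ == "__main__":
    t, x = cubic_trajectory(x0=0.0, v0=0.0, x1=0.5, v1=0.1, T=2.0)
    print(x[0], x[-1])   # endpoints match the boundary positions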
    Herzog, S. and Wörgötter, F. and Kulvicius, T. (2016).
    Optimal trajectory generation for generalization of discrete movements with boundary conditions. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3143-3149. DOI: 10.1109/IROS.2016.7759486.
    BibTeX:
    @inproceedings{herzogwoergoetterkulvicius2016,
      author = {Herzog, S. and Wörgötter, F. and Kulvicius, T.},
      title = {Optimal trajectory generation for generalization of discrete movements with boundary conditions},
      pages = {3143-3149},
      booktitle = {2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2016},
      month = {Oct},
      doi = {10.1109/IROS.2016.7759486},
      abstract = {Trajectory generation methods play an important role in robotics since they are essential for the execution of actions. In this paper we present a novel trajectory generation method for generalization of accurate movements with boundary conditions. Our approach originates from optimal control theory and is based on a second order dynamic system. We evaluate our method and compare it to state-of-the-art movement generation methods in both simulations and a real robot experiment. We show that the new method is very compact in its representation and can reproduce demonstrated trajectories with zero error. Moreover, it has most of the properties of the state-of-the-art trajectory generation methods such as robustness to perturbations and generalisation to new boundary position and velocity conditions. We believe that, due to these features, our method has great potential for various robotic applications, especially, where high accuracy is required, for example, in industrial and medical robotics.}}
    Ziaeetabar, F. and Aksoy, E. E. and Wörgötter, F. and Tamosiunaite, M. (2017).
    Semantic Analysis of Manipulation Actions Using Spatial Relations. IEEE International Conference on Robotics and Automation (ICRA) (accepted).
    BibTeX:
    @inproceedings{ziaeetabaraksoywoergoetter2017,
      author = {Ziaeetabar, F. and Aksoy, E. E. and Wörgötter, F. and Tamosiunaite, M.},
      title = {Semantic Analysis of Manipulation Actions Using Spatial Relations},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2017},
      month = {May- June},
      note = {accepted},
      abstract = {Recognition of human manipulation actions together with the analysis and execution by a robot is an important issue. Also, perception of spatial relationships between objects is central to understanding the meaning of manipulation actions. Here we would like to merge these two notions and analyze manipulation actions using symbolic spatial relations between objects in the scene. Specifically, we define procedures for extraction of symbolic human-readable relations based on Axis Aligned Bounding Box object models and use sequences of those relations for action recognition from image sequences. Our framework is inspired by the so-called Semantic Event Chain framework, which analyzes touching and un-touching events of different objects during the manipulation. However, our framework uses fourteen spatial relations instead of two. We show that our relational framework is able to differentiate between more manipulation actions than the original Semantic Event Chains. We quantitatively evaluate the method on the MANIAC dataset containing 120 videos of eight different manipulation actions and obtain 97% classification accuracy, which is 12% higher than with the original Semantic Event Chains.}}
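    A small sketch of extracting symbolic spatial relations from axis-aligned bounding boxes, in the spirit of the approach above. The relation set and thresholds here are illustrative; the paper uses fourteen relations.

# Extract a few symbolic spatial relations from 3D axis-aligned bounding boxes.
def overlap_1d(a_min, a_max, b_min, b_max):
    return min(a_max, b_max) - max(a_min, b_min)

def spatial_relations(a, b):
    """a, b: dicts with 'min' and 'max' 3D corners (x, y, z)."""
    rels = []
    ox = overlap_1d(a["min"][0], a["max"][0], b["min"][0], b["max"][0])
    oy = overlap_1d(a["min"][1], a["max"][1], b["min"][1], b["max"][1])
    oz = overlap_1d(a["min"][2], a["max"][2], b["min"][2], b["max"][2])
    if ox > 0 and oy > 0 and oz > 0:
        rels.append("overlapping")
    elif ox > 0 and oy > 0 and abs(oz) < 0.01:
        rels.append("touching")           # boxes meet along the z-axis
    if a["min"][2] >= b["max"][2] and ox > 0 and oy > 0:
        rels.append("above")
    if a["max"][2] <= b["min"][2] and ox > 0 and oy > 0:
        rels.append("below")
    return rels or ["apart"]

if __name__ == "__main__":
    cup   = {"min": (0.0, 0.0, 0.10), "max": (0.1, 0.1, 0.20)}
    table = {"min": (-0.5, -0.5, 0.0), "max": (0.5, 0.5, 0.10)}
    print(spatial_relations(cup, table))   # -> ['touching', 'above']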
    Ambe, Y. and Nachstedt, T. and Manoonpong, P. and Wörgötter, F. and Aoi, S. and Matsuno, F. (2013).
    Stability analysis of a hexapod robot driven by distributed nonlinear oscillators with a phase modulation mechanism. IEEE International Conference on Intelligent Robots and Systems, 5087--5092. DOI: 10.1109/IROS.2013.6697092.
    BibTeX:
    @inproceedings{ambenachstedtmanoonpong2013,
      author = {Ambe, Y. and Nachstedt, T. and Manoonpong, P. and Wörgötter, F. and Aoi, S. and Matsuno, F.},
      title = {Stability analysis of a hexapod robot driven by distributed nonlinear oscillators with a phase modulation mechanism},
      pages = {5087--5092},
      booktitle = {IEEE International Conference on Intelligent Robots and Systems},
      year = {2013},
      month = {11},
      doi = {10.1109/IROS.2013.6697092},
      abstract = {In this paper, we investigated the dynamics of a hexapod robot model whose legs are driven by nonlinear oscillators with a phase modulation mechanism including phase resetting and inhibition. This mechanism changes the oscillation period of the oscillator depending solely on the timing of the foot's contact. This strategy is based on observations of animals. The performance of the controller is evaluated using a physical simulation environment. Our simulation results show that the robot produces some stable gaits depending on the locomotion speed due to the phase modulation mechanism, which are similar to the gaits of insects.}}
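    The phase modulation mechanism described above changes the oscillation period based on the timing of foot contact. A minimal sketch of phase resetting in a single phase oscillator follows; the frequency, reset value, and the fake contact event are illustrative.

# Phase oscillator with phase resetting at foot touchdown.
import math

def step_phase(phase, dt, omega=2.0 * math.pi, foot_contact=False,
               reset_phase=0.0):
    if foot_contact:
        return reset_phase                      # phase resetting at touchdown
    return (phase + omega * dt) % (2.0 * math.pi)

if __name__ == "__main__":
    phase, dt = 0.0, 0.01
    for t in range(300):
        contact = (t == 120)                    # one fake touchdown event
        phase = step_phase(phase, dt, foot_contact=contact)
    print(round(phase, 3))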
    Chatterjee, S. and Nachstedt, T. and Tamosiunaite, M. and Wörgötter, F. and Enomoto, Y. and Ariizumi, R. and Matsuno, F. and Manoonpong, P. (2015).
    Learning and Chaining of Motor Primitives for Goal-Directed Locomotion of a Snake-Like Robot with Screw-Drive Units. International Journal of Advanced Robotic Systems, 12, 12. DOI: 10.5772/61621.
    BibTeX:
    @article{chatterjeenachstedttamosiunaite2015,
      author = {Chatterjee, S. and Nachstedt, T. and Tamosiunaite, M. and Wörgötter, F. and Enomoto, Y. and Ariizumi, R. and Matsuno, F. and Manoonpong, P.},
      title = {Learning and Chaining of Motor Primitives for Goal-Directed Locomotion of a Snake-Like Robot with Screw-Drive Units},
      journal = {International Journal of Advanced Robotic Systems},
      year = {2015},
      volume= {12},
      number = {12},
      doi = {10.5772/61621},
      abstract = {In this paper we apply a policy improvement algorithm called Policy Improvement with Path Integrals (PI^2) to generate goal-directed locomotion of a complex snake-like robot with screw-drive units. PI^2 is numerically simple and has the ability to deal with high-dimensional systems. Here, this approach is used to find proper locomotion control parameters, like joint angles and screw-drive velocities, of the robot. The learning process was achieved using a simulated robot and the learned parameters were successfully transferred to the real one. As a result the robot can locomote toward a given goal.}}
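    Below is a simplified, episode-level sketch of a PI^2-style update as used above: sample parameter perturbations, roll them out, and average the perturbations weighted by the exponentiated negative cost. The quadratic stand-in cost replaces the actual locomotion objective (reaching a goal with the snake-like robot), and all constants are illustrative.

# Simplified, episode-level PI^2-style parameter update (reward-weighted averaging).
import numpy as np

def pi2_update(theta, cost_fn, n_rollouts=20, sigma=0.1, lam=0.05, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.normal(0.0, sigma, size=(n_rollouts, theta.size))
    costs = np.array([cost_fn(theta + e) for e in eps])
    # Softmax over negative costs: low-cost rollouts get high weight.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return theta + w @ eps

if __name__ == "__main__":
    target = np.array([0.3, -0.2, 0.5])           # made-up "good" parameters
    cost = lambda th: np.sum((th - target) ** 2)  # stand-in cost function
    theta = np.zeros(3)
    for _ in range(100):
        theta = pi2_update(theta, cost)
    print(np.round(theta, 2))   # close to the made-up target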
    Ivanovska, T. and Ciet, P. and Perez-Rovira, A. and Nguyen, A. and Tiddens, H. and Duijts, L. and Bruijne, M. and Wörgötter, F. (2017).
    Fully Automated Lung Volume Assessment from MRI in a Population-based Child Cohort Study. Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), 53-58. DOI: 10.5220/0006075300530058.
    BibTeX:
    @conference{ivanovskacietperezrovira2017,
      author = {Ivanovska, T. and Ciet, P. and Perez-Rovira, A. and Nguyen, A. and Tiddens, H. and Duijts, L. and Bruijne, M. and Wörgötter, F.},
      title = {Fully Automated Lung Volume Assessment from MRI in a Population-based Child Cohort Study},
      pages = {53-58},
      booktitle = {Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP)},
      year = {2017},
      organization = {ScitePress},
      publisher = {ScitePress},
      doi = {10.5220/0006075300530058},
      abstract = {In this work, a framework for fully automated lung extraction from magnetic resonance imaging (MRI) inspiratory data that have been acquired within an ongoing epidemiological child cohort study is presented. The method's main steps are intensity inhomogeneity correction, denoising, clustering, airway extraction and lung region refinement. The presented approach produces highly accurate results (Dice coefficients of 95%), when compared to semi-automatically obtained masks, and has potential to be applied to the whole study data.}}
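    Segmentation quality above is reported via Dice coefficients; for reference, a small helper computing the Dice coefficient between a predicted and a reference binary mask.

# Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks of equal shape.
import numpy as np

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0                      # both masks empty: perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

if __name__ == "__main__":
    a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
    b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
    print(round(dice(a, b), 3))   # 0.8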
    Gressmann, F. and Lüddecke, T. and Ivanovska, T. and Schoeler, M. and Wörgötter, F. (2017).
    Part-driven Visual Perception of 3D Objects. Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, (VISIGRAPP 2017), 370-377. DOI: 10.5220/0006211203700377.
    BibTeX:
    @conference{gressmannlueddeckeivanovska2017,
      author = {Gressmann, F. and Lüddecke, T. and Ivanovska, T. and Schoeler, M. and Wörgötter, F.},
      title = {Part-driven Visual Perception of 3D Objects},
      pages = {370-377},
      booktitle = {Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, (VISIGRAPP 2017)},
      year = {2017},
      organization = {ScitePress},
      publisher = {ScitePress},
      doi = {10.5220/0006211203700377},
      abstract = {During the last years, approaches based on convolutional neural networks (CNN) had substantial success in visual object perception. CNNs turned out to be capable of extracting high-level features of objects, which allow for fine-grained classification. However, some object classes exhibit tremendous variance with respect to their instances' appearance. We believe that considering object parts as an intermediate representation could be helpful in these cases. In this work, a part-driven perception of everyday objects with a rotation estimation is implemented using deep convolutional neural networks. The network is trained and tested on artificially generated RGB-D data. The approach has a potential to be used for part recognition of realistic sensor recordings in present robot systems.}}
    Ivanovska, T. and Pomschar, A. and Lorbeer, R. and Kunz, W. and Schulz, H. and Hetterich, H. and Völzke, H. and Bamber, F. and Peters, A. and Wörgötter, F. (2016).
    Efficient population-based big MR data analysis: a lung segmentation and volumetry example. In Proceedings of Sixth International Workshop on Pulmonary Image Analysis (PIA) at MICCAI 2016.
    BibTeX:
    @conference{ivanovskapomscharlorbeer2016,
      author = {Ivanovska, T. and Pomschar, A. and Lorbeer, R. and Kunz, W. and Schulz, H. and Hetterich, H. and Völzke, H. and Bamber, F. and Peters, A. and Wörgötter, F.},
      title = {Efficient population-based big MR data analysis: a lung segmentation and volumetry example},
      booktitle = {In Proceedings of Sixth International Workshop on Pulmonary Image Analysis (PIA) at MICCAI 2016},
      year = {2016},
      location = {Athens, Greece},
      abstract = {In this paper, we discuss magnetic resonance (MR) lung imaging and the related image processing tasks from two on-going epidemiological studies conducted in Germany. A modularized system for efficient lung segmentation is proposed and applied for test lung datasets from both studies. The efficiency of the framework is demonstrated by comparison of automatically computed results to the manually created ground truth masks. The presented pipeline allows one to obtain highly accurate segmentation results even for MR data with lower quality.}}
    Goldbeck, C. and Kaul, L. and Vahrenkamp, N. and Wörgötter, F. and Asfour, T. and Braun, J. M. (2016).
    Two ways of walking: Contrasting a reflexive neuro-controller and a LIP-based ZMP-controller on the humanoid robot ARMAR-4. IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), 966--972. DOI: 10.1109/HUMANOIDS.2016.7803389.
    BibTeX:
    @inproceedings{goldbeckkaulvahrenkamp2016,
      author = {Goldbeck, C. and Kaul, L. and Vahrenkamp, N. and Wörgötter, F. and Asfour, T. and Braun, J. M.},
      title = {Two ways of walking: Contrasting a reflexive neuro-controller and a LIP-based ZMP-controller on the humanoid robot ARMAR-4},
      pages = {966--972},
      booktitle = {IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids)},
      year = {2016},
      month = {Nov},
      url = {http://ieeexplore.ieee.org/abstract/document/7803389},
      doi = {10.1109/HUMANOIDS.2016.7803389},
      abstract = {Full-size humanoid robots are traditionally controlled with the Zero Moment Point (ZMP)-paradigm and simplified dynamics, a well-established method which can be applied to balancing, walking, and whole-body manipulation tasks. For pure walking control, approaches like pattern generators and reflexes are employed, often on optimized hardware. Both controller groups are developed on different platforms and therefore can only be indirectly compared in terms of human likeness or energy efficiency. We present a reflex-based neuro-controller with an underlying, simple hill-type muscle model on the extremely versatile humanoid robot ARMAR-4. We demonstrate the reflexive controller's flexible capabilities in terms of walking speed, step length, energy efficiency and inherent robustness against falls due to small slopes and pushes along the frontal axis. We contrast this controller with a Linearized Inverted Pendulum (LIP)-based ZMP-controller on the same platform. The promising results of this study show that even general humanoid robots can benefit from reflexive control schemes and encourage further investigation in this field.}}
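    For reference, the Linear Inverted Pendulum relation underlying the LIP-based ZMP controller mentioned above: with constant centre-of-mass height z_c, the zero moment point follows p = x - (z_c / g) * x_ddot. The numbers in the example are illustrative.

# Zero moment point predicted by the Linear Inverted Pendulum model (1D).
G = 9.81  # gravitational acceleration (m/s^2)

def zmp_from_lip(com_x, com_x_ddot, z_c):
    """ZMP position for CoM position com_x, CoM acceleration com_x_ddot,
    and constant pendulum height z_c."""
    return com_x - (z_c / G) * com_x_ddot

if __name__ == "__main__":
    # CoM 2 cm ahead of the ankle, accelerating forward at 0.5 m/s^2,
    # pendulum height 0.8 m:
    print(round(zmp_from_lip(0.02, 0.5, 0.8), 4))  # about -0.0208 m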
