Dr. Tomas Kulvicius

Group(s): Neural Control and Robotics,
Computer Vision,
Neural Computation
Email: tkulvic@gwdg.de
Phone: +49 551/ 39 10762
Room: E.01.104
Website: Kulvicius

    Kulvicius, T. and Biehl, M. and Aein, M J. and Tamosiunaite, M. and Wörgötter, F. (2013).
    Interaction learning for dynamic movement primitives used in cooperative robotic tasks. Robotics and Autonomous Systems, 61(12), 1450-1459. DOI: 10.1016/j.robot.2013.07.009.
    BibTeX:
    @article{kulviciusbiehlaein2013,
      author = {Kulvicius, T. and Biehl, M. and Aein, M J. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Interaction learning for dynamic movement primitives used in cooperative robotic tasks},
      pages = {1450 - 1459},
      journal = {Robotics and Autonomous Systems},
      year = {2013},
      volume= {61},
      number = {12},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889013001358},
      doi = {10.1016/j.robot.2013.07.009},
    }
    Abstract: In recent years, dynamic movement primitives (DMPs) have attracted growing interest for flexible movement control in robotics. In this study we introduce sensory feedback together with a predictive learning mechanism which allows tightly coupled dual-agent systems to learn an adaptive, sensor-driven interaction based on DMPs. The coupled conventional (no sensors, no learning) DMP system automatically equilibrates and can still be solved analytically, allowing us to derive conditions for stability. When adding adaptive sensor control we can show that both agents learn to cooperate. Simulations as well as real-robot experiments are shown. Interestingly, all these mechanisms are entirely based on low-level interactions without any planning or cognitive component.
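As background, the point-attractor core that DMP formulations share can be sketched in a few lines. The sketch below is a generic discrete DMP without the learned forcing term or the sensory coupling this paper adds; the function name and all gain values are illustrative choices, not taken from the paper.

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, alpha_z=25.0, beta_z=6.25,
                alpha_x=8.0, dt=0.001, T=1.0):
    """Integrate a single discrete DMP: a critically damped spring
    pulled toward goal g. The canonical phase x (decaying 1 -> 0)
    would pace a learned forcing term; it is unused in this sketch."""
    y, z, x = y0, 0.0, 1.0
    traj = [y]
    for _ in range(int(T / dt)):
        x += dt * (-alpha_x * x) / tau                    # canonical system
        z += dt * (alpha_z * (beta_z * (g - y) - z)) / tau  # transformation system
        y += dt * z / tau
        traj.append(y)
    return np.array(traj)

traj = dmp_rollout(y0=0.0, g=1.0)
```

With these critically damped gains the rollout converges from the start state to the goal without overshoot; the paper's contribution lies in what is layered on top of such dynamics, not in this base system.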
    Kulvicius, T. and Markelic, I. and Tamosiunaite, M. and Wörgötter, F. (2013).
    Semantic image search for robotic applications. Proc. of 22nd Int. Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2013), 1-8.
    BibTeX:
    @inproceedings{kulviciusmarkelictamosiunaite2013,
      author = {Kulvicius, T. and Markelic, I. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Semantic image search for robotic applications},
      pages = {1-8},
      booktitle = {Proc. of 22nd Int. Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2013)},
      year = {2013},
      location = {Portorož (Slovenia)},
      month = {September 11-13},
    }
    Abstract: Generalization is one of the most important problems in robotics. New generalization approaches use internet databases in order to solve new tasks. Modern search engines can return a large amount of information for a query within milliseconds. However, not all of the returned information is task-relevant, partly due to the problem of polysemes. Here we specifically address the problem of object generalization by using image search. We suggest a bi-modal solution, combining visual and textual information, based on the observation that humans use additional linguistic cues to demarcate intended word meaning. We evaluate the quality of our approach by comparing it to human-labelled data and find that, on average, it improves on Google searches and can handle the problem of polysemes.
    Papon, J. and Kulvicius, T. and Aksoy, E E. and Wörgötter, F. (2013).
    Point Cloud Video Object Segmentation using a Persistent Supervoxel World-Model. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3712-3718. DOI: 10.1109/IROS.2013.6696886.
    BibTeX:
    @inproceedings{paponkulviciusaksoy2013,
      author = {Papon, J. and Kulvicius, T. and Aksoy, E E. and Wörgötter, F.},
      title = {Point Cloud Video Object Segmentation using a Persistent Supervoxel World-Model},
      pages = {3712-3718},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems IROS},
      year = {2013},
      location = {Tokyo (Japan)},
      month = {November 3-8},
      doi = {10.1109/IROS.2013.6696886},
    }
    Abstract: Robust visual tracking is an essential precursor to understanding and replicating human actions in robotic systems. In order to accurately evaluate the semantic meaning of a sequence of video frames, or to replicate an action contained therein, one must be able to coherently track and segment all observed agents and objects. This work proposes a novel online point cloud based algorithm which simultaneously tracks 6DoF pose and determines spatial extent of all entities in indoor scenarios. This is accomplished using a persistent supervoxel world-model which is updated, rather than replaced, as new frames of data arrive. Maintenance of a world model enables general object permanence, permitting successful tracking through full occlusions. Object models are tracked using a bank of independent adaptive particle filters which use a supervoxel observation model to give rough estimates of object state. These are united using a novel multi-model RANSAC-like approach, which seeks to minimize a global energy function associating world-model supervoxels to predicted states. We present results on a standard robotic assembly benchmark for two application scenarios - human trajectory imitation and semantic action understanding - demonstrating the usefulness of the tracking in intelligent robotic systems.
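As a rough illustration of the "bank of independent adaptive particle filters" mentioned above, here is a generic one-dimensional bootstrap particle filter. The real system tracks 6-DoF poses with a supervoxel observation model; the function name, noise values, and toy scenario below are invented for the sketch.

```python
import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.2, meas_noise=0.5):
    """One predict/weight/resample cycle of a bootstrap particle filter
    for a 1-D state; the observation here is just a noisy position."""
    # Predict: propagate each particle with the control input plus noise.
    particles = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: Gaussian likelihood of the measurement for each particle.
    weights = [math.exp(-0.5 * ((p - measurement) / meas_noise) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(-5.0, 5.0) for _ in range(500)]
for t in range(20):                     # object drifts right 0.1 per step
    particles = particle_filter_step(particles, control=0.1,
                                     measurement=0.1 * (t + 1))
estimate = sum(particles) / len(particles)
```

After a few cycles the particle cloud concentrates around the true state despite the deliberately broad initialization.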
    Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F. (2012).
    Joining Movement Sequences: Modified Dynamic Movement Primitives for Robotics Applications Exemplified on Handwriting. IEEE Transactions on Robotics, 28(1), 145-157. DOI: 10.1109/TRO.2011.2163863.
    BibTeX:
    @article{kulviciusningtamosiunaite2012,
      author = {Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Joining Movement Sequences: Modified Dynamic Movement Primitives for Robotics Applications Exemplified on Handwriting},
      pages = {145 - 157},
      journal = {IEEE Transactions on Robotics},
      year = {2012},
      volume= {28},
      number = {1},
      doi = {10.1109/TRO.2011.2163863},
    }
    Abstract: The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories in a dynamic way, is an important problem in robotics. This paper presents a novel joining method based on a modification of the original dynamic movement primitive (DMP) formulation. The new method can reproduce the target trajectory with high accuracy with respect to both position and velocity profiles, and produces smooth and natural transitions in position as well as velocity space. The properties of the method are demonstrated by applying it to simulated handwriting generation, also shown on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for joining movement sequences, with high potential for all robotics applications where trajectory joining is required.
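The goal of the joining method, seams that are smooth in both position and velocity, can be illustrated with a plain cross-fade between two trajectory segments. This smoothstep blend is a generic stand-in, not the paper's modified-DMP formulation; names and the overlap length are arbitrary.

```python
import numpy as np

def join_trajectories(traj_a, traj_b, overlap=20):
    """Blend the tail of traj_a into the head of traj_b with a
    smoothstep weight. The weight has zero slope at both ends, so
    the seam is smooth in position AND velocity."""
    w = np.linspace(0.0, 1.0, overlap)
    w = 3 * w**2 - 2 * w**3                   # smoothstep
    blended = (1 - w) * traj_a[-overlap:] + w * traj_b[:overlap]
    return np.concatenate([traj_a[:-overlap], blended, traj_b[overlap:]])

t = np.linspace(0, 1, 100)
a = t**2                  # first segment, ends at 1.0
b = 1.0 + 0.5 * t         # second segment, starts at 1.0
joined = join_trajectories(a, b)
```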
    Tamosiunaite, M. and Markelic, I. and Kulvicius, T. and Wörgötter, F. (2011).
    Generalizing objects by analyzing language. 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids), 557-563. DOI: 10.1109/Humanoids.2011.6100812.
    BibTeX:
    @inproceedings{tamosiunaitemarkelickulvicius2011,
      author = {Tamosiunaite, M. and Markelic, I. and Kulvicius, T. and Wörgötter, F.},
      title = {Generalizing objects by analyzing language},
      pages = {557-563},
      booktitle = {11th IEEE-RAS International Conference on Humanoid Robots Humanoids},
      year = {2011},
      month = {10},
      doi = {10.1109/Humanoids.2011.6100812},
    }
    Abstract: Generalizing objects in an action context by a robot, for example addressing the problem "Which items can be cut with which tools?", is an unresolved and difficult problem. Answering such a question defines a complete action class, and robots cannot do this so far. We use a bootstrapping mechanism similar to that known from human language acquisition, and combine language with image analysis to create action classes built around the verb (action) in an utterance. A human teaches the robot a certain sentence, for example "Cut a sausage with a knife", from where on the machine generalizes the arguments (nouns) that the verb takes and searches for possible alternative nouns. Then, by way of an internet-based image search and a classification algorithm, image classes for the alternative nouns are extracted, by which a large "picture book" of the possible objects involved in an action is created. This concludes the generalization step. Using the same classifier, the machine can now also perform a recognition procedure. Without having seen the objects before, it can analyze a visual scene, discovering, for example, a cucumber and a mandolin, which match the earlier found nouns, allowing it to suggest actions like "I could cut a cucumber with a mandolin". The algorithm for generalizing objects by analyzing language (GOAL) presented here thus allows generalization and recognition of objects in an action context. It can then be combined with methods for action execution (e.g. action generation based on human demonstration) to execute so far unknown actions.
    Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F. (2011).
    A Novel Trajectory Generation Method for Robot Control. Journal of Intelligent & Robotic Systems, 68(2), 165-184. DOI: 10.1007/s10846-012-9683-8.
    BibTeX:
    @article{ningkulviciustamosiunaite2011,
      author = {Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F.},
      title = {A Novel Trajectory Generation Method for Robot Control},
      pages = {165-184},
      journal = {Journal of Intelligent \& Robotic Systems},
      year = {2011},
      volume= {68},
      number = {2},
      doi = {10.1007/s10846-012-9683-8},
    }
    Abstract: This paper presents a novel trajectory generator based on dynamic movement primitives (DMP). The key ideas from the original DMP formalism are extracted, reformulated and extended from a control-theoretical viewpoint. The method can generate smooth trajectories, satisfy position and velocity boundary conditions at the start- and endpoint with high precision, and accurately follow desired geometric paths. Paths can be complex and processed as a whole, and smooth transitions can be generated automatically. This novel trajectory-generation technique therefore appears to be a viable alternative to existing solutions, not only for service robotics but possibly also in industry.
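The boundary conditions the generator must satisfy (position and velocity at the start- and endpoint) can be made concrete with a classical quintic polynomial. This is not the paper's DMP-based method, only a self-contained illustration of the same constraint set; the tiny linear solver and all test values are invented.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def quintic(p0, v0, pT, vT, T):
    """Coefficients c of p(t) = sum c_i t^i with p(0)=p0, p'(0)=v0,
    p''(0)=0, p(T)=pT, p'(T)=vT, p''(T)=0."""
    A = [
        [1, 0, 0, 0, 0, 0],                      # p(0)
        [0, 1, 0, 0, 0, 0],                      # p'(0)
        [0, 0, 2, 0, 0, 0],                      # p''(0)
        [1, T, T**2, T**3, T**4, T**5],          # p(T)
        [0, 1, 2*T, 3*T**2, 4*T**3, 5*T**4],     # p'(T)
        [0, 0, 2, 6*T, 12*T**2, 20*T**3],        # p''(T)
    ]
    return solve(A, [p0, v0, 0.0, pT, vT, 0.0])

c = quintic(p0=0.0, v0=2.0, pT=1.0, vT=0.5, T=1.0)
pos = lambda t: sum(ci * t**i for i, ci in enumerate(c))
vel = lambda t: sum(i * ci * t**(i - 1) for i, ci in enumerate(c) if i > 0)
```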
    Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F. (2011).
    Accurate Position and Velocity Control for Trajectories Based on Dynamic Movement Primitives. IEEE International Conference on Robotics and Automation (ICRA), 5006-5011. DOI: 10.1109/ICRA.2011.5979668.
    BibTeX:
    @inproceedings{ningkulviciustamosiunaite2011a,
      author = {Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Accurate Position and Velocity Control for Trajectories Based on Dynamic Movement Primitives},
      pages = {5006-5011},
      booktitle = {IEEE International Conference on Robotics and Automation ICRA},
      year = {2011},
      doi = {10.1109/ICRA.2011.5979668},
    }
    Abstract: This paper presents a novel method for trajectory generation based on dynamic movement primitives (DMPs) treated from a control-theoretical perspective. We extended the key ideas from the original DMP formalism by introducing a velocity convergence mechanism in the reformulated system. A theoretical proof is given to guarantee its validity. The new method can deal with complex paths as a whole. Based on this, we can generate smooth trajectories with automatically generated transition zones, satisfy position and velocity boundary conditions at start and endpoint with high precision, and support multiple via-point applications. A theoretical proof of the method and experiments are presented.
    Manoonpong, P. and Kulvicius, T. and Wörgötter, F. and Kunze, L. and Renjewski, D. and Seyfarth, A. (2011).
    Compliant Ankles and Flat Feet for Improved Self-Stabilization and Passive Dynamics of the Biped Robot RunBot. The 2011 IEEE-RAS International Conference on Humanoid Robots, 276-281. DOI: 10.1109/Humanoids.2011.6100804.
    BibTeX:
    @inproceedings{manoonpongkulviciuswoergoetter2011,
      author = {Manoonpong, P. and Kulvicius, T. and Wörgötter, F. and Kunze, L. and Renjewski, D. and Seyfarth, A.},
      title = {Compliant Ankles and Flat Feet for Improved Self-Stabilization and Passive Dynamics of the Biped Robot RunBot},
      pages = {276 - 281},
      booktitle = {The 2011 IEEE-RAS International Conference on Humanoid Robots},
      year = {2011},
      month = {10},
      doi = {10.1109/Humanoids.2011.6100804},
    }
    Abstract: Biomechanical studies of human walking reveal that compliance plays an important role, at least in natural and smooth motions as well as for self-stabilization. Inspired by this, we present here the development of a new lower leg segment of the dynamic biped robot "RunBot". This new lower leg segment features a compliant ankle connected to a flat foot. It is mainly employed to realize robust self-stabilization in a passive manner. In general, such self-stabilization is achieved through mechanical feedback due to elasticity. Using real-time walking experiments, this study shows that the new lower leg segment improves the dynamic walking behavior of the robot in two main respects compared to an old lower leg segment consisting of a rigid ankle and curved foot: 1) it provides better self-stabilization after stumbling, and 2) it increases passive dynamics during some stages of the gait cycle of the robot, i.e., when the whole robot moves unactuated. As a consequence, a combination of compliance (i.e., the new lower leg segment) and active components (i.e., actuated hip and knee joints) driven by a neural mechanism (i.e., reflexive neural control) enables RunBot to perform robust self-stabilization and at the same time natural, smooth, and energy-efficient walking behavior without high control effort.
    Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F. (2011).
    Modified dynamic movement primitives for joining movement sequences. IEEE International Conference on Robotics and Automation, 2275-2280. DOI: 10.1109/ICRA.2011.5979716.
    BibTeX:
    @inproceedings{kulviciusningtamosiunaite2011,
      author = {Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Modified dynamic movement primitives for joining movement sequences},
      pages = {2275-2280},
      booktitle = {IEEE International Conference on Robotics and Automation},
      year = {2011},
      month = {05},
      doi = {10.1109/ICRA.2011.5979716},
    }
    Abstract: The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories, is still a difficult problem in robotics. This paper presents a novel approach for joining several dynamic movement primitives (DMPs), based on a modification of the original DMP formulation. The new method produces smooth and natural transitions in position as well as velocity space. The properties of the method are demonstrated by applying it to simulated handwriting generation implemented on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for trajectory learning and generation, and that its accuracy and modular character have potential for various robotics applications.
    Abramov, A. and Kulvicius, T. and Wörgötter, F. and Dellen, B. (2011).
    Real-Time Image Segmentation on a GPU. Facing the Multicore-Challenge (LNCS 6310), 131-142. DOI: 10.1007/978-3-642-16233-6_14.
    BibTeX:
    @inproceedings{abramovkulviciuswoergoetter2011,
      author = {Abramov, A. and Kulvicius, T. and Wörgötter, F. and Dellen, B.},
      title = {Real-Time Image Segmentation on a GPU},
      pages = {131-142},
      booktitle = {Facing the Multicore-Challenge},
      year = {2011},
      volume= {6310},
      doi = {10.1007/978-3-642-16233-6_14},
    }
    Abstract: Efficient segmentation of color images is important for many applications in computer vision. Non-parametric solutions are required in situations where little or no prior knowledge about the data is available. In this paper, we present a novel parallel image segmentation algorithm which segments images in real time in a non-parametric way. The algorithm finds the equilibrium states of a Potts model in the superparamagnetic phase of the system. Our method maps perfectly onto the Graphics Processing Unit (GPU) architecture and has been implemented using the NVIDIA Compute Unified Device Architecture (CUDA) framework. For images of 256 x 320 pixels we obtained a frame rate of 30 Hz, which demonstrates the applicability of the algorithm to real-time video-processing tasks.
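The underlying idea, neighboring pixels of similar color preferring equal Potts labels, can be caricatured with greedy zero-temperature sweeps. The paper instead finds superparamagnetic equilibrium states in parallel on a GPU; the sequential toy below only illustrates the energy being minimized, and every parameter value is invented.

```python
import random

def energy(image, labels, threshold=30):
    """Potts energy: similar-gray neighbor pairs whose labels differ."""
    h, w = len(image), len(image[0])
    e = 0
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w and abs(image[i][j] - image[ni][nj]) < threshold \
                        and labels[i][j] != labels[ni][nj]:
                    e += 1
    return e

def relax(image, labels, n_labels=4, sweeps=30, threshold=30):
    """Greedy sweeps: each pixel adopts the label that agrees with the
    most similar-gray neighbors, so the Potts energy never increases."""
    h, w = len(image), len(image[0])
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                counts = [0] * n_labels
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and \
                            abs(image[i][j] - image[ni][nj]) < threshold:
                        counts[labels[ni][nj]] += 1
                labels[i][j] = max(range(n_labels), key=lambda s: counts[s])
    return labels

random.seed(1)
img = [[0] * 4 + [200] * 4 for _ in range(8)]   # two flat gray regions
labels = [[random.randrange(4) for _ in range(8)] for _ in range(8)]
e_before = energy(img, labels)
labels = relax(img, labels)
e_after = energy(img, labels)
```

The gray-value threshold decouples the two regions, so each coalesces toward its own uniform label independently of the other.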
    Kulvicius, T. and Kolodziejski, C. and Tamosiunaite, M. and Porr, B. and Wörgötter, F. (2010).
    Behavioral analysis of differential Hebbian learning in closed-loop systems. Biological Cybernetics, 103(4), 255-271. DOI: 10.1007/s00422-010-0396-4.
    BibTeX:
    @article{kulviciuskolodziejskitamosiunaite20,
      author = {Kulvicius, T. and Kolodziejski, C. and Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
      title = {Behavioral analysis of differential Hebbian learning in closed-loop systems},
      pages = {255-271},
      journal = {Biological Cybernetics},
      year = {2010},
      volume= {103},
      number = {4},
      publisher = {Springer-Verlag},
      doi = {10.1007/s00422-010-0396-4},
    }
    Abstract: Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts which take learning into account, mostly measuring information of inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems for relatively simple cases. In the second part of this study we try to answer the following question: how can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy and entropy measures and investigating their development during learning. This way we can show that within well-specified scenarios there are indeed agents which are optimal with respect to their structure and adaptive properties.
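The rule under study, differential Hebbian learning, correlates an input with the temporal derivative of the output, and its essence fits in a few lines. This is a textbook-style discrete sketch, not the paper's closed-loop simulation framework; the traces and learning rate are made up.

```python
def differential_hebb(u, v, mu=0.1):
    """Differential Hebbian rule dw/dt = mu * u * dv/dt: an input u
    active while the output v rises is strengthened; one active
    while v falls is weakened."""
    w = 0.0
    for t in range(1, len(v)):
        dv = v[t] - v[t - 1]        # discrete temporal derivative of output
        w += mu * u[t] * dv
    return w

v = [0, 0, 1, 1, 0, 0]                               # output pulse
w_early = differential_hebb([0, 1, 1, 0, 0, 0], v)   # u precedes v: w grows
w_late = differential_hebb([0, 0, 0, 1, 1, 0], v)    # u lags v: w shrinks
```

The sign of the final weight thus encodes the temporal order of input and output, which is what makes the rule useful for predictive, STDP-like learning.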
    Markelic, I. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F. (2009).
    Anticipatory Driving for a Robot-Car Based on Supervised Learning. Lecture Notes in Computer Science: Anticipatory Behavior in Adaptive Learning Systems, 267-282.
    BibTeX:
    @article{markelickulviciustamosiunaite2009,
      author = {Markelic, I. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Anticipatory Driving for a Robot-Car Based on Supervised Learning},
      pages = {267-282},
      journal = {Lecture Notes in Computer Science: Anticipatory Behavior in Adaptive Learning Systems},
      year = {2009},
    }
    Abstract: Using look-ahead information and plan making improves human driving. We therefore propose that autonomously driving systems should also possess such abilities. We adopt a machine learning approach where the system, a car-like robot, is trained by an experienced driver by correlating visual input with human driving actions. The heart of the system is a database where look-ahead sensory information is stored together with action sequences issued by the human supervisor. The result is a robot that runs in real time and issues steering and velocity control in a human-like way. For steering we adopt a two-level approach, where the result of the database lookup is combined with an additional reactive controller for robust behavior. Concerning velocity control, this paper makes a novel contribution: the ability of the system to react adequately to upcoming curves.
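The database lookup at the heart of the system can be caricatured as nearest-neighbor retrieval of stored action sequences. The feature space, distance metric, and toy entries below are invented for illustration; the paper uses visual look-ahead features and recorded human action sequences.

```python
def nearest_action(database, observation):
    """Return the action sequence stored for the sensory situation
    closest (squared Euclidean distance) to the current observation."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    key = min(database, key=lambda k: dist(k, observation))
    return database[key]

# Toy database: (curvature_ahead, lateral_offset) -> (command, value) sequence
db = {
    (0.0, 0.0):  [("straight", 1.0)],
    (0.8, 0.0):  [("slow", 0.6), ("steer_left", 0.5)],   # upcoming left curve
    (-0.8, 0.0): [("slow", 0.6), ("steer_right", 0.5)],
}
plan = nearest_action(db, (0.7, 0.1))   # looks most like the left-curve entry
```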
    Review:
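The database mechanism described in the abstract above (look-ahead sensory features stored with the human supervisor's action sequences and retrieved by similarity at run time) can be illustrated with a minimal nearest-neighbour sketch. All class/variable names and the toy feature vectors are hypothetical; the paper's actual features and controllers differ.

```python
import numpy as np

# Hedged sketch: store look-ahead feature vectors together with the
# action sequences a human driver issued, then retrieve the actions of
# the most similar stored situation at run time.
class DrivingDatabase:
    def __init__(self):
        self.features = []   # look-ahead sensory vectors
        self.actions = []    # recorded (command, value) sequences

    def record(self, feature, action_seq):
        self.features.append(np.asarray(feature, dtype=float))
        self.actions.append(action_seq)

    def retrieve(self, feature):
        # nearest neighbour in feature space
        feature = np.asarray(feature, dtype=float)
        dists = [np.linalg.norm(f - feature) for f in self.features]
        return self.actions[int(np.argmin(dists))]

db = DrivingDatabase()
db.record([0.0, 0.1], [("steer", 0.0), ("velocity", 1.0)])  # straight road
db.record([0.8, 0.9], [("steer", 0.5), ("velocity", 0.4)])  # sharp curve
plan = db.retrieve([0.75, 0.85])  # curve coming up: slow down and steer
```

In the paper this lookup is combined with a reactive steering controller; the sketch only shows the retrieval step.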
    Tamosiunaite, M. and Ainge, J. and Kulvicius, T. and Porr, B. and Dudchenko, P. and Wörgötter, F. (2008).
    Path-finding in real and simulated rats: assessing the influence of path characteristics on navigation learning. Journal of Computational Neuroscience, 562-582, 25, 3.
    BibTeX:
    @article{tamosiunaiteaingekulvicius2008,
      author = {Tamosiunaite, M. and Ainge, J. and Kulvicius, T. and Porr, B. and Dudchenko, P. and Wörgötter, F.},
      title = {Path-finding in real and simulated rats: assessing the influence of path characteristics on navigation learning},
      pages = {562-582},
      journal = {Journal of Computational Neuroscience},
      year = {2008},
      volume= {25},
      number = {3},
      abstract = {A large body of experimental evidence suggests that the hippocampal place field system is involved in reward based navigation learning in rodents. Reinforcement learning (RL) mechanisms have been used to model this, associating the state space in an RL-algorithm to the place-field map in a rat. The convergence properties of RL-algorithms are affected by the exploration patterns of the learner. Therefore, we first analyzed the path characteristics of freely exploring rats in a test arena. We found that straight path segments with mean length 23 cm up to a maximal length of 80 cm take up a significant proportion of the total paths. Thus, rat paths are biased as compared to random exploration. Next we designed a RL system that reproduces these specific path characteristics. Our model arena is covered by overlapping, probabilistically firing place fields (PF) of realistic size and coverage. Because convergence of RL-algorithms is also influenced by the state space characteristics, different PF-sizes and densities, leading to a different degree of overlap, were also investigated. The model rat learns finding a reward opposite to its starting point. We observed that the combination of biased straight exploration, overlapping coverage and probabilistic firing will strongly impair the convergence of learning. When the degree of randomness in the exploration is increased, convergence improves, but the distribution of straight path segments becomes unrealistic and paths become wiggly. To mend this situation without affecting the path characteristic two additional mechanisms are implemented: a gradual drop of the learned weights (weight decay) and path length limitation, which prevents learning if the reward is not found after some expected time. Both mechanisms limit the memory of the system and thereby counteract effects of getting trapped on a wrong path. When using these strategies individually divergent cases get substantially reduced and for some parameter settings no divergence was found anymore at all. Using weight decay and path length limitation at the same time, convergence is not much improved but instead time to convergence increases as the memory limiting effect is getting too strong. The degree of improvement relies also on the size and degree of overlap (coverage density) in the place field system. The used combination of these two parameters leads to a trade-off between convergence and speed to convergence. Thus, this study suggests that the role of the PF-system in navigation learning cannot be considered independently from the animal's exploration pattern.}}
    		
    Abstract: A large body of experimental evidence suggests that the hippocampal place field system is involved in reward based navigation learning in rodents. Reinforcement learning (RL) mechanisms have been used to model this, associating the state space in an RL-algorithm to the place-field map in a rat. The convergence properties of RL-algorithms are affected by the exploration patterns of the learner. Therefore, we first analyzed the path characteristics of freely exploring rats in a test arena. We found that straight path segments with mean length 23 cm up to a maximal length of 80 cm take up a significant proportion of the total paths. Thus, rat paths are biased as compared to random exploration. Next we designed a RL system that reproduces these specific path characteristics. Our model arena is covered by overlapping, probabilistically firing place fields (PF) of realistic size and coverage. Because convergence of RL-algorithms is also influenced by the state space characteristics, different PF-sizes and densities, leading to a different degree of overlap, were also investigated. The model rat learns finding a reward opposite to its starting point. We observed that the combination of biased straight exploration, overlapping coverage and probabilistic firing will strongly impair the convergence of learning. When the degree of randomness in the exploration is increased, convergence improves, but the distribution of straight path segments becomes unrealistic and paths become wiggly. To mend this situation without affecting the path characteristic two additional mechanisms are implemented: a gradual drop of the learned weights (weight decay) and path length limitation, which prevents learning if the reward is not found after some expected time. Both mechanisms limit the memory of the system and thereby counteract effects of getting trapped on a wrong path. When using these strategies individually divergent cases get substantially reduced and for some parameter settings no divergence was found anymore at all. Using weight decay and path length limitation at the same time, convergence is not much improved but instead time to convergence increases as the memory limiting effect is getting too strong. The degree of improvement relies also on the size and degree of overlap (coverage density) in the place field system. The used combination of these two parameters leads to a trade-off between convergence and speed to convergence. Thus, this study suggests that the role of the PF-system in navigation learning cannot be considered independently from the animal's exploration pattern.
    Review:
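The two stabilising mechanisms this abstract describes (weight decay and path-length limitation) can be sketched in a toy 1-D reduction with overlapping Gaussian place fields. This is my simplification for illustration, not the paper's full model or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D place-field learner: the agent starts at x=0, reward sits at
# x=1. Two stabilising mechanisms from the abstract are included:
# per-step weight decay, and a path-length limit that aborts a trial
# when the reward is not found within the expected time.
centers = np.linspace(0.0, 1.0, 11)     # overlapping Gaussian place fields
sigma = 0.1

def place_activity(x):
    return np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

w = np.zeros_like(centers)              # learned preference for moving right
decay, lr, max_steps = 0.999, 0.5, 300  # decay per step, learning rate

for trial in range(300):
    x, steps = 0.0, 0
    while x < 1.0 and steps < max_steps:        # path-length limitation
        phi = place_activity(x)
        p_right = 1.0 / (1.0 + np.exp(-phi @ w))
        step = 0.1 if rng.random() < p_right else -0.1
        x = min(max(x + step, 0.0), 1.0)
        steps += 1
        w *= decay                              # weight decay
        if x >= 1.0:                            # reward found: reinforce
            w += lr * phi
```

After training, place fields near the goal carry positive weight, so the policy prefers stepping toward the reward there.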
    Kulvicius, T. and Tamosiunaite, M. and Ainge, J. and Dudchenko, P. and Wörgötter, F. (2008).
    Odor supported place cell model and goal navigation in rodents. Journal of Computational Neuroscience, 481-500, 25.
    BibTeX:
    @article{kulviciustamosiunaiteainge2008,
      author = {Kulvicius, T. and Tamosiunaite, M. and Ainge, J. and Dudchenko, P. and Wörgötter, F.},
      title = {Odor supported place cell model and goal navigation in rodents},
      pages = {481-500},
      journal = {Journal of Computational Neuroscience},
      year = {2008},
      volume= {25},
      abstract = {Experiments with rodents demonstrate that visual cues play an important role in the control of hippocampal place cells and spatial navigation. Nevertheless, rats may also rely on auditory, olfactory and somatosensory stimuli for orientation. It is also known that rats can track odors or self-generated scent marks to find a food source. Here we model odor supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal directed behavior when switching between two environments. We emphasize the importance of olfactory cues in place cell formation and show that the utility of environmental and self-generated olfactory cues, together with a mixed navigation strategy, improves goal directed navigation.}}
    		
    Abstract: Experiments with rodents demonstrate that visual cues play an important role in the control of hippocampal place cells and spatial navigation. Nevertheless, rats may also rely on auditory, olfactory and somatosensory stimuli for orientation. It is also known that rats can track odors or self-generated scent marks to find a food source. Here we model odor supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal directed behavior when switching between two environments. We emphasize the importance of olfactory cues in place cell formation and show that the utility of environmental and self-generated olfactory cues, together with a mixed navigation strategy, improves goal directed navigation.
    Review:
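The Q-learning component mentioned in this abstract can be illustrated generically; the tabular grid-world below is a textbook stand-in (the paper's odor self-marking mechanism is not reproduced here, and the arena is a made-up 5x5 grid).

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard tabular Q-learning on a toy 5x5 arena with the reward in the
# far corner -- a generic illustration of the algorithm, not the paper's
# odor-supported navigation model.
N, goal = 5, (4, 4)
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = np.zeros((N, N, 4))
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(500):
    s = (0, 0)
    while s != goal:
        # epsilon-greedy action selection
        a = int(rng.integers(4)) if rng.random() < eps else int(Q[s].argmax())
        ns = (min(max(s[0] + moves[a][0], 0), N - 1),
              min(max(s[1] + moves[a][1], 0), N - 1))
        r = 1.0 if ns == goal else 0.0
        Q[s][a] += alpha * (r + gamma * Q[ns].max() - Q[s][a])
        s = ns
```

After training, states next to the goal have action values near 1, and the value has propagated back to the start state.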
    Porr, B. and Kulvicius, T. and Wörgötter, F. (2007).
    Improved stability and convergence with three factor learning. Neurocomputing, 2005-2008, 70.
    BibTeX:
    @article{porrkulviciuswoergoetter2007,
      author = {Porr, B. and Kulvicius, T. and Wörgötter, F.},
      title = {Improved stability and convergence with three factor learning},
      pages = {2005-2008},
      journal = {Neurocomputing},
      year = {2007},
      volume= {70}}
    		
    Abstract:
    Review:
    Manoonpong, P. and Geng, T. and Porr, B. and Kulvicius, T. and Wörgötter, F. (2007).
    Adaptive, Fast Walking in a Biped Robot under Neuronal Control and Learning. PLoS Computational Biology, e134, 3, 7. DOI: 10.1371/journal.pcbi.0030134.
    BibTeX:
    @article{manoonponggengporr2007a,
      author = {Manoonpong, P. and Geng, T. and Porr, B. and Kulvicius, T. and Wörgötter, F.},
      title = {Adaptive, Fast Walking in a Biped Robot under Neuronal Control and Learning},
      journal = {PLoS Computational Biology},
      pages = {e134},
      volume= {3},
      number = {7},
      year = {2007},
      doi = {10.1371/journal.pcbi.0030134},
      abstract = {Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested, sensori-motor loops where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot, which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk with a high speed (3.0 leg length/s), self-adapting to minor disturbances, and reacting in a robust way to abruptly induced gait changes. At the same time, it can learn walking on different terrains, requiring only few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself, combined with synaptic learning may be a way forward to better understand and solve coordination problems in other complex motor tasks.}}
    		
    Abstract: Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested, sensori-motor loops where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot, which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk with a high speed (3.0 leg length/s), self-adapting to minor disturbances, and reacting in a robust way to abruptly induced gait changes. At the same time, it can learn walking on different terrains, requiring only few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself, combined with synaptic learning may be a way forward to better understand and solve coordination problems in other complex motor tasks.
    Review:
    Kulvicius, T. and Porr, B. and Wörgötter, F. (2007).
    Development of Receptive Fields in a Closed-loop Behavioural System. Neurocomputing, 2046-2049, 70, 10-12. DOI: 10.1016/j.neucom.2006.10.132.
    BibTeX:
    @article{kulviciusporrwoergoetter2007,
      author = {Kulvicius, T. and Porr, B. and Wörgötter, F.},
      title = {Development of Receptive Fields in a Closed-loop Behavioural System},
      pages = {2046--2049},
      journal = {Neurocomputing},
      year = {2007},
      volume= {70},
      number = {10-12},
      publisher = {Elsevier Science Publishers B. V},
      url = {http://www.sciencedirect.com/science/article/pii/S0925231206004127},
      doi = {10.1016/j.neucom.2006.10.132}}
    		
    Abstract:
    Review:
    Kulvicius, T. and Porr, B. and Wörgötter, F. (2007).
    Chained learning architectures in a simple closed-loop behavioural context. Biological Cybernetics, 363-378, 97. DOI: 10.1007/s00422-007-0176-y.
    BibTeX:
    @article{kulviciusberndwoergoetter2007,
      author = {Kulvicius, T. and Porr, B. and Wörgötter, F.},
      title = {Chained learning architectures in a simple closed-loop behavioural context},
      pages = {363-378},
      journal = {Biological Cybernetics},
      year = {2007},
      volume= {97},
      doi = {10.1007/s00422-007-0176-y}}
    		
    Abstract:
    Review:
    Kulvicius, T. and Geng, T. and Porr, B. and Wörgötter, F. (2006).
    Speed Optimization of a 2D Walking Robot through STDP. Dynamical principles for neuroscience and intelligent biomimetic devices: EPFL LATSIS Symposium 2006, 99-100.
    BibTeX:
    @inproceedings{kulviciusgengporr2006,
      author = {Kulvicius, T. and Geng, T. and Porr, B. and Wörgötter, F.},
      title = {Speed Optimization of a 2D Walking Robot through STDP},
      pages = {99-100},
      booktitle = {Dynamical principles for neuroscience and intelligent biomimetic devices: EPFL LATSIS Symposium 2006},
      year = {2006}}
    		
    Abstract:
    Review:
    Kulvicius, T. and Tamosiunaite, M. and Vaisnys, R. (2005).
    T Wave Alternans Features for Automated Detection. Informatica, 587-602, 91, 4.
    BibTeX:
    @article{kulviciustamosiunaitevaisnys2005,
      author = {Kulvicius, T. and Tamosiunaite, M. and Vaisnys, R.},
      title = {T Wave Alternans Features for Automated Detection},
      pages = {587-602},
      journal = {Informatica},
      year = {2005},
      volume= {91},
      number = {4},
      url = {http://www.bccn-goettingen.de/Publi}}
    		
    Abstract:
    Review:
    Stein, S. and Wörgötter, F. and Schoeler, M. and Papon, J. and Kulvicius, T. (2014).
    Convexity based object partitioning for robot applications. IEEE International Conference on Robotics and Automation (ICRA), 3213-3220. DOI: 10.1109/ICRA.2014.6907321.
    BibTeX:
    @inproceedings{steinwoergoetterschoeler2014,
      author = {Stein, S. and Wörgötter, F. and Schoeler, M. and Papon, J. and Kulvicius, T.},
      title = {Convexity based object partitioning for robot applications},
      pages = {3213-3220},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2014},
      month = {05},
      doi = {10.1109/ICRA.2014.6907321},
      abstract = {The idea that connected convex surfaces, separated by concave boundaries, play an important role for the perception of objects and their decomposition into parts has been discussed for a long time. Based on this idea, we present a new bottom-up approach for the segmentation of 3D point clouds into object parts. The algorithm approximates a scene using an adjacency-graph of spatially connected surface patches. Edges in the graph are then classified as either convex or concave using a novel, strictly local criterion. Region growing is employed to identify locally convex connected subgraphs, which represent the object parts. We show quantitatively that our algorithm, although conceptually easy to grasp and fast to compute, produces results that are comparable to far more complex state-of-the-art methods which use classification, learning and model fitting. This suggests that convexity/concavity is a powerful feature for object partitioning using 3D data. Furthermore we demonstrate that for many objects a natural decomposition into}}
    		
    Abstract: The idea that connected convex surfaces, separated by concave boundaries, play an important role for the perception of objects and their decomposition into parts has been discussed for a long time. Based on this idea, we present a new bottom-up approach for the segmentation of 3D point clouds into object parts. The algorithm approximates a scene using an adjacency-graph of spatially connected surface patches. Edges in the graph are then classified as either convex or concave using a novel, strictly local criterion. Region growing is employed to identify locally convex connected subgraphs, which represent the object parts. We show quantitatively that our algorithm, although conceptually easy to grasp and fast to compute, produces results that are comparable to far more complex state-of-the-art methods which use classification, learning and model fitting. This suggests that convexity/concavity is a powerful feature for object partitioning using 3D data. Furthermore we demonstrate that for many objects a natural decomposition into
    Review:
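A local convexity test between adjacent surface patches, of the kind this abstract describes, can be sketched as follows. This is a simplified version based on the geometric intuition (on a convex surface the normals "open away" from each other along the line joining the patch centroids); the paper's exact criterion and its additional checks are not reproduced.

```python
import numpy as np

# Simplified local convexity test between two adjacent surface patches,
# given their centroids c1, c2 and (unit) surface normals n1, n2.
# Assumption, not the paper's exact formula: the connection is convex
# when (n1 - n2) projected onto (c2 - c1) is negative.
def is_convex_connection(c1, n1, c2, n2):
    d = np.asarray(c2, float) - np.asarray(c1, float)
    return float(np.dot(np.asarray(n1, float) - np.asarray(n2, float), d)) < 0.0

# Two patches on the outside of a sphere: a convex connection.
convex = is_convex_connection([1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0])
# Two patches on the inside of a bowl: a concave connection.
concave = is_convex_connection([1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0])
```

Region growing over edges that pass this test then collects locally convex patch groups into object parts.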
    Schoeler, M. and Wörgötter, F. and Aein, M. and Kulvicius, T. (2014).
    Automated generation of training sets for object recognition in robotic applications. 23rd International Conference on Robotics in Alpe-Adria-Danube Region (RAAD), 1-7. DOI: 10.1109/RAAD.2014.7002247.
    BibTeX:
    @inproceedings{schoelerwoergoetteraein2014,
      author = {Schoeler, M. and Wörgötter, F. and Aein, M. and Kulvicius, T.},
      title = {Automated generation of training sets for object recognition in robotic applications},
      pages = {1-7},
      booktitle = {23rd International Conference on Robotics in Alpe-Adria-Danube Region (RAAD)},
      year = {2014},
      month = {Sept},
      doi = {10.1109/RAAD.2014.7002247},
      abstract = {Object recognition plays an important role in robotics, since objects/tools first have to be identified in the scene before they can be manipulated/used. The performance of object recognition largely depends on the training dataset. Usually such training sets are gathered manually by a human operator, a tedious procedure, which ultimately limits the size of the dataset. One reason for manual selection of samples is that results returned by search engines often contain irrelevant images, mainly due to the problem of homographs (words spelled the same but with different meanings). In this paper we present an automated and unsupervised method, coined Trainingset Cleaning by Translation (TCT), for generation of training sets which are able to deal with the problem of homographs. For disambiguation, it uses the context provided by a command like "tighten the nut" together with a combination of public image searches, text searches and translation services. We compare our approach against plain Google image search qualitatively as well as in a classification task and demonstrate that our method indeed leads to a task-relevant training set, which results in an improvement of 24.1% in object recognition for 12 ambiguous classes. In addition, we present an application of our method to a real robot scenario.}}
    		
    Abstract: Object recognition plays an important role in robotics, since objects/tools first have to be identified in the scene before they can be manipulated/used. The performance of object recognition largely depends on the training dataset. Usually such training sets are gathered manually by a human operator, a tedious procedure, which ultimately limits the size of the dataset. One reason for manual selection of samples is that results returned by search engines often contain irrelevant images, mainly due to the problem of homographs (words spelled the same but with different meanings). In this paper we present an automated and unsupervised method, coined Trainingset Cleaning by Translation (TCT), for generation of training sets which are able to deal with the problem of homographs. For disambiguation, it uses the context provided by a command like "tighten the nut" together with a combination of public image searches, text searches and translation services. We compare our approach against plain Google image search qualitatively as well as in a classification task and demonstrate that our method indeed leads to a task-relevant training set, which results in an improvement of 24.1% in object recognition for 12 ambiguous classes. In addition, we present an application of our method to a real robot scenario.
    Review:
    Schoeler, M. and Wörgötter, F. and Papon, J. and Kulvicius, T. (2015).
    Unsupervised generation of context-relevant training-sets for visual object recognition employing multilinguality. IEEE Winter Conference on Applications of Computer Vision (WACV), 805-812. DOI: 10.1109/WACV.2015.112.
    BibTeX:
    @inproceedings{schoelerwoergoetterpapon2015,
      author = {Schoeler, M. and Wörgötter, F. and Papon, J. and Kulvicius, T.},
      title = {Unsupervised generation of context-relevant training-sets for visual object recognition employing multilinguality},
      pages = {805-812},
      booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
      year = {2015},
      month = {Jan},
      doi = {10.1109/WACV.2015.112},
      abstract = {Image based object classification requires clean training data sets. Gathering such sets is usually done manually by humans, which is time-consuming and laborious. On the other hand, directly using images from search engines creates very noisy data due to ambiguous noun-focused indexing. However, in daily speech nouns and verbs are always coupled. We use this for the automatic generation of clean data sets by the here-presented TRANSCLEAN algorithm, which through the use of multiple languages also solves the problem of polysemes (a single spelling with multiple meanings). Thus, we use the implicit knowledge contained in verbs, e.g. in an imperative such as}}
    		
    Abstract: Image based object classification requires clean training data sets. Gathering such sets is usually done manually by humans, which is time-consuming and laborious. On the other hand, directly using images from search engines creates very noisy data due to ambiguous noun-focused indexing. However, in daily speech nouns and verbs are always coupled. We use this for the automatic generation of clean data sets by the here-presented TRANSCLEAN algorithm, which through the use of multiple languages also solves the problem of polysemes (a single spelling with multiple meanings). Thus, we use the implicit knowledge contained in verbs, e.g. in an imperative such as
    Review:
    Tetzlaff, C. and Dasgupta, S. and Kulvicius, T. and Wörgötter, F. (2015).
    The Use of Hebbian Cell Assemblies for Nonlinear Computation. Scientific Reports, 5. DOI: 10.1038/srep12866.
    BibTeX:
    @article{tetzlaffdasguptakulvicius2015,
      author = {Tetzlaff, C. and Dasgupta, S. and Kulvicius, T. and Wörgötter, F.},
      title = {The Use of Hebbian Cell Assemblies for Nonlinear Computation},
      journal = {Scientific Reports},
      year = {2015},
      volume= {5},
      publisher = {Nature Publishing Group},
      url = {http://www.nature.com/articles/srep12866},
      doi = {10.1038/srep12866},
      abstract = {When learning a complex task our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns. During this, a network with multiple, simultaneously active, and computationally powerful cell assemblies is created. How such ordered structures are formed while preserving a rich diversity of neural dynamics needed for computation is still unknown. Here we show that the combination of synaptic plasticity with the slower process of synaptic scaling achieves (i) the formation of cell assemblies and (ii) enhances the diversity of neural dynamics facilitating the learning of complex calculations. Due to synaptic scaling the dynamics of different cell assemblies do not interfere with each other. As a consequence, this type of self-organization allows executing a difficult, six degrees of freedom, manipulation task with a robot where assemblies need to learn computing complex non-linear transforms and - for execution - must cooperate with each other without interference. This mechanism, thus, permits the self-organization of computationally powerful sub-structures in dynamic networks for behavior control.}}
    		
    Abstract: When learning a complex task our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns. During this, a network with multiple, simultaneously active, and computationally powerful cell assemblies is created. How such ordered structures are formed while preserving a rich diversity of neural dynamics needed for computation is still unknown. Here we show that the combination of synaptic plasticity with the slower process of synaptic scaling achieves (i) the formation of cell assemblies and (ii) enhances the diversity of neural dynamics facilitating the learning of complex calculations. Due to synaptic scaling the dynamics of different cell assemblies do not interfere with each other. As a consequence, this type of self-organization allows executing a difficult, six degrees of freedom, manipulation task with a robot where assemblies need to learn computing complex non-linear transforms and - for execution - must cooperate with each other without interference. This mechanism, thus, permits the self-organization of computationally powerful sub-structures in dynamic networks for behavior control.
    Review:
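The core interplay this abstract describes (fast Hebbian growth, which is unstable on its own, stabilised by the slower process of synaptic scaling) can be sketched for a single unit. The rates and the exact form of the scaling term are my assumptions for illustration, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(2)

# One postsynaptic unit with n plastic synapses. Hebbian growth alone
# would diverge; a slower multiplicative scaling term pulls the
# postsynaptic rate toward a target and keeps the weights bounded.
n = 10
w = rng.uniform(0.1, 0.2, size=n)       # initial synaptic weights
target = 1.0                            # desired postsynaptic rate
eta_hebb, eta_scale = 0.01, 0.1         # assumed fast/slow rates

for _ in range(2000):
    u = rng.uniform(0.5, 1.0, size=n)   # presynaptic rates
    v = float(w @ u) / n                # postsynaptic rate
    w += eta_hebb * u * v               # Hebbian term (growth only)
    w += eta_scale * (target - v) * w   # scaling stabilises the rate
```

The weights settle where the two terms balance, so the postsynaptic rate hovers near the target instead of running away.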
    Herzog, S. and Wörgötter, F. and Kulvicius, T. (2017).
    Generation of movements with boundary conditions based on optimal control theory. Robotics and Autonomous Systems, 1 - 11, 94. DOI: 10.1016/j.robot.2017.04.006.
    BibTeX:
    @article{herzogwoergoetterkulvicius2017,
      author = {Herzog, S. and Wörgötter, F. and Kulvicius, T.},
      title = {Generation of movements with boundary conditions based on optimal control theory},
      pages = {1 - 11},
      journal = {Robotics and Autonomous Systems},
      year = {2017},
      volume= {94},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889016300963},
      doi = {10.1016/j.robot.2017.04.006},
      abstract = {Trajectory generation methods play an important role in robotics since they are essential for the execution of actions. In this paper we present a novel trajectory generation method for generalization of accurate movements with boundary conditions. Our approach originates from optimal control theory and is based on a second order dynamic system. We evaluate our method and compare it to the state of the art movement generation methods in both simulations and real robot experiments. We show that the new method is very compact in its representation and can reproduce reference trajectories with zero error. Moreover, it has most of the features of the state of the art movement generation methods such as robustness to perturbations and generalization to new position and velocity boundary conditions. We believe that, due to these features, our method may have potential for robotic applications where high accuracy is required paired with flexibility, for example, in modern industrial robotic applications, where more flexibility will be demanded as well as in medical robotics.}}
    		
    Abstract: Trajectory generation methods play an important role in robotics since they are essential for the execution of actions. In this paper we present a novel trajectory generation method for generalization of accurate movements with boundary conditions. Our approach originates from optimal control theory and is based on a second order dynamic system. We evaluate our method and compare it to the state of the art movement generation methods in both simulations and real robot experiments. We show that the new method is very compact in its representation and can reproduce reference trajectories with zero error. Moreover, it has most of the features of the state of the art movement generation methods such as robustness to perturbations and generalization to new position and velocity boundary conditions. We believe that, due to these features, our method may have potential for robotic applications where high accuracy is required paired with flexibility, for example, in modern industrial robotic applications, where more flexibility will be demanded as well as in medical robotics.
    Review:
    Herzog, S. and Wörgötter, F. and Kulvicius, T. (2016).
    Optimal trajectory generation for generalization of discrete movements with boundary conditions. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3143-3149. DOI: 10.1109/IROS.2016.7759486.
    BibTeX:
    @inproceedings{herzogwoergoetterkulvicius2016,
      author = {Herzog, S. and Wörgötter, F. and Kulvicius, T.},
      title = {Optimal trajectory generation for generalization of discrete movements with boundary conditions},
      pages = {3143-3149},
      booktitle = {2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2016},
      month = {Oct},
      doi = {10.1109/IROS.2016.7759486},
      abstract = {Trajectory generation methods play an important role in robotics since they are essential for the execution of actions. In this paper we present a novel trajectory generation method for generalization of accurate movements with boundary conditions. Our approach originates from optimal control theory and is based on a second order dynamic system. We evaluate our method and compare it to state-of-the-art movement generation methods in both simulations and a real robot experiment. We show that the new method is very compact in its representation and can reproduce demonstrated trajectories with zero error. Moreover, it has most of the properties of the state-of-the-art trajectory generation methods such as robustness to perturbations and generalisation to new boundary position and velocity conditions. We believe that, due to these features, our method has great potential for various robotic applications, especially, where high accuracy is required, for example, in industrial and medical robotics.}}
    		
    Abstract: Trajectory generation methods play an important role in robotics since they are essential for the execution of actions. In this paper we present a novel trajectory generation method for generalization of accurate movements with boundary conditions. Our approach originates from optimal control theory and is based on a second order dynamic system. We evaluate our method and compare it to state-of-the-art movement generation methods in both simulations and a real robot experiment. We show that the new method is very compact in its representation and can reproduce demonstrated trajectories with zero error. Moreover, it has most of the properties of the state-of-the-art trajectory generation methods such as robustness to perturbations and generalisation to new boundary position and velocity conditions. We believe that, due to these features, our method has great potential for various robotic applications, especially, where high accuracy is required, for example, in industrial and medical robotics.
    Review:
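The two Herzog et al. entries above describe optimal-control-based movement generation with boundary conditions. A classic optimal-control result can serve as a hedged stand-in: the minimum-jerk trajectory between given boundary positions, velocities and accelerations is a quintic polynomial whose six coefficients follow from the six boundary values. This textbook solution illustrates the idea; it is not the paper's exact second-order system.

```python
import numpy as np

# Minimum-jerk trajectory x(t) = sum_k c_k t^k on [0, T]: solve the
# 6x6 linear system given by the boundary conditions at t=0 and t=T.
def quintic(x0, v0, a0, x1, v1, a1, T):
    A = np.array([
        [1, 0, 0,      0,       0,       0],        # x(0)
        [0, 1, 0,      0,       0,       0],        # x'(0)
        [0, 0, 2,      0,       0,       0],        # x''(0)
        [1, T, T**2,   T**3,    T**4,    T**5],     # x(T)
        [0, 1, 2*T,    3*T**2,  4*T**3,  5*T**4],   # x'(T)
        [0, 0, 2,      6*T,     12*T**2, 20*T**3],  # x''(T)
    ], dtype=float)
    return np.linalg.solve(A, np.array([x0, v0, a0, x1, v1, a1], float))

# Move from rest at x=0 to x=1 with a nonzero end velocity of 0.5 in T=2s.
c = quintic(0.0, 0.0, 0.0, 1.0, 0.5, 0.0, 2.0)
t = 2.0
x_end = sum(ck * t**k for k, ck in enumerate(c))
v_end = sum(k * ck * t**(k - 1) for k, ck in enumerate(c) if k > 0)
```

By construction the generated trajectory meets the boundary position and velocity exactly, mirroring the "zero error at the boundaries" property emphasised in the abstracts.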

    © 2011 - 2016 Dept. of Computational Neuroscience • comments to: sreich _at_ gwdg.de • Impressum / Site Info