Dr. Minija Tamosiunaite

Group(s): Neural Control and Robotics, Computer Vision
Email: minija.tamosiunaite@phys.uni-goettingen.de
Phone: +49 551/ 39 10763
Room: E.01.104


    Kulvicius, T. and Biehl, M. and Aein, M J. and Tamosiunaite, M. and Wörgötter, F. (2013).
    Interaction learning for dynamic movement primitives used in cooperative robotic tasks. Robotics and Autonomous Systems, 61(12), 1450-1459. DOI: 10.1016/j.robot.2013.07.009.
    BibTeX:
    @article{kulviciusbiehlaein2013,
      author = {Kulvicius, T. and Biehl, M. and Aein, M J. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Interaction learning for dynamic movement primitives used in cooperative robotic tasks},
      pages = {1450 - 1459},
      journal = {Robotics and Autonomous Systems},
      year = {2013},
      volume= {61},
      number = {12},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889013001358},
      doi = {10.1016/j.robot.2013.07.009}}
    Abstract: In recent years, dynamic movement primitives (DMPs) have moved more and more into the center of interest for flexible movement control in robotics. In this study we introduce sensory feedback together with a predictive learning mechanism which allows tightly coupled dual-agent systems to learn an adaptive, sensor-driven interaction based on DMPs. The coupled conventional (no sensors, no learning) DMP system automatically equilibrates and can still be solved analytically, allowing us to derive conditions for stability. When adding adaptive sensor control, we can show that both agents learn to cooperate. Simulations as well as real-robot experiments are shown. Interestingly, all these mechanisms are entirely based on low-level interactions without any planning or cognitive component.
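Several entries above build on dynamic movement primitives. As a minimal orientation, the standard discrete DMP (in the widely used Ijspeert-style formulation, not the coupled dual-agent system of the paper) can be integrated like this; all names and parameter values are illustrative:

```python
def integrate_dmp(y0, g, tau=1.0, dt=0.001,
                  alpha_z=25.0, beta_z=25.0 / 4, alpha_x=8.0, forcing=None):
    """Euler-integrate a single-DOF discrete DMP.

    Transformation system: tau*dz = alpha_z*(beta_z*(g - y) - z) + f(x)
                           tau*dy = z
    Canonical system:      tau*dx = -alpha_x * x
    With f = 0 this is a critically damped point attractor pulling y to g.
    """
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(int(tau / dt)):
        f = forcing(x) if forcing else 0.0
        dz = (alpha_z * (beta_z * (g - y) - z) + f) / tau
        dy = z / tau
        dx = -alpha_x * x / tau
        z += dz * dt
        y += dy * dt
        x += dx * dt
        traj.append(y)
    return traj

traj = integrate_dmp(0.0, 1.0)  # converges to the goal g = 1.0
```

A learned forcing term (weighted Gaussian basis functions of the canonical state `x`) shapes the trajectory between start and goal; with `forcing=None` the sketch reduces to the plain attractor dynamics.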
    Aein, M J. and Aksoy, E E. and Tamosiunaite, M. and Papon, J. and Ude, A. and Wörgötter, F. (2013).
    Toward a library of manipulation actions based on Semantic Object-Action Relations. IEEE/RSJ International Conference on Intelligent Robots and Systems. DOI: 10.1109/IROS.2013.6697011.
    BibTeX:
    @inproceedings{aeinaksoytamosiunaite2013,
      author = {Aein, M J. and Aksoy, E E. and Tamosiunaite, M. and Papon, J. and Ude, A. and Wörgötter, F.},
      title = {Toward a library of manipulation actions based on Semantic Object-Action Relations},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems},
      year = {2013},
      doi = {10.1109/IROS.2013.6697011}}
    Abstract: The goal of this study is to provide an architecture for a generic definition of robot manipulation actions. We emphasize that the representation of actions presented here is procedural. Thus, we define the structural elements of our action representations as execution protocols. To achieve this, manipulations are defined using three levels. The top level defines objects, their relations and the actions in an abstract and symbolic way. A mid-level sequencer, with which the action primitives are chained, is used to structure the actual action execution, which is performed via the bottom level. This (lowest) level collects data from sensors and communicates with the control system of the robot. This method enables robot manipulators to execute the same action in different situations, i.e., on different objects with different positions and orientations. In addition, two methods of detecting action failure are provided, which are necessary to handle faults in the system. To demonstrate the effectiveness of the proposed framework, several different actions are performed on our robotic setup and results are shown. In this way we are creating a library of human-like robot actions, which can be used by higher-level task planners to execute more complex tasks.
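The three-level structure described in the abstract (symbolic top level, mid-level sequencer, low-level sensing/control) can be sketched as follows. This is a hypothetical illustration only; the class names, the `ACTION_LIBRARY` mapping, and the primitive names are invented for this sketch and are not the authors' implementation:

```python
from typing import List

class LowLevel:
    """Bottom level: would talk to sensors and the robot controller (stubbed here)."""
    def execute(self, primitive: str) -> bool:
        # A real system would command the robot and report
        # sensor-based success or failure for this primitive.
        print(f"executing primitive: {primitive}")
        return True

class Sequencer:
    """Mid level: chains action primitives and checks each step for failure."""
    def __init__(self, low: LowLevel):
        self.low = low

    def run(self, primitives: List[str]) -> bool:
        for p in primitives:
            if not self.low.execute(p):
                return False  # action failure detected, abort the chain
        return True

# Top level: a symbolic action name maps to a chain of primitives.
ACTION_LIBRARY = {
    "pick_and_place": ["approach", "grasp", "lift", "move", "release"],
}

ok = Sequencer(LowLevel()).run(ACTION_LIBRARY["pick_and_place"])
```

The point of the layering is that the symbolic plan stays object- and pose-independent, while failure handling lives at the sequencer level where per-primitive feedback is available.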
    Kulvicius, T. and Markelic, I. and Tamosiunaite, M. and Wörgötter, F. (2013).
    Semantic image search for robotic applications. Proc. of 22nd Int. Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2013), 1-8.
    BibTeX:
    @inproceedings{kulviciusmarkelictamosiunaite2013,
      author = {Kulvicius, T. and Markelic, I. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Semantic image search for robotic applications},
      pages = {1-8},
      booktitle = {Proc. of 22nd Int. Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2013)},
      year = {2013},
      location = {Portorož (Slovenia)},
      month = {September 11-13}}
    Abstract: Generalization is one of the most important problems in robotics. New generalization approaches use internet databases in order to solve new tasks. Modern search engines can return a large amount of information for a query within milliseconds; however, not all of the returned information is task-relevant, partly due to the problem of polysemes. Here we specifically address the problem of object generalization by using image search. We suggest a bi-modal solution, combining visual and textual information, based on the observation that humans use additional linguistic cues to demarcate intended word meaning. We evaluate the quality of our approach by comparing it to human-labelled data and find that, on average, our approach leads to improved results in comparison to Google searches, and that it can treat the problem of polysemes.
    Aksoy, E E. and Tamosiunaite, M. and Vuga, R. and Ude, A. and Geib, C. and Steedman, M. and Wörgötter, F. (2013).
    Structural bootstrapping at the sensorimotor level for the fast acquisition of action knowledge for cognitive robots. IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 1-8. DOI: 10.1109/DevLrn.2013.6652537.
    BibTeX:
    @inproceedings{aksoytamosiunaitevuga2013,
      author = {Aksoy, E E. and Tamosiunaite, M. and Vuga, R. and Ude, A. and Geib, C. and Steedman, M. and Wörgötter, F.},
      title = {Structural bootstrapping at the sensorimotor level for the fast acquisition of action knowledge for cognitive robots},
      pages = {1--8},
      booktitle = {IEEE International Conference on Development and Learning and Epigenetic Robotics ICDL-EPIROB},
      year = {2013},
      location = {Osaka (Japan)},
      month = {08},
      doi = {10.1109/DevLrn.2013.6652537}}
    Abstract: Autonomous robots are faced with the problem of encoding complex actions (e.g. complete manipulations) in a generic and generalizable way. Recently we introduced Semantic Event Chains (SECs) as a new representation which can be computed directly from a stream of 3D images and is based on changes in the relationships between the objects involved in a manipulation. Here we show that the SEC framework can be extended (the "extended SEC") with action-related information and used to achieve and encode two important cognitive properties relevant for advanced autonomous robots: the extended SEC enables us to determine whether an action representation (1) needs to be newly created and stored in its entirety in the robot's memory, or (2) whether one of the already known and memorized action representations just needs to be refined. In human cognition these two processes (1 and 2) are known as accommodation and assimilation. Thus, we show that the extended SEC representation can be used to realize these processes, originally defined by Piaget, for the first time in a robotic application. This is of fundamental importance for any cognitive agent, as it allows categorizing observed actions into new versus known ones, storing only the relevant aspects.
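The core idea behind a Semantic Event Chain, as used in the SEC papers above, is that only the time points where object-pair relations change carry the action's semantics. A toy sketch of that reduction (the relation encoding here, 0 = not touching / 1 = touching, is a simplification for illustration):

```python
def event_chain(relations):
    """Reduce a per-frame sequence of object-pair relation tuples to the
    sequence of distinct relational states, i.e. keep only frames where
    at least one pair relation changed."""
    chain = []
    for frame in relations:
        if not chain or frame != chain[-1]:
            chain.append(frame)
    return chain

# Two object pairs observed over six frames, e.g. (hand-object, object-table):
frames = [(0, 0), (0, 0), (1, 0), (1, 0), (1, 1), (0, 1)]
chain = event_chain(frames)  # → [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Two manipulations with the same chain of relational changes can then be treated as the same action type regardless of object identity, pose, or timing, which is what makes the representation suitable for comparing a newly observed action against memorized ones.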
    Wörgötter, F. and Aksoy, E. E. and Krüger, N. and Piater, J. and Ude, A. and Tamosiunaite, M. (2013).
    A Simple Ontology of Manipulation Actions based on Hand-Object Relations. IEEE Transactions on Autonomous Mental Development, 5(2), 117-134. DOI: 10.1109/TAMD.2012.2232291.
    BibTeX:
    @article{woergoetteraksoykrueger2013,
      author = {Wörgötter, F. and Aksoy, E. E. and Krüger, N. and Piater, J. and Ude, A. and Tamosiunaite, M.},
      title = {A Simple Ontology of Manipulation Actions based on Hand-Object Relations},
      pages = {117 - 134},
      journal = {IEEE Transactions on Autonomous Mental Development},
      year = {2013},
      volume= {05},
      number = {02},
      month = {06},
      doi = {10.1109/TAMD.2012.2232291}}
    Abstract: Humans can perform a multitude of different actions with their hands (manipulations). In spite of this, so far there have been only a few attempts to represent manipulation types in a way that helps to understand the underlying principles. Here we first discuss how manipulation actions are structured in space and time. As temporal anchor points we use those moments where two objects (or hand and object) touch or un-touch each other during a manipulation. We show that by this one can define a relatively small, tree-like manipulation ontology; we find fewer than 30 fundamental manipulations. The temporal anchors also provide us with information about when to pay attention to additional important information, for example when to consider trajectory shapes and relative poses between objects. As a consequence, a highly condensed representation emerges by which different manipulations can be recognized and encoded. Examples of manipulation recognition and execution by a robot based on this representation are given at the end of this study.
    Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F. (2012).
    Joining Movement Sequences: Modified Dynamic Movement Primitives for Robotics Applications Exemplified on Handwriting. IEEE Transactions on Robotics, 28(1), 145-157. DOI: 10.1109/TRO.2011.2163863.
    BibTeX:
    @article{kulviciusningtamosiunaite2012,
      author = {Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Joining Movement Sequences: Modified Dynamic Movement Primitives for Robotics Applications Exemplified on Handwriting},
      pages = {145 - 157},
      journal = {IEEE Transactions on Robotics},
      year = {2012},
      volume= {28},
      number = {1},
      doi = {10.1109/TRO.2011.2163863}}
    Abstract: The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories in a dynamic way, is an important problem in robotics. This paper presents a novel joining method based on a modification of the original dynamic movement primitive (DMP) formulation. The new method can reproduce the target trajectory with high accuracy regarding both position and velocity profile, and produces smooth and natural transitions in position as well as velocity space. The properties of the method are demonstrated by applying it to simulated handwriting generation, also shown on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for the joining of movement sequences and has high potential for all robotics applications where trajectory joining is required.
    Ainge, A. and Tamosiunaite, M. and Wörgötter, F. and Dudchenko, P A. (2012).
    Hippocampal place cells encode intended destination, and not a discriminative stimulus, in a conditional T-maze task. Hippocampus, 22, 534-543.
    BibTeX:
    @article{aingetamosiunaitewoergoetter2012,
      author = {Ainge, A. and Tamosiunaite, M. and Wörgötter, F. and Dudchenko, P A.},
      title = {Hippocampal place cells encode intended destination, and not a discriminative stimulus, in a conditional T-maze task},
      pages = {534-543},
      journal = {Hippocampus},
      year = {2012},
      volume = {22}}
    Abstract: The firing of hippocampal place cells encodes instantaneous location but can also reflect where the animal is heading (prospective firing) or where it has just come from (retrospective firing). The current experiment sought to explicitly control the prospective firing of place cells with visual discriminanda in a T-maze. Rats were trained to associate a specific visual stimulus (e.g. a flashing light) with the occurrence of reward in a specific location (e.g. the left arm of the T). A different visual stimulus (e.g. a constant light) signalled the availability of reward in the opposite arm of the T. After this discrimination had been acquired, rats were implanted with electrodes in the CA1 layer of the hippocampus. Place cells were then identified and recorded as the animals performed the discrimination task, and the presentation of the visual stimulus was manipulated. A subset of CA1 place cells fired at different rates on the central stem of the T depending on the animal's intended destination, but this conditional or prospective firing was independent of the visual discriminative stimulus. The firing rate of some place cells was, however, modulated by changes in the timing of presentation of the visual stimulus. Thus, place cells fired prospectively, but this firing did not appear to be controlled directly by a salient visual stimulus that controlled behaviour.
    Tamosiunaite, M. and Markelic, I. and Kulvicius, T. and Wörgötter, F. (2011).
    Generalizing objects by analyzing language. 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids), 557-563. DOI: 10.1109/Humanoids.2011.6100812.
    BibTeX:
    @inproceedings{tamosiunaitemarkelickulvicius2011,
      author = {Tamosiunaite, M. and Markelic, I. and Kulvicius, T. and Wörgötter, F.},
      title = {Generalizing objects by analyzing language},
      pages = {557-563},
      booktitle = {11th IEEE-RAS International Conference on Humanoid Robots Humanoids},
      year = {2011},
      month = {10},
      doi = {10.1109/Humanoids.2011.6100812}}
    Abstract: Generalizing objects in an action context by a robot, for example addressing the problem "Which items can be cut with which tools?", is an unresolved and difficult problem. Answering such a question defines a complete action class, and robots cannot do this so far. We use a bootstrapping mechanism similar to that known from human language acquisition, and combine language with image analysis to create action classes built around the verb (action) in an utterance. A human teaches the robot a certain sentence, for example "Cut a sausage with a knife", from where on the machine generalizes the arguments (nouns) that the verb takes and searches for possible alternative nouns. Then, by way of an internet-based image search and a classification algorithm, image classes for the alternative nouns are extracted, by which a large "picture book" of the possible objects involved in an action is created. This concludes the generalization step. Using the same classifier, the machine can now also perform a recognition procedure. Without having seen the objects before, it can analyze a visual scene, discovering, for example, a cucumber and a mandolin, which match the earlier found nouns, allowing it to suggest actions like "I could cut a cucumber with a mandolin". The algorithm for generalizing objects by analyzing language (GOAL) presented here thus allows generalization and recognition of objects in an action context. It can then be combined with methods for action execution (e.g. action generation based on human demonstration) to execute so far unknown actions.
    Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F. (2011).
    A Novel Trajectory Generation Method for Robot Control. Journal of Intelligent & Robotic Systems, 68(2), 165-184. DOI: 10.1007/s10846-012-9683-8.
    BibTeX:
    @article{ningkulviciustamosiunaite2011,
      author = {Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F.},
      title = {A Novel Trajectory Generation Method for Robot Control},
      pages = {165-184},
      journal = {Journal of Intelligent Robotic Systems},
      year = {2011},
      volume= {68},
      number = {2},
      doi = {10.1007/s10846-012-9683-8}}
    Abstract: This paper presents a novel trajectory generator based on dynamic movement primitives (DMPs). The key ideas from the original DMP formalism are extracted, reformulated and extended from a control-theoretical viewpoint. The method can generate smooth trajectories, satisfy position and velocity boundary conditions at start- and endpoint with high precision, and accurately follow geometrical paths as desired. Paths can be complex and processed as a whole, and smooth transitions can be generated automatically. This novel trajectory generation technique therefore appears to be a viable alternative to existing solutions, not only for service robotics but possibly also in industry.
    Tamosiunaite, M. and Nemec, B. and Ude, A. and Wörgötter, F. (2011).
    Learning to pour with a robot arm combining goal and shape learning for dynamic movement primitives. Robotics and Autonomous Systems, 59(11), 910-922. DOI: 10.1016/j.robot.2011.07.004.
    BibTeX:
    @article{tamosiunaitenemecude2011,
      author = {Tamosiunaite, M. and Nemec, B. and Ude, A. and Wörgötter, F.},
      title = {Learning to pour with a robot arm combining goal and shape learning for dynamic movement primitives},
      pages = {910-922},
      journal = {Robotics and Autonomous Systems RAS},
      year = {2011},
      volume= {59},
      number = {11},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889011001254},
      doi = {10.1016/j.robot.2011.07.004}}
    Abstract: When describing robot motion with dynamic motion primitives (DMPs), goal (trajectory endpoint), shape and temporal scaling parameters are used. In reinforcement learning with DMPs, usually the goal and temporal scaling parameters are pre-defined and only the weights for shaping the DMP are learned. Many tasks exist, however, where the best goal position is not known a priori and must be learned. Thus, here we specifically address the question of how to combine goal and shape parameter learning simultaneously. This is a difficult problem because both parameters could easily interfere in a destructive way. We apply value function approximation techniques for goal learning and policy gradient methods for shape learning; specifically, we use policy improvement with path integrals and natural actor-critic for the policy gradient approach. The methods are analyzed in simulation and implemented on a real robot setup. Results for learning from scratch, for learning initialized by human demonstration, as well as for modifying the tool for the learned DMPs are presented. We observe that combined goal and shape learning is stable and robust within large parameter regimes. Learning converges quickly even in the presence of large disturbances, which makes this combined method suitable for robotic applications.
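The shape-learning side of the abstract above relies on policy improvement with path integrals (PI²), whose core update is a reward-weighted average over noisy rollouts of the DMP shape weights. The sketch below shows that update flavour on a deliberately toy cost function; the `rollout_cost` stand-in, the parameter values, and the single whole-rollout update are simplifications for illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(w, target):
    # Toy stand-in for executing a DMP with shape weights w and scoring
    # the resulting trajectory: cost = squared distance to a target vector.
    return float(np.sum((w - target) ** 2))

def reward_weighted_update(w, target, n_samples=20, sigma=0.1, lam=0.05):
    # PI^2-flavoured update: sample weight perturbations, score each
    # rollout, then average the perturbations weighted by exp(-cost/lam).
    eps = rng.normal(0.0, sigma, size=(n_samples, w.size))
    costs = np.array([rollout_cost(w + e, target) for e in eps])
    probs = np.exp(-(costs - costs.min()) / lam)
    probs /= probs.sum()
    return w + probs @ eps  # step toward the low-cost perturbations

w = np.zeros(3)                       # initial shape weights
target = np.array([0.5, -0.2, 0.1])   # hypothetical optimal weights
for _ in range(60):
    w = reward_weighted_update(w, target)
```

Because the update only needs rollout costs, not gradients, the same scheme can wrap any black-box execution, which is what makes it attractive for learning DMP parameters on real robots.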
    Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F. (2011).
    Accurate Position and Velocity Control for Trajectories Based on Dynamic Movement Primitives. IEEE International Conference on Robotics and Automation (ICRA), 5006-5011. DOI: 10.1109/ICRA.2011.5979668.
    BibTeX:
    @inproceedings{ningkulviciustamosiunaite2011a,
      author = {Ning, K. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Accurate Position and Velocity Control for Trajectories Based on Dynamic Movement Primitives},
      pages = {5006-5011},
      booktitle = {IEEE International Conference on Robotics and Automation ICRA},
      year = {2011},
      doi = {10.1109/ICRA.2011.5979668}}
    Abstract: This paper presents a novel method for trajectory generation based on dynamic movement primitives (DMPs), treated from a control-theoretical perspective. We extend the key ideas of the original DMP formalism by introducing a velocity convergence mechanism in the reformulated system, and a theoretical proof is given to guarantee its validity. The new method can deal with complex paths as a whole. Based on this, we can generate smooth trajectories with automatically generated transition zones, satisfy position and velocity boundary conditions at start and endpoint with high precision, and support multiple via-point applications. A theoretical proof of the method and experiments are presented.
    Review:
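    As a rough illustration of the DMP machinery this paper builds on, the following sketch Euler-integrates a single point-attractor transformation system driven by a canonical phase variable. The gains (alpha, beta, alpha_s), time constant, and step size are illustrative assumptions, not the paper's parameters, and the learned forcing term is left at zero.

    ```python
    # Minimal DMP-style point attractor: converges to the goal g with
    # zero final velocity. All parameters are illustrative assumptions.

    def dmp_rollout(x0, g, tau=1.0, alpha=25.0, beta=6.25,
                    alpha_s=4.0, dt=0.001, steps=2000, forcing=None):
        """Euler-integrate
            tau*dv = alpha*(beta*(g - x) - v) + f(s),  tau*dx = v,
        with the canonical phase tau*ds = -alpha_s*s."""
        x, v, s = x0, 0.0, 1.0
        traj = []
        for _ in range(steps):
            f = forcing(s) if forcing else 0.0
            dv = (alpha * (beta * (g - x) - v) + f) / tau
            x, v = x + (v / tau) * dt, v + dv * dt
            s += (-alpha_s * s / tau) * dt
            traj.append(x)
        return traj

    traj = dmp_rollout(x0=0.0, g=1.0)  # approaches the goal x = 1
    ```

    With beta = alpha/4 the attractor is critically damped, which is the standard choice for overshoot-free convergence.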
    Markelic, I. and Kjaer-Nielsen, A. and Pauwels, K. and Baunegaard Jensen, L. and Chumerin, N. and Vidugiriene, A. and Tamosiunaite, M. and Hulle, M V. and Krüger, N. and Rotter, A. and Wörgötter, F. (2011).
    The Driving School System: Learning Automated Basic Driving Skills from a Teacher in a Real Car. IEEE Trans. Intelligent Transportation Systems, 1-12, PP, 99. DOI: 10.1109/TITS.2011.2157690.
    BibTeX:
    @article{markelickjaernielsenpauwels2011,
      author = {Markelic, I. and Kjaer-Nielsen, A. and Pauwels, K. and Baunegaard Jensen, L. and Chumerin, N. and Vidugiriene, A. and Tamosiunaite, M. and Hulle, M V. and Krüger, N. and Rotter, A. and Wörgötter, F.},
      title = {The Driving School System: Learning Automated Basic Driving Skills from a Teacher in a Real Car},
      pages = {1-12},
      journal = {IEEE Trans. Intelligent Transportation Systems},
      year = {2011},
      volume= {PP},
      number = {99},
      doi = {10.1109/TITS.2011.2157690},
      abstract = {We present a system that learns basic vision-based driving skills from a human teacher. In contrast to much other work in this area, which is based on simulation or data obtained from simulation, our system is implemented as a multi-threaded, parallel CPU/GPU architecture in a real car and trained with real driving data to generate steering and acceleration control for road following. In addition, it uses a novel algorithm for detecting independently moving objects (IMOs) for spotting obstacles. Both the learning and the IMO-detection algorithms are data-driven and thus overcome the limitations of model-based approaches. The system's ability to imitate the teacher's behavior is analyzed on known and unknown streets; the results support its use for steering assistance but limit the use of the acceleration signal to curve negotiation. We propose that this ability to adapt to the driver has high potential for future intelligent driver-assistance systems, since it can serve to increase the driver's security as well as comfort, an important sales argument in the car industry.}}
    Abstract: We present a system that learns basic vision-based driving skills from a human teacher. In contrast to much other work in this area, which is based on simulation or data obtained from simulation, our system is implemented as a multi-threaded, parallel CPU/GPU architecture in a real car and trained with real driving data to generate steering and acceleration control for road following. In addition, it uses a novel algorithm for detecting independently moving objects (IMOs) for spotting obstacles. Both the learning and the IMO-detection algorithms are data-driven and thus overcome the limitations of model-based approaches. The system's ability to imitate the teacher's behavior is analyzed on known and unknown streets; the results support its use for steering assistance but limit the use of the acceleration signal to curve negotiation. We propose that this ability to adapt to the driver has high potential for future intelligent driver-assistance systems, since it can serve to increase the driver's security as well as comfort, an important sales argument in the car industry.
    Review:
    Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F. (2011).
    Modified dynamic movement primitives for joining movement sequences. IEEE International Conference on Robotics and Automation, 2275-2280. DOI: 10.1109/ICRA.2011.5979716.
    BibTeX:
    @inproceedings{kulviciusningtamosiunaite2011,
      author = {Kulvicius, T. and Ning, K. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Modified dynamic movement primitives for joining movement sequences},
      pages = {2275-2280},
      booktitle = {IEEE International Conference on Robotics and Automation},
      year = {2011},
      month = {05},
      doi = {10.1109/ICRA.2011.5979716},
      abstract = {The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories, is still a difficult problem in robotics. This paper presents a novel approach for joining several dynamic movement primitives (DMPs), based on a modification of the original DMP formulation. The new method produces smooth and natural transitions in position as well as velocity space. The properties of the method are demonstrated by applying it to simulated handwriting generation implemented on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for trajectory learning and generation, and its accuracy and modular character have potential for various robotics applications.}}
    Abstract: The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories, is still a difficult problem in robotics. This paper presents a novel approach for joining several dynamic movement primitives (DMPs), based on a modification of the original DMP formulation. The new method produces smooth and natural transitions in position as well as velocity space. The properties of the method are demonstrated by applying it to simulated handwriting generation implemented on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for trajectory learning and generation, and its accuracy and modular character have potential for various robotics applications.
    Review:
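    A rough sketch of the joining property at stake here: with a point-attractor transformation system (the dynamics underlying DMPs), starting the second segment from the first segment's final state keeps position and velocity continuous at the joint. The gains and timing are illustrative assumptions, and the paper's actual modification and learned forcing terms are omitted.

    ```python
    # Join two movement segments by handing the (position, velocity)
    # state of segment 1 to segment 2. Parameters are assumptions.

    def rollout(x, v, g, steps, tau=1.0, alpha=25.0, beta=6.25, dt=0.001):
        traj = []
        for _ in range(steps):
            dv = alpha * (beta * (g - x) - v) / tau
            x, v = x + (v / tau) * dt, v + dv * dt
            traj.append(x)
        return x, v, traj

    # Segment 1 heads for g=1; segment 2 continues from its end state to g=2.
    x, v, t1 = rollout(0.0, 0.0, g=1.0, steps=1000)
    x, v, t2 = rollout(x, v, g=2.0, steps=1000)
    traj = t1 + t2  # no position jump at the joint; ends at the second goal
    ```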
    Aksoy, E E. and Dellen, B. and Tamosiunaite, M. and Wörgötter, F. (2011).
    Execution of a Dual-Object Pushing Action with Semantic Event Chains. IEEE-RAS Int. Conf. on Humanoid Robots, 576-583. DOI: 10.1109/Humanoids.2011.6100833.
    BibTeX:
    @inproceedings{aksoydellentamosiunaite2011,
      author = {Aksoy, E E. and Dellen, B. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Execution of a Dual-Object Pushing Action with Semantic Event Chains},
      pages = {576-583},
      booktitle = {IEEE-RAS Int. Conf. on Humanoid Robots},
      year = {2011},
      doi = {10.1109/Humanoids.2011.6100833},
      abstract = {Execution of a manipulation after learning from demonstration often requires intricate planning and control systems or some form of manual guidance for a robot. Here we present a framework for manipulation execution based on the so-called "Semantic Event Chain", which is an abstract description of the relations between the objects in the scene. It captures the change of those relations during a manipulation and thereby provides the decisive temporal anchor points by which a manipulation is critically defined. Using semantic event chains, a model of a manipulation can be learned. We show that it is possible to add the required control parameters (the spatial anchor points) to this model, which can then be executed by a robot in a fully autonomous way. The process of learning and execution of semantic event chains is explained using a box-pushing example.}}
    Abstract: Execution of a manipulation after learning from demonstration often requires intricate planning and control systems or some form of manual guidance for a robot. Here we present a framework for manipulation execution based on the so-called "Semantic Event Chain", which is an abstract description of the relations between the objects in the scene. It captures the change of those relations during a manipulation and thereby provides the decisive temporal anchor points by which a manipulation is critically defined. Using semantic event chains, a model of a manipulation can be learned. We show that it is possible to add the required control parameters (the spatial anchor points) to this model, which can then be executed by a robot in a fully autonomous way. The process of learning and execution of semantic event chains is explained using a box-pushing example.
    Review:
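    The core idea of the event chain can be sketched in a few lines: record pairwise object relations per frame, then keep only the frames where some relation changes; those changes are the temporal anchor points. The binary touching/not-touching encoding and the tiny pushing scene below are simplifications of the actual semantic-event-chain relations, assumed for illustration.

    ```python
    # Rows: object pairs; columns: frames; 1 = touching, 0 = not touching.
    relations = {
        ("hand", "box"):   [0, 1, 1, 1, 0],
        ("box", "target"): [0, 0, 0, 1, 1],
    }

    def event_chain(relations):
        """Keep only frames where at least one pairwise relation changes."""
        n_frames = len(next(iter(relations.values())))
        events = []
        for t in range(1, n_frames):
            if any(seq[t] != seq[t - 1] for seq in relations.values()):
                events.append({pair: seq[t] for pair, seq in relations.items()})
        return events

    chain = event_chain(relations)  # grasp, contact with target, release
    ```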
    Kulvicius, T. and Kolodziejski, C. and Tamosiunaite, M. and Porr, B. and Wörgötter, F. (2010).
    Behavioral analysis of differential Hebbian learning in closed-loop systems. Biological Cybernetics, 255-271, 103, 4. DOI: 10.1007/s00422-010-0396-4.
    BibTeX:
    @article{kulviciuskolodziejskitamosiunaite20,
      author = {Kulvicius, T. and Kolodziejski, C. and Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
      title = {Behavioral analysis of differential Hebbian learning in closed-loop systems},
      pages = {255-271},
      journal = {Biological Cybernetics},
      year = {2010},
      volume= {103},
      number = {4},
      publisher = {Springer-Verlag},
      doi = {10.1007/s00422-010-0396-4},
      abstract = {Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts that take learning into account, mostly measuring the information of the inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems in relatively simple cases. In the second part of this study we try to answer the following question: how can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy and entropy measures and by investigating their development during learning. This way we can show that within well-specified scenarios there are indeed agents which are optimal with respect to their structure and adaptive properties.}}
    Abstract: Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts that take learning into account, mostly measuring the information of the inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems in relatively simple cases. In the second part of this study we try to answer the following question: how can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy and entropy measures and by investigating their development during learning. This way we can show that within well-specified scenarios there are indeed agents which are optimal with respect to their structure and adaptive properties.
    Review:
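    The rule class analyzed here, differential Hebbian learning, changes a weight in proportion to the product of its input and the temporal derivative of the neuron's output. The following open-loop toy (the paper treats the closed-loop case) uses assumed rectangular input pulses and an illustrative learning rate; it shows the characteristic effect that a predictive input arriving before a reflex input gains weight.

    ```python
    # Differential Hebbian sketch: dw ∝ u_pred * d(output)/dt.

    def diff_hebb(u_pred, u_reflex, w0=0.0, w_reflex=1.0, mu=0.01):
        w, v_prev = w0, 0.0
        for up, ur in zip(u_pred, u_reflex):
            v = w * up + w_reflex * ur   # neuron output
            dv = v - v_prev              # discrete temporal derivative
            w += mu * up * dv            # differential Hebbian update
            v_prev = v
        return w

    # The predictive pulse precedes the reflex pulse, so the weight grows
    # and the neuron learns to react earlier.
    u_pred   = [0, 1, 1, 1, 0, 0, 0, 0] * 50
    u_reflex = [0, 0, 0, 1, 1, 1, 0, 0] * 50
    w_final = diff_hebb(u_pred, u_reflex)
    ```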
    Markelic, I. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F. (2009).
    Anticipatory Driving for a Robot-Car Based on Supervised Learning. Lecture Notes in Computer Science: Anticipatory Behavior in Adaptive Learning Systems, 267-282.
    BibTeX:
    @article{markelickulviciustamosiunaite2009,
      author = {Markelic, I. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Anticipatory Driving for a Robot-Car Based on Supervised Learning},
      pages = {267-282},
      journal = {Lecture Notes in Computer Science: Anticipatory Behavior in Adaptive Learning Systems},
      year = {2009},
      abstract = {Using look-ahead information and making plans improves human driving. We therefore propose that autonomously driving systems should also have such abilities. We adopt a machine-learning approach, where the system, a car-like robot, is trained by an experienced driver by correlating visual input with human driving actions. The heart of the system is a database where look-ahead sensory information is stored together with action sequences issued by the human supervisor. The result is a robot that runs in real time and issues steering and velocity control in a human-like way. For steering we adopt a two-level approach, where the result of the database lookup is combined with an additional reactive controller for robust behavior. Concerning velocity control, this paper makes a novel contribution, which is the ability of the system to react adequately to upcoming curves.}}
    Abstract: Using look-ahead information and making plans improves human driving. We therefore propose that autonomously driving systems should also have such abilities. We adopt a machine-learning approach, where the system, a car-like robot, is trained by an experienced driver by correlating visual input with human driving actions. The heart of the system is a database where look-ahead sensory information is stored together with action sequences issued by the human supervisor. The result is a robot that runs in real time and issues steering and velocity control in a human-like way. For steering we adopt a two-level approach, where the result of the database lookup is combined with an additional reactive controller for robust behavior. Concerning velocity control, this paper makes a novel contribution, which is the ability of the system to react adequately to upcoming curves.
    Review:
    Tamosiunaite, M. and Asfour, T. and Wörgötter, F. (2009).
    Learning to reach by reinforcement learning using a receptive field based function approximation approach with continuous actions. Biological Cybernetics, 249-260, 100, 3.
    BibTeX:
    @article{tamosiunaiteasfourwoergoetter2009,
      author = {Tamosiunaite, M. and Asfour, T. and Wörgötter, F.},
      title = {Learning to reach by reinforcement learning using a receptive field based function approximation approach with continuous actions},
      pages = {249-260},
      journal = {Biological Cybernetics},
      year = {2009},
      volume= {100},
      number = {3},
      abstract = {Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems, for example the reward-based recalibration of goal-directed actions. To this end, relatively large and continuous state-action spaces still need to be handled efficiently. The goal of this paper is thus to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. For testing our method, we use a four-degree-of-freedom reaching problem in 3D space, simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D overlapping kernels (receptive fields), and the state-action space contains about 10,000 of these. Different types of reward structures are compared, for example reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of the rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined, in situations where other types of learning might be difficult.}}
    Abstract: Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems, for example the reward-based recalibration of goal-directed actions. To this end, relatively large and continuous state-action spaces still need to be handled efficiently. The goal of this paper is thus to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. For testing our method, we use a four-degree-of-freedom reaching problem in 3D space, simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D overlapping kernels (receptive fields), and the state-action space contains about 10,000 of these. Different types of reward structures are compared, for example reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of the rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined, in situations where other types of learning might be difficult.
    Review:
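    Receptive-field function approximation of the kind used here represents a value over a continuous state as a normalized, kernel-weighted sum over overlapping Gaussian fields, with learning distributing the error over all active fields. The 1-D state space, centers, widths, learning rate, and toy target below are illustrative assumptions (the paper works with 4-D kernels and a Q-function).

    ```python
    import math

    centers = [i * 0.1 for i in range(11)]   # receptive-field centers on [0, 1]
    sigma = 0.1
    w = [0.0] * len(centers)

    def activations(s):
        phi = [math.exp(-(s - c) ** 2 / (2 * sigma ** 2)) for c in centers]
        z = sum(phi)
        return [p / z for p in phi]          # normalized kernel activations

    def value(s):
        return sum(wi * pi for wi, pi in zip(w, activations(s)))

    def update(s, target, lr=0.5):
        err = target - value(s)              # error shared by overlapping fields
        for i, p in enumerate(activations(s)):
            w[i] += lr * err * p

    # Train on a toy target V(s) = s; overlap gives smooth generalization.
    for _ in range(200):
        for k in range(21):
            update(k / 20, k / 20)
    ```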
    Nemec, B. and Tamosiunaite, M. and Wörgötter, F. and Ude, A. (2009).
    Task adaptation through exploration and action sequencing. 9th IEEE-RAS International Conference on Humanoid Robots, 2009, 610-616. DOI: 10.1109/ICHR.2009.5379568.
    BibTeX:
    @inproceedings{nemectamosiunaitewoergoetter2009,
      author = {Nemec, B. and Tamosiunaite, M. and Wörgötter, F. and Ude, A.},
      title = {Task adaptation through exploration and action sequencing},
      pages = {610-616},
      booktitle = {9th IEEE-RAS International Conference on Humanoid Robots, 2009},
      year = {2009},
      doi = {10.1109/ICHR.2009.5379568},
      abstract = {General-purpose autonomous robots need the ability to sequence and adapt the available sensorimotor knowledge, which is often given in the form of movement primitives. In order to solve a given task in situations that were not considered during the initial learning, it is necessary to adapt the trajectories contained in the library of primitive motions to new situations. In this paper we explore how to apply reinforcement learning to modify the subgoals of the primitive movements involved in a given task. As the underlying sensorimotor representation we selected nonlinear dynamic systems, which provide powerful machinery for the modification of motion trajectories. We propose a new formulation for dynamic systems which ensures that consecutive primitive movements can be splined together in a continuous way up to second-order derivatives.}}
    Abstract: General-purpose autonomous robots need the ability to sequence and adapt the available sensorimotor knowledge, which is often given in the form of movement primitives. In order to solve a given task in situations that were not considered during the initial learning, it is necessary to adapt the trajectories contained in the library of primitive motions to new situations. In this paper we explore how to apply reinforcement learning to modify the subgoals of the primitive movements involved in a given task. As the underlying sensorimotor representation we selected nonlinear dynamic systems, which provide powerful machinery for the modification of motion trajectories. We propose a new formulation for dynamic systems which ensures that consecutive primitive movements can be splined together in a continuous way up to second-order derivatives.
    Review:
    Kolodziejski, C. and Porr, B. and Tamosiunaite, M. and Wörgötter, F. (2009).
    On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor. Advances in Neural Information Processing Systems, 857-864, 21.
    BibTeX:
    @inproceedings{kolodziejskiporrtamosiunaite2009,
      author = {Kolodziejski, C. and Porr, B. and Tamosiunaite, M. and Wörgötter, F.},
      title = {On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor},
      pages = {857-864},
      booktitle = {Advances in Neural Information Processing Systems},
      year = {2009},
      volume= {21},
      abstract = {In this theoretical contribution we provide mathematical proof that two of the most important classes of network learning, correlation-based differential Hebbian learning and reward-based temporal difference learning, are asymptotically equivalent when the learning is timed by a local modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective that is more closely related to the biophysics of neurons.}}
    Abstract: In this theoretical contribution we provide mathematical proof that two of the most important classes of network learning, correlation-based differential Hebbian learning and reward-based temporal difference learning, are asymptotically equivalent when the learning is timed by a local modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective that is more closely related to the biophysics of neurons.
    Review:
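    One side of the equivalence proved here is standard temporal-difference learning. The sketch below shows the tabular TD(0) value update, V(s) += alpha * (r + gamma * V(s') - V(s)), on an assumed 3-state chain with reward on the last transition; states, reward, and parameters are illustrative.

    ```python
    # Tabular TD(0): the reward is backed up along the chain, discounted
    # by gamma at each step. Chain and parameters are assumptions.

    def td0(episodes, gamma=0.9, alpha=0.1, n_states=4):
        V = [0.0] * n_states
        for ep in episodes:
            for (s, r, s_next) in ep:
                V[s] += alpha * (r + gamma * V[s_next] - V[s])  # TD error
        return V

    # Chain 0 -> 1 -> 2 -> terminal(3), reward 1 on the final step.
    ep = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, 3)]
    V = td0([ep] * 500)  # V[2] -> 1, V[1] -> gamma, V[0] -> gamma^2
    ```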
    Tamosiunaite, M. and Ainge, J. and Kulvicius, T. and Porr, B. and Dudchenko, P. and Wörgötter, F. (2008).
    Path-finding in real and simulated rats: assessing the influence of path characteristics on navigation learning. Journal of Computational Neuroscience, 562-582, 25, 3.
    BibTeX:
    @article{tamosiunaiteaingekulvicius2008,
      author = {Tamosiunaite, M. and Ainge, J. and Kulvicius, T. and Porr, B. and Dudchenko, P. and Wörgötter, F.},
      title = {Path-finding in real and simulated rats: assessing the influence of path characteristics on navigation learning},
      pages = {562-582},
      journal = {Journal of Computational Neuroscience},
      year = {2008},
      volume= {25},
      number = {3},
      abstract = {A large body of experimental evidence suggests that the hippocampal place-field system is involved in reward-based navigation learning in rodents. Reinforcement learning (RL) mechanisms have been used to model this, associating the state space in an RL algorithm with the place-field map in a rat. The convergence properties of RL algorithms are affected by the exploration patterns of the learner. Therefore, we first analyzed the path characteristics of freely exploring rats in a test arena. We found that straight path segments with a mean length of 23 cm, up to a maximal length of 80 cm, take up a significant proportion of the total paths. Thus, rat paths are biased as compared to random exploration. Next we designed an RL system that reproduces these specific path characteristics. Our model arena is covered by overlapping, probabilistically firing place fields (PFs) of realistic size and coverage. Because the convergence of RL algorithms is also influenced by the state-space characteristics, different PF sizes and densities, leading to different degrees of overlap, were also investigated. The model rat learns to find a reward opposite to its starting point. We observed that the combination of biased straight exploration, overlapping coverage and probabilistic firing strongly impairs the convergence of learning. When the degree of randomness in the exploration is increased, convergence improves, but the distribution of straight path segments becomes unrealistic and paths become wiggly. To mend this situation without affecting the path characteristics, two additional mechanisms are implemented: a gradual drop of the learned weights (weight decay) and path-length limitation, which prevents learning if the reward is not found within some expected time. Both mechanisms limit the memory of the system and thereby counteract the effects of getting trapped on a wrong path. When these strategies are used individually, divergent cases are substantially reduced, and for some parameter settings no divergence is found at all. When weight decay and path-length limitation are used at the same time, convergence is not much improved; instead, the time to convergence increases as the memory-limiting effect becomes too strong. The degree of improvement also relies on the size and degree of overlap (coverage density) in the place-field system. The combination of these two parameters leads to a trade-off between convergence and speed of convergence. Thus, this study suggests that the role of the PF system in navigation learning cannot be considered independently of the animal's exploration pattern.}}
    Abstract: A large body of experimental evidence suggests that the hippocampal place-field system is involved in reward-based navigation learning in rodents. Reinforcement learning (RL) mechanisms have been used to model this, associating the state space in an RL algorithm with the place-field map in a rat. The convergence properties of RL algorithms are affected by the exploration patterns of the learner. Therefore, we first analyzed the path characteristics of freely exploring rats in a test arena. We found that straight path segments with a mean length of 23 cm, up to a maximal length of 80 cm, take up a significant proportion of the total paths. Thus, rat paths are biased as compared to random exploration. Next we designed an RL system that reproduces these specific path characteristics. Our model arena is covered by overlapping, probabilistically firing place fields (PFs) of realistic size and coverage. Because the convergence of RL algorithms is also influenced by the state-space characteristics, different PF sizes and densities, leading to different degrees of overlap, were also investigated. The model rat learns to find a reward opposite to its starting point. We observed that the combination of biased straight exploration, overlapping coverage and probabilistic firing strongly impairs the convergence of learning. When the degree of randomness in the exploration is increased, convergence improves, but the distribution of straight path segments becomes unrealistic and paths become wiggly. To mend this situation without affecting the path characteristics, two additional mechanisms are implemented: a gradual drop of the learned weights (weight decay) and path-length limitation, which prevents learning if the reward is not found within some expected time. Both mechanisms limit the memory of the system and thereby counteract the effects of getting trapped on a wrong path. When these strategies are used individually, divergent cases are substantially reduced, and for some parameter settings no divergence is found at all. When weight decay and path-length limitation are used at the same time, convergence is not much improved; instead, the time to convergence increases as the memory-limiting effect becomes too strong. The degree of improvement also relies on the size and degree of overlap (coverage density) in the place-field system. The combination of these two parameters leads to a trade-off between convergence and speed of convergence. Thus, this study suggests that the role of the PF system in navigation learning cannot be considered independently of the animal's exploration pattern.
    Review:
    Kulvicius, T. and Tamosiunaite, M. and Ainge, J. and Dudchenko, P. and Wörgötter, F. (2008).
    Odor supported place cell model and goal navigation in rodents. Journal of Computational Neuroscience, 481-500, 25.
    BibTeX:
    @article{kulviciustamosiunaiteainge2008,
      author = {Kulvicius, T. and Tamosiunaite, M. and Ainge, J. and Dudchenko, P. and Wörgötter, F.},
      title = {Odor supported place cell model and goal navigation in rodents},
      pages = {481-500},
      journal = {Journal of Computational Neuroscience},
      year = {2008},
      volume= {25},
      abstract = {Experiments with rodents demonstrate that visual cues play an important role in the control of hippocampal place cells and spatial navigation. Nevertheless, rats may also rely on auditory, olfactory and somatosensory stimuli for orientation. It is also known that rats can track odors or self-generated scent marks to find a food source. Here we model odor-supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place-cell formation and spatial navigation. The obtained place cells are used to solve a goal-navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place-cell remapping on goal-directed behavior when switching between two environments. We emphasize the importance of olfactory cues in place-cell formation and show that the use of environmental and self-generated olfactory cues, together with a mixed navigation strategy, improves goal-directed navigation.}}
    Abstract: Experiments with rodents demonstrate that visual cues play an important role in the control of hippocampal place cells and spatial navigation. Nevertheless, rats may also rely on auditory, olfactory and somatosensory stimuli for orientation. It is also known that rats can track odors or self-generated scent marks to find a food source. Here we model odor-supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place-cell formation and spatial navigation. The obtained place cells are used to solve a goal-navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place-cell remapping on goal-directed behavior when switching between two environments. We emphasize the importance of olfactory cues in place-cell formation and show that the use of environmental and self-generated olfactory cues, together with a mixed navigation strategy, improves goal-directed navigation.
    Review:
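    The goal-navigation component here rests on the standard tabular Q-learning update. The sketch below runs it on an assumed 1-D five-state chain standing in for the place-cell state space; the epsilon-greedy policy, learning rate, and discount are illustrative choices, not the paper's.

    ```python
    import random

    # Tabular Q-learning on a 5-state chain with the goal at the right end.
    # Q[s][a] += alpha * (r + gamma * max_a' Q[s'][a'] - Q[s][a])

    def q_learning(n_states=5, episodes=500, alpha=0.2, gamma=0.9, eps=0.2):
        random.seed(0)                               # deterministic demo
        Q = [[0.0, 0.0] for _ in range(n_states)]    # actions: 0=left, 1=right
        for _ in range(episodes):
            s = 0
            while s != n_states - 1:
                a = random.randrange(2) if random.random() < eps \
                    else max((0, 1), key=lambda act: Q[s][act])
                s2 = max(0, s - 1) if a == 0 else s + 1
                r = 1.0 if s2 == n_states - 1 else 0.0
                Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                s = s2
        return Q

    Q = q_learning()  # greedy policy: "move right" at every non-goal state
    ```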
    Tamosiunaite, M. and Porr, B. and Wörgötter, F. (2007).
    Developing velocity sensitivity in a model neuron by local synaptic plasticity. Biological Cybernetics, 507-518, 96.
    BibTeX:
    @article{tamosiunaiteporrwoergoetter2007,
      author = {Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
      title = {Developing velocity sensitivity in a model neuron by local synaptic plasticity},
      pages = {507-518},
      journal = {Biological Cybernetics},
      year = {2007},
      volume= {96}}
    Abstract:
    Review:
    Tamosiunaite, M. and Porr, B. and Wörgötter, F. (2007).
    Self-influencing synaptic plasticity: Recurrent changes of synaptic weights can lead to specific functional properties. Journal of Computational Neuroscience, 113-127, 23, 1. DOI: 10.1007/s10827-007-0021-2.
    BibTeX:
    @article{tamosiunaiteporrwoergoetter2007a,
      author = {Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
      title = {Self-influencing synaptic plasticity: Recurrent changes of synaptic weights can lead to specific functional properties},
      pages = {113-127},
      journal = {Journal of Computational Neuroscience},
      year = {2007},
      volume= {23},
      number = {1},
      doi = {10.1007/s10827-007-0021-2},
      abstract = {Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways (Holthoff, 2004; Holthoff et al., 2005). In this study we investigate how these signals could interact at dendrites in space and time, leading to changing plasticity properties at local synapse clusters. Similar to a previous study (Saudargiene et al., 2004), we employ a differential Hebbian learning rule to emulate spike-timing-dependent plasticity and investigate how the interaction of dendritic and back-propagating spikes, as the post-synaptic signals, could influence plasticity. Specifically, we show that local synaptic plasticity driven by spatially confined dendritic spikes can lead to the emergence of synaptic clusters with different properties. If one of these clusters can drive the neuron into spiking, plasticity may change, and the now-arising global influence of a back-propagating spike can lead to a further segregation of the clusters and possibly the dying-off of some of them, leading to more functional specificity. These results suggest that, through plasticity being a spatially and temporally local process, the computational properties of dendrites or complete neurons can be substantially augmented.}}
    Abstract: Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways (Holthoff et al., 2004, 2005). In this study we investigate how these signals could interact at dendrites in space and time, leading to changing plasticity properties at local synapse clusters. Similar to a previous study (Saudargiene et al., 2004), we employ a differential Hebbian learning rule to emulate spike-timing dependent plasticity and investigate how the interaction of dendritic and back-propagating spikes, as the post-synaptic signals, could influence plasticity. Specifically, we will show that local synaptic plasticity driven by spatially confined dendritic spikes can lead to the emergence of synaptic clusters with different properties. If one of these clusters can drive the neuron into spiking, plasticity may change and the now arising global influence of a back-propagating spike can lead to a further segregation of the clusters and possibly the dying-off of some of them, leading to more functional specificity. These results suggest that, through plasticity being a spatially and temporally local process, the computational properties of dendrites or complete neurons can be substantially augmented.
    Ainge, J. and Tamosiunaite, M. and Wörgötter, F. and Dudchenko, P. (2007).
    Hippocampal CA1 place cells encode intended destination on a maze with multiple choice points. J. Neurosci., 9769-9779, 27, 36. DOI: 10.1523/JNEUROSCI.2011-07.2007.
    BibTeX:
    @article{aingetamosiunaitewoergoetter2007,
      author = {Ainge, J. and Tamosiunaite, M. and Wörgötter, F. and Dudchenko, P.},
      title = {Hippocampal CA1 place cells encode intended destination on a maze with multiple choice points},
      pages = {9769-9779},
  journal = {J. Neurosci.},
      year = {2007},
      volume= {27},
      number = {36},
      doi = {10.1523/JNEUROSCI.2011-07.2007}}
    Tamosiunaite, M. and Porr, B. and Wörgötter, F. (2006).
    Temporally changing synaptic plasticity. Advances in Neural Information Processing Systems, 18, 1337-1344.
    BibTeX:
    @inproceedings{tamosiunaiteporrwoergoetter2006,
      author = {Tamosiunaite, M. and Porr, B. and Wörgötter, F.},
      title = {Temporally changing synaptic plasticity},
      pages = {1337-1344},
      booktitle = {Advances in Neural Information Processing Systems, 18},
      year = {2006},
      publisher = {MIT Press, Cambridge, MA}}
    Kulvicius, T. and Tamosiunaite, M. and Vaisnys, R. (2005).
    T Wave Alternans Features for Automated Detection. Informatica, 587-602, 91, 4.
    BibTeX:
    @article{kulviciustamosiunaitevaisnys2005,
      author = {Kulvicius, T. and Tamosiunaite, M. and Vaisnys, R.},
      title = {T Wave Alternans Features for Automated Detection},
      pages = {587-602},
      journal = {Informatica},
      year = {2005},
      volume= {91},
      number = {4},
      url = {http://www.bccn-goettingen.de/Publi}}
    Aksoy, E E. and Tamosiunaite, M. and Wörgötter, F. (2014).
    Model-free incremental learning of the semantics of manipulation actions. Robotics and Autonomous Systems, 1-42. DOI: 10.1016/j.robot.2014.11.003.
    BibTeX:
    @article{aksoytamosiunaitewoergoetter2014,
      author = {Aksoy, E E. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Model-free incremental learning of the semantics of manipulation actions},
      pages = {1-42},
      journal = {Robotics and Autonomous Systems},
      year = {2014},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889014002450},
      doi = {10.1016/j.robot.2014.11.003},
  abstract = {Understanding and learning the semantics of complex manipulation actions are intriguing and non-trivial issues for the development of autonomous robots. In this paper, we present a novel method for an on-line, incremental learning of the semantics of manipulation actions by observation. Recently, we had introduced the Semantic Event Chains (SECs) as a new generic representation for manipulations, which can be directly computed from a stream of images and is based on the changes in the relationships between objects involved in a manipulation. We here show that the SEC concept can be used to bootstrap the learning of the semantics of manipulation actions without using any prior knowledge about actions or objects. We create a new manipulation action benchmark with 8 different manipulation tasks including in total 120 samples to learn an archetypal SEC model for each manipulation action. We then evaluate the learned SEC models with 20 long and complex chained manipulation sequences including in total 103 manipulation samples. Thereby we put the event chains to a decisive test, asking how powerful action classification is when using this framework. We find that we reach up to 100 % and 87 % average precision and recall values in the validation phase and 99 % and 92 % in the testing phase. This supports the notion that SECs are a useful tool for classifying manipulation actions in a fully automatic way.}}
    Abstract: Understanding and learning the semantics of complex manipulation actions are intriguing and non-trivial issues for the development of autonomous robots. In this paper, we present a novel method for an on-line, incremental learning of the semantics of manipulation actions by observation. Recently, we had introduced the Semantic Event Chains (SECs) as a new generic representation for manipulations, which can be directly computed from a stream of images and is based on the changes in the relationships between objects involved in a manipulation. We here show that the SEC concept can be used to bootstrap the learning of the semantics of manipulation actions without using any prior knowledge about actions or objects. We create a new manipulation action benchmark with 8 different manipulation tasks including in total 120 samples to learn an archetypal SEC model for each manipulation action. We then evaluate the learned SEC models with 20 long and complex chained manipulation sequences including in total 103 manipulation samples. Thereby we put the event chains to a decisive test, asking how powerful action classification is when using this framework. We find that we reach up to 100 % and 87 % average precision and recall values in the validation phase and 99 % and 92 % in the testing phase. This supports the notion that SECs are a useful tool for classifying manipulation actions in a fully automatic way.
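    The abstract above describes a Semantic Event Chain as a table of touching/not-touching relations between object pairs over keyframes. A minimal sketch of that idea (an illustrative encoding and a naive similarity measure, not the authors' implementation; the edge-padding alignment is an assumption for brevity):

    ```python
    import numpy as np

    # Rows = object pairs, columns = keyframes;
    # entries = 1 (objects touching) / 0 (not touching).
    def sec_similarity(a, b):
        """Fraction of matching relation entries, after padding the
        shorter chain with its last column (simplified alignment)."""
        a, b = np.asarray(a), np.asarray(b)
        n = max(a.shape[1], b.shape[1])
        pad = lambda m: np.pad(m, ((0, 0), (0, n - m.shape[1])), mode="edge")
        return float((pad(a) == pad(b)).mean())
    ```

    Classifying a new observation would then amount to picking the stored model chain with the highest similarity score.
    
    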
    Sutterlütti, R. and Stein, S. C. and Tamosiunaite, M. and Wörgötter, F. (2014).
    Object names correspond to convex entities. Cognitive Processing, 69 -- 71, 15, 1. DOI: 10.1007/s10339-013-0597-6.
    BibTeX:
    @article{sutterluettisteintamosiunaite2014,
      author = {Sutterlütti, R. and Stein, S. C. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Object names correspond to convex entities},
      pages = {69 -- 71},
  journal = {Cognitive Processing},
      year = {2014},
      volume= {15},
      number = {1},
      language = {English},
      organization = {Springer Berlin Heidelberg},
      publisher = {Springer Berlin Heidelberg},
      doi = {10.1007/s10339-013-0597-6}}
    Chatterjee, S. and Nachstedt, T. and Wörgötter, F. and Tamosiunaite, M. and Manoonpong, P. and Enomoto, Y. and Ariizumi, R. and Matsuno, F. (2014).
    Reinforcement learning approach to generate goal-directed locomotion of a snake-like robot with screw-drive units. 23rd International Conference on Robotics in Alpe-Adria-Danube Region (RAAD), 1-7. DOI: 10.1109/RAAD.2014.7002234.
    BibTeX:
    @inproceedings{chatterjeenachstedtwoergoetter2014,
      author = {Chatterjee, S. and Nachstedt, T. and Wörgötter, F. and Tamosiunaite, M. and Manoonpong, P. and Enomoto, Y. and Ariizumi, R. and Matsuno, F.},
      title = {Reinforcement learning approach to generate goal-directed locomotion of a snake-like robot with screw-drive units},
      pages = {1-7},
      booktitle = {23rd International Conference on Robotics in Alpe-Adria-Danube Region (RAAD)},
      year = {2014},
      month = {Sept},
      doi = {10.1109/RAAD.2014.7002234},
      abstract = {In this paper we apply a policy improvement algorithm called Policy Improvement with Path Integrals (PI2) to generate goal-directed locomotion of a complex snake-like robot with screw-drive units. PI2 is numerically simple and has an ability to deal with high dimensional systems. Here, this approach is used to find proper locomotion control parameters, like joint angles and screw-drive velocities, of the robot. The learning process was achieved using a simulated robot and the learned parameters were successfully transferred to the real one. As a result the robot can locomote toward a given goal.}}
    Abstract: In this paper we apply a policy improvement algorithm called Policy Improvement with Path Integrals (PI2) to generate goal-directed locomotion of a complex snake-like robot with screw-drive units. PI2 is numerically simple and has an ability to deal with high dimensional systems. Here, this approach is used to find proper locomotion control parameters, like joint angles and screw-drive velocities, of the robot. The learning process was achieved using a simulated robot and the learned parameters were successfully transferred to the real one. As a result the robot can locomote toward a given goal.
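    The core of the PI2 algorithm used in the abstract above can be stated compactly: perturb the control parameters, evaluate the rollout costs, and average the perturbations with exponential weights that favour low-cost rollouts. A simplified one-step sketch (illustrative only; it drops PI2's per-time-step weighting, and all names are our own):

    ```python
    import numpy as np

    def pi2_update(theta, rollout_cost, n_rollouts=20, sigma=0.1, h=10.0, rng=None):
        """One simplified PI^2-style update of parameter vector theta."""
        rng = np.random.default_rng(0) if rng is None else rng
        eps = rng.normal(0.0, sigma, size=(n_rollouts, theta.size))   # exploration noise
        costs = np.array([rollout_cost(theta + e) for e in eps])      # one cost per rollout
        span = max(costs.max() - costs.min(), 1e-12)
        w = np.exp(-h * (costs - costs.min()) / span)                 # low cost -> high weight
        w /= w.sum()
        return theta + w @ eps                                        # weighted noise average
    ```

    In the paper's setting, `theta` would hold locomotion parameters such as joint angles and screw-drive velocities, and `rollout_cost` would score a simulated locomotion trial against the goal.
    
    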
    Aksoy, E. E. and Aein, M. J. and Tamosiunaite, M. and Wörgötter, F. (2015).
    Semantic parsing of human manipulation activities using on-line learned models for robot imitation. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2875-2882. DOI: 10.1109/IROS.2015.7353773.
    BibTeX:
    @inproceedings{aksoyaeintamosiunaite2015,
      author = {Aksoy, E. E. and Aein, M. J. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Semantic parsing of human manipulation activities using on-line learned models for robot imitation},
      pages = {2875-2882},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2015},
      location = {Hamburg, Germany},
      month = {Sept},
      doi = {10.1109/IROS.2015.7353773},
      abstract = {Human manipulation activity recognition is an important yet challenging task in robot imitation. In this paper, we introduce, for the first time, a novel method for semantic decomposition and recognition of continuous human manipulation activities by using on-line learned individual manipulation models. Solely based on the spatiotemporal interactions between objects and hands in the scene, the proposed framework can parse not only sequential and concurrent (overlapping) manipulation streams but also basic primitive elements of each detected manipulation. Without requiring any prior object knowledge, the framework can furthermore extract object-like scene entities that are performing the same role in the detected manipulations. The framework was evaluated on our new egocentric activity dataset which contains 120 different samples of 8 single atomic manipulations (e.g. Cutting and Stirring) and 20 long and complex activity demonstrations such as}}
    Abstract: Human manipulation activity recognition is an important yet challenging task in robot imitation. In this paper, we introduce, for the first time, a novel method for semantic decomposition and recognition of continuous human manipulation activities by using on-line learned individual manipulation models. Solely based on the spatiotemporal interactions between objects and hands in the scene, the proposed framework can parse not only sequential and concurrent (overlapping) manipulation streams but also basic primitive elements of each detected manipulation. Without requiring any prior object knowledge, the framework can furthermore extract object-like scene entities that are performing the same role in the detected manipulations. The framework was evaluated on our new egocentric activity dataset which contains 120 different samples of 8 single atomic manipulations (e.g. Cutting and Stirring) and 20 long and complex activity demonstrations such as
    Markievicz, I. and Vitkute-Adzgauskiene, D. and Tamosiunaite, M. (2013).
    Semi-supervised Learning of Action Ontology from Domain-Specific Corpora. Information and Software Technologies, 173-185, 403. DOI: 10.1007/978-3-642-41947-8_16.
    BibTeX:
    @incollection{markieviczvitkuteadzgauskienetamosi,
      author = {Markievicz, I. and Vitkute-Adzgauskiene, D. and Tamosiunaite, M.},
      title = {Semi-supervised Learning of Action Ontology from Domain-Specific Corpora},
      pages = {173-185},
      booktitle = {Information and Software Technologies},
      year = {2013},
      volume= {403},
      editor = {Skersys, Tomas and Butleris, Rimantas and Butkiene, Rita},
      publisher = {Springer Berlin Heidelberg},
  series = {Communications in Computer and Information Science},
  url = {http://dx.doi.org/10.1007/978-3-642-41947-8_16},
      doi = {10.1007/978-3-642-41947-8_16},
  abstract = {The paper presents research results, showing how unsupervised and supervised ontology learning methods can be combined in an action ontology building approach. A framework for action ontology building from domain-specific corpus texts is suggested, using different natural language processing techniques, such as collocation extraction, frequency lists, word space model, etc. The suggested framework employs additional knowledge sources of WordNet and VerbNet with structured linguistic and semantic information. Results from experiments with crawled chemical laboratory corpus texts are given.}}
    Abstract: The paper presents research results, showing how unsupervised and supervised ontology learning methods can be combined in an action ontology building approach. A framework for action ontology building from domain-specific corpus texts is suggested, using different natural language processing techniques, such as collocation extraction, frequency lists, word space model, etc. The suggested framework employs additional knowledge sources of WordNet and VerbNet with structured linguistic and semantic information. Results from experiments with crawled chemical laboratory corpus texts are given.
    Ziaeetabar, F. and Aksoy, E. E. and Wörgötter, F. and Tamosiunaite, M. (2017).
    Semantic Analysis of Manipulation Actions Using Spatial Relations. IEEE International Conference on Robotics and Automation (ICRA) (accepted).
    BibTeX:
    @inproceedings{ziaeetabaraksoywoergoetter2017,
      author = {Ziaeetabar, F. and Aksoy, E. E. and Wörgötter, F. and Tamosiunaite, M.},
      title = {Semantic Analysis of Manipulation Actions Using Spatial Relations},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2017},
  month = {May--June},
      note = {accepted},
      abstract = {Recognition of human manipulation actions together with the analysis and execution by a robot is an important issue. Also, perception of spatial relationships between objects is central to understanding the meaning of manipulation actions. Here we would like to merge these two notions and analyze manipulation actions using symbolic spatial relations between objects in the scene. Specifically, we define procedures for extraction of symbolic human-readable relations based on Axis Aligned Bounding Box object models and use sequences of those relations for action recognition from image sequences. Our framework is inspired by the so called Semantic Event Chain framework, which analyzes touching and un-touching events of different objects during the manipulation. However, our framework uses fourteen spatial relations instead of two. We show that our relational framework is able to differentiate between more manipulation actions than the original Semantic Event Chains. We quantitatively evaluate the method on the MANIAC dataset containing 120 videos of eight different manipulation actions and obtain 97% classification accuracy which is 12 % more as compared to the original Semantic Event Chains.}}
    Abstract: Recognition of human manipulation actions together with the analysis and execution by a robot is an important issue. Also, perception of spatial relationships between objects is central to understanding the meaning of manipulation actions. Here we would like to merge these two notions and analyze manipulation actions using symbolic spatial relations between objects in the scene. Specifically, we define procedures for extraction of symbolic human-readable relations based on Axis Aligned Bounding Box object models and use sequences of those relations for action recognition from image sequences. Our framework is inspired by the so called Semantic Event Chain framework, which analyzes touching and un-touching events of different objects during the manipulation. However, our framework uses fourteen spatial relations instead of two. We show that our relational framework is able to differentiate between more manipulation actions than the original Semantic Event Chains. We quantitatively evaluate the method on the MANIAC dataset containing 120 videos of eight different manipulation actions and obtain 97% classification accuracy which is 12 % more as compared to the original Semantic Event Chains.
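    The abstract above extracts symbolic relations from Axis Aligned Bounding Box object models. A toy sketch of that extraction step (covering only a small subset of the paper's fourteen relations; the class and relation names are our own, chosen for illustration):

    ```python
    from dataclasses import dataclass

    @dataclass
    class AABB:
        """Axis-aligned bounding box with min/max corners per axis."""
        xmin: float; ymin: float; zmin: float
        xmax: float; ymax: float; zmax: float

    def relation(a, b):
        """Symbolic spatial relation of box a relative to box b
        (simplified subset: above / below / overlapping / apart)."""
        if a.zmin >= b.zmax:
            return "above"
        if a.zmax <= b.zmin:
            return "below"
        overlap = all(getattr(a, ax + "min") < getattr(b, ax + "max") and
                      getattr(b, ax + "min") < getattr(a, ax + "max")
                      for ax in ("x", "y", "z"))
        return "overlapping" if overlap else "apart"
    ```

    Action recognition then operates on the sequence of such symbolic relations over the frames of a manipulation video, rather than on raw geometry.
    
    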
    Chatterjee, S. and Nachstedt, T. and Tamosiunaite, M. and Wörgötter, F. and Enomoto, Y. and Ariizumi, R. and Matsuno, F. and Manoonpong, P. (2015).
    Learning and Chaining of Motor Primitives for Goal-Directed Locomotion of a Snake-Like Robot with Screw-Drive Units. International Journal of Advanced Robotic Systems, 12, 12. DOI: 10.5772/61621.
    BibTeX:
    @article{chatterjeenachstedttamosiunaite2015,
      author = {Chatterjee, S. and Nachstedt, T. and Tamosiunaite, M. and Wörgötter, F. and Enomoto, Y. and Ariizumi, R. and Matsuno, F. and Manoonpong, P.},
      title = {Learning and Chaining of Motor Primitives for Goal-Directed Locomotion of a Snake-Like Robot with Screw-Drive Units},
      journal = {International Journal of Advanced Robotic Systems},
      year = {2015},
      volume= {12},
      number = {12},
      doi = {10.5772/61621},
  abstract = {In this paper we apply a policy improvement algorithm called Policy Improvement with Path Integrals (PI^2) to generate goal-directed locomotion of a complex snake-like robot with screw-drive units. PI^2 is numerically simple and has an ability to deal with high dimensional systems. Here, this approach is used to find proper locomotion control parameters, like joint angles and screw-drive velocities, of the robot. The learning process was achieved using a simulated robot and the learned parameters were successfully transferred to the real one. As a result the robot can locomote toward a given goal. © 2014 IEEE.}}
    Abstract: In this paper we apply a policy improvement algorithm called Policy Improvement with Path Integrals (PI^2) to generate goal-directed locomotion of a complex snake-like robot with screw-drive units. PI^2 is numerically simple and has an ability to deal with high dimensional systems. Here, this approach is used to find proper locomotion control parameters, like joint angles and screw-drive velocities, of the robot. The learning process was achieved using a simulated robot and the learned parameters were successfully transferred to the real one. As a result the robot can locomote toward a given goal. © 2014 IEEE.

    © 2011 - 2017 Dept. of Computational Neuroscience • comments to: sreich _at_ gwdg.de • Impressum / Site Info