Dr. Mohamad Javad Aein

Group(s): Neural Control and Robotics
Email: maein@gwdg.de

    Kulvicius, T. and Biehl, M. and Aein, M. J. and Tamosiunaite, M. and Wörgötter, F. (2013).
    Interaction learning for dynamic movement primitives used in cooperative robotic tasks. Robotics and Autonomous Systems, 61(12), 1450-1459. DOI: 10.1016/j.robot.2013.07.009.
    BibTeX:
    @article{kulviciusbiehlaein2013,
      author = {Kulvicius, T. and Biehl, M. and Aein, M. J. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Interaction learning for dynamic movement primitives used in cooperative robotic tasks},
      journal = {Robotics and Autonomous Systems},
      year = {2013},
      volume = {61},
      number = {12},
      pages = {1450-1459},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889013001358},
      doi = {10.1016/j.robot.2013.07.009}
    }
    Abstract: In recent years, dynamic movement primitives (DMPs) have attracted growing interest for flexible movement control in robotics. In this study we introduce sensory feedback together with a predictive learning mechanism that allows tightly coupled dual-agent systems to learn an adaptive, sensor-driven interaction based on DMPs. The coupled conventional (no sensors, no learning) DMP system automatically equilibrates and can still be solved analytically, allowing us to derive conditions for stability. When adaptive sensor control is added, we can show that both agents learn to cooperate. Simulations as well as real-robot experiments are presented. Interestingly, all these mechanisms are entirely based on low-level interactions, without any planning or cognitive component.
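    As background for this entry, the standard (uncoupled) discrete DMP equations that work of this kind builds on can be written as follows; this is the textbook formulation after Ijspeert and colleagues, not the paper's coupled, sensor-driven variant:

      \tau \dot{z} = \alpha_z \bigl( \beta_z (g - y) - z \bigr) + f(x), \qquad
      \tau \dot{y} = z, \qquad
      \tau \dot{x} = -\alpha_x x

    Here y is the movement state, g the goal, x the phase of the canonical system, and f(x) a learned forcing term; the paper extends such a system with inter-agent coupling and adaptive sensory terms.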
    Aein, M. J. and Aksoy, E. E. and Tamosiunaite, M. and Papon, J. and Ude, A. and Wörgötter, F. (2013).
    Toward a library of manipulation actions based on Semantic Object-Action Relations. IEEE/RSJ International Conference on Intelligent Robots and Systems. DOI: 10.1109/IROS.2013.6697011.
    BibTeX:
    @inproceedings{aeinaksoytamosiunaite2013,
      author = {Aein, M. J. and Aksoy, E. E. and Tamosiunaite, M. and Papon, J. and Ude, A. and Wörgötter, F.},
      title = {Toward a library of manipulation actions based on Semantic Object-Action Relations},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems},
      year = {2013},
      doi = {10.1109/IROS.2013.6697011}
    }
    Abstract: The goal of this study is to provide an architecture for a generic definition of robot manipulation actions. We emphasize that the representation of actions presented here is procedural; thus, we define the structural elements of our action representations as execution protocols. To achieve this, manipulations are defined on three levels. The top level defines objects, their relations, and the actions in an abstract, symbolic way. A mid-level sequencer, with which the action primitives are chained, structures the actual action execution, which is performed via the bottom level. This lowest level collects data from sensors and communicates with the control system of the robot. This method enables robot manipulators to execute the same action in different situations, i.e., on different objects with different positions and orientations. In addition, two methods of detecting action failure are provided, which are necessary to handle faults in the system. To demonstrate the effectiveness of the proposed framework, several different actions are performed on our robotic setup and results are shown. In this way we are creating a library of human-like robot actions, which can be used by higher-level task planners to execute more complex tasks.
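    A minimal Python sketch of the three-level idea described above (symbolic top level, mid-level sequencer, low-level sensing and control); the class names and primitive sets are hypothetical illustrations, not the paper's actual interfaces:

      # Hypothetical sketch of a three-level manipulation-action architecture.

      class LowLevel:
          """Bottom level: reads sensors and talks to the robot controller."""
          def execute_primitive(self, primitive, target):
              print(f"executing {primitive} on {target}")
              return True  # a real system reports sensor-based success/failure

      class Sequencer:
          """Mid level: chains action primitives and aborts on failure."""
          def __init__(self, low_level):
              self.low = low_level

          def run(self, primitives, target):
              for p in primitives:
                  if not self.low.execute_primitive(p, target):
                      return False  # one simple way to detect action failure
              return True

      # Top level: abstract, symbolic actions mapped to primitive chains.
      ACTION_LIBRARY = {
          "pick":  ["approach", "grasp", "lift"],
          "place": ["move_above", "lower", "release"],
      }

      seq = Sequencer(LowLevel())
      print("pick succeeded:", seq.run(ACTION_LIBRARY["pick"], target="cup"))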
    Schoeler, M. and Wörgötter, F. and Aein, M. and Kulvicius, T. (2014).
    Automated generation of training sets for object recognition in robotic applications. 23rd International Conference on Robotics in Alpe-Adria-Danube Region (RAAD), 1-7. DOI: 10.1109/RAAD.2014.7002247.
    BibTeX:
    @inproceedings{schoelerwoergoetteraein2014,
      author = {Schoeler, M. and Wörgötter, F. and Aein, M. and Kulvicius, T.},
      title = {Automated generation of training sets for object recognition in robotic applications},
      booktitle = {23rd International Conference on Robotics in Alpe-Adria-Danube Region (RAAD)},
      year = {2014},
      month = {Sept},
      pages = {1-7},
      doi = {10.1109/RAAD.2014.7002247}
    }
    Abstract: Object recognition plays an important role in robotics, since objects/tools first have to be identified in the scene before they can be manipulated/used. The performance of object recognition largely depends on the training dataset. Usually such training sets are gathered manually by a human operator, a tedious procedure which ultimately limits the size of the dataset. One reason samples must be selected manually is that the results returned by search engines often contain irrelevant images, mainly due to the problem of homographs (words spelled the same but with different meanings). In this paper we present an automated and unsupervised method, coined Trainingset Cleaning by Translation (TCT), for the generation of training sets that deals with the problem of homographs. For disambiguation, it uses the context provided by a command like "tighten the nut" together with a combination of public image searches, text searches and translation services. We compare our approach against plain Google image search, qualitatively as well as in a classification task, and demonstrate that our method indeed leads to a task-relevant training set, which results in an improvement of 24.1% in object recognition for 12 ambiguous classes. In addition, we present an application of our method to a real robot scenario.
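    The disambiguation step lends itself to a short sketch. Below is a toy Python version of the idea; translate() and image_search() are invented stubs standing in for the public translation and image-search services the paper combines:

      # Toy sketch of translation-based disambiguation in the spirit of TCT;
      # all helpers are placeholder stubs, not real APIs.

      def translate(word, context, target_lang):
          # A real service would translate the whole command ("tighten the
          # nut"), letting context resolve the homograph to the right sense.
          lookup = {("nut", "tighten the nut", "de"): "Mutter"}
          return lookup.get((word, context, target_lang), word)

      def image_search(query):
          return [f"{query}_{i}.jpg" for i in range(3)]  # stub results

      def build_training_set(word, context, langs=("de", "fr")):
          """Gather images for the context-correct sense of an ambiguous word."""
          images = []
          for lang in langs:
              images += image_search(translate(word, context, lang))
          return images

      print(build_training_set("nut", "tighten the nut"))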
    Aksoy, E. E. and Aein, M. J. and Tamosiunaite, M. and Wörgötter, F. (2015).
    Semantic parsing of human manipulation activities using on-line learned models for robot imitation. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2875-2882. DOI: 10.1109/IROS.2015.7353773.
    BibTeX:
    @inproceedings{aksoyaeintamosiunaite2015,
      author = {Aksoy, E. E. and Aein, M. J. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Semantic parsing of human manipulation activities using on-line learned models for robot imitation},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2015},
      month = {Sept},
      location = {Hamburg, Germany},
      pages = {2875-2882},
      doi = {10.1109/IROS.2015.7353773}
    }
    Abstract: Human manipulation activity recognition is an important yet challenging task in robot imitation. In this paper we introduce, for the first time, a novel method for semantic decomposition and recognition of continuous human manipulation activities by using on-line learned individual manipulation models. Based solely on the spatiotemporal interactions between objects and hands in the scene, the proposed framework can parse not only sequential and concurrent (overlapping) manipulation streams but also the basic primitive elements of each detected manipulation. Without requiring any prior object knowledge, the framework can furthermore extract object-like scene entities that perform the same role in the detected manipulations. The framework was evaluated on our new egocentric activity dataset, which contains 120 different samples of 8 single atomic manipulations (e.g., Cutting and Stirring) and 20 long and complex activity demonstrations such as ...
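    The core ingredient here, parsing activities purely from spatiotemporal object-hand relations, can be illustrated with a toy Python sketch in the spirit of event-chain representations (illustrative only, not the paper's implementation):

      # Encode each video frame as the set of touching entity pairs and keep
      # only the frames where some relation changes (relational "key frames").

      def event_chain(frames):
          chain, prev = [], None
          for touching in frames:  # each frame: a set of touching pairs
              if touching != prev:
                  chain.append(sorted(touching))
              prev = touching
          return chain

      # A hand grasps a cup, places it on a plate, then releases it.
      frames = [
          set(),
          {("hand", "cup")},
          {("hand", "cup")},                    # unchanged frame is dropped
          {("hand", "cup"), ("cup", "plate")},
          {("cup", "plate")},
      ]
      print(event_chain(frames))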
    Agostini, A. and Aein, M. J. and Szedmak, S. and Aksoy, E. E. and Piater, J. and Wörgötter, F. (2015).
    Using Structural Bootstrapping for Object Substitution in Robotic Executions of Human-like Manipulation Tasks. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 6479-6486. DOI: 10.1109/IROS.2015.7354303.
    BibTeX:
    @inproceedings{agostiniaeinszedmak2015,
      author = {Agostini, A. and Aein, M. J. and Szedmak, S. and Aksoy, E. E. and Piater, J. and Wörgötter, F.},
      title = {Using Structural Bootstrapping for Object Substitution in Robotic Executions of Human-like Manipulation Tasks},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
      year = {2015},
      month = {Sept},
      location = {Hamburg, Germany},
      pages = {6479-6486},
      doi = {10.1109/IROS.2015.7354303}
    }
    Abstract: In this work we address the problem of finding replacements for missing objects that are needed for the execution of human-like manipulation tasks. This is a common problem that humans solve easily, given their natural ability to find object substitutions: using a knife as a screwdriver or a book as a cutting board. In robotic applications, by contrast, the objects required by a task must be specified in advance in the problem definition. If any of these objects is missing from the scenario, the conventional approach is to manually redefine the problem according to the objects available in the scene. In this work we propose an automatic way of finding object substitutions for the execution of manipulation tasks. The approach uses a logic-based planner to generate a plan from a prototypical problem definition and searches for replacements in the scene when some of the objects involved in the plan are missing. This is done by means of a repository of objects and attributes with roles, which is used to identify the affordances of unknown objects in the scene. Planning actions are grounded using a novel approach that encodes the semantic structure of manipulation actions. The system was evaluated on a KUKA arm platform on the task of preparing a salad, with successful results.
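    The substitution idea, matching an available scene object to a missing object's role via shared attributes, can be sketched in a few lines of Python; the repository contents and the overlap scoring below are invented for illustration:

      # Hypothetical attribute repository; the paper's repository also
      # associates roles, which this toy version reduces to attribute sets.
      REPOSITORY = {
          "cutting_board": {"flat", "rigid", "graspable"},
          "book":          {"flat", "rigid", "graspable"},
          "knife":         {"rigid", "graspable", "edged"},
      }

      def substitute(missing, scene_objects):
          """Return the scene object sharing the most attributes with `missing`."""
          required = REPOSITORY[missing]
          best, best_score = None, 0
          for obj in scene_objects:
              score = len(required & REPOSITORY.get(obj, set()))
              if score > best_score:
                  best, best_score = obj, score
          return best

      # The cutting board is missing; the book is the closest available match.
      print(substitute("cutting_board", ["book", "knife"]))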
