Simon Christoph Stein

Group(s): Computer Vision
Email:
-

    Steingrube, S. and Timme, M. and Wörgötter, F. and Manoonpong, P. (2010).
    Self-Organized Adaptation of Simple Neural Circuits Enables Complex Robot Behavior. Nature Physics, 6, 224-230. DOI: 10.1038/nphys1508.
    BibTeX:
    @article{steingrubetimmewoergoetter2010,
      author = {Steingrube, S. and Timme, M. and Wörgötter, F. and Manoonpong, P.},
      title = {Self-Organized Adaptation of Simple Neural Circuits Enables Complex Robot Behavior},
      pages = {224-230},
      journal = {Nature Physics},
      year = {2010},
      volume = {6},
      doi = {10.1038/nphys1508},
      abstract = {Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Present robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors by means of a simple neural control circuit, thereby generating 11 basic behavioural patterns (for example, orienting, taxis, self-protection and various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and also enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.}}
    		
    Abstract: Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Present robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors by means of a simple neural control circuit, thereby generating 11 basic behavioural patterns (for example, orienting, taxis, self-protection and various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and also enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.
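    The paper's chaotic neural circuit is not reproduced here; as a toy illustration of the general idea of chaos control, a small feedback term can stabilize an unstable fixed point of the (otherwise chaotic) logistic map. The map, the feedback strength `k`, and the function names below are illustrative assumptions, not the authors' model:

```python
def logistic(x, r=4.0):
    """Logistic map; fully chaotic at r = 4."""
    return r * x * (1.0 - x)

def step(x, k=0.0):
    """One iteration with proportional feedback of strength k.

    k = 0 gives the uncontrolled chaotic map; for k in (1/3, 1)
    the feedback stabilizes the unstable fixed point x* = 0.75,
    since the controlled map has slope -2 + 3k there.
    """
    fx = logistic(x)
    return fx + k * (x - fx)

x = 0.3
for _ in range(200):
    x = step(x, k=0.6)
# x has converged to the (otherwise unstable) fixed point 0.75
```

    A small, state-dependent perturbation is enough to turn a chaotic orbit into a stable periodic one, which is the mechanism the abstract refers to as "chaos control".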
    Review:
    Stein, S. and Schoeler, M. and Papon, J. and Wörgötter, F. (2014).
    Object Partitioning using Local Convexity. Conference on Computer Vision and Pattern Recognition CVPR, 304-311. DOI: 10.1109/CVPR.2014.46.
    BibTeX:
    @inproceedings{steinschoelerpapon2014,
      author = {Stein, S. and Schoeler, M. and Papon, J. and Wörgötter, F.},
      title = {Object Partitioning using Local Convexity},
      pages = {304-311},
      booktitle = {Conference on Computer Vision and Pattern Recognition CVPR},
      year = {2014},
      location = {Columbus, OH, USA},
      month = {06},
      doi = {10.1109/CVPR.2014.46},
      abstract = {The problem of how to arrive at an appropriate 3D-segmentation of a scene remains difficult. While current state-of-the-art methods continue to gradually improve in benchmark performance, they also grow more and more complex, for example by incorporating chains of classifiers, which require training on large manually annotated datasets. As an alternative to this, we present a new, efficient learning- and model-free approach for the segmentation of 3D point clouds into object parts. The algorithm begins by decomposing the scene into an adjacency-graph of surface patches based on a voxel grid. Edges in the graph are then classified as either convex or concave using a novel combination of simple criteria which operate on the local geometry of these patches. This way the graph is divided into locally convex connected subgraphs, which, with high accuracy, represent object parts. Additionally, we propose a novel depth-dependent voxel grid to deal with the decreasing point-density at far distances in the point clouds. This improves segmentation, allowing the use of fixed parameters for vastly different scenes. The algorithm is straightforward to implement and requires no training data, while nevertheless producing results that are comparable to state-of-the-art methods which incorporate high-level concepts involving classification, learning and model fitting.}}
    		
    Abstract: The problem of how to arrive at an appropriate 3D-segmentation of a scene remains difficult. While current state-of-the-art methods continue to gradually improve in benchmark performance, they also grow more and more complex, for example by incorporating chains of classifiers, which require training on large manually annotated datasets. As an alternative to this, we present a new, efficient learning- and model-free approach for the segmentation of 3D point clouds into object parts. The algorithm begins by decomposing the scene into an adjacency-graph of surface patches based on a voxel grid. Edges in the graph are then classified as either convex or concave using a novel combination of simple criteria which operate on the local geometry of these patches. This way the graph is divided into locally convex connected subgraphs, which, with high accuracy, represent object parts. Additionally, we propose a novel depth-dependent voxel grid to deal with the decreasing point-density at far distances in the point clouds. This improves segmentation, allowing the use of fixed parameters for vastly different scenes. The algorithm is straightforward to implement and requires no training data, while nevertheless producing results that are comparable to state-of-the-art methods which incorporate high-level concepts involving classification, learning and model fitting.
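    A minimal sketch of the kind of local convexity test the abstract describes: for two adjacent patches, a convex connection means the normals "open away" from each other along the line joining the centroids. The function below is a hypothetical simplification for illustration; the paper's actual criteria include angle thresholds and further checks:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def edge_is_convex(c1, n1, c2, n2):
    """Classify the edge between two surface patches.

    c1, c2: patch centroids; n1, n2: unit surface normals.
    On a convex connection, (n1 - n2) . (c2 - c1) is non-positive;
    on a concave one (an inside corner) it is positive.
    """
    return dot(sub(n1, n2), sub(c2, c1)) <= 0.0

# Top and side faces of a box meet in a convex edge:
print(edge_is_convex((0, 0, 1), (0, 0, 1), (0.5, 0, 0.5), (1, 0, 0)))   # True
# A floor patch meeting a wall (inside corner) is concave:
print(edge_is_convex((0.5, 0, 0), (0, 0, 1), (1, 0, 0.5), (-1, 0, 0)))  # False
```

    Because the test uses only the two patches' centroids and normals, it is strictly local and needs no training data, which is what makes the overall approach learning- and model-free.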
    Review:
    Stein, S. and Wörgötter, F. and Schoeler, M. and Papon, J. and Kulvicius, T. (2014).
    Convexity based object partitioning for robot applications. IEEE International Conference on Robotics and Automation (ICRA), 3213-3220. DOI: 10.1109/ICRA.2014.6907321.
    BibTeX:
    @inproceedings{steinwoergoetterschoeler2014,
      author = {Stein, S. and Wörgötter, F. and Schoeler, M. and Papon, J. and Kulvicius, T.},
      title = {Convexity based object partitioning for robot applications},
      pages = {3213-3220},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2014},
      month = {05},
      doi = {10.1109/ICRA.2014.6907321},
      abstract = {The idea that connected convex surfaces, separated by concave boundaries, play an important role for the perception of objects and their decomposition into parts has been discussed for a long time. Based on this idea, we present a new bottom-up approach for the segmentation of 3D point clouds into object parts. The algorithm approximates a scene using an adjacency-graph of spatially connected surface patches. Edges in the graph are then classified as either convex or concave using a novel, strictly local criterion. Region growing is employed to identify locally convex connected subgraphs, which represent the object parts. We show quantitatively that our algorithm, although conceptually easy to grasp and fast to compute, produces results that are comparable to far more complex state-of-the-art methods which use classification, learning and model fitting. This suggests that convexity/concavity is a powerful feature for object partitioning using 3D data. Furthermore we demonstrate that for many objects a natural decomposition into}}
    		
    Abstract: The idea that connected convex surfaces, separated by concave boundaries, play an important role for the perception of objects and their decomposition into parts has been discussed for a long time. Based on this idea, we present a new bottom-up approach for the segmentation of 3D point clouds into object parts. The algorithm approximates a scene using an adjacency-graph of spatially connected surface patches. Edges in the graph are then classified as either convex or concave using a novel, strictly local criterion. Region growing is employed to identify locally convex connected subgraphs, which represent the object parts. We show quantitatively that our algorithm, although conceptually easy to grasp and fast to compute, produces results that are comparable to far more complex state-of-the-art methods which use classification, learning and model fitting. This suggests that convexity/concavity is a powerful feature for object partitioning using 3D data. Furthermore we demonstrate that for many objects a natural decomposition into
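    The region-growing step described above can be sketched as a flood fill over the patch adjacency graph that crosses only convex edges. The data representation below (patch indices plus a list of convex edge pairs) is an assumption for illustration, not the paper's implementation:

```python
from collections import deque

def segment(num_patches, convex_edges):
    """Group patches into locally convex connected components.

    convex_edges: iterable of (i, j) patch-index pairs whose
    connecting edge was classified as convex; concave edges are
    simply absent and therefore act as segment boundaries.
    Returns one segment label per patch.
    """
    adj = {i: [] for i in range(num_patches)}
    for i, j in convex_edges:
        adj[i].append(j)
        adj[j].append(i)

    labels = [None] * num_patches
    next_label = 0
    for seed in range(num_patches):
        if labels[seed] is not None:
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:                      # breadth-first flood fill
            p = queue.popleft()
            for q in adj[p]:
                if labels[q] is None:
                    labels[q] = next_label
                    queue.append(q)
        next_label += 1
    return labels

# Five patches; the concave boundary between patches 2 and 3
# splits the scene into two object parts:
print(segment(5, [(0, 1), (1, 2), (3, 4)]))  # [0, 0, 0, 1, 1]
```

    Each resulting component is a maximal set of patches reachable through convex connections only, which is exactly the "locally convex connected subgraph" the abstract refers to.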
    Review:
    Schoeler, M. and Stein, S. and Papon, J. and Abramov, A. and Wörgötter, F. (2014).
    Fast Self-supervised On-line Training for Object Recognition Specifically for Robotic Applications. International Conference on Computer Vision Theory and Applications VISAPP, 1-10.
    BibTeX:
    @inproceedings{schoelersteinpapon2014,
      author = {Schoeler, M. and Stein, S. and Papon, J. and Abramov, A. and Wörgötter, F.},
      title = {Fast Self-supervised On-line Training for Object Recognition Specifically for Robotic Applications},
      pages = {1-10},
      booktitle = {International Conference on Computer Vision Theory and Applications VISAPP},
      year = {2014},
      month = {January},
      abstract = {Today most recognition pipelines are trained at an off-line stage, providing systems with pre-segmented images and predefined objects, or at an on-line stage, which requires a human supervisor to tediously control the learning. Self-supervised on-line training of recognition pipelines without human intervention is a highly desirable goal, as it allows systems to learn unknown, environment-specific objects on-the-fly. We propose a fast and automatic system, which can extract and learn unknown objects with minimal human intervention by employing a two-level pipeline combining the advantages of RGB-D sensors for object extraction and high-resolution cameras for object recognition. Furthermore, we significantly improve recognition results with local features by implementing a novel keypoint orientation scheme, which leads to highly invariant but discriminative object signatures. Using only one image per object for training, our system is able to achieve a recognition rate of 79% for 18 objects, benchmarked on 42 scenes with random poses, scales and occlusion, while only taking 7 seconds for the training. Additionally, we evaluate our orientation scheme on the state-of-the-art 56-object SDU-dataset boosting accuracy for one training view per object by +37% to 78% and peaking at a performance of 98% for 11 training views.}}
    		
    Abstract: Today most recognition pipelines are trained at an off-line stage, providing systems with pre-segmented images and predefined objects, or at an on-line stage, which requires a human supervisor to tediously control the learning. Self-supervised on-line training of recognition pipelines without human intervention is a highly desirable goal, as it allows systems to learn unknown, environment-specific objects on-the-fly. We propose a fast and automatic system, which can extract and learn unknown objects with minimal human intervention by employing a two-level pipeline combining the advantages of RGB-D sensors for object extraction and high-resolution cameras for object recognition. Furthermore, we significantly improve recognition results with local features by implementing a novel keypoint orientation scheme, which leads to highly invariant but discriminative object signatures. Using only one image per object for training, our system is able to achieve a recognition rate of 79% for 18 objects, benchmarked on 42 scenes with random poses, scales and occlusion, while only taking 7 seconds for the training. Additionally, we evaluate our orientation scheme on the state-of-the-art 56-object SDU-dataset boosting accuracy for one training view per object by +37% to 78% and peaking at a performance of 98% for 11 training views.
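    The paper's novel keypoint orientation scheme is not reproduced here; as a generic point of comparison, the classical SIFT-style way to make local features rotation-invariant assigns each keypoint a dominant orientation from a magnitude-weighted histogram of gradient angles. Everything below is a standard textbook sketch, not the authors' method:

```python
import math

def dominant_orientation(gradients, bins=36):
    """Return the dominant gradient direction of a keypoint patch.

    gradients: list of (gx, gy) image gradients sampled around the
    keypoint. Angles are binned, weighted by gradient magnitude,
    and the centre of the strongest bin is returned in radians.
    The descriptor is then computed relative to this angle, making
    it invariant to in-plane rotation of the patch.
    """
    hist = [0.0] * bins
    width = 2.0 * math.pi / bins
    for gx, gy in gradients:
        mag = math.hypot(gx, gy)
        ang = math.atan2(gy, gx) % (2.0 * math.pi)
        hist[min(int(ang / width), bins - 1)] += mag
    peak = max(range(bins), key=hist.__getitem__)
    return (peak + 0.5) * width

# A patch whose gradients point mostly along +x yields an angle near 0:
theta = dominant_orientation([(1.0, 0.0)] * 8 + [(0.2, 0.1)])
```

    Improving the repeatability and discriminativity of this orientation assignment is the lever the abstract credits for the reported accuracy gains.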
    Review:
    Sutterlütti, R. and Stein, S. C. and Tamosiunaite, M. and Wörgötter, F. (2014).
    Object names correspond to convex entities. Cognitive Processing, 15(1), 69-71. DOI: 10.1007/s10339-013-0597-6.
    BibTeX:
    @article{sutterluettisteintamosiunaite2014,
      author = {Sutterlütti, R. and Stein, S. C. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Object names correspond to convex entities},
      pages = {69--71},
      journal = {Cognitive Processing},
      year = {2014},
      volume = {15},
      number = {1},
      language = {English},
      organization = {Springer Berlin Heidelberg},
      publisher = {Springer Berlin Heidelberg},
      doi = {10.1007/s10339-013-0597-6}}
    		
    Abstract:
    Review:

    © 2011 - 2016 Dept. of Computational Neuroscience • comments to: sreich _at_ gwdg.de • Impressum / Site Info