Fatemeh Ziaeetabar

Group(s): Computer Vision
Email:
fziaeetabar@gwdg.de
Phone: +49 551/ 39 10763
Room: E.01.104

    Ziaeetabar, F. and Aksoy, E. E. and Wörgötter, F. and Tamosiunaite, M. (2017).
    Semantic Analysis of Manipulation Actions Using Spatial Relations. IEEE International Conference on Robotics and Automation (ICRA) (accepted).
    BibTeX:
    @inproceedings{ziaeetabaraksoywoergoetter2017,
      author = {Ziaeetabar, F. and Aksoy, E. E. and Wörgötter, F. and Tamosiunaite, M.},
      title = {Semantic Analysis of Manipulation Actions Using Spatial Relations},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2017},
  month = {May--June},
      note = {accepted},
    }
    Abstract: Recognition of human manipulation actions, together with their analysis and execution by a robot, is an important issue. Perception of the spatial relationships between objects is also central to understanding the meaning of manipulation actions. Here we merge these two notions and analyze manipulation actions using symbolic spatial relations between objects in the scene. Specifically, we define procedures for the extraction of symbolic, human-readable relations based on Axis-Aligned Bounding Box object models and use sequences of those relations for action recognition from image sequences. Our framework is inspired by the so-called Semantic Event Chain framework, which analyzes touching and un-touching events between objects during a manipulation. However, our framework uses fourteen spatial relations instead of two. We show that our relational framework is able to differentiate between more manipulation actions than the original Semantic Event Chains. We quantitatively evaluate the method on the MANIAC dataset, containing 120 videos of eight different manipulation actions, and obtain 97% classification accuracy, which is 12% higher than that of the original Semantic Event Chains.
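    The extraction of symbolic relations from Axis-Aligned Bounding Boxes can be sketched as follows. This is a minimal illustration with a few coarse relations, not the paper's fourteen-relation rule set; the class and function names are chosen for the example.

    ```python
    # Minimal sketch: deriving a symbolic, human-readable spatial relation
    # between two Axis-Aligned Bounding Boxes (AABBs). Assumes a 2D scene
    # with y increasing upward; the real method uses 3D boxes and 14 relations.
    from dataclasses import dataclass

    @dataclass
    class AABB:
        xmin: float
        ymin: float
        xmax: float
        ymax: float

    def relation(a: AABB, b: AABB) -> str:
        """Return a coarse symbolic relation of box a with respect to box b."""
        # Containment: a lies entirely inside b.
        if (a.xmin >= b.xmin and a.xmax <= b.xmax
                and a.ymin >= b.ymin and a.ymax <= b.ymax):
            return "inside"
        # Interval overlap on both axes -> the boxes touch/intersect.
        if (a.xmin <= b.xmax and b.xmin <= a.xmax
                and a.ymin <= b.ymax and b.ymin <= a.ymax):
            return "touching"
        # Otherwise the boxes are separated; report the dominant direction.
        if a.xmax < b.xmin:
            return "left_of"
        if a.xmin > b.xmax:
            return "right_of"
        return "below" if a.ymax < b.ymin else "above"

    print(relation(AABB(0, 0, 1, 1), AABB(2, 0, 3, 1)))  # left_of
    ```

    A sequence of such symbolic relations over the frames of a video then serves as the action descriptor, in the spirit of the Semantic Event Chain's touching/un-touching events but with a richer relation vocabulary.
    
    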
    Ziaeetabar, F. and MoghadamCharkari, N. (2014).
    Detection of Abnormal Behaviors Based on Trajectory and Spatial Analysis for Intelligent Video Surveillance systems. Bernstein Conference 2014, 205-206. DOI: 10.12751/nncn.bc2014.0223.
    BibTeX:
    @inproceedings{ziaeetabarmoghadamcharkari2014,
      author = {Ziaeetabar, F. and MoghadamCharkari, N.},
      title = {Detection of Abnormal Behaviors Based on Trajectory and Spatial Analysis for Intelligent Video Surveillance systems},
      pages = {205-206},
      booktitle = {Bernstein Conference 2014},
      year = {2014},
      month = {September},
      doi = {10.12751/nncn.bc2014.0223},
    }
    Abstract: There have been several contributions to human motion detection and action recognition over the past two decades. However, the detection of abnormal and suspicious behaviors in video surveillance is currently one of the most active topics for research groups in computer vision and artificial intelligence. There are two well-known models for detecting suspicious behaviors: the misuse detection model and the anomaly detection model. The misuse detection model relies on an explicit definition of suspicious behavior, while the anomaly detection model measures the difference between defined normal behaviors and the current behavior. We employ the first model to classify human behaviors into normal, abnormal and suspicious types according to trajectory and spatio-temporal domains. In the former, we define several abnormal trajectories, such as crinkle or loitering trajectories, and then compare the input trajectory pattern of each person with the predefined abnormal trajectories. In the second domain, special regions of a scene that are not usual for walking are predefined. If a person stays in such a region longer than a threshold time, the behavior is classified as abnormal. Under some conditions an abnormal behavior is interpreted as suspicious. This is done by introducing a "normality level" that quantifies how common each behavior is: a behavior with a high normality level is normal, while lower values indicate abnormal or suspicious behavior. Employing both domains simultaneously gives the proposed approach a high degree of accuracy. In addition, introducing several levels of abnormality according to the normality-level parameter, together with a fuzzy approach, makes it possible to differentiate between warning and alarm states. These points, along with the online operation of the system despite the complexity of its algorithms, are strengths of our work. Our method detects abnormal and suspicious behaviors in the CAVIAR data set with an accuracy of 90% in real time.
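    The spatio-temporal rule above (predefined restricted regions plus a dwell-time threshold) can be sketched as follows. The region coordinates, threshold value, and function names are illustrative assumptions, not values from the paper.

    ```python
    # Minimal sketch of the dwell-time rule: a track is flagged abnormal if the
    # person remains inside a predefined restricted region longer than a
    # threshold time. Region and threshold values here are hypothetical.

    RESTRICTED = [(4.0, 4.0, 6.0, 6.0)]  # (xmin, ymin, xmax, ymax) per region
    DWELL_THRESHOLD = 3.0                # seconds; assumed value

    def in_region(x, y, region):
        xmin, ymin, xmax, ymax = region
        return xmin <= x <= xmax and ymin <= y <= ymax

    def is_abnormal(track, fps=1.0):
        """track: list of (x, y) positions sampled at `fps` samples per second."""
        dwell = 0.0
        for x, y in track:
            if any(in_region(x, y, r) for r in RESTRICTED):
                dwell += 1.0 / fps
                if dwell > DWELL_THRESHOLD:
                    return True
            else:
                dwell = 0.0  # leaving the region resets the dwell timer
        return False

    # A person lingering inside the restricted region for 5 samples (5 s at 1 fps):
    print(is_abnormal([(5, 5)] * 5))  # True
    ```

    In the full system this binary decision is replaced by a graded "normality level", and a fuzzy mapping over several abnormality levels separates warning states from alarm states.
    
    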

    © 2011 - 2016 Dept. of Computational Neuroscience • comments to: sreich _at_ gwdg.de • Impressum / Site Info