Dr. Irene Markelic

    Kulvicius, T. and Markelic, I. and Tamosiunaite, M. and Wörgötter, F. (2013).
    Semantic image search for robotic applications. Proc. of 22nd Int. Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2013), 1-8.
    BibTeX:
    @inproceedings{kulviciusmarkelictamosiunaite2013,
      author = {Kulvicius, T. and Markelic, I. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Semantic image search for robotic applications},
      pages = {1-8},
      booktitle = {Proc. of 22nd Int. Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2013)},
      year = {2013},
      location = {Portorož (Slovenia)},
      month = {September 11-13},
      abstract = {Generalization in robotics is one of the most important problems. New generalization approaches use internet databases in order to solve new tasks. Modern search engines can return a large amount of information according to a query within milliseconds. However, not all of the returned information is task relevant, partly due to the problem of polysemes. Here we specifically address the problem of object generalization by using image search. We suggest a bi-modal solution, combining visual and textual information, based on the observation that humans use additional linguistic cues to demarcate intended word meaning. We evaluate the quality of our approach by comparing it to human labelled data and find that, on average, our approach leads to improved results in comparison to Google searches, and that it can treat the problem of polysemes.}}
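The bi-modal combination of visual and textual information described in the abstract can be illustrated with a generic late-fusion re-ranking sketch. This is not the paper's actual scoring function; the `rerank_images` helper, the tuple format, and the 0.5 weighting are illustrative assumptions.

```python
def rerank_images(results, alpha=0.5):
    """Late fusion: combine normalized visual and textual similarity scores.

    'results' holds (image_id, visual_score, text_score) tuples with scores
    in [0, 1]; alpha weights the visual modality against the textual one.
    """
    fused = [(img, alpha * v + (1 - alpha) * t) for img, v, t in results]
    return sorted(fused, key=lambda item: item[1], reverse=True)

# For a polysemous query like "mandolin", textual cues from the source page
# (e.g. a co-occurring verb such as "cut") can push images of the intended
# word sense (the kitchen tool) above visually salient but wrong-sense hits.
candidates = [
    ("instrument_photo", 0.9, 0.1),  # visually strong, wrong word sense
    ("kitchen_slicer", 0.6, 0.9),    # matches the linguistic context
    ("unrelated_image", 0.2, 0.1),
]
ranking = rerank_images(candidates)
```

With equal weighting the context-matching image wins (fused score 0.75 vs. 0.5 for the wrong-sense hit).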
    Tetzlaff, C. and Kolodziejski, C. and Markelic, I. and Wörgötter, F. (2012).
    Time scales of memory, learning, and plasticity. Biol. Cybern., 106(11), 715-726. DOI: 10.1007/s00422-012-0529-z.
    BibTeX:
    @article{tetzlaffkolodziejskimarkelic2012,
      author = {Tetzlaff, C. and Kolodziejski, C. and Markelic, I. and Wörgötter, F.},
      title = {Time scales of memory, learning, and plasticity},
      pages = {715-726},
      journal = {Biol. Cybern.},
      year = {2012},
      volume = {106},
      number = {11},
      url = {http://dx.doi.org/10.1007/s00422-012-0529-z},
      doi = {10.1007/s00422-012-0529-z},
      abstract = {If we stored every bit of input, the storage capacity of our nervous system would be reached after only about 10 days. The nervous system relies on at least two mechanisms that counteract this capacity limit: compression and forgetting. But the latter mechanism needs to know how long an entity should be stored: some memories are relevant only for the next few minutes, some are important even after the passage of several years. Psychology and physiology have found and described many different memory mechanisms, and these mechanisms indeed use different time scales. In this prospect we review these mechanisms with respect to their time scale and propose relations between mechanisms in learning and memory and their underlying physiological basis.}}
    Liu, G. and Wörgötter, F. and Markelic, I. (2012).
    Stochastic Lane Shape Estimation Using Local Image Descriptors. IEEE Transactions on Intelligent Transportation Systems, 13-21. DOI: 10.1109/TITS.2012.2205146.
    BibTeX:
    @article{liuwoergoettermarkelic2012,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Stochastic Lane Shape Estimation Using Local Image Descriptors},
      pages = {13-21},
      journal = {IEEE Transactions on Intelligent Transportation Systems},
      year = {2012},
      month = {07},
      doi = {10.1109/TITS.2012.2205146},
      abstract = {In this paper, we present a novel measurement model for particle-filter-based lane shape estimation. Recently, the particle filter has been widely used to solve lane detection and tracking problems, due to its simplicity, robustness, and efficiency. The key part of the particle filter is the measurement model, which describes how well a generated hypothesis (a particle) fits current visual cues in the image. Previous methods often simply combine multiple visual cues in a likelihood function without considering the uncertainties of local visual cues and the accurate probability relationship between visual cues and the lane model. In contrast, this paper derives a new measurement model by utilizing multiple kernel density to precisely estimate this probability relationship. The uncertainties of local visual cues are considered and modeled by Gaussian kernels. Specifically, we use a linear-parabolic model to describe the shape of lane boundaries on a top-view image and a partitioned particle filter (PPF), integrating it with our novel measurement model to estimate lane shapes in consecutive frames. Finally, the robustness of the proposed algorithm with the new measurement model is demonstrated on the DRIVSCO data sets.}}
    Liu, G. and Wörgötter, F. and Markelic, I. (2012).
    Square-Root Sigma-Point Information Filtering. IEEE Transactions on Automatic Control. DOI: 10.1109/TAC.2012.2193708.
    BibTeX:
    @article{liuwoergoettermarkelic2012a,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Square-Root Sigma-Point Information Filtering},
      journal = {IEEE Transactions on Automatic Control},
      year = {2012},
      doi = {10.1109/TAC.2012.2193708},
      abstract = {The sigma-point information filters employ a number of deterministic sigma-points to calculate the mean and covariance of a random variable which undergoes a nonlinear transformation. These sigma-points can be generated by the unscented transform or Stirling's interpolation, which corresponds to the unscented information filter (UIF) and the central difference information filter (CDIF), respectively. In this technical note, we develop the square-root extensions of UIF and CDIF, which have better numerical properties than the original versions, e.g., improved numerical accuracy, double order precision and preservation of symmetry. We also show that the square-root unscented information filter (SRUIF) might lose the positive-definiteness due to the negative Cholesky update, whereas the square-root central difference information filter (SRCDIF) has only positive Cholesky update. Therefore, the SRCDIF is preferable to the SRUIF concerning the numerical stability.}}
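The sigma-point machinery shared by the UIF/CDIF family and their square-root variants can be sketched as follows. This is the textbook unscented transform with the basic kappa parameterization (a hypothetical helper, not code from the papers); the Cholesky factor computed here is precisely the matrix square root that the square-root filters propagate instead of the full covariance.

```python
import numpy as np

def unscented_sigma_points(mean, cov, kappa=0.0):
    """Generate 2n+1 sigma points and weights for a Gaussian (mean, cov).

    Points sit at the mean and at +/- the columns of a matrix square root
    of the scaled covariance; square-root filter variants carry this
    Cholesky factor directly instead of the full covariance matrix.
    """
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    points = [mean]
    for i in range(n):
        points.append(mean + L[:, i])
        points.append(mean - L[:, i])
    weights = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    weights[0] = kappa / (n + kappa)
    return np.array(points), weights

# Sanity check: the weighted sigma points reproduce the original moments.
mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
pts, w = unscented_sigma_points(mean, cov, kappa=1.0)
rec_mean = w @ pts
diff = pts - rec_mean
rec_cov = (w[:, None] * diff).T @ diff
```

Recovering the exact mean and covariance from the weighted points is the defining property of the transform; the filters then push the same points through the nonlinear dynamics or measurement function instead.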
    Liu, G. and Wörgötter, F. and Markelic, I. (2012).
    The Square-Root Unscented Information Filter for State Estimation and Sensor Fusion. International Conference on Sensor Networks SENSORNETS.
    BibTeX:
    @inproceedings{liuwoergoettermarkelic2012b,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {The Square-Root Unscented Information Filter for State Estimation and Sensor Fusion},
      booktitle = {International Conference on Sensor Networks SENSORNETS},
      year = {2012},
      abstract = {This paper presents a new recursive Bayesian estimation method, which is the square-root unscented information filter (SRUIF). The unscented information filter (UIF) has been introduced recently for nonlinear system estimation and sensor fusion. In the UIF framework, a number of sigma points are sampled from the probability distribution of the prior state by the unscented transform and then propagated through the nonlinear dynamic function and measurement function. The new state is estimated from the propagated sigma points. In this way, the UIF can achieve higher estimation accuracies and faster convergence rates than the extended information filter (EIF). As the extension of the original UIF, we propose to use the square-root of the covariance in the SRUIF instead of the full covariance in the UIF for estimation. The new SRUIF has better numerical properties than the original UIF, e.g., improved numerical accuracy, double order precision and preservation of symmetry.}}
    Tamosiunaite, M. and Markelic, I. and Kulvicius, T. and Wörgötter, F. (2011).
    Generalizing objects by analyzing language. 11th IEEE-RAS International Conference on Humanoid Robots Humanoids, 557-563. DOI: 10.1109/Humanoids.2011.6100812.
    BibTeX:
    @inproceedings{tamosiunaitemarkelickulvicius2011,
      author = {Tamosiunaite, M. and Markelic, I. and Kulvicius, T. and Wörgötter, F.},
      title = {Generalizing objects by analyzing language},
      pages = {557-563},
      booktitle = {11th IEEE-RAS International Conference on Humanoid Robots Humanoids},
      year = {2011},
      month = {10},
      doi = {10.1109/Humanoids.2011.6100812},
      abstract = {Generalizing objects in an action-context by a robot, for example addressing the problem: "Which items can be cut with which tools?", is an unresolved and difficult problem. Answering such a question defines a complete action class and robots cannot do this so far. We use a bootstrapping mechanism similar to that known from human language acquisition, and combine language with image-analysis to create action classes built around the verb (action) in an utterance. A human teaches the robot a certain sentence, for example: "Cut a sausage with a knife", from where on the machine generalizes the arguments (nouns) that the verb takes and searches for possible alternative nouns. Then, by ways of an internet-based image search and a classification algorithm, image classes for the alternative nouns are extracted, by which a large "picture book" of the possible objects involved in an action is created. This concludes the generalization step. Using the same classifier, the machine can now also perform a recognition procedure. Without having seen the objects before, it can analyze a visual scene, discovering, for example, a cucumber and a mandolin, which match to the earlier found nouns, allowing it to suggest actions like: "I could cut a cucumber with a mandolin". The algorithm for generalizing objects by analyzing language (GOAL) presented here thus allows generalization and recognition of objects in an action-context. It can then be combined with methods for action execution (e.g. action generation based on human demonstration) to execute so far unknown actions.}}
    Markelic, I. and Kjaer-Nielsen, A. and Pauwels, K. and Baunegaard Jensen, L. and Chumerin, N. and Vidugiriene, A. and Tamosiunaite, M. and Van Hulle, M. and Krüger, N. and Rotter, A. and Wörgötter, F. (2011).
    The Driving School System: Learning Automated Basic Driving Skills from a Teacher in a Real Car. IEEE Trans. Intelligent Transportation Systems, PP(99), 1-12. DOI: 10.1109/TITS.2011.2157690.
    BibTeX:
    @article{markelickjaernielsenpauwels2011,
      author = {Markelic, I. and Kjaer-Nielsen, A. and Pauwels, K. and Baunegaard Jensen, L. and Chumerin, N. and Vidugiriene, A. and Tamosiunaite, M. and Van Hulle, M. and Krüger, N. and Rotter, A. and Wörgötter, F.},
      title = {The Driving School System: Learning Automated Basic Driving Skills from a Teacher in a Real Car},
      pages = {1-12},
      journal = {IEEE Trans. Intelligent Transportation Systems},
      year = {2011},
      volume = {PP},
      number = {99},
      doi = {10.1109/TITS.2011.2157690},
      abstract = {We present a system that learns basic vision based driving skills from a human teacher. In contrast to much other work in this area, which is based on simulation or data obtained from simulation, our system is implemented as a multi-threaded, parallel CPU/GPU architecture in a real car and trained with real driving data to generate steering and acceleration control for road following. In addition it uses a novel algorithm for detecting independently moving objects (IMOs) for spotting obstacles. Both the learning and the IMO detection algorithms are data driven and thus improve on the limitations of model based approaches. The system's ability to imitate the teacher's behavior is analyzed on known and unknown streets and the results suggest its use for steering assistance but limit the use of the acceleration signal to curve negotiation. We propose that this ability to adapt to the driver has high potential for future intelligent driver assistance systems since it can serve to increase the driver's security as well as the comfort, an important sales argument in the car industry.}}
    Liu, G. and Wörgötter, F. and Markelic, I. (2011).
    Nonlinear Estimation Using Central Difference Information Filter. IEEE International Workshop on Statistical Signal Processing (SSP), 593-596. DOI: 10.1109/SSP.2011.5967768.
    BibTeX:
    @inproceedings{liuwoergoettermarkelic2011b,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Nonlinear Estimation Using Central Difference Information Filter},
      pages = {593-596},
      booktitle = {IEEE International Workshop on Statistical Signal Processing},
      year = {2011},
      doi = {10.1109/SSP.2011.5967768},
      abstract = {In this contribution, we introduce a new state estimation filter for nonlinear estimation and sensor fusion, which we call central difference information filter (CDIF). As we know, the extended information filter (EIF) has two shortcomings: one is the limited accuracy of the Taylor series linearization method, the other is the calculation of the Jacobians. These shortcomings can be compensated by utilizing sigma point information filters (SPIFs), e.g. the unscented information filter (UIF), which uses deterministic sigma points to approximate the distribution of Gaussian random variables and does not require the calculation of Jacobians. As an alternative to the UIF, the CDIF is derived by using Stirling's interpolation to generate sigma points in the SPIFs architecture, which uses fewer parameters, has lower computational cost and achieves the same accuracy as the UIF. To demonstrate the performance of our algorithm, a classic space vehicle reentry tracking simulation is used.}}
    Liu, G. and Wörgötter, F. and Markelic, I. (2011).
    Lane Shape Estimation Using a Partitioned Particle Filter for Autonomous Driving. IEEE International Conference on Robotics and Automation (ICRA), 1627-1633. DOI: 10.1109/ICRA.2011.5979753.
    BibTeX:
    @inproceedings{liuwoergoettermarkelic2011,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Lane Shape Estimation Using a Partitioned Particle Filter for Autonomous Driving},
      pages = {1627-1633},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2011},
      doi = {10.1109/ICRA.2011.5979753},
      abstract = {This paper presents a probabilistic algorithm for lane shape estimation in an urban environment which is important for example for driver assistance systems and autonomous driving. For the first time, we bring together the so-called Partitioned Particle filter, an improvement of the traditional Particle filter, and the linear-parabolic lane model which alleviates many shortcomings of traditional lane models. The former improves the traditional Particle filter by subdividing the whole state space of particles into several subspaces and estimating those subspaces in a hierarchical structure, such that the number of particles for each subspace is flexible and the robustness of the whole system is increased. Furthermore, we introduce a new statistical observation model, an important part of the Particle filter, where we use multi-kernel density to model the probability distribution of lane parameters. Our observation model considers not only color and position information as image cues, but also the image gradient. Our experimental results illustrate the robustness and efficiency of our algorithm even when confronted with challenging scenes.}}
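The estimation loop underlying these lane trackers can be illustrated with a minimal bootstrap particle filter on a one-dimensional state; the partitioned filter and multi-kernel observation model of the paper are considerably richer. All names, noise levels, and the Gaussian-kernel likelihood below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         process_std=0.5, meas_std=1.0):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    # Predict: diffuse particles with process noise.
    particles = particles + rng.normal(0.0, process_std, size=particles.shape)
    # Weight: Gaussian kernel likelihood of the measurement, echoing
    # kernel-density observation models.
    weights = weights * np.exp(-0.5 * ((particles - measurement) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Track a constant hidden state near 2.0 from noisy measurements.
particles = rng.normal(0.0, 5.0, size=500)
weights = np.full(500, 1.0 / 500)
for z in [2.0, 2.1, 1.9, 2.0, 2.05]:
    particles, weights = particle_filter_step(particles, weights, z)
estimate = float(weights @ particles)
```

Each cycle diffuses the particles (prediction), reweights them by how well they explain the measurement, and resamples to concentrate particles in high-likelihood regions; the weighted mean converges toward the true state.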
    Liu, G. and Wörgötter, F. and Markelic, I. (2012).
    Square-Root Sigma-Point Information Filter for Nonlinear Estimation and Sensor Fusion. IEEE Transactions on Automatic Control, 57(11), 2945-2950. DOI: 10.1109/TAC.2012.2193708.
    BibTeX:
    @article{liuwoergoettermarkelic2011a,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Square-Root Sigma-Point Information Filter for Nonlinear Estimation and Sensor Fusion},
      pages = {2945-2950},
      journal = {IEEE Transactions on Automatic Control},
      year = {2012},
      volume = {57},
      number = {11},
      month = {11},
      doi = {10.1109/TAC.2012.2193708},
      abstract = {The sigma-point information filters employ a number of deterministic sigma-points to calculate the mean and covariance of a random variable which undergoes a nonlinear transformation. These sigma-points can be generated by the unscented transform or Stirling's interpolation, which corresponds to the unscented information filter (UIF) and the central difference information filter (CDIF), respectively. In this technical note, we develop the square-root extensions of UIF and CDIF, which have better numerical properties than the original versions, e.g., improved numerical accuracy, double order precision and preservation of symmetry. We also show that the square-root unscented information filter (SRUIF) might lose the positive-definiteness due to the negative Cholesky update, whereas the square-root central difference information filter (SRCDIF) has only positive Cholesky update. Therefore, the SRCDIF is preferable to the SRUIF concerning the numerical stability.}}
    Liu, G. and Wörgötter, F. and Markelic, I. (2010).
    Combining Statistical Hough Transform and Particle Filter for robust lane detection and tracking. 2010 IEEE Intelligent Vehicles Symposium (IV), 993-997. DOI: 10.1109/IVS.2010.5548021.
    BibTeX:
    @inproceedings{liuwoergoettermarkelic2010,
      author = {Liu, G. and Wörgötter, F. and Markelic, I.},
      title = {Combining Statistical Hough Transform and Particle Filter for robust lane detection and tracking},
      pages = {993-997},
      booktitle = {2010 IEEE Intelligent Vehicles Symposium (IV)},
      year = {2010},
      doi = {10.1109/IVS.2010.5548021},
      abstract = {Lane detection and tracking is still a challenging task. Here, we combine the recently introduced Statistical Hough transform (SHT) with a Particle Filter (PF) and show its application for robust lane tracking. SHT improves the standard Hough transform (HT), which was shown to work well for lane detection. We use the local descriptors of the SHT as measurement for the PF, and show how a new three kernel density based observation model can be constructed from the SHT and used with the PF. The application of the former becomes feasible by the reduced computations achieved with the tracking algorithm. We demonstrate the use of the resulting algorithm for lane detection and tracking by applying it to images freed from the perspective effect, achieved by applying Inverse Perspective Mapping (IPM). The presented results show the robustness of the presented algorithm.}}
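The Hough transform that the SHT builds on can be sketched as a simple accumulator vote over a (theta, rho) grid; the parameterization and binning below are generic textbook choices, not the statistical variant of the paper.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=100.0):
    """Vote edge points into a (theta, rho) accumulator; peaks are lines.

    Each point (x, y) votes for every line rho = x*cos(theta) + y*sin(theta)
    passing through it, one vote per discretized theta.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (bins >= 0) & (bins < n_rho)
        acc[np.arange(n_theta)[ok], bins[ok]] += 1
    return acc, thetas

# Points on the vertical line x = 20 all vote for theta ~ 0, rho ~ 20.
pts = [(20.0, float(y)) for y in range(30)]
acc, thetas = hough_lines(pts)
ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
```

All 30 collinear points land in a single accumulator cell, so the peak recovers the line's (theta, rho) parameters up to bin quantization.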
    Markelic, I. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F. (2009).
    Anticipatory Driving for a Robot-Car Based on Supervised Learning. Lecture Notes in Computer Science: Anticipatory Behavior in Adaptive Learning Systems, 267-282.
    BibTeX:
    @article{markelickulviciustamosiunaite2009,
      author = {Markelic, I. and Kulvicius, T. and Tamosiunaite, M. and Wörgötter, F.},
      title = {Anticipatory Driving for a Robot-Car Based on Supervised Learning},
      pages = {267-282},
      journal = {Lecture Notes in Computer Science: Anticipatory Behavior in Adaptive Learning Systems},
      year = {2009},
      abstract = {Using look ahead information and plan making improves human driving. We therefore propose that autonomously driving systems should also dispose over such abilities. We adapt a machine learning approach, where the system, a car-like robot, is trained by an experienced driver by correlating visual input to human driving actions. The heart of the system is a database where look ahead sensory information is stored together with action sequences issued by the human supervisor. The result is a robot that runs at real-time and issues steering and velocity control in a human-like way. For steering we adapt a two-level approach, where the result of the database is combined with an additional reactive controller for robust behavior. Concerning velocity control this paper makes a novel contribution, which is the ability of the system to react adequately to upcoming curves.}}
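The database at the heart of both driving papers, pairing look-ahead sensory information with the teacher's action sequences, can be caricatured as a nearest-neighbour lookup. `ActionDatabase`, the two-feature state, and the action labels are all hypothetical illustrations, not the papers' actual representation.

```python
import numpy as np

class ActionDatabase:
    """Toy lookup table pairing sensory feature vectors with teacher actions;
    a query returns the action stored for the nearest recorded state."""

    def __init__(self):
        self._states, self._actions = [], []

    def record(self, state, action):
        self._states.append(np.asarray(state, dtype=float))
        self._actions.append(action)

    def query(self, state):
        state = np.asarray(state, dtype=float)
        dists = [np.linalg.norm(state - s) for s in self._states]
        return self._actions[int(np.argmin(dists))]

db = ActionDatabase()
# Hypothetical (curvature, distance-to-curve) features -> steering command.
db.record([0.0, 50.0], "keep_straight")
db.record([0.8, 10.0], "steer_left_and_brake")
db.record([-0.7, 12.0], "steer_right_and_brake")
action = db.query([0.75, 11.0])
```

A real system would generalize between stored states (and, as in the paper, blend the lookup with a reactive controller), but the retrieve-nearest-demonstration idea is the same.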

    © 2011 - 2017 Dept. of Computational Neuroscience • comments to: sreich _at_ gwdg.de • Impressum / Site Info