Dr. Alejandro Agostini

Group(s): Neural Control and Robotics
Email:
alejandro.agostini@phys.uni-goettingen.de
Phone: +49 551/ 39 10761
Room: E.01.106

    Agostini, A. and Celaya, E. (2011).
    A Competitive Strategy for Function Approximation in Q-Learning. Proceedings of the 22nd International Joint Conference on Artificial Intelligence IJCAI11. Barcelona, Spain, 1146-1151.
    BibTeX:
    @conference{agostinicelaya2011,
      author = {Agostini, A. and Celaya, E.},
      title = {A Competitive Strategy for Function Approximation in Q-Learning},
      pages = {1146-1151},
      booktitle = {Proceedings of the 22nd International Joint Conference on Artificial Intelligence IJCAI11. Barcelona, Spain},
      year = {2011},
      url = {http://ijcai.org/papers11/contents.php},
      abstract = {In this work we propose an approach for generalization in continuous domain Reinforcement Learning that, instead of using a single function approximator, tries many different function approximators in parallel, each one defined in a different region of the domain. Associated with each approximator is a relevance function that locally quantifies the quality of its approximation, so that, at each input point, the approximator with highest relevance can be selected. The relevance function is defined using parametric estimations of the variance of the q-values and the density of samples in the input space, which are used to quantify the accuracy and the confidence in the approximation, respectively. These parametric estimations are obtained from a probability density distribution represented as a Gaussian Mixture Model embedded in the input-output space of each approximator. In our experiments, the proposed approach required a lesser number of experiences for learning and produced more stable convergence profiles than when using a single function approximator.}}
    		
    Abstract: In this work we propose an approach for generalization in continuous domain Reinforcement Learning that, instead of using a single function approximator, tries many different function approximators in parallel, each one defined in a different region of the domain. Associated with each approximator is a relevance function that locally quantifies the quality of its approximation, so that, at each input point, the approximator with highest relevance can be selected. The relevance function is defined using parametric estimations of the variance of the q-values and the density of samples in the input space, which are used to quantify the accuracy and the confidence in the approximation, respectively. These parametric estimations are obtained from a probability density distribution represented as a Gaussian Mixture Model embedded in the input-output space of each approximator. In our experiments, the proposed approach required a lesser number of experiences for learning and produced more stable convergence profiles than when using a single function approximator.
    Review:
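The selection step described in the abstract above can be sketched roughly as follows; the class names, the region test, and the exact relevance formula (sample density weighed against q-value variance) are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

class LocalApproximator:
    """One of several competing approximators, each covering a region of the domain."""
    def __init__(self, center, width):
        self.center = np.asarray(center, dtype=float)
        self.width = width          # radius of the region this approximator covers
        self.q_variance = 1.0       # running estimate of the q-value variance (accuracy)
        self.density = 0.0          # running estimate of the sample density (confidence)

    def relevance(self, x):
        # High density and low variance give high relevance; zero outside the region.
        x = np.asarray(x, dtype=float)
        if np.linalg.norm(x - self.center) > self.width:
            return 0.0
        return self.density / (1.0 + self.q_variance)

def select_approximator(approximators, x):
    # At each input point, the approximator with the highest relevance wins.
    return max(approximators, key=lambda a: a.relevance(x))
```

In the paper, both quantities feeding the relevance are derived from a Gaussian Mixture Model over each approximator's input-output space; here they are plain attributes for brevity.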
    Krüger, N. and Piater, J. and Geib, C. and Petrick, R. and M, S. and Wörgötter, F. and Ude, A. and Asfour, T. and Kraft, D. and Omrcen, D. and Agostini, A. and Dillmann, R. (2011).
    Object-Action Complexes: Grounded Abstractions of Sensorimotor Processes. Robotics and Autonomous Systems RAS, 740-757, 59, 10. DOI: 10.1016/j.robot.2011.05.009.
    BibTeX:
    @article{kruegerpiatergeib2011,
      author = {Krüger, N. and Piater, J. and Geib, C. and Petrick, R. and M, S. and Wörgötter, F. and Ude, A. and Asfour, T. and Kraft, D. and Omrcen, D. and Agostini, A. and Dillmann, R.},
      title = {Object-Action Complexes: Grounded Abstractions of Sensorimotor Processes},
      pages = {740-757},
      journal = {Robotics and Autonomous Systems RAS},
      year = {2011},
      volume= {59},
      number = {10},
      url = {http://www.sciencedirect.com/science/article/pii/S0921889011000935},
      doi = {10.1016/j.robot.2011.05.009},
      abstract = {Autonomous cognitive robots must be able to interact with the world and reason about their interactions. On the one hand, physical interactions are inherently continuous, noisy, and require feedback. On the other hand, the knowledge needed for reasoning about high-level objectives and plans is more conveniently expressed as symbolic predictions about state changes. Bridging this gap between control knowledge and abstract reasoning has been a fundamental concern of autonomous robotics. This paper proposes a formalism called an Object-Action Complex as the basis for symbolic representations of sensorimotor experience. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. This paper defines a formalism for describing object action relations and their use for autonomous cognitive robots, and describes how OACs can be learned. We also demonstrate how OACs interact across different levels of abstraction in the context of two tasks: the grounding of objects and grasping affordances, and the execution of plans using grounded representations.}}
    		
    Abstract: Autonomous cognitive robots must be able to interact with the world and reason about their interactions. On the one hand, physical interactions are inherently continuous, noisy, and require feedback. On the other hand, the knowledge needed for reasoning about high-level objectives and plans is more conveniently expressed as symbolic predictions about state changes. Bridging this gap between control knowledge and abstract reasoning has been a fundamental concern of autonomous robotics. This paper proposes a formalism called an Object-Action Complex as the basis for symbolic representations of sensorimotor experience. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. This paper defines a formalism for describing object action relations and their use for autonomous cognitive robots, and describes how OACs can be learned. We also demonstrate how OACs interact across different levels of abstraction in the context of two tasks: the grounding of objects and grasping affordances, and the execution of plans using grounded representations.
    Review:
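As a rough illustration of the OAC idea from the abstract above, an action coupled with a prediction of its effect and a success statistic grounded in experience might look as follows; the field names and the update rule are assumptions for illustration, not the paper's formalism.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A symbolic state as a set of named boolean attributes (illustrative choice).
State = Dict[str, bool]

@dataclass
class OAC:
    action: str
    predict: Callable[[State], State]   # expected state change when the action runs
    successes: int = 0
    trials: int = 0

    def reliability(self) -> float:
        # Empirical success rate of the prediction; 0 before any experience.
        return self.successes / self.trials if self.trials else 0.0

    def update(self, state_before: State, state_after: State) -> None:
        # Ground the OAC in experience: compare predicted and observed outcomes.
        self.trials += 1
        if self.predict(state_before) == state_after:
            self.successes += 1
```

The reliability statistic is what lets higher levels of abstraction decide whether a grounded OAC can be trusted when building plans.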
    Agostini, A. and Torras, C. and Wörgötter, F. (2011).
    Integrating Task Planning and Interactive Learning for Robots to Work in Human Environments. 22nd International Joint Conference on Artificial Intelligence IJCAI11. Barcelona, Spain, 2386-2391.
    BibTeX:
    @inproceedings{agostinitorraswoergoetter2011,
      author = {Agostini, A. and Torras, C. and Wörgötter, F.},
      title = {Integrating Task Planning and Interactive Learning for Robots to Work in Human Environments},
      pages = {2386-2391},
      booktitle = {22nd International Joint Conference on Artificial Intelligence IJCAI11. Barcelona, Spain},
      year = {2011},
      url = {http://www.iri.upc.edu/publications/show/1247},
      abstract = {Human environments are challenging for robots, which need to be trainable by lay people and learn new behaviours rapidly without disrupting much the ongoing activity. A system that integrates AI techniques for planning and learning is here proposed to satisfy these strong demands. The approach rapidly learns planning operators from few action experiences using a competitive strategy where many alternatives of cause-effect explanations are evaluated in parallel, and the most successful ones are used to generate the operators. The success of a cause-effect explanation is evaluated by a probabilistic estimate that compensates the lack of experience, producing more confident estimations and speeding up the learning in relation to other known estimates. The system operates without task interruption by integrating in the planning-learning loop a human teacher that supports the planner in making decisions. All the mechanisms are integrated and synchronized in the robot using a general decision-making framework. The feasibility and scalability of the architecture are evaluated in two different robot platforms: a Stäubli arm, and the humanoid ARMAR III.}}
    		
    Abstract: Human environments are challenging for robots, which need to be trainable by lay people and learn new behaviours rapidly without disrupting much the ongoing activity. A system that integrates AI techniques for planning and learning is here proposed to satisfy these strong demands. The approach rapidly learns planning operators from few action experiences using a competitive strategy where many alternatives of cause-effect explanations are evaluated in parallel, and the most successful ones are used to generate the operators. The success of a cause-effect explanation is evaluated by a probabilistic estimate that compensates the lack of experience, producing more confident estimations and speeding up the learning in relation to other known estimates. The system operates without task interruption by integrating in the planning-learning loop a human teacher that supports the planner in making decisions. All the mechanisms are integrated and synchronized in the robot using a general decision-making framework. The feasibility and scalability of the architecture are evaluated in two different robot platforms: a Stäubli arm, and the humanoid ARMAR III.
    Review:
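A standard estimator with the behaviour the abstract describes, compensating for the lack of experience when scoring cause-effect alternatives from few samples, is the m-estimate; it is shown here as an illustrative stand-in, since the paper's own estimate may differ.

```python
def m_estimate(successes, trials, prior=0.5, m=2.0):
    """Success probability shrunk towards a prior when evidence is scarce.

    Pulls the raw rate successes/trials towards `prior` with strength `m`
    (equivalent sample size); the pull fades as `trials` grows, so a
    cause-effect explanation seen only once or twice is not over-trusted.
    """
    return (successes + m * prior) / (trials + m)
```

With no experience at all the estimate equals the prior, and with many trials it converges to the empirical success rate, which is exactly the compensation effect needed when operators must be learned from few action experiences.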
    Agostini, A. and Wörgötter, F. and Torras, C. (2010).
    Quick Learning of Cause-Effects Relevant for Robot Action. Institut de Robotica i Informatica Industrial, CSIC-UPC, 1-18, IRI-TR-10-01.
    BibTeX:
    @techreport{agostiniwoergoettertorras2010,
      author = {Agostini, A. and Wörgötter, F. and Torras, C.},
      title = {Quick Learning of Cause-Effects Relevant for Robot Action},
      pages = {1--18},
      year = {2010},
      number = {IRI-TR-10-01},
      institution = {Institut de Robotica i Informatica Industrial, CSIC-UPC},
      url = {http://www.iri.upc.edu/download/scidoc/1222},
      abstract = {In this work we propose a new paradigm for the rapid learning of cause-effect relations relevant for task execution. Learning occurs automatically from action experiences by means of a novel constructive learning approach designed for applications where there is no previous knowledge of the task or world model, examples are provided on-line during run time, and the number of examples is small compared to the number of incoming experiences. These limitations pose obstacles for the existing constructive learning methods, where on-line learning is either not considered, a significant amount of prior knowledge has to be provided, or a large number of experiences or training streams are required. The system is implemented and evaluated in a humanoid robot platform using a decision-making framework that integrates a planner, the proposed learning mechanism, and a human teacher that supports the planner in the action selection. Results demonstrate the feasibility of the system for decision making in robotic applications.}}
    		
    Abstract: In this work we propose a new paradigm for the rapid learning of cause-effect relations relevant for task execution. Learning occurs automatically from action experiences by means of a novel constructive learning approach designed for applications where there is no previous knowledge of the task or world model, examples are provided on-line during run time, and the number of examples is small compared to the number of incoming experiences. These limitations pose obstacles for the existing constructive learning methods, where on-line learning is either not considered, a significant amount of prior knowledge has to be provided, or a large number of experiences or training streams are required. The system is implemented and evaluated in a humanoid robot platform using a decision-making framework that integrates a planner, the proposed learning mechanism, and a human teacher that supports the planner in the action selection. Results demonstrate the feasibility of the system for decision making in robotic applications.
    Review:
    Agostini, A. and Celaya, E. (2010).
    Reinforcement Learning for Robot Control Using Probability Density Estimations. Proceedings of the 7th International Conference on Informatics in Control, Automation and Robotics (INSTICC), 160-168.
    BibTeX:
    @conference{agostinicelaya2010,
      author = {Agostini, A. and Celaya, E.},
      title = {Reinforcement Learning for Robot Control Using Probability Density Estimations},
      pages = {160-168},
      booktitle = {Proceedings of the 7th International Conference on Informatics in Control, Automation and Robotics (INSTICC)},
      year = {2010},
      location = {Madeira, Portugal},
      url = {http://www.iri.upc.edu/download/scidoc/1135},
      abstract = {The successful application of Reinforcement Learning (RL) techniques to robot control is limited by the fact that, in most robotic tasks, the state and action spaces are continuous, multidimensional, and in essence, too large for conventional RL algorithms to work. The well known curse of dimensionality makes infeasible using a tabular representation of the value function, which is the classical approach that provides convergence guarantees. When a function approximation technique is used to generalize among similar states, the convergence of the algorithm is compromised, since updates unavoidably affect an extended region of the domain, that is, some situations are modified in a way that has not been really experienced, and the update may degrade the approximation. We propose a RL algorithm that uses a probability density estimation in the joint space of states, actions and Q-values as a means of function approximation. This allows us to devise an updating approach that, taking into account the local sampling density, avoids an excessive modification of the approximation far from the observed sample.}}
    		
    Abstract: The successful application of Reinforcement Learning (RL) techniques to robot control is limited by the fact that, in most robotic tasks, the state and action spaces are continuous, multidimensional, and in essence, too large for conventional RL algorithms to work. The well known curse of dimensionality makes infeasible using a tabular representation of the value function, which is the classical approach that provides convergence guarantees. When a function approximation technique is used to generalize among similar states, the convergence of the algorithm is compromised, since updates unavoidably affect an extended region of the domain, that is, some situations are modified in a way that has not been really experienced, and the update may degrade the approximation. We propose a RL algorithm that uses a probability density estimation in the joint space of states, actions and Q-values as a means of function approximation. This allows us to devise an updating approach that, taking into account the local sampling density, avoids an excessive modification of the approximation far from the observed sample.
    Review:
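Reading a Q-value off a density model in the joint space of states, actions and Q-values, as the abstract describes, can be sketched as the conditional mean of a Gaussian mixture over z = (x, q), with x the state-action vector. This is a sketch only: the function name is an assumption and the mixture parameters are supplied by hand rather than fitted from experience.

```python
import numpy as np

def gmm_conditional_q(x, weights, means, covs):
    """E[q | x] under a Gaussian mixture fitted over z = (x, q)."""
    x = np.asarray(x, dtype=float)
    d = x.size
    num = den = 0.0
    for w, mu, cov in zip(weights, means, covs):
        mu = np.asarray(mu, dtype=float)
        cov = np.asarray(cov, dtype=float)
        mu_x, mu_q = mu[:d], mu[d]
        cov_xx, cov_qx = cov[:d, :d], cov[d, :d]
        diff = x - mu_x
        # Responsibility of this component for the input point x.
        r = w * np.exp(-0.5 * diff @ np.linalg.solve(cov_xx, diff)) / \
            np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov_xx))
        # This component's conditional mean of q given x.
        q_k = mu_q + cov_qx @ np.linalg.solve(cov_xx, diff)
        num += r * q_k
        den += r
    return num / den
```

The component responsibilities also give the local sampling density mentioned in the abstract, which is what lets an update be confined to well-sampled regions instead of degrading the approximation far from the observed sample.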
    Agostini, A. and Celaya, E. (2010).
    Reinforcement Learning with a Gaussian Mixture Model. International Joint Conference on Neural Networks (IJCNN), 3485-3492. DOI: 10.1109/IJCNN.2010.5596306.
    BibTeX:
    @conference{agostinicelaya2010a,
      author = {Agostini, A. and Celaya, E.},
      title = {Reinforcement Learning with a Gaussian Mixture Model},
      pages = {3485-3492},
      booktitle = {International Joint Conference on Neural Networks (IJCNN)},
      year = {2010},
      month = {7},
      doi = {10.1109/IJCNN.2010.5596306},
      abstract = {Recent approaches to Reinforcement Learning (RL) with function approximation include Neural Fitted Q Iteration and the use of Gaussian Processes. They belong to the class of fitted value iteration algorithms, which use a set of support points to fit the value-function in a batch iterative process. These techniques make efficient use of a reduced number of samples by reusing them as needed, and are appropriate for applications where the cost of experiencing a new sample is higher than storing and reusing it, but this is at the expense of increasing the computational effort, since these algorithms are not incremental. On the other hand, non-parametric models for function approximation, like Gaussian Processes, are preferred against parametric ones, due to their greater flexibility. A further advantage of using Gaussian Processes for function approximation is that they allow to quantify the uncertainty of the estimation at each point. In this paper, we propose a new approach for RL in continuous domains based on Probability Density Estimations. Our method combines the best features of the previous methods: it is non-parametric and provides an estimation of the variance of the approximated function at any point of the domain. In addition, our method is simple, incremental, and computationally efficient. All these features make this approach more appealing than Gaussian Processes and fitted value iteration algorithms in general.}}
    		
    Abstract: Recent approaches to Reinforcement Learning (RL) with function approximation include Neural Fitted Q Iteration and the use of Gaussian Processes. They belong to the class of fitted value iteration algorithms, which use a set of support points to fit the value-function in a batch iterative process. These techniques make efficient use of a reduced number of samples by reusing them as needed, and are appropriate for applications where the cost of experiencing a new sample is higher than storing and reusing it, but this is at the expense of increasing the computational effort, since these algorithms are not incremental. On the other hand, non-parametric models for function approximation, like Gaussian Processes, are preferred against parametric ones, due to their greater flexibility. A further advantage of using Gaussian Processes for function approximation is that they allow to quantify the uncertainty of the estimation at each point. In this paper, we propose a new approach for RL in continuous domains based on Probability Density Estimations. Our method combines the best features of the previous methods: it is non-parametric and provides an estimation of the variance of the approximated function at any point of the domain. In addition, our method is simple, incremental, and computationally efficient. All these features make this approach more appealing than Gaussian Processes and fitted value iteration algorithms in general.
    Review:
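The incremental flavour stressed in the abstract, updating the estimate and its uncertainty sample by sample instead of refitting in batch, can be illustrated in miniature with Welford's online mean/variance update; the paper's per-component Gaussian Mixture Model update generalizes this idea, so the class below is an analogy rather than the authors' algorithm.

```python
class OnlineGaussian:
    """Welford's online estimate of mean and variance of a stream of q-values."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the running mean

    def update(self, q):
        # One O(1) update per incoming sample; no batch refit needed.
        self.n += 1
        delta = q - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (q - self.mean)

    def variance(self):
        # Unbiased variance; infinite before two samples, i.e. total uncertainty.
        return self.m2 / (self.n - 1) if self.n > 1 else float('inf')
```

Like a Gaussian Process, this yields an uncertainty alongside the estimate at every step, but the update cost does not grow with the number of samples seen so far.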
    Agostini, A. and Celaya, E. (2009).
    Exploiting Domain Symmetries in Reinforcement Learning with Continuous State and Action Spaces. Proc. 8th International Conference on Machine Learning and Applications ICMLA09. Miami, USA, 331-336.
    BibTeX:
    @conference{agostinicelaya2009,
      author = {Agostini, A. and Celaya, E.},
      title = {Exploiting Domain Symmetries in Reinforcement Learning with Continuous State and Action Spaces},
      pages = {331-336},
      booktitle = {Proc. 8th International Conference on Machine Learning and Applications ICMLA09. Miami, USA},
      year = {2009}}
    		
    Abstract:
    Review:
    Wörgötter, F. and Agostini, A. and Krüger, N. and Shylo, N. and Porr, B. (2009).
    Cognitive agents - a procedural perspective relying on the predictability of Object-Action-Complexes OACs. Robotics and Autonomous Systems, 420-432, 57, 4.
    BibTeX:
    @article{woergoetteragostinikrueger2009,
      author = {Wörgötter, F. and Agostini, A. and Krüger, N. and Shylo, N. and Porr, B.},
      title = {Cognitive agents - a procedural perspective relying on the predictability of Object-Action-Complexes OACs},
      pages = {420-432},
      journal = {Robotics and Autonomous Systems},
      year = {2009},
      volume= {57},
      number = {4},
      abstract = {Embodied cognition suggests that complex cognitive traits can only arise when agents have a body situated in the world. The aspects of embodiment and situatedness are being discussed here from the perspective of linear systems theory. This perspective treats bodies as dynamic, temporally variable entities, which can be extended or curtailed at their boundaries. We show how acting agents can, for example, actively extend their body for some time by incorporating predictably behaving parts of the world and how this affects the transfer functions. We suggest that primates have mastered this to a large degree, increasingly splitting their world into predictable and unpredictable entities. We argue that temporary body extension may have been instrumental in paving the way for the development of higher cognitive complexity as it is reliably widening the cause-effect horizon about the actions of the agent. A first robot experiment is sketched to support these ideas. We continue discussing the concept of Object-Action Complexes OACs introduced by the European PACO-PLUS consortium to emphasize the notion that, for a cognitive agent, objects and actions are inseparably intertwined. In another robot experiment we devise a semi-supervised procedure using the OAC-concept to demonstrate how an agent can acquire knowledge about its world. Here the notion of predicting changes fundamentally underlies the implemented procedure and we try to show how this concept can be used to improve the robot's inner model and behaviour. Hence, in this article we have tried to show how predictability can be used to augment the agent's body and to acquire knowledge about the external world, possibly leading to more advanced cognitive traits.}}
    		
    Abstract: Embodied cognition suggests that complex cognitive traits can only arise when agents have a body situated in the world. The aspects of embodiment and situatedness are being discussed here from the perspective of linear systems theory. This perspective treats bodies as dynamic, temporally variable entities, which can be extended or curtailed at their boundaries. We show how acting agents can, for example, actively extend their body for some time by incorporating predictably behaving parts of the world and how this affects the transfer functions. We suggest that primates have mastered this to a large degree, increasingly splitting their world into predictable and unpredictable entities. We argue that temporary body extension may have been instrumental in paving the way for the development of higher cognitive complexity as it is reliably widening the cause-effect horizon about the actions of the agent. A first robot experiment is sketched to support these ideas. We continue discussing the concept of Object-Action Complexes OACs introduced by the European PACO-PLUS consortium to emphasize the notion that, for a cognitive agent, objects and actions are inseparably intertwined. In another robot experiment we devise a semi-supervised procedure using the OAC-concept to demonstrate how an agent can acquire knowledge about its world. Here the notion of predicting changes fundamentally underlies the implemented procedure and we try to show how this concept can be used to improve the robot's inner model and behaviour. Hence, in this article we have tried to show how predictability can be used to augment the agent's body and to acquire knowledge about the external world, possibly leading to more advanced cognitive traits.
    Review:
    Agostini, A. and Celaya, E. and Torras, C. and Wörgötter, F. (2008).
    Action Rule Induction from Cause-Effect Pairs Learned through Robot-Teacher Interaction. International Conference on Cognitive Systems COGSYS.
    BibTeX:
    @inproceedings{agostinicelayatorras2008,
      author = {Agostini, A. and Celaya, E. and Torras, C. and Wörgötter, F.},
      title = {Action Rule Induction from Cause-Effect Pairs Learned through Robot-Teacher Interaction},
      booktitle = {International Conference on Cognitive Systems COGSYS},
      year = {2008},
      abstract = {In this work we propose a decision-making system that efficiently learns behaviors in the form of rules using natural human instructions about cause-effect relations in currently observed situations, avoiding complicated instructions and explanations of long-run action sequences and complete world dynamics. The learned rules are represented in a way suitable to both reactive and deliberative approaches, which are thus smoothly integrated. Simple and repetitive tasks are resolved reactively, while complex tasks would be faced in a more deliberative manner using a planner module. Human interaction is only required if the system fails to obtain the expected results when applying a rule, or fails to resolve the task with the knowledge acquired so far.}}
    		
    Abstract: In this work we propose a decision-making system that efficiently learns behaviors in the form of rules using natural human instructions about cause-effect relations in currently observed situations, avoiding complicated instructions and explanations of long-run action sequences and complete world dynamics. The learned rules are represented in a way suitable to both reactive and deliberative approaches, which are thus smoothly integrated. Simple and repetitive tasks are resolved reactively, while complex tasks would be faced in a more deliberative manner using a planner module. Human interaction is only required if the system fails to obtain the expected results when applying a rule, or fails to resolve the task with the knowledge acquired so far.
    Review:
    Agostini, A. and Celaya, E. (2005).
    Feasible Control of Complex Systems using Automatic Learning. Proc. of the 2nd International Conference on Informatics in Control, Automation and Robotics ICINCO05. Barcelona, Spain, 284-287.
    BibTeX:
    @conference{agostinicelaya2005,
      author = {Agostini, A. and Celaya, E.},
      title = {Feasible Control of Complex Systems using Automatic Learning},
      pages = {284-287},
      booktitle = {Proc. of the 2nd International Conference on Informatics in Control, Automation and Robotics ICINCO05. Barcelona, Spain},
      year = {2005}}
    		
    Abstract:
    Review:
    Agostini, A. and Celaya, E. (2004).
    Learning in Complex Environments with Feature-Based Categorization. Proc. of the 8th Conference on Intelligent Autonomous Systems IAS8. Amsterdam, Netherlands, 446-455.
    BibTeX:
    @conference{agostinicelaya2004,
      author = {Agostini, A. and Celaya, E.},
      title = {Learning in Complex Environments with Feature-Based Categorization},
      pages = {446-455},
      booktitle = {Proc. of the 8th Conference on Intelligent Autonomous Systems IAS8. Amsterdam, Netherlands},
      year = {2004}}
    		
    Abstract:
    Review:
    Agostini, A. and Celaya, E. (2004).
    Learning Model-Free Motor Control. Proc. of the 16th European Conference on Artificial intelligence ECAI04. Valencia, Spain, 947-948.
    BibTeX:
    @conference{agostinicelaya2004a,
      author = {Agostini, A. and Celaya, E.},
      title = {Learning Model-Free Motor Control},
      pages = {947-948},
      booktitle = {Proc. of the 16th European Conference on Artificial intelligence ECAI04. Valencia, Spain},
      year = {2004}}
    		
    Abstract:
    Review:
    Agostini, A. and Celaya, E. (2004).
    Trajectory Tracking Control of a Rotational Joint using Feature-Based Categorization Learning. Proc. International Conference on Intelligent Robots and Systems IROS04. Sendai, Japan, 3489-3494.
    BibTeX:
    @conference{agostinicelaya2004b,
      author = {Agostini, A. and Celaya, E.},
      title = {Trajectory Tracking Control of a Rotational Joint using Feature-Based Categorization Learning},
      pages = {3489-3494},
      booktitle = {Proc. International Conference on Intelligent Robots and Systems IROS04. Sendai, Japan},
      year = {2004}}
    		
    Abstract:
    Review:
    Rollon, E. and Isern, D. and Agostini, A. and Cortes, U. (2003).
    Towards the distributed management of emergencies: forest fires case study. Proceedings of the 1st IJCAI Workshop on Environmental Decision Support Systems. Acapulco, Mexico, 77-82.
    BibTeX:
    @conference{rollonisernagostini2003,
      author = {Rollon, E. and Isern, D. and Agostini, A. and Cortes, U.},
      title = {Towards the distributed management of emergencies: forest fires case study},
      pages = {77--82},
      booktitle = {Proceedings of the 1st IJCAI Workshop on Environmental Decision Support Systems. Acapulco, Mexico},
      year = {2003}}
    		
    Abstract:
    Review:
    Agostini, A. and Torras, C. and Wörgötter, F. (2014).
    Learning Weakly-Correlated Cause-Effects for Gardening with a Cognitive System. Engineering Applications of Artificial Intelligence, 178--194, 36. DOI: 10.1016/j.engappai.2014.07.017.
    BibTeX:
    @article{agostinitorraswoergoetter2014,
      author = {Agostini, A. and Torras, C. and Wörgötter, F.},
      title = {Learning Weakly-Correlated Cause-Effects for Gardening with a Cognitive System},
      pages = {178--194},
      journal = {Engineering Applications of Artificial Intelligence},
      year = {2014},
      volume= {36},
      doi = {10.1016/j.engappai.2014.07.017},
      abstract = {We propose a cognitive system that combines artificial intelligence techniques for planning and learning to execute tasks involving delayed and variable correlations between the actions executed and their expected effects. The system is applied to the task of controlling the growth of plants, where the evolution of the plant attributes strongly depends on different events taking place in the temporally distant past history of the plant. The main problem to tackle is how to efficiently detect these past events. This is very challenging since the inclusion of time could make the dimensionality of the search space extremely large and the collected training instances may only provide very limited information about the relevant combinations of events. To address this problem we propose a learning method that progressively identifies those events that are more likely to produce a sequence of changes under a plant treatment. Since the number of experiences is very limited compared to the size of the event space, we use a probabilistic estimate that takes into account the lack of experience to prevent biased estimations. Planning operators are generated from most accurately predicted sequences of changes. Planning and learning are integrated in a decision-making framework that operates without task interruptions by allowing a human gardener to instruct the treatments when the knowledge acquired so far is not enough to make a decision.}}
    		
    Abstract: We propose a cognitive system that combines artificial intelligence techniques for planning and learning to execute tasks involving delayed and variable correlations between the actions executed and their expected effects. The system is applied to the task of controlling the growth of plants, where the evolution of the plant attributes strongly depends on different events taking place in the temporally distant past history of the plant. The main problem to tackle is how to efficiently detect these past events. This is very challenging since the inclusion of time could make the dimensionality of the search space extremely large and the collected training instances may only provide very limited information about the relevant combinations of events. To address this problem we propose a learning method that progressively identifies those events that are more likely to produce a sequence of changes under a plant treatment. Since the number of experiences is very limited compared to the size of the event space, we use a probabilistic estimate that takes into account the lack of experience to prevent biased estimations. Planning operators are generated from most accurately predicted sequences of changes. Planning and learning are integrated in a decision-making framework that operates without task interruptions by allowing a human gardener to instruct the treatments when the knowledge acquired so far is not enough to make a decision.
    Review:
    Agostini, A. and Torras, C. and Wörgötter, F. (2015).
    Efficient interactive decision-making framework for robotic applications. Artificial Intelligence. DOI: 10.1016/j.artint.2015.04.004.
    BibTeX:
    @article{agostinitorraswoergoetter2015,
      author = {Agostini, A. and Torras, C. and Wörgötter, F.},
      title = {Efficient interactive decision-making framework for robotic applications},
      journal = {Artificial Intelligence},
      year = {2015},
      url = {http://www.sciencedirect.com/science/article/pii/S0004370215000661},
      doi = {10.1016/j.artint.2015.04.004},
      abstract = {The inclusion of robots in our society is imminent, such as service robots. Robots are now capable of reliably manipulating objects in our daily lives but only when combined with artificial intelligence (AI) techniques for planning and decision-making, which allow a machine to determine how a task can be completed successfully. To perform decision making, AI planning methods use a set of planning operators to code the state changes in the environment produced by a robotic action. Given a specific goal, the planner then searches for the best sequence of planning operators, i.e., the best plan that leads through the state space to satisfy the goal. In principle, planning operators can be hand-coded, but this is impractical for applications that involve many possible state transitions. An alternative is to learn them automatically from experience, which is most efficient when there is a human teacher. In this study, we propose a simple and efficient decision-making framework for this purpose. The robot executes its plan in a step-wise manner and any planning impasse produced by missing operators is resolved online by asking a human teacher for the next action to execute. Based on the observed state transitions, this approach rapidly generates the missing operators by evaluating the relevance of several cause-effect alternatives in parallel using a probability estimate, which compensates for the high uncertainty that is inherent when learning from a small number of samples. We evaluated the validity of our approach in simulated and real environments, where it was benchmarked against previous methods. Humans learn in the same incremental manner, so we consider that our approach may be a better alternative to existing learning paradigms, which require offline learning, a significant amount of previous knowledge, or a large number of samples.}}
    		
    Abstract: The inclusion of robots in our society is imminent, such as service robots. Robots are now capable of reliably manipulating objects in our daily lives but only when combined with artificial intelligence (AI) techniques for planning and decision-making, which allow a machine to determine how a task can be completed successfully. To perform decision making, AI planning methods use a set of planning operators to code the state changes in the environment produced by a robotic action. Given a specific goal, the planner then searches for the best sequence of planning operators, i.e., the best plan that leads through the state space to satisfy the goal. In principle, planning operators can be hand-coded, but this is impractical for applications that involve many possible state transitions. An alternative is to learn them automatically from experience, which is most efficient when there is a human teacher. In this study, we propose a simple and efficient decision-making framework for this purpose. The robot executes its plan in a step-wise manner and any planning impasse produced by missing operators is resolved online by asking a human teacher for the next action to execute. Based on the observed state transitions, this approach rapidly generates the missing operators by evaluating the relevance of several cause-effect alternatives in parallel using a probability estimate, which compensates for the high uncertainty that is inherent when learning from a small number of samples. We evaluated the validity of our approach in simulated and real environments, where it was benchmarked against previous methods. Humans learn in the same incremental manner, so we consider that our approach may be a better alternative to existing learning paradigms, which require offline learning, a significant amount of previous knowledge, or a large number of samples.
    Review:
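The cause-effect operator generation described in the abstract above can be illustrated with a minimal sketch. All names here (`learn_operator`, the state dictionaries) are hypothetical illustrations under the assumption that states are attribute-value maps, not the paper's actual API:

```python
# Illustrative sketch (not the paper's implementation): derive a planning
# operator from one teacher-demonstrated state transition by taking the
# changed attributes as preconditions (old values) and effects (new values).

def learn_operator(action, state_before, state_after):
    changed = [k for k in state_before if state_before[k] != state_after.get(k)]
    pre = {k: state_before[k] for k in changed}   # old values of changed attributes
    eff = {k: state_after[k] for k in changed}    # new values of changed attributes
    return {"action": action, "pre": pre, "eff": eff}

# Example: the teacher demonstrates "open-gripper" at a planning impasse.
op = learn_operator("open-gripper",
                    {"gripper": "closed", "object": "on-table"},
                    {"gripper": "open",   "object": "on-table"})
# op["pre"] == {"gripper": "closed"}; op["eff"] == {"gripper": "open"}
```

The paper additionally scores several cause-effect alternatives in parallel with a probability estimate; this sketch only shows the extraction of a single candidate from one observed transition.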
    Agostini, A. and Celaya, E. (2017).
    Online Reinforcement Learning using a Probability Density Estimation. Neural Computation, 220--246, 29, 1. DOI: 10.1162/NECO_a_00906.
    BibTeX:
    @article{agostinicelaya2017,
      author = {Agostini, A. and Celaya, E.},
      title = {Online Reinforcement Learning using a Probability Density Estimation},
      pages = {220--246},
      journal = {Neural Computation},
      year = {2017},
      volume= {29},
      number = {1},
      publisher = {MITP},
      doi = {10.1162/NECO_a_00906},
      abstract = {Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and non-stationarity. In this kind of tasks, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The non-stationarity comes from the recursive nature of the estimations typical of temporal difference methods. This non-stationarity has a local profile, not only varying along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a Gaussian mixture model. To deal with the non-stationarity problem we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it to depend on the local density of samples, which we use to estimate the non-stationarity of the function at any given input point. On the other hand, to address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting only depending on time, thus avoiding undesired distortions of the approximation in less sampled regions.}}
    		
    Abstract: Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and non-stationarity. In this kind of tasks, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The non-stationarity comes from the recursive nature of the estimations typical of temporal difference methods. This non-stationarity has a local profile, not only varying along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a Gaussian mixture model. To deal with the non-stationarity problem we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it to depend on the local density of samples, which we use to estimate the non-stationarity of the function at any given input point. On the other hand, to address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting only depending on time, thus avoiding undesired distortions of the approximation in less sampled regions.
    Review:
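The idea of a density-dependent forgetting factor from the abstract above can be sketched in a few lines. The mapping below (`effective_forgetting` and its linear interpolation) is a hypothetical illustration of the principle, not the paper's actual formula:

```python
# Hypothetical sketch of a density-modulated forgetting factor: densely
# sampled regions (where TD targets are most non-stationary) forget faster,
# while sparsely sampled regions keep their old estimates.

def effective_forgetting(base_lambda, density, density_ref=1.0):
    """Interpolate between no forgetting (1.0, zero density) and the
    base forgetting factor (base_lambda, density >= density_ref)."""
    ratio = min(density / density_ref, 1.0)
    return 1.0 - (1.0 - base_lambda) * ratio
```

With `base_lambda = 0.9`, a point with no past samples keeps a factor of 1.0 (nothing is forgotten), while a heavily sampled point is discounted at the full rate of 0.9.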
    Celaya, E. and Agostini, A. (2015).
    Online EM with Weight-Based Forgetting. Neural Computation, 1142--1157, 27, 5. DOI: 10.1162/NECO_a_00723.
    BibTeX:
    @article{celayaagostini2015,
      author = {Celaya, E. and Agostini, A.},
      title = {Online EM with Weight-Based Forgetting},
      pages = {1142--1157},
      journal = {Neural Computation},
      year = {2015},
      volume= {27},
      number = {5},
      publisher = {MIT Press},
      url = {http://www.mitpressjournals.org/doi/10.1162/NECO_a_00723},
      doi = {10.1162/NECO_a_00723},
      abstract = {In the online version of the EM algorithm introduced by Sato and Ishii (2000), a time-dependent discount factor is introduced for forgetting the effect of the old estimated values obtained with an earlier, inaccurate estimator. In their approach, forgetting is uniformly applied to the estimators of each mixture component depending exclusively on time, irrespective of the weight attributed to each unit for the observed sample. This causes an excessive forgetting in the less frequently sampled regions. To address this problem, we propose a modification of the algorithm that involves a weight-dependent forgetting, different for each mixture component, in which old observations are forgotten according to the actual weight of the new samples used to replace older values. A comparison of the time-dependent versus the weight-dependent approach shows that the latter improves the accuracy of the approximation and exhibits much greater stability.}}
    		
    Abstract: In the online version of the EM algorithm introduced by Sato and Ishii (2000), a time-dependent discount factor is introduced for forgetting the effect of the old estimated values obtained with an earlier, inaccurate estimator. In their approach, forgetting is uniformly applied to the estimators of each mixture component depending exclusively on time, irrespective of the weight attributed to each unit for the observed sample. This causes an excessive forgetting in the less frequently sampled regions. To address this problem, we propose a modification of the algorithm that involves a weight-dependent forgetting, different for each mixture component, in which old observations are forgotten according to the actual weight of the new samples used to replace older values. A comparison of the time-dependent versus the weight-dependent approach shows that the latter improves the accuracy of the approximation and exhibits much greater stability.
    Review:
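Weight-dependent forgetting, as described in the abstract above, can be sketched for a one-dimensional Gaussian mixture. This is a minimal illustration under simplifying assumptions (the `lam ** w` per-component discount is one plausible parameterization of "forget in proportion to the new sample's weight", not necessarily the paper's exact update):

```python
import numpy as np

# Online EM sketch for a 1-D GMM in which each component's sufficient
# statistics (W, S1, S2) are discounted by lam ** w, where w is the
# responsibility of the new sample for that component. Components that
# receive no new evidence (w ~ 0) are left essentially untouched.

class OnlineGMM:
    def __init__(self, means, variances, priors, lam=0.95):
        self.mu = np.array(means, dtype=float)
        self.var = np.array(variances, dtype=float)
        self.pi = np.array(priors, dtype=float)
        self.lam = lam
        k = len(self.mu)
        self.W = np.ones(k)                      # accumulated responsibility mass
        self.S1 = self.mu * self.W               # accumulated weighted sums of x
        self.S2 = (self.var + self.mu**2) * self.W  # accumulated weighted sums of x^2

    def responsibilities(self, x):
        p = self.pi * np.exp(-0.5 * (x - self.mu)**2 / self.var) \
            / np.sqrt(2 * np.pi * self.var)
        return p / p.sum()

    def update(self, x):
        w = self.responsibilities(x)
        decay = self.lam ** w          # weight-based forgetting, per component
        self.W = decay * self.W + w
        self.S1 = decay * self.S1 + w * x
        self.S2 = decay * self.S2 + w * x**2
        self.mu = self.S1 / self.W
        self.var = np.maximum(self.S2 / self.W - self.mu**2, 1e-6)
        self.pi = self.W / self.W.sum()
```

Feeding samples near one mean pulls only that component toward the data, while the other component's statistics decay by `lam ** 0 = 1`, i.e., not at all; a purely time-dependent discount would instead erode it on every step.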
    Mustafa, W. and Waechter, M. and Szedmak, S. and Agostini, A. (2016).
    Affordance Estimation For Vision-Based Object Replacement on a Humanoid Robot. Proceedings of ISR 2016: 47th International Symposium on Robotics, 164-172.
    BibTeX:
    @inproceedings{mustafawaechterszedmak2016,
      author = {Mustafa, W. and Waechter, M. and Szedmak, S. and Agostini, A.},
      title = {Affordance Estimation For Vision-Based Object Replacement on a Humanoid Robot},
      pages = {164-172},
      booktitle = {Proceedings of ISR 2016: 47th International Symposium on Robotics},
      year = {2016},
      month = {June},
      url = {http://ieeexplore.ieee.org/document/7559112/},
      abstract = {In this paper, we address the problem of finding replacements of missing objects, involved in the execution of manipulation tasks. Our approach is based on estimating functional affordances for the unknown objects in order to propose replacements. We use a vision-based affordance estimation system utilizing object-wise global features and a multi-label learning method. This method also associates confidence values to the estimated affordances. We evaluate our approach on kitchen-related manipulation affordances. The evaluation also includes testing different scenarios for training the system using large-scale datasets. The results indicate that the system is able to successfully predict the affordances of novel objects. We also implement our system on a humanoid robot and demonstrate the affordance estimation in a real scene.}}
    		
    Abstract: In this paper, we address the problem of finding replacements of missing objects, involved in the execution of manipulation tasks. Our approach is based on estimating functional affordances for the unknown objects in order to propose replacements. We use a vision-based affordance estimation system utilizing object-wise global features and a multi-label learning method. This method also associates confidence values to the estimated affordances. We evaluate our approach on kitchen-related manipulation affordances. The evaluation also includes testing different scenarios for training the system using large-scale datasets. The results indicate that the system is able to successfully predict the affordances of novel objects. We also implement our system on a humanoid robot and demonstrate the affordance estimation in a real scene.
    Review:
    Quack, B. and Wörgötter, F. and Agostini, A. (2015).
    Simultaneously Learning at Different Levels of Abstraction. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 4600-4607. DOI: 10.1109/IROS.2015.7354032.
    BibTeX:
    @inproceedings{quackwoergoetteragostini2015,
      author = {Quack, B. and Wörgötter, F. and Agostini, A.},
      title = {Simultaneously Learning at Different Levels of Abstraction},
      pages = {4600-4607},
      booktitle = {IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
      year = {2015},
      month = {Sept},
      doi = {10.1109/IROS.2015.7354032},
      abstract = {Robotic applications in human environments are usually implemented using a cognitive architecture that integrates techniques of different levels of abstraction, ranging from artificial intelligence techniques for making decisions at a symbolic level to robotic techniques for grounding symbolic actions. In this work we address the problem of simultaneous learning at different levels of abstractions in such an architecture. This problem is important since human environments are highly variable, and many unexpected situations may arise during the execution of a task. The usual approach under this circumstance is to train each level individually to learn how to deal with the new situations. However, this approach is limited since it implies long task interruptions every time a new situation needs to be learned. We propose an architecture where learning takes place simultaneously at all the levels of abstraction. To achieve this, we devise a method that permits higher levels to guide the learning at the levels below for the correct execution of the task. The architecture is instantiated with a logic-based planner and an online planning operator learner, at the highest level, and with online reinforcement learning units that learn action policies for the grounding of the symbolic actions, at the lowest one. A human teacher is involved in the decision-making loop to facilitate learning. The framework is tested in a physically realistic simulation of the Sokoban game.}}
    		
    Abstract: Robotic applications in human environments are usually implemented using a cognitive architecture that integrates techniques of different levels of abstraction, ranging from artificial intelligence techniques for making decisions at a symbolic level to robotic techniques for grounding symbolic actions. In this work we address the problem of simultaneous learning at different levels of abstractions in such an architecture. This problem is important since human environments are highly variable, and many unexpected situations may arise during the execution of a task. The usual approach under this circumstance is to train each level individually to learn how to deal with the new situations. However, this approach is limited since it implies long task interruptions every time a new situation needs to be learned. We propose an architecture where learning takes place simultaneously at all the levels of abstraction. To achieve this, we devise a method that permits higher levels to guide the learning at the levels below for the correct execution of the task. The architecture is instantiated with a logic-based planner and an online planning operator learner, at the highest level, and with online reinforcement learning units that learn action policies for the grounding of the symbolic actions, at the lowest one. A human teacher is involved in the decision-making loop to facilitate learning. The framework is tested in a physically realistic simulation of the Sokoban game.
    Review:
    Agostini, A. and Aein, M. J. and Szedmak, S. and Aksoy, E. E. and Piater, J. and Wörgötter, F. (2015).
    Using Structural Bootstrapping for Object Substitution in Robotic Executions of Human-like Manipulation Tasks. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 6479-6486. DOI: 10.1109/IROS.2015.7354303.
    BibTeX:
    @inproceedings{agostiniaeinszedmak2015,
      author = {Agostini, A. and Aein, M. J. and Szedmak, S. and Aksoy, E. E. and Piater, J. and Wörgötter, F.},
      title = {Using Structural Bootstrapping for Object Substitution in Robotic Executions of Human-like Manipulation Tasks},
      pages = {6479-6486},
      booktitle = {IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
      year = {2015},
      location = {Hamburg, Germany},
      month = {Sept},
      doi = {10.1109/IROS.2015.7354303},
      abstract = {In this work we address the problem of finding replacements of missing objects that are needed for the execution of human-like manipulation tasks. This is a usual problem that is easily solved by humans provided their natural knowledge to find object substitutions: using a knife as a screwdriver or a book as a cutting board. On the other hand, in robotic applications, objects required in the task should be included in advance in the problem definition. If any of these objects is missing from the scenario, the conventional approach is to manually redefine the problem according to the available objects in the scene. In this work we propose an automatic way of finding object substitutions for the execution of manipulation tasks. The approach uses a logic-based planner to generate a plan from a prototypical problem definition and searches for replacements in the scene when some of the objects involved in the plan are missing. This is done by means of a repository of objects and attributes with roles, which is used to identify the affordances of the unknown objects in the scene. Planning actions are grounded using a novel approach that encodes the semantic structure of manipulation actions. The system was evaluated in a KUKA arm platform for the task of preparing a salad with successful results.}}
    		
    Abstract: In this work we address the problem of finding replacements of missing objects that are needed for the execution of human-like manipulation tasks. This is a usual problem that is easily solved by humans provided their natural knowledge to find object substitutions: using a knife as a screwdriver or a book as a cutting board. On the other hand, in robotic applications, objects required in the task should be included in advance in the problem definition. If any of these objects is missing from the scenario, the conventional approach is to manually redefine the problem according to the available objects in the scene. In this work we propose an automatic way of finding object substitutions for the execution of manipulation tasks. The approach uses a logic-based planner to generate a plan from a prototypical problem definition and searches for replacements in the scene when some of the objects involved in the plan are missing. This is done by means of a repository of objects and attributes with roles, which is used to identify the affordances of the unknown objects in the scene. Planning actions are grounded using a novel approach that encodes the semantic structure of manipulation actions. The system was evaluated in a KUKA arm platform for the task of preparing a salad with successful results.
    Review:
    Agostini, A. and Alenya, G. and Fischbach, A. and Scharr, H. and Wörgötter, F. and Torras, C. (2017).
    A Cognitive Architecture for Automatic Gardening. Computers and Electronics in Agriculture, 69--79, 138. DOI: 10.1016/j.compag.2017.04.015.
    BibTeX:
    @article{agostinialenyafischbach2017,
      author = {Agostini, A. and Alenya, G. and Fischbach, A. and Scharr, H. and Wörgötter, F. and Torras, C.},
      title = {A Cognitive Architecture for Automatic Gardening},
      pages = {69--79},
      journal = {Computers and Electronics in Agriculture},
      year = {2017},
      volume= {138},
      publisher = {Elsevier},
      url = {http://www.sciencedirect.com/science/article/pii/S0168169916304768},
      doi = {10.1016/j.compag.2017.04.015},
      abstract = {In large industrial greenhouses, plants are usually treated following well established protocols for watering, nutrients, and shading/light. While this is practical for the automation of the process, it does not tap the full potential for optimal plant treatment. To more efficiently grow plants, specific treatments according to the plant individual needs should be applied. Experienced human gardeners are very good at treating plants individually. Unfortunately, hiring a crew of gardeners to carry out this task in large greenhouses is not cost effective. In this work we present a cognitive system that integrates artificial intelligence (AI) techniques for decision-making with robotics techniques for sensing and acting to autonomously treat plants using a real-robot platform. Artificial intelligence techniques are used to decide the amount of water and nutrients each plant needs according to the history of the plant. Robotic techniques for sensing measure plant attributes (e.g. leaves) from visual information using 3D model representations. These attributes are used by the AI system to make decisions about the treatment to apply. Acting techniques execute robot movements to supply the plants with the specified amount of water and nutrients.}}
    		
    Abstract: In large industrial greenhouses, plants are usually treated following well established protocols for watering, nutrients, and shading/light. While this is practical for the automation of the process, it does not tap the full potential for optimal plant treatment. To more efficiently grow plants, specific treatments according to the plant individual needs should be applied. Experienced human gardeners are very good at treating plants individually. Unfortunately, hiring a crew of gardeners to carry out this task in large greenhouses is not cost effective. In this work we present a cognitive system that integrates artificial intelligence (AI) techniques for decision-making with robotics techniques for sensing and acting to autonomously treat plants using a real-robot platform. Artificial intelligence techniques are used to decide the amount of water and nutrients each plant needs according to the history of the plant. Robotic techniques for sensing measure plant attributes (e.g. leaves) from visual information using 3D model representations. These attributes are used by the AI system to make decisions about the treatment to apply. Acting techniques execute robot movements to supply the plants with the specified amount of water and nutrients.
    Review:
    Agostini, A. (2002).
    Multiagent System for an Intelligent Domiciliary Monitoring of Patients with Cardiovascular Pathologies (in Spanish). Open Discussion Track Proceedings of the VIII Iberoamerican Conference on Artificial Intelligence (Seville, Spain), 205--210.
    BibTeX:
    @inproceedings{agostini2002,
      author = {Agostini, A.},
      title = {Multiagent System for an Intelligent Domiciliary Monitoring of Patients with Cardiovascular Pathologies (in Spanish)},
      pages = {205--210},
      booktitle = {Open Discussion Track Proceedings of the VIII Iberoamerican Conference on Artificial Intelligence (Seville, Spain)},
      year = {2002},
      abstract = {There is currently a considerable number of people with cardiovascular diseases who suffer complications in settings where medical assistance may arrive too late. At the same time, cardiology has recently incorporated powerful, non-invasive diagnostic and prognostic tools based on the processing of signals obtained from the cardiovascular system using advanced mathematical methods. This work proposes a multiagent system that links these tools with artificial intelligence techniques to perform intelligent monitoring of patients with cardiovascular pathologies, as well as to coordinate the actions and manage the resources of the three fundamental parties that must intervene in case of an emergency: the patient and their family, the attending physician, and the corresponding health institution. Case-based and rule-based reasoning systems are used to decide the course of action according to each patient's situation.}}
    		
    Abstract: There is currently a considerable number of people with cardiovascular diseases who suffer complications in settings where medical assistance may arrive too late. At the same time, cardiology has recently incorporated powerful, non-invasive diagnostic and prognostic tools based on the processing of signals obtained from the cardiovascular system using advanced mathematical methods. This work proposes a multiagent system that links these tools with artificial intelligence techniques to perform intelligent monitoring of patients with cardiovascular pathologies, as well as to coordinate the actions and manage the resources of the three fundamental parties that must intervene in case of an emergency: the patient and their family, the attending physician, and the corresponding health institution. Case-based and rule-based reasoning systems are used to decide the course of action according to each patient's situation.
    Review:
    Agostini, A. and Gamero, L. and Rumi, P. (2000).
    Clinical Application of the Matlab Toolbox VFCLab for the Analysis of the Heart Rate Variability (in Spanish). Proceedings of the XVII Brazilian Conference in Biomedical Engineering.
    BibTeX:
    @inproceedings{agostinigamerorumi2000,
      author = {Agostini, A. and Gamero, L. and Rumi, P.},
      title = {Clinical Application of the Matlab Toolbox VFCLab for the Analysis of the Heart Rate Variability (in Spanish)},
      booktitle = {Proceedings of the XVII Brazilian Conference in Biomedical Engineering},
      year = {2000},
      abstract = {Cardiology has recently incorporated powerful, non-invasive diagnostic tools based on the processing of signals obtained from the cardiovascular system. Heart rate variability (HRV) is one of them and carries very valuable information regarding cardiovascular pathologies, the state of the autonomic nervous system, and the prognosis of sudden cardiac death. This work presents a clinical application of a library of functions developed in Matlab (a Matlab Toolbox) for the mathematical analysis of HRV. The library contains tools for analysis in the time and frequency domains, as well as tools for the study of the nonlinear dynamics of the cardiovascular system. For the clinical application, a group of patients with characteristic cardiovascular pathologies is used. The results obtained are compared with those of other works and are presented in numerical and graphical form.}}
    		
    Abstract: Cardiology has recently incorporated powerful, non-invasive diagnostic tools based on the processing of signals obtained from the cardiovascular system. Heart rate variability (HRV) is one of them and carries very valuable information regarding cardiovascular pathologies, the state of the autonomic nervous system, and the prognosis of sudden cardiac death. This work presents a clinical application of a library of functions developed in Matlab (a Matlab Toolbox) for the mathematical analysis of HRV. The library contains tools for analysis in the time and frequency domains, as well as tools for the study of the nonlinear dynamics of the cardiovascular system. For the clinical application, a group of patients with characteristic cardiovascular pathologies is used. The results obtained are compared with those of other works and are presented in numerical and graphical form.
    Review:
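Two of the standard time-domain HRV measures covered by toolboxes like the one described above can be sketched in a few lines. This is a minimal illustration (function names `sdnn` and `rmssd` follow the conventional HRV terminology; the RR series is synthetic), not code from VFCLab, which is a Matlab library:

```python
import numpy as np

# Two standard time-domain HRV measures computed from RR intervals (ms).

def sdnn(rr):
    """Standard deviation of all RR intervals (sample std, ddof=1)."""
    return float(np.std(rr, ddof=1))

def rmssd(rr):
    """Root mean square of successive RR-interval differences."""
    d = np.diff(rr)
    return float(np.sqrt(np.mean(d**2)))

rr = [800, 810, 790, 805, 795]  # synthetic RR series in milliseconds
# sdnn(rr)  -> ~7.91 ms
# rmssd(rr) -> ~14.36 ms
```

Frequency-domain measures (e.g. LF/HF power) additionally require resampling the RR series and spectral estimation, which the toolbox also provides but is omitted here.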
    Agostini, A. and Gamero, L. and Rumi, P. (1999).
    VFCLab: Matlab Toolbox for the Analysis of the Heart Rate Variability (in Spanish). Proceedings of the XII Argentinean Conference of Bioengineering.
    BibTeX:
    @inproceedings{agostinigamerorumi1999,
      author = {Agostini, A. and Gamero, L. and Rumi, P.},
      title = {VFCLab: Matlab Toolbox for the Analysis of the Heart Rate Variability (in Spanish)},
      booktitle = {Proceedings of the XII Argentinean Conference of Bioengineering},
      year = {1999},
      abstract = {The analysis of heart rate variability (HRV) constitutes one of the most promising tools for the study and diagnosis of pathologies of the cardiovascular and autonomic nervous systems, and for the prognosis of sudden cardiac death. This work presents a library of functions for analyzing HRV in the time and frequency domains, as well as for the study of nonlinear dynamics. The most important functions are described and a practical application is carried out on a 24-hour Holter recording of a healthy patient. The results are presented in numerical and graphical form.}}
    		
    Abstract: The analysis of heart rate variability (HRV) constitutes one of the most promising tools for the study and diagnosis of pathologies of the cardiovascular and autonomic nervous systems, and for the prognosis of sudden cardiac death. This work presents a library of functions for analyzing HRV in the time and frequency domains, as well as for the study of nonlinear dynamics. The most important functions are described and a practical application is carried out on a 24-hour Holter recording of a healthy patient. The results are presented in numerical and graphical form.
    Review:

    © 2011 - 2016 Dept. of Computational Neuroscience • comments to: sreich _at_ gwdg.de • Impressum / Site Info