
    Pistage 3D par mesures angulaires sans manœuvres

    Passive target estimation is a widely investigated problem of practical interest. We are concerned specifically with an autonomous flight system developed onboard the ONERA ReSSAC unmanned helicopter. This helicopter is equipped with a (visible or infrared) camera and is therefore able to measure the azimuth and elevation angles of a target. The target is assumed to follow a constant-velocity motion. It is well known that the observer must maneuver in order to ensure the observability of the target state. We are interested in partly tracking the target state when both the observer and the target follow a constant-velocity model in three-dimensional space. We describe the set of all trajectories compatible with the angle measurements and propose a quick method to estimate these trajectories.
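For orientation, the angle-only measurement model underlying such problems can be written as follows; this is a standard bearings-only formulation in notation of our choosing, not necessarily the paper's:

```latex
% Relative position of the target with respect to the observer at time t,
% both following constant-velocity motion:
\mathbf{r}(t) = \bigl(\mathbf{p}_T(0) + t\,\mathbf{v}_T\bigr)
              - \bigl(\mathbf{p}_O(0) + t\,\mathbf{v}_O\bigr)
              = (r_x, r_y, r_z)^\top .

% Azimuth and elevation measured by the camera:
\theta(t)  = \operatorname{atan2}(r_y, r_x), \qquad
\varphi(t) = \arctan\!\frac{r_z}{\sqrt{r_x^2 + r_y^2}} .
```

Because both angles are invariant when \(\mathbf{r}(t)\) is scaled by a positive constant, a non-maneuvering observer can only recover the target trajectory up to a one-parameter family of homothetic trajectories, which is the kind of compatible-trajectory set the abstract refers to.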

    NASA-ONERA Collaboration on Human Factors in Aviation Accidents and Incidents

    This is the first annual report jointly prepared by NASA and ONERA on the work performed under the agreement to collaborate on a study of the human factors entailed in aviation accidents and incidents, particularly focused on the consequences of decreases in human performance associated with fatigue. The objective of this agreement is to generate reliable, automated procedures that improve understanding of the levels and characteristics of flight-crew fatigue factors whose confluence will likely result in unacceptable crew performance. This study entails the analyses of numerical and textual data collected during operational flights. NASA and ONERA are collaborating on the development and assessment of automated capabilities for extracting operationally significant information from very large, diverse (textual and numerical) databases, much larger than can be handled practically by human experts.

    First Annual Report: NASA-ONERA Collaboration on Human Factors in Aviation Accidents and Incidents

    This is the first annual report jointly prepared by NASA and ONERA on the work performed under the agreement to collaborate on a study of the human factors entailed in aviation accidents and incidents, particularly focused on the consequences of decreases in human performance associated with fatigue. The objective of this Agreement is to generate reliable, automated procedures that improve understanding of the levels and characteristics of flight-crew fatigue factors whose confluence will likely result in unacceptable crew performance. This study entails the analyses of numerical and textual data collected during operational flights. NASA and ONERA are collaborating on the development and assessment of automated capabilities for extracting operationally significant information from very large, diverse (textual and numerical) databases, much larger than can be handled practically by human experts. This report presents the approach that is currently expected to be used in processing and analyzing the data for identifying decrements in aircraft performance and examining their relationships to decrements in crewmember performance due to fatigue. The decisions on the approach were based on samples of both the numerical and textual data that will be collected during the four studies planned under the Human Factors Monitoring Program (HFMP). Results of preliminary analyses of these sample data are presented in this report.

    Autonomous search and rescue rotorcraft mission stochastic planning with generic DBNs

    This paper proposes an original generic hierarchical framework to facilitate the modeling stage of complex autonomous robotics mission planning problems with action uncertainties. Such stochastic planning problems can be modeled as Markov Decision Processes [5]. This work is motivated by a real application to an autonomous search and rescue rotorcraft within the ReSSAC project at ONERA. As shown in Figure 1.a, an autonomous rotorcraft must fly over and explore regions, using waypoints, in order to find one (roughly localized) person per region (dark small areas). Uncertainties can come from the unpredictability of the environment (wind, visibility) or from partial knowledge of it: map of obstacles, elevation map, etc. After a short presentation of the framework of structured Markov Decision Processes (MDPs), we present a new original hierarchical MDP model based on generic Dynamic Bayesian Network templates. We illustrate the benefits of our approach on search and rescue missions of the ReSSAC project.
    IFIP International Conference on Artificial Intelligence in Theory and Practice - Planning and Scheduling. Red de Universidades con Carreras en Informática (RedUNCI).

    Extending the Bellman equation for MDPs to continuous actions and continuous time in the discounted case

    Recent work on Markov Decision Processes (MDPs) covers the use of continuous variables and resources, including time. This work is usually done in a framework of bounded resources and finite temporal horizon, for which a total reward criterion is often appropriate. However, most of this work considers discrete effects on continuous variables, whereas continuous variables often allow for parametric (possibly continuous) quantification of action effects. Moreover, infinite-horizon MDPs often make use of discounted criteria in order to ensure convergence and to account for the difference between a reward obtained now and a reward obtained later. In this paper, we build on the standard MDP framework in order to extend it to continuous time and resources and to the corresponding parametric actions. We aim at providing a framework and a sound set of hypotheses under which a classical Bellman equation holds in the discounted case, for parametric continuous actions and hybrid state spaces, including time. We illustrate our approach by applying it to the TMDP representation of Boyan and Littman.
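As a reference point, the classical discounted Bellman equation that such an extension generalizes can be sketched as follows; the notation is standard and ours, and the paper's hybrid-state, parametric-action version refines each term:

```latex
% Discrete case:
V^*(s) = \max_{a \in A(s)} \Bigl[\, R(s,a)
       + \gamma \sum_{s'} P(s' \mid s,a)\, V^*(s') \Bigr].

% With parametric continuous actions and a hybrid state space, the max
% becomes a supremum over a continuous action set and the sum an integral
% over next states:
V^*(s) = \sup_{a \in A(s)} \Bigl[\, R(s,a)
       + \gamma \int_{S} V^*(s')\, P(\mathrm{d}s' \mid s,a) \Bigr].
```

The technical difficulty the abstract alludes to is precisely when the supremum is attained and the integral well defined, which is what the proposed set of hypotheses is meant to guarantee.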

    Approximate Policy Iteration for Generalized Semi-Markov Decision Processes: an Improved Algorithm

    In the context of time-dependent problems of planning under uncertainty, most of the problem's complexity comes from the concurrent interaction of simultaneous processes. Generalized Semi-Markov Decision Processes represent an efficient formalism to capture both concurrency of events and actions and uncertainty. We introduce GSMDPs with observable time and hybrid state space and present a new algorithm based on Approximate Policy Iteration to generate efficient policies. This algorithm relies on simulation-based exploration and makes use of SVM regression. We experimentally illustrate the strengths and weaknesses of this algorithm and propose an improved version based on the weaknesses highlighted by the experiments.

    Un Algorithme Amélioré d'Itération de la Politique Approchée pour les Processus Décisionnels Semi-Markoviens Généralisés

    The complexity of time-dependent decision problems under uncertainty often stems from the interaction of several concurrent processes. Generalized Semi-Markov Decision Processes (GSMDPs) provide an efficient and elegant formalism for representing concurrency of events and actions together with uncertainty. We propose a GSMDP formalism extended to observable time and a hybrid state space. On this basis, we introduce a new algorithm inspired by approximate policy iteration in order to build efficient policies. This algorithm relies on simulation-guided exploration and uses support vector learning techniques. We illustrate the algorithm on an example and propose an improved version that compensates for its main weakness.

    TiMDPpoly: An Improved Method for Solving Time-Dependent MDPs

    We introduce TMDPpoly, an algorithm designed to solve planning problems with durative actions, under probabilistic uncertainty, in a non-stationary, continuous-time context. Mission planning for autonomous agents such as planetary rovers or unmanned aircraft often corresponds to such time-dependent planning problems. These problems can be cast in the framework of Time-dependent Markov Decision Processes (TiMDPs). We analyze the TiMDP optimality equations in order to exploit their properties. Then, we focus on the class of piecewise polynomial models in order to approximate TiMDPs, and introduce several algorithmic contributions which lead to the TMDPpoly algorithm for TiMDPs. Finally, our approach is evaluated on an unmanned aircraft mission planning problem and on an adapted version of the well-known Mars rover domain.

    Adapting an MDP planner to time-dependency: case study on a UAV coordination problem

    In order to allow the temporal coordination of two independent communicating agents, one needs to be able to plan in a time-dependent environment. This paper deals with the modeling and solving of such problems through the use of Time-dependent Markov Decision Processes (TiMDPs). We provide an analysis of the TiMDP model and exploit its properties to introduce an improved asynchronous value iteration method. Our approach is evaluated on a UAV temporal coordination problem and on the well-known Mars rover domain.
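For orientation, asynchronous (Gauss-Seidel style) value iteration, the classical method the abstract's improved variant builds on, can be sketched as follows; the 3-state chain, its transitions, and every name here are illustrative assumptions, not the TiMDP model or the UAV problem from the paper:

```python
# A minimal sketch of asynchronous value iteration on a small discounted MDP.
# States are 0..2; transitions[s][a] lists (probability, next_state, reward).
GAMMA = 0.9

transitions = {
    0: {'stay': [(1.0, 0, 0.0)], 'move': [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {'stay': [(1.0, 1, 0.0)], 'move': [(0.8, 2, 2.0), (0.2, 1, 0.0)]},
    2: {'stay': [(1.0, 2, 0.0)], 'move': [(1.0, 2, 0.0)]},  # absorbing
}

def async_value_iteration(eps=1e-8):
    """Sweep states in place until the Bellman residual drops below eps.

    Unlike synchronous value iteration, each Bellman backup immediately
    reuses values updated earlier in the same sweep, which typically
    speeds up convergence.
    """
    V = {s: 0.0 for s in transitions}
    while True:
        residual = 0.0
        for s in transitions:
            best = max(
                sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values()
            )
            residual = max(residual, abs(best - V[s]))
            V[s] = best  # in-place update: later states see this new value
        if residual < eps:
            return V
```

On this chain the absorbing state keeps value 0, and the closed form for state 1 is V(1) = 1.6 / (1 - 0.18), which the fixed point matches.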

    A Simulation-based Approach for Solving Temporal Markov Problems

    Time is a crucial variable in planning and often requires special attention since it introduces a specific structure along with additional complexity, especially in the case of decision under uncertainty. In this paper, after reviewing and comparing MDP frameworks designed to deal with temporal problems, we focus on Generalized Semi-Markov Decision Processes (GSMDPs) with observable time. We highlight the inherent structure and complexity of these problems and present the differences with classical reinforcement learning problems. Finally, we introduce a new simulation-based reinforcement learning method for solving GSMDPs, bringing together results from simulation-based policy iteration, regression techniques and simulation theory. We illustrate our approach on a subway network control example.
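As a rough illustration of simulation-based policy iteration (Monte Carlo rollout evaluation plus greedy improvement), here is a toy sketch; the two-state simulator and every name in it are invented for illustration, and the regression step over sampled values that a full method would use is deliberately omitted:

```python
import random

GAMMA = 0.9

def step(s, a, rng):
    """Black-box simulator: returns (next_state, reward)."""
    if a == 'work':
        return (1, 1.0) if rng.random() < 0.9 else (0, 0.0)
    return (0, 0.1)  # 'rest' pays a small sure reward

def rollout(s, policy, rng, horizon=40):
    """Discounted return of one simulated trajectory from state s."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        s, r = step(s, policy[s], rng)
        total += discount * r
        discount *= GAMMA
    return total

def policy_iteration(n_rollouts=100, rounds=3, seed=0):
    """Approximate policy iteration driven entirely by the simulator."""
    rng = random.Random(seed)
    policy = {0: 'rest', 1: 'rest'}
    for _ in range(rounds):
        for s in policy:
            # Greedy improvement from simulated one-step lookahead:
            # estimate Q(s, a) by averaging rollouts under the current policy.
            def q(a):
                vals = []
                for _ in range(n_rollouts):
                    s2, r = step(s, a, rng)
                    vals.append(r + GAMMA * rollout(s2, policy, rng))
                return sum(vals) / n_rollouts
            policy[s] = max(['work', 'rest'], key=q)
    return policy
```

Here `'work'` has expected reward 0.9 per step versus 0.1 for `'rest'`, so the improvement loop converges to working everywhere; a GSMDP method would replace the tabular policy with a regressor over the hybrid (state, time) space.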