The trade-off between taxi time and fuel consumption in airport ground movement
Environmental impact is a key agenda item in many sectors nowadays, and the air transportation sector is also trying to reduce its impact as much as possible. One area which has remained relatively unexplored in this context is the ground movement problem for aircraft on the airport's surface. Aircraft have to be routed from a gate to a runway and vice versa, and it is still unknown whether fuel burn and environmental impact reductions will best result from purely minimising taxi times or whether it is also important to avoid multiple acceleration phases. This paper presents a newly developed multi-objective approach for analysing the trade-off between taxi time and fuel consumption during taxiing. The approach combines a graph-based routing algorithm with a population adaptive immune algorithm to discover different speed profiles of aircraft. Analysis with data from a European hub airport has highlighted the impressive performance of the new approach. Furthermore, it is shown that the trade-off between taxi time and fuel consumption is very sensitive to the fuel-related objective function that is used.
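A minimal sketch of the kind of two-objective evaluation this trade-off rests on: given a candidate speed profile along a taxi route, compute its total taxi time and a crude fuel proxy that penalises acceleration phases. The profile format, the burn coefficients and the function names below are illustrative assumptions, not the paper's model.

```python
# Minimal sketch (not the paper's algorithm): evaluating one taxi speed
# profile on the two competing objectives. A profile is a list of
# (segment_length_m, speed_mps) pairs along the assigned taxi route.
# The fuel proxy and its coefficients are illustrative assumptions only.

def evaluate_profile(profile, idle_burn=0.1, accel_burn=0.5):
    """Return (taxi_time_s, fuel_proxy) for a speed profile."""
    taxi_time = 0.0
    fuel = 0.0
    prev_speed = 0.0
    for length, speed in profile:
        seg_time = length / speed
        taxi_time += seg_time
        fuel += idle_burn * seg_time            # baseline burn while rolling
        if speed > prev_speed:                  # penalise each acceleration phase
            fuel += accel_burn * (speed - prev_speed)
        prev_speed = speed
    return taxi_time, fuel

# Two candidate profiles over the same 900 m route: steady vs stop-and-go.
steady  = [(300, 8.0), (300, 8.0), (300, 8.0)]
stop_go = [(300, 10.0), (300, 4.0), (300, 10.0)]
print(evaluate_profile(steady), evaluate_profile(stop_go))
```

A multi-objective optimiser such as the one described here would then search over many such profiles, keeping the non-dominated (time, fuel) pairs.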
AMPLE: an anytime planning and execution framework for dynamic and uncertain problems in robotics
Acting in robotics is driven by reactive and deliberative reasoning, which take place in the competition between execution and planning processes. Properly balancing reactivity and deliberation is still an open question for the harmonious execution of deliberative plans in complex robotic applications. We propose a flexible algorithmic framework that allows continuous real-time planning of complex tasks in parallel with their execution. Our framework, named AMPLE, is oriented towards modular robotic architectures in the sense that it turns planning algorithms into services that must be generic, reactive, and valuable. Services are optimized actions that are delivered at precise time points following requests from other modules, requests that include the states and dates at which actions are needed. To this end, our framework is divided into two concurrent processes: a planning thread, which receives planning requests and delegates action selection to embedded planning software in compliance with the queue of internal requests, and an execution thread, which orchestrates these planning requests as well as action execution and state monitoring. We show how the behavior of the execution thread can be parametrized to achieve various strategies, which can differ, for instance, in the distribution of internal planning requests over possible future execution states in anticipation of the uncertain evolution of the system, or over different underlying planners to take several planning levels into account. We demonstrate the flexibility and relevance of our framework on various robotic benchmarks and real experiments involving complex planning problems of different natures that could not be properly tackled by existing dedicated planning approaches relying on the standard plan-then-execute loop.
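A rough Python sketch of the two-thread pattern described in the abstract, assuming a simple request queue; the planner stub, names and timings are placeholders rather than the actual AMPLE implementation.

```python
# Rough sketch of the planning/execution thread pattern (not the AMPLE code
# itself): the execution thread posts planning requests tagged with the state
# and date at which an action will be needed, while the planning thread serves
# them by delegating to an embedded planner. All names are illustrative.
import threading, queue, time

requests = queue.Queue()          # planning requests: (state, needed_at)
actions  = {}                     # delivered actions, keyed by state

def embedded_planner(state):
    time.sleep(0.05)              # stand-in for anytime plan optimisation
    return f"action_for_{state}"

def planning_thread():
    while True:
        state, needed_at = requests.get()
        if state is None:         # shutdown sentinel
            break
        actions[state] = embedded_planner(state)

def execution_thread():
    for step, state in enumerate(["s0", "s1", "s2"]):
        # anticipate the next execution state while acting in the current one
        requests.put((state, time.time() + 0.2))
        time.sleep(0.2)           # stand-in for action execution + monitoring
        print(step, state, actions.get(state, "default_action"))
    requests.put((None, None))

p = threading.Thread(target=planning_thread)
e = threading.Thread(target=execution_thread)
p.start(); e.start(); e.join(); p.join()
```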
Multi-Robot Planning and Execution for an Exploration Mission: a Case Study
This paper presents the first steps of the treatment of a real-world robotic scenario consisting of exploring a large area using several heterogeneous autonomous robots. Addressing this scenario requires combining several components at the planning and execution levels. First, the scenario needs to be well modeled in order for a planning algorithm to come up with a realistic solution. This implies modeling the temporal and spatial synchronization of activities between robots, as well as computing the duration of move activities using a precise terrain model. Then, in order to obtain a robust multi-agent executive layer, we need a robust hierarchical plan scheme that helps identify appropriate plan repairs when, despite the quality of the various models, failures or delays occur. Finally, we need various algorithmic tools in order to obtain flexible plans of good quality.
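As an illustration of one modelling ingredient mentioned above, the sketch below estimates the duration of a move activity from a coarse terrain model; the grid, speeds and slope penalty are invented for the example and are not the paper's actual model.

```python
# Illustrative sketch only: estimating the duration of a move activity from
# a coarse elevation grid, as an input to a multi-robot planner.
import math

elevation = {(0, 0): 10.0, (1, 0): 12.0, (2, 0): 15.0}   # metres, per 50 m cell

def move_duration(path, cell_size=50.0, flat_speed=1.0, slope_penalty=2.0):
    """Seconds for a ground robot to traverse a list of grid cells."""
    total = 0.0
    for a, b in zip(path, path[1:]):
        rise = abs(elevation[b] - elevation[a])
        run = cell_size * math.hypot(b[0] - a[0], b[1] - a[1])
        speed = flat_speed / (1.0 + slope_penalty * rise / run)  # slower uphill/downhill
        total += math.hypot(run, rise) / speed
    return total

print(move_duration([(0, 0), (1, 0), (2, 0)]))
```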
Synthesis of plans or policies for controlling dynamic systems
To be properly controlled, dynamic systems need plans or policies. Plans are sequences of actions to be performed, whereas policies associate an action to be performed with each possible system state. The model-based synthesis of plans or policies consists in producing them automatically, starting from a model of the physical system to be controlled and from user requirements on the controlled system. This article is a survey of what exists and what has been done at ONERA for the automatic synthesis of plans or policies for the high-level control of dynamic systems.
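The distinction between plans and policies can be made concrete with a tiny sketch (made-up states and actions):

```python
# A plan is a fixed sequence of actions; a policy maps every reachable
# system state to the action to perform there. States/actions are invented.
plan = ["take_off", "goto_zone_A", "observe", "land"]      # sequence of actions

policy = {                                                  # state -> action
    "on_ground":   "take_off",
    "airborne":    "goto_zone_A",
    "over_zone_A": "observe",
    "target_seen": "land",
}

state = "on_ground"
print(plan[0], policy[state])   # first planned action vs. action chosen by the policy
```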
Proactive planning and execution strategies with multiple hypotheses
In order to enhance the behavior of autonomous service robots, we are exploring multiple paradigms for their planning and execution strategy (the way of interleaving the planning, selection and execution of actions). In this paper we focus on continuous proactive planning with multiple hypotheses, in order to continuously generate multiple solution plans from which an action can be selected when required. To illustrate the concepts, we describe how this strategy can be used for autonomous navigation in dynamic environments, and we present and analyze the tests we carried out with several instantiations. We also discuss several aspects of and concerns about the proposed strategy, and how integrating more semantic information could enhance the capabilities of service robots for real-world applications.
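A hedged sketch of the multiple-hypothesis idea: candidate plans are generated ahead of time for several predicted states, and the action is taken from whichever plan matches the state actually observed. The planner and state names are placeholders, not the paper's implementation.

```python
# Illustrative only: proactive planning over multiple hypotheses about the
# robot's next state, with selection deferred to execution time.
def plan_from(state):
    return [f"move_towards_goal_from_{state}", "reach_goal"]

def proactive_step(current_state, predicted_states):
    # plans generated ahead of time, before the action is actually needed
    hypotheses = {s: plan_from(s) for s in predicted_states}
    # at execution time, use the plan whose hypothesis matched; replan otherwise
    plan = hypotheses.get(current_state) or plan_from(current_state)
    return plan[0]

print(proactive_step("corridor_blocked", ["corridor_free", "corridor_blocked"]))
```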
POMDP-based online target detection and recognition for autonomous UAVs
This paper presents a target detection and recognition mission by an autonomous Unmanned Aerial Vehicle (UAV), modeled as a Partially Observable Markov Decision Process (POMDP). The POMDP model deals in a single framework with both perception actions (controlling the camera's view angle) and mission actions (moving between zones and flight levels, landing) needed to achieve the goal of the mission, i.e. landing in a zone containing a car whose model is recognized as the desired target model with sufficient belief. We explain how we automatically learned the probabilistic observation model of the POMDP from a statistical analysis of the image processing algorithm used on board the UAV to analyze objects in the scene. We also present our "optimize-while-execute" framework, which drives a POMDP sub-planner to optimize and execute the POMDP policy in parallel under action duration constraints, reasoning about the future possible execution states of the robotic system. Finally, we present experimental results which demonstrate that Artificial Intelligence techniques like POMDP planning can be successfully applied to automatically control perception and mission actions hand in hand for complex time-constrained UAV missions.
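The belief mentioned in the abstract is maintained by a POMDP belief update; the sketch below shows a generic Bayes-filter version with made-up numbers, not the learned observation model from the paper.

```python
# Generic POMDP belief update (standard Bayes filter, not the authors' model):
# after performing action a and observing o, the belief over the hidden state
# -- e.g. which zone contains the target car -- is revised using the
# transition model T and the observation model O. All numbers are illustrative.
def update_belief(belief, a, o, T, O):
    new_belief = {}
    for s2 in belief:
        pred = sum(T[(s, a, s2)] * belief[s] for s in belief)   # prediction step
        new_belief[s2] = O[(s2, a, o)] * pred                   # correction step
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

states = ["target_in_zone_1", "target_in_zone_2"]
belief = {s: 0.5 for s in states}
T = {(s, "observe", s2): 1.0 if s == s2 else 0.0 for s in states for s2 in states}
O = {("target_in_zone_1", "observe", "car_detected"): 0.8,
     ("target_in_zone_2", "observe", "car_detected"): 0.3}
print(update_belief(belief, "observe", "car_detected", T, O))
```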