
    White Phase Intersection Control through Distributed Coordination: A Mobile Controller Paradigm in a Mixed Traffic Stream

    This study presents a vehicle-level distributed coordination strategy to control a mixed traffic stream of connected automated vehicles (CAVs) and connected human-driven vehicles (CHVs) through signalized intersections. We use CAVs as mobile traffic controllers during a newly introduced white phase, during which CAVs negotiate the right-of-way to lead groups of CHVs while each CHV must follow the vehicle immediately ahead of it. Under low CAV penetration rates the white phase is not activated, and vehicles must wait for green signals. We formulate this problem as a distributed mixed-integer non-linear program and develop a methodology to form an agreement among all vehicles on their trajectories and on signal timing parameters. The agreement on trajectories is reached through an iterative process in which each CAV updates its trajectory, based on the trajectories shared by other vehicles, to avoid collisions, and then shares its updated trajectory in turn. The agreement on signal timing parameters is formed through a voting process in which the most-voted feasible signal timing parameters are selected. Numerical experiments indicate that the proposed methodology can efficiently control vehicle movements at signalized intersections under various CAV market shares. In our tests, the introduced white phase reduces total delay by 3.2% to 94.06% compared to cooperative trajectory and signal optimization under different CAV market shares. In addition, our numerical results show that the proposed technique reduces total delay by 40.2% to 98.9% compared to a fully-actuated signal control obtained from state-of-practice traffic signal optimization software.
    Comment: 15 pages, 20 figures, 4 tables
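    A minimal sketch of the iterative trajectory-agreement and voting steps described above. The one-dimensional arrival-time abstraction, the fixed headway, and the yield rule are illustrative assumptions, not the paper's mixed-integer formulation.

```python
HEADWAY = 2.0  # minimum time separation (s) between conflicting movements (assumed)

def agree_on_arrivals(desired, conflicts, max_iters=100):
    """Iteratively adjust stop-line arrival times until every conflicting
    pair is separated by HEADWAY (a stand-in for collision avoidance).
    desired:   {vehicle_id: earliest feasible arrival time}
    conflicts: set of vehicle-id pairs whose paths cross."""
    arrival = dict(desired)
    for _ in range(max_iters):
        changed = False
        for a, b in conflicts:
            # the vehicle that would arrive later yields (assumed priority rule)
            first, second = (a, b) if arrival[a] <= arrival[b] else (b, a)
            if arrival[second] - arrival[first] < HEADWAY:
                arrival[second] = arrival[first] + HEADWAY
                changed = True
        if not changed:          # fixed point: all vehicles agree
            return arrival
    raise RuntimeError("no agreement within iteration budget")

def vote_on_signal(feasible_votes):
    """Each CAV votes for one feasible signal-timing option; the most
    voted option is selected (ties broken arbitrarily)."""
    tally = {}
    for option in feasible_votes.values():
        tally[option] = tally.get(option, 0) + 1
    return max(tally, key=tally.get)

if __name__ == "__main__":
    desired = {1: 0.0, 2: 0.5, 3: 1.0}
    conflicts = {(1, 2), (2, 3)}      # crossing movements (assumed)
    print(agree_on_arrivals(desired, conflicts))
    print(vote_on_signal({1: "plan_A", 2: "plan_B", 3: "plan_A"}))
```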

    A Two-Stage Optimization-based Motion Planner for Safe Urban Driving

    Recent road trials have shown that guaranteeing the safety of driving decisions is essential for the wider adoption of autonomous vehicle technology. One promising direction is to pose safety requirements as planning constraints in nonlinear, non-convex optimization problems of motion synthesis. However, many implementations of this approach are limited by the uncertain convergence and local optimality of the solutions achieved, affecting overall robustness. To improve upon these issues, we propose a novel two-stage optimization framework: in the first stage, we find a solution to a Mixed-Integer Linear Programming (MILP) formulation of the motion synthesis problem, the output of which initializes a second Nonlinear Programming (NLP) stage. The MILP stage enforces hard constraints of safety and road-rule compliance, generating a solution in the right subspace, while the NLP stage refines the solution within the safety bounds for feasibility and smoothness. We demonstrate the effectiveness of our framework via simulated experiments of complex urban driving scenarios, outperforming a state-of-the-art baseline in metrics of convergence, comfort and progress.
    Comment: IEEE Transactions on Robotics (T-RO), 202
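    A minimal sketch of the two-stage warm-start idea under toy assumptions: the integer stage here has a single side-selection decision, so it is resolved by enumeration rather than a full MILP solver, and its output initializes a smoothness-refining NLP solved with SciPy. The horizon, obstacle extent, clearance, and cost weights are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize

T = 12                       # planning steps (assumed horizon)
obs_lo, obs_hi = -0.5, 0.5   # lateral extent of an obstacle (assumed)
CLEAR = 0.2                  # required clearance (assumed)

def stage1_integer(y0):
    """Integer stage: choose which side to pass the obstacle on.
    With a single binary decision this reduces to enumeration; a real
    planner would solve a MILP with hard safety / road-rule constraints."""
    candidates = []
    for side in (+1, -1):
        bound = obs_hi + CLEAR if side > 0 else obs_lo - CLEAR
        y = np.full(T, bound)          # piecewise-constant seed path
        y[0] = y0
        detour = float(np.abs(np.diff(y)).sum())
        candidates.append((detour, side, y))
    _, side, y = min(candidates, key=lambda c: c[0])
    return side, y

def stage2_nlp(side, y_seed):
    """NLP stage: refine the stage-1 seed for smoothness while staying
    inside the safe half-space chosen by stage 1."""
    def cost(y):                       # penalize curvature (second differences)
        return np.sum(np.diff(y, 2) ** 2) + 1e-2 * np.sum(y ** 2)
    if side > 0:
        safe = {"type": "ineq", "fun": lambda y: y[1:] - (obs_hi + CLEAR)}
    else:
        safe = {"type": "ineq", "fun": lambda y: (obs_lo - CLEAR) - y[1:]}
    start = {"type": "eq", "fun": lambda y: y[0] - y_seed[0]}  # pin the start
    return minimize(cost, y_seed, constraints=[safe, start]).x

side, seed = stage1_integer(y0=0.0)
print("side:", side, "refined:", np.round(stage2_nlp(side, seed), 3))
```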

    Robust, goal-directed plan execution with bounded risk

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 273-283).

    There is an increasing need for robust optimal plan execution for multi-agent systems in uncertain environments, while guaranteeing an acceptable probability of success. For example, a fleet of unmanned aerial vehicles (UAVs) and autonomous underwater vehicles (AUVs) may be required to operate autonomously for an extensive mission duration in an uncertain environment. Previous work introduced the concept of a model-based executive, which increases the level of autonomy by elevating the level at which systems are commanded. This thesis develops model-based executives that reason explicitly from a stochastic plant model to find the optimal course of action, while ensuring that the probability of failure is within a user-specified risk bound.

    This thesis presents two robust model-based executives: probabilistic Sulu, or p-Sulu, and distributed probabilistic Sulu, or dp-Sulu. The objective for p-Sulu and dp-Sulu is to allow users to command continuous, stochastic multi-agent systems in a manner that is both intuitive and safe. The user specifies the desired evolution of the plant state, as well as the acceptable probabilities of failure, as a temporal plan on states called a chance-constrained qualitative state plan (CCQSP). An example of a CCQSP statement is "go to A through B within 30 minutes, with less than 0.001% probability of failure." p-Sulu and dp-Sulu take as inputs a CCQSP, a continuous plant model with stochastic uncertainty, and an objective function, and output an optimal continuous control sequence as well as an optimal discrete schedule. The difference between the two is that p-Sulu plans in a centralized manner while dp-Sulu plans in a distributed manner; dp-Sulu enables robust CCQSP execution for multi-agent systems.

    We solve the problem based on the key concept of risk allocation, which achieves tractability by allocating the specified risk to individual constraints and mapping the result into an equivalent deterministic constrained optimization problem. Risk allocation also enables distributed plan execution for multi-agent systems by distributing the risk among agents to decompose the optimization problem.

    Building upon the risk allocation approach, we develop our first CCQSP executive, p-Sulu, in four spirals. First, we develop the Convex Risk Allocation (CRA) algorithm, which can solve a CCQSP planning problem with a convex state space and a fixed schedule, highlighting the capability of optimally allocating risk to individual constraints. Second, we develop the Non-convex Iterative Risk Allocation (NIRA) algorithm, which can handle non-convex state spaces. Third, we build upon NIRA a full-horizon CCQSP planner, p-Sulu FH, which can optimize not only the control sequence but also the schedule. Fourth, we develop p-Sulu, which enables the real-time execution of CCQSPs by employing a receding horizon approach.

    Our second CCQSP executive, dp-Sulu, is developed in two spirals. First, we develop the Market-based Iterative Risk Allocation (MIRA) algorithm, which can control a multi-agent system in a distributed manner by optimally distributing risk among agents through the market-based method called tatonnement. Second and finally, we integrate the capability of MIRA into p-Sulu to build the robust model-based executive, dp-Sulu, which can execute CCQSPs on multi-agent systems in a distributed manner.

    Our simulation results demonstrate that our executives can efficiently execute CCQSP planning problems with significantly reduced suboptimality compared to prior art.
    by Masahiro Ono. Ph.D.
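    A worked sketch of the risk-allocation idea at the heart of the thesis: a joint chance constraint over many linear constraints is split, via Boole's inequality, into per-constraint risk bounds whose sum does not exceed the overall bound, and each becomes a deterministic constraint tightened by a Gaussian quantile. The noise level, wall position, and uniform allocation below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

DELTA = 0.001        # overall acceptable failure probability (assumed)
N_CONSTRAINTS = 10   # e.g., "stay left of a wall" at 10 time steps
SIGMA = 0.05         # std dev of position error at each step (assumed)
wall = 1.0           # constraint: x_t <= wall

# Uniform risk allocation: by Boole's inequality, if each constraint fails
# with probability at most delta_i and sum(delta_i) <= DELTA, then the
# joint failure probability is at most DELTA.
delta_i = np.full(N_CONSTRAINTS, DELTA / N_CONSTRAINTS)

# Each chance constraint P(x_t > wall) <= delta_i maps to a deterministic
# constraint on the mean: mean_x_t <= wall - SIGMA * z(1 - delta_i).
tightening = SIGMA * norm.ppf(1.0 - delta_i)
deterministic_bounds = wall - tightening

print("per-step risk:", delta_i[0])
print("tightened bound on the mean:", deterministic_bounds[0].round(4))
# Iterative Risk Allocation would then redistribute risk from inactive
# constraints to active ones to reduce conservatism.
```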

    Bayesian Search Under Dynamic Disaster Scenarios

    Search and Rescue (SAR) is a hard decision-making context in which a limited amount of resources must be strategically allocated over the search region in order to find missing people in time. In this thesis, we consider SAR scenarios where the search region is being affected by some type of dynamic threat, such as a wildfire or a hurricane. Despite the large number of SAR missions that take place under these circumstances, and although Search Theory is a research area dating back more than half a century, to the best of our knowledge this kind of search problem has not been considered in any previous research. Here we propose a bi-objective mathematical optimization model and three solution methods for the problem: (1) epsilon-constraint; (2) lexicographic; and (3) an ant-colony-based heuristic. One objective of our model pursues the allocation of resources in the riskiest zones; it attempts to find victims located in the regions closest to the threat, which are at high risk of being reached by the disaster. In contrast, the second objective allocates resources in the regions where the victim is most likely to be found. Furthermore, we implemented a receding horizon approach that provides our planning methodology with the ability to adapt to the disaster's behavior based on updated information gathered during the mission. All our products were validated through computational experiments.
    Master's thesis (Magister en Ingeniería Industrial).
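    A minimal sketch of the epsilon-constraint method for a two-objective problem like the one above: maximize the probability of finding the victim subject to a lower bound on the risk coverage of the searched cells, sweeping the bound to trace the Pareto front. The tiny grid, per-cell values, and budget are illustrative assumptions, not the thesis model.

```python
from itertools import combinations

# Toy search grid: per-cell probability of containing the victim and
# per-cell risk of the threat reaching that cell first (all assumed).
cells = {
    "A": (0.10, 0.9), "B": (0.30, 0.7), "C": (0.25, 0.4),
    "D": (0.20, 0.2), "E": (0.15, 0.1),
}
K = 2  # number of cells we can search (assumed resource budget)

def epsilon_constraint(eps):
    """Maximize detection probability subject to covering at least eps
    of the risk objective (i.e., search the threatened cells first)."""
    best = None
    for subset in combinations(cells, K):
        prob = sum(cells[c][0] for c in subset)
        risk = sum(cells[c][1] for c in subset)
        if risk >= eps and (best is None or prob > best[0]):
            best = (prob, risk, subset)
    return best

# Sweeping eps traces (an approximation of) the Pareto front.
for eps in (0.0, 0.5, 1.0, 1.5):
    print(eps, "->", epsilon_constraint(eps))
```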

    Stochastic dynamic programming heuristic for the (R, s, S) policy parameters computation

    The (R, s, S) is a stochastic inventory control policy widely used by practitioners. In an inventory system managed according to this policy, the inventory is reviewed every R periods; if the observed inventory position is lower than the reorder level s, an order is placed. The order quantity is set to raise the inventory position to the order-up-to-level S. This paper introduces a new stochastic dynamic program (SDP) based heuristic to compute the (R, s, S) policy parameters for the non-stationary stochastic lot-sizing problem with backlogging of excess demand, fixed order and review costs, and linear holding and penalty costs. In a recent work, Visentin et al. (2021) present an approach to compute optimal policy parameters under these assumptions. Our model combines a greedy relaxation of the problem with a modified version of Scarf's (s, S) SDP. A simple implementation of the model requires a prohibitive computational effort to compute the parameters; however, we can speed up the computation by exploiting the K-convexity property and memoisation techniques. The resulting algorithm is considerably faster than the state of the art, making it more readily adoptable by practitioners. An extensive computational study compares our approach with the algorithms available in the literature.
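    A minimal sketch of a Scarf-style stochastic dynamic program with memoisation, assuming discrete demand, a short horizon, and illustrative cost parameters; it computes the optimal expected cost over order-up-to actions, not the paper's (R, s, S) heuristic itself.

```python
from functools import lru_cache

# Illustrative parameters (assumed, not from the paper)
T = 4                          # periods
K = 100.0                      # fixed ordering cost
h, p = 1.0, 10.0               # unit holding / backlog penalty cost
demand = [(2, 0.5), (4, 0.5)]  # (value, probability) per period
MAX_LEVEL = 12

@lru_cache(maxsize=None)       # memoisation over (period, inventory) states
def cost_to_go(t, inv):
    """Optimal expected cost from period t with inventory position inv
    (negative inv represents backlogged demand)."""
    if t == T:
        return 0.0
    best = float("inf")
    for order_up_to in range(inv, MAX_LEVEL + 1):  # action: raise inv (or not)
        c = K if order_up_to > inv else 0.0
        for d, prob in demand:
            end = order_up_to - d
            stage = h * max(end, 0) + p * max(-end, 0)  # holding or backlog
            c += prob * (stage + cost_to_go(t + 1, end))
        best = min(best, c)
    return best

print("optimal expected cost:", cost_to_go(0, 0))
```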

    From Monocular SLAM to Autonomous Drone Exploration

    Micro aerial vehicles (MAVs) are strongly limited in their payload and power capacity. To implement autonomous navigation, algorithms are therefore desirable that use sensory equipment that is as small, lightweight, and power-efficient as possible. In this paper, we propose a method for autonomous MAV navigation and exploration using a low-cost consumer-grade quadrocopter equipped with a monocular camera. Our vision-based navigation system builds on LSD-SLAM, which estimates the MAV trajectory and a semi-dense reconstruction of the environment in real time. Since LSD-SLAM only determines depth at high-gradient pixels, texture-less areas are not directly observed, so previous exploration methods that assume dense map information cannot be applied directly. We propose an obstacle mapping and exploration approach that takes the properties of our semi-dense monocular SLAM system into account. In experiments, we demonstrate our vision-based autonomous navigation and exploration system with a Parrot Bebop MAV.
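    A minimal sketch of frontier detection on an occupancy grid with unknown cells, the kind of representation a semi-dense map induces (texture-less regions stay unobserved). The grid encoding and neighborhood test are illustrative assumptions, not the paper's exploration method.

```python
import numpy as np

FREE, OCC, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return free cells adjacent to at least one unknown cell. With a
    semi-dense map, texture-less regions remain UNKNOWN, so frontiers
    mark where further exploration can still gain information."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neigh = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neigh == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

grid = np.full((5, 5), UNKNOWN)
grid[2, :3] = FREE           # a corridor observed by the camera
grid[1, 2] = OCC             # a high-gradient obstacle point
print(frontier_cells(grid))  # candidate exploration targets
```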