7 research outputs found

    A hardware-in-the-loop testing facility for unmanned aerial vehicle sensor suites and control algorithms

    Get PDF
    In the past decade, Unmanned Aerial Vehicles (UAVs) have rapidly grown into a major field of robotics in both industry and academia. Many well-established platforms have been developed, and the demand continues to grow. However, the UAVs utilized in industry are predominantly remotely piloted aircraft offering very limited levels of autonomy. In contrast, fully autonomous flight has been achieved in research, and the degree of autonomy continues to grow, with research now focusing on advanced tasks such as navigating cluttered terrain and formation flying. The gap between academia and industry is the robustness of control algorithms. Academic research often focuses on proof-of-concept demonstrations with little or no consideration of real-world concerns such as adverse weather or sensor integration. One of the goals of this thesis is to integrate real-world issues into the design process. A testing environment was designed and built that allows sensors and control algorithms to be tested against real obstacles and environmental conditions in a controlled, repeatable fashion. The use of this facility is demonstrated in the implementation of a safe-landing-zone algorithm for a robotic helicopter equipped with a laser scanner. Results from tests conducted in the testing facility are used to analyze results from flights in the field. Controlling the testing environment also provides a baseline for evaluating different control solutions. In the current research paradigm, it is difficult to determine which research questions have been solved because testing conditions vary from researcher to researcher. A common testing environment eliminates ambiguities and allows solutions to be characterized based on their performance in different terrains and environmental conditions. This thesis explores how flight tests can be conducted in the lab using the actual hardware and control algorithms.
The sensor package is attached to a 6-DOF gantry whose motion is governed by the dynamic model of the aircraft. To provide an expansive terrain over which the flight can be conducted, a scaled model of the environment was created. The feasibility of using a scaled environment is demonstrated with a common sensor package and control task: using computer vision to guide an autonomous helicopter. The effects of scaling are investigated, and the approach is validated by comparing results in the scaled model to actual flights. Finally, it is demonstrated how the facility can be used to investigate the effect of adverse conditions on control algorithm performance. The overarching philosophy of this work is that incorporating real-world concerns into the design process leads to more fully developed and robust solutions. Ph.D., Mechanical Engineering -- Drexel University, 201
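    The abstract's core mechanism, driving the gantry's position from the aircraft's dynamic model, can be sketched as a simulation loop. The following is a minimal illustrative sketch with a 1-D vertical point-mass model standing in for the thesis's full 6-DOF dynamics; all names and constants are assumptions, not the actual system.

```python
def simulate_gantry_setpoints(thrust_cmds, dt=0.01, mass=5.0, g=9.81):
    """Integrate a 1-D vertical point-mass model of a helicopter and
    return the altitude setpoints that would be streamed to the gantry.
    Illustrative stand-in for a full 6-DOF dynamic model (assumption)."""
    z, vz = 0.0, 0.0
    setpoints = []
    for thrust in thrust_cmds:
        az = thrust / mass - g   # net vertical acceleration
        vz += az * dt            # Euler integration of velocity
        z += vz * dt             # ...and position
        setpoints.append(z)
    return setpoints

# Hover thrust exactly cancels gravity, so the gantry setpoint holds at 0.
hover = simulate_gantry_setpoints([5.0 * 9.81] * 100)
print(hover[-1])  # 0.0
```

    Because the sensor suite rides on the gantry, each setpoint moves real hardware over real (scaled) terrain while the dynamics remain those of the simulated aircraft.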

    NeBula: TEAM CoSTAR’s robotic autonomy solution that won phase II of DARPA subterranean challenge

    Get PDF
    This paper presents and discusses algorithms, hardware, and software architecture developed by the TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR’s demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying), in various environments. We discuss the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition. Peer Reviewed. Agha, A., Otsu, K., Morrell, B., Fan, D. D., Thakker, R., Santamaria-Navarro, A., Kim, S.-K., Bouman, A., Lei, X., Edlund, J., Ginting, M. F., Ebadi, K., Anderson, M., Pailevanian, T., Terry, E., Wolf, M., Tagliabue, A., Vaquero, T. S., Palieri, M., Tepsuporn, S., Chang, Y., Kalantari, A., Chavez, F., Lopez, B., Funabiki, N., Miles, G., Touma, T., Buscicchio, A., Tordesillas, J., Alatur, N., Nash, J., Walsh, W., Jung, S., Lee, H., Kanellakis, C., Mayo, J., Harper, S., Kaufmann, M., Dixit, A., Correa, G. J., Lee, C., Gao, J., Merewether, G., Maldonado-Contreras, J., Salhotra, G., Da Silva, M. S., Ramtoula, B., Fakoorian, S., Hatteland, A., Kim, T., Bartlett, T., Stephens, A., Kim, L., Bergh, C., Heiden, E., Lew, T., Cauligi, A., Heywood, T., Kramer, A., Leopold, H. A., Melikyan, H., Choi, H. C., Daftry, S., Toupet, O., Wee, I., Thakur, A., Feras, M., Beltrame, G., Nikolakopoulos, G., Shim, D., Carlone, L., & Burdick, J. Postprint (published version)
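    Reasoning in the belief space means maintaining and updating a probability distribution over states rather than a single point estimate. A minimal discrete Bayes measurement update, shown below purely as an illustration of the concept (not NeBula's actual implementation), captures the basic operation:

```python
def bayes_update(belief, likelihood):
    """One measurement update of a discrete belief: weight the prior
    probability of each state by the measurement likelihood there,
    then renormalize so the posterior sums to one."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# A robot unsure which of three tunnel branches it occupies (uniform
# prior) receives a reading twice as likely in branch 1 as elsewhere.
belief = bayes_update([1/3, 1/3, 1/3], [0.2, 0.4, 0.2])
print(belief)  # [0.25, 0.5, 0.25]
```

    Planning over such distributions, rather than over a single best guess, is what allows risk-aware decisions under perceptual uncertainty.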

    A multimodal micro air vehicle for autonomous flight in near-earth environments

    Get PDF
    Reconnaissance, surveillance, and search-and-rescue missions in near-Earth environments such as caves, forests, and urban areas pose many new challenges to command and control (C2) teams. Of great significance is how to acquire situational awareness when access to the scene is blocked by enemy fire, rubble, or other occlusions. Small bird-sized aerial robots are expendable and can fly over obstacles and through small openings to assist in the acquisition and distribution of intelligence. However, limited flying space and densely populated obstacle fields require a vehicle that is capable of hovering but also maneuverable. A secondary flight mode was incorporated into a fixed-wing aircraft to preserve its maneuverability while adding the capability of hovering. An inertial measurement sensor and onboard flight control system were interfaced and used to transition the hybrid prototype from cruise to hover flight and sustain a hover autonomously. Furthermore, the hovering flight mode can be used to maneuver the aircraft through small openings such as doorways. An ultrasonic and infrared sensor suite was designed to follow exterior building walls until an ingress route was detected. Reactive control was then used to traverse the doorway and gather reconnaissance. Entering a dangerous environment to gather intelligence autonomously will provide an invaluable resource to any C2 team. The holistic approach of platform development, sensor suite design, and control serves as the philosophy of this work. Ph.D., Mechanical Engineering -- Drexel University, 200
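    The wall-following and ingress-detection behavior described above can be sketched as a simple reactive mapping from a range reading to a command. This is an illustrative sketch only; the gains, thresholds, and function names are assumptions, not the thesis's actual controller.

```python
def wall_follow_command(range_m, desired_m=1.0, gain=0.8, doorway_m=3.0):
    """Map a sidewall range reading (e.g. from an ultrasonic sensor) to a
    behavior and yaw-rate command. A range jump beyond `doorway_m` is
    treated as a detected ingress route, triggering doorway traversal.
    Purely illustrative reactive control (assumed values throughout)."""
    if range_m > doorway_m:
        # Wall has "disappeared": assume an opening and switch behaviors.
        return ("enter_doorway", 0.0)
    # Proportional control: steer toward the wall when too far from it,
    # away when too close, holding the desired standoff distance.
    yaw_rate = gain * (range_m - desired_m)
    return ("follow_wall", yaw_rate)

print(wall_follow_command(1.5))  # ('follow_wall', 0.4)
print(wall_follow_command(4.2))  # ('enter_doorway', 0.0)
```

    The appeal of reactive control here is that no map is needed: each sensor reading maps directly to an action, which suits a small vehicle with limited onboard computation.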

    Target classification in multimodal video

    Get PDF
    This thesis focuses on enhancing scene segmentation and target recognition methodologies via the mobilisation of contextual information. The algorithms developed to achieve this goal utilise multi-modal sensor information collected across varying scenarios, from controlled indoor sequences to challenging rural locations. Sensors are chiefly colour band and long wave infrared (LWIR), enabling persistent surveillance capabilities across all environments. In the drive to develop effectual algorithms towards the outlined goals, key obstacles are identified and examined: the recovery of background scene structure from foreground object 'clutter'; employing contextual foreground knowledge to circumvent training a classifier when labeled data is not readily available; creating a labeled LWIR dataset to train a convolutional neural network (CNN) based object classifier; and the viability of spatial context to address long-range target classification when big data solutions are not enough. For an environment displaying frequent foreground clutter, such as a busy train station, we propose an algorithm exploiting foreground object presence to segment underlying scene structure that is not often visible. If such a location is outdoors and surveyed by an infra-red (IR) and visible band camera set-up, scene context and contextual knowledge transfer allow reasonable class predictions for thermal signatures within the scene to be determined. Furthermore, a labeled LWIR image corpus is created to train an infrared object classifier, using a CNN approach. The trained network demonstrates effective classification accuracy of 95% over 6 object classes. However, performance is not sustainable for IR targets acquired at long range due to low signal quality, and classification accuracy drops. This is addressed by mobilising spatial context to affect network class scores, restoring robust classification capability.
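    The final step, mobilising spatial context to affect network class scores, can be sketched as fusing the CNN's score vector with a location-dependent class prior and renormalizing. The prior values and class names below are hypothetical, chosen only to illustrate the idea, not taken from the thesis.

```python
def apply_spatial_context(cnn_scores, context_prior):
    """Fuse CNN class scores with a spatial prior (e.g. 'vehicles are
    likely in road regions') by elementwise product, then renormalize
    so the fused scores again sum to one. Illustrative sketch only."""
    fused = {c: s * context_prior.get(c, 1.0) for c, s in cnn_scores.items()}
    total = sum(fused.values())
    return {c: v / total for c, v in fused.items()}

# At long range the raw scores are ambiguous between 'person' and
# 'vehicle'; a road-region prior tips the decision toward 'vehicle'.
raw = {"person": 0.45, "vehicle": 0.40, "animal": 0.15}
prior = {"person": 0.3, "vehicle": 0.6, "animal": 0.1}
fused = apply_spatial_context(raw, prior)
print(max(fused, key=fused.get))  # vehicle
```

    The attraction of this kind of late fusion is that the classifier itself is untouched: context only reweights its outputs, so it helps exactly where the raw signal is weakest.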

    Summary of Research 1994

    Get PDF
    The views expressed in this report are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. This report contains 359 summaries of research projects that were carried out under funding of the Naval Postgraduate School Research Program. A list of recent publications is also included, which consists of conference presentations and publications, books, contributions to books, published journal papers, and technical reports. The research was conducted in the areas of Aeronautics and Astronautics, Computer Science, Electrical and Computer Engineering, Mathematics, Mechanical Engineering, Meteorology, National Security Affairs, Oceanography, Operations Research, Physics, and Systems Management. This also includes research by the Command, Control and Communications (C3) Academic Group, Electronic Warfare Academic Group, Space Systems Academic Group, and the Undersea Warfare Academic Group.