
    MAR-CPS: Measurable Augmented Reality for Prototyping Cyber-Physical Systems

    Cyber-Physical Systems (CPSs) refer to engineering platforms that rely on the integration of physical systems with control, computation, and communication technologies. Autonomous vehicles are instances of CPSs that are rapidly growing with applications in many domains. Because CPSs integrate physical systems with computational sensing, planning, and learning, hardware-in-the-loop experiments are an essential step for transitioning from simulations to real-world experiments. This paper proposes an architecture for rapid prototyping of CPSs that has been developed in the Aerospace Controls Laboratory at the Massachusetts Institute of Technology. This system, referred to as MAR-CPS (Measurable Augmented Reality for Prototyping Cyber-Physical Systems), includes physical vehicles and sensors, a motion capture technology, a projection system, and a communication network. The role of the projection system is to augment a physical laboratory space with 1) autonomous vehicles' beliefs and 2) a simulated mission environment, which in turn is measured by physical sensors on the vehicles. The main focus of this method is on rapid design of planning, perception, and learning algorithms for autonomous single-agent or multi-agent systems. Moreover, the proposed architecture allows researchers to project a simulated counterpart of outdoor environments in a controlled, indoor space, which can be crucial when testing in outdoor environments is disfavored due to safety, regulatory, or monetary concerns. We discuss the issues related to the design and implementation of MAR-CPS and demonstrate its real-time behavior in a variety of problems in autonomy, such as motion planning, multi-robot coordination, and learning spatio-temporal fields.
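
    A minimal sketch of the projection idea described above, assuming a ceiling projector and a motion-capture pose estimate: the vehicle's position belief (mean and covariance) is rendered into an image that the projector displays on the lab floor. The resolution, metres-to-pixels scaling, and example covariance are illustrative placeholders, not MAR-CPS parameters.

```python
import numpy as np
import cv2

FLOOR_PX = (1080, 1920)   # projector image size in (rows, cols) -- assumed
METRES_TO_PX = 100.0      # 1 m of lab floor = 100 projected pixels -- assumed

def draw_belief(canvas, mean_xy, cov_xy, colour=(0, 255, 0)):
    """Draw a 2-sigma covariance ellipse at the vehicle's estimated position."""
    eigvals, eigvecs = np.linalg.eigh(cov_xy)
    axes = tuple(int(2.0 * np.sqrt(v) * METRES_TO_PX) for v in eigvals)
    angle = np.degrees(np.arctan2(eigvecs[1, 0], eigvecs[0, 0]))
    centre = tuple(int(c * METRES_TO_PX) for c in mean_xy)
    cv2.ellipse(canvas, centre, axes, angle, 0, 360, colour, 2)

canvas = np.zeros((*FLOOR_PX, 3), dtype=np.uint8)
# Example belief: vehicle near (5 m, 3 m) with anisotropic position uncertainty.
draw_belief(canvas, mean_xy=(5.0, 3.0),
            cov_xy=np.array([[0.04, 0.01],
                             [0.01, 0.09]]))
cv2.imshow("projected_floor", canvas)  # in the lab, this image would be sent to the projector
cv2.waitKey(0)
```

    In the actual system this drawing step would run in a loop, updating the belief from the vehicle's estimator and the pose from motion capture at each frame.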

    Collaborating with Autonomous Agents

    With the anticipated increase of small unmanned aircraft systems (sUAS) entering the National Airspace System, it is highly likely that vehicle operators will be teaming with fleets of small autonomous vehicles. The small vehicles may consist of sUAS, which weigh 55 pounds or less and typically fly at altitudes of 400 feet and below, and small ground vehicles typically operating in buildings or defined small campuses. Typically, the vehicle operators are not concerned with manual control of the vehicle; instead they are concerned with the overall mission. In order for this vision of high-level mission operators working with fleets of vehicles to come to fruition, many human factors-related challenges must be investigated and solved. First, the interface between the human operator and the autonomous agent must be at a level that the operator needs and the agents can understand. This paper details the natural language human factors efforts that NASA Langley's Autonomy Incubator is focusing on. In particular, these efforts focus on allowing the operator to interact with the system using speech and gestures rather than a mouse and keyboard. With this ability of the system to understand both speech and gestures, operators not familiar with the vehicle dynamics will be able to easily plan, initiate, and change missions using a language familiar to them rather than having to learn and converse in the vehicle's language. This will foster better teaming between the operator and the autonomous agent, which will help lower workload, increase situation awareness, and improve performance of the system as a whole.
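
    A minimal sketch of the mission-level interaction idea, assuming the operator's speech has already been transcribed to text: an utterance is mapped to a high-level task rather than to low-level control inputs. The command vocabulary and task names here are hypothetical and are not NASA Langley's interface.

```python
import re

# Hypothetical mission-level vocabulary: spoken phrase pattern -> task name.
COMMAND_PATTERNS = {
    r"survey (?P<area>\w+)":      "SURVEY_AREA",
    r"return (home|to base)":     "RETURN_TO_BASE",
    r"follow (?P<target>\w+)":    "FOLLOW_TARGET",
    r"hold position|hover here":  "HOLD_POSITION",
}

def parse_utterance(utterance: str) -> dict:
    """Match a transcribed utterance against mission-level command patterns."""
    text = utterance.lower().strip()
    for pattern, task in COMMAND_PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            return {"task": task, **match.groupdict()}
    return {"task": "UNKNOWN", "raw": text}

# The operator speaks in mission terms; the agent receives a structured task.
print(parse_utterance("Survey fieldA and report back"))  # {'task': 'SURVEY_AREA', 'area': 'fielda'}
print(parse_utterance("please return to base"))          # {'task': 'RETURN_TO_BASE'}
```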

    Mixed Reality and Remote Sensing Application of Unmanned Aerial Vehicle in Fire and Smoke Detection

    This paper proposes the development of a system that combines an inertial measurement unit (IMU), a consumer-grade digital camera, and a fire detection algorithm on board a nano Unmanned Aerial Vehicle (UAV) for inspection purposes. Video streams were collected through the monocular camera, and navigation relied on a state-of-the-art indoor/outdoor Simultaneous Localisation and Mapping (SLAM) system. The system was implemented with the Robot Operating System (ROS) and a computer vision algorithm to provide robust, accurate, and unique inter-frame motion estimation. The collected onboard data were communicated to the ground station, where the SLAM system generated a map of the environment. A robust and efficient re-localization was performed to recover from tracking failure, motion blur, and frame loss in the received data. The fire detection algorithm was based on colour, movement attributes, the temporal variation of fire intensity, and its accumulation around a point. A cumulative time-derivative matrix was utilized to analyze frame-by-frame changes and to detect areas with high-frequency luminance flicker (a random characteristic of flames). Colour, surface coarseness, boundary roughness, and skewness features were observed as the quadrotor flew autonomously within the cluttered and congested area. A Mixed Reality system was adopted to visualize and test the proposed system in a physical environment, and the virtual simulation was conducted through the Unity game engine. The results showed that the UAV could successfully detect fire and flame, autonomously fly towards and hover around it, communicate with the ground station, and simultaneously generate a map of the environment. A slight error remained between the real and virtual UAV calibration, owing to the ground-truth data and the complexity of correlating the real and virtual camera coordinate frames.
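
    A minimal sketch of the colour-plus-flicker idea behind the fire detection step, assuming BGR frames from the onboard camera: fire-coloured pixels are masked in HSV space, and a cumulative time-derivative map of luminance flags regions with sustained high-frequency flicker. The thresholds, decay factor, and input file name are assumptions, not the paper's values.

```python
import cv2
import numpy as np

DECAY = 0.9           # weight on the accumulated luminance derivative -- assumed
FLICKER_THRESH = 40   # minimum accumulated change to count as flicker -- assumed

def fire_colour_mask(frame_bgr):
    """Rough mask of fire-coloured (red/orange, bright, saturated) pixels in HSV space."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 80, 150), (35, 255, 255))

def update_flicker_map(acc, prev_gray, gray):
    """Accumulate frame-to-frame luminance changes (cumulative time derivative)."""
    diff = cv2.absdiff(gray, prev_gray).astype(np.float32)
    return DECAY * acc + diff

cap = cv2.VideoCapture("flight_video.mp4")  # placeholder input source
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
acc = np.zeros_like(prev_gray, dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    acc = update_flicker_map(acc, prev_gray, gray)
    prev_gray = gray

    # Candidate fire pixels: fire-like colour AND sustained luminance flicker.
    candidates = (fire_colour_mask(frame) > 0) & (acc > FLICKER_THRESH)
    if candidates.mean() > 0.001:  # fraction of the image flagged -- assumed threshold
        print("possible fire region detected")
```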

    Human-Swarm Robot Interaction with Different Awareness Constraints
