14 research outputs found

    On-Sensor Visual Inference with A Pixel Processor Array


    Advances in Human Robot Interaction for Cloud Robotics applications

    This thesis analyzes different and innovative techniques for Human Robot Interaction, with a focus on interaction with flying robots. The first part gives a preliminary description of state-of-the-art interaction techniques. The first project, Fly4SmartCity, analyzes the interaction between humans (the citizen and the operator) and drones mediated by a cloud robotics platform. This is followed by an application of the sliding autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction, the User’s Flying Organizer project (UFO project), which aims to develop a flying robot able to project information into the environment, exploiting concepts of Spatial Augmented Reality

    Science, technology and the future of small autonomous drones

    We are witnessing the advent of a new era of robots — drones — that can autonomously fly in natural and man-made environments. These robots, often associated with defence applications, could have a major impact on civilian tasks, including transportation, communication, agriculture, disaster mitigation and environment preservation. Autonomous flight in confined spaces presents great scientific and technical challenges owing to the energetic cost of staying airborne and to the perceptual intelligence required to negotiate complex environments. We identify scientific and technological advances that are expected to translate, within appropriate regulatory frameworks, into pervasive use of autonomous drones for civilian applications

    Flying Free: A Research Overview of Deep Learning in Drone Navigation Autonomy

    With the rise of Deep Learning approaches in computer vision applications, significant strides have been made towards vehicular autonomy. Research activity in autonomous drone navigation has increased rapidly in the past five years, and drones are moving fast towards the ultimate goal of near-complete autonomy. However, while much work in the area focuses on specific tasks in drone navigation, the contribution to the overall goal of autonomy is often not assessed, and a comprehensive overview is needed. In this work, a taxonomy of drone navigation autonomy is established by mapping the definitions of vehicular autonomy levels, as defined by the Society of Automotive Engineers, to specific drone tasks in order to create a clear definition of autonomy when applied to drones. A top–down examination of research work in the area is conducted, focusing on drone navigation tasks, in order to understand the extent of research activity in each area. Autonomy levels are cross-checked against the drone navigation tasks addressed in each work to provide a framework for understanding the trajectory of current research. This work serves as a guide to research in drone autonomy with a particular focus on Deep Learning-based solutions, indicating key works and areas of opportunity for development of this area in the future

    A Control Architecture for Unmanned Aerial Vehicles Operating in Human-Robot Team for Service Robotic Tasks

    In this thesis a control architecture for an Unmanned Aerial Vehicle (UAV) is presented. The aim of the thesis is to address the problem of controlling a flying robot operating in a human-robot team at different levels of abstraction. For this purpose, three layers were considered in the design of the architecture, namely the high-level, middle-level and low-level layers. The special case of a UAV operating in service robotics tasks, and in particular in Search & Rescue missions in an alpine scenario, is considered. Different methodologies for each layer are presented, with simulated or real-world experimental validation

    Guided Autonomy for Quadcopter Photography

    Photographing small objects with a quadcopter is non-trivial to perform with many common user interfaces, especially when it requires maneuvering an Unmanned Aerial Vehicle (UAV) to difficult angles in order to shoot from high perspectives. The aim of this research is to employ machine learning to support better user interfaces for quadcopter photography. Human Robot Interaction (HRI) is supported by visual servoing, a specialized vision system for real-time object detection, and control policies acquired through reinforcement learning (RL). Two investigations of guided autonomy were conducted. In the first, the user directed the quadcopter with a sketch-based interface, and periods of user direction were interspersed with periods of autonomous flight. In the second, the user directed the quadcopter by taking a single photo with a handheld mobile device, and the quadcopter autonomously flew to the requested vantage point. This dissertation focuses on the following problems: 1) evaluating different user interface paradigms for dynamic photography in a GPS-denied environment; 2) learning better Convolutional Neural Network (CNN) object detection models to ensure higher precision in detecting human subjects than the currently available state-of-the-art fast models; 3) transferring learning from the Gazebo simulation into the real world; 4) learning robust control policies using deep reinforcement learning to maneuver the quadcopter to multiple shooting positions with minimal human interaction

    Proceedings, MSVSCC 2012

    Proceedings of the 6th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 19, 2012 at VMASC in Suffolk, Virginia

    Proceedings of the 2018 Canadian Society for Mechanical Engineering (CSME) International Congress

    Published proceedings of the 2018 Canadian Society for Mechanical Engineering (CSME) International Congress, hosted by York University, 27-30 May 2018