402 research outputs found

    Drone-Driven Running: Exploring the Opportunities for Drones to Support Running Well-being through a Review of Running and Drone Interaction Technologies

    There is an underexplored interaction space for drones that can be utilised as running interaction technology, distinct from general human-drone interaction, that warrants foregrounding. This paper consolidates the current state of the art in running interaction technology through a review of relevant studies and commercial technologies, organised in a framework positioned along the dimensions of interaction form identified in the sports ITECH framework. Our analysis highlights unmet opportunities in running interaction technology and presents the potential of drones to further support runners. The potential of drones to support various forms of interaction is illustrated using exemplar research from human-drone interaction. Through our findings, we hope to inform and expedite future research and practice in the fields of running interaction technology and runner-drone interaction by supporting researchers in defining and situating their contributions.

    Environment and task modeling of long-term-autonomous service robots

    Utilizing service robots in real-world tasks can significantly improve efficiency, productivity, and safety in various fields such as healthcare, hospitality, and transportation. However, integrating these robots into complex, human-populated environments for continuous use is a significant challenge. A key potential for addressing this challenge lies in long-term modeling capabilities to navigate, understand, and proactively exploit these environments for increased safety and better task performance. For example, robots may use this long-term knowledge of human activity to avoid crowded spaces when navigating or improve their human-centric services. This thesis proposes comprehensive approaches to improve the mapping, localization, and task fulfillment capabilities of service robots by leveraging multi-modal sensor information and (long-term) environment modeling. Learned environmental dynamics are actively exploited to improve the task performance of service robots. As a first contribution, a new long-term-autonomous service robot is presented, designed for both inside and outside buildings. The multi-modal sensor information provided by the robot forms the basis for subsequent methods to model human-centric environments and human activity. It is shown that utilizing multi-modal data for localization and mapping improves long-term robustness and map quality. This especially applies to environments of varying types, i.e., mixed indoor and outdoor or small-scale and large-scale areas. Another essential contribution is a regression model for spatio-temporal prediction of human activity. The model is based on long-term observations of humans by a mobile robot. It is demonstrated that the proposed model can effectively represent the distribution of detected people resulting from moving robots and enables proactive navigation planning. Such model predictions are then used to adapt the robot’s behavior by synthesizing a modular task control model.
A reactive executive system based on behavior trees is introduced, which actively triggers recovery behaviors in the event of faults to improve long-term autonomy. By explicitly addressing failures of robot software components as well as more advanced problems, it is shown that errors can be resolved and potential human helpers can be found efficiently.
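The behavior-tree pattern described above can be sketched minimally as follows. The node names and the fault being recovered (restarting a localisation component, then escalating to a human helper) are hypothetical illustrations, not details taken from the thesis.

```python
# Minimal behavior-tree sketch of a reactive executive that triggers
# recovery behaviors on component faults. All node names are hypothetical.

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node wrapping a function that reports success or failure."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Fallback:
    """Try children in order; succeed on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Fault flag standing in for a monitored software component.
localisation_ok = False

def check_localisation():
    return localisation_ok

def restart_component():
    global localisation_ok
    localisation_ok = True   # pretend the restart fixed the fault
    return localisation_ok

def ask_human_for_help():
    return True              # last-resort recovery: find a human helper

executive = Fallback(
    Action("check", check_localisation),      # nominal path
    Action("restart", restart_component),     # first recovery behavior
    Action("ask_human", ask_human_for_help),  # escalate to a human
)

result = executive.tick()  # fault detected, restart recovers it
```

The fallback (selector) composite gives exactly the reactive behavior the abstract describes: recovery actions are only ticked when the nominal check fails.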

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to manually program decision-making logic for every eventuality. While recent developments in data-driven solutions such as deep learning enable machines to learn effectively from large datasets, the application of these techniques within safety-critical systems such as driverless cars remains scarce. Autonomous vehicles need to be able to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road and highway environments and has discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable. Only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility.
Specifically, the thesis investigates multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive driving policy and therefore make autonomous decisions. To facilitate the autonomous decisions necessary for safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are specific datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research activity. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It utilizes an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free-space detection algorithm enables an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which require large datasets to generalize to unfamiliar surroundings. The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence. This is imperative within intelligent mobility, where an autonomous vehicle should be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging sensors.
The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding 3-dimensional data is converted to a Fisher Vector representation before being classified by a deep Convolutional Neural Network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%. When compared to an alternative point-cloud classifier, PointNet [1], [2], the proposed framework outperformed it on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, constitutes the major contribution of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
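The camera-guided segmentation step described above can be sketched as follows, assuming a standard pinhole projection: a human bounding box detected in the image is used to crop the corresponding region of interest out of the LiDAR point cloud. The intrinsic matrix, bounding box, and toy point cloud are illustrative values, not the thesis's actual calibration or data.

```python
import numpy as np

# Illustrative pinhole intrinsics (fx, fy, cx, cy) - not from the thesis.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def segment_human_points(points_xyz, bbox):
    """Keep 3D points whose image projection lies inside bbox=(u0,v0,u1,v1)."""
    pts = points_xyz[points_xyz[:, 2] > 0]   # only points in front of camera
    uvw = (K @ pts.T).T                      # project with the pinhole model
    uv = uvw[:, :2] / uvw[:, 2:3]            # normalise by depth
    u0, v0, u1, v1 = bbox
    mask = (uv[:, 0] >= u0) & (uv[:, 0] <= u1) & \
           (uv[:, 1] >= v0) & (uv[:, 1] <= v1)
    return pts[mask]

# Toy cloud: one point inside the box, one outside, one behind the camera.
cloud = np.array([[0.0, 0.0,  2.0],   # projects to image centre (320, 240)
                  [5.0, 0.0,  2.0],   # projects far outside the box
                  [0.0, 0.0, -1.0]])  # behind the camera, discarded
roi = segment_human_points(cloud, bbox=(300, 220, 340, 260))
```

The cropped `roi` points would then feed the Fisher Vector encoding and CNN classification stages the abstract describes.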

    A prospective geoinformatic approach to indoor navigation for Unmanned Air System (UAS) by use of quick response (QR) codes

    Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies. This research study explores a navigation system for autonomous indoor flight of Unmanned Aircraft Systems (UAS) that combines dead reckoning with an Inertial Navigation System (INS) and the use of low-cost artificial landmarks, Quick Response (QR) codes, placed on the floor, and allows for fully autonomous flight with all computation done onboard the UAS on embedded hardware. We provide a detailed description of all system components and their application. Additionally, we show how the system is integrated with a commercial UAS and provide results of experimental autonomous flight tests. To our knowledge, this system is one of the first to allow complete closed-loop control and goal-driven navigation of a UAS in an indoor setting without requiring connection to any external infrastructure.
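The fusion idea can be sketched as follows: INS dead reckoning drifts over time, and each floor QR code observed supplies an absolute position fix. The QR payload format ("x,y" in metres) and the complementary blend gain are assumptions for illustration, not the system's actual design.

```python
# Sketch of QR-landmark correction of a drifting dead-reckoned estimate.
# Payload format and gain are illustrative assumptions.

def decode_qr_payload(payload):
    """A floor QR code encodes its known map position, e.g. '3.50,1.25'."""
    x, y = (float(v) for v in payload.split(","))
    return x, y

def fuse(ins_estimate, qr_fix, gain=0.8):
    """Complementary update: pull the drifting INS estimate toward the fix."""
    return tuple(e + gain * (f - e) for e, f in zip(ins_estimate, qr_fix))

# Dead reckoning says (3.9, 1.0); the observed QR code is at (3.5, 1.25).
estimate = fuse((3.9, 1.0), decode_qr_payload("3.50,1.25"))
```

Between QR observations the estimate evolves purely by dead reckoning; each landmark sighting bounds the accumulated drift.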

    Design and Implementation of the Kinect Controlled Electro-Mechanical Skeleton (K.C.E.M.S)

    Mimicking real-time human motion with a low-cost solution has been an extremely difficult task in the past, but the release of the Microsoft Kinect motion capture system has simplified this problem. This thesis discusses the feasibility and design of a simple robotic skeleton that utilizes the Kinect to mimic human movements in near real-time. The goal of this project is to construct a 1/3-scale model of a robotically enhanced skeleton and demonstrate the abilities of the Kinect as a tool for mimicking human movement. The resulting robot was able to mimic many human movements but was mechanically limited in the shoulders. Its movements were slower than real-time due to the controller's inability to process motions in real time. This research was presented and published at the 2012 SouthEastCon. Alongside it, research papers on the formula hybrid accumulator design and the 2010 autonomous surface vehicle were also presented and published.
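The core mimicry computation, recovering a joint angle from tracked Kinect skeleton positions so a servo can reproduce it, can be sketched as follows; the joint coordinates are illustrative values, not Kinect output.

```python
import math

# Sketch: convert three skeleton joint positions (shoulder, elbow, wrist)
# into the elbow bend angle a servo on the skeleton would reproduce.

def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow, in degrees, from three 3D joint positions."""
    a = [s - e for s, e in zip(shoulder, elbow)]   # elbow -> shoulder
    b = [w - e for w, e in zip(wrist, elbow)]      # elbow -> wrist
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(dot / (na * nb)))

# A right-angled arm: upper arm along +y, forearm along +x.
angle = elbow_angle((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```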

    Enhancing 3D Autonomous Navigation Through Obstacle Fields: Homogeneous Localisation and Mapping, with Obstacle-Aware Trajectory Optimisation

    Small flying robots have numerous potential applications, from quadrotors for search and rescue, infrastructure inspection, and package delivery to free-flying satellites for assistance activities inside a space station. To enable these applications, a key challenge is autonomous navigation in 3D, near obstacles, on a power-, mass-, and computation-constrained platform. This challenge requires a robot to perform localisation, mapping, dynamics-aware trajectory planning, and control. The current state of the art uses separate algorithms for each component. Here, the aim is a more homogeneous approach in the search for improved efficiencies and capabilities. First, an algorithm is described that performs Simultaneous Localisation And Mapping (SLAM) with a physical 3D map representation that can also be used to represent obstacles for trajectory planning: Non-Uniform Rational B-Spline (NURBS) surfaces. Termed NURBSLAM, this algorithm is shown to combine the typically separate tasks of localisation and obstacle mapping. Second, a trajectory optimisation algorithm is presented that produces dynamically optimal trajectories with direct consideration of obstacles, providing a middle ground between path planners and trajectory smoothers. Called the Admissible Subspace TRajectory Optimiser (ASTRO), the algorithm can produce trajectories that are easier to track than the state of the art for flight near obstacles, as shown in flight tests with quadrotors. For quadrotors to track trajectories, a critical component is the differential flatness transformation that links position and attitude controllers. Existing singularities in this transformation are analysed, solutions are proposed, and these are then demonstrated in flight tests. Finally, a combined system of NURBSLAM and ASTRO is brought together and tested against the state of the art in a novel simulation environment, proving the concept that a single 3D representation can be used for localisation, mapping, and planning.
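The differential flatness step referred to above, and one of its singularities, can be sketched as follows. This is the standard quadrotor flatness computation (thrust direction recovered from desired acceleration, fixing attitude up to yaw), not the thesis's specific solution; values are illustrative.

```python
import numpy as np

# From a desired acceleration, recover the quadrotor's unit thrust
# direction (body z-axis). The transformation is singular in free fall,
# when desired acceleration cancels gravity and the thrust vector vanishes.

G = np.array([0.0, 0.0, -9.81])  # gravity, m/s^2

def body_z_from_accel(accel_des, eps=1e-6):
    """Unit thrust direction for a desired acceleration; None at singularity."""
    thrust = accel_des - G          # required specific thrust
    n = np.linalg.norm(thrust)
    if n < eps:                     # free-fall singularity
        return None
    return thrust / n

hover = body_z_from_accel(np.zeros(3))   # hovering: thrust points straight up
singular = body_z_from_accel(G.copy())   # free fall: direction undefined
```

The `None` branch marks the kind of singularity the abstract says is analysed and resolved for trajectory tracking.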