57 research outputs found

    Body swarm interface (BOSI): controlling robotic swarms using human bio-signals

    Traditionally, robots are controlled using joysticks, keyboards, mice, and similar human-computer interface (HCI) devices. Although this approach is effective and practical in some cases, it is restricted to able-bodied individuals and requires the user to master the device before use. Controlling multiple robots simultaneously with such traditional devices, as in Human-Swarm Interfaces (HSI), becomes complicated and non-intuitive. This work presents a novel concept of using human bio-signals to control swarms of robots. The concept offers two major advantages: first, it gives amputees and people with certain disabilities the ability to control robotic swarms, which has previously not been possible; second, it gives the user a more intuitive interface for controlling swarms of robots through gestures, thoughts, and eye movement. We measure different bio-signals from the human body, including electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), using off-the-shelf products. After minimal signal processing, we decode the intended control action using machine learning techniques such as Hidden Markov Models (HMM) and K-Nearest Neighbors (K-NN). We employ formation controllers based on distance and displacement to control the shape and motion of the robotic swarm. Thought and gesture classifications are compared against ground truth, and the resulting pipelines are evaluated in both simulations and hardware experiments with swarms of ground robots and aerial vehicles.
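
    As an illustration of the gesture-decoding step described above, here is a minimal sketch of a K-NN pipeline over windowed EMG. The feature set, window length, and channel count are illustrative assumptions, not details taken from the work.

```python
# Minimal sketch: classifying windowed EMG into gesture labels with K-NN.
# Feature choice, window length, and the 8-channel input are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def emg_features(window: np.ndarray) -> np.ndarray:
    """Common time-domain EMG features per channel: mean absolute value,
    waveform length, and zero-crossing count. window: (n_samples, n_channels)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(np.diff(np.signbit(window).astype(int), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc])

rng = np.random.default_rng(0)
# Stand-in data: 200 windows of 8-channel EMG, 4 gesture classes.
windows = rng.standard_normal((200, 100, 8))
labels = rng.integers(0, 4, size=200)

X = np.stack([emg_features(w) for w in windows])
clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print(clf.predict(X[:3]))  # decoded gesture ids for the first three windows
```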

    Autonomous Quadrotor Control Using Convolutional Neural Networks

    Quadrotors are among the fastest-growing technologies today, entering many fields of life and becoming a powerful tool that serves humanity and helps improve quality of life. It is crucial to explore all possible ways of controlling quadrotors, from classical methodologies to cutting-edge modern technologies, and in most cases a quadrotor carries a combination of several technologies on board. The attitude-angle and altitude control used in this thesis are based mainly on PID control, modeled and simulated in MATLAB and Simulink. To control the quadrotor's behavior in two different tasks, obstacle avoidance and command by hand gesture, the use of Convolutional Neural Networks (CNN) was proposed, since this technology has shown very impressive results in image recognition in recent years. A considerable number of training images (datasets) were created for the two tasks. The CNN was trained and tested on these datasets, and real-time flight experiments were performed using a ground station, an Arduino microcontroller, and an interface circuit connected to the quadrotor. The experiments show excellent error rates for both tasks, and the system's scalability to new classes and other complex tasks points toward more autonomous flight and more intelligent quadrotor behavior.
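
    For reference, a PID loop of the kind this thesis builds on can be sketched in a few lines; the gains, time step, and altitude-hold framing below are illustrative placeholders rather than the thesis's actual values.

```python
# Minimal sketch of a PID loop, here framed as altitude hold.
# Gains and time step are illustrative, not tuned values from the thesis.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

altitude_pid = PID(kp=2.0, ki=0.1, kd=0.8, dt=0.01)
thrust_correction = altitude_pid.step(setpoint=1.5, measurement=1.2)
print(thrust_correction)
```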

    Accelerating Trajectory Generation for Quadrotors Using Transformers

    In this work, we address the computation time of trajectory generation for quadrotors. Most trajectory generation methods for waypoint navigation of quadrotors, for example minimum-snap/jerk and minimum-time, are structured as bi-level optimizations: the first level allocates time across all input waypoints, and the second minimizes the snap or jerk of the trajectory under that time allocation. Such an optimization can be computationally expensive to solve. In our approach, we treat trajectory generation as a supervised learning problem between a sequential set of inputs and outputs. We adapt a transformer model to learn the optimal time allocations for a given set of input waypoints, turning trajectory generation into a single-step optimization. We demonstrate the performance of the transformer model by training it to predict the time allocations for a minimum-snap trajectory generator. The trained transformer predicts accurate time allocations with fewer data samples and a smaller model size than a feedforward network (FFN), demonstrating that it can model the sequential nature of the waypoint navigation problem.
    Comment: Accepted at L4DC 202
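
    To make the idea concrete, here is a sketch of such a model in PyTorch: a transformer encoder mapping a waypoint sequence to positive time allocations in one forward pass. The layer sizes, 3-D waypoint encoding, and Softplus output head are assumptions; the paper's actual architecture may differ.

```python
# Sketch: a transformer encoder predicts per-waypoint time allocations,
# replacing the outer level of the bi-level optimization with one forward pass.
import torch
import torch.nn as nn

class TimeAllocator(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(3, d_model)        # (x, y, z) waypoint -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)         # one scalar per waypoint token

    def forward(self, waypoints):                 # waypoints: (batch, n_wp, 3)
        h = self.encoder(self.embed(waypoints))
        # Softplus keeps predicted durations positive.
        return nn.functional.softplus(self.head(h)).squeeze(-1)

model = TimeAllocator()
times = model(torch.randn(8, 6, 3))              # 8 paths, 6 waypoints each
# Supervision target: time allocations from a minimum-snap bi-level solver.
```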

    Brain Computer Interfaces for the Control of Robotic Swarms

    A robotic swarm can be defined as a large group of inexpensive, interchangeable robots with limited sensing and/or actuating capabilities that cooperate (explicitly or implicitly) based on local communication and sensing in order to complete a mission. Its inherent redundancy provides flexibility and robustness to failures and environmental disturbances, which helps guarantee the proper completion of the required task. At the same time, human intuition and cognition can prove very useful in extreme situations where a fast and reliable solution is needed. This idea led to the creation of the field of Human-Swarm Interfaces (HSI), which attempts to incorporate the human element into the control of robotic swarms for increased robustness and reliability. The aim of the present work is to extend the current state of the art in HSI by applying ideas and principles from the field of Brain-Computer Interfaces (BCI), which has proven very useful for people with motor disabilities. First, a preliminary investigation of the connection between brain activity and the observation of swarm collective behaviors is conducted. After showing that such a connection may exist, a hybrid BCI system is presented for the control of a swarm of quadrotors. The system combines motor imagery with input from a game controller, and its feasibility is proven through an extensive experimental process. Finally, speech imagery is proposed as an alternative mental task for BCI applications, through a series of rigorous experiments and appropriate data analysis. This work suggests that the integration of BCI principles in HSI applications can be successful and can potentially lead to systems that are more intuitive for users than the current state of the art. At the same time, it motivates further research in the area and lays the stepping stones for the potential development of the field of Brain-Swarm Interfaces (BSI).
    Masters Thesis, Mechanical Engineering, 201
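
    A hybrid system of this kind needs an arbitration rule between the BCI output and the manual input. The sketch below is a hypothetical illustration of one such rule; the blending logic, confidence threshold, and command semantics are assumptions, not the system described in the thesis.

```python
# Hypothetical arbiter: use the motor-imagery classifier when it is
# confident, otherwise fall back to the game-controller axis.
def hybrid_command(mi_prob_left: float, joystick_x: float,
                   confidence_threshold: float = 0.7) -> float:
    """Return a lateral command in [-1, 1] for the swarm.

    mi_prob_left: classifier probability of 'imagine left-hand movement'.
    joystick_x:   controller axis in [-1, 1], used when the BCI is uncertain.
    """
    if mi_prob_left >= confidence_threshold:
        return -1.0                # confident 'left' imagery -> steer left
    if (1.0 - mi_prob_left) >= confidence_threshold:
        return 1.0                 # confident 'right' imagery -> steer right
    return joystick_x              # uncertain BCI -> manual input

print(hybrid_command(0.85, 0.2))   # -1.0: BCI dominates
print(hybrid_command(0.55, 0.2))   # 0.2: joystick fallback
```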

    Hand-worn Haptic Interface for Drone Teleoperation

    Drone teleoperation is usually accomplished using remote radio controllers, devices that can be hard to master for inexperienced users. Moreover, the limited amount of information fed back to the user about the robot's state, often limited to vision, can be a bottleneck in several conditions. In this work, we present a wearable interface for drone teleoperation and its evaluation through a user study. The two main features of the proposed system are a data glove that allows the user to control the drone trajectory by hand motion and a haptic system that augments the user's awareness of the environment surrounding the robot. The interface can be employed for operating robotic systems in line of sight (LoS) by inexperienced operators, allowing them to safely perform tasks common in inspection and search-and-rescue missions, such as approaching walls and crossing narrow passages under limited visibility. In addition to designing and implementing the wearable interface, we performed a systematic study to assess the effectiveness of the system through three user studies (n = 36), evaluating the users' learning path and their ability to perform tasks with limited visibility. We validated our ideas in both a simulated and a real-world environment. Our results demonstrate that the proposed system can improve teleoperation performance in different cases compared to standard remote controllers, making it a viable alternative to standard human-robot interfaces.
    Comment: Accepted at the IEEE International Conference on Robotics and Automation (ICRA) 202
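
    Concretely, such an interface needs two mappings: hand motion to drone velocity, and obstacle distance to haptic intensity. The sketch below illustrates both under assumed gains, deadband, and sensing range; it is not the paper's implementation.

```python
# Illustrative mappings for a glove-based teleoperation interface.
# Gains, deadband, and sensing range are assumptions, not the paper's values.
import math

def hand_to_velocity(pitch_rad: float, roll_rad: float, gain: float = 1.5,
                     deadband_rad: float = 0.1) -> tuple[float, float]:
    """Map hand pitch/roll (e.g. from a glove IMU) to forward/lateral speed."""
    def shape(angle):
        if abs(angle) < deadband_rad:   # ignore small involuntary motion
            return 0.0
        return gain * angle
    return shape(pitch_rad), shape(roll_rad)

def distance_to_vibration(distance_m: float, max_range_m: float = 2.0) -> float:
    """Closer obstacles produce stronger vibration, saturating at 1.0."""
    if distance_m >= max_range_m:
        return 0.0
    return min(1.0, 1.0 - distance_m / max_range_m)

vx, vy = hand_to_velocity(math.radians(15), math.radians(-5))
print(vx, vy, distance_to_vibration(0.5))
```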

    Implementation of a Natural User Interface to Command a Drone

    In this work, we propose a Natural User Interface (NUI) based on body gestures, using the open-source library OpenPose, as a more dynamic and intuitive way to control a drone. The implementation uses the Robot Operating System (ROS) to control and manage the different components of the project. Wrapped inside ROS, OpenPose (OP) processes the video obtained in real time by a commercial drone, yielding the user's pose. Finally, the keypoints from OpenPose are translated, using geometric constraints, into high-level commands for the drone. Real-time experiments validate the full strategy.
    Comment: 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 2020
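
    The final translation step can be illustrated with a simple geometric rule over the OpenPose keypoints. The sketch below uses BODY_25 keypoint indices; the specific gesture rule and command names are hypothetical, not the paper's exact mapping.

```python
# Hypothetical keypoints-to-command rule over OpenPose BODY_25 output.
# Indices: 2 = RShoulder, 4 = RWrist, 5 = LShoulder, 7 = LWrist.
R_WRIST, R_SHOULDER, L_WRIST, L_SHOULDER = 4, 2, 7, 5

def command_from_pose(keypoints):
    """keypoints: list of 25 (x, y, confidence); image y grows downward."""
    def raised(wrist, shoulder, min_conf=0.3):
        _, wy, wc = keypoints[wrist]
        _, sy, sc = keypoints[shoulder]
        return wc > min_conf and sc > min_conf and wy < sy  # wrist above shoulder

    right_up = raised(R_WRIST, R_SHOULDER)
    left_up = raised(L_WRIST, L_SHOULDER)
    if right_up and left_up:
        return "land"
    if right_up:
        return "ascend"
    if left_up:
        return "descend"
    return "hover"

kp = [(0.0, 0.0, 0.0)] * 25
kp[R_SHOULDER], kp[R_WRIST] = (300.0, 200.0, 0.9), (310.0, 120.0, 0.9)
print(command_from_pose(kp))   # right wrist above shoulder -> "ascend"
```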

    Deep Learning for Processing Electromyographic Signals: a Taxonomy-based Survey

    Deep Learning (DL) has recently been employed to build smart systems that perform incredibly well in a wide range of tasks, such as image recognition, machine translation, and self-driving cars. Considerable improvements in computing hardware and the increasing need for big-data analytics have boosted DL work in several fields, and physiological signal processing has strongly benefited from it in recent years. In particular, the number of studies concerning the processing of electromyographic (EMG) signals with DL methods is increasing exponentially, a phenomenon mostly explained by the current limitations of myoelectrically controlled prostheses as well as the recent release of large EMG recording datasets, e.g., Ninapro. This growing trend inspired us to seek out and review recent papers on processing EMG signals with DL methods. Using the Scopus database, a systematic literature search of papers published between January 2014 and March 2019 was carried out, and sixty-five papers were chosen for review after full-text analysis. The bibliometric research revealed that the reviewed papers can be grouped into four main categories according to the final application of the EMG signal analysis: hand gesture classification, speech and emotion classification, sleep stage classification, and other applications. The review process also confirmed the increasing trend in published papers: the number of papers published in 2018 is four times the number published the year before. As expected, most of the analyzed papers (≈60%) concern the identification of hand gestures, supporting our hypothesis. Finally, it is worth noting that the convolutional neural network (CNN) is the most used topology among the DL architectures involved; approximately sixty percent of the reviewed articles employ a CNN.
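
    To ground the survey's main finding, here is a minimal sketch of the kind of 1-D CNN typically applied to windowed EMG for hand-gesture classification; the channel count, window length, and layer sizes are illustrative (loosely Ninapro-style), not drawn from any specific reviewed paper.

```python
# Minimal 1-D CNN sketch for classifying fixed-length EMG windows.
# Sizes are illustrative: 12 channels, 200-sample windows, 8 gestures.
import torch
import torch.nn as nn

class EMGConvNet(nn.Module):
    def __init__(self, n_channels=12, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),    # length-independent pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):               # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

logits = EMGConvNet()(torch.randn(4, 12, 200))   # 4 windows -> (4, 8) logits
print(logits.shape)
```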