1,546 research outputs found

    The multimodal edge of human aerobotic interaction

    No full text
    This paper presents the idea of multimodal human aerobotic interaction. An overview of the aerobotic system and its applications is given. The joystick-based controller interface and its limitations are discussed. Two techniques are suggested as emerging alternatives to the joystick-based controller interface used in human aerobotic interaction. The first is a multimodal combination of speech, gaze, gesture, and other non-verbal cues already used in regular human-human interaction. The second is telepathic interaction via brain-computer interfaces. The potential limitations of these alternatives are highlighted, and considerations for further work are presented.

    Body swarm interface (BOSI): controlling robotic swarms using human bio-signals

    Get PDF
    Traditionally, robots are controlled using devices such as joysticks, keyboards, mice, and other similar human-computer interface (HCI) devices. Although this approach is effective and practical in some cases, it is restricted to healthy individuals without disabilities, and it requires the user to master the device before use. It also becomes complicated and non-intuitive when multiple robots must be controlled simultaneously with these traditional devices, as in the case of Human Swarm Interfaces (HSI). This work presents a novel concept of using human bio-signals to control swarms of robots. This concept has two major advantages: first, it gives amputees and people with certain disabilities the ability to control robotic swarms, which was previously not possible; second, it gives the user a more intuitive interface for controlling swarms of robots through gestures, thoughts, and eye movement. We measure different bio-signals from the human body, including Electroencephalography (EEG), Electromyography (EMG), and Electrooculography (EOG), using off-the-shelf products. After minimal signal processing, we decode the intended control action using machine learning techniques such as Hidden Markov Models (HMM) and K-Nearest Neighbors (K-NN). We employ formation controllers based on distance and displacement to control the shape and motion of the robotic swarm. Thought and gesture classifications are compared against ground truth, and the resulting pipelines are evaluated in both simulations and hardware experiments with swarms of ground robots and aerial vehicles.
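    As a rough illustration of the decoding step this abstract describes, the sketch below classifies one windowed EMG sample with a plain K-Nearest Neighbors vote and maps the predicted gesture to a swarm-level command. It is a minimal sketch, assuming hypothetical features, gesture labels, and command names; the authors' actual features, training data, and controller interface are not specified here.

        import numpy as np

        # Hypothetical gesture labels -> swarm commands (illustrative only).
        GESTURE_TO_COMMAND = {0: "hold_formation", 1: "expand", 2: "contract", 3: "translate_forward"}

        def rms_features(window):
            """Root-mean-square energy per EMG channel; window is (samples, channels)."""
            return np.sqrt(np.mean(window ** 2, axis=0))

        def knn_predict(train_X, train_y, x, k=5):
            """Classify one feature vector by majority vote of its k nearest neighbors."""
            dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to all training samples
            nearest = np.argsort(dists)[:k]               # indices of the k closest samples
            labels, counts = np.unique(train_y[nearest], return_counts=True)
            return int(labels[np.argmax(counts)])

        # Toy usage: random stand-ins for a recorded training set and one live window.
        rng = np.random.default_rng(0)
        train_X = rng.normal(size=(200, 8))               # 200 labeled RMS feature vectors, 8 channels
        train_y = rng.integers(0, 4, size=200)
        live_window = rng.normal(size=(256, 8))           # one 256-sample, 8-channel EMG window

        gesture = knn_predict(train_X, train_y, rms_features(live_window))
        print("command:", GESTURE_TO_COMMAND[gesture])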

    A Brief Exposition on Brain-Computer Interface

    Get PDF
    A Brain-Computer Interface (BCI) is a technology that records brain signals and translates them into useful commands to operate devices such as a drone or a wheelchair. Drones are used in various applications, such as aerial operations where a pilot's presence is impossible. BCIs can also be used by patients suffering from brain diseases who lose control of their bodies and are unable to move to satisfy their basic needs. By taking advantage of BCI and drone technology, algorithms for a Mind-Controlled Unmanned Aerial System can be developed. This paper covers the classification of BCIs and UAVs, BCI methodologies, the BCI framework, neuro-imaging methods, BCI headset options, BCI platforms, electrode types and their placement, and the results of a feature extraction technique (FFT), which achieved 72.5% accuracy.
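    FFT-based feature extraction of the kind this abstract reports usually reduces an EEG window to its power in the standard frequency bands. Below is a minimal sketch of that idea, assuming a single-channel window and the conventional band edges; the paper's exact preprocessing, channel selection, and classifier are not given in the abstract.

        import numpy as np

        # Conventional EEG band edges in Hz (delta, theta, alpha, beta).
        BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

        def band_powers(signal, fs):
            """Return the mean FFT power per EEG band for one channel sampled at fs Hz."""
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            power = np.abs(np.fft.rfft(signal)) ** 2      # periodogram-style power spectrum
            return {name: float(power[(freqs >= lo) & (freqs < hi)].mean())
                    for name, (lo, hi) in BANDS.items()}

        # Toy usage: a synthetic 2-second window dominated by a 10 Hz (alpha) rhythm.
        fs = 256.0
        t = np.arange(0, 2, 1 / fs)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
        features = band_powers(eeg, fs)
        print(max(features, key=features.get))            # expected: "alpha"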

    SANTO: Social Aerial NavigaTion in Outdoors

    Get PDF
    In recent years, advances in remote connectivity, miniaturization of electronic components, and computing power have led to the integration of these technologies into everyday devices such as cars and aerial vehicles. Among these, a consumer-grade option that has gained popularity is the drone or unmanned aerial vehicle, namely the quadrotor. Although until recently they have not been used for commercial applications, their inherent potential for tasks where small and intelligent devices are needed is huge. However, while the integrated hardware has advanced exponentially, the software used in these applications has not yet been exploited to the same degree. Recently, this shift has become visible in the improvement of common tasks in the field of robotics, such as object tracking and autonomous navigation. These challenges become harder still when taking into account the dynamic nature of the real world, where knowledge about the current environment is constantly changing. Such settings are considered in the improvement of human-robot interaction, where the potential use of these devices is clear and algorithms are being developed to improve the situation. Using the latest advances in artificial intelligence, human brain behavior is simulated by so-called neural networks, so that the computing system performs as similarly as possible to human behavior. To this end, the system learns from its errors, which, akin to human learning, requires a considerable set of prior experiences for the algorithm to retain the desired behavior. Applying these technologies to human-robot interaction does narrow the gap. Even so, from a bird's-eye view, a noticeable share of the time spent applying these technologies goes into the curation of a high-quality dataset, to ensure that the learning process is optimal and no wrong actions are retained. It is therefore essential to have a development platform in place that enforces these principles throughout the whole process of creating and optimizing the algorithm. In this work, multiple existing obstacles found in pipelines of this computational scale are exposed, and each is approached in an independent and simple manner, so that the proposed solutions can be leveraged by the maximum number of workflows. On one side, this project concentrates on reducing the number of bugs introduced by flawed data, to help researchers focus on developing more sophisticated models. On the other side, the shortage of integrated development systems for this kind of pipeline is addressed, with special care for those using simulated or controlled environments, with the goal of easing the continuous iteration of these pipelines.

    Thanks to the increasing popularity of drones, the research and development of autonomous capabilities has become easier. However, due to the challenge of integrating multiple technologies, the available software stack for engaging this task is restricted. In this thesis, we highlight the divergences among unmanned-aerial-vehicle simulators and propose a platform that allows faster and more in-depth prototyping of machine learning algorithms for these drones.
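    As a small illustration of the data-curation concern this abstract raises, the sketch below filters image/label training pairs, dropping samples that are corrupt or inconsistent before they can skew training. The checks, class count, and sample layout are assumptions for illustration; the thesis's actual platform and validation rules are not described in the abstract.

        import numpy as np

        NUM_CLASSES = 4                                   # assumed number of classes (illustrative)

        def validate_samples(samples):
            """Keep only (image, label) pairs that pass basic sanity checks.

            Hypothetical rules: the image must be a non-empty, finite array and the
            label an integer inside the known class range. Real pipelines add more.
            """
            clean = []
            for image, label in samples:
                if image.size == 0 or not np.isfinite(image).all():
                    continue                              # corrupt or truncated sample
                if not 0 <= label < NUM_CLASSES:
                    continue                              # label outside the known class set
                clean.append((image, label))
            return clean

        # Toy usage: one good sample, one with NaNs, one with an out-of-range label.
        good = (np.ones((8, 8)), 2)
        nans = (np.full((8, 8), np.nan), 1)
        badl = (np.ones((8, 8)), 9)
        print(len(validate_samples([good, nans, badl])))  # -> 1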

    Towards Intuitive HMI for UAV Control

    Full text link
    In the last decade, UAVs have become a widely used technology. As they are used by both professionals and amateurs, there is a need to explore different control modalities to make control intuitive and easy, especially for new users. In this work, we compared the most widely used joystick control with a custom human-pose control. We used human pose estimation and arm movements to send UAV commands in the same way that operators use their fingers to send joystick commands. Experiments were conducted in a simulation environment with first-person visual feedback. Participants had to traverse the same maze with joystick and human-pose control. Participants' subjective experience was assessed using the raw NASA Task Load Index.
    Comment: 2022 International Conference on Smart Systems and Technologies (SST)
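    To make the pose-control idea above concrete, here is a minimal sketch of how arm keypoints from a pose estimator might be mapped onto the same axes a joystick provides. The keypoint layout, angle range, and axis convention are assumptions for illustration; the paper's actual mapping is not specified in the abstract.

        import numpy as np

        def arm_angle(shoulder, wrist):
            """Arm elevation in radians: 0 = horizontal, +pi/2 = straight up (image y grows downward)."""
            dx = wrist[0] - shoulder[0]
            dy = shoulder[1] - wrist[1]
            return float(np.arctan2(dy, abs(dx) + 1e-6))

        def pose_to_command(left_sh, left_wr, right_sh, right_wr, max_angle=np.pi / 3):
            """Map left/right arm elevation onto joystick-like axes in [-1, 1].

            Assumed convention: right arm controls pitch (forward/back), left arm controls roll.
            """
            pitch = float(np.clip(arm_angle(right_sh, right_wr) / max_angle, -1.0, 1.0))
            roll = float(np.clip(arm_angle(left_sh, left_wr) / max_angle, -1.0, 1.0))
            return {"pitch": pitch, "roll": roll}

        # Toy usage with 2D pixel keypoints (x, y): right arm raised about 45 degrees.
        cmd = pose_to_command(
            left_sh=np.array([200.0, 300.0]), left_wr=np.array([120.0, 300.0]),
            right_sh=np.array([400.0, 300.0]), right_wr=np.array([470.0, 230.0]),
        )
        print(cmd)                                        # pitch > 0 (forward), roll ~ 0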