680 research outputs found

    Fine grained pointing recognition for natural drone guidance

    Get PDF
    Human action recognition systems are typically focused on identifying different actions rather than fine-grained variations of the same action. This work explores strategies to identify different pointing directions in order to build a natural interaction system for guiding autonomous systems such as drones. Commanding a drone with hand-held panels or tablets is common practice, but intuitive user-drone interfaces may have significant benefits. The system proposed in this work only requires the user to provide occasional high-level navigation commands by pointing towards the desired motion direction. Due to the lack of data in these settings, we present a new benchmarking video dataset to validate our framework and facilitate future research in the area. Our results show good accuracy for pointing direction recognition, while running at interactive rates and exhibiting robustness to variability in user appearance, viewpoint, camera distance and scenery.
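
    As an illustration of the kind of fine-grained recognition involved, here is a minimal sketch (not the paper's method) that quantizes a pointing direction from two 2D pose keypoints; the keypoint coordinates and the number of direction bins are assumptions chosen for the example:

```python
import numpy as np

def pointing_direction(shoulder_xy, wrist_xy, n_bins=8):
    """Quantize the shoulder-to-wrist vector into one of n_bins directions."""
    v = np.asarray(wrist_xy, float) - np.asarray(shoulder_xy, float)
    angle = np.arctan2(v[1], v[0])          # radians in (-pi, pi]
    bin_width = 2 * np.pi / n_bins
    return int(np.round(angle / bin_width)) % n_bins

# Example: arm extended to the upper right in image coordinates.
print(pointing_direction((0.4, 0.6), (0.7, 0.3)))
```

    A real system would feed keypoints from a pose estimator and smooth the predicted bin over several frames before issuing a command.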

    DRONE CONTROL USING BCI TECHNOLOGY

    Get PDF
    A drone, also known as an unmanned aerial vehicle (UAV), is a type of aircraft that is operated remotely or autonomously. The utilization of drones has increased because they can now perform tasks that would be too complicated for human beings. Electroencephalograms (EEG) are generated by the electrical activity of the brain and can be measured by placing electrodes on the scalp. Controlling drones using EEG signals refers to using EEG technology to determine the user's intention and translate it into commands sent to the drone. For this project, we developed and tested a system to control a drone using a headband that detects EEG signals from the drone's pilot when he/she performs facial gestures. A commercial EEG headband is used to record the EEG signals generated when three facial gestures are performed: raising the eyebrows, a hard blink, and looking left. The headband has three electrodes in the form of small metal disks that allow three frontal cortex measurements. For this experiment, recordings are taken from three different people, and the EEG signals are analyzed and recorded using the OpenBCI GUI software. The recorded data is transferred to MATLAB and put through a feature extraction process in order to design an Artificial Neural Network (ANN). The ANN is then trained to classify the facial gestures selected for the experiment; once training is complete, the network is converted into a MATLAB function that sends commands to a DJI Tello drone based on the classification it performs.
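
    A minimal sketch of the classify-then-command pipeline, using scikit-learn in place of the MATLAB ANN described above; the feature dimensionality, the gesture-to-command mapping, and the random stand-in data are assumptions, while the drone address and text commands follow the publicly documented Tello SDK (UDP to 192.168.10.1:8889):

```python
import socket
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical mapping of gesture class -> Tello SDK text command.
GESTURE_TO_COMMAND = {
    "raise_eyebrows": "takeoff",
    "hard_blink": "land",
    "look_left": "left 30",
}

# Placeholder training data: rows are EEG feature vectors, labels are gestures.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = rng.choice(list(GESTURE_TO_COMMAND), size=300)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

def send_command(features, addr=("192.168.10.1", 8889)):
    """Classify one EEG feature vector and send the mapped command."""
    gesture = clf.predict(features.reshape(1, -1))[0]
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"command", addr)           # enter Tello SDK mode
    sock.sendto(GESTURE_TO_COMMAND[gesture].encode(), addr)
```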

    Social-aware drone navigation using social force model

    Get PDF
    Robot navigation is one of the hardest challenges to deal with, because real environments involve highly dynamic objects moving in all directions. The main goal is to navigate safely within the environment, avoiding obstacles and reaching the proposed goal. Nowadays, with the latest advances in technology, we are able to see robots almost everywhere, which leads us to think about the robot's role in the future and where we will find them; it is no exaggeration to say that flying and land-based robots are going to live together with people, interacting in our houses, streets and shopping centers. Moreover, we will notice their presence gradually inserted into our human societies, taking on more and more human tasks which only a few years ago were unthinkable. Therefore, if we think about robots moving or flying around us, we must consider safety, the distance a robot should keep to make humans feel comfortable, and the different reactions people may have. The main goal of this work is to accompany people using a flying robot. The term social navigation gives us the path to follow when we talk about a social environment: robots must be able to navigate among humans, giving a sense of security to those walking close to them. In this work, we present the Social Force Model, which states that the human social interaction between persons and objects is inspired by the fluid dynamics defined by Newton's equations, and we introduce the extended version, which complements the initial method with the human-robot interaction force. In the robotics field, tools that support development and implementation are crucial. The fast advances in technology allow the international community to access cheaper and more compact hardware and software than a decade ago. It is becoming more and more usual to have access to powerful technology that lets us run complex algorithms in reduced space, making robots more intelligent, more compact and more robust against failures. Our case was no exception: in the next chapters we present the procedure we followed to implement the approaches, supported by different simulation tools and software. Given the nature of the problem we were facing, we made use of the Robot Operating System along with Gazebo, which gave us a good outlook of how the code would work in real-life experiments. In this work, both real and simulated experiments are presented, in which we expose the interaction produced by the 3D Aerial Social Force Model between humans, objects and, in this case, the AR.Drone, a flying drone belonging to the Instituto de Robótica e Informática Industrial. We focus on making the drone's navigation more socially acceptable to the humans around it; the main purpose of the drone is to accompany a person, whom we call the "main" person in this work, navigating side-by-side with a behavior dictated by forces exerted by the environment, while staying as socially acceptable as possible to the remaining humans around. We also present a comparison between the 3D Aerial Social Force Model and the Artificial Potential Fields method, a well-known method widely used in robot navigation; we describe both methods and the forces each one involves.
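
    To make the force formulation concrete, here is a minimal 2D sketch of a Helbing-style social force: attraction toward the goal plus exponential repulsion from nearby agents. The parameter values (v_des, tau, A, B, radius) are illustrative defaults, not the thesis's calibrated 3D aerial model:

```python
import numpy as np

def social_force(pos, vel, goal, others, v_des=1.3, tau=0.5,
                 A=2.0, B=0.3, radius=0.4):
    """Goal attraction plus exponential repulsion from other agents."""
    e_goal = (goal - pos) / np.linalg.norm(goal - pos)
    force = (v_des * e_goal - vel) / tau      # relax toward desired velocity
    for p in others:                          # repulsion from each neighbor
        d = pos - p
        dist = np.linalg.norm(d)
        force += A * np.exp((2 * radius - dist) / B) * d / dist
    return force

pos, vel = np.array([0.0, 0.0]), np.array([0.5, 0.0])
print(social_force(pos, vel, goal=np.array([5.0, 0.0]),
                   others=[np.array([1.0, 0.2])]))
```

    Integrating this force over time (e.g. with simple Euler steps) yields trajectories that deviate smoothly around people instead of cutting through them.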
    Along with these two models, there is another important topic to introduce. As we said, the robot must be able to accompany a pedestrian on their way, so forecasting capacity is an important feature: the robot does not know the final destination of the human it accompanies, and it is essential to give it the ability to predict human movements. In this work, we use the differences between past position values to know how much the trajectory changes through time, which gives us an accurate idea of how the human will behave and which direction he/she will take next. Furthermore, we present a human motion prediction model based on linear regression. The motivation behind building a regression model was the simplicity of the implementation, the robustness and the very accurate results of the approach. The previous positions of the main human are taken in order to forecast his/her position over the next seconds, with the main purpose of letting the drone know the direction the human is taking, so it can move forward beside the human as if accompanying him/her. The optimization of the linear regression model, to find the right weights, was carried out by gradient descent, also implementing the RMSprop variant in order to reach convergence faster. The strategy followed to build the prediction model is explained in detail later in this work. The presence of social robots has grown during the past years; many researchers have contributed, and many techniques are being used to give robots the capacity to interact safely and effectively with people. It is a hot topic which has matured a lot, but much research remains to be done.
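
    A toy sketch of the prediction idea under stated assumptions: a linear model maps the k most recent positions to the next one, with weights fitted by gradient descent using an RMSprop-style update. The window size, learning rate and the synthetic 1D track are placeholders, not values from the thesis:

```python
import numpy as np

def fit_rmsprop(X, y, lr=0.01, rho=0.9, eps=1e-8, steps=2000):
    """Least-squares linear regression trained with RMSprop updates."""
    w = np.zeros(X.shape[1])
    cache = np.zeros_like(w)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)       # MSE gradient
        cache = rho * cache + (1 - rho) * grad**2   # running avg of grad^2
        w -= lr * grad / (np.sqrt(cache) + eps)
    return w

# Predict the next coordinate from the k most recent ones (toy 1D track).
k = 4
track = np.cumsum(np.full(50, 0.1)) + 0.01 * np.random.default_rng(1).normal(size=50)
X = np.array([track[i:i + k] for i in range(len(track) - k)])
y = track[k:]
w = fit_rmsprop(X, y)
print("next position estimate:", X[-1] @ w)
```

    In practice the same model is fitted per axis, and the forecast horizon is extended by feeding predictions back in as inputs.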

    Road Condition Detection and Emergency Rescue Recognition Using On-Board UAV in the Wildness

    Get PDF
    Unmanned aerial vehicle (UAV) vision technology is becoming increasingly important, especially in wilderness rescue. For humans in the wilderness with poor network conditions and bad weather, this paper proposes a technique for road extraction and road condition detection from video captured by UAV multispectral cameras in real time, or from pre-downloaded multispectral satellite images, which in turn provides humans with optimal route planning. Additionally, depending on the flight altitude of the UAV, humans can interact with the UAV through dynamic gesture recognition to identify emergency situations and potential dangers for emergency rescue or re-routing. The purpose of this work is to detect road conditions and identify emergency situations in order to provide necessary and timely assistance to humans in the wild. By obtaining a normalized difference vegetation index (NDVI), the UAV can effectively distinguish between bare soil roads and gravel roads, refining the results of our previous route planning data. In the low-altitude human–machine interaction part, based on MediaPipe hand landmarks, we combined machine learning methods to build a dataset of four basic hand gestures for "sign for help" dynamic gesture recognition. We tested the dataset on different classifiers, and the best results show that the model can achieve 99.99% accuracy on the testing set. In this proof-of-concept paper, the above experimental results confirm that our proposed scheme can achieve the expected tasks of UAV rescue and route planning.
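
    The NDVI itself is a simple per-pixel ratio, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch follows; the band arrays and the bare-soil threshold are illustrative, not the paper's calibrated values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index per pixel."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 bands: high NDVI suggests vegetation, values near zero
# suggest bare soil or gravel.
nir = np.array([[0.6, 0.5], [0.2, 0.3]])
red = np.array([[0.1, 0.1], [0.2, 0.25]])
mask_bare = ndvi(nir, red) < 0.2   # threshold chosen for illustration
print(ndvi(nir, red), mask_bare)
```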

    Pedestrian Models for Autonomous Driving Part II: High-Level Models of Human Behavior

    Get PDF
    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians' likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.
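
    To illustrate what a game-theoretic interaction model looks like in its simplest form, here is a toy two-player "crossing game" between an AV and a pedestrian with hand-picked payoffs (not taken from the survey); the two pure Nash equilibria correspond to one agent yielding while the other proceeds:

```python
import itertools

# Each agent yields (Y) or goes (G). Payoffs are (AV, pedestrian);
# both going is the conflict / near-collision outcome.
payoff = {
    ("Y", "Y"): (-1, -1),
    ("Y", "G"): (0, 2),
    ("G", "Y"): (2, 0),
    ("G", "G"): (-10, -10),
}

def pure_nash(payoff, actions=("Y", "G")):
    """Action pairs where neither agent gains by deviating unilaterally."""
    eq = []
    for a, b in itertools.product(actions, repeat=2):
        av_ok = all(payoff[(a, b)][0] >= payoff[(a2, b)][0] for a2 in actions)
        ped_ok = all(payoff[(a, b)][1] >= payoff[(a, b2)][1] for b2 in actions)
        if av_ok and ped_ok:
            eq.append((a, b))
    return eq

print(pure_nash(payoff))   # -> [('Y', 'G'), ('G', 'Y')]
```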

    From the Ground Up: Designerly Knowledge in Human-Drone Interaction

    Get PDF
    There are flying robots out there — you may have seen and heard them, droning over your head. Drones have expanded our human capacities, lifting our sight to the skies, but not without generating intricate experiences. How are these machines being designed and researched? What design methods, approaches, and philosophies are relevant to the study of the development (or decline) of drones in society? In this thesis, I argue that we must re-frame how drones are studied, from the ground up, through a design stance. I invite you to take a journey with me, with changing lenses from the work of others to my own intimate relationship with this technology. My work relies on exploring the fringes of design research: understudied groups such as children, alternative design approaches such as soma design, and peripheral methods such as autoethnography. This thesis includes four articles discussing perspectives on designerly knowledge, composing a frame around the notion that we may be missing out on some aspects of the wicked nature of human-drone interaction (HDI) design. The methods are grounded in phenomenology and narratives, and supported by the assumption that any subject of study is a sociotechnical assemblage. Starting from a first-person perspective, I offer a contribution to the gap in research through a longitudinal autoethnographic study conducted with my children. The second paper comes in the form of a pictorial expressing a first-person experience during a design research workshop, and what that meant for my relationship with drones as a research material. The third paper leaps into a Research through Design project, challenging the solutionist drone and offering instead the first steps in a concept-driven design of the unlikely pairing of drones and breathing. The fourth paper returns to the pictorial form, suggesting a method for visual conversations between researchers through the tangible qualities of sketches and illustrations. Central to this thesis is the argument for designerly approaches in HDI and the championing of the need for alternative forms of publication and research. To that end, I include two publications in the form of pictorials: a publication format relying on visual knowledge, with growing interest in the HCI community.

    A Survey on Human-aware Robot Navigation

    Full text link
    Intelligent systems are increasingly part of our everyday lives and have been integrated seamlessly to the point where it is difficult to imagine a world without them. Physical manifestations of those systems, on the other hand, in the form of embodied agents or robots, have so far been used only for specific applications and are often limited to functional roles (e.g. in the industry, entertainment and military fields). Given the current growth and innovation in the research communities concerned with the topics of robot navigation, human-robot interaction and human activity recognition, it seems like this might soon change. Robots are increasingly easy to obtain and use, and their acceptance in general is growing. However, the design of a socially compliant robot that can function as a companion needs to take various areas of research into account. This paper is concerned with the navigation aspect of a socially compliant robot and provides a survey of existing solutions for the relevant areas of research, as well as an outlook on possible future directions.
    Comment: Robotics and Autonomous Systems, 202

    Survey Paper Artificial and Computational Intelligence in the Internet of Things and Wireless Sensor Network

    Get PDF
    In this modern age, the Internet of Things (IoT) and Wireless Sensor Networks (WSN) as its derivatives have become some of the most popular and important technological advancements. In IoT, all things and services in the real world are digitalized, and the number of IoT devices continues to grow exponentially every year. This growth has created a tremendous amount of data and new data services such as big data systems. These new technologies can be managed to produce additional value for existing business models; they can also provide forecasting services and decision-making support using computational intelligence methods. In this survey paper, we review in detail research activities concerning the application of Computational Intelligence methods in IoT WSN. To build a good understanding, we also present various challenges and issues for Computational Intelligence in IoT WSN. Finally, we discuss future directions of Computational Intelligence applications in IoT WSN, such as the Self-Organizing Network (dynamic network) concept.

    A Survey of Computer Vision Methods for 2D Object Detection from Unmanned Aerial Vehicles

    Get PDF
    The spread of Unmanned Aerial Vehicles (UAVs) in the last decade has revolutionized many application fields. The most investigated research topics focus on increasing autonomy during operational campaigns, environmental monitoring, surveillance, mapping, and labeling. To achieve such complex goals, a high-level module is exploited to build semantic knowledge, leveraging the outputs of a low-level module that takes data acquired from multiple sensors and extracts information about what is sensed. All in all, object detection is undoubtedly the most important low-level task, and the most employed sensors to accomplish it are by far RGB cameras, due to their cost, dimensions, and the wide literature on RGB-based object detection. This survey presents recent advancements in 2D object detection for the case of UAVs, focusing on the differences, strategies, and trade-offs between the generic problem of object detection and the adaptation of such solutions for UAV operations. Moreover, a new taxonomy is proposed that considers different height intervals and is driven by the methodological approaches introduced by state-of-the-art works, rather than by hardware, physical and/or technological constraints.
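
    As a generic baseline of the kind such surveys compare against, here is a minimal sketch of running an off-the-shelf 2D detector on a single RGB frame, assuming torchvision >= 0.13; the random tensor stands in for a UAV image and the confidence threshold is illustrative:

```python
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)    # stand-in for one RGB UAV frame in [0, 1]
with torch.no_grad():
    out = model([frame])[0]        # dict with boxes, labels, scores

keep = out["scores"] > 0.5         # confidence threshold for illustration
print(out["boxes"][keep], out["labels"][keep])
```

    Note that generic detectors like this are trained on ground-level imagery; much of the surveyed work concerns adapting them to the small object scales and unusual viewpoints of aerial footage.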