409 research outputs found

    Toward a robot swarm protecting a group of migrants

    Geopolitical conflicts of recent years have led to the mass migration of civilian populations. These migrations often take place in militarized zones, exposing the populations to real danger; indeed, civilians are increasingly targeted during military assaults. Defense and security needs have grown accordingly, making the protection of migrants a priority, yet few if any arrangements exist to manage displacement at this scale and to protect civilians during migration. To increase their security during mass migration across inhospitable territory, this article proposes an assistive system using a team of mobile robots, referred to as a rover swarm, that maintains a safety area around the migrants. We propose a coordination algorithm combining a convolutional neural network (CNN) with fuzzy logic that allows the rovers to synchronize their movements and provide better sensor coverage of the environment. The implementation is carried out on a reduced-scale rover to evaluate the functionality of the proposed software architecture and algorithms. The results open new perspectives for helping and protecting migrants with a swarm operating in a complex and dynamic environment.
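
    As a rough illustration of the fuzzy-logic half of such a coordination scheme, the sketch below adjusts one rover's speed to hold a desired spacing to a neighbour. The membership functions, rule outputs, and 5 m target spacing are invented for illustration; the paper's actual CNN/fuzzy algorithm is not detailed in the abstract.

```python
# Minimal sketch of a fuzzy spacing rule for one rover in the swarm.
# Membership functions, rule outputs, and the 5 m target spacing are
# illustrative assumptions, not the paper's coordination algorithm.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def speed_adjustment(dist_to_neighbour, desired=5.0):
    """Rules: too close -> slow down, about right -> hold, too far -> speed up."""
    err = float(np.clip(dist_to_neighbour - desired, -5.9, 5.9))
    memberships = np.array([
        triangular(err, -6.0, -3.0, 0.0),   # "too close"
        triangular(err, -2.0,  0.0, 2.0),   # "about right"
        triangular(err,  0.0,  3.0, 6.0),   # "too far"
    ])
    deltas = np.array([-0.5, 0.0, 0.5])     # speed change per rule (m/s)
    # Defuzzify: membership-weighted average of the rule outputs.
    return float(memberships @ deltas / memberships.sum())

print(speed_adjustment(7.5))   # lagging rover -> positive delta (speed up)
```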

    Probabilistic Human-Robot Information Fusion

    This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making.

    In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction.

    Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion.

    The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots’ current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots’ beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
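
    A minimal sketch of the two mechanisms this abstract describes — fusing a human report into the robot's belief through a Human Sensor Model, and querying the operator only when the expected benefit exceeds the cost — might look as follows. The state space, likelihood tables, and query cost are invented for illustration, not taken from the thesis.

```python
# Hedged sketch: Bayesian fusion of a robot observation and a human report
# over a shared discrete belief, plus a value-of-information query test.
# All numbers below are illustrative assumptions.
import numpy as np

states = ["tree", "rock", "vehicle"]
belief = np.full(3, 1 / 3)                      # uniform prior over the object

def fuse(belief, likelihood):
    """Bayes update: posterior is proportional to likelihood x prior."""
    posterior = likelihood * belief
    return posterior / posterior.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Robot observation: a geometric cue that weakly favours "rock".
belief = fuse(belief, np.array([0.2, 0.5, 0.3]))

# Human Sensor Model: P(operator says "vehicle" | true state).
human_likelihood = np.array([0.05, 0.10, 0.85])

# Query the operator only if the entropy reduction of the anticipated
# report exceeds the cost of interrupting them.
gain = entropy(belief) - entropy(fuse(belief, human_likelihood))
QUERY_COST = 0.3                                # bits; assumed trade-off
if gain > QUERY_COST:
    belief = fuse(belief, human_likelihood)     # operator replied "vehicle"
print(dict(zip(states, np.round(belief, 3))))
```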

    Digital Cognitive Companions for Marine Vessels: On the Path Towards Autonomous Ships

    As in the automotive industry, industry and academia are making extensive efforts to create autonomous ships. The solutions are very technology-intense: many building blocks, often relying on AI technology, need to work together to create a complete system that is safe and reliable to use. Even when ships are fully unmanned, humans are still expected to guide them when unknown situations arise, through teleoperation systems.

    In this thesis, methods are presented to enhance two building blocks that are important for autonomous ships: a positioning system and a system for teleoperation.

    The positioning system has been constructed not to rely on the Global Positioning System (GPS), as GPS can be jammed or spoofed. Instead, it uses Bayesian calculations to compare bottom-depth and magnetic-field measurements with known sea charts and magnetic-field maps in order to estimate the position. State-of-the-art techniques for this method typically use high-resolution maps, but high-resolution terrain maps are hardly available anywhere in the world. Hence, we present a method using standard sea charts, and we compensate for the lower accuracy by using other domains, such as magnetic-field intensity and bearings to landmarks. Using data from a field trial, we showed that fusing multiple domains is more robust than relying on one domain alone.

    For the second building block, we first investigated how 3D and VR approaches could support the remote operation of unmanned ships over a low-throughput data connection, comparing the respective graphical user interfaces (GUIs) with a Baseline GUI following the interfaces currently applied in such contexts. Our findings show that both the 3D and VR approaches significantly outperform the traditional approach: 3D GUI and VR GUI users reacted better to potentially dangerous situations than Baseline GUI users and kept track of the surroundings more accurately. Building on this, we conducted a teleoperation user study using real-world data from a field trial in the archipelago, in which users assisted the positioning system with bearings to landmarks. The users found that the tool gave a good overview and, despite the low-throughput connection, managed through the GUI to significantly improve the positioning accuracy.
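
    The GPS-free positioning described here is a form of map-matching that is commonly realized with a particle filter; the sketch below shows the general shape of such a multi-domain measurement update, with made-up grid maps and noise levels standing in for the thesis's sea charts and field data.

```python
# Minimal particle-filter sketch of map-matching positioning: weight
# position hypotheses by how well measured bottom depth and magnetic
# intensity match known maps. Maps and noise levels are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
depth_map = rng.uniform(10, 60, size=(100, 100))   # stand-in sea chart (m)
mag_map   = rng.uniform(49, 52, size=(100, 100))   # stand-in field map (uT)

N = 500
particles = rng.uniform(0, 100, size=(N, 2))       # (x, y) hypotheses
weights = np.full(N, 1.0 / N)

def likelihood(z, predicted, sigma):
    return np.exp(-0.5 * ((z - predicted) / sigma) ** 2)

def measurement_update(particles, weights, z_depth, z_mag):
    ij = np.clip(particles.astype(int), 0, 99)
    pred_depth = depth_map[ij[:, 0], ij[:, 1]]
    pred_mag   = mag_map[ij[:, 0], ij[:, 1]]
    # Multi-domain fusion: per-domain likelihoods simply multiply.
    w = weights * likelihood(z_depth, pred_depth, 2.0) \
                * likelihood(z_mag, pred_mag, 0.5)
    return w / w.sum()

true_ij = (40, 70)
weights = measurement_update(particles, weights,
                             z_depth=depth_map[true_ij],
                             z_mag=mag_map[true_ij])
print(np.round(weights @ particles, 1))            # weighted-mean estimate
```

    A single update like this usually leaves ambiguity, since many map cells share a depth; repeating predict/update steps along the vessel's track, plus extra domains such as landmark bearings, is what makes the estimate converge.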

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, where possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots that complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may experience confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. Explainable Shared Control involves two perspectives on transparency: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms, while the robot's perspective requires an awareness of human "intent", for which a clustering framework built around a deep generative model is developed. Both transparency constructs are implemented on a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality, and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
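
    For the intent-inference side, the thesis builds a clustering framework around a deep generative model; as a much simpler stand-in that shows only the pipeline shape (windowed control signals in, interpretable clusters out), here is plain k-means over synthetic joystick windows.

```python
# Highly simplified stand-in for the intent-inference component: cluster
# short windows of wheelchair joystick commands into groups that can be
# read as "intents". The thesis uses a deep generative model; k-means
# over synthetic data only mirrors the pipeline's overall shape.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 100 windows of 20 (linear, angular) velocity commands each.
ahead   = rng.normal([0.8, 0.0], 0.05, size=(50, 20, 2))   # corridor following
turning = rng.normal([0.3, 0.6], 0.05, size=(50, 20, 2))   # arcing to a doorway
windows = np.concatenate([ahead, turning])

features = windows.reshape(len(windows), -1)       # flatten each window
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
# Each true behaviour should map cleanly onto one learnt cluster.
print(np.bincount(labels[:50], minlength=2),
      np.bincount(labels[50:], minlength=2))
```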

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of existing uses of service robots by disabled and elderly people and of technological advances that will make new uses possible, and it provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    Human-Machine Interfaces for Service Robotics

    The abstract is in the attached document.

    NeBula: TEAM CoSTAR’s robotic autonomy solution that won phase II of DARPA subterranean challenge

    This paper presents and discusses the algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots) for the DARPA Subterranean Challenge. Specifically, it presents the techniques used in the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR’s demonstrations in Martian-analog surface and subsurface (lava tube) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss the components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, and present the specific results and lessons learned from fielding this solution on the challenging courses of the DARPA Subterranean Challenge.
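
    The core idea of reasoning in the belief space, rather than over a single best-guess state, can be illustrated with a toy expected-cost action selection; the states, actions, and cost numbers below are invented and are not from the paper.

```python
# Toy illustration of decision making in the belief space: score actions
# by expected cost under a distribution over world states instead of a
# single best guess. All numbers are invented, not NeBula's.
import numpy as np

states = ["clear", "rubble", "blocked"]             # corridor ahead
belief = np.array([0.6, 0.3, 0.1])                  # current belief

# costs[action][state]: traversal cost (time/risk) for each outcome.
costs = {
    "drive_fast": np.array([1.0, 8.0, 50.0]),
    "drive_slow": np.array([3.0, 4.0, 20.0]),
    "detour":     np.array([6.0, 6.0, 6.0]),
}

expected = {action: float(c @ belief) for action, c in costs.items()}
print(expected, "->", min(expected, key=expected.get))
# Best-guess planning on "clear" alone would pick drive_fast; weighting
# costs by the full belief picks drive_slow, reflecting the uncertainty.
```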

    Context-Enabled Visualization Strategies for Automation Enabled Human-in-the-loop Inspection Systems to Enhance the Situation Awareness of Windstorm Risk Engineers

    An insurance loss-prevention survey, specifically a windstorm risk inspection survey, is the process of investigating the potential damage associated with a building or structure in the event of extreme weather such as a hurricane or tornado. Traditionally, the risk inspection process is highly subjective and depends on the skills of the engineer performing it. This dissertation investigates the sensemaking process of risk engineers while performing risk inspections, with special focus on the various factors influencing it, and then investigates how context-based visualization strategies enhance the situation awareness and performance of windstorm risk engineers.

    An initial study investigated the sensemaking process and situation awareness requirements of windstorm risk engineers, using the data-frame theory of sensemaking as its framework. Ten windstorm risk engineers were interviewed, and the data collected were analyzed following an inductive thematic approach. The themes that emerged from the data explained the sensemaking process of risk engineers, the process of making sense of contradicting information, the importance of experience level, the internal and external biases influencing the inspection process, the difficulty of developing mental models, and potential technology interventions. More recently, human-in-the-loop systems such as drones have been used to improve the efficiency of windstorm risk inspection; this study provides recommendations to guide the design of such systems to support the sensemaking process and situation awareness of windstorm visual risk inspection.

    The second study investigated the effect of context-based visualization strategies on the situation awareness of windstorm risk engineers, and more specifically how different types of information contribute to the three levels of situation awareness. Following a between-subjects design, 65 civil/construction engineering students completed this study. Checklist-based and predictive-display-based decision aids were tested and found to be effective in supporting the situation awareness requirements as well as the performance of windstorm risk engineers. However, the predictive display only helped with certain tasks, such as understanding the interaction among different components on the rooftop; for the remaining tasks, the checklist alone was sufficient. Moreover, the decision aids did not place any additional cognitive demand on the participants. This study helped us understand the advantages and disadvantages of the decision aids tested.

    The final study evaluated the transfer-of-training effect of the checklist- and predictive-display-based decision aids. One week after the previous study, participants completed a follow-up study without any decision aids. The performance and situation awareness of participants in the checklist and predictive display groups did not change significantly from the first trial to the second, whereas the performance and situation awareness of participants in the control condition improved significantly in the second trial. These participants attributed the improvement to their exposure to the SAGAT questionnaire in the first study: they knew what issues to look for and what tasks needed to be completed in the simulation. The confounding effect of SAGAT questionnaires needs to be studied in future research efforts.

    Fast 3D cluster tracking for a mobile robot using 2D techniques on depth images

    Simultaneous user detection and tracking is an issue at the core of human-robot interaction (HRI). Several methods exist and give good results; many apply image-processing techniques to camera images. The increasing presence of range-imaging cameras on mobile robots (such as structured-light devices like the Microsoft Kinect) allows us to apply image processing to depth maps. In this article, a fast and lightweight algorithm is presented for the detection and tracking of 3D clusters using classic 2D techniques, such as edge detection and connected components, applied to the depth maps. Clusters are recognized by their 2D shape. An algorithm for the compression of depth maps has been specifically developed, allowing the processing to be distributed among several computers. The algorithm is then applied to a mobile robot chasing an object selected by the user, and is coupled with laser-based tracking to make up for the narrow field of view of the range-imaging camera. The workload created by the method is light enough to enable its use even on processors with limited capabilities. Extensive experimental results are given to verify the usefulness of the proposed method. This work was supported by the Spanish MICINN (Ministry of Science and Innovation) through the project "Applications of Social Robots / Aplicaciones de los Robots Sociales."
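
    The core trick — treating a depth map as an ordinary 2D image so that cheap binary-image operations find 3D clusters — is easy to sketch with OpenCV. The synthetic depth frame and the 1.2–1.8 m band below are illustrative; on the robot the frame would come from the range camera.

```python
# Sketch of the "2D techniques on depth images" idea: slice the depth map
# into a range band, then run ordinary connected-components analysis on
# the resulting binary mask to obtain candidate 3D clusters.
import numpy as np
import cv2

depth = np.full((240, 320), 4000, dtype=np.uint16)  # background at 4 m (mm)
depth[80:160, 100:150] = 1500                       # object at 1.5 m

band = cv2.inRange(depth, 1200, 1800)               # keep near-range pixels

n, labels, stats, centroids = cv2.connectedComponentsWithStats(band)
for i in range(1, n):                               # label 0 is background
    x, y, w, h, area = stats[i]
    print(f"cluster {i}: bbox=({x},{y},{w},{h}), area={area}, "
          f"centroid={tuple(np.round(centroids[i], 1))}")
```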