3,453 research outputs found

    Safe Robotic Grasping: Minimum Impact-Force Grasp Selection

    This paper addresses the problem of selecting from a choice of possible grasps, so that impact forces will be minimised if a collision occurs while the robot is moving the grasped object along a post-grasp trajectory. Such considerations are important for safety in human-robot interaction, where even a certified "human-safe" (e.g. compliant) arm may become hazardous once it grasps and begins moving an object, which may have significant mass, sharp edges or other dangers. Additionally, minimising collision forces is critical to preserving the longevity of robots which operate in uncertain and hazardous environments, e.g. robots deployed for nuclear decommissioning, where removing a damaged robot from a contaminated zone for repairs may be extremely difficult and costly. Also, unwanted collisions between a robot and critical infrastructure (e.g. pipework) in such high-consequence environments can be disastrous. In this paper, we investigate how the safety of the post-grasp motion can be considered during the pre-grasp approach phase, so that the selected grasp is optimal in terms of applying minimum impact forces if a collision occurs during a desired post-grasp manipulation. We build on the methods of augmented robot-object dynamics models and "effective mass" and propose a method for combining these concepts with modern grasp and trajectory planners, to enable the robot to achieve a grasp which maximises the safety of the post-grasp trajectory, by minimising potential collision forces. We demonstrate the effectiveness of our approach through several experiments with both simulated and real robots. Comment: To appear in IEEE/RAS IROS 201
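
    For readers unfamiliar with the "effective mass" concept this abstract builds on, the sketch below shows how grasp candidates might be scored against a post-grasp trajectory. It is not the authors' implementation; the helper names (`candidate_grasps`, `jacobian_fn`, `inertia_fn`) and the point-mass treatment of the grasped object are assumptions made for illustration.

```python
import numpy as np

def effective_mass(J, M, u):
    """Effective mass the arm presents along unit direction u at the grasp
    frame: 1 / (u^T J M^-1 J^T u), the operational-space ("effective mass") form."""
    u = np.asarray(u, dtype=float)
    Lambda_inv = J @ np.linalg.inv(M) @ J.T      # inverse task-space inertia
    return 1.0 / float(u @ Lambda_inv @ u)

def grasp_safety_cost(grasp, trajectory, object_mass, jacobian_fn, inertia_fn):
    """Worst-case effective mass seen anywhere along the post-grasp trajectory.

    trajectory  -- iterable of (q, v_dir): joint configuration and unit motion direction
    jacobian_fn -- returns the 3xN translational Jacobian at the grasp frame for (q, grasp)
    inertia_fn  -- returns the NxN joint-space inertia matrix at q
    """
    worst = 0.0
    for q, v_dir in trajectory:
        m_eff = effective_mass(jacobian_fn(q, grasp), inertia_fn(q), v_dir)
        # crude stand-in for the augmented robot-object model: a rigidly held
        # point-mass object simply adds its mass along the motion direction
        worst = max(worst, m_eff + object_mass)
    return worst

def select_safest_grasp(candidate_grasps, trajectory, object_mass,
                        jacobian_fn, inertia_fn):
    """Pick the grasp whose post-grasp motion minimises worst-case effective mass."""
    return min(candidate_grasps,
               key=lambda g: grasp_safety_cost(g, trajectory, object_mass,
                                               jacobian_fn, inertia_fn))
```

    The impact force at an unexpected collision scales with the effective mass presented along the motion direction and the relative velocity at contact, which is why the worst case along the trajectory is the quantity being minimised in this toy scoring.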

    Understanding user interactions in stereoscopic head-mounted displays

    Interacting in stereoscopic head-mounted displays can be difficult. There are not yet clear standards for how interactions in these environments should be performed. In virtual reality there are a number of well-designed interaction techniques; however, augmented reality interaction techniques still need to be improved before they can be easily used. This dissertation covers work done towards understanding how users navigate and interact with virtual environments that are displayed in stereoscopic head-mounted displays. With this understanding, existing techniques from virtual reality devices can be transferred to augmented reality where appropriate, and where that is not the case, new interaction techniques can be developed. This work begins by observing how participants interact with virtual content using gesture alone, speech alone, and the combination of gesture+speech during a basic object manipulation task in augmented reality. Later, a complex 3-dimensional data-exploration environment is developed and refined. That environment can be used in both augmented reality (AR) and virtual reality (VR), either asynchronously or simultaneously. The process of iteratively designing that system and the design choices made during its implementation are provided for future researchers working on complex systems. This dissertation concludes with a comparison of user interactions and navigation in that complex environment when using either an augmented or virtual reality display. That comparison contributes new knowledge on how people perform object manipulations between the two devices. When viewing 3D visualizations, users need to feel able to navigate the environment. Without careful attention to proper interaction technique design, people may struggle to use the developed system. These struggles may range from a system that is uncomfortable and unfit for long-term use to new users being unable to interact in these environments at all. Getting the interactions right for AR and VR environments is a step towards facilitating their widespread acceptance. This dissertation provides the groundwork needed to start designing interaction techniques around how people utilize their personal space, virtual space, body, tools, and feedback systems.

    2D–3D spatial registration for remote inspection of power substations

    Remote inspection and supervisory control are critical features for smart factories, civilian surveillance, power systems, and other domains. To reduce the time needed to make decisions, operators must have high situation awareness, which implies a considerable amount of data to be presented, while keeping sensory load minimal. Recent research suggests the adoption of computer vision techniques for automatic inspection, as well as virtual reality (VR) as an alternative to traditional SCADA interfaces. Nevertheless, although VR may provide a good representation of a substation's state, it lacks some real-time information available from online field cameras and microphones. Since these two sources of information (VR and field information) are not integrated into one single solution, we miss the opportunity to use VR as a SCADA-aware remote inspection tool during operation and disaster-response routines. This work discusses a method to augment virtual environments of power substations with field images, enabling operators to promptly see a virtual representation of the inspected area's surroundings. The resulting environment is integrated with an image-based state-inference machine, which continuously checks the inferred states against the ones reported by the SCADA database. Whenever a discrepancy is found, an alarm is triggered and the virtual camera can be immediately teleported to the affected region, speeding up system re-establishment. The solution is based on a client-server architecture and allows multiple cameras deployed in multiple substations. Our results concern the quality of the 2D–3D registration and the rendering framerate for a simple scenario. The collected quantitative metrics suggest good camera pose estimations and registrations, as well as an arguably optimal rendering framerate for substation equipment inspection.
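
    A minimal sketch of the discrepancy check described above, not the thesis code: image-inferred equipment states are compared against SCADA-reported states, and any mismatch yields an alarm carrying the field camera whose region the virtual viewpoint should be teleported to. All field names and state strings are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Discrepancy:
    equipment_id: str
    scada_state: str
    inferred_state: str
    camera_id: str          # field camera covering the affected equipment

def check_discrepancies(scada_states, inferred_states, camera_of):
    """Compare SCADA-reported states with image-inferred ones.

    scada_states    -- dict: equipment_id -> state string (e.g. 'open' / 'closed')
    inferred_states -- dict: equipment_id -> state inferred by the vision model
    camera_of       -- dict: equipment_id -> camera_id observing that equipment
    """
    alarms = []
    for eq_id, inferred in inferred_states.items():
        reported = scada_states.get(eq_id)
        if reported is not None and reported != inferred:
            alarms.append(Discrepancy(eq_id, reported, inferred, camera_of[eq_id]))
    return alarms

# On any returned alarm, the client would teleport the virtual camera to the
# corresponding region, e.g. viewer.teleport_to(alarm.camera_id) -- a hypothetical call.
```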

    Remote maintenance assistance using real-time augmented reality authoring

    Maintenance operations and lifecycle engineering are widely considered among the most expensive and time-consuming aspects of supporting industrial equipment. Numerous organizations continually devote large quantities of resources towards maintaining equipment. As such, any optimizations that reduce maintenance errors and expenses could lead to substantial time and cost savings. Unfortunately, there are often not enough specialists to meet the demand, forcing local technicians to perform on-site maintenance on equipment outside their area of expertise. Augmented reality (AR) is one technology that has already been shown to improve the maintenance process. While powerful, AR has its own set of challenges, from content authoring to spatial perception. This work details a system that puts both the power of AR and the knowledge of a specialist directly into the hands of an on-site technician. An application was developed that enables a specialist to deliver AR instructions in real time to assist a technician performing on-site maintenance. Using a novel and simplified authoring interface, specialists can create AR content in real time, with little to no prior knowledge of augmented reality or the system itself. There has been ample research on different AR-supported processes, such as real-time authoring, video monitoring, and off-site assistance. However, much less work has been done to integrate them and leverage existing personnel knowledge to both author and deliver real-time AR instructions. This work details the development and implementation of such a system. A technical evaluation was also performed to ensure real-time connectivity in geographically distributed environments. Three network configurations were evaluated. A high-latency, high-bandwidth network was used to represent a typical modern maintenance facility. A low-bandwidth network was evaluated to mimic older or more isolated maintenance environments. Lastly, a 4G LTE network was tested, showing the potential for the system to be used across global locations. Under all network configurations, the system effectively facilitated the complete disassembly of a hydraulic pump assembly.
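
    To make the real-time authoring channel concrete, here is a hedged sketch of what one instruction message might look like on the wire; the JSON fields, length-prefixed framing, and host name are assumptions for illustration, not the system's actual protocol.

```python
import json
import socket
import struct

def encode_instruction(step_id, text, anchor_pose, shape="arrow"):
    """Serialize one AR instruction; anchor_pose = (x, y, z, qx, qy, qz, qw)
    in the technician's tracked world frame (an assumed convention)."""
    payload = json.dumps({
        "step": step_id,
        "text": text,
        "shape": shape,
        "pose": list(anchor_pose),
    }).encode("utf-8")
    # length-prefixed framing so the receiver can split messages on a TCP stream
    return struct.pack("!I", len(payload)) + payload

def send_instruction(sock, step_id, text, anchor_pose):
    sock.sendall(encode_instruction(step_id, text, anchor_pose))

# Usage sketch (host name and port are placeholders):
# sock = socket.create_connection(("technician-hmd.local", 5000))
# send_instruction(sock, 1, "Remove the four M6 bolts", (0.12, 0.40, 0.95, 0, 0, 0, 1))
```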

    Press-n-Paste: Copy-and-Paste Operations with Pressure-sensitive Caret Navigation for Miniaturized Surface in Mobile Augmented Reality

    Copy-and-paste operations are among the most popular features on computing devices such as desktop computers, smartphones and tablets. However, copy-and-paste operations are not sufficiently addressed on Augmented Reality (AR) smartglasses designed for real-time interaction with texts in physical environments. This paper proposes two system solutions, namely Granularity Scrolling (GS) and Two Ends (TE), for copy-and-paste operations on AR smartglasses. By leveraging a thumb-size button on a touch-sensitive and pressure-sensitive surface, both multi-step solutions can capture the target texts through indirect manipulation and subsequently enable the copy-and-paste operations. Based on these system solutions, we implemented an experimental prototype named Press-n-Paste (PnP). After an eight-session evaluation capturing 1,296 copy-and-paste operations, 18 participants using GS and TE achieved peak performance of 17,574 ms and 13,951 ms per copy-and-paste operation, with 93.21% and 98.15% accuracy rates respectively, which is as good as commercial solutions using direct manipulation on touchscreen devices. The user footprints also show that PnP has a distinctive feature of a miniaturized interaction area within 12.65 mm × 14.48 mm. PnP not only proves the feasibility of copy-and-paste operations with the flexibility of various granularities on AR smartglasses, but also has significant implications for the design space of pressure widgets as well as input design on smart wearables.
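
    A rough sketch of how Granularity Scrolling could map button pressure to selection granularity; the thresholds and the text-splitting rules are illustrative assumptions, not the PnP prototype's actual behaviour.

```python
import re

GRANULARITIES = ["character", "word", "sentence", "paragraph"]

def granularity_from_pressure(pressure, thresholds=(0.25, 0.5, 0.75)):
    """Map a normalised pressure reading (0..1) to a selection granularity;
    thresholds are illustrative, not those of the PnP prototype."""
    for level, t in enumerate(thresholds):
        if pressure < t:
            return GRANULARITIES[level]
    return GRANULARITIES[-1]

def units(text, granularity):
    """Split text into the caret-navigation units for a granularity."""
    if granularity == "character":
        return list(text)
    if granularity == "word":
        return re.findall(r"\S+", text)
    if granularity == "sentence":
        return [s for s in re.split(r"(?<=[.!?])\s+", text) if s]
    return [p for p in text.split("\n\n") if p]   # paragraphs

def scroll_select(text, pressure, start, count):
    """Capture `count` units beginning at `start` under the granularity implied
    by the current pressure -- a toy model of the capture step before copying."""
    g = granularity_from_pressure(pressure)
    parts = units(text, g)
    joiner = "" if g == "character" else " "
    return joiner.join(parts[start:start + count])
```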

    MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration

    Remote collaborative work has become pervasive in many settings, from engineering to medical professions. Users are immersed in virtual environments and communicate through life-sized avatars that enable face-to-face collaboration. Within this context, users often collaboratively view and interact with virtual 3D models, for example, to assist in designing new devices such as customized prosthetics, vehicles, or buildings. However, discussing shared 3D content face-to-face has various challenges, such as ambiguities, occlusions, and different viewpoints, all of which decrease mutual awareness, leading to decreased task performance and increased errors. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space, improving mutual understanding and awareness. Our approach distorts the remote user's gestures to correctly reflect them in the local user's reference space when face-to-face. We introduce a novel metric called pointing agreement to measure what two users perceive in common when using pointing gestures in a shared 3D space. Results from a user study suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space. We believe that MAGIC improves remote collaboration by enabling simpler communication mechanisms and better mutual awareness. Comment: Presented at IEEE VR 202
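
    The pointing-agreement metric is only named in this abstract; one plausible reading, used purely for illustration, is the fraction of sampled pointing events in which both users' rays resolve to the same target. The sphere-based hit test below is an assumption, not MAGIC's implementation.

```python
import numpy as np

def ray_hit(origin, direction, targets, radius=0.05):
    """Return the id of the nearest target sphere the ray passes within `radius`
    of, or None -- a toy stand-in for real ray-object intersection."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    best_id, best_t = None, np.inf
    for target_id, center in targets.items():
        v = np.asarray(center, float) - o
        t = float(v @ d)                      # distance along the ray to the closest point
        if t <= 0:
            continue
        if np.linalg.norm(v - t * d) <= radius and t < best_t:
            best_id, best_t = target_id, t
    return best_id

def pointing_agreement(events, targets):
    """Fraction of pointing events in which both users' rays hit the same target.
    events = list of ((origin_a, dir_a), (origin_b, dir_b)) samples."""
    if not events:
        return 0.0
    agree = 0
    for ray_a, ray_b in events:
        hit_a = ray_hit(*ray_a, targets)
        hit_b = ray_hit(*ray_b, targets)
        if hit_a is not None and hit_a == hit_b:
            agree += 1
    return agree / len(events)
```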

    Tangible user interfaces: past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Breaking the Screen: Interaction Across Touchscreen Boundaries in Virtual Reality for Mobile Knowledge Workers.

    Virtual Reality (VR) has the potential to transform knowledge work. One advantage of VR knowledge work is that it allows extending 2D displays into the third dimension, enabling new operations, such as selecting overlapping objects or displaying additional layers of information. On the other hand, mobile knowledge workers often work on established mobile devices, such as tablets, limiting interaction with those devices to a small input space. This challenge of a constrained input space is intensified in situations when VR knowledge work is situated in cramped environments, such as airplanes and touchdown spaces. In this paper, we investigate the feasibility of interacting jointly between an immersive VR head-mounted display and a tablet within the context of knowledge work. Specifically, we 1) design, implement and study how to interact with information that reaches beyond a single physical touchscreen in VR; 2) design and evaluate a set of interaction concepts; and 3) build example applications and gather user feedback on those applications. Comment: 10 pages, 8 figures, ISMAR 202
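
    As an illustration of interaction that reaches beyond a single physical touchscreen, the sketch below maps a 2D touch point on a tracked tablet into the VR world frame, with a lift offset pulling content out of the screen plane; the coordinate conventions and tablet dimensions are assumptions, not the paper's design.

```python
import numpy as np

def touch_to_world(touch_uv, tablet_rotation, tablet_position,
                   tablet_size=(0.25, 0.17), lift=0.0):
    """Map a normalised touch coordinate (u, v in 0..1) on the tablet surface to a
    3D point in the VR world frame.

    tablet_rotation -- 3x3 rotation of the tablet in the world frame (from tracking)
    tablet_position -- 3-vector position of the screen centre in the world frame
    tablet_size     -- physical width/height of the screen in metres (illustrative)
    lift            -- offset along the screen normal, used to pull 2D content
                       'out of' the touchscreen into the third dimension
    """
    w, h = tablet_size
    # point in the tablet's local frame: x across, y up, z out of the screen
    local = np.array([(touch_uv[0] - 0.5) * w,
                      (touch_uv[1] - 0.5) * h,
                      lift])
    return np.asarray(tablet_rotation, float) @ local + np.asarray(tablet_position, float)

# Dragging with an increasing `lift` value would raise a selected layer above the
# physical screen, where overlapping objects can be disambiguated in depth.
```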

    Facilitating Human-Robot Collaboration Using a Mixed-Reality Projection System

    Human-robot collaboration can be a challenging exercise, especially when both the human and the robot want to work simultaneously on a given task. It becomes difficult for the human to understand the intentions of the robot and vice versa. To overcome this problem, a novel approach using the concept of mixed reality has been proposed, which uses the surrounding space as the canvas to augment projected information on and around 3D objects. A vision-based tracking algorithm precisely detects the pose and state of the 3D objects, and human-skeleton tracking is performed to create a system that is both human-aware and context-aware. Additionally, the system can warn humans about the intentions of the robot, thereby creating a safer environment to work in. An easy-to-use and universal visual language has been created which could form the basis for interaction in various human-robot collaborations in manufacturing industries. An objective and subjective user study was conducted to test the hypothesis that using this system to execute a human-robot collaborative task would result in higher performance as compared to using other traditional methods like printed instructions and mobile devices. Multiple measuring tools were devised to analyze the data, which finally led to the conclusion that the proposed mixed-reality projection system does improve the human-robot team's efficiency and effectiveness and hence will be a better alternative in the future. Masters Thesis, Computer Science, 201
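
    A toy illustration of the human-aware warning behaviour described above; the safety radius, shared coordinate frame, and homography-based projection mapping are assumptions made for the sketch, not the thesis implementation.

```python
import numpy as np

def warn_if_conflict(hand_position, robot_target, safety_radius=0.30):
    """Return a warning cue if the tracked hand lies inside the region the robot
    intends to move into next; both positions are 3-vectors in a shared frame."""
    gap = np.linalg.norm(np.asarray(hand_position, float) -
                         np.asarray(robot_target, float))
    if gap < safety_radius:
        # the cue would be rendered by the projector around the target object
        return {"type": "warning_ring",
                "center": list(robot_target),
                "radius": safety_radius,
                "message": "Robot moving here next"}
    return None

def project_to_projector(world_xy, homography):
    """Map a point on the work surface to projector pixel coordinates via a 3x3
    plane-to-plane homography obtained from calibration (assumed available)."""
    p = np.asarray(homography, float) @ np.array([world_xy[0], world_xy[1], 1.0])
    return p[:2] / p[2]
```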