1,701 research outputs found

    Periscope: A Robotic Camera System to Support Remote Physical Collaboration

    Full text link
    We investigate how robotic camera systems can offer new capabilities to computer-supported cooperative work through the design, development, and evaluation of a prototype system called Periscope. With Periscope, a local worker completes manipulation tasks with guidance from a remote helper who observes the workspace through a camera mounted on a semi-autonomous robotic arm that is co-located with the worker. Our key insight is that the helper, the worker, and the robot should all share responsibility for the camera view, an approach we call shared camera control. Using this approach, we present a set of modes that distribute control of the camera between the human collaborators and the autonomous robot depending on task needs. We demonstrate the system's utility and the promise of shared camera control through a preliminary study in which 12 dyads collaboratively worked on assembly tasks. Finally, we discuss design and research implications of our work for future robotic camera systems that facilitate remote collaboration.
    Comment: This is a pre-print of the article accepted for publication in PACM HCI and will be presented at CSCW 202
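    As a hedged illustration of how shared camera control might be structured (the mode names and interface below are assumptions for this sketch, not Periscope's published API), a control cycle could arbitrate between the helper's, the worker's, and the robot's preferred views:

        # Hypothetical sketch of shared camera control; the mode names are
        # illustrative assumptions, not Periscope's actual modes.
        from dataclasses import dataclass
        from enum import Enum, auto

        class CameraMode(Enum):
            HELPER_DRIVEN = auto()   # remote helper steers the view
            WORKER_DRIVEN = auto()   # local worker repositions the camera
            AUTONOMOUS = auto()      # robot tracks a task target on its own

        @dataclass
        class ViewRequest:
            pan: float
            tilt: float
            zoom: float

        def resolve_view(mode: CameraMode, helper: ViewRequest,
                         worker: ViewRequest, target: ViewRequest) -> ViewRequest:
            """Pick whose input drives the camera for this control cycle."""
            if mode is CameraMode.HELPER_DRIVEN:
                return helper
            if mode is CameraMode.WORKER_DRIVEN:
                return worker
            return target  # AUTONOMOUS: follow the tracked task target

        print(resolve_view(CameraMode.HELPER_DRIVEN,
                           ViewRequest(10.0, -5.0, 2.0),
                           ViewRequest(0.0, 0.0, 1.0),
                           ViewRequest(3.0, 8.0, 1.5)))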

    A Toolkit for Exploring Augmented Reality Through Construction with Children

    Get PDF
    Augmented Reality is becoming widely mainstream among children, owing to successful interactive uses in video games and social networks. Building on this interest, we present CartonEd, an open and complete toolkit suitable for children and dedicated to the construction of an augmented reality headset device. The toolkit lets children play with and explore augmented reality with and beyond handheld devices. Inspired by the Do-It-Yourself movement, the toolkit includes different components such as blueprints, tutorials, videos, mobile apps, a software development kit, and an official website. Among the mobile applications, one guides children through the construction process while they experience augmented reality. To validate our solution (in particular the construction process and the guiding app) and to understand its effect on children's relation to augmented reality, we conducted four construction sessions. Our study examines the usability of the guiding app and the construction process. We report in this paper the main components of the CartonEd toolkit and the results of an evaluation with 57 children and teenagers (ages 8-16), showing a positive outcome regarding their constructed devices (all functional) and their feelings and wishes about augmented reality.

    Turn It This Way: Remote Gesturing in Video-Mediated Communication

    Get PDF
    Collaborative physical tasks are working tasks characterised by workers 'in-the-field' who manipulate task artefacts under the guidance of a remote expert. Examples of such interactions include paramedics requiring field-surgery consults from hospital surgeons, soldiers requiring support from distant bomb-disposal experts, technicians inspecting and repairing machinery under the guidance of a chief engineer, or scientists examining artefacts with distributed colleagues. This thesis considers the design of technology to support such forms of distributed working. Early research in video-mediated communication (VMC) which sought to support such interactions presumed that video links between remote spaces would improve collaboration. The results of these studies, however, demonstrated that in such tasks audio-video links alone were unlikely to improve performance beyond that achievable by simpler audio-only links. In explanation of these observations, a reading of studies of situated collaborative working practices suggests that to support distributed object-focussed interactions it is beneficial not only to provide visual access to remote spaces but also to present within the task-space the gestural actions of remote collaborators. Remote Gestural Simulacra are advanced video-mediated communication tools that enable remote collaborators to both see and observably point at and gesture around and towards shared task artefacts located at another site. Technologies developed to support such activities have been critiqued; their design often fractures the interaction between the collaborating parties, restricting access to aspects of communication which are commonly used in co-present situations to coordinate interaction and ground understanding. This thesis specifically explores the design of remote gesture tools, seeking to understand how remote representations of gesture can be used during collaborative physical tasks. In a series of lab-based studies, the utility of remote gesturing is investigated both qualitatively, examining its collaborative function, and quantitatively, exploring its impact on facets of task performance and collaborative language. The thesis also discusses how the configuration of remote gesture tools impacts their usability, empirically comparing various gesture tool designs. The thesis constructs and examines an argument that remote gesture tools should be designed from a 'mixed ecologies' perspective (theoretically alleviating the problems engendered by 'fractured ecologies'), in which collaborating partners are given access to the most salient and relevant features of communicative action that are utilised in face-to-face interaction, namely mutual and reciprocal awareness of commonly understood object-focussed actions (hand-based gestures) and mutual and reciprocal awareness of task-space perspectives. The thesis demonstrates experimental support for this position and concludes by discussing how the findings generated from the thesis research can be used to guide the design of future iterations of remote gesture tools, and presents directions for further research.
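    To make the notion of a remote gesture representation concrete, the following minimal sketch (an assumption-level illustration, not the thesis' apparatus) composites a remote helper's pointing position onto the worker's video frame:

        # Minimal sketch: overlay a remote helper's pointing gesture onto the
        # worker's camera frame. The normalized-coordinate convention and frame
        # handling are illustrative assumptions.
        import numpy as np

        def overlay_pointer(frame: np.ndarray, x_norm: float, y_norm: float,
                            size: int = 9) -> np.ndarray:
            """Draw a green crosshair where the remote helper is pointing.

            frame: HxWx3 uint8 image from the worker's camera.
            x_norm, y_norm: pointer position in [0, 1], streamed by the helper.
            """
            h, w, _ = frame.shape
            cx, cy = int(x_norm * (w - 1)), int(y_norm * (h - 1))
            out = frame.copy()
            out[cy, max(cx - size, 0):min(cx + size + 1, w)] = (0, 255, 0)  # horizontal bar
            out[max(cy - size, 0):min(cy + size + 1, h), cx] = (0, 255, 0)  # vertical bar
            return out

        # The helper's client would send (x_norm, y_norm) alongside the audio
        # link; here we apply one pointer sample to a blank test frame.
        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        composited = overlay_pointer(frame, 0.5, 0.5)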

    Development of an intelligent personal assistant to empower operators in industry 4.0 environments

    Get PDF
    Double-degree master's dissertation with UTFPR - Universidade Tecnológica Federal do Paraná. Industry 4.0 brings a high level of automation to industrial environments and changes the way companies operate, both in operational aspects and in human relations. It is important to define the role of the millions of operators affected by this new socioeconomic paradigm, integrating new technologies and empowering the workforce to take advantage of qualities such as the flexibility and versatility that human operators bring to production lines. To advance this objective, this work proposes the development of an intelligent personal assistant, using concepts of human-in-the-loop cyber-physical systems and context awareness, to assist operators during manufacturing tasks, providing the information necessary to complete operations and verifying their accuracy in order to inform operators about possible errors. The implementation is divided into two parts. The first part focuses on an application that supports real-time operations found in industry, such as pick-and-place in warehouses and the assembly of complex equipment on an assembly line. Through an interface, the instruction is given and, using artificial vision techniques with images from an Intel RealSense camera, the system verifies whether the operation is being performed correctly. This information is gathered by a context-awareness algorithm, fulfilling the requirement of an intelligent personal assistant and providing feedback to the operator so that tasks are performed efficiently and with a lower incidence of errors. The second part covers the training of these operators in an immersive environment through virtual reality equipment such as the Oculus Go. The immersive scenario, developed in Unity3D, uses the real workbench as a model, making it possible to carry out this training in any environment and removing the need to use real equipment, which could be damaged by the user's inexperience. The results present the validation tests performed on these two parts, commenting on the strengths, challenges, and failures found in the system in general. These results are also compared qualitatively with traditional approaches to the proposed case studies in order to demonstrate the fulfillment of the objectives proposed in this work. Finally, a usability test is presented, which provides data on weak points in the user experience for possible improvements in future work.
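    As a rough sketch of the verify-and-feedback loop such an assistant performs (the step names and the stubbed vision check below are illustrative assumptions, not the thesis' implementation):

        # Hypothetical sketch of the assistant's instruction/verification loop.
        # A real system would classify camera frames; here a "frame" is just
        # the label the vision module would have detected.
        from dataclasses import dataclass

        @dataclass
        class Step:
            instruction: str      # shown to the operator
            expected_part: str    # what the vision module should detect

        def detect_part(frame: str) -> str:
            """Stand-in for the artificial-vision check on a camera frame."""
            return frame

        def run_task(steps: list[Step], frames: list[str]) -> None:
            for step, frame in zip(steps, frames):
                print(f"Instruction: {step.instruction}")
                seen = detect_part(frame)
                if seen == step.expected_part:
                    print("  OK: operation verified")
                else:
                    print(f"  ERROR: expected {step.expected_part!r}, saw {seen!r}")

        run_task([Step("Place bearing in slot A", "bearing"),
                  Step("Fit cover plate", "cover_plate")],
                 ["bearing", "screw"])  # second frame simulates an operator error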

    Conception and Implementation of a Location-based Augmented Reality Kernel

    Get PDF
    The availability of sophisticated mobile applications on many platforms constitutes a challenging task. In order to cover the most relevant mobile operating systems and make the best use of their underlying features, native development on the target platform still offers the most diverse possibilities. Aside from the most widely spread mobile operating systems - namely Android and iOS - the Windows Phone platform offers a unique design language and many developer tools and technologies for building Windows Store apps. Making use of the capabilities of modern smartphones enables the development and use of desktop-like applications. The built-in sensors, cameras, and powerful processing units of such a device offer a versatile platform to build against. As a result, many mobile applications and technologies have emerged. However, profound insight into the development of such an application is hard to find. In this work, the development of AREA on the Windows Phone 8.1 platform is presented. AREA is a location-based mobile Augmented Reality engine that is already available on Android and iOS. By porting the engine to yet another mobile platform, more third-party mobile business applications can integrate AREA and make use of its efficient and modular design. This work also points out the differences in implementation between the Windows Phone version and its counterparts on Android and iOS. Insights into the architecture and some references to the mathematical basis are also provided.
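    Although AREA's actual code is not reproduced here, the mathematical core of a location-based AR engine can be sketched as follows (the function names and field-of-view mapping are assumptions): compute the compass bearing from the device to a point of interest and map its offset from the device heading to a horizontal screen position:

        # Illustrative sketch of location-based AR math, not AREA's own code.
        import math

        def bearing_deg(lat1, lon1, lat2, lon2) -> float:
            """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2)."""
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dl = math.radians(lon2 - lon1)
            y = math.sin(dl) * math.cos(p2)
            x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
            return math.degrees(math.atan2(y, x)) % 360.0

        def screen_x(poi_bearing, heading, fov_deg=60.0, screen_w=1080):
            """Horizontal pixel for a POI, or None if outside the camera's field of view."""
            offset = (poi_bearing - heading + 180.0) % 360.0 - 180.0  # signed angle
            if abs(offset) > fov_deg / 2:
                return None
            return int((offset / fov_deg + 0.5) * screen_w)

        # Example: device pointing due north, POI slightly north-north-east.
        b = bearing_deg(48.4011, 9.9876, 48.4150, 9.9920)
        print(b, screen_x(b, heading=0.0))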

    B.O.G.G.L.E.S.: Boundary Optical GeoGraphic Lidar Environment System

    Get PDF
    The purpose of this paper is to describe a pseudo X-ray vision system that pairs a Lidar scanner with a visualization device. The system as a whole is referred to as B.O.G.G.L.E.S. The key factors that went into the development of this system, the background information, and the design approach are thoroughly described. The functionality of B.O.G.G.L.E.S. is demonstrated through its design constraints and the analysis of test results. Additionally, the paper proposes several possible future developments for B.O.G.G.L.E.S., indicating various avenues of improvement that could be pursued.
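    As an illustration of the kind of transform such a system needs (a sketch under assumed conventions, not the paper's pipeline), a Lidar range/bearing sample can be converted to a 3D point and projected onto the visualization display:

        # Sketch: spherical Lidar sample -> 3D point -> display pixel.
        # Axis conventions and camera intrinsics are illustrative assumptions.
        import math

        def lidar_to_point(range_m: float, azimuth_deg: float, elevation_deg: float):
            """Spherical Lidar sample -> Cartesian (x right, y up, z forward), meters."""
            az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
            x = range_m * math.cos(el) * math.sin(az)
            y = range_m * math.sin(el)
            z = range_m * math.cos(el) * math.cos(az)
            return x, y, z

        def project(x, y, z, focal_px=800.0, cx=640.0, cy=360.0):
            """Pinhole projection of the point onto the visualization display."""
            if z <= 0:
                return None  # behind the viewer
            return (cx + focal_px * x / z, cy - focal_px * y / z)

        # A wall point 5 m ahead and slightly to the right lands right of center.
        print(project(*lidar_to_point(5.0, azimuth_deg=10.0, elevation_deg=0.0)))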