
    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time and lets users feel objects by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience, where interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are shown only a representation of their hands floating in front of the camera from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the person's natural movements in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
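
    A minimal sketch (my illustration, not from the paper) of the real-to-virtual correspondence idea: if the virtual world is built on a 3D scan of the room, a single rigid transform, estimated from a few matched reference points, maps tracked physical positions into virtual coordinates. The Kabsch-style estimation and all names are assumptions.

```python
import numpy as np

def estimate_rigid_transform(real_pts, virtual_pts):
    """Estimate rotation R and translation t mapping real-world points
    to virtual-world points (Kabsch least-squares rigid fit)."""
    real_pts, virtual_pts = np.asarray(real_pts), np.asarray(virtual_pts)
    c_real, c_virt = real_pts.mean(axis=0), virtual_pts.mean(axis=0)
    H = (real_pts - c_real).T @ (virtual_pts - c_virt)  # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_virt - R @ c_real
    return R, t

# Example: map a tracked hand position into the virtual scene so that
# touching a physical table also touches its virtual counterpart.
R, t = estimate_rigid_transform(
    real_pts=[[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]],
    virtual_pts=[[2, 0, 0], [3, 0, 0], [2, 1, 0], [2, 0, 1]],
)
hand_real = np.array([0.5, 0.2, 0.9])
hand_virtual = R @ hand_real + t
print(hand_virtual)
```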

    Smart augmented reality instructional system for mechanical assembly

    Quality and efficiency are pivotal indicators for a manufacturing company. Many companies suffer from a shortage of experienced workers across the production line to perform complex assembly tasks, such as the assembly of an aircraft engine, which can lead to significant financial loss. To further reduce time and error in assembly, a smart system is introduced that combines multi-modal Augmented Reality (AR) instructions with a deep learning network for tool detection. The multi-modal smart AR is designed to provide on-site information, including various visual renderings, with a fine-tuned Region-based Convolutional Neural Network trained on a synthetic tool dataset. The dataset is generated by augmenting CAD models of tools onto 2D scenes, without the need to manually prepare real tool images. Applying the system to the mechanical assembly of a CNC carving machine shows that it not only correctly classifies and localizes the physical tools but also enables workers to successfully complete the given assembly tasks. With the proposed approaches, an efficiently customizable smart AR instructional system capable of sensing, characterizing requirements, and effectively enhancing worker performance has been built and demonstrated.
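
    A minimal sketch (my illustration, not the paper's pipeline) of the synthetic-data idea: paste a rendered CAD tool image onto a background scene at a random scale and position, and record the bounding box as a detection label. The PIL-based compositing, file names, and label format are all assumptions.

```python
import json
import random
from PIL import Image

def composite_sample(tool_path, background_path, out_image, out_label):
    """Paste a rendered CAD tool (RGBA, transparent background) onto a
    2D scene and save the image plus its bounding-box label."""
    tool = Image.open(tool_path).convert("RGBA")
    scene = Image.open(background_path).convert("RGB")

    # Random scale and position so the detector sees varied appearances;
    # assumes the background is larger than the scaled tool render.
    scale = random.uniform(0.4, 1.0)
    tool = tool.resize((int(tool.width * scale), int(tool.height * scale)))
    x = random.randint(0, scene.width - tool.width)
    y = random.randint(0, scene.height - tool.height)
    scene.paste(tool, (x, y), mask=tool)  # alpha channel acts as paste mask

    scene.save(out_image)
    label = {"class": "tool", "bbox": [x, y, x + tool.width, y + tool.height]}
    with open(out_label, "w") as f:
        json.dump(label, f)

# Hypothetical inputs: one CAD render, one workshop photo.
composite_sample("wrench_render.png", "workshop_scene.jpg",
                 "sample_0001.jpg", "sample_0001.json")
```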

    An Augmented Interface to Display Industrial Robot Faults

    Technological advancement is changing the way industrial factories face an increasingly complex and competitive market. The fourth industrial revolution (known as Industry 4.0) is also changing how human workers carry out tasks and actions. In fact, it is no longer far-fetched to imagine a scenario in which human operators and industrial robots work side by side, sharing the same environment and tools. To realize a safe work environment, workers should trust robots as much as they trust human operators. Such a goal is complex to achieve, especially when workers are under stress, such as when a fault occurs and the human operators can no longer understand what is happening with the industrial manipulator. Augmented Reality (AR) can help workers visualize robot faults in real time. This paper proposes an augmented system that assists human workers in recognizing and visualizing errors, improving their awareness of the system. The system has been tested using both an AR see-through device and a smartphone.
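
    A minimal sketch (an assumption, not the paper's implementation) of the core idea: translate fault codes coming from the robot controller into human-readable AR overlay messages so the operator can see what is happening. The fault catalogue and anchor names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical fault catalogue; real controllers expose vendor-specific codes.
FAULT_CATALOGUE = {
    17: ("Joint 3 over-torque", "Clear the payload and acknowledge the alarm."),
    42: ("Emergency stop engaged", "Release the e-stop and re-enable motors."),
}

@dataclass
class OverlayMessage:
    title: str
    instruction: str
    anchor: str  # scene anchor the AR view attaches the label to

def fault_to_overlay(code: int) -> OverlayMessage:
    """Translate a controller fault code into an AR overlay message."""
    title, instruction = FAULT_CATALOGUE.get(
        code, (f"Unknown fault {code}", "Consult the robot manual."))
    return OverlayMessage(title, instruction, anchor="robot_base")

print(fault_to_overlay(17))
```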

    Development of an intelligent personal assistant to empower operators in industry 4.0 environments

    Dual-degree master's dissertation with UTFPR - Universidade Tecnológica Federal do Paraná. Industry 4.0 brings a high level of automation to industrial environments and changes the way companies operate, both in operational aspects and in human relations. It is important to define the role of the millions of operators affected by this new socioeconomic paradigm, integrating new technologies and empowering the workforce to take advantage of aspects such as the flexibility and versatility that human operators bring to production lines. To advance this objective, this work proposes the development of an intelligent personal assistant, using concepts of human-in-the-loop cyber-physical systems and context awareness, to assist operators during manufacturing tasks, providing the information necessary to carry out operations and verifying their accuracy in order to inform operators of possible errors. The implementation is divided into two parts. The first part focuses on an application that supports real-time operations found in industry, such as pick-and-place in warehouses and the assembly of complex equipment on an assembly line. Instructions are given through an interface and, using artificial vision techniques with images from an Intel RealSense camera, the application verifies whether the operation is being performed correctly. This information is gathered by a context-awareness algorithm, fulfilling the requirement of an intelligent personal assistant and providing feedback to the operator so that tasks are performed efficiently and with a lower incidence of errors. The second part covers the training of these operators in an immersive environment through virtual reality equipment such as the Oculus Go. The immersive scenario, developed in Unity3D, uses the real workbench as a model, making it possible to perform this training in any environment and removing the need to use real equipment, which could be damaged by a user's inexperience. The results present the validation tests performed for these two parts, commenting on the strengths, challenges, and failures found in the system in general. These results are also qualitatively compared with traditional approaches to the proposed case studies in order to demonstrate that the objectives of this work have been fulfilled. Finally, a usability test is presented, which provides data on weak points in the user experience for possible improvements in future work.
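
    A minimal sketch (my assumption of one plausible shape, not the thesis code) of the verification loop: grab camera frames, check whether the instructed part appears in its expected region, and feed the result back to the operator. OpenCV template matching stands in for the thesis's artificial-vision techniques, and a generic webcam stands in for the Intel RealSense stream; paths and thresholds are hypothetical.

```python
import cv2

TEMPLATE = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
EXPECTED_REGION = (100, 100, 300, 300)  # x1, y1, x2, y2 of the target slot
MATCH_THRESHOLD = 0.8

def operation_done(frame) -> bool:
    """Return True when the instructed part is visible in its target slot."""
    x1, y1, x2, y2 = EXPECTED_REGION
    roi = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    score = cv2.matchTemplate(roi, TEMPLATE, cv2.TM_CCOEFF_NORMED).max()
    return score >= MATCH_THRESHOLD

cap = cv2.VideoCapture(0)  # the thesis streams from an Intel RealSense instead
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if operation_done(frame):
        print("Step completed correctly - advancing to next instruction")
        break
    print("Waiting: part not yet detected in target slot")
cap.release()
```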

    A Framework for Extended Reality System Development in Manufacturing

    This paper presents a framework for developing extended reality (XR) systems in a manufacturing context. The aim of this study is to develop a systematic framework to improve the usability and user acceptance of future XR systems, so that the manufacturing industry can move past the "wow effect" of XR demonstrators to a stage where XR systems can be successfully integrated and improve conventional work routines. Ensuring the usability and user acceptance of XR systems is essential for their wider adoption in manufacturing. The proposed framework was developed through six case studies covering different XR system developments for different application areas of manufacturing. The framework consists of five iterative phases: (1) requirements analysis, (2) solution selection, (3) data preparation, (4) system implementation, and (5) system evaluation. It is validated through one empirical case and seven previous studies identified as partly aligning with the proposed framework. The proposed framework provides a clear guideline on the steps needed to integrate XR in manufacturing and extends XR usage with increased usability and user acceptance. Furthermore, it reinforces the importance of a user-centered approach to XR system development in manufacturing.
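
    A minimal sketch (illustrative only, not from the paper) encoding the five iterative phases as a loop a development team could run until evaluation signals acceptance; the `evaluate` callback is a stand-in for real usability and user-acceptance studies.

```python
PHASES = [
    "requirements analysis",
    "solution selection",
    "data preparation",
    "system implementation",
    "system evaluation",
]

def develop_xr_system(evaluate):
    """Iterate the five phases until the evaluation phase signals acceptance.

    `evaluate` is a caller-supplied function returning True once usability
    and user-acceptance criteria are met.
    """
    iteration = 1
    while True:
        for phase in PHASES:
            print(f"iteration {iteration}: {phase}")
        if evaluate():
            return
        iteration += 1  # feed evaluation findings back into requirements

# Toy run: acceptance criteria are met on the second iteration.
results = iter([False, True])
develop_xr_system(lambda: next(results))
```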

    Nextmed: Automatic Imaging Segmentation, 3D Reconstruction, and 3D Model Visualization Platform Using Augmented and Virtual Reality

    The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation, the most commonly used technique, and semi-automatic approaches can be time consuming because a doctor is required, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for segmentation algorithms and augmented and virtual reality for visualization, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the import of Digital Imaging and Communications in Medicine (DICOM) images to a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve on current research results. A 3D mesh of the segmented structure is then automatically generated, which can be 3D printed or visualized using both augmented and virtual reality with the designed software systems. The Nextmed project is unique in that it covers the whole process, from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on applying augmented and virtual reality to 3D medical image visualization; however, these are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study. Analyzing the application of the platform to more than 1000 DICOM images and reviewing the results with medical specialists, we concluded that installing this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
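
    A minimal sketch (a classical stand-in, not Nextmed's algorithms) of the pipeline the abstract describes: load a DICOM series into a volume, segment the lungs, and turn the mask into a 3D mesh suitable for printing or AR/VR display. The file paths and intensity threshold are assumptions, and a fixed threshold replaces Nextmed's learned segmentation.

```python
import glob
import numpy as np
import pydicom
from skimage import measure

# 1. Load a DICOM series into a 3D volume (paths are hypothetical).
slices = [pydicom.dcmread(p) for p in glob.glob("study/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)

# 2. Naive lung segmentation: air-filled lung tissue has very low intensity.
#    Assumes the volume is already rescaled to Hounsfield-like units.
mask = volume < -400

# 3. Extract a surface mesh with marching cubes; the vertices and faces can
#    be exported for 3D printing or AR/VR visualization.
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(np.uint8), level=0.5)
print(f"mesh with {len(verts)} vertices and {len(faces)} faces")
```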

    Game engines and MAS: tuplespace-based interaction in Unity3D

    Game engines are becoming increasingly important both in industry, where they enable the development of modern applications and video games, and in research, particularly in the context of multi-agent systems (MAS). Their expressive power, combined with support for innovative technologies and features, enables the creation of modern, complex systems more efficiently: their continuous technological advancement has made them a reliable foundation for producing a wide range of applications, such as augmented/virtual/mixed reality applications, immersive simulations, and the construction of virtual and 3D worlds. Nevertheless, they lack native abstractions and mechanisms that can be relied upon to tackle complexity when designing complex systems. The attempt to exploit features of MAS theory within game engine development environments moves in this direction: integrating the abstractions that constitute MAS into game engines, with particular reference to models of coordination between agents, can lead to new solutions, solving technological problems with the help of graphics engines. This thesis uses the Unity3D game engine and proposes two C# libraries, which exploit a previous integration of the same framework with Prolog to enable an interaction and coordination model based on tuple spaces, usable through an implementation of LINDA primitives. The libraries offer programming interfaces (APIs) that C# Unity3D programmers can use to integrate support for this model into their creations, providing a new way to manage coordination between objects in Unity3D, along with properties that are fundamental in the MAS context for engineering complex systems and managing interactions between agents.
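
    A minimal sketch (in Python rather than the thesis's C# Unity3D libraries) of the LINDA coordination primitives the work builds on: `out` publishes a tuple, `rd` reads a matching tuple without removing it, and `in_` removes a matching tuple, with both blocking until a match exists. `None` in a pattern matches any field.

```python
import threading

class TupleSpace:
    """Toy LINDA tuple space with blocking rd/in_ primitives."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Publish a tuple and wake any blocked readers."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _find(self, pattern):
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, t)
            ):
                return t
        return None

    def rd(self, pattern):
        """Blocking read: return a matching tuple, leaving it in the space."""
        with self._cond:
            while (t := self._find(pattern)) is None:
                self._cond.wait()
            return t

    def in_(self, pattern):
        """Blocking take: return a matching tuple and remove it."""
        with self._cond:
            while (t := self._find(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(t)
            return t

space = TupleSpace()
space.out(("position", "agent1", 3.0, 4.0))
print(space.rd(("position", "agent1", None, None)))  # read without consuming
print(space.in_(("position", None, None, None)))     # consume the tuple
```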

    Using Mobile Augmented Reality to Improve Attention in Adults with Autism Spectrum Disorder

    Adults on the autism spectrum commonly experience impairments in attention management that hinder many other cognitive functions necessary to appreciate relationships between sensory stimuli. As autistic individuals generally identify as visual learners, the effective use of visual aids can be critical in developing life skills. In this brief paper, we propose a Mobile Augmented Reality for Attention (MARA) application, which addresses the lack of supportive, simple, and cost-effective solutions for autistic adults to train attention management skills. We present the proposed design, configuration, and implementation. Lastly, we discuss future directions for research.