
    ElectroCutscenes: Realistic Haptic Feedback in Cutscenes of Virtual Reality Games Using Electric Muscle Stimulation

    Cutscenes in Virtual Reality (VR) games enhance storytelling by delivering output in the form of visual, auditory, or haptic feedback (e.g., using vibrating handheld controllers). Since they lack interaction in the form of user input, cutscenes would significantly benefit from improved feedback. We introduce the concept and implementation of ElectroCutscenes, in which Electric Muscle Stimulation (EMS) is leveraged to elicit physical user movements that correspond to those of personal avatars in cutscenes of VR games while the user stays passive. Through a user study (N=22) in which users passively received kinesthetic feedback resulting in involuntary movements, we show that ElectroCutscenes significantly increases perceived presence and realism compared to both controller-based vibrotactile and no haptic feedback. Furthermore, we found preliminary evidence that combining visual and EMS feedback can evoke movements that are not actuated by either of them alone. We discuss how to enhance the realism and presence of cutscenes in VR games even when EMS can only partially, rather than completely, actuate the desired body movements.

    Augmented instructions: analysis of performance and efficiency of assembly tasks

    Augmented Reality (AR) technology makes it possible to present information in the user's line of sight, right at the point of use. This brings the capability to visualise complex information effectively for industrial maintenance applications, which typically rely on paper instructions and tacit knowledge developed over time. Existing research on AR instruction manuals has already shown its potential to reduce the time taken to complete assembly tasks, as well as to improve accuracy [1–3]. In this study, the outcomes of several aspects of AR instructions are explored, and their effects on the chosen Key Performance Indicators (KPIs) of task completion time, error rate, cognitive effort, and usability are assessed. A standardised AR assembly task is also described for performance comparison, and a novel AR experimental tool is presented, which takes advantage of the flexibility of internet-connected peripherals to explore various aspects of AR app design and isolate their effects. Results of the experiments are given here, providing insight into the most effective way of delivering information and promoting interaction between user and computer, in terms of user performance and acceptance.

    Enhanced device-based 3D object manipulation technique for handheld mobile augmented reality

    3D object manipulation is one of the most important tasks for handheld mobile Augmented Reality (AR) to reach its practical potential, especially for real-world assembly support. In this context, the techniques used to manipulate 3D objects are an important research area. Therefore, this study developed an improved device-based interaction technique within handheld mobile AR interfaces to solve the large-range 3D object rotation problem, as well as issues related to 3D object position and orientation deviations during manipulation. The research firstly enhanced the existing device-based 3D object rotation technique with an innovative control structure that utilizes the tilting and skewing amplitudes of the handheld mobile device to determine the rotation axes and directions of the 3D object. Whenever the device is tilted or skewed beyond the threshold values of the amplitudes, the 3D object rotates continuously at a pre-defined angular speed per second, preventing over-rotation of the handheld mobile device. Such over-rotation is a common occurrence when using the existing technique to perform large-range 3D object rotations, and it needs to be solved because it causes a 3D object registration error and a display issue in which the 3D object does not appear consistent within the user's range of view. Secondly, the existing device-based 3D object manipulation technique was restructured by separating the degrees of freedom (DOF) of 3D object translation and rotation, to prevent the position and orientation deviations caused by integrating both tasks into the same control structure. Next, an improved device-based interaction technique was developed, with better task completion times for 3D object rotation in isolation and for 3D object manipulation as a whole within handheld mobile AR interfaces. A pilot test was carried out before the main tests to determine several pre-defined values used in the control structure of the proposed 3D object rotation technique. A series of 3D object rotation and manipulation tasks was designed and developed as separate experimental tasks to benchmark the proposed 3D object rotation and manipulation techniques against existing ones on task completion time (s). Two groups of participants aged 19-24 years old were selected, one for each experiment, with each group consisting of sixteen participants. Each participant completed twelve trials, for a total of 192 trials per experiment across all participants. Repeated-measures analysis was used to analyze the data. The results statistically proved that the developed 3D object rotation technique markedly outperformed the existing technique, with significantly shorter task completion times of 2.04 s on easy tasks and 3.09 s on hard tasks when comparing the mean times over all successful trials. Considering the failed trials, the 3D object rotation technique was also 4.99% more accurate on easy tasks and 1.78% more accurate on hard tasks than the existing technique. Similar results extended to the 3D object manipulation tasks, where the proposed manipulation technique achieved an overall task completion time 9.529 s significantly shorter than that of the existing technique. Based on these findings, an improved device-based interaction technique was successfully developed to address the insufficient functionalities of the current technique.
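    A minimal Python sketch of the threshold-based rotation control structure described above; the threshold and speed constants are illustrative assumptions (the thesis determined such values in its pilot test), not the actual implementation:

    # Hypothetical constants; the thesis derived such values from a pilot test.
    TILT_THRESHOLD = 15.0    # degrees of device tilt that trigger rotation
    SKEW_THRESHOLD = 15.0    # degrees of device skew that trigger rotation
    ANGULAR_SPEED = 45.0     # pre-defined rotation speed in degrees per second

    def update_rotation(rotation, pitch, roll, skew, dt):
        """Advance the 3D object's Euler angles (degrees) for one frame.

        Tilting or skewing the device past a threshold selects the rotation
        axis, the sign of the amplitude selects the direction, and the object
        keeps rotating at a fixed angular speed while the pose is held, so the
        device itself never has to be over-rotated for large-range rotations.
        """
        step = ANGULAR_SPEED * dt
        if abs(pitch) > TILT_THRESHOLD:
            rotation['x'] += step if pitch > 0 else -step
        if abs(roll) > TILT_THRESHOLD:
            rotation['z'] += step if roll > 0 else -step
        if abs(skew) > SKEW_THRESHOLD:
            rotation['y'] += step if skew > 0 else -step
        return rotation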

    Augmented reality selection through smart glasses

    The smart glasses market continues to grow, raising the possibility that smart glasses will someday have the same presence in people's daily lives that smartphones already have today. Several interaction methods for smart glasses have been studied, but it is not yet clear which method is best for interacting with virtual objects. This research covers studies that focus on the different interaction methods for augmented reality applications, highlighting the interaction methods for smart glasses and the advantages and disadvantages of each. In this work, an indoor Augmented Reality prototype was developed, implementing three different interaction methods. The users' preferences and their willingness to perform each interaction method in public were studied. In addition, the reaction time, i.e., the time between the detection of a marker and the user's interaction with it, was measured. An outdoor Augmented Reality application was also developed to understand the different challenges between indoor and outdoor Augmented Reality applications. The discussion shows that users feel more comfortable using an interaction method similar to what they already use. However, the solution combining two interaction methods, the smart glasses' tap function and head movement, achieves results close to those of the controller. It is important to highlight that participants used each interaction method for the first time, without a prior learning phase; the results presented always refer to this first and only use. This suggests that the future of smart glasses interaction may be a merge of different interaction techniques.
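    The reaction-time measure defined above (the time between marker detection and the user's interaction with it) could be logged with a helper along these lines; this is an illustrative sketch, not the prototype's actual code:

    import time

    class ReactionTimer:
        """Records the delay between detecting a marker and the user's
        first interaction with it."""

        def __init__(self):
            self._detected_at = {}

        def marker_detected(self, marker_id):
            # Keep the earliest detection time for each marker.
            self._detected_at.setdefault(marker_id, time.monotonic())

        def user_interacted(self, marker_id):
            # Return the reaction time in seconds, or None if the marker
            # was never detected.
            start = self._detected_at.pop(marker_id, None)
            return None if start is None else time.monotonic() - start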

    The usage of fully immersive head-mounted displays in social everyday contexts

    Technology often evolves from decades of research in university and industrial laboratories and changes people's lives when it becomes available to the masses. In the interaction between technology and consumer, designs established in the laboratory environment must be adapted to the needs of everyday life. This thesis deals with the challenges arising from the development of fully immersive Head-Mounted Displays (HMDs) in laboratories towards their application in everyday contexts. Research on virtual reality (VR) technologies spans over 50 years and covers a wide field of topics, e.g., technology, system design, user interfaces, user experience, or human perception. Other disciplines, such as psychology or the teleoperation of robots, are examples of users of VR technology. The work in the previous examples was mainly carried out in laboratories or highly specialized environments, with the main goal of generating systems that are ideal for a single user conducting a particular task in VR. The newly emerging environments for the use of HMDs range from private homes to offices to convention halls. Even in public spaces such as public transport, cafés, or parks, immersive experiences are possible. However, current VR systems are not yet designed for these environments. Previous work on problems in the everyday environment deals with challenges such as preventing the user from colliding with a physical object, but it does not take into account the new social context an HMD user faces in these environments. Several people with different roles surround the user in these contexts. In contrast to laboratory scenarios, a non-HMD user, for example, neither shares the task with the HMD user nor is aware of the HMD user's state in VR. This thesis addresses the challenges introduced by this social context. For this purpose, I offer solutions to overcome the visual separation of the HMD user, and I suggest methods for investigating and evaluating the use of HMDs that are suitable for everyday contexts.
    First, we present concepts and insights to overcome the challenges arising from an HMD covering the user's face. In the private context, e.g., living rooms, one of the main challenges is the need for an HMD user to take off the HMD to be able to communicate with others. Reasons for taking off the HMD are the visual exclusion of the surrounding world for the HMD user and the HMD covering the user's face, hindering communication. Additionally, non-HMD users do not know about the virtual world the HMD user is acting in. Previous work suggests visualizing the bystanding non-HMD user or their actions in VR to address such challenges. The biggest advantage of a fully immersive experience, however, is the full separation from the physical surroundings, with the ultimate goal of being at another place. Therefore, I argue against integrating non-HMD users directly into VR. Instead, I introduce the approach of using a shared surface that provides a common basis for information and interaction between a non-HMD and an HMD user. Such a surface can be realized with a smartphone: the same information is presented to the HMD user in VR and to the non-HMD user on the shared surface, at the same physical position, enabling joint interaction at the surface. By examining four feedback modalities, we provide design guidelines that support an HMD user's touch interaction with such a shared surface. Further, we explore the possibility of informing the non-HMD user about the HMD user's state during a mixed-presence collaboration, e.g., if the HMD user is inattentive to the real world. For this purpose, I use a frontal display attached to the HMD. In particular, we explore the challenges of disturbed socialness and reduced collaboration quality by presenting the user's state on the front-facing display. In summary, our concepts and studies explore the application of a shared surface to overcome challenges in a co-located mixed-presence collaboration.
    Second, we look at the previously unconsidered challenges of using HMDs in public environments. The use of HMDs in these environments is becoming a reality due to the current development of HMDs that contain all necessary hardware in one portable device. Related work, in particular the work on public displays, already addresses interaction with technology in public environments. The form factor of the HMD, the need to put an HMD on the head, and especially the visual and mental exclusion of the HMD user are new and not yet understood challenges in these environments. We propose a problem space for semi-public (e.g., conference rooms) and public environments (e.g., marketplaces). With an explorative field study, we gain insight into the effects of the visual and physical separation of an HMD user from surrounding non-HMD users. Further, we present a method that helps to design and evaluate the unsupervised usage of HMDs in public environments, the audience funnel flow model for HMDs.
    Third, we look into methods suitable for monitoring and evaluating HMD-based experiences in the everyday context. One core measure is the experience of being present in the virtual world, i.e., the feeling of "being there". Consumer-grade HMDs are already able to create highly immersive experiences, leading to a strong presence experience in VR. Hence, we argue it is important to find and understand the remaining disturbances during the experience. Existing methods from the laboratory context are either not precise enough to find these disturbances (e.g., questionnaires) or cause high effort in their application and evaluation (e.g., physiological measures). In a literature review, we show that current research relies heavily on questionnaire-based approaches. I improve current qualitative approaches (interviews, questionnaires) to make the temporal variation of a VR experience assessable. I propose a drawing method that recognizes breaks in the presence experience, helps the user reflect on an HMD-based experience, and supports the communication between an interviewer and the HMD user. In the same paper, we propose a descriptive model that allows the objective description of the temporal variations of a presence experience from beginning to end. Further, I present and explore the concept of using electroencephalography to objectively detect an HMD user's visual stress. Objective detection supports the usage of HMDs in private and industrial contexts, as it helps ensure the health of the user. With my work, I would like to draw attention to the new challenges when using virtual reality technologies in everyday life. I hope that my concepts, methods, and evaluation tools will serve research and development on the usage of HMDs.
    In particular, I would like to promote the use in the everyday social context and thereby create an enriching experience for all.
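    The shared-surface concept above rests on both users consuming one model of the surface content, anchored at the smartphone's physical position. The sketch below illustrates that idea in Python; the data model and hit-testing are assumptions for illustration, not the thesis' implementation:

    from dataclasses import dataclass

    @dataclass
    class SharedSurface:
        """One content model rendered twice: on the physical smartphone for
        the non-HMD user and on its virtual twin for the HMD user."""
        elements: dict  # element id -> normalized (u, v) position on the surface

        def element_at(self, u, v, radius=0.1):
            # Resolve a touch at normalized surface coordinates (u, v).
            for eid, (eu, ev) in self.elements.items():
                if (eu - u) ** 2 + (ev - v) ** 2 <= radius ** 2:
                    return eid
            return None

    # Because both renderings share the model and the physical position, a tap
    # resolved on the smartphone and one resolved on its virtual twin agree.
    surface = SharedSurface({"accept": (0.5, 0.8), "reject": (0.5, 0.2)})
    assert surface.element_at(0.52, 0.78) == "accept"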

    Lightness, Brightness, and Transparency in Optical See-Through Augmented Reality

    Augmented reality (AR), as a key component of the future metaverse, has leaped from research labs to the consumer and enterprise markets. AR optical see-through (OST) devices utilize transparent optical combiners to provide visibility of the real environment as well as to superimpose virtual content on top of it. OST displays are distinct from existing media because of their optical additivity, meaning the light reaching the eyes is composed of both virtual content and the real background. The composition results in the intended virtual colors being distorted and perceived as transparent. When the luminance of the virtual content decreases, the perceived lightness and brightness decrease, and the perceived transparency increases. Lightness, brightness, and transparency are thus modulated by one physical dimension (luminance), and all interact with the background and with each other. In this research, we aim to identify and quantify the three perceptual dimensions, as well as to build mathematical models to predict them. In the first part of the study, we focused on perceived brightness and lightness with two experiments: a brightness partition scaling experiment to build brightness scales, and a diffuse white adjustment experiment to determine the absolute luminance level required for a diffuse white appearance on 2D and 3D AR stimuli. The second part of the research targeted perceived transparency in the AR environment with three experiments. Transparency was modulated by reducing the background Michelson contrast in either average luminance or peak-to-peak luminance difference, in order to investigate, and later illustrate, the fundamental mechanism evoking transparency perception. The first experiment measured transparency detection thresholds and confirmed that contrast sensitivity functions with contrast adaptation could model the thresholds. Subsequently, transparency perception was investigated through a direct anchored scaling experiment, building perceived transparency scales from the contrast ratio of the virtual content to the background. A contrast-ratio-based model was proposed to predict the perceived transparency scales. Finally, a transparency equivalency experiment between the two types of contrast modulation confirmed the mechanism difference and validated the proposed model.
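    As a worked illustration of the optical additivity described above (the Michelson definition is standard, but the added-light derivation and the symbol L_v for a uniform virtual luminance are ours, not the study's fitted model), superimposing virtual light on a background with peak luminances L_max and L_min necessarily reduces the background's Michelson contrast:

        C = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}},
        \qquad
        C' = \frac{(L_{\max} + L_v) - (L_{\min} + L_v)}{(L_{\max} + L_v) + (L_{\min} + L_v)}
           = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min} + 2L_v} < C.

    The ratio C'/C = (L_max + L_min) / (L_max + L_min + 2 L_v) falls as the virtual luminance rises, which is one way to read the contrast-ratio-based modulation of perceived transparency studied here.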

    Around-Body Interaction: Leveraging Limb Movements for Interacting in a Digitally Augmented Physical World

    Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction with information in a digitally augmented physical world. For interacting with such devices, three main types of input have emerged so far, besides not-very-intuitive finger gestures: 1) touch input on the frame of the device, 2) touch input on accessories (controllers), and 3) voice input. While these techniques have both advantages and disadvantages depending on the current situation of the user, they largely ignore the skills and dexterity that we show when interacting with the real world: throughout our lives, we have trained extensively to use our limbs to interact with and manipulate the physical world around us. This thesis explores how the skills and dexterity of our upper and lower limbs, acquired and trained in interacting with the real world, can be transferred to the interaction with HMDs. Thus, this thesis develops the vision of around-body interaction, in which we use the space around our body, defined by the reach of our limbs, for fast, accurate, and enjoyable interaction with such devices. This work contributes four interaction techniques, two for the upper limbs and two for the lower limbs: The first contribution shows how the proximity between our head and hand can be used to interact with HMDs. The second contribution extends the interaction with the upper limbs to multiple users and illustrates how the registration of augmented information in the real world can support cooperative use cases. The third contribution shifts the focus to the lower limbs and discusses how foot taps can be leveraged as an input modality for HMDs. The fourth contribution presents how lateral shifts of the walking path can be exploited for mobile and hands-free interaction with HMDs while walking.
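    As a rough sketch of the first contribution, head-hand proximity can be mapped to a continuous input value; the distance thresholds and the linear normalization below are assumptions for illustration, not the thesis' actual technique:

    import math

    NEAR = 0.15  # assumed head-hand distance (m) at which input is fully engaged
    FAR = 0.60   # assumed distance (m) at which input is fully released

    def proximity_input(head_pos, hand_pos):
        """Map the distance between tracked head and hand positions
        ((x, y, z) tuples in metres) to a normalized value in [0, 1]."""
        d = math.dist(head_pos, hand_pos)
        t = (FAR - d) / (FAR - NEAR)
        return max(0.0, min(1.0, t))

    # Example: a hand 20 cm from the head yields a nearly engaged input.
    print(proximity_input((0.0, 1.7, 0.0), (0.0, 1.5, 0.0)))  # -> ~0.89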

    The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments

    This thesis documents the design and application of Mixed Reality (MR) within a nuclear manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The DAAC is a proof-of-concept system combining full-body tracking within a room-sized environment with a bi-directional feedback mechanism to allow communication between users within the Virtual Environment (VE) and a manufacturing cell. This allows for training, remote assistance, delivery of work instructions, and data capture within a manufacturing cell. The research underpinning the DAAC encompasses four main areas: the nuclear industry, Virtual Reality (VR) and MR technology, MR within manufacturing, and finally the 4th Industrial Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user movements within a real manufacturing cell, which can be transferred in real time to a VE, creating a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This allows for the capture of implicit knowledge from operators within the real manufacturing cell, as well as the transfer of that knowledge to future operators. Additionally, users can connect to the VE from anywhere in the world. In this way, experts are able to communicate with the users in the real manufacturing cell and assist with their training. The human tracking data fills an identified gap in the IR4.0 network of Cyber Physical Systems (CPS), and could allow for future optimisations within manufacturing systems, Material Requirements Planning (MRP), and Enterprise Resource Planning (ERP). This project is a demonstration of how MR could prove valuable within nuclear manufacture. The DAAC is designed to be low cost, in the hope that this will allow its use by groups who have traditionally been priced out of MR technology. This could help Small and Medium Enterprises (SMEs) close the double digital divide between themselves and larger global corporations. For larger corporations, it offers the benefit of being low cost and, consequently, easier to roll out across the value chain. Skills developed in one area can also be transferred to others across the internet, as users from one manufacturing cell can watch and communicate with those in another. However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six, and, prior to its wider application, further testing is required to assess and improve the technology. The work was patented in the UK (S. Reddish et al., 2017a), the US (S. Reddish et al., 2017b), and China (S. Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover the methods of bi-directional feedback through which users can interact from the digital to the real and vice versa.
    Keywords: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture, Digital Twin, Cyber Physical System
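    The real-time transfer of tracked movements into the VE could look like the following sketch; the endpoint, message format, and fusion helper are assumptions for illustration (the abstract does not specify the DAAC's transport):

    import json
    import socket
    import time

    VE_HOST, VE_PORT = "ve.example.local", 9000  # hypothetical VE server endpoint

    def stream_skeletons(get_fused_skeletons, hz=30):
        """Send fused multi-Kinect skeleton frames to the VE at a fixed rate.

        get_fused_skeletons: callable returning one dict per tracked user,
        mapping joint names to (x, y, z) positions in cell coordinates.
        """
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        period = 1.0 / hz
        while True:
            frame = {"t": time.time(), "skeletons": get_fused_skeletons()}
            sock.sendto(json.dumps(frame).encode("utf-8"), (VE_HOST, VE_PORT))
            time.sleep(period)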