
    Examining the use of visualisation methods for the design of interactive systems

    Human-Computer Interaction (HCI) design has historically involved people from different fields. Designing HCI systems with people of varying backgrounds and expertise can bring different perspectives and ideas, but discipline-specific language and design methods can hinder such collaborations. The application of visualisation methods is a way to overcome these challenges, but to date selection tools tend to focus on a single facet of HCI design methods, and no research has attempted to assemble a collection of HCI visualisation methods. To fill this gap, this research seeks to establish an inventory of HCI visualisation methods and identify ways of selecting amongst them. Creating the inventory of HCI methods would enable designers to discover and learn about methods that they may not have used before or be familiar with. Categorising the methods provides a structure for new and experienced designers to determine appropriate methods for their design project. The aim of this research is to support designers in the development of HCI systems through better selection and application of visualisation methods. This is achieved through four phases. In the first phase, three case studies are conducted to investigate the challenges and obstacles that influence the choice of a design approach in the development of HCI systems. The findings from the three case studies helped to form the design requirements for a visualisation methods selection and application guide. In the second phase, the Guide is developed. In the third phase, the Guide is evaluated: it is employed in the development of a serious training game to demonstrate its applicability. In the fourth phase, a user study is designed to evaluate the serious training game, and through this evaluation the Guide is validated. This research has contributed to the knowledge surrounding visualisation tools used in the design of interactive systems. The compilation of HCI visualisation methods establishes an inventory of methods for interaction design. The identification of Selection Approaches brings together the ways in which visualisation methods are organised and grouped. By mapping visualisation methods to Selection Approaches, this study has provided a way for practitioners to select a visualisation method to support their design practice. The development of the Selection Guide provided five filters, which help designers to identify suitable visualisation methods based on the nature of the design challenge. The development of the Application Guide presented the methodology of each visualisation method in a consistent format, which makes method comparison easier and ensures there is comprehensive information for each method. A user study evaluating the serious training game is presented. Two learning objectives were identified and mapped to Bloom's Taxonomy to advocate an approach for like-for-like comparison with future studies.
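    As a loose illustration of how a filter-based Selection Guide like the one described above could be queried, the sketch below filters a small inventory of methods by attribute values. The attribute names, method entries and filter values are hypothetical placeholders, not the Guide's actual five filters.

```python
from dataclasses import dataclass

# Hypothetical record for one entry in a visualisation-method inventory.
# The attributes below are illustrative stand-ins for the Guide's filters,
# not the actual filter set used in the thesis.
@dataclass
class VisualisationMethod:
    name: str
    design_stage: str   # e.g. "ideation", "evaluation"
    fidelity: str       # e.g. "low", "high"
    audience: str       # e.g. "multidisciplinary team", "end users"
    output: str         # e.g. "static", "interactive"

INVENTORY = [
    VisualisationMethod("Storyboard", "ideation", "low", "multidisciplinary team", "static"),
    VisualisationMethod("Interactive prototype", "evaluation", "high", "end users", "interactive"),
    VisualisationMethod("Customer journey map", "ideation", "low", "multidisciplinary team", "static"),
]

def select_methods(inventory, **filters):
    """Return the methods whose attributes match every supplied filter value."""
    return [m for m in inventory
            if all(getattr(m, attr) == value for attr, value in filters.items())]

# Example: a designer early in a project, working with a mixed-discipline team.
for method in select_methods(INVENTORY, design_stage="ideation", audience="multidisciplinary team"):
    print(method.name)
```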

    A Survey of Interaction Techniques for Interactive 3D Environments

    Various interaction techniques have been developed for interactive 3D environments. This paper presents an up-to-date and comprehensive review of the state of the art of non-immersive interaction techniques for Navigation, Selection & Manipulation, and System Control, including a basic introduction to the topic, the challenges, and an examination of a number of popular approaches. We hope that this survey can aid both researchers and developers of interactive 3D applications in gaining a clearer overview of the topic, and in particular can be useful for practitioners and researchers who are new to the field of interactive 3D graphics.

    Interfaces for human-centered production and use of computer graphics assets

    The abstract is in the attachment.

    The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments

    This thesis documents the design and application of Mixed Reality (MR) within a nuclear manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The DAAC is a proof-of-concept system combining full-body tracking within a room-sized environment with a bi-directional feedback mechanism to allow communication between users within the Virtual Environment (VE) and a manufacturing cell. This allows for training, remote assistance, delivery of work instructions, and data capture within a manufacturing cell. The research underpinning the DAAC encompasses four main areas: the nuclear industry; Virtual Reality (VR) and MR technology; MR within manufacturing; and the 4th Industrial Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user movements within a real manufacturing cell, which can be transferred in real time to a VE, creating a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This allows for the capture of implicit knowledge from operators within the real manufacturing cell, as well as the transfer of that knowledge to future operators. Additionally, users can connect to the VE from anywhere in the world. In this way, experts are able to communicate with the users in the real manufacturing cell and assist with their training. The human tracking data fills an identified gap in the IR4.0 network of Cyber Physical Systems (CPS), and could allow for future optimisations within manufacturing systems, Material Resource Planning (MRP) and Enterprise Resource Planning (ERP). This project is a demonstration of how MR could prove valuable within nuclear manufacture. The DAAC is designed to be low cost, in the hope that this will allow its use by groups who have traditionally been priced out of MR technology. This could help Small to Medium Enterprises (SMEs) close the double digital divide between themselves and larger global corporations. For larger corporations it offers the benefit of being low cost and, consequently, easier to roll out across the value chain. Skills developed in one area can also be transferred to others across the internet, as users from one manufacturing cell can watch and communicate with those in another. However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six and, prior to its wider application, further testing is required to assess and improve the technology. The work was patented in the UK (S. Reddish et al., 2017a), the US (S. Reddish et al., 2017b) and China (S. Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover the methods of bi-directional feedback through which users can interact from the digital to the real and vice versa.
    Key words: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture, Digital Twin, Cyber Physical System
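    As a rough illustration of the tracking data flow described above (several Kinect sensors observing one cell, with the result streamed to the VE), the sketch below assumes a simple averaging fusion and a JSON message format. The DAAC's actual data model, fusion logic and network protocol are not given in the abstract, so every name here is hypothetical.

```python
import json
import statistics
from dataclasses import dataclass

# Hypothetical joint observation from one Kinect sensor in the cell.
@dataclass
class JointObservation:
    sensor_id: str
    joint: str   # e.g. "hand_right"
    x: float
    y: float
    z: float

def fuse_observations(observations):
    """Average each joint as seen by several overlapping sensors (naive fusion)."""
    grouped = {}
    for obs in observations:
        grouped.setdefault(obs.joint, []).append((obs.x, obs.y, obs.z))
    return {joint: tuple(statistics.fmean(axis) for axis in zip(*points))
            for joint, points in grouped.items()}

def build_ve_message(user_id, observations):
    """Serialise one user's fused pose for transmission to the virtual environment."""
    return json.dumps({"user": user_id, "joints": fuse_observations(observations)})

# Example: the right hand seen by two Kinects with slightly different readings.
frame = [
    JointObservation("kinect_1", "hand_right", 0.52, 1.10, 2.31),
    JointObservation("kinect_2", "hand_right", 0.50, 1.12, 2.29),
]
print(build_ve_message("operator_7", frame))
```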

    User-oriented markerless augmented reality framework based on 3D reconstruction and loop closure detection

    An augmented reality (AR) system needs to track the user's view to perform accurate augmentation registration. The present research proposes a conceptual markerless, natural-feature-based AR framework, the process for which is divided into two stages: an offline database training session for the application developers, and an online AR tracking and display session for the final users. In the offline session, two types of 3D reconstruction application, RGBD-SLAM and SfM, are integrated into the development framework for building the reference template of a target environment. The performance and applicable conditions of these two methods are presented in the present thesis, and application developers can choose which method to apply according to their development demands. A general development user interface is provided to the developer for interaction, including a simple GUI tool for augmentation configuration. The present proposal also applies a Bag of Words strategy to enable rapid "loop-closure detection" in the online session, efficiently querying the application user's view against the trained database to locate the user pose. The rendering and display of the augmentation is currently implemented within an OpenGL window, which is one aspect of the research that merits future detailed investigation and development.
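    As a hedged sketch of the bag-of-words retrieval step described above, the snippet below quantises ORB descriptors against a small k-means vocabulary and ranks stored keyframes by histogram similarity to the user's current view. The vocabulary size, the feature choice and the use of plain k-means on binary descriptors are simplifications for illustration, not the thesis's actual pipeline.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)

def describe(image_path):
    """Extract ORB descriptors for one image (cast to float32 for k-means)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc.astype(np.float32)

def build_vocabulary(all_descriptors, k=64):
    """Cluster descriptors from the offline training images into k visual words."""
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(np.vstack(all_descriptors), k, None,
                               criteria, 3, cv2.KMEANS_PP_CENTERS)
    return centers

def bow_histogram(descriptors, vocabulary):
    """Assign each descriptor to its nearest visual word and L2-normalise the counts."""
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(vocabulary)).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-9)

def best_match(query_hist, database_hists):
    """Return the index and score of the stored keyframe most similar to the query view."""
    scores = [float(query_hist @ h) for h in database_hists]
    return int(np.argmax(scores)), max(scores)

# Offline: vocab = build_vocabulary([describe(p) for p in keyframe_paths])
#          db = [bow_histogram(describe(p), vocab) for p in keyframe_paths]
# Online:  idx, score = best_match(bow_histogram(describe("user_view.png"), vocab), db)
```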