Examining the use of visualisation methods for the design of interactive systems
Human-Computer Interaction (HCI) design has historically involved people from different fields. Designing HCI systems with people of varying backgrounds and expertise can bring different perspectives and ideas, but discipline-specific language and design methods can hinder such collaborations. The application of visualisation methods is a way to overcome these challenges, but to date selection tools tend to focus on a single facet of HCI design methods, and no research has attempted to assemble a collection of HCI visualisation methods. To fill this gap, this research seeks to establish an inventory of HCI visualisation methods and to identify ways of selecting amongst them. Creating the inventory enables designers to discover and learn about methods that they may not have used before or be familiar with. Categorising the methods provides a structure for new and experienced designers to determine appropriate methods for their design projects.
The aim of this research is to support designers in the development of Human-Computer Interaction (HCI) systems through better selection and application of visualisation methods. This is achieved through four phases. In the first phase, three case studies are conducted to investigate the challenges and obstacles that influence the choice of a design approach in the development of HCI systems. The findings from these case studies inform the design requirements for a visualisation methods selection and application guide. In the second phase, the Guide is developed. In the third phase, the Guide is evaluated: it is employed in the development of a serious training game to demonstrate its applicability. In the fourth phase, a user study is designed to evaluate the serious training game, and through this evaluation the Guide is validated.
This research has contributed to the knowledge surrounding visualisation tools used in the design of interactive systems. The compilation of HCI visualisation methods establishes an inventory of methods for interaction design. The identification of Selection Approaches brings together the ways in which visualisation methods are organised and grouped. By mapping visualisation methods to Selection Approaches, this study has provided a way for practitioners to select a visualisation method to support their design practice. The development of the Selection Guide provided five filters, which help designers to identify suitable visualisation methods based on the nature of the design challenge. The development of the Application Guide presented the methodology of each visualisation method in a consistent format. This eases comparison between methods and ensures that comprehensive information is available for each one. A user study showing the evaluation of a serious training game is presented. Two learning objectives were identified and mapped to Bloom's Taxonomy to advocate an approach for like-for-like comparison with future studies.
Technology-assisted healthcare: exploring the use of mobile 3D visualisation technology to augment home-based fall prevention assessments
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Falls often cause devastating injuries which precipitate hospital and long-term care admission and result in an increased burden on health care services. Fall prevention interventions are used to overcome fall risk factors in an ageing population. There is an increasing need for technology-assisted interventions to reduce health care costs, whilst also lessening the burden that an ageing population increasingly has on health care services. Research efforts have been spent on reducing intrinsic fall risk factors (i.e. functional ability deficits and balance impairments) in the older adult population through the use of technology-assisted interventions, but relatively little effort has been expended on extrinsic risk factors (i.e. unsuitable environmental conditions and lack of assistive equipment use), despite the drive to move healthcare out of the clinical setting and into the patient's home. In the field of occupational therapy, the extrinsic fall-risk assessment process (EFAP) is a prominent preventive intervention used to promote independent living and alleviate fall risk factors via the provision of assistive equipment prescribed for use by patients in their home environment. Currently, paper-based forms with measurement guidance presented in the form of 2D diagrams are used in the EFAP. These indicate the precise points and dimensions on a furniture item that must be measured as part of an assessment for equipment. However, this process involves challenges, such as inappropriate equipment being prescribed due to inaccurate measurements taken and recorded from the misinterpretation of the measurement guidance. This is largely due to the poor visual representation of guidance provided by existing paper-based forms, resulting in high levels of equipment abandonment by patients.
Consequently, there is a need to overcome the challenges mentioned above by addressing the limitations of the paper-based approach to visualising measurement guidance for equipment. To this end, this thesis proposes the use of 3D visualisation technology in the form of a novel mobile 3D application (Guidetomeasure) to visualise guidance in a readily perceptible manner and support stakeholders with equipment prescriptions. To ensure that the artefact is a viable improvement over its 2D predecessor, it was designed, developed and empirically evaluated with patients and clinicians alike through five user-centred design and experimental studies. A mixed-method analysis was undertaken to establish the design, effectiveness, efficiency and usability of the proposed artefact, compared with conventional approaches used for data collection and equipment prescription. The research findings show that both patients and clinicians consider 3D visualisation a promising alternative tool with functionality to overcome existing issues faced in the EFAP. Overall, this research makes a conceptual contribution (secondary) to the research domain and a software artefact (primary) that significantly improves practice, resulting in implications and recommendations for the wider healthcare provision (primary).
Funded by the Engineering and Physical Sciences Research Council (EPSRC).
Mobile depth sensing technology and algorithms with application to occupational therapy healthcare
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
The UK government is striving to shift its current healthcare delivery model from clinician-oriented services to patient and self-care-oriented intervention strategies. It seeks to do so through Information and Communication Technology (ICT) and Computer Mediated Reality Technologies (CMRT) as a key strategy to overcome the ever-increasing scarcity of healthcare resources and rising costs. To this end, in the UK, paper-based information systems have exhibited their limitations in providing apposite care. At the national level, The Royal College of Occupational Therapists (RCOT) identifies home visits and modifications as key levers in a multifactorial health programme to evaluate interventions for older people who have a history of falling or are identified as prone to falling. Prescribing Assistive Equipment (AE) is one such mechanism that seeks to reduce the risk of falling whilst promoting the continued independence of physical dexterity and mobility in older adults at home. In the UK, the yearly cost of falls is estimated at £2.3 billion. Further evidence places a 30% to 60% abandonment rate on prescribed AE, by and large due to a 'poor fit' and measurement inaccuracies.
To remain aligned with the national strategy, and to assist in the eradication of measurement inaccuracies, this thesis employs Mobile Depth Sensing and Motion Tracking Devices (MDSMTDs) to assist occupational therapists (OTs) in the process of digitally measuring the extrinsic fall-risk factors for the provision of AE. The quintessential component in this assessment lies in the measurement of fittings and furniture items in the home. To digitise and aid in this process, the artefact presented in this thesis employs stereo computer-vision and camera calibration algorithms to extract edges in 3D space. It modifies the Sobel-Feldman convolution filter by reducing the magnitude response and employs the camera intrinsic parameters as a mechanism to calculate the distortion matrix for interpolation between the edges and the 3D point cloud. Further Augmented Reality User Experience (AR-UX) facets are provided to digitise current state-of-the-art clinical guidance and overlay its instructions onto the real world (i.e., 3D space).
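The edge-extraction step described above can be sketched roughly as follows. This is a minimal illustration of a Sobel-Feldman convolution with a scaled magnitude response, assuming a grayscale image held in a NumPy array; the `magnitude_scale` and `threshold` parameters are illustrative stand-ins, not the values used in the thesis.

```python
import numpy as np

def sobel_edges(img, magnitude_scale=0.5, threshold=50.0):
    """Correlate a grayscale image with the Sobel-Feldman kernels and
    return a binary edge map. magnitude_scale loosely mimics the
    'reduced magnitude response' described in the text (assumed value)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                 # vertical gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Accumulate the 3x3 window response without an explicit pixel loop.
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = magnitude_scale * np.hypot(gx, gy)  # scaled gradient magnitude
    return mag > threshold
```

The resulting edge map would then feed the interpolation against the depth sensor's 3D point cloud, a step that depends on the device's calibrated intrinsic parameters and is omitted here.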
Empirical mixed-methods assessment revealed that, in terms of accuracy, the artefact exhibited enhanced performance over current paper-based guidance. In terms of consistency, the artefact can rectify measurement inconsistencies, but there is still a wide range of factors that can influence the integrity of the point cloud with respect to the device's point of view, holding positions and measurement speed. OTs' usability ratings and adoption preferences also favour the artefact. In conclusion, this thesis demonstrates that MDSMTDs are a promising alternative to existing paper-based measurement practices, as OTs appear to prefer the digital system and can take measurements more efficiently and accurately.
A Survey of Interaction Techniques for Interactive 3D Environments
Various interaction techniques have been developed for interactive 3D environments. This paper presents an up-to-date and comprehensive review of the state of the art of non-immersive interaction techniques for Navigation, Selection & Manipulation, and System Control, including a basic introduction to the topic, the challenges, and an examination of a number of popular approaches. We hope that this survey can aid both researchers and developers of interactive 3D applications in having a clearer overview of the topic, and in particular can be useful for practitioners and researchers that are new to the field of interactive 3D graphics.
Interfaces for human-centered production and use of computer graphics assets
The abstract is in the attachment.
The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments
This thesis documents the design and application of Mixed Reality (MR) within a nuclear manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The DAAC is a proof-of-concept system combining full-body tracking within a room-sized environment and a bi-directional feedback mechanism to allow communication between users within the Virtual Environment (VE) and a manufacturing cell. This allows for training, remote assistance, delivery of work instructions, and data capture within a manufacturing cell.
The research underpinning the DAAC encompasses four main areas: the nuclear industry, Virtual Reality (VR) and MR technology, MR within manufacturing, and finally the 4th Industrial Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user movements within a real manufacturing cell, which can be transferred in real time to a VE, creating a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This allows for the capture of implicit knowledge from operators within the real manufacturing cell, as well as transfer of that knowledge to future operators. Additionally, users can connect to the VE from anywhere in the world. In this way, experts are able to communicate with the users in the real manufacturing cell and assist with their training. The human tracking data fills an identified gap in the IR4.0 network of Cyber Physical Systems (CPS), and could allow for future optimisations within manufacturing systems, Material Resource Planning (MRP) and Enterprise Resource Planning (ERP).
This project is a demonstration of how MR could prove valuable within nuclear manufacture. The DAAC is designed to be low cost, in the hope that this will allow its use by groups who have traditionally been priced out of MR technology. This could help Small to Medium Enterprises (SMEs) close the double digital divide between themselves and larger global corporations. For larger corporations it offers the benefit of being low cost and is consequently easier to roll out across the value chain. Skills developed in one area can also be transferred to others across the internet, as users from one manufacturing cell can watch and communicate with those in another. However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six and, prior to its wider application, further testing is required to assess and improve the technology.
The work was patented in the UK (S. Reddish et al., 2017a), the US (S. Reddish et al., 2017b) and China (S. Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover the methods of bi-directional feedback through which users can interact from the digital to the real and vice versa.
Stephen Reddish
Mixed Mode Realities in Nuclear Manufacturing
Key words: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture, Digital Twin, Cyber Physical System
User-oriented markerless augmented reality framework based on 3D reconstruction and loop closure detection
An augmented reality (AR) system needs to track the user's view to perform accurate augmentation registration. The present research proposes a conceptual markerless, natural-feature-based AR framework, the process for which is divided into two stages: an offline database training session for application developers, and an online AR tracking and display session for the final users. In the offline session, two types of 3D reconstruction application, RGBD-SLAM and SfM, are integrated into the development framework for building the reference template of a target environment. The performance and applicable conditions of these two methods are presented in the present thesis, and application developers can choose which method to apply according to their development demands. A general development user interface is provided to the developer for interaction, including a simple GUI tool for augmentation configuration. The present proposal also applies a Bag of Words strategy to enable rapid "loop-closure detection" in the online session, for efficiently querying the application user's view against the trained database to locate the user pose. The rendering and display process of augmentation is currently implemented within an OpenGL window, which is one result of the research that is worthy of future detailed investigation and development.
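The Bag of Words query step described above can be sketched roughly as follows. This is a generic illustration of quantising local feature descriptors into a word histogram and matching against trained views; the vocabulary, descriptors and similarity measure are illustrative stand-ins, not the framework's actual implementation.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantise local feature descriptors against a visual vocabulary
    and return a normalised word-frequency histogram."""
    # Nearest visual word for each descriptor (Euclidean distance).
    dists = np.linalg.norm(
        descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

def query_database(query_hist, database_hists):
    """Return the index of the most similar trained view, scored here by
    histogram intersection (an assumed similarity measure)."""
    scores = [np.minimum(query_hist, h).sum() for h in database_hists]
    return int(np.argmax(scores))
```

In a full pipeline the best-matching trained view would seed the pose estimate for augmentation registration; that refinement step is outside the scope of this sketch.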