467 research outputs found

    Object selection and scaling using multimodal interaction in mixed reality

    Mixed Reality (MR) is the next evolution of human-computer interaction, as MR can combine the physical and digital environments so that they coexist. Interaction is still a major research area in Augmented Reality (AR) but has received far less attention in MR, because current MR display techniques are not yet robust and intuitive enough to let users interact naturally with 3D content. New user-interaction techniques have been widely studied; the most advanced arise when the system is able to invoke more than one input modality. Multimodal interaction promises intuitive manipulation of multiple objects with gestures. This paper discusses a multimodal interaction technique using gesture and speech, along with the proposed experimental setup for implementing multimodal input in the MR interface. Real hand gestures are combined with speech inputs in MR to perform spatial object manipulations. The paper explains the implementation stage, which involves interaction using gesture and speech inputs to enhance the user experience in the MR workspace. After gesture input and speech commands are acquired, spatial manipulation for selection and scaling using multimodal interaction is invoked, and the paper ends with a discussion.
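
    A minimal sketch of this kind of gesture-and-speech fusion for selection and scaling is given below. It is not code from the paper: the classes, command words, and the 0.5-second fusion window are assumptions, chosen only to illustrate how a pointing gesture and a spoken command might be paired in time and applied to a virtual object.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    scale: float = 1.0
    selected: bool = False

@dataclass
class GestureEvent:
    kind: str               # e.g. "point" or "pinch" (hypothetical labels)
    target: VirtualObject   # object currently hit by the hand ray
    timestamp: float        # seconds

@dataclass
class SpeechEvent:
    command: str            # e.g. "select", "bigger", "smaller" (hypothetical)
    timestamp: float        # seconds

FUSION_WINDOW = 0.5  # assumed: gesture and speech within 0.5 s count as one intent

def fuse(gesture: GestureEvent, speech: SpeechEvent) -> None:
    """Combine one gesture event and one speech command into a manipulation."""
    if abs(gesture.timestamp - speech.timestamp) > FUSION_WINDOW:
        return  # inputs too far apart in time: ignore the pairing
    obj = gesture.target
    if speech.command == "select" and gesture.kind == "point":
        obj.selected = True            # selection requires a pointing gesture
    elif speech.command == "bigger" and obj.selected:
        obj.scale *= 1.25              # scale up the selected object
    elif speech.command == "smaller" and obj.selected:
        obj.scale *= 0.8               # scale down the selected object

# Example: the user points at a cube and says "select", then "bigger".
cube = VirtualObject("cube")
fuse(GestureEvent("point", cube, 1.00), SpeechEvent("select", 1.20))
fuse(GestureEvent("point", cube, 2.00), SpeechEvent("bigger", 2.10))
print(cube.selected, cube.scale)  # -> True 1.25
```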

    Human-Computer Interaction

    In this book the reader will find a collection of 31 papers presenting different facets of Human-Computer Interaction: the results of research projects and experiments as well as new approaches to designing user interfaces. The book is organized according to the following main topics, in sequential order: new interaction paradigms, multimodality, usability studies of several interaction mechanisms, human factors, universal design, and development methodologies and tools.

    A Haptic Study to Inclusively Aid Teaching and Learning in the Discipline of Design

    Designers are known to use a blend of manual and virtual processes to produce design prototype solutions. For modern designers, computer-aided design (CAD) tools are an essential requirement for developing design concept solutions. CAD, together with augmented reality (AR) systems, has altered the face of design practice, as witnessed by the way a designer can now change a 3D concept's shape, form, color, pattern, and texture at the click of a button in minutes, rather than laboring over a physical model in the studio for hours, as in the classic approach. However, CAD can often limit a designer's experience of being 'hands-on' with materials and processes. The rise of machine haptic (MH) tools has offered great potential for designers to feel more 'hands-on' with virtual modeling processes. Through the use of MH, product designers are able to control, virtually sculpt, and manipulate virtual 3D objects on-screen. Design practitioners are well placed to make use of haptics to augment 3D concept creation, which is traditionally a highly tactile process. For similar reasons, non-sighted and visually impaired (NS, VI) communities could also benefit from using MH tools to increase touch-based interactions, thereby creating better access for NS, VI designers. In spite of this, the use of MH within the design industry (specifically product design), or by the non-sighted community, is still in its infancy, so the full benefit of haptics in aiding non-sighted designers has not yet been fully realised. This thesis empirically investigates the use of multimodal MH as a step closer to improving the virtual hands-on process, for the benefit of NS, VI and fully sighted (FS) Designer-Makers. The thesis comprises four experiments, embedded within four case studies (CS1-4). Case studies 1 and 2 worked with self-employed NS, VI Art Makers at Henshaws College for the Blind and Visually Impaired and examined the effects of haptics on NS, VI users' evaluations of experience. Case studies 3 and 4, featuring experiments 3 and 4, were designed to examine the effects of haptics on distance-learning design students at the Open University. The empirical results from all four case studies showed that NS, VI users were able to navigate and perceive virtual objects via the force feedback from the haptically rendered objects on-screen. Moreover, they were aided by the full multimodal MH assistance, which in CS2 appeared to offer better assistance to NS than to FS participants. In CS3 and CS4, MH and multimodal assistance afforded equal assistance to NS, VI, and FS participants, but haptics were not as successful in bettering the times recorded in the manual (M) conditions; however, the collision data showed little statistical difference between M and MH. The thesis shows that multimodal MH systems, specifically used in kinesthetic mode, have enabled humans (non-disabled and disabled) to credibly judge objects within the virtual realm. It also shows that multimodal augmented tooling can improve interaction and afford better access to the graphical user interface for a wider body of users.

    Literacy for digital futures: Mind, body, text

    The unprecedented rate of global, technological, and societal change calls for a radical, new understanding of literacy. This book offers a nuanced framework for making sense of literacy by addressing knowledge as contextualised, embodied, multimodal, and digitally mediated. In today’s world of technological breakthroughs, social shifts, and rapid changes to the educational landscape, literacy can no longer be understood through established curriculum and static text structures. To prepare teachers, scholars, and researchers for the digital future, the book is organised around three themes – Mind and Materiality; Body and Senses; and Texts and Digital Semiotics – to shape readers’ understanding of literacy. Opening up new interdisciplinary themes, Mills, Unsworth, and Scholes confront emerging issues for next-generation digital literacy practices. The volume helps new and established researchers rethink dynamic changes in the materiality of texts and their implications for the mind and body, and features recommendations for educational and professional practice

    Multi-Sensory Interaction for Blind and Visually Impaired People

    This book conveys the visual elements of artwork to the visually impaired through various sensory elements, opening a new perspective on appreciating visual artwork. In addition, it explores techniques for expressing a color code by integrating patterns, temperatures, scents, music, and vibrations, and presents future research topics. A holistic experience using multi-sensory interaction is provided to people with visual impairment to convey the meaning and contents of a work through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork using a variety of senses, including touch, temperature, tactile pattern, and sound, helps them to appreciate artwork at a deeper level than can be achieved with hearing or touch alone. The development of such art appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. The development of these new concept aids ultimately expands opportunities for the non-visually impaired as well as the visually impaired to enjoy works of art, and breaks down the boundaries between the disabled and the non-disabled in the field of culture and arts through continuous efforts to enhance accessibility. In addition, the multi-sensory expression and delivery tools developed here can be used as educational tools to increase product and artwork accessibility and usability through multimodal interaction. Training with the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind's eye.

    Visual Human-Computer Interaction


    Supporting Collaborative Learning in Computer-Enhanced Environments

    As computers have expanded into almost every aspect of our lives, the ever-present graphical user interface (GUI) has begun facing its limitations. Demanding its own share of attention, the GUI moves some of the user's focus away from the task, particularly when the task is 3D in nature or requires collaboration. Researchers are therefore exploring other means of human-computer interaction. Individually, some of these new techniques show promise, but it is the combination of multiple approaches into larger systems that will allow us to more fully replicate our natural behavior within a computing environment. The more capable computers become of understanding our varied natural behavior (speech, gesture, etc.), the less we need to adjust our behavior to conform to computers' requirements. Such capabilities are particularly useful where children are involved, and they make using computers in education all the more appealing. Described herein are two approaches to, and implementations of, educational computer systems that work not by user manipulation of virtual objects but by user manipulation of physical objects within the environment. These systems demonstrate how new technologies can promote collaborative learning among students, thereby enhancing both the students' knowledge and their ability to work together to achieve even greater learning. With these systems, the horizon of computer-facilitated collaborative learning has been expanded; this expansion includes the identification of issues for general and special education students and suggested applications in a variety of domains.

    Sonic interactions in virtual environments

    This book tackles the design of 3D spatial interactions from an audio-centered and audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are: immersive audio, the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies; sonic interaction, the human-computer interplay through auditory feedback in VEs; and VR systems, which naturally support multimodal integration, impacting different application domains. Sonic Interactions in Virtual Environments will feature state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, humanities, and beyond. Their mission is to shape an emerging field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to increase awareness among VR communities, researchers, and practitioners of the importance of sonic elements when designing immersive environments.