2,032 research outputs found

    All Hands on Deck: Choosing Virtual End Effector Representations to Improve Near Field Object Manipulation Interactions in Extended Reality

    Extended reality (XR) is the widely adopted umbrella term that collectively describes Virtual reality (VR), Augmented reality (AR), and Mixed reality (MR) technologies. Together, these technologies extend the reality we experience, either by creating a fully immersive experience, as in VR, or by blending the virtual and real worlds, as in AR and MR. The sustained success of XR in the workplace largely hinges on its ability to facilitate efficient user interactions. As with objects in the real world, users in XR typically interact with virtual elements such as objects, menus, windows, and information that combine to form the overall experience. Most of these interactions involve near-field object manipulation, for which users are generally provided with visual representations of themselves, also called self-avatars. Representations that involve only the distal entity are called end-effector representations, and they shape how users perceive XR experiences. Through a series of investigations, this dissertation evaluates the effects of virtual end-effector representations on near-field object retrieval interactions in XR settings. Through studies conducted in virtual, augmented, and mixed reality, implications for the virtual representation of end effectors are discussed, and inferences are drawn for the future of near-field interaction in XR. This body of research aids technologists and designers by providing details that help them tailor the right end-effector representation to improve near-field interactions, thereby establishing knowledge that informs the future of interactions in XR.

    An Actor-Centric Approach to Facial Animation Control by Neural Networks For Non-Player Characters in Video Games

    Game developers increasingly consider the degree to which character animation emulates facial expressions found in cinema. Employing animators and actors to produce cinematic facial animation by mixing motion capture and hand-crafted animation is labor intensive and therefore expensive. Emotion corpora and neural network controllers have shown promise toward developing autonomous animation that does not rely on motion capture. Previous research and practice in the disciplines of Computer Science, Psychology, and the Performing Arts have provided frameworks on which to build a workflow for creating an emotion AI system that can animate the facial mesh of a 3D non-player character by deploying a combination of related theories and methods. However, past investigations and their resulting production methods largely ignore the emotion generation systems that have evolved in the performing arts for more than a century. We find very little research that embraces the intellectual process of trained actors as complex collaborators from which to understand and model the training of a neural network for character animation. This investigation demonstrates a workflow design that integrates knowledge from the performing arts and the affective branches of the social and biological sciences. Our workflow proceeds from developing and annotating a fictional scenario with actors, to producing a video emotion corpus, to designing, training, and validating a neural network, to analyzing the emotion data annotation of the corpus and neural network, and finally to determining the resemblant behavior of its autonomous animation control of a 3D character facial mesh. The resulting workflow includes a method for the development of a neural network architecture whose initial efficacy as a facial emotion expression simulator has been tested and validated as substantially resemblant to the character behavior developed by a human actor.
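
    As a rough, hypothetical illustration of the kind of neural controller described in this abstract (not the authors' actual architecture), the Python sketch below maps assumed per-frame features from an annotated emotion corpus to emotion probabilities and then to blendshape weights for a 3D facial mesh; the emotion label set, feature dimension, and blendshape mapping are all illustrative assumptions.

```python
# Hypothetical sketch: a small network maps frame-level features from an annotated
# actor video corpus to emotion probabilities, which are then converted into
# blendshape weights that drive a 3D facial mesh. Names and dimensions are assumed.
import torch
import torch.nn as nn

EMOTIONS = ["neutral", "joy", "anger", "sadness", "surprise"]  # assumed label set

class EmotionToFaceController(nn.Module):
    def __init__(self, feature_dim: int = 128, num_blendshapes: int = 52):
        super().__init__()
        # classifier trained on the annotated emotion corpus
        self.classifier = nn.Sequential(
            nn.Linear(feature_dim, 64),
            nn.ReLU(),
            nn.Linear(64, len(EMOTIONS)),
        )
        # mapping (fixed or learned) from an emotion mix to facial blendshape weights
        self.emotion_to_blendshapes = nn.Linear(len(EMOTIONS), num_blendshapes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        emotion_probs = torch.softmax(self.classifier(features), dim=-1)
        # clamp to the [0, 1] range expected by most facial rigs
        return torch.clamp(self.emotion_to_blendshapes(emotion_probs), 0.0, 1.0)

# Usage: one 128-dimensional feature vector per video frame -> 52 blendshape weights
controller = EmotionToFaceController()
frame_features = torch.randn(1, 128)
blendshape_weights = controller(frame_features)
print(blendshape_weights.shape)  # torch.Size([1, 52])
```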

    Tangible user interfaces: past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Guitar Hero World Tour: a creator of new sonic experiences?

    The academic literature for sonic media and gaming is – too frequently – separated by method, politics and approach. To increase the dialogue between gaming and sonic discourses, this paper discusses the impact of the drum controller (for use with the Guitar Hero gaming software) and the potential for new auditory experiences and literacies. Sonic media is often more volatile than screen-based platforms. The advent of MP3s and the iPod has ensured that sound is the carrier for changes in media and meaning. There has been an evolution of technology which has subsequently configured a convergence or revolution in sonic media. Concurrently, there has been a growing trend in the gaming industry to replicate instruments as alternative controllers for rhythm games as a form of interactive entertainment. Generally, these input devices are considered as part of rhythm play rather than music generation. Predominant in this group of devices is the Guitar Hero© franchise, where the controller is shaped like a guitar. While it does not feature strings, it has buttons that are pressed as part of the use of such controllers. In 2008, drums were added to this equation for the current seventh generation of consoles (PS3, Xbox 360, and Wii) with the release of a drum controller for the games Guitar Hero World Tour (GHWT), Rock Band (RB) and Rock Revolutions (it is worth noting that the PS2, a sixth-generation console, is also supported to some extent). Whils

    Designing games for children's rehabilitation.

    The upsurge of video games applied to various contexts such as health care and education has led to an increased interest in strategies for designing games that generate real-life outcomes, knowledge or skills useful outside of the game itself. However, the current state of game design research, which borrows extensively from game studies, is at risk of inheriting a predisposition for descriptive over prescriptive theories, to the detriment of potential applicability and industrial relevance. This MPhil project explores a design strategy that is focused on producing and predicting real-life behavioural outcomes by emphasizing mechanics and interactions over rules and content. With the aim of scrutinizing this design strategy, a multi-method case study was conducted during the concept phase of a video game that utilizes the Nintendo Wii’s motion-control capabilities for the rehabilitation of children within the age range of 8–16 with an acquired brain injury (ABI). The action research method was used to explore the design thinking underpinning the mechanics and interactions that bring about behavioural outcomes: those which satisfy specific therapeutic needs in the areas of motor, socio-emotional, and cognitive skills. Design decisions were subsequently evaluated through a series of playtests performed with the purpose of tracing real-life behavioural outcomes back to their roots in mechanics and interactions. This study has led to a thorough understanding of the advantages and limitations of the applied game design strategy under scrutiny, and contributes to the field of game design studies by: 1) critically analysing some of the formal concepts that underpin our current understanding of applied game design; 2) promoting an applied game design strategy for therapeutic effect that emphasizes mechanics and interactions over rules and content; and 3) providing the basis for a playtest method for validating design decisions.

    Worlds at our fingertips: reading (in) What Remains of Edith Finch

    Video games are works of written code which portray worlds and characters in action and facilitate an aesthetic and interpretive experience. Beyond this similarity to literary works, some video games deploy various design strategies which blend gameplay and literary elements to explicitly foreground a hybrid literary/ludic experience. We identify three such strategies: engaging with literary structures, forms and techniques; deploying text in an aesthetic rather than a functional way; and intertextuality. This paper aims to analyse how these design strategies are deployed in What Remains of Edith Finch (Giant Sparrow, 2017) to support a hybrid readerly/playerly experience. We argue that this type of design is particularly suited to walking simulators because they support interpretive play (Upton, 2015) through slowness, ambiguity (Muscat et al., 2016; Pinchbeck, 2012), narrative and aesthetic aspirations (Carbo-Mascarell, 2016). Understanding walking sims as literary games (Ensslin, 2014) can shift the emphasis from their lack of ‘traditional’ gameplay complexity and focus instead on the opportunities that they afford for hybrid storytelling and for weaving literature and gameplay together in innovative and playful ways.

    Model-Driven Information Security Risk Assessment of Socio-Technical Systems

    A study of spatial data models and their application to selecting information from pictorial databases

    People have always used visual techniques to locate information in the space surrounding them. However, with the advent of powerful computer systems and user-friendly interfaces, it has become possible to extend such techniques to stored pictorial information. Pictorial database systems have in the past primarily used mathematical or textual search techniques to locate specific pictures contained within such databases. However, these techniques have largely relied upon complex combinations of numeric and textual queries in order to find the required pictures. Such techniques restrict users of pictorial databases to expressing what is in essence a visual query in a numeric or character-based form. What is required is the ability to express such queries in a form that more closely matches the user's visual memory or perception of the picture required. It is suggested in this thesis that spatial techniques of search are important and that two of the most important attributes of a picture are the spatial positions and the spatial relationships of the objects contained within it. It is further suggested that a database management system which allows users to express a query by visually placing iconic representations of objects on an interface in spatially appropriate positions is a feasible method by which pictures might be found in a pictorial database. This thesis undertakes a detailed study of spatial techniques, using a combination of historical evidence, psychological conclusions and practical examples, to demonstrate that the spatial metaphor is an important concept and that pictures can readily be found by visually specifying the spatial positions and relationships of the objects contained within them.
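
    As a rough, hypothetical illustration of the spatial-query idea described in this abstract (not the thesis's actual system), the Python sketch below derives coarse left-of/above relations from icon placements on a canvas and returns the pictures whose stored object layouts satisfy the same relations; the data model, relation set, and example pictures are illustrative assumptions.

```python
# Hypothetical sketch: the user places icons on a canvas, the system derives coarse
# pairwise spatial relations between them, and pictures whose stored object layouts
# satisfy the same relations are returned.
from itertools import combinations

def spatial_relations(objects):
    """Derive pairwise relations from a mapping of object name -> (x, y) centroid."""
    relations = set()
    for (name_a, (xa, ya)), (name_b, (xb, yb)) in combinations(sorted(objects.items()), 2):
        relations.add((name_a, "left_of" if xa < xb else "right_of", name_b))
        relations.add((name_a, "above" if ya < yb else "below", name_b))
    return relations

def query_pictures(query_layout, picture_db):
    """Return picture ids whose object layout satisfies every relation in the query."""
    wanted = spatial_relations(query_layout)
    return [pid for pid, layout in picture_db.items()
            if wanted <= spatial_relations(layout)]

# Toy database: picture id -> object centroids (origin at top-left, y grows downwards)
pictures = {
    "beach_01": {"sun": (80, 10), "boat": (40, 60), "person": (20, 70)},
    "park_02":  {"sun": (10, 10), "tree": (50, 50), "person": (70, 80)},
}
# Query: "a sun above and to the right of a person"
print(query_pictures({"person": (20, 80), "sun": (70, 20)}, pictures))  # ['beach_01']
```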