137 research outputs found

    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who must also be creative and sensitive in order to combine these areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Several prototypes and installations were completed to demonstrate and evaluate the described concepts. [...] To summarize, the resulting work involves not only artistic creativity but also solving and combining technological hurdles in motion tracking, pattern recognition, force feedback control, etc., with the available documentary footage on film, video, or images, and text via a variety of devices [...] and programming and installing all the needed interfaces such that everything works in real time. Thus, the contribution to knowledge lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research connects seemingly disjoint fields, such as computer graphics, documentary film, interactive media, and theatre performance.
    Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms.

    Exploring Immersive Learning Experiences: A Survey

    Immersive technologies have been shown to significantly improve learning, as they can simplify and simulate complicated concepts in various fields. However, there is a lack of studies that analyze recent evidence-based immersive learning experiences applied in a classroom setting or offered to the public. This study presents a systematic review of 42 papers to understand, compare, and reflect on recent attempts to integrate immersive technologies in education along six dimensions: application field, technology used, educational role, interaction techniques, evaluation methods, and challenges. The results show that most studies covered STEM (science, technology, engineering, math) topics and mostly used head-mounted display (HMD) virtual reality in addition to marker-based augmented reality, while mixed reality was represented in only two studies. Further, the studies mostly used a form of active learning, and highlighted touch and hardware-based interactions enabling viewpoint and selection tasks. Moreover, the studies utilized experiments, questionnaires, and evaluation studies for evaluating the immersive experiences. The evaluations show improved performance and engagement, but also point to various usability issues. Finally, we discuss implications and future research directions, and compare our findings with related review studies.

    Review of three-dimensional human-computer interaction with focus on the leap motion controller

    Modern hardware and software development has led to an evolution of user interfaces from command-line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques, focusing on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their areas of application, and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.
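The gesture-recognition methods surveyed in this abstract often reduce to comparing a captured motion path against stored templates. The following minimal sketch illustrates that general idea in plain Python; the template names and 2D paths are purely hypothetical, it assumes gestures are already resampled to the same number of points, and it is not the LMC SDK or any method from the paper:

```python
import math

def normalize(path):
    """Translate a gesture path to its centroid and scale it to unit size."""
    n = len(path)
    cx = sum(x for x, y in path) / n
    cy = sum(y for x, y in path) / n
    pts = [(x - cx, y - cy) for x, y in path]
    scale = max(max(abs(x), abs(y)) for x, y in pts) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

def distance(a, b):
    """Mean point-to-point Euclidean distance between equal-length paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def classify(candidate, templates):
    """Return the label of the stored template nearest to the candidate path."""
    cand = normalize(candidate)
    return min(templates, key=lambda name: distance(cand, normalize(templates[name])))

# Hypothetical stored templates (names and coordinates are illustrative only).
TEMPLATES = {
    "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2), (0, 3)],
}
```

Normalizing for position and scale before matching means the same gesture is recognized wherever and however large it is performed, which is the core trick behind simple template recognizers.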

    Does modality make a difference? A comparative study of mobile augmented reality for education and training

    Includes bibliographical references. 2022 Fall. As augmented reality (AR) technologies progress, they have begun to impact the field of education and training. Many prior studies have explored the potential benefits of, and challenges to, integrating emerging technologies into educational practices. Both internal and external factors may impact the overall adoption of the technology; however, there are key benefits identified for the schema-building process, which is important for knowledge acquisition. This study aims to elaborate and expand upon prior studies to explore the question: does mobile augmented reality provide stronger knowledge retention than other training and education modalities? To address this question, the study takes a comparative experimental approach, exposing participants to one of three training modalities (AR, paper manual, or online video) and evaluating their knowledge retention and other educational outcomes.

    Video game graphics and players’ perception of subjective realism.

    This work explores how people who play and develop video games perceive realism. 'Realism' is a very broad term and has different meanings for different people; therefore, in this project the terms 'realism' and 'visual fidelity' are used to refer to the visuals and their appearance in video games. This helps define what is perceived as believable and close to real life by consumers as well as developers. Realism can clearly be noticed in the artistic aspect of games; accordingly, this project focuses on that side of the subject. In order to understand why visual fidelity is an important factor in game development, this work provides a brief summary of the history of video games. As Physically Based Rendering (PBR) is commonly used nowadays, the project aims to understand the contribution of PBR to achieving realism. The project investigates how game developers achieve visual fidelity and realistic environments. It considers what is needed to create visuals that are perceived as realistic and what distinguishes the realistic aesthetic from other art styles in video games. Lighting, texture maps, workflows, and other terms are discussed, in conjunction with exploring consumer opinion on the subject. The project employs a qualitative research method, asking game developers and gamers for their opinions on themes regarding the subject, to help establish whether the two groups understand the term differently. To better understand why visuals are sometimes perceived as 'creepy' and as part of the 'uncanny valley', related psychological aspects and influences are taken into account. This work also investigates how other aspects of the development process (design, animation, narrative, sound, etc.) assist the visual art in conveying realism to customers. This also informs a hypothesis about whether true realism in video games will ever be accomplished.

    Additive manufacturing in bespoke interactive devices-a thematic analysis

    Additive Manufacturing (AM) facilitates product development due to its various native advantages over traditional manufacturing processes. Efficiency, customisation, innovation, and ease of product modification are a few advantages of AM. This manufacturing process can therefore be applied to fabricate customisable devices, such as bespoke interactive devices for rehabilitation purposes. In this context, a two-day workshop titled Design for Additive Manufacturing: Future Interactive Devices (DEFINED) was held to discuss the design-for-AM issues encountered in the development of an innovative bespoke controller and supporting platform, in a Virtual Reality (VR)-based environment, intended for people with limited dexterity in their hands. The workshop sessions were transcribed, and a thematic analysis was carried out to identify the main topics discussed. The themes were Additive Manufacturing, Generative Design Algorithms, User-Centred Design, Measurement Devices for Data Acquisition, Virtual Reality, Augmented Reality, and Haptics. These themes were then discussed in relation to the available literature. The main conclusion of the workshop was that designers need a coherent set of design-for-AM tools to carry AM considerations throughout the design process, since they often lack the AM knowledge required to develop bespoke interactive devices.

    Scene understanding through semantic image segmentation in augmented reality

    Abstract. Semantic image segmentation, the task of assigning a label to each pixel in an image, is a major challenge in the field of computer vision. Semantic image segmentation using fully convolutional neural networks (FCNNs) offers an online solution to scene understanding while having a simple training procedure and fast inference speed if designed efficiently. The semantic information provided by segmentation is a detailed understanding of the current context, and this scene understanding is vital for scene modification in augmented reality (AR), especially if one aims to perform destructive scene augmentation. Augmented reality systems, by nature, aim at real-time modification of the context through head-mounted see-through or video-see-through displays, and thus require efficiency in each step. Although there are many solutions to semantic image segmentation in the literature, such as DeepLabV3+ and DeepLab DPC, they fail to offer low-latency inference because of the complex architectures they adopt in pursuit of the best accuracy. As part of this thesis work, we provide an efficient architecture for semantic image segmentation using an FCNN model and achieve real-time performance on smartphones at 19.65 frames per second (fps) while maintaining a high mean intersection over union (mIOU) of 67.7% on the Cityscapes validation set with our "Basic" variant, and 15.41 fps and 70.3% mIOU on the Cityscapes test set with our "DPC" variant. The implementation is open-sourced and compatible with TensorFlow Lite, and is thus able to run on embedded and mobile devices. Furthermore, the thesis work demonstrates an augmented reality implementation where semantic segmentation masks are tracked online in a 3D environment using Google ARCore.
We show that frequent recalculation of semantic information is not necessary: by tracking the computed semantic information in 3D space using the inertial-visual odometry provided by the ARCore framework, we can achieve savings in battery and CPU usage while maintaining a high mIOU. We further demonstrate a possible use case of the system by inpainting objects in 3D space that are found by the semantic image segmentation network. The implemented Android application performs real-time augmented reality at 30 fps while running, in parallel, the computationally efficient network proposed as part of this thesis work.
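The mIOU figures quoted in this abstract average, over the label classes, the overlap between predicted and ground-truth pixels divided by their union. A minimal sketch of that metric in plain Python, using tiny hypothetical flattened masks rather than Cityscapes data:

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy flattened 8-pixel segmentation masks with 3 classes (0 = background).
pred   = [0, 0, 1, 1, 2, 2, 2, 0]
target = [0, 0, 1, 2, 2, 2, 1, 0]
```

Here class 0 scores IoU 1.0, class 1 scores 1/3, and class 2 scores 0.5, so the mean is 11/18 (about 0.61). Benchmark implementations accumulate these counts in a per-class confusion matrix over the whole dataset rather than per image, but the per-class ratio is the same.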