    Using Digital Game, Augmented Reality, and Head Mounted Displays for Immediate-Action Commander Training

    Disaster education that focuses on the immediate actions we should take after a disaster strikes is essential for protecting lives. However, such disaster education is difficult for children to understand. Rather than educating children directly, adults should be prepared to properly instruct them to take immediate actions in the event of a disaster. We refer to such adults as Immediate-Action Commanders (IACers) and attach importance to technology-enhanced IACer training programs with high situational and audio-visual realism. To realize such programs, we focused on digital games, augmented reality (AR), and head-mounted displays (HMDs). We prototyped three AR systems that superimpose interactive virtual objects onto an HMD's real-time camera view or a trainee's actual view, driven by interactive fictional scenarios. The systems are also designed to support voice-based interaction between the virtual objects (i.e., virtual children) and the trainee. According to a brief comparative survey, the AR system equipped with a smartphone-based binocular opaque HMD (Google Cardboard) is the most promising practical system for technology-enhanced IACer training programs.

    Intuitive and Accurate Material Appearance Design and Editing

    Creating and editing high-quality materials for photorealistic rendering can be difficult due to the diversity and complexity of material appearance. Material design is the process by which artists specify the reflectance properties of a surface, such as its diffuse color and specular roughness. Even with the support of commercial software packages, material design can be a time-consuming trial-and-error task because of the counter-intuitive nature of complex reflectance models. Moreover, many material design tasks require the physical realization of virtually designed materials as the final step, which makes the process even more challenging due to rendering artifacts and the limitations of fabrication. In this dissertation, we propose a series of studies and novel techniques to improve the intuitiveness and accuracy of material design and editing. Our goal is to understand how humans visually perceive materials, simplify user interaction in the design process, and improve the accuracy of the physical fabrication of designs. Our first work focuses on understanding the perceptual dimensions of measured material data. We build a perceptual space based on a low-dimensional reflectance manifold that is computed from crowd-sourced data using a multi-dimensional scaling model. Our analysis shows that the proposed perceptual space is consistent with the physical interpretation of the measured data. We also put forward a new material editing interface that takes advantage of the proposed perceptual space, visualizing each dimension of the manifold to help users understand how it changes the material appearance. Our second work investigates the relationship between translucency and glossiness in material perception. We conduct two human subject studies to test whether subsurface scattering impacts gloss perception and to examine how the shape of an object influences this perception. Based on our results, we discuss why it is necessary to include transparent and translucent media in future research on gloss perception and material design. Our third work addresses user interaction in the material design system. We present a novel Augmented Reality (AR) material design prototype that allows users to visualize their designs against a real environment and lighting. We believe that introducing AR technology can make the design process more intuitive and improve the authenticity of the results for both novice and experienced users. To test this assumption, we conduct a user study comparing our prototype with a traditional material design system that uses a gray-scale background and synthetic lighting. The results demonstrate that, with the help of AR techniques, users perform better in terms of objectively measured accuracy and time, and they are subjectively more satisfied with their results. Finally, our last work turns to the challenge presented by the physical realization of designed materials. We propose a learning-based solution that maps a virtually designed appearance to a meso-scale geometry that can be easily fabricated. Essentially, this is a fitting problem, but compared with previous solutions, our method provides a fabrication recipe with higher reconstruction accuracy over a large fitting gamut. We demonstrate the efficacy of our solution by comparing our reconstructions with existing solutions and comparing fabrication results with the original designs. We also present an application of bi-scale material editing using the proposed method.
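    As an illustration of the perceptual-space construction described in the first work, the sketch below embeds crowd-sourced pairwise dissimilarity judgments of measured materials into a low-dimensional space with non-metric multi-dimensional scaling. It is a minimal, assumed example rather than the dissertation's code; the random dissimilarity matrix and the scikit-learn parameters are illustrative stand-ins.

```python
# Minimal sketch (assumed, not the dissertation's code): build a low-dimensional
# "perceptual space" from crowd-sourced pairwise dissimilarities via MDS.
import numpy as np
from sklearn.manifold import MDS

# D[i, j] = aggregated crowd-sourced dissimilarity between materials i and j;
# a random symmetric matrix stands in for real judgments here.
rng = np.random.default_rng(0)
n_materials = 8
A = rng.random((n_materials, n_materials))
D = (A + A.T) / 2.0
np.fill_diagonal(D, 0.0)

# Non-metric MDS, since crowd-sourced judgments are typically only ordinal.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(D)  # (n_materials, 2) coordinates in the perceptual space

# Each axis of `embedding` is one candidate perceptual dimension that an
# editing interface could expose, e.g. as a slider.
print(embedding)
```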

    Augmented reality magic mirror in the service sector: experiential consumption and the self

    Purpose: This paper examines what the use of an augmented reality (AR) makeup mirror means to consumers, focusing on experiential consumption and the extended self. Design/methodology/approach: The authors employed a multimethod approach involving netnography and semi-structured interviews with participants in India and the UK (n = 30). Findings: Two main themes emerged from the data: (1) the importance of imagination and fantasy and (2) the (in)authenticity of the self and the surrounding “reality.” Research limitations/implications: This research focuses on the AR magic makeup mirror; the authors call for further research on different AR contexts. Practical implications: The authors provide service managers with insights on addressing gaps between the perceived service (i.e. AR contexts and the makeup consumption journey) and the conceived service (i.e. fantasies and the extended self). Originality/value: The authors examine the lived fantasy experiences of AR experiential consumption. In addition, the authors reveal a novel understanding of the extended self as temporarily re-envisioned through the AR mirror.

    Bridging the Domain-Gap in Computer Vision Tasks


    A dramaturgy of intermediality: composing with integrative design

    The thesis investigates and develops a compositional system for intermediality in theatre and performance as a dramaturgical practice through integrative design. The position of visual and sonic media in theatre and performance has been altered by the digitalisation and networking of media technologies, which enable enhanced dynamic variables in intermedial processes. The emergent intermediality sites are made accessible by developments in media technologies and form part of broader changes towards a mediatised society: a simultaneous shift in cultural contexts, theatre practice and audience perception. The practice-led research is situated within a postdramatic context and develops a system of compositional perspectives and procedures to enhance knowledge of a dramaturgy of intermediality. These intermedial forms appear to re-situate the actual/virtual relations in theatre and re-construct the processes of theatricalisation in the composition of the stage narrative. The integration of media and performers produces a compositional environment of semiosis, in which the theatre becomes a site of narration and the designed integration in-between medialities emerges as intermediality sites in the performance event. A selection of performances and theatre directors is identified, each of whom integrates mediating technologies in distinct ways as a core element of their compositional design. These directors and performances constitute a source of reflection on compositional strategies from the perspective of practice, and enable comparative discussions on dramaturgical design and the consistency of intermediality sites. The practice-led research realised a series of prototyping processes situated in performance laboratories in 2004-5. The laboratories staged investigations into the relation between integrative design procedures and parameters for the composition of intermediality sites, particularly the relative presence in-between the actual and the virtual, and the relative duration and distance in-between timeness and placeness. The integration of performer activities and media operations into dramaturgical structures was developed as a design process of identifying the mapping and experiencing the landscape through iterative prototyping. The compositional concepts and strategies developed were realised in the prototype performance Still I Know Who I Am, performed in October 2006. This final research performance was a full-scale professional production that explored the developed dramaturgical designs through creative practice. The performance was realised as a public event and composed of a series of scenes, each presenting a specific composite of the developed integrative design strategies and generating a particular intermediality site. The research processes in the performance laboratories and the prototype performance developed the characteristics, parameters and procedures of compositional strategies, investigating the viability of a dramaturgy of intermediality through integrative design. The practice undertaken constitutes the raw material from which the concepts are drawn and underpins the premises for the theoretical reflections.

    NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular Objects with Neural Refractive-Reflective Fields

    Neural radiance fields (NeRF) have revolutionized image-based view synthesis. However, NeRF uses straight rays and fails to deal with the complicated light path changes caused by refraction and reflection. This prevents NeRF from successfully synthesizing transparent or specular objects, which are ubiquitous in real-world robotics and AR/VR applications. In this paper, we introduce the refractive-reflective field. Taking the object silhouette as input, we first utilize marching tetrahedra with a progressive encoding to reconstruct the geometry of non-Lambertian objects, and then model the refraction and reflection effects of the object in a unified framework using Fresnel terms. Meanwhile, to achieve efficient and effective anti-aliasing, we propose a virtual cone supersampling technique. We benchmark our method on different shapes, backgrounds and Fresnel terms on both real-world and synthetic datasets. We also qualitatively and quantitatively benchmark the rendering results of various editing applications, including material editing, object replacement/insertion, and environment illumination estimation. Code and data are publicly available at https://github.com/dawning77/NeRRF.
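    To illustrate the Fresnel terms the abstract refers to, the following is a minimal, assumed sketch (not the NeRRF code) of how a ray hitting a dielectric interface is split into reflected and refracted directions and weighted by the unpolarized Fresnel reflectance; the refractive indices and directions are example values.

```python
# Minimal sketch (assumed): reflection, Snell refraction, and the exact
# unpolarized Fresnel reflectance for a dielectric interface.
import numpy as np

def reflect(d, n):
    """Mirror unit direction d about unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def refract(d, n, eta):
    """Refract unit direction d through unit normal n (eta = eta_i / eta_t);
    returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:                      # total internal reflection
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

def fresnel_dielectric(cos_i, eta_i, eta_t):
    """Exact unpolarized Fresnel reflectance at a dielectric boundary."""
    sin_t = eta_i / eta_t * np.sqrt(max(0.0, 1.0 - cos_i * cos_i))
    if sin_t >= 1.0:                      # all light is reflected
        return 1.0
    cos_t = np.sqrt(max(0.0, 1.0 - sin_t * sin_t))
    r_par = (eta_t * cos_i - eta_i * cos_t) / (eta_t * cos_i + eta_i * cos_t)
    r_perp = (eta_i * cos_i - eta_t * cos_t) / (eta_i * cos_i + eta_t * cos_t)
    return 0.5 * (r_par * r_par + r_perp * r_perp)

d = np.array([0.0, -1.0, 0.0])            # incoming unit direction (example)
n = np.array([0.0, 1.0, 0.0])             # outward surface normal (example)
eta_i, eta_t = 1.0, 1.5                   # air -> glass (example indices)
F = fresnel_dielectric(-np.dot(d, n), eta_i, eta_t)
r, t = reflect(d, n), refract(d, n, eta_i / eta_t)
print(F, r, t)                            # radiance blend: F * L(r) + (1 - F) * L(t)
```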

    Neural Radiance Fields: Past, Present, and Future

    Aspects such as modeling and interpreting 3D environments have driven research in 3D Computer Vision, Computer Graphics, and Machine Learning. The paper by Mildenhall et al. on NeRFs (Neural Radiance Fields) led to a boom in Computer Graphics, Robotics, and Computer Vision, and the prospect of high-resolution, low-storage 3D models for Augmented Reality and Virtual Reality has gained traction among researchers, with more than 1000 NeRF-related preprints published. This paper serves as a bridge for people starting to study these fields, building from the basics of Mathematics, Geometry, Computer Vision, and Computer Graphics up to the difficulties encountered in Implicit Representations at the intersection of all these disciplines. This survey provides the history of rendering, Implicit Learning, and NeRFs, traces the progression of research on NeRFs, and discusses the potential applications and implications of NeRFs in today's world. In doing so, it categorizes all NeRF-related research in terms of the datasets used, objective functions, applications solved, and evaluation criteria for these applications. Comment: 413 pages, 9 figures, 277 citations.
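    As background for readers new to NeRFs, the sketch below shows the discrete volume rendering rule at the core of the method: densities and colors sampled along a camera ray are alpha-composited into a single pixel color. It is a minimal, assumed illustration with made-up sample values, not code from the survey.

```python
# Minimal sketch (assumed): NeRF-style numerical quadrature of the volume
# rendering integral, C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
# where T_i = exp(-sum_{j<i} sigma_j * delta_j) is the transmittance.
import numpy as np

def composite(sigma, color, deltas):
    alpha = 1.0 - np.exp(-sigma * deltas)                           # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))   # transmittance T_i
    weights = trans * alpha                                         # contribution of each sample
    return (weights[:, None] * color).sum(axis=0), weights

sigma = np.array([0.1, 0.8, 2.0, 0.3])                  # made-up densities along one ray
color = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])                     # made-up per-sample RGB
deltas = np.full(4, 0.25)                               # distances between adjacent samples
pixel, w = composite(sigma, color, deltas)
print(pixel, w.sum())                                   # rendered RGB and accumulated opacity
```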