
    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences enabled by virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is developed well, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ill-defined task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process, so a more holistic, multi-disciplinary, in-depth, academic- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development, and for each of these components we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experience, interaction, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date and allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and identify their opportunities for contribution.

    Hardware Acceleration of Neural Graphics

    Rendering and inverse-rendering algorithms that drive conventional computer graphics have recently been superseded by neural representations (NRs). NRs have recently been used to learn the geometric and material properties of scenes and to synthesize photorealistic imagery from that information, thereby promising a replacement for traditional rendering algorithms with scalable quality and predictable performance. In this work we ask the question: does neural graphics (NG) need hardware support? We studied representative NG applications and show that, to render 4k resolution at 60 FPS, there is a gap of 1.5X-55X between the desired performance and that of current GPUs. For AR/VR applications, there is an even larger gap of 2-4 orders of magnitude between the desired performance and the required system power. We identify the input encoding and the MLP kernels as the performance bottlenecks, consuming 72%, 60%, and 59% of application time for multi-resolution hashgrid, multi-resolution densegrid, and low-resolution densegrid encodings, respectively. We propose the NG processing cluster (NGPC), a scalable and flexible hardware architecture that directly accelerates the input encoding and MLP kernels through dedicated engines and supports a wide range of NG applications. We also accelerate the remaining kernels by fusing them together in Vulkan, which leads to a 9.94X kernel-level performance improvement compared to an un-fused implementation of the pre-processing and post-processing kernels. Our results show that NGPC gives up to a 58X end-to-end application-level performance improvement; for multi-resolution hashgrid encoding, averaged across the four NG applications, the performance benefits are 12X, 20X, 33X, and 39X for scaling factors of 8, 16, 32, and 64, respectively. With multi-resolution hashgrid encoding, NGPC enables rendering at 4k resolution and 30 FPS for NeRF, and at 8k resolution and 120 FPS for all our other NG applications.
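
    The encoding bottleneck above is easiest to see in code. Below is a minimal sketch of a multi-resolution hash grid encoding of the kind the paper profiles (in the spirit of Instant-NGP); the level count, table size, feature width, and hashing primes are illustrative assumptions, not the paper's configuration. Each level looks up learned features at the corners of the voxel enclosing a point and blends them trilinearly, so per-point cost scales with levels times 8 corner lookups, which is why dedicated hardware helps.

        import numpy as np

        # Minimal sketch of a multi-resolution hash grid encoding for 3D points.
        # All hyperparameters are illustrative, not the NGPC paper's configuration.
        L, T, F = 8, 2**14, 2        # levels, hash-table size per level, features per entry
        PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)
        rng = np.random.default_rng(0)
        tables = [rng.normal(0, 1e-4, (T, F)).astype(np.float32) for _ in range(L)]

        def hash_coords(ijk):
            # Spatial hash of integer grid coordinates into table indices [0, T).
            h = np.bitwise_xor.reduce(ijk.astype(np.uint64) * PRIMES, axis=-1)
            return (h % T).astype(np.int64)

        def encode(x, base_res=16, growth=1.5):
            # x: (N, 3) points in [0, 1); returns (N, L*F) concatenated features.
            feats = []
            for lvl in range(L):
                res = int(base_res * growth ** lvl)
                p = x * res
                i0 = np.floor(p).astype(np.int64)
                w = p - i0                           # trilinear interpolation weights
                acc = np.zeros((x.shape[0], F), np.float32)
                for corner in range(8):              # 8 corners of the enclosing voxel
                    off = np.array([(corner >> d) & 1 for d in range(3)])
                    cw = np.prod(np.where(off == 1, w, 1.0 - w), axis=-1, keepdims=True)
                    acc += cw * tables[lvl][hash_coords(i0 + off)]
                feats.append(acc)
            return np.concatenate(feats, axis=-1)

        print(encode(rng.random((4, 3))).shape)      # (4, 16)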

    Perceptual Requirements for World-Locked Rendering in AR and VR

    Stereoscopic, head-tracked display systems can show users realistic, world-locked virtual objects and environments. However, discrepancies between the rendering pipeline and the physical viewing conditions can lead to perceived instability in the rendered content, resulting in reduced realism and immersion and, potentially, in visually-induced motion sickness. The requirements for achieving perceptually stable world-locked rendering are unknown, owing to the challenge of constructing a wide-field-of-view, distortion-free display with highly accurate head- and eye-tracking. In this work, we introduce new hardware and software built upon recently introduced hardware, and present a system capable of rendering virtual objects over real-world references without perceivable drift under these constraints. The platform is used to study acceptable errors in render-camera position for world-locked rendering in augmented- and virtual-reality scenarios, where we find an order-of-magnitude difference in perceptual sensitivity between the two. We conclude by comparing the study results with an analytic model that examines changes to apparent depth and visual heading in response to camera displacement errors. We identify visual heading as an important consideration for world-locked rendering, alongside depth errors from incorrect disparity.
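
    The analytic model itself is not reproduced here, but the core geometric intuition, how a render-camera position error maps to an angular (visual heading) error for a world-locked point, fits in a few lines. This is a back-of-the-envelope sketch under assumed viewing geometry, not the paper's model; the 2 mm displacement and viewing distances are made-up numbers.

        import numpy as np

        def angular_error_deg(point, cam_error):
            # Angle between the true and the erroneous viewing directions to a
            # world-locked point, for a camera nominally at the origin that is
            # accidentally displaced by cam_error (all units in meters).
            true_dir = point / np.linalg.norm(point)
            wrong_dir = (point - cam_error) / np.linalg.norm(point - cam_error)
            cosang = np.clip(np.dot(true_dir, wrong_dir), -1.0, 1.0)
            return np.degrees(np.arccos(cosang))

        # A 2 mm lateral camera error against a virtual object 0.5 m away:
        point = np.array([0.0, 0.0, 0.5])
        err = np.array([0.002, 0.0, 0.0])
        print(f"{angular_error_deg(point, err):.3f} deg")       # ~0.229 deg

        # The same error against an object 5 m away is ~10x smaller:
        print(f"{angular_error_deg(point * 10, err):.4f} deg")  # ~0.023 deg

    The tenfold drop in angular error at ten times the viewing distance illustrates why near-field content places the tightest demands on tracking and calibration accuracy.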

    A Proposed Meta-Reality Immersive Development Pipeline: Generative AI Models and Extended Reality (XR) Content for the Metaverse

    The realization of an interoperable and scalable virtual platform, currently known as the “metaverse,” is inevitable, but many technological challenges need to be overcome first. With the metaverse still in a nascent phase, research currently indicates that building a new 3D social environment capable of interoperable avatars and digital transactions will represent most of the initial investment in time and capital. The return on investment, however, is worth the financial risk for firms like Meta, Google, and Apple: while the current virtual space of the metaverse is worth $6.30 billion, it is expected to grow to $84.09 billion by the end of 2028. But the creation of an entire alternate virtual universe of 3D avatars, objects, and otherworldly cityscapes calls for a new development pipeline and workflow. Existing 3D modeling and digital twin processes, already well established in industry and gaming, will be ported to support the need to architect and furnish this new digital world. The current development pipeline, however, is cumbersome, expensive, and limited in output capacity. This paper proposes a new and innovative immersive development pipeline leveraging recent advances in artificial intelligence (AI) for 3D model creation and optimization. The previous reliance on 3D modeling software to create assets and then import them into a game engine can be replaced with nearly instantaneous content creation with AI. While AI art generators like DALL-E 2 and DeepAI have been used for 2D asset creation, when combined with game-engine technology such as Unreal Engine 5 and virtualized geometry systems like Nanite, a new process for creating nearly unlimited content for immersive reality is possible. New processes and workflows, such as those proposed here, will revolutionize content creation and pave the way for Web 3.0, the metaverse, and a truly 3D social environment.
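
    The proposed pipeline is described above only at the workflow level. As a rough illustration of the shape such automation could take, the stub below chains a text-to-image step to an engine-import step; generate_image and import_into_engine are hypothetical placeholder names, not real DALL-E 2, DeepAI, or Unreal Engine 5 APIs, and a real integration would implement them against the chosen provider and engine tooling.

        from pathlib import Path

        def generate_image(prompt: str, out_dir: Path) -> Path:
            # Hypothetical placeholder for a text-to-image call (e.g., to DALL-E 2
            # or DeepAI). A real integration would call the provider's API and
            # save the returned image under out_dir.
            raise NotImplementedError("wire up an image-generation provider here")

        def import_into_engine(asset: Path, project: Path) -> None:
            # Hypothetical placeholder for engine-side import (e.g., an editor
            # script that registers the texture and builds a material from it).
            raise NotImplementedError("wire up the engine's import tooling here")

        def build_assets(prompts: list[str], out_dir: Path, project: Path) -> None:
            # The proposed workflow in miniature: prompt -> 2D asset -> engine
            # import, replacing manual modeling-and-import for simple assets.
            out_dir.mkdir(parents=True, exist_ok=True)
            for prompt in prompts:
                asset = generate_image(prompt, out_dir)
                import_into_engine(asset, project)

        # build_assets(["weathered brick wall, seamless texture"],
        #              Path("out/assets"), Path("MyProject")) would drive the pipeline.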

    Strategies for Early Learners

    Welcome to learning about how to effectively plan curriculum for young children. This textbook will address:
    • Developing curriculum through the planning cycle
    • Theories that inform what we know about how children learn and the best ways for teachers to support learning
    • The three components of developmentally appropriate practice
    • Importance and value of play and intentional teaching
    • Different models of curriculum
    • Process of lesson planning (documenting planned experiences for children)
    • Physical, temporal, and social environments that set the stage for children’s learning
    • Appropriate guidance techniques to support children’s behaviors as their self-regulation abilities mature
    • Planning for preschool-aged children in specific domains, including:
      o Physical development
      o Language and literacy
      o Math
      o Science
      o Creative (the visual and performing arts)
      o Diversity (social science and history)
      o Health and safety
    • Making children’s learning visible through documentation and assessment

    3D Computer Graphics and Virtual Reality

    This chapter is dedicated to the description of 3D computer graphics used for the needs of virtual reality. Virtual reality (VR) is the use of computer technology to create a 3D virtual environment. The chapter presents graphical features used in such an environment as well as an explanation of good design practice. It also describes lighting settings, 3D objects/models, the virtualization sequence, cameras, and scenes, using a wheelchair simulator as an example implementation environment.
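
    The components the chapter enumerates, lighting settings, 3D models, a camera, and the scene that groups them, map naturally onto a scene-description data structure. The sketch below is purely illustrative, using generic Python dataclasses rather than any particular engine's API; all names and default values are assumptions.

        from dataclasses import dataclass, field

        @dataclass
        class Light:
            kind: str                  # "directional", "point", or "spot"
            color: tuple = (1.0, 1.0, 1.0)
            intensity: float = 1.0

        @dataclass
        class Model:
            mesh_file: str             # path to geometry exported from a DCC tool
            position: tuple = (0.0, 0.0, 0.0)

        @dataclass
        class Camera:
            position: tuple = (0.0, 1.7, 0.0)   # roughly eye height for a VR user
            fov_deg: float = 90.0               # wide FOV, typical for VR headsets

        @dataclass
        class Scene:
            camera: Camera
            lights: list = field(default_factory=list)
            models: list = field(default_factory=list)

        # Assembling a simple environment like the wheelchair-simulator example:
        scene = Scene(
            camera=Camera(),
            lights=[Light("directional"), Light("point", intensity=0.5)],
            models=[Model("corridor.obj"), Model("wheelchair.obj", (0.0, 0.0, 1.0))],
        )
        print(len(scene.models), "models,", len(scene.lights), "lights")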

    Interview with Wolfgang Knauss

    An oral history in four sessions (September 2019–January 2020) with Wolfgang Knauss, von Kármán Professor of Aeronautics and Applied Mechanics, Emeritus. Born in Germany in 1933, he speaks about his early life and experiences under the Nazi regime, his teenage years in Siegen and Heidelberg during the Allied occupation, and his move to Pasadena, California, in 1954 under the sponsorship of a local minister and his family. He enrolled at Caltech as an undergraduate in 1957, commencing a more than half-century affiliation with the Institute and GALCIT (today the Graduate Aerospace Laboratories of Caltech). He recalls the roots of his interest in aeronautics, his PhD solid mechanics studies with his advisor, M. Williams, and the GALCIT environment in the late 1950s and 1960s at the dawn of the Space Age, including the impact of Sputnik and classes with NASA astronauts. He discusses his experimental and theoretical work on materials deformation, dynamic fracture, and crack propagation, including his solid-propellant fuels research for NASA and the US Army, wide-ranging programs with the US Navy, and his pioneering micromechanics investigations and work on the time-dependent fracture of polymers in the 1990s. He offers his perspective on GALCIT’s academic culture, its solid mechanics and fluid mechanics programs, and its evolving administrative directions over the course of five decades, as well as its impact and reputation both within and beyond Caltech. He describes his work with Caltech’s undergraduate admissions committee and his scientific collaborations with numerous graduate students and postdocs, and shares his recollections of GALCIT and other Caltech colleagues, including C. Babcock, D. Coles, R.P. Feynman, Y.C. Fung, G. Neugebauer, G. Housner, D. Hudson, H. Liepmann, A. Klein, G. Ravichandran, A. Rosakis, A. Roshko, and E. Sechler. Six appendices contributed by Dr. Knauss, offering further insight into his life and career, also form part of this oral history and are cross-referenced in the main text.

    Embodying entrepreneurship: everyday practices, processes and routines in a technology incubator

    The growing interest in the processes and practices of entrepreneurship has been dominated by a consideration of temporality. Through a thirty-six-month ethnography of a technology incubator, this thesis contributes to extant understanding by exploring the effect of space. The first paper explores how class structures from the surrounding city have appropriated entrepreneurship within the incubator. The second paper adopts a more explicitly spatial analysis to reveal how the use of space influences a common understanding of entrepreneurship. The final paper looks more closely at the entrepreneurs within the incubator and at how they use visual symbols to develop their identity. Taken together, the three papers reject the notion of entrepreneurship as a primarily economic endeavour articulated through commonly understood language, and propose entrepreneuring as an enigmatic attractor accessed through the ambiguity of the non-verbal to develop the ‘new’. The thesis therefore contributes to the understanding of entrepreneurship and proposes a distinct role for the non-verbal in that understanding.

    Pocket size interactive films: Embedding the mobile devices’ features, characteristics and affordances into filmic interactive narratives.

    Throughout the history of interactive film, creators have experimented with different modes of interaction to allow for the viewers’ agency. As interactive films have not yet established a standardised form, projects have continually been shaped by new technology. Over time, viewers have shifted from the cinema to televisions, the personal computer, and, recently, the mobile device. These devices further extend the interactive capabilities at the creators’ disposal, and this thesis therefore proposes that mobile devices could facilitate new forms of interactive film that make use of these features. This study investigates the integration of the mobile devices’ characteristics, features, and affordances into an interactive film project that is both viewed and interacted with on a mobile device. First and foremost, it establishes whether the mobile device can successfully be used by authors to relay interactive films. Secondly, it gives insights into design considerations for authors who aim to make use of the mobile devices’ features. Additionally, the thesis gathers insights into the use of game-engine technology for developing similar interactive film projects. The research begins with a literature review establishing the historical and academic context of interactive films, narratives, and interfaces, with a focus on mobile devices. Subsequently, a selection of projects is surveyed to garner insights into the current state of the art. These sections then inform the practice-based part of the thesis, in which the production of an interactive film project is comprehensively documented. A concurrent think-aloud usability test, accompanied by a reflection on the outcomes and production process, concludes the research. The outcome suggests that mobile devices can act as successful vessels for interactive narratives. However, the usability tests and reflection reveal that the thesis project cannot be strictly classified as an interactive film; suggestions for future research, as well as insights into the retention of filmic quality, are therefore made in retrospect. The use of game engines for interactive film authoring proves to allow creators rapid prototyping and ease of implementation, though their use might impact projects by over-complicating interaction paradigms more extensively used in game production. Media files notes: Project Documentation of Creations, an interactive short film for the mobile device. Media rights: CC-BY-NC-ND 4.0.

    MoFaNeRF: Morphable Facial Neural Radiance Field

    We propose a parametric model that maps free-view images into a vector space of coded facial shape, expression, and appearance with a neural radiance field, namely the Morphable Facial NeRF (MoFaNeRF). Specifically, MoFaNeRF takes the coded facial shape, expression, and appearance along with a space coordinate and view direction as input to an MLP, and outputs the radiance of the space point for photo-realistic image synthesis. Compared with conventional 3D morphable models (3DMM), MoFaNeRF shows superiority in directly synthesizing photo-realistic facial details, even for eyes, mouths, and beards. Also, continuous face morphing can easily be achieved by interpolating the input shape, expression, and appearance codes. By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details and shows strong representation ability across multiple applications, including image-based fitting, random generation, face rigging, face editing, and novel view synthesis. Experiments show that our method achieves higher representation ability than previous parametric models and competitive performance in several applications. To the best of our knowledge, our work is the first facial parametric model built upon a neural radiance field that can be used in fitting, generation, and manipulation. The code and data are available at https://github.com/zhuhao-nju/mofanerf. Comment: accepted to ECCV 2022; code available at http://github.com/zhuhao-nju/mofanerf.
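
    The abstract pins down the model's interface: shape, expression, and appearance codes, plus a 3D coordinate and view direction, fed to an MLP that outputs radiance. A minimal sketch of that input/output contract follows; the code dimensions, layer widths, and the extra density output are assumptions for illustration, not the released MoFaNeRF architecture (see the linked repository for that).

        import torch
        import torch.nn as nn

        class MorphableFacialField(nn.Module):
            # Sketch of the interface described in the abstract:
            # (shape code, expression code, appearance code, 3D point, view dir)
            # -> radiance. Widths and depths are illustrative assumptions.
            def __init__(self, d_shape=128, d_exp=64, d_app=128, hidden=256):
                super().__init__()
                d_in = d_shape + d_exp + d_app + 3 + 3
                self.mlp = nn.Sequential(
                    nn.Linear(d_in, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 4),            # RGB radiance + density
                )

            def forward(self, shape, exp, app, xyz, view_dir):
                x = torch.cat([shape, exp, app, xyz, view_dir], dim=-1)
                out = self.mlp(x)
                rgb = torch.sigmoid(out[..., :3])    # radiance in [0, 1]
                sigma = torch.relu(out[..., 3:])     # non-negative density
                return rgb, sigma

        f = MorphableFacialField()
        rgb, sigma = f(torch.randn(8, 128), torch.randn(8, 64), torch.randn(8, 128),
                       torch.rand(8, 3), torch.randn(8, 3))
        print(rgb.shape, sigma.shape)   # torch.Size([8, 3]) torch.Size([8, 1])

    Because the codes are ordinary input vectors, the continuous face morphing the abstract mentions amounts to linearly interpolating two sets of codes before the forward pass.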