643 research outputs found

    A Process for the Semi-Automated Generation of Life-Sized, Interactive 3D Character Models for Holographic Projection

    By mixing digital data into the real world, Augmented Reality (AR) can deliver potent immersive and interactive experiences to its users. In many application contexts, this requires the capability to deploy animated, high-fidelity 3D character models. In this paper, we propose a novel approach that uses 3D scanning to efficiently transform an actor into a photorealistic, animated character. The generated 3D assistant must be able to perform recorded motion-capture data, and it must be able to generate dialogue with lip sync so that it can interact naturally with users. The approach we propose for creating these virtual AR assistants combines photogrammetric scanning, motion capture, and free-viewpoint video, integrated in Unity. We deploy the Occipital Structure sensor to acquire static high-resolution textured surfaces, and a Vicon motion capture system to track series of movements. The proposed capturing process consists of scanning; reconstruction with Wrap 3 and Maya; texture-map editing in Photoshop to reduce artefacts; and rigging in Maya and MotionBuilder to make the models ready for animation and lip sync with LipSyncPro. We test the approach in Unity by scanning two human models with 23 captured animations each. Our findings indicate that the major factors affecting result quality are environment setup, lighting, and processing constraints.
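    The rigging stage of such a pipeline lends itself to scripting. The following minimal Maya Python sketch is purely illustrative: the mesh name, skeleton file and influence settings are placeholder assumptions, not the authors' actual tooling. It imports a skeleton next to the cleaned photogrammetry mesh and binds the two so the character can be handed on to MotionBuilder for motion-capture retargeting and to Unity for playback.

        # Hypothetical rigging sketch in Maya Python (placeholder names throughout).
        import maya.cmds as cmds

        SCAN_MESH = "actor_scan_geo"          # assumed name of the cleaned scan mesh
        SKELETON_FILE = "mocap_skeleton.ma"   # assumed skeleton prepared for retargeting

        # Bring the skeleton into the scene alongside the scanned mesh.
        cmds.file(SKELETON_FILE, i=True, namespace="rig")

        # Collect the imported joints and bind the scan with a smooth skin.
        joints = cmds.ls("rig:*", type="joint")
        cmds.skinCluster(joints, SCAN_MESH, toSelectedBones=True, maximumInfluences=4)

        # Save the rigged character; from here it can be exported to FBX for
        # MotionBuilder retargeting and Unity playback.
        cmds.file(rename="actor_rigged.mb")
        cmds.file(save=True, type="mayaBinary")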

    Evoking presence through creative practice on Pepper's ghost displays.

    This thesis proposes a theoretical framework for the analysis of presence research in the context of Pepper's ghost. As a media platform, Pepper's ghost offers new possibilities for performance, real-time communication and media art. The thesis gives an overview of the technique's 150-year history, as well as of contemporary art created with Pepper's ghost, with a specific focus on telepresence. Telepresence, a concept that has informed academic debate since 1980, addresses remote communication: presence perceived through networked environments. This discourse revealed shortcomings in current analytical frameworks, and the thesis therefore presents a new model for presence in the context of my research. The standard telepresence model (STM) assumes a direct link between three fundamental components of presence and a measurable impact on the audience. Its three pillars are conceptualised as the presence co-factors immersion, interactivity and realism, each presented individually in the framework of my practice. My research is firmly rooted in the field of media art and considers the effect of presence in the context of Pepper's ghost. This Victorian parlour trick serves as an interface, an intermediary for the discussion of live-streaming experiences. Three case studies present the pillars of the standard model, seeking answers to elemental questions of presence research. The hypothesis assumes a positive relationship between presence and its three co-factors. All case studies were developed as media art pieces in the context of Pepper's ghost; as exemplifiers, they illustrate the concept of presence with respect to my own creative practice. KIMA, a real-time sound representation experience, proposes a form of telepresence that relies exclusively on immersive sound as a medium; immersion as a co-factor of presence is analysed and explored creatively on the Pepper's ghost canvas. Transmission, the second case study, investigates the effect of physical interaction on presence experiences, with an experiment supporting inferences in a mixed-methods approach. The third case study, Aura, discusses variations of realism as a presence co-factor in the specific context of Pepper's ghost; the practical example is accompanied by an in-depth meta-analysis of realism factors, focusing on the intricacies of Pepper's ghost creative production processes. Together, these three case studies shed light on new strategies to improve production methods, with possible impact on presence in Pepper's ghost related virtual environments and beyond.

    Substitutive bodies and constructed actors: a practice-based investigation of animation as performance

    The fundamental conceptualisation of what animation actually is has been changing in the face of material changes to production and distribution methods since the introduction of digital technology. Growing artistic and academic interest in the field has contributed to this re-conceptualisation, including the emergence of Animation Studies, a relatively new branch of academic enquiry that is establishing itself as a discipline. This research (documentation of live events and thesis) examines animation in the context of performance, rather than in terms of technology or material process. Its scope is neither to cover all possible types of animation nor to put forward a new ‘catch-all’ definition of animation, but rather to examine the site of performance in character animation and to propose animation as a form of performance. In elaborating this argument, each chapter is structured around the framing device of animation as a message that is encoded and produced, delivered and played back, then received and decoded. The PhD includes a portfolio of projects undertaken as part of the research process, on which the text critically reflects. Because of their site-specific approach, these live events are documented through video and still images. The work represents an intertwined, interdisciplinary, post-animation praxis in which theory and practice inform one another and test relationships between animation and performance, so as to problematise a binary opposition between that which is live and that which is animated. It is contextualised by a review of historical practice and by interviews with key contemporary practitioners whose work combines animation with an intermedial mixture of interaction design, fine art, dance and theatre.

    Augmented Reality and Its Application

    Augmented Reality (AR) is a discipline concerned with the interactive experience of a real-world environment in which real-world objects and elements are enhanced with computer-generated perceptual information. It has many potential applications in education, medicine, and engineering, among other fields. This book explores these potential uses, presenting case studies and investigations of AR for vocational training, emergency response, interior design, architecture, and much more.

    How Chinese SMEs innovate with a ‘diegetic innovation templating’? - the stimulating role of Sci-fi and fantasy

    The use of established fiction provides a connection to society at large, tapping into the creative abilities of great authors and filmmakers and offering a valuable source of creative ideas. This paper explores how science fiction and fantasy, particularly in the form of films, are being used to stimulate creativity and produce innovation outputs in non-science SMEs in China. We argue that fiction has the potential to inspire innovation through a constructive organisational process, and we provide a simple metric, the ‘Diegetic Gap’, as a means of illustrating this. In particular, we present four empirical case studies that explore the application of science fiction and fantasy to product and process innovation, utilising a concept we call a Diegetic Innovation Template to merge fictional narrative and tangible innovation output.

    Computational imaging and automated identification for aqueous environments

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2011. Sampling the vast volumes of the ocean requires tools capable of observing from a distance while retaining the detail necessary for biology and ecology, a task well suited to optical methods. Algorithms that work with existing SeaBED AUV imagery are developed, including habitat classification with bag-of-words models and multi-stage boosting for rockfish detection. Methods for extracting images of fish from videos of longline operations are demonstrated. A prototype digital holographic imaging device is designed and tested for quantitative in situ microscale imaging. Theory to support the device is developed, including particle noise and the effects of motion. A Wigner-domain model provides optimal settings and optical limits for spherical and planar holographic references. Algorithms to extract the information from real-world digital holograms are created. Focus metrics are discussed, including a novel focus detector using local Zernike moments. Two methods for estimating lateral positions of objects in holograms without reconstruction are presented, by extending a summation kernel to spherical references and by using a local frequency signature from a Riesz transform. A new metric for quickly estimating object depths without reconstruction is proposed and tested. An example application, quantifying oil droplet size distributions in an underwater plume, demonstrates the efficacy of the prototype and algorithms. Funding was provided by NOAA Grant #5710002014, NOAA NMFS Grant #NA17RJ1223, NSF Grant #OCE-0925284, and NOAA Grant #NA10OAR417008.
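    To make the depth-from-focus idea concrete, the following minimal Python sketch reconstructs an in-line hologram at a series of trial depths with the standard angular-spectrum method and keeps the depth that maximises a simple sharpness score. The wavelength, pixel pitch, depth range and amplitude-variance focus metric are placeholder assumptions for illustration; the thesis itself proposes a focus detector based on local Zernike moments and faster depth estimates that avoid reconstruction entirely.

        # Illustrative depth-from-focus sketch for an in-line digital hologram.
        # All optical parameters below are assumed placeholder values.
        import numpy as np

        def angular_spectrum(hologram, wavelength, pitch, z):
            """Propagate the recorded hologram field by a distance z (metres)."""
            ny, nx = hologram.shape
            fx = np.fft.fftfreq(nx, d=pitch)
            fy = np.fft.fftfreq(ny, d=pitch)
            FX, FY = np.meshgrid(fx, fy)
            # Free-space transfer function; evanescent components are suppressed.
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            H = np.exp(1j * 2 * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
            return np.fft.ifft2(np.fft.fft2(hologram) * H)

        def focus_score(field):
            """Crude focus metric: variance of the reconstructed amplitude."""
            return np.var(np.abs(field))

        # Scan a depth range and pick the sharpest reconstruction plane.
        hologram = np.random.rand(512, 512)     # stand-in for a recorded hologram
        depths = np.linspace(5e-3, 50e-3, 46)   # 5-50 mm, assumed search range
        best_z = max(depths, key=lambda z: focus_score(
            angular_spectrum(hologram, wavelength=658e-9, pitch=3.45e-6, z=z)))
        print(f"estimated focus depth: {best_z * 1e3:.1f} mm")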

    The Next Generation BioPhotonics Workstation


    Immersive Visualization in Biomedical Computational Fluid Dynamics and Didactic Teaching and Learning

    Virtual reality (VR) can stimulate active learning, critical thinking, decision making and improved performance. It requires a medium in which to show virtual content, called a virtual environment (VE); the MARquette Visualization Lab (MARVL) is one example. Robust processes and workflows that allow content to be created for use within MARVL further increase the user base of this valuable resource. A workflow was created to display biomedical computational fluid dynamics (CFD) results and complementary data in a wide range of VEs. This allows a researcher to study a simulation in its natural three-dimensional (3D) morphology. It is also an exciting way to extract more information from CFD results by taking advantage of improved depth cues, a larger display canvas, custom interactivity, and an immersive approach that surrounds the researcher. The CFD-to-VR workflow was designed to be simple enough for a novice user and is also used as a tool to foster collaboration between engineers and clinicians. The workflow aimed to support results from common CFD software packages and across clinical research areas. ParaView, Blender and Unity were used in the workflow to take standard CFD files and process them for viewing in VR, with designated scripts written to automate the steps implemented in each software package. The workflow was successfully completed across multiple biomedical vessels, scales and applications, including the aorta with application to congenital cardiovascular disease, the Circle of Willis with respect to cerebral aneurysms, and the airway for surgical treatment planning; novice users completed it in approximately an hour. Bringing VR further into didactic teaching within academia allows students to be fully immersed in their respective subject matter, thereby increasing the students' sense of presence, understanding and enthusiasm. MARVL is a space for collaborative learning that also offers an immersive, virtual experience. A workflow was therefore created to view PowerPoint presentations in 3D using MARVL. The resulting Immersive PowerPoint workflow uses PowerPoint, Unity and open-source software packages to display presentations in 3D and can be completed in under thirty minutes.
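    As a rough illustration of the ParaView stage of such a CFD-to-VR workflow, the pvpython sketch below extracts the outer surface of a CFD result and writes it to a mesh file that Blender can import and pass on to Unity. The file names and the choice of PLY output are placeholder assumptions, not MARVL's actual scripts.

        # Hypothetical pvpython sketch (placeholder file names; assumes ParaView's
        # PLY writer accepts the extracted surface).
        from paraview.simple import OpenDataFile, ExtractSurface, SaveData

        cfd = OpenDataFile("aorta_result.vtu")        # assumed CFD output file
        surface = ExtractSurface(Input=cfd)           # polygonal surface of the volume mesh
        SaveData("aorta_surface.ply", proxy=surface)  # geometry Blender can import for Unity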