
    The Virtual Worlds of Cinema Visual Effects, Simulation, and the Aesthetics of Cinematic Immersion

    This thesis develops a phenomenology of immersive cinematic spectatorship. During an immersive experience in the cinema, the images, sounds, events, emotions, and characters that form a fictional diegesis become so compelling that our conscious experience of the real world is displaced by a virtual world. Theorists and audiences have long recognized cinema’s ability to momentarily substitute for the lived experience of reality, but this remains an under-theorized aspect of cinematic spectatorship. The first aim of this thesis is therefore to examine these immersive responses to cinema from three perspectives – the formal, the technological, and the neuroscientific – to describe the exact mechanisms through which a spectator’s immersion in a cinematic world is achieved. A second aim is to examine the historical development of the technologies of visual simulation that are used to create these immersive diegetic worlds. My analysis shows a consistent increase in the vividness and transparency of simulative technologies, two factors that are crucial determinants of a spectator’s immersion. In contrast to the cultural anxiety that often surrounds immersive responses to simulative technologies, I examine immersive spectatorship as an aesthetic phenomenon that is central to our engagement with cinema. The ubiquity of narrative – written, verbal, cinematic – shows that the ability to achieve immersion is a fundamental property of the human mind, found in cultures diverse in both time and place. This thesis is thus an attempt to illuminate this unique human ability and to examine the technologies that allow it to flourish.

    The Challenges of Processing Kite Aerial Photography Imagery with Modern Photogrammetry Techniques

    Kite Aerial Photography (KAP) is a traditional method of collecting small-format aerial photography used in a variety of fields. This research explored techniques for processing KAP imagery, with a focus on some of the challenges specific to photo processing. The performance of multiple automated image compositing programs was compared using a common set of 29 images. The packages based on a photogrammetry approach outperformed the non-photogrammetric software and generated similar levels of quality to one another. While all three photogrammetric packages produced satisfactory output, each had unique challenges.

    Storytelling with salient stills

    Thesis (M.S.) -- Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1996. Includes bibliographical references (p. 59-63). Michale J. Massey. M.S.

    Integration of Z-Depth in Compositing

    It is important for video compositors to be able to complete their jobs quickly and efficiently. One of the tasks they might encounter is inserting assets such as characters into a 3D-rendered environment that has depth information embedded in the image sequence. Currently, a plug-in that facilitates this task (Depth Matte®) works by reading the depth information of the layer it's applied to and showing or hiding pixels of that layer. In this plug-in, the Z-depth used is locked to the layer the plug-in is applied to. This research compares Depth Matte® to a custom-made plug-in that reads depth information from a layer other than the one it is applied to, while still showing or hiding the pixels of the layer it is associated with. Nine subjects tested both Depth Matte® and the custom plug-in, ZeDI, to gather time and mouse-click data: time was gathered to test speed, and mouse-click data was gathered to test efficiency. ZeDI was shown to be significantly quicker and more efficient, and was also overwhelmingly preferred by the users. In conclusion, a technique in which pixels are shown depending on depth information that does not necessarily come from the layer the plug-in is applied to is quicker and more efficient than one where the depth information is locked to that layer.
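    The core mechanism described above — showing or hiding a layer's pixels according to a Z-depth map that may belong to a different layer — can be sketched in a few lines of NumPy. This is a minimal illustration only; the function name and API are hypothetical and do not reflect the actual ZeDI or Depth Matte® plug-ins.

    ```python
    import numpy as np

    def depth_matte(layer_rgba, z_depth, threshold, show_nearer=True):
        """Show or hide a layer's pixels based on a Z-depth map.

        z_depth may come from a *different* layer than layer_rgba,
        mirroring the decoupled approach described above.
        """
        if show_nearer:
            mask = z_depth <= threshold   # keep pixels nearer than the threshold
        else:
            mask = z_depth > threshold    # keep pixels farther than the threshold
        out = layer_rgba.copy()
        out[..., 3] = np.where(mask, out[..., 3], 0.0)  # zero alpha hides a pixel
        return out

    # Place a character at depth 5 in a rendered scene: the character should be
    # visible only where the scene's geometry lies *behind* it (z_depth > 5).
    character = np.ones((2, 2, 4))                      # opaque 2x2 test layer
    scene_depth = np.array([[2.0, 8.0], [2.0, 8.0]])    # depth from the 3D render
    composited = depth_matte(character, scene_depth, threshold=5.0, show_nearer=False)
    ```

    The point of the comparison in the abstract is exactly the second argument: `z_depth` is passed in independently rather than being read from the layer the matte is applied to.
    
    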

    Toward General Purpose 3D User Interfaces: Extending Windowing Systems to Three Dimensions

    Recent growth in the commercial availability of consumer-grade 3D user interface devices like the Microsoft Kinect and the Oculus Rift, coupled with the broad availability of high-performance 3D graphics hardware, has put high-quality 3D user interfaces firmly within the reach of consumer markets for the first time. However, these devices require custom integration with every application that wishes to use them, seriously limiting application support, and there is no established mechanism for multiple applications to use the same 3D interface hardware simultaneously. This thesis proposes that these problems can be solved in the same way they were solved for 2D interfaces: by abstracting the input hardware behind input primitives provided by the windowing system and compositing the output of applications within the windowing system before displaying it. To demonstrate the feasibility of this approach, this thesis also presents a novel Wayland compositor which allows clients to create 3D interface contexts within a 3D interface space, in the same way that traditional windowing systems allow applications to create 2D interface contexts (windows) within a 2D interface space (the desktop), as well as allowing unmodified 2D Wayland clients to window into the same 3D interface space and receive standard 2D input events. This implementation demonstrates the ability of consumer 3D interface hardware to support a 3D windowing system, the ability of this 3D windowing system to support applications with compelling 3D interfaces, the ability of this style of windowing system to be built on top of existing hardware-accelerated graphics and windowing infrastructure, and the ability of such a windowing system to support unmodified 2D applications windowing into the same 3D space as the 3D interface applications.
    This means that application developers could create compelling 3D interfaces with no knowledge of the hardware that supports them, that new hardware could be introduced without needing to integrate it with individual applications, and that users could mix whatever 2D and 3D applications they wish in an immersive 3D interface space regardless of the details of the underlying hardware.
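    One way to picture how a 3D windowing system can deliver standard 2D input events to an unmodified 2D client, as described above, is to treat each 2D surface as a quad posed in 3D space and intersect a pointing ray with it, yielding ordinary surface coordinates. The sketch below is illustrative only; all names are hypothetical and this is not the compositor's actual code.

    ```python
    import numpy as np

    def ray_to_surface_coords(ray_origin, ray_dir, surf_origin, surf_x, surf_y):
        """Intersect a pointing ray with a window quad.

        The quad is surf_origin + u*surf_x + v*surf_y for u, v in [0, 1].
        Returns normalized (u, v) window coordinates, or None on a miss.
        """
        normal = np.cross(surf_x, surf_y)
        denom = np.dot(ray_dir, normal)
        if abs(denom) < 1e-9:
            return None                      # ray parallel to the window plane
        t = np.dot(surf_origin - ray_origin, normal) / denom
        if t < 0:
            return None                      # window is behind the pointer
        local = ray_origin + t * np.asarray(ray_dir) - surf_origin
        u = np.dot(local, surf_x) / np.dot(surf_x, surf_x)
        v = np.dot(local, surf_y) / np.dot(surf_y, surf_y)
        if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:
            return (u, v)                    # deliverable as a 2D pointer event
        return None                          # ray misses the window

    # A 1x1 window two units in front of a pointer aimed straight at its center:
    uv = ray_to_surface_coords(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                               np.array([-0.5, -0.5, 2.0]),
                               np.array([1.0, 0.0, 0.0]),
                               np.array([0.0, 1.0, 0.0]))
    ```

    The resulting (u, v) pair is what lets the 2D client remain unmodified: from its perspective it simply receives pointer coordinates within its own surface.
    
    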

    Beyond buttons: Explorations in creative storytelling

    None provided

    Efficient rendering for three-dimensional displays

    This thesis explores more efficient methods for visualizing point data sets on three-dimensional (3D) displays. Point data sets are used in many scientific applications, e.g. cosmological simulations. Visualizing these data sets in 3D is desirable because it can more readily reveal structure and unknown phenomena. However, cutting-edge scientific point data sets are very large, and producing/rendering even a single image is expensive. Furthermore, current literature suggests that the ideal number of views for 3D (multiview) displays can be in the hundreds, which compounds the costs. The accepted notion that many views are required for 3D displays is challenged by carrying out a novel human factors trial study. The results suggest that humans are actually surprisingly insensitive to the number of viewpoints with regard to their task performance, when occlusion in the scene is not a dominant factor. Existing stereoscopic rendering algorithms can have high set-up costs, which limits their use, and none are tuned for uncorrelated 3D point rendering. This thesis shows that it is possible to improve rendering speeds for a low number of views by perspective reprojection. The novelty of the approach described lies in delaying the reprojection and generation of the viewpoints until the fragment stage of the pipeline and streamlining the rendering pipeline for points only. Theoretical analysis suggests a fragment reprojection scheme will render at least 2.8 times faster than naïvely re-rendering the scene from multiple viewpoints. Building upon the fragment reprojection technique, further rendering performance is shown to be possible (at the cost of some rendering accuracy) by restricting the amount of reprojection required according to the stereoscopic resolution of the display. A significant benefit is that the scene depth can be mapped arbitrarily to the perceived depth range of the display at no extra cost over a single region-mapping approach.
    Using an average case study (rendering 500k points for a 9-view High Definition 3D display), theoretical analysis suggests that this new approach is capable of twice the performance gain of simply reprojecting every single fragment, and quantitative measures show the algorithm to be 5 times faster than a naïve rendering approach. Further detailed quantitative results, under varying scenarios, are provided and discussed.
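    The idea of reprojecting fragments into horizontally offset views, rather than re-rendering the scene once per view, can be illustrated with a toy disparity calculation. This is a sketch under an assumed off-axis camera geometry; the parameters and formula are illustrative and are not the thesis's actual algorithm.

    ```python
    # For a camera offset horizontally by e with convergence distance d, a
    # fragment at depth z shifts on screen by a disparity proportional to
    # e * (1 - d / z): zero at the convergence plane, and of opposite sign
    # in front of versus behind it.

    def reproject_x(x_center, z, offsets, convergence):
        """Per-view screen x-coordinates for one fragment (normalized units).

        x_center is the fragment's x in the central view; one value is
        returned per view offset, replacing a full re-render of that view.
        """
        return [x_center + e * (1.0 - convergence / z) for e in offsets]

    # Nine views spaced 0.01 apart, converged on the plane z = 10:
    offsets = [0.01 * (k - 4) for k in range(9)]
    xs_at_screen = reproject_x(0.0, 10.0, offsets, convergence=10.0)  # zero disparity
    xs_behind = reproject_x(0.0, 20.0, offsets, convergence=10.0)     # spreads apart
    ```

    Running this once per fragment in the fragment stage, as the abstract describes, replaces nine full passes over the scene geometry with one pass plus eight cheap per-fragment shifts.
    
    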