34 research outputs found

    Perceptually-Aligned Frame Rate Selection Using Spatio-Temporal Features

    Evidence that Viewers Prefer Higher Frame Rate Film

    High frame rate (HFR) movie-making refers to the capture and projection of movies at frame rates several times higher than the traditional 24 frames per second. This higher frame rate theoretically improves the quality of motion portrayed in movies, and helps avoid motion blur, judder and other undesirable artefacts. However, there is considerable debate in the cinema industry regarding the acceptance of HFR content, given anecdotal reports of hyper-realistic imagery that reveals too much set and costume detail. Despite the potential theoretical advantages, there has been little empirical investigation of the impact of high frame rate techniques on the viewer experience. In this study we use stereoscopic 3D content, filmed and projected at multiple frame rates (24, 48 and 60 fps), with shutter angles ranging from 90 degrees to 358 degrees, to evaluate viewer preferences. In a paired-comparison paradigm we assessed preferences along a set of five attributes (realism, motion smoothness, blur/clarity, quality of depth and overall preference). The resulting data show a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. We found little impact of shutter angle on viewers' choices, with the exception of one measure (motion smoothness) for one clip type. These data are the first empirical evidence of the advantages afforded by high frame rate capture and presentation in a cinema context.
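
    The shutter-angle convention used in the study maps directly onto per-frame exposure time: the shutter is open for (angle / 360) of each frame interval. This is a standard cinematography formula rather than anything specific to the paper; a minimal Python sketch over the study's frame rates and the extremes of its shutter-angle range:

        def exposure_time_ms(fps: float, shutter_angle_deg: float) -> float:
            # A 360-degree shutter is open for the entire frame interval;
            # smaller angles expose a proportionally shorter slice of it.
            return (shutter_angle_deg / 360.0) * (1000.0 / fps)

        # Frame rates and shutter-angle extremes reported in the abstract.
        for fps in (24, 48, 60):
            for angle in (90, 358):
                print(f"{fps} fps @ {angle} deg: {exposure_time_ms(fps, angle):.2f} ms")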

    A Frame Rate Conversion Method Based on a Virtual Shutter Angle

    A Study of High Frame Rate Video Formats

    Investigating the impact of high frame rates on video compression

    Visual Perception in Simulated Reality

    Co-incidental animation: Framing chance occurrences of illusion of movement as animation events

    This research originates from a practice-driven urge to achieve simultaneity and immediacy in the creation and experience of animation, by aiming to bring together the construction, production and presentation of illusion of movement in time and place. Focusing on illusion of movement as animation, this research drew on the perceptual elements already employed in animation practices. However, recording a sequence – as in filmmaking – leads to temporal and physical distance between the creation and presentation of an animated work. Accomplishing the intended simultaneity and immediacy suggested looking for ways to achieve illusory movement without producing material artefacts to yield it. In order to realise this goal, this research turned to performance studies, where ephemerality and immediacy are theorised as inherent properties of performance practice. Those insights from performance theory were developed as possibilities for animation within the research practice. Performance theorist Erika Fischer-Lichte’s positioning of the performance event as open-ended and the artwork as fixed was taken as a starting point. On the basis of that theoretical grounding, a process to unite separate phases of animation creation is explored in tandem with incorporating event properties into animation. The research asks the question: how can animation be created and experienced simultaneously and immediately, as ephemeral as a performance event? In this practice-based research, the enquiries were carried out through practical experimentation while building a framework for reference and analysis based on performance theories. As suggested by Gray and Malins, this study devised its own methodology, in which collecting visual, auditory and written data, building physical tools and developing theoretical ones, as well as working with participants, provided methods to inquire into an animation practice of immediacy. The research begins in animation practice, negotiating possible ways to create the illusion of movement. In order to understand how this illusion occurs in animation, the research looked at perceptual and cognitive mechanisms. The preliminary investigation of optical toys and flipbooks, rather than films, was then extended to non-visual modes of illusory perception, and possibilities through aural and haptic illusions of movement were explored. The study then introduced the theoretical framework to explore the immediacy of ‘event-ness’. Based on Fischer-Lichte’s framing of the four characteristics of performance, the framework through which to shape and analyse the research practice emerged: mediality (bodily co-presence), materiality (transience), semioticity (emergence of new meaning) and aestheticity (the experience of performance as ‘event’). The considerations of liveness, co-creation, ephemerality and fixity in the research practice thus found a structure for evaluation through Fischer-Lichte’s perspective. Finally, as a contribution to expanding animation practice, it is proposed to approach animation as an event where illusory movement is observed through instructional scores. By calibrating and analysing possibilities of animation through the framework provided by Fischer-Lichte’s work, it becomes possible to amalgamate the three separate processes of animation – construction, production and presentation – into a single process: the animation event. In this event, the creation and experience of animation are simultaneous and concurrent, thus providing an answer to the research question.

    Low Latency Rendering with Dataflow Architectures

    The research presented in this thesis concerns latency in VR and synthetic environments. Latency is the end-to-end delay experienced by the user of an interactive computer system, between their physical actions and the perceived response to these actions. Latency is a product of the various processing, transport and buffering delays present in any current computer system. For many computer-mediated applications latency can be distracting, but it is not critical to the utility of the application. Synthetic environments, on the other hand, attempt to facilitate direct interaction with a digitised world. Direct interaction here implies the formation of a sensorimotor loop between the user and the digitised world - that is, the user makes predictions about how their actions affect the world, and sees these predictions realised. By facilitating the formation of this loop, the synthetic environment allows users to directly sense the digitised world, rather than the interface, and induces perceptions such as that of the digital world existing as a distinct physical place. This has many applications for knowledge transfer and efficient interaction through the use of enhanced communication cues. The complication is that the formation of the sensorimotor loop that underpins this is highly dependent on the fidelity of the virtual stimuli, including latency. The main research questions we ask are how the characteristics of dataflow computing can be leveraged to improve the temporal fidelity of the visual stimuli, and what implications this has for other aspects of fidelity. Secondarily, we ask what effects latency itself has on user interaction. We test the effects of latency on physical interaction at levels previously hypothesised but unexplored. We also test for a previously unconsidered effect of latency on higher-level cognitive functions. To do this, we create prototype image generators for interactive systems and virtual reality, using dataflow computing platforms. We integrate these into real interactive systems to gain practical experience of the real, perceptible benefits of alternative rendering approaches, but also of the implications when they are subject to the constraints of real systems. We quantify the differences between our systems and traditional systems using latency and objective image fidelity measures. We use our novel systems to perform user studies into the effects of latency. Our high-performance apparatuses allow experimentation at latencies lower than previously tested in comparable studies. The low-latency apparatuses are designed to minimise what is currently the largest delay in traditional rendering pipelines, and we find that the approach is successful in this respect. Our 3D low-latency apparatus achieves lower latencies and higher fidelities than traditional systems; the conditions under which it can do this, however, are highly constrained. We do not foresee dataflow computing shouldering the bulk of the rendering workload in the future, but rather facilitating the augmentation of the traditional pipeline with a very high-speed local loop, such as an image distortion stage. Our latency experiments revealed that many predictions about the effects of low latency should be re-evaluated, and that experimenting in this range requires great care.
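
    The abstract's definition of latency as the product of processing, transport and buffering delays suggests a simple additive budget; a minimal sketch of such a budget follows (the stage names and millisecond values are illustrative assumptions, not figures from the thesis):

        # End-to-end latency modelled as a sum of pipeline stage delays.
        # All values below are hypothetical placeholders for illustration.
        pipeline_ms = {
            "input_sampling": 4.0,    # tracker/controller polling
            "simulation": 8.0,        # application state update
            "rendering": 16.7,        # one frame at 60 Hz
            "buffering": 16.7,        # worst-case double-buffered swap
            "display_scanout": 8.0,   # panel scanout and response
        }
        print(f"end-to-end: {sum(pipeline_ms.values()):.1f} ms")

        # A dataflow-style high-speed local loop, as the thesis proposes,
        # would bypass the large rendering/buffering stages with a late
        # correction step (e.g. image distortion) close to the display.
        local_loop_ms = pipeline_ms["input_sampling"] + 2.0  # hypothetical warp cost
        print(f"local loop: {local_loop_ms:.1f} ms")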

    Audiovisual granular synthesis: creating synergistic relationships between sound and image

    The aims of this research were to investigate how an audio processing technique known as granular synthesis can be translated to a visual processing equivalent, and to develop software that fuses audiovisual relationships for the creation of real-time audiovisual art. In order to carry out this project, two main research questions were posed. The first was: how can audio processing techniques such as granular synthesis be adapted and applied to influence new visual performance techniques? The second was: how can computer software synergistically integrate audio and visuals to enable the real-time creation and performance of audiovisual art? The project at the centre of my research was the creation of a real-time audiovisual granular synthesis instrument named Kortex. The research involved a practice-based methodology and used an iterative performance cycle to evaluate and develop the Kortex prototype; this cycle included performing iterations of the prototype at a number of local, interstate and international events. Kortex facilitates the identification of shared characteristics found between sound and image at the micro and macro levels. The micro level addresses individual audiovisual segments, or grains, while the macro level addresses post-processing effects applied to the stream of audiovisual grains. Audiovisual characteristics are paired together by the user at each level, enabling composition with both media simultaneously. This provides the audiovisual artist with a dynamic approach to the creation of new works. Creating relationships between image and sound is highly subjective, yet an artist may use a mathematical, metaphorical/intuitive or intrinsic approach to create a convincing correlation between the two media. The mathematical approach expresses the relationship between sound and image as an equation. Metaphorical/intuitive relationships are formed when the two media share similar emotional or perceptual characteristics, while intrinsic relationships occur when audio and visual media are synthesised from the same source. Performers need powerful control strategies to manipulate large collections of variables in real time. I found that pattern-generating modulation sources created overlapping phrases that evolved the behaviour of audiovisual relationships. Furthermore, saving interesting aesthetics that emerged into banks of presets, along with the ability to slide from one preset to the next, facilitated powerful transformations during a performance. The project has contributed to the field of audiovisual art, specifically to the performance work of DJs and VJs. Kortex provides a single audiovisual composition and performance environment that can be used by DJs and VJs for creative collaboration. Kortex has enormous potential for adoption by the DJ/VJ community to assist in the production of tightly synchronised real-time audiovisual performances.
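
    At the micro level the abstract describes, granular synthesis slices a source buffer into short enveloped grains and reassembles them by overlap-add. A minimal audio-only sketch of that idea (the Hann envelope, grain size and density are conventional defaults, not details of Kortex's implementation):

        import numpy as np

        def granulate(source, sr=44100, grain_ms=50.0, density=40.0, out_seconds=2.0):
            # Scatter Hann-windowed grains from `source` across an output
            # buffer by overlap-add; `density` is grains per second.
            grain_len = int(sr * grain_ms / 1000.0)
            env = np.hanning(grain_len)
            out = np.zeros(int(sr * out_seconds))
            rng = np.random.default_rng(0)
            for _ in range(int(density * out_seconds)):
                src = rng.integers(0, len(source) - grain_len)
                dst = rng.integers(0, len(out) - grain_len)
                out[dst:dst + grain_len] += source[src:src + grain_len] * env
            return out / max(1.0, np.max(np.abs(out)))  # normalise to avoid clipping

        # Usage: granulate one second of a 220 Hz sine into a 2 s texture.
        t = np.linspace(0, 1, 44100, endpoint=False)
        texture = granulate(np.sin(2 * np.pi * 220 * t))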