
    Medium practices

    In this essay I develop a topic addressed in my book, Film Art Phenomena: the question of medium specificity. Rosalind Krauss's essay 'Art in the Age of the Post-Medium Condition' has catalysed a move away from medium specificity towards hybridity. I propose that questions of medium cannot be ignored, since media carry their own histories and give rise to specific formal traits and possibilities. The research involves close critical analysis of four moving image works that have not previously been written about: two made with film, and one each with computer and mobile phone. The analyses are conducted by reference to my ideas about how technological peculiarities inform and inflect practice: I see a work's material composition, its form and its final meaning as intricately bound up with one another. Film, video and the computer give rise to specific forms of moving image, partly because artists exploit a medium's peculiarities and partly because certain media lend themselves to some methodologies and not others. I do not seek hard distinctions between these media, but discuss them in terms of predispositions. For example, I discuss a 16mm cine film in which the shifting visibility of grain raises ideas around movement and stillness. The aim is to develop a definition of medium specificity, in relation to the moving image, that is not essentialist in the way previous versions were criticised for being, that is, based on ideas of 'material substrate' (Wollen). I argue that film is a medium of stages, in contrast to the modern tapeless camcorder, in which all the functions of recording, storage, playback and even editing are contained in a single device. Supported by a travel grant, I presented a version of this essay at the International Experimental Media Congress, Toronto, in April 2011, along with a selection of works: http://www.experimentalcongress.org/full-schedule

    Image synthesis based on a model of human vision

    Modern computer graphics systems can construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers attend only to certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the viewer's attention. This thesis presents a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions according to their visual importance. Efficiency gains are thereby obtained without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal: first, the design of an appropriate region-based model of visual importance, and second, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray tracing is also presented, which uses the visual importance model to guide the progressive refinement of an image. In addition, the concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains achieved by this method of progressive rendering. The visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency as long as the overall visual impression of the scene is maintained. Different aspects of the approach should also find application in image compression, image retrieval, progressive data transmission and active robotic vision.
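    The abstract describes the pipeline only in prose. As a rough illustration of the central idea, spending more of a fixed ray budget on regions the importance model marks as salient during progressive refinement, here is a minimal Python sketch. It is not code from the thesis: the fuzzy logic importance model and the ray tracer are stood in for by a toy importance array and a placeholder trace_region function.

```python
import numpy as np

rng = np.random.default_rng(0)

def allocate_samples(importance, budget):
    """Split a per-pass ray budget across regions in proportion to
    their relative visual importance (non-negative weights)."""
    w = importance / importance.sum()
    return np.maximum(1, np.round(w * budget).astype(int))

def trace_region(region_id, n_samples):
    """Placeholder for a ray tracer: returns the mean of n jittered
    radiance samples for the region (toy values only)."""
    return rng.random(n_samples).mean()

def progressive_render(importance, budget, passes=4):
    """Refine the image over several passes, directing more rays to
    regions the importance model considers visually salient."""
    estimates = np.zeros_like(importance, dtype=float)
    counts = np.zeros_like(importance, dtype=int)
    for _ in range(passes):
        per_region = allocate_samples(importance, budget // passes)
        for i, n in enumerate(per_region):
            new = trace_region(i, n)
            # running average over all samples gathered so far
            estimates[i] = (estimates[i] * counts[i] + new * n) / (counts[i] + n)
            counts[i] += n
    return estimates, counts

# Toy importance values for five regions, as might come from the fuzzy model
importance = np.array([0.9, 0.1, 0.4, 0.05, 0.3])
est, spent = progressive_render(importance, budget=1000)
print(spent)  # salient regions receive proportionally more rays
```

    The point of the sketch is the allocation step: perceived quality is preserved because under-sampled regions are exactly those the viewer is predicted not to attend to.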

    Motion analysis report

    Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices, such as goniometers on joints of the body and force plates, and manually operated sensors, such as a Cybex dynamometer. Passive sensing decouples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-degree-of-freedom tracking systems, and image processing systems based on multiple views and photogrammetric calculations.
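    The photogrammetric calculations mentioned for multi-view image processing systems typically reduce to triangulating a marker's 3D position from its pixel coordinates in two or more calibrated cameras. The following is a minimal, self-contained sketch of linear (DLT) triangulation, not code from the report; the camera matrices and tracked point are synthetic.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation: recover a 3D marker position from
    its pixel coordinates in two calibrated camera views.
    P1, P2 are 3x4 projection matrices; pt1, pt2 are (x, y) pixels."""
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A is the solution
    X = vt[-1]
    return X[:3] / X[3]              # de-homogenize

# Two toy cameras: identity pose, and a camera translated along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0, 1.0])
pt1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
pt2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, pt1, pt2))  # ~ [0.2, -0.1, 4.0]
```

    In practice, lens distortion is corrected and marker correspondences are established before this step, and more than two views are combined for robustness.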

    Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces

    The ability of human beings to physically touch their surroundings has had a profound impact on our daily lives. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planar surfaces and spheres; for more complicated non-parametric surfaces, such parameterizations are not available. In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented non-parametric rear-projection surfaces, using an infrared-light-based multi-camera, multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to, and displayed on, a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses. The broad problem of touch detection and response can be decomposed into three major components: determining whether a touch has occurred, determining where a detected touch has occurred, and determining how to respond to a detected touch. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges by encoding the coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content. Detecting the presence of touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization that utilize the lookup table architecture. One of the algorithms, a bounded plane sweep, is additionally able to estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces. In a formative human-subject study, we examine how touch interactions are used in the context of healthcare, and we present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces achieved by the proposed approach, such as decreases in induced cognitive load, increases in system usability, and increases in user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as when touching targets on a flat surface, and their self-perception of their accuracy was higher.
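    The relational lookup table is described here only at the architectural level. The toy Python sketch below illustrates the general shape of such a structure, mapping a detecting camera's pixel to a precomputed surface point, its projector pixel, and a semantic region, so that a detected touch can be resolved to a response without runtime geometry. All names and values are hypothetical, not the dissertation's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SurfacePoint:
    """A precomputed sample on the physical surface and its relations."""
    xyz: tuple           # 3D position on the physical object
    projector_uv: tuple  # projector pixel that illuminates this point
    region: str          # semantic label in the registered virtual content

class RelationalLookup:
    """Toy relational lookup table: camera pixels are keys; values
    encode the surface point, projector pixel, and semantic region,
    so touch localization and response become table reads."""
    def __init__(self):
        self.table = {}

    def register(self, cam_pixel, point: SurfacePoint):
        self.table[cam_pixel] = point

    def respond(self, cam_pixel):
        point = self.table.get(cam_pixel)
        if point is None:
            return None  # pixel does not map onto the surface
        # dispatch a semantic response for the touched region
        return f"trigger response for region '{point.region}' at {point.xyz}"

lut = RelationalLookup()
lut.register((120, 84), SurfacePoint((0.10, 0.20, 0.05), (640, 512), "left_cheek"))
print(lut.respond((120, 84)))
```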

    In the Blink of an Eye: Neural Responses Elicited to Viewing the Eye Blinks of Another Individual

    Facial movements have the potential to be powerful social signals. Previous studies have shown that eye gaze changes and simple mouth movements can elicit robust neural responses, which can be altered as a function of potential social significance. Eye blinks are frequent events and are usually not deliberately communicative, yet blink rate is known to influence social perception. Here, we studied event-related potentials (ERPs) elicited by observing non-task-relevant blinks, eye closure, and eye gaze changes in a centrally presented natural face stimulus. Our first hypothesis (H1), that blinks would produce robust ERPs (N170 and later ERP components), was validated, suggesting that the brain may register and process all types of eye movement for potential social relevance. We also predicted an amplitude gradient for ERPs as a function of gaze change, relative to eye closure and then blinks (H2). H2 was only partly validated: large temporo-occipital N170s were observed in all eye change conditions and did not significantly differ between blinks and the other conditions. However, blinks elicited late ERPs that, although robust, were significantly smaller than those in the gaze conditions. Our data indicate that small, task-irrelevant facial movements such as blinks are measurably registered by the observer's brain. This finding is suggestive of the potential social significance of blinks, which, in turn, has implications for the study of social cognition and the use of real-life social scenarios.
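    ERPs such as the N170 reported here are conventionally obtained by averaging stimulus-locked EEG epochs with baseline correction. As background only, and not the study's analysis pipeline, here is a minimal Python sketch of that averaging step run on synthetic data.

```python
import numpy as np

def erp(eeg, events, sfreq, tmin=-0.2, tmax=0.6):
    """Average stimulus-locked EEG epochs into an event-related
    potential. eeg: (n_channels, n_samples) array; events: sample
    indices of stimulus onsets (e.g. observed blinks)."""
    lo, hi = int(tmin * sfreq), int(tmax * sfreq)
    epochs = np.stack([eeg[:, e + lo:e + hi] for e in events])
    # baseline-correct each epoch using the pre-stimulus interval
    baseline = epochs[:, :, :-lo].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)  # (n_channels, n_times)

# Synthetic example: 32 channels, 10 s at 500 Hz, 20 blink events
rng = np.random.default_rng(1)
eeg = rng.standard_normal((32, 5000))
events = rng.integers(200, 4500, size=20)
evoked = erp(eeg, events, sfreq=500)
print(evoked.shape)  # (32, 400): one averaged waveform per channel
```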

    Animating Ephemeral Surfaces: Transparency, Translucency and Disney’s World of Color

    This paper examines the unusual theatrical and exhibition dimensions of Disney's World of Color, an outdoor night-time entertainment spectacle that screens animated films on ephemeral materials: the spray and light produced by fountains, water, mist and fire. It considers how this show innovates a new form of theatrical exhibition, combining older art forms, from fireworks to pyrodramas, with contemporary computer-controlled light and colour design and immersive effects. It suggests structural and aesthetic connections between this animated attraction and recent technological innovations such as Google Glass™, in which mobile computer interfaces combine transparency and opacity as an essential part of their formal structure and tactile pleasure. Theorising that the relationship between animation and the ephemeral is also situated in these tensions between the transparent and the opaque, I go on to suggest that Disney's World of Color is a particular instantiation of the ways in which "animation" can be understood not only as a specific technical process, but also as a form of corporeal transformation in which movement, light and colour enliven individual bodies and screen spaces.

    Immersive Visualization in Biomedical Computational Fluid Dynamics and Didactic Teaching and Learning

    Virtual reality (VR) can stimulate active learning, critical thinking, decision making and improved performance. It requires a medium in which to show virtual content, called a virtual environment (VE). The MARquette Visualization Lab (MARVL) is an example of a VE. Robust processes and workflows that allow for the creation of content for use within MARVL further increase the user base for this valuable resource. A workflow was created to display biomedical computational fluid dynamics (CFD) results and complementary data in a wide range of VEs. This allows a researcher to study a simulation in its natural three-dimensional (3D) morphology. In addition, it is an exciting way to extract more information from CFD results by taking advantage of improved depth cues, a larger display canvas, custom interactivity, and an immersive approach that surrounds the researcher. The CFD-to-VR workflow was designed to be basic enough for a novice user, and it also serves as a tool to foster collaboration between engineers and clinicians. The workflow aimed to support results from common CFD software packages and across clinical research areas. ParaView, Blender and Unity were used in the workflow to take standard CFD files and process them for viewing in VR. Designated scripts were written to automate the steps implemented in each software package. The workflow was successfully completed across multiple biomedical vessels, scales and applications, including the aorta with application to congenital cardiovascular disease, the Circle of Willis with respect to cerebral aneurysms, and the airway for surgical treatment planning. The workflow was completed by novice users in approximately an hour. Bringing VR further into didactic teaching within academia allows students to be fully immersed in their subject matter, thereby increasing their sense of presence, understanding and enthusiasm. MARVL is a space for collaborative learning that also offers an immersive, virtual experience. A second workflow was created to view PowerPoint presentations in 3D using MARVL. The resulting Immersive PowerPoint workflow used PowerPoint, Unity and other open-source software packages to display the presentations in 3D, and it can be completed in under thirty minutes.
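    The abstract does not reproduce the automation scripts. As an illustration of what the ParaView stage of such a CFD-to-VR pipeline might look like, here is a short, hedged sketch intended to run under ParaView's pvpython; the file names are hypothetical, and the subsequent Blender and Unity import steps are not shown.

```python
# Hedged sketch of the ParaView step in a CFD-to-VR pipeline: load a
# CFD result, extract its outer surface, and write a mesh that Blender
# can import for later use in Unity. File names are hypothetical and
# this is not MARVL's actual automation script.
from paraview.simple import OpenDataFile, ExtractSurface, SaveData

reader = OpenDataFile('aorta_cfd_result.vtu')   # hypothetical CFD output file
surface = ExtractSurface(Input=reader)          # volume mesh -> boundary surface
SaveData('aorta_surface.ply', proxy=surface)    # PLY imports cleanly into Blender
```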