6,421 research outputs found

    Computer vision for interactive skewed video projection


    Real-time refocusing using an FPGA-based standard plenoptic camera

    Plenoptic cameras are receiving increased attention in scientific and commercial applications because they capture the entire structure of light in a scene, enabling optical transforms (such as focusing) to be applied computationally after the fact, rather than once and for all at the time a picture is taken. In many settings, real-time interactive performance is also desired, which in turn requires significant computational power due to the large amount of data needed to represent a plenoptic image. Although GPUs have been shown to provide acceptable performance for real-time plenoptic rendering, their cost and power requirements make them prohibitive for embedded uses (such as in-camera). On the other hand, the computation required for plenoptic rendering is well structured, suggesting the use of specialized hardware. Accordingly, this paper presents an array of switch-driven finite impulse response filters, implemented on an FPGA, to accomplish high-throughput spatial-domain rendering. The proposed architecture provides a power-efficient rendering hardware design suitable for full-video applications as required in broadcasting or cinematography. A benchmark assessment of the proposed hardware implementation shows that real-time performance can readily be achieved, with a one-order-of-magnitude performance improvement over a GPU implementation and a three-orders-of-magnitude improvement over a general-purpose CPU implementation.
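    A minimal sketch of the shift-and-sum refocusing that spatial-domain plenoptic rendering performs, assuming a 4-D light-field layout L[v, u, y, x] and an illustrative shift factor alpha; the switch-driven FIR-filter array described above can be read as a fixed-coefficient hardware realisation of this kind of shift-and-accumulate, though the paper's exact formulation may differ.

        # Illustrative shift-and-sum refocusing of a 4-D light field (assumption:
        # layout L[v, u, y, x]); np.roll stands in for the fixed FIR shifts.
        import numpy as np

        def refocus(lightfield, alpha):
            """Shift each sub-aperture image by alpha*(v, u) and average them."""
            V, U, H, W = lightfield.shape
            out = np.zeros((H, W))
            for v in range(V):
                for u in range(U):
                    dy = int(round(alpha * (v - V // 2)))
                    dx = int(round(alpha * (u - U // 2)))
                    out += np.roll(lightfield[v, u], shift=(dy, dx), axis=(0, 1))
            return out / (U * V)

        # Example: refocus a synthetic 5x5-view light field at two depths.
        lf = np.random.rand(5, 5, 64, 64)
        near, far = refocus(lf, alpha=1.0), refocus(lf, alpha=-1.0)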

    Perceptual Moment

    Moving image art can provide unique possibilities for making sense of our surrounding reality. Consisting of a series of artworks produced through a creative research methodology, this thesis project explores wonderment and its role in visual perception. The series, Perceptual Moments, comprises short, evocative video works presented in a variety of modes, including interactive and sculptural installation. To question the role of vision in mediating reality, the works engage the viewer through an intensive experience of seeing. This accompanying essay explores key visual and editing devices in the series that appear to have a role in shaping the viewer’s perception and interpretation of the visual experience, including “the chasm,” “the blur” and “interactive installation.” The essay also investigates the motivation behind the works through journal entries and offers critical analyses of each production. The visual devices in question are grounded within the context of psychology, neuroscience, phenomenology and film theories. Philosopher Gaston Bachelard provides an anchor for the concept of wonderment, while theorists Jonathan Crary and Gilles Deleuze create dialogical space around the act of viewing filmic images and the affect it involves. The devices are also observed in other media works, including seminal pieces by Stan Brakhage, Kurt Kren and Jan Svankmajer as well as contemporary figures such as Nathalie Djurberg and Matt Hope.

    Cuboid-maps for indoor illumination modeling and augmented reality rendering

    This thesis proposes a novel approach to indoor scene illumination modeling and augmented reality rendering. Our key observation is that an indoor scene is well represented by a set of rectangular spaces, where important illuminants reside on their boundary faces, such as a window on a wall or a ceiling light. Given a perspective image or a panorama and detected rectangular spaces as inputs, we estimate their cuboid shapes and infer illumination components for each face of the cuboids with a simple convolutional neural architecture. The process turns an image into a set of cuboid environment maps, each of which is a simple extension of a traditional cube-map. For augmented reality rendering, we simply take a linear combination of the inferred environment maps and an input image, producing surprisingly realistic illumination effects. This approach is simple and efficient, avoids flickering, and achieves quantitatively more accurate and qualitatively more realistic effects than competing, substantially more complicated systems.
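    A minimal sketch of the final compositing step described above: a weighted linear combination of the inferred cuboid environment-map renderings blended with the input image. The weights, the blend factor, and the assumption that each map has already been rendered into image space are illustrative, not the thesis' exact pipeline.

        # Illustrative blend of inferred environment-map renderings with the photo
        # (assumed weights and alpha; images are float arrays in [0, 1]).
        import numpy as np

        def composite(input_image, env_renderings, weights, alpha=0.5):
            """Blend a weighted sum of environment-map renderings with the input image."""
            lit = sum(w * m for w, m in zip(weights, env_renderings))
            return np.clip((1.0 - alpha) * input_image + alpha * lit, 0.0, 1.0)

        # Example: two hypothetical cuboid environment-map renderings.
        img = np.random.rand(256, 256, 3)
        maps = [np.random.rand(256, 256, 3), np.random.rand(256, 256, 3)]
        out = composite(img, maps, weights=[0.7, 0.3])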

    The hunt for submarines in classical art: mappings between scientific invention and artistic interpretation

    This is a report to the AHRC's ICT in Arts and Humanities Research Programme. This report stems from a project which aimed to produce a series of mappings between advanced imaging information and communications technologies (ICT) and needs within visual arts research. A secondary aim was to demonstrate the feasibility of a structured approach to establishing such mappings. The project was carried out over 2006, from January to December, by the visual arts centre of the Arts and Humanities Data Service (AHDS Visual Arts). It was funded by the Arts and Humanities Research Council (AHRC) as one of the Strategy Projects run under the aegis of its ICT in Arts and Humanities Research programme. The programme, which runs from October 2003 until September 2008, aims ‘to develop, promote and monitor the AHRC’s ICT strategy, and to build capacity nation-wide in the use of ICT for arts and humanities research’. As part of this, the Strategy Projects were intended to contribute to the programme in two ways: knowledge-gathering projects would inform the programme’s Fundamental Strategic Review of ICT, conducted for the AHRC in the second half of 2006, focusing ‘on critical strategic issues such as e-science and peer-review of digital resources’. Resource-development projects would ‘build tools and resources of broad relevance across the range of the AHRC’s academic subject disciplines’. This project fell into the knowledge-gathering strand. The project ran under the leadership of Dr Mike Pringle, Director, AHDS Visual Arts, and the day-to-day management of Polly Christie, Projects Manager, AHDS Visual Arts. The research was carried out by Dr Rupert Shepherd.

    QuadStream: A Quad-Based Scene Streaming Architecture for Novel Viewpoint Reconstruction
