    A versatile maskless microscope projection photolithography system and its application in light-directed fabrication of DNA microarrays

    We present a maskless microscope projection lithography system (MPLS) in which photomasks are replaced by a Digital Micromirror Device type spatial light modulator (DMD, Texas Instruments). Employing video projector technology, high-resolution patterns designed as bitmap images on the computer are displayed using a micromirror array consisting of about 786,000 tiny, individually addressable tilting mirrors. The DMD, which is located in the image plane of an infinity-corrected microscope, is projected onto a substrate placed in the focal plane of the microscope objective. With a 5x (0.25 NA) Fluar microscope objective, a fivefold reduction of the image to a total size of 9 mm² and a minimum feature size of 3.5 microns is achieved. Our system can be used in the visible range as well as in the near UV (with a light intensity of up to 76 mW/cm² around the 365 nm Hg line). We developed an inexpensive and simple method to enable exact focusing and control of the image quality of the projected patterns. Our MPLS was originally designed for the light-directed in situ synthesis of DNA microarrays. One requirement is a high UV intensity to keep the fabrication process reasonably short. Another is a sufficient contrast ratio over small distances (of about 5 microns). This is necessary to achieve a high density of features (i.e., separated sites on the substrate at which different DNA sequences are synthesized in parallel) while keeping the number of stray-light-induced DNA sequence errors reasonably small. We demonstrate the performance of the apparatus in light-directed DNA chip synthesis and discuss its advantages and limitations. Comment: 12 pages, 9 figures, journal article
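    The numbers reported above can be related with a short back-of-the-envelope sketch. The 76 mW/cm² intensity and the fivefold reduction are taken from the abstract; the DMD mirror pitch (about 13.7 microns for DMDs of that generation) and the 3 J/cm² example dose are illustrative assumptions, not values from the text.

    ```python
    # Hedged sketch relating the reported MPLS figures.
    # Assumptions (not from the abstract): mirror pitch ~13.7 um, dose 3 J/cm^2.

    def exposure_time_s(dose_mj_per_cm2, intensity_mw_per_cm2=76.0):
        """Time to deliver a given UV dose at the stated 365 nm intensity."""
        return dose_mj_per_cm2 / intensity_mw_per_cm2

    def projected_pixel_um(mirror_pitch_um=13.7, reduction=5.0):
        """Size of one DMD mirror imaged through the 5x reduction objective."""
        return mirror_pitch_um / reduction

    t = exposure_time_s(3000.0)   # ~39.5 s for a hypothetical 3 J/cm^2 step
    p = projected_pixel_um()      # ~2.7 um per mirror on the substrate
    ```

    A single-mirror footprint of roughly 2.7 microns is consistent with the stated 3.5 micron minimum feature size, since diffraction and proximity effects enlarge the smallest printable feature somewhat beyond one projected pixel.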

    In-Band Disparity Compensation for Multiview Image Compression and View Synthesis


    Steered mixture-of-experts for light field images and video : representation and coding

    Research in light field (LF) processing has increased heavily over the last decade. This is largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids. These grids are then further decorrelated through hybrid DPCM/transform techniques. However, these 2-D regular grids are less suited for high-dimensional data such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about light rays arriving at a certain region from any angle. The global model thus consists of a set of kernels which define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application to 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bitstream. Even without provisions for high-frequency information, the proposed method performs comparably to the state of the art at low-to-mid bitrates with respect to subjective visual quality of 4-D LF images. For 5-D LF video, we observe superior decorrelation and coding performance, with coding gains of a factor of 4x in bitrate at the same quality. At least equally important is the fact that our method inherently provides functionality for LF rendering which is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) light-weight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution.
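    The core SMoE idea — gated local experts forming one continuous function — can be sketched in one dimension. The Gaussian gating, the linear experts, and the toy kernel parameters below are illustrative assumptions for exposition; the paper's actual kernels are higher-dimensional and steered.

    ```python
    import math

    # Minimal 1-D mixture-of-experts sketch (illustrative, not the paper's
    # parameterization): each kernel has a center, a bandwidth, and a linear
    # expert m(x) = slope * (x - center) + offset. Gating weights are
    # normalized Gaussian responsibilities, so the model is continuous in x.

    def smoe_eval(x, kernels):
        weights = [math.exp(-((x - k["center"]) ** 2) / (2 * k["sigma"] ** 2))
                   for k in kernels]
        total = sum(weights)
        return sum((w / total) * (k["slope"] * (x - k["center"]) + k["offset"])
                   for w, k in zip(weights, kernels))

    # Two kernels approximating a smooth step: output ~0 near x=0.25,
    # ~1 near x=0.75, with a continuous transition in between.
    kernels = [
        {"center": 0.25, "sigma": 0.1, "slope": 0.0, "offset": 0.0},
        {"center": 0.75, "sigma": 0.1, "slope": 0.0, "offset": 1.0},
    ]
    ```

    Because the model is an analytic function of position (and, in the full 4-D/5-D setting, of view angle and time), evaluating it at arbitrary coordinates gives the zero-delay random access, view interpolation, and super-resolution properties claimed above essentially for free.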

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos. Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'1
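    The view- and pose-dependent texturing mentioned above can be illustrated with a minimal blending sketch: captured views whose pose lies close to the novel target pose contribute more to the composited texture. The Gaussian falloff, the yaw-only parameterization, and the sample angles are illustrative assumptions, not the paper's actual weighting scheme.

    ```python
    import math

    # Hedged sketch of pose-dependent texture blending: for a novel target
    # view direction, nearby captured views contribute more. Yaw-only
    # distance and Gaussian falloff are simplifying assumptions.

    def blend_weights(target_yaw_deg, captured_yaws_deg, falloff_deg=15.0):
        """Normalized weights: captured views closer in yaw weigh more."""
        raw = [math.exp(-((target_yaw_deg - y) / falloff_deg) ** 2)
               for y in captured_yaws_deg]
        total = sum(raw)
        return [r / total for r in raw]

    # Target pose at 10 degrees, captured views at -30, 0, and 30 degrees:
    # the frontal (0-degree) view dominates, and weights sum to 1.
    w = blend_weights(10.0, [-30.0, 0.0, 30.0])
    ```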

    Light Field Denoising via Anisotropic Parallax Analysis in a CNN Framework

    Light field (LF) cameras provide perspective information of scenes by taking directional measurements of the focused light rays. The raw outputs are usually dark, with additive camera noise that impedes subsequent processing and applications. We propose a novel LF denoising framework based on anisotropic parallax analysis (APA). Two convolutional neural networks are jointly designed for the task: first, the structural parallax synthesis network predicts the parallax details for the entire LF based on a set of anisotropic parallax features. These novel features can efficiently capture the high-frequency perspective components of an LF from noisy observations. Second, the view-dependent detail compensation network restores non-Lambertian variation to each LF view by involving view-specific spatial energies. Extensive experiments show that the proposed APA LF denoiser provides much better denoising performance than state-of-the-art methods in terms of visual quality and preservation of parallax details.
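    The two-stage structure described above can be sketched without a deep learning framework on toy 1-D "views": a first stage recovers structure shared across views, and a second stage restores view-specific detail. The cross-view mean and the residual blend below are crude stand-ins chosen for illustration, not the paper's networks.

    ```python
    # Framework-free sketch of the two-stage APA idea on toy 1-D views.
    # Stage 1: a cross-view average stands in for the parallax-synthesis CNN
    # (independent noise cancels across views). Stage 2: blending a little of
    # the original view back stands in for the detail-compensation CNN
    # (view-specific, non-Lambertian variation is reintroduced).

    def stage1_shared_structure(noisy_views):
        """Average corresponding samples across views to suppress noise."""
        n = len(noisy_views)
        return [sum(view[i] for view in noisy_views) / n
                for i in range(len(noisy_views[0]))]

    def stage2_view_detail(shared, noisy_view, alpha=0.2):
        """Blend a fraction of the original view back in for its detail."""
        return [(1 - alpha) * s + alpha * x for s, x in zip(shared, noisy_view)]

    views = [[1.0, 2.1, 3.0], [1.1, 1.9, 3.1], [0.9, 2.0, 2.9]]
    shared = stage1_shared_structure(views)        # noise-suppressed structure
    denoised0 = stage2_view_detail(shared, views[0])
    ```

    The design point this illustrates is the division of labor: cross-view aggregation alone would wash out view-dependent effects, which is exactly why the second, view-specific stage exists.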