
    A versatile maskless microscope projection photolithography system and its application in light-directed fabrication of DNA microarrays

    Full text link
    We present a maskless microscope projection lithography system (MPLS) in which photomasks are replaced by a Digital Micromirror Device (DMD, Texas Instruments), a spatial light modulator. Employing video projector technology, high-resolution patterns designed as bitmap images on a computer are displayed on a micromirror array consisting of about 786,000 individually addressable tilting mirrors. The DMD, located in the image plane of an infinity-corrected microscope, is projected onto a substrate placed in the focal plane of the microscope objective. With a 5x (0.25 NA) Fluar microscope objective, a fivefold reduction of the image to a total size of 9 mm2 and a minimum feature size of 3.5 microns is achieved. Our system can be used in the visible range as well as in the near UV (with a light intensity of up to 76 mW/cm2 around the 365 nm Hg line). We also developed an inexpensive and simple method for exact focusing and for controlling the image quality of the projected patterns. Our MPLS was originally designed for the light-directed in situ synthesis of DNA microarrays. One requirement is a high UV intensity, to keep the fabrication process reasonably short. Another is a sufficient contrast ratio over small distances (of about 5 microns); this is necessary to achieve a high density of features (i.e., separated sites on the substrate at which different DNA sequences are synthesized in parallel) while keeping the number of stray-light-induced DNA sequence errors reasonably small. We demonstrate the performance of the apparatus in light-directed DNA chip synthesis and discuss its advantages and limitations. Comment: 12 pages, 9 figures, journal article
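    A minimal sketch of the projection geometry and exposure-time arithmetic implied by the numbers in this abstract. The 17 um mirror pitch and 1024x768 (XGA) array are assumptions consistent with the quoted ~786,000 mirrors and 9 mm2 image size, and the photodeprotection dose is a purely hypothetical placeholder; only the 5x demagnification and 76 mW/cm2 intensity come from the abstract.

```python
MIRROR_PITCH_UM = 17.0     # assumed pitch of one DMD micromirror (not in abstract)
RESOLUTION = (1024, 768)   # assumed XGA array, ~786,000 mirrors
DEMAGNIFICATION = 5.0      # 5x Fluar objective, per the abstract

# Size of one projected DMD pixel on the substrate.
pixel_um = MIRROR_PITCH_UM / DEMAGNIFICATION                     # ~3.4 um

# Total projected image area in mm^2 (consistent with the quoted ~9 mm^2).
w_mm = RESOLUTION[0] * MIRROR_PITCH_UM / DEMAGNIFICATION / 1000.0
h_mm = RESOLUTION[1] * MIRROR_PITCH_UM / DEMAGNIFICATION / 1000.0
area_mm2 = w_mm * h_mm

# Exposure time for a hypothetical near-UV photodeprotection dose.
INTENSITY_MW_CM2 = 76.0    # intensity at the 365 nm Hg line, per the abstract
DOSE_MJ_CM2 = 6000.0       # placeholder required dose (mJ/cm^2)
exposure_s = DOSE_MJ_CM2 / INTENSITY_MW_CM2

print(f"pixel {pixel_um:.2f} um, image {area_mm2:.1f} mm^2, "
      f"exposure {exposure_s:.0f} s per step")
```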

    Keeping track of worm trackers

    Get PDF
    C. elegans is used extensively as a model system in the neurosciences due to its well-defined nervous system. However, the seeming simplicity of this nervous system in anatomical structure and neuronal connectivity, at least compared to higher animals, belies a rich diversity of behaviors. The usefulness of the worm in genome-wide mutagenesis or RNAi screens, where thousands of strains are assessed for phenotype, emphasizes the need for computational methods that automatically parameterize the generated behaviors. In addition, behaviors can be modulated by external cues such as temperature, O2 and CO2 concentrations, and mechanosensory and chemosensory inputs. Different machine vision tools have been developed to aid researchers in their efforts to inventory and characterize defined behavioral "outputs". Here we aim to provide an overview of different worm-tracking packages and video analysis tools designed to quantify different aspects of locomotion, such as the occurrence of directional changes (turns, omega bends), the curvature of the sinusoidal shape (amplitude, body bend angles), and velocity (speed, backward or forward movement).
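    A minimal sketch of the kind of parameterization such trackers perform, assuming only a centroid trajectory sampled at a fixed frame rate. The frame rate and turn threshold are illustrative values, not taken from any of the reviewed packages, and distinguishing forward from backward crawling would additionally need the body axis from a skeletonized worm shape.

```python
import numpy as np

def locomotion_metrics(xy, fps=30.0, turn_threshold_deg=50.0):
    """xy: (N, 2) array of centroid positions in mm, sampled at `fps`."""
    v = np.diff(xy, axis=0) * fps               # velocity vectors (mm/s)
    speed = np.linalg.norm(v, axis=1)           # instantaneous speed
    heading = np.arctan2(v[:, 1], v[:, 0])      # direction of travel
    # Frame-to-frame heading change, wrapped into (-180, 180] degrees.
    dtheta = np.degrees(np.angle(np.exp(1j * np.diff(heading))))
    turns = int(np.sum(np.abs(dtheta) > turn_threshold_deg))
    return {"mean_speed_mm_s": float(speed.mean()), "turn_events": turns}

# Example: a noisy circular crawl.
t = np.linspace(0, 10, 300)
xy = np.c_[np.cos(t), np.sin(t)] + 0.01 * np.random.randn(300, 2)
print(locomotion_metrics(xy))
```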

    CGAMES'2009

    Get PDF

    Application for light field inpainting

    Get PDF
    Light Field (LF) imaging is a multimedia technology that can provide a more immersive experience when visualizing multimedia content, with higher levels of realism compared to conventional imaging technologies. This technology is particularly promising for Virtual Reality (VR), since its 4-dimensional LF representation displays real-world scenes in a way that lets users experience the captured scenes from every position and every angle. For these reasons, LF is a fast-growing technology with many topics to explore; LF inpainting is the one explored in this dissertation. Image inpainting is an editing technique that synthesizes alternative content to fill in holes in an image. It is commonly used to fill missing parts of a scene and to restore damaged images such that the modifications are correct and visually realistic. Applying traditional 2D inpainting techniques directly to LFs is very unlikely to produce an inpainting that is consistent across all 4 dimensions. Usually, to inpaint 4D LF content, a 2D inpainting algorithm is used to inpaint a particular point of view, and a 4D propagation algorithm then propagates the inpainted result to the whole 4D LF data. Based on this idea of 4D inpainting propagation, some 4D LF inpainting techniques have recently been proposed in the literature. This dissertation therefore proposes the design and implementation of an LF inpainting application that can be used by anyone who wishes to work in this field and/or to explore and edit LFs.
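    A minimal sketch of the "inpaint one view, then propagate" idea the abstract describes, assuming a 4D LF stored as a (u, v, H, W) array of greyscale views and a single constant disparity for the hole region; real propagation algorithms use per-pixel disparity or depth. The 2D step here is OpenCV's Telea inpainting, standing in for whichever 2D algorithm an application would actually use.

```python
import numpy as np
import cv2  # OpenCV, used only for the 2D inpainting step

def inpaint_and_propagate(lf, mask, disparity=1.0):
    """lf: (U, V, H, W) uint8 light field; mask: (H, W) uint8 hole mask."""
    U, V, H, W = lf.shape
    cu, cv_ = U // 2, V // 2
    # 1) 2D inpainting of the central view.
    center = cv2.inpaint(lf[cu, cv_], mask, 3, cv2.INPAINT_TELEA)
    out = lf.copy()
    out[cu, cv_] = center
    # 2) Propagate inpainted pixels to every other view, shifting them by
    #    the angular offset times the assumed (constant) disparity.
    ys, xs = np.nonzero(mask)
    for u in range(U):
        for v in range(V):
            if (u, v) == (cu, cv_):
                continue
            sy = int(round((u - cu) * disparity))
            sx = int(round((v - cv_) * disparity))
            ty, tx = ys + sy, xs + sx
            ok = (ty >= 0) & (ty < H) & (tx >= 0) & (tx < W)
            out[u, v][ty[ok], tx[ok]] = center[ys[ok], xs[ok]]
    return out
```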

    The Iray Light Transport Simulation and Rendering System

    Full text link
    While ray tracing has become increasingly common and path tracing is by now well understood, a major challenge lies in crafting an easy-to-use and efficient system implementing these technologies. Following a purely physically-based paradigm while still allowing for artistic workflows, the Iray light transport simulation and rendering system renders complex scenes at the push of a button and thus makes accurate light transport simulation widely available. In this document we discuss the challenges and implementation choices that follow from our primary design decisions, demonstrating that such a rendering system can be made a practical, scalable, and efficient real-world application; it has been adopted by various companies across many fields and is in use by many industry professionals today.
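    A minimal sketch of the Monte Carlo estimator at the core of any path tracer: outgoing radiance of a Lambertian surface estimated with cosine-weighted hemisphere sampling. The constant environment radiance is an assumption chosen so the estimate has an obvious analytic answer; it illustrates the technique in general, not Iray's far more elaborate implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_sample_hemisphere(n):
    """Sample n directions about +z with pdf = cos(theta) / pi."""
    u1, u2 = rng.random(n), rng.random(n)
    r, phi = np.sqrt(u1), 2 * np.pi * u2
    return np.c_[r * np.cos(phi), r * np.sin(phi), np.sqrt(1 - u1)]

def estimate_radiance(albedo=0.5, sky_radiance=1.0, n=100_000):
    d = cosine_sample_hemisphere(n)
    # Lambertian BRDF = albedo / pi; with pdf = cos(theta) / pi the cosine
    # and pi cancel, leaving an average of albedo * incoming radiance.
    li = np.full(n, sky_radiance)   # constant environment light (assumed)
    return np.mean(albedo * li)

# Analytic answer for a constant sky is albedo * sky_radiance = 0.5.
print(estimate_radiance())
```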

    Tailored displays to compensate for visual aberrations

    Get PDF
    We introduce tailored displays that enhance visual acuity by decomposing virtual objects and placing the resulting anisotropic pieces into the subject's focal range. The goal is to free the viewer from needing wearable optical corrections when looking at displays. Our tailoring process uses aberration and scattering maps to account for refractive errors and cataracts. It splits an object's light field into multiple instances that are each in focus for a given eye sub-aperture. Their integration onto the retina leads to a quality improvement of perceived images when the display is observed with naked eyes. The use of multiple depths to render each point of focus on the retina creates multi-focus, multi-depth displays. User evaluations and validation with modified camera optics are performed. We propose tailored displays for daily tasks where using eyeglasses is unfeasible or inconvenient (e.g., on head-mounted displays and e-readers, as well as for games); when a multi-focus function is required but unavailable (e.g., driving for farsighted individuals, or checking a portable device while doing physical activities); or for correcting the visual distortions produced by high-order aberrations that eyeglasses cannot correct. Funding: Conselho Nacional de Pesquisas (Brazil) (CNPq fellowships 142563/2008-0, 308936/2010-8, 480485/2010-0); National Science Foundation (U.S.) (NSF CNS 0913875); Alfred P. Sloan Foundation (fellowship); United States Defense Advanced Research Projects Agency (DARPA Young Faculty Award); Massachusetts Institute of Technology, Media Laboratory (Consortium Members).
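    A minimal sketch of the thin-lens bookkeeping behind placing content into a viewer's focal range: for a simple spherical refractive error, the display must present its virtual image at the eye's far point to be seen sharply without accommodation. This considers only defocus, not the scattering maps or high-order aberrations the paper handles, and the example numbers are illustrative.

```python
def far_point_m(spherical_error_D):
    """Far point of the unaccommodated eye, in metres: where a tailored
    display should place its virtual image so a viewer with the given
    spherical error (in diopters) sees it in focus without eyeglasses."""
    if spherical_error_D >= 0:
        # Emmetropes focus at infinity; hyperopes have a virtual far point
        # "beyond" infinity (ignored in this defocus-only sketch).
        return float("inf")
    return -1.0 / spherical_error_D   # myopes: finite distance in front

# A -2 D myope sees content sharply when it appears 0.5 m away.
print(far_point_m(-2.0))
```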

    The LOFAR Transients Pipeline

    Get PDF
    Current and future astronomical survey facilities provide a remarkably rich opportunity for transient astronomy, combining unprecedented fields of view with high sensitivity and the ability to access previously unexplored wavelength regimes. This is particularly true of LOFAR, a recently commissioned, low-frequency radio interferometer based in the Netherlands and with stations across Europe. The identification of and response to transients is one of LOFAR's key science goals. However, the large data volumes which LOFAR produces, combined with the scientific requirement for rapid response, make automation essential. To support this, we have developed the LOFAR Transients Pipeline, or TraP. The TraP ingests multi-frequency image data from LOFAR or other instruments and searches it for transients and variables, providing automatic alerts of significant detections and populating a lightcurve database for further analysis by astronomers. Here, we discuss the scientific goals of the TraP and how it has been designed to meet them. We describe its implementation, including both the algorithms adopted to maximize performance and the development methodology used to ensure it is robust and reliable, particularly in the presence of artefacts typical of radio astronomy imaging. Finally, we report on a series of tests of the pipeline carried out using simulated LOFAR observations with a known population of transients. Comment: 30 pages, 11 figures; accepted for publication in Astronomy & Computing; code at https://github.com/transientskp/tk
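    A minimal sketch of two per-source lightcurve variability statistics of the kind such a pipeline computes when flagging transients and variables: a modulation index V (flux scatter relative to the mean) and a weighted reduced chi-square eta (significance of deviations from a constant flux). The formulas follow standard usage and are written here as an illustration under those definitions, not as TraP's exact implementation.

```python
import numpy as np

def variability_metrics(flux, flux_err):
    """flux, flux_err: 1D arrays of flux measurements and their errors."""
    flux, flux_err = np.asarray(flux, float), np.asarray(flux_err, float)
    n = flux.size
    w = 1.0 / flux_err**2                               # inverse-variance weights
    wmean = np.sum(w * flux) / np.sum(w)                # weighted mean flux
    V = flux.std(ddof=1) / flux.mean()                  # modulation index
    eta = np.sum(w * (flux - wmean) ** 2) / (n - 1)     # reduced chi-square
    return V, eta

# A steady source vs. a flaring one, both with 10% errors.
steady = np.array([1.00, 1.02, 0.98, 1.01, 0.99])
flare = np.array([1.0, 1.0, 3.0, 1.0, 1.0])
err = 0.1 * np.ones(5)
print(variability_metrics(steady, err))   # small V, eta of order 1
print(variability_metrics(flare, err))    # large V, eta >> 1
```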

    An object-based approach to image/video-based synthesis and processing for 3-D and multiview televisions

    Get PDF
    This paper proposes an object-based approach to a class of dynamic image-based representations called "plenoptic videos," in which the plenoptic video sequences are segmented into image-based rendering (IBR) objects, each with its own image sequence, depth map, and other relevant information such as shape and alpha information. This allows desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects to be supported. Moreover, the rendering quality in scenes with large depth variations can also be improved considerably. A portable capturing system consisting of two linear camera arrays was developed to verify the proposed approach. An important step in the object-based approach is to segment the objects in the video streams into layers or IBR objects. To reduce the time needed to segment plenoptic videos under the semiautomatic technique, a new object tracking method based on the level-set method is proposed. Because of possible segmentation errors around object boundaries, natural matting with a Bayesian approach is also incorporated into our system. Furthermore, extensions of conventional image processing algorithms to these IBR objects are studied and illustrated with examples. Experimental results illustrate the efficiency of the tracking, matting, rendering, and processing algorithms under the proposed object-based framework. © 2009 IEEE.
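    A minimal sketch of how a matted IBR object is used at render time: the alpha map produced by the matting step blends the extracted foreground over any background via the standard compositing equation C = alpha * F + (1 - alpha) * B. The array shapes and example data are assumptions for illustration; fractional alpha at the boundary is what hides the small segmentation errors the matting step is there to absorb.

```python
import numpy as np

def composite(fg, alpha, bg):
    """fg, bg: (H, W, 3) float images in [0, 1]; alpha: (H, W) in [0, 1]."""
    a = alpha[..., None]              # broadcast alpha over the channels
    return a * fg + (1.0 - a) * bg

# A red IBR object over a blue background, with a soft left-to-right edge.
fg = np.ones((4, 4, 3)) * [1.0, 0.0, 0.0]
bg = np.ones((4, 4, 3)) * [0.0, 0.0, 1.0]
alpha = np.linspace(0.0, 1.0, 4)[None, :].repeat(4, axis=0)
print(composite(fg, alpha, bg)[0])    # blends smoothly from blue to red
```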

    Absolute depth using low-cost light field cameras

    Get PDF
    Digital cameras are increasingly used for measurement tasks within engineering scenarios, often as part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the accompanying environments. But for some applications these 2D results are not sufficient, specifically applications that require Z-dimensional data (depth data) along with the X and Y dimensional data. New camera system designs have previously been developed by integrating multiple cameras to provide 3D data, ranging from 2-camera photogrammetry to multiple-camera stereo systems. Many earlier attempts to record 3D data on 2D sensors have been made, and likewise many research groups around the world are currently working on camera technology, though from different perspectives: computer vision, algorithm development, metrology, etc. Plenoptic or lightfield camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Lightfield cameras place an additional Micro Lens Array (MLA) in front of the imaging sensor to create multiple viewpoints of the same scene and allow encoding of depth information. A small number of companies have explored the potential of lightfield cameras, but these efforts have mostly been aimed at domestic consumer photography, only ever recording scenes as relative-scale greyscale images. This research considers the potential for lightfield cameras to be used for world-scene metrology applications, specifically to record absolute coordinate data. Particular attention has been paid to a range of low-cost lightfield cameras in order to: understand the functional/behavioural characteristics of the optics; identify potential needs for optical and/or algorithm development; define sensitivity, repeatability, and accuracy characteristics and limiting thresholds of use; and allow quantified 3D absolute-scale coordinate data to be extracted from the images. The novel outputs of this work are: an analysis of lightfield camera system sensitivity, leading to the definition of Active Zones (linear data generation, good data) and Inactive Zones (non-linear data generation, poor data); the development of bespoke calibration algorithms that remove radial/tangential distortion from data captured with any MLA-based camera; and a camera-independent algorithm that allows 3D coordinate data to be delivered in absolute units within a well-defined measurable range for a given camera.
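    A minimal sketch of two steps this pipeline implies: removing radial/tangential lens distortion (via the standard Brown-Conrady model as implemented in OpenCV, standing in for the dissertation's bespoke MLA-aware calibration) and converting disparity between sub-aperture views into absolute depth with Z = f * B / d. All camera parameters below are placeholder values, not real calibrations; in particular the sub-millimetre baseline merely reflects the tiny spacing of MLA viewpoints in a lightfield camera.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],     # placeholder intrinsics (fx, fy,
              [0.0, 800.0, 240.0],     # cx, cy in pixels)
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.03, 0.001, -0.0005, 0.0])   # k1 k2 p1 p2 k3

def undistort(img):
    """Remove radial/tangential distortion from a raw sub-aperture image."""
    return cv2.undistort(img, K, dist)

def depth_m(disparity_px, focal_px=800.0, baseline_m=0.0005):
    """Absolute depth from disparity between neighbouring sub-aperture
    views: Z = f * B / d, with placeholder focal length and baseline."""
    return focal_px * baseline_m / disparity_px

img = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
print(undistort(img).shape, depth_m(2.0))   # 0.2 m at 2 px disparity
```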