2,798 research outputs found

    The Ultraviolet Imaging Telescope: Instrument and Data Characteristics

    The Ultraviolet Imaging Telescope (UIT) was flown as part of the Astro observatory on the Space Shuttle Columbia in December 1990 and again on the Space Shuttle Endeavour in March 1995. Ultraviolet (1200-3300 Angstroms) images of a variety of astronomical objects, with a 40 arcmin field of view and a resolution of about 3 arcsec, were recorded on photographic film. The data recorded during the first flight are available to the astronomical community through the National Space Science Data Center (NSSDC); the data recorded during the second flight will soon be available as well. This paper discusses in detail the design, operation, data reduction, and calibration of UIT, providing users with the information needed to understand and use the data. It also provides guidelines for analyzing other astronomical imagery made with image intensifiers and photographic film. Comment: 44 pages, LaTeX, AAS preprint style and EPSF macros, accepted by PAS
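
    One step typical of reducing intensified photographic imagery of this kind is linearizing the film record: measured photographic density is converted to relative exposure through a sensitometrically calibrated characteristic (D-log E) curve. The sketch below illustrates that conversion by interpolation; the function name and inputs are illustrative assumptions, not UIT's published calibration.

```python
# Illustrative sketch (not UIT's actual calibration pipeline): convert
# measured photographic density to relative exposure by interpolating a
# measured characteristic (D-log E) curve, then undoing the logarithm.
import numpy as np

def density_to_relative_exposure(density, curve_density, curve_log_exposure):
    """density: array of film densities; curve_*: monotonically increasing
    calibration samples of the characteristic curve."""
    log_e = np.interp(density, curve_density, curve_log_exposure)
    return 10.0 ** log_e
```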

    Photo Repair and 3D Structure from Flatbed Scanners

    We introduce a technique that allows 3D information to be captured from a conventional flatbed scanner. The technique requires no hardware modification and allows untrained users to easily capture 3D datasets. Once captured, these datasets can be used for interactive relighting and enhancement of surface detail on physical objects. We have also found that the method can be used to scan and repair damaged photographs. Since the only 3D structure on these photographs will typically be surface tears and creases, our method provides an accurate procedure for automatically detecting these flaws without any user intervention. Once detected, automatic techniques, such as infilling and texture synthesis, can be leveraged to seamlessly repair such damaged areas. We first present a method that repairs damaged photographs with minimal user interaction and then show how similar results can be achieved with a fully automatic process.
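
    Although the paper's capture procedure is not reproduced here, the relighting it enables is in the spirit of classic Lambertian photometric stereo: several images of the same surface under different, known light directions are combined to recover per-pixel surface normals. A minimal sketch of that step, assuming the light directions are already calibrated (the function name and array shapes are illustrative):

```python
# Minimal Lambertian photometric stereo sketch (hypothetical setup):
# k scans of the same object under different, known light directions are
# stacked and solved per pixel for albedo-scaled surface normals.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (k, h, w) grayscale scans; light_dirs: (k, 3) unit vectors."""
    k, h, w = images.shape
    L = np.asarray(light_dirs, dtype=float)        # (k, 3)
    I = images.reshape(k, -1).astype(float)        # (k, h*w)
    # Least-squares solve L @ G = I, where G = albedo * normal per pixel.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```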

    Detection of dirt impairments from archived film sequences: survey and evaluations

    Film dirt is the most commonly encountered artifact in archive restoration applications. Since dirt usually appears as a temporally impulsive event, motion-compensated interframe processing is widely applied for its detection. However, motion-compensated prediction requires a high degree of complexity and can be unreliable when motion estimation fails. Consequently, many techniques using spatial or spatiotemporal filtering without motion compensation have also been proposed as alternatives. A comprehensive survey and evaluation of existing methods is presented, in which both qualitative and quantitative performance are compared in terms of accuracy, robustness, and complexity. After analyzing these algorithms and identifying their limitations, we conclude with guidance on choosing among these algorithms and with promising directions for future research.
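
    As a concrete illustration of the temporally impulsive behaviour these detectors exploit, the sketch below implements a simple spike test without motion compensation, in the spirit of the SDI-family detectors such surveys cover. The threshold value is an assumption for illustration, not one taken from the paper.

```python
# Minimal sketch of a temporal spike (dirt) detector without motion
# compensation: a pixel is flagged when it differs from both the previous
# and the next frame by more than a threshold, with the same sign.
import numpy as np

def spike_detect(prev_frame, cur_frame, next_frame, threshold=30):
    d_prev = cur_frame.astype(int) - prev_frame.astype(int)
    d_next = cur_frame.astype(int) - next_frame.astype(int)
    same_sign = np.sign(d_prev) == np.sign(d_next)
    impulsive = (np.abs(d_prev) > threshold) & (np.abs(d_next) > threshold)
    return same_sign & impulsive   # boolean dirt mask
```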

    Understanding and suppressing field emission using DC


    Electronic Photography at the NASA Langley Research Center

    An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device-dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8 bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8 bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Lookup tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.
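
    The standardized-output step described above amounts to remapping pixel values through a device-specific lookup table. A minimal sketch, assuming a simple gamma-style remapping of an 8-bit image; the gamma value is illustrative and not one of LaRC's device characterizations.

```python
# Minimal sketch: build an 8-bit lookup table for a gamma-style tone
# remapping and apply it to an image (gamma value is an assumption).
import numpy as np

def build_gamma_lut(gamma=2.2):
    levels = np.arange(256) / 255.0
    return np.round(255.0 * levels ** (1.0 / gamma)).astype(np.uint8)

def apply_lut(image_u8, lut):
    return lut[image_u8]           # per-pixel table lookup
```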

    Towards Automatic Blotch Detection for Film Restoration by Comparison of Spatio-Temporal Neighbours

    In this paper, a new method of blotch detection for digitised film sequences is proposed. Due to the aging of film stocks, their poor storage, and/or repeated viewing, it is estimated that approximately 50% of all films produced prior to 1950 have either been destroyed or rendered unwatchable [1,2]. To prevent their complete destruction, original film reels must be scanned into digital format; however, any defects such as blotches will be retained. By combining a variation of a linear-time contour-tracing technique with a simple temporal nearest-neighbour algorithm, a preliminary detection system has been created. Using component labelling of dirt and sparkle, the overall performance of the completed system, in terms of time and accuracy, will compare favourably to traditional motion-compensated detection methods. This small study (based on 13 film sequences) represents a significant first step towards automatic blotch detection.
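
    A minimal sketch of the general idea, combining connected-component labelling of candidate pixels with a temporal nearest-neighbour check against the adjacent frames; the thresholds and minimum region size are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: flag pixels that differ strongly from both neighbouring
# frames (temporally impulsive), label them as connected components, and
# keep only regions large enough to be plausible blotches.
import numpy as np
from scipy import ndimage

def detect_blotches(prev_frame, cur_frame, next_frame,
                    diff_thresh=25, min_area=4):
    d_prev = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    d_next = np.abs(cur_frame.astype(int) - next_frame.astype(int))
    candidates = (d_prev > diff_thresh) & (d_next > diff_thresh)
    labels, n = ndimage.label(candidates)
    mask = np.zeros_like(candidates)
    for region in range(1, n + 1):
        pixels = labels == region
        if pixels.sum() >= min_area:   # discard tiny, noise-like regions
            mask |= pixels
    return mask
```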

    Tailored for Real-World: A Whole Slide Image Classification System Validated on Uncurated Multi-Site Data Emulating the Prospective Pathology Workload.

    The standard-of-care diagnostic procedure for suspected skin cancer is microscopic examination of hematoxylin & eosin stained tissue by a pathologist. Areas of high inter-pathologist discordance and rising biopsy rates necessitate higher efficiency and diagnostic reproducibility. We present and validate a deep learning system which classifies digitized dermatopathology slides into 4 categories. The system is developed using 5,070 images from a single lab and tested on an uncurated set of 13,537 images from 3 test labs, using whole slide scanners manufactured by 3 different vendors. The system's use of deep-learning-based confidence scoring as a criterion for accepting a result yields an accuracy of up to 98% and makes it adoptable in a real-world setting. Without confidence scoring, the system achieved an accuracy of 78%. We anticipate that our deep learning system will serve as a foundation enabling faster diagnosis of skin cancer, identification of cases for specialist review, and targeted diagnostic classifications.
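
    A minimal sketch of confidence scoring used as an acceptance criterion, assuming the classifier exposes per-class logits; the 0.9 threshold and the deferral behaviour are illustrative assumptions rather than the published system's settings.

```python
# Minimal sketch: accept a prediction only when the top softmax probability
# exceeds a threshold; otherwise defer the slide for specialist review.
import numpy as np

def classify_with_confidence(logits, threshold=0.9):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    top = int(np.argmax(probs))
    if probs[top] >= threshold:
        return top, float(probs[top])      # confident: report the class
    return None, float(probs[top])         # defer to specialist review
```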