
    The lens parallax method: determining redshifts of faint blue galaxies through gravitational lensing

    We propose a new technique, which we call the lens parallax method, to determine simultaneously the redshift distribution of the faint blue galaxies and the mass distributions of foreground clusters of galaxies. The method is based on gravitational lensing and makes use of the following: (1) the amplitude of lensing-induced distortions of background galaxies increases with redshift; (2) the surface brightnesses of galaxies decrease steeply with redshift. The distortions of galaxy images due to lensing are thus expected to be inversely correlated with surface brightness, allowing us to obtain relative distances to galaxies as a function of surface brightness. If the redshifts of the brightest galaxies are measured, then the relative distance scale can be converted to mean galaxy redshifts as a function of surface brightness. Further, by comparing the angular sizes of lensed galaxies with those of similar galaxies in empty control fields, it is possible to break the so-called mass sheet degeneracy inherent to cluster mass reconstruction techniques which are based purely on image ellipticities. This allows an unambiguous determination of the surface density of a lensing cluster. We describe an iterative algorithm based on these ideas and present numerical simulations which show that the proposed techniques are feasible with a sample of ~ 10 rich clusters at moderate redshifts ~ 0.3-0.4 and an equal number of control fields. The numerical tests show that the method can be used to determine the redshifts of galaxies with an accuracy of dz ~ 0.1-0.2 at z ~ 1-1.7, and to measure the masses of lensing clusters to about 5% accuracy.
    Comment: 31 pages, uuencoded compressed postscript file containing 10 figures, to be published in the Sep. 20 issue of Ap
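    The first ingredient of the method, distortion amplitude growing with source redshift, comes from the lensing efficiency D_ls/D_s (the ratio of lens-to-source and observer-to-source angular diameter distances). A minimal sketch of this scaling, assuming a flat Einstein-de Sitter cosmology for simplicity (not necessarily the cosmology adopted in the paper):

    ```python
    # Lensing efficiency D_ls / D_s versus source redshift, illustrating why
    # lensing-induced distortions grow with redshift.
    # Assumption: flat Einstein-de Sitter universe, distances in units of c/H0.

    def comoving_distance(z):
        """Comoving distance in an Einstein-de Sitter universe (units of c/H0)."""
        return 2.0 * (1.0 - (1.0 + z) ** -0.5)

    def lensing_efficiency(z_lens, z_source):
        """D_ls / D_s for a flat universe; zero if the source is in front of the lens."""
        if z_source <= z_lens:
            return 0.0
        # In a flat universe, the (1+z_source) factors cancel in the ratio.
        return 1.0 - comoving_distance(z_lens) / comoving_distance(z_source)

    z_lens = 0.35  # a moderate-redshift cluster, as in the abstract
    for z_s in (0.5, 1.0, 1.7, 3.0):
        print(f"z_s = {z_s}: efficiency = {lensing_efficiency(z_lens, z_s):.3f}")
    ```

    The monotonic rise of this ratio with source redshift is what lets the method trade distortion amplitude for a relative distance estimate.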

    Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition

    Problems encountered in testing onboard signal-processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation, and the transfer and storage of a large quantity of image data at very high speed. The high-resolution, high-speed preprocessing of LANDSAT-D imagery is also addressed.
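    Radiometric correction of scanner data is conventionally a per-detector linear calibration, radiance = gain × DN + offset. A generic sketch of that step, not the IAS prototype's actual algorithm, with made-up calibration values:

    ```python
    import numpy as np

    # Generic per-detector radiometric correction: radiance = gain * DN + offset.
    # A textbook illustration only; the gains and offsets below are invented.

    def radiometric_correct(dn, gains, offsets):
        """Apply per-detector (per-row) gain and offset to raw digital numbers."""
        return gains[:, None] * dn + offsets[:, None]

    raw = np.array([[10.0, 20.0],
                    [30.0, 40.0]])          # 2 detectors x 2 samples
    gains = np.array([0.5, 2.0])            # hypothetical calibration gains
    offsets = np.array([1.0, -3.0])         # hypothetical calibration offsets
    print(radiometric_correct(raw, gains, offsets))
    ```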

    Deep-sea image processing

    High-resolution seafloor mapping often requires optical methods of sensing to confirm interpretations made from sonar data. Optical digital imagery of seafloor sites can now provide very high resolution and also provides additional cues, such as color information for sediments, biota, and diverse rock types. During cruise AT11-7 of the Woods Hole Oceanographic Institution (WHOI) vessel R/V Atlantis (February 2004, East Pacific Rise), visual imagery was acquired from three sources: (1) a digital still down-looking camera mounted on the submersible Alvin, (2) observer-operated 1- and 3-chip video cameras with tilt and pan capabilities mounted on the front of Alvin, and (3) a digital still camera on the WHOI TowCam (Fornari, 2003). Imagery from the first source collected on a previous cruise (AT7-13) to the Galapagos Rift at 86°W was successfully processed and mosaicked post-cruise, resulting in a single image covering an area of about 2000 sq. m at a resolution of 3 mm per pixel (Rzhanov et al., 2003). This paper addresses the issues of the optimal acquisition of visual imagery in deep-sea conditions, and requirements for on-board processing. Shipboard processing of digital imagery allows for reviewing collected imagery immediately after the dive, evaluating its importance and optimizing acquisition parameters, and augmenting acquisition of data over specific sites on subsequent dives. Images from the DeepSea Power and Light (DSPL) digital camera offer the best resolution (3.3 megapixels) and are taken at an interval of 10 seconds (determined by the strobe's recharge rate). This makes the images suitable for mosaicking only when Alvin moves slowly (≪ 1/4 kt), which is not always possible on time-critical missions. The video cameras provided a source of imagery more suitable for mosaicking, despite their inferior resolution. We discuss the required pre-processing and image-enhancement techniques and their influence on the interpretation of mosaic content.
    An algorithm for determining camera tilt parameters from the acquired imagery is proposed, and its robustness conditions are discussed.
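    Mosaicking overlapping frames rests on estimating a planar homography between image pairs. A generic Direct Linear Transform (DLT) sketch of that building block, not the tilt-recovery algorithm the paper proposes:

    ```python
    import numpy as np

    # Direct Linear Transform (DLT): estimate the 3x3 homography mapping
    # matched points (x, y) in one frame to (u, v) in an overlapping frame.
    # A generic mosaicking primitive; illustration only.

    def estimate_homography(src, dst):
        """src, dst: (N, 2) arrays of matched points, N >= 4."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        # The homography (flattened) spans the null space of this matrix;
        # take the right singular vector with the smallest singular value.
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        h = vt[-1].reshape(3, 3)
        return h / h[2, 2]  # fix the arbitrary overall scale

    def apply_homography(h, pts):
        """Map (N, 2) points through homography h (homogeneous divide)."""
        pts_h = np.c_[pts, np.ones(len(pts))] @ h.T
        return pts_h[:, :2] / pts_h[:, 2:3]
    ```

    With four or more well-spread correspondences from feature matching, the recovered homography lets each new frame be warped into the mosaic's reference plane.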

    Data compression using adaptive transform coding. Appendix 1: Item 1

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable block-size transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed within which these coders can be studied. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Upper and lower bounds for the bit-allocation distortion-rate function are derived. An attainable distortion-rate function is developed for a particular scalar-quantizer mixing method that can be used to code transform coefficients at any rate.
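    For context, the classic high-resolution bit-allocation result assigns each transform coefficient bits in proportion to the log of its variance relative to the geometric mean. A sketch of that textbook baseline (the result the dissertation refines, not its own algorithm; the variances below are illustrative):

    ```python
    import math

    # Classic high-resolution bit allocation for transform coefficients:
    #   b_i = R + 0.5 * log2(var_i / geometric_mean(vars))
    # where R is the average rate per coefficient. Negative allocations can
    # occur for very small variances and would be clipped in practice.

    def allocate_bits(variances, avg_rate):
        n = len(variances)
        log_gm = sum(math.log2(v) for v in variances) / n  # log2 of geometric mean
        return [avg_rate + 0.5 * (math.log2(v) - log_gm) for v in variances]

    variances = [100.0, 25.0, 4.0, 1.0]   # hypothetical coefficient variances
    bits = allocate_bits(variances, avg_rate=2.0)
    print([round(b, 2) for b in bits])
    ```

    Note how the total budget (n × avg_rate) is preserved while high-variance coefficients receive proportionally more bits, which is the behavior a variable block-size coder exploits per region.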

    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided
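    The vector-space view the survey builds on treats tristimulus values as a linear map applied to linear (not gamma-encoded) RGB. A minimal sketch using the standard sRGB-to-CIE-XYZ matrix for the D65 white point:

    ```python
    import numpy as np

    # Color as linear algebra: CIE XYZ tristimulus values are a 3x3 linear
    # transform of linear sRGB. This is the standard D65 sRGB -> XYZ matrix.

    SRGB_TO_XYZ = np.array([
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ])

    def linear_srgb_to_xyz(rgb):
        """Map linear sRGB (gamma already removed) to CIE XYZ."""
        return SRGB_TO_XYZ @ np.asarray(rgb, dtype=float)

    # White (1, 1, 1) maps to the D65 white point, with luminance Y = 1.
    print(linear_srgb_to_xyz([1.0, 1.0, 1.0]))
    ```

    Device models for recording and reproduction systems extend this same pattern: each device is characterized by its own linear transform plus a nonlinearity.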

    Guidance for benthic habitat mapping: an aerial photographic approach

    This document, Guidance for Benthic Habitat Mapping: An Aerial Photographic Approach, describes proven technology that can be applied in an operational manner by state-level scientists and resource managers. This information is based on the experience gained by NOAA Coastal Services Center staff and state-level cooperators in the production of a series of benthic habitat data sets in Delaware, Florida, Maine, Massachusetts, New York, Rhode Island, the Virgin Islands, and Washington, as well as during Center-sponsored workshops on coral remote sensing and seagrass and aquatic habitat assessment. (PDF contains 39 pages) The original benthic habitat document, NOAA Coastal Change Analysis Program (C-CAP): Guidance for Regional Implementation (Dobson et al.), was published by the Department of Commerce in 1995. That document summarized procedures that were to be used by scientists throughout the United States to develop consistent and reliable coastal land cover and benthic habitat information. Advances in technology and new methodologies for generating these data created the need for this updated report, which builds upon the foundation of its predecessor