
    Near-Infrared Survey of the GOODS-North Field: Search for Luminous Galaxy Candidates at z ≳ 6.5

    We present a near-infrared (NIR; J and Ks) survey of the Great Observatories Origins Deep Survey-North (GOODS-N) field. The publicly available imaging data were obtained with the MOIRCS instrument on the 8.2 m Subaru telescope and the WIRCam instrument on the 3.6 m Canada-France-Hawaii Telescope (CFHT). These observations fill a serious wavelength gap in the GOODS-N data, namely the lack of deep NIR observations. We combine the Subaru/MOIRCS and CFHT/WIRCam archival data to generate deep J- and Ks-band images covering the full GOODS-N field (~169 sq. arcmin) to an AB magnitude limit of ~25 mag (3 sigma). Using the NIR data generated here, we applied z'-band dropout color selection criteria and identified two possible Lyman break galaxy (LBG) candidates at z ≳ 6.5 with J ≲ 24.5. The first candidate is a likely LBG at z ~ 6.5 based on a weak spectral feature tentatively identified as the Lyman-alpha line in a deep Keck/DEIMOS spectrum, while the second candidate is a possible LBG at z ~ 7 based on its photometric redshift. These z'-dropout objects, if confirmed, are among the brightest such candidates found so far. At z ≳ 6.5, their star formation rates are estimated at 100-200 solar masses per year. If they continue to form stars at this rate, they will assemble a stellar mass of ~5x10^10 solar masses after about 400 million years, becoming the progenitors of the massive galaxies observed at z ~ 5. We discuss the implications of the z'-band dropout candidates discovered here for constraining the bright end of the luminosity function and understanding the nature of high-redshift galaxies. Comment: ApJ in press, minor text/reference update
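
    The quoted stellar-mass build-up follows from simple arithmetic; a minimal sketch, assuming a constant star formation rate and ignoring mass loss (assumptions not stated beyond the abstract itself):

```python
# Back-of-the-envelope stellar mass assembled by constant star formation,
# using the numbers quoted in the abstract (100-200 Msun/yr for ~400 Myr).
for sfr in (100.0, 200.0):            # star formation rate [Msun / yr]
    t_yr = 400e6                      # duration [yr]
    mass = sfr * t_yr                 # assembled stellar mass [Msun]
    print(f"SFR = {sfr:.0f} Msun/yr -> M* ~ {mass:.1e} Msun after 400 Myr")
# 100 Msun/yr gives ~4e10 Msun and 200 Msun/yr gives ~8e10 Msun,
# bracketing the quoted ~5e10 Msun.
```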

    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blurring is a common drawback in many imaging applications that suffer from optical imperfections. Although numerous deconvolution methods blindly estimate the blur in either inclusive or exclusive form, they are practically challenging because of their high computational cost and low reconstruction quality. High accuracy and high speed are both prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before the images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expense, and increase perceptual image quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for two PSF models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. Comment: 15 pages, for publication in IEEE Transactions on Image Processing
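
    As a rough illustration of the one-shot idea described above, the sketch below assembles a kernel from even-derivative FIR taps and applies it in a single convolution pass. The weights, kernel size, and function names are illustrative assumptions, not the coefficients derived in the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def sharpening_kernel(alpha=0.8, beta=0.2):
    """Illustrative 1-D deblurring kernel built from even-derivative FIR taps:
    delta - alpha*d2 + beta*d4. The weights are placeholders, not the
    paper's derived coefficients."""
    delta = np.array([0., 0., 1., 0., 0.])
    d2    = np.array([0., 1., -2., 1., 0.])      # 2nd-derivative FIR filter
    d4    = np.array([1., -4., 6., -4., 1.])     # 4th-derivative FIR filter
    return delta - alpha * d2 + beta * d4

def deblur(image):
    """One-shot deblurring: convolve the blurry image with a separable 2-D
    kernel formed from the 1-D even-derivative combination."""
    k1d = sharpening_kernel()
    k2d = np.outer(k1d, k1d)                     # separable 2-D kernel
    return convolve(image.astype(float), k2d, mode="reflect")

blurry = np.random.rand(64, 64)                  # stand-in for a blurred image
restored = deblur(blurry)
```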

    Deshadowing of High Spatial Resolution Imagery Applied to Urban Area Detection

    Different built-up structures usually lead to large regions covered by shadows, causing partial or total loss of the information present in urban environments. In order to mitigate the presence of shadows while improving the discrimination of urban targets in multispectral images, this paper proposes an automated methodology for both the detection and the recovery of shadows. First, the image bands are preprocessed in order to highlight their most relevant parts. Second, a shadow detection procedure based on morphological filtering produces a shadow mask. Finally, the shadow-occluded areas are reconstructed with an image inpainting strategy. The experimental evaluation of our methodology was carried out on four study areas taken from a WorldView-2 (WV-2) satellite scene over the urban area of the city of São Paulo. The experiments demonstrate the high performance of the proposed shadow detection scheme, with an average overall accuracy of up to 92%. The pre-selected shadows were substantially recovered by our shadow removal strategy, as verified by visual inspection. Comparisons of the VrNIR-BI and VgNIR-BI spectral indices computed from the original and shadow-free images also attest to the substantial gain in recovering anthropic targets such as streets, roofs, and buildings initially affected by shadows.
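
    A heavily simplified sketch of the detect-then-inpaint pipeline using generic OpenCV operations; the threshold, structuring element, and inpainting method are placeholder assumptions, not the paper's actual preprocessing and reconstruction steps:

```python
import cv2
import numpy as np

def detect_and_remove_shadows(bgr, dark_thresh=60, kernel_size=7):
    """Hypothetical simplification: threshold dark pixels, clean the mask with
    morphological opening/closing, then fill the masked regions by inpainting.
    All parameter values are illustrative."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Candidate shadow pixels: low-intensity regions
    mask = (gray < dark_thresh).astype(np.uint8) * 255
    # Morphological filtering to remove speckle and close small holes
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, se)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, se)
    # Reconstruct shadow-occluded areas by inpainting (radius 5, Telea method)
    restored = cv2.inpaint(bgr, mask, 5, cv2.INPAINT_TELEA)
    return mask, restored
```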

    UV-to-FIR Analysis of Spitzer/IRAC Sources in the Extended Groth Strip I: Multi-wavelength Photometry and Spectral Energy Distributions

    We present an IRAC 3.6+4.5 micron selected catalog in the Extended Groth Strip (EGS) containing photometry from the ultraviolet to the far-infrared and stellar parameters derived from the analysis of the multi-wavelength data. In this paper, we describe the method used to build coherent spectral energy distributions (SEDs) for all the sources. In a companion paper, we analyze those SEDs to obtain robust estimates of stellar parameters such as photometric redshifts, stellar masses, and star formation rates. The catalog comprises 76,936 sources with [3.6] < 23.75 mag (the 85% completeness level of the IRAC survey in the EGS) over 0.48 square degrees. For approximately 16% of this sample, we are able to deconvolve the IRAC data to obtain robust fluxes for the multiple counterparts found in ground-based optical images. Typically, the SEDs of the IRAC sources in our catalog contain more than 15 photometric data points, spanning from the UV to the FIR. Approximately 95% and 90% of all IRAC sources are detected in the deepest optical and near-infrared bands, respectively. Only 10% of the sources have optical spectroscopy and spectroscopic redshift estimates. Almost 20% and 2% of the sources are detected by MIPS at 24 and 70 microns, respectively. We also cross-correlate our catalog with public X-ray and radio catalogs. Finally, we present the Rainbow Navigator public web interface designed to browse all the data products resulting from this work, including images, spectra, photometry, and stellar parameters. Comment: 28 pages, 12 figures, accepted for publication in ApJ. Access the Rainbow Database at: http://rainbowx.fis.ucm.e
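
    For readers applying the stated [3.6] < 23.75 mag cut to their own flux catalogs, a minimal sketch using the standard AB magnitude zero point; the function name and flux values below are hypothetical:

```python
import numpy as np

def ab_mag_from_flux_uJy(flux_uJy):
    """Standard AB magnitude from a flux density in micro-Jansky:
    m_AB = -2.5 log10(F / 3631 Jy) = 23.9 - 2.5 log10(F_uJy)."""
    return 23.9 - 2.5 * np.log10(flux_uJy)

# Hypothetical IRAC 3.6 micron fluxes for a handful of catalog sources (uJy)
flux_36 = np.array([5.0, 1.2, 0.3, 14.7])
mag_36 = ab_mag_from_flux_uJy(flux_36)
selected = mag_36 < 23.75          # the catalog's stated selection limit
print(mag_36.round(2), selected)
```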

    Coupled Deep Learning for Heterogeneous Face Recognition

    Heterogeneous face matching is a challenging problem in face recognition because of the large domain gap and the scarcity of paired images from different modalities during training. This paper proposes a coupled deep learning (CDL) approach to heterogeneous face matching. CDL seeks a shared feature space in which the heterogeneous face matching problem can be approximately treated as a homogeneous one. The objective function of CDL comprises two main parts. The first part contains a trace norm and a block-diagonal prior as relevance constraints, which not only cluster and correlate unpaired images from multiple modalities but also regularize the parameters to alleviate overfitting. An approximate variational formulation is introduced to handle the difficulty of optimizing the low-rank constraint directly. The second part contains a cross-modal ranking among triplets of domain-specific images, which maximizes the margin between different identities and augments the small number of training samples. An alternating minimization method is employed to iteratively update the parameters of CDL. Experimental results show that CDL achieves better performance on the challenging CASIA NIR-VIS 2.0 face recognition database, the IIIT-D Sketch database, the CUHK Face Sketch (CUFS) database, and the CUHK Face Sketch FERET (CUFSF) database, significantly outperforming state-of-the-art heterogeneous face recognition methods. Comment: AAAI 2018
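
    The cross-modal ranking part of the objective can be pictured as a triplet margin loss computed in the shared feature space; the sketch below is a generic version of that idea, not the exact CDL formulation:

```python
import torch
import torch.nn.functional as F

def cross_modal_triplet_loss(anchor_nir, positive_vis, negative_vis, margin=0.5):
    """Pull an NIR anchor toward the VIS embedding of the same identity and
    push it away from a VIS embedding of a different identity by at least
    `margin`. A generic triplet margin loss, used here only as illustration."""
    d_pos = F.pairwise_distance(anchor_nir, positive_vis)
    d_neg = F.pairwise_distance(anchor_nir, negative_vis)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy embeddings in the shared feature space (batch of 8, 128-dim)
a, p, n = (torch.randn(8, 128) for _ in range(3))
print(cross_modal_triplet_loss(a, p, n))
```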

    Bio-Inspired Multi-Spectral and Polarization Imaging Sensors for Image-Guided Surgery

    Image-guided surgery (IGS) can enhance cancer treatment by decreasing, and ideally eliminating, positive tumor margins and iatrogenic damage to healthy tissue. Current state-of-the-art near-infrared fluorescence imaging systems are bulky and costly, lack sensitivity under surgical illumination, and lack co-registration accuracy between multimodal images. As a result, an overwhelming majority of physicians still rely on their unaided eyes and palpation as the primary sensing modalities for distinguishing cancerous from healthy tissue. In my thesis, I have addressed these challenges in IGS by mimicking the visual systems of several animals to construct low-power, compact, and highly sensitive multi-spectral and color-polarization sensors. By monolithically integrating spectral tapetal and polarization filters with an array of vertically stacked photodetectors, I have realized single-chip multi-spectral imagers with 1000-fold higher sensitivity and 7-fold better spatial co-registration accuracy than clinical imaging systems in current use. These imaging sensors can simultaneously capture color, polarization, and multiple fluorophores for near-infrared fluorescence imaging. Preclinical and clinical data demonstrate seamless integration of these technologies into the surgical workflow while providing surgeons with real-time information on the location of cancerous tissue and sentinel lymph nodes, respectively. Owing to their low cost, these bio-inspired sensors will provide resource-limited hospitals with much-needed technology to enable more accurate value-based health care.

    Recovering Stellar Population Properties and Redshifts from Broad-Band Photometry of Simulated Galaxies: Lessons for SED Modeling

    We present a detailed analysis of our ability to determine the stellar masses, ages, reddening and extinction values, and star formation rates of high-redshift galaxies by modeling broad-band SEDs with stellar population synthesis. To do so, we computed synthetic optical-to-NIR SEDs for model galaxies taken from hydrodynamical merger simulations placed at redshifts 1.5 < z < 3. Viewed from different angles and during different evolutionary phases, the simulations represent a wide variety of galaxy types (disks, mergers, spheroids). We show that the simulated galaxies span a wide range in SEDs and colors, comparable to those of observed galaxies. In all star-forming phases, dust attenuation has a large effect on colors, SEDs, and fluxes. The broad-band SEDs were then fed to a standard SED modeling procedure and the resulting stellar population parameters were compared to their true values. Disk galaxies generally show a decent median correspondence between the true and estimated mass and age, but suffer from large uncertainties. During the merger itself, we find larger offsets (e.g., log M_recovered - log M_true = -0.13^{+0.10}_{-0.14}). E(B-V) values are generally recovered well, but the estimated total visual absorption Av is consistently too low, increasingly so for larger optical depths. Since the largest optical depths occur during the phases of most intense star formation, it is for the highest SFRs that we find the largest underestimates. The masses, ages, E(B-V), Av, and SFRs of merger remnants (spheroids) are very well reproduced. We discuss possible biases in SED modeling results caused by mismatches between the true and template star formation histories, the dust distribution, metallicity variations, and AGN contribution. Comment: Accepted for publication in the Astrophysical Journal, 24 pages, 19 figures
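
    The quoted offsets are medians with a 16th/84th-percentile scatter of the recovered-minus-true log masses; a minimal sketch of how such a statistic is computed (the input arrays here are synthetic placeholders, not the simulation outputs):

```python
import numpy as np

def mass_recovery_stats(log_m_true, log_m_recovered):
    """Median offset and 68% scatter of log(M_recovered) - log(M_true),
    the kind of statistic quoted above (e.g. -0.13 +0.10/-0.14 for mergers)."""
    d = log_m_recovered - log_m_true
    med = np.median(d)
    lo, hi = np.percentile(d, [16, 84])
    return med, med - lo, hi - med

rng = np.random.default_rng(0)
true_mass = rng.uniform(9.5, 11.5, 500)                       # log10(M*/Msun)
recovered = true_mass - 0.13 + 0.12 * rng.standard_normal(500)
print(mass_recovery_stats(true_mass, recovered))
```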

    The Multiwavelength Survey by Yale-Chile (MUSYC): Wide K-Band Imaging, Photometric Catalogs, Clustering, and Physical Properties of Galaxies at z ~ 2

    We present K-band imaging of two ~30' x 30' fields covered by the Multiwavelength Survey by Yale-Chile (MUSYC) Wide NIR Survey. The SDSS 1030+05 and Cast 1255 fields were imaged with the Infrared Side Port Imager (ISPI) on the 4 m Blanco telescope at the Cerro Tololo Inter-American Observatory (CTIO) to a 5 sigma point-source limiting depth of K ~ 20 (Vega). Combining these data with the MUSYC optical UBVRIz imaging, we created multiband K-selected source catalogs for both fields. These catalogs, together with the MUSYC K-band catalog of the Extended Chandra Deep Field-South (ECDF-S), were used to select K < 20 BzK galaxies over an area of 0.71 deg^2, the largest area ever surveyed for BzK galaxies. We present number counts, redshift distributions, and stellar masses for our sample of 3261 BzK galaxies (2502 star-forming [sBzK] and 759 passively evolving [pBzK]), as well as reddening and star formation rate estimates for the star-forming BzK systems. We also present two-point angular correlation functions and spatial correlation lengths for both sBzK and pBzK galaxies and show that previous estimates of the correlation function of these galaxies were affected by cosmic variance due to the small areas surveyed. We measure correlation lengths r_0 of 8.89 +/- 2.03 Mpc and 10.82 +/- 1.72 Mpc for sBzK and pBzK galaxies, respectively; this is the first reported measurement of the spatial correlation function of passive BzK galaxies. In the Lambda CDM scenario of galaxy formation, these correlation lengths at z ~ 2 translate into minimum masses of ~4 x 10^12 and ~9 x 10^12 M_sun for the dark matter halos hosting sBzK and pBzK galaxies, respectively. The clustering properties of the galaxies in our sample are consistent with their being the descendants of bright Lyman break galaxies at z ~ 3 and the progenitors of present-day >1L* galaxies.
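
    BzK classification itself is a simple color cut; the sketch below applies the standard Daddi et al. (2004) criteria, while the paper's magnitude limits and photometric corrections are not reproduced:

```python
import numpy as np

def classify_bzk(B, z, K):
    """Standard BzK selection on AB magnitudes: bzk = (z - K) - (B - z);
    star-forming (sBzK) if bzk >= -0.2, passive (pBzK) if bzk < -0.2 and
    z - K > 2.5, otherwise unclassified. Illustrative sketch only."""
    bzk = (z - K) - (B - z)
    sbzk = bzk >= -0.2
    pbzk = (bzk < -0.2) & ((z - K) > 2.5)
    return np.where(sbzk, "sBzK", np.where(pbzk, "pBzK", "other"))

# Toy AB magnitudes for three hypothetical sources
B = np.array([25.8, 27.5, 25.0])
z = np.array([24.0, 24.0, 23.0])
K = np.array([21.5, 21.0, 21.5])
print(classify_bzk(B, z, K))   # expected: sBzK, pBzK, other
```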

    Enhanced Detection of Artisanal Small-Scale Mining with Spectral and Textural Segmentation of Landsat Time Series

    Artisanal small-scale mines (ASMs) in the Amazon Rainforest are an important cause of deforestation, forest degradation, biodiversity loss, sedimentation in rivers, and mercury emissions. Satellite image data are widely used in environmental decision-making to monitor changes in the land surface, but ASMs are difficult to map from space: they are small, irregularly shaped, unevenly distributed, and spectrally confused with other land clearance types. To address this issue, we developed a reliable and efficient ASM detection method for the Tapajós River Basin of Brazil, an important gold mining region of the Amazon Rainforest. We enhanced detection in three key ways. First, we used the time-series segmentation (LandTrendr) Google Earth Engine (GEE) Application Programming Interface to map the pixel-wise trajectory of natural vegetation disturbance and recovery on an annual basis with a 2000 to 2019 Landsat image time series. Second, we segmented 26 textural features in addition to 5 spectral features to account for the high spatial heterogeneity of ASM pixels. Third, we trained and tested a Random Forest model to detect ASMs after eliminating irrelevant and redundant features with the Variable Selection Using Random Forests "ensemble of ensembles" technique. The out-of-bag error and overall accuracy of the final Random Forest were 3.73% and 92.6%, respectively, comparable to studies mapping large industrial mines with the normalized difference vegetation index (NDVI) and LandTrendr. The most important feature in our study was NDVI, followed by textural features in the near and shortwave infrared. Our work paves the way for future ASM regulation through large-area monitoring from space with the free and open-source GEE and operational satellites. Studies with sufficient computational resources can improve ASM monitoring with advanced sensors consisting of spectral narrow bands (Sentinel-2, Environmental Mapping and Analysis Program, PRecursore IperSpettrale della Missione Applicativa) and deep learning.
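
    The third step, a Random Forest evaluated with its out-of-bag error, can be sketched with scikit-learn; the feature table below is synthetic and merely stands in for the LandTrendr-derived spectral and textural features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the per-pixel/segment feature table described above:
# 5 spectral (e.g. NDVI) plus 26 textural features, labels 1 = ASM, 0 = other.
# Real features would come from LandTrendr segmentation in Google Earth Engine.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 31))
y = (X[:, 0] + 0.5 * X[:, 7] + rng.normal(scale=0.8, size=2000) > 1).astype(int)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB error:", 1 - rf.oob_score_)                 # paper reports ~3.7%
print("top features:", np.argsort(rf.feature_importances_)[::-1][:5])
```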