Multilayer Complex Network Descriptors for Color-Texture Characterization
A new method based on complex networks is proposed for color-texture
analysis. The proposal consists of modeling the image as a multilayer complex
network where each color channel is a layer, and each pixel (in each color
channel) is represented as a network vertex. The network dynamic evolution is
accessed using a set of modeling parameters (radii and thresholds), and new
characterization techniques are introduced to capture information on within-
and between-channel spatial interactions. An automatic and adaptive
approach for threshold selection is also proposed. We conduct classification
experiments on 5 well-known datasets: Vistex, Usptex, Outex13, CUReT, and MBT.
Results among various literature methods are compared, including deep
convolutional neural networks with pre-trained architectures. The proposed
method presented the highest overall performance across the 5 datasets, with
97.7% mean accuracy against 97.0% achieved by the ResNet convolutional neural
network with 50 layers.
Comment: 20 pages, 7 figures and 4 tables
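The modeling step described in the abstract (each color channel a layer, each pixel a vertex, edges governed by a radius and a threshold) can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation; the function name, the specific distance rule, and the intensity-difference criterion are assumptions for illustration only.

```python
import numpy as np

def multilayer_network(image, radius=2, threshold=0.1):
    """Illustrative sketch: model an RGB image as a 3-layer network.

    Each pixel in each channel is a vertex. Two vertices are linked when
    their pixel coordinates lie within `radius` (Euclidean) and their
    normalized intensity difference is at most `threshold`. Both
    within-channel (li == lj) and between-channel (li != lj) edges are
    produced. Returns edges as (layer_i, y_i, x_i, layer_j, y_j, x_j).
    """
    img = image.astype(float) / 255.0
    h, w, c = img.shape
    # Neighborhood offsets inside the radius, excluding the pixel itself.
    offs = [(dy, dx)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if (dy, dx) != (0, 0) and dy * dy + dx * dx <= radius * radius]
    edges = []
    for li in range(c):            # source layer (color channel)
        for lj in range(c):        # target layer: within- and between-channel
            for y in range(h):
                for x in range(w):
                    for dy, dx in offs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            if abs(img[y, x, li] - img[ny, nx, lj]) <= threshold:
                                edges.append((li, y, x, lj, ny, nx))
    return edges
```

Varying `radius` and `threshold` yields the family of networks whose evolving topology the descriptors would then summarize.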
Exploring Human Vision Driven Features for Pedestrian Detection
Motivated by the center-surround mechanism in the human visual attention
system, we propose to use average contrast maps for the challenge of pedestrian
detection in street scenes due to the observation that pedestrians indeed
exhibit discriminative contrast texture. Our main contributions are first to
design a local, statistical multi-channel descriptor in order to incorporate
both color and gradient information. Second, we introduce a multi-direction and
multi-scale contrast scheme based on grid-cells in order to integrate
expressive local variations. Contributing to the issue of selecting most
discriminative features for assessing and classification, we perform extensive
comparisons w.r.t. statistical descriptors, contrast measurements, and scale
structures. This way, we obtain reasonable results under various
configurations. Empirical findings from applying our optimized detector on the
INRIA and Caltech pedestrian datasets show that our features yield
state-of-the-art performance in pedestrian detection.
Comment: Accepted for publication in IEEE Transactions on Circuits and Systems
for Video Technology (TCSVT)
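A center-surround average-contrast map over grid cells, as motivated in the abstract, can be sketched roughly as below. This is a simplified stand-in for the paper's multi-direction, multi-scale scheme: the function name, the single-scale grid, and the use of the mean of the 8 surrounding cells are assumptions for illustration.

```python
import numpy as np

def average_contrast_map(gray, cell=8):
    """Illustrative center-surround contrast over grid cells.

    Partition a grayscale image into `cell` x `cell` blocks, compute each
    block's mean intensity, and define a cell's contrast as the absolute
    difference between its mean and the mean of its in-bounds neighbor
    cells (the "surround").
    """
    h, w = gray.shape
    gh, gw = h // cell, w // cell
    # Per-cell mean intensities (crop to a whole number of cells).
    means = gray[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell).mean(axis=(1, 3))
    contrast = np.zeros_like(means)
    for i in range(gh):
        for j in range(gw):
            surround = [means[i + di, j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0)
                        and 0 <= i + di < gh and 0 <= j + dj < gw]
            contrast[i, j] = abs(means[i, j] - np.mean(surround))
    return contrast
```

Running this at several cell sizes and on several channels would approximate the multi-scale, multi-channel setting the abstract describes.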
A Galaxy Photometric Redshift Catalog for the Sloan Digital Sky Survey Data Release 6
We present and describe a catalog of galaxy photometric redshifts (photo-z's)
for the Sloan Digital Sky Survey (SDSS) Data Release 6 (DR6). We use the
Artificial Neural Network (ANN) technique to calculate photo-z's and the
Nearest Neighbor Error (NNE) method to estimate photo-z errors for ~ 77 million
objects classified as galaxies in DR6 with r < 22. The photo-z and photo-z
error estimators are trained and validated on a sample of ~ 640,000 galaxies
that have SDSS photometry and spectroscopic redshifts measured by SDSS, 2SLAQ,
CFRS, CNOC2, TKRS, DEEP, and DEEP2. For the two best ANN methods we have tried,
we find that 68% of the galaxies in the validation set have a photo-z error
smaller than sigma_68 = 0.021 or 0.024. After presenting our results and
quality tests, we provide a short guide for users accessing the public data.
Comment: 16 pages, 12 figures
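The sigma_68 statistic quoted above can be computed from a validation set as the 68th percentile of the absolute photo-z errors. A minimal sketch (the function name is illustrative, and this simple percentile is a common proxy; the paper may define sigma_68 slightly differently):

```python
import numpy as np

def sigma68(z_phot, z_spec):
    """68th percentile of |z_phot - z_spec|: the error bound that
    68% of validation galaxies fall below."""
    err = np.abs(np.asarray(z_phot) - np.asarray(z_spec))
    return np.percentile(err, 68.0)
```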
Hybrid image representation methods for automatic image annotation: a survey
In most automatic image annotation systems, images are represented with
low-level features using either global methods or local methods. In global
methods, the entire image is used as a unit. Local methods divide images into
blocks, where fixed-size sub-image blocks are adopted as sub-units, or into
regions, using segmented regions as sub-units. In contrast to typical
automatic image annotation methods that use either global or local features
exclusively, several recent methods incorporate both kinds of information, on
the premise that combining the two levels of features is beneficial for
annotating images. In this paper, we provide a survey of automatic image
annotation techniques from one aspect: feature extraction. To complement
existing surveys in the literature, we focus on emerging hybrid methods that
combine both global and local features for image representation
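The global-plus-local combination the survey focuses on can be illustrated with a toy hybrid descriptor: a whole-image histogram (global) concatenated with per-block histograms (local). The function name, histogram features, and 2x2 block grid are hypothetical choices, not any specific surveyed method.

```python
import numpy as np

def hybrid_features(gray, bins=16, grid=2):
    """Toy hybrid representation: one global intensity histogram plus
    one histogram per block of a `grid` x `grid` partition, concatenated."""
    # Global part: histogram over the entire image.
    g_hist, _ = np.histogram(gray, bins=bins, range=(0, 256), density=True)
    h, w = gray.shape
    bh, bw = h // grid, w // grid
    # Local part: one histogram per fixed-size sub-image block.
    local = []
    for i in range(grid):
        for j in range(grid):
            block = gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            l_hist, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
            local.append(l_hist)
    return np.concatenate([g_hist] + local)
```

The resulting vector has `bins * (1 + grid**2)` entries, capturing both whole-image statistics and their spatial layout.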
LEGUS and Halpha-LEGUS Observations of Star Clusters in NGC 4449: Improved Ages and the Fraction of Light in Clusters as a Function of Age
We present a new catalog and results for the cluster system of the starburst
galaxy NGC 4449 based on multi-band imaging observations taken as part of the
LEGUS and Halpha-LEGUS surveys. We improve the spectral energy fitting method
used to estimate cluster ages and find that the results, particularly for older
clusters, are in better agreement with those from spectroscopy. The inclusion
of Halpha measurements, the role of stochasticity for low mass clusters, the
assumptions about reddening, and the choices of SSP model and metallicity all
have important impacts on the age-dating of clusters. A comparison with ages
derived from stellar color-magnitude diagrams for partially resolved clusters
shows reasonable agreement, but large scatter in some cases. The fraction of
light found in clusters relative to the total light (i.e., T_L) in the U, B,
and V filters in 25 different ~kpc-size regions throughout NGC 4449 correlates
with both the specific Region Luminosity, R_L, and the dominant age of the
underlying stellar population in each region. The observed cluster age
distribution is found to decline over time as dN/dt ~ t^g, with g=-0.85+/-0.15,
independent of cluster mass, and is consistent with strong, early cluster
disruption. The mass functions of the clusters can be described by a power law
with dN/dM ~ M^b and b=-1.86+/-0.2, independent of cluster age. The mass and
age distributions are quite resilient to differences in age-dating methods.
There is tentative evidence for a factor of 2-3 enhancement in both the star
and cluster formation rate ~100 - 300 Myr ago, indicating that cluster
formation tracks star formation generally. The enhancement is probably
associated with an earlier interaction event.
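A power-law age distribution dN/dt ~ t^g like the one reported above (g = -0.85 +/- 0.15) is typically estimated by binning cluster ages logarithmically and fitting a line in log-log space. The sketch below shows that standard procedure; the function name and binning choices are illustrative, not the paper's actual pipeline.

```python
import numpy as np

def fit_power_law_slope(ages, n_bins=10):
    """Estimate g in dN/dt ~ t^g from a sample of cluster ages.

    Ages are binned logarithmically; counts are divided by bin width to
    approximate dN/dt, and a straight line is fit to log10(dN/dt) vs
    log10(t) at the geometric bin centers.
    """
    edges = np.logspace(np.log10(ages.min()), np.log10(ages.max()), n_bins + 1)
    counts, _ = np.histogram(ages, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    mask = counts > 0                           # avoid log of empty bins
    dndt = counts[mask] / widths[mask]
    slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(dndt), 1)
    return slope
```

Applied to ages drawn from a true t^-0.85 law, the fit recovers a slope close to -0.85, consistent with the declining distribution and early disruption discussed above.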