Relatively-Paired Space Analysis
Discovering a latent common space between different modalities plays an important role in cross-modality pattern recognition. Existing techniques often require absolutely-paired observations as training data, and are incapable of capturing more general semantic relationships between cross-modality observations. This greatly limits their applications. In this paper, we propose a general framework for learning a latent common space from relatively-paired observations (i.e., two observations from different modalities are more-likely-paired than another two). Relative-pairing information is encoded using relative proximities of observations in the latent common space. By building a discriminative model and maximizing a distance margin, a projection function that maps observations into the latent common space is learned for each modality. Cross-modality pattern recognition can then be carried out in the latent common space. To evaluate its performance, the proposed framework has been applied to cross-pose face recognition and feature fusion. Experimental results demonstrate that the proposed framework outperforms other state-of-the-art approaches.
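The margin-based idea described above can be sketched in a few lines: learn one linear projection per modality so that, for each relative-pairing triplet (i, j, k), the pair (x_i, y_j) lands closer in the common space than (x_i, y_k) by at least a margin. This is an illustrative simplification under assumed names and a plain SGD scheme, not the paper's actual optimization.

```python
import numpy as np

def train_rpsa(X, Y, triplets, dim=2, margin=1.0, lr=0.01, epochs=200, seed=0):
    """Learn linear maps P, Q into a shared space from relative-pairing
    triplets (i, j, k): (X[i], Y[j]) should end up closer than (X[i], Y[k])
    by `margin` (squared Euclidean distances)."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(dim, X.shape[1]))
    Q = rng.normal(scale=0.1, size=(dim, Y.shape[1]))
    for _ in range(epochs):
        for i, j, k in triplets:
            px, qj, qk = P @ X[i], Q @ Y[j], Q @ Y[k]
            # hinge violation: margin + d(px, qj) - d(px, qk) > 0
            viol = margin + np.sum((px - qj) ** 2) - np.sum((px - qk) ** 2)
            if viol > 0:
                # analytic gradients of the active hinge term
                P -= lr * 2.0 * np.outer(qk - qj, X[i])
                Q -= lr * (-2.0 * np.outer(px - qj, Y[j])
                           + 2.0 * np.outer(px - qk, Y[k]))
    return P, Q

def hinge_loss(P, Q, X, Y, triplets, margin=1.0):
    """Total margin violation over all relative-pairing triplets."""
    total = 0.0
    for i, j, k in triplets:
        px, qj, qk = P @ X[i], Q @ Y[j], Q @ Y[k]
        total += max(0.0, margin + np.sum((px - qj) ** 2)
                     - np.sum((px - qk) ** 2))
    return total
```

On synthetic two-modality data generated from a shared latent factor, training drives the hinge loss down as matched observations are pulled together in the common space.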
Finding a boundary between valid and invalid regions of the input space
In the context of robustness testing, the boundary between the valid and
invalid regions of the input space can be an interesting source of erroneous
inputs. Knowing where a specific software under test (SUT) has a boundary is
essential for validation in relation to requirements. However, finding where a
SUT actually implements the boundary is a non-trivial problem that has not
gotten much attention. This paper proposes a method of finding the boundary
between the valid and invalid regions of the input space. The proposed method
consists of two steps. First, test data generators, directed by a search
algorithm to maximise distance to known, valid test cases, generate valid test
cases that are closer to the boundary. Second, these valid test cases undergo
mutations to try to push them over the boundary and into the invalid part of
the input space. This results in a pair of test sets, one consisting of test
cases on the valid side of the boundary and a matched set on the outer side,
with only a small distance between the two sets. The method is evaluated on a
number of examples from the standard library of a modern programming language.
We propose a method of determining the boundary between valid and invalid
regions of the input space and apply it on a SUT that has a non-contiguous
valid region of the input space. From the small distance between the developed
pairs of test sets, and the fact that one test set contains valid test cases
and the other invalid test cases, we conclude that the pair of test sets
described the boundary between the valid and invalid regions of that input
space. Differences of behaviour can be observed between different distances and
sets of mutation operators, but all show that the method is able to identify
the boundary between the valid and invalid regions of the input space. This is
an important step towards more automated robustness testing. Comment: 10 pages, conference
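The two-step procedure above can be illustrated on a toy SUT. The range-check oracle, the integer input space, and the simple search and mutation rules below are all stand-in assumptions; the paper's subjects are standard-library functions and its method uses richer distance metrics and mutation operators.

```python
import random

def sut_is_valid(x):
    """Hypothetical SUT stand-in: an input is 'valid' iff the call would
    succeed; here validity is simply membership in [0, 100)."""
    return 0 <= x < 100

def boundary_pairs(seed_valid, n_iters=100, rng=None):
    rng = rng or random.Random(0)
    valid = set(seed_valid)
    # Step 1: search for valid inputs that maximise distance to the
    # already-known valid cases, which drives them toward the boundary.
    for _ in range(n_iters):
        cands = [rng.randint(-200, 300) for _ in range(20)]
        ok = [c for c in cands if sut_is_valid(c)]
        if ok:
            valid.add(max(ok, key=lambda c: min(abs(c - v) for v in valid)))
    # Step 2: mutate each valid case step by step until it crosses the
    # boundary; each result is a matched (valid, invalid) pair at distance 1.
    pairs = set()
    for v in valid:
        for step in (+1, -1):
            m = v + step
            while sut_is_valid(m):
                m += step
            pairs.add((m - step, m))
    return pairs
```

Starting from a single known-valid input, the method recovers both edges of the valid region: every returned pair has a valid member and an invalid member separated by a minimal distance.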
A Hubble Space Telescope Snapshot Survey of Dynamically Close Galaxy Pairs in the CNOC2 Redshift Survey
We compare the structural properties of two classes of galaxies at
intermediate redshift: those in dynamically close galaxy pairs, and those which
are isolated. Both samples are selected from the CNOC2 Redshift Survey, and
have redshifts in the range 0.1 < z < 0.6. Hubble Space Telescope WFPC2 images
were acquired as part of a snapshot survey, and were used to measure bulge
fraction and asymmetry for these galaxies. We find that paired and isolated
galaxies have identical distributions of bulge fractions. Conversely, we find
that paired galaxies are much more likely to be asymmetric (R_T+R_A >= 0.13)
than isolated galaxies. Assuming that half of these pairs are unlikely to be
close enough to merge, we estimate that 40% +/- 11% of merging galaxies are
asymmetric, compared with 9% +/- 3% of isolated galaxies. The difference is
even more striking for strongly asymmetric (R_T+R_A >= 0.16) galaxies: 25% +/-
8% for merging galaxies versus 1% +/- 1% for isolated galaxies. We find that
strongly asymmetric paired galaxies are very blue, with rest-frame B-R colors
close to 0.80, compared with a mean (B-R)_0 of 1.24 for all paired galaxies. In
addition, asymmetric galaxies in pairs have strong [OII]3727 emission lines. We
conclude that close to half of the galaxy pairs in our sample are in the
process of merging, and that most of these mergers are accompanied by triggered
star formation. Comment: Accepted for publication in the Astronomical Journal. 40 pages,
including 15 figures. For full resolution version, please see
http://www.trentu.ca/physics/dpatton/hstpairs
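The uncertainties quoted above (e.g., 40% +/- 11% asymmetric merging galaxies) are of the size expected from simple binomial counting errors. A sketch, using hypothetical counts chosen only to reproduce the first fraction (these are not the paper's actual sample sizes):

```python
import math

def binomial_fraction(k, n):
    """Observed fraction k/n and its binomial standard error sqrt(p(1-p)/n)."""
    p = k / n
    return p, math.sqrt(p * (1 - p) / n)

# Hypothetical example: 8 asymmetric galaxies out of 20 merging candidates
# gives 0.40 with a standard error of about 0.11.
p, err = binomial_fraction(8, 20)
```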
Cool White Dwarfs Identified in the Second Data Release of the UKIRT Infrared Deep Sky Survey
We have paired the Second Data Release of the Large Area Survey of the UKIRT
Infrared Deep Sky Survey with the Fifth Data Release of the Sloan Digital Sky
Survey to identify ten cool white dwarf candidates, from their photometry and
astrometry. Of these ten, one was previously known to be a very cool white
dwarf. We have obtained optical spectroscopy for seven of the candidates using
the GMOS-N spectrograph on Gemini North, and have confirmed all seven as white
dwarfs. Our photometry and astrometry indicate that the remaining two objects
are also white dwarfs. Model analysis of the photometry and available
spectroscopy shows that the seven confirmed new white dwarfs, and the two new
likely white dwarfs, have effective temperatures in the range Teff = 5400-6600
K. Our analysis of the previously known white dwarf confirms that it is cool,
with Teff = 3800 K. The cooling age for this dwarf is 8.7 Gyr, while that of
the nine ~6000 K white dwarfs is 1.8-3.6 Gyr. We are unable to determine the
masses of the white dwarfs from the existing data, and therefore we cannot
constrain the total ages of the white dwarfs. The large cooling age for the
coolest white dwarf in the sample, combined with its low estimated tangential
velocity, suggests that it is an old member of the thin disk, or a member of
the thick disk of the Galaxy, with an age 10-11 Gyr. The warmer white dwarfs
appear to have velocities typical of the thick disk or even halo; these may be
very old remnants of low-mass stars, or they may be relatively young thin disk
objects with unusually high space motion. Comment: 37 pages (referee format), 4 tables, 7 figures, accepted to Ap
Reevaluating Assembly Evaluations with Feature Response Curves: GAGE and Assemblathons
In just the last decade, a multitude of bio-technologies and software
pipelines have emerged to revolutionize genomics. To further their central
goal, they aim to accelerate and improve the quality of de novo whole-genome
assembly starting from short DNA reads. However, the performance of each of
these tools is contingent on the length and quality of the sequencing data, the
structure and complexity of the genome sequence, and the resolution and quality
of long-range information. Furthermore, in the absence of any metric that
captures the most fundamental "features" of a high-quality assembly, there is
no obvious recipe for users to select the most desirable assembler/assembly.
International competitions such as Assemblathons or GAGE tried to identify the
best assembler(s) and their features. Somewhat circuitously, the only
available approach to gauge de novo assemblies and assemblers relies solely on
the availability of a high-quality fully assembled reference genome sequence.
Still worse, reference-guided evaluations are often difficult to analyze,
leading to conclusions that are difficult to interpret. In this paper, we
circumvent many of these issues by relying upon a tool, dubbed FRCbam, which is
capable of evaluating de novo assemblies from the read-layouts even when no
reference exists. We extend the FRCurve approach to cases where layout
information may have been obscured, as is true in many de Bruijn-graph-based
algorithms. As a by-product, FRCurve now expands its applicability to a much
wider class of assemblers -- thus, identifying higher-quality members of this
group, their inter-relations as well as sensitivity to carefully selected
features, with or without the support of a reference sequence or layout for the
reads. The paper concludes by reevaluating several recently conducted assembly
competitions and the datasets that have resulted from them. Comment: Submitted to PLoS One. Supplementary material available at
http://www.nada.kth.se/~vezzi/publications/supplementary.pdf and
http://cs.nyu.edu/mishra/PUBLICATIONS/12.supplementaryFRC.pd
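The feature-response idea can be sketched as follows, under an assumed formulation: take contigs in order of decreasing length and, for each feature budget, report the genome fraction covered before the cumulative count of suspicious features exceeds that budget. Function names and the exact accumulation rule are illustrative; FRCbam itself derives its features from read layouts.

```python
def frc_curve(contigs, genome_size, thresholds):
    """Sketch of a feature response curve.

    contigs: list of (length, n_features) pairs.
    Returns (threshold, approximate_coverage) points, where coverage is
    the genome fraction covered by the longest contigs whose cumulative
    feature count fits within the threshold."""
    contigs = sorted(contigs, reverse=True)  # longest contigs first
    points = []
    for phi in thresholds:
        covered = feats = 0
        for length, f in contigs:
            if feats + f > phi:
                break
            feats += f
            covered += length
        points.append((phi, covered / genome_size))
    return points
```

For example, with three contigs of lengths 5000, 3000, and 2000 carrying 1, 0, and 4 features on a 10 kb genome, a budget of 1 feature yields 80% coverage, while tolerating 5 features yields the full assembly. Assemblies whose curve rises steeply at low feature budgets achieve high coverage with few suspect regions.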
Suppressing cosmic variance with paired-and-fixed cosmological simulations: average properties and covariances of dark matter clustering statistics
Making cosmological inferences from the observed galaxy clustering requires
accurate predictions for the mean clustering statistics and their covariances.
Those are affected by cosmic variance -- the statistical noise due to the
finite number of harmonics. The cosmic variance can be suppressed by fixing the
amplitudes of the harmonics instead of drawing them from a Gaussian
distribution predicted by inflation models. Initial realizations can also be
generated in pairs with phases flipped by 180 degrees to further reduce the
variance. Here, we compare the consequences of using paired-and-fixed vs
Gaussian initial conditions on the average dark matter clustering and
covariance matrices predicted from N-body simulations. As in previous studies,
we find no measurable differences between paired-and-fixed and Gaussian
simulations for the average density distribution function, power spectrum and
bispectrum. Yet, the covariances from paired-and-fixed simulations are
suppressed in a complicated scale- and redshift-dependent way. The situation is
particularly problematic on the scales of Baryon Acoustic Oscillations where
the covariance matrix of the power spectrum is lower by only 20% compared to
the Gaussian realizations, implying that there is not much of a reduction of
the cosmic variance. The non-trivial suppression, combined with the fact that
paired-and-fixed covariances are noisier than from Gaussian simulations,
suggests that there is no path towards obtaining accurate covariance matrices
from paired-and-fixed simulations. Because the covariances are crucial for the
observational estimates of galaxy clustering statistics and cosmological
parameters, paired-and-fixed simulations, though useful for some applications,
cannot be used for the production of mock galaxy catalogs. Comment: Submitted to MNRA
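The paired-and-fixed construction can be illustrated schematically in one dimension: a Gaussian realization draws Rayleigh-distributed mode amplitudes, a fixed realization pins every amplitude to sqrt(P(k)), and the pair flips every phase by 180 degrees (equivalently, negates the field). Normalization and Nyquist-mode handling are glossed over here; real initial-condition generators work in 3-D with proper mode counting.

```python
import numpy as np

def paired_fixed_fields(pk, n, seed=0):
    """1-D toy comparison of Gaussian vs paired-and-fixed initial conditions.

    pk: callable giving the target power spectrum P(k).
    Returns (gaussian, fixed, fixed_pair) real-space fields of length n."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n) * 2 * np.pi
    # Gaussian field: |delta_k| is Rayleigh-distributed (schematic scaling).
    amp_gauss = rng.rayleigh(scale=np.sqrt(pk(k) / 2))
    # Fixed field: |delta_k| pinned exactly to sqrt(P(k)).
    amp_fixed = np.sqrt(pk(k))
    phase = rng.uniform(0, 2 * np.pi, size=k.size)
    delta_gauss = np.fft.irfft(amp_gauss * np.exp(1j * phase), n)
    delta_fixed = np.fft.irfft(amp_fixed * np.exp(1j * phase), n)
    # A 180-degree phase flip of every mode simply negates the field.
    delta_pair = -delta_fixed
    return delta_gauss, delta_fixed, delta_pair
```

By construction the fixed realization has zero scatter in its power spectrum, and averaging a paired-and-fixed pair cancels the leading-order noise, which is precisely why the mean clustering is reproduced so well even though (as the abstract argues) the covariances are distorted.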