Comparison of two-dimensional binned data distributions using the energy test
For the purposes of monitoring HEP experiments, comparison is often made between regularly acquired histograms of data and reference histograms which represent the ideal state of the equipment. With the larger experiments now starting up, there is a need for automation of this task since the volume of comparisons would overwhelm human operators. However, the two-dimensional histogram comparison tools currently available in ROOT have noticeable shortcomings. We present a new comparison test for 2D histograms, based on the Energy Test of Aslan and Zech, which provides more decisive discrimination between histograms of data coming from different distributions.
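A minimal sketch of the two-sample energy statistic in the Aslan-Zech style, applied to 2D point samples (unbinned, for clarity; the histogram version weights bin centres by their contents). The `-ln r` distance weighting and the `1/n^2` normalization follow one common formulation; conventions vary, and the null distribution is usually obtained by permutation:

```python
import numpy as np

def energy_test(sample_a, sample_b):
    """Two-sample energy statistic: small values indicate that the two
    samples are compatible with a common underlying distribution."""
    def pair_sum(x, y):
        # Sum of -ln(r) over all inter-point distances, skipping r = 0.
        d = np.sqrt(((x[:, None, :] - y[None, :, :]) ** 2).sum(-1))
        d = d[d > 0]
        return -np.log(d).sum()

    n, m = len(sample_a), len(sample_b)
    # Within-sample sums count each pair twice, hence the factor of 2.
    phi_aa = pair_sum(sample_a, sample_a) / (2 * n * n)
    phi_bb = pair_sum(sample_b, sample_b) / (2 * m * m)
    phi_ab = pair_sum(sample_a, sample_b) / (n * m)
    return phi_aa + phi_bb - phi_ab
```

Comparing a sample against a second draw from the same distribution yields a smaller statistic than comparing it against a shifted one, which is the discrimination the test relies on.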
The Clustering of Luminous Red Galaxies in the Sloan Digital Sky Survey Imaging Data
We present the 3D real space clustering power spectrum of a sample of
~600,000 luminous red galaxies (LRGs) measured by the Sloan Digital Sky Survey
(SDSS), using photometric redshifts. This sample of galaxies ranges from
redshift z=0.2 to 0.6 over 3,528 deg^2 of the sky, probing a volume of 1.5
(Gpc/h)^3, making it the largest volume ever used for galaxy clustering
measurements. We measure the angular clustering power spectrum in eight
redshift slices and combine these into a high precision 3D real space power
spectrum from k=0.005 (h/Mpc) to k=1 (h/Mpc). We detect power on gigaparsec
scales, beyond the turnover in the matter power spectrum, on scales
significantly larger than those accessible to current spectroscopic redshift
surveys. We also find evidence for baryonic oscillations, both in the power
spectrum and in fits to the baryon density, at a 2.5 sigma confidence
level. The statistical power of these data to constrain cosmology is ~1.7 times
better than previous clustering analyses. Varying the matter density and baryon
fraction, we find \Omega_M = 0.30 \pm 0.03, and \Omega_b/\Omega_M = 0.18 \pm
0.04. The detection of baryonic oscillations also allows us to measure the
comoving distance to z=0.5; we find a best fit distance of 1.73 \pm 0.12 Gpc,
corresponding to a 6.5% error on the distance. These results demonstrate the
ability to make precise clustering measurements with photometric surveys
(abridged). Comment: 23 pages, 27 figures, submitted to MNRAS
3D Object Class Detection in the Wild
Object class detection has been a synonym for 2D bounding box localization
for the longest time, fueled by the success of powerful statistical learning
techniques, combined with robust image representations. Only recently, there
has been a growing interest in revisiting the promise of computer vision from
the early days: to precisely delineate the contents of a visual scene, object
by object, in 3D. In this paper, we draw from recent advances in object
detection and 2D-3D object lifting in order to design an object class detector
that is particularly tailored towards 3D object class detection. Our 3D object
class detection method consists of several stages gradually enriching the
object detection output with object viewpoint, keypoints and 3D shape
estimates. Each stage is carefully designed to consistently improve
performance, and the full pipeline achieves state-of-the-art performance in
simultaneous 2D bounding box and viewpoint estimation on the challenging
Pascal3D+ dataset.
Testing the Hubble Law with the IRAS 1.2 Jy Redshift Survey
We test and reject the claim of Segal et al. (1993) that the correlation of
redshifts and flux densities in a complete sample of IRAS galaxies favors a
quadratic redshift-distance relation over the linear Hubble law. This is done,
in effect, by treating the entire galaxy luminosity function as derived from
the 60 micron 1.2 Jy IRAS redshift survey of Fisher et al. (1995) as a distance
indicator; equivalently, we compare the flux density distribution of galaxies
as a function of redshift with predictions under different redshift-distance
cosmologies, under the assumption of a universal luminosity function. This
method does not assume a uniform distribution of galaxies in space. We find
that this test has rather weak discriminatory power, as argued by Petrosian
(1993), and the differences between models are not as stark as one might expect
a priori. Even so, we find that the Hubble law is indeed more strongly
supported by the analysis than is the quadratic redshift-distance relation. We
identify a bias in the Segal et al. determination of the luminosity
function, which could lead one to mistakenly favor the quadratic
redshift-distance law. We also present several complementary analyses of the
density field of the sample; the galaxy density field is found to be close to
homogeneous on large scales if the Hubble law is assumed, while this is not the
case with the quadratic redshift-distance relation. Comment: 27 pages LaTeX (w/figures), ApJ, in press. Uses AAS macros,
postscript also available at
http://www.astro.princeton.edu/~library/preprints/pop682.ps.g
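The logic of the test can be illustrated with a toy version: under a universal luminosity function, a galaxy at redshift z and assumed distance r(z) has flux S = L / (4 pi r^2), so each candidate redshift-distance law predicts a flux distribution at every redshift that can be scored against the data. The log-normal luminosity function below is a hypothetical stand-in for the measured IRAS one, and the simplified likelihood ignores selection effects, but the comparison of laws follows the same pattern:

```python
import numpy as np

def log_likelihood(z, flux, distance_of_z, mu=0.0, sigma=1.0):
    """Log-likelihood of observed fluxes under a redshift-distance law,
    assuming a universal log-normal luminosity function (toy model).
    Constant terms shared by all laws are dropped."""
    r = distance_of_z(z)
    log_lum = np.log(4.0 * np.pi * r**2 * flux)  # luminosity implied by the law
    return -0.5 * np.sum(((log_lum - mu) / sigma) ** 2)
```

Simulating a catalogue under the linear law and scoring it under both a linear and a quadratic r(z) then ranks the two cosmologies, exactly as the abstract describes, without assuming spatial homogeneity.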
Probing physics students' conceptual knowledge structures through term association
Traditional tests are not effective tools for diagnosing the content and
structure of students' knowledge of physics. As a possible alternative, a set
of term-association tasks (the "ConMap" tasks) was developed to probe the
interconnections within students' store of conceptual knowledge. The tasks have
students respond spontaneously to a term or problem or topic area with a
sequence of associated terms; the response terms and time-of-entry data are
captured. The tasks were tried on introductory physics students, and
preliminary investigations show that the tasks are capable of eliciting
information about the structure of their knowledge. Specifically, data gathered
through the tasks is similar to that produced by a hand-drawn concept map task,
has measures that correlate with in-class exam performance, and is sensitive to
learning produced by topic coverage in class. Although the results are
preliminary and only suggestive, the tasks warrant further study as
student-knowledge assessment instruments and sources of experimental data for
cognitive modeling efforts. Comment: 31 pages plus 2 tables and 8 figures
Steady-state simulations using weighted ensemble path sampling
We extend the weighted ensemble (WE) path sampling method to perform rigorous
statistical sampling for systems at steady state. The straightforward
steady-state implementation of WE is directly practical for simple landscapes,
but not when significant metastable intermediate states are present. We
therefore develop an enhanced WE scheme, building on existing ideas, which
accelerates attainment of steady state in complex systems. We apply both WE
approaches to several model systems confirming their correctness and efficiency
by comparison with brute-force results. The enhanced version is significantly
faster than the brute force and straightforward WE for systems with WE bins
that accurately reflect the reaction coordinate(s). The new WE methods can also
be applied to equilibrium sampling, since equilibrium is a steady state.
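A minimal sketch of one straightforward WE iteration: propagate all walkers, then split and merge within each bin so every occupied bin carries a fixed number of walkers while total probability weight is conserved. Here split/merge is done by weighted resampling within each bin, a simplification of the usual pairwise merge rules; the function names and bin scheme are illustrative, not the paper's:

```python
import numpy as np

def we_step(walkers, weights, propagate, bin_of, n_bins, m_per_bin, rng):
    """One weighted-ensemble iteration on 1D walker coordinates."""
    walkers = np.array([propagate(x) for x in walkers])
    new_x, new_w = [], []
    for b in range(n_bins):
        idx = [i for i, x in enumerate(walkers) if bin_of(x) == b]
        if not idx:
            continue  # empty bins hold no weight
        bin_weight = sum(weights[i] for i in idx)
        p = np.array([weights[i] for i in idx]) / bin_weight
        # Resample m_per_bin walkers within the bin; each carries an
        # equal share of the bin's weight, so total weight is conserved.
        chosen = rng.choice(idx, size=m_per_bin, p=p)
        new_x.extend(walkers[chosen])
        new_w.extend([bin_weight / m_per_bin] * m_per_bin)
    return np.array(new_x), np.array(new_w)
```

For steady-state sampling one would additionally recycle walkers reaching the target state back to the initial state with their weights intact; the conservation of total weight above is the invariant that makes the sampled fluxes rigorous.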
Computationally efficient algorithms for the two-dimensional Kolmogorov-Smirnov test
Goodness-of-fit statistics measure the compatibility of random samples against some theoretical or reference probability distribution function. The classical one-dimensional Kolmogorov-Smirnov test is a non-parametric statistic for comparing two empirical distributions, which defines the largest absolute difference between the two cumulative distribution functions as a measure of disagreement. Adapting this test to more than one dimension is a challenge because there are 2^d-1 independent ways of ordering a cumulative distribution function in d dimensions. We discuss Peacock's version of the Kolmogorov-Smirnov test for two-dimensional data sets, which computes the differences between cumulative distribution functions in 4n^2 quadrants. We also examine Fasano and Franceschini's variation of Peacock's test, Cooke's algorithm for Peacock's test, and ROOT's version of the two-dimensional Kolmogorov-Smirnov test. We establish a lower bound of Omega(n^2 lg n) on the work required to compute Peacock's test, introduce optimal algorithms for both this test and Fasano and Franceschini's test, and show that Cooke's algorithm is not a faithful implementation of Peacock's test. We also discuss and evaluate parallel algorithms for Peacock's test.
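The Fasano-Franceschini variation restricts the quadrant centres to the data points themselves, which is what makes faster algorithms possible. A brute-force reference implementation of the two-sample statistic (one common variant: the maximum quadrant-fraction difference over centres drawn from both samples; other variants average the per-sample maxima):

```python
import numpy as np

def ff_statistic(a, b):
    """Fasano-Franceschini two-sample statistic for 2D samples a, b of
    shape (n, 2): the largest absolute difference between the fractions
    of each sample falling in the four open quadrants centred on each
    data point. Brute force, O(n^2); the optimal algorithms are faster."""
    d = 0.0
    for pts in (a, b):
        for x0, y0 in pts:
            for sx, sy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
                fa = np.mean((sx * (a[:, 0] - x0) > 0) & (sy * (a[:, 1] - y0) > 0))
                fb = np.mean((sx * (b[:, 0] - x0) > 0) & (sy * (b[:, 1] - y0) > 0))
                d = max(d, abs(fa - fb))
    return d
```

Peacock's original test instead maximizes over quadrants centred on all 4n^2 coordinate pairs, which is why its work bound is higher and why Cooke's shortcut can disagree with it.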