The Luminosity Dependence of Quasar Clustering
We investigate the luminosity dependence of quasar clustering, inspired by
numerical simulations of galaxy mergers that incorporate black hole growth.
These simulations have motivated a new interpretation of the quasar luminosity
function. In this picture, the bright end of the quasar luminosity function
consists of quasars radiating nearly at their peak luminosities, while the
faint end consists mainly of very similar sources, but at dimmer phases in
their evolution. We combine this model with the statistics of dark matter halos
that host quasar activity. We find that, since bright and faint quasars are
mostly similar sources seen in different evolutionary stages, a broad range in
quasar luminosities corresponds to only a narrow range in the masses of quasar
host halos. On average, bright and faint quasars reside in similar host halos.
Consequently, we argue that quasar clustering should depend only weakly on
luminosity. This prediction is in qualitative agreement with recent
measurements of the luminosity dependence of the quasar correlation function
(Croom et al. 2005) and the galaxy-quasar cross-correlation function
(Adelberger & Steidel 2005). Future precision clustering measurements from SDSS
and 2dF, spanning a large range in luminosity, should provide a strong test of
our model.
Comment: 9 pages, 4 figures, submitted to Ap
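The core argument above — that a wide luminosity range maps onto only a narrow halo-mass range when faint quasars are mostly dimmed phases of the same sources — can be illustrated with a toy Monte Carlo. Every number here (the mass range, the light-curve depth, the luminosity zero point, the decile cuts) is an assumption for illustration, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy assumption: active halos occupy a narrow mass range (0.5 dex),
# and peak luminosity scales with halo mass.
log_mass = rng.uniform(12.0, 12.5, n)          # log10(M_halo / M_sun)

# Each quasar is observed at a random phase of a declining light curve,
# dimmed by up to 3 dex below its peak luminosity.
dimming = rng.uniform(0.0, 3.0, n)
log_lum = 33.0 + (log_mass - 12.0) - dimming   # arbitrary zero point

bright = log_lum >= np.quantile(log_lum, 0.9)  # brightest 10%
faint = log_lum <= np.quantile(log_lum, 0.1)   # faintest 10%

dL = np.median(log_lum[bright]) - np.median(log_lum[faint])
dM = np.mean(log_mass[bright]) - np.mean(log_mass[faint])
print(f"luminosity separation: {dL:.2f} dex")
print(f"host-mass separation:  {dM:.2f} dex")
```

Because the luminosity spread is dominated by light-curve phase rather than by halo mass, the bright and faint samples differ by more than two dex in luminosity yet sit in nearly identical halos, which is why clustering (a function of halo mass) should depend only weakly on luminosity in this picture.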
Studying light propagation in a locally homogeneous universe through an extended Dyer-Roeder approach
Light is affected by local inhomogeneities in its propagation, which may
alter distances and so cosmological parameter estimation. In the era of
precision cosmology, the presence of inhomogeneities may induce systematic
errors if not properly accounted for. In this vein, a new interpretation of the
conventional Dyer-Roeder (DR) approach by allowing light received from distant
sources to travel in regions denser than average is proposed. It is argued that
the existence of a distribution of small and moderate cosmic voids (or "black
regions") implies that their matter content has been redistributed to the
homogeneous and clustered matter components, with the homogeneous component
becoming denser than the cosmic average it would have in the absence of
voids. Phenomenologically, this means that the DR
smoothness parameter (denoted here by α) can be greater than unity,
and, therefore, all previous analyses constraining it should be revisited
with a free upper limit. Accordingly, by performing a statistical analysis
involving 557 type Ia supernovae (SNe Ia) from Union2 compilation data in a
flat ΛCDM model, we obtain constraints on the extended parameter α. The
effects of α are also analyzed for generic ΛCDM models and flat XCDM
cosmologies. For both models, we find that a value of α greater than unity is
able to
harmonize SNe Ia and cosmic microwave background observations thereby
alleviating the well-known tension between low and high redshift data. Finally,
a simple toy model based on the existence of cosmic voids is proposed in order
to justify why α can be greater than unity as required by supernovae
data.
Comment: 5 pages, 2 figures. Title modified, results unchanged. It matches the
version published as a Brief Report in Phys. Rev.
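For reference, the Dyer-Roeder distance discussed above is obtained from a second-order ODE in which the smoothness parameter α multiplies the Ricci focusing term. The form below is the standard one for a flat background (notation assumed here: E(z) = H(z)/H₀, D_A the angular diameter distance); α = 1 recovers the filled-beam homogeneous case, α < 1 a partially empty beam, and the extension argued for in the abstract allows α > 1, i.e. beams traversing regions denser than the cosmic mean:

```latex
\frac{d^{2}D_{A}}{dz^{2}}
+\left(\frac{2}{1+z}+\frac{1}{E}\frac{dE}{dz}\right)\frac{dD_{A}}{dz}
+\frac{3\,\alpha\,\Omega_{m}\,(1+z)}{2\,E^{2}(z)}\,D_{A}=0,
\qquad
D_{A}(0)=0,\quad
\left.\frac{dD_{A}}{dz}\right|_{z=0}=\frac{c}{H_{0}}
```

As a consistency check, for α = 1 in an Einstein-de Sitter background (E² = (1+z)³) this equation is solved by the familiar D_A ∝ (1+z)⁻¹[1 − (1+z)⁻¹/²].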
Measurement of Vegetation and Terrain Characteristics On Small Scale Vertical Aerial Photographs
Few events in recent years have stirred public imagination and interest to the degree occasioned by the uses made of aerial photographs in the Cuban affair. The average earth scientist, however, was not taken by surprise since the basic methods, materials and principles involved were not new to him. As a matter of fact, since World War II, aerial photography has become an everyday, virtually indispensable tool to most earth feature and natural resource analysts. In order better to understand the application of aerial photographs to such civil pursuits, it would be well at this point to differentiate the two basic levels of use:
(1). Photogrammetry involves the use of highly precise measurements and complicated instrument systems. Among the products of photogrammetry, to list a very few, are highway designs, topographic maps, and bridge and dam site surveys.
(2). Photo Interpretation involves both the extraction of subjective information and the performance of measurements at a lower level of precision than that essential to the photogrammetrist. Photo interpretation work is usually done by the skilled professional (e.g., archeologist, forester, geographer, geologist) who utilizes this information to formulate decisions pertinent to his professional activity.
Admittedly, this is a gross over-simplification of the distinction between the two levels of activity and precision, since there is a certain degree of overlap between the two. As a matter of fact, there are some individuals who are fully qualified to perform both functions. Nevertheless, these basic categories must be recognized in order to indicate to the average subject-matter specialist the photo interpretation applications which are available to him directly.
Although the photo interpretation process is basically subjective, both in nature and by definition, a useful degree of quantification is possible. Crude though these measurements and controlled estimates may appear to the photogrammetric engineer, they are still suitable and often fully adequate for the purposes of the interpreter. It is doubtful, however, whether these techniques are being put to sufficient use, since it is generally estimated that not more than 60% of the useful capabilities of currently available photo interpretation systems are being realized. It is the purpose of this paper, therefore, to briefly describe some of these measurement techniques, give a few examples in current use, and suggest some possibilities for the future.
Polarimetry of Water Ice Particles Providing Insights on Grain Size and Degree of Sintering on Icy Planetary Surfaces
The polarimetry of the light scattered by planetary surfaces is a powerful
tool to provide constraints on their microstructure. To improve the
interpretation of polarimetric data from icy surfaces, we have developed the
POLarimeter for ICE Samples (POLICES) complementing the measurement facilities
of the Ice Laboratory at the University of Bern. The new setup uses a high
precision Stokes polarimeter to measure the degree of polarization in the
visible light scattered by surfaces at moderate phase angles (from 1.5 to
30°). We present the photometric and polarimetric phase curves measured on
various surfaces made of pure water ice particles having well-controlled size
and shape (spherical, crushed, frost). The results show how the amplitude and
the shape of the negative polarization branch change with the particles sizes
and the degree of metamorphism of the ice. We found that fresh frost formed by
water condensation on cold surfaces has a phase curve characterized by
resonances (Mie oscillations) indicating that frost embryos are transparent
micrometer-sized particles with a narrow size distribution and spherical shape.
Comparisons of these measurements with polarimetric observations of the icy
satellites of the Solar System suggest that Europa is possibly covered by
relatively coarser (~40-400 μm) and more sintered grains than Enceladus and
Rhea, which are more likely covered by frost-like particles of a few
micrometers on average. The great sensitivity of polarization to grain size
and degree of
sintering makes it an ideal tool to detect hints of ongoing processes on icy
planetary surfaces, such as cryovolcanism.
Comment: 36 pages, 1 table, 11 figures, 2 data sets, accepted in Journal of
Geophysical Research: Planets
Pavement Performance Evaluation Using Connected Vehicles
Roads deteriorate at different rates from weathering and use. Hence, transportation agencies must assess the ride quality of a facility regularly to determine its maintenance needs. Existing models to characterize ride quality produce the International Roughness Index (IRI), the prevailing summary of roughness. Nearly all state agencies use Inertial Profilers to produce the IRI. Such heavily instrumented vehicles require trained personnel for their operation and data interpretation. Resource constraints prevent the scaling of these existing methods beyond 4% of the network.

This dissertation developed an alternative method to characterize ride quality that uses regular passenger vehicles. Smartphones or connected vehicles provide the onboard sensor data needed to enable the new technique. The new method provides a single-index summary of ride quality for all paved and unpaved roads. The new index is directly proportional to the IRI. A new transform integrates sensor data streams from connected vehicles to produce a linear energy density representation of roughness. The ensemble average of indices from different speed ranges converges to a repeatable characterization of roughness.

The currently used IRI is undefined at speeds other than 80 km/h. This constraint mischaracterizes roughness experienced at other speeds. The newly proposed transform integrates the average roughness indices from all speed ranges to produce a speed-independent characterization of ride quality. This property avoids spatial wavelength bias, which is a critical deficiency of the IRI.

The new method leverages the emergence of connected vehicles to provide continuous characterizations of ride quality for the entire roadway network. This dissertation derived precision bounds of deterioration forecasting for models that could utilize the new index. The results demonstrated continuous performance improvements with additional vehicle participation.
With practical traversal volumes, the achievable precision of forecast is within a few days. This work also quantified the capabilities of the new transform to localize roadway anomalies that could pose travel hazards. The methods included derivations of the best sensor settings to achieve the desired performance. Several case studies validated the findings. These new techniques have the potential to save agencies millions of dollars annually by enabling predictive maintenance practices for all roadways, worldwide.
Mountain Plains Consortium (MPC)
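The ensemble-averaging idea above — per-traversal roughness indices grouped by speed bin, then averaged into a single speed-independent summary — can be sketched as follows. The dissertation's actual energy-density transform is not reproduced here; a simple vertical-acceleration energy per metre stands in for it, and all function names, constants, and simulated data are assumptions:

```python
import numpy as np

def toy_roughness_index(accel_z, distance_m):
    """Stand-in for the dissertation's transform: vertical-acceleration
    energy per metre travelled over one segment traversal."""
    return float(np.sum(np.square(accel_z)) / distance_m)

def ensemble_index(traversals, speed_bins):
    """Average per-traversal indices within each speed bin, then average
    the bin means to get a speed-independent summary of the segment."""
    bin_means = []
    for lo, hi in speed_bins:
        vals = [idx for speed, idx in traversals if lo <= speed < hi]
        if vals:
            bin_means.append(np.mean(vals))
    return float(np.mean(bin_means))

# Simulate 500 traversals of one 100 m segment by different vehicles:
# each reports a travel speed (km/h) and 200 vertical-acceleration samples.
rng = np.random.default_rng(1)
traversals = []
for _ in range(500):
    speed = float(rng.uniform(30, 90))
    accel_z = rng.normal(0.0, 1.6, 200)          # m/s^2, toy noise level
    traversals.append((speed, toy_roughness_index(accel_z, 100.0)))

bins = [(30, 50), (50, 70), (70, 90)]
estimate = ensemble_index(traversals, bins)
print(f"ensemble roughness estimate: {estimate:.2f}")
```

As more traversals accumulate, the ensemble estimate tightens around the segment's true roughness, mirroring the finding that forecast precision improves continuously with vehicle participation.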
Validation of Immunoassay-Based Tools for the Comprehensive Quantification of Aβ40 and Aβ42 Peptides in Plasma
Recent advances in neuroimaging and cerebrospinal fluid (CSF) biomarker assays have provided evidence of a long preclinical stage of Alzheimer's disease (AD). This period is being increasingly targeted for secondary prevention trials of new therapies. In this context, the interest of a noninvasive, cost-effective amyloid-β (Aβ) blood-based test can hardly be overstated. Nevertheless, a thorough validation of these bioanalytical methods should be performed as a prerequisite for confident interpretation of clinical results. The aim of this study was to validate the ELISA sandwich colorimetric ABtest40 and ABtest42 for the quantification of Aβ40 and Aβ42 in human plasma. The validation parameters assessed included precision, accuracy, sensitivity, specificity, recovery, and dilution linearity. ABtest40 and ABtest42 proved to be specific for their target peptides when challenged with Aβ peptides of sequence similar to the target. Mean relative error in the quantification was found to be below 7.5% for both assays, with high intra-assay, inter-assay, and inter-batch precision (CV <9.0% on average). Sensitivity was assessed by determining the limit of quantification fulfilling precision and accuracy criteria; it was established at 7.60 pg/ml and 3.60 pg/ml for ABtest40 and ABtest42, respectively. Plasma dilution linearity was demonstrated in PBS; however, dilution in a proprietary formulated buffer significantly increased the recovery of both Aβ40 and Aβ42 masked by matrix interactions, allowing a more comprehensive assessment of the free and total peptide levels in the plasma. In conclusion, both assays were successfully validated as tools for the quantification of Aβ40 and Aβ42 in plasma.
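The validation metrics named above (intra-assay precision reported as a coefficient of variation, accuracy as mean relative error) are straightforward to compute. The sketch below uses made-up replicate readings of a hypothetical control sample, not the study's data:

```python
import numpy as np

def cv_percent(replicates):
    """Coefficient of variation (%): precision of repeated measurements."""
    r = np.asarray(replicates, dtype=float)
    return float(100.0 * r.std(ddof=1) / r.mean())

def mean_relative_error_percent(measured, nominal):
    """Accuracy: mean absolute deviation from the nominal value, in %."""
    m = np.asarray(measured, dtype=float)
    return float(100.0 * np.mean(np.abs(m - nominal) / nominal))

# Hypothetical replicate quantifications of a 50 pg/ml control sample.
replicates = [48.9, 51.2, 50.4, 49.5, 50.8]
print(f"intra-assay CV: {cv_percent(replicates):.1f}%")
print(f"mean relative error: {mean_relative_error_percent(replicates, 50.0):.1f}%")
```

Under the abstract's acceptance criteria, these toy replicates would pass: CV below 9.0% and mean relative error below 7.5%.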
Deep ROC Analysis and AUC as Balanced Average Accuracy to Improve Model Selection, Understanding and Interpretation
Optimal performance is critical for decision-making tasks from medicine to
autonomous driving; however, common performance measures may be too general or
too specific. For binary classifiers, diagnostic tests or prognosis at a
timepoint, measures such as the area under the receiver operating
characteristic curve, or the area under the precision recall curve, are too
general because they include unrealistic decision thresholds. On the other
hand, measures such as accuracy, sensitivity, or the F1 score apply at a
single threshold and reflect a single probability or predicted risk, rather
than a range of individuals or risks. We propose a method in
between, deep ROC analysis, that examines groups of probabilities or predicted
risks for more insightful analysis. We translate esoteric measures into
familiar terms: AUC and the normalized concordant partial AUC are balanced
average accuracy (a new finding); the normalized partial AUC is average
sensitivity; and the normalized horizontal partial AUC is average specificity.
Along with post-test measures, we provide a method that can improve model
selection in some cases and provide interpretation and assurance for patients
in each risk group. We demonstrate deep ROC analysis in two case studies and
provide a toolkit in Python.
Comment: 14 pages, 6 Figures, submitted to IEEE Transactions on Pattern
Analysis and Machine Intelligence (TPAMI), currently under review
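The central identity above — AUC equals balanced average accuracy, i.e. the mean of average sensitivity and average specificity — can be checked numerically. This is an independent illustration with made-up scores, not the authors' toolkit; with the trapezoid rule the identity holds exactly because trapezoid integration obeys summation by parts:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoid-rule integral of y with respect to x."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1]) / 2.0))

def roc_points(scores, labels):
    """Sweep the decision threshold from high to low, accumulating
    true/false positive rates, starting from the (0, 0) corner."""
    order = np.argsort(-scores)
    y = labels[order]
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / (1 - y).sum()))
    return fpr, tpr

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])
fpr, tpr = roc_points(scores, labels)

auc = trapezoid(tpr, fpr)             # area under the ROC curve
avg_sens = trapezoid(tpr, fpr)        # sensitivity averaged over FPR
avg_spec = trapezoid(1.0 - fpr, tpr)  # specificity averaged over TPR
balanced_avg_acc = 0.5 * (avg_sens + avg_spec)

print(f"AUC = {auc:.3f}, balanced average accuracy = {balanced_avg_acc:.3f}")
```

Average sensitivity equals the AUC by definition of the ROC integral, and average specificity equals it by integration by parts (the curve runs from (0, 0) to (1, 1)), so their mean — balanced average accuracy — coincides with the AUC.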