Evaluation of the Land Surface Water Budget in NCEP/NCAR and NCEP/DOE Reanalyses using an Off-line Hydrologic Model
The ability of the National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis (NRA1) and the follow-up NCEP/Department of Energy (DOE) reanalysis (NRA2) to reproduce the hydrologic budgets over the Mississippi River basin is evaluated using a macroscale hydrology model. This diagnosis is aided by a relatively unconstrained global climate simulation using the NCEP global spectral model, and a more highly constrained regional climate simulation using the NCEP regional spectral model, both employing the same land surface parameterization (LSP) as the reanalyses. The hydrology model is the variable infiltration capacity (VIC) model, which is forced by gridded observed precipitation and temperature. It reproduces observed streamflow, and by closure is constrained to balance the other terms in the surface water and energy budgets. The VIC-simulated surface fluxes therefore provide a benchmark for evaluating the predictions from the reanalyses and the climate models. The comparisons, conducted for the 10-year period 1988–1997, show the well-known overestimation of summer precipitation in the southeastern Mississippi River basin, a consistent overestimation of evapotranspiration, and an underprediction of snow in NRA1. These biases are generally lower in NRA2, though a large overprediction of snow water equivalent exists. NRA1 is subject to errors in the surface water budget due to nudging of modeled soil moisture toward an assumed climatology. The nudging and precipitation bias alone do not explain the consistent overprediction of evapotranspiration throughout the basin. Another source of error is the gravitational drainage term in the NCEP LSP, which produces the majority of the model's reported runoff. This may contribute to an overprediction of the persistence of surface water anomalies in much of the basin.
Residual evapotranspiration inferred from an atmospheric balance of NRA1, which is more directly related to observed atmospheric variables, matches the VIC prediction much more closely than do the coupled models. However, the persistence of the residual evapotranspiration is much less than is predicted by the hydrological model or the climate models.
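The closure constraint that makes the VIC fluxes a benchmark, and the "residual evapotranspiration" above, both come from the same budget identity. A minimal sketch of that identity; the function name and the numbers are illustrative, not values from the study:

```python
# Sketch: evapotranspiration inferred as the residual of a surface water
# budget, ET = P - R - dS, which is the closure idea used to benchmark the
# reanalyses. All numbers are illustrative, not data from the study.

def residual_et(precip, runoff, storage_change):
    """ET implied by closure of the surface water budget (same units, e.g. mm/month)."""
    return precip - runoff - storage_change

# Illustrative monthly values (mm) for a single grid cell
et = residual_et(precip=80.0, runoff=25.0, storage_change=10.0)
print(et)  # 45.0 mm of evapotranspiration implied by closure
```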
Conic Multi-Task Classification
Traditionally, Multi-task Learning (MTL) models optimize the average of
task-related objective functions, which is an intuitive approach and which we
will be referring to as Average MTL. However, a more general framework,
referred to as Conic MTL, can be formulated by considering conic combinations
of the objective functions instead; in this framework, Average MTL arises as a
special case, when all combination coefficients equal 1. Although the advantage
of Conic MTL over Average MTL has been shown experimentally in previous works,
no theoretical justification has been provided to date. In this paper, we
derive a generalization bound for the Conic MTL method, and demonstrate that
the tightest bound is not necessarily achieved, when all combination
coefficients equal 1; hence, Average MTL may not always be the optimal choice,
and it is important to consider Conic MTL. As a byproduct, the generalization
bound also theoretically explains the good experimental results of previous
relevant works. Finally, we propose a new Conic MTL model, whose conic
combination coefficients minimize the generalization bound, instead of choosing
them heuristically as has been done in previous methods. The rationale and
advantage of our model are demonstrated and verified via a series of experiments
by comparing with several other methods.
Comment: Accepted by European Conference on Machine Learning and Principles
and Practice of Knowledge Discovery in Databases (ECMLPKDD)-201
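The distinction the abstract draws between Average and Conic MTL can be sketched numerically: Conic MTL minimizes a nonnegative combination of the per-task losses, and Average MTL is the special case with all coefficients equal to 1. The losses and coefficients below are invented for illustration:

```python
import numpy as np

# Conic MTL objective: sum_i lambda_i * L_i with lambda_i >= 0.
# Average MTL arises when every lambda_i equals 1.

def conic_objective(task_losses, lambdas):
    lambdas = np.asarray(lambdas, dtype=float)
    assert np.all(lambdas >= 0), "a conic combination requires nonnegative coefficients"
    return float(np.dot(lambdas, task_losses))

losses = np.array([0.5, 0.2, 0.9])                   # illustrative per-task losses
avg_mtl = conic_objective(losses, [1, 1, 1])         # Average MTL special case
conic   = conic_objective(losses, [0.5, 2.0, 0.8])   # a general conic combination
print(avg_mtl, conic)
```

The paper's contribution is choosing the coefficients to minimize a generalization bound rather than setting them heuristically; the values above are purely heuristic placeholders.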
Using Sideband Transitions for Two-Qubit Operations in Superconducting Circuits
We demonstrate time resolved driving of two-photon blue sideband transitions
between superconducting qubits and a transmission line resonator. Using the
sidebands, we implement a pulse sequence that first entangles one qubit with
the resonator, and subsequently distributes the entanglement between two
qubits. We show generation of 75% fidelity Bell states by this method. The full
density matrix of the two-qubit system is extracted using joint measurement and
quantum state tomography, and shows close agreement with numerical simulation.
The scheme is potentially extendable to a scalable universal gate for quantum
computation.
Comment: 4 pages, 5 figures, version with high resolution figures available at
http://qudev.ethz.ch/content/science/PubsPapers.htm
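A 75% Bell-state fidelity, as quoted above, is the overlap F = ⟨ψ|ρ|ψ⟩ of the reconstructed density matrix with a pure target Bell state. A sketch of that computation, using an illustrative depolarized Bell state rather than the paper's tomography data:

```python
import numpy as np

# Fidelity of a two-qubit density matrix with the Bell state
# |psi> = (|00> + |11>)/sqrt(2). For a pure target, F = <psi| rho |psi>.
# The rho below is an illustrative depolarized Bell state, not measured data.

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

def bell_fidelity(rho, target=bell):
    return float(np.real(target.conj() @ rho @ target))

p = 2 / 3  # illustrative mixing parameter
rho = p * np.outer(bell, bell.conj()) + (1 - p) * np.eye(4) / 4
print(round(bell_fidelity(rho), 4))  # p + (1 - p)/4 = 0.75
```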
A micro-method for labeling of steroids and ecdysone with tritium. EUR 510. (Original German title: Eine Mikromethode zur Markierung von Steroiden und von Ecdyson mit Tritium. EUR 510.)
Optical determination and identification of organic shells around nanoparticles: application to silver nanoparticles
We present a simple method to prove the presence of an organic shell around
silver nanoparticles. This method is based on the comparison between optical
extinction measurements of isolated nanoparticles and Mie calculations
predicting the expected wavelength of the Localized Surface Plasmon Resonance
of the nanoparticles with and without the presence of an organic layer. This
method was applied to silver nanoparticles which seemed to be well protected
from oxidation. Further experimental characterization via Surface Enhanced
Raman Spectroscopy (SERS) measurements allowed us to identify this protective
shell as ethylene glycol. Combining LSPR and SERS measurements could thus
both detect and identify organic shells around other plasmonic nanoparticles.
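The comparison logic above can be illustrated in the quasi-static (dipole) limit rather than the full Mie calculation the paper uses: a small bare metal sphere resonates where Re(ε_metal) ≈ -2 ε_medium, and an organic layer raises the effective surrounding permittivity, redshifting the LSPR. The Drude parameters below are rough, illustrative values for silver, not fitted constants:

```python
import numpy as np

# Quasi-static sketch of the LSPR shift an organic shell induces.
# Not a Mie calculation: we just locate the wavelength minimizing
# |eps_metal + 2*eps_medium| for a simple Drude model of silver.

def drude_eps(wl_nm, eps_inf=4.0, wp_ev=9.0, gamma_ev=0.02):
    e = 1239.84 / wl_nm  # photon energy in eV
    return eps_inf - wp_ev**2 / (e**2 + 1j * gamma_ev * e)

def lspr_wavelength(eps_medium, wl=np.linspace(300, 700, 4001)):
    mismatch = np.abs(drude_eps(wl) + 2 * eps_medium)
    return wl[np.argmin(mismatch)]

bare    = lspr_wavelength(1.0)  # sphere in vacuum
shelled = lspr_wavelength(2.0)  # crude stand-in for an organic coating
print(bare, shelled)            # the shell redshifts the resonance
```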
Close to Uniform Prime Number Generation With Fewer Random Bits
In this paper, we analyze several variants of a simple method for generating
prime numbers with fewer random bits. To generate a prime p less than x,
the basic idea is to fix a constant q ∝ x^(1-ε), pick a
uniformly random a < q coprime to q, and choose p of the form p = a + t·q,
where only t is updated if the primality test fails. We prove that variants
of this approach provide prime generation algorithms requiring few random bits
and whose output distribution is close to uniform, under less and less
expensive assumptions: first a relatively strong conjecture by H.L. Montgomery,
made precise by Friedlander and Granville; then the Extended Riemann
Hypothesis; and finally fully unconditionally using the
Barban-Davenport-Halberstam theorem. We argue that this approach has a number
of desirable properties compared to previous algorithms.
Comment: Full version of ICALP 2014 paper. Alternate version of IACR ePrint
Report 2011/48
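The basic method the abstract describes can be sketched in a few lines. The choice q = 2·3·5·7·11 = 2310 is illustrative (the paper ties q to the target size x), and the Miller-Rabin witness set below makes the primality test deterministic for 64-bit integers:

```python
import math
import random

# Sketch of the method: fix a constant q, draw a uniformly random a < q
# coprime to q, then test candidates p = a + t*q, incrementing only t on
# failure. Fresh random bits are spent only on a, and every candidate is
# automatically coprime to q.

def is_prime(n):
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:  # deterministic witness set for n < 2**64
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_prime(q=2310):
    a = random.randrange(1, q)
    while math.gcd(a, q) != 1:   # random bits are used only here
        a = random.randrange(1, q)
    t, p = 0, a
    while not is_prime(p):
        t += 1                   # only t changes when the test fails
        p = a + t * q
    return p

print(gen_prime())
```

The paper's analysis concerns how close the output distribution of such variants is to uniform; this sketch only shows the candidate-generation mechanics.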
Characteristic velocities of stripped-envelope core-collapse supernova cores
The velocity of the inner ejecta of stripped-envelope core-collapse
supernovae (CC-SNe) is studied by means of an analysis of their nebular
spectra. Stripped-envelope CC-SNe are the result of the explosion of bare cores
of massive stars (≳ 8 M⊙), and their late-time spectra are
typically dominated by a strong [O I] λλ6300, 6363 emission
line produced by the innermost, slow-moving ejecta which are not visible at
earlier times as they are located below the photosphere. A characteristic
velocity of the inner ejecta is obtained for a sample of 56 stripped-envelope
CC-SNe of different spectral types (IIb, Ib, Ic) using direct measurements of
the line width as well as spectral fitting. For most SNe, this value shows a
small scatter around 4500 km s⁻¹. Observations of
stripped-envelope CC-SNe have revealed a subclass of very energetic SNe, termed
broad-lined SNe (BL-SNe) or hypernovae, which are characterised by broad
absorption lines in the early-time spectra, indicative of outer ejecta moving
at very high velocity. SNe identified as BL in the early phase
show large variations of core velocities at late phases, with some having much
higher and some having similar velocities with respect to regular CC-SNe. This
might indicate asphericity of the inner ejecta of BL-SNe, a possibility we
investigate using synthetic three-dimensional nebular spectra.
Comment: 14 pages, 10 figures, MNRAS accepted
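Converting a measured [O I] line width into a characteristic ejecta velocity, as done above by direct measurement, uses the nonrelativistic Doppler relation v = c·Δλ/λ. The half-width below is an invented number chosen to land near the quoted ~4500 km s⁻¹ scatter:

```python
# Sketch: Doppler conversion of an [O I] 6300 A line half-width into a
# characteristic ejecta velocity. The 94.6 A value is illustrative only.

C_KM_S = 299_792.458  # speed of light in km/s

def doppler_velocity(delta_lambda_A, rest_lambda_A=6300.0):
    return C_KM_S * delta_lambda_A / rest_lambda_A

v = doppler_velocity(94.6)
print(v)  # on the order of 4.5e3 km/s
```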
Differential Dynamic Microscopy to characterize Brownian motion and bacteria motility
We have developed a lab work module where we teach undergraduate students how
to quantify the dynamics of a suspension of microscopic particles, measuring
and analyzing the motion of those particles at the individual level or as a
group. Differential Dynamic Microscopy (DDM) is a relatively recent technique
that precisely does that and constitutes an alternative method to more
classical techniques such as dynamic light scattering (DLS) or video particle
tracking (VPT). DDM consists of imaging a particle dispersion with a standard
light microscope and a camera. The image analysis requires the students to code
and relies on digital Fourier transform to obtain the intermediate scattering
function, an autocorrelation function that characterizes the dynamics of the
dispersion. We first illustrate DDM on the textbook case of colloids where we
measure the diffusion coefficient. Then we show that DDM is a pertinent tool to
characterize biological systems such as motile bacteria, i.e. bacteria that can
self-propel, for which we determine not only the diffusion coefficient but also
the velocity and the fraction of motile bacteria. Finally, so that our paper can be
used as a tutorial to the DDM technique, we have attached to this article
movies of the colloidal and bacterial suspensions, together with the DDM
algorithm in both Matlab and Python to analyze the movies.
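The core of the DDM analysis described above can be sketched in a few lines: difference images at a lag τ, Fourier-transform them, and ensemble-average the power spectrum to get the image structure function D(q, τ) (for Brownian particles it grows as A(q)[1 − exp(−Dq²τ)] + B(q), from which the students fit the diffusion coefficient). The random frames below stand in for a real microscopy movie, and the fitting step is omitted:

```python
import numpy as np

# Minimal DDM sketch on synthetic frames:
# (1) difference images at lag tau, (2) 2D FFT, (3) average the power
# spectrum over frame pairs -> image structure function D(qx, qy; tau).

rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 64, 64))  # stand-in for a microscopy movie

def image_structure_function(frames, tau):
    diffs = frames[tau:] - frames[:-tau]      # (1) difference images
    spectra = np.abs(np.fft.fft2(diffs))**2   # (2) power spectra
    return spectra.mean(axis=0)               # (3) ensemble average

d = image_structure_function(frames, tau=2)
print(d.shape)  # one value per wavevector (qx, qy)
```

A real analysis would additionally average D over azimuthal rings of constant |q| and fit the τ-dependence at each q.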
Validity of silhouette showcards as a measure of body size and obesity in a population in the African region: a practical research tool for general-purpose surveys.
BACKGROUND: The purpose of this study is to validate the Pulvers silhouette showcard as a measure of weight status in a population in the African region. This tool is particularly beneficial when scarce resources do not allow for direct anthropometric measurements due to limited survey time or lack of measurement technology in face-to-face general-purpose surveys or in mailed, online, or mobile device-based surveys.
METHODS: A cross-sectional study was conducted in the Republic of Seychelles with a sample of 1240 adults. We compared self-reported body sizes measured by Pulvers' silhouette showcards to four measurements of body size and adiposity: body mass index (BMI), measured body fat percentage, waist circumference, and waist-to-height ratio. The accuracy of silhouettes as an obesity indicator was examined using sex-specific receiver operating characteristic (ROC) analysis, and the reliability of this tool to detect socioeconomic gradients in obesity was compared to BMI-based measurements.
RESULTS: Our study supports silhouette body size showcards as a valid and reliable survey tool to measure self-reported body size and adiposity in an African population. The mean correlation coefficients of self-reported silhouettes with measured BMI were 0.80 in men and 0.81 in women (P < 0.001). The silhouette showcards also showed high accuracy for detecting obesity as per a BMI ≥ 30 (area under the curve, AUC: 0.91/0.89, SE: 0.01), which was comparable to other measured adiposity indicators: fat percentage (AUC: 0.94/0.94, SE: 0.01), waist circumference (AUC: 0.95/0.94, SE: 0.01), and waist-to-height ratio (AUC: 0.95/0.94, SE: 0.01) amongst men and women, respectively. The use of silhouettes in detecting obesity differences among socioeconomic groups yielded a similar magnitude, direction, and significance of the association between obesity and socioeconomic status as measured BMI.
CONCLUSIONS: This study highlights the validity and reliability of silhouettes as a survey tool for measuring obesity in a population in the African region. The ease of use and cost-effectiveness of this tool make it an attractive alternative to measured BMI in the design of non-face-to-face online- or mobile device-based surveys, as well as in-person general-purpose surveys of obesity in the social sciences, where limited resources do not allow for direct anthropometric measurements.
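The sex-specific ROC analysis above reduces to ranking silhouette scores against a binary obesity label (BMI ≥ 30): the AUC is the probability that a randomly chosen obese respondent reports a larger silhouette than a non-obese one, with ties counted half. A sketch on invented data, not the Seychelles survey:

```python
# Rank-based AUC: P(score_obese > score_non_obese), ties count 0.5.

def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

silhouettes = [2, 3, 4, 5, 6, 7, 8, 9]  # self-reported body size (illustrative)
obese       = [0, 0, 0, 0, 1, 0, 1, 1]  # BMI >= 30 indicator (illustrative)
print(auc(silhouettes, obese))  # 14/15
```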