Confidence Statements for Efficiency Estimates from Stochastic Frontier Models
This paper is an empirical study of the uncertainty associated with technical efficiency estimates from stochastic frontier models. We show how to construct confidence intervals for estimates of technical efficiency levels under different sets of assumptions, ranging from the very strong to the relatively weak. We demonstrate empirically how the degree of uncertainty associated with these estimates relates to the strength of the assumptions made and to various features of the data.
Keywords: confidence intervals, stochastic frontier models, efficiency measurement
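A minimal sketch of how such an interval can be formed, assuming the cross-sectional normal/half-normal model with known variance parameters: conditional on the composed residual eps_i = v_i - u_i, the inefficiency u_i is normal truncated at zero (the JLMS result), so quantiles of that conditional distribution bound technical efficiency exp(-u_i). The paper's own panel-data constructions differ in detail; the function below is illustrative, not the authors' code.

```python
import numpy as np
from scipy.stats import truncnorm

def te_interval(eps, sigma_u, sigma_v, level=0.95):
    # Normal/half-normal SFA: eps_i = v_i - u_i, with u_i >= 0.
    # Conditional on eps_i, u_i ~ N(mu_star, s_star^2) truncated at 0 (JLMS).
    s2 = sigma_u**2 + sigma_v**2
    mu_star = -eps * sigma_u**2 / s2
    s_star = sigma_u * sigma_v / np.sqrt(s2)
    a = (0.0 - mu_star) / s_star            # standardized truncation point
    tail = (1.0 - level) / 2.0
    q_lo, q_hi = truncnorm.ppf([tail, 1.0 - tail], a, np.inf,
                               loc=mu_star, scale=s_star)
    # TE = exp(-u) is monotone decreasing in u, so the upper quantile of u
    # gives the lower endpoint of the efficiency interval.
    return np.exp(-q_hi), np.exp(-q_lo)

# A firm with composed residual -0.20 under assumed variance parameters:
print(te_interval(eps=-0.20, sigma_u=0.3, sigma_v=0.2))
```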
HoPP: Robust and Resilient Publish-Subscribe for an Information-Centric Internet of Things
This paper revisits NDN deployment in the IoT with a special focus on the interaction of sensors and actuators. Such scenarios require high responsiveness and limited control state at the constrained nodes. We argue that the NDN request-response pattern, which prevents data push, is vital for IoT networks. We contribute HoP-and-Pull (HoPP), a robust publish-subscribe scheme for typical IoT scenarios that targets IoT networks consisting of hundreds of resource-constrained devices with intermittent connectivity. Our approach keeps the FIB tables to a minimum and naturally supports mobility, temporary network partitioning, data aggregation, and near-real-time reactivity. We experimentally evaluate the protocol in a real-world deployment using the IoT-Lab testbed with varying numbers of constrained devices, each wirelessly interconnected via IEEE 802.15.4 LoWPANs. Implementations are built on CCN-lite with RIOT and support experiments using various single- and multi-hop scenarios.
Sampling Errors and Confidence Intervals for Order Statistics: Implementing the Family Support Act
The Family Support Act allows states to reimburse child care costs up to the 75th percentile of local market price for child care. States must carry out surveys to estimate these 75th percentiles. This estimation problem raises two major statistical issues: (1) picking a sample design that will allow one to estimate the percentiles cheaply, efficiently and equitably; and (2) assessing the sampling variability of the estimates obtained. For Massachusetts, we developed a sampling design that equalized the standard errors of the estimated percentiles across 65 distinct local markets. This design was chosen because state administrators felt public day care providers and child advocates would find it equitable, thus limiting costly appeals. Estimation of standard errors for the sample 75th percentiles requires estimation of the density of the population at the 75th percentile. We implement and compare a number of parametric and nonparametric methods of density estimation. A kernel estimator provides the most reasonable estimates. On the basis of the mean integrated squared error criterion we selected the Epanechnikov kernel and the Sheather-Jones automatic bandwidth selection procedure. Because some of our sample sizes were too small to rely on asymptotics, we also constructed nonparametric confidence intervals using the hypergeometric distribution. For most of our samples, these confidence intervals were similar to those based on the asymptotic standard errors. Substantively, we find wide variation in the price of child care, depending on the child's age, type of care and geographic location. For full-time care, the 75th percentiles ranged upward from $85 per week for family day care in western Massachusetts.
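As a rough sketch of the density-based standard error described above: the asymptotic s.e. of a sample p-quantile is sqrt(p(1-p)/n) / f(x_p), with the density f estimated by a kernel method. The snippet below uses the Epanechnikov kernel, as in the paper, but substitutes a Silverman rule-of-thumb bandwidth for the Sheather-Jones selector; the data are simulated, not the Massachusetts survey.

```python
import numpy as np

def epanechnikov_kde(x, data, h):
    # Epanechnikov kernel density estimate at a single point x.
    u = (x - data) / h
    k = 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)
    return k.sum() / (len(data) * h)

def quantile_with_se(data, p=0.75):
    # Asymptotic s.e. of the sample p-quantile: sqrt(p(1-p)/n) / f(x_p),
    # where the density f is estimated by the kernel method above.
    data = np.asarray(data, float)
    n = len(data)
    xp = np.quantile(data, p)
    # Silverman rule-of-thumb bandwidth as a simple stand-in for the
    # Sheather-Jones selector used in the paper.
    h = 1.06 * data.std(ddof=1) * n ** (-0.2)
    f_hat = epanechnikov_kde(xp, data, h)
    return xp, np.sqrt(p * (1 - p) / n) / f_hat

# Hypothetical weekly child-care prices for one local market:
rng = np.random.default_rng(1)
prices = rng.lognormal(mean=4.7, sigma=0.3, size=120)
q75, se = quantile_with_se(prices)
print(f"75th percentile: {q75:.2f}, s.e. {se:.2f}")
```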
Panel Data Models with Multiple Time-Varying Individual Effects
This paper considers a panel data model with time-varying individual effects. The data are assumed to contain a large number of cross-sectional units repeatedly observed over a fixed number of time periods. The model has a feature of the fixed-effects model in that the effects are assumed to be correlated with the regressors. The unobservable individual effects are assumed to have a factor structure. For consistent estimation of the model, it is important to estimate the true number of factors. We propose a generalized method of moments procedure by which both the number of factors and the regression coefficients can be consistently estimated. Some important identification issues are also discussed. Our simulation results indicate that the proposed methods produce reliable estimates.
Keywords: panel data, time-varying individual effects, factor models
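The paper's GMM procedure is not reproduced here; as a loose illustration of the factor-number problem it addresses, the sketch below simulates a panel with two time-varying factor effects and applies an eigenvalue-ratio rule (a swapped-in technique from the pure factor-model literature) to residuals from a deliberately naive pooled OLS fit. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, r_true = 500, 6, 2                 # many units, fixed T, two factors

# Simulate y_it = beta * x_it + lambda_i' f_t + eps_it, with the effects
# correlated with the regressor, as in a fixed-effects setting.
beta = 1.5
F = rng.normal(size=(T, r_true))         # time-varying factors
Lam = rng.normal(size=(N, r_true))       # individual loadings
X = 0.5 * (Lam @ F.T) + rng.normal(size=(N, T))
Y = beta * X + Lam @ F.T + rng.normal(size=(N, T))

# Residualize with pooled OLS (biased here; the paper's GMM avoids this),
# then pick the factor count where consecutive eigenvalues drop the most.
b = (X * Y).sum() / (X * X).sum()
U = Y - b * X
eig = np.sort(np.linalg.eigvalsh(U.T @ U / N))[::-1]   # descending
r_hat = int(np.argmax(eig[:-1] / eig[1:])) + 1
print("estimated number of factors:", r_hat)
```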
Primary Care Validation of a Single-Question Alcohol Screening Test
BACKGROUND
Unhealthy alcohol use is prevalent but under-diagnosed in primary care settings.
OBJECTIVE
To validate, in primary care, a single-item screening test for unhealthy alcohol use recommended by the National Institute on Alcohol Abuse and Alcoholism (NIAAA).
DESIGN
Cross-sectional study.
PARTICIPANTS
Adult English-speaking patients recruited from primary care waiting rooms.
MEASUREMENTS
Participants were asked the single screening question, "How many times in the past year have you had X or more drinks in a day?", where X is 5 for men and 4 for women, and a response of 1 or more is considered positive. Unhealthy alcohol use was defined as the presence of an alcohol use disorder, as determined by a standardized diagnostic interview, or risky consumption, as determined using a validated 30-day calendar method.
MAIN RESULTS
Of 394 eligible primary care patients, 286 (73%) completed the interview. The single-question screen was 81.8% sensitive (95% confidence interval (CI) 72.5% to 88.5%) and 79.3% specific (95% CI 73.1% to 84.4%) for the detection of unhealthy alcohol use. It was slightly more sensitive (87.9%, 95% CI 72.7% to 95.2%) but less specific (66.8%, 95% CI 60.8% to 72.3%) for the detection of a current alcohol use disorder. Test characteristics were similar to those of a commonly used three-item screen, and were affected very little by subject demographic characteristics.
CONCLUSIONS
The single screening question recommended by the NIAAA accurately identified unhealthy alcohol use in this sample of primary care patients. These findings support the use of this brief screen in primary care.
Funding: National Institute on Alcohol Abuse and Alcoholism (R01-AA010870).
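For readers who want to reproduce this kind of test characteristic, a small sketch with Wilson score intervals follows; the 2x2 counts are hypothetical placeholders, since the abstract reports only the resulting percentages.

```python
import numpy as np

def wilson_ci(k, n, z=1.96):
    # Wilson score interval for a binomial proportion k/n.
    p = k / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical 2x2 counts of screen result vs. reference standard
# (NOT the study's data, which the abstract does not report):
tp, fn = 81, 18      # reference-positive patients by screen result
tn, fp = 150, 37     # reference-negative patients by screen result

sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"sensitivity {sens:.1%}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"specificity {spec:.1%}, 95% CI {wilson_ci(tn, tn + fp)}")
```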
Otto Stern (1888-1969): The founding father of experimental atomic physics
We review the work and life of Otto Stern who developed the molecular beam
technique and with its aid laid the foundations of experimental atomic physics.
Among the key results of his research are: the experimental determination of
the Maxwell-Boltzmann distribution of molecular velocities (1920), experimental
demonstration of space quantization of angular momentum (1922), diffraction of
matter waves comprised of atoms and molecules by crystals (1931) and the
determination of the magnetic dipole moments of the proton and deuteron (1933).
Comment: 39 pages, 8 figures
Multiple Comparisons with the Best, with Economic Applications
In this paper we discuss a statistical method called multiple comparisons with the best, or MCB. Suppose that we have N populations, and population i has parameter value θ_i. Let θ* = max(θ_1, ..., θ_N), the parameter value for the "best" population. Then MCB constructs joint confidence intervals for the differences θ* − θ_i. It is not assumed that it is known which population is best, and part of the problem is to say whether any population is so identified, at the given confidence level. This paper is meant to introduce MCB to economists. We discuss possible uses of MCB in economics. The application that we treat in most detail is the construction of confidence intervals for inefficiency measures from stochastic frontier models with panel data. We also consider an application to the analysis of labour market wage gaps.
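A minimal sketch of MCB-style intervals under simplifying assumptions: a common known standard error for every population, and a Bonferroni-adjusted normal quantile standing in for the exact multivariate critical value, which makes the intervals conservative. The constraint at zero follows Hsu's formulation; all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def mcb_intervals(means, se, alpha=0.05):
    # Joint intervals for theta_i - max_{j != i} theta_j (Hsu-style MCB).
    # A Bonferroni-adjusted normal quantile approximates the exact
    # critical value; `se` is a common standard error for all estimates.
    means = np.asarray(means, float)
    N = len(means)
    d = stats.norm.ppf(1 - alpha / (N - 1)) * se * np.sqrt(2.0)
    lower, upper = np.empty(N), np.empty(N)
    for i in range(N):
        rival = np.max(np.delete(means, i))   # best of the other populations
        diff = means[i] - rival
        lower[i] = min(0.0, diff - d)         # intervals constrained at zero,
        upper[i] = max(0.0, diff + d)         # as in Hsu's formulation
    return lower, upper

# Hypothetical efficiency estimates for five firms, common s.e. 0.04:
lo, hi = mcb_intervals([0.71, 0.84, 0.79, 0.88, 0.65], se=0.04)
print(np.round(lo, 3), np.round(hi, 3))
```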
Dynamics of confined water reconstructed from inelastic x-ray scattering measurements of bulk response functions
Nanoconfined water and surface-structured water impact a broad range of fields. For water confined between hydrophilic surfaces, measurements and simulations have shown conflicting results ranging from "liquidlike" to "solidlike" behavior, from bulklike water viscosity to viscosity orders of magnitude higher. Here, we investigate how a homogeneous fluid behaves under nanoconfinement using its bulk response function: the Green's function of water extracted from a library of S(q,ω) inelastic x-ray scattering data is used to make femtosecond movies of nanoconfined water. Between two confining surfaces, the structure undergoes drastic changes as a function of surface separation. For surface separations of ~9 Å, although the surface-associated hydration layers are highly deformed, they are separated by a layer of bulklike water. For separations of ~6 Å, the two surface-associated hydration layers are forced to reconstruct into a single layer that modulates between localized "frozen" and delocalized "melted" structures due to interference of density fields. These results potentially reconcile recent conflicting experiments. Importantly, we find a different delocalized wetting regime for nanoconfined water between surfaces with high-spatial-frequency charge densities, where water is organized into delocalized hydration layers instead of localized hydration shells and is strongly resistant to "freezing" down to molecular distances (<6 Å).
A Guideline on Pseudorandom Number Generation (PRNG) in the IoT
Random numbers are an essential input to many functions on the Internet of
Things (IoT). Common use cases of randomness range from low-level packet
transmission to advanced algorithms of artificial intelligence as well as
security and trust, which heavily rely on unpredictable random sources. In the constrained IoT, though, unpredictable random sources are hard to obtain due to limited resources, deterministic real-time operations, and the frequent lack of a user interface.
In this paper, we revisit the generation of randomness from the perspective
of an IoT operating system (OS) that needs to support general purpose or
crypto-secure random numbers. We analyse the potential attack surface, derive common requirements, and discuss the potential and shortcomings of current IoT OSs. A systematic evaluation of current IoT hardware components and popular software generators, based on well-established test suites and on experiments measuring performance, gives rise to a set of clear recommendations on how to build such a random subsystem and which generators to use.
Comment: 43 pages, 11 figures, 11 tables
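The split between general-purpose and crypto-secure randomness that the abstract describes can be illustrated with Python's standard library (standing in for an IoT OS interface; the paper targets constrained devices, not CPython): `random` is a fast, deterministic Mersenne Twister PRNG, while `secrets` draws from the operating system's entropy source.

```python
import random    # general-purpose PRNG (Mersenne Twister): fast, seedable,
                 # and explicitly NOT suitable for security purposes
import secrets   # CSPRNG backed by the operating system's entropy source

# General-purpose use, e.g. jitter for a packet retransmission backoff:
backoff_ms = random.uniform(0.0, 100.0)

# Security-critical use, e.g. session tokens and cipher nonces:
token = secrets.token_hex(16)     # 128 bits of OS-provided randomness
nonce = secrets.token_bytes(12)   # e.g. an AEAD nonce

print(f"backoff {backoff_ms:.1f} ms, token {token}, nonce {nonce.hex()}")
```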