Maximal uniform convergence rates in parametric estimation problems
This paper considers parametric estimation problems with independent, identically nonregularly distributed data. It focuses on rate efficiency, in the sense of the maximal possible convergence rates of stochastically bounded estimators, as an optimality criterion that has been largely unexplored in parametric estimation. Under mild conditions, the Hellinger metric, defined on the space of parametric probability measures, is shown to be an essentially universally applicable tool for determining maximal possible convergence rates. These rates are shown to be attainable in general classes of parametric estimation problems.
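The abstract does not spell out the metric; for orientation, the standard squared Hellinger distance between parametric measures P_theta and P_theta', with densities p_theta and p_theta' relative to a dominating measure mu, is

```latex
H^2(P_\theta, P_{\theta'}) \;=\; \frac{1}{2} \int \left( \sqrt{p_\theta} - \sqrt{p_{\theta'}} \right)^2 d\mu .
```

Heuristically, the attainable rate is tied to how fast this distance grows as theta' moves away from theta; in nonregular models it can grow faster than the root-n-regular square-root behavior, which is what makes faster-than-root-n rates possible.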
Compression of sub-relativistic space-charge-dominated electron bunches for single-shot femtosecond electron diffraction
We demonstrate compression of 95 keV, space-charge-dominated electron bunches to sub-100 fs durations. These bunches have sufficient charge (200 fC) and are of sufficient quality to capture a diffraction pattern in a single shot, which we demonstrate by a diffraction experiment on a polycrystalline gold foil. Compression is realized by means of velocity bunching resulting from a velocity chirp induced by the oscillatory longitudinal electric field of a 3 GHz radio-frequency cavity. The arrival time jitter is measured to be 80 fs.
Matter profile effect in neutrino factory
We point out that the matter profile effect, i.e. the effect of matter density fluctuations along the baseline, is very important for estimating the parameters in a neutrino factory with a very long baseline. To treat this effect explicitly, we propose a method based on the Fourier series expansion of the matter profile. Using this method, we can take account of both the matter profile effect and its ambiguity. For a very long baseline experiment, such as L = 7332 km, the analysis of the oscillation phenomena requires introducing a new theoretical parameter, the Fourier coefficient of the matter profile, to deal with the matter profile effects.
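The expansion itself is an ordinary Fourier series of the density along the baseline. A minimal sketch, with an entirely hypothetical profile (mean density plus one localized fluctuation; all numbers made up for illustration):

```python
import numpy as np

# Hypothetical matter density profile along the baseline (g/cm^3):
# a constant mean plus one Gaussian fluctuation. Illustrative only.
L = 7332.0                                  # baseline length (km), as in the abstract
x = np.linspace(0.0, L, 4096)
dx = x[1] - x[0]
rho = 4.0 + 0.5 * np.exp(-((x - 0.6 * L) / (0.05 * L)) ** 2)

def fourier_coefficients(profile, n_max):
    """Mean density a0 and the first n_max cosine/sine Fourier
    coefficients of the profile over the baseline (Riemann sums)."""
    a0 = np.sum(profile) * dx / L
    a_n = np.array([2.0 / L * np.sum(profile * np.cos(2 * np.pi * n * x / L)) * dx
                    for n in range(1, n_max + 1)])
    b_n = np.array([2.0 / L * np.sum(profile * np.sin(2 * np.pi * n * x / L)) * dx
                    for n in range(1, n_max + 1)])
    return a0, a_n, b_n

a0, a_n, b_n = fourier_coefficients(rho, n_max=3)
```

In this spirit, only the first few coefficients matter for a long-wavelength oscillation analysis, so the profile ambiguity can be summarized by a handful of parameters rather than the full density curve.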
Influence of Deep Margin Elevation and preparation design on the fracture strength of indirectly restored molars
The objectives of this in-vitro study were to investigate the influence of Deep Margin Elevation (DME) and preparation design (cusp coverage) on the fracture strength and repairability of CAD/CAM-manufactured lithium disilicate (LS2) restorations on molars. Sound extracted human molars (n = 60) were randomly divided into 4 groups (n = 15): inlay without DME (InoD), inlay with DME (IWD), onlay without DME (OnoD) and onlay with DME (OnWD). All samples were aged (1.2 × 10^6 cycles of 50 N; 8000 cycles of 5–55 °C), followed by oblique static loading until fracture. Fracture strength was measured in newtons and fracture analysis was performed using a scanning electron microscope. Data were statistically analyzed using two-way ANOVA and contingency tables. DME did not affect the fracture strength of LS2 restorations to a statistically significant level (p = .15). Onlays were stronger than inlays (p = .00). DME and preparation design did not interact (p = .97). However, onlays with DME were significantly stronger than inlays without DME (p = .00). More repairable fractures were observed among inlays (p = .00). Catastrophic crown-root fractures were more prevalent in onlays (p = .00). DME did not influence the repairability of fractures or the fracture types to a statistically significant level (p > .05). Within the limitations of this in-vitro study, DME did not statistically significantly affect the fracture strength, fracture type or repairability of LS2 restorations in molars. Cusp coverage did increase the fracture strength. However, the oblique forces necessary to fracture both inlays and onlays, with or without DME, far exceeded the bite forces that can be expected under physiological clinical conditions. Hence, both inlays and onlays are likely to be fracture resistant during clinical service.
Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin
The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data, which are usually only available in developed countries. In this study, we propose a novel approach to constructing large-scale groundwater models using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, which contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of the two models are performed separately). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reproduce the observed groundwater head time series reasonably well. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also, the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including in data-poor environments and at the global scale.
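The offline (one-way) coupling described above can be illustrated with a deliberately minimal sketch. Everything here is a hypothetical stand-in: the real study uses a distributed land surface model and MODFLOW, whereas this toy uses a crude recharge estimate and a single linear-reservoir groundwater store; it only mirrors the one-way data flow with no feedback to the land surface step.

```python
import numpy as np

def land_surface_recharge(precip, evap_frac=0.6):
    """Toy land-surface step: recharge as precipitation minus an
    evaporative fraction (hypothetical stand-in for the real model)."""
    return np.maximum(precip * (1.0 - evap_frac), 0.0)

def groundwater_heads(recharge, storage_coeff=0.2, recession=0.05, h0=10.0):
    """Toy groundwater step: a linear reservoir forced by precomputed
    recharge. This is the 'offline' part: the head never feeds back
    into the recharge calculation."""
    h = h0
    heads = []
    for r in recharge:
        h = h + r / storage_coeff - recession * (h - h0)
        heads.append(h)
    return np.array(heads)

# Hypothetical monthly precipitation (mm/month).
precip = np.array([80., 60., 40., 20., 10., 5., 5., 10., 30., 50., 70., 90.])
rech = land_surface_recharge(precip) / 1000.0   # mm -> m
heads = groundwater_heads(rech)                  # head above datum (m)
```

The simplification the abstract flags is visible in the structure: `groundwater_heads` consumes `rech` as a fixed input, so rising heads cannot dampen recharge the way a fully coupled model would allow.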
Clear vision: a step towards unravelling student recruitment in English universities?
Purpose The recruitment of undergraduate students within English universities is of vital importance to both the academic success and the financial stability of the organisation. Despite the primacy of the task, there has been a dearth of research looking at related performance and how to ensure that the process is optimised. The purpose of this study was to investigate the degree of variation both within a university and between different universities. The reliance that individual programmes and/or universities place on the Clearing process is key, given its uncertainty, resource demands and timing shortly before students take up their places. Design/methodology/approach The Nomogramma di Gandy diagrammatical approach utilises readily available data to analyse universities' performance in recruiting students to different programmes, and the degree to which they each rely on the Clearing process. Inter-university performance was investigated on a whole-student-intake basis for a sample of English universities, representative of type and region. Findings The study found that there were disparate patterns across the many programmes within the pilot university and also disparate patterns between different types of universities across England. Accordingly, universities should internally benchmark their programmes to inform both strategic and tactical decision-making. Similarly, Universities and Colleges Admissions Service benchmarking of inter-university patterns could inform the overall sector. Originality/value The approach and findings provide lessons for analysing student recruitment, which could be critical to universities' academic and financial health in an increasingly competitive environment.
Long-Baseline Study of the Leading Neutrino Oscillation at a Neutrino Factory
Within the framework of three-flavor neutrino oscillations, we consider the physics potential of \nu_e --> \nu_\mu appearance and \nu_\mu --> \nu_\mu survival measurements at a neutrino factory for a leading oscillation scale \delta m^2 ~ 3.5 \times 10^{-3} eV^2. Event rates are evaluated versus baseline and stored muon energy, and optimal values are discussed. Over a sizeable region of oscillation parameter space, matter effects would enable the sign of \delta m^2 to be determined from a comparison of \nu_e --> \nu_\mu with \bar\nu_e --> \bar\nu_\mu event rates and energy distributions. It is therefore important that both positive and negative muons can be stored in the ring. Measurements of the \nu_\mu --> \nu_\mu survival spectrum could determine the magnitude of \delta m^2 and the leading oscillation amplitude with a precision of O(1%--2%).
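For context, the matter effect that makes the sign of \delta m^2 observable enters through the standard two-flavor effective mixing angle in matter (a textbook MSW formula, not taken from this abstract):

```latex
\sin^2 2\theta_m \;=\; \frac{\sin^2 2\theta_{13}}
{\left( \cos 2\theta_{13} \mp \dfrac{2\sqrt{2}\, G_F N_e E_\nu}{\delta m^2} \right)^2 + \sin^2 2\theta_{13}}
```

with the upper sign for neutrinos and the lower sign for antineutrinos (G_F the Fermi constant, N_e the electron density, E_\nu the neutrino energy). Depending on the sign of \delta m^2, matter enhances either the \nu_e --> \nu_\mu or the \bar\nu_e --> \bar\nu_\mu channel, which is why comparing the two event rates resolves the sign.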