Triple gauge couplings in polarised e-e+ -> W-W+ and their measurement using optimal observables
The sensitivity of optimal integrated observables to electroweak triple gauge
couplings is investigated for the process e-e+ -> W-W+ -> 4 fermions at future
linear colliders. By a suitable reparameterisation of the couplings, we ensure
that all 28 coupling parameters have uncorrelated statistical errors and are
naturally normalised for this process. Discrete symmetry properties simplify
the analysis and allow checks on the stability of numerical results. We
investigate the sensitivity to the couplings of the normalised event
distribution and the additional constraints that can be obtained from the total
rate. Particular emphasis is put on the gain in sensitivity one can achieve
with longitudinal beam polarisation. We also point out questions that may best
be settled with transversely polarised beams. In particular we find that with
purely longitudinal polarisation one linear combination of coupling parameters
is hardly measurable by means of the normalised event distribution.
Comment: 56 pages, 20 figures
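The decorrelation step can be illustrated in a toy setting (two couplings and made-up densities, not the 28-parameter W-pair analysis): for a distribution depending linearly on the couplings, one diagonalises the covariance matrix of the optimal observables, and the corresponding linear reparameterisation of the couplings has uncorrelated statistical errors.

```python
import numpy as np

# Toy sketch, NOT the authors' analysis: a density
# S(x) = S0(x) + g1*S1(x) + g2*S2(x) depending linearly on two couplings.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100_000)  # stand-in phase-space variable

S0 = 1.0 + 0.0 * x            # leading-order density (illustrative)
S1 = x                        # linear response to coupling g1 (illustrative)
S2 = x**2 - 1.0 / 3.0         # linear response to coupling g2 (illustrative)

# Optimal observables: O_i = S_i / S0, evaluated event by event.
O = np.stack([S1 / S0, S2 / S0])

# Covariance matrix of the optimal observables; its inverse gives
# (up to statistics) the attainable errors on the couplings.
c = np.cov(O)

# Diagonalising c defines a linear reparameterisation of the couplings
# in which the statistical errors are uncorrelated.
eigvals, eigvecs = np.linalg.eigh(c)
O_prime = eigvecs.T @ O
c_prime = np.cov(O_prime)

# Off-diagonal covariances vanish in the new basis (up to rounding):
assert abs(c_prime[0, 1]) < 1e-9
```

The same diagonalisation applies in any number of parameters; the eigenvalues directly rank the well- and poorly-measured coupling combinations, which is how a "hardly measurable" linear combination shows up as a near-zero eigenvalue.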
Refractive X-ray beam shaping
This work introduces new refractive illumination optics in the hard X-ray region and describes a method for overcoming fabrication limitations of X-ray depth lithography. In particular, the problem of the high aspect ratio of X-ray prism lenses was addressed. The refractive X-ray optics are developed for the photon energy range 8-100 keV. In the following, we report the development of a fundamentally new focusing optic with a large aperture, an illumination condenser for full-field X-ray microscopy, and a so-called beam-shaping optic to overcome the limitation of the field of view at 3rd- and 4th-generation synchrotron sources.
To reduce the absorption of X-rays in the material of the optical systems, the approach of X-ray prism lenses was pursued. Here, the optics consist of rows of micro prisms with an edge length of about 20 µm, which deflect the incident rays. This improves the ratio of the refractive power of the optics to the volume of the absorbing lens material. The mechanical stability of the fragile, very tall micro prisms is achieved by exposing thin, stabilizing support planes.
In order to achieve focal sizes smaller than the prism edge lengths, double parabolic biconcave micro-lenses were added to the prism rows. A similar arrangement with biconvex micro-lenses was used to achieve beam expansion while simultaneously homogenizing the illumination of the image field of a full-field X-ray microscope. Beam shaping optics consisting of kinoform Fresnel lens elements were developed for vertical beam expansion at high brilliance synchrotron sources.
In all cases, the theory is based on geometrical optics and ray-tracing simulations. The optics were produced via deep X-ray lithography using the synchrotron radiation source at KIT at the LIGA I and II beamlines. The lens material is the negative resist mr-X, an epoxy resin-based polymer of the SU-8 type. The lenses were characterized at PETRA III (DESY, Hamburg) and at ESRF (Grenoble).
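The geometrical-optics reasoning can be sketched numerically: for hard X-rays the refractive index is n = 1 - delta with delta on the order of 1e-6, a thin prism deviates a ray by roughly (n - 1) times its apex angle, and a ray entering at height y crosses a number of prism rows proportional to y, which is precisely thin-lens behaviour. All numerical values below are illustrative assumptions, not parameters of the fabricated optics.

```python
import math

# Toy sketch of the geometrical optics behind an X-ray prism lens.
# All numbers are illustrative assumptions, not measured parameters.

delta = 2e-6            # refractive decrement of the polymer (assumed)
edge = 20e-6            # micro-prism edge length, ~20 um as in the text
apex = math.pi / 2      # 90-degree prism apex angle (assumed)

deflect_per_prism = delta * apex  # magnitude of the deviation per prism

def focal_length(y):
    """Focal length seen by a ray entering at height y from the axis."""
    n_prisms = y / edge                 # prism rows crossed grows with y
    theta = n_prisms * deflect_per_prism
    return y / theta                    # = edge / deflect_per_prism, constant

# Deflection grows linearly in y, so the focal length is independent of y:
f1 = focal_length(100e-6)
f2 = focal_length(400e-6)
assert abs(f1 - f2) / f1 < 1e-12
print(f"focal length of the toy prism stack: {f1:.2f} m")
```

The constancy of the focal length in y is exactly the thin-lens condition; it also shows why many weakly deflecting prisms beat one thick lens: the refractive power accumulates while the absorbing material volume stays small.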
Subtraction-noise projection in gravitational-wave detector networks
In this paper, we present a successful implementation of a subtraction-noise
projection method into a simple, simulated data analysis pipeline of a
gravitational-wave search. We investigate the problem of revealing a weak
stochastic background signal that is covered by a strong foreground of
compact-binary coalescences. The foreground, which is estimated by matched
filters, has to be subtracted from the data. Even an optimal analysis of
foreground signals will leave subtraction noise due to estimation errors of
template parameters which may corrupt the measurement of the background signal.
The subtraction noise can be removed by a noise projection. We apply our
analysis pipeline to the proposed future-generation space-borne Big Bang
Observer (BBO) mission, which will search for a stochastic background of
primordial GWs in a frequency band covered by a foreground of black-hole and
neutron-star binaries. Our analysis is based on a simulation code which
provides a dynamical model of a time-delay interferometer (TDI) network. It
generates the data as time series and incorporates the analysis pipeline
together with the noise projection. Our results confirm previous ad hoc
predictions that BBO will be sensitive to backgrounds with very small
fractional energy densities.
Comment: 54 pages, 15 figures
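The projection idea can be illustrated in a toy numpy sketch (a single sinusoidal "foreground" with three parameters, not the BBO/TDI pipeline): to first order, the subtraction noise left by imperfect parameter estimates lies in the span of the template's parameter derivatives, so projecting the residual onto the orthogonal complement of that span removes it.

```python
import numpy as np

# Toy model of subtraction-noise projection; names and numbers are assumptions.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 4096)

def template(amp, freq, phase):
    return amp * np.sin(2 * np.pi * freq * t + phase)

true = dict(amp=1.0, freq=30.0, phase=0.3)
background = 1e-3 * rng.standard_normal(t.size)  # stand-in "stochastic background"
data = template(**true) + background

# Slightly wrong parameter estimates, as left over by a matched-filter fit:
est = dict(amp=1.001, freq=30.0005, phase=0.2995)
residual = data - template(**est)  # still contains subtraction noise

# Numerical derivatives of the template w.r.t. its parameters at the estimate;
# to first order the subtraction noise lives in their span.
eps = 1e-6
cols = []
for name in est:
    shifted = dict(est)
    shifted[name] += eps
    cols.append((template(**shifted) - template(**est)) / eps)
H = np.column_stack(cols)

# Project the residual onto the orthogonal complement of span(H):
coeff = np.linalg.solve(H.T @ H, H.T @ residual)
cleaned = residual - H @ coeff

# The projection removes most of the subtraction noise:
assert np.linalg.norm(cleaned - background) < 0.5 * np.linalg.norm(residual - background)
```

The price of the projection is that any background component lying in span(H) is removed along with the subtraction noise; with a few template parameters and long data, that loss is small.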
Treatment of chyloperitoneum after extended lymphatic dissection during duodenopancreatectomy
Summary: Background. Chyloperitoneum is a rare postoperative complication that may be caused by an interruption of chylous ducts in the mesenteric root or the cisterna chyli. Two cases of chyloperitoneum after duodenopancreatectomy are reported in the literature. Methods. We here report a third case, in which chyloperitoneum developed 2 weeks postoperatively when the patient resumed a normal diet. Results. The patient was treated conservatively with paracenteses, and the chyloperitoneum subsided thereafter. Conclusions. Chyloperitoneum after extended duodenopancreatectomy may be treated conservatively.
Composite used for thermal spray instrumentation and method for making the same
A superalloy article which comprises: a substrate comprised of a superalloy; a bond coat comprised of MCrAlY, wherein M is a metal selected from the group consisting of cobalt, nickel, and mixtures thereof, applied onto at least a portion of the substrate; and a ceramic top coat applied over at least a portion of the bond coat. The bond coat is exposed to a temperature within the range of about 1600-1800 °F subsequent to its application onto the substrate.
The structure group for quasi-linear equations via universal enveloping algebras
We consider the approach of replacing trees by (fewer) multi-indices as an
index set of the abstract model space to tackle quasi-linear
singular SPDEs. We show that this approach is consistent with the postulates of
regularity structures when it comes to the structure group. In particular, the
structure group arises from a Hopf algebra and a comodule. This approach allows
one to interpret the structure group as a Lie group arising from a Lie algebra
consisting of derivations; these derivations in turn are the infinitesimal
generators arising from actions on the space of pairs (nonlinearities,
functions of space-time). The Hopf algebra arises from a coordinate
representation of the universal enveloping algebra of this Lie algebra. The
coordinates arise from an underlying pre-Lie algebra structure of the Lie
algebra. Strong finiteness properties, which are enforced by gradedness and the
restrictive definition of these spaces, allow these structures to be defined in
our infinite-dimensional setting.
Comment: 50 pages
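The passage from a pre-Lie structure to a Lie algebra and its universal enveloping algebra, alluded to above, is standard; schematically (generic notation, not the paper's):

```latex
% Left pre-Lie identity for a product \triangleright on a vector space L:
(a \triangleright b) \triangleright c - a \triangleright (b \triangleright c)
  = (b \triangleright a) \triangleright c - b \triangleright (a \triangleright c).
% Its antisymmetrisation is a Lie bracket,
[a,b] := a \triangleright b - b \triangleright a ,
% and the universal enveloping algebra U(L) of (L,[\cdot,\cdot]) is a
% cocommutative Hopf algebra with primitive generators,
\Delta a = a \otimes 1 + 1 \otimes a \qquad (a \in L).
% A coordinate (dual) representation of U(L) then yields a commutative
% Hopf algebra, whose characters recover the group.
```

This is only the generic algebraic mechanism; the paper's contribution lies in identifying the specific pre-Lie structure and the finiteness properties that make it work in the infinite-dimensional multi-index setting.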
Lecture notes on Malliavin calculus in regularity structures
Malliavin calculus provides a characterization of the centered model in
regularity structures that is stable under removing the small-scale cut-off. In
conjunction with a spectral gap inequality, it yields the stochastic estimates
of the model.
This becomes transparent on the level of a notion of model that parameterizes
the solution manifold, and thus is indexed by multi-indices rather than trees,
and which allows for a more geometric than combinatorial perspective. In these
lecture notes, this is carried out for a PDE with a heat operator and a cubic
nonlinearity, driven by additive noise, reminiscent of the stochastic
quantization of the Euclidean model.
More precisely, we informally motivate the two components of our notion of the
model as charts and transition maps, respectively, of the nonlinear solution
manifold. These geometric objects are algebrized in terms of formal power
series, and their algebra automorphisms. We will assimilate the directional
Malliavin derivative to a tangent vector of the solution manifold. This means
that it can be treated as a modelled distribution, thereby connecting
stochastic model estimates to pathwise solution theory, with its analytic tools
of reconstruction and integration. We unroll an inductive calculus that in an
automated way applies to the full subcritical regime.
Comment: 70 pages, 3 figures. Comments welcome
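The role of the spectral gap inequality can be stated schematically; in the Gaussian setting it is a Poincaré-type estimate (generic notation, with H the Cameron-Martin space of the noise and ∂ the Malliavin derivative; the precise norms used in the notes may differ):

```latex
% Spectral gap inequality for a random variable F = F(\xi) of the noise:
\mathbb{E}\big[(F - \mathbb{E}F)^{2}\big]
  \;\le\;
\mathbb{E}\big[\,\|\partial F\|_{H}^{2}\,\big].
```

Applied to the model coefficients, the variance on the left is the stochastic estimate one is after, while the Malliavin derivative on the right is controlled pathwise, via reconstruction and integration, once it is treated as a modelled distribution.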
Creutzfeldt-Jakob disease and homocysteine levels in plasma and cerebrospinal fluid
Background: There is evidence that homocysteine contributes to various neurodegenerative disorders. Objective: To assess homocysteine values in patients with Creutzfeldt-Jakob disease (CJD) in both cerebrospinal fluid (CSF) and plasma. Methods: Study design: case-control study. Total homocysteine was quantified in CSF and plasma samples of CJD patients (n = 13) and healthy controls (n = 13). Results: Mean values in healthy controls: 0.15 ± 0.07 µmol/l (CSF) and 9.10 ± 2.99 µmol/l (plasma); mean values in CJD patients: 0.13 ± 0.03 µmol/l (CSF) and 9.22 ± 1.81 µmol/l (plasma). No significant differences between CJD patients and controls were observed (Mann-Whitney U, p > 0.05). Conclusions: The results indicate that the CSF and plasma of CJD patients showed no higher endogenous levels of homocysteine than those of normal healthy controls. These findings provide no evidence for an additional role of homocysteine in the pathogenetic mechanisms underlying CJD neurodegeneration. Copyright (C) 2005 S. Karger AG, Basel
Scalability of Distributed Version Control Systems
Source at https://ojs.bibsys.no/index.php/NIK/article/view/434.
Distributed version control systems are popular for storing source code, but they are notoriously ill suited for storing large binary files.
We report on the results from a set of experiments designed to characterize the behavior of some widely used distributed version control systems with respect to scaling. The experiments measured commit times and repository sizes when storing single files of increasing size, and when storing increasing numbers of single-kilobyte files.
The goal is to build a distributed storage system with characteristics similar to version control but for much larger data sets. An early prototype of such a system, Distributed Media Versioning (DMV), is briefly described and compared with Git, Mercurial, and the Git-based backup tool Bup.
We find that processing large files without splitting them into smaller parts will limit maximum file size to what can fit in RAM. Storing millions of small files will result in inefficient use of disk space. And storing files with hash-based file and directory names will result in high-latency write operations, due to having to switch between directories rather than performing a sequential write.
The next-phase strategy for DMV will be to break files into chunks by content for de-duplication, and then to re-aggregate the chunks into append-only log files for low-latency write operations and efficient use of disk space.
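The planned chunk-by-content strategy can be sketched with a rolling-hash chunker and an append-only log (a generic polynomial hash and made-up parameters here; DMV's actual algorithm is not specified in the text). Identical content yields identical chunk boundaries, so a near-duplicate file adds only its changed chunks to the log.

```python
import hashlib
import random

WINDOW = 16            # rolling-hash window (illustrative)
MASK = (1 << 11) - 1   # ~2 KiB average chunk size (illustrative)
BASE = 257
MOD = (1 << 61) - 1

def chunks(data: bytes):
    """Split data at content-defined boundaries found by a rolling hash."""
    h, start = 0, 0
    pow_w = pow(BASE, WINDOW, MOD)
    for i, byte in enumerate(data):
        h = (h * BASE + byte) % MOD
        if i >= WINDOW:                      # drop byte leaving the window
            h = (h - data[i - WINDOW] * pow_w) % MOD
        # Boundary when the hash matches a pattern (with a 64-byte minimum):
        if i - start + 1 >= 64 and (h & MASK) == MASK:
            yield data[start : i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

def store(data: bytes, log: list, index: dict):
    """De-duplicate chunks into an append-only log (a list stands in for it)."""
    refs = []
    for c in chunks(data):
        key = hashlib.sha256(c).hexdigest()
        if key not in index:                 # only new content is appended
            index[key] = len(log)
            log.append(c)
        refs.append(key)
    return refs

log, index = [], {}
random.seed(0)
a = bytes(random.randrange(256) for _ in range(32768))  # 32 KiB of data
b = a + b"trailing edit"                                # near-duplicate version
store(a, log, index)
n_after_a = len(log)
store(b, log, index)
# Most chunks of b are shared with a, so the log grows only slightly:
assert len(log) - n_after_a <= 2
```

Because every new chunk is appended to the end of the log, writes stay sequential, addressing the high-latency, directory-switching writes observed for hash-named file trees.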