Cygnus X-2, super-Eddington mass transfer, and pulsar binaries
We consider the unusual evolutionary state of the secondary star in Cygnus
X-2. Spectroscopic data give a low mass (M_2 \simeq 0.5 - 0.7\msun) and yet a
large radius (R_2 \simeq 7\rsun) and high luminosity (L_2 \simeq 150\lsun).
We show that this star closely resembles a remnant of early massive Case B
evolution, during which the neutron star ejected most of the \sim 3\msun
transferred from the donor (initial mass M_{\rm 2i}\sim 3.6\msun) on its
thermal time-scale. As the system is far too wide to result from
common-envelope evolution, this strongly supports the idea that a neutron star
efficiently ejects the excess inflow during super--Eddington mass transfer.
Cygnus X-2 is unusual in having had an initial mass ratio in a narrow critical
range. Smaller initial mass ratios lead to long-period systems with the former
donor near the Hayashi line, and larger ones to pulsar binaries with shorter
periods and relatively
massive white dwarf companions. The latter naturally explain the surprisingly
large companion masses in several millisecond pulsar binaries. Systems like
Cygnus X-2 may thus be an important channel for forming pulsar binaries.

Comment: 9 pages, 4 encapsulated figures, LaTeX, revised version with a few
typos corrected and an appendix added, accepted by MNRAS
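The thermal-timescale argument can be made explicit with a standard order-of-magnitude estimate (an illustrative aside, not a calculation from this paper; the stellar parameters inserted are the abstract's quoted values, and the Eddington rate is a standard figure):

```latex
t_{\rm KH} \simeq \frac{G M^{2}}{R L}
  \approx 3\times10^{7}\,{\rm yr}\,
  \left(\frac{M}{M_{\odot}}\right)^{2}
  \left(\frac{R_{\odot}}{R}\right)
  \left(\frac{L_{\odot}}{L}\right)
```

Taking M ~ 3.6 Msun with radius and luminosity of the order of the quoted present-day values (R ~ 7 Rsun, L ~ 150 Lsun, used here only for illustration) gives t_KH of a few times 10^5 yr, so transferring ~3 Msun implies a mass-transfer rate of order 10^-5 Msun/yr, some two to three orders of magnitude above a neutron star's Eddington accretion rate (~10^-8 Msun/yr). This is why most of the inflow must be ejected.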
Asymptotic Level Density of the Elastic Net Self-Organizing Feature Map
Whereas the Kohonen Self-Organizing Map shows an asymptotic level density
following a power law with a magnification exponent of 2/3, an exponent of 1
would be desirable in order to provide optimal mapping in the sense of
information theory. In this paper, we study analytically and numerically the
magnification behaviour of the Elastic Net algorithm as a model for
self-organizing feature maps. In contrast to the Kohonen map, the Elastic Net
shows no power law; nevertheless, for one-dimensional maps the density follows
a universal magnification law, i.e. it depends only on the local stimulus
density, is independent of position, and decouples from the stimulus density at
other positions.

Comment: 8 pages, 10 figures. Link to publisher under
http://link.springer.de/link/service/series/0558/bibs/2415/24150939.ht
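The magnification behaviour discussed above can be illustrated numerically. The sketch below is a minimal one-dimensional Kohonen SOM (the Elastic Net itself is not implemented here); all parameters, the stimulus density P(x) = 2x, and the learning schedule are illustrative assumptions, not values from the paper. After training, map units crowd where the stimulus density is high, which is the qualitative content of a magnification law.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D Kohonen SOM (parameters are assumptions, not from the paper)
N = 50
w = np.sort(rng.random(N))  # ordered initial codebook on [0, 1]

def kohonen_step(w, x, eps, sigma):
    """Move all units toward stimulus x, weighted by a Gaussian
    neighborhood (width sigma, in index space) around the winner."""
    winner = np.argmin(np.abs(w - x))
    h = np.exp(-0.5 * ((np.arange(len(w)) - winner) / sigma) ** 2)
    return w + eps * h * (x - w)

# Train on a non-uniform stimulus density P(x) = 2x on [0, 1],
# sampled by inverse CDF: x = sqrt(u) with u uniform.
for t in range(100000):
    sigma = max(1.0, 5.0 * np.exp(-t / 20000.0))  # shrinking neighborhood
    w = kohonen_step(w, np.sqrt(rng.random()), 0.05, sigma)

# Local unit density ~ 1 / spacing: spacings should shrink toward x = 1,
# where the stimulus density is largest.
ws = np.sort(w)
spacing = np.diff(ws)
print(spacing[:10].mean(), spacing[-10:].mean())
```

For the Kohonen map the equilibrium unit density follows P(x)^(2/3), so the spacing contrast here is weaker than the stimulus-density contrast; recovering the exponent itself would require a log-log fit over many more units.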
Investigation of topographical stability of the concave and convex Self-Organizing Map variant
We investigate, by a systematic numerical study, the parameter dependence of
the stability of the Kohonen Self-Organizing Map and the Zheng and Greenleaf
concave and convex learning with respect to different input distributions,
input dimensions, and output dimensions.
Lie Algebras and Suppression of Decoherence in Open Quantum Systems
Since there are many examples in which no decoherence-free subsystems exist
(among them all cases where the error generators act irreducibly on the system
Hilbert space), it is of interest to search for novel mechanisms which suppress
decoherence in these more general cases. Drawing on recent work
(quant-ph/0502153) we present three results which indicate decoherence
suppression without the need for noiseless subsystems. There is a certain
trade-off; our results do not necessarily apply to an arbitrary initial density
matrix, or for completely generic noise parameters. On the other hand, our
computational methods are novel and the result--suppression of decoherence in
the error-algebra approach without noiseless subsystems--is an interesting new
direction.

Comment: 7 pages
Calculating Kaon Fragmentation Functions from NJL-Jet Model
The Nambu--Jona-Lasinio (NJL)-Jet model provides a sound framework for
calculating the fragmentation functions in an effective chiral quark theory,
where the momentum and isospin sum rules are satisfied without the introduction
of ad hoc parameters. Earlier studies of the pion fragmentation functions using
the NJL model within this framework showed qualitative agreement with the
empirical parameterizations. Here we extend the NJL-Jet model by including the
strange quark. The corrections to the pion fragmentation functions and
corresponding kaon fragmentation functions are calculated using the elementary
quark to quark-meson fragmentation functions from NJL. The results for the kaon
fragmentation functions exhibit a qualitative agreement with the empirical
parameterizations, while the unfavored strange quark fragmentation to pions is
shown to be of the same order of magnitude as the unfavored light quark's. The
results of these studies are expected to provide important guidance for the
analysis of a large variety of semi-inclusive data.

Comment: 9 pages, 14 figures
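The momentum sum rule referred to above can be stated schematically (the notation here is assumed, not quoted from the paper): the light-cone momentum of the fragmenting quark q is exhausted by the emitted hadrons h,

```latex
\sum_{h} \int_{0}^{1} dz \; z \, D_{q}^{h}(z) \;=\; 1 ,
```

with an analogous constraint for isospin. In the NJL-Jet quark-cascade picture these constraints hold automatically, which is why no ad hoc normalization parameters are needed.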
A controlled experiment for the empirical evaluation of safety analysis techniques for safety-critical software
Context: Today's safety critical systems are increasingly reliant on
software. Software becomes responsible for most of the critical functions of
systems. Many different safety analysis techniques have been developed to
identify hazards of systems. FTA and FMEA are most commonly used by safety
analysts. Recently, STPA has been proposed with the goal to better cope with
complex systems including software. Objective: This research aimed at comparing
quantitatively these three safety analysis techniques with regard to their
effectiveness, applicability, understandability, ease of use and efficiency in
identifying software safety requirements at the system level. Method: We
conducted a controlled experiment with 21 master and bachelor students applying
these three techniques to three safety-critical systems: train door control,
anti-lock braking, and traffic collision avoidance. Results: The results
showed that there is no statistically significant difference between these
techniques in terms of applicability, understandability and ease of use, but a
significant difference in terms of effectiveness and efficiency was found.
Conclusion: We conclude that STPA seems to be an effective method to identify
software safety requirements at the system level. In particular, STPA
identifies more distinct software safety requirements than the traditional
techniques FTA and FMEA, but requires more time when carried out by safety
analysts with little or no prior experience.

Comment: 10 pages, 1 figure. In Proceedings of the 19th International
Conference on Evaluation and Assessment in Software Engineering (EASE '15).
ACM, 201
Indomethacin decreases viscosity of gallbladder bile in patients with cholesterol gallstone disease
There is experimental evidence that inhibition of cyclooxygenase with nonsteroidal anti-inflammatory drugs (NSAIDs) may decrease cholesterol gallstone formation and mitigate biliary pain in gallstone patients. The mechanisms by which NSAIDs exert these effects are unclear. In a prospective, controlled clinical trial we examined the effects of oral indomethacin on the composition of human gallbladder bile. The study included 28 patients with symptomatic cholesterol or mixed gallstones. Of these, 8 were treated with 3 × 25 mg indomethacin daily for 7 days prior to elective cholecystectomy, while 20 received no treatment and served as controls. Bile and tissue samples from the gallbladder were obtained during cholecystectomy. Indomethacin tissue levels in the gallbladder mucosa, as assessed by HPLC, were 1.05±0.4 ng/mg wet weight, a concentration known to effectively inhibit cyclooxygenase activity. Nevertheless, no differences between the treated and untreated groups were found in the concentrations of biliary mucus glycoprotein (0.94±0.27 versus 0.93±0.32 mg/ml) or total protein (5.8±0.9 versus 6.4±1.3 mg/ml), cholesterol saturation (1.3±0.2 versus 1.5±0.2), or nucleation time (2.0±3.0 versus 1.5±2.0 days). However, biliary viscosity, measured using a low-shear rotation viscosimeter, was significantly lower in patients receiving indomethacin treatment (2.9±0.6 versus 5.6±1.2 mPa.s; P < 0.02). In conclusion, oral indomethacin decreases bile viscosity in man without altering bile lithogenicity or biliary mucus glycoprotein content. Since mucus glycoproteins are major determinants of bile viscosity, an alteration in mucin macromolecular composition may conceivably cause the indomethacin-induced decrease in biliary viscosity and explain the beneficial effects of nonsteroidal anti-inflammatory drugs in gallstone disease.
Cygnus X-2: the Descendant of an Intermediate-Mass X-Ray Binary
The X-ray binary Cygnus X-2 (Cyg X-2) has recently been shown to contain a
secondary that is much more luminous and hotter than is appropriate for a
low-mass subgiant. We present detailed binary-evolution calculations which
demonstrate that the present evolutionary state of Cyg X-2 can be understood if
the secondary had an initial mass of around 3.5 M_sun and started to transfer
mass near the end of its main-sequence phase (or, somewhat less likely, just
after leaving the main sequence). Most of the mass of the secondary must have
been ejected from the system during an earlier rapid mass-transfer phase. In
the present phase, the secondary has a mass of around 0.5 M_sun with a
non-degenerate helium core. It is burning hydrogen in a shell, and mass
transfer is driven by the advancement of the burning shell. Cyg X-2 therefore
is related to a previously little studied class of intermediate-mass X-ray
binaries (IMXBs). We suggest that perhaps a significant fraction of X-ray
binaries presently classified as low-mass X-ray binaries may be descendants of
IMXBs and discuss some of the implications.
Energy Dependence of High Moments for Net-proton Distributions
High moments of multiplicity distributions of conserved quantities are
predicted to be sensitive to critical fluctuations. To understand the effect of
the complicated non-critical physics backgrounds on the proposed observable, we
have studied various moments of net-proton distributions with AMPT, Hijing,
Therminator and UrQMD models, in which no QCD critical point physics is
implemented. It is found that the centrality evolution of various moments of
net-proton distributions can be uniformly described by a superposition of
emission sources. In addition, in the absence of critical phenomena, some
moment products of the net-proton distribution, related to the baryon number
susceptibility ratios in Lattice QCD calculations, are predicted to be constant
as a function of the collision centrality. We argue that a non-monotonic
dependence of the moment products as a function of collision centrality and
beam energy may be used to locate the QCD critical point.

Comment: SQM2009 proceedings, 6 pages, 5 figures
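The non-critical baseline behind this argument can be illustrated numerically: if protons and antiprotons are emitted by independent Poisson sources (a superposition of uncorrelated emission sources), the net-proton number follows a Skellam distribution, whose cumulants satisfy C1 = C3 = mu_p - mu_pbar and C2 = C4 = mu_p + mu_pbar, so the moment products S·sigma = C3/C2 and kappa·sigma² = C4/C2 = 1 are independent of centrality. The sketch below checks this by sampling; the means mu_p and mu_pbar are illustrative values, not taken from the proceedings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Skellam baseline: net-proton number as a difference of
# two independent Poisson variates (mu values are hypothetical).
mu_p, mu_pbar = 4.0, 3.0
n = 1_000_000
net = rng.poisson(mu_p, n) - rng.poisson(mu_pbar, n)

# Central moments and cumulants of the sampled distribution.
d = net - net.mean()
c2 = np.mean(d**2)              # second cumulant (variance)
c3 = np.mean(d**3)              # third cumulant
c4 = np.mean(d**4) - 3 * c2**2  # fourth cumulant

s_sigma = c3 / c2       # Skellam expectation: (mu_p - mu_pbar)/(mu_p + mu_pbar) = 1/7
kappa_sigma2 = c4 / c2  # Skellam expectation: 1, whatever the means are

print(s_sigma, kappa_sigma2)
```

Because kappa·sigma² = 1 holds for any choice of the Poisson means, this baseline is flat across centrality; a non-monotonic deviation from it is what would flag critical-point physics.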