
    IR diagnostics of embedded jets: velocity resolved observations of the HH34 and HH1 jets

    We present VLT-ISAAC medium-resolution spectroscopy of the HH34 and HH1 jets. Our aim is to derive the kinematics and the physical parameters and to study how they vary with jet velocity. We use several important diagnostic lines, such as [FeII] 1.644 μm and 1.600 μm and H2 2.122 μm. In the inner jet region of HH34 we find that both the atomic and the molecular gas present two components, at high and low velocity. The [FeII] low-velocity component (LVC) in HH34 is detected up to large distances from the source (>1000 AU), at variance with T Tauri jets. In H2 2.122 μm, the LVC and the high-velocity component (HVC) are spatially separated. We detect, for the first time, the fainter red-shifted counterpart down to the central source. In HH1, we trace the jet down to ~1" from the VLA1 driving source: the kinematics of this inner region is again characterised by two velocity components, one blue-shifted and one red-shifted with respect to the source LSR velocity. In the inner HH34 jet region, the electron density n_e increases with decreasing velocity. Up to ~10" from the driving source, and along the whole HH1 jet, the opposite behaviour is observed, with n_e increasing with velocity. In both jets the mass flux is carried mainly by the high-velocity gas. For HH34, we have compared the position-velocity diagrams and derived electron densities with models of MHD jet-launching mechanisms. While the kinematical characteristics of the line emission at the jet base can be reproduced, at least qualitatively, by both X-wind and disc-wind models, none of these models can explain the extent of the LVC or the dependence of electron density on velocity that we observe. It is possible that the LVC in HH34 represents gas that is not directly ejected in the jet but is instead denser ambient gas entrained by the high-velocity collimated jet.
    Comment: A&A accepted
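    As background for the velocity-resolved measurements above, the sketch below converts an observed line centroid into a radial velocity with the non-relativistic Doppler formula. This is generic textbook physics, not the paper's reduction pipeline; the observed wavelength and the function name are illustrative.

        # Doppler radial velocity from a line centroid, e.g. for the
        # [FeII] 1.644 um line mentioned in the abstract. Illustrative only.

        C_KM_S = 299792.458  # speed of light (km/s)

        def radial_velocity(lambda_obs_um, lambda_rest_um=1.644):
            """Radial velocity in km/s; positive means red-shifted."""
            return C_KM_S * (lambda_obs_um - lambda_rest_um) / lambda_rest_um

        # A centroid blue-shifted by 0.0005 um maps to roughly -91 km/s:
        print(radial_velocity(1.6435))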

    An Approach to Static Performance Guarantees for Programs with Run-time Checks

    Instrumenting programs to perform run-time checking of properties, such as regular shapes, is a common and useful technique that helps programmers detect incorrect program behaviors. This is especially true in dynamic languages such as Prolog. However, such run-time checks inevitably introduce overhead (in execution time, memory, energy, etc.). Several approaches have been proposed for reducing this overhead, such as eliminating the checks that can be statically proved to always succeed and/or optimizing the way in which the remaining checks are performed. However, there are cases in which not all checks can be removed statically (e.g., open libraries which must check their interfaces, complex properties, unknown code, etc.), and in which, even after optimization, the remaining checks may still introduce an unacceptable level of overhead. It is thus important for programmers to be able to determine the additional cost due to the run-time checks and compare it to some notion of admissible cost. The common practice for estimating run-time checking overhead is profiling, which by nature is not exhaustive. Instead, we propose a method that uses static analysis to estimate this overhead, with the advantage that the estimations are functions parameterized by input data sizes. Unlike profiling, this approach can provide guarantees for all possible execution traces and allows assessing how the overhead grows with the size of the input. Our method also extends an existing assertion verification framework to express "admissible" overheads, and statically and automatically checks whether the instrumented program conforms to such specifications. Finally, we present an experimental evaluation suggesting that our method is feasible and promising.
    Comment: 15 pages, 3 tables; submitted to ICLP'18, accepted as technical communication
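    To make the idea concrete, here is a minimal Python sketch, not the paper's Prolog-based framework: the statically inferred checking overhead is a function of the input size n, and a specification of admissible overhead is verified against it for all sizes. All function names and cost expressions are hypothetical.

        # Hypothetical analysis output: run-time checking overhead as a
        # function of input data size n (e.g. in resolution steps).
        def check_overhead(n):
            return 5 * n + 20

        # Hypothetical cost of the uninstrumented program.
        def base_cost(n):
            return n * n

        # Specification: overhead must stay within 10% of the base cost.
        def admissible(n):
            return 0.10 * base_cost(n)

        def conforms(up_to):
            """Verify the admissible-overhead spec for sizes 1..up_to."""
            return all(check_overhead(n) <= admissible(n)
                       for n in range(1, up_to + 1))

        # Fails: for small n the linear overhead exceeds 10% of n^2.
        print(conforms(1000))

    Because the comparison is between size-parameterized cost functions rather than profiled runs, the verdict covers every input of a given size, not just the inputs that happened to be exercised.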

    Phenomenology Tools on Cloud Infrastructures using OpenStack

    We present a new environment for computations in particle physics phenomenology that employs recent developments in cloud computing. In this environment, users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily and in an automated way. We analyze the performance of this environment, based on "virtual" machines, versus the utilization of "real" physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.
    Comment: 25 pages, 12 figures; information on memory usage included, as well as minor modifications. Version to appear in EPJ
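    The kind of virtual-versus-physical comparison described can be reproduced with a simple timing harness. The sketch below is not from the paper; the workload is a placeholder for an actual phenomenology code run.

        import time

        def workload():
            # Placeholder for a representative phenomenology computation
            # (e.g. a Monte Carlo integration).
            s = 0.0
            for i in range(1, 5_000_000):
                s += 1.0 / (i * i)
            return s

        def best_wall_time(runs=3):
            """Best-of-N wall-clock time, to damp noise from other processes."""
            best = float("inf")
            for _ in range(runs):
                t0 = time.perf_counter()
                workload()
                best = min(best, time.perf_counter() - t0)
            return best

        # Run once on physical hardware and once inside the VM; the ratio
        # of the two times estimates the virtualization overhead.
        print(f"wall time: {best_wall_time():.3f} s")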

    Non-linear response of single-molecule magnets: field-tuned quantum-to-classical crossovers

    Quantum nanomagnets can show a field dependence of the relaxation time that is very different from that of their classical counterparts, due to resonant tunneling via excited states (near the top of the anisotropy barrier). The relaxation time then shows minima at the resonant fields H_n = nD, at which the levels on the two sides of the barrier become degenerate (D is the anisotropy constant). We showed that in Mn12, near zero field, this yields a contribution to the nonlinear susceptibility that makes it qualitatively different from the classical curves [Phys. Rev. B 72, 224433 (2005)]. Here we extend the experimental study to finite dc fields, showing how the bias can trigger the system to display those quantum nonlinear responses near the resonant fields, while recovering a classical-like behaviour for fields between them. The analysis of the experiments is done with heuristic expressions derived from simple balance equations and with calculations based on a Pauli-type quantum master equation.
    Comment: 4 pages, 3 figures. Submitted to Phys. Rev. B, brief report
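    For illustration, the resonance condition H_n = nD reduces to a two-line computation. The anisotropy constant below, expressed in field units, is a placeholder rather than a value taken from the paper (for Mn12 the resonance spacing is of order 0.45 T).

        # Resonant fields H_n = n*D at which spin levels on opposite sides
        # of the anisotropy barrier align. D_FIELD is illustrative, in tesla.
        D_FIELD = 0.45

        def resonant_fields(n_max):
            """H_n = n*D for n = 0..n_max, in tesla."""
            return [round(n * D_FIELD, 2) for n in range(n_max + 1)]

        # Near these fields the relaxation time dips (quantum response);
        # between them the behaviour looks classical.
        print(resonant_fields(4))  # [0.0, 0.45, 0.9, 1.35, 1.8]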

    Implications of a Sub-Threshold Resonance for Stellar Beryllium Depletion

    Abundance measurements of the light elements lithium, beryllium, and boron are playing an increasingly important role in the study of stellar physics. Because these elements are easily destroyed in stars at temperatures of 2-4 million K, their abundances in the surface convective zone are diagnostics of the star's internal workings. Standard stellar models cannot explain the depletion patterns observed in low-mass stars, and so are not accounting for all the relevant physical processes. These processes have important implications for stellar evolution and for primordial lithium production in big bang nucleosynthesis. Because beryllium is destroyed at slightly higher temperatures than lithium, observations of both light elements can differentiate between the various proposed depletion mechanisms. Unfortunately, the reaction rate for the main destruction channel, 9Be(p,alpha)6Li, is uncertain: a level in the compound nucleus 10B lies only 25.7 keV below the reaction's energetic threshold. The angular momentum and parity of this level are not well known; current estimates indicate that the resonance entrance channel is either s- or d-wave. We show that an s-wave resonance can easily increase the reaction rate by an order of magnitude at temperatures of approximately 4 million K. Observations of sub-solar-mass stars can constrain the strength of the resonance, as can experimental measurements at lab energies below 30 keV.
    Comment: 9 pages, 1 ps figure, uses AASTeX macros and epsfig.sty. Reference added, typos corrected. To appear in ApJ, 10 March 199
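    To see why a level just below threshold, and lab data below 30 keV, are what matter, one can locate the Gamow window for p + 9Be at these temperatures with the standard textbook formulas; this is generic nuclear-astrophysics arithmetic, not the paper's calculation.

        # Gamow peak E0 and window width for a charged-particle reaction.
        # T6 is the temperature in units of 10^6 K.

        def gamow_peak_keV(Z1, Z2, mu_amu, T6):
            """E0 = 1.22*(Z1^2 * Z2^2 * mu * T6^2)^(1/3) keV."""
            return 1.22 * (Z1**2 * Z2**2 * mu_amu * T6**2) ** (1.0 / 3.0)

        def gamow_width_keV(E0, T6):
            """Full 1/e width of the Gamow window: Delta = 4*sqrt(E0*kT/3)."""
            kT_keV = 8.617e-8 * T6 * 1e6   # Boltzmann constant (keV/K) times T
            return 4.0 * (E0 * kT_keV / 3.0) ** 0.5

        mu = 1.0 * 9.0 / (1.0 + 9.0)          # p + 9Be reduced mass (amu)
        E0 = gamow_peak_keV(1, 4, mu, 4.0)    # ~7.5 keV at 4 million K
        print(E0, gamow_width_keV(E0, 4.0))   # window sits well below 30 keV

    At 4 million K the window is centred near 7-8 keV, so the stellar rate samples an energy range dominated by the tail of the sub-threshold 10B level and not directly covered by existing measurements above 30 keV.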

    An improved discrete bat algorithm for symmetric and asymmetric traveling salesman problems

    The bat algorithm is a population-based metaheuristic proposed in 2010, inspired by the echolocation (bio-sonar) behaviour of microbats. Since its first implementation, the bat algorithm has been used in a wide range of fields. In this paper, we present a discrete version of the bat algorithm to solve the well-known symmetric and asymmetric traveling salesman problems. In addition, we propose an improvement to the basic structure of the classic bat algorithm. To show that our proposal is a promising approximation method, we have compared its performance on 37 instances with the results obtained by five different techniques: evolutionary simulated annealing, a genetic algorithm, an island-based distributed genetic algorithm, a discrete firefly algorithm, and an imperialist competitive algorithm. In order to obtain fair and rigorous comparisons, we have conducted three different statistical tests throughout the paper: the Student's t-test, Holm's test, and the Friedman test. We have also compared the convergence behaviour of our proposal with that of the evolutionary simulated annealing and the discrete firefly algorithm. The experimentation carried out in this study shows that the presented improved bat algorithm significantly outperforms all the other alternatives in most cases.
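    As an illustration of how the bat metaphor maps onto a discrete tour space, here is a minimal Python sketch of one bat's move, not the paper's exact operators: the loudness A gates acceptance of a new tour, the pulse rate r gates local search around the swarm's best tour, and 2-opt segment reversal plays the role of flight.

        import random

        def tour_length(tour, dist):
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                       for i in range(len(tour)))

        def two_opt_move(tour):
            """Reverse a random segment: the classic 2-opt neighbourhood."""
            i, j = sorted(random.sample(range(len(tour)), 2))
            return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

        def bat_step(tour, best, dist, loudness, pulse_rate,
                     alpha=0.97, gamma=0.1):
            # With probability (1 - r) search near the best-known tour,
            # otherwise perturb the bat's own tour.
            source = best if random.random() > pulse_rate else tour
            candidate = two_opt_move(list(source))
            # Accept an improving move only while the bat is still "loud";
            # on acceptance A decreases (more exploitation) and r rises.
            if (random.random() < loudness and
                    tour_length(candidate, dist) < tour_length(tour, dist)):
                tour = candidate
                loudness *= alpha
                pulse_rate += gamma * (1 - pulse_rate)
            return tour, loudness, pulse_rate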