
    Nuclear timescale mass transfer in models of supergiant and ultra-luminous X-ray binaries

    We investigate how the proximity of supergiant donor stars to the Eddington limit, and their advanced evolutionary stage, may influence the evolution of massive and ultra-luminous X-ray binaries with supergiant donor stars (SGXBs and ULXs). We construct models of massive stars with different internal hydrogen/helium gradients and different hydrogen-rich envelope masses, and expose them to slow mass loss to probe the response of the stellar radius. In addition, we compute the corresponding Roche-lobe overflow mass-transfer evolution with our detailed binary stellar evolution code, approximating the compact objects as point masses. We find that a hydrogen/helium gradient in the layers beneath the surface, as is likely present in the well-studied donor stars of observed SGXBs, can enable nuclear timescale mass transfer in SGXBs with a black hole (BH) or a neutron star (NS) accretor, even for mass ratios in excess of 20. In our binary evolution models, the donor stars rapidly decrease their thermal equilibrium radius and can therefore cope with the inevitably strong orbital contraction imposed by the high mass ratio. Our results open a new perspective for understanding the large number of Galactic SGXBs and their almost complete absence in the SMC. They may also offer a way to obtain more ULX systems, to find nuclear timescale mass transfer in ULX systems even with neutron star accretors, and to shed new light on the origin of the strong B-field in these neutron stars.
    Comment: 23 pages, 21 figures; we are thankful for any comments on this draft
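    The key driver here is the response of the orbit to mass transfer at a high mass ratio: for fully conservative transfer the separation scales as a ∝ 1/(M_donor · M_accretor)^2, so a donor some twenty times heavier than its accretor shrinks both the orbit and its Roche lobe rapidly as it loses mass. The Python sketch below combines this scaling with the Eggleton (1983) Roche-lobe approximation; it is a minimal illustration with invented masses and separation, not the authors' detailed binary evolution code.

```python
import numpy as np

def roche_lobe_radius(q, a):
    """Eggleton (1983) approximation to the donor's Roche-lobe radius.
    q = M_donor / M_accretor, a = orbital separation (same unit as the result)."""
    q13 = q ** (1.0 / 3.0)
    return a * 0.49 * q13 ** 2 / (0.6 * q13 ** 2 + np.log(1.0 + q13))

def separation_after_transfer(a_i, m_donor, m_acc, dm):
    """Separation after moving dm from donor to accretor, assuming fully
    conservative transfer (total mass and orbital angular momentum conserved):
    a_f / a_i = (M_d * M_a / (M_d' * M_a'))**2."""
    return a_i * (m_donor * m_acc / ((m_donor - dm) * (m_acc + dm))) ** 2

# Invented example values: a 20 Msun supergiant donor and a 1.4 Msun neutron
# star (mass ratio q ~ 14) in a 1000 Rsun orbit.
m_d, m_a, a0 = 20.0, 1.4, 1000.0
a1 = separation_after_transfer(a0, m_d, m_a, dm=1.0)  # transfer 1 Msun

print(f"Roche lobe before: {roche_lobe_radius(m_d / m_a, a0):7.1f} Rsun")
print(f"Separation after 1 Msun transferred: {a1:7.1f} Rsun")
print(f"Roche lobe after:  {roche_lobe_radius((m_d - 1.0) / (m_a + 1.0), a1):7.1f} Rsun")
```

    In this invented example, transferring one solar mass shrinks the separation from 1000 to roughly 380 Rsun and the Roche lobe from about 600 to about 210 Rsun, which illustrates why the donor must contract towards a smaller thermal-equilibrium radius for mass transfer to proceed on a nuclear timescale.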

    The MANTA: An RPV design to investigate forces and moments on a lifting surface

    The overall goal was to investigate and exploit the advantages of using remotely piloted vehicles (RPVs) for in-flight data collection at low Reynolds numbers. The data to be collected are actual flight loads for any type of rectangular or tapered airfoil section, including vertical and horizontal stabilizers. The data will be gathered on a test specimen using a force-balance system located forward of the aircraft to ensure undisturbed airflow over the test section. The collected lift, drag, and moment data for the test specimen are to be radioed to a ground receiver, thus providing real-time data acquisition. The design of the mission profile and the selection of the instrumentation to satisfy the aerodynamic requirements are studied and tested. A half-size demonstrator was constructed and flown to test the flightworthiness of the system.
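    To compare different rectangular or tapered airfoil sections, the telemetered lift, drag, and moment readings have to be reduced to non-dimensional coefficients at the measured flight speed. The sketch below shows that standard reduction in Python; the numbers are illustrative and do not come from the MANTA flights.

```python
def dynamic_pressure(rho, v):
    """Dynamic pressure q = 0.5 * rho * V^2 (SI units: kg/m^3, m/s -> Pa)."""
    return 0.5 * rho * v ** 2

def aero_coefficients(lift, drag, moment, rho, v, area, chord):
    """Reduce measured lift, drag (N) and pitching moment (N*m) to CL, CD, Cm."""
    q = dynamic_pressure(rho, v)
    return lift / (q * area), drag / (q * area), moment / (q * area * chord)

# Illustrative numbers only, not MANTA flight data: sea-level air, 15 m/s,
# and a 0.10 m^2 test specimen with a 0.20 m chord.
cl, cd, cm = aero_coefficients(lift=6.0, drag=0.9, moment=-0.12,
                               rho=1.225, v=15.0, area=0.10, chord=0.20)
print(f"CL = {cl:.3f}, CD = {cd:.3f}, Cm = {cm:.3f}")
```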

    Dynamic Virtualized Deployment of Particle Physics Environments on a High Performance Computing Cluster

    The NEMO High Performance Computing Cluster at the University of Freiburg has been made available to researchers of the ATLAS and CMS experiments. Users access the cluster from external machines connected to the Worldwide LHC Computing Grid (WLCG). This paper describes how the full software environment of the WLCG is provided in a virtual machine image. The interplay between the schedulers for NEMO and for the external clusters is coordinated through the ROCED service. A cloud computing infrastructure is deployed at NEMO to orchestrate the simultaneous usage by bare-metal and virtualized jobs. Through this setup, resources are provided to users in a transparent, automated, and on-demand way. The performance of the virtualized environment has been evaluated for particle physics applications.
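    As an illustration of the on-demand provisioning step, the sketch below boots a worker-node VM through the openstacksdk Python client. The cloud, image, flavor, and network names are hypothetical stand-ins; in the actual setup the requests are issued by the ROCED service in response to demand reported by the external schedulers, not by a user-side script.

```python
import openstack

# Connect using credentials from a clouds.yaml entry; the cloud name "nemo" is hypothetical.
conn = openstack.connect(cloud="nemo")

# Hypothetical names for the WLCG worker-node image, flavor, and network.
image = conn.compute.find_image("wlcg-worker-sl6")
flavor = conn.compute.find_flavor("m1.xlarge")
network = conn.network.find_network("hep-private")

# Boot a virtualized worker node and wait until it is active.
server = conn.compute.create_server(
    name="vm-worker-001",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # "ACTIVE" once the node can start accepting jobs
```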

    Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software, and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries, and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fair-share policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable to other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance, and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
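    One way to picture the coupling of job lifetime and VM lifetime is a batch job that boots its VM at start-up, stays alive while the VM runs, and deletes it when the job ends or is terminated by the scheduler. The Python sketch below illustrates that idea with the openstacksdk client; the cloud, image, and flavor names are hypothetical, and the production system implements this coupling in the thin integration layer between Moab and OpenStack rather than in a per-job script.

```python
import signal
import sys
import time

import openstack

# Hypothetical cloud/image/flavor names; illustrative only.
conn = openstack.connect(cloud="nemo")
server = conn.compute.create_server(
    name="hep-vm-job",
    image_id=conn.compute.find_image("hep-worker-image").id,
    flavor_id=conn.compute.find_flavor("m1.xlarge").id,
)
server = conn.compute.wait_for_server(server)

def tear_down():
    """Delete the VM so the bare-metal node returns to the shared pool."""
    conn.compute.delete_server(server, ignore_missing=True)

# The batch system signals the job at the end of its walltime; exit cleanly then.
signal.signal(signal.SIGTERM, lambda *_: sys.exit(0))

try:
    # The job stays alive exactly as long as its VM is running.
    while conn.compute.get_server(server.id).status == "ACTIVE":
        time.sleep(60)
finally:
    tear_down()
```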

    Common circuit defect of excitatory-inhibitory balance in mouse models of autism

    One unifying explanation for the complexity of Autism Spectrum Disorders (ASD) may lie in the disruption of excitatory/inhibitory (E/I) circuit balance during critical periods of development. We examined whether Parvalbumin (PV)-positive inhibitory neurons, which normally drive experience-dependent circuit refinement (Hensch Nat Rev Neurosci 6:877–888, 1), are disrupted across heterogeneous ASD mouse models. We performed a meta-analysis of PV expression in previously published ASD mouse models and analyzed two additional models, reflecting an embryonic chemical insult (prenatal valproate, VPA) or a single-gene mutation identified in human patients (Neuroligin-3, NL-3 R451C). PV-cells were reduced in the neocortex across multiple ASD mouse models. In striking contrast to controls, both the VPA and NL-3 mouse models exhibited an asymmetric PV-cell reduction across hemispheres in parietal and occipital cortices (but not the underlying area CA1). ASD mouse models may therefore share a PV-circuit disruption, providing new insight into circuit development and the potential prevention or treatment of autism.