702 research outputs found

    Key Technologies and Sustainability

    Get PDF
    Introduction to the focus topic

    Usage Regimes in Transition

    Get PDF
    "Teilen statt Kaufen", "Reparieren statt Wegwerfen" – diese und ähnliche Slogans spielen in der Debatte um nachhaltige Formen des Konsums eine immer prominentere Rolle. Sie illustrieren die Vorstellung einer Wirtschaftsweise, in der durch intelligente und funktionsorientierte Gebrauchsmuster natürliche Ressour­cen effektiver eingesetzt werden. Doch wie ist es um die praktischen Umset­zungsmöglichkeiten solcher Strategien bestellt? Welche ökologischen Entlastungs­potenziale sind realistisch und welche Entwicklungsperspektiven tun sich auf

    Learning Climate Protection

    Get PDF
    Introduction to the focus topic

    The shaded side of the UHC cube: a systematic review of human resources for health management and administration in social health protection schemes

    Get PDF
    Managers and administrators in charge of social protection and health financing, service purchasing and provision play a crucial role in harnessing the potential advantages of prudent organization, management and purchasing of health services, thereby supporting the attainment of Universal Health Coverage. However, very little is known about the quantity and quality of staff needed, in particular in institutions that manage mandatory health insurance schemes and purchase services. As many health care systems in low- and middle-income countries move towards independent institutions (both purchasers and providers), there is a clear need for good data on staff and administrative costs in different social health protection schemes as a basis for investing in the development of a cadre of health managers and administrators for such schemes. We report on a systematic literature review of human resources in health management and administration in social protection schemes and suggest ways of moving research, practical application and the policy debate forward.

    Field comparison of dry deposition samplers for collection of atmospheric mineral dust: results from single-particle characterization

    Get PDF
    Frequently, passive dry deposition collectors are used to sample atmospheric dust deposition. However, there exists a multitude of different instruments with different, usually not well-characterized sampling efficiencies. As a result, the acquired data might be considerably biased with respect to their size representativity and, as a consequence, also composition. In this study, individual particle analysis by automated scanning electron microscopy coupled with energy-dispersive X-ray analysis was used to characterize different, commonly used passive samplers with respect to their size-resolved deposition rate and concentration. This study focuses on the microphysical properties, i.e., the aerosol concentration and deposition rates as well as the particle size distributions. In addition, computational fluid dynamics modeling was used in parallel to derive deposition velocities from a theoretical point of view. Deposition rates calculated from scanning electron microscopy (SEM) measurements disagree among the different passive samplers. The Modified Wilson and Cooke (MWAC) and Big Spring Number Eight (BSNE) samplers – both horizontal flux samplers – collect considerably more material than the flat-plate and Sigma-2 samplers, which are vertical flux samplers. The collection efficiency of MWAC increases relative to Sigma-2 for large particles with increasing wind speed, while such an increase is less pronounced for BSNE. A positive correlation is found between deposition rate and PM10 concentration measurements by an optical particle spectrometer. The results indicate that BSNE and Sigma-2 can be good options for PM10 measurement, whereas MWAC and flat-plate samplers are not a suitable choice. A negative correlation was observed between dust deposition rate and wind speed. Deposition velocities calculated from different classical deposition models do not agree with deposition velocities estimated using computational fluid dynamics (CFD) simulations. The deposition velocity estimated from CFD was often higher than the values derived from classical deposition velocity models. Moreover, the modeled deposition velocity ratios between different samplers do not agree with the observations. This research has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) (grant nos. 264907654, 264912134 and 416816480 (KA 2280)).
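
    The comparison above rests on the standard relation between deposition rate (flux) and airborne concentration, v_d = F / C. The minimal Python sketch below shows only that textbook calculation; the function name and the numbers are illustrative and are not taken from the study.

```python
def deposition_velocity(flux_per_m2_s: float, concentration_per_m3: float) -> float:
    """Deposition velocity v_d = F / C, in m/s when F is per m^2 s and C is per m^3."""
    if concentration_per_m3 <= 0:
        raise ValueError("airborne concentration must be positive")
    return flux_per_m2_s / concentration_per_m3

# Illustrative numbers only (not from the study): a flux of 2.0 particles m^-2 s^-1
# at a concentration of 50 particles m^-3 corresponds to v_d = 0.04 m/s.
print(deposition_velocity(2.0, 50.0))
```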

    Dynamic Virtualized Deployment of Particle Physics Environments on a High Performance Computing Cluster

    Full text link
    The NEMO High Performance Computing Cluster at the University of Freiburg has been made available to researchers of the ATLAS and CMS experiments. Users access the cluster from external machines connected to the Worldwide LHC Computing Grid (WLCG). This paper describes how the full software environment of the WLCG is provided in a virtual machine image. The interplay between the schedulers for NEMO and for the external clusters is coordinated through the ROCED service. A cloud computing infrastructure is deployed at NEMO to orchestrate the simultaneous usage by bare-metal and virtualized jobs. Through this setup, resources are provided to users in a transparent, automated, and on-demand way. The performance of the virtualized environment has been evaluated for particle physics applications.
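
    The on-demand provisioning coordinated by ROCED amounts to scaling the number of virtual worker nodes with the number of queued grid jobs. The Python sketch below illustrates that control loop under stated assumptions; all function names and values are hypothetical placeholders and do not reflect the actual ROCED interfaces.

```python
# Minimal sketch of demand-driven VM provisioning (hypothetical placeholders,
# not the actual ROCED API).
import time

def pending_grid_jobs() -> int:
    """Placeholder: number of jobs queued on the external (WLCG-side) batch system."""
    return 10   # toy value for demonstration

def running_vms() -> int:
    """Placeholder: number of virtual worker nodes currently booted on NEMO."""
    return 1    # toy value for demonstration

def boot_vm() -> None:
    """Placeholder: request one additional virtual worker node."""
    print("booting one virtual worker node")

def shutdown_idle_vm() -> None:
    """Placeholder: drain and terminate one idle virtual worker node."""
    print("shutting down one idle virtual worker node")

def provisioning_step(jobs_per_vm: int = 4) -> None:
    """One iteration of the control loop: match VM supply to queued demand."""
    demand = -(-pending_grid_jobs() // jobs_per_vm)   # ceiling division
    supply = running_vms()
    if demand > supply:
        boot_vm()            # scale out while jobs are waiting
    elif demand < supply:
        shutdown_idle_vm()   # scale in when the queue drains

if __name__ == "__main__":
    for _ in range(3):       # a production service would run this continuously
        provisioning_step()
        time.sleep(1)
```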

    Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Get PDF
    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, no static partitioning of the cluster into a physical and a virtualized segment is required in this hybrid setup. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable for other cluster operators. This contribution reports on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance, and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
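
    A central point of this setup is that the lifetime of a virtualized worker node is tied to the lifetime of the Moab batch job that requested it. The sketch below illustrates that coupling as a hypothetical job payload that boots a VM through the OpenStack command-line client and removes it again when the job ends; the image and flavor names, the wrapper itself and the fixed walltime are assumptions for illustration, not the actual integration layer described in the paper.

```python
# Hypothetical Moab job payload coupling a VM's lifetime to the batch job's lifetime.
import atexit
import subprocess
import time

VM_NAME = "hep-worker-example"   # hypothetical server name
WALLTIME_SECONDS = 4 * 3600      # hypothetical batch walltime

def boot_worker() -> None:
    # Boot a virtual worker node from a HEP machine image (names are placeholders).
    subprocess.run(
        ["openstack", "server", "create",
         "--image", "hep-worker-image", "--flavor", "hep.large",
         "--wait", VM_NAME],
        check=True,
    )

def delete_worker() -> None:
    # Tear the VM down again; best effort, so a failed delete does not raise.
    subprocess.run(["openstack", "server", "delete", VM_NAME], check=False)

if __name__ == "__main__":
    # Couple the VM lifetime to this batch job: when the wrapper exits normally,
    # the VM is removed; a production wrapper would also handle scheduler signals.
    atexit.register(delete_worker)
    boot_worker()
    # Block for the duration of the allocation; a real integration layer would
    # instead monitor the VM and the batch queue.
    time.sleep(WALLTIME_SECONDS)
```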

    Protein disulfide isomerase acts as an injury response signal that enhances fibrin generation via tissue factor activation

    Get PDF
    The activation of initiator protein tissue factor (TF) is likely to be a crucial step in the blood coagulation process, which leads to fibrin formation. The stimuli responsible for inducing TF activation are largely undefined. Here we show that the oxidoreductase protein disulfide isomerase (PDI) directly promotes TF-dependent fibrin production during thrombus formation in vivo. After endothelial denudation of mouse carotid arteries, PDI was released at the injury site from adherent platelets and disrupted vessel wall cells. Inhibition of PDI decreased TF-triggered fibrin formation in different in vivo murine models of thrombus formation, as determined by intravital fluorescence microscopy. PDI infusion increased — and, under conditions of decreased platelet adhesion, PDI inhibition reduced — fibrin generation at the injury site, indicating that PDI can directly initiate blood coagulation. In vitro, human platelet–secreted PDI contributed to the activation of cryptic TF on microvesicles (microparticles). Mass spectrometry analyses indicated that part of the extracellular cysteine 209 of TF was constitutively glutathionylated. Mixed disulfide formation contributed to maintaining TF in a state of low functionality. We propose that reduced PDI activates TF by isomerization of a mixed disulfide and a free thiol to an intramolecular disulfide. Our findings suggest that disulfide isomerases can act as injury response signals that trigger the activation of fibrin formation following vessel injury