
    Enriching product ads with Metadata from HTML annotations


    Micro-CernVM: Slashing the Cost of Building and Deploying Virtual Machines

    The traditional virtual machine building and deployment process is centered around the virtual machine hard disk image. The packages comprising the VM operating system are carefully selected, hard disk images are built for a variety of different hypervisors, and images have to be distributed and decompressed in order to instantiate a virtual machine. Within the HEP community, the CernVM File System has been established in order to decouple the distribution of the experiment software from the building and distribution of the VM hard disk images. We show how to get rid of such pre-built hard disk images altogether. Due to the high requirements on POSIX compliance imposed by HEP application software, CernVM-FS can also be used to host and boot a Linux operating system. This allows the use of a tiny bootable CD image that comprises only a Linux kernel, while the rest of the operating system is provided on demand by CernVM-FS. This approach speeds up the initial instantiation time and reduces virtual machine image sizes by an order of magnitude. Furthermore, security updates can be distributed instantaneously through CernVM-FS. By leveraging the fact that CernVM-FS is a versioning file system, a historic analysis environment can easily be re-spawned by selecting the corresponding CernVM-FS file system snapshot.
    Comment: Conference paper at the 2013 Computing in High Energy Physics (CHEP) Conference, Amsterdam
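
    The snapshot-based re-spawning described above relies on CernVM-FS being a versioning file system: a client can mount the repository as it looked at a given tag. The following is a minimal, hypothetical Python sketch of that idea, not the authors' tooling; it assumes the CernVM-FS client honors the CVMFS_REPOSITORY_TAG parameter in a per-repository configuration file, and the repository name and tag are illustrative placeholders.

        # Hypothetical sketch: pin a CernVM-FS repository to a named snapshot
        # tag by writing a per-repository client configuration file.
        # Assumption: the client reads CVMFS_REPOSITORY_TAG from config.d.
        from pathlib import Path

        def pin_snapshot(repository: str, tag: str,
                         config_dir: str = "/etc/cvmfs/config.d") -> Path:
            """Write a client config that mounts a fixed snapshot tag."""
            config = Path(config_dir) / f"{repository}.local"
            config.write_text(
                f"# Pin {repository} to snapshot tag '{tag}'\n"
                f"CVMFS_REPOSITORY_TAG={tag}\n"
            )
            return config

        # Re-spawn the analysis environment as of an illustrative release tag:
        pin_snapshot("cernvm-prod.cern.ch", "analysis-2013-04")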

    CernVM Online and Cloud Gateway: a uniform interface for CernVM contextualization and deployment

    In a virtualized environment, contextualization is the process of configuring a VM instance for the needs of various deployment use cases. Contextualization in CernVM can be done by passing a handwritten context to the user-data field of cloud APIs, when running CernVM on the cloud, or by using the CernVM web interface when running the VM locally. CernVM Online is a publicly accessible web interface that unifies these two procedures. A user is able to define, store and share CernVM contexts using CernVM Online and then apply them either in a cloud by using CernVM Cloud Gateway or on a local VM with the single-step pairing mechanism. CernVM Cloud Gateway is a distributed system that provides a single interface to multiple, different clouds (by location or type, private or public). Cloud Gateway has so far been integrated with the OpenNebula, CloudStack, and EC2 tools interfaces. A user with access to a number of clouds can run CernVM cloud agents that communicate with these clouds through their interfaces, and then use one single interface to deploy and scale CernVM clusters. CernVM clusters are defined in CernVM Online and consist of a set of CernVM instances that are contextualized and can communicate with each other.
    Comment: Conference paper at the 2013 Computing in High Energy Physics (CHEP) Conference, Amsterdam
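
    On EC2-style clouds, "passing a handwritten context to the user-data field" amounts to supplying the contextualization payload at instance launch. Below is a minimal sketch using boto3 against an EC2-compatible endpoint; the image id, instance type, and the context body itself are placeholders for illustration, not official CernVM values.

        # Sketch: launch a VM with a contextualization payload in the EC2
        # user-data field. The image id and context text are placeholders.
        import boto3

        context = (
            "[amiconfig]\n"
            "plugins = cernvm\n"          # illustrative, not an official template
            "[cernvm]\n"
            "organisations = my-experiment\n"
        )

        ec2 = boto3.client("ec2")
        ec2.run_instances(
            ImageId="ami-00000000",       # placeholder CernVM image id
            InstanceType="m3.medium",
            MinCount=1,
            MaxCount=1,
            UserData=context,             # boto3 base64-encodes this for the API
        )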

    Electron beam based space charge measurement of intense ion beams


    Simulations of the High-Energy Beam-Transport (HEBT) section at FRANZ

    The neutron source FRANZ (Frankfurter Neutronenquelle am Stern-Gerlach-Zentrum), currently under construction, will provide the highest neutron intensity in the energy region relevant to nuclear astrophysics. The TraceWin code was used to design the High-Energy Beam-Transport section with regard to the experimental requirements at different target positions.
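
    To first order, design codes such as TraceWin track the beam through the transport section with linear transfer matrices. The sketch below illustrates that generic idea (standard linear optics, not TraceWin's actual model or input format) for a field-free drift acting on the transverse phase-space coordinates (x, x').

        # Generic first-order beam transport: a drift of length L maps the
        # phase-space vector (x, x') by a 2x2 transfer matrix. Illustrative
        # only; not TraceWin's physics model or input format.
        import numpy as np

        def drift(length_m: float) -> np.ndarray:
            """Transfer matrix of a field-free drift space."""
            return np.array([[1.0, length_m],
                             [0.0, 1.0]])

        # A particle 1 mm off-axis with 2 mrad divergence, after a 0.5 m drift:
        x0 = np.array([1.0e-3, 2.0e-3])    # [x in m, x' in rad]
        x1 = drift(0.5) @ x0
        print(x1)                          # [0.002 0.002]: offset grows by L*x'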

    The s Process: Nuclear Physics, Stellar Models, Observations

    Nucleosynthesis in the s process takes place in the He-burning layers of low-mass AGB stars and during the He- and C-burning phases of massive stars. The s process contributes about half of the element abundances between Cu and Bi in solar system material. Depending on stellar mass and metallicity, the resulting s-abundance patterns exhibit characteristic features, which provide comprehensive information for our understanding of the stellar life cycle and the chemical evolution of galaxies. The rapidly growing body of detailed abundance observations, in particular for AGB and post-AGB stars, for objects in binary systems, and for the very faint metal-poor population, represents exciting challenges and constraints for stellar model calculations. Based on updated and improved nuclear physics data for the s-process reaction network, current models aim at an ab initio solution for the stellar physics related to convection and mixing processes. Progress in the intimately related areas of observations, nuclear and atomic physics, and stellar modeling is reviewed, and the corresponding interplay is illustrated by the general abundance patterns of the elements beyond iron and by the effect of sensitive branching points along the s-process path. The strong variation of s-process efficiency with metallicity also bears interesting consequences for Galactic chemical evolution.
    Comment: 53 pages, 20 figures, 3 tables; Reviews of Modern Physics, accepted
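
    The sensitive branching points mentioned above can be made quantitative: at an unstable isotope on the s-process path, beta decay competes with further neutron capture, so a measured branching constrains the stellar neutron density. In standard textbook notation (not taken verbatim from the review):

        % Branching toward neutron capture at an unstable s-process isotope,
        % where \lambda_n is the capture rate and \lambda_\beta the decay rate.
        \[
          f_n = \frac{\lambda_n}{\lambda_n + \lambda_\beta},
          \qquad
          \lambda_n = n_n \, \langle \sigma v \rangle
                    \approx n_n \, \langle \sigma \rangle \, v_T ,
        \]
        % with n_n the neutron density, \langle\sigma\rangle the Maxwellian-
        % averaged capture cross section, and v_T the thermal velocity.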

    EXPERIMENTS WITH A FAST CHOPPER SYSTEM FOR INTENSE ION BEAMS

    Chopper systems are used to pulse charged particle beams. In most cases, electric deflection systems are used to generate beam pulses of defined lengths and appropriate repetition rates. At high beam intensities, the field distribution of the chopper system needs to be adapted precisely to the beam dynamics in order to avoid aberrations. For the Frankfurt Neutron Source FRANZ…
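
    As a small arithmetic illustration of "pulse lengths and repetition rates" (the numbers below are placeholders, not FRANZ design values): the duty cycle and average current of a chopped beam follow directly from those two quantities.

        # Illustrative arithmetic for a chopped beam; the numbers are
        # placeholders, not FRANZ design parameters.
        def chopped_beam(peak_current_a: float, pulse_s: float, rep_rate_hz: float):
            duty = pulse_s * rep_rate_hz        # fraction of time the beam is on
            return duty, peak_current_a * duty  # duty cycle, average current (A)

        duty, i_avg = chopped_beam(peak_current_a=0.2,
                                   pulse_s=100e-9, rep_rate_hz=250e3)
        print(f"duty = {duty:.3f}, average current = {i_avg*1e3:.1f} mA")
        # -> duty = 0.025, average current = 5.0 mA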

    Data Integration for Open Data on the Web

    In this lecture we will discuss and introduce the challenges of integrating openly available Web data and how to solve them. Firstly, while we will address this topic from the viewpoint of Semantic Web research, not all data is readily available as RDF or Linked Data, so we will give an introduction to the different data formats prevalent on the Web, namely standard formats for publishing and exchanging tabular, tree-shaped, and graph data. Secondly, not all Open Data is really completely open, so we will discuss and address issues around licences and terms of usage associated with Open Data, as well as documentation of data provenance. Thirdly, we will discuss (meta-)data quality issues associated with Open Data on the Web and how Semantic Web techniques and vocabularies can be used to describe and remedy them. Fourthly, we will address issues of searchability and integration of Open Data and discuss to what extent semantic search can help to overcome these. We close by briefly summarizing further issues not covered explicitly herein, such as multi-linguality, temporal aspects (archiving, evolution, temporal querying), and how or whether OWL and RDFS reasoning on top of integrated open data could be helpful.
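
    As a small illustration of the first challenge, bridging tabular and graph formats: the sketch below lifts CSV rows into RDF triples with rdflib. The vocabulary namespace and column names are invented for the example.

        # Sketch: lifting tabular Open Data (CSV) into graph data (RDF).
        # The example.org namespace and the columns are illustrative only.
        import csv, io
        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/dataset/")   # hypothetical vocabulary
        rows = io.StringIO("id,city,population\n"
                           "1,Vienna,1897000\n"
                           "2,Graz,291000\n")

        g = Graph()
        for row in csv.DictReader(rows):
            subject = EX[f"city/{row['id']}"]
            g.add((subject, EX.name, Literal(row["city"])))
            g.add((subject, EX.population, Literal(int(row["population"]))))

        print(g.serialize(format="turtle"))             # Turtle serialization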

    Prepared to react? Assessing the functional capacity of the primary health care system in rural Orissa, India to respond to the devastating flood of September 2008

    Background: Early detection of an impending flood and the availability of countermeasures to deal with it can significantly reduce its health impacts. In developing countries like India, public primary health care facilities are the frontline organizations that deal with disasters, particularly in rural settings. Developing robust response systems therefore requires evaluating the preparedness capacities of existing systems. Objective: The objective of the study is to assess the functional capacity of the primary health care system in Jagatsinghpur district of rural Orissa, India, to respond to the devastating flood of September 2008. Methods: An onsite survey was conducted in all 29 primary and secondary facilities in five rural blocks (administrative units) of Jagatsinghpur district in Orissa state. A pre-tested structured questionnaire was administered face to face in the facilities. The data were entered, processed and analyzed using STATA® 10. Results: Data from our primary survey clearly show that the health care facilities are ill prepared to handle floods despite facing them annually. Basic utilities like electricity backup and essential medical supplies are lacking during floods. A lack of human resources, along with missing standard operating procedures, the absence of pre-identified communication and incident command systems, ineffective leadership, and weak financial structures, are the main factors hindering an adequate response to the floods. Conclusion: The 2008 flood challenged the primary curative and preventive health care services in Jagatsinghpur. Simple steps, such as developing facility-specific preparedness plans that detail standard operating procedures during floods and identify clear lines of command, will go a long way in strengthening the response to future floods. Performance critiques provided by grassroots workers, such as this one, should be used for institutional learning and effective preparedness planning. Additionally, each facility should maintain contingency funds for emergency response, along with local vendor agreements to ensure stock supplies during floods. The facilities should ensure that the baseline public health standards for health care delivery identified by the Government are met in non-flood periods in order to improve the response during floods. Building strong public primary health care systems is a development challenge, and the recovery phases of disasters should be seen as an opportunity to expand and improve services and facilities.