
    Can Clouds Replace Grids? A Real-Life Exabyte-Scale Test-Case

    The world's largest scientific machine - comprising dual 27 km circular proton accelerators cooled to 1.9 K and located some 100 m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation, the computing service has been declared "open" and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common, and the operational and coordination effort needed to keep the overall service running is probably not sustainable at this level. Recently "Cloud Computing" - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative, but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, either in terms of funding or in the wider context of the essential but often overlooked role of science in society, education and the economy.

    The IEEE mass storage system reference model

    The IEEE Reference Model for Mass Storage Systems provides a basis for the development of standards for storage systems. The model identifies the high level abstractions that underlie modern storage systems. The model itself does not attempt to provide implementation specifications. Its main purpose is to permit the development of individual standards within a common framework. High Energy Physics has consistently been on the leading edge of technology and Mass Storage is no exception. This paper describes the IEEE MSS Reference Model in the HEP context and examines how it could be used to help solve the data management problems of HEP. (Originally published in CERN Yellow Report 94-06.) These are the notes from a series of lectures given at the 1993 CERN School of Computing. They have been extracted from the scanned PDF document, converted to MS Word using a free online tool and then saved as PDF. No attempt has been made to correct typographical or other errors in the original text.

    Grids Today, Clouds on the Horizon

    By the time of CCP 2008, the world's largest scientific machine - the Large Hadron Collider - should have been cooled down to its operational temperature of below 2 K and injection tests should have started. Collisions of proton beams at 5 + 5 TeV are expected within one to two months of the initial tests, with data taking at design energy (7 + 7 TeV) now foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing. After many years of preparation, 2008 has seen a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relies on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, like the European Grid Initiative (EGI). At the same time, (potentially?) new paradigms, such as that of "Cloud Computing", are emerging. This talk summarizes the (successful) results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements of production application communities in terms of stability and continuity in the medium to long term.

    The correlation space of Gaussian latent tree models and model selection without fitting

    We provide a complete description of possible covariance matrices consistent with a Gaussian latent tree model for any tree. We then present techniques for utilising these constraints to assess whether observed data are compatible with that Gaussian latent tree model. Our method does not require us first to fit such a tree. We demonstrate the usefulness of the inverse-Wishart distribution for performing preliminary assessments of tree-compatibility using semialgebraic constraints. Using results from Drton et al. (2008) we then provide the appropriate moments required for test statistics for assessing adherence to these equality constraints. These are shown to be effective even for small sample sizes and can be easily adjusted to test either the entire model or only certain macrostructures hypothesized within the tree. We illustrate our exploratory tetrad analysis using a linguistic application and our confirmatory tetrad analysis using a biological application.
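
    As a rough illustration of the vanishing-tetrad constraints discussed above (a minimal sketch, not the authors' test statistics; all names and data are hypothetical), the Python snippet below simulates a four-variable single-factor quartet and evaluates its three tetrad differences on the sample correlation matrix, which should all be close to zero under the model:

    import numpy as np

    def tetrad_differences(R, i, j, k, l):
        # Three tetrad differences for observed variables i, j, k, l taken from
        # a correlation matrix R; under a latent tree model in which the quartet
        # shares a single latent parent, all three vanish in the population.
        t1 = R[i, j] * R[k, l] - R[i, k] * R[j, l]
        t2 = R[i, j] * R[k, l] - R[i, l] * R[j, k]
        t3 = R[i, k] * R[j, l] - R[i, l] * R[j, k]
        return t1, t2, t3

    # Toy usage: one latent factor with four noisy indicators, so the sample
    # tetrads should all be near zero (up to sampling error).
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 1))
    loadings = np.array([[0.9, 0.8, 0.7, 0.6]])
    X = latent @ loadings + 0.5 * rng.normal(size=(500, 4))
    R = np.corrcoef(X, rowvar=False)
    print(tetrad_differences(R, 0, 1, 2, 3))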

    Lessons Learnt from WLCG Service Deployment

    This paper summarises the main lessons learnt from deploying WLCG production services, with a focus on Reliability, Scalability and Accountability, which together lead to both manageability and usability. Each topic is analysed in turn. Techniques for zero user-visible downtime during the main service interventions are described, together with pathological cases that need special treatment. The requirements in terms of scalability are analysed, calling for as much robustness and automation in the service as possible. The different aspects of accountability - which covers measuring, tracking, logging and monitoring what is going on, and what has gone on - are examined, with the goal of attaining a manageable service. Finally, a simple analogy is drawn with the Web in terms of usability: what do we need to achieve to cross the chasm from small-scale adoption to ubiquity?

    Hurricane Landing: An Analysis of Site 22LA516 in Sardis Lake, Lafayette County, Mississippi

    Site 22LA516, known as Hurricane Landing, is a single-mound early Mississippian site located in the middle of Sardis Lake, Lafayette County, Mississippi. As part of a 2015 joint salvage archaeology project between the Center for Archaeological Research (CAR) and the Vicksburg District Corps of Engineers, nine pit features were excavated. Analyses of the ceramic and lithic remains recovered from the features, combined with AMS dates, were conducted with the aim of better understanding Hurricane Landing within its North Central Hills region of Mississippi. Hurricane Landing's 2015 excavation ceramic collection contains shell-tempered and grog-tempered plainware with several shell-tempered decorated types and no grog-tempered decorated types. Analysis of the lithics recovered indicates that Hurricane Landing imported Citronelle and Ft. Payne chert, with long-trajectory Citronelle production and short-trajectory Ft. Payne production. Settlement data for the North Central Hills indicate a population shift to downriver floodplains in the early Mississippian period. The results of the ceramic and lithic analyses, coupled with the AMS dating, indicate that the pit features were filled from around AD 1165 to roughly AD 1295, strongly suggesting that Hurricane Landing is a transitional Mississippian site.

    Databases in High Energy Physics: a critical review

    The year 2000 is marked by a plethora of significant milestones in the history of High Energy Physics. Not only the true numerical end to the second millennium, this watershed year saw the final run of CERN's Large Electron-Positron collider (LEP) - the world-class machine that had been the focus of the lives of many of us for such a long time. It is also closely related to the subject of this chapter in the following respects:
    - Classified as a nuclear installation, information on the LEP machine must be retained indefinitely. This represents a challenge to the database community that is almost beyond discussion - archiving data for a relatively small number of years is indeed feasible, but retaining it for centuries, millennia or more is a very different issue;
    - There are strong scientific arguments as to why the data from the LEP machine should be retained for a short period. However, the complexity of the data itself, the associated metadata and the programs that manipulate it make even this a huge challenge;
    - The story of databases in HEP is closely linked to that of LEP itself: what were the basic requirements that were identified in the early years of LEP preparation? How well have these been satisfied? What are the remaining issues and key messages?
    - Finally, the year 2000 also marked the entry of Grid architectures onto the central stage of HEP computing. How has the Grid affected the requirements on databases or the manner in which they are deployed? Furthermore, as the LEP tunnel and even parts of the detectors that it housed are readied for re-use for the Large Hadron Collider (LHC), how have our requirements on databases evolved at this new scale of computing?
    A number of the key players in the field of databases - as can be seen from the author lists of the various publications - have since retired from the field or else from this world. Given the fallibility of human memory, a record of the use of databases for physics data processing is clearly needed before memories fade completely and the story is lost forever. This account is necessarily somewhat CERN-centric, although an effort has been made to cover important developments and events elsewhere. Frequent reference is made to the Computing in High Energy Physics (CHEP) conference series - the most accessible and consistent record of this field.

    Tackling a scandal of premature mortality; time for a ‘hearts & minds’ approach

    David Shiers and Tim Kendall suggest it is untenable in 2012 to provide healthcare which fails to address the physical needs of those with mental illness, and the mental needs of those with physical illness.

    Blind estimation of reverberation time in classrooms and hospital wards

    This paper investigates blind Reverberation Time (RT) estimation in occupied classrooms and hospital wards. Measurements are usually made while these spaces are unoccupied, for logistical reasons; however, occupancy can have a significant impact on the rate of reverberant decay. Recent work has developed a Maximum Likelihood Estimation (MLE) method which utilises only passively recorded speech and music signals, enabling measurements to be made while the room is in use. In this paper the MLE method is applied to recordings made in classrooms during lessons. Classroom occupancy levels differ for each lesson, so a model is developed using blind estimates to predict the RT for any occupancy level to within ±0.07 s for the mid-frequency octave bands. The model is also able to predict the effective room and per-person absorption areas. Ambient sound recordings were also carried out in a number of rooms in two hospitals over the course of a week. Hospital measurements are more challenging because free reverberant decays occur more rarely than in schools and the acoustic conditions may be non-stationary. However, by collecting recordings over a period of a week, estimates can still be obtained to within ±0.07 s. These estimates are representative of the times when the room contains the highest acoustic absorption - in other words, when curtains are drawn, there are many visitors, or perhaps a window is open.
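
    To make the decay-rate estimation step concrete, here is a minimal Python sketch of the idea behind such an MLE approach (not the authors' implementation; it omits the detection of free-decay segments and the aggregation over many frames that a practical blind estimator needs, and all parameter values are illustrative). A single decay segment is modelled as exponentially damped Gaussian noise, the likelihood is profiled over a grid of per-sample decay rates, and the best rate is converted to an RT60 estimate:

    import numpy as np

    def mle_decay_rate(frame, fs, a_grid=None):
        # Profile maximum-likelihood estimate of the per-sample decay rate `a`
        # for one segment modelled as x[n] ~ N(0, sigma^2 * a^(2n)).
        # Returns (a_hat, rt60_in_seconds).
        n = np.arange(len(frame))
        if a_grid is None:
            # Illustrative grid of decay rates (roughly 0.2 s to 9 s RT at 16 kHz).
            a_grid = np.linspace(0.998, 0.99995, 800)
        best_ll, a_hat = -np.inf, a_grid[0]
        for a in a_grid:
            # Profile out sigma^2: its MLE given `a` is the mean of x[n]^2 * a^(-2n).
            sigma2 = np.mean(frame**2 * a**(-2 * n))
            ll = -0.5 * len(frame) * np.log(sigma2) - np.log(a) * n.sum()
            if ll > best_ll:
                best_ll, a_hat = ll, a
        # Time for the energy envelope a^(2n) to fall by 60 dB.
        rt60 = 3.0 * np.log(10) / (-np.log(a_hat) * fs)
        return a_hat, rt60

    # Toy usage: a synthetic free decay with a known RT of about 0.5 s.
    fs, rt_true = 16000, 0.5
    n = np.arange(int(0.3 * fs))
    a_true = 10 ** (-3.0 / (rt_true * fs))
    decay = a_true**n * np.random.default_rng(1).normal(size=n.size)
    print(mle_decay_rate(decay, fs))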