
    ATLAS Data Challenge 1

    In 2002 the ATLAS experiment started a series of Data Challenges (DC) whose goals are to validate the Computing Model, the complete software suite and the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required to produce large event samples for the High Level Trigger (HLT) and physics communities, and the production of those samples as a world-wide distributed activity. The first phase of DC1 ran during summer 2002 and involved 39 institutes in 18 countries. More than 10 million physics events and 30 million single-particle events were fully simulated. Over a period of about 40 calendar days, 71,000 CPU-days were used to produce 30 TB of data in about 35,000 partitions. In the second phase the next processing step was performed with the participation of 56 institutes in 21 countries (~4,000 processors used in parallel). The basic elements of the ATLAS Monte Carlo production system are described. We also present how the software suite was validated and how the participating sites were certified. These productions were already partly performed using different flavours of Grid middleware at ~20 sites.
    Comment: 10 pages; 3 figures; CHEP03 Conference, San Diego; Reference MOCT00

    Description and Experience of the Clinical Testbeds

    This deliverable describes the up-to-date technical environment at the three clinical testbed demonstrator sites of the 6WINIT Project, including the adapted clinical applications, project components and network transition technologies in use at these sites after 18 months of the Project. It also provides an interim description of early experiences with the deployment and usage of these applications, components and technologies, and their clinical service impact.

    The OMII Software Distribution

    This paper describes the work carried out at the Open Middleware Infrastructure Institute (OMII) and the key elements of the OMII software distribution that have been developed in collaboration with members of the Managed Programme Initiative. The main objective of the OMII is to preserve and consolidate the achievements of the UK e-Science Programme by collecting, maintaining and improving the software modules that form the key components of a generic Grid middleware. Recently, the activity at Southampton has been extended beyond 2009 through a new project, OMII-UK, a partnership that now includes the OGSA-DAI activities at Edinburgh and the myGrid project at Manchester.

    A FRAMEWORK FOR BIOPROFILE ANALYSIS OVER GRID

    An important trend in modern medicine is towards the individualisation of healthcare, tailoring care to the needs of the individual. This makes it possible, for example, to personalise diagnosis and treatment to improve outcomes. However, these benefits can only be fully realised if healthcare and ICT resources are exploited (e.g. to provide access to relevant data, analysis algorithms, knowledge and expertise). The grid can potentially play an important role in this by allowing the sharing of resources and expertise to improve the quality of care. The integration of the grid and the new concept of the bioprofile represents a new topic in the healthgrid for the individualisation of healthcare. A bioprofile is a personal dynamic "fingerprint" that fuses together a person's current and past bio-history, biopatterns and prognosis. It combines not just data, but also analysis and predictions of future or likely susceptibility to disease, such as brain diseases and cancer. The creation and use of bioprofiles require the support of a number of healthcare and ICT technologies and techniques, such as medical imaging and electrophysiology and related facilities, analysis tools, data storage and computation clusters. The need to share clinical data, storage and computation resources between different bioprofile centres creates not only local problems, but also global ones. Existing ICT technologies are inappropriate for bioprofiling because of the difficulties in the use and management of heterogeneous IT resources at different bioprofile centres. The grid, as an emerging resource-sharing concept, fulfils the needs of bioprofiling in several respects, including the discovery, access, monitoring and allocation of distributed bioprofile databases, computation resources, bioprofile knowledge bases, etc. However, the challenge of how to integrate grid and bioprofile technologies to offer an advanced distributed bioprofile environment supporting individualised healthcare remains. The aim of this project is to develop a framework for one of the key meta-level bioprofile applications: bioprofile analysis over grid to support individualised healthcare. Bioprofile analysis is a critical part of bioprofiling (i.e. the creation, use and update of bioprofiles). Analysis makes it possible, for example, to extract markers from data for diagnosis and to assess an individual's health status. The framework provides a basis for a "grid-based" solution to the challenge of "distributed bioprofile analysis" in bioprofiling. The main contributions of the thesis are fourfold:
    A. An architecture for bioprofile analysis over grid. The design of a suitable architecture is fundamental to the development of any ICT system. The architecture creates a means for the categorisation, determination and organisation of core grid components to support the development and use of grid for bioprofile analysis.
    B. A service model for bioprofile analysis over grid. The service model proposes a service design principle, a service architecture for bioprofile analysis over grid, and a distributed EEG analysis service model. The service design principle addresses the main design considerations behind the service model, in terms of usability, flexibility, extensibility, reusability, etc. The service architecture identifies the main categories of services and outlines an approach to organising services to realise the functionalities required by distributed bioprofile analysis applications. The EEG analysis service model demonstrates the utilisation and development of services to enable bioprofile analysis over grid (a minimal illustrative sketch follows this abstract).
    C. Two grid test-beds and a practical implementation of EEG analysis over grid. The two grid test-beds, the BIOPATTERN grid and PlymGRID, are built on existing grid middleware tools. They provide essential experimental platforms for research in bioprofiling over grid. The work here demonstrates how resources, grid middleware and services can be utilised, organised and implemented to support distributed EEG analysis for the early detection of dementia. The distributed electroencephalography (EEG) analysis environment can be used to support a variety of research activities in EEG analysis.
    D. A scheme for organising multiple (heterogeneous) descriptions of individual grid entities for knowledge representation of the grid. The scheme solves the compatibility and adaptability problems in managing heterogeneous descriptions (i.e. descriptions using different languages and schemas/ontologies) for the collaborative representation of a grid environment at different scales. It underpins the concept of bioprofile analysis over grid in the aspect of knowledge-based global coordination between the components of bioprofile analysis over grid.
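    As a rough illustration of the service model's idea in item B above, the following sketch chains a data-access service and an analysis service, which in the thesis's setting would live at different bioprofile centres. All class and method names here are hypothetical, invented for illustration; the thesis defines its own service architecture.

```python
# A minimal sketch of composing grid services for distributed EEG
# analysis. Names are hypothetical illustrations, not the thesis's API.
from abc import ABC, abstractmethod


class GridService(ABC):
    """Base type for the framework's service categories."""

    @abstractmethod
    def invoke(self, request: dict) -> dict: ...


class DataAccessService(GridService):
    """Locates and fetches EEG records from a bioprofile database."""

    def invoke(self, request: dict) -> dict:
        return {"eeg": f"records for subject {request['subject_id']}"}


class EEGAnalysisService(GridService):
    """Runs an analysis algorithm (e.g. a dementia marker) on EEG data."""

    def invoke(self, request: dict) -> dict:
        return {"markers": f"features extracted from {request['eeg']}"}


def run_distributed_analysis(subject_id: str) -> dict:
    # Orchestration: chain the two services; over grid, each invoke()
    # would be a remote call to a service at another bioprofile centre.
    data = DataAccessService().invoke({"subject_id": subject_id})
    return EEGAnalysisService().invoke(data)


print(run_distributed_analysis("S001"))
```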

    A failure diagnosis and impact assessment prototype for Space Station Freedom

    NASA is investigating the use of advanced automation to enhance crew productivity for Space Station Freedom in numerous areas, one being failure management. A prototype is described that diagnoses failure sources and assesses the future impacts of those failures on other Freedom entities.

    ERIGrid Holistic Test Description for Validating Cyber-Physical Energy Systems

    Smart energy solutions aim to modify and optimise the operation of existing energy infrastructure. Such cyber-physical technology must be mature before deployment to the actual infrastructure, and competitive solutions will have to be compliant with standards that are still under development. Achieving this technology readiness and harmonisation requires reproducible experiments and appropriately realistic testing environments. Such testbeds for multi-domain cyber-physical experiments are complex in and of themselves. This work addresses a method for the scoping and design of experiments in which both the testbed and the solution require detailed expertise. This empirical work first revisited present test description approaches, developed a new description method for cyber-physical energy systems testing, and matured it by means of user involvement. The new Holistic Test Description (HTD) method facilitates the conception, deconstruction and reproduction of complex experimental designs in the domains of cyber-physical energy systems. This work develops the background and motivation, offers a guideline and examples for the proposed approach, and summarises experience from three years of its application.
    This work received funding from the European Community's Horizon 2020 Programme (H2020/2014–2020) under the project "ERIGrid" (Grant Agreement No. 654113).
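    As an illustration of the kind of structure the HTD brings to test design, here is a minimal sketch of its description levels, from a testbed-independent test case down to a testbed-specific experiment, as data classes. The class and field names paraphrase the published HTD template and should be read as assumptions, not as the normative HTD schema.

```python
# A minimal sketch of HTD-style description levels as Python dataclasses.
# Field names paraphrase the published template and are assumptions here.
from dataclasses import dataclass, field


@dataclass
class TestCase:
    """What is tested and why, independent of any testbed."""
    object_under_investigation: str        # e.g. a voltage control function
    purpose_of_investigation: str          # characterisation, validation, ...
    test_criteria: list = field(default_factory=list)


@dataclass
class TestSpecification:
    """How the test case is carried out: test system, inputs, metrics."""
    test_case: TestCase
    test_system: str
    input_parameters: dict = field(default_factory=dict)
    target_metrics: list = field(default_factory=list)


@dataclass
class ExperimentSpecification:
    """Mapping of a test specification onto one concrete testbed."""
    test_specification: TestSpecification
    testbed: str                           # e.g. a lab or co-simulation setup
    component_mapping: dict = field(default_factory=dict)
```

    Splitting the description this way is what makes an experiment reproducible across testbeds: the upper levels stay fixed while only the experiment specification is rewritten for each facility.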

    Advanced Simulation and Computing FY12-13 Implementation Plan, Volume 2, Revision 0.5


    Opaque Service Virtualisation: A Practical Tool for Emulating Endpoint Systems

    Large enterprise software systems make many complex interactions with other services in their environment. Developing and testing for production-like conditions is therefore a very challenging task. Current approaches include emulation of dependent services using either explicit modelling or record-and-replay approaches. Models require deep knowledge of the target services while record-and-replay is limited in accuracy. Both face developmental and scaling issues. We present a new technique that improves the accuracy of record-and-replay approaches, without requiring prior knowledge of the service protocols. The approach uses Multiple Sequence Alignment to derive message prototypes from recorded system interactions and a scheme to match incoming request messages against prototypes to generate response messages. We use a modified Needleman-Wunsch algorithm for distance calculation during message matching. Our approach has shown greater than 99% accuracy for four evaluated enterprise system messaging protocols. The approach has been successfully integrated into the CA Service Virtualization commercial product to complement its existing techniques.
    Comment: In Proceedings of the 38th International Conference on Software Engineering Companion (pp. 202-211). arXiv admin note: text overlap with arXiv:1510.0142
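    The core of the matching step is a classic dynamic-programming alignment. Below is a minimal sketch of how a Needleman-Wunsch score could drive prototype selection; the scoring constants and function names are illustrative assumptions, not the authors' tuned parameters or modified algorithm.

```python
# A minimal sketch: score an incoming request against each recorded
# message prototype with Needleman-Wunsch global alignment and pick the
# best-scoring prototype. Constants and names are assumptions.

def nw_score(a: str, b: str, match: int = 1,
             mismatch: int = -1, gap: int = -2) -> int:
    """Needleman-Wunsch global alignment score between a and b."""
    n, m = len(a), len(b)
    # dp[i][j] holds the best score aligning a[:i] with b[:j].
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(sub, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]


def match_prototype(request: str, prototypes: list) -> str:
    """Return the message prototype most similar to the incoming request."""
    return max(prototypes, key=lambda p: nw_score(request, p))


# Example: a GET-like request matches the GET prototype, not the DELETE one.
print(match_prototype("GET /users/42", ["GET /users/?", "DELETE /users/?"]))
```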

    Definition of avionics concepts for a heavy lift cargo vehicle, appendix A

    The major objective of the study task was to define a cost-effective, multiuser simulation, test and demonstration facility to support the development of avionics systems for future space vehicles. This volume provides the results of the main simulation processor selection study and describes some proof-of-concept demonstrations for the avionics test bed facility.
