
    GATE simulation for medical physics with genius Web portal

    Get PDF
    Presented by C. Thiam. The PCSV team of the LPC laboratory in Clermont-Ferrand is involved in the deployment of biomedical applications on the grid architecture. One of these applications concerns the deployment of GATE (Geant4 Application for Tomographic Emission) for medical physics. The aim of the developments currently under way is to enable the use of the GATE platform in clinical routine. This prospect is only realistic, however, if the computing time and the user's time are drastically reduced. The new grid architecture developed within the European project Enabling Grids for E-sciencE (EGEE) is designed to answer this requirement. The use of grid resources must be transparent, easy and fast for medical physicists. For this purpose, we adapted the GENIUS web portal to facilitate the planning of GATE simulations on the grid. We will present a demonstration of the GENIUS portal, which integrates all the relevant EGEE functionalities: creating, submitting and managing GATE jobs on the grid architecture. Our GATE activities for dosimetry applications have entered a direct evaluation phase with the cancer treatment centre of Clermont-Ferrand (Centre Jean Perrin). A workstation is available in this centre to test the use of the GATE application on the grid through GENIUS. In the long term, this portal will allow GATE to be used for brachytherapy and radiotherapy treatment planning based on medical data (medical images, DICOM, binary data, dose calculations in heterogeneous media) and to analyse the results visually. Other functionalities are under development and will make it possible to register medical data on grid storage elements and to manage them. These data must, however, be anonymised before being recorded on the grid. Their access via the GENIUS portal must be made secure and fast (compared with the simulation computing time). To guarantee that the medical data remain accessible for the calculations, their replication on several storage elements (SE) should be possible. The grid services make it possible to manage this information freely and transparently. Data handling and catalogue operations on the grid are ensured by the Replica Manager system, which integrates all the tools needed to manage data on the grid. The computing grid gives promising results and meets a definite need: reaching acceptable computing times for a future use of Monte Carlo simulations in brachytherapy and radiotherapy treatment planning.
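
    The abstract does not spell out how a long GATE run is actually partitioned and described for the grid, so the sketch below is only an illustration of the general workflow it alludes to: splitting one Monte Carlo run into many independent jobs, each with its own random seed, and writing an EGEE-style JDL description per job for submission through the portal. The macro commands, file names, JDL fields and CPU-time requirement shown are assumptions for the example, not the GENIUS implementation.

        # Illustrative sketch only: split one GATE simulation into N independent
        # grid jobs and write a gLite/EGEE-style JDL description for each.
        # Everything below (script names, JDL fields, seeds) is assumed, not
        # taken from the GENIUS portal itself.
        from pathlib import Path

        TOTAL_PRIMARIES = 10_000_000   # total particles to simulate (assumed)
        N_JOBS = 100                   # degree of parallelism on the grid

        def write_job(job_id: int, primaries: int, outdir: Path) -> Path:
            """Write a per-job GATE macro and a matching JDL description."""
            macro = outdir / f"gate_job_{job_id:03d}.mac"
            macro.write_text(
                f"/gate/random/setEngineSeed {1000 + job_id}\n"   # independent seeds
                f"/gate/application/setTotalNumberOfPrimaries {primaries}\n"
                "/gate/application/start\n"
            )
            jdl = outdir / f"gate_job_{job_id:03d}.jdl"
            jdl.write_text(
                'Executable = "run_gate.sh";\n'
                f'Arguments = "{macro.name}";\n'
                f'InputSandbox = {{"run_gate.sh", "{macro.name}"}};\n'
                f'OutputSandbox = {{"dose_{job_id:03d}.root", "std.out", "std.err"}};\n'
                'Requirements = other.GlueCEPolicyMaxCPUTime > 720;\n'
            )
            return jdl

        outdir = Path("jobs")
        outdir.mkdir(exist_ok=True)
        per_job = TOTAL_PRIMARIES // N_JOBS
        jdl_files = [write_job(i, per_job, outdir) for i in range(N_JOBS)]
        # Each JDL file would then be submitted through the portal or a grid CLI,
        # and the partial dose outputs merged once all jobs have completed.
        print(f"prepared {len(jdl_files)} job descriptions in {outdir}/")

    The point of the split is simply that N independent jobs reduce wall-clock time roughly by a factor of N, which is what makes the clinically acceptable turnaround mentioned above plausible.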

    AstroGrid-D: Grid Technology for Astronomical Science

    Full text link
    We present the status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines through a set of commands as well as software interfaces. It allows simple use of compute and storage facilities and makes it possible to schedule and monitor compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context that led to the demand for advanced software solutions in astrophysics, and states the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in Chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites, and advanced applications for specific scientific purposes, such as a connection to robotic telescopes. These examples show how grid execution improves, for instance, the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 focuses on the administrative aspects of the infrastructure, used to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for the automatic submission of compute tasks. We summarise the successfully established infrastructure in Chapter 4, concluding with our plans to establish AstroGrid-D as a platform of modern e-Astronomy. Comment: 14 pages, 12 figures. Subjects: data analysis, image processing, robotic telescopes, simulations, grid. Accepted for publication in New Astronomy.
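
    As a rough illustration of how two of the components named in Section 3.2 fit together, the sketch below pairs a toy information service (a metadata store) with a job manager that consults it before recording a submission. All class names, fields and site names are invented for the example; the real system is built on Globus Toolkit (GT4) services, not on this code.

        # Toy sketch of an information service plus job manager; names and
        # fields are illustrative assumptions, not AstroGrid-D interfaces.
        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class InformationService:
            """Collects and stores metadata records as simple key/value documents."""
            records: Dict[str, dict] = field(default_factory=dict)

            def publish(self, key: str, metadata: dict) -> None:
                self.records[key] = metadata

            def query(self, **filters) -> List[dict]:
                return [r for r in self.records.values()
                        if all(r.get(k) == v for k, v in filters.items())]

        @dataclass
        class JobManager:
            """Submits a compute task to the least-loaded known site."""
            info: InformationService

            def submit(self, job_name: str, executable: str) -> str:
                sites = self.info.query(type="compute-site")
                site = min(sites, key=lambda s: s["running_jobs"])
                site["running_jobs"] += 1
                self.info.publish(f"job/{job_name}",
                                  {"type": "job", "site": site["name"],
                                   "executable": executable, "state": "submitted"})
                return site["name"]

        info = InformationService()
        info.publish("site/a", {"type": "compute-site", "name": "site-a", "running_jobs": 3})
        info.publish("site/b", {"type": "compute-site", "name": "site-b", "running_jobs": 1})
        print(JobManager(info).submit("nbody-run-42", "/bin/nbody"))   # -> "site-b"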

    A Practical Searchable Symmetric Encryption Scheme for Smart Grid Data

    Full text link
    Outsourcing data storage to the remote cloud can be an economical way to enhance data management in the smart grid ecosystem. To protect the privacy of the data, the utility company may choose to encrypt them before uploading them to the cloud. However, while encryption provides confidentiality, it also sacrifices the data owners' ability to query a specific segment of their data. Searchable symmetric encryption (SSE) is a technology that enables users to store documents in ciphertext form while retaining the ability to search for keywords in those documents. However, most state-of-the-art SSE algorithms focus on general document storage and may be unsuitable for smart grid applications. In this paper, we propose a simple, practical SSE scheme that aims to protect the privacy of data generated in the smart grid. Our scheme achieves a favourable space complexity with a small information disclosure that is acceptable for practical smart grid applications. We also implement a prototype over statistical data from the advanced metering infrastructure to show the effectiveness of our approach.
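
    To make the idea concrete, here is a minimal toy SSE index in Python (standard library only): keywords are mapped to pseudorandom search tokens with a PRF, and the posting lists of document identifiers are masked so the server never sees plaintext keywords or identifiers. This is a generic textbook-style construction, not the scheme proposed in the paper; the key handling, padding size and meter-data identifiers are all illustrative.

        # Toy searchable-symmetric-encryption index (illustrative only; not the
        # paper's construction). Tokens come from HMAC-SHA256 as a PRF.
        import hmac, hashlib, secrets
        from collections import defaultdict

        K_TOKEN = secrets.token_bytes(32)   # PRF key for search tokens
        K_MASK  = secrets.token_bytes(32)   # PRF key for masking document ids

        def prf(key: bytes, msg: bytes) -> bytes:
            return hmac.new(key, msg, hashlib.sha256).digest()

        def build_index(docs):
            """Client side: map PRF(keyword) -> list of masked document ids."""
            index = defaultdict(list)
            for doc_id, keywords in docs.items():
                for w in sorted(keywords):
                    token = prf(K_TOKEN, w.encode())
                    counter = len(index[token])          # position in the posting list
                    mask = prf(K_MASK, w.encode() + counter.to_bytes(4, "big"))
                    padded = doc_id.encode().ljust(32, b"\x00")   # ids <= 32 bytes here
                    index[token].append(bytes(a ^ b for a, b in zip(padded, mask)))
            return dict(index)

        def search(index, keyword):
            """Server matches only the token; the client unmasks the results."""
            token = prf(K_TOKEN, keyword.encode())
            hits = []
            for counter, masked in enumerate(index.get(token, [])):
                mask = prf(K_MASK, keyword.encode() + counter.to_bytes(4, "big"))
                hits.append(bytes(a ^ b for a, b in zip(masked, mask)).rstrip(b"\x00").decode())
            return hits

        meter_docs = {"meter-0017/2023-05": {"outage", "peak-load"},
                      "meter-0042/2023-05": {"peak-load"}}
        index = build_index(meter_docs)
        print(search(index, "peak-load"))   # ['meter-0017/2023-05', 'meter-0042/2023-05']

    Even a toy index like this still reveals the access pattern and the number of matches per query to the server, which is the sort of bounded information disclosure that schemes in this family typically accept in exchange for practicality.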

    Managing community membership information in a small-world grid

    Get PDF
    As the Grid matures, the problem of resource discovery across communities, where resources now include computational services, is becoming more critical. The number of resources available on a world-wide grid is set to grow exponentially, in much the same way as the number of static web pages on the WWW. We observe that the world-wide resource discovery problem can be modelled as a slowly evolving, very large sparse matrix in which individual matrix elements represent the nodes' knowledge of one another. Blocks in the matrix arise where nodes offer more than one service; blocking effects also arise in the identification of sub-communities in the Grid. The linear algebra community has long been aware of suitable representations for large sparse matrices. However, a matrix the size of the world-wide grid potentially has dimensions in the billions, making dense solutions completely intractable. Distributed nodes will not necessarily have the storage capacity to hold the addresses of any significant percentage of the available resources. We discuss ways of modelling this problem in the regime of a slowly changing service base, including phenomena such as percolating networks and small-world network effects.
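
    As one concrete reading of the sparse-matrix model, the sketch below stores each node's row as a small dictionary of only the peers it happens to know, so per-node storage grows with a node's contacts rather than with the size of the whole grid, and a bounded-hop lookup stands in for small-world routing. The structure and names are illustrative assumptions, not taken from the paper.

        # Sparse "who knows whom" matrix: A[node][peer] = set of services that
        # node believes peer offers. Only non-zero entries are ever stored.
        from collections import defaultdict

        class KnowledgeMatrix:
            def __init__(self):
                self._rows = defaultdict(dict)          # node -> {peer: set(services)}

            def learn(self, node: str, peer: str, service: str) -> None:
                self._rows[node].setdefault(peer, set()).add(service)

            def known_providers(self, node: str, service: str):
                return [p for p, svcs in self._rows.get(node, {}).items() if service in svcs]

            def forward_query(self, node: str, service: str, hops: int = 2):
                """Bounded-hop lookup: ask known peers, then peers of peers."""
                frontier, found = {node}, set()
                for _ in range(hops):
                    next_frontier = set()
                    for n in frontier:
                        found.update(self.known_providers(n, service))
                        next_frontier.update(self._rows.get(n, {}).keys())
                    frontier = next_frontier
                return found

        m = KnowledgeMatrix()
        m.learn("nodeA", "nodeB", "render")
        m.learn("nodeB", "nodeC", "matrix-solve")
        print(m.forward_query("nodeA", "matrix-solve"))   # {'nodeC'}, reached via nodeB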

    HEP Applications Evaluation of the EDG Testbed and Middleware

    Full text link
    Workpackage 8 of the European DataGrid project was formed in January 2001 with representatives from the four LHC experiments and with experiment-independent people from five of the six main EDG partners. In September 2002, WP8 was strengthened by the addition of effort from BaBar and D0. The original mandate of WP8 was, following the definition of short- and long-term requirements, to port experiment software to the EDG middleware and testbed environment. A major additional activity has been testing the basic functionality and performance of this environment. This paper reviews experiences and evaluations in the areas of job submission, data management, mass storage handling, information systems and monitoring. It also comments on the problems of remote debugging, the portability of code, and scaling problems with increasing numbers of jobs, sites and nodes. Reference is made to the pioneering work of ATLAS and CMS in integrating the use of the EDG testbed into their data challenges. A forward look is made to essential software developments within EDG and to the necessary cooperation between EDG and LCG for the LCG prototype due in mid-2003. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics Conference (CHEP03), La Jolla, CA, USA, March 2003, 7 pages. PSN THCT00