
    Grid Computing: Concepts and Applications

    The challenge of the CERN experiments at the Large Hadron Collider (LHC), which will collect data at rates of the order of PBs/year, requires the development of GRID technologies to optimize the exploitation of distributed computing power and automatic access to distributed data storage. Several projects are addressing the problem of setting up the hardware infrastructure of a GRID, as well as the development of the middleware required to manage it: a working GRID should look like a set of services, accessible to registered applications, which coordinate the different computing and storage resources. As happened with the World Wide Web, GRID concepts are in principle important not only for High Energy Physics (HEP): for this reason, GRID developers, while keeping in mind the needs of HEP experiments, are trying to design GRID services in the most general way. As examples, two applications are described: the CERN/ALICE experiment at the LHC and a recently approved INFN project (GPCALMA) which will set up a GRID prototype between several mammographic centres in Italy.

    GPCALMA: a Grid Approach to Mammographic Screening

    The next generation of High Energy Physics experiments requires a GRID approach to distributed computing and the associated data management: the key concept is the "Virtual Organisation" (VO), a group of geographically distributed users with a common goal and the will to share their resources. A similar approach is being applied to a group of hospitals which joined the GPCALMA project (Grid Platform for Computer Assisted Library for MAmmography), which will allow common screening programs for early diagnosis of breast and, in the future, lung cancer. HEP techniques come into play in the application code, which makes use of neural networks for the image analysis and shows performance comparable to that of radiologists in the diagnosis. GRID technologies will allow remote image analysis and interactive online diagnosis, with a significant reduction of the delays presently associated with screening programs.
    Comment: 4 pages, 3 figures; to appear in the Proceedings of Frontier Detectors For Frontier Physics, 9th Pisa Meeting on Advanced Detectors, 25-31 May 2003, La Biodola, Isola d'Elba, Italy

    A Computer Aided Detection system for mammographic images implemented on a GRID infrastructure

    The use of an automatic system for the analysis of mammographic images has proven to be very useful to radiologists in the investigation of breast cancer, especially in the framework of mammographic screening programs. A breast neoplasia is often marked by the presence of microcalcification clusters and massive lesions in the mammogram: hence the need for tools able to recognize such lesions at an early stage. In the framework of the GPCALMA (GRID Platform for Computer Assisted Library for MAmmography) project, Italian physicists and radiologists have collaborated to build a large distributed database of digitized mammographic images (about 5500 images corresponding to 1650 patients) and to develop a CAD (Computer Aided Detection) system able to perform an automatic search for massive lesions and microcalcification clusters. The CAD is implemented in the GPCALMA integrated station, which can also be used for digitization, as an archive, and to perform statistical analyses. Some GPCALMA integrated stations have already been deployed and are currently on clinical trial in several Italian hospitals. The emerging GRID technology can be used to connect the GPCALMA integrated stations operating in different medical centers. The GRID approach will support effective teleworking and collaboration between radiologists, cancer specialists and epidemiology experts by allowing remote image analysis and interactive online diagnosis.
    Comment: 5 pages, 5 figures, to appear in the Proceedings of the 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, May 18-23, 2003

    Lung Nodule Detection in Screening Computed Tomography

    A computer-aided detection (CAD) system for the identification of pulmonary nodules in low-dose multi-detector helical Computed Tomography (CT) images with 1.25 mm slice thickness is presented. The basic modules of our lung-CAD system, a dot-enhancement filter for nodule candidate selection and a neural classifier for false-positive reduction, are described. The results obtained on the collected database of lung CT scans are discussed.
    Comment: 3 pages, 4 figures; Proceedings of the IEEE NSS and MIC Conference, Oct. 29 - Nov. 4, 2006, San Diego, California
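The candidate-selection module above can be illustrated with a generic Hessian-based blob ("dot") enhancement filter. The abstract does not specify the filter's exact form, so the response function, the smoothing scale, and the synthetic test volume below are all illustrative assumptions, not the actual lung-CAD implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dot_enhance(volume, sigma=2.0):
    """Enhance blob-like (nodule-like) structures in a 3D CT volume.

    A bright spherical structure yields three negative eigenvalues of
    the Hessian of the smoothed volume; the response used here is the
    magnitude of the most negative eigenvalue wherever all three are
    negative. This is a generic dot filter, not the published one.
    """
    smoothed = gaussian_filter(volume.astype(float), sigma)
    grads = np.gradient(smoothed)
    hess = np.empty(volume.shape + (3, 3))
    for i, gi in enumerate(grads):
        for j, gij in enumerate(np.gradient(gi)):
            hess[..., i, j] = gij          # second partial derivatives
    lam = np.linalg.eigvalsh(hess)         # eigenvalues, ascending
    all_negative = lam[..., 2] < 0         # largest eigenvalue < 0
    return np.where(all_negative, -lam[..., 0], 0.0)

# Synthetic check: a Gaussian "nodule" at the centre of an empty volume
z, y, x = np.mgrid[0:33, 0:33, 0:33]
vol = np.exp(-((z - 16)**2 + (y - 16)**2 + (x - 16)**2) / (2 * 3.0**2))
resp = dot_enhance(vol)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

Local maxima of the response map would then be passed to the neural classifier for false-positive reduction.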

    CADe tools for early detection of breast cancer

    A breast neoplasia is often marked by the presence of microcalcifications and massive lesions in the mammogram: hence the need for tools able to recognize such lesions at an early stage. Our collaboration of Italian physicists and radiologists has built a large distributed database of digitized mammographic images, developed a Computer Aided Detection (CADe) system for their automatic analysis, and installed it in several Italian hospitals connected via GRID. Regarding microcalcifications, in our CADe the digital mammogram is divided into wide windows which are processed by a convolution filter; a self-organizing map then analyzes each window and produces 8 principal components, which are used as input to a feed-forward neural network (FFNN) that classifies each window against a threshold. Regarding massive lesions, we select all significant intensity maxima and define the radius of a region of interest (ROI) around each; from each ROI found we extract the parameters which are used as input to an FFNN to distinguish between pathological and non-pathological ROIs. We present here a test of our CADe system used as a second reader, and a comparison with another (commercial) CADe system.
    Comment: 4 pages, Proceedings of the 4th International Symposium on Nuclear and Related Techniques 2003, Vol. unico, pp. d10/1-d10/4, Havana, Cuba
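The microcalcification chain described above (windowing, reduction to 8 components, FFNN classification) can be sketched schematically. The window size, the plain PCA used here as a stand-in for the self-organizing-map step, and the untrained network weights are all illustrative assumptions, not the GPCALMA configuration:

```python
import numpy as np

def extract_windows(image, size=60, stride=60):
    """Split a mammogram into square windows (size/stride are assumed)."""
    rows, cols = image.shape
    return np.array([image[r:r + size, c:c + size].ravel()
                     for r in range(0, rows - size + 1, stride)
                     for c in range(0, cols - size + 1, stride)])

def reduce_to_components(windows, n=8):
    """Project each window onto its first n principal components
    (a PCA stand-in for the SOM analysis described in the abstract)."""
    centred = windows - windows.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n].T

def ffnn_classify(features, w1, b1, w2, b2, threshold=0.5):
    """One-hidden-layer FFNN; windows scoring above threshold are flagged."""
    hidden = np.tanh(features @ w1 + b1)
    scores = 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))
    return scores > threshold

# Toy run on a random "mammogram" with untrained (random) weights
rng = np.random.default_rng(0)
mammogram = rng.random((240, 240))
feats = reduce_to_components(extract_windows(mammogram))
flags = ffnn_classify(feats,
                      rng.standard_normal((8, 10)), np.zeros(10),
                      rng.standard_normal(10), 0.0)
```

In a real system the FFNN weights would of course come from training on labeled windows from the distributed database.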

    Recent Developments on the Silicon Drift Detector readout scheme for the ALICE Inner Tracking System

    Proposal of abstract for LEB99, Snowmass, Colorado, 20-24 September 1999.
    Recent developments of the Silicon Drift Detector (SDD) readout system for the ALICE experiment are presented. The foreseen readout system is based on two main units. The first unit consists of a low-noise preamplifier, an analog memory which continuously samples the amplifier output, an A/D converter and a digital memory. When the trigger signal validates the analog data, the ADCs convert the samples into digital form and store them in the digital memory. The second unit performs the zero-suppression/data-compression operations. In this paper the status of the design is presented, together with the test results of the A/D converter, the multi-event buffer and the compression unit prototype.
    Summary: In the Inner Tracking System (ITS) of the ALICE experiment, the third and fourth detector layers are SDDs. These detectors measure both the energy deposition and the two-dimensional position of the track. In terms of readout, an SDD can be viewed as a matrix, where the rows are the detector anodes and the columns are the samples to be read during the drift time; therefore, a very large amount of data has to be amplified, converted into digital form and preprocessed in order to avoid the storage of non-significant data. Since the electron mobility is a strong function of temperature, the detector temperature has to be kept constant; on the other hand, very efficient cooling systems cannot be used because the amount of material allowed in this region is very limited, so the power budget for the readout electronics is very low (less than 6 mW/anode). The simplest solution would be to send the analog signals outside the sensitive area immediately after preamplification; unfortunately, the ratio between the number of channels (around 200,000) and the available space is so high that sending all the SDD anode outputs outside the detector zone after a low-noise amplification is not practically manageable.
    The adopted solution is based on three main units: (i) a front-end chip that performs low-noise amplification, fast analog storage and A/D conversion; (ii) a multi-event digital buffer for data derandomization; (iii) a data compression/zero suppression and system control board. The first two units are distributed on the ladders near the detectors and have stringent power and space requirements, while the third unit is placed at both ends of the ladders and in boxes at both ends of the TPC detector. The first unit is the most critical part of the system. It works as follows: the detector signals are continuously amplified, sampled and stored in the analog memory at 40 MSamples/s. The L0 trigger signal stops the write operation, while the L1 trigger signal starts the conversion phase. This phase continues until the event data are stored in the event buffer, if the L2y confirm trigger signal is received; the data are rejected if the L2n abort signal is issued by the trigger system. Prototypes of the three parts have been designed and tested, while the full chip is currently under design. Tests of the A/D converter will be presented. The purpose of the multi-event buffer is to derandomize the event data in order to reduce the transmission speed; preliminary tests of the first prototype will be presented. The board placed at the end of the ladders performs various functions: it reduces the amount of data through various cascaded algorithms with variable parameters and transmits the data to the SIU board, and it also controls the test and slow-control system for the ladder circuitry. Tests of the FPGA-based prototypes will be presented. Special care has been taken over testability: the ASICs are provided with a test control port based on the IEEE 1149.1 JTAG standard, and the same protocol is used for downloading configuration information.
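The zero-suppression step on the anode × time-sample matrix can be illustrated with a toy threshold cut. This is a minimal sketch: the actual ALICE SDD board applies cascaded compression algorithms with tunable parameters, not a single fixed threshold.

```python
def zero_suppress(matrix, threshold):
    """Keep only samples above threshold as (anode, time_bin, value)
    triples, discarding the (dominant) empty part of the matrix."""
    return [(anode, t, v)
            for anode, row in enumerate(matrix)
            for t, v in enumerate(row)
            if v > threshold]

# A 3-anode x 5-sample toy matrix: only two samples exceed the threshold
hits = zero_suppress([[0, 1, 0, 9, 0],
                      [0, 0, 0, 0, 0],
                      [0, 7, 0, 0, 1]], threshold=5)
```

Since most anode/time-bin cells contain no signal, storing sparse triples instead of the full matrix is what keeps the output data volume manageable.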

    Test Results of the ALICE SDD Electronic Readout Prototypes

    The first prototypes of the front-end electronics for the ALICE silicon drift detectors have been designed and tested. The integrated circuits have been designed using state-of-the-art technologies and, for the analog parts, with radiation-tolerant design techniques. In this paper, the test results of the building blocks of the PASCAL chip and of the first prototype of the AMBRA chip are presented. The prototypes fully meet the ALICE requirements; owing to the use of deep-submicron technologies together with radiation-tolerant layout techniques, the prototypes have shown a tolerance to a radiation dose much higher than the one foreseen for the ALICE environment. (Abstract only available, full text to follow)

    GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several physics departments, INFN sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, as an archive, and to perform statistical analyses. The images are completely described: pathological ones are characterized consistently with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology; in each hospital, the digital images of local patients are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on the distributed database as well as on the local one. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. The GPCALMA software also allows the classification of pathological features, in particular the analysis of massive lesions and microcalcification clusters. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system used as a "second reader" will also be presented.
    Comment: 6 pages, Proceedings of the Seventh Mexican Symposium on Medical Physics 2003, Vol. 682/1, pp. 67-72, Mexico City, Mexico
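The ROC evaluation mentioned above can be sketched as a generic construction from per-case scores and ground-truth labels; the toy scores below are illustrative, not GPCALMA's own analysis code or data:

```python
import numpy as np

def roc_points(scores, labels):
    """Return (false positive rate, true positive rate) arrays obtained
    by sweeping the decision threshold over the sorted scores."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(labels)[order]
    tpr = np.cumsum(y) / y.sum()                  # sensitivity
    fpr = np.cumsum(1 - y) / (len(y) - y.sum())   # 1 - specificity
    return fpr, tpr

def area_under_curve(fpr, tpr):
    """Trapezoidal AUC: 1.0 for perfect separation, 0.5 for chance."""
    widths = fpr[1:] - fpr[:-1]
    heights = (tpr[1:] + tpr[:-1]) / 2.0
    return float(np.sum(widths * heights))

# Perfectly separated toy scores give AUC = 1.0
fpr, tpr = roc_points([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
auc = area_under_curve(fpr, tpr)  # -> 1.0
```

Plotting sensitivity against (1 - specificity) in this way is also how a CAD "second reader" is compared against unaided radiologists.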

    HEP Applications Evaluation of the EDG Testbed and Middleware

    Workpackage 8 of the European DataGrid project was formed in January 2001 with representatives from the four LHC experiments, and with experiment-independent people from five of the six main EDG partners. In September 2002 WP8 was strengthened by the addition of effort from BaBar and D0. The original mandate of WP8 was, following the definition of short- and long-term requirements, to port experiment software to the EDG middleware and testbed environment. A major additional activity has been testing the basic functionality and performance of this environment. This paper reviews experiences and evaluations in the areas of job submission, data management, mass storage handling, information systems and monitoring. It also comments on the problems of remote debugging, the portability of code, and scaling problems with increasing numbers of jobs, sites and nodes. Reference is made to the pioneering work of ATLAS and CMS in integrating the use of the EDG Testbed into their data challenges. A forward look is made to essential software developments within EDG and to the necessary cooperation between EDG and LCG for the LCG prototype due in mid-2003.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics Conference (CHEP03), La Jolla, CA, USA, March 2003, 7 pages. PSN THCT00