
    Charged hadrons production in p-p and Pb-Pb interactions at the ALICE experiment

    The aim of the ALICE experiment at the LHC is the study of nuclear matter under conditions of extreme energy density and temperature. Under these conditions Lattice QCD predicts a deconfined phase, the Quark-Gluon Plasma, in which quarks and gluons are no longer confined to individual nucleons. In this context the precise measurement of charged-particle spectra produced in heavy-ion and proton-proton collisions is a fundamental tool to study the physics of the Quark-Gluon Plasma. After a brief review of the ALICE identification techniques used to extract particle yields, we present the identified pion, kaon and proton spectra obtained in p-p (√s = 7 TeV) and Pb-Pb (√sNN = 2.76 TeV) collisions.

    A Computer Aided Detection system for mammographic images implemented on a GRID infrastructure

    The use of an automatic system for the analysis of mammographic images has proven to be very useful to radiologists in the investigation of breast cancer, especially in the framework of mammographic-screening programs. A breast neoplasia is often marked by the presence of microcalcification clusters and massive lesions in the mammogram: hence the need for tools able to recognize such lesions at an early stage. In the framework of the GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) project, Italian physicists and radiologists working together built a large distributed database of digitized mammographic images (about 5500 images corresponding to 1650 patients) and developed a CAD (Computer Aided Detection) system able to perform an automatic search for massive lesions and microcalcification clusters. The CAD is implemented in the GPCALMA integrated station, which can also be used for digitization, as an archive, and to perform statistical analyses. Some GPCALMA integrated stations have already been deployed and are currently in clinical trial in several Italian hospitals. The emerging GRID technology can be used to connect the GPCALMA integrated stations operating in different medical centers. The GRID approach will support effective tele- and co-working between radiologists, cancer specialists and epidemiology experts by allowing remote image analysis and interactive online diagnosis. Comment: 5 pages, 5 figures, to appear in the Proceedings of the 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, May 18-23, 2003.

    MRI analysis for Hippocampus segmentation on a distributed infrastructure

    Medical image computing raises new challenges due to the scale and the complexity of the required analyses. Medical image databases are currently available to support clinical diagnosis. For instance, it is possible to provide diagnostic information based on an imaging biomarker by comparing a single case to a reference group (controls or patients with disease). At the same time, many sophisticated and computationally intensive algorithms have been implemented to extract useful information from medical images. Many applications would benefit greatly from scientific workflow technology thanks to its ease of design, rapid implementation and reuse. However, this technology requires a distributed computing infrastructure (such as Grid or Cloud) to be executed efficiently. One of the most widely used workflow managers for medical image processing is the LONI Pipeline (LP), a graphical workbench developed by the Laboratory of Neuro Imaging (http://pipeline.loni.usc.edu). In this article we present a general approach to submit and monitor workflows on distributed infrastructures using the LONI Pipeline, including the European Grid Infrastructure (EGI) and a Torque-based batch farm. As a use case, we implemented a complete segmentation pipeline for brain magnetic resonance imaging (MRI), which requires time-consuming and data-intensive processing and for which reducing the computing time is crucial to meet clinical-practice constraints. The developed approach is based on web services and can be used for any medical imaging application.

    Radiomic analysis in contrast-enhanced spectral mammography for predicting breast cancer histological outcome

    Contrast-Enhanced Spectral Mammography (CESM) is a recently introduced mammographic method with characteristics particularly suitable for breast cancer radiomic analysis. This work aims to evaluate radiomic features for predicting histological outcome and two cancer molecular subtypes, namely Human Epidermal growth factor Receptor 2 (HER2)-positive and triple-negative. From 52 patients, 68 lesions were identified and confirmed on histological examination. Radiomic analysis was performed on regions of interest (ROIs) selected from both low-energy (LE) and ReCombined (RC) CESM images. Fourteen statistical features were extracted from each ROI. Expression of the estrogen receptor (ER) was significantly correlated with the variation coefficient and variation range calculated on both LE and RC images; the progesterone receptor (PR) with the skewness index calculated on LE images; and Ki67 with the variation coefficient, variation range, entropy and relative smoothness indices calculated on RC images. HER2 was significantly associated with relative smoothness calculated on LE images, and tumor grading with the variation coefficient, entropy and relative smoothness calculated on RC images. Encouraging results were obtained for differentiation between ER+/ER−, PR+/PR−, HER2+/HER2−, Ki67+/Ki67−, High-Grade/Low-Grade and TN/NTN. Specifically, the highest performances were obtained for discriminating HER2+/HER2− (90.87%), ER+/ER− (83.79%) and Ki67+/Ki67− (84.80%). Our results suggest an interesting role for radiomics in CESM to predict histological outcomes and particular tumor molecular subtypes.
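    A minimal sketch of the kind of first-order statistical features named in this abstract (variation coefficient, variation range, skewness, entropy, relative smoothness), computed on a single ROI with NumPy/SciPy. The function name, the histogram binning and the exact feature definitions are illustrative assumptions, not the study's implementation.

```python
import numpy as np
from scipy.stats import skew

def first_order_features(roi):
    """Illustrative first-order statistics of the kind listed in the abstract
    (the exact definitions used in the study may differ)."""
    x = roi.astype(float).ravel()
    mean, std = x.mean(), x.std()
    feats = {
        "variation_coefficient": std / mean if mean else np.nan,
        "variation_range": x.max() - x.min(),
        "skewness": skew(x),
    }
    # Shannon entropy of the grey-level histogram
    hist, _ = np.histogram(x, bins=64)
    p = hist[hist > 0] / hist.sum()
    feats["entropy"] = -np.sum(p * np.log2(p))
    # Relative smoothness: 1 - 1/(1 + normalized variance)
    feats["relative_smoothness"] = 1.0 - 1.0 / (1.0 + x.var() / (x.max() - x.min() + 1e-12) ** 2)
    return feats

# Usage: roi_le and roi_rc would be NumPy arrays cropped from the low-energy (LE)
# and recombined (RC) CESM images (hypothetical variables).
# print(first_order_features(roi_le))
```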

    Multiple RF classifier for the hippocampus segmentation: method and validation on EADC-ADNI harmonized hippocampal protocol

    The hippocampus plays a key role in a number of neurodegenerative diseases, such as Alzheimer's Disease. Here we present a novel method for the automated segmentation of the hippocampus from structural magnetic resonance images (MRI), based on a combination of multiple classifiers. The method is validated on a cohort of 50 T1 MRI scans, comprising healthy controls, mild cognitive impairment, and Alzheimer's Disease subjects. The preliminary release of the EADC-ADNI Harmonized Protocol training labels is used as the gold standard. The fully automated pipeline consists of a registration using an affine transformation, the extraction of a local bounding box, and the classification of each voxel into two classes (background and hippocampus). The classification is performed slice-by-slice along each of the three orthogonal directions of the 3D MRI using a Random Forest (RF) classifier, followed by a fusion of the three full segmentations. Dice coefficients obtained by multiple RF (0.87 ± 0.03) are larger than those obtained by a single monolithic RF applied to the entire bounding box, and are comparable to the state of the art. A test on an external cohort of 50 T1 MRI scans shows that the presented method is robust and reliable. Additionally, a comparison of local changes in the morphology of the hippocampi between the three subject groups is performed. Our work shows that a multiple-classifier approach can be used to segment the hippocampus and to measure its volume and shape changes for diagnostic purposes.
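    A minimal sketch of the per-direction Random Forest classification and fusion described above, using scikit-learn. The per-voxel features, the training scheme (one forest per orthogonal direction rather than per slice) and the majority-vote fusion rule are simplified assumptions; the paper's actual feature set is not detailed in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume, axis):
    """Placeholder per-voxel features: intensity plus in-slice coordinates."""
    vol = np.moveaxis(volume, axis, 0)               # slices run along the chosen axis
    _, yy, xx = np.indices(vol.shape)
    return np.stack([vol, yy, xx], axis=-1).reshape(-1, 3), vol.shape

def multi_rf_segmentation(test_volume, train_volume, train_labels):
    """One Random Forest per orthogonal direction, fused by majority vote."""
    votes = np.zeros(test_volume.shape, dtype=int)
    for axis in range(3):
        X_tr, _ = voxel_features(train_volume, axis)
        y_tr = np.moveaxis(train_labels, axis, 0).ravel()
        rf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X_tr, y_tr)
        X_te, shape = voxel_features(test_volume, axis)
        pred = rf.predict(X_te).reshape(shape)
        votes += np.moveaxis(pred, 0, axis).astype(int)   # back to the original orientation
    return (votes >= 2).astype(int)                   # hippocampus where >= 2 directions agree

def dice(seg, gold):
    """Dice overlap between a segmentation and the gold standard."""
    return 2.0 * np.logical_and(seg, gold).sum() / (seg.sum() + gold.sum())
```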

    Automated hippocampal segmentation in 3D MRI using random undersampling with boosting algorithm

    The automated identification of brain structures in Magnetic Resonance Imaging is very important both in neuroscience research and as a possible clinical diagnostic tool. In this study, a novel strategy for fully automated hippocampal segmentation in MRI is presented. It is based on a supervised algorithm, called RUSBoost, which combines random undersampling of the data with a boosting algorithm. RUSBoost is an algorithm specifically designed for imbalanced classification, suitable for large data sets because it uses random undersampling of the majority class. The RUSBoost performance was compared with that of AdaBoost, Random Forest and the publicly available brain segmentation package FreeSurfer. This study was conducted on a data set of 50 T1-weighted structural brain images. The RUSBoost-based segmentation tool achieved the best results, with a Dice index of (Formula presented.) ((Formula presented.)) for the left (right) brain hemisphere. An independent data set of 50 T1-weighted structural brain scans was used for an independent validation of the fully trained strategies. Again the RUSBoost segmentations compared favorably with manual segmentations, with the highest performance among the four tools. Moreover, the Pearson correlation coefficient between hippocampal volumes computed by manual and RUSBoost segmentations was 0.83 (0.82) for the left (right) side, statistically significant, and higher than those computed by AdaBoost, Random Forest and FreeSurfer. The proposed method may be suitable for accurate, robust and statistically significant segmentations of hippocampi.
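    A hedged sketch of imbalanced voxel classification with random undersampling plus boosting, using imbalanced-learn's RUSBoostClassifier as a stand-in for the algorithm described above; the study's own implementation and feature set are not given here, and the data below are synthetic placeholders.

```python
import numpy as np
from imblearn.ensemble import RUSBoostClassifier     # pip install imbalanced-learn
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-voxel features X and labels y (1 = hippocampus, 0 = background);
# hippocampus voxels are a small minority of the brain volume, hence the class imbalance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 10))
y = (rng.random(100_000) < 0.02).astype(int)         # about 2% positive class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random undersampling of the majority class is performed inside each boosting round.
clf = RUSBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Dice overlap between predicted and reference labels
dice = 2.0 * np.logical_and(pred, y_te).sum() / (pred.sum() + y_te.sum())
print(f"Dice: {dice:.3f}")
```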

    Hybrid x-space: a new approach for MPI reconstruction

    Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field-free point (FFP) motion is essential for its implementation. Our simulation work focuses on a new approach to MPI reconstruction, called hybrid x-space (HXS), which combines the two previous methods. Specifically, our approach is based on XS reconstruction, which requires knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time by using a smaller number of sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, an appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open-geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.
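    A toy sketch of the x-space gridding step on which the hybrid approach builds: the measured signal is divided by the FFP speed and accumulated at the instantaneous FFP position. Here the velocity is obtained by finite differences of measured FFP positions, standing in for the calibration-based estimate described above; all names, the 2-D geometry and the nearest-neighbour gridding are illustrative assumptions.

```python
import numpy as np

def xspace_reconstruct(signal, ffp_positions, dt, grid_shape, fov):
    """Toy 2-D x-space gridding: divide the signal by the FFP speed and
    accumulate it at the nearest grid point of the FFP trajectory.
    signal: (T,) samples; ffp_positions: (T, 2) positions in FOV units centred at 0."""
    # FFP velocity by finite differences of the (measured) positions
    velocity = np.gradient(ffp_positions, dt, axis=0)
    speed = np.linalg.norm(velocity, axis=1)
    speed[speed < 1e-12] = np.nan                    # avoid division by ~0 near turning points

    image = np.zeros(grid_shape)
    counts = np.zeros(grid_shape)
    # Map positions from [-fov/2, fov/2] to pixel indices
    idx = ((ffp_positions / fov + 0.5) * np.array(grid_shape)).astype(int)
    idx = np.clip(idx, 0, np.array(grid_shape) - 1)
    vals = signal / speed                            # velocity-compensated signal
    for (i, j), v in zip(idx, vals):
        if np.isfinite(v):
            image[i, j] += v
            counts[i, j] += 1
    return image / np.maximum(counts, 1)             # average overlapping samples
```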

    CADe tools for early detection of breast cancer

    A breast neoplasia is often marked by the presence of microcalcifications and massive lesions in the mammogram: hence the need for tools able to recognize such lesions at an early stage. Our collaboration of Italian physicists and radiologists has built a large distributed database of digitized mammographic images and has developed a Computer Aided Detection (CADe) system for the automatic analysis of mammographic images, installed in some Italian hospitals and connected through a GRID infrastructure. For microcalcifications, our CADe divides the digital mammogram into wide windows, which are processed by a convolution filter; a self-organizing map then analyzes each window and produces 8 principal components, which are used as input to a feed-forward neural network (FFNN) that classifies each window against a threshold. For massive lesions, we select all the relevant intensity maxima and define a ROI radius around each; from each ROI found we extract parameters that are used as input to an FFNN to distinguish between pathological and non-pathological ROIs. We present here a test of our CADe system used as a second reader, and a comparison with another (commercial) CADe system. Comment: 4 pages, Proceedings of the 4th International Symposium on Nuclear and Related Techniques 2003, Vol. unico, pp. d10/1-d10/4, Havana, Cuba.
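    A hedged sketch of the window-based microcalcification pipeline outlined above. PCA and scikit-learn's MLPClassifier stand in for the self-organizing map and the FFNN of the original system, and the window size, filter kernel and detection threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

WIN = 64                                             # window side in pixels, illustrative

def extract_windows(mammogram, step=32):
    """Slide a square window over the image and return flattened patches."""
    h, w = mammogram.shape
    return np.array([mammogram[i:i+WIN, j:j+WIN].ravel()
                     for i in range(0, h - WIN + 1, step)
                     for j in range(0, w - WIN + 1, step)], dtype=float)

def high_pass(windows):
    """Convolution filter enhancing small bright structures (illustrative kernel)."""
    kernel = np.full((3, 3), -1.0); kernel[1, 1] = 8.0
    return np.array([convolve(w.reshape(WIN, WIN), kernel).ravel() for w in windows])

def train(windows, labels):
    """labels: 1 = window contains a microcalcification cluster, 0 = otherwise."""
    # 8 components per window, standing in for the SOM features of the original CADe
    pca = PCA(n_components=8).fit(high_pass(windows))
    ffnn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    ffnn.fit(pca.transform(high_pass(windows)), labels)
    return pca, ffnn

def detect(pca, ffnn, windows, threshold=0.5):
    """Flag windows whose FFNN output exceeds the detection threshold."""
    return ffnn.predict_proba(pca.transform(high_pass(windows)))[:, 1] >= threshold
```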

    GPCALMA: a Grid-based tool for Mammographic Screening

    The next generation of High Energy Physics (HEP) experiments requires a GRID approach to a distributed computing system and the associated data management: the key concept is the Virtual Organisation (VO), a group of distributed users with a common goal and the will to share their resources. A similar approach is being applied to a group of hospitals which joined the GPCALMA project (Grid Platform for Computer Assisted Library for MAmmography), which will allow common screening programs for the early diagnosis of breast and, in the future, lung cancer. HEP techniques come into play in writing the application code, which makes use of neural networks for the image analysis and has proved useful in improving the radiologists' performance in diagnosis. GRID technologies allow remote image analysis and interactive online diagnosis, with the potential for a significant reduction of the delays presently associated with screening programs. A prototype of the system, based on AliEn GRID Services, is already available, with a central server running common services and several clients connecting to it. Mammograms can be acquired in any location; the related information required to select and access them at any time is stored in a common service called the Data Catalogue, which can be queried by any client. The result of a query can be used as input for analysis algorithms, which are executed on nodes that are in general remote to the user (but always local to the input images) thanks to the PROOF facility. The selected approach avoids data transfers for all the images with a negative diagnosis (about 95% of the sample) and allows an almost real-time diagnosis for the 5% of images with high cancer probability. Comment: 9 pages, 4 figures; Proceedings of the HealthGrid Workshop 2004, January 29-30, Clermont-Ferrand, France.