    An easy way to increase confidence in beta-amyloid PET evaluation

    BACKGROUND: In patients with brain atrophy, it is not easy to distinguish pathologic uptake of flutemetamol (FMM) in the gray matter from nonspecific, physiologic uptake in the white matter. In this paper we suggest a simple image-processing method. MATERIAL AND METHODS: The proof-of-concept study involved three patients with mild cognitive impairment and different graphical findings on FMM-PET. Two-phase FMM-PET was acquired; the early phase represented the perfusion of the gray matter, while the late phase depicted the white matter and the beta-amyloid load in the gray matter. The border of the gray matter was easily extracted from the early-phase images using thresholding and the isocontour “Edges” color table. The late phase was registered with the edge images of the early phase and displayed using alpha-blending. RESULTS: Early- and late-phase image fusion displayed with appropriate color tables is presented in three different cases to illustrate the added value of the suggested approach. CONCLUSIONS: Composite late-phase images with enhanced gray-matter borders strongly facilitate assessment of beta-amyloid presence in the gray matter. This is especially helpful in patients with brain atrophy.
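
    As a rough illustration of the processing chain described above (threshold the early-phase image, keep only the gray-matter border, then alpha-blend it over the late-phase image), the following Python sketch works on a single pair of already co-registered 2D slices; the file names and the 40% isocontour threshold are assumptions made for illustration, not values from the paper.

    import numpy as np
    import matplotlib.pyplot as plt
    from skimage import morphology

    # Load co-registered 2D slices of the two phases (file names are placeholders)
    early = np.load("early_phase_slice.npy")   # perfusion-weighted early phase
    late = np.load("late_phase_slice.npy")     # amyloid-weighted late phase

    # Threshold the early phase to get a gray-matter mask, then keep only its border
    gm_mask = early > 0.4 * early.max()        # assumed isocontour level
    gm_edges = gm_mask ^ morphology.binary_erosion(gm_mask)   # 1-pixel "Edges" contour

    # Alpha-blend: show the late phase and overlay the gray-matter border on top
    plt.imshow(late, cmap="gray")
    plt.imshow(np.ma.masked_where(~gm_edges, gm_edges), cmap="autumn", alpha=0.7)
    plt.axis("off")
    plt.show()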

    FLT-PET in previously untreated patients with low-grade glioma can predict their overall survival

    BACKGROUND: Low-grade gliomas (LGG) of the brain have an uncertain prognosis, as many of them show continuous growth or upgrade over the course of time. We retrospectively investigated the role of positron emission tomography with 3’-deoxy-3’-[18F]fluorothymidine (FLT-PET) in the prediction of overall survival and event-free survival in patients with untreated LGG. No such information is yet available in the literature. MATERIAL AND METHODS: Forty-one patients with previously untreated LGG underwent 55 FLT-PET investigations during their follow-up because of subjective complaints, objective worsening of clinical condition, equivocal findings, or progression on magnetic resonance imaging. The time interval before referral for neurosurgical or radiation treatment was taken as event-free survival, and the interval until death as overall survival. Standardized uptake values (SUV) were measured, and a 3-point scale of subjective assessment was also applied. ROC analysis was used to define cut-off values, and the log-rank test was used for comparison of Kaplan-Meier survival curves. RESULTS: Eight patients (with a total of 9 FLT-PET studies) died during follow-up. Progression leading to referral for therapy was recorded in 24 patients (a total of 33 FLT-PET studies). With a cut-off value of SUVmean = 0.236, a median overall survival of 1,007 days was observed in the test-positive subgroup, while median overall survival in the test-negative subgroup was not reached (p = 0.0002, hazard ratio = 17.6). Subjective assessment resulted in a hazard ratio of 11.5 (p = 0.0001). Only marginal significance (p = 0.0562) was achieved in the prediction of event-free survival. CONCLUSIONS: Increased FLT uptake in previously untreated patients with LGG is a strong predictor of overall survival. On the other hand, prediction of event-free survival was not successful in our cohort, probably because of the high prevalence of patients who needed treatment due to symptoms caused by a space-occupying lesion, regardless of the proliferative activity of the tumour.
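
    The statistical workflow described above (dichotomizing studies at the ROC-derived SUVmean cut-off, comparing Kaplan-Meier curves with the log-rank test, and estimating a hazard ratio) can be sketched as follows in Python with lifelines; the file name and column names are hypothetical, and only the 0.236 cut-off comes from the abstract.

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    # Hypothetical table: one row per study, with overall survival in days,
    # a death indicator, and the measured SUVmean
    df = pd.read_csv("flt_pet_cohort.csv")
    df["test_positive"] = (df["suv_mean"] > 0.236).astype(int)   # ROC-derived cut-off

    pos = df[df["test_positive"] == 1]
    neg = df[df["test_positive"] == 0]

    # Kaplan-Meier curves for overall survival in the two subgroups
    km = KaplanMeierFitter()
    km.fit(pos["os_days"], event_observed=pos["death"], label="SUVmean > 0.236").plot_survival_function()
    km.fit(neg["os_days"], event_observed=neg["death"], label="SUVmean <= 0.236").plot_survival_function()

    # Log-rank test for the difference between the curves
    lr = logrank_test(pos["os_days"], neg["os_days"], pos["death"], neg["death"])
    print("log-rank p =", lr.p_value)

    # Hazard ratio from a univariable Cox model on the dichotomized marker
    cph = CoxPHFitter().fit(df[["os_days", "death", "test_positive"]],
                            duration_col="os_days", event_col="death")
    cph.print_summary()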

    Characterization of 46 patient-specific BCR-ABL1 fusions and detection of SNPs upstream and downstream the breakpoints in chronic myeloid leukemia using next generation sequencing

    In chronic myeloid leukemia, the identification of individual BCR-ABL1 fusions is required for the development of a personalized-medicine approach to minimal residual disease monitoring at the DNA level. Next generation sequencing (NGS) of amplicons larger than 1000 bp simplified and accelerated the characterization of patient-specific BCR-ABL1 genomic fusions. NGS of large regions upstream and downstream of the individual breakpoints in the BCR and ABL1 genes, respectively, also provided information about sequence variants such as single nucleotide polymorphisms.
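
    One way such a patient-specific genomic junction, once characterized by NGS, could be used for DNA-level follow-up is simply to count reads spanning the known breakpoint sequence in a later sample. The sketch below is a hypothetical illustration of that idea, not the authors' pipeline; the junction sequence and file name are invented.

    # Patient-specific genomic junction: last bases of the BCR side joined to the
    # first bases of the ABL1 side (sequence is entirely made up for illustration)
    JUNCTION = "ACCTGTAGGT" + "TTACGGAACT"

    def count_junction_reads(fastq_path, junction=JUNCTION):
        """Count FASTQ reads that contain the patient-specific fusion junction."""
        hits, total = 0, 0
        with open(fastq_path) as fh:
            for i, line in enumerate(fh):
                if i % 4 == 1:                 # sequence lines in a FASTQ record
                    total += 1
                    if junction in line:
                        hits += 1
        return hits, total

    hits, total = count_junction_reads("followup_sample.fastq")
    print(f"{hits} junction-spanning reads out of {total}")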

    Orthophosphate-P in the nutrient impacted River Taw and its catchment (SW England) between 1990 and 2013.

    Excess dissolved phosphorus (as orthophosphate-P) contributes to reduced river water quality within Europe and elsewhere. This study reports results from analysis of a 23-year (1990-2013) water quality dataset for orthophosphate-P in the rural Taw catchment (SW England). Orthophosphate-P and river flow relationships and temporal variations in orthophosphate-P concentrations indicate the significant contribution of sewage (across the catchment) and industrial effluent (upper R. Taw) to orthophosphate-P concentrations (up to 96%), particularly during the low-flow summer months when maximum algal growth occurs. In contrast, concentrations of orthophosphate-P from diffuse sources within the catchment were more important (>80%) at the highest river flows. The results from a three end-member mixing model incorporating effluent, groundwater and diffuse orthophosphate-P source terms suggested that sewage and/or industrial effluent contributes ≥50% of the orthophosphate-P load for 27-48% of the time across the catchment. Under the Water Framework Directive (WFD) Phase 2 standards for reactive phosphorus, introduced in 2015, the R. Taw was generally classified as Poor to Moderate Ecological Status, with Good Status occurring more frequently in the tributary rivers. Failure to achieve Good Ecological Status occurred even though riverine orthophosphate-P concentrations have decreased since the early 2000s (although the mechanism(s) responsible could not be identified). For the first time, it has been demonstrated that sewage and industrial effluent sources of alkalinity to the river can give erroneous boundary concentrations of orthophosphate-P for WFD Ecological Status classification, the extent of which depends on the proportion of effluent alkalinity present. This is likely to be a Europe-wide issue which should be examined in more detail.
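
    A three end-member mixing model of the kind used above reduces to solving a small linear system: the three source fractions must sum to one and must reproduce the measured concentrations of two conservative tracers. The Python sketch below illustrates this with invented end-member values; none of the numbers are from the study.

    import numpy as np

    # End-member matrix: columns = effluent, groundwater, diffuse runoff
    # rows = mass-balance constraint (fractions sum to 1), tracer 1, tracer 2
    A = np.array([
        [1.0,   1.0,   1.0],    # fractions sum to 1
        [420.0, 250.0, 60.0],   # tracer 1 (e.g. alkalinity) in each end member
        [35.0,  5.0,   12.0],   # tracer 2 in each end member
    ])
    river = np.array([1.0, 180.0, 14.0])    # measured river sample: 1, tracer 1, tracer 2

    fractions = np.linalg.solve(A, river)   # [f_effluent, f_groundwater, f_diffuse]
    print(dict(zip(["effluent", "groundwater", "diffuse"], fractions.round(3))))

    # Share of the orthophosphate-P load attributable to effluent in this sample
    op = np.array([2.5, 0.02, 0.08])        # assumed orthophosphate-P of each end member (mg/L)
    effluent_share = fractions[0] * op[0] / (fractions @ op)
    print(f"effluent contributes {effluent_share:.0%} of the orthophosphate-P load")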

    A Unified Multi-Functional Dynamic Spectrum Access Framework: Tutorial, Theory and Multi-GHz Wideband Testbed

    Dynamic spectrum access is a must-have ingredient for future sensors, which are ideally cognitive. The goal of this paper is a tutorial treatment of wideband cognitive radio and radar: a convergence of (1) a survey of algorithms, (2) a survey of hardware platforms, (3) challenges for a multi-function (radar/communications) multi-GHz front end, (4) compressed sensing for multi-GHz waveforms (a revolutionary approach to A/D conversion), (5) machine learning for cognitive radio/radar, (6) quickest detection, and (7) overlay/underlay cognitive radio waveforms. One focus of this paper is the multi-GHz front end, which is the central challenge for next-generation cognitive sensors. The unifying theme is to spell out the convergence of cognitive radio, radar, and anti-jamming. Moore's law drives ever more system functions into the digital domain. From a system viewpoint, this paper gives the first comprehensive treatment of the functions and challenges of such a multi-function (wideband) system, bringing together the necessary interdisciplinary knowledge.
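
    As an illustration of item (6), quickest detection, the Python sketch below runs Page's CUSUM test on a simulated power measurement stream to declare the appearance of a primary user as early as possible. It is a generic textbook example rather than an algorithm from the paper, and the signal levels and threshold are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    mu0, mu1, sigma = 0.0, 1.0, 1.0         # pre-/post-change mean power, noise std
    x = np.concatenate([rng.normal(mu0, sigma, 200),    # channel idle
                        rng.normal(mu1, sigma, 200)])   # primary user appears at n = 200

    threshold = 8.0                          # trades false alarms against detection delay
    s, alarm_at = 0.0, None
    for n, sample in enumerate(x):
        # log-likelihood ratio of one Gaussian sample under mu1 versus mu0
        llr = (mu1 - mu0) / sigma**2 * (sample - (mu0 + mu1) / 2)
        s = max(0.0, s + llr)                # Page's CUSUM recursion
        if s > threshold:
            alarm_at = n
            break

    print("change declared at sample", alarm_at)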

    2023 CERN openlab Technical Workshop


    Ensemble Models for Calorimeter Simulations

    The foreseen increase in demand for simulations of particle transport through detectors in High Energy Physics has motivated the search for faster alternatives to Monte Carlo-based simulation. Deep learning approaches provide promising results in terms of speed-up and accuracy, among which generative adversarial networks (GANs) appear to be particularly successful in reproducing realistic detector data. However, GANs tend to suffer from issues such as not reproducing the full variability of the training data, missing modes (mode collapse), and unstable convergence. Various ensemble techniques applied to image generation have shown that these issues can be mitigated by deploying either multiple generators or multiple discriminators. This work follows the development of a GAN with two-dimensional convolutions that reproduces 3D images of an electromagnetic calorimeter. We build on top of this model and construct an ensemble of generators. With each new generator, the ensemble shows better agreement with the Monte Carlo images in terms of shower shapes and the sampling fraction.
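
    The generator-ensemble idea can be pictured as a pool of independently trained generators from which each requested shower is drawn at random, so that the pooled samples cover more of the training distribution than any single member. The PyTorch sketch below is only a schematic of that sampling scheme; the toy architecture, latent size and voxel grid are placeholders, not the model used in this work.

    import torch
    import torch.nn as nn

    LATENT_DIM = 100

    class ToyGenerator(nn.Module):
        """Placeholder generator mapping a latent vector to a flattened 3D shower."""
        def __init__(self, out_voxels=25 * 25 * 25):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                     nn.Linear(256, out_voxels), nn.ReLU())

        def forward(self, z):
            return self.net(z)

    class GeneratorEnsemble(nn.Module):
        """Pool of trained generators; each requested shower comes from a random member."""
        def __init__(self, generators):
            super().__init__()
            self.generators = nn.ModuleList(generators)

        @torch.no_grad()
        def sample(self, n):
            choices = torch.randint(len(self.generators), (n,))
            z = torch.randn(n, LATENT_DIM)
            return torch.stack([self.generators[int(c)](z[i]) for i, c in enumerate(choices)])

    ensemble = GeneratorEnsemble([ToyGenerator() for _ in range(3)])
    showers = ensemble.sample(8)             # 8 synthetic showers, flattened voxels
    print(showers.shape)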