
    Investigation of the Effects of Image Signal-to-Noise Ratio on TSPO PET Quantification of Neuroinflammation

    Neuroinflammation may be imaged using positron emission tomography (PET) and the tracer [11C]-PK11195. Accurate and precise quantification of 18 kilodalton Translocator Protein (TSPO) binding parameters in the brain has proven difficult with this tracer, due to an unfavourable combination of low target concentration in tissue, low brain uptake of the tracer and relatively high non-specific binding, all of which lead to higher levels of relative image noise. To address these limitations, research into new radioligands for the TSPO, with higher brain uptake and lower non-specific binding relative to [11C]-PK11195, is being conducted worldwide. However, factors other than radioligand properties are known to influence signal-to-noise ratio in quantitative PET studies, including scanner sensitivity, image reconstruction algorithms and data analysis methodology. The aim of this thesis was to investigate and validate computational tools for predicting image noise in dynamic TSPO PET studies, and to employ those tools to investigate the factors that affect image SNR and the reliability of TSPO quantification in the human brain. The feasibility of performing multiple (n≥40) independent Monte Carlo simulations for each dynamic [11C]-PK11195 frame, with realistic modelling of the radioactivity source, attenuation and PET tomograph geometries, was investigated. A Beowulf-type high-performance computer cluster, constructed from commodity components, was found to be well suited to this task. Timing tests on a single desktop computer system indicated that a computer cluster capable of simulating an hour-long dynamic [11C]-PK11195 PET scan, with 40 independent repeats, and with a total simulation time of less than 6 weeks, could be constructed for less than 10,000 Australian dollars. A computer cluster containing 44 computing cores was therefore assembled, and a peak simulation rate of 2.84 × 10^5 photon pairs per second was achieved using the GEANT4 Application for Tomographic Emission (GATE) Monte Carlo simulation software. A simulated PET tomograph was developed in GATE that closely modelled the performance characteristics of several real-world clinical PET systems in terms of spatial resolution, sensitivity, scatter fraction and counting-rate performance. The simulated PET system was validated using adaptations of the National Electrical Manufacturers Association (NEMA) quality assurance procedures within GATE. Image noise in dynamic TSPO PET scans was estimated by performing n=40 independent Monte Carlo simulations of an hour-long [11C]-PK11195 scan, and of an hour-long dynamic scan for a hypothetical TSPO ligand with double the brain activity concentration of [11C]-PK11195. From these data an analytical noise model was developed that allowed image noise to be predicted for any combination of brain tissue activity concentration and scan duration. The noise model was validated for the purpose of determining the precision of kinetic parameter estimates for TSPO PET. An investigation was made into the effects of activity concentration in tissue, radionuclide half-life, injected dose and compartmental model complexity on the reproducibility of kinetic parameters. Injecting 555 MBq of a carbon-11 labelled TSPO tracer produced binding parameter precision similar to 185 MBq of a fluorine-18 labelled tracer, and a moderate (20%) reduction in precision was observed for the reduced carbon-11 dose of 370 MBq.
    Results indicated that a factor of 2 increase in frame count level (relative to [11C]-PK11195, and due, for example, to higher ligand uptake, injected dose or absolute scanner sensitivity) is required to obtain reliable binding parameter estimates for small regions of interest when fitting a two-tissue compartment, four-parameter compartmental model. However, compartmental model complexity had a similarly large effect: reducing model complexity from the two-tissue compartment, four-parameter model to a one-tissue compartment, two-parameter model produced a 78% reduction in the coefficient of variation of the binding parameter estimates at each tissue activity level and region size studied. In summary, this thesis describes the development and validation of Monte Carlo methods for estimating image noise in dynamic TSPO PET scans, and analytical methods for predicting relative image noise for a wide range of tissue activity concentrations and acquisition durations. The findings of this research suggest that a broader consideration of the kinetic properties of novel TSPO radioligands, with a view to selecting ligands that are potentially amenable to analysis with a simple one-tissue compartment model, is at least as important as efforts directed towards reducing image noise, such as higher brain uptake, in the search for the next generation of TSPO PET tracers.
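    The fitted analytical noise model itself is not reproduced in the abstract, but the counting-statistics reasoning it builds on can be sketched: relative image noise behaves roughly as the inverse square root of the counts detected in a frame, and frame counts follow from the tissue activity concentration, frame timing and radionuclide decay. The Python snippet below is a minimal illustration under those generic assumptions only; the function names, sensitivity term and activity values are placeholders, not the thesis model or data.

    import numpy as np

    def expected_counts(c0_kbq_ml, frame_start_s, frame_len_s, half_life_s, sensitivity=1.0):
        """Counts detected in one frame: activity concentration decaying from c0,
        integrated over the frame and scaled by an effective sensitivity term.
        All constants are illustrative placeholders."""
        lam = np.log(2.0) / half_life_s
        integral = c0_kbq_ml / lam * (np.exp(-lam * frame_start_s)
                                      - np.exp(-lam * (frame_start_s + frame_len_s)))
        return sensitivity * integral

    def relative_noise(counts, k=1.0):
        """Poisson-like noise model: coefficient of variation ~ k / sqrt(counts)."""
        return k / np.sqrt(counts)

    # Compare [11C]-PK11195-like uptake with a hypothetical ligand giving double the uptake
    c11_half_life_s = 20.4 * 60.0
    for c0 in (5.0, 10.0):   # kBq/mL at frame start (illustrative values only)
        n = expected_counts(c0, frame_start_s=1800.0, frame_len_s=300.0, half_life_s=c11_half_life_s)
        print(f"c0 = {c0:4.1f} kBq/mL  counts = {n:10.1f}  relative noise = {relative_noise(n):.4f}")

    Under this simple model, doubling the tissue activity concentration doubles the frame counts and reduces the relative noise by a factor of √2, which is consistent with the direction, though not necessarily the magnitude, of the effects examined in the thesis.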

    PSF Estimation by Gradient Descent Fit to the ESF

    Calibration of scanners and cameras usually involves measuring the point spread function (PSF). When edge data are used to measure the PSF, the differentiation step amplifies the noise. A parametric fit of the functional form of the edge spread function (ESF) directly to the measured edge data is proposed to eliminate this noise amplification. Experiments used to test this method show that the Cauchy functional form fits better than the Gaussian or other forms tried. The effect of using a functional form of the PSF that differs from the true PSF is explored by considering bilevel images formed by thresholding. The amount of mismatch seen can be related to the difference between the respective kurtosis factors.
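    The ESF of a Cauchy (Lorentzian) PSF profile is an arctangent, so the fit the abstract describes can be sketched by regressing that form directly on the raw edge samples, with no differentiation step. The snippet below is an illustrative sketch only: it uses a least-squares curve_fit rather than the gradient-descent fit named in the title, and the synthetic edge data and parameter names are assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def cauchy_esf(x, x0, gamma, lo, hi):
        """Edge spread function of a Cauchy (Lorentzian) PSF profile: a scaled arctangent.
        x0 = edge position, gamma = half-width at half-maximum, lo/hi = dark/bright levels."""
        return lo + (hi - lo) * (0.5 + np.arctan((x - x0) / gamma) / np.pi)

    # Synthetic edge profile standing in for measured edge data
    x = np.linspace(-10.0, 10.0, 200)
    rng = np.random.default_rng(0)
    measured = cauchy_esf(x, 0.3, 1.2, 10.0, 240.0) + rng.normal(0.0, 3.0, x.size)

    # Fit the ESF model directly to the raw edge samples; no differentiation of noisy data
    p0 = [0.0, 1.0, measured.min(), measured.max()]
    popt, _ = curve_fit(cauchy_esf, x, measured, p0=p0)
    print("estimated edge position %.3f, estimated PSF half-width %.3f" % (popt[0], popt[1]))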

    A case study in digitizing a photographic collection

    This paper reviews the processes involved in the digitisation, display and storage of medium-sized collections of photographs using mid-range commercially available equipment. Guidelines for evaluating the performance of these digitisation processes based on aspects of image quality are provided. A collection of photographic slides, representing first-generation analogue reproductions of a photographic collection from the nineteenth century, is treated as a case study. Constraints on the final image quality and the implications of digital archiving are discussed. Full descriptions of device characterisation and calibration procedures are given, and results from objective measurements carried out to assess the digitisation system are presented. The important issues of file format, physical storage and data migration are also addressed.

    Estimating Scanning Characteristics from Corners in Bilevel Images

    Degradations that occur during scanning can cause errors in Optical Character Recognition (OCR). Scans made in bilevel mode (no grey scale) from high-contrast source patterns are the input to the estimation processes. Two scanner system parameters are estimated from bilevel scans using models of the scanning process and bilevel source patterns. The scanner's point spread function (PSF) width and the binarization threshold are estimated by using corner features in the scanned images. These estimation algorithms were tested in simulation and with scanned test patterns. The resulting estimates are close in value to what is expected based on grey-level analysis. The results of estimation are used to produce synthetically scanned characters that in most cases bear a strong resemblance to the characters scanned on the scanner at the same settings as the test pattern used for estimation.
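    The estimation relies on a forward model of how a sharp corner is blurred and then thresholded; the shape of the resulting bilevel corner encodes both the PSF width and the binarization threshold. The sketch below illustrates only that forward model, using a Gaussian PSF as a stand-in and arbitrary parameter values; it is not the estimation algorithm from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def scanned_corner(size=64, psf_sigma=1.5, threshold=0.5):
        """Forward model of scanning a high-contrast corner in bilevel mode:
        ideal corner -> blur with the PSF -> binarize at the threshold.
        A Gaussian PSF is used here purely as a stand-in."""
        corner = np.zeros((size, size))
        corner[size // 2:, size // 2:] = 1.0           # ideal right-angle corner
        blurred = gaussian_filter(corner, psf_sigma)   # grey-level scanner response
        return blurred > threshold                     # bilevel output

    # The rounding of the corner tip depends on both PSF width and threshold,
    # which is the relationship that corner-based estimation inverts.
    for sigma in (1.0, 2.0):
        for thr in (0.4, 0.6):
            img = scanned_corner(psf_sigma=sigma, threshold=thr)
            print(f"sigma = {sigma}, threshold = {thr}: foreground pixels = {int(img.sum())}")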

    A Study of the preferability of desktop generated color separations over high end generated color separations in newspaper printing

    Over the last decade, newspapers have been compelled to introduce a lot more process color to their pages. This coincides with the introduction of USA Today, but also with the fact that all forms of media, including television and magazines, have become significantly more colorful. There is considerable market pressure on newspapers to be colorful as well. Continued audience interest and advertising dollars are at stake. This is not a trivial issue. At the same time, newspapers are being admonished to make sure that, if they do introduce color to their pages, it should be of high quality. Low quality color is worse than no color at all. During most of this period, traditional high end scanners comprised the only reasonable option available for the task of generating color separation films. In recent years, with the improvement of microcomputer-based technology, desktop scanning systems have become an option. The quality of the output of these systems has been suspect, however. On the other hand, it is generally acknowledged that these systems are improving. Of all the forms of lithographic printing, newspaper printing (coldset offset lithography on newsprint) is least able to take advantage of all the data that the upstream processes, particularly the color separation process, can provide. It represents the lowest level of reproduction fidelity. It has a shorter density range, requires a lower screening frequency, and is restricted to a more limited color gamut, for example, than the other forms of lithographic printing. This is due primarily to the substrate, newsprint, but also in part to news inks. Is it possible that today's desktop scanners now provide output whose quality level is sufficient for this printing process? Would readers show no preference between reproductions made from separations generated by these desktop systems and those made from separations generated by high end scanning systems? That is the fundamental question addressed in this thesis. A common set of transparencies was separated through a high end system and a desktop system. Care was taken to prevent either system from being disadvantaged through parameter settings. The separations from both systems were stripped into a common test form and printed on an offset newspaper press. Judges evaluated the pairs of images in a paired comparison test, indicating a preference for the high end generated image or the desktop generated image. The results indicate that readers do indeed continue to show a clear preference for images generated on a high end scanner. The reader is requested to take note of two caveats. First, this test represents a comparison of two scanning systems, not two families of scanners. The author used the two best scanners from each family that were available to him at the time of the test; the reader applies the findings of this test to the families of scanners that these two scanners represent at his or her own peril. Second, the reader is admonished to recognize that these are times of rapidly changing technology. The results of an experiment such as this could change as the technology advances.
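    The abstract does not state how the paired comparison judgements were analysed; one common minimal treatment of such data is a two-sided binomial (sign) test on the preference counts for each image pair. The snippet below is purely illustrative, with invented counts, and may differ from the procedure actually used in the thesis.

    from scipy.stats import binomtest

    # Hypothetical preference counts for one image pair (not data from the thesis):
    # of n judgements, k preferred the high end separation over the desktop one.
    k, n = 38, 50
    result = binomtest(k, n, p=0.5, alternative="two-sided")
    print(f"preference for high end image: {k}/{n}, p-value = {result.pvalue:.4f}")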

    Task adapted reconstruction for inverse problems

    The paper considers the problem of performing a task defined on a model parameter that is only observed indirectly through noisy data in an ill-posed inverse problem. A key aspect is to formalize the steps of reconstruction and task as appropriate estimators (non-randomized decision rules) in statistical estimation problems. The implementation makes use of (deep) neural networks to provide a differentiable parametrization of the family of estimators for both steps. These networks are combined and jointly trained against suitable supervised training data in order to minimize a joint differentiable loss function, resulting in an end-to-end task adapted reconstruction method. The suggested framework is generic, yet adaptable, with a plug-and-play structure for adjusting both the inverse problem and the task at hand. More precisely, the data model (forward operator and statistical model of the noise) associated with the inverse problem is exchangeable, e.g., by using a neural network architecture given by a learned iterative method. Furthermore, any task that is encodable as a trainable neural network can be used. The approach is demonstrated on joint tomographic image reconstruction and classification, and on joint tomographic image reconstruction and segmentation.
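    The end-to-end idea the abstract describes, a reconstruction network chained to a task network and trained jointly against a combined loss, can be sketched in a few lines of PyTorch. The toy networks, the MSE and cross-entropy losses and the weighting below are placeholder choices for illustration, not the architectures or losses used in the paper.

    import torch
    import torch.nn as nn

    class Reconstructor(nn.Module):
        """Placeholder reconstruction network mapping noisy data to image space.
        The paper would use, e.g., a learned iterative method here."""
        def __init__(self, n):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n, n), nn.ReLU(), nn.Linear(n, n))
        def forward(self, y):
            return self.net(y)

    class TaskNet(nn.Module):
        """Placeholder task network, e.g. a classifier acting on the reconstruction."""
        def __init__(self, n, n_classes):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, n_classes))
        def forward(self, x):
            return self.net(x)

    n, n_classes = 128, 10
    recon, task = Reconstructor(n), TaskNet(n, n_classes)
    optimizer = torch.optim.Adam(list(recon.parameters()) + list(task.parameters()), lr=1e-3)
    w = 0.5   # trade-off between reconstruction fidelity and task performance

    # One joint training step on a dummy batch: y = noisy data, x = ground-truth image, c = label
    y, x, c = torch.randn(8, n), torch.randn(8, n), torch.randint(0, n_classes, (8,))
    x_hat = recon(y)
    loss = (1 - w) * nn.functional.mse_loss(x_hat, x) + w * nn.functional.cross_entropy(task(x_hat), c)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()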

    Functional imaging of the developing brain with wearable high-density diffuse optical tomography: a new benchmark for infant neuroimaging outside the scanner environment

    Studies of cortical function in the awake infant are extremely challenging to undertake with traditional neuroimaging approaches. Partly in response to this challenge, functional near-infrared spectroscopy (fNIRS) has become increasingly common in developmental neuroscience, but it has significant limitations, including resolution, spatial specificity and ergonomics. In adults, high-density arrays of near-infrared sources and detectors have recently been shown to yield dramatic improvements in spatial resolution and specificity when compared to typical fNIRS approaches. However, most existing fNIRS devices only permit the acquisition of ∼20-100 sparsely distributed fNIRS channels, and increasing the number of optodes presents significant mechanical challenges, particularly for infant applications. A new generation of wearable, modular, high-density diffuse optical tomography (HD-DOT) technologies has recently emerged that overcomes many of the limitations of traditional, fibre-based and low-density fNIRS measurements. Driven by the development of this new technology, we have undertaken the first study of the infant brain using wearable HD-DOT. Using a well-established social stimulus paradigm, and combining this new imaging technology with advances in cap design and spatial registration, we show that it is now possible to obtain high-quality functional images of the infant brain with minimal constraints on either the environment or the infant participants. Our results are consistent with prior low-density fNIRS measures based on similar paradigms, but demonstrate superior spatial localization, improved depth specificity, higher SNR and a dramatic improvement in the consistency of the responses across participants. Our data retention rates also demonstrate that this new generation of wearable technology is well tolerated by the infant population.

    Comparison of phosphor screen autoradiography and micro-pattern gas detector based autoradiography for the porosity of altered rocks

    This study aims to further develop the C-14-PMMA porosity calculation method with a novel autoradiography technique, micro-pattern gas detector autoradiography (MPGDA). In this study, MPGDA is compared with phosphor screen autoradiography (SPA). A set of rock samples from Martinique Island exhibiting a large range of connected porosities was used to validate the MPGDA method. Calculated porosities were found to be in agreement with those from SPA and the triple-weight (TW) method. The filmless nature of MPGDA, as well as the straightforward determination of C-14 radioactivity from the source rock, makes the porosity calculation less uncertain. The real-time visualization of radioactivity from C-14 beta emissions by MPGDA is a noticeable improvement in comparison to SPA.
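    In the C-14-PMMA method, local porosity is typically derived from the ratio of the locally detected C-14 activity to the specific activity of the 14C-labelled MMA used for impregnation, with a correction for beta absorption in the mineral matrix. The snippet below sketches only that generic ratio; the correction factor and the numbers are placeholders, and the exact calibration chain used with MPGDA in this study is not described in the abstract.

    import numpy as np

    def local_porosity(local_activity, tracer_specific_activity, beta_correction=1.0):
        """Simplified C-14-PMMA porosity estimate per pixel:
        locally detected activity divided by the specific activity of the
        impregnating 14C-MMA, with an optional beta-absorption correction
        for the rock matrix. All values here are illustrative placeholders."""
        phi = local_activity / (tracer_specific_activity * beta_correction)
        return np.clip(phi, 0.0, 1.0)

    # Hypothetical per-pixel activity map (same units as the tracer specific activity)
    activity_map = np.array([[0.2, 0.8],
                             [1.5, 9.5]])
    porosity = local_porosity(activity_map, tracer_specific_activity=10.0, beta_correction=1.05)
    print(porosity)   # fraction of pore space in each pixel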