Multi-objective optimisation for receiver operating characteristic analysis
Copyright © 2006 Springer-Verlag Berlin Heidelberg. The final publication is available at link.springer.com. Book title: Multi-Objective Machine Learning.
Receiver operating characteristic (ROC) analysis is now a standard tool for the comparison of binary classifiers and the selection of operating parameters when the costs of misclassification are unknown.
This chapter outlines the use of evolutionary multi-objective optimisation techniques for ROC analysis, both in its traditional binary classification setting and in the novel multi-class ROC situation.
Methods for comparing classifier performance in the multi-class case, based on an analogue of the Gini coefficient, are described; these lead to a natural method of selecting the classifier operating point. Illustrations are given using synthetic data and an application to Short Term Conflict Alert.
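As a concrete point of reference, the binary quantities that the chapter generalises can be computed in a few lines. The following sketch (mine, not the chapter's code) traces the ROC curve by sweeping the decision threshold and evaluates the Gini coefficient G = 2·AUC − 1, whose multi-class analogue underpins the comparison method described above.

```python
import numpy as np

def roc_curve(scores, labels):
    """ROC points (FPR, TPR) obtained by sweeping the decision threshold."""
    order = np.argsort(-scores)                        # descending score order
    labels = labels[order].astype(float)
    tpr = np.cumsum(labels) / labels.sum()             # true-positive rate
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()   # false-positive rate
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def gini(scores, labels):
    """Gini coefficient G = 2*AUC - 1, with AUC from the trapezoidal rule."""
    fpr, tpr = roc_curve(scores, labels)
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
    return 2.0 * auc - 1.0

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
scores = labels + rng.normal(0.0, 1.0, 1000)   # synthetic, mildly informative scores
print(f"Gini coefficient: {gini(scores, labels):.3f}")
```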
Observer models utilizing compressed textures
We have previously presented a method for sorting textures based on whether they obscure a signal, and thus hinder an observer's ability to perform a signal-detection task, or whether their presence can be easily ignored by the observer and thus does little to impede performance. This analysis led to a surrogate figure of merit that was demonstrated to correlate with human-observer performance as measured by the channelized Hotelling observer. In this work, we generalize our previous results to additional tasks, including estimation and combined detection/estimation tasks. We demonstrate the ability of this method to determine the textures present in a set of images that are the most detrimental to the specified task. We further devise alternative surrogate figures of merit that can utilize this texture-compression method as a mechanism for generating channels for observer-performance computations. Copyright © 2021 SPIE.
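For context, the channelized Hotelling observer referenced above reduces each image to a handful of channel outputs and computes a Hotelling (prewhitened linear) detectability in that low-dimensional space. A minimal sketch, with assumed array shapes and placeholder channels, and no claim to match the paper's implementation:

```python
import numpy as np

def cho_snr(signal_present, signal_absent, channels):
    """Hotelling SNR in channel space.

    signal_present, signal_absent: image stacks of shape (N, npix);
    channels: matrix of shape (npix, nch), e.g. Laguerre-Gauss channels."""
    v1 = signal_present @ channels            # channel outputs, signal present
    v0 = signal_absent @ channels             # channel outputs, signal absent
    dv = v1.mean(axis=0) - v0.mean(axis=0)    # mean channel-output difference
    K = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))  # pooled covariance
    t = np.linalg.solve(K, dv)                # Hotelling template in channel space
    return float(np.sqrt(dv @ t))

rng = np.random.default_rng(1)
npix, nch = 256, 5
channels = rng.normal(size=(npix, nch))       # placeholder channels for this demo only
sig = np.zeros(npix); sig[100:110] = 0.3      # toy additive signal
g0 = rng.normal(size=(500, npix))
print(f"CHO SNR: {cho_snr(g0 + sig, rng.normal(size=(500, npix)), channels):.2f}")
```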
Simulations and analysis of fluorescence effects in semiconductor x-ray and gamma-ray detectors
Photon-counting semiconductor detectors are a key technology for reducing dose in clinical x-ray imaging procedures, such as CT, and for improving performance in gamma-ray imaging procedures such as SPECT. These detectors offer excellent energy resolution and high spatial resolution. To stop high-energy photons, high-Z semiconductors such as CdTe, TlBr or other emerging candidates must be used. These crystals often suffer from poor hole transport due to hole trapping, which can greatly affect the signal even when data are collected primarily from anodes. There are many interesting challenges in the production of these detectors, as well as in developing complete quantitative models of the photon-matter interaction, charge transport, and signal induction. Prior work in our group has focused on optimal ways to estimate photon interaction position (x, y, z) and energy (E); this work is based on statistical models and calibration data. In recent work we are exploring a method to account for K x-ray fluorescence and to model the signals induced on a double-sided strip detector. Our approach is Monte Carlo sampling of interaction details, followed by charge-transport and signal-induction modeling via weighting potentials. First, our simulation generates first- and second-order statistics for three charge-induction cases: simple transport, charge sharing, and x-ray fluorescence. Using the mean signals and covariance matrices from these cases, we build a likelihood that can be used with maximum-likelihood methods to estimate the primary interaction location and to classify whether the event's energy deposition involved fluorescence. In planned work we will test the model against experimental semiconductor detector data. © 2022 SPIE.
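A hedged sketch of that estimation/classification step: the container names (`means`, `covs`, the case indexing) are assumptions for illustration, but the structure — a Gaussian log-likelihood built from the simulated first- and second-order statistics, maximised jointly over interaction position and charge-induction case — follows the description above.

```python
import numpy as np

def log_likelihood(g, mean, cov):
    """Gaussian log-likelihood of measured strip signals g (constants dropped)."""
    r = g - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (r @ np.linalg.solve(cov, r) + logdet)

def ml_estimate(g, means, covs):
    """means[case][i] and covs[case][i]: simulated statistics on a position grid.

    Returns the (case, grid_index) pair maximising the likelihood, so the event
    is positioned and simultaneously classified as simple transport, charge
    sharing, or fluorescence."""
    best = max(
        ((case, i, log_likelihood(g, m, c))
         for case, (ms, cs) in enumerate(zip(means, covs))
         for i, (m, c) in enumerate(zip(ms, cs))),
        key=lambda t: t[2],
    )
    return best[0], best[1]
```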
Task-based measures of image quality and their relation to radiation dose and patient risk
The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image, and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Finally, various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality.
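One relation in this area is easy to make quantitative: for a known signal on a known background with Poisson noise, the ideal linear observer's detectability satisfies SNR² = Σ_m s_m²/b̄_m, so scaling the mean photon count (and hence dose) by a factor scales SNR² by that same factor. A small worked example (an illustration of the standard result, not the review's code):

```python
import numpy as np

def snr2(signal, background):
    """Prewhitened matched-filter SNR^2 for Poisson noise (variance = mean)."""
    return np.sum(signal**2 / background)

background = np.full(64, 100.0)             # mean counts per pixel at full dose
signal = np.zeros(64); signal[30:34] = 5.0  # small known additive signal
for dose in (1.0, 0.5, 0.25):
    # both signal and background scale with exposure, so SNR^2 scales with dose
    s = np.sqrt(snr2(dose * signal, dose * background))
    print(f"dose factor {dose:4.2f}: SNR = {s:.2f}")
```

Halving the dose halves SNR² and so reduces SNR by a factor of √2, which is the basic tradeoff the graphical depictions in the review capture.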
Fisher information comparison between a monolithic and a fiber-optic light guide in a modular gamma camera
Single-photon emission computed tomography (SPECT) performed with a pinhole collimator often suffers from parallax error due to depth-of-interaction uncertainty. One possible way to reduce the parallax error for a new generation of SPECT pinhole cameras is to incorporate fiber optics to control the spread of light and improve 3D position estimation. In this work, we have developed a Monte Carlo simulation for a SiPM-based modular gamma camera that incorporates a fiber-optic plate as a light guide. We have created custom photon-transport code written in Swift, and we perform the computationally taxing components on a GPU using Metal. The code includes refraction according to Snell's law as well as reflection according to the Fresnel equations at material boundaries. The plate is modeled as a hexagonally packed array of individual fibers. We also include the scintillation statistics of NaI(Tl) and the detection efficiency of the silicon photomultipliers. We use the simulation code to create mean detector response functions (MDRFs), from which the Fisher information on event positioning can be assessed. We compare planar detectors with different light guides to determine the effects of the fiber optics. We model three geometries: one with only a monolithic light guide, one with only a fiber-optic plate, and one with a monolithic light guide and a fiber-optic plate in combination. The spatial resolutions are compared by using Fisher information matrices to calculate the Cramér-Rao lower bounds on position-estimate variances. © 2022 SPIE.
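The comparison metric can be sketched compactly: with statistically independent sensor outputs modelled as Poisson with means p̄_k(x) given by the MDRF, the Fisher information matrix is F_ij = Σ_k (∂p̄_k/∂x_i)(∂p̄_k/∂x_j)/p̄_k, and the Cramér-Rao lower bounds are the diagonal of F⁻¹. The sketch below assumes a callable `mdrf` returning per-sensor means and a toy Gaussian light spread; it illustrates the standard formula, not the paper's code.

```python
import numpy as np

def fisher_matrix(mdrf, x, eps=1e-3):
    """Fisher information matrix at position x for independent Poisson outputs."""
    pbar = mdrf(x)                                          # mean sensor signals at x
    grads = np.stack([
        (mdrf(x + eps * e) - mdrf(x - eps * e)) / (2 * eps) # central differences
        for e in np.eye(len(x))
    ])                                                      # shape (ndim, nsensors)
    return (grads / pbar) @ grads.T                         # F_ij = sum_k g_ik g_jk / pbar_k

def crlb(mdrf, x):
    """Per-coordinate variance floor for any unbiased position estimator."""
    return np.diag(np.linalg.inv(fisher_matrix(mdrf, x)))

# toy MDRF (assumption for the demo): Gaussian light spread over a 1D sensor row
sensors = np.linspace(-10.0, 10.0, 21)
toy_mdrf = lambda x: 1e3 * np.exp(-0.5 * ((sensors - x[0])**2 + x[1]**2) / 4.0)
print(crlb(toy_mdrf, np.array([0.5, 1.0])))   # variance bounds for (lateral, depth)
```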
A read-out strategy for high-resolution large-area SiPM-based modular gamma-ray cameras
Ongoing developments in the field of molecular imaging have increased the need for gamma-ray detectors with better spatial resolution that maintain a large detection area. One approach to improving spatial resolution is to use smaller light sensors for finer sampling of the scintillation light distribution. However, the number of sensors required per camera then increases significantly, which in turn increases the complexity of the imaging system. Examples of the challenges that arise are the analog-to-digital conversion of large numbers of channels, and a bottleneck effect that results from transferring large amounts of raw list-mode data to an acquisition computer. Here we present the design of a read-out electronics system that addresses these challenges. The read-out system, which is designed for a 10" × 10" SiPM-based scintillation gamma-ray camera, can process up to 162 light-sensor signals per event. This is achieved by implementing 1-bit and non-uniform 2-bit sigma-delta-modulation analog-to-digital conversion, together with an on-board processing system with a large number of input/output user pins and relatively high processing power. The processor is a system-on-a-module that also has SDRAM, which allows us to buffer raw list-mode data on board. The bottleneck effect is avoided by buffering event data on the camera module and transferring it only when the main acquisition computer requests it. This design can be adapted to other crystal/sensor configurations and can be scaled to a different number of channels. © 2022 SPIE.
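To make the conversion scheme concrete, here is a minimal first-order 1-bit sigma-delta modulator (an illustrative sketch; the board's actual 1-bit and non-uniform 2-bit implementations are not described in the abstract): the feedback bit is subtracted from the input, the error is integrated, and the integrator's sign becomes the output bit stream, whose low-pass-filtered average tracks the input.

```python
import numpy as np

def sigma_delta_1bit(x, full_scale=1.0):
    """First-order 1-bit sigma-delta modulator; expects |x| < full_scale."""
    acc, bits = 0.0, np.empty(len(x))
    for n, sample in enumerate(x):
        # feed back the previous output bit and integrate the error
        acc += sample - (full_scale if acc >= 0 else -full_scale)
        bits[n] = 1.0 if acc >= 0 else -1.0   # 1-bit quantizer
    return bits

t = np.linspace(0.0, 1.0, 4096)
x = 0.5 * np.sin(2 * np.pi * 5 * t)           # slow input, heavy oversampling
bits = sigma_delta_1bit(x)
# decimation stand-in: a simple moving average recovers the waveform
recon = np.convolve(bits, np.ones(64) / 64, mode="same")
print(f"max reconstruction error ~ {np.max(np.abs(recon - x)):.3f}")
```

The appeal for a many-channel camera is that each channel needs only a comparator-grade quantizer and a fast serial bit stream, with the filtering and decimation deferred to the on-board digital logic.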