
    Using ROC surface to predict preterm delivery based on hemoglobin level in the first trimester of pregnancy

    Receiver Operating Characteristic (ROC) curves have numerous applications for identifying a cut-off point in diagnostic tests. When two cut-off points must be specified simultaneously, however, the analysis extends to the ROC surface, and the Volume Under the ROC Surface (VUS) serves as the criterion of diagnostic accuracy. Preterm delivery is one of the unfortunate outcomes of pregnancy; it has been noted that an increase in hemoglobin level in the first trimester of pregnancy can result in preterm delivery in weeks 34 to 37, and that a further hemoglobin increase can result in delivery of a premature fetus before the 34th week. To separate the three groups of on-time, preterm, and premature delivery, two cut-off points therefore have to be identified simultaneously, and the ROC surface is a suitable measure for identifying them. In the current study, first-trimester hemoglobin levels and delivery times of 623 pregnant women referred to Milad Hospital in Tehran in 2009-2010 were obtained, and the ROC surface was used to derive two optimal cut-off points for the first trimester. The optimal first-trimester hemoglobin cut-offs computed with the ROC surface were 12.54 and 13.2. A hemoglobin level below 12.54 indicated on-time delivery, a level between the two cut-off points indicated preterm delivery, and a level above 13.2 indicated a premature fetus. The three-dimensional ROC surface is a useful tool that can visually summarize the ability of a biological marker to classify individuals among more than two groups.
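The three-class rule described above can be sketched empirically. The snippet below is a minimal illustration: the VUS estimator counts correctly ordered triples across the three groups, and the classifier applies two cut-offs. Only the cut-off values 12.54 and 13.2 come from the study; the sample hemoglobin values are hypothetical.

```python
import itertools

def vus_empirical(g1, g2, g3):
    """Estimate the Volume Under the ROC Surface, P(X1 < X2 < X3):
    the fraction of triples drawn from the three ordered groups
    that appear in the correct order."""
    correct = sum(1 for a, b, c in itertools.product(g1, g2, g3) if a < b < c)
    return correct / (len(g1) * len(g2) * len(g3))

def classify(hb, c1, c2):
    """Assign a hemoglobin value to one of three delivery groups
    using two cut-off points c1 < c2."""
    if hb < c1:
        return "on-time"
    elif hb < c2:
        return "preterm"
    return "premature"

# hypothetical samples for the three groups; cut-offs from the study
print(vus_empirical([11.9, 12.1], [12.8, 13.0], [13.5, 14.0]))
print(classify(12.0, 12.54, 13.2))
```

With perfectly separated toy groups the empirical VUS is 1.0; real markers yield a value between chance (1/6 for three classes) and 1.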

    Optimal cutoff points for classification in diagnostic studies: new contributions and software development

    Continuous diagnostic tests (biomarkers or risk markers) are often used to discriminate between healthy and diseased populations. For the clinical application of such tests, the key question is how to select an appropriate cutpoint or discrimination value c that defines positive and negative test results. In general, individuals with a diagnostic test value smaller than c are classified as healthy and otherwise as diseased. Several methods have been proposed in the literature to select the threshold value c according to different specific criteria of optimality. One of the methods most used in clinical practice is the Symmetry point, which simultaneously maximizes both types of correct classification. Graphically, the Symmetry point is the operating point where the Receiver Operating Characteristic (ROC) curve intersects the diagonal line passing through the points (0,1) and (1,0). However, this cutpoint is valid only when the error of misclassifying a diseased patient has the same severity as the error of misclassifying a healthy patient. Since this may not be the case in practice, an important issue in assessing the clinical effectiveness of a biomarker is to take into account the costs associated with the decisions made when selecting the threshold value. Moreover, to facilitate the selection of the optimal cut-off point in clinical practice, it is essential to have software that implements the existing optimality criteria in a user-friendly environment. Another interesting issue arises when the marker shows an irregular distribution, with diseased subjects dominating in noncontiguous regions. Using a single cutpoint, as is common practice in traditional ROC analysis, would be inappropriate in these scenarios because it would lead to erroneous conclusions and fail to exploit the intrinsic classificatory capacity of the marker.
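The Symmetry point can be estimated empirically by scanning the observed marker values for the threshold at which sensitivity and specificity are closest. A minimal sketch under that definition, with toy data not taken from the thesis:

```python
def symmetry_point(healthy, diseased):
    """Empirical Symmetry point: the threshold c at which sensitivity
    (diseased with value >= c) is closest to specificity
    (healthy with value < c)."""
    candidates = sorted(set(healthy) | set(diseased))
    best_c, best_gap = None, float("inf")
    for c in candidates:
        se = sum(x >= c for x in diseased) / len(diseased)
        sp = sum(x < c for x in healthy) / len(healthy)
        gap = abs(se - sp)
        if gap < best_gap:
            best_c, best_gap = c, gap
    return best_c

# toy samples: at c = 5, sensitivity = specificity = 0.8
print(symmetry_point([1, 2, 3, 4, 5], [4, 5, 6, 7, 8]))
```

A cost-weighted variant would replace the gap |se - sp| with a weighted difference reflecting the relative severity of the two misclassification errors, as discussed above.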

    TOWARDS FURTHER OPTIMIZATION OF RECONSTRUCTION METHODS FOR DUAL-RADIONUCLIDE MYOCARDIAL PERFUSION SPECT

    Coronary artery disease (CAD) is the most prevalent type of heart disease and a leading cause of death both in the United States and worldwide. Myocardial perfusion SPECT (MPS) is a well-established and widely used non-invasive imaging technique for diagnosing CAD. MPS images the distribution of a radioactive perfusion agent in the myocardium to assess myocardial perfusion status at rest and stress, allowing diagnosis of CAD and differentiation of CAD from previous myocardial infarctions. The overall goal of this dissertation was to optimize image reconstruction methods for MPS through patient-specific optimization of two advanced iterative reconstruction methods, based on simulations of a realistic patient population modeling existing hardware and previously optimized dual-isotope simultaneous-acquisition imaging protocols. After optimization, the two algorithms were compared to determine the optimal reconstruction method for MPS. First, we developed a model observer strategy to evaluate image quality and allow optimization of the reconstruction methods using a population of phantoms modeling the variability seen in human populations. The Hotelling Observer (HO) is widely used to evaluate image quality, often in conjunction with anthropomorphic channels to model human observer performance. However, applying the HO to non-multivariate-normally (MVN) distributed data, such as the output of a channel model applied to images with variable signals and backgrounds, is not optimal. In this work, we proposed a novel model observer strategy to evaluate the image quality of such data. First, the entire data ensemble is divided into sub-ensembles that are exactly or approximately MVN and homoscedastic. Next, the Linear Discriminant (LD) is applied to estimate test statistics for each sub-ensemble, and a single area under the receiver operating characteristic curve (AUC) is calculated using the pooled test statistics from all the sub-ensembles. The AUC serves as the figure of merit for performance on the defect detection task. The proposed multi-template LD was compared to other model observer strategies and was shown to be practical and theoretically justified, producing higher AUC values for non-MVN data such as that arising from the clinically realistic SKS task used in the remainder of this work. We then optimized two regularized statistical reconstruction algorithms. One is the widely used post-filtered ordered subsets-expectation maximization (OS-EM) algorithm. The other is a maximum a posteriori (MAP) algorithm with a dual-tracer prior (DTMAP) that was proposed for dual-isotope MPS studies and was expected to outperform the post-filtered OS-EM algorithm. Importantly, we investigated patient-specific optimization of the reconstruction parameters. To accomplish this, the phantom population was divided into three anatomy groups based on metrics expected to affect image noise and resolution and thus the optimal reconstruction parameters. In particular, these metrics were the distance from the center of the heart to the face of the collimator, which is directly related to image resolution; heart size; and counts from the myocardium, which is expected to determine image noise. Reconstruction parameters were optimized for each of these groups using the proposed model observer strategy. Parameters for the rest and stress images were optimized separately, and the parameters that achieved the highest AUC were deemed optimal. The results showed that the proposed group-wise optimization method offered slightly better task performance than using a single set of parameters for all the phantoms. For DTMAP, we also applied the group-wise optimization approach. The extra challenges for DTMAP optimization are that it has three parameters to be optimized simultaneously and that it is substantially more computationally expensive than OS-EM. Thus, we adopted optimization strategies to reduce the size of the parameter search space. In particular, we searched in two parameter ranges expected to result in good image quality. We also reduced the computational burden by exploiting limiting behavior of the penalty function to reduce the number of parameters that needed to be optimized. Despite this effort, the optimized DTMAP had poorer task performance than the optimized OS-EM algorithm. As a result, we studied the limitations of the DTMAP algorithm and suggest reasons for its worse performance on the task investigated. The results of this study indicate that there is a benefit from patient-specific optimization. The methods and optimal patient-specific parameters may be applicable to clinical MPS studies. In addition, the model observer strategy and the group-wise optimization approach may be applicable both to future work in MPS and to other relevant fields.
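The multi-template LD strategy described above, one linear discriminant per approximately-MVN sub-ensemble with the test statistics pooled into a single AUC, can be sketched as follows. This is a toy illustration on synthetic Gaussian data (numpy assumed), not the dissertation's implementation:

```python
import numpy as np

def ld_statistics(signal, background):
    """One sub-ensemble's Linear Discriminant: template
    w = S^-1 (mu_signal - mu_background), with S the pooled
    within-class covariance, applied to every sample."""
    S = 0.5 * (np.cov(signal.T) + np.cov(background.T))
    w = np.linalg.solve(S, signal.mean(0) - background.mean(0))
    return signal @ w, background @ w

def pooled_auc(sub_ensembles):
    """Apply an LD per sub-ensemble, pool all test statistics, and
    compute a single AUC with the Mann-Whitney estimator."""
    t_s, t_b = [], []
    for signal, background in sub_ensembles:
        ts, tb = ld_statistics(signal, background)
        t_s.extend(ts)
        t_b.extend(tb)
    wins = sum((s > b) + 0.5 * (s == b) for s in t_s for b in t_b)
    return wins / (len(t_s) * len(t_b))

# two toy sub-ensembles (e.g. two anatomy groups), 3 features each
rng = np.random.default_rng(0)
subs = [(rng.normal(2.0, 1.0, (50, 3)), rng.normal(0.0, 1.0, (50, 3))),
        (rng.normal(1.5, 2.0, (50, 3)), rng.normal(0.0, 2.0, (50, 3)))]
auc = pooled_auc(subs)   # single figure of merit for the detection task
```

The pooled AUC is the quantity that would be maximized over reconstruction parameters in the group-wise optimization described above.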

    RIGOROUS TASK-BASED OPTIMIZATION OF INSTRUMENTATION, ACQUISITION PARAMETERS AND RECONSTRUCTION METHODS FOR MYOCARDIAL PERFUSION SPECT

    Coronary artery disease (CAD) is the most common type of heart disease and a major cause of death in the United States. Myocardial perfusion SPECT (MPS) is a well-established noninvasive diagnostic imaging technique for the detection and functional characterization of CAD. MPS involves intravenous injection of a radiopharmaceutical (e.g. Tc-99m sestamibi) followed by acquisition of planar images of the 3-D distribution of the radioactively labeled agent at different projection views, using one or more gamma cameras rotated around the patient. Transaxial reconstructed images are formed from these projections using tomographic image reconstruction methods. The quality of SPECT images is affected by the instrumentation, acquisition parameters, and reconstruction/compensation methods used. The overall goal of this dissertation was to perform rigorous optimization of MPS using task-based image quality assessment methods and metrics, in which image quality is evaluated by the performance of an observer on diagnostic tasks relevant to MPS. In this work, we used three model observers: the Ideal Observer (IO); its extension, the Ideal Observer with Model Mismatch (IO-MM); and an anthropomorphic observer, the Channelized Hotelling Observer (CHO). The IO makes optimal use of the available information in the image data. However, due to its implicit perfect knowledge of the image formation process, using the IO to optimize imaging systems could lead to optimal parameters that differ from those optimized for humans (or the CHO) interpreting images reconstructed with imperfect compensation for image-degrading factors. To address this, we developed the IO-MM, which allows optimization of acquisition and instrumentation parameters in the absence of compensation or in the presence of non-ideal compensation methods and evaluates them in terms of the IO. In order to perform clinically relevant optimization of MPS, and because radiation concerns limit system evaluation using patient studies, we designed and developed a population of digital phantoms based on the 3-D eXtended CArdiac Torso (XCAT) phantom, which provides an extremely realistic model of human anatomy. To make simulation of the population computationally feasible, we developed and used methods to efficiently simulate a database of Tc-99m and Tl-201 MPS projections using full Monte Carlo (MC) simulations. We used the phantom population and the projection database to optimize and evaluate the major acquisition and instrumentation parameters for MPS. An important acquisition parameter is the width of the acquisition energy window, which controls the tradeoff between scatter and noise. We used the IO, IO-MM, and CHO to find the optimal acquisition energy window width and to evaluate various scatter modeling and compensation methods, including the dual and triple energy window methods and Effective Source Scatter Estimation (ESSE). Results indicated that the ESSE scatter estimation method provided performance very similar to the perfect scatter model implicit in the IO. Collimators are a major factor limiting image quality and largely determine the noise and resolution of SPECT images. We sought the optimal collimator with respect to IO performance on two tasks relevant to MPS: binary detection, and joint detection and localization. The results of this study suggested that collimators with higher sensitivity than those currently used clinically appear optimal for both diagnostic tasks. In a separate study, we evaluated and compared various collimator-detector response (CDR) modeling and compensation methods using the IO (i.e. an observer implicitly using the true CDR model), the IO-MM (using an approximate or no model of the CDR), and the CHO, operating on images reconstructed using the same compensation methods. Results from the collimator and acquisition energy window optimization studies indicated that the IO-MM agreed well with the CHO in terms of the range of optimal Tc-99m acquisition energy window widths, the optimal collimators, and the ranking of scatter and CDR compensation methods. The IO agreed with the CHO when the model mismatch was small. Dual-isotope simultaneous acquisition (DISA) rest Tl-201/stress Tc-99m MPS has the potential to provide reduced acquisition time, increased patient comfort, and perfectly registered images compared to separate acquisition protocols, the current clinical protocols of choice. However, crosstalk contamination, where photons emitted by one radionuclide contribute to the image of the other, degrades image quality. In this work, we optimized, compared, and evaluated dual-isotope MPS imaging with separate and simultaneous acquisition using the IO in the context of a 3-class defect detection task. Optimal acquisition parameters were different for the two protocols. Results suggested that DISA methods, when used with accurate crosstalk compensation, could potentially provide image quality as good as that obtained with separate acquisition protocols.
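The anthropomorphic observer mentioned above can be illustrated in miniature. The sketch below builds toy difference-of-Gaussians channels (a simple stand-in for the channels a CHO would use), channelizes noisy images with and without a Gaussian "defect", and reports the Hotelling detectability index; the image size, noise level, and defect shape are all hypothetical, and numpy is assumed:

```python
import numpy as np

def dog_channels(size, n_channels=3):
    """Toy radially symmetric difference-of-Gaussians channels."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2
    r2 = x**2 + y**2
    chans = []
    for k in range(n_channels):
        s1, s2 = 2.0 * 1.6**k, 2.0 * 1.6**(k + 1)
        c = np.exp(-r2 / (2 * s1**2)) - np.exp(-r2 / (2 * s2**2))
        chans.append(c.ravel() / np.linalg.norm(c))
    return np.array(chans)              # shape (n_channels, size*size)

def cho_detectability(imgs_signal, imgs_background, channels):
    """Channelized Hotelling Observer: channelize each image, form the
    Hotelling template in channel space, and return the detectability
    index d' of the resulting test statistics."""
    v_s = imgs_signal.reshape(len(imgs_signal), -1) @ channels.T
    v_b = imgs_background.reshape(len(imgs_background), -1) @ channels.T
    S = 0.5 * (np.cov(v_s.T) + np.cov(v_b.T))   # intraclass scatter
    w = np.linalg.solve(S, v_s.mean(0) - v_b.mean(0))
    t_s, t_b = v_s @ w, v_b @ w
    return (t_s.mean() - t_b.mean()) / np.sqrt(
        0.5 * (t_s.var(ddof=1) + t_b.var(ddof=1)))

rng = np.random.default_rng(1)
size, n = 16, 60
y, x = np.mgrid[:size, :size] - (size - 1) / 2
defect = np.exp(-(x**2 + y**2) / (2 * 2.0**2))   # hypothetical defect
imgs_s = rng.normal(0, 1, (n, size, size)) + defect
imgs_b = rng.normal(0, 1, (n, size, size))
d_prime = cho_detectability(imgs_s, imgs_b, dog_channels(size))
```

In the studies above the figure of merit is an AUC rather than d'; for normally distributed test statistics the two are monotonically related, so either can rank reconstruction or acquisition parameters.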

    A Novel Malware Target Recognition Architecture for Enhanced Cyberspace Situation Awareness

    The rapid transition of critical business processes to computer networks potentially exposes organizations to digital theft or corruption by advanced competitors. One tool used for these tasks is malware, because it circumvents legitimate authentication mechanisms. Malware is an epidemic problem for organizations of all types. This research proposes and evaluates a novel Malware Target Recognition (MaTR) architecture for malware detection and for identification of propagation methods and payloads, to enhance situation awareness in tactical scenarios using non-instruction-based, static heuristic features. MaTR achieves a 99.92% detection accuracy on known malware with false positive and false negative rates of 8.73e-4 and 8.03e-4, respectively. MaTR outperforms leading static heuristic methods with a statistically significant 1% improvement in detection accuracy and 85% and 94% reductions in false positive and false negative rates, respectively. Against a set of publicly unknown malware, MaTR's detection accuracy is 98.56%, a 65% performance improvement over the combined effectiveness of three commercial antivirus products.
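The reported figures relate to confusion-matrix counts in the standard way. A small sketch of that bookkeeping; the percentages in the abstract are the study's, but the counts below are hypothetical:

```python
def detection_metrics(tp, fp, tn, fn):
    """Detection accuracy, false positive rate, and false negative
    rate from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)   # benign samples flagged as malware
    fnr = fn / (fn + tp)   # malware that slipped through
    return accuracy, fpr, fnr

# hypothetical test set: 1000 malware and 1000 benign samples
acc, fpr, fnr = detection_metrics(tp=995, fp=10, tn=990, fn=5)
```

Note that accuracy alone is a weak summary when classes are imbalanced, which is why the abstract reports the false positive and false negative rates separately.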

    Image quality assessment: utility, beauty, appearance


    Advances in Reinforcement Learning

    Reinforcement Learning (RL) is a very dynamic area in terms of both theory and application. This book brings together many different aspects of current research in the rapidly growing fields associated with RL, which have produced a wide variety of learning algorithms for different applications. Comprising 24 chapters, it covers a very broad range of topics in RL and their application in autonomous systems. Some chapters provide a general overview of RL, while others focus on applications of RL paradigms: Game Theory, Multi-Agent Theory, Robotics, Networking Technologies, Vehicular Navigation, Medicine and Industrial Logistics.

    The extension and application of Swets's theory of information retrieval

    PhD Thesis. The thesis comprises (1) a critical interpretation of Swets's contribution to information retrieval, (2) development (i.e. "extension") of the formalism, as so interpreted, and (3) a description of an experiment that identifies hypotheses consistent with the extended formalism. The early sections of the thesis place the original contribution by Swets in the contexts of both signal-detection theory and information retrieval theory. It is then argued that, as the original theoretical contribution is ambiguous in key respects, an interpretation of it is necessary. The interpretation given constitutes an initial development of Swets's work, but other developments, not simply consequences of the interpretation of Swets's original description, are also put forward. The major one of these is the explicit incorporation of logical search expressions into the formalism. Elementary logical conjuncts of search terms are seen as (1) being weakly ordered by "document ordering expressions", and (2) having probability pairs attached to disjunctions of them defined by the ordering. A major part of the thesis is the identification of novel hypotheses, expressed within the extension of the original formalism, relating to triples of: (1) instances of information need in medicine, represented by prespecified partitionings of a medical-literature database (MEDLARS), (2) an analytical document ordering expression, and (3) an algorithmically derived set of terms characterising the information need. An enhancement, stemming directly from the extension of the original formalism, is suggested to database management programs that at present employ only user-specified logical search expressions as search input. The broad conclusion of the thesis is that, when the original contribution of Swets is suitably interpreted and extended, a robust, hospitable conceptual framework for describing information retrieval at the macroscopic level is provided.
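Under Swets's signal-detection view, each cumulative retrieval defined by the document ordering yields a probability pair, recall (hit rate) and fallout (false-alarm rate), i.e. a point in ROC space. A minimal sketch of that bookkeeping, with hypothetical document blocks and relevance judgements:

```python
def operating_points(ordered_blocks, relevant):
    """Retrieve documents block by block (here, the disjunctions of
    conjuncts defined by a document ordering expression) and return
    the (fallout, recall) probability pair after each block."""
    all_docs = set().union(*ordered_blocks)
    n_rel = len(all_docs & relevant)
    n_non = len(all_docs - relevant)
    points, retrieved = [], set()
    for block in ordered_blocks:
        retrieved |= block
        recall = len(retrieved & relevant) / n_rel     # hit rate
        fallout = len(retrieved - relevant) / n_non    # false-alarm rate
        points.append((fallout, recall))
    return points

# hypothetical ordering of three disjuncts over six documents,
# with documents 1 and 3 judged relevant
print(operating_points([{1, 2}, {3, 4}, {5, 6}], {1, 3}))
```

The farther these points lie above the chance diagonal, the better the ordering expression discriminates relevant from non-relevant documents, which is exactly the macroscopic measure the extended formalism provides.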