A deep learning masked segmentation alternative to manual segmentation in biparametric MRI prostate cancer radiomics
OBJECTIVES: To determine the value of a deep learning masked (DLM) auto-fixed volume of interest (VOI) segmentation method as an alternative to manual segmentation for radiomics-based diagnosis of clinically significant (CS) prostate cancer (PCa) on biparametric magnetic resonance imaging (bpMRI). MATERIALS AND METHODS: This study included a retrospective multi-center dataset of 524 PCa lesions (of which 204 were CS PCa) on bpMRI. All lesions were both semi-automatically segmented with a DLM auto-fixed VOI method (averaging < 10 s per lesion) and manually segmented by an expert uroradiologist (averaging 5 min per lesion). The DLM auto-fixed VOI method uses a spherical VOI (centered at the location of the lowest apparent diffusion coefficient of the prostate lesion, indicated with a single mouse click) from which non-prostate voxels are removed using a deep learning-based prostate segmentation algorithm. Thirteen different DLM auto-fixed VOI diameters (ranging from 6 to 30 mm) were explored. Extracted radiomics data were split into training and test sets (4:1 ratio). Performance was assessed with receiver operating characteristic (ROC) analysis. RESULTS: In the test set, the area under the ROC curve (AUC) of the DLM auto-fixed VOI method with a VOI diameter of 18 mm (0.76 [95% CI: 0.66-0.85]) was significantly higher (p = 0.0198) than that of the manual segmentation method (0.62 [95% CI: 0.52-0.73]). CONCLUSIONS: DLM auto-fixed VOI segmentation can provide a potentially more accurate radiomics diagnosis of CS PCa than expert manual segmentation while also reducing expert time investment by more than 97%. KEY POINTS:
* Compared to traditional expert-based segmentation, a DLM auto-fixed VOI placement is more accurate at detecting CS PCa.
* Compared to traditional expert-based segmentation, a DLM auto-fixed VOI placement is faster and can result in a 97% time reduction.
* Applying deep learning to an auto-fixed VOI radiomics approach can be valuable.
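The auto-fixed VOI construction described above (a click-placed sphere intersected with a deep-learning prostate mask) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the function name, the 1 mm isotropic voxel spacing, and the toy volume are assumptions.

```python
import numpy as np

def auto_fixed_voi(adc, prostate_mask, center, diameter_mm, voxel_mm=1.0):
    """Spherical VOI of the given diameter around `center` (the voxel with
    the lowest ADC, chosen by a single click), with non-prostate voxels
    removed via the deep-learning prostate segmentation mask.
    Hypothetical sketch assuming isotropic 1 mm voxels."""
    radius = (diameter_mm / voxel_mm) / 2.0
    zz, yy, xx = np.indices(adc.shape)
    dist2 = ((zz - center[0]) ** 2 +
             (yy - center[1]) ** 2 +
             (xx - center[2]) ** 2)
    sphere = dist2 <= radius ** 2
    # DLM step: keep only voxels the prostate segmentation labelled prostate.
    return sphere & prostate_mask.astype(bool)

# Toy example: a 20^3 ADC volume with the prostate occupying one half.
adc = np.random.rand(20, 20, 20)
prostate = np.zeros((20, 20, 20), dtype=bool)
prostate[:, :, :10] = True          # prostate = left half of the volume
voi = auto_fixed_voi(adc, prostate, center=(10, 10, 9), diameter_mm=6)
print(voi.sum())                     # voxels surviving both masks
```

Radiomics features would then be extracted from `adc[voi]` exactly as from a manually drawn contour.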
Mapping constrained optimization problems to quantum annealing with application to fault diagnosis
Current quantum annealing (QA) hardware suffers from practical limitations
such as finite temperature, sparse connectivity, small qubit numbers, and
control error. We propose new algorithms for mapping Boolean constraint
satisfaction problems (CSPs) onto QA hardware, mitigating these limitations. In
particular we develop a new embedding algorithm for mapping a CSP onto a
hardware Ising model with a fixed sparse set of interactions, and propose two
new decomposition algorithms for solving problems too large to map directly
into hardware.
The mapping technique is locally structured: hardware-compatible Ising
models are generated for each problem constraint, and variables appearing in
different constraints are chained together using ferromagnetic couplings. In
contrast, global embedding techniques generate a hardware-independent Ising
model for all the constraints, and then use a minor-embedding algorithm to
generate a hardware-compatible Ising model. We give an example of a class of
CSPs for which the scaling performance of D-Wave's QA hardware using the local
mapping technique is significantly better than global embedding.
We validate the approach by applying D-Wave's hardware to circuit-based
fault-diagnosis. For circuits that embed directly, we find that the hardware is
typically able to find all solutions from a min-fault diagnosis set of size N
using 1000N samples, with an annealing rate that is 25 times faster than a
leading SAT-based sampling method. Further, we apply decomposition algorithms
to find min-cardinality faults for circuits that are up to 5 times larger than
can be solved directly on current hardware.
Comment: 22 pages, 4 figures
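The locally structured mapping can be illustrated with a textbook QUBO penalty gadget: each Boolean constraint gets its own small penalty model, and copies of a shared variable are chained with a ferromagnetic coupling. This is a generic sketch of the idea, not D-Wave's embedding code; the AND-gate penalty and the chain strength of 2 are standard illustrative choices.

```python
from itertools import product

def and_penalty(x, y, z):
    # QUBO gadget: penalty is 0 exactly when z = x AND y, positive otherwise.
    return x * y - 2 * (x + y) * z + 3 * z

def chain_penalty(a, b, strength=2):
    # Ferromagnetic chain: penalize two hardware copies of one logical
    # variable when they disagree.
    return strength * (a + b - 2 * a * b)

# Two constraints sharing the logical variable y: z1 = x AND y, z2 = y AND w.
# y is duplicated into hardware copies y1, y2 and chained together.
def energy(x, y1, z1, y2, w, z2):
    return (and_penalty(x, y1, z1)
            + and_penalty(y2, w, z2)
            + chain_penalty(y1, y2))

ground = [s for s in product((0, 1), repeat=6) if energy(*s) == 0]
print(len(ground))   # 8: free choices of x, y, w with z1, z2 determined
```

Every zero-energy state has agreeing chain copies and both gates satisfied, which is exactly what a QA sample at low energy should deliver.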
Clustering by soft-constraint affinity propagation: Applications to gene-expression data
Motivation: Similarity-measure based clustering is a crucial problem
appearing throughout scientific data analysis. Recently, a powerful new
algorithm called Affinity Propagation (AP) based on message-passing techniques
was proposed by Frey and Dueck \cite{Frey07}. In AP, each cluster is identified
by a common exemplar to which all other data points of the same cluster refer, and
exemplars have to refer to themselves. Despite its proven power, AP in its
present form suffers from a number of drawbacks. The hard constraint of having
exactly one exemplar per cluster restricts AP to classes of regularly shaped
clusters, and leads to suboptimal performance, e.g., in analyzing gene
expression data. Results: This limitation can be overcome by relaxing the AP
hard constraints. A new parameter controls the importance of the constraints
compared to the aim of maximizing the overall similarity, and allows one to
interpolate between the simple case where each data point selects its closest
neighbor as an exemplar and the original AP. The resulting soft-constraint
affinity propagation (SCAP) becomes more informative and accurate, and leads to
more stable clustering. Even though a new a priori free parameter is
introduced, the overall dependence of the algorithm on external tuning is
reduced, as robustness is increased and an optimal strategy for parameter
selection emerges more naturally. SCAP is tested on biological benchmark data,
including in particular microarray data related to various cancer types. We
show that the algorithm efficiently unveils the hierarchical cluster structure
present in the data sets. Furthermore, it allows the extraction of sparse gene
expression signatures for each cluster.
Comment: 11 pages, supplementary material:
http://isiosf.isi.it/~weigt/scap_supplement.pd
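For reference, the hard-constraint AP updates that SCAP relaxes can be written compactly in NumPy. This is a sketch of the original Frey-Dueck message passing (not the SCAP variant); the toy data, damping factor, and preference value of -5 are illustrative assumptions.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Hard-constraint affinity propagation on a similarity matrix S;
    returns the indices of the self-chosen exemplars."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # Responsibilities: r(i,k) = s(i,k) - max_{k'!=k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rn = S - first[:, None]
        Rn[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rn
        # Availabilities: a(i,k) = min(0, r(k,k) + sum_{i'!=i,k} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        An = Rp.sum(axis=0)[None, :] - Rp
        dA = An.diagonal().copy()
        An = np.minimum(An, 0)
        np.fill_diagonal(An, dA)
        A = damping * A + (1 - damping) * An
    return np.where((R + A).diagonal() > 0)[0]

# Two well-separated toy clusters; similarity = negative squared distance.
pts = np.array([[0., 0.], [0., 1.], [1., 0.],
                [10., 10.], [10., 11.], [11., 10.]])
S = -((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(S, -5.0)   # preference: controls the number of exemplars
exemplars = affinity_propagation(S)
print(exemplars)
```

SCAP replaces the hard "exemplars must refer to themselves" constraint enforced here with a finite penalty, which is what lets irregularly shaped clusters share structure.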
Analysis of time-to-event for observational studies: Guidance to the use of intensity models
This paper provides guidance for researchers with some mathematical
background on the conduct of time-to-event analysis in observational studies
based on intensity (hazard) models. Discussions of basic concepts like time
axis, event definition and censoring are given. Hazard models are introduced,
with special emphasis on the Cox proportional hazards regression model. We
provide checklists that may be useful both when fitting the model and
assessing its goodness of fit and when interpreting the results. Special
attention is paid to how to avoid problems with immortal time bias by
introducing time-dependent covariates. We discuss prediction based on hazard
models and difficulties when attempting to draw proper causal conclusions from
such models. Finally, we present a series of examples illustrating the methods
and checklists. Computational details and implementation using the
freely available R software are documented in Supplementary Material. The paper
was prepared as part of the STRATOS initiative.
Comment: 28 pages, 12 figures. For associated Supplementary material, see
http://publicifsv.sund.ku.dk/~pka/STRATOSTG8
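As a toy illustration of the Cox model at the heart of the guidance, the Breslow partial log-likelihood for a single binary covariate can be evaluated directly and maximized by a grid search. This sketch ignores ties and time-dependent covariates; the data are invented, and real analyses should use the R packages documented in the paper's Supplementary Material (or a library such as lifelines).

```python
import numpy as np

def cox_partial_loglik(beta, time, event, x):
    """Breslow partial log-likelihood for one covariate (no tied event times)."""
    ll = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = time >= time[i]        # risk set at the i-th event time
        ll += beta * x[i] - np.log(np.sum(np.exp(beta * x[at_risk])))
    return ll

# Invented data: subjects with x = 1 tend to fail earlier; last one censored.
time  = np.array([1., 2., 3., 4., 5., 6.])
event = np.array([1, 1, 1, 1, 1, 0])     # 0 = right-censored
x     = np.array([1, 0, 1, 0, 1, 0])

# Crude grid search for the maximum partial-likelihood estimate.
betas = np.linspace(-3, 3, 601)
lls = [cox_partial_loglik(b, time, event, x) for b in betas]
beta_hat = betas[int(np.argmax(lls))]
print(round(float(beta_hat), 2))   # positive: x = 1 raises the hazard here
```

Note how the censored subject contributes only through the risk sets, never a term of its own; mishandling this (e.g., excluding not-yet-exposed person-time) is exactly the immortal time bias the paper warns about.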
Bayesian sequential change diagnosis
Sequential change diagnosis is the joint problem of detection and
identification of a sudden and unobservable change in the distribution of a
random sequence. In this problem, the common probability law of a sequence of
i.i.d. random variables suddenly changes at some disorder time to one of
finitely many alternatives. This disorder time marks the start of a new regime,
whose fingerprint is the new law of observations. Both the disorder time and
the identity of the new regime are unknown and unobservable. The objective is
to detect the regime-change as soon as possible, and, at the same time, to
determine its identity as accurately as possible. Prompt and correct diagnosis
is crucial for quick execution of the most appropriate measures in response to
the new regime, as in fault detection and isolation in industrial processes,
and target detection and identification in national defense. The problem is
formulated in a Bayesian framework. An optimal sequential decision strategy is
found, and an accurate numerical scheme is described for its implementation.
Geometrical properties of the optimal strategy are illustrated via numerical
examples. The traditional problems of Bayesian change-detection and Bayesian
sequential multi-hypothesis testing are solved as special cases. In addition, a
solution is obtained for the problem of detection and identification of
component failure(s) in a system with suspended animation.
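The Bayesian recursion underlying such a change-diagnosis procedure can be sketched as a three-state filter: "no change yet" plus two candidate post-change regimes. The Gaussian observation models, the geometric(0.05) disorder-time prior, and the equal regime priors are illustrative assumptions, not the paper's setup.

```python
import math

def bayes_step(post, obs, p=0.05, regimes=((2.0, 1.0), (-2.0, 1.0)), q=(0.5, 0.5)):
    """One Bayesian update of [P(no change yet), P(regime 1), P(regime 2)].
    Pre-change observations are N(0, 1); the disorder time is geometric(p)."""
    def pdf(v, mu, sigma):
        return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    # Prediction: the change may occur just before the next observation.
    pred = [post[0] * (1 - p),
            post[1] + post[0] * p * q[0],
            post[2] + post[0] * p * q[1]]
    # Correction: weight each hypothesis by its likelihood, then normalize.
    liks = [pdf(obs, 0.0, 1.0), pdf(obs, *regimes[0]), pdf(obs, *regimes[1])]
    unnorm = [pr * lk for pr, lk in zip(pred, liks)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

post = [1.0, 0.0, 0.0]
for obs in (0.1, -0.2, 2.1, 1.9, 2.3):    # drifts toward regime 1 (mean 2)
    post = bayes_step(post, obs)
print([round(v, 3) for v in post])
# A sequential rule stops and declares regime 1 once post[1] exceeds a threshold.
```

The optimal strategy in the paper is precisely a stopping and selection rule on this posterior vector; the numerical scheme describes where in the posterior simplex to stop.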
Medical imaging analysis with artificial neural networks
Neural networks have been widely reported in the medical imaging research community. We therefore provide a focused literature survey of recent neural network developments in computer-aided diagnosis; in medical image segmentation and edge detection towards visual content analysis; and in medical image registration, including its pre-processing and post-processing. The aims are to increase awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. The concluding section highlights comparisons among many neural network applications to provide a global view on computational intelligence with neural networks in medical imaging.