Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, which covers common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), and major anatomical structures of interest (ventricles, atria, and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Automated Inline Analysis of Myocardial Perfusion MRI with Deep Learning
Recent development of quantitative myocardial blood flow (MBF) mapping allows
direct evaluation of absolute myocardial perfusion, by computing pixel-wise
flow maps. Clinical studies suggest quantitative evaluation would be more
desirable for objectivity and efficiency. Objective assessment can be further
facilitated by segmenting the myocardium and automatically generating reports
following the AHA model. This reduces the need for user interaction and leads
to a 'one-click' solution that improves workflow. This paper proposes a deep
neural network-based computational workflow for inline myocardial perfusion
analysis. Adenosine stress and rest perfusion scans were acquired from three
hospitals. Training set included N=1,825 perfusion series from 1,034 patients.
Independent test set included 200 scans from 105 patients. Data were
consecutively acquired at each site. A convolutional neural network (CNN) model
was trained to segment the LV cavity, myocardium, and right ventricle by
processing incoming 2D+T perfusion Gd series. Model outputs were compared to
manual ground-truth for accuracy of segmentation and flow measures derived on
global and per-sector basis. The trained models were integrated onto MR
scanners for effective inference. Segmentation accuracy and myocardial flow
measures were compared between CNN models and manual ground-truth. The mean
Dice ratio of CNN-derived myocardium was 0.93 +/- 0.04. Both global flow and
per-sector values showed no significant difference compared to manual results.
The AHA 16-segment model was automatically generated and reported on the MR
scanner. As a result, the fully automated analysis of perfusion flow mapping
was achieved. This solution was integrated on the MR scanner, enabling
'one-click' analysis and reporting of myocardial blood flow.
Comment: This work has been submitted to Radiology: Artificial Intelligence for possible publication.
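For readers unfamiliar with the accuracy metric quoted above, the Dice ratio measures the overlap between two binary masks. A minimal sketch of the standard computation is given below; the NumPy implementation and the mask variable names are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice overlap between a predicted and a manual (ground-truth) binary mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

A Dice ratio of 0.93, as reported for the CNN-derived myocardium, indicates that the predicted and manual contours overlap almost completely relative to their combined area.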
MR Imaging Texture Analysis in the Abdomen and Pelvis
Texture analysis (TA) is a form of radiomics and refers to quantitative measurements of the
histogram, distribution and/or relationship of pixel intensities or gray scales within a region of
interest on an image. TA can be applied to MRI of the abdomen and pelvis, with the main
strength being quantitative analysis of pixel intensities and heterogeneity rather than
subjective/qualitative analysis. There are multiple limitations of MR texture analysis (MRTA)
including a dependency on image acquisition and reconstruction parameters, non-standardized
approaches with or without image filtration, diverse software methods and applications, and
statistical challenges relating numerous texture analysis results to clinical outcomes in
retrospective pilot studies with small sample sizes. Despite these limitations, there is a growing
body of literature supporting MRTA. In this review, the application of MRTA to the abdomen
and pelvis will be discussed, including tissue or tumor characterization and response evaluation
or prediction of outcomes in various tumors.
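To make the "quantitative measurements of the histogram, distribution and/or relationship of pixel intensities within a region of interest" concrete, here is a minimal sketch of first-order (histogram-based) texture features computed inside an ROI mask. The function name, bin count, and feature set are illustrative assumptions and do not correspond to any particular MRTA software discussed in the review.

```python
import numpy as np

def first_order_texture_features(image, roi_mask, n_bins=64):
    """First-order (histogram-based) texture statistics within a region of interest."""
    voxels = np.asarray(image, dtype=float)[np.asarray(roi_mask, dtype=bool)]
    mu, sigma = voxels.mean(), voxels.std()
    hist, _ = np.histogram(voxels, bins=n_bins)
    p = hist / hist.sum()   # gray-level probabilities
    p = p[p > 0]            # drop empty bins before taking logarithms
    return {
        "mean": mu,
        "std": sigma,  # a simple measure of heterogeneity
        "skewness": ((voxels - mu) ** 3).mean() / sigma ** 3 if sigma else 0.0,
        "kurtosis": ((voxels - mu) ** 4).mean() / sigma ** 4 if sigma else 0.0,
        "entropy": float(-np.sum(p * np.log2(p))),  # histogram disorder
    }
```

Higher-order texture measures (e.g. gray-level co-occurrence features) additionally capture the spatial relationship of intensities, which first-order statistics ignore; that distinction is one source of the methodological diversity noted above.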
Towards Improving Learning with Consumer-Grade, Closed-Loop, Electroencephalographic Neurofeedback
Learning is an enigmatic process composed of a multitude of cognitive systems that are functionally and neuroanatomically distinct. Nevertheless, two undeniable pillars which underpin learning are attention and memory; to learn, one must attend to, and maintain a representation of, an event. Psychological and neuroscientific technologies that permit researchers to “mind-read” have revealed much about the dynamics of these distinct processes that contribute to learning. This investigation first outlines the cognitive pillars which support learning and the technologies that permit such an understanding. It then employs a novel task—the amSMART paradigm—with the goal of building a real-time, closed-loop, electroencephalographic (EEG) neurofeedback paradigm using consumer-grade brain-computer interface (BCI) hardware. Data are presented which indicate the current status of consumer-grade BCI for EEG cognition classification and enhancement, and directions are suggested for the developing world of consumer neurofeedback.
Removing the influence of a group variable in high-dimensional predictive modelling
In many application areas, predictive models are used to support or make
important decisions. There is increasing awareness that these models may
contain spurious or otherwise undesirable correlations. Such correlations may
arise from a variety of sources, including batch effects, systematic
measurement errors, or sampling bias. Without explicit adjustment, machine
learning algorithms trained using these data can produce poor out-of-sample
predictions which propagate these undesirable correlations. We propose a method
to pre-process the training data, producing an adjusted dataset that is
statistically independent of the nuisance variables with minimum information
loss. We develop a conceptually simple approach for creating an adjusted
dataset in high-dimensional settings based on a constrained form of matrix
decomposition. The resulting dataset can then be used in any predictive
algorithm with the guarantee that predictions will be statistically independent
of the group variable. We develop a scalable algorithm for implementing the
method, along with theoretical support in the form of independence and
optimality guarantees. The method is illustrated on simulation examples and applied
to two case studies: removing machine-specific correlations from brain scan
data, and removing race and ethnicity information from a dataset used to
predict recidivism. That the motivation for removing undesirable correlations
is quite different in the two applications illustrates the broad applicability
of our approach.
Comment: Update. 18 pages, 3 figures.
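The abstract does not spell out the constrained matrix decomposition itself, but the general goal can be illustrated by the simplest linear baseline: residualize each feature against one-hot group indicators, so that the adjusted data have no linear dependence on the group variable. The sketch below is that baseline only and is not the authors' method; the function name and interface are hypothetical.

```python
import numpy as np

def remove_group_effect(X, groups):
    """Residualize features X (n_samples x n_features) against a categorical group variable.

    Projects out the column space of the one-hot group indicators, so the
    adjusted features are linearly uncorrelated with group membership.
    """
    X = np.asarray(X, dtype=float)
    groups = np.asarray(groups)
    levels = np.unique(groups)
    G = (groups[:, None] == levels[None, :]).astype(float)  # one-hot indicator matrix
    coef, *_ = np.linalg.lstsq(G, X, rcond=None)            # per-group feature means via least squares
    return X - G @ coef                                      # subtract the group-explained part

# Hypothetical usage: remove scanner/site effects before fitting any predictor.
# X_adj = remove_group_effect(X_train, site_labels)
```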