Image to Images Translation for Multi-Task Organ Segmentation and Bone Suppression in Chest X-Ray Radiography
Chest X-ray radiography is one of the earliest medical imaging technologies
and remains one of the most widely used for diagnosis, screening, and treatment
follow-up of diseases of the lungs and heart. The literature in this field
reports many interesting studies on the challenging tasks of bone suppression
and organ segmentation, but these are performed separately, forgoing the
benefits of shared parameters that could optimize both processes. This study
introduces, for the first time, a multitask deep learning model that
simultaneously generates the bone-suppressed image and the organ-segmented
image, enhancing the accuracy of both tasks, minimizing the number of model
parameters, and reducing processing time, all by exploiting the interplay
between the network parameters to the benefit of both tasks. The architectural
design of this model, which relies on a conditional generative adversarial
network, shows how the well-established pix2pix (image-to-image) network is
modified and extended into the new image-to-images architecture to support
multitasking. The source code of this multitask model is shared publicly on
GitHub as the first attempt at providing a two-task pix2pix extension, a
supervised/paired/aligned/registered image-to-images translation that would be
useful in many multitask applications. Dilated convolutions are also used to
improve the results through a more effective receptive field. A comparison
with state-of-the-art algorithms, an ablation study, and a demonstration video
are provided to evaluate the efficacy and gauge the merits of the proposed
approach.
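The abstract does not specify the exact layer configuration, but the reason dilated convolutions enlarge the receptive field can be illustrated with a small sketch. For a stack of stride-1 convolutions, each layer with kernel size k and dilation d adds (k − 1) · d to the receptive field:

```python
def receptive_field(layers):
    """Receptive field (in pixels, along one axis) of a stack of
    stride-1 conv layers given as (kernel_size, dilation) pairs.

    Each layer grows the field by (kernel_size - 1) * dilation.
    """
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf

# Three plain 3x3 convs see a 7x7 window...
plain = receptive_field([(3, 1), (3, 1), (3, 1)])    # -> 7

# ...while the same three layers with dilations 1, 2, 4 see 15x15,
# at an identical parameter count.
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])  # -> 15
```

This is why exponentially increasing dilation rates are a common way to cover large image context cheaply; the specific dilation schedule used in the paper is not stated in the abstract.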
Modeling EEG data distribution with a Wasserstein Generative Adversarial Network to predict RSVP Events
Electroencephalography (EEG) data are difficult to obtain due to complex
experimental setups and the reduced comfort of prolonged wearing, which makes
it challenging to train powerful deep learning models on the limited EEG data
available. Being able to generate EEG data computationally could address this
limitation. We propose a novel Wasserstein Generative Adversarial Network with
gradient penalty (WGAN-GP) to synthesize EEG data. This network addresses
several challenges of modeling time-series EEG data, including frequency
artifacts and training instability. We further extend this network to a
class-conditioned variant that also includes a classification branch to
perform event-related classification. We trained the proposed networks to
generate one- and 64-channel data resembling the EEG signals routinely seen in
a rapid serial visual presentation (RSVP) experiment and demonstrated the
validity of the generated samples. We also tested intra-subject cross-session
classification performance for classifying RSVP target events and showed that
the class-conditioned WGAN-GP achieves improved event-classification
performance over EEGNet.
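The gradient penalty at the heart of WGAN-GP constrains the critic's gradient norm to 1 on random interpolations between real and generated samples. The following NumPy sketch shows the penalty computation in isolation; the toy linear critic and all names here are illustrative, not the paper's model (which would use a deep critic with autograd):

```python
import numpy as np

def gradient_penalty(critic_grad_fn, real, fake, rng):
    """WGAN-GP term: mean over the batch of (||grad_xhat D(xhat)|| - 1)^2,
    where xhat is a per-sample random mix of real and fake data."""
    eps = rng.uniform(size=(real.shape[0], 1))     # one mixing weight per sample
    x_hat = eps * real + (1.0 - eps) * fake        # random interpolates
    grads = critic_grad_fn(x_hat)                  # (batch, features) input gradients
    norms = np.linalg.norm(grads, axis=1)
    return np.mean((norms - 1.0) ** 2)

# Toy critic D(x) = w . x, whose input gradient is w everywhere.
w = np.array([3.0, 4.0])                           # ||w|| = 5
grad_fn = lambda x: np.tile(w, (x.shape[0], 1))

rng = np.random.default_rng(0)
real = rng.normal(size=(8, 2))
fake = rng.normal(size=(8, 2))
gp = gradient_penalty(grad_fn, real, fake, rng)
# Gradient norm is 5 for every interpolate, so gp = (5 - 1)^2 = 16
```

In a real training loop this penalty, scaled by a coefficient (commonly 10), is added to the critic loss, which is what stabilizes training relative to weight clipping.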