Blind Inpainting with Object-aware Discrimination for Artificial Marker Removal
Medical images often contain artificial markers added by doctors, which can
negatively affect the accuracy of AI-based diagnosis. Inpainting techniques
are therefore needed to remove these markers and recover the missing visual
content. However, existing inpainting methods require a manually specified
mask, which limits their application scenarios. In this paper, we introduce a
novel blind inpainting method that completes visual content automatically,
without a mask specifying the target areas in an image. Our proposed model
comprises a mask-free reconstruction network and an object-aware
discriminator. The reconstruction network consists of two branches that
predict the regions corrupted by artificial markers and simultaneously
recover the missing visual content. The object-aware discriminator relies on
the strong recognition capability of a dense object detector to ensure that
no markers can be detected in any local region of the reconstructed images,
so that the reconstruction is as close as possible to the clean image. Our
proposed method is evaluated on several medical image datasets covering
multiple imaging modalities, including ultrasound (US), magnetic resonance
imaging (MRI), and electron microscopy (EM), demonstrating that it is
effective and robust against various unknown missing-region patterns.
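The two-branch design above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): `predict_mask` and `predict_content` stand in for the two learned branches, and the output keeps clean pixels while filling only the pixels the mask branch flags as corrupted.

```python
import numpy as np

def blind_inpaint(image, predict_mask, predict_content):
    """Hypothetical two-branch blind inpainting step.

    predict_mask    -- estimates a soft mask of corrupted (marker) pixels
    predict_content -- proposes replacement content for the whole image
    Clean pixels pass through; predicted-corrupted pixels are replaced.
    """
    mask = predict_mask(image)        # values in [0, 1]; 1 means corrupted
    content = predict_content(image)  # full-image reconstruction proposal
    return (1.0 - mask) * image + mask * content

# Toy stand-ins for the two learned branches:
image = np.array([[0.5, 0.5],
                  [0.5, 1.0]])                        # 1.0 plays a "marker" pixel
predict_mask = lambda x: (x > 0.9).astype(float)      # detect marker pixels
predict_content = lambda x: np.full_like(x, 0.5)      # propose clean content

restored = blind_inpaint(image, predict_mask, predict_content)
print(restored)  # the marker pixel is replaced by 0.5; others are untouched
```

In the actual method, the object-aware discriminator would additionally penalize any local region of `restored` in which a dense object detector still finds a marker.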
Deformable Image Registration Using Convolutional Neural Networks for Connectomics
Department of Computer Science and Engineering
In this thesis, a novel deep learning method for aligning two images, called ssEMnet, is presented. The reconstruction of serial-section electron microscopy (ssEM) images gives neuroscientists critical insight into real brains. However, aligning the individual ssEM planes is not straightforward because of the densely twisted circuit structures they contain. In addition, dynamic deformations are applied to the images during the acquisition of an ssEM dataset from specimens, and unmatched artifacts such as dust and folds occur in the EM images.
In recent deep learning research, convolutional neural networks (CNNs) have been shown to handle a wide range of computer vision problems. However, there has been no clear success in applying CNNs to the ssEM image registration problem. ssEMnet is constructed from two parts. The first is a spatial transformer module that supports differentiable transformation of images within a deep neural network. It is followed by a convolutional autoencoder (CAE) that encodes dense features. The CAE is trained in an unsupervised fashion, and its features provide wide-receptive-field information for aligning the source and target images. Compared with two other major ssEM image registration methods, this method improves accuracy and robustness while requiring fewer user parameters.
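The core idea of aligning images in a learned feature space, rather than in raw pixel space, can be sketched with a toy example. Everything here is an assumed stand-in: `encode` imitates the CAE encoder with a simple local average (giving a wider receptive field than raw pixels), and the spatial transformer is reduced to an integer translation searched by brute force instead of gradient descent.

```python
import numpy as np

def encode(img):
    # Stand-in for the CAE encoder: a 3x3 local average, so each feature
    # sees a neighborhood rather than a single pixel.
    padded = np.pad(img, 1, mode="edge")
    return sum(np.roll(np.roll(padded, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))[1:-1, 1:-1] / 9.0

def alignment_loss(source, target, shift):
    # Spatial-transformer stand-in: an integer translation of the source,
    # scored by mean squared error in feature space.
    warped = np.roll(source, shift, axis=(0, 1))
    return np.mean((encode(warped) - encode(target)) ** 2)

target = np.zeros((8, 8))
target[3:5, 3:5] = 1.0
source = np.roll(target, (1, 1), axis=(0, 1))  # target displaced by (1, 1)

# The correct inverse displacement minimizes the feature-space loss:
best = min(((dy, dx) for dy in range((-2), 3) for dx in range((-2), 3)),
           key=lambda s: alignment_loss(source, target, s))
print(best)  # (-1, -1) undoes the (1, 1) displacement
```

In ssEMnet itself, the transformation is continuous and differentiable, so the loss can be minimized by backpropagation through the spatial transformer rather than by exhaustive search.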
Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning
The performance of existing supervised neuron segmentation methods is highly
dependent on the number of accurate annotations, especially when applied to
large scale electron microscopy (EM) data. By extracting semantic information
from unlabeled data, self-supervised methods can improve the performance of
downstream tasks, among which the mask image model (MIM) has been widely used
due to its simplicity and effectiveness in recovering original information from
masked images. However, due to the high degree of structural locality in EM
images, as well as the existence of considerable noise, many voxels contain
little discriminative information, making MIM pretraining inefficient on the
neuron segmentation task. To overcome this challenge, we propose a
decision-based MIM that utilizes reinforcement learning (RL) to automatically
search for optimal image masking ratio and masking strategy. Due to the vast
exploration space, using single-agent RL for voxel prediction is impractical.
Therefore, we treat each input patch as an agent with a shared behavior policy,
allowing for multi-agent collaboration. Furthermore, this multi-agent model can
capture dependencies between voxels, which is beneficial for the downstream
segmentation task. Experiments conducted on representative EM datasets
demonstrate that our approach has a significant advantage over alternative
self-supervised methods on the task of neuron segmentation. Code is available
at \url{https://github.com/ydchen0806/dbMiM}.
Comment: IJCAI 23 main track paper
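The shared-policy idea above can be sketched with a toy bandit formulation. This is an assumed illustration, not the paper's algorithm: each patch acts as an agent, all agents share one value table over candidate masking ratios, and the reward is a made-up proxy (patch variance times masking ratio) standing in for pretraining usefulness.

```python
import numpy as np

rng = np.random.default_rng(0)
ratios = [0.25, 0.5, 0.75]     # candidate masking ratios (assumed)
q = np.zeros(len(ratios))      # shared behavior policy: one value per action
counts = np.zeros(len(ratios))

def reward(patch, ratio):
    # Assumed proxy reward: masking more of an informative (high-variance)
    # patch is treated as a more useful pretraining signal.
    return patch.var() * ratio

for step in range(200):
    patches = rng.random((16, 4, 4))  # each 4x4 patch acts as one agent
    for patch in patches:
        # Epsilon-greedy action selection under the shared policy.
        if rng.random() < 0.1:
            a = int(rng.integers(len(ratios)))
        else:
            a = int(q.argmax())
        r = reward(patch, ratios[a])
        counts[a] += 1
        q[a] += (r - q[a]) / counts[a]  # incremental mean update

print(ratios[int(q.argmax())])  # the highest ratio wins under this toy reward
```

In the actual method, the agents' masking decisions also feed a MIM reconstruction network, and the policy is trained with a reinforcement learning objective rather than this simple running-mean update.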