SEGMENT3D: A Web-based Application for Collaborative Segmentation of 3D images used in the Shoot Apical Meristem
The quantitative analysis of 3D confocal microscopy images of the shoot
apical meristem helps in understanding the growth processes of some plants.
Cell segmentation in these images is crucial for computational plant
analysis, and many automated methods have been proposed. However, variations
in signal intensity across the image limit the effectiveness of those
approaches and leave no easy way for user correction. We propose a web-based collaborative 3D image
segmentation application, SEGMENT3D, to leverage automatic segmentation
results. The image is divided into 3D tiles that can be either segmented
interactively from scratch or corrected from a pre-existing segmentation.
Individual segmentation results per tile are then automatically merged via
consensus analysis and then stitched to complete the segmentation for the
entire image stack. SEGMENT3D is a comprehensive application that can be
applied to other 3D imaging modalities and general objects. It also provides an
easy way to create supervised data to advance segmentation using machine
learning models.
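The tile-and-merge idea described above can be sketched in a few lines. This is an illustrative sketch, not the authors' actual code: the function names, the majority-vote consensus rule, and the flat per-tile label lists are all assumptions made for the example.

```python
# Sketch of a tile-split plus consensus-merge pipeline (hypothetical,
# not SEGMENT3D's real API): split a 3D stack into tiles, collect one
# labeling per annotator for each tile, and take a per-voxel majority
# vote as the consensus segmentation.
from collections import Counter


def split_into_tiles(shape, tile):
    """Yield (z, y, x) slice tuples covering a volume of `shape`."""
    for z in range(0, shape[0], tile[0]):
        for y in range(0, shape[1], tile[1]):
            for x in range(0, shape[2], tile[2]):
                yield (slice(z, min(z + tile[0], shape[0])),
                       slice(y, min(y + tile[1], shape[1])),
                       slice(x, min(x + tile[2], shape[2])))


def consensus(labelings):
    """Majority vote across annotators.

    `labelings` is a list of equal-length flat label lists for one
    tile, one list per annotator; ties break by first occurrence.
    """
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*labelings)]
```

In a real system the consensus step would likely weight annotators or use a more robust agreement measure, but majority voting conveys the core idea of merging individual per-tile results before stitching.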
Guided Proofreading of Automatic Segmentations for Connectomics
Automatic cell image segmentation methods in connectomics produce merge and
split errors, which require correction through proofreading. Previous research
has identified the visual search for these errors as the bottleneck in
interactive proofreading. To aid error correction, we develop two classifiers
that automatically recommend candidate merges and splits to the user. These
classifiers use a convolutional neural network (CNN) that has been trained with
errors in automatic segmentations against expert-labeled ground truth. Our
classifiers detect potentially erroneous regions by considering a large context
region around a segmentation boundary. Corrections can then be performed by a
user with yes/no decisions, which reduces variation of information 7.5x faster
than previous proofreading methods. We also present a fully-automatic mode that
uses a probability threshold to make merge/split decisions. Extensive
experiments using the automatic approach and comparing performance of novice
and expert users demonstrate that our method performs favorably against
state-of-the-art proofreading methods on different connectomics datasets.
Comment: Supplemental material available at
http://rhoana.org/guidedproofreading/supplemental.pd
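The fully-automatic mode described above reduces to thresholding the classifier's error probability. The following is a minimal sketch under assumed names (`auto_proofread`, the `(pair, p_error)` candidate format, and the 0.95 default threshold are illustrative, not taken from the paper):

```python
# Hypothetical sketch of threshold-based automatic proofreading: a
# classifier assigns each candidate boundary a probability that it is
# erroneous, and corrections whose probability clears the threshold
# are applied without user review.
def auto_proofread(candidates, threshold=0.95):
    """Return the segment pairs whose boundaries should be corrected.

    `candidates` is a list of (pair, p_error) tuples, where `pair`
    identifies two adjacent segments and `p_error` is the classifier's
    probability that the boundary between them is an error.
    """
    return [pair for pair, p_error in candidates if p_error >= threshold]
```

Lowering the threshold trades precision for recall: more candidate errors are corrected automatically, at the cost of acting on less confident classifier outputs.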
Proof of concept of a workflow methodology for the creation of basic canine head anatomy veterinary education tool using augmented reality
Neuroanatomy can be challenging to both teach and learn within the undergraduate veterinary medicine and surgery curriculum. Traditional techniques have been used for many years, but there has now been a progression towards alternative digital models and interactive 3D models to engage the learner. However, such digital innovations have typically targeted the medical curriculum rather than the veterinary one. Therefore, we aimed to create a simple workflow methodology demonstrating how straightforward it can be to build a mobile augmented reality application of basic canine head anatomy. Using canine CT and MRI scans and widely available software programs, we demonstrate how to create an interactive model of head anatomy. This was deployed as an augmented reality application for a popular Android mobile device to demonstrate the user-friendly interface. Here we present the processes, challenges and resolutions involved in creating a highly accurate, data-based anatomical model that could potentially be used in the veterinary curriculum. This proof-of-concept study provides an excellent framework for the creation of augmented reality training products for veterinary education. The lack of similar resources within this field provides the ideal platform to extend this work into other areas of veterinary education and beyond.