Topological Navigation of Simulated Robots using Occupancy Grid
In earlier work I presented a metric navigation method in the Webots mobile
robot simulator: the navigating Khepera-like robot builds an occupancy grid of
the environment and explores the surrounding square-shaped room with a value
iteration algorithm. Here I introduce a topological navigation procedure based
on the occupancy grid process. Extending it with a skeletonization algorithm
yields a graph of important places and the routes connecting them. I also show
the significant time savings gained during the process
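The value iteration step the abstract mentions can be sketched as cost propagation from a goal cell across the free cells of the grid. This is a minimal illustrative sketch, not the paper's implementation; the grid contents, goal cell, and step cost are assumptions.

```python
import numpy as np

def value_iterate(occupancy, goal, step_cost=1.0, n_iters=200):
    """Propagate path costs from `goal` across free cells (occupancy == 0)."""
    rows, cols = occupancy.shape
    value = np.full((rows, cols), np.inf)
    value[goal] = 0.0
    for _ in range(n_iters):
        updated = value.copy()
        for r in range(rows):
            for c in range(cols):
                if occupancy[r, c]:          # occupied cells stay unreachable
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        updated[r, c] = min(updated[r, c],
                                            value[nr, nc] + step_cost)
        if np.array_equal(updated, value):   # converged
            break
        value = updated
    return value

grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1                             # a wall across the toy room
costs = value_iterate(grid, goal=(0, 0))
```

Following the steepest descent of `costs` from any free cell reaches the goal, which is what makes the grid usable for navigation.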
Focused Proofreading: Efficiently Extracting Connectomes from Segmented EM Images
Identifying complex neural circuitry from electron microscopic (EM) images
may help unlock the mysteries of the brain. However, identifying this circuitry
requires time-consuming, manual tracing (proofreading) due to the size and
intricacy of these image datasets, thus limiting state-of-the-art analysis to
very small brain regions. Potential avenues to improve scalability include
automatic image segmentation and crowdsourcing, but current efforts have had
limited success. In this paper, we propose a new strategy, focused
proofreading, that works with automatic segmentation and aims to limit
proofreading to the regions of a dataset that are most impactful to the
resulting circuit. We then introduce a novel workflow, which exploits
biological information such as synapses, and apply it to a large dataset in the
fly optic lobe. With our techniques, we achieve significant tracing speedups of
3-5x without sacrificing the quality of the resulting circuit. Furthermore, our
methodology makes the task of proofreading much more accessible and hence
potentially enhances the effectiveness of crowdsourcing
Semiautomated Skeletonization of the Pulmonary Arterial Tree in Micro-CT Images
We present a simple and robust approach that utilizes planar images at different angular rotations combined with unfiltered back-projection to locate the central axes of the pulmonary arterial tree. Three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, the darkest pixel in the unfiltered back-projection is assumed to be the center of the vessel. The computer replaces this point with the newly computer-calculated point. A second back-projection is calculated around the original point orthogonal to a vector connecting the newly calculated first point and the user-determined second point. The darkest pixel within this second reconstruction is determined, and the computer replaces the second point with its XYZ coordinates. Following a vector based on a moving average of previously determined 3-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The computer estimates the vessel diameter along the set of previously determined points using a method similar to the full width at half maximum algorithm. On all subsequent vessels, the process works the same way, except that at each point the distances between the current point and all previously determined points along different vessels are computed. If such a distance is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized
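The tracking loop described above can be sketched schematically: advance along a moving average of the accepted centerline directions and snap each new point to the vessel center. Here `darkest_pixel_near` is a hypothetical placeholder standing in for the unfiltered back-projection step; the step size, window, and stopping behavior are assumptions, not the paper's parameters.

```python
import numpy as np

def track_centerline(p0, p1, darkest_pixel_near, step=1.0,
                     window=5, max_points=500):
    """Follow a vessel axis from two seed points, advancing along a
    moving average of previously accepted centerline directions."""
    points = [np.asarray(p0, float), np.asarray(p1, float)]
    while len(points) < max_points:
        recent = points[-window:]
        direction = recent[-1] - recent[0]       # moving-average direction
        norm = np.linalg.norm(direction)
        if norm == 0:
            break
        guess = points[-1] + step * direction / norm
        center = darkest_pixel_near(guess, direction)  # snap to vessel center
        if center is None:                       # user stop / vessel lost
            break
        points.append(np.asarray(center, float))
    return np.array(points)

# Usage with a toy "vessel" running along the x-axis up to x = 9:
fake_center = lambda guess, d: guess if guess[0] < 10 else None
axis = track_centerline((0, 0, 0), (1, 0, 0), fake_center)
```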
CherryPicker: Semantic Skeletonization and Topological Reconstruction of Cherry Trees
In plant phenotyping, accurate trait extraction from 3D point clouds of trees
is still an open problem. For automatic modeling and trait extraction of tree
organs such as blossoms and fruits, the semantically segmented point cloud of a
tree and the tree skeleton are necessary. Therefore, we present CherryPicker,
an automatic pipeline that reconstructs photo-metric point clouds of trees,
performs semantic segmentation and extracts their topological structure in form
of a skeleton. Our system combines several state-of-the-art algorithms to
enable automatic processing for further usage in 3D-plant phenotyping
applications. Within this pipeline, we present a method to automatically
estimate the scale factor of a monocular reconstruction to overcome scale
ambiguity and obtain metrically correct point clouds. Furthermore, we propose a
semantic skeletonization algorithm built upon Laplacian-based contraction. We
also show that, by weighting different tree organs semantically, our approach
can effectively remove artifacts induced by occlusion and structural size
variations. CherryPicker obtains high-quality topology reconstructions of
cherry trees with precise details.
Comment: Accepted by CVPR 2023 Vision for Agriculture Workshop
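A toy sketch of contraction with per-point semantic weights, loosely in the spirit of the weighted Laplacian-based contraction described above: each point is pulled toward the mean of its nearest neighbors, and points with larger weights resist contraction. The neighbor graph, weights, and iteration count are illustrative assumptions, not CherryPicker's algorithm.

```python
import numpy as np

def contract(points, weights, k=8, n_iters=20, strength=0.5):
    """Pull each point toward the mean of its k nearest neighbors;
    larger `weights` act as anchors and contract less."""
    pts = np.asarray(points, float).copy()
    w = np.asarray(weights, float)
    for _ in range(n_iters):
        # pairwise distances (fine for small toy clouds)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        nn = np.argsort(d, axis=1)[:, 1:k + 1]   # skip self (index 0)
        mean = pts[nn].mean(axis=1)              # local neighborhood mean
        lam = strength * (1.0 - w)               # heavy points move less
        pts += lam[:, None] * (mean - pts)
    return pts

rng = np.random.default_rng(0)
cloud = rng.normal(size=(50, 3))                 # noisy point blob
skel = contract(cloud, weights=np.zeros(50), k=8)
```

After contraction the cloud collapses toward thin local structures, which is the starting point for extracting a skeleton graph.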
Measuring decline in white matter integrity after systemic treatment for breast cancer: Omitting skeletonization enhances sensitivity
Chemotherapy for non-central nervous system cancers is associated with abnormalities in brain structure and function. Diffusion tensor imaging (DTI) allows for studying in vivo microstructural changes in brain white matter. Tract-based spatial statistics (TBSS) is a widely used processing pipeline in which DTI data are typically normalized to a generic DTI template and then ‘skeletonized’ to compensate for misregistration effects. However, this approach greatly reduces the overall white matter volume that is subjected to statistical analysis, leading to information loss. Here, we present a re-analysis of longitudinal data previously analyzed with standard TBSS (Menning et al., BIB 2018, 324–334). For our current approach, we constructed a pipeline with an optimized registration method in Advanced Normalization Tools (ANTs) in which DTI data are registered to a study-specific, high-resolution T1 template and the skeletonization step is omitted. In a head-to-head comparison, we show that with our novel approach breast cancer survivors who had received chemotherapy with or without endocrine therapy (BC + SYST, n = 26) showed a global decline in overall FA that was not present in breast cancer survivors who did not receive systemic therapy (BC-SYST, n = 23) or in women without a cancer diagnosis (no-cancer controls, NC, n = 30). With the standard TBSS approach we did not find any group differences. Moreover, voxel-based analysis with our novel pipeline showed a widespread decline in FA in the BC + SYST group compared to the NC group. Interestingly, the BC-SYST group also showed a decline in FA compared to the NC group, although in far fewer voxels. These results were not found with the standard TBSS approach. We demonstrate that a modified processing pipeline makes DTI data more sensitive to detecting changes in white matter integrity in non-CNS cancer patients after treatment, particularly chemotherapy
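Fractional anisotropy (FA), the white matter metric compared throughout this abstract, is computed from the three eigenvalues of the diffusion tensor. The sketch below shows the standard FA formula, not code from the paper's pipeline; the example eigenvalues are illustrative.

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||, in [0, 1]."""
    lam = np.asarray(eigenvalues, float)
    md = lam.mean()                              # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

fractional_anisotropy([1.0, 1.0, 1.0])   # isotropic diffusion -> 0.0
fractional_anisotropy([1.7, 0.2, 0.2])   # strongly anisotropic -> near 1
```

FA near 1 indicates coherent fiber bundles; a group-level decline in FA is the signal the optimized pipeline detects that standard TBSS misses.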