Quantification of Nematic Cell Polarity in Three-dimensional Tissues
How epithelial cells coordinate their polarity to form functional tissues is
an open question in cell biology. Here, we characterize a unique type of
polarity found in liver tissue, nematic cell polarity, which is different from
vectorial cell polarity in simple, sheet-like epithelia. We propose a
conceptual and algorithmic framework to characterize complex patterns of
polarity proteins on the surface of a cell in terms of a multipole expansion.
To rigorously quantify previously observed tissue-level patterns of nematic
cell polarity (Morales-Navarrete et al., eLife 8:e44860, 2019), we introduce
the concept of co-orientational order parameters, which generalize the known
biaxial order parameters of the theory of liquid crystals. Applying these
concepts to three-dimensional reconstructions of single cells from
high-resolution imaging data of mouse liver tissue, we show that the axes of
nematic cell polarity of hepatocytes exhibit local coordination and are aligned
with the biaxially anisotropic sinusoidal network for blood transport. Our
study characterizes liver tissue as a biological example of a biaxial liquid
crystal. The general methodology developed here could be applied to other
tissues or in-vitro organoids.
Comment: 27 pages, 9 color figures
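The multipole idea can be sketched numerically: given marker positions on a cell surface, the first moment (dipole) captures vectorial polarity, while the traceless second moment, written in the liquid-crystal Q-tensor form, captures nematic polarity and yields the polarity axis as its dominant eigenvector. The sketch below is illustrative only, assuming marker positions relative to the cell center; it is not the authors' implementation, and `polarity_moments` is a hypothetical name.

```python
import numpy as np

def polarity_moments(points):
    """Dipole and nematic (quadrupole) moments of surface marker positions.

    points: (N, 3) array of marker positions on the cell surface, relative
    to the cell center. Illustrative sketch, not the paper's algorithm.
    """
    # Project markers onto unit directions from the cell center
    n = points / np.linalg.norm(points, axis=1, keepdims=True)
    dipole = n.mean(axis=0)  # vectorial polarity (first moment)
    # Traceless symmetric second moment: the liquid-crystal Q-tensor form
    Q = 1.5 * (n[:, :, None] * n[:, None, :]).mean(axis=0) - 0.5 * np.eye(3)
    # Nematic polarity axis = eigenvector of the largest-magnitude eigenvalue
    w, v = np.linalg.eigh(Q)
    axis = v[:, np.argmax(np.abs(w))]
    return dipole, Q, axis
```

For markers clustered at two antipodal poles, the dipole nearly vanishes while the nematic axis recovers the pole-to-pole direction, which is exactly the distinction between vectorial and nematic polarity drawn in the abstract.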
Robust and Fast 3D Scan Alignment using Mutual Information
This paper presents a mutual information (MI) based algorithm for the
estimation of full 6-degree-of-freedom (DOF) rigid body transformation between
two overlapping point clouds. We first divide the scene into a 3D voxel grid
and define simple to compute features for each voxel in the scan. The two scans
that need to be aligned are considered as a collection of these features and
the MI between these voxelized features is maximized to obtain the correct
alignment of scans. We have implemented our method with various simple point
cloud features (such as number of points in voxel, variance of z-height in
voxel) and compared the performance of the proposed method with existing
point-to-point and point-to-distribution registration methods. We show that
our approach has an efficient and fast parallel implementation on GPU, and
evaluate the robustness and speed of the proposed algorithm on two real-world
datasets which contain a variety of dynamic scenes from different environments.
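The core loop of the approach — voxelize each scan, extract a simple per-voxel feature, and score a candidate alignment by the mutual information between the two feature grids — can be sketched as follows. This is a minimal stand-in (point-count feature, histogram MI estimator), not the paper's GPU implementation; the function names are hypothetical.

```python
import numpy as np

def voxel_counts(points, origin, size, dims):
    """Number of points per voxel: one of the simple features the paper uses."""
    idx = np.floor((points - origin) / size).astype(int)
    inside = np.all((idx >= 0) & (idx < dims), axis=1)
    grid = np.zeros(dims, dtype=int)
    np.add.at(grid, tuple(idx[inside].T), 1)  # accumulate counts per voxel
    return grid

def mutual_information(a, b, bins=16):
    """MI between two voxelized feature grids, via their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0  # avoid log(0) on empty histogram cells
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

A registration search would then evaluate `mutual_information` over candidate 6-DOF transforms of one scan and keep the maximizer; a correctly aligned pair scores higher than a misaligned one.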
Simultaneous Facial Landmark Detection, Pose and Deformation Estimation under Facial Occlusion
Facial landmark detection, head pose estimation, and facial deformation
analysis are typical facial behavior analysis tasks in computer vision. The
existing methods usually perform each task independently and sequentially,
ignoring their interactions. To tackle this problem, we propose a unified
framework for simultaneous facial landmark detection, head pose estimation, and
facial deformation analysis, and the proposed model is robust to facial
occlusion. Following a cascade procedure augmented with model-based head pose
estimation, we iteratively update the facial landmark locations, facial
occlusion, head pose, and facial deformation until convergence. The
experimental results on benchmark databases demonstrate the effectiveness of
the proposed method for simultaneous facial landmark detection, head pose and
facial deformation estimation, even when the images contain facial occlusion.
Comment: International Conference on Computer Vision and Pattern Recognition, 201
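The cascade's "iteratively update until convergence" step can be sketched as a generic alternating fixed-point loop. The update functions passed in are hypothetical stand-ins for the paper's landmark, occlusion, pose, and deformation estimators; only the control flow is being illustrated.

```python
import numpy as np

def cascade_refine(landmarks, pose, deform, steps, tol=1e-6, max_iter=50):
    """Alternate the update steps until the joint estimate stops changing.

    steps: list of functions, each mapping (landmarks, pose, deform) to an
    updated (landmarks, pose, deform) tuple. Hypothetical stand-ins for the
    paper's estimators; this shows only the cascade control flow.
    """
    state = (np.asarray(landmarks), np.asarray(pose), np.asarray(deform))
    for _ in range(max_iter):
        prev = state
        for step in steps:  # each estimator refines the shared state in turn
            state = tuple(np.asarray(x) for x in step(*state))
        # Converged when no component moved more than tol
        if max(np.linalg.norm(a - b) for a, b in zip(state, prev)) < tol:
            break
    return state
```

With contractive updates (each step pulling the state toward a consistent joint estimate), the loop terminates well before `max_iter`, mirroring the convergence behavior the abstract describes.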
Perception-aware Path Planning
In this paper, we give a double twist to the problem of planning under
uncertainty. State-of-the-art planners seek to minimize the localization
uncertainty by only considering the geometric structure of the scene. In this
paper, we argue that motion planning for vision-controlled robots should be
perception aware in that the robot should also favor texture-rich areas to
minimize the localization uncertainty during a goal-reaching task. Thus, we
describe how to optimally incorporate the photometric information (i.e.,
texture) of the scene, in addition to the geometric one, to compute the
uncertainty of vision-based localization during path planning. To avoid the
caveats of feature-based localization systems (i.e., dependence on feature type
and user-defined thresholds), we use dense, direct methods. This allows us to
compute the localization uncertainty directly from the intensity values of
every pixel in the image. We also describe how to compute trajectories online,
considering also scenarios with no prior knowledge about the map. The proposed
framework is general and can easily be adapted to different robotic platforms
and scenarios. The effectiveness of our approach is demonstrated with extensive
experiments in both simulated and real-world environments using a
vision-controlled micro aerial vehicle.
Comment: 16 pages, 20 figures, revised version. Conditionally accepted for
IEEE Transactions on Robotics
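The claim that texture-rich areas reduce localization uncertainty can be made concrete: in direct (photometric) methods, the information a view contributes to alignment grows with its image gradients. The sketch below scores candidate views by mean squared intensity gradient as an illustrative proxy — an assumption for exposition, not the paper's exact uncertainty formulation.

```python
import numpy as np

def texture_information(image):
    """Photometric-information proxy for a view: mean squared intensity
    gradient. Direct alignment is better constrained where gradients are
    large, so texture-rich views localize better. Illustrative proxy only."""
    gy, gx = np.gradient(image.astype(float))
    return float((gx**2 + gy**2).mean())

def pick_viewpoint(candidate_images):
    """A perception-aware planner would prefer the most texture-rich view
    among candidates along a path (toy selection rule)."""
    scores = [texture_information(im) for im in candidate_images]
    return int(np.argmax(scores))
```

A textureless wall scores zero while a high-contrast patch scores high, which is the intuition behind steering a vision-controlled robot toward texture during a goal-reaching task.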