262 research outputs found
Effect of AFM nanoindentation loading rate on the characterization of mechanical properties of vascular endothelial cell
Vascular endothelial cells form a barrier that blocks drugs from entering brain tissue in the treatment of central nervous system diseases. The mechanical responses of vascular endothelial cells play a key role in the passage of drugs through the blood–brain barrier. Although nanoindentation with AFM (atomic force microscopy) has been widely used to investigate the mechanical properties of cells, the particular mechanism that determines the mechanical response of vascular endothelial cells is still poorly understood. To overcome this limitation, nanoindentation experiments were performed at different loading rates during the ramp stage to investigate the effect of loading rate on the characterization of the mechanical properties of bEnd.3 cells (a mouse brain endothelial cell line). Inverse finite element analysis was implemented to determine the mechanical properties of the cells. The loading rate effect appears to be more significant in the short-term peak force than in the long-term force. A higher loading rate yields a larger elastic modulus for bEnd.3 cells, while some mechanical parameters show no clear dependence on the indentation rate. This study provides new insights into the mechanical responses of vascular endothelial cells, which is important for a deeper understanding of cell mechanobiological mechanisms in the blood–brain barrier.
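The abstract extracts moduli by inverse finite element analysis; as a hedged illustration of the simpler route often used for AFM force–indentation ramps, the sketch below fits the Hertz contact model to a ramp by linear least squares. The tip radius, Poisson's ratio, and all data values here are hypothetical, not taken from the paper.

```python
import math

def hertz_force(delta, E, nu=0.5, R=5e-6):
    """Hertz contact force for a spherical tip:
    F = 4/3 * E/(1 - nu^2) * sqrt(R) * delta^(3/2).
    delta: indentation depth (m), E: Young's modulus (Pa),
    nu: Poisson's ratio, R: tip radius (m)."""
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * math.sqrt(R) * delta ** 1.5

def fit_modulus(deltas, forces, nu=0.5, R=5e-6):
    """Least-squares estimate of E: the model is linear in
    x = 4/3 * sqrt(R)/(1 - nu^2) * delta^(3/2), so E = sum(F*x) / sum(x*x)."""
    xs = [(4.0 / 3.0) * math.sqrt(R) / (1.0 - nu ** 2) * d ** 1.5 for d in deltas]
    return sum(f * x for f, x in zip(forces, xs)) / sum(x * x for x in xs)
```

On a noise-free synthetic ramp this recovers the generating modulus; real AFM data would need contact-point detection and noise handling, which the sketch omits.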
MFM-Net: Unpaired Shape Completion Network with Multi-stage Feature Matching
Unpaired 3D object completion aims to predict a complete 3D shape from an
incomplete input without knowing the correspondence between the complete and
incomplete shapes during training. To build the correspondence between two data
modalities, previous methods usually apply adversarial training to match the
global shape features extracted by the encoder. However, this ignores the
correspondence between multi-scale geometric information embedded in the
pyramidal hierarchy of the decoder, which makes previous methods struggle to
generate high-quality complete shapes. To address this problem, we propose a
novel unpaired shape completion network, named MFM-Net, using multi-stage
feature matching, which decomposes the learning of geometric correspondence
into multiple stages throughout the hierarchical generation process in the point
cloud decoder. Specifically, MFM-Net adopts a dual path architecture to
establish multiple feature matching channels in different layers of the
decoder; these channels are then combined with adversarial learning to align the
distributions of features from the complete and incomplete modalities. In addition,
a refinement stage is applied to enhance the details. As a result, MFM-Net makes use
of a more comprehensive understanding to establish the geometric correspondence
between complete and incomplete shapes from a local-to-global perspective, which
enables more detailed geometric inference for generating high-quality complete
shapes. We conduct comprehensive experiments on several datasets, and the
results show that our method outperforms previous unpaired point
cloud completion methods by a large margin.
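The paper's multi-stage matching is adversarial (discriminator-based); the sketch below substitutes a simple per-stage moment-matching loss purely to show the structure of matching feature distributions at every decoder stage rather than only at the encoder's global feature. The function names and feature layout are invented for illustration.

```python
def moment_loss(feats_a, feats_b):
    """Match per-dimension mean and variance of two feature batches -- a simple
    stand-in for the adversarial matching MFM-Net applies at each decoder stage.
    Each batch is a list of equal-length feature vectors."""
    dim = len(feats_a[0])
    loss = 0.0
    for d in range(dim):
        a = [f[d] for f in feats_a]
        b = [f[d] for f in feats_b]
        mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
        var_a = sum((x - mean_a) ** 2 for x in a) / len(a)
        var_b = sum((x - mean_b) ** 2 for x in b) / len(b)
        loss += (mean_a - mean_b) ** 2 + (var_a - var_b) ** 2
    return loss

def multi_stage_matching_loss(stages_complete, stages_incomplete):
    """Sum the matching loss over every decoder stage, so correspondence is
    learned local-to-global instead of only on one global feature."""
    return sum(moment_loss(a, b)
               for a, b in zip(stages_complete, stages_incomplete))
```

The loss is zero when the two paths produce identically distributed features at every stage, which is the fixed point the adversarial version also targets.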
MCS: Multi-Target Masked Point Modeling with Learnable Codebook and Siamese Decoders
Masked point modeling has become a promising scheme of self-supervised
pre-training for point clouds. Existing methods reconstruct either the original
points or related features as the objective of pre-training. However,
considering the diversity of downstream tasks, it is necessary for the model to
have both low- and high-level representation modeling capabilities to capture
geometric details and semantic contexts during pre-training. To this end,
we propose MCS to equip the model with both abilities. Specifically,
with a masked point cloud as input, MCS introduces two decoders to predict
masked representations and the original points simultaneously. Since an extra
decoder doubles the parameters of the decoding process and may lead to
overfitting, we propose siamese decoders to keep the number of learnable
parameters unchanged. Further, we propose an online codebook that projects
continuous tokens into discrete ones before the masked points are reconstructed.
In this way, we force the decoder to operate through combinations of tokens
rather than memorizing each token. Comprehensive experiments show that MCS
achieves superior performance on both classification and segmentation tasks,
outperforming existing methods.
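The discretization step described above — projecting continuous tokens onto a codebook before reconstruction — can be sketched as a nearest-codeword lookup. In the paper the codebook is learned online during pre-training; here it is a fixed list purely for illustration, and all names are hypothetical.

```python
def quantize(tokens, codebook):
    """Project each continuous token onto its nearest codeword (squared L2),
    returning (indices, quantized vectors). This is the discretization that
    forces the decoder to work with combinations of a finite set of codewords
    instead of memorizing each continuous token."""
    indices, vectors = [], []
    for t in tokens:
        dists = [sum((a - b) ** 2 for a, b in zip(t, c)) for c in codebook]
        i = dists.index(min(dists))
        indices.append(i)
        vectors.append(codebook[i])
    return indices, vectors
```

A learned, online version would additionally update the codewords (and use a straight-through gradient), which this sketch leaves out.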
One-shot Implicit Animatable Avatars with Model-based Priors
Existing neural rendering methods for creating human avatars typically either
require dense input signals such as video or multi-view images, or leverage a
learned prior from large-scale, human-specific 3D datasets so that
reconstruction can be performed from sparse-view inputs. Most of these methods
fail to achieve realistic reconstruction when only a single image is available.
To enable the data-efficient creation of realistic animatable 3D humans, we
propose ELICIT, a novel method for learning human-specific neural radiance
fields from a single image. Inspired by the fact that humans can effortlessly
estimate the body geometry and imagine full-body clothing from a single image,
we leverage two priors in ELICIT: a 3D geometry prior and a visual semantic prior.
Specifically, ELICIT utilizes the 3D body shape geometry prior from a skinned
vertex-based template model (i.e., SMPL) and implements the visual clothing
semantic prior with CLIP-based pretrained models. Both priors are used to
jointly guide the optimization for creating plausible content in the invisible
areas. Taking advantage of the CLIP models, ELICIT can use text descriptions to
generate text-conditioned content in unseen regions. To further improve visual
details, we propose a segmentation-based sampling strategy that locally refines
different parts of the avatar. Comprehensive evaluations on multiple popular
benchmarks, including ZJU-MoCap, Human3.6M, and DeepFashion, show that ELICIT
outperforms strong baseline avatar-creation methods when only a single
image is available. The code is public for research purposes at
https://huangyangyi.github.io/ELICIT/.
Comment: To appear at ICCV 2023. Project website: https://huangyangyi.github.io/ELICIT
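The segmentation-based sampling strategy is described only at a high level in the abstract; as a loose sketch under the assumption that sample budgets are allocated per body-part segment in proportion to mask area, one might write (all names hypothetical):

```python
def part_sample_budgets(part_areas, n_rays):
    """Allocate ray samples to each body-part segment in proportion to its
    pixel area, giving any rounding remainder to the largest part. This is an
    assumed scheme for illustration; ELICIT's actual strategy is not
    specified at this level in the abstract."""
    total = sum(part_areas)
    budgets = [n_rays * a // total for a in part_areas]
    remainder = n_rays - sum(budgets)
    budgets[part_areas.index(max(part_areas))] += remainder
    return budgets
```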