Viewpoint-Aware Loss with Angular Regularization for Person Re-Identification
Although supervised person re-identification (Re-ID) has made great progress recently, Re-ID remains a challenging visual task because of the viewpoint variation of a person. Most existing viewpoint-based person Re-ID methods project images from each viewpoint into separate, unrelated sub-feature spaces: they model the identity-level distribution inside an individual viewpoint but ignore the underlying relationship between different viewpoints. To address this problem, we propose a novel approach called Viewpoint-Aware Loss with Angular Regularization (VA-reID). Instead of one subspace per viewpoint, our method projects the features from different viewpoints onto a unified hypersphere and effectively models the feature distribution at both the identity level and the viewpoint level. In addition, rather than modeling different viewpoints as the hard labels used in conventional viewpoint classification, we introduce viewpoint-aware adaptive label smoothing regularization (VALSR), which assigns adaptive soft labels to feature representations. VALSR effectively resolves the ambiguity of viewpoint cluster label assignment. Extensive experiments on the Market1501 and DukeMTMC-reID datasets demonstrate that our method outperforms state-of-the-art supervised Re-ID methods.
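To make the soft-label idea concrete, here is a minimal sketch of conventional label smoothing over viewpoint classes, the baseline that VALSR generalizes by making the smoothing adaptive per sample. The function name, the `epsilon` value, and the uniform spreading rule are our own illustration, not the paper's formulation.

```python
def smooth_viewpoint_labels(true_view: int, num_views: int, epsilon: float = 0.1) -> list[float]:
    # Standard label smoothing: the ground-truth viewpoint keeps
    # 1 - epsilon of the probability mass, and the remaining epsilon
    # is spread uniformly over the other viewpoint classes.
    # VALSR replaces this fixed scheme with adaptive soft labels.
    off = epsilon / (num_views - 1)
    return [1.0 - epsilon if v == true_view else off for v in range(num_views)]
```

Training against such a soft target, instead of a one-hot viewpoint label, keeps the classifier from being overconfident about ambiguous viewpoint assignments.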
Rethinking Temporal Fusion for Video-based Person Re-identification on Semantic and Time Aspect
Recently, research interest in person re-identification (ReID) has gradually shifted to video-based methods, which acquire a person representation by aggregating the frame features of an entire video. However, existing video-based ReID methods do not consider the semantic differences between the outputs of different network stages, which potentially compromises the information richness of the person features. Furthermore, traditional methods ignore important relationships among frames, which causes information redundancy in fusion along the time axis. To address these issues, we propose a novel, general temporal fusion framework that aggregates frame features along both the semantic and time aspects. For the semantic aspect, a multi-stage fusion network is explored to fuse richer frame features at multiple semantic levels, which effectively reduces the information loss caused by traditional single-stage fusion. For the time axis, the existing intra-frame attention method is improved with a novel inter-frame attention module, which effectively reduces information redundancy in temporal fusion by taking the relationships among frames into consideration. The experimental results show that our approach effectively improves video-based re-identification accuracy, achieving state-of-the-art performance.
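The redundancy-aware fusion idea can be sketched as follows: score each frame by how similar it is to the other frames, then down-weight near-duplicate frames before averaging along the time axis. This is only an illustrative toy (the function name, cosine-similarity scoring, and softmax weighting are our own assumptions), not the paper's inter-frame attention architecture.

```python
import math

def inter_frame_fusion(frame_feats: list[list[float]]) -> list[float]:
    # Score each frame by its average cosine similarity to the other
    # frames; frames that duplicate many others get a high redundancy
    # score and hence a low fusion weight (softmax over -redundancy).
    n = len(frame_feats)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def cos(a, b):
        na, nb = math.sqrt(dot(a, a)), math.sqrt(dot(b, b))
        return dot(a, b) / (na * nb) if na and nb else 0.0

    redundancy = [sum(cos(f, g) for j, g in enumerate(frame_feats) if j != i) / (n - 1)
                  for i, f in enumerate(frame_feats)]
    exps = [math.exp(-r) for r in redundancy]
    z = sum(exps)
    weights = [e / z for e in exps]

    # Weighted average of the frame features along the time axis.
    dim = len(frame_feats[0])
    return [sum(w * f[d] for w, f in zip(weights, frame_feats)) for d in range(dim)]
```

With two identical frames and one distinct frame, the distinct frame receives more than the uniform 1/n share of the fusion weight, which is the behavior the inter-frame module is meant to achieve.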
Comparison of immature and mature bone marrow-derived dendritic cells by atomic force microscopy
A comparative study of immature and mature bone marrow-derived dendritic cells (BMDCs) was performed for the first time with an atomic force microscope (AFM) to clarify differences in their nanostructure and adhesion force. AFM images revealed that the immature BMDCs, treated with granulocyte macrophage colony-stimulating factor plus IL-4, mainly appeared round with a smooth surface, whereas the mature BMDCs, induced by lipopolysaccharide, became larger, flatter, and longer and displayed an irregular shape with numerous pseudopodia or lamellipodia and ruffles on the cell membrane. AFM quantitative analysis further showed that the surface roughness of the mature BMDCs greatly increased and that their adhesion force was roughly fourfold that of the immature BMDCs. The nano-features of the mature BMDCs were supported by the high level of IL-12 they produced and the high expression of MHC-II on their surface. These findings provide new insight into the nanostructure of immature and mature BMDCs.
Single cell atlas for 11 non-model mammals, reptiles and birds.
The availability of viral entry factors is a prerequisite for the cross-species transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Large-scale single-cell screening of animal cells could reveal the expression patterns of viral entry genes in different hosts. However, such exploration for SARS-CoV-2 remains limited. Here, we perform single-nucleus RNA sequencing on 11 non-model species, including pets (cat, dog, hamster, and lizard), livestock (goat and rabbit), poultry (duck and pigeon), and wildlife (pangolin, tiger, and deer), and investigate the co-expression of ACE2 and TMPRSS2. Furthermore, a cross-species analysis of the lung cell atlas of the studied mammals, reptiles, and birds reveals core developmental programs, critical connectomes, and conserved regulatory circuits among these evolutionarily distant species. Overall, our work provides a compendium of gene expression profiles for non-model animals, which could be employed to identify potential SARS-CoV-2 target cells and putative zoonotic reservoirs.
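The co-expression measure used in such screens can be illustrated as the fraction of cells in which both entry genes are detected above a count threshold. This sketch assumes a tiny gene-by-cell count mapping; the function name, data layout, and threshold are hypothetical, not the authors' pipeline.

```python
def coexpression_fraction(expr: dict[str, list[int]],
                          gene_a: str = "ACE2",
                          gene_b: str = "TMPRSS2",
                          threshold: int = 0) -> float:
    # expr maps each gene to a list of per-cell counts (same cell order).
    # A cell "co-expresses" the pair when both counts exceed the threshold.
    a, b = expr[gene_a], expr[gene_b]
    both = sum(1 for x, y in zip(a, b) if x > threshold and y > threshold)
    return both / len(a)
```

Cell types with a high co-expression fraction for ACE2 and TMPRSS2 are the candidate SARS-CoV-2 target cells the abstract refers to.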