Learning Robust Visual-Semantic Embedding for Generalizable Person Re-identification
Generalizable person re-identification (Re-ID) is an active research topic in machine learning and computer vision, and it plays a significant role in realistic scenarios owing to its applications in public security and video surveillance. However, previous methods focus mainly on visual representation learning while neglecting the potential of semantic features during training, which easily leads to poor generalization when the model is adapted to a new domain. In this paper, we propose a Multi-Modal Equivalent Transformer, called MMET, for more robust visual-semantic embedding learning on visual, textual, and visual-textual tasks. To further enhance robust feature learning in the transformer, we introduce a dynamic masking mechanism, the Masked Multimodal Modeling (MMM) strategy, which masks both image patches and text tokens; it works jointly on multimodal or unimodal data and significantly boosts the performance of generalizable person Re-ID. Extensive experiments on benchmark datasets demonstrate the competitive performance of our method over previous approaches. We hope this method will advance research towards visual-semantic representation learning. Our source code is publicly available at https://github.com/JeremyXSC/MMET
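The abstract does not spell out how the dynamic masking operates, but a minimal PyTorch sketch of random masking applied uniformly to image patches and text tokens, in the spirit of MMM, could look as follows (the shapes, mask ratios, and shared masking routine are illustrative assumptions, not the paper's implementation):

    import torch

    def mask_tokens(tokens: torch.Tensor, mask_ratio: float, mask_value: float = 0.0):
        """Randomly mask a fraction of a token sequence (image patches or text tokens)."""
        batch, seq_len, _ = tokens.shape
        num_masked = int(seq_len * mask_ratio)
        # Draw a random permutation per sample; mask the first num_masked positions.
        ids = torch.rand(batch, seq_len, device=tokens.device).argsort(dim=1)
        mask = torch.zeros(batch, seq_len, dtype=torch.bool, device=tokens.device)
        mask.scatter_(1, ids[:, :num_masked], True)
        return tokens.masked_fill(mask.unsqueeze(-1), mask_value), mask

    # The same routine applies to unimodal or multimodal input: here, hypothetical
    # ViT patch embeddings and text token embeddings with different mask ratios.
    image_patches = torch.randn(2, 196, 768)
    text_tokens = torch.randn(2, 32, 768)
    masked_img, img_mask = mask_tokens(image_patches, mask_ratio=0.5)
    masked_txt, txt_mask = mask_tokens(text_tokens, mask_ratio=0.15)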
Multi-modal Facial Affective Analysis based on Masked Autoencoder
Human affective behavior analysis focuses on analyzing human expressions or
other behaviors to enhance the understanding of human psychology. The CVPR 2023
Competition on Affective Behavior Analysis in-the-wild (ABAW) is dedicated to
providing the high-quality, large-scale Aff-wild2 dataset for the recognition of commonly used emotion representations, such as Action Units (AU), basic expression categories (EXPR), and Valence-Arousal (VA). The competition is
committed to making significant strides in improving the accuracy and
practicality of affective analysis research in real-world scenarios. In this
paper, we introduce our submission to the CVPR 2023: ABAW5. Our approach
involves several key components. First, we utilize the visual information from a Masked Autoencoder (MAE) model that has been pre-trained on a large-scale face image dataset in a self-supervised manner. Next, we fine-tune the MAE encoder on the image frames from Aff-wild2 for the AU, EXPR, and VA tasks, which can be
regarded as a static and uni-modal training. Additionally, we leverage the
multi-modal and temporal information from the videos and implement a
transformer-based framework to fuse the multi-modal features. Our approach
achieves impressive results in the ABAW5 competition, with average F1 scores of 55.49% and 41.21% in the AU and EXPR tracks, respectively, and an average CCC of 0.6372 in the VA track. Our approach ranks first in the EXPR and AU tracks, and second in the VA track. Extensive quantitative experiments and ablation studies demonstrate the effectiveness of our proposed method.
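As a rough illustration of the transformer-based multimodal fusion described above, the following hypothetical PyTorch module projects per-frame visual features (e.g., from a fine-tuned MAE encoder) and audio features into a shared space and applies temporal self-attention; all dimensions and the two-dimensional valence-arousal head are assumptions, not the submission's architecture:

    import torch
    import torch.nn as nn

    class MultimodalTemporalFusion(nn.Module):
        """Fuse per-frame visual and audio features with a temporal transformer."""

        def __init__(self, visual_dim=768, audio_dim=128, d_model=256, num_layers=2):
            super().__init__()
            self.visual_proj = nn.Linear(visual_dim, d_model)
            self.audio_proj = nn.Linear(audio_dim, d_model)
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
            self.head = nn.Linear(d_model, 2)  # e.g. per-frame valence-arousal

        def forward(self, visual_feats, audio_feats):
            # visual_feats: (batch, time, visual_dim), e.g. MAE encoder outputs
            # audio_feats:  (batch, time, audio_dim)
            x = self.visual_proj(visual_feats) + self.audio_proj(audio_feats)
            x = self.encoder(x)  # temporal self-attention across frames
            return self.head(x)

    model = MultimodalTemporalFusion()
    va = model(torch.randn(2, 16, 768), torch.randn(2, 16, 128))  # -> (2, 16, 2)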
ADS_UNet: A Nested UNet for Histopathology Image Segmentation
The UNet model consists of fully convolutional network (FCN) layers arranged
as contracting encoder and upsampling decoder maps. Nested arrangements of
these encoder and decoder maps give rise to extensions of the UNet model, such as UNet^e and UNet++. Other refinements include constraining the outputs of the
convolutional layers to discriminate between segment labels when trained end to
end, a property called deep supervision. This reduces feature diversity in
these nested UNet models despite their large parameter space. Furthermore, for
texture segmentation, pixel correlations at multiple scales contribute to the
classification task; hence, explicit deep supervision of shallower layers is
likely to enhance performance. In this paper, we propose ADS_UNet, a stage-wise
additive training algorithm that incorporates resource-efficient deep
supervision in shallower layers and takes performance-weighted combinations of
the sub-UNets to create the segmentation model. We provide empirical evidence
on three histopathology datasets to support the claim that the proposed ADS_UNet reduces correlations between constituent features and improves performance
while being more resource efficient. We demonstrate that ADS_UNet outperforms state-of-the-art Transformer-based models by 1.08 and 0.6 points on the CRAG and BCSS datasets, yet requires only 37% of the GPU consumption and 34% of the training time of the Transformer models.
Comment: To be published in Expert Systems With Applications
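The abstract does not give the exact weighting rule, but a minimal sketch of a performance-weighted combination of sub-UNet outputs with deep supervision might look like the following (PyTorch; the softmax weighting over validation scores is an assumption, not necessarily the paper's scheme):

    import torch
    import torch.nn.functional as F

    def combine_sub_unets(logits_per_stage, stage_scores):
        """Performance-weighted combination of sub-UNet segmentation logits.

        logits_per_stage: list of (batch, classes, H, W) logits, one per sub-UNet.
        stage_scores:     per-stage validation scores used as combination weights.
        """
        weights = torch.softmax(torch.tensor(stage_scores), dim=0)
        return sum(w * logits for w, logits in zip(weights, logits_per_stage))

    def deep_supervision_loss(logits_per_stage, target):
        """Auxiliary cross-entropy applied at every stage, including shallow ones."""
        return sum(F.cross_entropy(logits, target) for logits in logits_per_stage)

    # Three hypothetical sub-UNet outputs on a 4-class segmentation task.
    outs = [torch.randn(2, 4, 64, 64) for _ in range(3)]
    target = torch.randint(0, 4, (2, 64, 64))
    fused = combine_sub_unets(outs, stage_scores=[0.81, 0.84, 0.86])
    loss = deep_supervision_loss(outs, target)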
CLIP-Guided Vision-Language Pre-training for Question Answering in 3D Scenes
Training models to apply linguistic knowledge and visual concepts from 2D
images to 3D world understanding is a promising direction that researchers have
only recently started to explore. In this work, we design a novel 3D
pre-training Vision-Language method that helps a model learn semantically
meaningful and transferable 3D scene point cloud representations. We inject the
representational power of the popular CLIP model into our 3D encoder by
aligning the encoded 3D scene features with the corresponding 2D image and text
embeddings produced by CLIP. To assess our model's 3D world reasoning
capability, we evaluate it on the downstream task of 3D Visual Question
Answering. Experimental quantitative and qualitative results show that our
pre-training method outperforms state-of-the-art works in this task and leads
to an interpretable representation of 3D scene features.
Comment: CVPRW 2023. Code will be made publicly available: https://github.com/AlexDelitzas/3D-VQ
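A minimal sketch of the alignment objective described above, assuming the 3D encoder output has already been projected to CLIP's embedding dimension, could be a cosine-similarity loss against frozen CLIP image and text embeddings (the paper's exact objective may differ, e.g., a contrastive loss):

    import torch
    import torch.nn.functional as F

    def clip_alignment_loss(scene_feats, clip_image_emb, clip_text_emb):
        """Pull 3D scene features toward frozen CLIP image and text embeddings.

        scene_feats:    (batch, dim) 3D encoder output projected to CLIP's dimension.
        clip_image_emb: (batch, dim) CLIP embeddings of corresponding 2D views.
        clip_text_emb:  (batch, dim) CLIP embeddings of scene descriptions.
        """
        scene = F.normalize(scene_feats, dim=-1)
        image = F.normalize(clip_image_emb, dim=-1)
        text = F.normalize(clip_text_emb, dim=-1)
        # Maximise cosine similarity with both modalities (1 - cos as the loss).
        return (1 - (scene * image).sum(-1)).mean() + (1 - (scene * text).sum(-1)).mean()

    loss = clip_alignment_loss(torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 512))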
Corporate Social Responsibility: the institutionalization of ESG
Understanding the impact of Corporate Social Responsibility (CSR) on firm performance in industries reliant on technological innovation is a complex and perpetually evolving challenge. To investigate this topic thoroughly, this dissertation adopts an economics-based structure to address three primary hypotheses, allowing each hypothesis to stand as an essentially standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis explores how the evolution of CSR into its modern quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive for long-term, strategic metrics (ROA and ROIC) and that there is no correlation for short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their nonreporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social impact mitigation. Overall, this work contributes to the literature by filling gaps in our understanding of the nature of the impact that ESG has on firm performance, particularly from a management perspective.
Loop Closure Detection Based on Object-level Spatial Layout and Semantic Consistency
Visual simultaneous localization and mapping (SLAM) systems face challenges
in detecting loop closure under the circumstance of large viewpoint changes. In
this paper, we present an object-based loop closure detection method based on the spatial layout and semantic consistency of the 3D scene graph. Firstly, we
propose an object-level data association approach based on the semantic
information from semantic labels, intersection over union (IoU), object color,
and object embedding. Subsequently, multi-view bundle adjustment with the
associated objects is utilized to jointly optimize the poses of objects and
cameras. We represent the refined objects as a 3D spatial graph with semantics
and topology. Then, we propose a graph matching approach to select corresponding objects based on the structural layout and semantic-property similarity of the vertices' neighbors. Finally, we jointly optimize camera
trajectories and object poses in an object-level pose graph optimization, which
results in a globally consistent map. Experimental results demonstrate that our
proposed data association approach can construct more accurate 3D semantic
maps, and our loop closure method is more robust than point-based and
object-based methods in circumstances with large viewpoint changes.
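The abstract lists the cues used for object-level data association (semantic label, IoU, color, and object embedding); a hypothetical scoring function combining them might look like the following Python sketch, where the cue weights are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def association_score(obj_a, obj_b, weights=(0.3, 0.3, 0.2, 0.2)):
        """Score how likely two detections refer to the same object landmark.

        Each object is a dict with a semantic 'label', a bounding 'box'
        (x1, y1, x2, y2), a mean 'color' (RGB tensor in [0, 255]), and an
        appearance 'embedding' (1-D tensor).
        """
        if obj_a["label"] != obj_b["label"]:
            return 0.0  # different semantic classes never associate

        # Intersection over union of the two bounding boxes.
        ax1, ay1, ax2, ay2 = obj_a["box"]
        bx1, by1, bx2, by2 = obj_b["box"]
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        iou = inter / union if union > 0 else 0.0

        color_sim = 1.0 - (obj_a["color"] - obj_b["color"]).abs().mean().item() / 255.0
        embed_sim = F.cosine_similarity(obj_a["embedding"], obj_b["embedding"], dim=0).item()

        w_label, w_iou, w_color, w_embed = weights
        return w_label + w_iou * iou + w_color * color_sim + w_embed * embed_sim

    a = {"label": "chair", "box": (0, 0, 10, 10),
         "color": torch.tensor([120.0, 80.0, 60.0]), "embedding": torch.randn(128)}
    b = {"label": "chair", "box": (2, 2, 12, 12),
         "color": torch.tensor([118.0, 82.0, 64.0]), "embedding": torch.randn(128)}
    print(association_score(a, b))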
XKD: Cross-modal Knowledge Distillation with Domain Alignment for Video Representation Learning
We present XKD, a novel self-supervised framework to learn meaningful
representations from unlabelled video clips. XKD is trained with two pseudo
tasks. First, masked data reconstruction is performed to learn individual
representations from audio and visual streams. Next, self-supervised
cross-modal knowledge distillation is performed between the two modalities
through teacher-student setups to learn complementary information. To identify
the most effective information to transfer and also to tackle the domain gap
between audio and visual modalities, which could hinder knowledge transfer, we
introduce a domain alignment and feature refinement strategy for effective
cross-modal knowledge distillation. Lastly, to develop a general-purpose
network capable of handling both audio and visual streams, modality-agnostic
variants of our proposed framework are introduced, which use the same backbone
for both audio and visual modalities. Our proposed cross-modal knowledge
distillation improves linear evaluation top-1 accuracy of video action
classification by 8.6% on UCF101, 8.2% on HMDB51, 13.9% on Kinetics-Sound, and
15.7% on Kinetics400. Additionally, our modality-agnostic variant shows
promising results in developing a general-purpose network capable of learning
both data streams for solving different downstream tasks.
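A minimal sketch of cross-modal teacher-student distillation in the spirit described above could match batch-wise similarity distributions between the two modalities (PyTorch; the KL objective, temperature, and symmetric setup are assumptions rather than XKD's exact losses):

    import torch
    import torch.nn.functional as F

    def cross_modal_distillation_loss(student_feats, teacher_feats, temperature=0.1):
        """Distil one modality's teacher representations into the other's student.

        student_feats: (batch, dim), e.g. audio-stream student outputs.
        teacher_feats: (batch, dim), e.g. visual-stream teacher outputs
                       (detached so no gradients flow into the teacher).
        """
        student = F.normalize(student_feats, dim=-1)
        teacher = F.normalize(teacher_feats.detach(), dim=-1)
        # Soft targets: batch-wise similarity distributions at a low temperature.
        student_logits = student @ student.t() / temperature
        teacher_logits = teacher @ teacher.t() / temperature
        return F.kl_div(F.log_softmax(student_logits, dim=-1),
                        F.softmax(teacher_logits, dim=-1),
                        reduction="batchmean")

    # Symmetric setup: each modality teaches the other.
    audio, video = torch.randn(16, 256), torch.randn(16, 256)
    loss = (cross_modal_distillation_loss(audio, video)
            + cross_modal_distillation_loss(video, audio))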
Scalable and Accurate Self-supervised Multimodal Representation Learning without Aligned Video and Text Data
Scaling up weakly-supervised datasets has been shown to be highly effective in the
image-text domain and has contributed to most of the recent state-of-the-art
computer vision and multimodal neural networks. However, existing large-scale
video-text datasets and mining techniques suffer from several limitations, such
as the scarcity of aligned data, the lack of diversity in the data, and the
difficulty of collecting aligned data. The currently popular video-text data mining approach via automatic speech recognition (ASR), used in HowTo100M, provides low-quality captions that often do not refer to the video content. Other mining
approaches do not provide proper language descriptions (video tags) and are
biased toward short clips (alt text). In this work, we show how recent advances
in image captioning allow us to pre-train high-quality video models without any
parallel video-text data. We pre-train several video captioning models that are
based on an OPT language model and a TimeSformer visual backbone. We fine-tune
these networks on several video captioning datasets. First, we demonstrate that
image captioning pseudolabels work better for pre-training than the existing
HowTo100M ASR captions. Second, we show that pre-training on both images and
videos produces a significantly better network (+4 CIDEr on MSR-VTT) than
pre-training on a single modality. Our methods are complementary to the
existing pre-training or data mining approaches and can be used in a variety of
settings. Given the efficacy of the pseudolabeling method, we are planning to
publicly release the generated captions.
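A minimal sketch of the pseudolabeling idea, captioning sampled frames with an off-the-shelf image captioner and keeping the text as weak supervision, follows; here `captioner` is a hypothetical stand-in for any pre-trained image-captioning model, and the sampling stride and filtering step are assumptions:

    from typing import Callable, List, Sequence

    def pseudo_caption_video(frames: Sequence,
                             captioner: Callable[[object], str],
                             stride: int = 8) -> List[str]:
        """Caption every stride-th frame of a clip to build weak text supervision."""
        captions = []
        for frame in frames[::stride]:
            caption = captioner(frame)  # hypothetical: any pre-trained image captioner
            if caption:                 # optionally drop empty / low-quality captions
                captions.append(caption.strip())
        return captions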
Neuroanatomical and gene expression features of the rabbit accessory olfactory system. Implications of pheromone communication in reproductive behaviour and animal physiology
Mainly driven by the vomeronasal system (VNS), pheromone
communication is involved in many species-specific fundamental innate socio-sexual behaviors such as mating and
fighting, which are essential for animal reproduction and survival. Rabbits are a unique model for studying
chemocommunication due to the discovery of the rabbit mammary pheromone, but paradoxically there has been a
lack of knowledge regarding its VNS pathway. In this work, we aim at filling this gap by approaching the system
from an integrative point of view, providing extensive anatomical and genomic data of the rabbit VNS, as well as
pheromone-mediated reproductive and behavioural studies. Our results build a strong foundation for further translational studies aimed at implementing the use of pheromones to improve animal production and welfare.