Collaborative Deep Reinforcement Learning for Joint Object Search
We examine the problem of joint top-down active search of multiple objects
under interaction, e.g., a person riding a bicycle or cups held by the table.
Such interacting objects can often provide contextual cues to each other that
facilitate a more efficient search. By treating each detector as an agent, we
present the first collaborative multi-agent deep reinforcement learning
algorithm to learn the optimal policy for joint active object localization,
which effectively exploits such beneficial contextual information. We learn
inter-agent communication through cross connections with gates between the
Q-networks, which is facilitated by a novel multi-agent deep Q-learning
algorithm with joint exploitation sampling. We verify our proposed method on
multiple object detection benchmarks. Not only does our model help to improve
the performance of state-of-the-art active localization models, it also reveals
interesting co-detection patterns that are intuitively interpretable.
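As a rough illustration of the gated cross-connection idea described above (not the authors' implementation; the two-agent setup, layer sizes, and the `GatedMessage` module are assumptions made for the sketch), two per-agent Q-networks could exchange hidden features through learned gates:

```python
import torch
import torch.nn as nn

class GatedMessage(nn.Module):
    """Gates another agent's hidden features before they are mixed in (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, other_hidden):
        return torch.sigmoid(self.gate(other_hidden)) * other_hidden

class TwoAgentQNet(nn.Module):
    """Two Q-networks with gated cross connections between their hidden layers."""
    def __init__(self, obs_dim, n_actions, hidden=256):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.msg_1to2 = GatedMessage(hidden)
        self.msg_2to1 = GatedMessage(hidden)
        self.q1 = nn.Linear(hidden, n_actions)
        self.q2 = nn.Linear(hidden, n_actions)

    def forward(self, obs1, obs2):
        h1, h2 = self.enc1(obs1), self.enc2(obs2)
        # Each agent's Q-values are conditioned on a gated message from the other agent.
        return self.q1(h1 + self.msg_2to1(h2)), self.q2(h2 + self.msg_1to2(h1))
```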
Learning to Terminate in Object Navigation
This paper tackles the critical challenge of object navigation in autonomous
navigation systems, focusing in particular on target approach and episode
termination for Deep Reinforcement Learning (DRL) based methods in environments
with long optimal episode lengths. While effective in environment
exploration and object localization, conventional DRL methods often struggle
with optimal path planning and termination recognition due to a lack of depth
information. To overcome these limitations, we propose a novel approach, namely
the Depth-Inference Termination Agent (DITA), which incorporates a supervised
model called the Judge Model to implicitly infer object-wise depth and decide
termination jointly with reinforcement learning. We train the Judge Model in
parallel with reinforcement learning and supervise it efficiently using the
reward signal. Our evaluation shows that the method delivers superior
performance: it achieves a 9.3% higher success rate than our baseline method
across all room types and a 51.2% improvement on long-episode environments,
while maintaining a slightly better Success Weighted by Path Length (SPL). Code,
resources, and visualizations are available at:
https://github.com/HuskyKingdom/DITA_acml2023
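As a minimal sketch of pairing a navigation policy with a separate termination judge (the `JudgeModel` name matches the abstract, but the feature shapes, action set, and thresholding are assumptions, not the paper's code):

```python
import torch
import torch.nn as nn

class JudgeModel(nn.Module):
    """Supervised head that predicts whether the agent should stop now (illustrative)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feat):
        # Probability that issuing "stop" at this step is correct; per the abstract,
        # this model is trained in parallel with RL and supervised via the reward signal.
        return torch.sigmoid(self.net(feat))

def act(policy, judge, feat, stop_threshold=0.5):
    """Let the judge override the policy with 'stop' once it is confident enough."""
    actions = ["move_ahead", "rotate_left", "rotate_right"]  # assumed action set
    with torch.no_grad():
        if judge(feat).item() > stop_threshold:
            return "stop"
        return actions[policy(feat).argmax().item()]
```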
Localizing by Describing: Attribute-Guided Attention Localization for Fine-Grained Recognition
A key challenge in fine-grained recognition is how to find and represent
discriminative local regions. Recent attention models are capable of learning
discriminative region localizers only from category labels with reinforcement
learning. However, not utilizing any explicit part information, they are not
able to accurately find multiple distinctive regions. In this work, we
introduce an attribute-guided attention localization scheme where the local
region localizers are learned under the guidance of part attribute
descriptions. By designing a novel reward strategy, we are able to learn to
locate regions that are spatially and semantically distinctive with
reinforcement learning algorithm. The attribute labeling requirement of the
scheme is more amenable than the accurate part location annotation required by
traditional part-based fine-grained recognition methods. Experimental results
on the CUB-200-2011 dataset demonstrate the superiority of the proposed scheme
on both fine-grained recognition and attribute recognition
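One plausible reading of the "spatially and semantically distinctive" reward is sketched below; the attribute-classification term, the pairwise-IoU penalty, and the weighting are guesses for illustration, not the paper's actual reward design:

```python
import torch
import torch.nn.functional as F
from torchvision.ops import box_iou

def region_reward(attr_logits, attr_labels, boxes, iou_penalty=0.5):
    """Per-region reward: high when a region predicts its part attributes well
    (semantic term) and overlaps little with the other regions (spatial term)."""
    # Semantic term: negative attribute-classification loss (attr_labels are 0/1 floats).
    semantic = -F.binary_cross_entropy_with_logits(
        attr_logits, attr_labels, reduction="none"
    ).mean(dim=-1)
    # Spatial term: total IoU of each region with the others (self-IoU of 1 removed).
    overlap = box_iou(boxes, boxes).sum(dim=-1) - 1.0
    return semantic - iou_penalty * overlap
```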
Read, Watch, and Move: Reinforcement Learning for Temporally Grounding Natural Language Descriptions in Videos
The task of video grounding, which temporally localizes a natural language
description in a video, plays an important role in understanding videos.
Existing studies either slide a window over the entire video or exhaustively
rank all possible clip-sentence pairs in a pre-segmented video, both of which
inevitably suffer from exhaustively enumerated candidates. To alleviate this
problem, we formulate the task as sequential decision making by learning an
agent that progressively adjusts the temporal grounding boundaries according to
its policy. Specifically, we propose a reinforcement learning based framework
improved by multi-task learning; it shows steady performance gains when
additional supervised boundary information is considered during training. Our
proposed framework achieves state-of-the-art performance on the ActivityNet'18
DenseCaption dataset and the Charades-STA dataset while observing only 10 or
fewer clips per video.
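A rough sketch of the sequential boundary-adjustment loop implied by this formulation; the action set, step size, and `env.observe` interface are assumptions made only to illustrate the idea:

```python
def ground(env, policy, max_steps=10):
    """Progressively adjust normalized [start, end] boundaries until the agent stops.
    `env.observe(start, end)` is assumed to return features of the current window and
    the query; `policy` is assumed to map them to an index into ACTIONS."""
    ACTIONS = ["shift_left", "shift_right", "expand", "shrink", "stop"]
    start, end, step = 0.0, 1.0, 0.1
    for _ in range(max_steps):
        action = ACTIONS[policy(env.observe(start, end))]
        if action == "stop":
            break
        if action == "shift_left":
            start, end = max(0.0, start - step), max(step, end - step)
        elif action == "shift_right":
            start, end = min(1.0 - step, start + step), min(1.0, end + step)
        elif action == "expand":
            start, end = max(0.0, start - step), min(1.0, end + step)
        elif action == "shrink" and end - start > 2 * step:
            start, end = start + step, end - step
    return start, end
```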
Learning Intelligent Dialogs for Bounding Box Annotation
We introduce Intelligent Annotation Dialogs for bounding box annotation. We
train an agent to automatically choose a sequence of actions for a human
annotator to produce a bounding box in a minimal amount of time. Specifically,
we consider two actions: box verification, where the annotator verifies a box
generated by an object detector, and manual box drawing. We explore two kinds
of agents, one based on predicting the probability that a box will be
positively verified, and the other based on reinforcement learning. We
demonstrate that (1) our agents are able to learn efficient annotation
strategies in several scenarios, automatically adapting to the image
difficulty, the desired quality of the boxes, and the detector strength; (2) in
all scenarios the resulting annotation dialogs speed up annotation compared to
manual box drawing alone and box verification alone, while also outperforming
any fixed combination of verification and drawing in most scenarios; (3) in a
realistic scenario where the detector is iteratively re-trained, our agents
evolve a series of strategies that reflect the shifting trade-off between
verification and drawing as the detector grows stronger.
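The first kind of agent can be illustrated with a simple expected-time rule; the time constants and the acceptance-probability input are placeholders, not the values or model used in the paper:

```python
def choose_action(p_accept, t_verify=1.8, t_draw=7.0):
    """Pick the annotation action with the lower expected time cost (illustrative).

    Verifying costs t_verify seconds, but a rejected box still has to be drawn,
    so the expected cost of verification is t_verify + (1 - p_accept) * t_draw."""
    expected_verify = t_verify + (1.0 - p_accept) * t_draw
    return "verify" if expected_verify < t_draw else "draw"

# With a strong detector (high acceptance probability) verification wins:
print(choose_action(0.9))  # -> verify
print(choose_action(0.1))  # -> draw
```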