An Integrated Enhancement Solution for 24-hour Colorful Imaging
The current industry practice for 24-hour outdoor imaging is to use a silicon
camera supplemented with near-infrared (NIR) illumination. This results in
color images with poor contrast in the daytime and an absence of chrominance at
nighttime. To resolve this dilemma, all existing solutions try to capture RGB
and NIR images separately. However, they require additional hardware and suffer
from various drawbacks, including short service life, high cost, and
restriction to specific usage scenarios. In this paper, we propose a novel,
integrated enhancement solution that produces clear color images, whether in
abundant daytime sunlight or extremely low-light nighttime conditions. Our key
idea is to separate the VIS and NIR information in the mixed signal and to
enhance the VIS signal adaptively, with the NIR signal as assistance. To this
end, we build an optical system to collect a new VIS-NIR-MIX dataset and
present a physically meaningful CNN-based image processing algorithm. Extensive
experiments show outstanding results, which demonstrate the effectiveness of
our solution.
Comment: AAAI 2020 (Oral)
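The separation-then-enhancement idea can be pictured as a small two-head network: one branch estimates the NIR component of the mixed sensor signal, and the other enhances the VIS component conditioned on that estimate. The sketch below is purely illustrative, assuming a PyTorch-style API; the `MixedToVisNir` name, layer sizes, and fusion scheme are hypothetical and do not reflect the paper's actual architecture.

```python
# Illustrative sketch only: a two-branch CNN that separates a mixed
# VIS+NIR frame and enhances the VIS part with NIR guidance.
# Architecture, names, and sizes are hypothetical, not the paper's model.
import torch
import torch.nn as nn

class MixedToVisNir(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Shared encoder over the mixed 3-channel sensor signal.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        # Head 1: estimate the single-channel NIR component.
        self.nir_head = nn.Conv2d(ch, 1, 3, padding=1)
        # Head 2: enhance VIS, conditioned on features plus the NIR estimate.
        self.vis_head = nn.Sequential(
            nn.Conv2d(ch + 1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, mixed):
        feats = self.encoder(mixed)
        nir = self.nir_head(feats)
        # Adaptive enhancement: the NIR estimate guides VIS reconstruction.
        vis = self.vis_head(torch.cat([feats, nir], dim=1))
        return vis, nir

model = MixedToVisNir()
vis, nir = model(torch.randn(1, 3, 128, 128))  # dummy mixed-signal frame
```

A real system would train both heads jointly, e.g. with supervised losses against paired VIS and NIR ground truth such as a VIS-NIR-MIX-style dataset provides.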
Steve-Eye: Equipping LLM-based Embodied Agents with Visual Perception in Open Worlds
Recent studies have presented compelling evidence that large language models
(LLMs) can equip embodied agents with the self-driven capability to interact
with the world, which marks an initial step toward versatile robotics. However,
these efforts tend to overlook the visual richness of open worlds, rendering
the entire interactive process akin to "a blindfolded text-based game."
Consequently, LLM-based agents frequently encounter challenges in intuitively
comprehending their surroundings and producing responses that are easy to
understand. In this paper, we propose Steve-Eye, an end-to-end trained large
multimodal model designed to address this limitation. Steve-Eye integrates the
LLM with a visual encoder, enabling it to process visual-text inputs and
generate multimodal feedback. In addition, we use a semi-automatic strategy to
collect an extensive dataset comprising 850K open-world instruction pairs,
empowering our model to encompass three essential functions for an agent:
multimodal perception, foundational knowledge base, and skill prediction and
planning. Lastly, we develop three open-world evaluation benchmarks, then carry
out extensive experiments from a wide range of perspectives to validate our
model's capability to strategically act and plan. Code and datasets will be
released.
Comment: 19 pages, 19 figures
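At a high level, wiring a visual encoder into an LLM typically means projecting image features into the LLM's token-embedding space and concatenating them with the text-token embeddings. The fragment below is a minimal, hypothetical sketch of that general pattern; the `VisualTextFusion` name, dimensions, and design are assumptions, not Steve-Eye's actual implementation.

```python
# Minimal sketch of visual-text fusion for an LLM-based agent.
# All names and dimensions are hypothetical, not Steve-Eye's code.
import torch
import torch.nn as nn

class VisualTextFusion(nn.Module):
    def __init__(self, vis_dim=1024, llm_dim=4096, vocab=32000):
        super().__init__()
        # Linear projection mapping visual-encoder features (e.g., a ViT's
        # patch embeddings) into the LLM's token-embedding space.
        self.projector = nn.Linear(vis_dim, llm_dim)
        self.tok_embed = nn.Embedding(vocab, llm_dim)

    def forward(self, vis_feats, text_ids):
        # vis_feats: (batch, n_patches, vis_dim) from a visual encoder.
        # text_ids:  (batch, n_tokens) instruction tokens.
        vis_tokens = self.projector(vis_feats)
        txt_tokens = self.tok_embed(text_ids)
        # The fused sequence is fed to the LLM, which can then generate
        # multimodal feedback (text plus, e.g., predicted skill tokens).
        return torch.cat([vis_tokens, txt_tokens], dim=1)

fusion = VisualTextFusion()
seq = fusion(torch.randn(2, 256, 1024), torch.randint(0, 32000, (2, 16)))
```

This prefix-style fusion is one common choice; cross-attention between the LLM and visual features is an alternative with different compute trade-offs.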