06311 Abstracts Collection -- Sensor Data and Information Fusion in Computer Vision and Medicine
From 30.07.06 to 04.08.06, the Dagstuhl Seminar 06311 ``Sensor Data and Information Fusion in Computer Vision and Medicine'' was held
in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
Sensor data fusion is of increasing importance for many
research fields and applications. Multi-modal imaging
is routine in medicine, and multi-sensor data fusion is
common in robotics.
During the seminar, researchers and application experts
working in the field of sensor data
fusion presented their current
research, and ongoing work and open problems were discussed.
Abstracts of the presentations given during
the seminar as well as abstracts of seminar
results and ideas are put together in this paper.
The first section describes the seminar topics and goals in general.
The second part briefly summarizes the contributions.
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
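To make one of the surveyed techniques concrete, the sketch below shows a classic passive-stereo pipeline in OpenCV, assuming a calibrated and rectified stereo laparoscope; the file names, matcher parameters, and reprojection matrix are hypothetical placeholders rather than anything from the paper.

```python
# Hypothetical sketch: passive-stereo surface reconstruction with OpenCV.
# Assumes a calibrated, rectified stereo pair and a known disparity-to-depth
# matrix Q, both produced by offline calibration.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder inputs
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are illustrative, not tuned for tissue.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be divisible by 16
    blockSize=5,
    P1=8 * 5 ** 2,
    P2=32 * 5 ** 2,
    uniquenessRatio=10,
)
# compute() returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Q is the 4x4 reprojection matrix from stereo rectification (assumed given).
Q = np.load("Q.npy")
points_3d = cv2.reprojectImageTo3D(disparity, Q)   # HxWx3 surface coordinates
valid = disparity > 0                              # mask out unmatched pixels
surface = points_3d[valid]                         # Nx3 point cloud
```

In practice, specular highlights and texture-poor tissue make dense matching difficult, which is one reason the survey also considers alternatives to passive stereo.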
RGB-D And Thermal Sensor Fusion: A Systematic Literature Review
In the last decade, the computer vision field has seen significant progress
in multimodal data fusion and learning, where multiple sensors, including
depth, infrared, and visual, are used to capture the environment across diverse
spectral ranges. Despite these advancements, there has been no systematic and
comprehensive evaluation of fusing RGB-D and thermal modalities to date. While
autonomous driving using LiDAR, radar, RGB, and other sensors has garnered
substantial research interest, along with the fusion of RGB and depth
modalities, the integration of thermal cameras and, specifically, the fusion of
RGB-D and thermal data, has received comparatively less attention. This might
be partly due to the limited number of publicly available datasets for such
applications. This paper provides a comprehensive review of both
state-of-the-art and traditional methods used in fusing RGB-D and thermal
camera data for various applications, such as site inspection, human tracking,
fault detection, and others. The reviewed literature has been categorised into
technical areas, such as 3D reconstruction, segmentation, object detection,
available datasets, and other related topics. Following a brief introduction
and an overview of the methodology, the study delves into calibration and
registration techniques, then examines thermal visualisation and 3D
reconstruction, before discussing the application of classic feature-based
techniques as well as modern deep learning approaches. The paper concludes with
a discourse on current limitations and potential future research directions. It
is hoped that this survey will serve as a valuable reference for researchers
looking to familiarise themselves with the latest advancements and contribute
to the RGB-DT research field.
Comment: 33 pages, 20 figures
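As a hedged illustration of the calibration and registration step the review covers, the sketch below warps a thermal image into the RGB frame with a precomputed planar homography. The point correspondences and file names are invented placeholders, and a homography is only exact for roughly planar scenes; depth-aware registration would use the RGB-D depth channel instead.

```python
# Hypothetical sketch: registering a thermal image into the RGB colour frame
# with a planar homography, a common step before pixel-level fusion. The
# correspondences would come from a calibration target visible in both
# spectra (e.g. a heated checkerboard); the values here are placeholders.
import cv2
import numpy as np

thermal_pts = np.array([[50, 40], [580, 35], [590, 460], [45, 470]], dtype=np.float32)
rgb_pts = np.array([[92, 80], [1180, 74], [1195, 920], [88, 935]], dtype=np.float32)

H, _ = cv2.findHomography(thermal_pts, rgb_pts, cv2.RANSAC)

thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
rgb = cv2.imread("rgb.png")

# Warp thermal into the RGB frame so the two modalities align pixel-to-pixel.
thermal_in_rgb = cv2.warpPerspective(thermal, H, (rgb.shape[1], rgb.shape[0]))

# Simple visual check: overlay the aligned thermal channel on the RGB image.
overlay = cv2.addWeighted(
    rgb, 0.6, cv2.cvtColor(thermal_in_rgb, cv2.COLOR_GRAY2BGR), 0.4, 0
)
```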
RGB-D Salient Object Detection: A Survey
Salient object detection (SOD), which simulates the human visual perception
system to locate the most attractive object(s) in a scene, has been widely
applied to various computer vision tasks. Now, with the advent of depth
sensors, depth maps rich in spatial information that can help boost SOD
performance can easily be captured. Although various RGB-D
based SOD models with promising performance have been proposed over the past
several years, an in-depth understanding of these models and challenges in this
topic remains lacking. In this paper, we provide a comprehensive survey of
RGB-D based SOD models from various perspectives, and review related benchmark
datasets in detail. Further, considering that the light field can also provide
depth maps, we review SOD models and popular benchmark datasets from this
domain as well. Moreover, to investigate the SOD ability of existing models, we
carry out a comprehensive evaluation, as well as attribute-based evaluation of
several representative RGB-D based SOD models. Finally, we discuss several
challenges and open directions of RGB-D based SOD for future research. All
collected models, benchmark datasets, source code links, datasets constructed
for attribute-based evaluation, and codes for evaluation will be made publicly
available at https://github.com/taozh2017/RGBDSODsurvey
Comment: 24 pages, 12 figures. Accepted by Computational Visual Media.
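For readers unfamiliar with how such evaluations are scored, the sketch below implements two standard SOD metrics, mean absolute error and the adaptive-threshold F-measure with beta^2 = 0.3; it is a generic illustration of these widely used metrics, not the evaluation code released with the survey.

```python
# Generic implementations of two standard salient-object-detection metrics.
import numpy as np

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a saliency map and a binary ground truth,
    both expected as float arrays in [0, 1]."""
    return float(np.mean(np.abs(pred - gt)))

def f_measure(pred: np.ndarray, gt: np.ndarray, beta2: float = 0.3) -> float:
    """F-beta with beta^2 = 0.3 (the conventional SOD choice), binarising the
    prediction at twice its mean value (a common adaptive threshold)."""
    thresh = min(2.0 * float(pred.mean()), 1.0)
    binary = pred >= thresh
    tp = np.logical_and(binary, gt > 0.5).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return float((1 + beta2) * precision * recall /
                 (beta2 * precision + recall + 1e-8))
```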
CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion
Infrared and visible image fusion aims to provide an informative image by
combining complementary information from different sensors. Existing
learning-based fusion approaches construct various loss functions to
preserve complementary features from both modalities, while neglecting the
inter-relationship between the two modalities, which leads to redundant or
even invalid information in the fused results. To alleviate these
issues, we propose a coupled contrastive learning network, dubbed CoCoNet, to
realize infrared and visible image fusion in an end-to-end manner. Concretely,
to simultaneously retain typical features from both modalities and remove
unwanted information emerging on the fused result, we develop a coupled
contrastive constraint in our loss function. In a fused image, the foreground
target/background detail part is pulled close to the infrared/visible source
and pushed far away from the visible/infrared source in the representation
space. We further exploit image characteristics to provide data-sensitive
weights, which allows our loss function to build a more reliable relationship
with source images. Furthermore, to learn rich hierarchical feature
representation and comprehensively transfer features in the fusion process, a
multi-level attention module is established. In addition, we also apply the
proposed CoCoNet to medical image fusion of different types, e.g., magnetic
resonance image and positron emission tomography image, magnetic resonance
image and single photon emission computed tomography image. Extensive
experiments demonstrate that our method achieves the state-of-the-art (SOTA)
performance under both subjective and objective evaluation, especially in
preserving prominent targets and recovering vital textural details.
Comment: 25 pages, 16 figures
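The coupled contrastive constraint can be pictured with a short schematic loss: the fused foreground is pulled toward infrared features and pushed away from visible ones, and the background is treated symmetrically. The sketch below is one plausible reading of the abstract, not CoCoNet's actual loss; the ratio form and cosine distance are assumptions.

```python
# Schematic coupled contrastive term, assuming per-region feature vectors of
# shape (B, C) extracted from the fused, infrared, and visible images.
# This is a hypothetical reading of the abstract, not the released code.
import torch
import torch.nn.functional as F

def coupled_contrastive(fused_fg, fused_bg, ir_feat, vis_feat, eps: float = 1e-6):
    # Cosine distance: small when representations agree.
    d = lambda a, b: 1.0 - F.cosine_similarity(a, b, dim=1)
    # Foreground targets: pulled close to infrared, pushed from visible.
    l_fg = d(fused_fg, ir_feat) / (d(fused_fg, vis_feat) + eps)
    # Background details: pulled close to visible, pushed from infrared.
    l_bg = d(fused_bg, vis_feat) / (d(fused_bg, ir_feat) + eps)
    return (l_fg + l_bg).mean()
```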
MISFIT-V: Misaligned Image Synthesis and Fusion using Information from Thermal and Visual
Detecting humans from airborne visual and thermal imagery is a fundamental
challenge for Wilderness Search-and-Rescue (WiSAR) teams, who must perform this
function accurately in the face of immense pressure. The ability to fuse these
two sensor modalities can potentially reduce the cognitive load on human
operators and/or improve the effectiveness of computer vision object detection
models. However, the fusion task is particularly challenging in the context of
WiSAR due to hardware limitations and extreme environmental factors. This work
presents Misaligned Image Synthesis and Fusion using Information from Thermal
and Visual (MISFIT-V), a novel two-pronged unsupervised deep learning approach
that utilizes a Generative Adversarial Network (GAN) and a cross-attention
mechanism to capture the most relevant features from each modality.
Experimental results show that MISFIT-V offers enhanced robustness against
misalignment and poor lighting/thermal environmental conditions compared to
existing visual-thermal image fusion methods.
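The cross-attention mechanism mentioned above can be sketched generically: queries from one modality attend over keys and values from the other, so each visual token selects the most relevant thermal features. The PyTorch module below is a textbook illustration under that assumption, not the MISFIT-V architecture.

```python
# Generic cross-attention between visual and thermal feature tokens; a
# textbook sketch, not the MISFIT-V implementation.
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, visual: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        # Queries come from one modality, keys/values from the other, so each
        # visual token attends to the most relevant thermal features.
        fused, _ = self.attn(query=visual, key=thermal, value=thermal)
        return fused

# Usage on flattened feature maps of shape (batch, tokens, dim).
vis = torch.randn(2, 64, 128)
thm = torch.randn(2, 64, 128)
out = CrossAttention(dim=128)(vis, thm)   # -> (2, 64, 128)
```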