Full Reference Objective Quality Assessment for Reconstructed Background Images
With an increased interest in applications that require a clean background
image, such as video surveillance, object tracking, street view imaging and
location-based services on web-based maps, multiple algorithms have been
developed to reconstruct a background image from cluttered scenes.
Traditionally, statistical measures and existing image quality techniques have
been applied for evaluating the quality of the reconstructed background images.
Though these quality assessment methods have been widely used in the past,
their performance in evaluating the perceived quality of the reconstructed
background image has not been verified. In this work, we discuss the
shortcomings in existing metrics and propose a full reference Reconstructed
Background image Quality Index (RBQI) that combines color and structural
information at multiple scales using a probability summation model to predict
the perceived quality in the reconstructed background image given a reference
image. To compare the performance of the proposed quality index with existing
image quality assessment measures, we construct two different datasets
consisting of reconstructed background images and corresponding subjective
scores. The quality assessment measures are evaluated by correlating their
objective scores with human subjective ratings. The correlation results show
that the proposed RBQI outperforms all the existing approaches. Additionally,
the constructed datasets and the corresponding subjective scores provide a
benchmark to evaluate the performance of future metrics that are developed to
evaluate the perceived quality of reconstructed background images.

Comment: Associated source code: https://github.com/ashrotre/RBQI, Associated
Database:
https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing
(Email for permissions at: ashrotre@asu.edu)
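The pooling step described in the abstract (combining per-scale evidence via probability summation) can be sketched as follows. This is a minimal illustration, not the paper's actual RBQI implementation; the exponential psychometric mapping and the `beta` exponent are assumptions made for the example.

```python
import numpy as np

def probability_summation(scale_errors, beta=3.0):
    """Pool per-scale distortion measures into one detection probability.

    Each scale's error is mapped to a probability of a human detecting
    the distortion at that scale (assumed psychometric function), then
    the scales are combined by probability summation:
        P = 1 - prod_s (1 - P_s)
    so distortion visible at ANY scale drives the overall score up.
    """
    errors = np.asarray(scale_errors, dtype=float)
    per_scale_prob = 1.0 - np.exp(-(errors ** beta))
    return 1.0 - np.prod(1.0 - per_scale_prob)
```

A perfectly reconstructed background (zero error at every scale) pools to probability 0, and adding a distorted scale can only increase the pooled score, matching the intuition behind probability summation.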
Making Deep Heatmaps Robust to Partial Occlusions for 3D Object Pose Estimation
We introduce a novel method for robust and accurate 3D object pose estimation
from a single color image under large occlusions. Following recent approaches,
we first predict the 2D projections of 3D points related to the target object
and then compute the 3D pose from these correspondences using a geometric
method. Unfortunately, as the results of our experiments show, predicting these
2D projections using a regular CNN or a Convolutional Pose Machine is highly
sensitive to partial occlusions, even when these methods are trained with
partially occluded examples. Our solution is to predict heatmaps from multiple
small patches independently and to accumulate the results to obtain accurate
and robust predictions. Training subsequently becomes challenging because
patches with similar appearances but different positions on the object
correspond to different heatmaps. However, we provide a simple yet effective
solution to deal with such ambiguities. We show that our approach outperforms
existing methods on two challenging datasets: the Occluded LineMOD dataset and
the YCB-Video dataset, both exhibiting cluttered scenes with highly occluded
objects. Project website:
https://www.tugraz.at/institute/icg/research/team-lepetit/research-projects/robust-object-pose-estimation
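The accumulation idea in the abstract (predict heatmaps from many small patches independently, then sum them so occluded patches are outvoted) can be sketched as below. The patch size, stride, and the `predict_heatmap` callable are hypothetical placeholders standing in for the paper's trained network.

```python
import numpy as np

def accumulate_patch_heatmaps(image, predict_heatmap, patch=8, stride=4):
    """Slide a window over the image; each patch votes with its own heatmap.

    predict_heatmap(patch_pixels) must return a heatmap with the same
    spatial size as the patch. Summing many independent per-patch
    predictions makes the final map robust to partial occlusions:
    a few occluded patches cast wrong votes, but the unoccluded
    majority dominates the accumulated response.
    """
    h, w = image.shape[:2]
    acc = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            window = image[y:y + patch, x:x + patch]
            acc[y:y + patch, x:x + patch] += predict_heatmap(window)
    return acc
```

The 2D projection of each 3D point would then be read off as the argmax of the accumulated map, and the 3D pose recovered from those 2D-3D correspondences with a geometric method such as PnP.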