Full Reference Objective Quality Assessment for Reconstructed Background Images
With the increased interest in applications that require a clean background
image, such as video surveillance, object tracking, street-view imaging, and
location-based services on web-based maps, multiple algorithms have been
developed to reconstruct a background image from cluttered scenes.
Traditionally, statistical measures and existing image quality techniques have
been applied for evaluating the quality of the reconstructed background images.
Though these quality assessment methods have been widely used in the past,
their performance in evaluating the perceived quality of the reconstructed
background image has not been verified. In this work, we discuss the
shortcomings in existing metrics and propose a full reference Reconstructed
Background image Quality Index (RBQI) that combines color and structural
information at multiple scales using a probability summation model to predict
the perceived quality in the reconstructed background image given a reference
image. To compare the performance of the proposed quality index with existing
image quality assessment measures, we construct two different datasets
consisting of reconstructed background images and corresponding subjective
scores. The quality assessment measures are evaluated by correlating their
objective scores with human subjective ratings. The correlation results show
that the proposed RBQI outperforms all the existing approaches. Additionally,
the constructed datasets and the corresponding subjective scores provide a
benchmark to evaluate the performance of future metrics that are developed to
evaluate the perceived quality of reconstructed background images.

Comment: Associated source code: https://github.com/ashrotre/RBQI, Associated database: https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing (email for permissions: ashrotre@asu.edu)
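The probability-summation pooling mentioned in the abstract can be sketched in a few lines. The Python snippet below is a minimal illustration, not the published RBQI formulation: the exponent `beta`, the independence assumption, and the per-scale difference maps are all assumptions made for the example.

```python
import numpy as np

def probability_summation_pool(diff_maps, beta=3.0):
    """Pool per-scale difference maps into one quality score with a
    probability-summation model: local differences d drive a detection
    probability 1 - exp(-sum |d|^beta), combined under an independence
    assumption across space and scale.
    (Illustrative sketch; beta and the maps are assumptions.)"""
    total = sum(np.sum(np.abs(d) ** beta) for d in diff_maps)
    p_detect = 1.0 - np.exp(-total)   # probability of noticing any distortion
    return 1.0 - p_detect             # higher = better perceived quality

# Hypothetical color/structure difference maps at two scales
rng = np.random.default_rng(0)
fine = rng.random((64, 64)) * 0.05
coarse = rng.random((32, 32)) * 0.05
print(probability_summation_pool([fine, coarse]))
```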
Subjective Annotation for a Frame Interpolation Benchmark using Artefact Amplification
Current benchmarks for optical flow algorithms evaluate the estimation either
directly by comparing the predicted flow fields with the ground truth or
indirectly by using the predicted flow fields for frame interpolation and then
comparing the interpolated frames with the actual frames. In the latter case,
objective quality measures such as the mean squared error are typically
employed. However, it is well known that for image quality assessment, the
actual quality experienced by the user cannot be fully deduced from such simple
measures. Hence, we conducted a crowdsourced subjective quality assessment
study for the interpolated frames provided by one of the optical flow
benchmarks, the Middlebury benchmark. We collected forced-choice paired
comparisons between interpolated images and corresponding ground truth. To
increase the sensitivity of observers when judging minute differences in paired
comparisons, we introduced a new method to the field of full-reference quality
assessment, called artefact amplification. From the crowdsourcing data, we
reconstructed absolute quality scale values according to Thurstone's model. As
a result, we obtained a re-ranking of the 155 participating algorithms w.r.t.
the visual quality of the interpolated frames. This re-ranking not only shows
the necessity of visual quality assessment as another evaluation metric for
optical flow and frame interpolation benchmarks, but also provides the
ground truth for designing novel image quality assessment (IQA) methods
dedicated to perceptual quality of interpolated images. As a first step, we
proposed such a new full-reference method, called WAE-IQA. By weighting the
local differences between an interpolated image and its ground truth, WAE-IQA
performed slightly better than the currently best FR-IQA approach from the
literature.

Comment: arXiv admin note: text overlap with arXiv:1901.0536
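Two of the techniques named above are simple enough to illustrate. The sketch below shows a simplified linear form of artefact amplification and a Thurstone Case V reconstruction of absolute scale values from a paired-comparison win matrix. Both are hedged approximations: the gain `alpha`, the clipping bounds, and the example win counts are assumptions, and the published artefact-amplification method is more elaborate than this linear version.

```python
import numpy as np
from scipy.stats import norm

def amplify_artefacts(interp, reference, alpha=2.0):
    """Linearly exaggerate deviations of the interpolated frame from the
    reference so observers can judge minute differences (simplified; the
    gain alpha and clipping to [0, 1] are assumptions)."""
    return np.clip(reference + alpha * (interp - reference), 0.0, 1.0)

def thurstone_case_v(wins):
    """Reconstruct absolute quality-scale values from forced-choice
    paired comparisons with Thurstone's Case V model.
    wins[i, j] = number of times item i was preferred over item j."""
    trials = wins + wins.T
    p = np.where(trials > 0, wins / np.maximum(trials, 1), 0.5)
    p = np.clip(p, 0.01, 0.99)        # keep z-scores finite
    z = norm.ppf(p)                   # inverse normal CDF per pair
    return z.mean(axis=1)             # scale value vs. all opponents

# Hypothetical win counts for three algorithms
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
print(thurstone_case_v(wins))
```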
High-speed Video from Asynchronous Camera Array
This paper presents a method for capturing high-speed video using an
asynchronous camera array. Our method sequentially fires each sensor in a
camera array with a small time offset and assembles captured frames into a
high-speed video according to the time stamps. The resulting video, however,
suffers from parallax jittering caused by the viewpoint difference among
sensors in the camera array. To address this problem, we develop a dedicated
novel-view synthesis algorithm that transforms the video frames as if they were
captured by a single reference sensor. Specifically, for any frame from a
non-reference sensor, we find the two temporally neighboring frames captured by
the reference sensor. Using these three frames, we render a new frame with the
same time stamp as the non-reference frame but from the viewpoint of the
reference sensor. To do so, we segment these frames into super-pixels and
then warp them with local content-preserving warping to form the new frame.
We employ a multi-label Markov Random Field method to blend these warped
frames. Our experiments show that our method can produce high-quality and
high-speed video of a wide variety of scenes with large parallax, scene
dynamics, and camera motion and outperforms several baseline and
state-of-the-art approaches.

Comment: 10 pages, 82 figures, Published at IEEE WACV 201
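The staggered-trigger capture scheme described in the opening sentences can be sketched directly; the view-synthesis and MRF-blending steps that form the paper's main contribution are not reproduced here. Function and parameter names below are assumptions made for illustration.

```python
def trigger_offsets(num_cameras, base_fps):
    """Stagger each sensor's trigger so that N cameras running at
    base_fps together sample at N * base_fps (names are assumptions)."""
    period = 1.0 / base_fps
    return [i * period / num_cameras for i in range(num_cameras)]

def assemble_by_timestamp(streams):
    """Merge per-camera (timestamp, frame) lists into one high-speed
    sequence ordered by capture time, as the abstract describes."""
    return sorted(f for stream in streams for f in stream)

# Example: 4 cameras at 30 fps -> 120 fps effective sampling
offsets = trigger_offsets(4, 30.0)
streams = [[(off + k / 30.0, f"cam{i}_frame{k}") for k in range(3)]
           for i, off in enumerate(offsets)]
for t, frame in assemble_by_timestamp(streams)[:6]:
    print(f"{t:.4f}s  {frame}")
```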