Full Reference Objective Quality Assessment for Reconstructed Background Images
With an increased interest in applications that require a clean background
image, such as video surveillance, object tracking, street view imaging and
location-based services on web-based maps, multiple algorithms have been
developed to reconstruct a background image from cluttered scenes.
Traditionally, statistical measures and existing image quality techniques have
been applied for evaluating the quality of the reconstructed background images.
Though these quality assessment methods have been widely used in the past,
their performance in evaluating the perceived quality of the reconstructed
background image has not been verified. In this work, we discuss the
shortcomings in existing metrics and propose a full reference Reconstructed
Background image Quality Index (RBQI) that combines color and structural
information at multiple scales using a probability summation model to predict
the perceived quality in the reconstructed background image given a reference
image. To compare the performance of the proposed quality index with existing
image quality assessment measures, we construct two different datasets
consisting of reconstructed background images and corresponding subjective
scores. The quality assessment measures are evaluated by correlating their
objective scores with human subjective ratings. The correlation results show
that the proposed RBQI outperforms all the existing approaches. Additionally,
the constructed datasets and the corresponding subjective scores provide a
benchmark to evaluate the performance of future metrics that are developed to
evaluate the perceived quality of reconstructed background images.

Comment: Associated source code: https://github.com/ashrotre/RBQI, Associated
Database:
https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing
(Email for permissions at: ashrotre@asu.edu)
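The probability-summation pooling idea behind RBQI can be illustrated with a minimal sketch. This is not the authors' implementation: the per-scale distortion scores and the exponent `beta` are placeholders standing in for the color/structure comparisons and model parameters described in the paper.

```python
import numpy as np

def probability_summation_index(scale_distortions, beta=3.0):
    """Pool per-scale distortion strengths into a single quality index.

    scale_distortions: per-scale distortion values in [0, 1], e.g.
    1 - similarity between reference and reconstructed background at
    each level of a multi-scale pyramid. beta is a placeholder exponent.
    """
    d = np.clip(np.asarray(scale_distortions, dtype=float), 0.0, 1.0)
    # Probability summation: a distortion is perceived if it is detected
    # at ANY scale, assuming independent detection per scale.
    p_detect = 1.0 - np.prod((1.0 - d) ** beta)
    return 1.0 - p_detect  # higher = better perceived quality
```

A perfect reconstruction (all distortions zero) yields an index of 1.0, while a strong distortion at any single scale drives the index toward 0, mirroring the "detected anywhere" character of probability summation.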
Understanding Traffic Density from Large-Scale Web Camera Data
Understanding traffic density from large-scale web camera (webcam) videos is
a challenging problem because such videos have low spatial and temporal
resolution, high occlusion and large perspective. To deeply understand traffic
density, we explore both deep learning based and optimization based methods. To
avoid individual vehicle detection and tracking, both methods map the image
into a vehicle density map, one based on rank-constrained regression and the
other on fully convolutional networks (FCN). The regression based method
learns different weights for different blocks of the image to increase the
degrees of freedom of the weights and to embed perspective information. The FCN based
method jointly estimates vehicle density map and vehicle count with a residual
learning framework to perform end-to-end dense prediction, allowing arbitrary
image resolution, and adapting to different vehicle scales and perspectives. We
analyze and compare both methods, and draw insights from the optimization
based method to improve the deep model. Since existing datasets do not cover all the
challenges in our work, we collected and labelled a large-scale traffic video
dataset, containing 60 million frames from 212 webcams. Both methods are
extensively evaluated and compared on different counting tasks and datasets.
The FCN based method significantly reduces the mean absolute error from 10.99 to
5.31 on the public dataset TRANCOS compared with the state-of-the-art baseline.

Comment: Accepted by CVPR 2017. A preprint version was uploaded at
http://welcome.isr.tecnico.ulisboa.pt/publications/understanding-traffic-density-from-large-scale-web-camera-data
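The density-map formulation shared by both methods can be shown with a toy example. This is only a sketch of the counting principle, not the FCN or regression model itself: the count is the integral (sum) of the predicted density map, and methods are compared by mean absolute error over counts.

```python
import numpy as np

def count_from_density(density_map):
    """Vehicle count is the integral (sum) of the predicted density map."""
    return float(np.sum(density_map))

def mean_absolute_error(pred_counts, true_counts):
    """MAE between predicted and ground-truth counts across frames."""
    pred = np.asarray(pred_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    return float(np.mean(np.abs(pred - true)))

# Toy density map: two "vehicles", each contributing unit mass.
dmap = np.zeros((4, 4))
dmap[0, 0] = dmap[0, 1] = 0.5   # first vehicle split over two cells
dmap[2, 2] = 1.0                # second vehicle
print(count_from_density(dmap))  # 2.0
```

Because the count is a sum, the density map can be predicted at any input resolution, which is what lets the FCN perform end-to-end dense prediction on frames of arbitrary size.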
SINet: A Scale-insensitive Convolutional Neural Network for Fast Vehicle Detection
Vision-based vehicle detection approaches have achieved remarkable success in
recent years with the development of deep convolutional neural networks (CNNs).
However, existing CNN based algorithms suffer from the problem that
convolutional features are scale-sensitive in the object detection task, while
traffic images and videos commonly contain vehicles with a large variance of
scales. In this paper, we delve into the source of scale sensitivity and
reveal two key issues: 1) existing RoI pooling destroys the structure of small
scale objects, 2) the large intra-class distance for a large variance of scales
exceeds the representation capability of a single network. Based on these
findings, we present a scale-insensitive convolutional neural network (SINet)
for the fast detection of vehicles with a large variance of scales. First, we present
a context-aware RoI pooling to maintain the contextual information and original
structure of small scale objects. Second, we present a multi-branch decision
network to minimize the intra-class distance of features. These lightweight
techniques add no extra time complexity yet bring a prominent improvement in
detection accuracy. The proposed techniques can be incorporated into any deep
network architecture and trained end-to-end. Our SINet achieves
state-of-the-art performance in terms of accuracy and speed (up to 37 FPS) on
the KITTI benchmark and a new highway dataset, which contains a large variance
of scales and extremely small objects.

Comment: Accepted by IEEE Transactions on Intelligent Transportation Systems
(T-ITS)
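The intuition behind context-aware RoI pooling, enlarging small-object features instead of collapsing them, can be sketched as follows. This is an illustration only: simple bilinear resizing stands in for the learned deconvolution the paper uses, and the 7x7 output size is an assumed default.

```python
import numpy as np

def resize_bilinear(feat, out_h, out_w):
    """Minimal bilinear resize for a 2-D feature map."""
    in_h, in_w = feat.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = feat[y0][:, x0] * (1 - wx) + feat[y0][:, x1] * wx
    bot = feat[y1][:, x0] * (1 - wx) + feat[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def context_aware_roi_pool(feat, out_size=7):
    """If the RoI's feature patch is smaller than the output grid, enlarge
    it (preserving structure) rather than pooling it down; otherwise apply
    standard max pooling over an out_size x out_size grid."""
    h, w = feat.shape
    if h < out_size or w < out_size:
        return resize_bilinear(feat, out_size, out_size)
    ys = np.linspace(0, h, out_size + 1).astype(int)
    xs = np.linspace(0, w, out_size + 1).astype(int)
    return np.array([[feat[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
                      for j in range(out_size)] for i in range(out_size)])
```

The point of the branch is that a 3x3 feature patch of a distant vehicle still emerges as a structured 7x7 grid instead of being flattened by quantized pooling, which is the failure mode the paper attributes to ordinary RoI pooling on small objects.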
Emerging Methods to Objectively Assess Pruritus in Atopic Dermatitis.
INTRODUCTION: Atopic dermatitis (AD) is an inflammatory skin disease with a chronic, relapsing course. Clinical features of AD vary by age, duration, and severity but can include papules, vesicles, erythema, exudate, xerosis, scaling, and lichenification. However, the most defining and universal symptom of AD is pruritus. Pruritus, or itch, defined as an unpleasant urge to scratch, is problematic for many reasons, particularly its negative impact on quality of life. Despite the profoundly negative impact of pruritus on patients with AD, clinicians and researchers lack standardized and validated methods to objectively measure pruritus. The purpose of this review is to discuss emerging methods to assess pruritus in AD by describing objective patient-centered tools developed or enhanced over the last decade that can be utilized by clinicians and researchers alike.

METHODS: This review is based on a literature search in the Medline, Embase, and Web of Science databases. The search was performed in February 2019. The keywords used were "pruritus," "itch," "atopic dermatitis," "eczema," "measurements," "tools," "instruments," "accelerometer," "wrist actigraphy," "smartwatch," "transducer," "vibration," "brain mapping," "magnetic resonance imaging," and "positron emission tomography." Only articles written in English were included, and no restrictions were set on study type. To focus on emerging methods, priority was given to results from the last decade (2009-2019).

RESULTS: The search yielded 49 results in PubMed, 134 results in Embase, and 85 results in Web of Science. Each result was independently reviewed in a standardized manner by two of the authors (M.S., K.L.), and disagreements between reviewers were resolved by consensus. Relevant findings were categorized into the following sections: video surveillance, acoustic surveillance, wrist actigraphy, smart devices, vibration transducers, and neurological imaging. Examples are provided along with descriptions of how each technology works, instances of use in research or clinical practice, and, as applicable, reports of validation studies and correlation with other methods.

CONCLUSION: The variety of new and improved methods to evaluate pruritus in AD is welcomed by clinicians, researchers, and patients alike. Future directions include next-generation smart devices as well as exploring new territories, such as identifying biomarkers that correlate with itch and machine-learning programs to identify itch processing in the brain. As these efforts continue, it will be essential to remain patient-centered by developing techniques that minimize discomfort, respect privacy, and provide accurate data that can be used to better manage itch in AD.
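As a toy illustration of the wrist-actigraphy idea, flagging nocturnal scratch bouts as sustained high-magnitude wrist movement, consider the following sketch. The threshold, minimum bout length, and signal shape are purely illustrative; real actigraphy algorithms are device-specific and clinically validated.

```python
import numpy as np

def detect_scratch_bouts(accel_magnitude, threshold=1.2, min_samples=5):
    """Flag contiguous runs of above-threshold wrist-acceleration
    magnitude lasting at least min_samples as candidate scratch bouts.
    Returns a list of (start, end) sample indices, end exclusive."""
    active = np.asarray(accel_magnitude, dtype=float) > threshold
    bouts, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # bout begins
        elif not a and start is not None:
            if i - start >= min_samples:   # keep only sustained activity
                bouts.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_samples:
        bouts.append((start, len(active)))  # bout runs to end of recording
    return bouts
```

Requiring a minimum run length is what separates scratching from isolated wrist movements such as turning over in bed, which is the core design question validation studies of actigraphy against video surveillance try to answer.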