14,905 research outputs found
Lightweight HDR Camera ISP for Robust Perception in Dynamic Illumination Conditions via Fourier Adversarial Networks
The limited dynamic range of commercial compact camera sensors results in an
inaccurate representation of scenes with varying illumination conditions,
adversely affecting image quality and subsequently limiting the performance of
underlying image processing algorithms. Current state-of-the-art (SoTA)
convolutional neural networks (CNN) are developed as post-processing techniques
to independently recover under-/over-exposed images. However, when applied to
images containing real-world degradations such as glare, high beams, and color
bleeding with varying noise intensity, these algorithms amplify the
degradations, further reducing image quality. To overcome these limitations, we
propose a lightweight two-stage image enhancement algorithm that sequentially
balances illumination and removes noise, using frequency priors for structural
guidance. Furthermore, to ensure realistic image quality, we leverage the
relationship between frequency and spatial domain properties of an image and
propose a Fourier spectrum-based adversarial framework (AFNet) for consistent
image enhancement under varying illumination conditions. While current
formulations of image enhancement are envisioned as post-processing techniques,
we examine if such an algorithm could be extended to integrate the
functionality of the Image Signal Processing (ISP) pipeline within the camera
sensor benefiting from RAW sensor data and lightweight CNN architecture. Based
on quantitative and qualitative evaluations, we also examine the practicality
and effects of image enhancement techniques on the performance of common
perception tasks such as object detection and semantic segmentation in varying
illumination conditions.
Comment: Accepted in BMVC 202
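The frequency-to-spatial relationship that the adversarial framework leverages rests on a standard Fourier property: an image's amplitude and phase spectra together fully determine the spatial-domain image, so constraining the Fourier spectrum constrains the image. A minimal NumPy sketch of this round trip (our own illustration, with invented function names, not the paper's AFNet implementation):

```python
import numpy as np

def fourier_split(img):
    """Split an image into its Fourier amplitude and phase spectra."""
    spec = np.fft.fft2(img)
    return np.abs(spec), np.angle(spec)

def fourier_merge(amplitude, phase):
    """Rebuild the spatial-domain image from amplitude and phase."""
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
amp, pha = fourier_split(img)
recon = fourier_merge(amp, pha)
# Amplitude and phase carry all the information of the original image.
assert np.allclose(img, recon)
```

A spectrum-based discriminator can therefore penalize statistics of `amp` (e.g., missing high-frequency energy) while the spatial loss handles pixel fidelity.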
Joint Learning of Intrinsic Images and Semantic Segmentation
Semantic segmentation of outdoor scenes is problematic when there are
variations in imaging conditions. It is known that albedo (reflectance) is
invariant to all kinds of illumination effects. Thus, using reflectance images
for the semantic segmentation task can be favorable. Additionally, not only
may segmentation benefit from reflectance, but segmentation may also be useful
for reflectance computation. Therefore, in this paper, the tasks of semantic
segmentation and intrinsic image decomposition are considered as a combined
process by exploring their mutual relationship in a joint fashion. To that end,
we propose a supervised end-to-end CNN architecture to jointly learn intrinsic
image decomposition and semantic segmentation. We analyze the gains of
addressing those two problems jointly. Moreover, new cascade CNN architectures
for intrinsic-for-segmentation and segmentation-for-intrinsic are proposed as
single tasks. Furthermore, a dataset of 35K synthetic images of natural
environments is created with corresponding albedo and shading (intrinsics), as
well as semantic labels (segmentation) assigned to each object/scene. The
experiments show that joint learning of intrinsic image decomposition and
semantic segmentation is beneficial for both tasks for natural scenes. Dataset
and models are available at: https://ivi.fnwi.uva.nl/cv/intrinseg
Comment: ECCV 201
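The intrinsic image model underlying the decomposition is the standard Lambertian assumption that an image factors into reflectance (albedo) and shading. A toy NumPy sketch (our own illustration, not the paper's CNN) of why reflectance is invariant to illumination effects:

```python
import numpy as np

# Lambertian intrinsic image model: image = albedo (reflectance) * shading.
rng = np.random.default_rng(0)
albedo = rng.uniform(0.2, 1.0, size=(4, 4, 3))   # illumination-invariant reflectance
shading = rng.uniform(0.1, 1.0, size=(4, 4, 1))  # grayscale illumination effects
image = albedo * shading

# Given the image and its shading, reflectance is recovered by division;
# any change in shading leaves the recovered albedo unchanged.
recovered_albedo = image / shading
assert np.allclose(recovered_albedo, albedo)
```

This is why a segmentation network fed the albedo component sees the same input regardless of the imaging conditions.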
Illumination invariant stationary object detection
We present a real-time system for the detection and tracking of moving objects that become stationary in a restricted zone. A new pixel classification method based on the segmentation history image is used to identify stationary objects in the scene. These objects are then tracked using a novel adaptive edge orientation-based tracking method. Experimental results have shown that the tracking technique achieves more than a 95% detection success rate, even when objects are partially occluded. The tracking results, together with the historic edge maps, are analysed to remove objects that are no longer stationary or are falsely identified as foreground regions because of sudden changes in illumination conditions. The technique has been tested on over 7 h of video recorded at different locations and times of day, both outdoors and indoors. The results obtained are compared with other available state-of-the-art methods.
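The abstract does not detail the segmentation history image, but one plausible reading is a per-pixel accumulator that grows while a pixel stays foreground and decays otherwise, so long-lived foreground pixels can be classified as stationary. A hypothetical sketch under that assumption (all names and thresholds are ours, not the paper's):

```python
import numpy as np

def update_history(history, fg_mask, gain=1, decay=1, cap=255):
    """Accumulate per-pixel evidence: grow where foreground, shrink elsewhere."""
    history = history + np.where(fg_mask, gain, -decay)
    return np.clip(history, 0, cap)

def stationary_pixels(history, threshold=50):
    """Pixels whose accumulated evidence exceeds the threshold are stationary."""
    return history >= threshold

h = np.zeros((2, 2), dtype=int)
mask = np.array([[True, False], [False, False]])
for _ in range(60):  # pixel (0, 0) stays foreground for 60 frames
    h = update_history(h, mask)
assert stationary_pixels(h)[0, 0] and not stationary_pixels(h)[1, 1]
```

The decay term is what lets objects that start moving again drop back out of the stationary class.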
Review of Person Re-identification Techniques
Person re-identification across different surveillance cameras with disjoint
fields of view has become one of the most interesting and challenging subjects
in the area of intelligent video surveillance. Although several methods have
been developed and proposed, certain limitations and unresolved issues remain.
In all of the existing re-identification approaches, feature vectors are
extracted from segmented still images or video frames. Different similarity or
dissimilarity measures have been applied to these vectors. Some methods have
used simple constant metrics, whereas others have utilised models to obtain
optimised metrics. Some have created models based on local colour or texture
information, and others have built models based on the gait of people. In
general, the main objective of all these approaches is to achieve a
higher accuracy rates and lower computational costs. This study summarises
several developments in recent literature and discusses the various available
methods used in person re-identification. Specifically, their advantages and
disadvantages are mentioned and compared.
Comment: Published 201
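The contrast the review draws between "simple constant metrics" and "optimised metrics" can be illustrated with plain Euclidean distance versus a learned Mahalanobis-style distance between feature vectors. A generic sketch, not any specific method from the survey:

```python
import numpy as np

def euclidean(a, b):
    """Simple constant metric: plain L2 distance between feature vectors."""
    return float(np.linalg.norm(a - b))

def mahalanobis(a, b, M):
    """Optimised metric: distance under a learned positive semi-definite M."""
    d = a - b
    return float(np.sqrt(d @ M @ d))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
M = np.diag([4.0, 1.0])  # hypothetical learned metric up-weighting feature 0
# With M = I the learned metric reduces to the Euclidean one.
assert np.isclose(euclidean(a, b), mahalanobis(a, b, np.eye(2)))
```

Metric-learning methods fit `M` from labeled matching/non-matching pairs so that distances separate identities better than the constant metric does.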
A Novel Framework for Highlight Reflectance Transformation Imaging
We propose a novel pipeline and related software tools for processing multi-light image collections (MLICs) acquired in different application contexts, to obtain shape and appearance information of captured surfaces as well as to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. In particular, we support perspective camera modeling, per-pixel interpolated light direction estimation, and light normalization that corrects vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools to simplify all processing steps. The tools, in addition to supporting easy processing and encoding of pixel data, implement a variety of visualizations as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications.
Terms: "European Union (EU)" & "Horizon 2020" / Action: H2020-EU.3.6.3. - Reflective societies - cultural heritage and European identity / Acronym: Scan4Reco / Grant number: 665091
DSURF project (PRIN 2015) funded by the Italian Ministry of University and Research
Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
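Reflectance-model fitting in RTI-style pipelines is classically done per pixel by least squares; the textbook example is the Polynomial Texture Map, a biquadratic in the light direction's x/y components. A generic sketch of that fit for one pixel (a standard illustration, not the proposed tools' code):

```python
import numpy as np

def fit_ptm(light_dirs, intensities):
    """Least-squares fit of a Polynomial Texture Map: one pixel's intensity
    modeled as a biquadratic of the light direction (lx, ly)."""
    lx, ly = light_dirs[:, 0], light_dirs[:, 1]
    A = np.stack([lx**2, ly**2, lx * ly, lx, ly, np.ones_like(lx)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight(coeffs, lx, ly):
    """Evaluate the fitted model for a novel light direction."""
    basis = np.array([lx**2, ly**2, lx * ly, lx, ly, 1.0])
    return float(basis @ coeffs)

rng = np.random.default_rng(0)
dirs = rng.uniform(-1, 1, size=(20, 2))  # 20 observed light directions
true = np.array([0.1, -0.2, 0.05, 0.3, 0.4, 0.5])
obs = np.stack([dirs[:, 0]**2, dirs[:, 1]**2, dirs[:, 0] * dirs[:, 1],
                dirs[:, 0], dirs[:, 1], np.ones(20)], axis=1) @ true
coeffs = fit_ptm(dirs, obs)
assert np.allclose(coeffs, true)  # noise-free fit recovers the coefficients
```

Running this fit independently at every pixel yields a compact relightable representation: six coefficients per pixel that `relight` can evaluate for any novel light direction.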
How good are detection proposals, really?
Current top performing Pascal VOC object detectors employ detection proposals
to guide the search for objects thereby avoiding exhaustive sliding window
search across images. Despite the popularity of detection proposals, it is
unclear which trade-offs are made when using them during object detection. We
provide an in-depth analysis of ten object proposal methods along with four
baselines regarding ground truth annotation recall (on Pascal VOC 2007 and
ImageNet 2013), repeatability, and impact on DPM detector performance. Our
findings show common weaknesses of existing methods, and provide insights to
choose the most adequate method for different settings.
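Ground-truth annotation recall of the kind measured here is typically defined as the fraction of ground-truth boxes covered by at least one proposal above an intersection-over-union (IoU) threshold. A simplified sketch of that metric (our own illustration):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def recall(ground_truth, proposals, thresh=0.5):
    """Fraction of ground-truth boxes matched by at least one proposal."""
    hits = sum(any(iou(gt, p) >= thresh for p in proposals)
               for gt in ground_truth)
    return hits / len(ground_truth)

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
props = [(1, 1, 10, 10), (50, 50, 60, 60)]
assert recall(gt, props) == 0.5  # only the first object is recalled
```

Sweeping `thresh` from loose to strict overlaps is what produces the recall curves used to compare proposal methods against each other.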
- …