492 research outputs found
Video Deinterlacing using Control Grid Interpolation Frameworks
Video deinterlacing is a key technique in digital video processing, particularly with the widespread use of LCD and plasma TVs. This thesis proposes a novel spatio-temporal, non-linear video deinterlacing technique that adaptively chooses between the results of one-dimensional control grid interpolation (1DCGI), vertical temporal filtering (VTF) and temporal line averaging (LA). The proposed method outperforms several popular benchmark methods in terms of both visual quality and peak signal-to-noise ratio (PSNR). It also outperforms existing approaches such as edge-based line averaging (ELA) and spatio-temporal edge-based median filtering (STELA) on fine moving edges and semi-static regions of video, which are recognized as particularly challenging deinterlacing cases, as well as the state-of-the-art content-adaptive vertical temporal filtering (CAVTF) approach. Along with the main approach, several spin-off approaches are also proposed, each with its own characteristics.
Dissertation/Thesis: M.S. Electrical Engineering, 201
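The two simplest candidate fills the abstract mentions, spatial line averaging within a field and temporal line averaging across neighbouring frames, can be sketched as follows. This is a minimal illustration of those two baselines only, not the thesis's 1DCGI/VTF implementation; the function names and the convention that odd rows are the missing field lines are assumptions.

```python
import numpy as np

def spatial_line_average(field):
    """Fill each missing (odd) line with the mean of its vertical
    neighbours within the same field; boundary rows are left as-is."""
    out = field.astype(float).copy()
    for y in range(1, out.shape[0] - 1, 2):
        out[y] = 0.5 * (out[y - 1] + out[y + 1])
    return out

def temporal_line_average(field, prev_frame, next_frame):
    """Fill each missing (odd) line with the average of the co-located
    lines in the temporally adjacent frames."""
    out = field.astype(float).copy()
    for y in range(1, out.shape[0] - 1, 2):
        out[y] = 0.5 * (prev_frame[y] + next_frame[y])
    return out
```

An adaptive deinterlacer of the kind described would pick, per region, whichever candidate best matches the local motion and edge content.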
Learning-based depth estimation from 2D images using GIST and saliency
Although there has been a significant proliferation of 3D displays in the last decade, the availability of 3D content is still scant compared to the volume of 2D data. To fill this gap, automatic 2D-to-3D conversion algorithms are needed. In this paper, we present an automatic approach, inspired by machine learning principles, for estimating the depth of a 2D image. The depth of a query image is inferred from a dataset of color and depth images by searching this repository for images that are photometrically similar to the query. We measure the photometric similarity between two images by comparing their GIST descriptors. Since not all regions in the query image require the same visual attention, we give more weight in the GIST-descriptor comparison to regions with high saliency. Subsequently, we fuse the depths of the most similar images and adaptively filter the result to obtain a depth estimate. Our experimental results indicate that the proposed algorithm outperforms other state-of-the-art approaches on the commonly used Kinect-NYU dataset.
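The retrieval-and-fusion step described above can be sketched roughly as follows. This is a hypothetical illustration assuming the descriptors, per-cell saliency weights and depth maps are already computed; `estimate_depth` and its signature are not from the paper, and the paper's final adaptive filtering step is omitted.

```python
import numpy as np

def estimate_depth(query_desc, saliency_w, db_descs, db_depths, k=3):
    """Saliency-weighted k-NN depth transfer: compare the query's
    (precomputed) GIST-style descriptor against a database, weighting
    each descriptor cell by its saliency, then fuse the depth maps of
    the k most similar images with a per-pixel median."""
    w = saliency_w / (saliency_w.sum() + 1e-9)        # normalise saliency weights
    dists = [np.sum(w * (query_desc - d) ** 2) for d in db_descs]
    nearest = np.argsort(dists)[:k]                   # indices of the k best matches
    return np.median(np.stack([db_depths[i] for i in nearest]), axis=0)
```

The median fusion makes the estimate robust to a single badly matched neighbour, which is one plausible reason retrieval-based pipelines fuse several candidates rather than copying the single best match.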
Light Field Salient Object Detection: A Review and Benchmark
Salient object detection (SOD) is a long-standing research topic in computer
vision and has drawn an increasing amount of research interest in the past
decade. This paper provides the first comprehensive review and benchmark for
light field SOD, which has long been lacking in the saliency community.
Firstly, we introduce preliminary knowledge on light fields, including theory
and data forms, and then review existing studies on light field SOD, covering
ten traditional models, seven deep learning-based models, one comparative
study, and one brief review. Existing datasets for light field SOD are also
summarized with detailed information and statistical analyses. Secondly, we
benchmark nine representative light field SOD models together with several
cutting-edge RGB-D SOD models on four widely used light field datasets, from
which insightful discussions and analyses, including a comparison between light
field SOD and RGB-D SOD models, are drawn. Moreover, because the existing
datasets are inconsistent in their current forms, we further generate complete
data, supplementing focal stacks, depth maps and multi-view images for the
inconsistent datasets to make them consistent and unified. Our supplemental
data makes a universal benchmark possible. Lastly, because light field SOD is a
rather special problem, owing to its diverse data representations and high
dependency on acquisition hardware, and thus differs greatly from other
saliency detection tasks, we offer nine observations on the challenges and
future directions, and outline several open issues. We hope our review and
benchmarking could help advance research in this field. All the materials
including collected models, datasets, benchmarking results, and supplemented
light field datasets will be publicly available on our project site
https://github.com/kerenfu/LFSOD-Survey
Real-time object detection using monocular vision for low-cost automotive sensing systems
This work addresses the problem of real-time object detection in automotive environments
using monocular vision. The focus is on real-time feature detection,
tracking, and depth estimation using monocular vision, and finally object detection
by fusing visual saliency and depth information.
Firstly, a novel feature detection approach is proposed for extracting stable and
dense features even in images with very low signal-to-noise ratio. This methodology
is based on image gradients, which are redefined to take account of noise as
part of their mathematical model. Each gradient is based on a vector connecting a
negative to a positive intensity centroid, where both centroids are symmetric about
the centre of the area for which the gradient is calculated. Multiple gradient vectors
define a feature with its strength being proportional to the underlying gradient
vector magnitude. The evaluation of the Dense Gradient Features (DeGraF) shows
superior performance over other contemporary detectors in terms of keypoint density,
tracking accuracy, illumination invariance, rotation invariance, noise resistance
and detection time.
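The centroid-based gradient described above can be sketched for a single window as follows. This is an illustrative reconstruction from the description, not the published DeGraF code; in particular, weighting the negative centroid by the inverted intensities is an assumption.

```python
import numpy as np

def degraf_gradient(window):
    """Centroid-based gradient for one window (a sketch of the DeGraF
    idea): the positive centroid is the intensity-weighted mean pixel
    position, the negative centroid is weighted by the inverted
    intensities, and the gradient is the vector from the negative to
    the positive centroid, with strength equal to its magnitude."""
    h, w = window.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pos_w = window.astype(float)
    neg_w = pos_w.max() - pos_w

    def centroid(weights):
        s = weights.sum() + 1e-9
        return np.array([(ys * weights).sum() / s, (xs * weights).sum() / s])

    g = centroid(pos_w) - centroid(neg_w)   # (dy, dx)
    return g, np.linalg.norm(g)
```

On a horizontal intensity ramp, the positive centroid shifts toward the bright side and the negative centroid toward the dark side, so the vector points along the direction of increasing intensity, as a gradient should.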
The DeGraF features form the basis for two new approaches that perform dense
3D reconstruction from a single vehicle-mounted camera. The first approach tracks
DeGraF features in real-time while performing image stabilisation with minimal
computational cost. This means that despite camera vibration the algorithm can
accurately predict the real-world coordinates of each image pixel in real-time by comparing
each motion-vector to the ego-motion vector of the vehicle. The performance
of this approach has been compared to different 3D reconstruction methods in order
to determine their accuracy, depth-map density, noise-resistance and computational
complexity. The second approach proposes the use of local frequency analysis of
gradient features for estimating relative depth. This novel method is based on the
fact that DeGraF gradients can accurately measure local image variance with subpixel
accuracy. It is shown that the local frequency by which the centroid oscillates
around the gradient window centre is proportional to the depth of each gradient
centroid in the real world. The lower computational complexity of this methodology
comes at the expense of depth map accuracy as the camera velocity increases, but
it is at least five times faster than the other evaluated approaches.
This work also proposes a novel technique for deriving visual saliency maps by
using Division of Gaussians (DIVoG). In this context, saliency maps express how
different each image pixel is from its surrounding pixels across multiple pyramid
levels. This approach is shown to be both fast and accurate when evaluated against
other state-of-the-art approaches. Subsequently, the saliency information is combined
with depth information to identify salient regions close to the host vehicle.
The fused map allows faster detection of high-risk areas where obstacles are likely
to exist. As a result, existing object detection algorithms, such as the Histogram of
Oriented Gradients (HOG) can execute at least five times faster.
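The centre-surround idea behind Division of Gaussians can be illustrated as follows. This is a simplified sketch that substitutes direct multi-sigma Gaussian blurs for an explicit image pyramid; the function names, the sigma values and the accumulation rule are illustrative, not the published DIVoG algorithm.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using numpy only (edge padding)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 1, tmp)

def divog_saliency(img, sigmas=(1.0, 2.0, 4.0)):
    """Saliency as the deviation of each pixel from its Gaussian-smoothed
    surround (the 'division' in Division of Gaussians), accumulated over
    several scales standing in for pyramid levels."""
    img = img.astype(float) + 1e-6
    sal = np.zeros_like(img)
    for s in sigmas:
        surround = gaussian_blur(img, s) + 1e-6
        sal += np.abs(img / surround - 1.0)   # centre-to-surround ratio per scale
    return sal / len(sigmas)
```

A pixel that stands out from its neighbourhood at several scales accumulates a large ratio deviation, which is the behaviour a saliency map needs for flagging isolated obstacles quickly.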
In conclusion, through a step-wise approach, computationally expensive algorithms
have been optimised or replaced by novel methodologies to produce a fast object
detection system that is aligned to the requirements of the automotive domain.
Perceptual modelling for 2D and 3D
Deliverable D1.1 of the ANR PERSEE project. This report was produced within the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D1.1 of the project.
Streaming and User Behaviour in Omnidirectional Videos
Omnidirectional videos (ODVs) have gone beyond the passive paradigm of traditional video,
offering higher degrees of immersion and interaction. The revolutionary novelty of this technology is the possibility for users to interact with the surrounding environment, and to feel a
sense of engagement and presence in a virtual space. Users are clearly the main driving force of
immersive applications and, consequently, the services need to be properly tailored to them.
In this context, this chapter highlights the importance of the new role of users in ODV streaming applications, and thus the need for understanding their behaviour while navigating within
ODVs. A comprehensive overview of the research efforts aimed at advancing ODV streaming
systems is also presented. In particular, the state-of-the-art solutions under examination in this
chapter are distinguished in terms of system-centric and user-centric streaming approaches: the
former approach comes from a quite straightforward extension of well-established solutions for
the 2D video pipeline, while the latter benefits from understanding users' behaviour
to enable more personalised ODV streaming.