Motion compensation and very low bit rate video coding
Recent activities of the International Telecommunication Union (ITU) and the International Organization for Standardization (ISO) have led to new standards for very low bit-rate video coding, such as H.263 and MPEG-4, following the successful application of the international standards H.261 and MPEG-1/2 to video coding above 64 kbps. However, at very low bit rates the classic block-matching-based DCT video coding scheme suffers seriously from blocking artifacts, which considerably degrade the quality of reconstructed video frames. To solve this problem, this dissertation presents a new technique in which motion compensation is based on a dense motion field.
Four efficient new video coding algorithms for very low bit rates are proposed based on this technique. (1) After studying model-based video coding algorithms, we propose an optical-flow-based video coding algorithm with thresholding techniques. A statistical model is established for the distribution of intensity differences between two successive frames, and four thresholds are used to control the bit rate and the quality of reconstructed frames. It outperforms typical model-based techniques in terms of complexity and quality of reconstructed frames. (2) We propose an efficient algorithm using DCT-coded optical flow. Dense motion fields are found to be well modeled as a first-order auto-regressive process and can be efficiently compressed with the DCT, achieving a very low bit rate and higher visual quality than the H.263/TMN5. (3) We propose a region-based discrete wavelet transform (DWT) video coding algorithm. This algorithm employs a dense motion field, and regions are segmented according to their content significance. The DWT is applied to residual images region by region, and bits are allocated to regions adaptively. It improves the visual quality and PSNR of significant regions while maintaining a low bit rate. (4) We propose a segmentation-based video coding algorithm for stereo sequences. A correlation-feedback algorithm with a Kalman filter is utilized to improve the accuracy of the optical flow fields. Three criteria, associated with 3D information, 2D connectivity and motion vector fields respectively, are defined for object segmentation. A chain code is utilized to code the shapes of the segmented objects. The algorithm can achieve very high compression ratios, up to several thousand.
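The DCT coding of the dense flow field in algorithm (2) can be illustrated with a minimal numpy/scipy sketch. The keep fraction and quantisation step below are illustrative assumptions, not values from the dissertation, and the entropy coding stage of a real codec is omitted; the point is simply that a smooth, AR(1)-like field concentrates its energy in a few low-frequency DCT coefficients.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_motion_field(mv, keep=0.05, q=0.5):
    """Illustrative DCT coding of one component of a dense motion field.

    A smooth (AR(1)-like) field concentrates its energy in low-frequency
    DCT coefficients, so discarding all but a small fraction of them loses
    little information.  `keep` and `q` are illustrative choices, not
    values from the dissertation.
    """
    coeffs = dctn(mv, norm="ortho")
    # Keep only the largest-magnitude coefficients, zero the rest.
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < thresh] = 0.0
    # Uniform quantisation of the survivors (entropy coding omitted).
    return np.round(coeffs / q) * q

def reconstruct_motion_field(coeffs):
    return idctn(coeffs, norm="ortho")

# Demo on a synthetic smooth field.
rng = np.random.default_rng(0)
field = np.cumsum(rng.normal(size=(64, 64)), axis=1) * 0.1  # AR(1)-like
rec = reconstruct_motion_field(compress_motion_field(field))
print("RMS error:", np.sqrt(np.mean((field - rec) ** 2)))
```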
Automatic face recognition using stereo images
Face recognition is an important pattern recognition problem in the study of both natural and artificial learning. Compared to other biometrics, it is non-intrusive, non-invasive and requires no participation from the subjects. As a result, it has many applications, from human-computer interaction to access control and from law enforcement to crowd surveillance. In typical optical-image-based face recognition systems, the systematic variability arising from representing the three-dimensional (3D) shape of a face by a two-dimensional (2D) illumination intensity matrix is treated as random variability. Multiple examples of the face displaying varying pose and expressions are captured under different imaging conditions. The imaging environment, pose and expressions are strictly controlled, and the images undergo rigorous normalisation and pre-processing. This may be implemented in a partially or a fully automated system. Although these systems report high classification accuracies (>90%), they lack versatility and tend to fail when deployed outside laboratory conditions. Recently, more sophisticated 3D face recognition systems harnessing depth information have emerged. These systems usually employ specialist equipment such as laser scanners and structured light projectors. Although more accurate than 2D optical-image-based recognition, these systems are equally difficult to implement in a non-co-operative environment. Existing face recognition systems, both 2D and 3D, detract from the main advantages of face recognition and fail to fully exploit its non-intrusive capacity. This is either because they rely too much on subject co-operation, which is not always available, or because they cannot cope with noisy data. The main objective of this work was to investigate the role of depth information in face recognition in a noisy environment. A stereo-based system, inspired by human binocular vision, was devised using a pair of manually calibrated off-the-shelf digital cameras in a stereo setup to compute depth information. Depth values extracted from 2D intensity images using stereoscopy are extremely noisy, and as a result this approach to face recognition is rare. This was confirmed by the results of our experimental work. Noise in the set of correspondences, camera calibration and triangulation led to inaccurate depth reconstruction, which in turn led to poor classifier accuracy for both 3D surface matching and 2½D depth maps. Recognition experiments were performed on the Sheffield Dataset, consisting of 692 images of 22 individuals with varying pose, illumination and expressions.
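The depth-recovery step described above (triangulating matched points from a calibrated stereo pair) can be sketched with OpenCV as follows. All calibration values and correspondences below are placeholders rather than the thesis's data; the sketch also shows where the abstract's error sources enter, since noise in the matches and in the projection matrices propagates directly into the recovered depths.

```python
import numpy as np
import cv2

# Projection matrices P = K [R | t] for two calibrated cameras
# (placeholder values; in the thesis these come from manual calibration).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Matched pixel coordinates in each view, shape (2, N): rows are x, y.
pts_left = np.array([[300.0, 350.0], [200.0, 210.0]])
pts_right = np.array([[280.0, 330.0], [200.0, 210.0]])

# Linear triangulation; errors in the correspondences and calibration
# translate directly into depth errors, as discussed above.
pts_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
pts_3d = (pts_h[:3] / pts_h[3]).T
print(pts_3d)  # one (X, Y, Z) row per correspondence
```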
Non-contact vision-based deformation monitoring on bridge structures
Information on deformation is an important metric for bridge condition and performance assessment, e.g. identifying abnormal events, calibrating bridge models and estimating load-carrying capacities. However, accurate measurement of bridge deformation, especially for long-span bridges, remains a challenging task. The major aim of this research is to develop practical and cost-effective techniques for accurate deformation monitoring of bridge structures. Vision-based systems are taken as the study focus for several reasons: low cost, easy installation, adequate sample rates, and remote and distributed sensing.
This research proposes a custom-developed vision-based system for bridge deformation monitoring. The system supports either consumer-grade or professional cameras and incorporates four advanced video tracking methods to adapt to different test situations. The sensing accuracy is first quantified under laboratory conditions. The working performance in field testing is then evaluated on one short-span and one long-span bridge, considering several influencing factors: long-range sensing, low-contrast target patterns, pattern changes and lighting changes. From these case studies, suggestions for tracking method selection in field testing are summarised, and possible limitations of vision-based systems are illustrated.
To overcome the observed limitations of vision-based systems, this research further proposes a mixed system combining cameras with accelerometers for accurate deformation measurement. To integrate displacement with acceleration data autonomously, a novel data fusion method based on a Kalman filter with maximum likelihood estimation is proposed. Field test validation shows the method is effective in improving displacement accuracy and widening the frequency bandwidth. The mixed system based on data fusion is implemented in field testing of a railway bridge under adverse test conditions (e.g. low-contrast target patterns and camera shake). Analysis results indicate that the system offers higher accuracy than a camera alone and is viable for bridge influence line estimation.
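As a rough sketch of this kind of fusion (not the authors' exact formulation), a linear Kalman filter can take accelerometer samples as the process input and vision-based displacements as measurements. The noise levels below are illustrative hand-picked constants, whereas the thesis estimates them by maximum likelihood.

```python
import numpy as np

def fuse_disp_accel(acc, disp, dt, q=1e-2, r=1e-4):
    """Fuse accelerometer samples (process input) with vision-based
    displacement samples (measurements) via a linear Kalman filter.

    State x = [displacement, velocity].  q and r are illustrative
    process/measurement noise levels; the thesis estimates them by
    maximum likelihood rather than fixing them by hand.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    B = np.array([0.5 * dt**2, dt])         # acceleration input
    H = np.array([[1.0, 0.0]])              # we observe displacement
    Q = q * np.outer(B, B)                  # process noise
    R = np.array([[r]])                     # measurement noise

    x, P, fused = np.zeros(2), np.eye(2), []
    for a, d in zip(acc, disp):
        # Predict with the acceleration sample.
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Update with the vision displacement when one is available
        # (NaN marks camera samples missing at this time step).
        if not np.isnan(d):
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([d]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        fused.append(x[0])
    return np.array(fused)
```

Because the accelerometer drives the high-rate prediction and the camera corrects the low-frequency drift, the fused displacement inherits the bandwidth of one sensor and the accuracy of the other, which is the effect the abstract describes.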
Given its considerable accuracy and resolution in the time and frequency domains, the potential of vision-based measurement for vibration monitoring is investigated. The proposed vision-based system is applied to a cable-stayed footbridge for deck deformation and cable vibration measurement under pedestrian loading. Analysis results indicate that the measured data enable accurate estimation of modal frequencies and can be used to investigate variations of modal frequencies under varying pedestrian loads. In this application the vision-based system is used for multi-point vibration measurement and provides results comparable to those obtained with an array of accelerometers.
Reconstruction of 3D scenes from pairs of uncalibrated images. Creation of an interactive system for extracting 3D data points and investigation of automatic techniques for generating dense 3D data maps from pairs of uncalibrated images for remote sensing applications.
Much research effort has been devoted to producing algorithms that contribute, directly or indirectly, to the extraction of 3D information from a wide variety of scene types and image-capture conditions. The research presented in this thesis is aimed at three distinct applications in this area: interactively extracting 3D points from a pair of uncalibrated images in a flexible way; automatically finding corresponding points in high-resolution images, particularly those of archaeological scenes captured from a freely moving light aircraft; and improving a correlation approach to dense disparity mapping, leading to 3D surface reconstructions.
The fundamental concepts of stereo vision, camera models, and the epipolar geometry encoded by the fundamental matrix are introduced, followed by a detailed literature review of existing methods.
An interactive system for viewing a scene via a monochrome or colour anaglyph is presented, which allows the user to choose the compromise between the amount of colour and the ghosting perceived by controlling colour saturation, and to choose the depth plane of interest. An improved method of extracting 3D coordinates from disparity values in the presence of significant error is also presented.
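A rough Python/OpenCV sketch of the saturation compromise is given below; the blending scheme and parameter are illustrative assumptions, not the implementation in the thesis.

```python
import numpy as np
import cv2

def anaglyph(left, right, saturation=0.5):
    """Red/cyan anaglyph with a saturation control: blending each view
    toward its own greyscale reduces retinal rivalry (ghosting) at the
    cost of colour.  `saturation` in [0, 1] sets the user's compromise.
    """
    def desaturate(img):
        grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        grey = cv2.cvtColor(grey, cv2.COLOR_GRAY2BGR)
        return cv2.addWeighted(img, saturation, grey, 1.0 - saturation, 0)

    l, r = desaturate(left), desaturate(right)
    out = np.empty_like(l)
    out[..., 2] = l[..., 2]    # red channel from the left view (BGR order)
    out[..., :2] = r[..., :2]  # blue and green channels from the right view
    return out
```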
Interactive methods, while very flexible, require significant effort from the user in finding and fusing corresponding points. The thesis therefore continues by presenting several variants of existing scale-invariant feature transform (SIFT) methods that automatically find correspondences in uncalibrated high-resolution aerial images with improved speed and memory requirements. In addition, a contribution to estimating lens distortion correction by a Levenberg-Marquardt-based method is presented, together with the generation of straight-line data strings that are essential input for estimating the correction.
The remainder of the thesis presents correlation-based methods for generating dense disparity maps based on single and multiple image rectifications using sets of automatically found correspondences, and demonstrates the improvements obtained with the latter method. Example views of point clouds for 3D surfaces produced from pairs of uncalibrated images using the methods presented in the thesis are included.
Real-time object detection using monocular vision for low-cost automotive sensing systems
This work addresses the problem of real-time object detection in automotive environments using monocular vision. The focus is on real-time feature detection, tracking, depth estimation using monocular vision and, finally, object detection by fusing visual saliency and depth information.
Firstly, a novel feature detection approach is proposed for extracting stable and dense features even in images with very low signal-to-noise ratio. This methodology is based on image gradients, which are redefined to take account of noise as part of their mathematical model. Each gradient is based on a vector connecting a negative to a positive intensity centroid, where both centroids are symmetric about the centre of the area for which the gradient is calculated. Multiple gradient vectors define a feature, with its strength being proportional to the underlying gradient vector magnitude. The evaluation of the Dense Gradient Features (DeGraF) shows superior performance over other contemporary detectors in terms of keypoint density, tracking accuracy, illumination invariance, rotation invariance, noise resistance and detection time.
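A simplified reading of the centroid-based gradient described above can be sketched in a few lines of numpy. The mean-removal step and window handling are assumptions made for the illustration, not the reference DeGraF implementation.

```python
import numpy as np

def centroid_gradient(window):
    """Gradient of one image window as the vector from its negative to
    its positive intensity centroid, with feature strength proportional
    to the vector magnitude.  Simplified sketch, not the reference code.
    """
    h, w = window.shape
    ys, xs = np.mgrid[0:h, 0:w]
    centre = np.array([(h - 1) / 2.0, (w - 1) / 2.0])

    d = window - window.mean()               # split into +/- deviations
    pos, neg = np.clip(d, 0, None), np.clip(-d, 0, None)

    def centroid(weights):
        s = weights.sum()
        if s == 0:                           # flat window: no gradient
            return centre
        return np.array([(ys * weights).sum(), (xs * weights).sum()]) / s

    g = centroid(pos) - centroid(neg)        # gradient vector (dy, dx)
    strength = np.linalg.norm(g)             # feature strength ~ |gradient|
    return g, strength
```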
The DeGraF features form the basis for two new approaches that perform dense 3D reconstruction from a single vehicle-mounted camera. The first approach tracks DeGraF features in real-time while performing image stabilisation at minimal computational cost. This means that, despite camera vibration, the algorithm can accurately predict the real-world coordinates of each image pixel in real-time by comparing each motion vector to the ego-motion vector of the vehicle. The performance of this approach has been compared to different 3D reconstruction methods in order to determine their accuracy, depth-map density, noise resistance and computational complexity. The second approach proposes the use of local frequency analysis of gradient features for estimating relative depth. This novel method is based on the fact that DeGraF gradients can measure local image variance with sub-pixel accuracy. It is shown that the local frequency at which the centroid oscillates around the gradient window centre is proportional to the depth of each gradient centroid in the real world. The lower computational complexity of this methodology comes at the expense of depth-map accuracy as the camera velocity increases, but it is at least five times faster than the other evaluated approaches.
This work also proposes a novel technique for deriving visual saliency maps using Division of Gaussians (DIVoG). In this context, saliency maps express how different each image pixel is from its surrounding pixels across multiple pyramid levels. This approach is shown to be both fast and accurate when evaluated against other state-of-the-art approaches. Subsequently, the saliency information is combined with depth information to identify salient regions close to the host vehicle. The fused map allows faster detection of high-risk areas where obstacles are likely to exist. As a result, existing object detection algorithms, such as the Histogram of Oriented Gradients (HOG), can execute at least five times faster.
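The exact DIVoG formulation is not spelled out in this abstract. As an assumption-laden illustration only: dividing each pyramid level by a more heavily blurred copy of itself measures how much each pixel differs from its surround, and accumulating the responses across levels yields a multi-scale saliency map.

```python
import numpy as np
import cv2

def divog_like_saliency(img, levels=4):
    """Rough Division-of-Gaussians-style saliency sketch: at each pyramid
    level, the ratio of the image to a heavily blurred copy measures how
    much a pixel differs from its surround; responses are accumulated
    across levels.  An illustrative reading of DIVoG, not the original.
    """
    img = img.astype(np.float32) + 1e-6      # avoid division by zero
    saliency = np.zeros_like(img)
    level = img
    for _ in range(levels):
        surround = cv2.GaussianBlur(level, (0, 0), sigmaX=3)
        ratio = level / (surround + 1e-6)
        # Pixels brighter or darker than their surround are both salient.
        response = np.abs(np.log(ratio))
        saliency += cv2.resize(response, (img.shape[1], img.shape[0]))
        level = cv2.pyrDown(level)           # next (coarser) pyramid level
    return saliency / saliency.max()
```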
In conclusion, through a step-wise approach, computationally expensive algorithms have been optimised or replaced by novel methodologies to produce a fast object detection system that is aligned to the requirements of the automotive domain.
Accurate depth from defocus estimation with video-rate implementation
The science of measuring depth from images at video rate using 'defocus' has been investigated. The method required two differently focussed images acquired from a single viewpoint using a single camera. The relative blur between the images was used to determine the in-focus axial point of each pixel and hence its depth.
The depth estimation algorithm of Watanabe and Nayar was employed to recover the depth estimates, but the broadband filters, referred to as Rational filters, were designed using a new procedure: the Two-Step Polynomial Approach. The filters designed by the new model were largely insensitive to object texture and were shown to model the blur more precisely than the previous method. Experiments with real planar images demonstrated a maximum RMS depth error of 1.18% for the proposed filters, compared to 1.54% for the previous design.
The software developed in this research required five 2D convolutions to be processed in parallel, and these were implemented efficiently on an FPGA using a two-channel, five-stage pipelined architecture, although the precision of the filter coefficients and variables had to be limited within the processor. The number of multipliers required for each convolution was reduced from 49 to 10 (a 79.5% reduction) using a Triangular design procedure. Experimental results suggested that the pipelined processor provided depth estimates comparable in accuracy to the full-precision Matlab output, and generated depth maps of 400 x 400 pixels in 13.06 ms, which is faster than video rate.
The defocused images (near- and far-focused) were optically registered for magnification using telecentric optics. A frequency-domain approach based on phase correlation was employed to measure the radial shifts due to magnification and to position the external aperture optimally. The telecentric optics ensured correct pixel-to-pixel registration between the defocused images and provided more accurate depth estimates.
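Phase correlation itself is a standard FFT-based technique and can be sketched as follows; the sub-pixel refinement and the radial-shift measurement used in this work are omitted.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between images a and b via phase
    correlation: the normalised cross-power spectrum of the two FFTs has
    an inverse transform that peaks at the relative shift.
    """
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the wrapped-around half correspond to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```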
LDMIC: Learning-based Distributed Multi-view Image Coding
Multi-view image compression plays a critical role in 3D-related applications. Existing methods adopt a predictive coding architecture, which requires joint encoding to compress the corresponding disparity as well as residual information. This demands collaboration among cameras and enforces the epipolar geometric constraint between different views, which makes it challenging to deploy these methods in distributed camera systems with randomly overlapping fields of view. Meanwhile, distributed source coding theory indicates that efficient data compression of correlated sources can be achieved by independent encoding and joint decoding, which motivates us to design a learning-based distributed multi-view image coding (LDMIC) framework. With independent encoders, LDMIC introduces a simple yet effective joint context transfer module based on the cross-attention mechanism at the decoder to effectively capture the global inter-view correlations, which is insensitive to the geometric relationships between images. Experimental results show that LDMIC significantly outperforms both traditional and learning-based MIC methods while enjoying fast encoding speed. Code will be released at https://github.com/Xinjie-Q/LDMIC.
Comment: Accepted by ICLR 202
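A minimal PyTorch sketch of a cross-attention joint context transfer module of the kind described above is given below. The dimensions, layer choices and the weight sharing between the two attention directions are assumptions for illustration, not the released LDMIC code.

```python
import torch
import torch.nn as nn

class JointContextTransfer(nn.Module):
    """Minimal cross-attention block: tokens of one view query the other
    view's tokens, so each decoder can borrow inter-view context without
    any epipolar alignment.  Dimensions are illustrative assumptions.
    """
    def __init__(self, dim=192, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_a, feat_b):
        # feat_*: (B, C, H, W) latent feature maps from each view's decoder.
        B, C, H, W = feat_a.shape
        tok_a = feat_a.flatten(2).transpose(1, 2)   # (B, H*W, C)
        tok_b = feat_b.flatten(2).transpose(1, 2)
        # Each view attends to the other; attention over all tokens makes
        # the exchange insensitive to geometric relationships between views.
        ctx_a, _ = self.attn(tok_a, tok_b, tok_b)
        ctx_b, _ = self.attn(tok_b, tok_a, tok_a)
        out_a = self.norm(tok_a + ctx_a).transpose(1, 2).reshape(B, C, H, W)
        out_b = self.norm(tok_b + ctx_b).transpose(1, 2).reshape(B, C, H, W)
        return out_a, out_b
```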