Image-Dependent Spatial Shape-Error Concealment
Existing spatial shape-error concealment techniques are broadly based either upon parametric curves that exploit geometric information concerning a shape's contour, or upon object shape statistics using a combination of Markov random fields and maximum a posteriori estimation. Both categories are, to some extent, able to mask errors caused by information loss, provided the shape is considered independently of the image/video. Palpably, however, they do not afford the best solution in applications where shape is used as metadata to describe image and video content. This paper presents a novel image-dependent spatial shape-error concealment (ISEC) algorithm that uses both image and shape information by employing the established rubber-band contour detecting function, with the novel enhancement of automatically determining the optimal width of the band to achieve superior error concealment. Experimental results corroborate, both qualitatively and numerically, the enhanced performance of the new ISEC strategy compared with established techniques.
A Wavelet Transform Applet for Interactive Learning
In recent years, new forms and techniques of teaching have appeared, based on the Internet and on multimedia applications. In the teleteaching project Virtual University of the Upper Rhine Valley (VIROR), multimedia simulations and animations complement traditional teaching material. Lecturers use Java applets in their courses to explain complex structures. These are then stored in a multimedia database to enable asynchronous learning. The wavelet transform has become the most interesting new algorithm for still image compression. Yet, there are many parameters within a wavelet analysis and synthesis: choice of the wavelet filter bank, decomposition strategy, image boundary policy, quantization threshold, etc. We consider the wavelet transform to be a typical example of a complex, hard-to-understand algorithm that needs illustration by interactive multimedia. In this article, we present the didactic background and the implementation of a sample applet on the discrete wavelet transform, as taught in our multimedia course.
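As an illustrative aside (not part of the applet itself), the decomposition the applet animates can be sketched for the simplest filter bank, the Haar wavelet, together with the quantization-threshold parameter the abstract mentions. The code below is a minimal one-level sketch assuming NumPy; function names are our own.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the discrete Haar wavelet transform.

    Returns (approximation, detail) coefficients. Haar is the
    simplest choice among the wavelet filter banks a student
    could vary in such an applet.
    """
    x = np.asarray(signal, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass: scaled pair sums
    detail = (even - odd) / np.sqrt(2)   # high-pass: scaled pair differences
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level (perfect reconstruction)."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)
# Quantization threshold (the lossy step): zero out small details.
# With threshold 2.0 every detail here is discarded, so each pair
# of samples is reconstructed as its pairwise average.
d_q = np.where(np.abs(d) > 2.0, d, 0.0)
x_rec = haar_idwt(a, d_q)
```

Without the thresholding step, `haar_idwt(a, d)` reproduces the input exactly, which is the perfect-reconstruction property of the filter bank.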
Hashing for Similarity Search: A Survey
Similarity search (nearest neighbor search) is the problem of retrieving, from a large database, the data items whose distances to a query item are smallest. Various methods have been developed to address this problem, and recently much effort has been devoted to approximate search. In this paper, we present a survey of one of the main solutions, hashing, which has been widely studied since the pioneering work on locality-sensitive hashing. We divide hashing algorithms into two main categories: locality-sensitive hashing, which designs hash functions without exploring the data distribution, and learning to hash, which learns hash functions according to the data distribution. We review them from various aspects, including hash function design, distance measures, and search schemes in the hash-coding space.
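To make the first category concrete, here is a minimal sketch (our own illustration, not from the survey) of data-independent locality-sensitive hashing for cosine similarity via random hyperplanes: each bit is the sign of a projection onto a random direction, and search is performed by ranking Hamming distances in the code space.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hash(dim, n_bits, rng):
    """Random-hyperplane LSH: the hash is chosen without looking at
    the data distribution, the hallmark of locality-sensitive hashing."""
    planes = rng.standard_normal((n_bits, dim))
    def h(v):
        return tuple((planes @ v > 0).astype(int))  # one sign bit per plane
    return h

def hamming(a, b):
    """Distance measure in the hash-coding space."""
    return sum(x != y for x, y in zip(a, b))

h = make_hash(dim=64, n_bits=16, rng=rng)
database = rng.standard_normal((1000, 64))
codes = [h(v) for v in database]

# A query close (in cosine distance) to item 42 should receive a
# similar code, so ranking by Hamming distance is an approximate
# nearest-neighbor search.
query = database[42] + 0.01 * rng.standard_normal(64)
qcode = h(query)
best = min(range(1000), key=lambda i: hamming(codes[i], qcode))
```

A learning-to-hash method would instead fit `planes` (or a more general function) to the data distribution, e.g. to preserve neighborhoods observed in a training set.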
Centralized and distributed semi-parametric compression of piecewise smooth functions
This thesis introduces novel wavelet-based semi-parametric centralized and distributed
compression methods for a class of piecewise smooth functions. Our proposed compression schemes are based on a non-conventional transform coding structure with simple
independent encoders and a complex joint decoder.
Current centralized state-of-the-art compression schemes are based on the conventional structure where an encoder is relatively complex and nonlinear. In addition, the
setting usually allows the encoder to observe the entire source. Recently, there has been
an increasing need for compression schemes where the encoder is lower in complexity
and, instead, the decoder has to handle more computationally intensive tasks. Furthermore, the setup may involve multiple encoders, where each one can only partially
observe the source. Such a scenario is often referred to as distributed source coding.
In the first part, we focus on the dual situation of the centralized compression where
the encoder is linear and the decoder is nonlinear. Our analysis is centered around a
class of 1-D piecewise smooth functions. We show that, by incorporating parametric
estimation into the decoding procedure, it is possible to achieve the same distortion-
rate performance as that of a conventional wavelet-based compression scheme. We also
present a new constructive approach to parametric estimation based on the sampling
results of signals with finite rate of innovation.
The second part of the thesis focuses on the distributed compression scenario, where
each independent encoder partially observes the 1-D piecewise smooth function. We
propose a new wavelet-based distributed compression scheme that uses parametric estimation to perform joint decoding. Our distortion-rate analysis shows that it is possible
for the proposed scheme to achieve the same compression performance as that of a
joint encoding scheme.
Lastly, we apply the proposed theoretical framework in the context of distributed
image and video compression. We start by considering a simplified model of the video
signal and show that we can achieve distortion-rate performance close to that of a joint
encoding scheme. We then present practical compression schemes for real world signals.
Our simulations confirm the improvement in performance over classical schemes, both
in terms of both PSNR and visual quality.
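The reason wavelet-based schemes suit piecewise smooth functions can be sketched in a few lines (our illustration, under the simplifying assumption of a piecewise-constant signal and a Haar analysis): detail coefficients vanish wherever the signal is locally smooth, so only the few coefficients straddling a discontinuity carry information.

```python
import numpy as np

# Piecewise-constant signal: two flat pieces with one discontinuity
# placed mid-pair (at sample 15) so a Haar pair straddles the jump.
x = np.concatenate([np.full(15, 1.0), np.full(17, 5.0)])

# One-level Haar analysis: details are scaled pairwise differences.
even, odd = x[0::2], x[1::2]
detail = (even - odd) / np.sqrt(2)

# Every pair inside a flat piece is constant, so its detail
# coefficient is exactly zero; only the pair containing the jump
# survives. This sparsity is what a (possibly linear, low-complexity)
# encoder can exploit, leaving parametric estimation of the
# discontinuity to the decoder.
n_nonzero = np.count_nonzero(detail)
```

For genuinely piecewise smooth (rather than piecewise constant) pieces, higher-order wavelets play the same role: details decay rapidly away from the discontinuities.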
Investigation of Different Video Compression Schemes Using Neural Networks
Image/video compression has great significance in the communication of motion pictures and still images. The need for compression has resulted in the development of various techniques, including transform coding, vector quantization, and neural networks. In this thesis, neural-network-based methods are investigated to achieve good compression ratios while maintaining image quality. Parts of this investigation include motion detection and weight retraining. An adaptive technique is employed to improve video frame quality for a given compression ratio by frequently updating the weights obtained from training. More specifically, weight retraining is performed only when the error exceeds a given threshold value. Image quality is measured objectively, using the peak signal-to-noise ratio (PSNR) performance measure. Results show the improved performance of the proposed architecture compared to existing approaches. The proposed method is implemented in MATLAB, and the results obtained, such as compression ratio versus signal-to-noise ratio, are presented.
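The objective quality measure and the retrain-on-threshold idea described above can be sketched as follows (a minimal illustration in Python rather than the thesis's MATLAB; the 40 dB threshold is an arbitrary example value, not taken from the thesis).

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the
    reconstructed frame is closer to the original."""
    err = np.asarray(original, float) - np.asarray(reconstructed, float)
    mse = np.mean(err ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)

frame = np.full((8, 8), 100.0)
degraded = frame.copy()
degraded[0, 0] = 110.0          # single-pixel error of magnitude 10
# MSE = 100/64 = 1.5625, so PSNR = 10*log10(255^2/1.5625) ≈ 46.2 dB
quality = psnr(frame, degraded)

# Adaptive retraining: update the network weights only when the
# reconstruction quality drops below a chosen threshold.
needs_retrain = quality < 40.0
```

Gating weight updates on a quality threshold keeps the per-frame cost low while still tracking scene changes that degrade reconstruction.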