Multi-scale edge-guided image gap restoration
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. The focus of this research is the estimation of gaps (missing blocks) in digital images. To progress the research, two main issues were identified: (1) the appropriate domains for image gap restoration and (2) the methodologies for gap interpolation. Multi-scale transforms provide an appropriate framework for gap restoration. Their main advantages are the decomposition into a set of frequency bands and scales and the ability to progressively reduce the size of the gap to one sample wide at the transform apex. Two types of multi-scale transform were considered for comparative evaluation: the 2-dimensional (2D) discrete cosine transform (DCT) pyramid and the 2D discrete wavelet transform (DWT). For image gap estimation, a family of conventional weighted interpolators and directional edge-guided interpolators were developed and evaluated. Two types of edges were considered: ‘local’ edges or textures and ‘global’ edges such as the boundaries between objects or within/across patterns in the image. For local edge, or texture, modelling, a number of methods were explored which aim to reconstruct a set of gradients across the restored gap to match those computed from the known neighbourhood. These differential gradients are estimated along the geometrical vertical, horizontal and cross directions for each pixel of the gap. The edge-guided interpolators aim to operate on distinct regions confined within edge lines. For global edge-guided interpolation, the two main methods explored are the Sobel and Canny detectors, the latter providing improved edge detection. The combination and integration of different multi-scale domains, local edge interpolators, global edge-guided interpolators and iterative estimation of edges provided a variety of configurations that were comparatively explored and evaluated. For evaluation, a set of images commonly used in the literature was employed, together with simulated regular and random image gaps at a variety of loss rates. The performance measures used are the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The results obtained are better than the state of the art reported in the literature.
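As a brief illustration of two of the building blocks named above, a directional (edge-aware) pixel estimate and the PSNR measure, a minimal Python sketch follows; it is a simplified stand-in, not the algorithm developed in the thesis, and it assumes the gap pixel's eight neighbours are known.

```python
# Illustrative sketch only; simplified stand-in, not the thesis's algorithm.
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image and its restoration."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def directional_estimate(img, r, c):
    """Estimate a missing pixel at (r, c) from the direction (vertical, horizontal,
    or one of the two diagonals) whose two known neighbours are most similar,
    i.e. the direction least likely to cross an edge."""
    pairs = [
        ((r - 1, c), (r + 1, c)),          # vertical
        ((r, c - 1), (r, c + 1)),          # horizontal
        ((r - 1, c - 1), (r + 1, c + 1)),  # main diagonal
        ((r - 1, c + 1), (r + 1, c - 1)),  # anti-diagonal
    ]
    a, b = min(pairs, key=lambda p: abs(float(img[p[0]]) - float(img[p[1]])))
    return 0.5 * (float(img[a]) + float(img[b]))
```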
Construction of super-resolution mosaics from low-resolution video. Application to video summarization and transmission error concealment.
The digitization of existing videos, together with the explosive growth of multimedia services over networks such as digital television broadcasting and mobile communications, has produced an enormous quantity of compressed video. This calls for efficient indexing and browsing tools, yet indexing before encoding is not common practice. The current approach is to fully decode these videos and then build the indexes, which is very costly and therefore not feasible in real time. Moreover, important information such as motion, lost during decoding, is re-estimated even though it is already present in the compressed stream. The goal of this thesis is therefore to reuse the data already present in the compressed MPEG stream for indexing and fast browsing. More precisely, we extract DC coefficients and motion vectors. Within this thesis we are particularly interested in the construction of mosaics from the DC images extracted from I-frames. A mosaic is built by registering and fusing all the images of a video sequence into a single coordinate system, which is generally aligned with one of the images of the sequence: the reference image. The result is a single image that gives a global view of the sequence. We therefore propose a complete system for constructing mosaics from the MPEG-1/2 stream that takes into account the various problems arising in real video sequences, such as moving objects or illumination changes. An essential task in building a mosaic is estimating the motion between each image of the sequence and the reference image. Our method is based on a robust estimation of the global camera motion from the motion vectors of the P-frames. However, the global camera motion estimated for a P-frame may be incorrect because it depends strongly on the accuracy of the encoded vectors. We detect the affected P-frames by examining the DC coefficients of the associated encoded prediction error and propose two methods to correct these motions. A mosaic built from DC images has a very low resolution and suffers from aliasing effects due to the nature of DC images. To increase its resolution and improve its visual quality, we apply a super-resolution method based on iterative back-projection. Super-resolution methods are also based on registering and fusing the images of a video sequence, but additionally include an image restoration step. In this context, we have developed a new method for estimating the blur caused by camera motion, together with a corresponding spectral restoration method. Spectral restoration handles the blur globally, but local blurs appear for objects whose motion is independent of the camera motion. We therefore propose a new super-resolution algorithm, derived from the iterative spatial restoration of Van Cittert and Jansson, that can restore local blurs. Based on a segmentation of moving objects, we restore the background mosaic and the foreground objects separately, and we have adapted our blur estimation method accordingly.
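The robust global-motion step lends itself to a very small illustration. The sketch below assumes a purely translational camera motion and takes the component-wise median of a P-frame's block motion vectors; the thesis's estimator is more general, so this only illustrates the robustness idea, not the actual method.

```python
# Illustrative sketch only: translational model + median, not the thesis's estimator.
import numpy as np

def global_camera_motion(motion_vectors):
    """Robust translational camera-motion estimate from P-frame block motion
    vectors: the component-wise median discards outlier vectors caused by
    independently moving objects."""
    mv = np.asarray(motion_vectors, dtype=float)  # shape (N, 2): (dx, dy) per macroblock
    return np.median(mv, axis=0)

# Example: most blocks follow the camera pan; two belong to a moving object.
print(global_camera_motion([(3, 0), (3, 0), (3, 0), (3, 1), (3, 0), (-8, 5), (-7, 6)]))  # -> [3. 0.]
```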
We first applied our method to the construction of video summaries, with the objective of fast, mosaic-based browsing of compressed video. We then show how the reuse of intermediate results serves other indexing tasks, notably shot-change detection for I-frames and camera-motion characterization. Finally, we explored the field of transmission error recovery: our approach consists in building a mosaic while a shot is being decoded, so that in case of data loss the missing information can be concealed using this mosaic.
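The super-resolution step mentioned above can likewise be sketched in its simplest single-image form. The Gaussian blur model, the scale factor and the bilinear resampling below are illustrative assumptions, not the MPEG-domain method developed in the thesis.

```python
# Illustrative single-image sketch of iterative back-projection; not the thesis's method.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def iterative_back_projection(low_res, scale=2, iterations=10, blur_sigma=1.0):
    """Start from an interpolated guess, then repeatedly simulate the
    low-resolution observation (blur + decimate) and push the residual
    back onto the high-resolution grid."""
    high_res = zoom(low_res.astype(np.float64), scale, order=1)
    for _ in range(iterations):
        simulated = zoom(gaussian_filter(high_res, blur_sigma), 1.0 / scale, order=1)
        residual = low_res.astype(np.float64) - simulated
        high_res += zoom(residual, scale, order=1)  # back-project the error
    return high_res
```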
Enhanced coding, clock recovery and detection for a magnetic credit card
This thesis describes the background, investigation and construction of a system for storing data on the magnetic stripe of a standard three-inch plastic credit card. Investigation shows that the information storage limit within a 3.375 in by 0.11 in rectangle of the stripe is bounded to about 20 kBytes. Practical issues limit the data storage to around 300 Bytes with a low raw error rate: a four-fold density increase over the standard. Removal of the timing jitter (that is probably caused by the magnetic medium particle size) would increase the limit to 1500 Bytes with no other system changes. This is enough capacity for either a small digital passport photograph or a digitized signature, making it possible to remove printed versions from the surface of the card.
To achieve even these modest gains has required the development of a new variable-rate code that is more resilient to timing errors than other codes in its efficiency class. The tabulation of the effects of timing errors required the construction of a new code metric and self-recovering decoders. In addition, a new method of timing recovery, based on signal 'snatches', has been invented to increase the rapidity with which a Bayesian decoder can track the changing velocity of a hand-swiped card. The timing recovery and Bayesian detector have been integrated into one computation (software) unit that is self-contained and can decode a general class of (d, k) constrained codes. Additionally, the unit has a signal truncation mechanism to alleviate some of the effects of non-linear distortion that are present when a magnetic card is read with a magneto-resistive magnetic sensor that has been driven beyond its bias magnetization.
While the storage density is low and the total storage capacity is meagre in comparison with contemporary storage devices, the high-density card may still have a niche role to play in society. Nevertheless, in the face of the Smart card its long-term outlook is uncertain. However, several areas of coding and detection under short-duration extreme conditions have brought new decoding methods to light. The scope of these methods is not limited just to the credit card.
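For readers unfamiliar with (d, k) constrained codes, the constraint itself (at least d and at most k zeros between consecutive ones) can be checked in a few lines of Python; the boundary handling below is an assumption, and the sketch only illustrates the constraint, not the thesis's decoder.

```python
# Illustrative sketch of the (d, k) run-length constraint; not the thesis's decoder.
def satisfies_dk_constraint(bits, d, k):
    """Return True if every pair of consecutive 1s in `bits` is separated by
    at least d and at most k zeros, and no run of zeros exceeds k."""
    run = 0          # zeros since the previous 1
    seen_one = False
    for bit in bits:
        if bit == 1:
            if seen_one and run < d:
                return False
            seen_one = True
            run = 0
        else:
            run += 1
            if run > k:  # too long without a transition
                return False
    return True

# Example with a (1, 3) constraint: a valid and an invalid stream.
print(satisfies_dk_constraint([0, 1, 0, 0, 1, 0, 1], d=1, k=3))  # True
print(satisfies_dk_constraint([1, 1, 0, 1], d=1, k=3))           # False (adjacent ones)
```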
Investigation of coding and equalization for the digital HDTV terrestrial broadcast channel
Includes bibliographical references (p. 241-248). Supported by the Advanced Telecommunications Research Program. Julien J. Nicolas.
Recent Advances in Signal Processing
Signal processing is a critical task in the majority of new technological inventions and challenges, across a wide variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand; these five categories address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.
Intelligent Side Information Generation in Distributed Video Coding
Distributed video coding (DVC) reverses the traditional coding paradigm of complex encoders allied with basic decoding to one where the computational cost is largely incurred by the decoder. This is attractive because the theoretical work of Wyner-Ziv (WZ) and Slepian-Wolf (SW) shows that the performance of such a system should be the same as that of a conventional coder. Despite these solid theoretical foundations, current DVC qualitative and quantitative performance falls short of existing conventional coders, and crucial limitations remain. A key constraint governing DVC performance is the quality of side information (SI), a coarse representation of the original video frames that are not available at the decoder. Techniques to generate SI have usually been based on linear motion compensated temporal interpolation (LMCTI), though these do not always produce satisfactory SI quality, especially in sequences exhibiting non-linear motion.
This thesis presents an intelligent higher order piecewise trajectory temporal interpolation (HOPTTI) framework for SI generation, with original contributions that afford better SI quality than existing LMCTI-based approaches. The major elements in this framework are: (i) a cubic trajectory interpolation model that significantly improves the accuracy of motion vector estimation; (ii) an adaptive overlapped block motion compensation (AOBMC) model which reduces both blocking and overlapping artefacts in the SI emanating from the block matching algorithm; (iii) an empirical mode-switching algorithm; and (iv) an intelligent switching mechanism that constructs the SI by automatically selecting the best macroblock from the intermediate SI generated by the HOPTTI and AOBMC algorithms. Rigorous analysis and evaluation confirm that the new framework achieves significant quantitative and perceptual improvements in SI quality.
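As a rough, hypothetical illustration of the trajectory idea (the function name and inputs below are assumptions, and this is not the HOPTTI algorithm itself), a cubic can be fitted through a block's tracked positions in neighbouring frames and evaluated at the time of the frame being interpolated; with only two reference frames the fit degenerates to the linear, LMCTI-style case.

```python
# Hypothetical illustration of cubic trajectory interpolation; not the thesis's HOPTTI algorithm.
import numpy as np

def cubic_trajectory_position(block_positions, frame_times, t_interp):
    """Fit (up to) a cubic through a block's tracked (x, y) positions in
    neighbouring frames and evaluate it at the interpolated frame's time."""
    pos = np.asarray(block_positions, dtype=float)   # shape (N, 2): x, y per frame
    t = np.asarray(frame_times, dtype=float)
    degree = min(3, len(t) - 1)                      # 2 frames -> linear, 4 frames -> cubic
    fx = np.polyfit(t, pos[:, 0], degree)
    fy = np.polyfit(t, pos[:, 1], degree)
    return float(np.polyval(fx, t_interp)), float(np.polyval(fy, t_interp))

# Example: a block tracked over four frames, evaluated half-way between frames 1 and 2.
print(cubic_trajectory_position([(0, 0), (4, 1), (9, 3), (15, 6)], [0, 1, 2, 3], t_interp=1.5))
```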