
    A family of stereoscopic image compression algorithms using wavelet transforms

    With the standardization of JPEG-2000, wavelet-based image and video compression technologies are gradually replacing the popular DCT-based methods. In parallel, recent developments in autostereoscopic display technology are poised to revolutionize the way consumers enjoy traditional 2D display-based electronic media such as television, computers and movies. However, because stereoscopic imaging doubles the bandwidth/storage requirement, efficient data compression is an essential requirement of a stereo imaging system. In this thesis, seven wavelet-based stereo image compression algorithms are proposed to take advantage of the higher data compaction capability and greater flexibility of wavelets. In the proposed CODEC I, block-based disparity estimation/compensation (DE/DC) is performed in the pixel domain. However, this is inefficient when the DWT is applied to the whole predictive error image that results from the DE process, because of the artificial block boundaries between error blocks in the predictive error image. To overcome this problem, in the remaining proposed CODECs, DE/DC is performed in the wavelet domain. Due to the multiresolution nature of the wavelet domain, two methods of disparity estimation and compensation have been proposed. The first method performs DE/DC in each subband of the lowest/coarsest resolution level and then propagates the disparity vectors obtained to the corresponding subbands of higher/finer resolution. Note that DE is not performed in every subband due to the high overhead bits that would be required for coding the disparity vectors of all subbands. This method is used in CODEC II. In the second method, DE/DC is performed in the wavelet-block domain. This enables disparity estimation to be performed in all subbands simultaneously without increasing the overhead bits required for coding the disparity vectors.
This method is used by CODEC III. Performing disparity estimation/compensation in all subbands results in a significant performance improvement in CODEC III. To further improve the performance of CODEC III, a pioneering wavelet-block search technique is implemented in CODEC IV. The pioneering wavelet-block search technique enables the right/predicted image to be reconstructed at the decoder without the need to transmit the disparity vectors. In the proposed CODEC V, pioneering block search is performed in all subbands of the DWT decomposition, which results in a further improvement in performance. Furthermore, CODECs IV and V are able to perform at very low bit rates (< 0.15 bpp). In CODEC VI and CODEC VII, Overlapped Block Disparity Compensation (OBDC) is used with and without the need to code disparity vectors. Our experimental results showed that no significant coding gains could be obtained for these CODECs over CODECs IV and V. All the CODECs proposed in this thesis are wavelet-based stereo image coding algorithms that maximise the flexibility and benefits offered by wavelet transform technology when applied to stereo imaging. In addition, the use of a baseline-JPEG coding architecture enables easy adaptation of the proposed algorithms within systems originally built for DCT-based coding. This is an important feature during an era in which DCT-based technology is only slowly being phased out to give way to DWT-based compression technology. In addition, this thesis proposes a stereo image coding algorithm that uses JPEG-2000 technology as the basic compression engine. The proposed CODEC, named RASTER, is a rate-scalable stereo image CODEC that has a unique ability to preserve image quality at binocular depth boundaries, an important requirement in the design of stereo image CODECs.
Experimental results show that the proposed CODEC is able to achieve PSNR gains of up to 3.7 dB compared to directly transmitting the right frame using JPEG-2000.
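The pixel-domain DE/DC step described for CODEC I can be sketched roughly as follows; the block size, search range, and SAD cost shown here are illustrative assumptions, not the thesis's actual parameters.

```python
# Hypothetical sketch of block-based disparity estimation between a rectified
# stereo pair (the pixel-domain step the abstract attributes to CODEC I).
# Block size, search range, and the SAD cost are illustrative choices.

def block_disparity(left, right, block=2, max_disp=3):
    """For each block of `right`, find the horizontal shift into `left`
    that minimises the sum of absolute differences (SAD)."""
    h, w = len(right), len(right[0])
    vectors = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            best_d, best_sad = 0, float("inf")
            for d in range(0, max_disp + 1):
                if bx + block + d > w:     # shifted block would leave the image
                    break
                sad = sum(abs(right[y][x] - left[y][x + d])
                          for y in range(by, by + block)
                          for x in range(bx, bx + block))
                if sad < best_sad:
                    best_sad, best_d = sad, d
            vectors.append(best_d)
    return vectors

# Example: when `right` is `left` shifted by one pixel, interior blocks
# recover a disparity of 1 (edge blocks are limited by the image boundary).
```

The predictive error image the abstract refers to would then be the per-pixel residual after compensating `right` by these vectors, and its block boundaries are exactly the artificial discontinuities that motivate moving DE/DC into the wavelet domain.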

    On the automatic detection of otolith features for fish species identification and their age estimation

    This thesis deals with the automatic detection of features in signals, either extracted from photographs or captured by means of electronic sensors, and their possible application to the detection of morphological structures in fish otoliths so as to identify species and estimate their age at death. From a more biological perspective, otoliths, which are calcified structures located in the auditory system of all teleostean fish, constitute one of the main elements employed in the study and management of marine ecology. In this sense, the application of Fourier descriptors to otolith images, combined with component analysis, is typically a first and key step towards characterizing their morphology and identifying fish species. However, some of the main limitations arise from the poor interpretation that can be obtained with this representation and the use that is made of the coefficients, as they are generally selected manually for classification purposes, both in quantity and representativity. The automatic detection of irregularities in signals, and their interpretation, was first addressed in the so-called Best-Basis paradigm. In this sense, Saito's Local Discriminant Bases (LDB) algorithm uses the Discrete Wavelet Packet Transform (DWPT) as the main descriptive tool for positioning irregularities in the time-frequency space, and an energy-based discriminant measure to guide the automatic search for relevant features in this domain. Recent density-based proposals have tried to overcome the limitations of the energy-based functions with relatively little success. However, other measure strategies more consistent with the true classification capability, and which can provide generalization while reducing the dimensionality of features, are yet to be developed. The proposal of this work focuses on a new framework for one-dimensional signals.
An important conclusion extracted therein is that such generalization involves a measure system of bounded values representing the density where no classes overlap. This severely constrains the selection of features and the vector size needed for proper class identification, which must be based not only on global discriminant values but also on complementary information regarding the distribution of samples in the domain. The new tools have been used in the biological study of different hake species, yielding good classification results. However, a major contribution lies in the further interpretation of features that the tool performs, including the structure of irregularities, time-frequency position, support extent and degree of importance, which is highlighted automatically on the same images or signals. As for aging applications, a new demodulation strategy for compensating the nonlinear growth effect on the intensity profile has been developed. Although the method is, in principle, able to adapt automatically to the specific growth of individual specimens, preliminary results with LDB-based techniques suggest studying the effect of lighting conditions on the otoliths in order to design more reliable techniques for reducing image contrast variation. In the meantime, a new theoretical framework for otolith-based fish age estimation has been presented. This theory suggests that if the true fish growth curve is known, the regular periodicity of age structures in the demodulated profile is related to the radial length along which the original intensity profile is extracted. Therefore, if this periodicity can be measured, it is possible to infer the exact fish age without feature extractors and classifiers.
This could have important implications for the use of computational resources and current aging approaches.
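The energy-based discriminant that guides the LDB search in the abstract above can be illustrated, under heavy simplification, with a one-level Haar split; the function names and the two-class energy profile below are hypothetical, not Saito's exact formulation.

```python
# Toy illustration of an energy-based discriminant in the spirit of Saito's
# Local Discriminant Bases: compare class-averaged, normalised subband
# energies after one Haar level. Names and normalisation are illustrative.

def haar_split(x):
    """One level of a Haar split (averaging variant): (approximation, detail)."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def energy(coeffs):
    return sum(c * c for c in coeffs)

def discriminant(class_a, class_b):
    """Per-subband squared difference of class-averaged, normalised energies."""
    def profile(signals):
        a_e = d_e = 0.0
        for s in signals:
            a, d = haar_split(s)
            a_e += energy(a)
            d_e += energy(d)
        total = a_e + d_e
        return a_e / total, d_e / total
    pa, pb = profile(class_a), profile(class_b)
    return [(x - y) ** 2 for x, y in zip(pa, pb)]
```

A smooth class concentrates energy in the approximation subband and an oscillatory class in the detail subband, so both subbands score highly; in a full LDB search this score would steer which nodes of the wavelet packet tree are retained as features.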

    Super Resolution of Wavelet-Encoded Images and Videos

    In this dissertation, we address the multiframe super resolution reconstruction problem for wavelet-encoded images and videos. The goal of multiframe super resolution is to obtain one or more high resolution images by fusing a sequence of degraded or aliased low resolution images of the same scene. Since the low resolution images may be unaligned, a registration step is required before super resolution reconstruction. Therefore, we first explore in-band (i.e. in the wavelet domain) image registration; then, we investigate super resolution. Our motivation for analyzing the image registration and super resolution problems in the wavelet domain is the growing trend in wavelet-encoded imaging, and wavelet encoding for image/video compression. Due to drawbacks of the widely used discrete cosine transform in image and video compression, a considerable amount of literature is devoted to wavelet-based methods. However, since wavelets are shift-variant, existing methods cannot utilize wavelet subbands efficiently. In order to overcome this drawback, we establish and explore the direct relationship between the subbands under a translational shift, for image registration and super resolution. We then employ our devised in-band methodology, in a motion compensated video compression framework, to demonstrate the effective usage of wavelet subbands. Super resolution can also be used as a post-processing step in video compression in order to decrease the size of the video files to be compressed, with downsampling added as a pre-processing step. Therefore, we present a video compression scheme that utilizes super resolution to reconstruct the high frequency information lost during downsampling. In addition, super resolution is a crucial post-processing step for satellite imagery, due to the fact that it is hard to update imaging devices after a satellite is launched. Thus, we also demonstrate the usage of our devised methods in enhancing resolution of pansharpened multispectral images.
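The shift-variance that motivates the in-band approach is easy to demonstrate; this toy example (a one-level Haar detail subband, all names illustrative) shows that a one-sample translation of the input does not simply translate the subband coefficients.

```python
# Demonstration of wavelet shift-variance: the Haar detail subband of a
# one-sample-shifted signal is not a shifted copy of the original subband.
# This is the basic obstacle that in-band (wavelet-domain) registration
# methods must work around.

def haar_detail(x):
    """Detail coefficients of one Haar level (averaging variant)."""
    return [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]

x = [0, 0, 1, 1, 0, 0, 0, 0]
shifted = [0] + x[:-1]          # translate the signal by one sample

d0 = haar_detail(x)        # [0.0, 0.0, 0.0, 0.0]
d1 = haar_detail(shifted)  # [0.0, -0.5, 0.5, 0.0]  -- not a shift of d0
```

Because the step edge lands inside different analysis pairs before and after the shift, the detail coefficients change value rather than position, which is why the dissertation derives an explicit relationship between subbands under translation instead of treating them as shift-invariant.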

    Robust Transmission of Images Based on JPEG2000 Using Edge Information

    In multimedia communication and data storage, compression of data is essential to speed up the transmission rate, minimize the use of channel bandwidth, and minimize storage space. JPEG2000 is the new standard for image compression for transmission and storage. The drawback of compression is that compressed data are more vulnerable to channel noise during transmission. Previous techniques for error concealment are classified into three groups depending on the approach employed by the encoder and decoder: forward error concealment, error concealment by post-processing, and interactive error concealment. The objective of this thesis is to develop a concealment methodology that has the capability of both error detection and concealment, is compatible with the JPEG2000 standard, and guarantees minimum use of channel bandwidth. A new methodology is developed to detect corrupted regions/coefficients in received images using edge information. The methodology requires transmission of edge information of the wavelet coefficients of the original image along with the JPEG2000 compressed image. At the receiver, the edge information of the received wavelet coefficients is computed and compared with the received edge information of the original image to determine the corrupted coefficients. Three methods of concealment, each including a filter, are investigated to handle the corrupted regions/coefficients. MATLAB™ functions are developed that simulate channel noise, image transmission using the JPEG2000 standard, and the proposed methodology. Objective quality measures such as peak signal-to-noise ratio (PSNR) and root-mean-square error (RMS), as well as subjective quality measures, are used to evaluate processed images. Simulation results are presented to demonstrate the performance of the proposed methodology. The results are also compared with recent approaches found in the literature.
Based on the performance of the proposed approach, it is claimed that it can be successfully used in wireless and Internet communications.
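The detection idea can be sketched as follows; the 1-D forward-difference edge detector and the threshold are illustrative stand-ins for the thesis's actual wavelet-domain edge computation.

```python
# Hedged sketch of edge-based corruption detection: compare an edge map
# computed from received coefficients against the transmitted edge map of
# the original. The detector (forward difference) and threshold are
# illustrative, not the thesis's exact choices.

def edge_map(coeffs, threshold=5):
    """Mark positions where the absolute forward difference exceeds threshold."""
    return [int(abs(coeffs[i + 1] - coeffs[i]) > threshold)
            for i in range(len(coeffs) - 1)]

def detect_corrupted(received, original_edges, threshold=5):
    """Indices where the received edge map disagrees with the transmitted one."""
    received_edges = edge_map(received, threshold)
    return [i for i, (r, o) in enumerate(zip(received_edges, original_edges))
            if r != o]
```

A noise spike in an otherwise flat region creates spurious edges on both of its flanks, so the disagreement positions bracket the corrupted coefficient, which a concealment filter could then repair from its neighbours.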

    High-Fidelity and Perfect Reconstruction Techniques for Synthesizing Modulation Domain Filtered Images

    Biomimetic processing inspired by biological vision systems has long been a goal of the image processing research community, both to further understanding of what it means to perceive and interpret image content and to facilitate advancements in applications ranging from processing large volumes of image data to engineering artificial intelligence systems. In recent years, the AM-FM transform has emerged as a useful tool that enables processing that is intuitive to human observers but would be difficult or impossible to achieve using traditional linear processing methods. The transform makes use of the multicomponent AM-FM image model, which represents imagery in terms of amplitude modulations, representative of local image contrast, and frequency modulations, representative of local spacing and orientation of lines and patterns. The model defines image components using an array of narrowband filterbank channels that is designed to be similar to the spatial frequency channel decomposition that occurs in the human visual system. The AM-FM transform entails the computation of modulation functions for all components of an image and the subsequent exact recovery of the image from those modulation functions. The process of modifying the modulation functions to alter visual information in a predictable way and then recovering the modified image through the AM-FM transform is known as modulation domain filtering. Past work in modulation domain filtering has produced dramatic results, but has faced challenges due to phase wrapping inherent in the transform computations and due to unknown integration constants associated with modified frequency content. The approaches developed to overcome these challenges have led to a loss of both stability and intuitive simplicity within the AM-FM model. In this dissertation, I have made significant advancements in the underlying processes that comprise the AM-FM transform. 
I have developed a new phase unwrapping method that increases the stability of the AM-FM transform, allowing higher-quality modulation domain filtering results. I have designed new reconstruction techniques that allow for successful recovery from modified frequency modulations. These developments have allowed the design of modulation domain filters that, for the first time, do not require any departure from the simple and intuitive nature of the basic AM-FM model. Using the new modulation domain filters, I have produced new and striking results that achieve a variety of image processing tasks motivated by biological visual perception. These results represent a significant advancement relative to the state of the art and are a foundation from which future advancements in the field may be attained.
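As background, the classic 1-D (Itoh) unwrapping step shows the kind of ambiguity involved; this is a textbook sketch, not the new 2-D method developed in the dissertation.

```python
# Classic 1-D phase unwrapping (Itoh's method): whenever a successive phase
# difference exceeds pi in magnitude, assume a 2*pi wrap occurred and remove
# it. Included only to illustrate the wrapping problem; real 2-D unwrapping
# (as needed for AM-FM imagery) is substantially harder.

import math

def unwrap(phases):
    """Add multiples of 2*pi so successive differences stay within (-pi, pi]."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))   # remove whole wraps
        out.append(out[-1] + d)
    return out
```

The 1-D scheme fails when the true phase jumps by more than pi between samples, and in 2-D the result becomes path-dependent; these are precisely the instabilities that motivate more robust unwrapping inside the AM-FM transform.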

    Non-Cartesian distributed approximating functional

    Presented in this work is the Non-Cartesian Distributed Approximating Functional (NCDAF). It is a multi-dimensional generalization of the (one-dimensional) Hermite DAF that is non-separable and isotropic. Demonstrated here is an approximation method based on the NCDAF that can construct a continuous approximation of a function and its derivatives from a discrete sampling of points. Under an appropriate choice of conditions this approximation is free from artifacts originating from (1) the sampling scheme and (2) the orientation of the sampled data. The NCDAF is also viewed as a compromise between the minimum uncertainty state (i.e. Gaussian) and the ideal filter. The NCDAF (kernel) is shown (1) to have a very small uncertainty product, (2) to be infinitely smooth, (3) to possess the same set of invariances as the minimum uncertainty state, (4) to propagate in convenient closed form under quantum-mechanical free propagation, and (5) to be tunable arbitrarily close to the ideal filter.

    Digital Filters and Signal Processing

    Digital filters, together with signal processing, are being employed in new technologies and information systems, and are implemented in different areas and applications. Digital filters and signal processing can be deployed at low cost and adapted to different cases with great flexibility and reliability. This book presents advanced developments in digital filters and signal processing methods covering different case studies. They present the main essence of the subject, with the principal approaches to the most recent mathematical models being employed worldwide.

    Discontinuity-Aware Base-Mesh Modeling of Depth for Scalable Multiview Image Synthesis and Compression

    This thesis is concerned with the challenge of deriving disparity from sparsely communicated depth for performing disparity-compensated view synthesis for compression and rendering of multiview images. The modeling of depth is essential for deducing disparity at view locations where depth is not available and is also critical for visibility reasoning and occlusion handling. This thesis first explores disparity derivation methods and disparity-compensated view synthesis approaches. Investigations reveal the merits of adopting a piece-wise continuous mesh description of depth for deriving disparity at target view locations to enable disparity-compensated backward warping of texture. Visibility information can be reasoned due to the correspondence relationship between views that a mesh model provides, while the connectivity of a mesh model assists in resolving depth occlusion. The recent JPEG 2000 Part-17 extension defines tools for scalable coding of discontinuous media using breakpoint-dependent DWT, where breakpoints describe discontinuity boundary geometry. This thesis proposes a method to efficiently reconstruct depth coded using JPEG 2000 Part-17 as a piece-wise continuous mesh, where discontinuities are driven by the encoded breakpoints. Results show that the proposed mesh can accurately represent decoded depth while its complexity scales along with decoded depth quality. The piece-wise continuous mesh model anchored at a single viewpoint or base-view can be augmented to form a multi-layered structure where the underlying layers carry depth information of regions that are occluded at the base-view. Such a consolidated mesh representation is termed a base-mesh model and can be projected to many viewpoints, to deduce complete disparity fields between any pair of views that are inherently consistent. 
Experimental results demonstrate the superior performance of the base-mesh model in multiview synthesis and compression compared to other state-of-the-art methods, including the JPEG Pleno light field codec. The proposed base-mesh model departs greatly from the conventional pixel-wise or block-wise depth models, and from the forward depth mapping for deriving disparity, ingrained in existing multiview processing systems. When performing disparity-compensated view synthesis, there can be regions for which reference texture is unavailable, and inpainting is required. A new depth-guided texture inpainting algorithm is proposed to restore occluded texture in regions where depth information is either available or can be inferred using the base-mesh model.
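The disparity that a depth mesh vertex induces between rectified views follows the standard relation disparity = focal length × baseline / depth; a minimal sketch, with made-up parameter values, of how per-vertex depth drives disparity-compensated backward warping:

```python
# Standard depth-to-disparity relation for rectified stereo geometry, the
# quantity a mesh-based depth model must deliver at every target-view vertex.
# Focal length and baseline values below are invented for illustration.

def disparity_from_depth(depth_m, focal_px, baseline_m):
    """Horizontal disparity in pixels between two rectified views."""
    return focal_px * baseline_m / depth_m

# A mesh vertex at 2 m depth, viewed with a 1000 px focal length and a
# 10 cm baseline, maps to a 50 px disparity; texture at the target view is
# then backward-warped from the reference by that amount, and nearer
# (smaller-depth) vertices occlude farther ones where disparities collide.
```

The inverse dependence on depth is also why discontinuities in depth produce step changes in disparity, motivating the thesis's discontinuity-aware (breakpoint-driven) mesh rather than a smooth interpolant across object boundaries.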

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In restricted scenarios, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot; thus, practical implementation aspects have to be taken into account. The Special Issue paper collection taken as the basis of this book touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains current, since such images constitute data arrays of extremely large size with rich information that can be retrieved from them for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.