    A Real Time Image Processing Subsystem: GEZGIN

    In this study, a real-time image processing subsystem, GEZGIN, currently under development for BILSAT-1, a 100 kg class micro-satellite, is presented. BILSAT-1 is being constructed under a technology transfer agreement between TÜBİTAK-BİLTEN (Turkey) and SSTL (UK) and is planned to be placed into a 650 km sun-synchronous orbit in summer 2003. GEZGIN is one of the two Turkish R&D payloads to be hosted on BILSAT-1. One of the missions of BILSAT-1 is to construct a Digital Elevation Model of Turkey using both multi-spectral and panchromatic imagers. Owing to the limited down-link bandwidth and on-board storage capacity, a real-time image compression scheme is highly advantageous for the mission. GEZGIN has evolved as an implementation of the image compression tasks that lead to efficient utilization of both the down-link and on-board storage. The image processing on GEZGIN includes capturing 4-band multi-spectral images of 2048x2048 8-bit pixels, compressing them simultaneously with the new industry-standard JPEG2000 algorithm, and forwarding the compressed multi-spectral image to the Solid State Data Recorders (SSDR) of BILSAT-1 for storage and down-link transmission. The mission definition, together with the orbital parameters, imposes a 6.5-second constraint on real-time image compression. GEZGIN meets this constraint by exploiting the parallelism among image processing units and assigning compute-intensive tasks to dedicated hardware. The proposed hardware also allows for full reconfigurability of all processing units.
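
    As a rough plausibility check (our arithmetic, not from the paper), the image size, band count, and bit depth quoted above imply the following raw data volume per capture and the sustained throughput needed to meet the 6.5-second deadline:

```python
# Back-of-envelope sketch: data volume and throughput implied by the
# GEZGIN constraint (input figures from the abstract; derivation is ours).

bands = 4              # multi-spectral bands
width = height = 2048  # pixels per band
bytes_per_pixel = 1    # 8-bit pixels

raw_bytes = bands * width * height * bytes_per_pixel
deadline_s = 6.5       # real-time compression constraint

print(f"raw image volume: {raw_bytes / 2**20:.1f} MiB")                 # 16.0 MiB
print(f"required throughput: {raw_bytes / deadline_s / 1e6:.2f} MB/s")  # ~2.58 MB/s
```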

    Efficient and secured wireless monitoring systems for detection of cardiovascular diseases

    Cardiovascular Disease (CVD) is the number one killer of the modern era. The majority of deaths associated with CVD could be prevented entirely if the person struck by CVD were treated with urgency. This thesis is our effort to minimize the delay associated with existing tele-cardiology applications. We harness the computational power of modern-day mobile phones to detect abnormality in the Electrocardiogram (ECG). If an abnormality is detected, our innovative ECG compression algorithm running on the patient's mobile phone compresses and encrypts the ECG signal and then transmits it efficiently to the doctors or hospital services. To the best of our knowledge of the literature, we have achieved the highest compression ratio, 20.06 (95% compression), on ECG signals, without any loss of information. Our 3-layer permutation-cipher-based ECG encoding mechanism can raise the security strength substantially above that of conventional AES or DES algorithms. If, in the near future, a grid of supercomputers can compare a trillion trillion trillion (10^36) combinations of one ECG segment (comprising 500 ECG samples) per second for ECG morphology matching, it will take approximately 9.333 × 10^970 years to enumerate all the combinations. After receiving the compressed ECG packets, the doctor's mobile phone or the hospital server authenticates the patient using our proposed set of ECG-biometric-based authentication mechanisms. Once authenticated, the patients are diagnosed with our faster ECG diagnosis algorithms. In a nutshell, this thesis contains a set of algorithms that can save a CVD-affected patient's life by harnessing the power of mobile computation and wireless communication.
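
    The relationship between the quoted compression ratio and the percentage figure is simple arithmetic (our illustration, not code from the thesis): a ratio of 20.06 means the compressed signal occupies 1/20.06 of the original size, i.e. roughly 5%, hence "95% compression":

```python
# Sketch: mapping a compression ratio to a percentage reduction
# (our arithmetic; the 20.06 figure is from the abstract).

ratio = 20.06
compressed_fraction = 1 / ratio              # ~0.0499 of original size
reduction_pct = (1 - compressed_fraction) * 100

print(f"compressed size: {compressed_fraction:.2%} of original")  # ~4.99%
print(f"compression: {reduction_pct:.1f}%")                       # ~95.0%
```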

    Wavelets and multirate filter banks : theory, structure, design, and applications

    Thesis (Ph.D.) by Ying-Jui Chen, Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2004. Includes bibliographical references (p. 219-230) and index. Wavelets and filter banks have revolutionized signal processing with their ability to process data at multiple temporal and spatial resolutions. Fundamentally, continuous-time wavelets are governed by discrete-time filter banks with properties such as perfect reconstruction, linear phase and regularity. In this thesis, we study multi-channel filter bank factorization and parameterization strategies, which facilitate designs with specified properties that are enforced by the actual factorization structure. For M-channel filter banks (M ≥ 2), we develop a complete factorization, M-channel lifting factorization, using simple ladder-like structures as predictions between channels to provide robust and efficient implementation; perfect reconstruction is structurally enforced, even under finite-precision arithmetic and quantization of lifting coefficients. With lifting, optimal low-complexity integer wavelet transforms can thus be designed using a simple and fast algorithm that incorporates prescribed limits on hardware operations for power-constrained environments. As filter bank regularity is important for a variety of reasons, an aspect of particular interest is the structural imposition of regularity onto factorizations based on the dyadic form uv^T. We derive the corresponding structural conditions for regularity, for which M-channel lifting factorization provides an essential parameterization. As a result, we are able to design filter banks that are exactly regular and amenable to fast implementations with perfect reconstruction, regardless of the choice of free parameters and possible finite-precision effects. Further constraining u = v ensures regular orthogonal filter banks, whereas a special dyadic form is developed that guarantees linear phase. We achieve coding gains within 0.1% of the optimum, and benchmarks conducted on image compression applications show clear improvements in perceptual and objective performance. We also consider the problem of completing an M-channel filter bank, given only its scaling filter. M-channel lifting factorization can efficiently complete such biorthogonal filter banks. On the other hand, an improved scheme for completing paraunitary filter banks is made possible by a novel order-one factorization which allows greater design flexibility, resulting in improved frequency selectivity and energy compaction over existing state-of-the-art methods. In a dual setting, the technique can be applied to transmultiplexer design to achieve higher-rate data transmissions.
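
    As a minimal illustration of why lifting structurally enforces perfect reconstruction (a two-channel integer example of our own, far simpler than the M-channel factorizations the thesis develops): each lifting step adds a quantity computed from one channel to the other, so inversion just subtracts the same quantity, even when integer truncation is involved:

```python
# Sketch: two-channel integer lifting (the S-transform), our toy example.
# Perfect reconstruction holds despite the integer shift in the update
# step, because each lifting step is undone by its exact inverse.

def forward(x):                                      # x: even-length list
    even, odd = x[0::2], x[1::2]
    d = [o - e for e, o in zip(even, odd)]           # predict: detail
    s = [e + (di >> 1) for e, di in zip(even, d)]    # update: coarse average
    return s, d

def inverse(s, d):
    even = [si - (di >> 1) for si, di in zip(s, d)]  # undo update
    odd = [e + di for e, di in zip(even, d)]         # undo predict
    return [v for pair in zip(even, odd) for v in pair]

x = [5, 9, 2, 4, 7, 7, 1, 0]
s, d = forward(x)
assert inverse(s, d) == x  # perfect reconstruction under integer arithmetic
```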

    Scalable video compression with optimized visual performance and random accessibility

    This thesis is concerned with maximizing the coding efficiency, random accessibility and visual performance of scalable compressed video. The unifying theme behind this work is the use of finely embedded localized coding structures, which govern the extent to which these goals may be jointly achieved. The first part focuses on scalable volumetric image compression. We investigate 3D transform and coding techniques which exploit inter-slice statistical redundancies without compromising slice accessibility. Our study shows that the motion-compensated temporal discrete wavelet transform (MC-TDWT) practically achieves an upper bound to the compression efficiency of slice transforms. From a video coding perspective, we find that most of the coding gain is attributed to offsetting the learning penalty in adaptive arithmetic coding through 3D code-block extension, rather than to inter-frame context modelling. The second aspect of this thesis examines random accessibility. Accessibility refers to the ease with which a region of interest is accessed (the subband samples needed for reconstruction are retrieved) from a compressed video bitstream, subject to spatiotemporal code-block constraints. We investigate the fundamental implications of motion compensation for random access efficiency and the compression performance of scalable interactive video. We demonstrate that inclusion of motion compensation operators within the lifting steps of a temporal subband transform incurs a random access penalty which depends on the characteristics of the motion field. The final aspect of this thesis aims to minimize the perceptual impact of visible distortion in scalable reconstructed video. We present a visual optimization strategy based on distortion scaling, which raises the distortion-length slope of perceptually significant samples. This alters the codestream embedding order during post-compression rate-distortion optimization, thus allowing visually sensitive sites to be encoded with higher fidelity at a given bit-rate. For visual sensitivity analysis, we propose a contrast perception model that incorporates an adaptive masking slope. This versatile feature provides a context which models perceptual significance. It enables scene structures that would otherwise suffer significant degradation to be preserved at lower bit-rates. The novelty in our approach derives from a set of "perceptual mappings" which account for quantization noise shaping effects induced by motion-compensated temporal synthesis. The proposed technique reduces wavelet compression artefacts and improves the perceptual quality of video.
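
    A hedged sketch of the distortion-scaling idea as we read it (our simplification, not the thesis code): in post-compression rate-distortion optimization, each coding pass contributes a distortion-length slope ΔD/ΔL; scaling the distortion of perceptually significant samples by a visual weight raises their slopes, so those passes are embedded earlier in the codestream:

```python
# Sketch of distortion-weighted embedding order (our simplification).
# Each coding pass has a rate cost dL and a distortion reduction dD;
# a perceptual weight w scales dD, raising the slope w*dD/dL of
# visually significant passes so they embed earlier at a given bit-rate.

passes = [
    # (block label, dL in bytes, dD distortion reduction, perceptual weight)
    ("flat-sky",  100, 40.0, 0.5),   # low visual sensitivity
    ("face-edge", 100, 30.0, 2.0),   # high visual sensitivity
    ("texture",    80, 25.0, 1.0),
]

order = sorted(passes, key=lambda p: p[2] * p[3] / p[1], reverse=True)
for block, dL, dD, w in order:
    print(f"{block:10s} weighted slope = {dD * w / dL:.3f}")
# "face-edge" overtakes "flat-sky" once its distortion is scaled up.
```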

    Understanding and advancing PDE-based image compression

    This thesis is dedicated to image compression with partial differential equations (PDEs). PDE-based codecs store only a small fraction of the image points and propagate their information into the unknown image areas during the decompression step. For certain classes of images, PDE-based compression can already outperform the current quasi-standard, JPEG2000. However, the reasons for this success are not yet fully understood, and PDE-based compression is still at a proof-of-concept stage. With a probabilistic justification for anisotropic diffusion, we contribute to a deeper insight into design principles for PDE-based codecs. Moreover, by analysing the interaction between efficient storage methods and image reconstruction with diffusion, we can rank PDEs according to their practical value in compression. Based on these observations, we advance PDE-based compression towards practical viability: First, we present a new hybrid codec that combines PDE- and patch-based interpolation to deal with highly textured images. Furthermore, a new video player demonstrates the real-time capabilities of PDE-based image interpolation, and a new region-of-interest coding algorithm represents important image areas with high accuracy. Finally, we propose a new framework for diffusion-based image colourisation that we use to build an efficient codec for colour images. Experiments on real-world image databases show that our new method is qualitatively competitive with current state-of-the-art codecs.
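
    The decompression step can be illustrated with the simplest instance of the idea (our sketch; the thesis relies on more sophisticated anisotropic diffusion): keep the stored pixels fixed and repeatedly replace each unknown pixel by the average of its four neighbours, which discretizes homogeneous diffusion and fills in the missing regions:

```python
import numpy as np

# Sketch: homogeneous diffusion inpainting, our minimal stand-in for
# the anisotropic PDEs the thesis studies. Known pixels stay fixed;
# unknown pixels relax toward the average of their four neighbours.
# np.roll gives periodic boundaries, which is fine for a toy demo.

def diffuse(image, known_mask, iterations=2000):
    u = image.astype(float).copy()
    for _ in range(iterations):
        neighbours = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u = np.where(known_mask, u, neighbours)  # only unknowns update
    return u

# Tiny demo: store two pixel values, reconstruct the rest by diffusion.
img = np.zeros((8, 8))
mask = np.zeros((8, 8), dtype=bool)
img[1, 1], mask[1, 1] = 255.0, True
img[6, 6], mask[6, 6] = 0.0, True
print(diffuse(img, mask).round(1))
```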

    Surface Remeshing and Applications

    Because popular graphics accelerators are optimized for them, triangle meshes remain the primary representation for 3D surfaces. They are the simplest form of interpolation between surface samples, which may have been acquired with a laser scanner, computed from a 3D scalar field resolved on a regular grid, or identified on slices of medical data. Typical methods for generating triangle meshes from raw data attempt to lose as little information as possible, so that the resulting surface models can be used in the widest range of scenarios. When such a general-purpose model has to be used in a particular application context, however, a pre-processing step is often worth considering. In some cases, it is convenient to slightly modify the geometry and/or the connectivity of the mesh so that further processing can take place more easily. Other applications may require the mesh to have a pre-defined structure, often different from that of the original general-purpose mesh. The central focus of this thesis is the automatic remeshing of highly detailed surface triangulations. Besides a thorough discussion of state-of-the-art applications such as real-time rendering and simulation, new approaches are proposed which use remeshing for topological analysis, flexible mesh generation and 3D compression. Furthermore, innovative methods are introduced to post-process polygonal models in order to recover information which was lost, or hidden, by a prior remeshing process. Besides the technical contributions, this thesis aims to show that surface remeshing is much more useful than it may seem at first sight, as it is a nearly indispensable step in making several applications feasible in practice.
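
    For concreteness (our illustration, not from the thesis): a triangle mesh is commonly stored as an indexed face set, and many remeshing and topological-analysis steps begin with adjacency queries on it, for example checking that every edge is shared by exactly two triangles, which characterizes a closed (watertight) surface:

```python
from collections import Counter

# Sketch: an indexed triangle mesh and a simple topological check
# (our illustration). On a closed surface, every edge belongs to
# exactly two triangles.

def is_closed(triangles):
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted((u, v)))] += 1
    return all(count == 2 for count in edges.values())

# A tetrahedron over vertices 0..3: four triangles, closed.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_closed(tetra))       # True
print(is_closed(tetra[:-1]))  # False: boundary edges appear only once
```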

    Telemedicine

    Telemedicine is a rapidly evolving field as new technologies are implemented, for example wireless sensors and high-quality data transmission. Internet-based applications such as counseling, clinical consultation support, and home-care monitoring and management are increasingly being realized, improving access to high-level medical care in underserved areas. The 23 chapters of this book present manifold examples of telemedicine, treating both theoretical and practical foundations and application scenarios.

    Recent Advances in Signal Processing

    Signal processing is a critical issue in the majority of new technological inventions and challenges, in a variety of applications across both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand; these five categories address, in order, image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Fractional Calculus and the Future of Science

    Newton foresaw the limitations of geometry's description of planetary behavior and developed fluxions (differentials) as the new language for celestial mechanics and as the way to implement his laws of mechanics. Two hundred years later, Mandelbrot introduced the notion of fractals into the scientific lexicon of geometry, dynamics, and statistics, and in so doing suggested ways to see beyond the limitations of Newton's laws. Mandelbrot's mathematical essays suggest how fractals may lead to the understanding of turbulence and viscoelasticity, and ultimately to the end of the dominance of Newton's macroscopic world view. Fractional Calculus and the Future of Science examines the nexus of these two game-changing contributions to our scientific understanding of the world. It addresses how non-integer differential equations replace Newton's laws to describe the many guises of complexity, most of which lie beyond Newton's experience and many of which had eluded even Mandelbrot's powerful intuition. The book's authors look behind the mathematics and examine what must be true about a phenomenon's behavior to justify the replacement of an integer-order with a noninteger-order (fractional) derivative. This window into the future of specific scientific disciplines through the lens of the fractional calculus suggests that what is seen entails a difference in scientific thinking and understanding.
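
    As a concrete instance of a noninteger-order derivative (standard textbook material, not specific to this book), the Caputo fractional derivative of order 0 < α < 1 replaces the ordinary derivative with a weighted integral over the function's past, giving the operator memory:

```latex
% Caputo fractional derivative of order 0 < \alpha < 1 (standard definition):
% a power-law-weighted integral over f'(\tau) gives the derivative a memory
% of the past; it reduces to the ordinary derivative as \alpha \to 1.
{}^{C}D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
  \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}} \, d\tau
```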