
    Deep learning for ultrasound data-rate reduction

    Ultrasound (US) is a widely used medical imaging modality, mostly because of its non-invasive and real-time characteristics. Recent advances in US imaging (e.g. ultrafast imaging, 3D imaging, elastography, functional imaging, etc.) have given rise to a crucial challenge: dealing with the huge amount of data that has to be transferred and processed in real time. To address this problem, the LTS5 focuses on two main aspects: 1) maximizing the image quality for a given amount of data using advanced image reconstruction methods, and 2) minimizing the data rate needed to reach a given image quality. US devices generate a set of signals that are carried from a transducer probe to a computer for further processing in order to obtain images. Those signals are transmitted between the two ends through a set of cables, making up a high-capacity data transmission channel. To achieve a portable US device, the data will have to be transferred through a much lower-capacity channel. In this master thesis, deep convolutional neural networks are used to reduce the data rate, showing that it is possible to remarkably reduce the data rates generated by those devices while keeping high quality in the final reconstructed ultrasound images.
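
    The abstract gives no architectural details, so the following is only a minimal sketch of the general idea, assuming a PyTorch convolutional autoencoder with invented layer sizes: strided convolutions shrink the raw RF channel data before transmission, and transposed convolutions reconstruct it on the receiving side.

        import torch
        import torch.nn as nn

        class RFAutoencoder(nn.Module):
            """Toy autoencoder: compress 1-D RF channel data, then reconstruct it.
            Hypothetical layer sizes; not the architecture from the thesis."""
            def __init__(self):
                super().__init__()
                # Encoder: two strided convolutions shrink the time axis 16x
                # while expanding to 4 channels, i.e. 4x fewer values overall.
                self.encoder = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=9, stride=4, padding=4), nn.ReLU(),
                    nn.Conv1d(16, 4, kernel_size=9, stride=4, padding=4),
                )
                # Decoder: mirrored transposed convolutions restore the length.
                self.decoder = nn.Sequential(
                    nn.ConvTranspose1d(4, 16, kernel_size=8, stride=4, padding=2), nn.ReLU(),
                    nn.ConvTranspose1d(16, 1, kernel_size=8, stride=4, padding=2),
                )
            def forward(self, x):
                return self.decoder(self.encoder(x))

        x = torch.randn(2, 1, 4096)       # a batch of raw RF lines
        model = RFAutoencoder()
        assert model(x).shape == x.shape  # same shape after the round trip

    A real system would train such a network on recorded channel data with a reconstruction loss; here the 1 x 4096 input passes through a 4 x 256 latent, a 4x reduction in transmitted values.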

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In restricted scenarios, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot; thus, practical implementation aspects have to be taken into account. The Special Issue paper collection taken as the basis of this book touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute data arrays of extremely large size with rich information that can be retrieved from them for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.

    Compression and protection of multidimensional data

    The main objective of this thesis is to explore and discuss novel techniques related to the compression and protection of multidimensional data (i.e., 3-D medical images, hyperspectral images, 3-D microscopy images, and 5-D functional Magnetic Resonance Images). First, we outline a lossless compression scheme based on a predictive model, denoted as the Medical Images Lossless Compression algorithm (MILC). MILC provides a good trade-off between compression performance and reduced usage of hardware resources. Since in the medical and medical-related fields the execution speed of an algorithm can be a "critical" parameter, we investigate the parallelization of the compression strategy of the MILC algorithm, denoted as Parallel MILC. Parallel MILC can be executed on heterogeneous devices (i.e., CPUs, GPUs, etc.) and provides significant results in terms of speedup with respect to MILC. This is followed by the important aspects related to the protection of two sensitive typologies of multidimensional data: 3-D medical images and 3-D microscopy images. Regarding the protection of 3-D medical images, we outline a novel hybrid approach that allows for the efficient compression of 3-D medical images and the embedding of a digital watermark at the same time. In relation to the protection of 3-D microscopy images, the simultaneous embedding of two watermarks is explained. It should be noted that 3-D microscopy images are often used in delicate tasks (i.e., forensic analysis, etc.). Subsequently, we review a novel predictive structure that is appropriate for the lossless compression of different typologies of multidimensional data...
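
    The abstract does not specify the MILC predictor itself; as a minimal sketch of the predictive-model idea it builds on, the toy coder below replaces each voxel with its residual against a neighbour-based prediction (which an entropy coder then compresses well, since residuals cluster near zero) and inverts the step exactly. This is a generic illustration, not MILC.

        import numpy as np

        def predictive_residuals(volume):
            """Toy predictive stage: predict each voxel from its left neighbour
            and keep only the residuals (generic sketch, not the MILC predictor)."""
            residuals = volume.astype(np.int32)
            residuals[..., 1:] -= volume[..., :-1].astype(np.int32)
            return residuals

        def reconstruct(residuals):
            """Invert the predictor exactly: cumulative sum along the last axis."""
            return np.cumsum(residuals, axis=-1)

        volume = np.random.randint(0, 4096, size=(8, 64, 64))  # synthetic 12-bit volume
        res = predictive_residuals(volume)
        assert np.array_equal(reconstruct(res), volume)        # lossless round trip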

    Telemedicine

    Telemedicine is a rapidly evolving field, as new technologies are implemented, for example, in the development of wireless sensors and high-quality data transmission. Internet-based applications such as counseling, clinical consultation support, and home care monitoring and management are increasingly being realized, improving access to high-level medical care in underserved areas. The 23 chapters of this book present manifold examples of telemedicine, treating both theoretical and practical foundations and application scenarios.

    Analysis and resynthesis of polyphonic music

    This thesis examines applications of Digital Signal Processing to the analysis, transformation, and resynthesis of musical audio. First, I give an overview of the human perception of music. I then examine in detail the requirements for a system that can analyse, transcribe, process, and resynthesise monaural polyphonic music, and describe and compare the possible hardware and software platforms. After this, I describe a prototype hybrid system that attempts to carry out these tasks using a method based on additive synthesis. Next, I present results from its application to a variety of musical examples and critically assess its performance and limitations. I then address these issues in the design of a second system based on Gabor wavelets. I conclude by summarising the research and outlining suggestions for future developments.
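
    Additive synthesis reconstructs audio as a sum of sinusoidal partials with time-varying frequency and amplitude. The sketch below (NumPy, with an invented two-partial example) shows only the core resynthesis step, not the thesis's analysis and transcription pipeline.

        import numpy as np

        def additive_resynthesis(partials, sr=44100):
            """Sum sinusoidal partials given per-sample frequency and amplitude
            envelopes. Minimal sketch of additive synthesis, not the thesis system."""
            out = None
            for freqs, amps in partials:
                # Integrate instantaneous frequency to obtain the phase track.
                phase = 2 * np.pi * np.cumsum(freqs) / sr
                partial = amps * np.sin(phase)
                out = partial if out is None else out + partial
            return out

        # Invented example: a gliding 440 Hz fundamental plus its second harmonic.
        sr, n = 44100, 44100
        f0 = np.linspace(440, 450, n)
        audio = additive_resynthesis(
            [(f0, np.full(n, 0.6)), (2 * f0, np.full(n, 0.3))], sr)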

    Kernelized Supervised Dictionary Learning

    The representation of a signal using a learned dictionary instead of predefined operators, such as wavelets, has led to state-of-the-art results in various applications such as denoising, texture analysis, and face recognition. The area of dictionary learning is closely associated with sparse representation, which means that the signal is represented using few atoms in the dictionary. Despite recent advances in the computation of a dictionary using fast algorithms such as K-SVD, online learning, and cyclic coordinate descent, which make the computation of a dictionary from millions of data samples computationally feasible, the dictionary is mainly computed using unsupervised approaches such as k-means. These approaches learn the dictionary by minimizing the reconstruction error without taking into account the category information, which is not optimal for classification tasks. In this thesis, we propose a supervised dictionary learning (SDL) approach that incorporates information on class labels into the learning of the dictionary. To this end, we propose to learn the dictionary in a space where the dependency between the signals and their corresponding labels is maximized. To maximize this dependency, the recently introduced Hilbert-Schmidt independence criterion (HSIC) is used. The learned dictionary is compact and has a closed-form solution, so the proposed approach is fast. We show that it outperforms other unsupervised and supervised dictionary learning approaches in the literature on real-world data. Moreover, a main advantage of the proposed SDL approach is that it can easily be kernelized, in particular by incorporating a data-driven kernel, such as a compression-based kernel, into the formulation. In this thesis, we propose a novel compression-based (dis)similarity measure. The proposed measure utilizes a 2D MPEG-1 encoder, which takes into consideration the spatial locality and connectivity of pixels in the images. The proposed formulation has been carefully designed based on MPEG encoder functionality. To this end, by design, it solely uses P-frame coding to find the (dis)similarity among patches/images. We show that the proposed measure works properly on both small and large patch sizes on textures. Experimental results show that incorporating the proposed measure as a kernel into our SDL significantly improves the performance of supervised pixel-based texture classification on Brodatz and outdoor images compared to other compression-based dissimilarity measures, as well as state-of-the-art SDL methods. It also improves the computation speed by about 40% compared to its closest rival. Finally, we extend the proposed SDL to multiview learning, where more than one representation is available for a dataset. We propose two different multiview approaches: one fusing the feature sets in the original space and then learning the dictionary and sparse coefficients on the fused set; and the other learning one dictionary and the corresponding coefficients in each view separately, and then fusing the representations in the space of the learned dictionaries. We show that the proposed multiview approaches benefit from the complementary information in multiple views, and investigate the relative performance of these approaches in the application of emotion recognition.
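
    The HSIC dependence measure that the dictionary is trained to maximize has a simple empirical estimator, tr(KHLH)/(n-1)^2, where K and L are kernel matrices over the signals and the labels and H is the centering matrix. A small NumPy illustration with invented toy data:

        import numpy as np

        def hsic(K, L):
            """Empirical Hilbert-Schmidt independence criterion between two
            n x n kernel matrices: HSIC = tr(K H L H) / (n - 1)^2."""
            n = K.shape[0]
            H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
            return np.trace(K @ H @ L @ H) / (n - 1) ** 2

        # Toy data: 20 signals in two classes; linear kernel on the signals,
        # indicator kernel on the labels.
        X = np.random.randn(20, 5)
        y = np.repeat([0, 1], 10)
        K = X @ X.T
        L = (y[:, None] == y[None, :]).astype(float)
        print(hsic(K, L))  # larger values mean stronger signal-label dependence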

    Medical Image Set Compression Using Wavelet and Lifting Combined With New Scanning Techniques.

    Today, hospitals seek better methods to replace their traditional film-based medical imaging. A major problem associated with a film-less hospital is the amount of digital image data that is generated and stored; image compression must be used to reduce the storage size. This dissertation presents several techniques involving wavelet analysis, lifting, image prediction, and image scanning to achieve efficient, diagnostically lossless compression of sets of medical images. The dissertation experimentally determines the optimal wavelet basis for medical images. It then presents a new wavelet-based method for predicting the intermediate images in a similar set of medical images. The technique uses the correlation between coefficients in the wavelet transforms of the image set to produce a better image prediction than direct image prediction. New methods for scanning similar sets of medical images are also introduced. These scanning techniques separate the diagnostic foreground from the continuous background of each image in the set, significantly reducing the number of image edges encountered during compression with wavelet lifting. Lifting plus the new scanning methods has the following advantages: (a) images in the set do not have to be the same size, (b) additional compression is obtained from the continuous image background, and (c) lifting produces better compression. A theoretical approach for determining an optimal orthogonal wavelet basis with compact support is presented and then demonstrated on medical images: orthogonal wavelet bases were constructed with this theoretical approach, and another algorithm was then used to determine the optimal wavelet basis for each medical image set. One result of this research is that the new image scanning techniques, combined with lifting and standard compression methods, achieved better compression of medical image sets than the standard compression methods alone.
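
    The lifting scheme factors a wavelet transform into predict and update steps that are trivially invertible, which is why lifting-based compression can guarantee perfect reconstruction. The Haar case below is a generic NumPy illustration, not the dissertation's codec.

        import numpy as np

        def haar_lift(signal):
            """One level of the Haar wavelet via the lifting scheme
            (split, predict, update)."""
            even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
            detail = odd - even          # predict odd samples from even neighbours
            approx = even + detail / 2   # update so the running average is kept
            return approx, detail

        def haar_unlift(approx, detail):
            """Exactly invert the lifting steps, in reverse order."""
            even = approx - detail / 2
            odd = detail + even
            out = np.empty(even.size + odd.size)
            out[0::2], out[1::2] = even, odd
            return out

        x = np.array([2, 4, 6, 6, 8, 12, 10, 10])
        a, d = haar_lift(x)
        assert np.allclose(haar_unlift(a, d), x)  # perfect reconstruction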

    A novel algorithm for compression of high-amplitude-resolution seismic data

    Renewable sources cannot meet the energy demand of a growing global market; therefore, it is expected that oil & gas will remain substantial sources of energy in the coming years. To find new oil & gas deposits that would satisfy growing global energy demands, significant efforts are constantly devoted to increasing the efficiency of seismic surveys. It is commonly considered that, in the initial phase of exploration and production of new fields, high-resolution and high-quality images of the subsurface are of great importance. As one part of the seismic data processing chain, efficient management and delivery of the large data sets vastly produced by the industry during seismic surveys becomes extremely important in order to facilitate further seismic data processing and interpretation. In this respect, efficiency relies to a large extent on the efficiency of the compression scheme, which is often required to enable faster transfer of and access to data, as well as efficient data storage. Motivated by the superior performance of High Efficiency Video Coding (HEVC), and driven by the rapid growth in data volume produced by seismic surveys, this work explores a 32 bits per pixel (b/p) extension of the HEVC codec for compression of seismic data. It is proposed to reassemble seismic slices in a format that corresponds to a video signal and to benefit from the coding gain achieved by the HEVC inter mode, besides the possible advantages of the (still-image) HEVC intra mode. To this end, this work modifies almost all components of the original HEVC codec to cater for high bit-depth coding of seismic data: the Lagrange multiplier used in the optimization of the coding parameters has been adapted to the new data statistics, the core transform and quantization have been reimplemented to handle the increased bit-depth range, and a modified adaptive binary arithmetic coder has been employed for efficient entropy coding. In addition, optimized block selection, reduced intra prediction modes, and flexible motion estimation are tested to adapt to the structure of seismic data. Even though the new codec, after implementation of the proposed modifications, goes beyond the standardized HEVC, it still maintains a generic HEVC structure and is developed under the general HEVC framework. There is no similar work in the field of seismic data compression that uses HEVC as a base codec. Thus, a specific codec design has been tailored which, when compared to JPEG-XR and a commercial wavelet-based codec, significantly improves the peak signal-to-noise ratio (PSNR) vs. compression ratio performance for 32 b/p seismic data. Depending on the proposed configuration, the PSNR gain ranges from 3.39 dB up to 9.48 dB. Also, relying on the specific characteristics of seismic data, an optimized encoder is proposed in this work. It reduces encoding time by 67.17% for the All-I configuration on the trace image dataset, and by 67.39% for the All-I, 97.96% for the P2, and 98.64% for the B configuration on the 3D wavefield dataset, with negligible coding performance losses. As a side contribution of this work, HEVC is analyzed across all of its functional units, so that the presented work itself can serve as a specific overview of the methods incorporated into the standard.
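
    For context on the Lagrange multiplier mentioned above: every mode decision in an HEVC encoder minimizes the Lagrangian rate-distortion cost J = D + lambda * R, and lambda must be retuned when the source statistics change, as they do for 32 b/p seismic data. A minimal sketch with invented candidate values:

        def rd_cost(distortion, rate_bits, lam):
            """Lagrangian rate-distortion cost used in HEVC mode decision:
            J = D + lambda * R; the candidate with the smallest J is chosen."""
            return distortion + lam * rate_bits

        # Hypothetical decision between intra and inter coding of one block.
        candidates = {"intra": (1500.0, 96), "inter": (1750.0, 40)}  # (SSD, bits)
        lam = 12.0  # HEVC derives lambda from the quantization parameter (QP)
        best = min(candidates, key=lambda m: rd_cost(*candidates[m], lam))
        print(best)  # "inter": 1750 + 12*40 = 2230 beats 1500 + 12*96 = 2652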

    Ultrasonic sensor platforms for non-destructive evaluation

    Robotic vehicles are receiving increasing attention for use in Non-Destructive Evaluation (NDE), due to their attractiveness in terms of cost and safety and their accessibility to areas where manual inspection is not practical. A reconfigurable Lamb wave scanner using autonomous robotic platforms is presented. The scanner is built from a fleet of wireless miniature robotic vehicles, each with a non-contact ultrasonic payload capable of generating the A0 Lamb wave mode in plate specimens. An embedded Kalman filter gives the robots a positional accuracy of 10 mm. A computer simulator, to facilitate the design and assessment of the reconfigurable scanner, is also presented. Transducer behaviour has been simulated using a Linear Systems (LS) approximation, with wave propagation in the structure modelled using the Local Interaction Simulation Approach (LISA). The integration of the LS and LISA approaches was validated for use in Lamb wave scanning by comparison with both analytical techniques and more computationally intensive commercial finite element/finite difference codes. Starting with fundamental dispersion data, the work goes on to describe the simulation of wave propagation and the subsequent interaction with artificial defects and plate boundaries. The computer simulator was used to evaluate several imaging techniques, including local inspection of the area under the robot and an extended method that emits an ultrasonic wave and listens for echoes (B-scan). These algorithms were implemented on the robotic platform, and experimental results are presented. The Synthetic Aperture Focusing Technique (SAFT) was evaluated as a means of improving the fidelity of B-scan data. It was found that SAFT is only effective for transducers with reasonably wide beam divergence, necessitating small transducers with a width of approximately 5 mm. Finally, an algorithm for robot localisation relative to plate sections was proposed and experimentally validated.
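
    SAFT sharpens B-scan data by coherently summing, for each image point, the sample that each aperture position recorded at that point's round-trip delay. Below is a minimal monostatic delay-and-sum sketch in NumPy, with invented geometry and sampling parameters; it is not the thesis implementation.

        import numpy as np

        def saft(bscan, positions, pixels, c=3100.0, fs=5e6):
            """Delay-and-sum Synthetic Aperture Focusing Technique.
            bscan: (n_positions, n_samples) A-scans recorded along the aperture;
            positions and pixels are (x, z) coordinates in metres."""
            image = np.zeros(len(pixels))
            for i, (px, pz) in enumerate(pixels):
                for (tx, tz), ascan in zip(positions, bscan):
                    dist = np.hypot(px - tx, pz - tz)   # one-way path length
                    k = int(round(2 * dist / c * fs))   # round-trip delay, samples
                    if k < ascan.size:
                        image[i] += ascan[k]            # coherent summation
            return image

        # Invented usage: 16 scan positions along x at z = 0, imaging a 10 x 10 grid.
        pos = [(x, 0.0) for x in np.linspace(0, 0.15, 16)]
        pix = [(x, z) for x in np.linspace(0, 0.15, 10)
                      for z in np.linspace(0.01, 0.05, 10)]
        img = saft(np.random.randn(16, 2000), pos, pix).reshape(10, 10)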

    Wavelet Theory

    The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, the Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor's personal interest lies in applying the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and in improving power amplifier behavior.
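
    As a tiny illustration of the time-localisation property the blurb alludes to, the sketch below (using the PyWavelets package, with an invented test signal) locates a sharp step that a global spectrum would smear out:

        import numpy as np
        import pywt

        # A smooth 8 Hz tone with a sharp step at sample 600: the step is hard to
        # see in a global spectrum but produces a burst in the fine-scale details.
        t = np.linspace(0, 1, 1024)
        signal = np.sin(2 * np.pi * 8 * t)
        signal[600:] += 0.5
        coeffs = pywt.wavedec(signal, 'db4', level=4)
        d1 = coeffs[-1]                   # finest-scale detail coefficients
        print(2 * np.argmax(np.abs(d1)))  # ~600: approximate step location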