
    Biometric signals compression with time- and subject-adaptive dictionary for wearable devices

    This thesis work is dedicated to the design of a lightweight compression technique for the real-time processing of biomedical signals in wearable devices. The proposed approach exploits the unsupervised learning algorithm of the time-adaptive self-organizing map (TASOM) to create a subject-adaptive codebook applied to the vector quantization of a signal. The codebook is obtained and then dynamically refined in an online fashion, without requiring any prior information on the signal itself.
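
    A minimal sketch of this kind of online, codebook-based compression, assuming NumPy and a plain winner-take-all update (the actual TASOM rule also adapts its learning rate and neighbourhood over time, which is not shown here):

        import numpy as np

        def encode_and_adapt(codebook, segment, lr=0.05):
            # Find the closest codeword and nudge it toward the new segment.
            # Only the codeword index needs to be transmitted or stored.
            dists = np.linalg.norm(codebook - segment, axis=1)
            best = int(np.argmin(dists))
            codebook[best] += lr * (segment - codebook[best])
            return best

        # Hypothetical usage: 16 codewords over 32-sample signal windows.
        rng = np.random.default_rng(0)
        codebook = rng.standard_normal((16, 32))
        for window in rng.standard_normal((100, 32)):  # stand-in for biosignal windows
            idx = encode_and_adapt(codebook, window)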

    Fast codeword search algorithm for ECVQ using hyperplane decision rule

    Kanazawa University, Graduate School of Natural Science and Technology (Information Systems) and Faculty of Engineering. Vector quantization is the process of encoding vector data as an index into a dictionary, or codebook, of representative vectors. One of the most serious problems in vector quantization is the high computational complexity of searching the codebook for the closest codeword. Entropy-constrained vector quantization (ECVQ) codebook design based on empirical data involves an expensive training phase in which a Lagrangian cost measure has to be minimized over the set of codebook vectors. In this paper, we describe a new method that significantly accelerates the codebook design process. The method uses a suitable hyperplane to partition the codebook and the image data. Experimental results are presented on image block data and show that our method performs better than previously known methods.
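
    For context, a small sketch of the Lagrangian codeword selection that ECVQ performs, written as the exhaustive search the paper accelerates (the hyperplane decision rule itself is not reproduced); NumPy and the variable names are assumptions:

        import numpy as np

        def ecvq_encode(x, codebook, rates, lam):
            # Entropy-constrained VQ: minimize J(i) = ||x - c_i||^2 + lam * rates[i],
            # where rates[i] is the code length (in bits) of codeword i.
            costs = np.sum((codebook - x) ** 2, axis=1) + lam * rates
            return int(np.argmin(costs))

        # Hypothetical usage on 4x4 image blocks flattened to 16-vectors.
        codebook = np.random.rand(256, 16)
        rates = -np.log2(np.full(256, 1.0 / 256))  # ideal code lengths for uniform usage
        block = np.random.rand(16)
        idx = ecvq_encode(block, codebook, rates, lam=0.1)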

    Multiscale Markov Decision Problems: Compression, Solution, and Transfer Learning

    Many problems in sequential decision making and stochastic control have natural multiscale structure: sub-tasks are assembled together to accomplish complex goals. Systematically inferring and leveraging hierarchical structure, particularly beyond a single level of abstraction, has remained a longstanding challenge. We describe a fast multiscale procedure for repeatedly compressing, or homogenizing, Markov decision processes (MDPs), wherein a hierarchy of sub-problems at different scales is automatically determined. Coarsened MDPs are themselves independent, deterministic MDPs, and may be solved using existing algorithms. The multiscale representation delivered by this procedure decouples sub-tasks from each other and can lead to substantial improvements in convergence rates both locally within sub-problems and globally across sub-problems, yielding significant computational savings. A second fundamental aspect of this work is that these multiscale decompositions yield new transfer opportunities across different problems, where solutions of sub-tasks at different levels of the hierarchy may be amenable to transfer to new problems. Localized transfer of policies and potential operators at arbitrary scales is emphasized. Finally, we demonstrate compression and transfer in a collection of illustrative domains, including examples involving discrete and continuous state spaces. (86 pages, 15 figures.)
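
    Since the coarsened problems are ordinary MDPs, they can be handed to any standard solver; the sketch below is one such generic solver (value iteration), illustrating the "solved using existing algorithms" step rather than the multiscale compression procedure itself. NumPy and the array layout are assumptions:

        import numpy as np

        def value_iteration(P, R, gamma=0.95, tol=1e-8):
            # P: (A, S, S) transition probabilities, R: (A, S) expected rewards.
            A, S, _ = P.shape
            V = np.zeros(S)
            while True:
                Q = R + gamma * (P @ V)        # action values, shape (A, S)
                V_new = Q.max(axis=0)
                if np.max(np.abs(V_new - V)) < tol:
                    return V_new, Q.argmax(axis=0)  # values and a greedy policy
                V = V_new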

    SURF: Subject-Adaptive Unsupervised ECG Signal Compression for Wearable Fitness Monitors

    Recent advances in wearable devices allow non-invasive and inexpensive collection of biomedical signals including electrocardiogram (ECG), blood pressure, respiration, among others. Collection and processing of various biomarkers are expected to facilitate preventive healthcare through personalized medical applications. Since wearables are based on size- and resource-constrained hardware, and are battery operated, they need to run lightweight algorithms to efficiently manage energy and memory. To accomplish this goal, this paper proposes SURF, a subject-adaptive unsupervised signal compressor for wearable fitness monitors. The core idea is to perform a specialized lossy compression algorithm on the ECG signal at the source (wearable device), to decrease the energy consumption required for wireless transmission and thus prolong the battery lifetime. SURF leverages unsupervised learning techniques to build and maintain, at runtime, a subject-adaptive dictionary without requiring any prior information on the signal. Dictionaries are constructed within a suitable feature space, allowing the addition and removal of code words according to the signal's dynamics (for given target fidelity and energy consumption objectives). Extensive performance evaluation results, obtained with reference ECG traces and with our own measurements from a commercial wearable wireless monitor, show the superiority of SURF against state-of-the-art techniques, including: 1) compression ratios of up to 90 times; 2) reconstruction errors between 2% and 7% of the signal's range (depending on the amount of compression sought); and 3) reduction in energy consumption of up to two orders of magnitude with respect to sending the signal uncompressed, while preserving its morphology. With artifact-prone ECG signals, SURF allows for typical compression efficiencies (CE) in the range CE ∈ [40, 50], which means that the 3 kbit/s data rate that would be required to send the uncompressed ECG trace is lowered to 75 and 60 bit/s for CE = 40 and CE = 50, respectively.
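
    As a quick check of the quoted figures, the compressed data rate is simply the raw rate divided by the compression efficiency; the snippet below reproduces the 75 and 60 bit/s values from the 3 kbit/s raw rate (the numbers come from the abstract, the function name is ours):

        def compressed_rate(raw_rate_bps, ce):
            # Data rate after compression at efficiency CE = raw / compressed.
            return raw_rate_bps / ce

        raw = 3000                          # ~3 kbit/s uncompressed ECG stream
        print(compressed_rate(raw, 40))     # 75.0 bit/s
        print(compressed_rate(raw, 50))     # 60.0 bit/s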

    Distortion-constraint compression of three-dimensional CLSM images using image pyramid and vector quantization

    Confocal microscopy imaging techniques, which allow optical sectioning, have been successfully exploited in biomedical studies. Biomedical scientists can benefit from more realistic visualization and much more accurate diagnosis by processing and analysing three-dimensional image data. The lack of efficient image compression standards makes such large volumetric image data slow to transfer over limited-bandwidth networks. It also imposes large storage space requirements and high cost in archiving and maintenance. Conventional two-dimensional image coders do not take into account inter-frame correlations in three-dimensional image data. The standard multi-frame coders, like video coders, although they have good performance in capturing motion information, are not efficiently designed for coding multiple frames representing a stack of optical planes of a real object. Therefore a truly three-dimensional image compression approach should be investigated. Moreover, the reconstructed image quality is a very important concern in compressing medical images, because it can be directly related to diagnostic accuracy. Most state-of-the-art methods are based on transform coding; for instance, JPEG is based on the discrete cosine transform (DCT) and JPEG2000 on the discrete wavelet transform (DWT). However, in DCT and DWT methods, controlling the reconstructed image quality is inconvenient and computationally costly, since they are fundamentally rate-parameterized rather than distortion-parameterized methods. Therefore it is very desirable to develop a transform-based distortion-parameterized compression method, which is expected to have high coding performance and also to conveniently and accurately control the final distortion according to a user-specified quality requirement. This thesis describes our work in developing a distortion-constraint three-dimensional image compression approach, using vector quantization techniques combined with image pyramid structures. We expect our method to have: 1. high coding performance in compressing three-dimensional microscopic image data, compared to state-of-the-art three-dimensional image coders and other standardized two-dimensional image coders and video coders; 2. distortion-control capability, a very desirable feature in medical image compression applications, superior to rate-parameterized methods in achieving a user-specified quality requirement. The result is a three-dimensional image compression method with outstanding, objectively measured compression performance for volumetric microscopic images. The distortion-constraint feature, by which users can expect to achieve a target image quality rather than a compressed file size, offers more flexible control of the reconstructed image quality than its rate-constraint counterparts in medical image applications. Additionally, it effectively reduces the artifacts present in other approaches at low bit rates and also attenuates noise in the pre-compressed images. Furthermore, its advantages in progressive transmission and fast decoding make it suitable for bandwidth-limited telecommunications and web-based image browsing applications.
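
    A toy sketch of a distortion-driven stopping rule over an image pyramid, assuming NumPy, average-pooling/nearest-neighbour resampling, and image dimensions divisible by 2**max_level. The thesis vector-quantizes each level; here the levels are kept raw, so only the distortion-constraint control (targeting an MSE rather than a bit budget) is illustrated:

        import numpy as np

        def downsample(img, f):
            h, w = img.shape
            return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

        def upsample(img, f):
            return np.kron(img, np.ones((f, f)))

        def encode_to_distortion(image, target_mse, max_level=3):
            # Try coarser-to-finer pyramid levels and stop once the
            # reconstruction meets the user-specified distortion target.
            for level in range(max_level, -1, -1):
                f = 2 ** level
                recon = upsample(downsample(image, f), f) if f > 1 else image.copy()
                mse = float(np.mean((image - recon) ** 2))
                if mse <= target_mse:
                    return recon, level, mse
            return image, 0, 0.0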

    Project and development of hardware accelerators for fast computing in multimedia processing

    2017 - 2018. The main aim of the present research work is to design and develop very large scale integrated electronic circuits, with particular attention to those devoted to image processing applications and related topics. In particular, the candidate has mainly investigated four topics, detailed in the following. First, the candidate has developed a novel multiplier circuit capable of producing floating point (FP32) results, given as inputs an integer value from a fixed integer range and a set of fixed point (FI) values. This result has been accomplished by exploiting a series of theorems and results on a number theory problem, known as Bachet's problem, which allows the development of a new Distributed Arithmetic (DA) based on 3's partitions. This kind of approach is well suited to filtering applications working on a fixed integer input range, such as image processing applications, in which pixels are coded on 8 bits per channel. In these applications the main problem is the high area and power consumption due to the presence of many Multiply and Accumulate (MAC) units, which can also compromise real-time requirements because of the complexity of FP32 operations. For these reasons, FI implementations are usually preferred, at the cost of lower accuracy. The results for the single multiplier and for a 3x3 filter show delays of 2.456 ns and 4.7 ns, respectively, on an FPGA platform, and 2.18 ns and 4.426 ns on a TSMC 90 nm standard-cell implementation. Comparisons with state-of-the-art FP32 multipliers show a speed increase of up to 94.7% and an area reduction of 69.3% on the FPGA platform. ... [edited by Author]
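
    For reference, a plain fixed-point 3x3 filtering kernel expressed as nine MAC operations, i.e. the conventional FI baseline that the proposed DA-based multiplier targets (the Bachet's-problem / 3's-partition decomposition itself is not shown; the names and the Q0.8 coefficient format are illustrative assumptions):

        def fi_filter_3x3(pixels, coeffs_q, frac_bits=8):
            # pixels: nine 8-bit samples; coeffs_q: coefficients scaled by 2**frac_bits.
            acc = 0
            for p, c in zip(pixels, coeffs_q):
                acc += p * c              # one MAC per filter tap
            return acc >> frac_bits       # undo the fixed-point scaling

        # Illustrative usage: a 3x3 box blur with 1/9 quantized to Q0.8.
        coeffs_q = [round((1 / 9) * 256)] * 9
        window = [10, 20, 30, 40, 50, 60, 70, 80, 90]
        print(fi_filter_3x3(window, coeffs_q))   # close to the window mean (50)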