Hadamard transform based Equal-average Equal-variance Equal-norm nearest neighbor codeword search algorithm
Biometric signals compression with time- and subject-adaptive dictionary for wearable devices
This thesis work is dedicated to the design of a lightweight compression technique for the real-time processing of biomedical signals in wearable devices. The proposed approach exploits the unsupervised learning algorithm of the time-adaptive self-organizing map (TASOM) to create a subject-adaptive codebook applied to the vector quantization of the signal. The codebook is obtained and then dynamically refined in an online fashion, without requiring any prior information on the signal itself.
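A minimal Python/NumPy sketch of the idea, assuming a plain winner-take-all codebook update (the actual TASOM additionally adapts per-neuron learning rates and neighbourhood sizes); segment length, codebook size and learning rate below are illustrative choices, not the thesis's parameters:

```python
import numpy as np

def quantize_and_adapt(segments, codebook, lr=0.05):
    """Vector-quantize signal segments and refine the codebook online.
    Simplified winner-take-all update, not the full TASOM rule."""
    indices = []
    for x in segments:
        d = np.linalg.norm(codebook - x, axis=1)   # distance to each codeword
        k = int(np.argmin(d))                       # nearest codeword index
        indices.append(k)
        codebook[k] += lr * (x - codebook[k])       # pull the winner toward the sample
    return indices, codebook

# toy usage: 8-sample segments of a synthetic signal, 16 codewords
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 4096)) + 0.05 * rng.standard_normal(4096)
segments = signal.reshape(-1, 8)
codebook = rng.standard_normal((16, 8)) * 0.5
idx, codebook = quantize_and_adapt(segments, codebook)
reconstruction = codebook[idx].reshape(-1)          # decoded (lossy) signal
```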
Fast codeword search algorithm for ECVQ using hyperplane decision rule
Graduate School of Natural Science and Technology (Information Systems) and Faculty of Engineering, Kanazawa University. Vector quantization is the process of encoding vector data as an index into a dictionary, or codebook, of representative vectors. One of the most serious problems in vector quantization is the high computational complexity of searching the codebook for the closest codeword. Entropy-constrained vector quantization (ECVQ) codebook design from empirical data involves an expensive training phase in which a Lagrangian cost measure has to be minimized over the set of codebook vectors. In this paper, we describe a new method that significantly accelerates the codebook design process. Its key feature is the use of a suitable hyperplane to partition the codebook and the image data. Experimental results are presented on image block data and show that our method outperforms previously known methods.
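For reference, a minimal Python sketch of the Lagrangian codeword search that such methods accelerate; the hyperplane pruning rule itself is paper-specific and not reproduced here, and the cost weight `lam` and code lengths are illustrative assumptions:

```python
import numpy as np

def ecvq_encode(x, codebook, code_lengths, lam):
    """Entropy-constrained VQ: choose the codeword minimizing
    J_i = ||x - c_i||^2 + lambda * length_i (bits).
    Exhaustive search shown for clarity; the paper prunes it with a
    hyperplane decision rule."""
    dist = np.sum((codebook - x) ** 2, axis=1)
    cost = dist + lam * code_lengths
    return int(np.argmin(cost))

# toy usage with uniform code lengths
rng = np.random.default_rng(1)
codebook = rng.standard_normal((16, 4))
lengths = -np.log2(np.full(16, 1 / 16))   # 4 bits per codeword
best = ecvq_encode(rng.standard_normal(4), codebook, lengths, lam=0.1)
```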
Multiscale Markov Decision Problems: Compression, Solution, and Transfer Learning
Many problems in sequential decision making and stochastic control often have
natural multiscale structure: sub-tasks are assembled together to accomplish
complex goals. Systematically inferring and leveraging hierarchical structure,
particularly beyond a single level of abstraction, has remained a longstanding
challenge. We describe a fast multiscale procedure for repeatedly compressing,
or homogenizing, Markov decision processes (MDPs), wherein a hierarchy of
sub-problems at different scales is automatically determined. Coarsened MDPs
are themselves independent, deterministic MDPs, and may be solved using
existing algorithms. The multiscale representation delivered by this procedure
decouples sub-tasks from each other and can lead to substantial improvements in
convergence rates both locally within sub-problems and globally across
sub-problems, yielding significant computational savings. A second fundamental
aspect of this work is that these multiscale decompositions yield new transfer
opportunities across different problems, where solutions of sub-tasks at
different levels of the hierarchy may be amenable to transfer to new problems.
Localized transfer of policies and potential operators at arbitrary scales is
emphasized. Finally, we demonstrate compression and transfer in a collection of
illustrative domains, including examples involving discrete and continuous
state spaces. Comment: 86 pages, 15 figures.
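As a rough illustration only (not the paper's homogenization procedure), the Python sketch below shows plain value iteration on a finite MDP together with a naive state-aggregation step, to make concrete the statement that a coarsened MDP is itself an MDP solvable by existing algorithms; the clustering is assumed to be given:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a finite MDP. P: (A, S, S) transitions, R: (A, S) rewards."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * P @ V            # (A, S) action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

def aggregate_mdp(P, R, clusters):
    """Crude state aggregation: average dynamics and rewards within clusters.
    This is NOT the paper's multiscale compression; it only illustrates that
    the coarsened problem is again an MDP."""
    A, S, _ = P.shape
    K = int(max(clusters)) + 1
    M = np.zeros((S, K))
    M[np.arange(S), clusters] = 1.0       # hard membership matrix
    W = M / M.sum(axis=0)                 # columns sum to 1 (cluster weights)
    P_c = np.einsum('sk,asx,xj->akj', W, P, M)   # aggregated transitions
    R_c = np.einsum('sk,as->ak', W, R)           # aggregated rewards
    return P_c, R_c
```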
SURF: Subject-Adaptive Unsupervised ECG Signal Compression for Wearable Fitness Monitors
Recent advances in wearable devices allow non-invasive and inexpensive collection of biomedical signals including the electrocardiogram (ECG), blood pressure, and respiration, among others. The collection and processing of various biomarkers are expected to facilitate preventive healthcare through personalized medical applications. Since wearables are based on size- and resource-constrained hardware and are battery operated, they need to run lightweight algorithms to manage energy and memory efficiently. To accomplish this goal, this paper proposes SURF, a subject-adaptive unsupervised signal compressor for wearable fitness monitors. The core idea is to apply a specialized lossy compression algorithm to the ECG signal at the source (the wearable device) to decrease the energy required for wireless transmission and thus prolong battery lifetime. SURF leverages unsupervised learning techniques to build and maintain, at runtime, a subject-adaptive dictionary without requiring any prior information on the signal. Dictionaries are constructed within a suitable feature space, allowing the addition and removal of codewords according to the signal's dynamics (for given target fidelity and energy consumption objectives). Extensive performance evaluation results, obtained with reference ECG traces and with our own measurements from a commercial wearable wireless monitor, show the superiority of SURF over state-of-the-art techniques, including: 1) compression ratios of up to 90 times; 2) reconstruction errors between 2% and 7% of the signal's range (depending on the amount of compression sought); and 3) a reduction in energy consumption of up to two orders of magnitude with respect to sending the signal uncompressed, while preserving its morphology. With artifact-prone ECG signals, SURF allows for typical compression efficiencies (CE) in the range of 40 to 50, which means that the 3 kbit/s data rate that would be required to send the uncompressed ECG trace is lowered to 75 and 60 bit/s for CE = 40 and CE = 50, respectively.
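A quick back-of-the-envelope check of the quoted rates, assuming CE is defined as the ratio of uncompressed to compressed data rate:

```python
# compressed rate = uncompressed rate / compression efficiency (CE)
uncompressed_bps = 3000                 # 3 kbit/s raw ECG stream
for ce in (40, 50):
    print(ce, uncompressed_bps / ce)    # -> 75.0 and 60.0 bit/s
```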
Distortion-constraint compression of three-dimensional CLSM images using image pyramid and vector quantization
Confocal microscopy imaging techniques, which allow optical sectioning, have been successfully exploited in biomedical studies. Biomedical scientists can benefit from more realistic visualization and much more accurate diagnosis by processing and analysing three-dimensional image data. The lack of efficient image compression standards makes such large volumetric image data slow to transfer over limited-bandwidth networks. It also imposes large storage space requirements and high costs in archiving and maintenance.
Conventional two-dimensional image coders do not take into account the inter-frame correlations in three-dimensional image data. Standard multi-frame coders, such as video coders, perform well at capturing motion information but are not designed to efficiently code multiple frames that represent a stack of optical planes of a real object. Therefore, a truly three-dimensional image compression approach should be investigated.
Moreover, the reconstructed image quality is a very important concern when compressing medical images, because it can directly affect diagnostic accuracy. Most state-of-the-art methods are based on transform coding: for instance, JPEG is based on the discrete cosine transform (DCT) and JPEG2000 on the discrete wavelet transform (DWT). However, in DCT- and DWT-based methods, controlling the reconstructed image quality is inconvenient and computationally costly, since they are fundamentally rate-parameterized rather than distortion-parameterized methods. It is therefore very desirable to develop a transform-based, distortion-parameterized compression method with high coding performance that can conveniently and accurately control the final distortion according to a user-specified quality requirement.
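As an illustration of the distortion-parameterized idea (not the thesis's pyramid/vector-quantization coder), the Python sketch below refines a simple scalar quantizer until a user-specified MSE target is met, stopping on distortion rather than on rate; the quantizer and the level set are illustrative assumptions:

```python
import numpy as np

def encode_to_distortion(block, target_mse, levels=(2, 4, 8, 16, 32, 64, 128, 256)):
    """Distortion-constrained coding sketch: increase quantizer precision
    until the reconstruction meets the MSE target specified by the user."""
    lo, hi = float(block.min()), float(block.max())
    for n in levels:
        step = (hi - lo) / n if hi > lo else 1.0
        q = np.clip(np.floor((block - lo) / step), 0, n - 1)   # quantizer indices
        recon = lo + (q + 0.5) * step                           # mid-point reconstruction
        mse = float(np.mean((block - recon) ** 2))
        if mse <= target_mse:
            break                                               # distortion target reached
    return recon, n, mse
```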
This thesis describes our work in developing a distortion-constrained three-dimensional image compression approach using vector quantization techniques combined with image pyramid structures. We expect our method to offer:
1. High coding performance in compressing three-dimensional microscopic image data, compared to state-of-the-art three-dimensional image coders as well as standardized two-dimensional image coders and video coders.
2. Distortion-control capability, which is a very desirable feature in medical image compression applications and is superior to rate-parameterized methods in achieving a user-specified quality requirement.
The result is a three-dimensional image compression method with outstanding, objectively measured compression performance for volumetric microscopic images. The distortion-constraint feature, by which users target an image quality rather than a compressed file size, offers more flexible control of the reconstructed image quality than its rate-constrained counterparts in medical image applications. Additionally, it effectively reduces the artifacts present in other approaches at low bit rates and also attenuates noise in the pre-compressed images. Furthermore, its advantages in progressive transmission and fast decoding make it suitable for bandwidth-limited telecommunication and web-based image browsing applications.
Project and development of hardware accelerators for fast computing in multimedia processing
2017 - 2018. The main aim of the present research work is to design and develop very-large-scale integrated circuits, with particular attention to those devoted to image processing applications and related topics. In particular, the candidate has mainly investigated four topics, detailed in the following.
First, the candidate has developed a novel multiplier circuit capable of producing floating-point (FP32) results, given as inputs an integer value from a fixed integer range and a set of fixed-point (FI) values. This result was accomplished by exploiting a series of theorems and results on a number-theory problem, known as Bachet's problem, which allows the development of a new Distributed Arithmetic (DA) based on 3's partitions. This approach is well suited to filtering applications working on a fixed integer input range, such as image processing, in which pixels are coded on 8 bits per channel. In these applications, the main problem is the high area and power consumption due to the presence of many Multiply-and-Accumulate (MAC) units, which also compromises real-time requirements because of the complexity of FP32 operations. For these reasons, FI implementations are usually preferred, at the cost of lower accuracy. The results for the single multiplier and for a 3x3 filter show delays of 2.456 ns and 4.7 ns, respectively, on an FPGA platform, and 2.18 ns and 4.426 ns on a TSMC 90 nm standard-cell implementation. Comparisons with state-of-the-art FP32 multipliers show a speed increase of up to 94.7% and an area reduction of 69.3% on the FPGA platform. ... [edited by Author]
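For context, a small Python sketch of the fixed-point (FI) 3x3 filtering that such designs are compared against; the scaling scheme and `frac_bits` value are illustrative assumptions, and this is not the Bachet's-problem DA multiplier itself:

```python
import numpy as np

def conv3x3_fixed_point(image_u8, kernel, frac_bits=8):
    """3x3 filtering with fixed-point coefficient arithmetic.
    Coefficients are scaled to integers with `frac_bits` fractional bits,
    so each MAC is an integer multiply-add; this is the kind of FI
    approximation (lower accuracy, lower cost) discussed in the abstract."""
    k_fi = np.round(kernel * (1 << frac_bits)).astype(np.int64)
    H, W = image_u8.shape
    out = np.zeros((H - 2, W - 2), dtype=np.int64)
    for dy in range(3):
        for dx in range(3):
            out += k_fi[dy, dx] * image_u8[dy:dy + H - 2, dx:dx + W - 2].astype(np.int64)
    return (out >> frac_bits).astype(np.int32)   # rescale back to pixel range
```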
Design Techniques for High-Performance SAR A/D Converters
The design of electronics needs to account for the non-ideal characteristics of the device technologies used to realize practical circuits. This is particularly important in mixed analog-digital design since the best device technologies are very different for digital compared to analog circuits. One solution for this problem is to use a calibration correction approach to remove the errors introduced by devices, but this adds complexity and power dissipation, as well as reducing operation speed, and so must be optimised. This thesis addresses such an approach to improve the performance of certain types of analog-to-digital converter (ADC) used in advanced telecommunications, where speed, accuracy and power dissipation currently limit applications. The thesis specifically focuses on the design of compensation circuits for use in successive approximation register (SAR) ADCs.
ADCs are crucial building blocks in communication systems in general and in mobile networks in particular. The recently launched fifth generation of mobile networks (5G) has required new ADC circuit techniques to meet its higher speed and lower power dissipation requirements. The SAR architecture has become one of the most favoured for designing high-performance ADCs, but the successive nature of its operation makes it difficult to reach ∼GS/s sampling rates at reasonable power consumption.
Here, two calibration techniques for high-performance SAR ADCs are presented. The first is an on-chip stochastic mismatch calibration technique that accurately computes and compensates for the mismatch of the capacitive DAC (CAP-DAC) in a SAR ADC. The stochastic nature of the proposed calibration method enables the CAP-DAC mismatch to be determined with a resolution much better than that of the DAC itself, allowing the unit capacitor to scale down to as low as 280 aF for a 9-bit DAC. Since the CAP-DAC accounts for a large part of the overall dynamic power consumption and directly determines the sizes of the driving and sampling switches, the input capacitive load of the ADC, and the kT/C noise power, a small CAP-DAC improves power efficiency. To validate the proposed calibration idea, a 10-bit asynchronous SAR ADC was fabricated in 28-nm CMOS. Measurement results show that the proposed stochastic calibration improves the ADC's SFDR and SNDR by 14.9 dB and 11.5 dB, respectively. After calibration, the fabricated SAR ADC achieves an ENOB of 9.14 bits at a sampling rate of 85 MS/s, resulting in a Walden FoM of 10.9 fJ/conversion-step.
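Assuming the quoted figure uses the standard Walden definition FoM = P / (2^ENOB · f_s), the reported numbers imply a conversion power of roughly half a milliwatt:

```python
# Walden figure of merit rearranged to estimate power from the reported values.
enob, fs, fom = 9.14, 85e6, 10.9e-15        # bits, samples/s, J per conversion-step
power_w = fom * (2 ** enob) * fs
print(round(power_w * 1e3, 2), "mW")        # ~0.52 mW
```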
The second calibration technique is a timing-skew calibration for a time-interleaved (TI) SAR ADC that computes and corrects the inter-channel timing and offset mismatches simultaneously. Simulation results show the effectiveness of this calibration method. When used together, the proposed mismatch calibration and timing-skew calibration techniques enable a TI SAR ADC to be designed that achieves a sampling rate of ∼GS/s with 10-bit resolution and a power consumption as low as ∼10 mW, specifications that satisfy the requirements of 5G technology.