
    EEG data compression

    This paper presents two different ways to compress EEG data: direct data compression and a data transformation technique. In the former, adaptive delta modulation and Huffman coding are used to predict or interpolate the data. In the latter, linear orthogonal transformation algorithms detect and reduce redundancies in the data by analyzing its spectral and energy distribution. Each method is implemented in software, and the efficiencies and errors measured experimentally under different requirements and situations are compared and discussed. By comparing the degree of EEG data compression and the normalized square error, the paper shows that adaptive delta coding followed by Huffman coding is the best of the studied ways to compress EEG data.
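    The abstract names the winning pipeline but not its mechanics, so a minimal sketch may help. The step-size adaptation rule, the nibble grouping, and all constants below are generic textbook choices rather than details from the paper: adaptive delta modulation tracks the signal with one bit per sample, and Huffman coding then squeezes redundancy out of the resulting bit patterns.

    ```python
    # Sketch of the two-stage scheme: adaptive delta modulation (ADM)
    # followed by Huffman coding of the ADM output, grouped into nibbles.
    import heapq
    from collections import Counter

    def adm_encode(samples, step=1.0, grow=2.0, shrink=0.5):
        """One bit per sample: the step doubles on repeated bits (slope
        overload) and halves on alternation (granular noise)."""
        bits, estimate, prev = [], 0.0, None
        for x in samples:
            bit = 1 if x >= estimate else 0
            if prev is not None:
                step = step * grow if bit == prev else step * shrink
            estimate += step if bit else -step
            bits.append(bit)
            prev = bit
        return bits

    def huffman_book(symbols):
        """Greedy Huffman construction; returns symbol -> codeword string."""
        heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(Counter(symbols).items())]
        heapq.heapify(heap)
        uid = len(heap)
        while len(heap) > 1:
            n0, _, c0 = heapq.heappop(heap)
            n1, _, c1 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c0.items()}
            merged.update({s: "1" + w for s, w in c1.items()})
            heapq.heappush(heap, (n0 + n1, uid, merged))
            uid += 1
        return heap[0][2]

    # Toy integer "EEG" samples, not real data.
    bits = adm_encode([40, 42, 47, 55, 60, 58, 50, 41, 35, 33, 34, 38] * 4)
    nibbles = [tuple(bits[i:i + 4]) for i in range(0, len(bits) - 3, 4)]
    book = huffman_book(nibbles)
    coded = "".join(book[n] for n in nibbles)
    print(f"{len(nibbles) * 4} ADM bits -> {len(coded)} Huffman-coded bits")
    ```

    Grouping the bits into nibbles before Huffman coding is one simple way to let the entropy coder exploit the long runs that ADM produces on slowly varying segments.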

    VLSI signal processing through bit-serial architectures and silicon compilation


    Digital Filters

    Advances in technology allow a great number of system signals to be measured easily and at low cost. The main problem is that usually only a fraction of a signal is useful for a given purpose, for example maintenance, DVD recorders, computers, electric/electronic circuits, econometrics, or optimization. Digital filters are among the most versatile, practical and effective methods for extracting the necessary information from a signal. They can be dynamic, so they can be adjusted automatically or manually to external and internal conditions. This book presents the most advanced digital filters, including different case studies and the most relevant literature.
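    As a concrete illustration of the kind of filter the book covers, here is a minimal windowed-sinc FIR low-pass design in Python; the tap count, the Hamming window and the cutoff are illustrative choices, not drawn from any chapter.

    ```python
    # Minimal FIR low-pass: ideal sinc truncated by a Hamming window,
    # applied by direct-form convolution. All parameters are illustrative.
    import math
    import random

    def fir_lowpass(cutoff, fs, ntaps=31):
        """Return ntaps FIR coefficients for a low-pass at `cutoff` Hz,
        sampling rate `fs` Hz, normalised for unity gain at DC."""
        fc = cutoff / fs                       # cycles per sample
        mid = (ntaps - 1) / 2
        taps = []
        for n in range(ntaps):
            k = n - mid
            h = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
            w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (ntaps - 1))  # Hamming
            taps.append(h * w)
        total = sum(taps)
        return [t / total for t in taps]

    def apply_fir(x, taps):
        """Direct-form convolution: y[n] = sum_k taps[k] * x[n - k]."""
        return [sum(t * x[n - k] for k, t in enumerate(taps) if n - k >= 0)
                for n in range(len(x))]

    fs = 500.0                                 # Hz, illustrative sampling rate
    sig = [math.sin(2 * math.pi * 5 * t / fs) + random.uniform(-0.3, 0.3)
           for t in range(500)]                # 5 Hz tone plus broadband noise
    smooth = apply_fir(sig, fir_lowpass(cutoff=20.0, fs=fs))
    ```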

    An empirical study on Chinese text compression: from character-based to word-based approach.

    Thesis (M.Phil.)--Chinese University of Hong Kong, 1997, by Kwok-Shing Cheng. Includes bibliographical references (leaves 114-120). Contents:
    Chapter 1, Introduction: Importance of Text Compression; Motivation of this Research; Characteristics of Chinese (huge size of character set, lack of word segmentation, rich semantics); Different Coding Schemes for Chinese (Big5, GB (Guo Biao), HZ (Hanzi), Unicode); Modeling and Coding for Chinese Text; Static and Adaptive Modeling; One-Pass and Two-Pass Modeling; Ordering of Models; Two Sets of Benchmark Files and the Platform; Outline of the Thesis.
    Chapter 2, A Survey of Chinese Text Compression: Entropy for Chinese Text; Weakness of Traditional Compression Algorithms on Chinese Text; Statistical Class Algorithms for Compressing Chinese (Huffman coding, arithmetic coding, restricted variable-length coding); Dictionary-based Class Algorithms for Compressing Chinese; Experiments and Results; Chapter Summary.
    Chapter 3, Indicator Dependent Huffman Coding Scheme: Chinese Character Identification Routine; Reduction of Header Size; Semi-adaptive IDC for Chinese Text (theoretical analysis of the partition technique for compression, experiments and results); Adaptive IDC for Chinese Text (experiments and results); Chapter Summary.
    Chapter 4, Cascading LZ Algorithms with Huffman Coding Schemes: Variations of Huffman Coding Scheme (analysis of EPDC and PDC; analysis of PDC, 16Huff and IDC; time and memory consumption); Cascading LZSS with PDC, 16Huff and IDC (experimental results); Cascading LZW with PDC, 16Huff and IDC (experimental results); Chapter Summary.
    Chapter 5, Applying Compression Algorithms to Word-segmented Chinese Text: Background of Word-based Compression Algorithms; Terminology and Benchmark Files for the Word Segmentation Model; Word Segmentation Model; Chinese Entropy from Byte to Word; The Generalized Compression and Decompression Model for Word-segmented Chinese Text; Applying the Huffman Coding Scheme, WLZSSHUF and WLZWHUF to Word-segmented Chinese Text; Match Ratio and Compression Ratio; Chapter Summary.
    Chapter 6, Concluding Remarks: Conclusions; Contributions; Future Directions (integrate a decremental coding scheme with IDC; re-order the character sequences in the sliding window of LZSS; multiple Huffman trees for word-based compression).
    Bibliography.
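    Chapter 5's central move, from character tokens to word tokens, can be illustrated with a toy zeroth-order entropy comparison. The sample text, its hand segmentation, and the use of Big5 as the byte baseline are assumptions for illustration only (the thesis uses its own benchmark files), and the sketch deliberately ignores the dictionary/model cost that word-based schemes must also pay:

    ```python
    # Zeroth-order entropy of the same toy text under a character model
    # and a word model. Sample text and segmentation are illustrative.
    import math
    from collections import Counter

    def entropy_bits(tokens):
        """Zeroth-order entropy in bits per token."""
        counts = Counter(tokens)
        total = sum(counts.values())
        return -sum(n / total * math.log2(n / total) for n in counts.values())

    text = "香港中文大學的中文大學圖書館"
    chars = list(text)
    words = ["香港", "中文大學", "的", "中文大學", "圖書館"]  # hand-segmented

    raw_bytes = len(text.encode("big5"))       # 2 bytes per Big5 character
    for name, toks in (("character", chars), ("word", words)):
        h = entropy_bits(toks)
        print(f"{name}-based model: {h:.2f} bits/token, "
              f"{h * len(toks) / raw_bytes:.2f} bits per source byte")
    ```

    On real corpora the comparison must also charge the word model for its dictionary, which is the trade-off Chapter 5 examines as match ratio versus compression ratio.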

    Efficient FPGA implementation and power modelling of image and signal processing IP cores

    Field Programmable Gate Arrays (FPGAs) are the technology of choice in a number of image and signal processing application areas, such as consumer electronics, instrumentation, medical data processing and avionics, due to their reasonable energy consumption, high performance, security, low design-turnaround time and reconfigurability. Low-power FPGA devices are also emerging as competitive solutions for mobile and thermally constrained platforms. Most computationally intensive image and signal processing algorithms also consume a lot of power, leading to a number of issues including reduced mobility, reliability concerns and increased design cost. Power dissipation has become one of the most important challenges, particularly for FPGAs, and addressing it requires optimisation and awareness at all levels of the design flow.

    The key achievements of the work presented in this thesis are summarised here. Behavioural-level optimisation strategies have been used for implementing matrix product and inner product through mathematical techniques such as Distributed Arithmetic (DA) and its variations, including offset binary coding, sparse factorisation and novel vector-level transformations. Applications used to test the impact of these algorithmic and arithmetic transformations include the fast Hadamard/Walsh transforms and Gaussian mixture models. Complete design space exploration has been performed on these cores and, where appropriate, they have been shown to clearly outperform comparable existing implementations. At the architectural level, strategies such as parallelism, pipelining and systolisation have been applied to the design and optimisation of a number of cores, including colour space conversion, the finite Radon transform, the finite ridgelet transform and circular convolution. A pioneering study into the influence of supply voltage scaling for FPGA-based designs, used in conjunction with performance-enhancing strategies such as parallelism and pipelining, has also been performed; initial results are very promising and indicate significant potential for future research in this area.

    A key contribution of this work is the development of a novel high-level power macromodelling technique for design space exploration and characterisation of custom IP cores for FPGAs, called Functional Level Power Analysis and Modelling (FLPAM). FLPAM is scalable, platform independent and compares favourably with existing approaches. A hybrid, top-down design flow paradigm integrating FLPAM with commercially available design tools for systematic optimisation of IP cores has also been developed.
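    Distributed Arithmetic, the behavioural-level technique named above, replaces the multiply-accumulate loop of an inner product with one table lookup per bit plane. The following Python model is a software sketch of that reformulation for unsigned fixed-point inputs; real bit-serial hardware typically uses offset binary coding to halve the table and handle signed data, as the variations mentioned in the abstract suggest.

    ```python
    # Software model of a distributed-arithmetic (DA) inner product for
    # unsigned nbits-bit inputs: precompute a 2^K-entry table of partial
    # coefficient sums, then accumulate one lookup per bit plane instead
    # of K multiplies. Coefficients and inputs below are arbitrary.
    def da_inner_product(coeffs, xs, nbits=8):
        K = len(coeffs)
        # lut[m] = sum of the coefficients selected by the bit mask m.
        lut = [sum(c for k, c in enumerate(coeffs) if (m >> k) & 1)
               for m in range(1 << K)]
        acc = 0
        for b in range(nbits):                 # one bit plane per pass
            mask = sum(((x >> b) & 1) << k for k, x in enumerate(xs))
            acc += lut[mask] << b              # shift-accumulate, no multiply
        return acc

    coeffs, xs = [3, -1, 4, 2], [17, 5, 200, 9]
    assert da_inner_product(coeffs, xs) == sum(c * x for c, x in zip(coeffs, xs))
    ```

    In hardware the table lives in a LUT or ROM and the shift-accumulate runs bit-serially, which is exactly why DA suits FPGA fabrics built from small lookup tables.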

    Technology for the Future: In-Space Technology Experiments Program, part 2

    The purpose of the Office of Aeronautics and Space Technology (OAST) In-Space Technology Experiments Program (In-STEP) 1988 Workshop was to identify and prioritize technologies that are critical for future national space programs and require validation in the space environment, and to review current NASA (In-Reach) and industry/university (Out-Reach) experiments. A prioritized list of critical technology needs was developed for the following eight disciplines: structures; environmental effects; power systems and thermal management; fluid management and propulsion systems; automation and robotics; sensors and information systems; in-space systems; and humans in space. This is part two of two parts; it contains the critical technology presentations for the eight theme elements and a summary listing of critical space technology needs for each theme.

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other featured presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data systems performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    Proceedings of the Scientific Data Compression Workshop

    Continuing advances in space and Earth science require increasing amounts of data to be gathered from spaceborne sensors. NASA expects to launch sensors during the next two decades capable of producing an aggregate of 1500 megabits per second if operated simultaneously. Such high data rates cause stresses in all aspects of end-to-end data systems, and technologies and techniques are needed to relieve them. Potential solutions to the massive data rate problem include data editing, greater transmission bandwidths, higher-density and faster media, and data compression. Through four subpanels on Science Payload Operations, Multispectral Imaging, Microwave Remote Sensing, and Science Data Management, recommendations were made for research in data compression and scientific data applications to space platforms.
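    To make the stress concrete, a quick back-of-envelope calculation (with compression ratios that are illustrative assumptions, not figures from the proceedings) shows what the quoted aggregate rate implies per day:

    ```python
    # Back-of-envelope for the 1500 Mbit/s aggregate quoted above; the
    # compression ratios and the per-day framing are illustrative only.
    aggregate_bps = 1500e6
    tb_per_day = aggregate_bps * 86_400 / 8 / 1e12
    print(f"raw volume: {tb_per_day:.1f} TB/day")          # ~16.2 TB/day
    for ratio in (2.0, 4.0):                               # assumed ratios
        print(f"{ratio:.0f}:1 compression -> {tb_per_day / ratio:.1f} TB/day")
    ```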