
    Arithmetic coding revisited

    Over the last decade, arithmetic coding has emerged as an important compression tool. It is now the method of choice for adaptive coding on multisymbol alphabets because of its speed, low storage requirements, and effectiveness of compression. This article describes a new implementation of arithmetic coding that incorporates several improvements over a widely used earlier version by Witten, Neal, and Cleary, which has become a de facto standard. These improvements include fewer multiplicative operations, greatly extended range of alphabet sizes and symbol probabilities, and the use of low-precision arithmetic, permitting implementation by fast shift/add operations. We also describe a modular structure that separates the coding, modeling, and probability estimation components of a compression system. To motivate the improved coder, we consider the needs of a word-based text compression program. We report a range of experimental results using this and other models. Complete source code is available.
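
    The coder family discussed above can be illustrated with a small example. The sketch below is not the article's improved coder; it is a minimal integer arithmetic encoder in the style of the earlier Witten-Neal-Cleary implementation, with bit-at-a-time renormalisation done by shifts. The alphabet, the frequency counts, and the 16-bit precision are illustrative assumptions.

```python
# Minimal sketch of integer arithmetic encoding with shift-based renormalisation
# (Witten-Neal-Cleary style baseline, not the improved coder from the article).

PRECISION = 16
TOP = 1 << PRECISION
HALF, QUARTER = TOP >> 1, TOP >> 2

def encode(symbols, cum_freq, total):
    """Encode symbols given cumulative frequencies cum_freq[s]..cum_freq[s+1]."""
    low, high, pending, out = 0, TOP - 1, 0, []

    def emit(bit):
        nonlocal pending
        out.append(bit)
        out.extend([1 - bit] * pending)   # release deferred carry bits
        pending = 0

    for s in symbols:
        span = high - low + 1
        high = low + span * cum_freq[s + 1] // total - 1   # narrow the interval
        low = low + span * cum_freq[s] // total
        while True:                                        # renormalise by shifting
            if high < HALF:
                emit(0)
            elif low >= HALF:
                emit(1); low -= HALF; high -= HALF
            elif low >= QUARTER and high < 3 * QUARTER:
                pending += 1; low -= QUARTER; high -= QUARTER
            else:
                break
            low <<= 1
            high = (high << 1) | 1
    pending += 1
    emit(0 if low < QUARTER else 1)                        # flush the final interval
    return out

# Illustrative 3-symbol alphabet with frequencies 5, 3, 2 (cumulative [0, 5, 8, 10]).
print(''.join(map(str, encode([0, 1, 0, 2, 0], [0, 5, 8, 10], 10))))
```

    The improvements the article describes (fewer multiplications, low-precision shift/add arithmetic, larger alphabets) target the two interval-narrowing lines and the renormalisation loop; the overall control flow stays close to this shape.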

    Hardware-efficient data compression in wireless intracortical brain-machine interfaces

    Brain-Machine Interfaces (BMI) have emerged as a promising technology for restoring lost motor function in patients with neurological disorders and/or motor impairments, e.g. paraplegia, amputation, stroke, spinal cord injury, amyotrophic lateral sclerosis, etc. The past two decades have seen significant advances in BMI performance. This has largely been driven by the invention and uptake of intracortical microelectrode arrays that can isolate the activity of individual neurons. However, the current paradigm involves the use of percutaneous connections, i.e. wires. These wires carry the information from the intracortical array implanted in the brain to outside of the body, where the information is used for neural decoding. These wires carry significant long-term risks ranging from infection, to mechanical injury, to impaired mobility and quality of life for the individual. Therefore, there is a desire to make intracortical BMIs (iBMI) wireless, where the data is communicated out wirelessly, either with the use of electromagnetic or acoustic waves. Unfortunately, this consumes a significant amount of power, which is dissipated from the implant in the form of heat. Heating tissue can cause irreparable damage, and so there are strict limits on heat flux from implants to cortical tissue. Given the ever-increasing number of channels per implant, the required communication power is now exceeding the acceptable cortical heat transfer limits. This cortical heating issue is hampering widespread clinical use. As such, effective data compression would bring Wireless iBMIs (WI-BMI) into alignment with heat transfer limits, enabling large channel counts and small implant sizes without risking tissue damage via heating. This thesis addresses the aforementioned communication power problem from a signal processing and data compression perspective, and is composed of two parts. In the first part, we investigate hardware-efficient ways to compress the Multi-Unit Activity (MUA) signal, which is the most common signal in modern iBMIs. In the second and final part, we look at efficient ways to extract and compress the high-bandwidth Entire Spiking Activity signal, which, while underexplored as a signal, has been the subject of significant interest given its ability to outperform the MUA signal in neural decoding. Overall, this thesis introduces hardware-efficient methods of extracting high-performing neural features, and compressing them by an order of magnitude or more beyond the state-of-the-art in ultra-low power ways. This enables many more recording channels to be fit onto intracortical implants, while remaining within cortical heat transfer safety and channel capacity limits.

    Efficient Image Coding and Transmission in Deep Space Communication

    The usefulness of modern digital communication comes from ensuring that data from a source arrives at its destination quickly and correctly. To meet these demands, communication protocols employ data compression and error detection/correction to ensure compactness and accuracy of the data, especially for critical scientific data, which requires lossless compression. For example, in deep space communication, information received from satellites at ground stations on Earth comes in huge volumes, captured with high precision and resolution by space mission instruments such as the Hubble Space Telescope (HST). On-board implementation of communication protocols faces numerous constraints and demands high performance, given the criticality of the data and the high cost of a space mission, including the value of the data itself. The objectives of this study are to determine which data compression technique yields (a) the minimum data volume, (b) the most error resilience, and (c) the lowest hardware resource and power utilization. For this study, a Field Programmable Gate Array (FPGA) will serve as the main component for building the circuitry for each source coding technique. Furthermore, errors are induced based on studies of reported error rates in deep space communication channels to test for error resilience. Finally, the calculation of the resource utilization of the source encoder determines the power and computational usage. Based on the analysis of the error resilience and the characteristics of the errors, the requirements for the channel coding are formulated.
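
    To make the error-resilience test concrete, the sketch below corrupts a losslessly compressed block at a chosen bit-error rate and checks whether it still decodes. It is a hypothetical illustration, not the study's FPGA test bench: zlib stands in for the source coder, and the error rate is an arbitrary example value rather than a reported deep-space channel figure.

```python
# Hypothetical sketch: inject random bit errors into a compressed byte stream
# at a chosen bit-error rate (BER), then see whether decompression survives.
import random
import zlib

def inject_bit_errors(data: bytes, ber: float, seed: int = 0) -> bytes:
    """Flip each bit independently with probability `ber`."""
    rng = random.Random(seed)
    corrupted = bytearray(data)
    for i in range(len(corrupted)):
        for bit in range(8):
            if rng.random() < ber:
                corrupted[i] ^= 1 << bit
    return bytes(corrupted)

# Example: corrupt a losslessly compressed block and test whether it still decodes.
original = b"scientific payload data " * 100
compressed = zlib.compress(original)
noisy = inject_bit_errors(compressed, ber=1e-3)
try:
    zlib.decompress(noisy)
    print("decoded despite injected errors")
except zlib.error:
    print("decode failed: this compressed stream is not error resilient")
```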

    Hardware-efficient compression of neural multi-unit activity

    Brain-machine interfaces (BMIs) are tools for measuring neural activity in the brain, used to treat numerous conditions. It is essential that the next generation of intracortical BMIs is wireless so as to remove percutaneous connections, i.e. wires, and the associated mechanical and infection risks. This is required for the effective translation of BMIs into clinical applications and is one of the remaining bottlenecks. However, due to cortical tissue thermal dissipation safety limits, the on-implant power consumption must be strictly limited. Therefore, both the neural signal processing and wireless communication power should be minimal, while the implants should provide signals that offer high behavioural decoding performance (BDP). The Multi-Unit Activity (MUA) signal is the most common signal in modern BMIs. However, with an ever-increasing channel count, the raw data bandwidth is becoming prohibitively high, with the associated communication power exceeding the safety limits. Data compression is therefore required. To meet this need, this work developed hardware-efficient static Huffman compression schemes for MUA data. Our final system reduced the bandwidth to 27 bps/channel, compared to the standard MUA rate of 1 kbps/channel. This compression is over an order of magnitude more than has been achieved before, while using only 0.96 µW/channel of processing power and 246 logic cells. Our results were verified on three datasets, and less than 1% loss in BDP was observed. As such, with the use of effective data compression, an order of magnitude more MUA channels can be fitted on-implant, enabling the next generation of high-performance wireless intracortical BMIs.
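
    As a rough illustration of the static coding idea described above, the sketch below builds a fixed Huffman codebook for per-bin spike counts offline; on-implant, encoding then reduces to a table lookup. It is a hypothetical example, not the thesis design: the symbol set and the heavily zero-skewed probabilities are assumptions, not measured MUA statistics.

```python
# Hypothetical sketch: build a static Huffman codebook offline for small
# spike-count symbols, so the on-implant encoder is just a lookup table.
import heapq
from itertools import count

def static_huffman(probs):
    """Return {symbol: bitstring} for a dict of symbol probabilities."""
    tiebreak = count()                      # avoids comparing dicts on equal weights
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c0.items()}
        merged.update({s: "1" + code for s, code in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

# Illustrative spike-count distribution per bin, heavily skewed toward zero.
probs = {0: 0.85, 1: 0.10, 2: 0.04, 3: 0.01}
codebook = static_huffman(probs)
print(codebook)                             # code lengths 1, 2, 3, 3 bits
mean_bits = sum(p * len(codebook[s]) for s, p in probs.items())
print(f"{mean_bits:.2f} bits/bin on average")
```

    With a distribution this skewed, the average codeword length is close to one bit per bin, which is the kind of reduction over a fixed-width raw representation that static schemes exploit.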

    Entropy and Certainty in Lossless Data Compression

    Data compression is the art of using encoding techniques to represent data symbols using less storage space compared to the original data representation. The encoding process builds a relationship between the entropy of the data and the certainty of the system. The theoretical limits of this relationship are defined by the theory of entropy in information that was proposed by Claude Shannon. Lossless data compression is uniquely tied to entropy theory as the data and the system have a static definition. The static nature of the two requires a mechanism to reduce the entropy without the ability to alter either of these key components. This dissertation develops the Map of Certainty and Entropy (MaCE) in order to illustrate the entropy and certainty contained within an information system and uses this concept to generate the proposed methods for prefix-free, lossless compression of static data. The first method, Select Level Method (SLM), increases the efficiency of creating Shannon-Fano-Elias code in terms of CPU cycles. SLM is developed using a sideways view of the compression environment provided by MaCE. This view is also used for the second contribution, Sort Linear Method Nivellate (SLMN), which uses the concepts of SLM with the addition of midpoints and a fitting function to increase the compression efficiency of SLM to entropy values L(x) < H(x) + 1. Finally, the third contribution, Jacobs, Ali, Kolibal Encoding (JAKE), extends SLM and SLMN to bases larger than binary to increase the compression even further while maintaining the same relative computation efficiency.
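
    For context, the sketch below implements classic Shannon-Fano-Elias coding, the scheme the methods above build on: each symbol is coded by truncating the binary expansion of the midpoint of its cumulative-probability interval, giving an average length between H(X) and H(X) + 2 bits. It is a textbook illustration, not the dissertation's SLM, SLMN, or JAKE methods, and the example distribution is arbitrary.

```python
# Textbook sketch of Shannon-Fano-Elias coding (not the dissertation's methods).
import math

def sfe_code(probs):
    """Return {symbol: codeword} for an ordered dict of probabilities."""
    codes, cumulative = {}, 0.0
    for s, p in probs.items():
        midpoint = cumulative + p / 2                 # midpoint of the symbol's interval
        length = math.ceil(math.log2(1 / p)) + 1      # truncation length
        bits, frac = "", midpoint
        for _ in range(length):                       # truncated binary expansion
            frac *= 2
            bit, frac = int(frac), frac - int(frac)
            bits += str(bit)
        codes[s] = bits
        cumulative += p
    return codes

probs = {"a": 0.25, "b": 0.5, "c": 0.125, "d": 0.125}
codes = sfe_code(probs)
H = -sum(p * math.log2(p) for p in probs.values())
L = sum(p * len(codes[s]) for s, p in probs.items())
print(codes, f"H = {H:.3f} bits, L = {L:.3f} bits")   # H <= L < H + 2
```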

    Discovering New Vulnerabilities in Computer Systems

    Vulnerability research plays a key role in preventing and defending against malicious computer system exploitations. Driven by a multi-billion dollar underground economy, cyber criminals today tirelessly launch malicious exploitations, threatening every aspect of daily computing. To effectively protect computer systems from devastation, it is imperative to discover and mitigate vulnerabilities before they fall into the offensive parties' hands. This dissertation is dedicated to the research and discovery of new design and deployment vulnerabilities in three very different types of computer systems. The first vulnerability is found in the automatic malicious binary (malware) detection system. Binary analysis, a central piece of technology for malware detection, is divided into two classes: static analysis and dynamic analysis. State-of-the-art detection systems employ both classes of analyses to complement each other's strengths and weaknesses for improved detection results. However, we found that the commonly seen design patterns may suffer from evasion attacks. We demonstrate attacks on the vulnerabilities by designing and implementing a novel binary obfuscation technique. The second vulnerability is located in the design of server system power management. Technological advancements have improved server system power efficiency and facilitated energy proportional computing. However, the change of power profile makes the power consumption subject to unaudited influences of remote parties, leaving the server systems vulnerable to energy-targeted malicious exploits. We demonstrate an energy-abusing attack on a standalone open Web server, measure the extent of the damage, and present a preliminary defense strategy. The third vulnerability is discovered in the application of server virtualization technologies. Server virtualization greatly benefits today's data centers and brings pervasive cloud computing a step closer to the general public. However, the practice of physically co-hosting virtual machines with different security privileges risks introducing covert channels that seriously threaten the information security in the cloud. We study the construction of high-bandwidth covert channels via the memory sub-system, and show a practical exploit of cross-virtual-machine covert channels on virtualized x86 platforms.

    Wireless sensor network as a distributed database

    Wireless sensor networks (WSNs) play a role in various fields. In-network data processing is one of the most important and challenging techniques, as it affects key features of WSNs: energy consumption, node life cycles, and network performance. In in-network processing, an intermediate node or aggregator fuses or aggregates sensor data collected from a group of sensors before transferring it to the base station. The advantage of this approach is that it minimizes the amount of information transferred, which matters given the nodes' limited resources. This thesis introduces the development of a hybrid in-network data processing architecture for WSNs that addresses these constraints. The architecture is proposed at three levels: clustering, data compression, and data mining. At the clustering level, Neighbour-aware Multipath Cluster Aggregation (NMCA) is designed, combining cluster-based and multipath approaches to handle different packet loss rates. At the data compression level, data compression schemes and an Optimal Dynamic Huffman (ODH) algorithm compress data at the cluster head. At the data mining level, a semantic data-mining model for fire detection is developed to extract information from the raw data, improving data accuracy and detecting fire events in simulation. A demo indoor location system using the in-network data processing approach is built to test the energy reduction of the designed strategy. In conclusion, the added benefits that this work can provide for in-network data processing are discussed, and specific contributions and future work are highlighted.
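
    The aggregation step described above can be pictured with a small sketch. It is hypothetical and not the thesis' NMCA or ODH design: a cluster head simply fuses the readings of its member nodes into a short summary and forwards that instead of every raw packet.

```python
# Hypothetical sketch of in-network aggregation at a cluster head: many raw
# samples from member nodes are fused into one small summary for the base station.
from statistics import mean

def aggregate_at_cluster_head(readings):
    """readings: {node_id: [samples]} collected during one reporting round."""
    samples = [x for node in readings.values() for x in node]
    return {
        "count": len(samples),
        "mean": round(mean(samples), 2),
        "min": min(samples),
        "max": max(samples),
    }

# One round of raw data: 4 nodes x 3 samples = 12 values reduced to 4 summary fields.
round_data = {1: [21.3, 21.4, 21.2], 2: [21.5, 21.6, 21.5],
              3: [22.0, 21.9, 22.1], 4: [21.1, 21.0, 21.2]}
print(aggregate_at_cluster_head(round_data))
```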

    Near-Lossless Bitonal Image Compression System

    The main purpose of this thesis is to develop an efficient near-lossless bitonal compression algorithm and to implement that algorithm on a hardware platform. The current methods for compression of bitonal images include the JBIG and JBIG2 algorithms; however, both JBIG and JBIG2 have their disadvantages. Both of these algorithms are covered by patents filed by IBM, making them costly to implement commercially. Also, JBIG only provides means for lossless compression, while JBIG2 provides lossy methods only for document-type images. For these reasons, a new method for introducing loss and controlling this loss to sustain quality is developed. The lossless bitonal image compression algorithm used for this thesis is called Block Arithmetic Coder for Image Compression (BACIC), which can efficiently compress bitonal images. In this thesis, loss is introduced for cases where better compression efficiency is needed. However, introducing loss in bitonal images is especially difficult, because pixels undergo such a drastic change, either from white to black or black to white. Such pixel flipping introduces salt-and-pepper noise, which can be very distracting when viewing an image. Two methods are used in combination to control the visual distortion introduced into the image. The first is to keep track of the error created by the flipping of pixels, and to use this error to decide whether flipping another pixel will cause the visual distortion to exceed a predefined threshold. The second method is region-of-interest consideration. In this method, lower loss or no loss is introduced into the important parts of an image, and higher loss is introduced into the less important parts. This allows for a good quality image while increasing the compression efficiency. Also, the ability of BACIC to compress grayscale images is studied, and BACICm, a multiplanar BACIC algorithm, is created. A hardware implementation of the BACIC lossless bitonal image compression algorithm is also designed. The hardware implementation is done using VHDL targeting a Xilinx FPGA, which is very useful because of its flexibility. The programmed FPGA could be included in a product of the facsimile or printing industry to handle compression or decompression internal to the unit, giving it an advantage in the marketplace.
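
    The error-control idea described above can be sketched as follows. This is a hypothetical simplification, not the BACIC/BACICm implementation: the accumulated visual distortion is approximated here by a per-window flip budget, and the window size and threshold are made-up parameters.

```python
# Hypothetical sketch of error-budgeted pixel flipping for near-lossless bitonal
# coding: a candidate flip is accepted only while the local distortion budget holds,
# limiting visible salt-and-pepper noise.
def flip_with_budget(image, candidates, threshold=2, window=8):
    """image: 2-D list of 0/1 pixels; candidates: [(row, col), ...] flips that would
    help the coder; returns the image with only the flips that fit the budget."""
    out = [row[:] for row in image]
    flips_in_window = {}                      # (row // window, col // window) -> count
    for r, c in candidates:
        key = (r // window, c // window)
        if flips_in_window.get(key, 0) < threshold:
            out[r][c] ^= 1                    # flip black <-> white
            flips_in_window[key] = flips_in_window.get(key, 0) + 1
    return out

img = [[0] * 16 for _ in range(16)]
cleaned = flip_with_budget(img, [(0, 0), (0, 1), (0, 2), (9, 9)])
print(sum(map(sum, cleaned)), "pixels flipped")   # 2 allowed in the first window + 1 elsewhere
```

    A region-of-interest variant, as the thesis describes, would simply use a tighter (or zero) budget for windows marked as important and a looser budget elsewhere.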