
    The Cerevoice Blizzard Entry 2007: Are Small Database Errors Worse than Compression Artifacts?

    In commercial systems the memory footprint of unit selection systems is often a key issue. This is especially true for PDAs and other embedded devices. In this year's Blizzard entry, CereProc® set itself the criterion that the full-database system entered would have a smaller memory footprint than either of the two smaller database entries. This was accomplished by applying Speex speech compression to the full database entry. In turn, the set of small-database techniques used to improve the quality of small database systems in last year's entry was extended. Finally, for all systems, two quality control methods were applied to the underlying database to improve the match between the lexicon, the transcriptions, and the underlying data. Results suggest that the mild audio artifacts introduced by lossy compression have almost as much impact on MOS-perceived quality as the concatenation errors introduced by sparse data in the smaller systems with bulked diphones. Index Terms: speech synthesis, unit selection.
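    As a rough illustration of the trade-off studied here, the sketch below (Python) round-trips a waveform through lossy Speex compression. It assumes the speexenc/speexdec command-line tools from the Speex distribution are installed; file names and the quality setting are illustrative, not the CereProc pipeline.

    ```python
    import subprocess
    from pathlib import Path

    def speex_roundtrip(wav_in: Path, quality: int = 8) -> Path:
        """Round-trip a waveform through lossy Speex compression, the
        kind of footprint reduction applied to the full database entry
        above. File names and quality setting are illustrative."""
        spx = wav_in.with_suffix(".spx")
        wav_out = wav_in.with_name(wav_in.stem + "_speex.wav")
        # --quality ranges 0-10: higher means fewer artifacts, larger files
        subprocess.run(["speexenc", "--quality", str(quality),
                        str(wav_in), str(spx)], check=True)
        subprocess.run(["speexdec", str(spx), str(wav_out)], check=True)
        return wav_out
    ```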

    The Implementation and Performance of Compressed Databases

    In this paper, we show how compression can be integrated into a relational database system. Specifically, we describe how the storage manager, the query execution engine, and the query optimizer of a database system can be extended to deal with compressed data. Our main result is that compression can significantly improve the response time of queries if very lightweight compression techniques are used. We present such lightweight compression techniques and give the results of running the TPC-D benchmark on a compressed and a non-compressed database using AODB, an experimental database system developed at the Universities of Mannheim and Passau. Our benchmark results demonstrate that compression indeed offers high performance gains (up to 55%) for IO-intensive queries and moderate gains for CPU-intensive queries. Compression can, however, also increase the running time of certain update operations. In all, we recommend extending today's database systems with lightweight compression techniques and making extensive use of this feature.
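    The lightweight techniques argued for here typically include dictionary and numeric encodings that are cheap to decode in the query path. The sketch below shows generic dictionary encoding in Python; it is not the AODB implementation, just a minimal example of why decoding can be nearly free.

    ```python
    from typing import Hashable, List, Tuple

    def dictionary_encode(column: List[Hashable]) -> Tuple[List[int], List[Hashable]]:
        """Lightweight dictionary compression: replace each value with a
        small integer code. Decoding is a single array lookup, so it can
        pay off even for CPU-bound queries (generic example)."""
        codes, dictionary, seen = [], [], {}
        for v in column:
            if v not in seen:
                seen[v] = len(dictionary)
                dictionary.append(v)
            codes.append(seen[v])
        return codes, dictionary

    def dictionary_decode(codes: List[int], dictionary: List[Hashable]) -> List[Hashable]:
        return [dictionary[c] for c in codes]

    codes, d = dictionary_encode(["DE", "FR", "DE", "US", "FR"])
    assert dictionary_decode(codes, d) == ["DE", "FR", "DE", "US", "FR"]
    ```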

    Iris Recognition: The Consequences of Image Compression

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.
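    A minimal sketch of the kind of compression involved, using Pillow's JPEG 2000 support (which requires the OpenJPEG codec). The helper name and target size are hypothetical, and this is not the ICE 2005 evaluation pipeline.

    ```python
    from PIL import Image  # Pillow built with OpenJPEG support

    def compress_iris_to_budget(path: str, target_kb: int = 4) -> str:
        """Compress a grayscale iris image with JPEG 2000 toward a byte
        budget (e.g. the 4 kB smart-card limit mentioned above).
        Illustrative only."""
        img = Image.open(path).convert("L")
        raw_bytes = img.width * img.height          # 8-bit grayscale
        rate = raw_bytes / (target_kb * 1024)       # required compression ratio
        out = path.rsplit(".", 1)[0] + ".jp2"
        # quality_mode="rates" interprets quality_layers as compression ratios
        img.save(out, "JPEG2000", quality_mode="rates", quality_layers=[rate])
        return out
    ```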

    An NVM Aware MariaDB Database System and Associated IO Workload on File Systems

    MariaDB is a community-developed fork of the MySQL relational database management system and was originally designed and implemented for the traditional spinning-disk architecture. With Non-Volatile Memory (NVM) technology now at the forefront and mainstream for server storage in data centers, MariaDB addresses the need by adding support for NVM devices and introduces an NVM Compression method. NVM Compression is a novel hybrid technique that combines application-level compression with flash awareness for optimal performance and storage efficiency. Utilizing new interface primitives exported by Flash Translation Layers (FTLs), we leverage the garbage collection available in flash devices to optimize the capacity management required by compression systems. We implement NVM Compression in the popular MariaDB database and use variants of commonly available POSIX file system interfaces to provide the extended FTL capabilities to the user space application. The experimental results show that the hybrid approach of NVM Compression can improve compression performance by 2-7x, deliver compression performance for flash devices that is within 5% of uncompressed performance, improve storage efficiency by 19% over legacy Row-Compression, reduce data writes by up to 4x when combined with other flash-aware techniques such as Atomic Writes, and deliver further advantages in power efficiency and CPU utilization. Various microbenchmark measurements and findings on sparse files point to needed improvements in how file systems handle punch-hole operations on files.
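    The punch-hole operations mentioned at the end are how a compressed block store returns space to a sparse file when data shrinks. Below is a minimal Linux sketch via fallocate(2) through ctypes and glibc; it is a generic example, not the MariaDB/FTL code path.

    ```python
    import ctypes, ctypes.util, os

    # Flag values from <linux/falloc.h> (Linux-specific)
    FALLOC_FL_KEEP_SIZE = 0x01
    FALLOC_FL_PUNCH_HOLE = 0x02

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                               ctypes.c_long, ctypes.c_long]

    def punch_hole(fd: int, offset: int, length: int) -> None:
        """Deallocate a byte range inside an open file, leaving a sparse
        hole; compressed stores reclaim space this way when blocks
        shrink after compression (generic example)."""
        flags = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE
        if libc.fallocate(fd, flags, offset, length) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    ```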

    Balancing Performance and Energy for Lightweight Data Compression Algorithms

    Energy consumption is becoming an increasingly critical design factor, while performance remains an important requirement, so a balance between performance and energy has to be established. To tackle that issue for database systems, we previously proposed the concept of work-energy profiles. However, generating such profiles requires extensive benchmarking. To overcome that, in this paper we propose approximating the work-energy profiles of complex operations from the profiles of low-level operations. To show the feasibility of our approach, we use lightweight data compression algorithms as the complex operations, since compression as well as decompression are heavily used in in-memory database systems, where data is always managed in a compressed representation. Furthermore, we evaluate our approach on a concrete hardware system.
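    A minimal sketch of the approximation idea, in Python: the work-energy profile of a complex operation is estimated as the count-weighted sum of previously benchmarked low-level operation profiles. All numbers and the operation mix below are invented for illustration.

    ```python
    # Per-element (work, energy) profiles of low-level operations, as
    # obtained by one-off benchmarking (numbers made up for the sketch).
    LOW_LEVEL = {
        "load":    (1.0, 0.8),   # (time units, energy units)
        "compare": (0.5, 0.3),
        "store":   (1.0, 0.9),
    }

    def approximate_profile(op_counts: dict) -> tuple:
        """Approximate the work-energy profile of a complex operation
        (e.g. a lightweight decompression kernel) as the count-weighted
        sum of low-level profiles, avoiding a full benchmark run."""
        work = sum(n * LOW_LEVEL[op][0] for op, n in op_counts.items())
        energy = sum(n * LOW_LEVEL[op][1] for op, n in op_counts.items())
        return work, energy

    # Hypothetical composition of a run-length decoder over one block:
    print(approximate_profile({"load": 4096, "compare": 4096, "store": 8192}))
    ```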

    A practitioner's guide to data base compression tutorial

    Data compression techniques can improve information system performance by reducing the size of a database by as much as ninety percent. This paper is written to provide assistance to practitioners considering the use of data compression for the storage of a commercial database. It reviews a wealth of literature on data compression and presents facts and guidelines that will assist system designers in evaluating the costs and benefits of compression and in selecting techniques appropriate for their needs.
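    In the spirit of the guide's cost/benefit evaluation, a back-of-the-envelope check (Python, all parameters illustrative): how much space a given compression ratio saves, and whether the codec is fast enough that scans stay IO-bound.

    ```python
    def compression_payoff(db_gb: float, ratio: float,
                           io_mb_s: float, codec_mb_s: float) -> dict:
        """Rough cost/benefit check: space saved at a given compression
        ratio, and whether decompression keeps up with the IO path so
        scans still win. All parameters are illustrative."""
        saved_gb = db_gb * (1 - 1 / ratio)
        scan_win = codec_mb_s > io_mb_s  # codec faster than disk => net gain
        return {"saved_gb": saved_gb, "scan_win": scan_win}

    # A 10:1 ratio matches the ninety-percent reduction cited above:
    print(compression_payoff(db_gb=100, ratio=10, io_mb_s=200, codec_mb_s=800))
    ```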

    Optimizing XML Compression in XQueC

    We present our approach to the problem of optimizing compression choices in the context of the XQueC compressed XML database system. In XQueC, data items are aggregated into containers, which are further grouped to be compressed together. This way, XQueC is able to exploit data commonalities and to perform query evaluation in the compressed domain, with the aim of improving both compression and querying performance. However, different compression algorithms have different performance and support different sets of operations in the compressed domain. Therefore, choosing how to group containers and which compression algorithm to apply to each group is a challenging issue. We address this problem through an appropriate cost model and a suitable blend of heuristics which, based on a given query workload, are capable of driving appropriate compression choices.
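    A toy version of such a cost-model-driven choice, in Python: for one container group, pick the algorithm minimizing a weighted sum of storage and expected query-time cost under a workload. Algorithm names, numbers, and weights are illustrative, not XQueC's actual cost model.

    ```python
    # Hypothetical per-algorithm estimates for one container group:
    # (relative compressed size, decode cost per query,
    #  supports evaluation in the compressed domain)
    ALGORITHMS = {
        "huffman": (0.45, 3.0, False),
        "order_preserving": (0.55, 1.0, True),  # compare without decoding
        "none": (1.00, 0.0, True),
    }

    def choose_algorithm(query_freq: float, size_weight: float = 1.0) -> str:
        """Pick the algorithm minimizing size_weight * size +
        query_freq * decode cost; compressed-domain algorithms avoid
        the decode cost entirely (illustrative heuristic)."""
        def cost(name: str) -> float:
            size, per_query, in_domain = ALGORITHMS[name]
            return size_weight * size + query_freq * (0.0 if in_domain else per_query)
        return min(ALGORITHMS, key=cost)

    print(choose_algorithm(query_freq=10.0))  # hot data favors compressed-domain eval
    print(choose_algorithm(query_freq=0.0))   # cold data favors best compression
    ```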

    An ECG-on-Chip with 535-nW/Channel Integrated Lossless Data Compressor for Wireless Sensors

    This paper presents a low-power ECG recording system-on-chip (SoC) with on-chip low-complexity lossless ECG compression for data reduction in wireless/ambulatory ECG sensor devices. The chip uses a linear slope predictor for data compression and incorporates a novel low-complexity dynamic coding-packaging scheme to frame the prediction error into a fixed-length 16-bit format. The proposed technique achieves an average compression ratio of 2.25x on the MIT-BIH ECG database. Implemented in a standard 0.35 µm process, the compressor uses 0.565K gates/channel, occupies 0.4 mm² for four channels, and consumes 535 nW/channel at 2.4 V for ECG sampled at 512 Hz. Small size and ultra-low power consumption make the proposed technique suitable for wearable ECG sensor applications.
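    A minimal sketch of a linear slope predictor of the kind described, in Python/NumPy: each sample is extrapolated from the slope of the previous two, and only the small prediction error is kept. The chip's exact predictor and its dynamic coding-packaging into 16-bit frames are not reproduced; this shows the predictive first stage only.

    ```python
    import numpy as np

    def slope_predict_errors(x: np.ndarray) -> np.ndarray:
        """Keep the error against a slope extrapolation:
        e[n] = x[n] - (2*x[n-1] - x[n-2]); the first two samples pass
        through unchanged so the stream is losslessly invertible."""
        e = x.astype(np.int64)
        e[2:] = x[2:] - (2 * x[1:-1] - x[:-2])
        return e

    def reconstruct(e: np.ndarray) -> np.ndarray:
        """Invert the predictor sample by sample."""
        x = e.copy()
        for n in range(2, len(x)):
            x[n] = e[n] + 2 * x[n - 1] - x[n - 2]
        return x

    ecg = np.array([512, 515, 519, 524, 540, 560], dtype=np.int64)
    assert np.array_equal(reconstruct(slope_predict_errors(ecg)), ecg)
    ```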