
    Characterizing Human Random-Sequence Generation in Competitive and Non-Competitive Environments Using Lempel-Ziv Complexity

    The human ability for random-sequence generation (RSG) is limited but improves in a competitive game environment with feedback. However, it remains unclear how random people can be during games and whether RSG during games can improve when explicitly informing people that they must be as random as possible to win the game. Nor is it known whether any such improvement in RSG transfers outside the game environment. To investigate this, we designed a pre/post intervention paradigm around a Rock-Paper-Scissors game followed by a questionnaire. During the game, we manipulated participants’ level of awareness of the computer’s strategy; they were either (a) not informed of the computer’s algorithm or (b) explicitly informed that the computer used patterns in their choice history against them, so they must be maximally random to win. Using a compressibility metric of randomness, our results demonstrate that human RSG can reach levels statistically indistinguishable from computer pseudo-random generators in a competitive-game setting. However, our results also suggest that human RSG cannot be further improved by explicitly informing participants that they need to be random to win. In addition, the higher RSG in the game setting does not transfer outside the game environment. Furthermore, we found that the underrepresentation of long repetitions of the same entry in the series explains up to 29% of the variability in human RSG, and we discuss what might make up the variance left unexplained.
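
    The abstract does not specify the exact compressibility metric used; purely as an illustration, a minimal Python sketch of one common Lempel-Ziv complexity variant, counting the phrases of a greedy left-to-right parse of a rock-paper-scissors choice history (symbols 'R', 'P', 'S' are an assumed encoding), might look like this. Fewer phrases per symbol indicates a more compressible, less random-looking sequence.

        import random

        def lz_phrase_count(sequence):
            """Count phrases in a greedy Lempel-Ziv-style parse of the sequence.

            Each phrase is the shortest prefix of the remaining input that has
            not yet been emitted as a phrase; a sketch of one common complexity
            measure, not necessarily the metric used in the study.
            """
            phrases = set()
            start, length = 0, 1
            while start + length <= len(sequence):
                phrase = sequence[start:start + length]
                if phrase in phrases:
                    length += 1          # extend until the phrase is new
                else:
                    phrases.add(phrase)  # emit the new phrase and restart
                    start += length
                    length = 1
            return len(phrases)

        # Compare a fully patterned choice history with a pseudo-random baseline.
        patterned = "RPS" * 10
        baseline = "".join(random.choices("RPS", k=30))
        print(lz_phrase_count(patterned), lz_phrase_count(baseline))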

    Optimum Implementation of Compound Compression of a Computer Screen for Real-Time Transmission in Low Network Bandwidth Environments

    Remote working has become increasingly prevalent in recent times. A large part of remote working involves sharing computer screens between servers and clients. The image content presented when sharing computer screens consists of both natural camera-captured image data and computer-generated graphics and text. The attributes of natural camera-captured image data differ greatly from those of computer-generated image data. An image containing a mixture of both is known as a compound image. The research presented in this thesis focuses on the challenge of constructing a compound compression strategy that applies the ‘best fit’ compression algorithm to the mixed content found in a compound image. The research also involves analysis and classification of the types of data a given compound image may contain. While researching optimal types of compression, consideration is given to the computational overhead of a given algorithm, because the research is being developed for real-time systems such as cloud computing services, where latency has a detrimental impact on end-user experience. Previous and current state-of-the-art video codecs have been studied, along with much of the recent academic literature, in order to design and implement a novel low-complexity compound compression algorithm suitable for real-time transmission. The compound compression algorithm utilises a mixture of lossless and lossy compression algorithms, with parameters that can be used to control the performance of the algorithm. An objective image quality assessment is needed to determine whether the proposed algorithm can produce an acceptable-quality image after processing. Traditional metrics such as Peak Signal-to-Noise Ratio are used alongside the Structural Similarity Index, a more modern approach better suited to compound images, to define the quality of the decompressed image. Finally, the compression strategy is tested on a set of generated compound images. Using open-source software, the same images are compressed with the previous and current state-of-the-art video codecs to compare the three main metrics: compression ratio, computational complexity, and objective image quality.
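
    The thesis's actual classifier and codec parameters are not reproduced here; as a sketch of the ‘best fit’ routing idea only, the following splits a compound image into tiles and sends low-colour-count tiles (assumed to be computer-generated text/graphics) to a lossless path and the rest to a lossy path. The 16x16 tile size and the 32-colour threshold are illustrative assumptions, not values from the thesis.

        import numpy as np

        BLOCK = 16          # illustrative tile size
        MAX_COLOURS = 32    # illustrative threshold for synthetic content

        def classify_blocks(image):
            """Tag each BLOCK x BLOCK tile of an RGB image as 'lossless' or 'lossy'.

            Tiles with few distinct colours are treated as computer-generated
            (text/graphics) and routed to a lossless coder; colour-rich tiles
            are treated as natural camera content and routed to a lossy coder.
            """
            h, w, _ = image.shape
            decisions = {}
            for y in range(0, h, BLOCK):
                for x in range(0, w, BLOCK):
                    tile = image[y:y + BLOCK, x:x + BLOCK].reshape(-1, 3)
                    n_colours = len(np.unique(tile, axis=0))
                    decisions[(y, x)] = "lossless" if n_colours <= MAX_COLOURS else "lossy"
            return decisions

    In a full pipeline the two tile streams would then be fed to separate lossless and lossy coders, and the reconstruction assessed with PSNR and SSIM as the abstract describes.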

    A Simple Implementation of the Lempel-Ziv-Yokoo Data Compression Method and an Experimental Analysis of Its Redundancy

    The Lempel-Ziv (LZ) algorithm is a practical dictionary encoder proposed by Ziv and Lempel in 1978. Lempel-Ziv-Yokoo (LZY) is a simple lossless data compression method based on an incremental dictionary tree, and it learns more efficiently than the related LZ78 method. In this work, we incorporate a post-processing step into the LZY algorithm and change how the incremental dictionary tree is constructed, giving a simple implementation that updates the tree one bit at a time. The dictionary construction and the encoding/decoding are realized with enumerative coding, combined with a simple post-processing mechanism. The result is a method with a high degree of duality between encoder and decoder and low program complexity. Regarding the redundancy of the LZY compression method, both its theoretical analysis and its experimental analysis remain open problems; the aim of this work is the experimental analysis. Chapter 2 describes the implementation of the LZY algorithm (dictionary construction, encoding, and decoding). Chapter 3 describes the structure of the source code for the LZY algorithm. Chapter 4 presents the experimental analysis of the redundancy: although the redundancy analysed theoretically so far is O(log log N / log N), the experimental results clearly suggest O(1/log N). Finally, Chapter 5 discusses the theoretical analysis of the redundancy of the LZY algorithm as future work. The University of Electro-Communications.
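
    The LZY-specific bit-level tree update and the enumerative coding described above are not shown here; for orientation only, a minimal sketch of the related LZ78-style incremental-dictionary parse (the method LZY is compared against) could look like this:

        def lz78_encode(data):
            """Greedy LZ78 parse: emit (dictionary index, next symbol) pairs.

            A sketch of the comparison method, not the LZY algorithm itself.
            """
            dictionary = {"": 0}              # phrase -> index; 0 is the empty phrase
            phrase = ""
            output = []
            for symbol in data:
                candidate = phrase + symbol
                if candidate in dictionary:
                    phrase = candidate        # keep extending the current phrase
                else:
                    output.append((dictionary[phrase], symbol))
                    dictionary[candidate] = len(dictionary)
                    phrase = ""
            if phrase:                        # flush a trailing phrase with no new symbol
                output.append((dictionary[phrase], ""))
            return output

        # Example: lz78_encode("abab") -> [(0, 'a'), (0, 'b'), (1, 'b')]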

    The contour tree image encoding technique and file format

    The process of contourization is presented, which converts a raster image into a discrete set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes, thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human visual system, lossy image compression can be achieved which minimises noticeable artifacts in the simplified image. The contour merging technique offers a complementary lossy compression system to the QDCT (Quantised Discrete Cosine Transform). The artifacts introduced by the two methods are very different; QDCT produces a general blurring and adds extra highlights in the form of overshoots, whereas contour merging sharpens edges, reduces highlights and introduces a degree of false contouring. A format based on the contourization technique which caters for most image types is defined, called the contour tree image format. Image operations directly on this compressed format have been studied, which for certain manipulations can offer significant operational speed increases over using a standard raster image format. A couple of examples of operations specific to the contour tree format are presented, showing some of the features of the new format. Science and Engineering Research Council.
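
    As background for the contourization step only, a minimal sketch (not the thesis's coder) that labels 4-connected constant-value plateaux in a raster image might look like the following; building the inclusion hierarchy that forms the contour tree would be a separate pass over these labels, which is not shown here.

        from collections import deque

        def label_plateaux(image):
            """Label 4-connected regions of equal pixel value ('plateaux').

            image: 2D list of pixel values. Returns a same-shaped array of
            integer labels, one label per plateau.
            """
            rows, cols = len(image), len(image[0])
            labels = [[-1] * cols for _ in range(rows)]
            next_label = 0
            for sr in range(rows):
                for sc in range(cols):
                    if labels[sr][sc] != -1:
                        continue
                    value = image[sr][sc]
                    labels[sr][sc] = next_label
                    queue = deque([(sr, sc)])
                    while queue:                      # flood fill the plateau
                        r, c = queue.popleft()
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            nr, nc = r + dr, c + dc
                            if (0 <= nr < rows and 0 <= nc < cols
                                    and labels[nr][nc] == -1
                                    and image[nr][nc] == value):
                                labels[nr][nc] = next_label
                                queue.append((nr, nc))
                    next_label += 1
            return labels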

    The Space and Earth Science Data Compression Workshop

    This document is the proceedings from a Space and Earth Science Data Compression Workshop, which was held on March 27, 1992, at the Snowbird Conference Center in Snowbird, Utah. This workshop was held in conjunction with the 1992 Data Compression Conference (DCC '92), which was held at the same location, March 24-26, 1992. The workshop explored opportunities for data compression to enhance the collection and analysis of space and Earth science data. The workshop consisted of eleven papers presented in four sessions. These papers describe research that is integrated into, or has the potential of being integrated into, a particular space and/or Earth science data information system. Presenters were encouraged to take into account the scientists' data requirements and the constraints imposed by the data collection, transmission, distribution, and archival system.

    Local Editing in Lempel-Ziv Compressed Data

    This thesis explores the problem of editing data while it is compressed by a variant of Lempel-Ziv compression. We show that the random-access properties of the LZ-End compression scheme allow random edits, and present the first algorithm to achieve this. The thesis goes on to adapt the LZ-End parsing so that the random-access property becomes a local-access property, which has tighter memory bounds. Furthermore, the new parsing allows a much-improved algorithm for editing the compressed data.
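
    The editing algorithms themselves are the contribution of the thesis and are not reproduced here. As context only, a toy decoder for an LZ-End-style parse, in which every phrase copies from a source that ends at an earlier phrase boundary and then appends one literal character, might look like this; the (src_phrase, copy_len, trailing_char) triple representation is an assumption made for illustration.

        def lzend_decode(phrases):
            """Decode a toy LZ-End-style parse back into a string.

            phrases: list of (src_phrase, copy_len, trailing_char), where
            src_phrase indexes an earlier phrase at whose end the copied source
            ends (None when copy_len == 0). The thesis's editing algorithms work
            on the parse without this full decompression.
            """
            text = []
            phrase_end = []                      # end position (exclusive) of each phrase
            for src, copy_len, trailing in phrases:
                if copy_len:
                    end = phrase_end[src]
                    text.extend(text[end - copy_len:end])
                text.append(trailing)
                phrase_end.append(len(text))
            return "".join(text)

        # Example: [(None, 0, 'a'), (None, 0, 'b'), (1, 2, 'c')] decodes to "ababc".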

    Using semantic knowledge to improve compression on log files

    With the move towards global and multi-national companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Networks consist of a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. Each of these devices produces log files which need to be analysed and monitored to provide network security and satisfy regulations. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data for archival purposes after the log files have been rotated. However, many other compression programs exist, each with its own advantages and disadvantages. These programs each use a different amount of memory and take different compression and decompression times to achieve different compression ratios. System log files also contain redundancy which is not necessarily exploited by standard compression programs. Log messages usually use a similar format with a defined syntax. Not all ASCII characters are used in the log files, and the messages contain certain "phrases" which are often repeated. This thesis investigates the use of compression as a means of data reduction and how the use of semantic knowledge can improve data compression (also applying the results to different scenarios that can occur in a distributed computing environment). It presents the results of a series of tests performed on different log files. It also examines the semantic knowledge which exists in maillog files and how it can be exploited to improve the compression results. The results from a series of text preprocessors which exploit this knowledge are presented and evaluated. These preprocessors include one which replaces the timestamps and IP addresses with their binary equivalents and one which replaces words from a dictionary with unused ASCII characters. In this thesis, data compression is shown to be an effective method of data reduction, producing up to 98 percent reduction in file size on a corpus of log files. The use of preprocessors which exploit semantic knowledge results in up to 56 percent improvement in overall compression time and up to 32 percent reduction in compressed size.
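
    The thesis's preprocessors are not reproduced exactly; a minimal sketch of the IP-address idea, replacing each dotted-quad IPv4 address with a marker byte plus its four raw bytes before handing the data to gzip, might look like the following. The 0x01 marker byte and the "maillog" filename are assumptions made for illustration.

        import gzip
        import re
        import socket

        IP_RE = re.compile(rb"(?:\d{1,3}\.){3}\d{1,3}")
        MARKER = b"\x01"   # assumed to be a byte unused in the log corpus

        def pack_ips(line: bytes) -> bytes:
            """Replace dotted-quad IPv4 addresses with MARKER + 4 raw bytes.

            A sketch of the 'binary equivalent' preprocessing idea; the actual
            marker scheme and reversibility handling are simplified here.
            """
            def repl(match):
                try:
                    return MARKER + socket.inet_aton(match.group().decode())
                except OSError:        # not a valid IPv4 (e.g. octet > 255): keep as text
                    return match.group()
            return IP_RE.sub(repl, line)

        # Usage sketch: preprocess then compress (filename is illustrative).
        with open("maillog", "rb") as f:
            compressed = gzip.compress(b"".join(pack_ips(line) for line in f))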

    Adaptive arithmetic data compression: An Implementation suitable for noiseless communication channel use

    Noiseless data compression can provide important benefits in speed improvements and cost savings to computer communication. To be most effective, the compression process should be off-loaded from any processing CPU and placed into a communication device. To operate transparently, it should also be adaptable to the data, operate in a single pass, and be able to perform at the communication link's speed. Compression methods are surveyed, with emphasis given to how well they meet these criteria. In this thesis, a string-matching statistical unit paired with arithmetic coding is investigated in detail. It is implemented and optimized so that its performance (speed, memory use, and compression ratio) can be evaluated. Finally, the requirements and additional concerns for implementing this algorithm in a communication device are addressed.
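
    The thesis's string-matching model and device-oriented implementation are not shown here. As a bare illustration of the arithmetic-coding half only, a toy floating-point encoder with a fixed frequency table might look like the following; real coders, including one suitable for a communication device, use integer arithmetic with renormalization and an adaptive model.

        def arithmetic_encode(symbols, freq):
            """Toy floating-point arithmetic encoder (no renormalization).

            freq: dict mapping symbol -> count. Returns one float inside the
            final interval. Floating-point precision limits make this unusable
            beyond short inputs; it only illustrates the interval narrowing.
            """
            total = sum(freq.values())
            cum, running = {}, 0                 # symbol -> (low_count, high_count)
            for s, f in freq.items():
                cum[s] = (running, running + f)
                running += f

            low, high = 0.0, 1.0
            for s in symbols:
                span = high - low
                lo_c, hi_c = cum[s]
                high = low + span * hi_c / total  # shrink the interval to the
                low = low + span * lo_c / total   # sub-range assigned to s
            return (low + high) / 2

        # Example: encode "abba" under a fixed model.
        print(arithmetic_encode("abba", {"a": 2, "b": 2}))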