    Analysing Compression Techniques for In-Memory Collaborative Filtering

    Following the recent trend of in-memory data processing, it is common practice to maintain collaborative filtering data in main memory when generating recommendations in academic and industrial recommender systems. In this paper, we study the impact of integer compression techniques on in-memory collaborative filtering data in terms of space and time efficiency. Our results provide relevant observations about when and how to compress collaborative filtering data. First, we observe that, depending on the memory constraints, compression techniques may speed up or slow down the performance of state-of-the-art collaborative filtering algorithms. Second, after comparing different compression techniques, we find the Frame of Reference (FOR) technique to be the best option in terms of space and time efficiency under different memory constraints.
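
    To make the Frame of Reference idea concrete, the sketch below encodes a block of integers as a common base plus small offsets stored with a minimal bit width. It is an illustrative Python example only, assuming a plain list-of-integers block; it is not the implementation evaluated in the paper.

        # Minimal sketch of Frame of Reference (FOR) encoding: store the block
        # minimum once and keep only small offsets, each fitting in a few bits.
        def for_encode(block):
            base = min(block)
            offsets = [v - base for v in block]
            bit_width = max(1, max(offsets).bit_length())  # bits needed per offset
            return base, bit_width, offsets

        def for_decode(base, bit_width, offsets):
            # Decoding just adds the base back to every offset.
            return [base + o for o in offsets]

        block = [1007, 1012, 1010, 1023, 1009]
        base, width, offsets = for_encode(block)
        assert for_decode(base, width, offsets) == block
        # Each offset now fits in `width` bits instead of a full 32- or 64-bit word.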

    Semantic Compression of Episodic Memories

    Storing knowledge of an agent's environment in the form of a probabilistic generative model has been established as a crucial ingredient in a multitude of cognitive tasks. Perception has been formalised as probabilistic inference over the state of latent variables, whereas in decision making the model of the environment is used to predict likely consequences of actions. Such generative models have previously been proposed to underlie semantic memory, but it remained unclear whether this model also underlies the efficient storage of experiences in episodic memory. We formalise the compression of episodes in the normative framework of information theory and argue that semantic memory provides the distortion function for the compression of experiences. Recent advances and insights from machine learning allow us to approximate semantic compression in naturalistic domains and contrast the resulting deviations in compressed episodes with memory errors observed in the experimental literature on human memory.
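
    One standard way to read the information-theoretic framing is through the rate-distortion function, written below in textbook form; the notation (episode X, compressed episode X-hat, distortion d supplied by the semantic model) is assumed here for illustration and is not taken verbatim from the paper.

        % minimise the stored rate subject to an expected semantic distortion of at most D
        R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X})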

    Capturing Regular Human Activity through a Learning Context Memory

    A learning context memory consisting of two main parts is presented. The first part performs lossy data compression, keeping the amount of stored data to a minimum by combining similar context attributes; the compression rate for the presented GPS data is 150:1 on average. The resulting data is stored in an appropriate data structure that highlights the level of compression. Elements with a high level of compression are used in the second part to form the start and end points of episodes that capture common activity consisting of consecutive events. The context memory is used to investigate how little context data can be stored while still containing enough information to capture regular human activity.
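
    As a rough illustration of the "combine similar context attributes" step, the hypothetical Python sketch below merges consecutive GPS fixes that lie within a small threshold into a single stored centroid with a visit count; the threshold value and the data layout are assumptions made for illustration, not details from the paper.

        # Hypothetical lossy merging of GPS fixes: nearby fixes collapse into one
        # stored point, which is how a high compression rate can be achieved.
        def merge_fixes(fixes, threshold=0.0005):
            merged = []  # list of (lat, lon, count)
            for lat, lon in fixes:
                if merged:
                    mlat, mlon, count = merged[-1]
                    if abs(lat - mlat) < threshold and abs(lon - mlon) < threshold:
                        # Update the running centroid instead of storing a new point.
                        merged[-1] = ((mlat * count + lat) / (count + 1),
                                      (mlon * count + lon) / (count + 1),
                                      count + 1)
                        continue
                merged.append((lat, lon, 1))
            return merged

        trace = [(52.0001, 4.0001), (52.0002, 4.0002), (52.0003, 4.0001), (53.1, 4.5)]
        print(merge_fixes(trace))  # the three near-identical fixes collapse into one entry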

    Bidirectional Text Compression in External Memory

    Bidirectional compression algorithms work by replacing repeated substrings with references that, unlike in the well-known LZ77 scheme, can point in either direction. We present such an algorithm that is particularly suited for an external memory implementation. We evaluate it experimentally on large data sets of size up to 128 GiB (using only 16 GiB of RAM) and show that it is significantly faster than all known LZ77 compressors, while producing a roughly similar number of factors. We also introduce an external memory decompressor for texts compressed with any uni- or bidirectional compression scheme.
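
    The Python sketch below decodes a toy bidirectional factorisation in which each factor is either a literal character or a reference into the output text that may point backwards or forwards; this factor representation is an assumption made for illustration and is unrelated to the paper's external memory format.

        # Toy decoder for a bidirectional factorisation: references ("ref", src, len)
        # copy from an absolute output position that may lie before or after the factor.
        def decode_bidirectional(factors):
            positions, total = [], 0
            for f in factors:
                positions.append(total)
                total += 1 if f[0] == "lit" else f[2]

            out = [None] * total
            for f, pos in zip(factors, positions):
                if f[0] == "lit":
                    out[pos] = f[1]

            # Resolve references character by character; a valid parse is acyclic
            # at the character level, so every round fills at least one position.
            progress = True
            while progress and any(c is None for c in out):
                progress = False
                for f, pos in zip(factors, positions):
                    if f[0] != "ref":
                        continue
                    src, length = f[1], f[2]
                    for i in range(length):
                        if out[pos + i] is None and out[src + i] is not None:
                            out[pos + i] = out[src + i]
                            progress = True
            return "".join(out)

        # "abab": a forward reference to positions 2-3, followed by the literals 'a' and 'b'.
        print(decode_bidirectional([("ref", 2, 2), ("lit", "a"), ("lit", "b")]))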

    Spaceborne memory organization, an associative data acquisition system, phase II Final report, Apr. - Dec. 1966

    Spaceborne memory organization, associative data acquisition system design, and data compression technique

    Parallel Recursive State Compression for Free

    This paper focuses on reducing memory usage in enumerative model checking, while maintaining the multi-core scalability obtained in earlier work. We present a tree-based multi-core compression method, which works by leveraging sharing among sub-vectors of state vectors. An algorithmic analysis of both worst-case and optimal compression ratios shows the potential to compress even large states to a small constant on average (8 bytes). Our experiments demonstrate that this holds up in practice: the median compression ratio across 279 measured experiments is within 17% of the optimum for tree compression, and five times better than the median compression ratio of SPIN's COLLAPSE compression. Our algorithms are implemented in the LTSmin tool, and our experiments show that for model checking, multi-core tree compression pays its own way: it comes with virtually no overhead compared to the fastest hash table-based methods.
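
    The sketch below illustrates the sub-vector sharing idea in much simplified form: state vectors are folded recursively into a table of (left, right) pairs, so each stored state reduces to a single node index and common sub-vectors are stored only once. The class and method names are invented for this illustration; the actual LTSmin implementation differs.

        # Simplified recursive tree compression: intern (left, right) pairs so that
        # states sharing sub-vectors reuse table entries and each state is one index.
        class TreeTable:
            def __init__(self):
                self.table = {}   # (left, right) -> node index
                self.nodes = []   # node index -> (left, right)

            def intern(self, left, right):
                key = (left, right)
                if key not in self.table:
                    self.table[key] = len(self.nodes)
                    self.nodes.append(key)
                return self.table[key]

            def compress(self, vector):
                if len(vector) == 1:
                    return ("leaf", vector[0])
                mid = len(vector) // 2
                return ("node", self.intern(self.compress(vector[:mid]),
                                            self.compress(vector[mid:])))

        t = TreeTable()
        a = t.compress([1, 2, 3, 4, 5, 6, 7, 8])
        b = t.compress([1, 2, 3, 4, 5, 6, 7, 9])  # shares its left half with `a`
        print(a, b, len(t.nodes))  # the shared sub-vectors are interned only once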