
    EXTRACTION AND PREDICTION OF SYSTEM PROPERTIES USING VARIABLE-N-GRAM MODELING AND COMPRESSIVE HASHING

    In modern computer systems, memory accesses and power management are the two major performance-limiting factors. Accesses to main memory are very slow compared to operations within a processor chip. Hardware write buffers, caches, out-of-order execution, and prefetch logic are commonly used to reduce the time spent waiting for main memory accesses. Compiler loop interchange and data layout transformations can also help. Unfortunately, large data structures often have access patterns for which none of the standard approaches are useful. Using smaller data structures can significantly improve performance by allowing the data to reside in higher levels of the memory hierarchy. This dissertation proposes using a lossy data compression technique called "Compressive Hashing" to create "surrogates" that augment the original large data structures to yield faster typical data access. One way to optimize system performance for power consumption is to provide predictive control of system-level energy use. This dissertation introduces a novel instruction-level cost model, the variable-n-gram model, which is closely related to the n-gram analysis commonly used in computational linguistics. The model does not require direct knowledge of complex architectural details and is capable of determining performance relationships between instructions from an execution trace. Experimental measurements are used to derive a context-sensitive model of the performance of each type of instruction in the context of an n-instruction sequence. Dynamic runtime power prediction mechanisms often suffer from high overhead costs. To reduce this overhead, the dissertation encodes the static instruction-level predictions into a data structure and uses compressive hashing to provide on-demand runtime access to those predictions. Genetic programming is used to evolve the compressive hash functions, and performance analysis of applications shows that runtime access overhead can be reduced by a factor of approximately 3x to 9x.
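
    As a rough illustration of the back-off idea behind a variable-n-gram cost model (a minimal sketch, not the dissertation's implementation), the Python snippet below averages measured per-instruction costs over the longest previously seen context of preceding opcodes and backs off to shorter contexts when a long one is unseen. All names (VariableNGramCostModel, the toy trace) are hypothetical.

```python
from collections import defaultdict

class VariableNGramCostModel:
    """Back-off n-gram cost model over an instruction trace.

    The trace is a list of (opcode, measured_cost) pairs; the model learns
    the average cost of an opcode in the context of up to max_n - 1
    preceding opcodes and backs off to shorter contexts when needed.
    """

    def __init__(self, max_n=4):
        self.max_n = max_n
        self.sums = defaultdict(float)   # context tuple -> total observed cost
        self.counts = defaultdict(int)   # context tuple -> number of occurrences

    def train(self, trace):
        ops = [op for op, _ in trace]
        for i, (_, cost) in enumerate(trace):
            for n in range(1, self.max_n + 1):
                if i - n + 1 < 0:
                    break
                ctx = tuple(ops[i - n + 1 : i + 1])  # last n opcodes, ending here
                self.sums[ctx] += cost
                self.counts[ctx] += 1

    def predict(self, recent_ops):
        # Try the longest available context first, then back off to shorter ones.
        for n in range(min(self.max_n, len(recent_ops)), 0, -1):
            ctx = tuple(recent_ops[-n:])
            if self.counts[ctx]:
                return self.sums[ctx] / self.counts[ctx]
        return 0.0  # opcode never seen in training

# Hypothetical usage on a toy trace of (opcode, cycles) samples.
model = VariableNGramCostModel(max_n=3)
model.train([("load", 4.0), ("add", 1.0), ("load", 6.0), ("mul", 3.0)])
print(model.predict(["add", "load"]))
```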

    Towards Lifelong Reasoning with Sparse and Compressive Memory Systems

    Humans have a remarkable ability to remember information over long time horizons. When reading a book, we build up a compressed representation of the past narrative, such as the characters and events that have shaped the story so far. We can do this even if they are separated from the current text by thousands of words, or by long stretches of time between readings. Over our lives, we build up and retain memories that tell us where we live, what we have experienced, and who we are. Adding memory to artificial neural networks has been transformative in machine learning, allowing models to extract structure from temporal data and model the future more accurately. However, the capacity for long-range reasoning in current memory-augmented neural networks is considerably limited in comparison to humans, despite their access to powerful modern computers. This thesis explores two prominent approaches to scaling artificial memories to lifelong capacity: sparse access and compressive memory structures. With sparse access, only a very small subset of pertinent memories is inspected, retrieved, and updated. Sparse memory access is found to be beneficial for learning, allowing improved data efficiency and generalisation. From a computational perspective, sparsity allows scaling to memories with millions of entities on a simple CPU-based machine. It is shown that memory systems which compress the past into a smaller set of representations reduce redundancy, can speed up the learning of rare classes, and can improve upon classical data structures in database systems. Compressive memory architectures are also devised for sequence prediction tasks and are observed to significantly advance the state of the art in modelling natural language.
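
    As a loose illustration of a compressive memory structure (a minimal sketch, not the thesis's architecture), the following Python snippet keeps a bounded buffer of recent states and, on eviction, mean-pools the oldest entries into a smaller long-term store; the class and parameter names are hypothetical.

```python
import numpy as np

class CompressiveMemory:
    """Toy compressive memory: a bounded short-term buffer whose oldest
    entries are mean-pooled into a smaller long-term store on eviction."""

    def __init__(self, mem_size=8, comp_mem_size=4, compression_rate=2):
        self.mem_size = mem_size            # slots for recent, uncompressed states
        self.comp_mem_size = comp_mem_size  # slots for compressed summaries
        self.rate = compression_rate        # how many old states fold into one slot
        self.memory = []                    # recent, uncompressed states
        self.compressed = []                # older, compressed summaries

    def write(self, state):
        self.memory.append(np.asarray(state, dtype=np.float32))
        if len(self.memory) > self.mem_size:
            # Evict the oldest `rate` states and squash them into one summary slot.
            old, self.memory = self.memory[: self.rate], self.memory[self.rate :]
            self.compressed.append(np.mean(old, axis=0))
            if len(self.compressed) > self.comp_mem_size:
                self.compressed.pop(0)      # discard the very oldest summary

    def read(self):
        # A real model would attend over both stores; here we simply expose them.
        return self.compressed + self.memory

# Hypothetical usage: ten writes are retained in far fewer memory slots.
mem = CompressiveMemory(mem_size=4, compression_rate=2)
for t in range(10):
    mem.write(np.full(3, float(t)))
print(len(mem.read()), "slots retained after 10 writes")
```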

    SYSTEMS FOR USER AND APPLICATION MOBILITY IN WIRED AND WIRELESS NETWORKS

    The words mobility and network are found together in many contexts. The issue of modeling geographical user mobility in wireless networks alone has countless applications. Depending on one's background, the concept is investigated with very different tools and aims. Moreover, the last decade has also seen growing interest in code mobility, i.e. the possibility for software applications (or parts thereof) to migrate and keep working on different devices and in different environments. A notable and successful real-life application is distributed computing, which under certain hypotheses can eliminate the need for expensive supercomputers. The general rationale is to split a very demanding computing task into a large number of independent sub-problems, each addressable by limited-power, weakly connected machines (typically connected through the Internet, the quintessential wired network). Following this line of thought, we organized this thesis in two distinct and independent parts. Part I deals with audio fingerprinting, with special emphasis on the broadcast-monitoring application and on implementation aspects. Although the problem is tackled from many sides, one of the most prominent difficulties is the high computing power required for the task. We thus devised and operated a distributed-computing solution, which is described in detail. Tests were conducted on the computing cluster available at the Department of Engineering of the University of Ferrara. Part II focuses instead on wireless networks. Even if the approach is quite general, the stress is on WiFi networks. More specifically, we evaluated how mobile users' experience can be improved, considering two tools. First, we wrote a packet-level simulator and used it to estimate the impact of pricing strategies on bandwidth allocation, confirming the need for such solutions. Second, we developed a high-level simulator whose results strongly suggest investigating user cooperation for the selection of the "best" access point when many are available; we also propose one such policy.
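
    The split-and-distribute rationale described above can be illustrated with a minimal Python sketch (hypothetical, not the thesis's actual system): a long recording is cut into independent chunks, each matched by a separate worker process; match_chunk is a stand-in for the real fingerprint-matching routine, and in a real deployment the chunks would be dispatched to networked machines rather than local processes.

```python
from concurrent.futures import ProcessPoolExecutor

def match_chunk(chunk_id, samples):
    """Hypothetical worker: fingerprint one audio chunk and look it up in a
    reference database; placeholder for the real matching routine."""
    fingerprint = hash(bytes(samples))       # placeholder fingerprint
    return chunk_id, fingerprint % 1000      # placeholder match score

def monitor_broadcast(audio, chunk_len=10_000, workers=4):
    # Split the recording into independent chunks so each limited-power
    # node (here: a local process) can match its piece in isolation.
    chunks = [(i, audio[i : i + chunk_len]) for i in range(0, len(audio), chunk_len)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(match_chunk, *zip(*chunks)))
    return dict(results)

if __name__ == "__main__":
    fake_audio = bytes(100_000)              # stand-in for a PCM audio stream
    print(len(monitor_broadcast(fake_audio)), "chunks matched")
```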

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medicine research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics, to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also become more efficient, higher in quality, and lower in cost. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that roughly span several interesting topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.