    Data Compression in Multi-Hop Large-Scale Wireless Sensor Networks

    Data collection from a multi-hop large-scale outdoor WSN deployment for environmental monitoring is full of challenges due to the severe resource constraints of small battery-operated motes (e.g., bandwidth, memory, power, and computing capacity) and the highly dynamic wireless link conditions in an outdoor communication environment. We present a compressed sensing approach which can recover the sensing data at the sink with good accuracy when very few packets are collected, leading to a significant reduction of the network traffic and an extension of the WSN lifetime. By interplaying with the dynamic WSN routing topology, the proposed approach is efficient and simple to implement on the resource-constrained motes, without requiring the motes to store any part of the random measurement matrix, as opposed to other existing compressed sensing based schemes. We provide a systematic method, via machine learning, to find a representation basis for the given WSN deployment and data field that is both sparse and incoherent with the measurement matrix used in the compressed sensing. We validate our approach and evaluate its performance using our real-world multi-hop WSN testbed deployed in situ to collect humidity and soil moisture data. The results show that our approach significantly outperforms three other compressed sensing based algorithms in data recovery accuracy for the entire WSN observation field, at drastically reduced communication costs. For some WSN scenarios, compressed sensing may not be applicable; we therefore also design a generalized predictive coding framework for unified lossless and lossy data compression. In addition, we devise a novel algorithm for lossless compression that significantly improves data compression performance for various data collections and applications in WSNs. Rigorous simulations show that our proposed framework and compression algorithm outperform several recent popular compression algorithms for wireless sensor networks, such as LEC, S-LZW and LTC, on various real-world sensor data sets, demonstrating the merit of the proposed framework for unified temporal lossless and lossy data compression in WSNs.
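    The core recovery step can be illustrated with a small sketch: the sensed field is assumed sparse in some basis, the sink observes only a few linear measurements, and a sparse solver reconstructs the full field. The sketch below is not the paper's exact scheme; it assumes a random Gaussian measurement matrix, a DCT representation basis, and Orthogonal Matching Pursuit as the recovery algorithm, purely for illustration.

```python
import numpy as np
from scipy.fft import idct

# Minimal compressed-sensing sketch (illustrative, not the paper's scheme):
# the field x is sparse in a DCT basis Psi, the sink observes y = Phi @ x
# from only m << n packets, and recovery uses Orthogonal Matching Pursuit.

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                            # field size, measurements, sparsity

# Synthetic "sensor field": k-sparse in the DCT domain.
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Psi = idct(np.eye(n), norm='ortho', axis=0)     # columns are inverse-DCT basis vectors
x_true = Psi @ s_true

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
A = Phi @ Psi                                   # sensing matrix in the sparse domain
y = Phi @ x_true                                # the few collected measurements

def omp(A, y, k):
    """Recover a k-sparse coefficient vector from y = A @ s."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        s_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ s_ls
    s_hat = np.zeros(A.shape[1])
    s_hat[support] = s_ls
    return s_hat

x_hat = Psi @ omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

    In the paper's approach, the interplay with the dynamic routing topology is what removes the need for motes to store any part of the measurement matrix; that is the main departure from the generic sketch above.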

    Singular Value Decomposition Based Image Coding for Achieving Additional Compression to JPEG Images

    Computer technology today is strongly focused on storage space and speed, and considerable advancements in this direction can be achieved through the use of digital image compression techniques. In this paper we present a well-studied singular value decomposition (SVD) based JPEG image compression technique. Singular value decomposition is a way of factorizing a matrix into a series of linear approximations that expose the underlying structure of the matrix. SVD is extraordinarily useful and has many applications, such as data analysis, signal processing, pattern recognition, object detection, and weather prediction. An attempt is made to apply this factorization to perform a second round of compression on JPEG images to optimize storage space. Compression is further enhanced by the removal of singularity after the initial compression performed using SVD. MATLAB R2010a with the Image Processing Toolbox is used as the development tool for implementing the algorithm.
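    The underlying idea is the standard low-rank approximation: keep only the k largest singular values and the corresponding singular vectors, so an m-by-n image plane is stored as k(m+n+1) numbers instead of mn. The NumPy sketch below is a generic illustration of that truncation (the paper's implementation is in MATLAB and includes further steps); the 256x256 random array simply stands in for a decoded JPEG plane.

```python
import numpy as np

# Illustrative rank-k SVD approximation of an image matrix
# (a generic second-stage compression sketch, not the paper's exact pipeline).

def svd_compress(image, k):
    """Approximate a 2-D image with its k largest singular values."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    # Numbers stored for the truncated factors vs. the raw image:
    stored = k * (U.shape[0] + Vt.shape[1] + 1)
    return approx, stored / image.size

img = np.random.rand(256, 256)            # stand-in for a decoded JPEG plane
approx, ratio = svd_compress(img, k=20)
print("stored fraction:", round(ratio, 3))
print("relative error:", np.linalg.norm(img - approx) / np.linalg.norm(img))
```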

    Developing 5GL Concepts from User Interactions

    In fulfilling the contracts generated in Test Driven Development, a developer can be said to act as a constraint solver, similar to those used by a 5th Generation Language (5GL). This thesis presents the hypothesis that 5GL linguistic mechanics, such as facts, rules and goals, will be emergent in the communications of developer pairs performing Test Driven Development, validating that 5GL syntax is congruent with the ways that practitioners communicate. Along the way, nomenclatures and linguistic patterns may be observed that could inform the design of future 5GL languages.

    "Rotterdam econometrics": publications of the econometric institute 1956-2005

    This paper contains a list of all publications over the period 1956-2005, as reported in the Rotterdam Econometric Institute Reprint series during 1957-2005.

    Does Technology Change Families? A Tri-Angulation Discussion on the Relation of Family & Technology

    Families have moved, or have been moved, from the streets into their homes or, more specifically, into their bedrooms. Digital technologies such as computer games, mobile phones, the internet, and email are referred to as new technology when discussed in relation to the health and structure of families. Many parents, clinicians, researchers, and policy makers are concerned that electronic tools, especially those featuring violent content, may be harmful to individuals, particularly youths. This article is a documentary study combining multiple foci: the internet, mobile phones, and computer games. It looks into the role of technology in family life and, specifically, examines how the mediated space that this technology creates matters. Technological change often creates ungrounded fears but also overinflated hopes; to minimize risks and to seize opportunities, systematic, empirical, and ideally experimental research is crucial worldwide. Major changes in family structure and environments brought about by the advent of a technology might severely disrupt family functioning, thus diminishing a family's ability to cope with stress.

    Selection of compressible signals from telemetry data

    Sensors are deployed in all aspects of modern city infrastructure and generate vast amounts of data. Only subsets of this data, however, are relevant to individual organisations. For example, a local council may collect suspension movement from vehicles to detect pot-holes, but this data is not relevant when assessing traffic flow. Supervised feature selection aims to find the set of signals that best predict a target variable. Typical approaches use either measures of correlation or similarity, as in filter methods, or predictive power in a learned model, as in wrapper methods. In both approaches the selected features often have high entropies and are not suitable for compression. This is a particular issue in the automotive domain, where fast communication and archival of vehicle telemetry data is likely to be prevalent in the near future, especially with technologies such as V2V and V2X. In this paper, we adapt a popular feature selection filter method to consider the compressibility of the signals being selected for use in a predictive model. In particular, we add a compression term to the Minimal Redundancy Maximal Relevance (MRMR) filter and introduce Minimal Redundancy Maximal Relevance And Compression (MRMRAC). Using MRMRAC, we select features from the Controller Area Network (CAN) and predict each of instantaneous fuel consumption, engine torque, vehicle speed, and gear position using a Support Vector Machine (SVM). We show that while predictive performance is slightly lower when compression is considered, the compressibility of the selected features is significantly improved.
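    The greedy scoring idea can be sketched as follows: each candidate signal is scored by its relevance to the target, penalised by its redundancy with already-selected signals, and rewarded for being compressible. The sketch below only illustrates that shape; the histogram mutual-information estimate, the zlib-based compressibility proxy, the alpha weighting, and the synthetic data are all assumptions rather than the paper's exact formulation.

```python
import numpy as np
import zlib

# Greedy MRMR-style selection with an added compressibility term
# (an illustrative sketch of the MRMRAC idea; the weighting and the
# zlib proxy are assumptions, not the paper's exact terms).

def mutual_info(a, b, bins=16):
    """Histogram estimate of mutual information between two 1-D signals."""
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def compressibility(sig):
    """Crude proxy: how well the quantised signal compresses with zlib."""
    raw = np.asarray(sig * 255, dtype=np.uint8).tobytes()
    return 1.0 - len(zlib.compress(raw)) / len(raw)

def mrmrac(X, y, n_select, alpha=1.0):
    """Pick columns of X maximising relevance - redundancy + alpha * compressibility."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select:
        def score(j):
            rel = mutual_info(X[:, j], y)
            red = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected]) if selected else 0.0
            return rel - red + alpha * compressibility(X[:, j])
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(1)
X = rng.random((500, 10))                 # stand-in for normalised CAN signals
y = X[:, 0] + 0.1 * rng.standard_normal(500)
print(mrmrac(X, y, n_select=3))
```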

    Handling Massive N-Gram Datasets Efficiently

    This paper deals with the two fundamental problems concerning the handling of large n-gram language models: indexing, that is, compressing the n-gram strings and associated satellite data without compromising their retrieval speed; and estimation, that is, computing the probability distribution of the strings from a large textual source. Regarding the problem of indexing, we describe compressed, exact and lossless data structures that achieve, at the same time, high space reductions and no time degradation with respect to state-of-the-art solutions and related software packages. In particular, we present a compressed trie data structure in which each word following a context of fixed length k, i.e., its preceding k words, is encoded as an integer whose value is proportional to the number of words that follow such a context. Since the number of words following a given context is typically very small in natural languages, we lower the space of representation to compression levels that were never achieved before. Despite the significant savings in space, our technique introduces a negligible penalty at query time. Regarding the problem of estimation, we present a novel algorithm for estimating modified Kneser-Ney language models, which have emerged as the de facto choice for language modeling in both academia and industry thanks to their relatively low perplexity. Estimating such models from large textual sources poses the challenge of devising algorithms that make a parsimonious use of the disk. The state-of-the-art algorithm uses three sorting steps in external memory; we show an improved construction that requires only one sorting step by exploiting the properties of the extracted n-gram strings. With an extensive experimental analysis performed on billions of n-grams, we show an average improvement of 4.5X on the total running time of the state-of-the-art approach. Published in ACM Transactions on Information Systems (TOIS), February 2019, Article No. 2.
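    The space saving from the context-based encoding can be illustrated with a toy sketch: within each k-word context, a following word is replaced by its rank among the words actually observed after that context, so the stored integers are bounded by the (typically tiny) successor count rather than by the vocabulary size. The Python sketch below only illustrates this mapping; the paper's trie layout, identifier assignment and bit-level coding are considerably more involved.

```python
from collections import defaultdict

# Illustrative sketch of context-local word identifiers: within each
# k-word context, the following word is replaced by its rank among the
# words observed after that context, so the stored integers stay small.

def build_codebooks(ngrams, k=2):
    """Map each k-word context to a rank table over its observed successors."""
    successors = defaultdict(set)
    for gram in ngrams:
        successors[gram[:k]].add(gram[k])
    return {ctx: {w: i for i, w in enumerate(sorted(words))}
            for ctx, words in successors.items()}

def encode(ngrams, codebooks, k=2):
    """Replace the last word of each (k+1)-gram by its small local integer."""
    return [(gram[:k], codebooks[gram[:k]][gram[k]]) for gram in ngrams]

ngrams = [("the", "cat", "sat"), ("the", "cat", "ran"),
          ("a", "dog", "ran"), ("the", "cat", "sat")]
books = build_codebooks(ngrams)
print(encode(ngrams, books))
# Each last word becomes an integer bounded by the successor count of its context.
```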