
    Joint Data Compression and Computation Offloading in Hierarchical Fog-Cloud Systems

    Data compression has the potential to significantly improve computation offloading performance in hierarchical fog-cloud systems. However, it remains unknown how to optimally determine the compression ratio jointly with the computation offloading decisions and the resource allocation. This joint optimization problem is studied in this paper, where we aim to minimize the maximum weighted energy and service delay cost (WEDC) over all users. First, we consider a scenario where data compression is performed only at the mobile users. We prove that the optimal offloading decisions have a threshold structure. Moreover, a novel three-step approach employing convexification techniques is developed to optimize the compression ratios and the resource allocation. Then, we address the more general design where data compression is performed at both the mobile users and the fog server. We propose three efficient algorithms to overcome the strong coupling between the offloading decisions and the resource allocation. We show that the proposed optimal algorithm for data compression at only the mobile users reduces the WEDC severalfold compared to computation offloading strategies that do not leverage data compression or that use sub-optimal optimization approaches. Moreover, the proposed algorithms for additional data compression at the fog server further reduce the WEDC.
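
    As a rough illustration of the threshold-style offloading decision described above, the sketch below compares a weighted energy and delay cost for local execution against compress-then-offload for a single user. The linear energy/delay models, the parameter names, and the helper functions (wedc_local, wedc_offload, offload_decision) are illustrative assumptions, not the paper's formulation.

        # Rough sketch (not the paper's exact model): weighted energy and delay cost (WEDC)
        # for one user, comparing local execution against compress-then-offload.
        # All parameter names and the linear energy/delay models are illustrative assumptions.

        def wedc_local(bits, cycles_per_bit, f_local, kappa, w_energy, w_delay):
            """Cost of executing the task entirely on the mobile device."""
            delay = bits * cycles_per_bit / f_local                # seconds
            energy = kappa * bits * cycles_per_bit * f_local ** 2  # dynamic CPU energy model
            return w_energy * energy + w_delay * delay

        def wedc_offload(bits, ratio, e_comp_per_bit, rate, p_tx, f_fog,
                         cycles_per_bit, w_energy, w_delay):
            """Cost of compressing locally to `ratio` of the original size, then offloading."""
            tx_bits = bits * ratio
            delay = tx_bits / rate + bits * cycles_per_bit / f_fog   # upload + fog processing
            energy = e_comp_per_bit * bits + p_tx * tx_bits / rate   # compression + radio energy
            return w_energy * energy + w_delay * delay

        def offload_decision(bits, **p):
            """Threshold-style rule: offload whenever it yields the lower WEDC."""
            local = wedc_local(bits, p['cycles_per_bit'], p['f_local'],
                               p['kappa'], p['w_energy'], p['w_delay'])
            remote = wedc_offload(bits, p['ratio'], p['e_comp_per_bit'], p['rate'],
                                  p['p_tx'], p['f_fog'], p['cycles_per_bit'],
                                  p['w_energy'], p['w_delay'])
            return ('offload' if remote < local else 'local'), min(local, remote)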

    Optimal Compression and Transmission Rate Control for Node-Lifetime Maximization

    We consider a system composed of an energy-constrained sensor node and a sink node, and devise optimal data compression and transmission policies with the objective of prolonging the lifetime of the sensor node. While applying compression before transmission reduces the energy consumed in transmitting the sensed data, applying too much compression may cost more energy than transmitting the raw data, defeating its purpose. Hence, it is important to investigate the trade-off between data compression and transmission energy costs. In this paper, we study the joint optimal compression-transmission design in three scenarios that differ in the channel information available at the sensor node and that cover a wide range of practical situations. We formulate and solve joint optimization problems that maximize the lifetime of the sensor node while satisfying delay and bit error rate (BER) constraints. Our results show that a jointly optimized compression-transmission policy achieves significantly longer lifetime (90% to 2000%) than optimizing transmission alone without compression. Importantly, this advantage is most pronounced when the delay constraint is stringent, which demonstrates the design's suitability for low-latency communication in future wireless networks. (Accepted for publication in IEEE Transactions on Wireless Communications.)
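
    To make the compression-versus-transmission trade-off concrete, the sketch below grid-searches candidate compression ratios for the one that maximizes the number of reporting rounds a battery can sustain under a delay constraint. The energy model and every parameter value are assumptions chosen for illustration; the paper derives the optimal policies analytically under delay and BER constraints.

        # Illustrative grid search over the compression/transmission trade-off described above.
        # The energy model and all parameters are assumptions made for this sketch only.

        def best_compression_ratio(battery_j, raw_bits, ratios, e_comp_per_bit,
                                   tx_energy_per_bit, rate_bps, max_delay_s):
            """Pick the ratio (compressed/raw size) that maximizes the number of reports."""
            best = None
            for r in ratios:
                tx_bits = raw_bits * r
                if tx_bits / rate_bps > max_delay_s:      # violates the delay constraint
                    continue
                # Assume compression energy grows as the ratio becomes more aggressive.
                energy = (e_comp_per_bit * raw_bits * (1.0 / r - 1.0)
                          + tx_energy_per_bit * tx_bits)
                reports = battery_j / energy
                if best is None or reports > best[1]:
                    best = (r, reports)
            return best  # (ratio, supported reporting rounds), or None if infeasible

        print(best_compression_ratio(5.0, 1e6, [0.2, 0.4, 0.6, 0.8, 1.0],
                                     e_comp_per_bit=2e-8, tx_energy_per_bit=5e-8,
                                     rate_bps=250e3, max_delay_s=3.0))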

    Malleable Coding with Fixed Reuse

    In cloud computing, storage area networks, remote backup storage, and similar settings, stored data is modified with updates from new versions. Representing information and modifying the representation are both expensive. Therefore, it is desirable for the data not only to be compressed but also to be easily modified during updates. A malleable coding scheme considers both compression efficiency and ease of alteration, promoting codeword reuse. We examine the trade-off between compression efficiency and malleability cost (the difficulty of synchronizing compressed versions), measured as the length of a reused prefix portion. Through a coding theorem, the region of achievable rates and malleability is expressed as a single-letter optimization. Relationships to common-information problems are also described.
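
    The reused-prefix measure of malleability can be pictured with the toy snippet below, which counts how many leading bits of the old version's codeword survive unchanged in the codeword of the updated version. The binary-string representation and the example codewords are assumptions made purely for illustration.

        # Toy illustration of the "reused prefix" notion: count how many leading bits of the
        # old codeword are reused verbatim in the codeword of the updated version.

        def reused_prefix_length(old_codeword: str, new_codeword: str) -> int:
            """Length of the longest common prefix of two binary codewords."""
            n = 0
            for a, b in zip(old_codeword, new_codeword):
                if a != b:
                    break
                n += 1
            return n

        old = "110100111010"
        new = "110100100110"                    # the update rewrites only the suffix
        print(reused_prefix_length(old, new))   # -> 7 leading bits are reused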

    Optimum dry-cooling sub-systems for a solar air conditioner

    Dry-cooling sub-systems for residential solar-powered Rankine compression air conditioners were economically optimized and compared with the cost of a wet cooling tower. Results, in terms of the yearly incremental busbar cost due to the use of dry cooling, are presented for Philadelphia and Miami. Given input data corresponding to local weather, energy rates, capital costs, and condenser surface designs and performance, the computerized optimization program yields the design specifications of the sub-system with the lowest annual incremental cost.
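
    The kind of search such an optimization program performs can be sketched as follows: enumerate candidate dry-cooling designs and keep the one with the lowest annual incremental cost. The candidate designs and the cost model (annualized capital plus fan energy) are illustrative assumptions only, not the program's actual inputs.

        # Hedged sketch: choose the candidate condenser design with the lowest annual
        # incremental cost. Candidate values and the cost model are illustrative assumptions.

        candidates = [
            # (label, fan_power_kw, capital_cost_usd)
            ("small surface, large fan", 1.2, 4200.0),
            ("medium surface",           0.9, 5100.0),
            ("large surface, small fan", 0.7, 6300.0),
        ]

        def annual_incremental_cost(fan_kw, capital_usd, energy_price_per_kwh=0.12,
                                    fan_hours_per_year=2000, capital_recovery_factor=0.1):
            """Annualized capital plus yearly fan energy cost (illustrative model)."""
            return (capital_usd * capital_recovery_factor
                    + fan_kw * fan_hours_per_year * energy_price_per_kwh)

        best = min(candidates, key=lambda c: annual_incremental_cost(c[1], c[2]))
        print(best[0])   # design with the lowest annual incremental cost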

    Compressed positionally encoded record filters in distributed query processing.

    Unlike a centralized database system, distributed query processing involves data transmission among distributed sites, which makes reducing transmission cost a major goal of distributed query optimization. The Positionally Encoded Record Filter (PERF) has attracted research attention as a cost-effective operator for reducing transmission cost. A PERF is a bit array generated in relation tuple scan order rather than by hashing, so it retains the compact size of a Bloom filter while losing no join information to hash collisions. Our proposed algorithm PERF_C (Compressed PERF) further reduces the transmission cost of algorithm PERF by compressing both the join attributes and the corresponding PERF filters using arithmetic coding. We prove by time-complexity analysis that compression is more efficient than sorting, which earlier research proposed for removing duplicates in algorithm PERF. In experiments on our synthetic testbed with 36 types of distributed queries, algorithm PERF_C reduces the transmission cost with a cost reduction ratio of 62%-77% over IFS, and outperforms PERF by 16%-36% in cost reduction ratio. A new metric measuring compression speed in bits per second, compression bps, is defined as a guideline for deciding when compression is beneficial: once compression overhead is considered, compression pays off only if compression bps exceeds the data transfer speed. Tests on both randomly generated and specially designed distributed queries identify the number of join attributes, the size of the join attributes and relations, and the level of duplication as the critical database factors affecting compression. Measured on three typical computing platforms over a wide range of data sizes, compression bps falls between 4 Mb/s and 9 Mb/s. Compared to the relatively slow data transfer rates over the Internet, compression is thus an effective means of reducing transmission cost in distributed query processing. Thesis (M.Sc.), University of Windsor (Canada), 2004. Adviser: J. Morrissey.
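
    A minimal sketch of how a PERF is built in tuple scan order, together with the compression-bps benefit test described above, is given below. The tuple values, the use of zlib in place of arithmetic coding, and the chosen link speed are illustrative assumptions.

        # Minimal sketch of a Positionally Encoded Record Filter (PERF) and of the
        # "compression bps" benefit test. zlib stands in for arithmetic coding, and the
        # relation contents and link speed are illustrative assumptions.

        import time
        import zlib

        def build_perf(relation_r_join_values, relation_s_join_values):
            """One bit per tuple of R, in scan order: 1 if that tuple's join value appears in S."""
            s_values = set(relation_s_join_values)
            return bytes(1 if v in s_values else 0 for v in relation_r_join_values)

        def compression_beneficial(payload: bytes, link_bps: float) -> bool:
            """Compress only if the measured compression speed (bits/s) beats the link speed."""
            start = time.perf_counter()
            compressed = zlib.compress(payload)
            elapsed = max(time.perf_counter() - start, 1e-9)
            compression_bps = len(payload) * 8 / elapsed
            return compression_bps > link_bps and len(compressed) < len(payload)

        r_vals = [10, 11, 12, 13, 14, 15]
        s_vals = [11, 13, 15]
        perf = build_perf(r_vals, s_vals)            # b'\x00\x01\x00\x01\x00\x01'
        print(compression_beneficial(perf * 1000, link_bps=8e6))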

    Combining checkpointing and data compression for large scale seismic inversion

    Seismic inversion and imaging are adjoint-based optimization problems that process up to terabytes of data, regularly exceeding the memory capacity of available computers. Data compression is an effective strategy to reduce this memory requirement by a certain factor, particularly if some loss in accuracy is acceptable. A popular alternative is checkpointing, where data is stored only at selected points in time and values at other times are recomputed as needed from the last stored state. This allows arbitrarily large adjoint computations with limited memory, at the cost of additional recomputation. In this paper we combine compression and checkpointing for the first time to compute a realistic seismic inversion. The combination allows larger adjoint computations than using compression alone, and significantly reduces the recomputation overhead compared to using checkpointing alone.
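
    The interplay of checkpointing and compression can be sketched as follows: the forward sweep stores compressed states only at selected time steps, and the backward sweep restores the nearest earlier checkpoint and recomputes forward to the step the adjoint update needs. The toy solver, the checkpoint interval, and the use of lossless zlib rather than a lossy floating-point compressor are assumptions made for illustration.

        # Hedged sketch of an adjoint sweep over compressed checkpoints. The placeholder
        # solver, checkpoint interval, and zlib compression are illustrative assumptions.

        import pickle
        import zlib

        def forward_step(state):
            # Placeholder for one time step of the forward (wave propagation) solver.
            return [x * 0.99 + 0.01 for x in state]

        def adjoint_step(adjoint, forward_state):
            # Placeholder for one reverse-time adjoint update using the forward state.
            return [a + f for a, f in zip(adjoint, forward_state)]

        def adjoint_with_compressed_checkpoints(state0, n_steps, interval=10):
            # Forward sweep: keep only compressed checkpoints every `interval` steps.
            checkpoints = {}
            state = state0
            for t in range(n_steps):
                if t % interval == 0:
                    checkpoints[t] = zlib.compress(pickle.dumps(state))
                state = forward_step(state)

            # Backward sweep: restore the nearest checkpoint, recompute forward to step t.
            adjoint = [0.0] * len(state0)
            for t in reversed(range(n_steps)):
                base = (t // interval) * interval
                state = pickle.loads(zlib.decompress(checkpoints[base]))
                for _ in range(t - base):
                    state = forward_step(state)
                adjoint = adjoint_step(adjoint, state)
            return adjoint

        print(adjoint_with_compressed_checkpoints([1.0, 2.0, 3.0], n_steps=25)[0])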