
    A Rate-Compatible Sphere-Packing Analysis of Feedback Coding with Limited Retransmissions

    Recent work by Polyanskiy et al. and Chen et al. has excited new interest in using feedback to approach capacity with low latency. Polyanskiy showed that feedback identifying the first symbol at which decoding is successful allows capacity to be approached with surprisingly low latency. This paper uses Chen's rate-compatible sphere-packing (RCSP) analysis to study what happens when symbols must be transmitted in packets, as with a traditional hybrid ARQ system, and limited to relatively few (six or fewer) incremental transmissions. Numerical optimizations find the series of progressively growing cumulative block lengths that enable RCSP to approach capacity with the minimum possible latency. RCSP analysis shows that five incremental transmissions are sufficient to achieve 92% of capacity with an average block length of fewer than 101 symbols on the AWGN channel with SNR of 2.0 dB. The RCSP analysis provides a decoding error trajectory that specifies the decoding error rate for each cumulative block length. Though RCSP is an idealization, an example tail-biting convolutional code matches the RCSP decoding error trajectory and achieves 91% of capacity with an average block length of 102 symbols on the AWGN channel with SNR of 2.0 dB. We also show how RCSP analysis can be used in cases where packets have deadlines associated with them (leading to an outage probability).
    Comment: To be published at the 2012 IEEE International Symposium on Information Theory, Cambridge, MA, USA. Updated to incorporate reviewers' comments and add new figure.
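
    As a rough illustration of the sphere-packing idealization discussed above, the sketch below computes a decoding error probability for the AWGN channel using a standard sphere-packing argument: with cumulative block length N, k message bits, and SNR eta, decoding fails when the noise falls outside a decoding sphere of squared radius N(1+eta)*2^(-2k/N), which is a chi-square tail probability. The radius formula and the parameter values (k = 32 bits, SNR = 2.0 dB) are illustrative assumptions, not necessarily the exact RCSP expressions or operating points used in the paper.

```python
# Hedged sketch of a sphere-packing error estimate for the AWGN channel.
# The radius formula below is a generic sphere-packing idealization; it is an
# assumption for illustration, not necessarily the paper's exact RCSP formula.
from scipy.stats import chi2

def sphere_packing_error(n, k, snr_db):
    """Estimate P(decoding error) for k message bits in n channel uses."""
    snr = 10 ** (snr_db / 10.0)
    # Squared radius of one of the 2^k decoding spheres packed inside the
    # n-dimensional received-signal sphere of squared radius n * (1 + snr).
    r2 = n * (1.0 + snr) * 2.0 ** (-2.0 * k / n)
    # Error if the noise (chi-square with n degrees of freedom) lands outside.
    return chi2.sf(r2, df=n)

# Example: a decoding error trajectory over a few cumulative block lengths
# (all values illustrative).
for n in (32, 64, 96, 128):
    print(n, sphere_packing_error(n, k=32, snr_db=2.0))
```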

    Multi-dimensional key generation of ICMetrics for cloud computing

    Despite the rapid expansion and uptake of cloud-based services, lack of trust in the provenance of such services represents a significant inhibiting factor in their further expansion. This paper explores an approach to assuring trust and provenance in cloud-based services via the generation of digital signatures using properties or features derived from the services' own construction and software behaviour. The resulting system removes the need for a server to store a private key in a typical public/private-key infrastructure for data sources. Rather, keys are generated at run-time from features obtained as service execution proceeds. In this paper we investigate several potential software features for their suitability in a cloud service identification system. Generating a stable and unique digital identity from features in cloud computing is challenging because of the unstable operating environment, which implies that the features employed are likely to vary under normal operating conditions. To address this, we introduce a multi-dimensional key generation technology which maps from a multi-dimensional feature space directly to a key space. Subsequently, a smooth entropy algorithm is developed to evaluate the entropy of the key space.
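
    A mapping from a multi-dimensional feature space directly to a key space could, in its simplest generic form, quantize each feature into a stable bin and hash the combined bin vector into key material. The sketch below shows only that generic idea; the feature names, quantization step sizes, and use of SHA-256 are hypothetical and are not the paper's ICMetrics construction or its smooth entropy evaluation.

```python
# Generic illustration of deriving a key from multi-dimensional software
# features; this is NOT the paper's ICMetrics algorithm, just a minimal sketch.
import hashlib

def quantize(value, step):
    """Map a noisy feature reading onto a stable bin index."""
    return int(round(value / step))

def derive_key(features, steps):
    """Quantize each feature dimension, then hash the bin indices into a key.

    `features` and `steps` are dicts keyed by (hypothetical) feature names.
    """
    bins = [f"{name}:{quantize(features[name], steps[name])}"
            for name in sorted(features)]
    return hashlib.sha256("|".join(bins).encode()).hexdigest()

# Hypothetical run-time measurements of a cloud service (illustrative values).
features = {"mean_syscall_latency_us": 143.7,
            "heap_growth_kb_per_req": 12.2,
            "branch_miss_rate": 0.031}
steps = {"mean_syscall_latency_us": 10.0,
         "heap_growth_kb_per_req": 2.0,
         "branch_miss_rate": 0.01}
print(derive_key(features, steps))
```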

    The Simulation Model Partitioning Problem: an Adaptive Solution Based on Self-Clustering (Extended Version)

    This paper is about partitioning in parallel and distributed simulation, that is, decomposing the simulation model into a number of components and properly allocating them to the execution units. An adaptive solution based on self-clustering, which considers both communication reduction and computational load balancing, is proposed. The implementation of the proposed mechanism is tested using a simulation model that is challenging in terms of both structure and dynamicity. Various configurations of the simulation model and the execution environment have been considered. The obtained performance results are analyzed using a reference cost model. The results demonstrate that the proposed approach is promising and that it can reduce the simulation execution time in both parallel and distributed architectures.
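
    The abstract does not spell out the self-clustering heuristic itself; as a generic illustration of the trade-off it names (communication reduction versus computational load balancing), the sketch below greedily migrates a simulated entity toward the execution unit hosting most of its recent communication partners, but only if that unit is not already overloaded. The data structures and the load threshold are assumptions for illustration, not the adaptive mechanism proposed in the paper.

```python
# Generic greedy migration sketch balancing communication locality against
# load; illustrative only, not the paper's adaptive self-clustering mechanism.
from collections import Counter

def choose_unit(entity, placement, interactions, num_units, max_load_ratio=1.5):
    """Pick a target execution unit for `entity`.

    placement:     dict entity -> current execution unit
    interactions:  dict entity -> list of entities it recently messaged
    """
    loads = Counter(placement.values())
    avg_load = len(placement) / num_units
    # Count where this entity's communication partners currently live.
    partner_units = Counter(placement[p] for p in interactions.get(entity, ()))
    for unit, _ in partner_units.most_common():
        # Migrate only if the candidate unit is not already overloaded.
        if loads[unit] + 1 <= max_load_ratio * avg_load:
            return unit
    return placement[entity]  # stay put if every attractive unit is full

# Tiny example: 4 entities on 2 units; entity "a" mostly talks to unit 1.
placement = {"a": 0, "b": 1, "c": 1, "d": 0}
interactions = {"a": ["b", "c", "c"]}
print(choose_unit("a", placement, interactions, num_units=2))
```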

    Model-driven Scheduling for Distributed Stream Processing Systems

    Distributed Stream Processing frameworks are commonly used with the evolution of the Internet of Things (IoT). These frameworks are designed to adapt to dynamic input message rates by scaling in/out. Apache Storm, originally developed by Twitter, is a widely used stream processing engine, while others include Flink and Spark Streaming. To run streaming applications successfully, we need to know the optimal resource requirement, since over-estimation of resources adds extra cost. We therefore need a strategy for determining the optimal resource requirement for a given streaming application. In this article, we propose a model-driven approach for scheduling streaming applications that effectively utilizes a priori knowledge of the applications to provide predictable scheduling behavior. Specifically, we use application performance models to offer reliable estimates of the resource allocation required. Further, this intuition also drives resource mapping and helps narrow the gap between the estimated and actual dataflow performance and resource utilization. Together, this model-driven scheduling approach gives predictable application performance and resource utilization behavior for executing a given DSPS application at a target input stream rate on distributed resources.
    Comment: 54 pages
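
    One simple way to use an application performance model for resource estimation, in the spirit described above, is to measure the peak input rate a single instance of each dataflow operator can sustain and then size each operator's parallelism for a target input stream rate. The sketch below shows that arithmetic for a linear dataflow; the operator names, per-instance peak rates, and selectivities are hypothetical values, not results from the article.

```python
# Hedged sketch: size operator parallelism from a simple performance model.
# peak_rate   = messages/sec one operator instance can sustain (measured offline).
# selectivity = output messages emitted per input message by that operator.
import math

def required_instances(target_rate, dataflow):
    """Return {operator: instance count} for a linear dataflow at target_rate."""
    plan, rate = {}, target_rate
    for op in dataflow:  # operators listed in upstream-to-downstream order
        plan[op["name"]] = math.ceil(rate / op["peak_rate"])
        rate *= op["selectivity"]  # input rate seen by the next operator
    return plan

# Hypothetical IoT dataflow: parse -> filter -> aggregate (illustrative numbers).
dataflow = [
    {"name": "parse",     "peak_rate": 8000.0,  "selectivity": 1.0},
    {"name": "filter",    "peak_rate": 12000.0, "selectivity": 0.4},
    {"name": "aggregate", "peak_rate": 3000.0,  "selectivity": 1.0},
]
print(required_instances(50000, dataflow))
```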