4,291 research outputs found

    Enhanced Machine Learning Techniques for Early HARQ Feedback Prediction in 5G

    We investigate Early Hybrid Automatic Repeat reQuest (E-HARQ) feedback schemes enhanced by machine learning techniques as a path towards ultra-reliable and low-latency communication (URLLC). To this end, we propose machine learning methods to predict the outcome of the decoding process ahead of the end of the transmission. We discuss different input features and classification algorithms ranging from traditional methods to newly developed supervised autoencoders. These methods are evaluated based on their prospects of complying with the URLLC requirements of effective block error rates below $10^{-5}$ at small latency overheads. We provide realistic performance estimates in a system model incorporating scheduling effects to demonstrate the feasibility of E-HARQ across different signal-to-noise ratios, subcode lengths, channel conditions and system loads, and show the benefit over regular HARQ and existing E-HARQ schemes without machine learning. Comment: 14 pages, 15 figures; accepted version.
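    As a rough illustration of the scheme described above, the following sketch trains a classifier to predict decode success from per-subcode statistics and then applies a conservative acceptance threshold; the features, synthetic data and threshold are hypothetical stand-ins, not the paper's.

```python
# Hypothetical sketch of classifier-based early HARQ feedback prediction.
# Feature names and data are illustrative, not taken from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for per-subcode features (e.g., mean and variance of the
# LLRs observed before the transmission ends); label 1 = decoding succeeds.
n = 10_000
llr_mean = rng.normal(2.0, 1.0, n)
llr_var = rng.gamma(2.0, 1.0, n)
X = np.column_stack([llr_mean, llr_var])
y = (llr_mean + 0.1 * rng.normal(size=n) > 1.5).astype(int)

clf = LogisticRegression().fit(X[:8000], y[:8000])

# Predict an early ACK/NACK; bias the decision threshold so that false
# ACKs (predicted success, actual failure) stay rare, mimicking the
# strict residual-error targets that URLLC imposes.
p_success = clf.predict_proba(X[8000:])[:, 1]
early_ack = p_success > 0.99  # conservative, tunable threshold
false_ack_rate = np.mean(early_ack & (y[8000:] == 0))
print(f"false-ACK rate: {false_ack_rate:.4f}")
```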

    How to tell if your cloud files are vulnerable to drive crashes

    This paper presents a new challenge: verifying that a remote server is storing a file in a fault-tolerant manner, i.e., such that it can survive hard-drive failures. We describe an approach called the Remote Assessment of Fault Tolerance (RAFT). The key technique in a RAFT is to measure the time taken for a server to respond to a read request for a collection of file blocks. The larger the number of hard drives across which a file is distributed, the faster the read-request response. Erasure codes also play an important role in our solution. We describe a theoretical framework for RAFTs and offer experimental evidence that RAFTs can work in practice in several settings of interest.
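    A minimal sketch of the timing idea, with hypothetical names and a simulated read latency; a real RAFT would issue network reads and calibrate the time budget against the expected seek behaviour of a multi-drive layout.

```python
# Illustrative RAFT-style timing check (all names hypothetical).
# Idea from the paper: reading many random blocks is faster when the file
# is spread over more drives, so response time reveals the layout.
import random
import time

def read_block(server, block_id):
    """Placeholder for a remote read; in a real check the latency would
    be dominated by network round-trips and drive seeks."""
    time.sleep(0.001)  # stand-in latency

def raft_challenge(server, n_blocks, n_samples=64, budget_s=0.1):
    """Time a batch of random block reads; accept only if the server
    answers within the budget expected of a multi-drive layout."""
    blocks = random.sample(range(n_blocks), n_samples)
    start = time.monotonic()
    for b in blocks:
        read_block(server, b)
    elapsed = time.monotonic() - start
    return elapsed <= budget_s

print(raft_challenge("server-A", n_blocks=4096))
```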

    Optical Time-Frequency Packing: Principles, Design, Implementation, and Experimental Demonstration

    Time-frequency packing (TFP) transmission provides the highest achievable spectral efficiency with a constrained symbol alphabet and detector complexity. In this work, the application of the TFP technique to fiber-optic systems is investigated and experimentally demonstrated. The main theoretical aspects, design guidelines, and implementation issues are discussed, focusing on those aspects which are peculiar to TFP systems. In particular, adaptive compensation of propagation impairments, matched filtering, and maximum a posteriori probability detection are obtained by a combination of a butterfly equalizer and four 8-state parallel Bahl-Cocke-Jelinek-Raviv (BCJR) detectors. A novel algorithm that ensures adaptive equalization, channel estimation, and a proper distribution of tasks between the equalizer and BCJR detectors is proposed. A set of irregular low-density parity-check codes with different rates is designed to operate at low error rates and approach the spectral efficiency limit achievable by TFP at different signal-to-noise ratios. An experimental demonstration of the designed system is finally provided with five dual-polarization QPSK-modulated optical carriers, densely packed in a 100 GHz bandwidth, employing a recirculating loop to test the performance of the system at different transmission distances. Comment: This paper has been accepted for publication in the IEEE/OSA Journal of Lightwave Technology.
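    The core packing idea can be illustrated in a few lines of numpy: symbols are spaced by tau*T with tau < 1, so pulses overlap deliberately and the resulting intersymbol interference is left to a sequence detector (the paper uses BCJR detectors). All parameters below are illustrative.

```python
# Minimal sketch of time packing (the frequency dimension is analogous:
# carriers spaced closer than the orthogonality limit). Parameters invented.
import numpy as np

T = 1.0       # nominal (orthogonal) symbol period
tau = 0.8     # time-packing factor < 1: symbols every tau*T seconds
fs = 32       # samples per period
n_sym = 16

t = np.arange(int(n_sym * tau * T * fs) + fs) / fs
symbols = np.random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_sym)

signal = np.zeros_like(t, dtype=complex)
for k, s in enumerate(symbols):
    # rectangular pulse shifted by k*tau*T: adjacent pulses overlap by 20%
    pulse = ((t >= k * tau * T) & (t < k * tau * T + T)).astype(float)
    signal += s * pulse

overlap = np.sum(
    [(t >= k * tau * T) & (t < k * tau * T + T) for k in range(n_sym)], axis=0
)
print("pulses simultaneously active (time packing):", overlap.max())
```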

    Dynamic reconfiguration in distributed hard real-time systems


    CO2 Highways for Europe: Modelling a Carbon Capture, Transport and Storage Infrastructure for Europe. CEPS Working Document No. 340/November 2010

    This paper presents a mixed integer, multi-period, cost-minimising model for a carbon capture, transport and storage (CCTS) network in Europe. The model incorporates endogenous decisions about carbon capture, pipeline and storage investments. The capture, flow and injection quantities are based on given costs, certificate prices, storage capacities and point source emissions. The results indicate that CCTS can theoretically contribute to the decarbonisation of Europe’s energy and industrial sectors. This requires a CO2 certificate price rising to €55 per tCO2 in 2050, and sufficient CO2 storage capacity available for both on- and offshore sites. Yet CCTS deployment is highest in CO2-intensive industries where emissions cannot be avoided by fuel switching or alternative production processes. In all scenarios, the importance of the industrial sector as a first-mover to induce the deployment of CCTS is highlighted. By contrast, a decrease in available storage capacity or a more moderate increase in CO2 prices will significantly reduce the role of CCTS as a CO2 mitigation technology, especially in the energy sector. Furthermore, continued public resistance to onshore CO2 storage can only be overcome by constructing expensive offshore storage. Under this restriction, reaching the same levels of CCTS penetration would require a doubling of CO2 certificate prices.
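    A toy, single-period version of such a cost-minimising capture-transport-storage model can be written down with PuLP; all numbers are invented for illustration, and the integer pipeline-investment decisions of the full model are omitted here, so the sketch reduces to a small LP.

```python
# Toy capture/transport/storage model in the spirit of the paper's
# cost-minimising network model (all figures invented for illustration).
import pulp

sources = {"plant_A": 10.0, "plant_B": 6.0}    # Mt CO2 emitted per source
cap_cost = {"plant_A": 40.0, "plant_B": 50.0}  # EUR/t capture cost
pipe_cost = 5.0      # EUR/t transport
store_cost = 8.0     # EUR/t injection
co2_price = 55.0     # EUR/t certificate price (the paper's 2050 level)
storage_cap = 12.0   # Mt of available storage

m = pulp.LpProblem("toy_ccts", pulp.LpMinimize)
cap = {s: pulp.LpVariable(f"cap_{s}", 0, e) for s, e in sources.items()}

# Emit-or-capture trade-off: pay certificates on uncaptured CO2, pay the
# CCTS chain on captured CO2; the optimizer captures where the chain is
# cheaper than the certificate price.
m += pulp.lpSum(
    (cap_cost[s] + pipe_cost + store_cost) * cap[s]
    + co2_price * (sources[s] - cap[s])
    for s in sources
)
m += pulp.lpSum(cap.values()) <= storage_cap  # shared storage limit

m.solve(pulp.PULP_CBC_CMD(msg=False))
for s in sources:
    print(s, "captured:", cap[s].value(), "Mt")
```

    With these numbers the full CCTS chain costs 53 EUR/t at plant_A (below the certificate price, so it captures everything) and 63 EUR/t at plant_B (above it, so it keeps emitting), mirroring the paper's point that deployment concentrates where the chain undercuts the CO2 price.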

    CO2 Highways for Europe: Modeling a Carbon Capture, Transport and Storage Infrastructure for Europe

    We present a mixed integer, multi-period, cost-minimizing carbon capture, transport and storage (CCTS) network model for Europe. The model incorporates endogenous decisions about carbon capture, pipeline and storage investments; capture, flow and injection quantities are based on given costs, certificate prices, storage capacities and point source emissions. The results indicate that CCTS can theoretically contribute to the decarbonization of Europe's energy and industry sectors. This requires a CO2 certificate price rising to EUR 55 per tCO2 in 2050, and sufficient CO2 storage capacity available for both on- and offshore sites. However, CCTS deployment is highest in CO2-intensive industries where emissions cannot be avoided by fuel switching or alternative production processes. In all scenarios, the importance of the industrial sector as a first mover to induce the deployment of CCTS is highlighted. By contrast, a decrease in available storage capacity or a more moderate increase in CO2 prices will significantly reduce the role of CCTS as a CO2 mitigation technology, especially in the energy sector. Continued public resistance to onshore CO2 storage can only be overcome by constructing expensive offshore storage. Under this restriction, reaching the same levels of CCTS penetration would require a doubling of CO2 certificate prices.
    Keywords: carbon capture and storage, pipeline, infrastructure, optimization

    Concatenation of the Gottesman-Kitaev-Preskill code with the XZZX surface code

    Bosonic codes provide an alternative option for quantum error correction. An important category of bosonic codes, the Gottesman-Kitaev-Preskill (GKP) code, has aroused much interest recently. Theoretically, the error correction ability of the GKP code is limited, since it can only correct small shift errors in the position and momentum quadratures. A natural approach to promoting GKP error correction to large-scale, fault-tolerant quantum computation is to concatenate encoded GKP states with a stabilizer code. In this paper, the performance of the XZZX surface-GKP code, i.e., the single-mode GKP code concatenated with the XZZX surface code, is investigated under two different noise models. First, in the code-capacity noise model, the asymmetric rectangular GKP code with parameter $\lambda$ is introduced. Using the minimum weight perfect matching decoder combined with the continuous-variable GKP information, the optimal threshold of the XZZX surface-GKP code reaches $\sigma \approx 0.67$ when $\lambda = 2.1$, compared with the threshold $\sigma \approx 0.60$ of the standard surface-GKP code. Second, we analyze the shift errors of two-qubit gates in the actual implementation and build the full circuit-level noise model. By setting appropriate bias parameters, the logical error rate is reduced by several times in some cases. These results indicate that XZZX surface-GKP codes are more suitable for asymmetric concatenation under general noise models. We also estimate the overhead of the XZZX surface-GKP code: it uses about 291 GKP states with the noise parameter 18.5 dB ($\kappa/g \approx 0.71\%$) to encode a logical qubit with an error rate of $2.53 \times 10^{-7}$, whereas a qubit-based surface code needs 3041 qubits to achieve almost the same logical error rate. Comment: 17 pages, 10 figures.
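    A leading-order back-of-the-envelope for the GKP part: assuming Gaussian shifts of strength sigma and nearest-lattice-point decoding (wrap-around terms neglected), the misidentification probability per quadrature follows from the Gaussian tail, and the rectangular lambda deformation trades error probability between the two quadratures, which is exactly the kind of bias the XZZX code tolerates well. A sketch under these assumptions:

```python
# Leading-order GKP misidentification rates under Gaussian shift noise
# (assumptions: nearest-lattice-point decoding, wrap-around neglected),
# illustrating how the rectangular lambda deformation biases the noise.
import numpy as np
from scipy.special import erfc

def gkp_flip_prob(sigma, half_spacing):
    """P(|shift| > half_spacing) for shift ~ N(0, sigma^2)."""
    return erfc(half_spacing / (np.sqrt(2) * sigma))

sigma, lam = 0.5, 2.1          # noise strength; the paper's bias parameter
a = np.sqrt(np.pi) / 2         # correctable half-spacing of the square GKP code
p_q = gkp_flip_prob(sigma, lam * a)  # widened quadrature: rarer flips
p_p = gkp_flip_prob(sigma, a / lam)  # narrowed quadrature: more flips

print(f"square GKP : p = {gkp_flip_prob(sigma, a):.3e} per quadrature")
print(f"rect (lam) : p_q = {p_q:.3e}, p_p = {p_p:.3e}")
```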

    Comparative study of quantum error correction strategies for the heavy-hexagonal lattice

    Topological quantum error correction is a milestone in the scaling roadmap of quantum computers, which targets circuits with trillions of gates that would allow running quantum algorithms for real-world problems. The square-lattice surface code has become the workhorse for this challenge, as it poses milder requirements on current devices both in terms of required error rates and small local connectivities. In some platforms, however, the connectivities are kept even lower in order to minimise gate errors at the hardware level, which limits the error correcting codes that can be directly implemented on them. In this work, we make a comparative study of possible strategies to overcome this limitation for the heavy-hexagonal lattice, the architecture of current IBM superconducting quantum computers. We explore two complementary strategies: the search for an efficient embedding of the surface code into the heavy-hexagonal lattice, and the use of codes whose connectivity requirements are naturally tailored to this architecture, such as subsystem-type and Floquet codes. Using noise models of increasing complexity, we assess the performance of these strategies for IBM devices in terms of their error thresholds and qubit footprints. An optimized SWAP-based embedding of the surface code is found to be the most promising strategy towards a near-term demonstration of quantum error correction advantage.
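    The connectivity gap that motivates the study can be checked directly, assuming Qiskit's CouplingMap.from_heavy_hex helper: heavy-hex qubits have at most three neighbours, while a native surface-code layout needs degree four, hence the need for SWAP-based embeddings or natively tailored codes.

```python
# Degree check on the heavy-hexagonal lattice (assumes Qiskit provides
# CouplingMap.from_heavy_hex; distance must be odd).
from collections import Counter
from qiskit.transpiler import CouplingMap

cmap = CouplingMap.from_heavy_hex(distance=5)
edges = {frozenset(e) for e in cmap.get_edges()}  # dedupe directed pairs
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print("max qubit degree on heavy-hex:", max(degree.values()))  # expect 3
print("degree needed for a native surface-code layout:", 4)
```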