54 research outputs found

    Early-Stopped Technique for BCH Decoding Algorithm Under Tolerant Fault Probability

    In this paper, a technique for the Berlekamp-Massey (BM) algorithm is provided to reduce decoding latency and save decoding power through early termination, or early-stopped checking. We observe consecutive zero discrepancies during the decoding iterations and use them to stop the decoding process early. This technique trades a small probability of decoding failure for reduced decoding latency. We analyze the proposed technique by considering the weight distribution of the BCH code and estimating bounds on the undetected error probability, i.e., the event of an erroneous early stop. Numerical results show that the proposed method is effective: the probability of decoding failure is lower than 10^−119 when decoding BCH codes of length 16383. Furthermore, we compare the complexity of the conventional early-termination method with that of the proposed approach for decoding long BCH codes; the proposed approach reduces the complexity of the conventional approach by up to 80%. Finally, FPGA testing on a USB device validates the reliability of the proposed method.
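The early-stop idea can be sketched with the binary (GF(2)) form of the Berlekamp-Massey algorithm, which finds the shortest LFSR generating a sequence. The zero-run threshold below and the simplified binary field are illustrative assumptions for exposition, not the paper's exact stopping criterion over GF(2^m):

```python
def berlekamp_massey_gf2(s, zero_run_stop=None):
    """Berlekamp-Massey over GF(2): shortest LFSR generating s.

    If zero_run_stop is set, terminate after that many consecutive
    zero discrepancies (the early-stop heuristic sketched here;
    the threshold is a tunable assumption)."""
    n = len(s)
    C = [1] + [0] * n   # current connection polynomial
    B = [1] + [0] * n   # previous connection polynomial
    L, m = 0, 1
    zero_run = 0
    for i in range(n):
        # discrepancy d = s[i] + sum_{j=1..L} C[j]*s[i-j]  (mod 2)
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            zero_run += 1
            m += 1
            if zero_run_stop and zero_run >= zero_run_stop:
                break              # early termination: stop iterating
        else:
            zero_run = 0
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]   # C(x) <- C(x) + x^m * B(x)
            if 2 * L <= i:
                L, B, m = i + 1 - L, T, 1
            else:
                m += 1
    return L, C[:L + 1]
```

For the alternating sequence 010101 both the full run and a run stopped after three consecutive zero discrepancies return linear complexity 2 with connection polynomial 1 + x^2, showing the early stop does not change the answer here.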

    QRNG Entropy as a Service (EaaS) Platform for Quantum-Safe Entropy and Key Delivery

    Random numbers [1] are widely used in numerical computing, statistical simulation, random sampling, etc., and their security is receiving growing attention. Beyond random numbers themselves, the security of information and network environments [2] is also very important in everyday life, for example in generating verification codes for login or QR codes for online payment. Any random number tied to important identity information must be highly secure so that personal privacy cannot be leaked or illegally stolen. At present, the mechanisms computers use to generate random numbers are at risk of attack, meaning the generated numbers may be predictable in some cases. Random number generation (RNG) [3] has therefore always been one of the biggest problems. The weakness of classic random number generators, i.e., pseudorandom number generators, is that an unwanted party may learn the deterministic process behind the pseudorandom generation; in cryptography, this can compromise the security of the whole system. Another problem is incorrect handling of the generated sequence: in most cryptographic uses, a generated random sequence must be applied only once, and reusing it can lead to a security breach. For example, in the one-time pad (OTP) cipher, a sufficiently long key must be truly random and used only once in that protocol; otherwise it becomes possible to break the code.
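The one-time-pad reuse hazard mentioned above can be shown in a few lines: encrypting two plaintexts under the same key lets an eavesdropper XOR the two ciphertexts and cancel the key entirely. The messages and key below are illustrative:

```python
import os

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)            # a fresh, truly random pad
p1 = b"ATTACK AT DAWN!!"
p2 = b"RETREAT AT DUSK!"
c1 = xor_bytes(p1, key)
c2 = xor_bytes(p2, key)         # key reuse: the fatal mistake

# An eavesdropper who sees both ciphertexts can cancel the key:
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(p1, p2)   # key drops out entirely
```

The leaked XOR of the two plaintexts is often enough to recover both messages via crib dragging, which is why the OTP key must never be reused.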

    Early-Stopped Approach and Analysis for the Berlekamp-Massey Algorithm

    BCH codes are widely used in commercial NAND flash controllers, and decoding based on the Berlekamp-Massey (BM) algorithm is the classic solution to the key equation used for error correction. The latency of BM decoding is the bottleneck of the Bose-Chaudhuri-Hocquenghem (BCH) decoder when correcting a high number of bit errors. However, flash memory has an error distribution that degrades with usage: new memory produces few errors, with only a low number of errors within a code block, while with usage the system performance degrades and BM decoding needs t iterations in order to correct a larger number t of errors. To improve system performance in high-speed applications, early termination of BM decoding is necessary to overcome this degradation. In this paper, a practical early-termination check for the BM algorithm is provided. The proposed method is analyzed by considering the weight distribution of the BCH code and deriving the probability of malfunction as the event of an undetectable error. Numerical results show the proposed method to be effective, with a probability of malfunction lower than 10^−26. Finally, FPGA testing on a USB device validates the reliability of the proposed method for application in a commercial product.
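The weight-distribution analysis mentioned above typically rests on the textbook undetected-error probability of a code over a binary symmetric channel. The formula below is that standard bound, offered as a sketch of the kind of estimate involved, not the paper's exact derivation; here A_w is the number of codewords of Hamming weight w, n the code length, d_min the minimum distance, and p the raw bit-error rate:

```latex
% Undetected-error probability over a BSC with crossover probability p:
P_{ud}(p) \;=\; \sum_{w=d_{\min}}^{n} A_w \, p^{w} \, (1-p)^{\,n-w}
```

For small p this sum is dominated by its first term, which is why codes with large minimum distance make the malfunction event vanishingly rare.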

    Low SLA violation and Low Energy consumption using VM Consolidation in Green Cloud Data Centers

    Virtual Machine (VM) consolidation is an efficient route to energy conservation in cloud data centers. The VM consolidation technique migrates VMs onto a smaller number of active Physical Machines (PMs), so that PMs left without VMs can be put into a sleep state. This reduces the energy consumption of cloud data centers because a PM in sleep state consumes far less energy. However, because VMs share the underlying physical resources, aggressive consolidation can lead to performance degradation. Furthermore, an application may encounter an unexpected resource demand, which may lead to increased response times or even failures. Before providing cloud services, cloud providers sign Service Level Agreements (SLAs) with customers, so providing reliable Quality of Service (QoS) is essential for them and central to this research topic. To strike a tradeoff between energy and performance, we consider minimizing energy consumption under the constraint of meeting the SLA. One of the optimization challenges is to decide which VMs to migrate, when and where to migrate them, and when and which servers to turn on or off. To achieve this goal optimally, it is important to predict the future host state accurately and plan VM migrations based on that prediction. For example, if a host will be overloaded at the next time unit, some VMs should be migrated away to keep it from overloading; if a host will be underloaded at the next time unit, all of its VMs should be migrated so the host can be turned off to save power. The design goal of the controller is to balance server energy consumption against application performance. Because of the heterogeneity of cloud resources and the variety of applications in the cloud environment, the workload on hosts changes dynamically over time.
    It is therefore essential to develop accurate workload prediction models for effective resource management and allocation. A disadvantage of existing VM consolidation approaches in cloud data centers is that they concentrate only on primitive system characteristics, such as CPU utilization, memory, and the number of active hosts, as the decisive factors in their models. These characteristics ignore the discrepancy in performance-to-power efficiency between heterogeneous infrastructures, leading to unreasonable consolidation decisions that cause redundant VM migrations and wasted energy. Advanced artificial intelligence techniques such as reinforcement learning can learn a management strategy without prior knowledge, enabling a model-free resource allocation control system: for example, VM consolidation decisions could be driven by a learned predictor rather than by current resource utilization alone.
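The overload/underload policy described above can be sketched as a threshold rule over predicted next-step utilization. The host names, thresholds, and the idea of an external predictor are all illustrative assumptions, not the abstract's specific controller:

```python
# Hypothetical threshold-based consolidation plan from predicted load.
OVER, UNDER = 0.8, 0.2   # illustrative CPU utilization thresholds

def plan_migrations(predicted_util, vms_on_host):
    """Decide a per-host action from predicted next-step CPU utilization."""
    plan = {}
    for host, util in predicted_util.items():
        if util > OVER:
            plan[host] = "migrate-some"        # offload VMs to avoid overload
        elif util < UNDER and vms_on_host.get(host):
            plan[host] = "evacuate-and-sleep"  # move all VMs, power host down
        else:
            plan[host] = "keep"                # leave the host as-is
    return plan

plan = plan_migrations(
    {"h1": 0.92, "h2": 0.10, "h3": 0.55},
    {"h1": ["vm1", "vm2"], "h2": ["vm3"], "h3": ["vm4"]},
)
```

A real controller would replace the fixed thresholds with a learned policy (e.g., reinforcement learning) and also choose destination hosts by performance-to-power efficiency.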

    An Optimization Approach for an RLL-Constrained LDPC Coded Recording System Using Deliberate Flipping

    For a recording system that has a run-length-limited (RLL) constraint, this approach imposes hard errors by deliberately flipping bits before recording. A high code rate limits the capability of correcting the RLL-induced bit errors. Since iterative decoding does not require an estimation technique, an LDPC coded system has the potential to resolve the hard error bits within several iterations. In this letter, we apply density evolution and the differential evolution approach to evaluate the performance of unequal error protection LDPC codes and to investigate the optimal LDPC code degree distribution for an RLL flipped system.
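The deliberate-flipping step can be illustrated with a greedy pass that breaks any run of identical bits longer than a limit k before recording; the flipped positions then appear to the LDPC decoder as hard errors to correct. The value of k and this particular greedy policy are illustrative assumptions:

```python
def flip_for_rll(bits, k):
    """Deliberately flip bits so no run of identical bits exceeds k.

    Returns (constrained_bits, flip_positions); the LDPC decoder is
    expected to correct the flips as hard errors after readback.
    """
    out = list(bits)
    flips = []
    run = 1
    for i in range(1, len(out)):
        run = run + 1 if out[i] == out[i - 1] else 1
        if run > k:
            out[i] ^= 1       # flip to break the over-long run
            flips.append(i)
            run = 1
    return out, flips
```

For example, with k = 2 the sequence 00001 is recorded as 00101 with a single flip at position 2, so the recorded stream satisfies the constraint at the cost of one hard error.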

    AI SMEs IN INDUSTRIAL MANAGEMENT

    SMEs form the backbone of the Romanian economy, creating a large share of the country's jobs and added value, which makes them important in this context. IoT and cloud computing are advancing in Romania, but Romanian SMEs encounter difficulties in adopting these systems. Nevertheless, the current literature does not focus heavily on SMEs and their particular challenges, nor does it include many case studies on maturity levels of cloud computing and IoT technologies. This research seeks to contribute to the field of IoT and maturity models by adding research that is specific to SMEs in Romania. The insights drawn from the conclusions of this thesis aim to help SMEs and researchers assess maturity levels and address the challenges connected with the adoption of either IoT or cloud computing technologies.

    On soft iterative decoding for ternary recording systems with RLL constraints

    In this paper, we investigate soft iterative decoding techniques for ternary recording systems with run-length-limited (RLL) constraints. We employ a simple binary-to-ternary RLL encoder following the LDPC (low-density parity-check) encoder. The decoder iteratively passes soft information between the LDPC decoder and a detector, where the detector is constructed for the combination of the RLL encoder, the PLM (pulse length modulation) precoder, and the partial response channel. We provide two different decoding algorithms. For one of them, we obtain bit-error-rate performance that is inferior to a comparable system without the RLL constraint in the high signal-to-noise ratio (SNR) regime, but better in the low-to-moderate SNR regime.