
    An Optimal Unequal Error Protection LDPC Coded Recording System

    For efficient modulation and error-control coding, the deliberate flipping approach imposes the run-length-limited (RLL) constraint by introducing bit errors before recording. On the read side, a high coding rate limits the capability to correct these RLL-induced bit errors. In this paper, we study low-density parity-check (LDPC) coding for an RLL-constrained recording system based on an unequal error protection (UEP) coding scheme design. The UEP capability of irregular LDPC codes is used to recover the flipped bits. We provide an allocation technique that confines the flipped bits to positions with robust correction capability. In addition, we design the signal labeling to decrease the number of nearest neighbors and thereby strengthen the robust bits. We also apply the density evolution technique to the proposed system to evaluate code performance, and we use the EXIT characteristic to reveal the decoding behavior of the recommended code distribution. Finally, the best distribution for the proposed system is obtained through differential-evolution optimization.
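    As a minimal sketch of the deliberate-flipping idea (the run-length limit k and the robust-position mask are illustrative assumptions, not the paper's actual construction), the following Python fragment enforces a maximum run length by flipping the bit that would extend a run too far, recording where the forced flips land so a UEP allocation can align them with strongly protected bit positions:

        # Sketch: enforce a maximum run length k by deliberate bit flips,
        # recording flip positions so a UEP allocation can verify they fall
        # on strongly protected ("robust") bits of the LDPC codeword.
        def flip_for_rll(bits, k, robust):
            out = list(bits)
            flips = []
            run = 1
            for i in range(1, len(out)):
                run = run + 1 if out[i] == out[i - 1] else 1
                if run > k:
                    out[i] ^= 1                   # deliberate flip breaks the run
                    flips.append((i, robust[i]))  # position and its protection level
                    run = 1
            return out, flips

    In the UEP design described above, the modulation mapping would then be arranged so that these forced-flip positions coincide with high-degree, strongly protected variable nodes of the irregular LDPC code.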

    Early-Stopped Technique for BCH Decoding Algorithm Under Tolerant Fault Probability

    In this paper, a technique for the Berlekamp-Massey (BM) algorithm is provided to reduce decoding latency and save decoding power through early termination, i.e., early-stopped checking. We monitor consecutive zero discrepancies during the decoding iterations and stop the decoding process early once enough of them are observed. The technique accepts a small probability of decoding failure in exchange for lower decoding latency. We analyze the proposed technique by considering the weight distribution of the BCH code and estimating bounds on the probability of undetected error, i.e., the event of an erroneous early stop. Numerical results show the proposed method to be effective: the probability of decoding failure is lower than 10^{-119} when decoding BCH codes of length 16383. Furthermore, we compare the complexity of the conventional early-termination method with that of the proposed approach for decoding long BCH codes; the proposed approach reduces the complexity of the conventional approach by up to 80%. Finally, FPGA testing on a USB device validates the reliability of the proposed method.
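    For illustration, here is a minimal Python sketch of the Berlekamp-Massey recursion over GF(2) with the consecutive-zero-discrepancy stopping rule described above; the tolerance zero_limit is an assumed parameter that trades latency against the probability of an erroneous stop:

        def bm_early_stop(s, zero_limit=4):
            """Berlekamp-Massey over GF(2), stopping early once the
            discrepancy has been zero for zero_limit consecutive steps."""
            C, B = [1], [1]      # current / previous connection polynomials
            L, m = 0, 1          # LFSR length, steps since last length change
            zero_run = 0
            for n in range(len(s)):
                d = s[n]                          # discrepancy over GF(2)
                for i in range(1, L + 1):
                    d ^= C[i] & s[n - i]
                if d == 0:
                    m += 1
                    zero_run += 1
                    if zero_run >= zero_limit:    # early stop: accept C as-is
                        break
                    continue
                zero_run = 0
                T = list(C)
                C += [0] * (len(B) + m - len(C))  # C(x) += x^m * B(x)
                for i, b in enumerate(B):
                    C[i + m] ^= b
                if 2 * L <= n:
                    L, B, m = n + 1 - L, T, 1
                else:
                    m += 1
            return C, L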

    Early-Stopped Approach and Analysis for the Berlekamp-Massey Algorithm

    Bose-Chaudhuri-Hocquenghem (BCH) codes are widely used in commercial NAND flash controllers, and decoding based on the Berlekamp-Massey (BM) algorithm is the classic solution to the key equation used for error correction. The latency of BM decoding is the bottleneck of the BCH decoder when a high number of bit errors must be corrected. Flash memory, however, has an error distribution that degrades with usage: a new memory exhibits few errors, and only a low number of errors occur within a code block. As usage accumulates, system performance degrades, and BM decoding needs t iterations to correct a larger number t of errors. To improve system performance for high-speed applications, early termination of BM decoding is necessary to overcome this degradation. In this paper, a practical early-termination check for the BM algorithm is provided. The proposed method is analyzed by considering the weight distribution of the BCH code and deriving the probability of malfunction as the event of an undetectable error. Numerical results show the proposed method to be effective, with a malfunction probability lower than 10^{-26}. Finally, FPGA testing on a USB device validates the reliability of the proposed method for application in a commercial product.
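    As a hedged sketch of that weight-distribution argument (assuming the standard binomial approximation for the weight spectrum of a primitive BCH code, which may differ from the exact distribution used in the paper), the malfunction probability can be bounded by the undetected-error probability on a binary symmetric channel with crossover probability p:

        A_w \approx \binom{n}{w}\, 2^{-(n-k)},
        \qquad
        P_{\text{malfunction}} \le \sum_{w=d_{\min}}^{n} A_w\, p^w (1-p)^{n-w},

    where A_w denotes the number of codewords of Hamming weight w and d_min is the minimum distance of the (n, k) code.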

    Low SLA violation and Low Energy consumption using VM Consolidation in Green Cloud Data Centers

    Virtual machine (VM) consolidation is an efficient way to conserve energy in cloud data centers. The VM consolidation technique migrates VMs onto a smaller number of active physical machines (PMs) so that PMs left without VMs can be switched to a sleep state. Because a PM in sleep state consumes far less energy, VM consolidation reduces the energy consumption of the data center. However, since VMs share the underlying physical resources, aggressive consolidation can lead to performance degradation. Furthermore, an application may encounter an unexpected resource demand, which may lead to increased response times or even failures. Before providing cloud services, cloud providers sign Service Level Agreements (SLAs) with customers, so providing reliable Quality of Service (QoS) is an important consideration for providers. To strike a tradeoff between energy and performance, we aim to minimize energy consumption on the premise of meeting the SLA.

    One of the optimization challenges is to decide which VMs to migrate, when and where to migrate them, and when and which servers to turn on or off. To achieve this optimally, it is important to predict the future host state accurately and to plan VM migrations based on the prediction. For example, if a host will be overloaded at the next time unit, some VMs should be migrated away to keep it from overloading; if a host will be underloaded at the next time unit, all its VMs should be migrated so that the host can be turned off to save power. The design goal of the controller is to balance server energy consumption against application performance. Because of the heterogeneity of cloud resources and the variety of applications in the cloud environment, the workload on hosts changes dynamically over time, so accurate workload prediction models are essential for effective resource management and allocation.

    A weakness of existing VM consolidation approaches in cloud data centers is that they concentrate only on primitive system characteristics such as CPU utilization, memory, and the number of active hosts. Using these characteristics as the decisive factors in their models ignores the discrepancy in performance-to-power efficiency between heterogeneous infrastructures, which leads to unreasonable consolidation, redundant VM migrations, and wasted energy. Advanced artificial intelligence such as reinforcement learning can learn a management strategy without prior knowledge, enabling a model-free resource allocation control system; for example, VM consolidation decisions could be driven by learned predictions rather than by current resource utilization alone.
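    As a minimal sketch of the prediction-driven consolidation decision described above (the linear-extrapolation predictor, thresholds, and window size are illustrative assumptions, not the thesis's actual model), in Python:

        import numpy as np

        OVER, UNDER, WINDOW = 0.8, 0.2, 12   # assumed thresholds / history length

        def predict_next(history):
            """Forecast the next CPU utilization by extrapolating a linear fit."""
            h = history[-WINDOW:]
            if len(h) < 2:
                return float(h[-1])
            slope, intercept = np.polyfit(np.arange(len(h)), h, 1)
            return float(np.clip(slope * len(h) + intercept, 0.0, 1.0))

        def plan_migrations(hosts):
            """hosts: dict host_id -> list of past CPU utilizations in [0, 1].
            Returns hosts to partially offload and hosts to empty and sleep."""
            overloaded, underloaded = [], []
            for hid, hist in hosts.items():
                u = predict_next(hist)
                if u > OVER:
                    overloaded.append(hid)    # migrate some VMs away
                elif u < UNDER:
                    underloaded.append(hid)   # migrate all VMs, put host to sleep
            return overloaded, underloaded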

    Joint QoS-Aware Scheduling and Precoding for Massive MIMO Systems via Deep Reinforcement Learning

    The rapid development of mobile networks drives demand for high-data-rate, low-latency, and high-reliability applications in fifth-generation (5G) and beyond-5G (B5G) mobile networks. Concurrently, massive multiple-input multiple-output (MIMO) technology is essential to realizing this vision and must be coordinated with resource management functions to deliver a high user experience. Although conventional cross-layer adaptation algorithms have been developed to schedule and allocate network resources, the resulting rules become highly complex under diverse quality-of-service (QoS) requirements and B5G features. In this work, we consider a joint user scheduling, antenna allocation, and precoding problem in a massive MIMO system. Instead of directly assigning resources such as the number of antennas, the allocation process is transformed into a deep reinforcement learning (DRL) based dynamic algorithm selection problem, which allows efficient Markov decision process (MDP) modeling and policy training. Specifically, the proposed utility function integrates QoS requirements and constraints into a long-term system-wide objective that matches the MDP return. A componentized action structure with action embedding further incorporates the resource management process into the model. Simulations show 7.2% and 12.5% more satisfied users than static algorithm selection and related works, respectively, under demanding scenarios.
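    As an illustration of casting allocation as algorithm selection (the candidate scheduler names and the tabular Q-learning agent are assumptions made for brevity; the paper trains a deep RL policy with action embeddings), a sketch in Python:

        import random
        from collections import defaultdict

        ALGOS = ["round_robin", "max_rate", "proportional_fair"]  # assumed candidates

        class AlgoSelector:
            """Epsilon-greedy Q-learning over which scheduling algorithm to run."""
            def __init__(self, eps=0.1, alpha=0.1, gamma=0.95):
                self.q = defaultdict(lambda: [0.0] * len(ALGOS))
                self.eps, self.alpha, self.gamma = eps, alpha, gamma

            def act(self, state):
                if random.random() < self.eps:          # explore
                    return random.randrange(len(ALGOS))
                return max(range(len(ALGOS)), key=lambda a: self.q[state][a])

            def learn(self, s, a, reward, s_next):
                # reward: QoS utility observed after applying ALGOS[a] for one slot
                target = reward + self.gamma * max(self.q[s_next])
                self.q[s][a] += self.alpha * (target - self.q[s][a])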

    AI SMEs IN INDUSTRIAL MANAGEMENT

    SMEs form the backbone of the Romanian economy, creating a large share of the jobs and added value within the nation, which makes them important in this context. IoT and cloud computing are advancing in Romania, yet Romanian SMEs encounter difficulties when adopting these systems. Nevertheless, the current literature does not focus heavily on SMEs and their particular challenges, nor does it include many case studies on the maturity levels of cloud computing and IoT technologies. The outcome of this research seeks to contribute to the field of IoT and maturity models by adding research that is specific to SMEs in Romania. The insights generated by the findings of this thesis aim to help SMEs and researchers in assessing maturity levels and in dealing with the challenges connected to the adoption of either IoT or cloud computing technologies.

    On soft iterative decoding for ternary recording systems with RLL constraints

    In this paper, we investigate a soft iterative decoding technique for ternary recording systems with run-length-limited (RLL) constraints. We employ a simple binary-to-ternary RLL encoder following the low-density parity-check (LDPC) encoder. The decoder iteratively passes soft information between the LDPC decoder and a detector, where the detector is constructed for the combination of the RLL encoder, the PLM (pulse-length modulation) precoder, and the partial-response channel. We provide two different decoding algorithms. For one of them, we obtain bit-error-rate performance that is inferior to that of a comparable system without the RLL constraint in the high signal-to-noise ratio (SNR) regime but better in the low-to-moderate SNR regime.
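    The iterative exchange can be sketched as follows; the detector and LDPC components here are trivial stand-ins (real soft-in soft-out implementations are far beyond a sketch), and only the extrinsic-information bookkeeping pattern is the point:

        import numpy as np

        def detector_llr(channel_obs, prior_llr):
            return prior_llr + channel_obs       # stand-in for a trellis (BCJR) detector

        def ldpc_decoder_llr(llr_in):
            return 2.0 * llr_in                  # stand-in for LDPC posterior output

        def turbo_iterate(channel_obs, n_iters=10):
            """Pass extrinsic LLRs between the detector and the LDPC decoder."""
            prior = np.zeros_like(channel_obs, dtype=float)
            post_dec = prior
            for _ in range(n_iters):
                post_det = detector_llr(channel_obs, prior)
                ext_det = post_det - prior        # detector extrinsic to decoder
                post_dec = ldpc_decoder_llr(ext_det)
                prior = post_dec - ext_det        # decoder extrinsic back to detector
            return post_dec > 0                   # hard bit decisions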

    Optimally Conditioned Channel Matrices in Precoding Enabled Non-Terrestrial Networks

    This paper explores how the condition number of the channel matrix affects the performance of different precoding techniques in non-terrestrial network (NTN) communications. Precoding is a technique that can improve the signal-to-interference-plus-noise ratio (SINR) and bit error rate (BER) in massive multi-beam systems. However, the performance of precoding depends on the rank and condition number of the channel matrix, which measure how well-conditioned the matrix is for inversion. We compare three precoding techniques: zero-forcing (ZF), minimum mean square error (MMSE), and semi-linear precoding (SLP), and show that their performance degrades as the condition number increases. To mitigate this problem, we propose a user ordering approach that forms optimally conditioned channel matrices by selecting users with orthogonal channel vectors. We demonstrate that this approach improves the SINR and goodput of all the precoding techniques in full-frequency-reuse NTN communications.
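    A minimal sketch of such an ordering (the greedy residual-norm rule is an illustrative stand-in; the paper's exact selection metric may differ), in Python with NumPy:

        import numpy as np

        def select_users(H, n_select):
            """H: (n_users, n_antennas) channel matrix. Greedily pick users whose
            channel vectors are most orthogonal to those already selected, so the
            resulting submatrix H[chosen] has a low condition number."""
            chosen, basis = [], []
            for _ in range(n_select):
                best, best_norm, best_res = None, -1.0, None
                for u in range(H.shape[0]):
                    if u in chosen:
                        continue
                    r = H[u].astype(complex)
                    for b in basis:               # Gram-Schmidt residual
                        r -= (b.conj() @ r) * b
                    nrm = float(np.linalg.norm(r))
                    if nrm > best_norm:
                        best, best_norm, best_res = u, nrm, r
                chosen.append(best)
                basis.append(best_res / best_norm)
            return chosen

        # np.linalg.cond(H[chosen]) is then typically much smaller than for a
        # random selection, which benefits ZF/MMSE matrix inversion.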

    Detection of subtle neurological alterations by the CatWalk XT gait analysis system

    BACKGROUND: A new version of the CatWalk XT system was evaluated as a tool for detecting very subtle alterations in gait, based on its higher sampling rate; the system could also demonstrate minor changes in neurological function. In this study, we evaluated the neurological outcome of sciatic nerve injury treated by local injection of hyaluronic acid. Using the CatWalk XT system, we looked for differences between treated and untreated groups and for differences within the same group as a function of time, so as to assess the power of the CatWalk XT system for detecting subtle neurological changes. METHODS: Peripheral nerve injury was induced in 36 Sprague–Dawley rats by crushing the left sciatic nerve with a vessel clamp. The animals were randomized into one of two groups: Group I, crush injury as the control; Group II, crush injury and local application of hyaluronic acid. The animals underwent neurobehavioral assessment, histomorphological evaluation, and electrophysiological study periodically, and these data were retrieved for statistical analysis. RESULTS: The density of neurofilament and S-100 over the distal end of the crushed nerve showed significant differences both in inter-group comparisons at various time points and in intra-group comparisons from 7 to 28 days. Neuronal structure architecture, axon counts, intensity of myelination, electrophysiology, and collagen deposition also demonstrated significant differences between the two groups. The SFI and angle of ankle showed significant inter-group differences over the 7-to-28-day course but no significant differences at the 7- and 14-day time points. In the CatWalk XT analysis, the intensity, print area, stance duration, and swing duration all showed detectable differences at 7, 14, 21, and 28 days, whereas there were no significant differences at 7 and 14 days with CatWalk 7 testing. In addition, there were no significant differences in step sequence or regularity index between the two versions. CONCLUSION: Hyaluronic acid augmented nerve regeneration as early as 7 days after crush injury. This subtle neurological alteration could be detected by CatWalk XT gait analysis but not by the SFI, angle of ankle, or CatWalk 7 methods.

    Clinical Impacts of Delayed Diagnosis of Hirschsprung’s Disease in Newborn Infants

    Background: Asian infants are at a higher risk of having Hirschsprung’s disease (HD). Although HD is surgically correctable, serious and even lethal complications such as Hirschsprung’s-associated enterocolitis (HAEC) can still occur. The aim of this study was to investigate the risk factors of HAEC and the clinical impacts of delayed diagnosis of HD in newborn infants. Patients and methods: By review of medical charts in a medical center in Taiwan, 51 cases of neonates with HD between 2002 and 2009 were collected. Patients were divided into two groups based on the time of initial diagnosis: Group I, diagnosis made within 1 week after birth, and Group II, diagnosis made after 1 week. Clinical features, including demographic distribution, presenting features of HD, and short-term and long-term complications related to HD, were compared between the two groups of patients. Results: There were 25 patients in Group I and 19 in Group II. Group II patients had more severe clinical signs and symptoms of HAEC than Group I patients. The incidence of preoperative HAEC was 12% in Group I and 63% in Group II (adjusted odds ratio = 12.81, confidence interval = 2.60–62.97). Patients with preoperative HAEC were more likely to develop adhesive bowel obstruction after operation (33% vs. 3%, p = 0.013) and failure to thrive (33% vs. 3%, p = 0.013). Also, patients with long-segment or total colonic aganglionosis were at risk of developing both postoperative HAEC (85% vs. 29%, p = 0.001) and failure to thrive (39% vs. 3%, p = 0.002). Conclusion: In our study, we found that delayed diagnosis of HD beyond 1 week after birth significantly increases the risk of serious complications in neonatal patients. Patients with long-segment or total colonic aganglionosis have a higher risk of postoperative HAEC and failure to thrive. Patients with preoperative HAEC are more likely to have adhesive bowel obstruction and failure to thrive.