2 research outputs found

    Reducing the Complexity of Equalisation and Decoding of Shingled Writing

    Full version: Access restricted permanently due to 3rd party copyright restrictions. Restriction set on 24/05/2017 by SC, Graduate School.

    Shingled Magnetic Recording (SMR) addresses the immediate need to expand magnetic hard disk capacity beyond the limit of current disk technology, and among the contending technologies it requires the least change from current practice. Robust, easy-to-implement Digital Signal Processing (DSP) techniques are needed to realise the potential of SMR. The DSP techniques currently proposed centre on Two-Dimensional Magnetic Recording (TDMR) equalisation and detection, coupled with iterative error-correction codes such as Low-Density Parity-Check (LDPC) codes. Maximum Likelihood (ML) algorithms are normally used in TDMR detection, but their complexity grows exponentially with the number of interfering bits. Reducing the complexity of these processes in SMR media is therefore essential if the technology is to be deployed in personal computers in the near future. This research investigated means of reducing the complexity of equalisation and detection. Linear equalisers were found to be adequate at low densities. Combining an ML detector across-track with a linear equaliser along-track provided a lower-complexity, better-performing alternative to a linear equaliser across-track with ML detection along-track; this holds when density is relaxed along-track and increased across-track, and a gain of up to 10 dB was achieved. At high density in both dimensions, full two-dimensional (2D) detectors perform better. A low-complexity full 2D detector was formed by serially concatenating two ML detectors, one for each direction, instead of the single 2D ML detector used in other literature.
    This reduces the complexity with respect to side interference from exponential to linear. The use of a single parity bit as both a run-length-limited code and an error-correction code is also presented, with a small gain of about 1 dB recorded at a BER of 10^-5 in the high-density case.

    Emerging Markets Telecommunication Services Limited (Etisalat Nigeria)
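    The exponential-to-linear complexity claim can be illustrated with a rough state count. This sketch assumes binary symbols and Viterbi-style trellis detectors; the spans `la` (along-track interference span) and `lc` (cross-track interference span) and both function names are illustrative, not the thesis's notation. A joint 2D ML detector must enumerate every pattern over the whole 2D window, whereas two serially concatenated 1D detectors each only enumerate patterns in one direction:

    ```python
    def joint_2d_states(la: int, lc: int) -> int:
        """State count for a single joint 2D ML detector: every binary
        pattern over the la-by-lc interference window is a state."""
        return 2 ** (la * lc)

    def concatenated_states(la: int, lc: int) -> int:
        """Combined state count for two serially concatenated 1D ML
        detectors, one along-track and one across-track."""
        return 2 ** la + 2 ** lc

    # Growing the cross-track span lc multiplies the joint detector's
    # exponent, while the concatenated scheme only adds another 1D trellis.
    for lc in (2, 3, 4):
        print(lc, joint_2d_states(3, lc), concatenated_states(3, lc))
    ```

    For a fixed along-track span of 3, the joint detector's state count goes 64, 512, 4096 as `lc` grows from 2 to 4, while the concatenated scheme stays at 12, 16, 24: the cost of widening the cross-track window no longer compounds with the along-track window.
    
    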

    CICM: A Collaborative Integrity Checking Blockchain Consensus Mechanism for Preserving the Originality of Data in the Cloud for Forensic Investigation

    The originality of data is very important for obtaining correct results from the forensic analysis of data. Data may be analysed to resolve disputes or to review incidents by finding trends in the dataset that give clues to the cause of an issue. Forensic use requires specially designed, foolproof protection of data integrity. This paper proposes the Collaborative Integrity Checking Mechanism (CICM) for securing the chain-of-custody of data in a blockchain. Existing consensus mechanisms are fault-tolerant, allowing a threshold of faults; CICM avoids faults by using a transparent 100% agreement process for validating the originality of data in a blockchain. A group of agreement actors checks and records the original status of data at its time of arrival, and acceptance is based on general agreement by all participants in the consensus process. The solution was tested against practical Byzantine fault tolerant (PBFT), Zyzzyva, and hybrid Byzantine fault tolerant (hBFT) mechanisms for efficacy in yielding correct results and for operational performance costs. A binomial distribution was used to examine CICM's efficacy: CICM recorded zero probability of failure, while the benchmarks recorded up to 8.44%. Throughput and latency were used to measure operational performance costs. hBFT recorded the best performance among the benchmarks, yet CICM achieved 30.61% higher throughput and 21.47% lower latency than hBFT. In the robustness-against-faults tests, CICM again outperformed hBFT, with 16.5% higher throughput and 14.93% lower latency in the worst-case fault scenario.
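    The binomial analysis mentioned above can be sketched as follows. This is an illustrative model only, not the paper's exact formulation: it assumes `n` independent validators, each faulty with probability `p`, and a threshold-based BFT protocol that fails once more than floor((n-1)/3) validators are faulty, whereas a 100% agreement rule accepts a record only when every validator confirms it, so no subset of faulty validators can push a tampered record through on its own:

    ```python
    from math import comb

    def bft_failure_prob(n: int, p: float) -> float:
        """Probability that more than f = floor((n-1)/3) of n independent
        validators are simultaneously faulty, the classic BFT safety bound."""
        f = (n - 1) // 3
        return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
                   for k in range(f + 1, n + 1))

    # With 4 validators a threshold protocol tolerates only 1 fault,
    # so even a modest per-validator fault rate leaves a nonzero
    # probability that the threshold is exceeded.
    print(f"{bft_failure_prob(4, 0.2):.4f}")
    ```

    Under this toy model the threshold protocol has a strictly positive failure probability whenever `p > 0`, which is the behaviour the paper's benchmarks exhibit, while unanimous acceptance sidesteps the threshold entirely at the cost of requiring every validator to respond.
    
    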