22 research outputs found

    Expert System Based Network Testing

    Get PDF

    Residual Block Error Rate Prediction for IR HARQ Protocol

    Get PDF
    This paper provides a simple estimation of the Long Term Evolution (LTE) physical and Medium Access Control (MAC) layer peak transmission performance: the irreducible Block Error Rate (BLER) that determines the Hybrid Automatic Repeat Request (HARQ) residual channel available to higher-layer protocols. In this regard, a general pre-HARQ BLER prediction is developed for the redundancy version 0 (RV0) codeword transmission, expressed in terms of the Bit Error Rate (BER), assuming that the cyclic prefix protection against inter-symbol interference is sufficient to prevent long error bursts. This implies only sporadic bit errors with moderate mutual interdependence, which we model by treating the errored bits of each data block as a sample drawn without replacement, and consequently describe with the hypergeometric distribution instead of the more commonly used binomial one. The HARQ BLER estimation model is verified by both problem-dedicated Monte Carlo simulations and an industry-standard LTE software simulation tool, specifically for the LTE FDD downlink channel environment, with the test results matching the residual BLER prediction very well.
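
    As a rough numerical illustration of the modelling choice described above (not the paper's exact derivation), the sketch below compares the block-error probability obtained from a hypergeometric model of errored bits, drawn without replacement, against the usual binomial one; the window size, error count and block length are assumed values chosen only for the example.

```python
# Illustrative sketch (not the paper's exact model): probability that a block
# of n bits is error-free when K errored bits are scattered over N total bits.
# Hypergeometric: errored bits drawn without replacement; binomial: with replacement.
from scipy.stats import hypergeom, binom

N = 1_000_000      # total bits observed in the window (assumed value)
K = 200            # errored bits in that window, i.e. BER = K / N (assumed value)
n = 6144           # block (transport-block) size in bits (assumed value)

ber = K / N

# P(block error) = 1 - P(no errored bit falls inside the block)
bler_hypergeom = 1.0 - hypergeom.pmf(0, N, K, n)
bler_binomial = 1.0 - binom.pmf(0, n, ber)

print(f"BER = {ber:.2e}")
print(f"BLER (hypergeometric, without replacement): {bler_hypergeom:.4f}")
print(f"BLER (binomial, with replacement):          {bler_binomial:.4f}")
```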

    Performance of Digitally Compressed (MPEG) Picture Transmission via Real Transmission Systems

    Get PDF
    MPEG is one of the most popular families of audio/video compression standards, suitable for different applications but based on similar principles. If an MPEG compressed video signal is to be transmitted to a remote user, various network technologies are available, such as ATM and IP, where it is of interest to provide the required quality of service (QoS) while using the most economical technology available. In order to accomplish that goal, it is necessary to monitor the QoS parameters of the relevant transmission technology (such as delay variation and loss of protocol data units: frames, cells or packets), and then select appropriate methods for keeping the parameter values at a level that the compressed MPEG stream, made vulnerable by the removal of redundancy, can tolerate. In this respect, this paper presents, on the one hand, an example of test and measurement techniques applied to the transmission system (ATM) and, on the other hand, the achieved perceptual quality of the transmitted MPEG signal, and draws the appropriate conclusions about the degree to which network data loss affects the quality of the received video signal.
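
    As a minimal, hypothetical illustration of the kind of QoS monitoring discussed above (not the ATM test set-up used in the paper), the sketch below derives a loss ratio and an RFC 3550-style delay-variation estimate from made-up per-packet records of sequence number and send/receive timestamps.

```python
# Hypothetical per-packet records: (sequence number, send time [s], receive time [s]).
records = [
    (1, 0.000, 0.020),
    (2, 0.010, 0.031),
    (4, 0.030, 0.055),   # sequence 3 lost
    (5, 0.040, 0.060),
]

expected = records[-1][0] - records[0][0] + 1
loss_ratio = 1.0 - len(records) / expected

# RFC 3550 interarrival-jitter estimator applied to one-way transit times.
jitter = 0.0
prev_transit = None
for _, t_send, t_recv in records:
    transit = t_recv - t_send
    if prev_transit is not None:
        jitter += (abs(transit - prev_transit) - jitter) / 16.0
    prev_transit = transit

print(f"loss ratio: {loss_ratio:.2%}, delay-variation estimate: {jitter * 1e3:.3f} ms")
```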

    Testing TCP Traffic Congestion by Distributed Protocol Analysis and Statistical Modelling

    Get PDF
    In this paper, a solution is proposed for testing the TCP congestion window process in a real-life network situation during stationary time intervals. In this respect, the architecture of the hardware- and expert-system-based distributed protocol analysis that we used for data acquisition and testing is presented, conducted on a major network carrying live traffic (Electronic Financial Transactions data transfer), together with the algorithm for estimating the actual congestion window size from the measured data. The measurements mainly included decoding with precise time stamps (100 ns resolution locally and 1 ms with GPS clock distribution) and expert-system comments resulting from the appropriate processing of the network data, which were filtered before arriving at the special-hardware-based capture buffer. In addition, the paper presents the statistical analysis model that we developed to evaluate whether the data belong to a specific (in this case, normal) cumulative distribution function, or whether two data sets exhibit the same statistical distribution, the conditio sine qua non for a TCP-stable interval. Having identified such stationary intervals, we found that the congestion window values derived from the measured data fitted the truncated normal distribution very well (with satisfactory statistical significance). Finally, an appropriate model was developed and applied for estimating the relevant parameters of the congestion window distribution: its mean value and variance.
    Keywords: protocol analysis, TCP/IP, testing, traffic congestion, statistical analysis, parameter estimation
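
    The statistical steps described above can be approximated with standard tools; the following sketch, run on synthetic congestion-window samples rather than the measured traces, applies a two-sample Kolmogorov-Smirnov test as a stationarity check and then a plain normal fit, which only roughly stands in for the truncated-normal estimation model developed in the paper.

```python
# Rough stand-in for the described analysis, applied to synthetic samples
# (the paper works on congestion-window values estimated from live captures).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic congestion-window samples from two candidate time intervals.
cwnd_a = rng.normal(loc=40, scale=8, size=500)
cwnd_b = rng.normal(loc=40, scale=8, size=500)

# Two-sample KS test: do both intervals follow the same distribution,
# a necessary condition for merging them into one stationary interval?
ks_stat, p_value = stats.ks_2samp(cwnd_a, cwnd_b)
print(f"two-sample KS: statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")

if p_value > 0.05:
    data = np.concatenate([cwnd_a, cwnd_b])
    # Plain maximum-likelihood normal fit as a first approximation; the paper
    # instead estimates the mean and variance of a *truncated* normal.
    mu, sigma = stats.norm.fit(data)
    d, p_fit = stats.kstest(data, "norm", args=(mu, sigma))
    print(f"normal fit: mean = {mu:.1f}, std = {sigma:.1f}, KS p-value = {p_fit:.3f}")
```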

    Suppressing the OFDM CFO-Caused Constellation Symbol Phase Deviation by PAPR Reduction

    No full text
    The well-known major drawbacks of Orthogonal Frequency-Division Multiplexing (OFDM), namely the transmitter-receiver Carrier Frequency Offset (CFO) and the Peak-to-Average Power Ratio (PAPR) of the transmitted OFDM signal, may degrade the error performance by causing Intercarrier Interference (ICI), and in-band distortion with adjacent-channel interference, respectively. Moreover, in spite of the utmost care given to CFO estimation and compensation in OFDM wireless systems, such as wireless local area networks or fourth-generation mobile radio systems, e.g., Long-Term Evolution (LTE), some residual CFO remains. In this regard, although the CFO and the PAPR have so far been treated independently, in this paper we develop an Error Vector Magnitude (EVM) based analytical model for the CFO-induced constellation symbol phase distortion, which reveals that the maximal CFO-caused squared phase deviation is linear in the instantaneous (per-OFDM-symbol) PAPR. This implies that any PAPR reduction technique, such as simple clipping or coding, indirectly suppresses the CFO-induced phase deviation, too. The analytical results and conclusions are tested and successfully verified by Monte Carlo simulations.
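
    To make the PAPR side of the argument concrete, the sketch below generates one QPSK-modulated OFDM symbol, clips its envelope at an assumed threshold and reports the per-symbol PAPR before and after clipping; the subcarrier count and clipping ratio are illustrative assumptions, and the EVM-based phase-deviation model itself is not reproduced.

```python
# Minimal sketch: per-symbol PAPR of a QPSK-modulated OFDM symbol before and
# after amplitude clipping (one of the PAPR-reduction techniques mentioned above).
import numpy as np

rng = np.random.default_rng(1)
N = 256                                    # number of subcarriers (assumed)

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

# Random QPSK symbols on all subcarriers, OFDM-modulated by an IFFT.
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
ofdm = np.fft.ifft(qpsk) * np.sqrt(N)

# Clip the envelope at an assumed clipping ratio relative to the RMS level.
clip_ratio = 1.6
threshold = clip_ratio * np.sqrt(np.mean(np.abs(ofdm) ** 2))
clipped = np.where(np.abs(ofdm) > threshold,
                   threshold * ofdm / np.abs(ofdm), ofdm)

print(f"PAPR before clipping: {papr_db(ofdm):.2f} dB")
print(f"PAPR after clipping:  {papr_db(clipped):.2f} dB")
```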

    Efficient Estimation of CFO-Affected OFDM BER Floor in Small Cells with Resource-Limited IoT End-Points

    No full text
    Contemporary wireless networks dramatically improve data rates and latency to become a key enabler of massive communication among various low-cost devices of limited computational power, standardized in particular by the downscaled Long-Term Evolution (LTE) derivatives LTE-M and Narrowband Internet of Things (NB-IoT). Specifically, assessment of the physical-layer transmission performance is important for higher-layer protocols, as it determines the extent to which potential error recovery escalates up the protocol stack. Therefore, end-points of low processing capacity need to estimate, as efficiently as possible, the residual bit error rate (BER) determined solely by the main orthogonal frequency-division multiplexing (OFDM) impairment, the carrier frequency offset (CFO), specifically in small cells, where the signal-to-noise ratio is large enough and the OFDM symbol cyclic prefix prevents inter-symbol interference. In contrast to earlier analytical models that estimate the BER from the CFO-caused phase deviation in a computationally demanding way, in this paper, after identifying the optimal sampling instant in a power delay profile, we abstract the CFO by an equivalent time dispersion (i.e., by an additional spreading of the power delay profile that would produce the same BER degradation as the CFO). The proposed BER estimation is verified by means of an industry-standard LTE software simulator.
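
    The equivalent-time-dispersion abstraction itself is specific to the paper and is not reproduced here; as a generic point of comparison, the sketch below estimates a CFO-only OFDM BER floor by brute-force Monte Carlo (noise omitted, common phase error removed), with the subcarrier count and normalized CFO chosen only for illustration.

```python
# Hedged illustration (not the paper's model): Monte-Carlo estimate of the
# residual OFDM BER floor caused by a normalized CFO eps at high SNR,
# with QPSK subcarriers and noise omitted.
import numpy as np

rng = np.random.default_rng(2)
N, n_symbols, eps = 128, 2000, 0.15        # subcarriers, OFDM symbols, normalized CFO (assumed)

errors, total_bits = 0, 0
n = np.arange(N)
cpe = np.exp(1j * np.pi * eps * (N - 1) / N)              # common phase error term
for _ in range(n_symbols):
    bits = rng.integers(0, 2, (N, 2))                     # 2 bits per QPSK subcarrier
    tx_freq = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
    rx_time = np.fft.ifft(tx_freq) * np.exp(2j * np.pi * eps * n / N)   # apply CFO
    rx_freq = np.fft.fft(rx_time) * np.conj(cpe)          # remove common phase, ICI remains
    rx_bits = np.column_stack((rx_freq.real < 0, rx_freq.imag < 0)).astype(int)
    errors += int(np.sum(rx_bits != bits))
    total_bits += bits.size

print(f"estimated BER floor for normalized CFO {eps}: {errors / total_bits:.1e}")
```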

    Improved Model for Estimation of Spatial Averaging Path Length

    No full text
    In mobile communication systems, the transmitted RF signal is subject to mutually independent deterministic path loss and stochastic multipath and shadow fading. Since at each spatial location mostly composite signal samples are measured, their components are distinguished by averaging out the multipath-caused signal level variations while preserving the ones due to shadowing. The prerequisite for this is an appropriate local-area averaging path length, which enables obtaining the local mean (composed of the mean path loss and shadow fading) and the multipath fading as the difference between the composite signal sample and the local mean. However, the analytical approaches to estimating the averaging path length reported so far consider either only the multipath or only the shadow fading, and their applicability is limited to specific topologies and frequencies. Therefore, in this paper, the most widely used analytical method, that of Lee, is generalized and improved by considering multipath and shadowing concurrently, providing a general closed-form, elementary-function-based estimate of the optimal averaging path length as a function of the common multipath and shadow fading parameters characterizing a particular propagation environment. The model enables recommendations for the optimal averaging length for all propagation conditions facing the mobile receiver.
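
    A toy version of the local-area averaging idea (not the generalized Lee model derived in the paper) is sketched below: a composite received level built from assumed path-loss, shadowing and Rayleigh-multipath terms is smoothed over a 40-wavelength window, and the multipath component is recovered as the difference from the local mean.

```python
# Toy illustration of local-area averaging: smooth a composite received level
# to recover the local mean (path loss + shadowing) and, by subtraction, the
# multipath fading. All propagation parameters are assumed example values.
import numpy as np

rng = np.random.default_rng(3)

wavelength = 0.15                         # metres, roughly a 2 GHz carrier (assumed)
step = wavelength / 10                    # spatial sampling interval
d = np.arange(10.0, 200.0, step)          # travelled distance along the route [m]

path_loss_db = -35.0 * np.log10(d)                                     # deterministic
shadowing_db = np.repeat(rng.normal(0, 6, size=d.size // 200 + 1), 200)[: d.size]
multipath_db = 20 * np.log10(np.abs(rng.normal(0, 1 / np.sqrt(2), d.size)
                                    + 1j * rng.normal(0, 1 / np.sqrt(2), d.size)))
composite_db = path_loss_db + shadowing_db + multipath_db

# Averaging window of 40 wavelengths (a commonly quoted Lee-type choice),
# with the averaging done on linear power.
window = round(40 * wavelength / step)
kernel = np.ones(window) / window
local_mean_db = 10 * np.log10(np.convolve(10 ** (composite_db / 10), kernel, mode="same"))
fast_fading_db = composite_db - local_mean_db

print(f"std of recovered fast fading: {fast_fading_db.std():.1f} dB")
```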

    Closed-Form Enhanced Detection of Clipped OFDM Symbol

    No full text
    Large peak-to-average power ratio (PAPR) and carrier frequency offset (CFO) are dominant impairments of the orthogonal frequency-division multiplexing (OFDM) symbol transmission applied within state-of-the-art wireless operator networks. In this work, we deal with the consequences of the amplitude peak clipping that is commonly used at the transmitter to reduce the PAPR of the OFDM symbol and thus prevent the non-linear distortion that would otherwise be imposed by the output high-power amplifier (HPA). Regardless of whether the clipping mechanism at the transmitter is inherent (related to the HPA) or deliberate (for PAPR reduction), the clipped OFDM symbol arriving at the receiver may lead to degraded detection accuracy and transmission performance. However, the methods applied so far at the receiver for compensating the non-linear distortion due to clipping are quite complex and computationally demanding. In contrast, we propose effective mitigation of the problem at the receiver by deriving a closed-form enhanced detection criterion, which requires only common measurements of the mean and RMS values, as well as the autocorrelation, of the received OFDM symbol comprising both un-clipped and clipped sections. Such improved detection was shown to significantly reduce the side effects of clipping and restore satisfactory transmission performance, the bit error rate (BER) in particular. The proposed analytical model was preliminarily verified by versatile Monte Carlo simulations and a professional, industry-standard vector signal analysis (VSA) test system, as well as by BER testing. The evident convergence of the three methods' test results leads to the conclusion that the proposed clipped OFDM symbol detection method provides a clear improvement over the conventional one.
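
    The closed-form detection criterion derived in the paper is not reproduced here; the sketch below only shows how the quantities it is stated to rely on, namely the mean, the RMS value and the autocorrelation of a clipped OFDM symbol, can be measured, with the subcarrier count and clipping threshold as assumed example values.

```python
# Sketch only: measure the mean, RMS and autocorrelation of the envelope of a
# clipped OFDM symbol; the paper's detection criterion itself is not implemented.
import numpy as np

rng = np.random.default_rng(4)
N = 256                                    # subcarriers (assumed)

qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
symbol = np.fft.ifft(qpsk) * np.sqrt(N)

# Envelope clipping at 1.4 x RMS, standing in for HPA or deliberate clipping.
threshold = 1.4 * np.sqrt(np.mean(np.abs(symbol) ** 2))
clipped = np.where(np.abs(symbol) > threshold,
                   threshold * symbol / np.abs(symbol), symbol)

envelope = np.abs(clipped)
mean_val = envelope.mean()
rms_val = np.sqrt(np.mean(envelope ** 2))

def autocorr(x, lag):
    # Normalized autocorrelation of a real sequence at the given lag.
    x0 = x - x.mean()
    return float(np.sum(x0[:-lag] * x0[lag:]) / np.sum(x0 ** 2))

print(f"envelope mean = {mean_val:.3f}, envelope RMS = {rms_val:.3f}")
print("envelope autocorrelation at lags 1..3:",
      [round(autocorr(envelope, k), 3) for k in (1, 2, 3)])
```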

    BER Aided Energy and Spectral Efficiency Estimation in a Heterogeneous Network

    No full text
    In this work, we adopt a stochastic-geometry analysis of a heterogeneous cellular network to estimate its energy and spectral efficiency. More specifically, it is a widespread experience that practical field assessment of the Signal-to-Noise and Interference Ratio (SINR), the key physical-layer performance indicator, involves quite sophisticated test instrumentation that is not always available outside the lab environment. In this regard, we present a simpler test model based on the much easier-to-measure Bit Error Rate (BER), where the various impairments that degrade the BER are regarded as equivalent additive white Gaussian noise (AWGN), abstracting (in terms of equal BER degradation) any actual non-AWGN impairment. We validated the derived analytical model for heterogeneous two-tier networks by means of the ns-3 simulator, whose test results fit the analytically estimated ones well, both indicating that small cells enable better energy and spectral efficiency than larger-cell networks.
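
    A hedged sketch of the basic BER-to-SINR idea (not the paper's stochastic-geometry model): a measured BER is mapped to an equivalent AWGN SNR by inverting the QPSK error-rate formula, and that SNR is then turned into a Shannon spectral-efficiency bound; the measured BER value is an assumption for illustration.

```python
# Map a measured BER to an equivalent AWGN SNR (QPSK assumed) and then to a
# Shannon spectral-efficiency bound; all numeric values are assumed examples.
import math
from scipy.stats import norm

measured_ber = 1e-3                        # example measured value (assumed)

# QPSK over AWGN: BER = Q(sqrt(2 * Eb/N0))  =>  Eb/N0 = (Q^-1(BER))^2 / 2
ebn0_lin = norm.isf(measured_ber) ** 2 / 2
snr_lin = 2 * ebn0_lin                     # Es/N0 for 2 bits per QPSK symbol

spectral_efficiency = math.log2(1 + snr_lin)      # bit/s/Hz (Shannon bound)

print(f"equivalent Eb/N0: {10 * math.log10(ebn0_lin):.1f} dB")
print(f"Shannon spectral-efficiency bound: {spectral_efficiency:.2f} bit/s/Hz")
```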