
    On the Impact of Optimal Modulation and FEC Overhead on Future Optical Networks

    The potential of optimally selecting the modulation format and forward error correction (FEC) overhead (OH) in future transparent nonlinear optical mesh networks is studied from an information theory perspective. Different network topologies are studied, as well as both ideal soft-decision (SD) and hard-decision (HD) FEC based on demap-and-decode (bit-wise) receivers. When compared to the de facto QPSK with 7% OH, our results show large gains in network throughput. When compared to SD-FEC, HD-FEC is shown to cause network throughput losses of 12%, 15%, and 20% for a country, continental, and global network topology, respectively. Furthermore, it is shown that most of the theoretically possible gains can be achieved by using one modulation format and only two OHs. This is in contrast to the infinite number of OHs required in the ideal case. The obtained optimal OHs are between 5% and 80%, which highlights the potential advantage of using FEC with high OHs. Comment: Some minor typos were corrected.
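    As a rough, illustrative companion to the abstract above (not the paper's network model), the basic bookkeeping between FEC overhead and net information rate per symbol can be sketched as follows; the modulation formats and overhead values below are assumptions chosen only to span the 5% to 80% range mentioned in the abstract.

```python
# Illustrative sketch: how FEC overhead (OH) trades off against the net
# information rate per symbol. Code rate R = 1 / (1 + OH),
# net rate = bits/symbol * R. Example (modulation, OH) pairs are assumptions.
import math

def net_bits_per_symbol(modulation_order: int, overhead: float) -> float:
    """Net information bits carried per symbol after FEC overhead."""
    code_rate = 1.0 / (1.0 + overhead)           # e.g. 7% OH -> rate ~0.935
    return math.log2(modulation_order) * code_rate

baseline = net_bits_per_symbol(4, 0.07)          # de facto QPSK with 7% OH

for m, oh in [(4, 0.07), (16, 0.20), (16, 0.80), (64, 0.25)]:
    rate = net_bits_per_symbol(m, oh)
    print(f"{m:>2}-ary, OH={oh:4.0%}: {rate:.2f} bit/symbol "
          f"({rate / baseline:.2f}x the QPSK 7% baseline)")
```

    Under this simple accounting, a higher-order format can carry more net bits per symbol than QPSK with 7% OH even with a much larger OH (at the cost of requiring a higher SNR), which is the intuition behind the reported throughput gains.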

    Energy and Sampling Constrained Asynchronous Communication

    The minimum energy, and, more generally, the minimum cost, to transmit one bit of information has recently been derived for bursty communication, where information becomes available at the transmitter infrequently and at random times. This result assumes that the receiver is always in listening mode and samples all channel outputs until it makes a decision. If the receiver is constrained to sample only a fraction f>0 of the channel outputs, what is the cost penalty due to sparse output sampling? Remarkably, there is no penalty: regardless of f>0, the asynchronous capacity per unit cost is the same as under full sampling, i.e., when f=1. There is not even a penalty in terms of decoding delay, that is, the time that elapses between when information becomes available and when it is decoded. This latter result relies on the possibility of sampling adaptively: the next sample can be chosen as a function of past samples. Under non-adaptive sampling, it is still possible to achieve the full-sampling asynchronous capacity per unit cost, but the decoding delay is multiplied by 1/f. Adaptive sampling strategies are therefore of particular interest in the very sparse sampling regime. Comment: Submitted to the IEEE Transactions on Information Theory.
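    A toy Monte Carlo sketch (my own illustration, under the simplifying assumptions of strictly periodic non-adaptive sampling and a fixed number of required samples) of why the decoding delay scales roughly like 1/f when the receiver cannot sample adaptively:

```python
# Toy sketch: under non-adaptive periodic sampling of a fraction f of the
# channel outputs, the time needed to collect a fixed number of sampled
# slots grows roughly like 1/f, matching the 1/f delay factor above.
import random

def decoding_delay(f: float, samples_needed: int = 100, trials: int = 2000) -> float:
    """Average slots elapsed until `samples_needed` sampled slots are seen,
    for a message arriving at a random offset within the sampling period."""
    period = round(1 / f)                    # sample one slot out of every `period`
    total = 0
    for _ in range(trials):
        offset = random.randrange(period)    # random arrival time mod period
        # wait for the next sampled slot, then one sample every `period` slots
        total += (period - offset) % period + (samples_needed - 1) * period + 1
    return total / trials

for f in (1.0, 0.5, 0.1):
    print(f"f={f:4.1f}: average delay ~ {decoding_delay(f):7.1f} slots")
```

    The abstract's point is that with adaptive sampling this 1/f factor disappears; the toy model above only reproduces the non-adaptive side of that comparison.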

    Frequency and fundamental signal measurement algorithms for distributed control and protection applications

    Increasing penetration of distributed generation within electricity networks leads to the requirement for cheap, integrated protection and control systems. To minimise cost, algorithms for the measurement of AC voltage and current waveforms can be implemented on a single microcontroller that also carries out other protection and control tasks, including communication and data logging. This limits the frame rate of the major algorithms, although analogue-to-digital converters (ADCs) can be oversampled using peripheral control processors on suitable microcontrollers. Measurement algorithms also have to be tolerant of poor power quality, which may arise within grid-connected or islanded (e.g. emergency, battlefield or marine) power system scenarios. This study presents a 'Clarke-FLL hybrid' architecture, which combines a three-phase Clarke transformation measurement with a frequency-locked loop (FLL). This hybrid contains suitable algorithms for the measurement of frequency, amplitude and phase within dynamic three-phase AC power systems. The Clarke-FLL hybrid is shown to be robust and accurate with harmonic content up to and above 28% total harmonic distortion (THD), and with the major algorithms executing at only 500 samples per second. This is achieved by careful optimisation and cascaded use of exact-time averaging techniques, which prove to be useful at all stages of the measurement: from DC bias removal, through low-sample-rate Fourier analysis, to sub-harmonic ripple removal. Platform-independent algorithms for three-phase nodal power flow analysis are benchmarked on three processors, including the Infineon TC1796 microcontroller, on which only 10% of the 2000 μs frame time is required, leaving the remainder free for other algorithms.
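    For orientation, a minimal sketch of the amplitude-invariant three-phase Clarke transformation that forms the front end of such a Clarke-FLL measurement; the FLL, exact-time averaging and ripple-removal stages described in the abstract are not reproduced here, and the 50 Hz, 230 V test signal is an assumption.

```python
# Minimal sketch of the amplitude-invariant Clarke transformation used as the
# front end of a Clarke-FLL style measurement chain.
import math

def clarke(va: float, vb: float, vc: float) -> tuple[float, float]:
    """Three-phase quantities -> stationary alpha/beta frame (amplitude-invariant)."""
    alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    beta = (vb - vc) / math.sqrt(3.0)
    return alpha, beta

def amplitude_and_phase(va: float, vb: float, vc: float) -> tuple[float, float]:
    """Instantaneous amplitude and phase of a balanced three-phase set."""
    alpha, beta = clarke(va, vb, vc)
    return math.hypot(alpha, beta), math.atan2(beta, alpha)

# Balanced set, 230 V peak and 50 Hz assumed, sampled at an arbitrary instant.
t, f0, vpk = 0.003, 50.0, 230.0
phases = [vpk * math.cos(2 * math.pi * f0 * t - k * 2 * math.pi / 3) for k in range(3)]
print(amplitude_and_phase(*phases))   # amplitude ~ 230, phase ~ 2*pi*f0*t
```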

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of video data compression methods described in the open literature was conducted, focusing on methods employing digital computing. The results of the survey are presented; they include a description of each method and an assessment of image degradation and video data parameters. An assessment is also made of present and near-term future technology for implementing video data compression in high-speed imaging systems, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, together with approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Cross-layer based erasure code to reduce the 802.11 performance anomaly : when FEC meets ARF

    Wireless networks are now widely accepted and deployed. Consumers have become accustomed to wireless connectivity in their daily life thanks to the pervasiveness of the 802.11b/g wireless LAN standards. Notably, the emergence of the next evolution of Wi-Fi technology, known as 802.11n, is driving a new revolution in personal wireless communication. However, in the context of WLAN, although multiple novel wireless access technologies have been proposed and developed to offer high bandwidth and guarantee quality of transmission, some deficiencies remain due to the original design of the WLAN MAC layer. In particular, the performance anomaly of 802.11 is a serious issue that induces a potentially dramatic reduction of the global bandwidth when one or several mobile nodes downgrade their transmission rates following signal degradation. In this paper, we study how the use of an adaptive erasure code as a replacement for the Auto Rate Fallback (ARF) mechanism can help mitigate this performance anomaly. A preliminary study shows a global increase of the goodput delivered to mobile hosts attached to an access point.
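    As a back-of-the-envelope illustration of the anomaly itself (a toy model of per-frame fairness, not the paper's evaluation): because each station wins roughly the same number of transmission opportunities, one slow sender drags every station's goodput toward the slowest rate. The station counts and rates below are assumed example values.

```python
# Toy model of the 802.11 performance anomaly under per-frame fairness.
def per_station_goodput(rates_mbps: list[float]) -> float:
    """Goodput seen by each station when every station sends equal-size
    frames in strict turn (protocol overheads ignored)."""
    # One round = one frame per station; station i occupies the air for
    # size/rate_i, so each station delivers `size` bits per sum(size/rate_i).
    return 1.0 / sum(1.0 / r for r in rates_mbps)

fast_only = [54.0, 54.0, 54.0, 54.0]
with_slow = [54.0, 54.0, 54.0, 1.0]
print(f"4 stations at 54 Mb/s      : {per_station_goodput(fast_only):5.2f} Mb/s each")
print(f"3 at 54 Mb/s + 1 at 1 Mb/s : {per_station_goodput(with_slow):5.2f} Mb/s each")
```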

    Reconfigurable rateless codes

    We propose novel reconfigurable rateless codes that are capable not only of varying the block length but also of adaptively modifying their encoding strategy, by incrementally adjusting their degree distribution according to the prevalent channel conditions, without channel state information being available at the transmitter. In particular, we characterize a reconfigurable rateless code designed for the transmission of 9,500 information bits that achieves a performance approximately 1 dB away from the capacity of the discrete-input continuous-output memoryless channel (DCMC) over a diverse range of channel signal-to-noise ratios (SNRs).
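    For context, a minimal LT-style rateless encoding sketch in which the degree distribution is an ordinary run-time parameter and can therefore be swapped mid-stream; this loosely mirrors the idea of incrementally adjusting the degree distribution, but it is my own illustration rather than the authors' reconfigurable construction, and both example distributions are made up.

```python
# LT-style rateless encoding with a swappable degree distribution.
import random
from functools import reduce
from operator import xor

def encode_symbol(source, degree_dist, rng):
    """Draw a degree (index i -> probability of degree i+1), XOR that many
    randomly chosen source symbols, return (neighbour indices, coded value)."""
    degree = rng.choices(range(1, len(degree_dist) + 1), weights=degree_dist, k=1)[0]
    neighbours = rng.sample(range(len(source)), k=degree)
    value = reduce(xor, (source[i] for i in neighbours))
    return neighbours, value

rng = random.Random(0)
source_bits = [rng.randint(0, 1) for _ in range(32)]

dist_low_snr = [0.1, 0.5, 0.2, 0.1, 0.1]     # hypothetical degree distribution A
dist_high_snr = [0.4, 0.3, 0.2, 0.05, 0.05]  # hypothetical degree distribution B

# Encode a few symbols, then "reconfigure" by switching the distribution.
stream = [encode_symbol(source_bits, dist_low_snr, rng) for _ in range(10)]
stream += [encode_symbol(source_bits, dist_high_snr, rng) for _ in range(10)]
print(stream[:3])
```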