
    Bit error performance of diffuse indoor optical wireless channel pulse position modulation system employing artificial neural networks for channel equalisation

    The bit-error rate (BER) performance of a pulse position modulation (PPM) scheme for non-line-of-sight indoor optical links employing channel equalisation based on an artificial neural network (ANN) is reported. Channel equalisation is achieved by training a multilayer perceptron ANN. A comparative study of unequalised `soft' decision decoding and `hard' decision decoding, along with neural-equalised `soft' decision decoding, is presented for different bit resolutions over optical channels with different delay spreads. We show that unequalised `hard' decision decoding performs the worst for all values of normalised delay spread, becoming impractical beyond a normalised delay spread of 0.6. However, `soft' decision decoding with and without equalisation displays relatively improved performance for all values of the delay spread. The study shows that, for a highly diffuse channel, the signal-to-noise ratio required to achieve a BER of 10⁻⁵ with the ANN-based equaliser is ~10 dB lower than with unequalised `soft' decoding for 16-PPM at a data rate of 155 Mbps. Our results indicate that, over the full range of delay spreads, neural network equalisation is an effective tool for mitigating inter-symbol interference.
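    As a rough illustration of the approach described above (not the authors' code), the sketch below trains a small multilayer perceptron on sliding windows of received samples to undo inter-symbol interference from a dispersive channel; the channel taps, window length, network size and noise level are illustrative assumptions.

```python
# Hedged sketch: an MLP equaliser for a dispersive (ISI) channel, trained on
# sliding windows of received samples. All parameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Transmit random binary chips (stand-in for PPM slot values).
bits = rng.integers(0, 2, 20000)

# Diffuse indoor channel modelled as an FIR filter (assumed taps) plus AWGN.
h = np.array([0.9, 0.5, 0.3, 0.1])
rx = np.convolve(bits.astype(float), h, mode="full")[: len(bits)]
rx += 0.2 * rng.standard_normal(len(bits))

# Build sliding-window feature vectors centred on the chip to be decided.
W = 7                                      # equaliser window (assumption)
X = np.array([rx[i : i + W] for i in range(len(rx) - W)])
y = bits[W // 2 : W // 2 + len(X)]         # label = chip at window centre

# Small multilayer perceptron acting as the non-linear equaliser.
eq = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
eq.fit(X[:15000], y[:15000])
ber = np.mean(eq.predict(X[15000:]) != y[15000:])
print(f"post-equalisation BER estimate: {ber:.4f}")
```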

    Deep Reinforcement Learning for Swarm Systems

    Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, these methods rely on a concatenation of agent states to represent the information content required for decentralized decision making. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents, as it does not exploit the fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions. We treat the agents as samples of a distribution and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces of the mean embedding using histograms, radial basis functions and a neural network learned end-to-end. We evaluate the representation on two well-known problems from the swarm literature (rendezvous and pursuit evasion), in a globally and a locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents, facilitating the development of more complex collective strategies. Comment: 31 pages, 12 figures, version 3 (published in JMLR Volume 20).
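    The following sketch (an illustration, not the paper's implementation) shows the core idea of a mean embedding: a variable-size set of neighbour observations is mapped through fixed radial basis function features and averaged, yielding a fixed-dimensional, permutation-invariant input for a decentralised policy. The centres, bandwidth and dimensions are assumptions.

```python
# Hedged sketch: empirical mean embedding of neighbour states via RBF features.
import numpy as np

def rbf_mean_embedding(neighbor_states, centres, bandwidth=0.5):
    """Map a variable-size set of neighbour states (n, d) to a fixed-size
    feature vector by averaging RBF features over the set."""
    # Pairwise squared distances between neighbour states and RBF centres.
    d2 = ((neighbor_states[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    feats = np.exp(-d2 / (2.0 * bandwidth ** 2))       # (n, n_centres)
    return feats.mean(axis=0)                          # permutation invariant

rng = np.random.default_rng(0)
centres = rng.uniform(-1, 1, size=(32, 2))             # fixed RBF centres (assumed)

# Swarms of different size produce embeddings of the same dimension,
# so a single decentralised policy can consume either one.
small_swarm = rng.uniform(-1, 1, size=(3, 2))          # 3 neighbours
large_swarm = rng.uniform(-1, 1, size=(50, 2))         # 50 neighbours
print(rbf_mean_embedding(small_swarm, centres).shape)  # (32,)
print(rbf_mean_embedding(large_swarm, centres).shape)  # (32,)
```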

    Adaptive equalisation for fading digital communication channels

    This thesis considers the design of new adaptive equalisers for fading digital communication channels. The role of equalisation is discussed in the context of the functions of a digital radio communication system, and both conventional and more recent novel equaliser designs are described. The application of recurrent neural networks to the problem of equalisation is developed from a theoretical study of a single-node structure to the design of multinode structures. These neural networks are shown to cancel intersymbol interference in a manner mimicking conventional techniques, and simulations demonstrate their sensitivity to symbol estimation errors. In addition, the error mechanisms of conventional maximum likelihood equalisers operating on rapidly time-varying channels are investigated, highlighting the problems of channel estimation using delayed and often incorrect symbol estimates. The relative sensitivity of Bayesian equalisation techniques to errors in the channel estimate is studied, demonstrating that the structure's equalisation capability is also susceptible to such errors. Applications of multiple channel estimator methods are developed, leading to reduced-complexity structures which trade performance for a smaller computational load. These novel structures are shown to provide an improvement over conventional techniques, especially for rapidly time-varying channels, by reducing the time delay in the channel estimation process. Finally, the use of confidence measures of the equaliser's symbol estimates to improve channel estimation is studied, isolating the critical areas in the development of the technique: the production of reliable confidence measures by the equalisers and the statistics of symbol estimation error bursts.
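    To make the Bayesian equalisation structure mentioned above concrete, the sketch below implements a simplified symbol-by-symbol MAP decision for a known FIR channel with BPSK symbols and Gaussian noise. The taps, noise level and single-sample decision window are illustrative assumptions, not the thesis design.

```python
# Hedged sketch: symbol-by-symbol Bayesian (MAP) equaliser for a known FIR
# channel with BPSK symbols and additive Gaussian noise.
import numpy as np
from itertools import product

h = np.array([1.0, 0.6, 0.3])      # assumed channel estimate (FIR taps)
sigma = 0.3                        # assumed noise standard deviation
W = len(h)                         # symbols spanning the channel memory

# Enumerate every BPSK pattern that can occupy the channel memory and
# precompute the corresponding noiseless channel output ("channel state").
patterns = np.array(list(product([-1.0, 1.0], repeat=W)))
states = patterns @ h[::-1]        # noiseless received sample per pattern

def bayesian_decision(r):
    """MAP decision on the newest symbol given one received sample r."""
    # Likelihood of r under each channel state (equal priors assumed).
    lik = np.exp(-(r - states) ** 2 / (2 * sigma ** 2))
    # Sum likelihoods over states whose newest symbol is +1 vs -1.
    p_plus = lik[patterns[:, -1] > 0].sum()
    p_minus = lik[patterns[:, -1] < 0].sum()
    return 1.0 if p_plus >= p_minus else -1.0

rng = np.random.default_rng(0)
tx = rng.choice([-1.0, 1.0], size=5000)
rx = np.convolve(tx, h, mode="full")[: len(tx)] + sigma * rng.standard_normal(len(tx))
est = np.array([bayesian_decision(r) for r in rx])
print("symbol error rate:", np.mean(est != tx))
```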

    Bacterial Foraging Based Channel Equalizers

    A channel equalizer is one of the most important subsystems in any digital communication receiver. It is also the subsystem that consumes the most computation time in the receiver. Traditionally, maximum-likelihood sequence estimation (MLSE) was the most popular form of equalizer. Owing to the non-stationary characteristics of the communication channel, MLSE receivers perform poorly. Under these circumstances, maximum a-posteriori probability (MAP) receivers, also called Bayesian receivers, perform better. Natural selection tends to eliminate animals with poor “foraging strategies” and favor the propagation of genes of those animals that have successful foraging strategies, since they are more likely to enjoy reproductive success. After many generations, poor foraging strategies are either eliminated or shaped into good ones (redesigned). Logically, such evolutionary principles have led scientists in the field of “foraging theory” to hypothesize that it is appropriate to model the activity of foraging as an optimization process. This thesis presents an investigation into the design of a bacterial foraging based channel equalizer for digital communication. Extensive simulation studies show that the performance of the proposed receiver is close to that of the optimal receiver for a variety of channel conditions. The proposed receiver also provides near-optimal performance when the channel suffers from nonlinearities.
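    The sketch below is a bare-bones illustration of the idea (not the thesis code): a chemotaxis-only bacterial foraging loop tunes the taps of a linear equalizer by minimising mean-squared error over a training burst. The channel model, population size, step size and equalizer length are assumptions.

```python
# Hedged sketch: bacterial foraging optimisation (chemotaxis only) of the
# taps of a linear equaliser, using MSE on a training burst as the cost.
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.35, 0.87, 0.35])                  # assumed channel taps
tx = rng.choice([-1.0, 1.0], size=2000)
rx = np.convolve(tx, h, mode="full")[: len(tx)] + 0.1 * rng.standard_normal(len(tx))

N_TAPS, DELAY = 7, 4                              # equaliser length / decision delay

def cost(w):
    """MSE between equaliser output and the (delayed) transmitted symbols."""
    y = np.convolve(rx, w, mode="full")[: len(tx)]
    return np.mean((y[DELAY:] - tx[:-DELAY]) ** 2)

# Chemotaxis: each bacterium tumbles to a random direction and keeps
# swimming along it while its cost (nutrient level) keeps improving.
pop = rng.standard_normal((20, N_TAPS)) * 0.1      # 20 bacteria (assumption)
step, swims = 0.05, 4
for _ in range(60):                                # chemotactic steps
    for i in range(len(pop)):
        best = cost(pop[i])
        direction = rng.standard_normal(N_TAPS)
        direction /= np.linalg.norm(direction)
        for _ in range(swims):                     # swim while improving
            trial = pop[i] + step * direction
            c = cost(trial)
            if c < best:
                pop[i], best = trial, c
            else:
                break

w_best = min(pop, key=cost)
print("best training MSE:", cost(w_best))
```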

    Automated Website Fingerprinting through Deep Learning

    Several studies have shown that the network traffic generated by a visit to a website over Tor reveals information specific to the website through the timing and sizes of network packets. By capturing traffic traces between users and their Tor entry guard, a network eavesdropper can leverage this metadata to reveal which website Tor users are visiting. The success of such attacks heavily depends on the particular set of traffic features that are used to construct the fingerprint. Typically, these features are manually engineered and, as such, any change introduced to the Tor network can render these carefully constructed features ineffective. In this paper, we show that an adversary can automate the feature engineering process, and thus automatically deanonymize Tor traffic, by applying our novel method based on deep learning. We collect a dataset comprising more than three million network traces, the largest dataset of web traffic ever used for website fingerprinting, and find that the performance achieved by our deep learning approaches is comparable to known methods representing various research efforts spanning multiple years. The obtained success rate exceeds 96% for a closed world of 100 websites and 94% for our biggest closed world of 900 classes. In our open-world evaluation, the most performant deep learning model is 2% more accurate than the state-of-the-art attack. Furthermore, we show that the implicit features automatically learned by our approach are far more resilient to dynamic changes of web content over time. We conclude that the ability to automatically construct the most relevant traffic features and perform accurate traffic recognition makes our deep-learning-based approach an efficient, flexible and robust technique for website fingerprinting. Comment: To appear in the 25th Symposium on Network and Distributed System Security (NDSS 2018).
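    As a hedged illustration of the general approach (not the paper's architecture), the sketch below defines a small one-dimensional convolutional classifier over sequences of packet directions, the kind of traffic representation such an attack can learn automatically. The trace length, layer sizes and number of classes are assumptions.

```python
# Hedged sketch: a small 1-D CNN over packet-direction sequences
# (+1 outgoing, -1 incoming) for closed-world website classification.
import torch
import torch.nn as nn

SEQ_LEN, N_CLASSES = 5000, 100            # assumed trace length / closed-world size

class TraceCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),          # collapse the time axis
        )
        self.classifier = nn.Linear(64, N_CLASSES)

    def forward(self, x):                     # x: (batch, 1, SEQ_LEN) of +/-1 directions
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

# Dummy batch: random +/-1 direction sequences with random labels.
model = TraceCNN()
x = torch.randint(0, 2, (8, 1, SEQ_LEN)).float() * 2 - 1
logits = model(x)
loss = nn.functional.cross_entropy(logits, torch.randint(0, N_CLASSES, (8,)))
loss.backward()                               # one illustrative backward pass
print(logits.shape)                           # torch.Size([8, 100])
```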