27 research outputs found

    Radio frequency traffic classification over WLAN

    Network traffic classification is the process of analyzing traffic flows and associating them with different categories of network applications. Network traffic classification represents an essential task in the whole chain of network security. Some of the most important and widely spread applications of traffic classification are the ability to classify encrypted traffic, the identification of malicious traffic flows, and the enforcement of security policies on the use of different applications. Passively monitoring a network utilizing low-cost and low-complexity wireless local area network (WLAN) devices is desirable. Mobile devices can be used, or existing office desktops can be temporarily utilized when their computational load is low. This reduces the burden on existing network hardware. The aim of this paper is to investigate traffic classification techniques for wireless communications. To aid with intrusion detection, the key goal is to passively monitor and classify different traffic types over WLAN to ensure that network security policies are adhered to. The classification of encrypted WLAN data poses some unique challenges not normally encountered in wired traffic. WLAN traffic is analyzed for features that are then used as an input to six different machine learning (ML) algorithms for traffic classification. One of these algorithms (a Gaussian mixture model incorporating a universal background model) has not been applied to wired or wireless network classification before. The authors also propose an ML algorithm that makes use of the well-known vector quantization algorithm in conjunction with a decision tree, referred to as a TRee Adaptive Parallel Vector Quantiser. This algorithm has a number of advantages over the other ML algorithms tested and is suited to wireless traffic classification. An average F-score (harmonic mean of precision and recall) > 0.84 was achieved when training and testing on the same day across six distinct traffic types
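    The F-score used to evaluate the classifiers above is the harmonic mean of precision and recall; a minimal sketch of the metric (illustrative only, not the authors' evaluation code):

```python
# Minimal illustration of the F-score (F1) metric: the harmonic mean of
# precision and recall, as used to report the > 0.84 classification result.
def f_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example values (hypothetical, not taken from the paper):
print(round(f_score(0.9, 0.8), 3))  # 0.847
```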

    Reliability enhanced EV using pattern recognition techniques

    This paper contributes to the development of novel data transmission techniques from an IVHM perspective so that Electric Vehicles (EVs) will be able to communicate semantically by directly pointing to the worst failure/threat scenarios. This is achieved by constructing an image-based data communication scheme in which the data monitored by a vast number of different sensors are collected as images; then, the meaningful failure/threat objects are transmitted among a number of EVs. The meanings of these objects, clarified for each EV by a set of training patterns, are semantically linked from one EV to the others through the similarities that the EVs share. This is a similar approach to well-known image compression and retrieval techniques, but the difference is that the training patterns, codebook, and codewords within the different EVs are not the same. Hence, the initial image that is compressed at the transmitter side does not exactly match the image retrieved at the receiver's side; what matters is that both EVs share the semantics that mainly address the worst risk scenarios. As an advantage, connected EVs would require fewer communication channels to talk together while also reducing data bandwidth, as only the similarity rates and tags of patterns are sent instead of the whole initial image that is constructed from various sensors, including cameras. As a case study, this concept is applied to DC-DC converters, a subsystem that presents one of the major problems for EVs
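    The codebook-based transmission idea above can be sketched as follows. This is a hypothetical illustration, assuming a simple Euclidean nearest-pattern match and a distance-based similarity rate; the function name, codebook contents, and similarity formula are all assumptions, not the paper's actual scheme:

```python
import numpy as np

# Hypothetical sketch: instead of sending a raw sensor "image", the
# transmitter sends the index of the closest pattern in its own codebook
# plus a similarity rate. Codebooks differ between EVs, so the receiver
# retrieves its own semantically linked pattern for that index.
def nearest_pattern(x, codebook):
    """Return (index, similarity) of the codebook row closest to x."""
    d = np.linalg.norm(codebook - x, axis=1)
    i = int(np.argmin(d))
    sim = 1.0 / (1.0 + d[i])  # simple similarity rate in (0, 1]
    return i, sim

# Illustrative 2-D "feature patterns" standing in for sensor images:
tx_codebook = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
idx, sim = nearest_pattern(np.array([0.9, 1.1]), tx_codebook)
print(idx)  # 1
```

    Only `idx` and `sim` would cross the channel, rather than the full sensor image.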

    Performance Evaluation of Connectivity and Capacity of Dynamic Spectrum Access Networks

    Recent measurements on radio spectrum usage have revealed the abundance of under-utilized bands of spectrum that belong to licensed users. This necessitated the paradigm shift from static to dynamic spectrum access (DSA), where secondary networks utilize unused spectrum holes in the licensed bands without causing interference to the licensed user. However, wide-scale deployment of these networks has been hindered by a lack of knowledge of the expected performance in realistic environments and a lack of cost-effective solutions for implementing spectrum database systems. In this dissertation, we address some of the fundamental challenges of improving the performance of DSA networks in terms of connectivity and capacity. Apart from showing performance gains via simulation experiments, we designed, implemented, and deployed testbeds that achieve economies of scale. We start by introducing network connectivity models and show that the well-established disk model does not hold true for interference-limited networks. Thus, we characterize connectivity based on the signal to interference and noise ratio (SINR) and show that not all the deployed secondary nodes necessarily contribute towards the network's connectivity. We identify such nodes and show that even though a node might be communication-visible it can still be connectivity-invisible. The invisibility of such nodes is modeled using the concept of Poisson thinning. The connectivity-visible nodes are combined with the coverage shrinkage to develop the concept of effective density, which is used to characterize the connectivity. Further, we propose three techniques for connectivity maximization. We also show how traditional flooding techniques are not applicable under the SINR model and analyze the underlying causes. Moreover, we propose a modified version of probabilistic flooding that uses lower message overhead while accounting for node outreach and interference.
    Next, we analyze the connectivity of multi-channel distributed networks and show how the invisibility that arises among the secondary nodes results in thinning, which we characterize as channel abundance. We also capture the thinning that occurs due to the nodes' interference. We study the effects of interference and channel abundance using Poisson thinning on the formation of a communication link between two nodes and also on the overall connectivity of the secondary network. As for the capacity, we derive bounds on the maximum achievable capacity of a randomly deployed secondary network with a finite number of nodes in the presence of primary users, since finding the exact capacity involves solving an optimization problem that does not scale in either time or search space dimensionality. We speed up the optimization by reducing the optimizer's search space. Next, we characterize the QoS that secondary users can expect. We do so by using vector quantization to partition the QoS space into a finite number of regions, each of which is represented by one QoS index. We argue that any operating condition of the system can be mapped to one of the pre-computed QoS indices using a simple look-up in O(log N) time, thus avoiding any cumbersome computation for QoS evaluation. We implement the QoS space on an 8-bit microcontroller and show how the mathematically intensive operations can be computed in a shorter time. To demonstrate that there could be low-cost solutions that scale, we present and implement an architecture that enables dynamic spectrum access for any type of network, ranging from IoT to cellular. The three main components of this architecture are the RSSI sensing network, the DSA server, and the service engine. We use the concept of modular design in these components, which allows transparency between them, scalability, and ease of maintenance and upgrade in a plug-n-play manner, without requiring any changes to the other components.
    Moreover, we provide a blueprint on how to use off-the-shelf, commercially available, software-configurable RF chips to build low-cost spectrum sensors. Using testbed experiments, we demonstrate the efficiency of the proposed architecture by comparing its performance to that of a legacy system. We show the benefits in terms of resilience to jamming, channel relinquishment on primary arrival, and best channel determination and allocation. We also show the performance gains in terms of frame error rate and spectral efficiency
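    The O(log N) QoS look-up described above can be sketched with a sorted-boundary binary search. The thresholds, region count, and use of SINR as the operating condition are illustrative assumptions, not values from the dissertation:

```python
import bisect

# Hypothetical sketch: the QoS space is partitioned offline into regions,
# each represented by a precomputed QoS index. At run time, an operating
# point is mapped to its region with a binary search in O(log N) time,
# avoiding any heavy on-line QoS computation.
sinr_thresholds_db = [0, 5, 10, 15, 20]  # sorted region boundaries (illustrative)
qos_index = [0, 1, 2, 3, 4, 5]           # one precomputed index per region

def lookup_qos(sinr_db: float) -> int:
    """Map an operating point to its precomputed QoS index via binary search."""
    return qos_index[bisect.bisect_right(sinr_thresholds_db, sinr_db)]

print(lookup_qos(7.3))  # 2
```

    The same table-driven structure is what makes the approach feasible on an 8-bit microcontroller: only comparisons and array indexing are needed at run time.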

    Utilizing Massive Spatiotemporal Samples for Efficient and Accurate Trajectory Prediction

    Trajectory prediction is widespread in mobile computing, and helps support wireless network operation, location-based services, and applications in pervasive computing. However, most prediction methods are based on very coarse geometric information such as visited base transceiver stations, which cover tens of kilometers. These approaches undermine the prediction accuracy, and thus restrict the variety of applications. Recently, due to the advance and dissemination of mobile positioning technology, accurate location tracking has become prevalent. Prediction methods based on precise spatiotemporal information have thus become possible. Although the prediction accuracy can be raised, a massive amount of data gets involved, which undoubtedly has a huge impact on network bandwidth usage. Therefore, employing fine spatiotemporal information in an accurate prediction must be efficient. However, this problem is not addressed in many prediction methods. Consequently, this paper proposes a novel prediction framework that utilizes massive spatiotemporal samples efficiently. This is achieved by identifying and extracting the information that is beneficial to accurate prediction from the samples. The proposed prediction framework circumvents high bandwidth consumption while maintaining high accuracy and remaining feasible. The experiments in this study examine the performance of the proposed prediction framework. The results show that it outperforms other popular approaches

    Intrusion detection in mobile ad hoc networks

    Most existing protocols, applications and services for Mobile Ad Hoc NETworks (MANETs) assume a cooperative and friendly network environment and do not accommodate security. Therefore, Intrusion Detection Systems (IDSs), serving as the second line of defense for information systems, are indispensable for MANETs with high security requirements. Central to the research described in this dissertation is the proposed two-level nonoverlapping Zone-Based Intrusion Detection System (ZBIDS), which fits the unique requirements of MANETs. First, in the low level of ZBIDS, I propose an intrusion detection agent model and present a Markov Chain based anomaly detection algorithm. Local and trusted communication activities such as routing table related features are periodically selected and formatted with minimum errors from raw data. A Markov Chain based normal profile is then constructed to capture the temporal dependency among network activities and accommodate the dynamic nature of raw data. A local detection model aggregating abnormal behaviors is constructed to reflect recent subject activities in order to achieve a low false positive ratio and a high detection ratio. A set of criteria to tune parameters is developed and the performance trade-off is discussed. Second, I present a nonoverlapping Zone-based framework to manage locally generated alerts from a wider area. An alert data model conforming to the Intrusion Detection Message Exchange Format (IDMEF) is presented to suit the needs of MANETs. Furthermore, an aggregation algorithm utilizing attribute similarity from alert messages is proposed to integrate security related information from a wider area. In this way, the gateway nodes of ZBIDS can reduce the false positive ratio, improve the detection ratio, and present more diagnostic information about the attack. Third, MANET IDSs need to consider mobility impact and adjust their behavior dynamically. I first demonstrate that nodes' moving speed, a commonly used parameter in tuning IDS performance, is not an effective metric for the performance measurement of MANET IDSs. A new feature, link change rate, is then proposed as a unified metric for local MANET IDSs to adaptively select normal profiles. Different mobility models are utilized to evaluate the performance of the adaptive mechanisms
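    The Markov Chain normal profile described above can be sketched as follows. The feature encoding (event strings), the smoothing floor, and the scoring formula are illustrative assumptions, not the dissertation's actual algorithm:

```python
from collections import defaultdict

# Hypothetical sketch of a Markov-chain normal profile: transition
# probabilities are learned from a trace of normal activity, and a test
# trace is scored by how improbable its transitions are under the profile.
def build_profile(trace):
    """Estimate first-order transition probabilities from a normal trace."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def anomaly_score(trace, profile, floor=1e-6):
    """Mean improbability of observed transitions (higher = more anomalous)."""
    probs = [profile.get(a, {}).get(b, floor) for a, b in zip(trace, trace[1:])]
    return 1.0 - sum(probs) / len(probs)

normal = ['idle', 'route_update', 'idle', 'route_update', 'idle']
profile = build_profile(normal)
# An unseen transition pattern scores higher than a normal-looking one:
print(anomaly_score(['idle', 'flood', 'flood'], profile) >
      anomaly_score(['idle', 'route_update', 'idle'], profile))  # True
```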

    Scaling up virtual MIMO systems

    Multiple-input multiple-output (MIMO) systems are a mature technology that has been incorporated into current wireless broadband standards to improve channel capacity and link reliability. Nevertheless, due to the continuously increasing demand for wireless data traffic, new strategies must be adopted. Very large MIMO antenna arrays represent a paradigm shift in terms of theory and implementation, where the use of tens or hundreds of antennas provides significant improvements in throughput and radiated energy efficiency compared to single-antenna setups. Since design constraints limit the number of usable antennas, virtual systems can be seen as a promising technique due to their ability to mimic and exploit the gains of multi-antenna systems by means of wireless cooperation. Considering these arguments, in this work, energy efficient coding and network design for large virtual MIMO systems are presented. Firstly, a cooperative virtual MIMO (V-MIMO) system that uses a large multi-antenna transmitter and implements compress-and-forward (CF) relay cooperation is investigated. Since constructing a reliable codebook is the most computationally complex task performed by the relay nodes in CF cooperation, reduced-complexity quantisation techniques are introduced. The analysis is focused on the block error probability (BLER) and the computational complexity of the uniform scalar quantiser (U-SQ) and the Lloyd-Max algorithm (LM-SQ). Numerical results show that the LM-SQ is simpler to design and can achieve a BLER performance comparable to the optimal vector quantiser. Furthermore, due to its low complexity, the U-SQ could be considered particularly suitable for very large wireless systems. Even though very large MIMO systems enhance the spectral efficiency of wireless networks, this comes at the expense of linearly increasing power consumption due to the use of multiple radio frequency chains to support the antennas.
    Thus, the energy efficiency and throughput of the cooperative V-MIMO system are analysed and the impact of imperfect channel state information (CSI) on the system’s performance is studied. A power allocation algorithm is then implemented to reduce the total power consumption. Simulation results show that wireless cooperation between users is more energy efficient than using a high modulation order transmission and that the larger the number of transmit antennas, the lower the impact of imperfect CSI on the system’s performance. Finally, the application of cooperative systems is extended to wireless self-backhauling heterogeneous networks, where the decode-and-forward (DF) protocol is employed to provide a cost-effective and reliable backhaul. The associated trade-offs for a heterogeneous network with inhomogeneous user distributions are investigated through the use of sleeping strategies. Three different policies for switching off base stations are considered: random, load-based and greedy algorithms. The probability of coverage for the random and load-based sleeping policies is derived. Moreover, an energy efficient base station deployment and operation approach is presented. Numerical results show that the average number of base stations required to support the traffic load at peak time can be reduced by using the greedy algorithm for base station deployment and that highly clustered networks exhibit a smaller average serving distance and thus a better probability of coverage
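    The Lloyd-Max scalar quantiser (LM-SQ) design loop mentioned above can be sketched as an alternation between nearest-level partitioning and centroid updates. The training distribution, level count, and stopping tolerance are illustrative assumptions, not the thesis configuration:

```python
import numpy as np

# Hypothetical sketch of Lloyd-Max scalar quantiser design: repeatedly
# (1) assign each training sample to its nearest reconstruction level and
# (2) move each level to the centroid of its assigned samples, until the
# levels stop moving. This minimises mean squared quantisation error.
def lloyd_max(samples, n_levels, iters=100, tol=1e-9):
    levels = np.linspace(samples.min(), samples.max(), n_levels)
    for _ in range(iters):
        # nearest-level assignment for every sample
        idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
        # centroid update; keep an empty cell's level unchanged
        new = np.array([samples[idx == k].mean() if np.any(idx == k) else levels[k]
                        for k in range(n_levels)])
        if np.max(np.abs(new - levels)) < tol:
            break
        levels = new
    return np.sort(levels)

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)       # illustrative Gaussian source
q = lloyd_max(data, 4)
print(len(q))  # 4
```

    The U-SQ baseline corresponds to skipping the loop entirely and keeping the initial uniformly spaced levels, which is why it is cheaper to design.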

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering fields. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward both students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories are ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity

    Contributions to unsupervised and supervised learning with applications in digital image processing

    This Thesis covers a broad period of research activities with a common thread: learning processes and their application to image processing. The two main categories of learning algorithms, supervised and unsupervised, have been touched on across these years. The main body of initial works was devoted to unsupervised learning neural architectures, especially the Self Organizing Map. Our aim was to study its convergence properties from empirical and analytical viewpoints. From the digital image processing point of view, we have focused on two basic problems: Color Quantization and filter design. Both problems have been addressed in the context of Vector Quantization performed by Competitive Neural Networks. Processing of non-stationary data is an interesting paradigm that has not been explored with Competitive Neural Networks. We have stated the problem of Non-stationary Clustering and related Adaptive Vector Quantization in the context of image sequence processing, where we naturally have a Frame Based Adaptive Vector Quantization. This approach deals with the problem as a sequence of stationary, almost-independent Clustering problems. We have also developed some new computational algorithms for Vector Quantization design. The works on supervised learning have been sparsely distributed in time and direction. First, we worked on the use of the Self Organizing Map for the independent modeling of skin and non-skin color distributions for color-based face localization. Second, we have collaborated in the realization of a supervised learning system for tissue segmentation in Magnetic Resonance Imaging data. Third, we have worked on the development, implementation and experimentation with High Order Boltzmann Machines, which are a very different learning architecture. Finally, we have been working on the application of Sparse Bayesian Learning to a new kind of classification systems based on Dendritic Computing. This last research line is an open research track at the time of writing this Thesis
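    The Color Quantization by Competitive Neural Networks described above can be sketched with simple winner-take-all learning (a special case of the Self Organizing Map with no neighborhood function). The palette size, learning rate, and pixel data are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch of colour quantization by a competitive neural
# network: for each training pixel, the winning codebook colour (nearest
# in RGB space) is moved toward the pixel. After training, the codebook
# is the reduced colour palette.
def quantise_colours(pixels, n_colours=4, epochs=5, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # initialise the codebook from randomly chosen pixels
    codebook = pixels[rng.choice(len(pixels), n_colours, replace=False)].astype(float)
    for _ in range(epochs):
        for p in pixels:
            winner = np.argmin(np.linalg.norm(codebook - p, axis=1))
            codebook[winner] += lr * (p - codebook[winner])  # move winner toward pixel
    return codebook

# Illustrative pixels: two reddish, two bluish
pixels = np.array([[255, 0, 0], [250, 5, 5], [0, 0, 255], [5, 5, 250]], dtype=float)
palette = quantise_colours(pixels, n_colours=2)
print(palette.shape)  # (2, 3)
```

    A full SOM would additionally update the winner's topological neighbours, which is what gives the map its ordering and convergence properties studied in the Thesis.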

    Linear Transmit-Receive Strategies for Multi-user MIMO Wireless Communications

    In order to combat interference and exploit the large multiplexing gains of multi-antenna systems, a particular interest in spatial division multiple access (SDMA) techniques has emerged. Linear precoding techniques, as one of the SDMA strategies, have obtained more attention due to the fact that the increasing number of users and antennas involved in existing and future mobile communication systems requires a simplification of the precoding design. Therefore, this thesis contributes to the design of linear transmit and receive strategies for multi-user MIMO broadcast channels in a single cell and clustered multiple cells. First, we present a throughput approximation framework for multi-user MIMO broadcast channels employing regularized block diagonalization (RBD) linear precoding. Comparing dirty paper coding (DPC) and linear precoding algorithms (e.g., zero forcing (ZF) and block diagonalization (BD)), we further quantify lower and upper bounds of the rate and power offset between them as a function of the system parameters such as the number of users and antennas.
    Next, we develop a novel closed-form coordinated beamforming (CBF) algorithm (i.e., SeDJoCo based closed-form CBF) to solve the existing open problem of CBF. Our new algorithm can support a MIMO system with an arbitrary number of users and transmit antennas. Moreover, the application of our new algorithm is not only to CBF, but also to blind source separation (BSS), since the same mathematical model is used in BSS applications. Then, we further propose a new iterative CBF algorithm (i.e., flexible coordinated beamforming (FlexCoBF)) for multi-user MIMO broadcast channels. Compared to the existing iterative CBF algorithms, the most promising advantage of our new algorithm is that it provides freedom in the choice of the linear transmit and receive beamforming strategies, i.e., any existing linear precoding method can be chosen as the transmit strategy and the receive beamforming strategy can be flexibly chosen from MRC or MMSE receivers. Considering clustered multiple cell scenarios, we extend the FlexCoBF algorithm further and introduce the concept of coordinated multipoint (CoMP) transmission. Finally, we present three strategies for channel state information (CSI) acquisition regarding various channel conditions and channel estimation strategies. CSI knowledge is required at the base station in order to implement SDMA techniques. The quality of the obtained CSI heavily affects the system performance. The performance enhancement achieved by our new strategies has been demonstrated by numerical simulation results in terms of the system sum rate and the bit error rate
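    Zero forcing (ZF), one of the linear precoding baselines compared above, can be sketched in a few lines: the precoder inverts the channel so each user's stream arrives without inter-user interference. The channel dimensions and random values are illustrative, and this ignores the per-antenna power normalisation a real system would apply:

```python
import numpy as np

# Hypothetical sketch of zero-forcing (ZF) linear precoding: with the
# precoder W = pinv(H), the effective channel H @ W is (numerically) the
# identity, so each single-antenna user sees only its own data stream.
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4))   # 4 single-antenna users, 4 transmit antennas
W = np.linalg.pinv(H)             # ZF precoder (channel pseudo-inverse)
effective = H @ W                 # ~identity: inter-user interference removed

print(np.allclose(effective, np.eye(4), atol=1e-6))  # True
```

    The cost of this interference cancellation is a power penalty when H is poorly conditioned, which is what RBD and the DPC bounds discussed in the thesis trade off against.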