Causal evolution of spin networks
A new approach to quantum gravity is described which joins the loop
representation formulation of the canonical theory to the causal set
formulation of the path integral. The theory assigns quantum amplitudes to
special classes of causal sets, which consist of spin networks representing
quantum states of the gravitational field joined together by labeled null
edges. The theory exists in 3+1, 2+1 and 1+1 dimensional versions, and may also
be interpreted as a theory of labeled timelike surfaces. The dynamics is
specified by a choice of functions of the labelings of d+1 dimensional
simplices, which represent elementary future light cones of events in these
discrete spacetimes. The quantum dynamics thus respects the discrete causal
structure of the causal sets. In the 1+1 dimensional case the theory is closely
related to directed percolation models. In this case, at least, the theory may
have critical behavior associated with percolation, leading to the existence of
a classical limit.
Comment: LaTeX, 32 pages, 17 figures
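The connection to directed percolation noted in the abstract can be made concrete with a toy simulation. The sketch below is not from the paper: the lattice geometry, periodic boundary, and parameters are invented for illustration, though the bond-DP threshold p_c ≈ 0.6447 on the square lattice is standard. Subcritical activity dies out; supercritical activity survives at finite density, which is the kind of critical behavior the abstract associates with a classical limit.

```python
import random

def dp_survival(p, width=200, steps=200, seed=0):
    """Bond directed percolation in 1+1 dimensions (toy model).

    Each site at time t+1 becomes active if at least one of its two
    "parent" sites at time t is active and the corresponding directed
    bond is open (independently, with probability p).  Starting from a
    fully active row, returns the density of active sites at the end.
    """
    rng = random.Random(seed)
    active = [True] * width
    for _ in range(steps):
        nxt = [False] * width
        for i in range(width):
            left = active[i]
            right = active[(i + 1) % width]   # periodic boundary
            if (left and rng.random() < p) or (right and rng.random() < p):
                nxt[i] = True
        active = nxt
    return sum(active) / width

# Below the bond-DP threshold (p_c ~ 0.6447) activity dies out;
# above it a finite density of active sites survives.
low = dp_survival(0.3)
high = dp_survival(0.9)
```

Varying `p` across the threshold and plotting the surviving density would reveal the continuous phase transition characteristic of the directed-percolation universality class.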
Network Management, Optimization and Security with Machine Learning Applications in Wireless Networks
Wireless communication networks are evolving fast, with many challenges and ambitions. The requirements that modern wireless networks are expected to deliver are complex, multi-dimensional, and sometimes contradictory. In this thesis, we investigate several types of emerging wireless networks and tackle some of the challenges these networks face. We focus on three main challenges: Resource Optimization, Network Management, and Cyber Security. We present multiple views of these three aspects and propose solutions to probable scenarios.

The first challenge (Resource Optimization) is studied in Wireless Powered Communication Networks (WPCNs). WPCNs are considered a very promising approach towards sustainable, self-sufficient wireless sensor networks. We consider a WPCN with Non-Orthogonal Multiple Access (NOMA) and study two decoding schemes, aiming to optimize performance with and without interference cancellation. This leads to solving convex and non-convex optimization problems.

The second challenge (Network Management) is studied for cellular networks and handled using Machine Learning (ML). Two scenarios are considered. First, we target energy conservation. We propose an ML-based approach to turn Multiple Input Multiple Output (MIMO) technology on or off depending on certain criteria. Turning off MIMO can save a considerable share of the total site energy consumption. To control enabling and disabling MIMO, a Neural Network (NN) based approach is used: it learns network features and decides whether the site can achieve satisfactory performance with MIMO off. In the second scenario, we take a deeper look into the cellular network, aiming for more control over the network features. We propose a Reinforcement Learning-based approach to control three features of the network (relative CIOs, transmission power, and the MIMO feature). The proposed approach delivers a stable state of the cellular network and enables the network to self-heal after any change or disturbance in the surroundings.

In the third challenge (Cyber Security), we propose an NN-based approach for detecting False Data Injection (FDI) in industrial data. FDI attacks corrupt sensor measurements to deceive the industrial platform. The proposed approach uses an Autoencoder (AE) for FDI detection. In addition, a Denoising AE (DAE) is used to clean the corrupted data for further processing.
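The autoencoder-based FDI detection described in the abstract can be sketched in miniature. The example below is illustrative only: it substitutes a linear autoencoder (rank-1 PCA via SVD) for the thesis's neural AE, and the data, dimensions, and threshold rule are invented for the demonstration. The detection logic is the same idea: fit the model on clean data, then flag samples whose reconstruction error is abnormally large.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "clean" sensor data: 3 correlated channels driven by one
# latent factor plus small measurement noise (all values invented).
latent = rng.normal(size=(500, 1))
mixing = np.array([[1.0, 0.8, -0.5]])
clean = latent @ mixing + 0.01 * rng.normal(size=(500, 3))

# A linear autoencoder stands in for the thesis's neural AE:
# encode = project onto the top principal component, decode = lift back.
mean = clean.mean(axis=0)
_, _, vt = np.linalg.svd(clean - mean, full_matrices=False)
basis = vt[:1]                      # 1-D "bottleneck"

def recon_error(x):
    z = (x - mean) @ basis.T        # encode
    xhat = z @ basis + mean         # decode
    return np.linalg.norm(x - xhat, axis=-1)

# Threshold calibrated on clean data (a simple heuristic choice).
threshold = recon_error(clean).max() * 1.5

normal_sample = clean[0]
injected = normal_sample + np.array([0.0, 2.0, 0.0])   # falsified channel

is_attack = recon_error(injected) > threshold
```

A denoising variant, as in the thesis, would additionally be trained to map corrupted inputs back to clean ones, so the decoder output itself can serve as the cleaned measurement.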
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, along with a range of techniques covering model-based approaches, `programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, to detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together, updating each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
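The report's point that rule-based correlation detects only what it has been programmed for can be illustrated with a toy event correlator. Everything below is hypothetical: the rule ("repeated login failures followed by a success suggests a compromised account"), the event schema, and the thresholds are invented for the sketch, not taken from the report.

```python
from collections import deque

def detect_misuse(events, window=60, fail_limit=3):
    """Toy rule-based correlator for one programmed misuse pattern.

    Rule (hypothetical): at least `fail_limit` 'login_fail' events for
    the same user within `window` seconds, followed by a 'login_ok',
    raises an alert.  `events` is an iterable of (timestamp, user,
    kind) tuples, assumed time-ordered.
    """
    recent_fails = {}          # user -> deque of recent fail timestamps
    alerts = []
    for ts, user, kind in events:
        fails = recent_fails.setdefault(user, deque())
        while fails and ts - fails[0] > window:
            fails.popleft()    # expire failures outside the window
        if kind == "login_fail":
            fails.append(ts)
        elif kind == "login_ok" and len(fails) >= fail_limit:
            alerts.append((ts, user))
            fails.clear()
    return alerts

stream = [
    (0, "alice", "login_fail"),
    (10, "alice", "login_fail"),
    (20, "alice", "login_fail"),
    (25, "alice", "login_ok"),     # matches the programmed rule
    (30, "bob", "login_ok"),       # innocent, ignored
]
alerts = detect_misuse(stream)
```

Any misuse that does not match this exact pattern passes silently, which is precisely the false-negative weakness the report attributes to purely programmed systems.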
Discovering Network Control Vulnerabilities and Policies in Evolving Networks
The range and number of new applications and services are growing at an unprecedented rate. Computer networks need to be able to provide connectivity for these services and meet their constantly changing demands. This requires not only support of new network protocols and security requirements, but often architectural redesigns for long-term improvements to efficiency, speed, throughput, cost, and security. Networks are now facing a drastic increase in size and are required to carry a constantly growing amount of heterogeneous traffic. Unfortunately, such dynamism greatly complicates the security not only of the end nodes in the network, but also of the nodes of the network itself. To make matters worse, just as applications are being developed at faster and faster rates, attacks are becoming more pervasive and complex. Networks need to be able to understand the impact of these attacks and protect against them.
Network control devices, such as routers, firewalls, censorship devices, and base stations, are elements of the network that make decisions on how traffic is handled. Although network control devices are expected to act according to specifications, there can be various reasons why they do not in practice. Protocols could be flawed, ambiguous or incomplete, developers could introduce unintended bugs, or attackers may find vulnerabilities in the devices and exploit them. Malfunction could intentionally or unintentionally threaten the confidentiality, integrity, and availability of end nodes and the data that passes through the network. It can also impact the availability and performance of the control devices themselves and the security policies of the network. The fast-paced evolution and scalability of current and future networks create a dynamic environment for which it is difficult to develop automated tools for testing new protocols and components. At the same time, they make the function of such tools vital for discovering implementation flaws and protocol vulnerabilities as networks become larger and more complex, and as new and potentially unrefined architectures become adopted. This thesis will present the design, implementation, and evaluation of a set of tools designed for understanding implementation of network control nodes and how they react to changes in traffic characteristics as networks evolve. We will first introduce Firecycle, a test bed for analyzing the impact of large-scale attacks and Machine-to-Machine (M2M) traffic on the Long Term Evolution (LTE) network. We will then discuss Autosonda, a tool for automatically discovering rule implementation and finding triggering traffic features in censorship devices.
This thesis provides the following contributions:
1. The design, implementation, and evaluation of two tools to discover models of network control nodes in two scenarios of evolving networks: mobile networks and the censored Internet
2. First existing test bed for analysis of large-scale attacks and impact of traffic scalability on LTE mobile networks
3. First existing test bed for LTE networks that can be scaled to arbitrary size and that deploys traffic models based on real traffic traces taken from a tier-1 operator
4. An analysis of traffic models of various categories of Internet of Things (IoT) devices
5. First study demonstrating the impact of M2M scalability and signaling overload on the packet core of LTE mobile networks
6. A specification for modeling of censorship device decision models
7. A means for automating the discovery of features utilized in censorship device decision models, the comparison of these models, and their rule discovery
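The probe-and-compare idea behind automated rule discovery (contributions 6 and 7) can be sketched as a black-box experiment. This is illustrative only: Autosonda's real feature set, mutation strategy, and model comparison are more sophisticated, and the filter, feature names, and helper below are invented for the sketch.

```python
from itertools import product

def infer_triggers(is_blocked, feature_values):
    """Black-box probe of a filtering device's decision model.

    `is_blocked(request)` is the device under test (here any callable
    taking a dict of features); `feature_values` maps feature name to
    candidate values.  Probe every combination, then keep the
    single-feature settings that always trigger blocking.
    """
    names = list(feature_values)
    outcomes = {}
    for combo in product(*(feature_values[n] for n in names)):
        request = dict(zip(names, combo))
        outcomes[combo] = is_blocked(request)

    rules = []
    for i, name in enumerate(names):
        for value in feature_values[name]:
            hits = [blocked for combo, blocked in outcomes.items()
                    if combo[i] == value]
            if hits and all(hits):
                rules.append((name, value))   # value alone implies blocking
    return rules

# Toy filter: blocks any request whose host names a banned domain.
banned = {"blocked.example"}
filt = lambda req: req["host"] in banned

rules = infer_triggers(filt, {
    "host": ["allowed.example", "blocked.example"],
    "port": [80, 443],
})
```

The inferred rule set is itself a small decision model, so models recovered from different devices can be compared directly, which is the spirit of contribution 7.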