Online learning on the programmable dataplane
This thesis makes the case for managing computer networks with data-driven methods (automated statistical inference and control based on measurement data and runtime observations) and argues for their tight integration with programmable dataplane hardware, so that management decisions can be made faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, and each is currently dominated by hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and numbers of forwarding elements, and their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made more quickly by offloading inference to the network.
To justify this argument, I advance the state of the art in data-driven defence of networks, in novel dataplane-friendly online reinforcement learning algorithms, and in in-network data reduction to allow classification of switch-scale data. Each requires co-design that is aware of the network and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation to histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits to state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as of individual algorithms.
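As an illustration of the fixed-point arithmetic the abstract mentions, the sketch below performs a classical (non-neural) Q-learning update using only integer operations, the kind of arithmetic dataplane hardware supports in place of floating point. The 16.16 format and all names (`SCALE`, `fmul`, `q_update`) are illustrative assumptions, not details taken from the thesis.

```python
# Hedged sketch: a Q-learning update in 16.16 fixed-point integers.
# Values carry 16 fractional bits; multiplication shifts back down.
SCALE = 1 << 16  # 16 fractional bits

def to_fixed(x: float) -> int:
    """Encode a float as a 16.16 fixed-point integer."""
    return int(round(x * SCALE))

def fmul(a: int, b: int) -> int:
    """Fixed-point multiply: widen, then discard the extra fractional bits."""
    return (a * b) >> 16

def q_update(q_sa: int, reward: int, q_next_max: int,
             alpha: int, gamma: int) -> int:
    """One Q-learning step, entirely in integer arithmetic:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = reward + fmul(gamma, q_next_max)
    return q_sa + fmul(alpha, target - q_sa)

q = q_update(to_fixed(0.5), to_fixed(1.0), to_fixed(0.8),
             alpha=to_fixed(0.1), gamma=to_fixed(0.9))
print(q / SCALE)  # ≈ 0.622, matching 0.5 + 0.1*(1.0 + 0.9*0.8 - 0.5)
```

The small rounding error versus the floating-point result (bounded by the shift truncation in `fmul`) is the trade-off such hardware-friendly arithmetic accepts.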
A two-fold Perspective on Enterprise Security in the Digital Twin Context
Digital twins virtually represent and manage an enterprise asset along its lifecycle. The vital technologies the twin relies upon (e.g., the Internet of Things) have only recently matured, and the literature has since taken up digital twins. The digital twin therefore constitutes a very young concept, in which security is currently neglected. This dissertation aims at closing this research gap and contributes to the body of knowledge on digital twin security. To study digital twin security, a two-fold approach is necessary. On the one hand, digital twins are at risk of being attacked (security for digital twins). On the other hand, they can also be leveraged to gain novel security opportunities (digital twins for security). This dissertation lays the general foundations of the digital twin concept in enterprises and then studies these two security perspectives. It shows that the digital twin’s security can be fostered using blockchain technology. Furthermore, it proposes digital twins for use in corporate security: digital twins are shown to collaborate with traditional security tools like Security Information and Event Management (SIEM) systems and with organizational structures like the Security Operations Center (SOC). In this regard, the use of digital twins is further shown to be beneficial for digital forensics as well as Cyber Threat Intelligence (CTI).
Efficient integrity verification of replicated data in cloud
Cloud computing is an emerging model in which computing infrastructure resources are provided as a service over the Internet. Data owners can outsource their data by storing it remotely in the cloud and enjoy on-demand, high-quality services from a shared pool of configurable computing resources. By using these data storage services, data owners are relieved of the burden of local data storage and maintenance. However, since data owners and cloud servers are not in the same trusted domain, the outsourced data may be at risk, as the cloud server may no longer be fully trusted. Data integrity is therefore of critical importance in such a scenario. The cloud should let owners, or a trusted third party, check the integrity of their stored data without demanding a local copy of the data. Owners often replicate their data on cloud servers across multiple data centers to provide higher scalability, availability, and durability. When data owners ask the Cloud Service Provider (CSP) to replicate data, they are charged a higher storage fee by the CSP. The data owners therefore need to be strongly convinced that the CSP is storing all the data copies agreed upon in the service-level contract, and that data updates have been correctly executed on all the remotely stored copies. In this thesis, a Dynamic Multi-Replica Provable Data Possession scheme (DMR-PDP) is proposed that prevents the CSP from cheating, for example by maintaining fewer copies than paid for and/or tampering with data. In addition, the scheme is extended to support a basic file versioning system in which, for privacy reasons, only the difference between the original file and the updated file is propagated rather than the operations themselves. DMR-PDP also supports efficient dynamic operations such as block modification, insertion, and deletion on replicas over the cloud servers --Abstract, page iii
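To illustrate the challenge-response pattern behind provable data possession, the sketch below has the owner keep compact keyed tags per block and replica, then spot-check random (block, replica) pairs so the CSP cannot answer for a missing replica with another copy. The actual DMR-PDP scheme uses cryptographic constructions with stronger properties (e.g., efficient dynamic updates); every name here (`tag`, `audit`) is a hypothetical illustration, not the thesis's protocol.

```python
# Hedged sketch of spot-check auditing for replicated cloud storage.
import hashlib
import hmac
import os
import random

KEY = os.urandom(32)  # owner's secret key, kept locally

def tag(block: bytes, index: int, replica: int) -> bytes:
    # The keyed tag binds block content to its position AND replica number,
    # so the CSP cannot answer a challenge on replica 2 with replica 1's copy.
    msg = index.to_bytes(8, "big") + replica.to_bytes(4, "big") + block
    return hmac.new(KEY, msg, hashlib.sha256).digest()

# Owner: compute tags once at upload time; only the compact tags stay local.
blocks = [b"block-%d" % i for i in range(100)]
tags = {(i, r): tag(b, i, r) for r in range(3) for i, b in enumerate(blocks)}

def audit(server_replicas, n_challenges: int = 10) -> bool:
    """Challenge random (block, replica) pairs; the server returns the block,
    and the owner recomputes and compares the stored tag."""
    for _ in range(n_challenges):
        i, r = random.randrange(100), random.randrange(3)
        if tag(server_replicas[r][i], i, r) != tags[(i, r)]:
            return False
    return True

honest = [list(blocks) for _ in range(3)]
print(audit(honest))  # True
```

A cheating CSP that drops or tampers with a replica fails each challenge touching it, so a modest number of random challenges detects misbehaviour with high probability while the owner stores only tags, never the data itself.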
WiMAX Basics from PHY Layer to Scheduling and Multicasting Approaches
WiMAX (Worldwide Interoperability for Microwave Access) is an emerging broadband wireless technology that provides last-mile solutions supporting higher bandwidth and multiple service classes with various quality-of-service requirements. The unique architecture of the WiMAX MAC and PHY layers, which uses OFDMA to allocate multiple channels with different modulation schemes and multiple time slots per channel, allows better adaptation to heterogeneous users' requirements. WiMAX deployments use PMP (Point-to-Multipoint), Mesh mode, or the new MMR (Mobile Multi-hop Relay) architectures, in which scheduling and multicasting take different approaches. In PMP, the SS (Subscriber Station) connects directly to the BS (Base Station) over a single-hop route, so adapting to channel conditions and supporting QoS for classes of service are the key points in scheduling, admission control, and multicasting. In Mesh networks, an SS connects to other SSs or to the BS over multi-hop routes, while MMR mode extends PMP so that the SS connects to either a Relay Station (RS) or the BS. Both MMR and Mesh use centralized or distributed scheduling, with multicasting schemes based on scheduling trees for routing. In this paper a broad study of WiMAX technology in PMP and Mesh deployments is conducted, from the main physical-layer features, through differentiating MAC-layer features, to the scheduling and multicasting approaches in both modes of operation.