Distributed, multi-level network anomaly detection for datacentre networks
Over the past decade, numerous systems have been proposed to detect and subsequently prevent or mitigate security vulnerabilities. However, many existing intrusion or anomaly detection solutions are limited to a subset of the traffic due to scalability issues, and hence fail to operate at line rate on large, high-speed datacentre networks. In this paper, we present a two-level solution for anomaly detection leveraging independent execution and message-passing semantics. We employ these constructs within a network-wide distributed anomaly detection framework that allows for greater detection accuracy and bandwidth cost savings through attack path reconstruction. Experimental results using real operational traffic traces and known network attacks generated through the Pytbull IDS evaluation framework show that our approach is capable of detecting anomalies in a timely manner while allowing reconstruction of the attack path, hence further enabling the composition of advanced mitigation strategies. The resulting system shows high detection accuracy when compared to similar techniques, at least 20% better at detecting anomalies, and enables full path reconstruction even at small-to-moderate attack traffic intensities (as a fraction of the total traffic), saving up to 75% of bandwidth due to early attack detection.
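The two-level idea above can be illustrated with a minimal sketch: per-switch detectors flag suspicious flows independently and pass alert messages to a coordinator, which correlates them to reconstruct the attack path. All names, thresholds, and the message format here are hypothetical, not taken from the paper.

```python
from collections import defaultdict

# Hypothetical sketch: first-level per-switch detectors emit alert
# messages; a second-level coordinator correlates alerts per flow
# to reconstruct the path the attack traffic traversed.

def local_detect(switch_id, flows, threshold=1000):
    """First level: flag flows whose packet rate exceeds a threshold."""
    return [{"switch": switch_id, "flow": f, "rate": r}
            for f, r in flows.items() if r > threshold]

def reconstruct_path(alerts):
    """Second level: group alerted switches per flow into a path."""
    paths = defaultdict(list)
    for a in alerts:
        paths[a["flow"]].append(a["switch"])
    return dict(paths)

# Example: the same suspicious flow observed at two switches
alerts = local_detect("s1", {"10.0.0.5->10.0.1.9": 5000}) \
       + local_detect("s2", {"10.0.0.5->10.0.1.9": 4800})
print(reconstruct_path(alerts))  # {'10.0.0.5->10.0.1.9': ['s1', 's2']}
```

The message-passing split keeps per-switch work cheap (a threshold test) while the coordinator does the cross-switch correlation, which is the part that enables path reconstruction.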
SDN-PANDA: Software-Defined Network Platform for ANomaly Detection Applications
The proliferation of cloud-enabled services has caused an exponential growth in the traffic volume of modern data centres (DCs). An important aspect for the optimal operation of DCs is the real-time detection of anomalies within the measured traffic volume, in order to identify possible threats or challenges caused by either malicious or legitimate intent. Therefore, in this paper we present SDN-PANDA, a 'pluggable' software platform that aims to provide centralised administration and experimentation for anomaly detection techniques in Software Defined Data Centres (SDDCs). We present the overall design of the proposed scheme, and illustrate some initial results related to the performance of the current prototype with respect to scalability and basic traffic visualisation. We argue that the introduced platform may facilitate the underlying functional basis for a number of real-time anomaly detection applications and provide the necessary foundations for such algorithms to be easily deployed.
Artificial intelligence in the cyber domain: Offense and defense
Artificial intelligence techniques have grown rapidly in recent years, and their applications in practice can be seen in many fields, ranging from facial recognition to image analysis. In the cybersecurity domain, AI-based techniques can provide better cyber defense tools and help adversaries improve methods of attack. However, malicious actors are aware of the new prospects too and will probably attempt to use them for nefarious purposes. This survey paper aims at providing an overview of how artificial intelligence can be used in the context of cybersecurity in both offense and defense.
High-performance, Platform-Independent DDoS Detection for IoT Ecosystems
Most Distributed Denial of Service (DDoS) detection and mitigation strategies for the Internet of Things (IoT) are based on a remote cloud server or purpose-built middlebox executing complex intrusion detection methods that impose stringent scalability and performance requirements on the IoT, due to the vast amounts of traffic and devices to be handled. In this paper, we present an edge-based detection scheme using BPFabric, a high-speed, programmable data-plane switch architecture, and lightweight network functions to execute upstream anomaly detection. The proposed detection scheme ensures fast detection of DDoS attacks originating from IoT devices, while guaranteeing minimum resource usage and processing overhead. Our solution was compared against two widespread coarse-grained detection techniques, showing detection delays under 5 ms, an overall accuracy of 93-95%, and a bandwidth overhead of less than 1%.
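A common coarse-grained baseline of the kind this work compares against is entropy-based detection over traffic windows: a sharp change in the Shannon entropy of source addresses relative to a baseline can signal a DDoS. The sketch below is illustrative and not the paper's actual baseline; packet format and window handling are assumptions.

```python
import math
from collections import Counter

def src_ip_entropy(packets):
    """Shannon entropy (bits) of source addresses in a traffic window.
    A collapse towards 0 (one dominant source) or a spike towards the
    maximum (spoofed random sources) versus a baseline is suspicious."""
    counts = Counter(p["src"] for p in packets)
    total = sum(counts.values())
    return sum(-(c / total) * math.log2(c / total)
               for c in counts.values())

normal = [{"src": f"10.0.0.{i % 50}"} for i in range(1000)]  # 50 even sources
flood  = [{"src": "10.0.0.7"} for _ in range(1000)]          # single source
print(round(src_ip_entropy(normal), 2))  # 5.64, i.e. log2(50)
print(src_ip_entropy(flood))             # 0.0
```

Such per-window statistics are cheap enough for a data-plane or edge device, which is why they serve as the coarse-grained point of comparison for finer per-flow schemes.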
Malware Detection in Cloud Computing Infrastructures
Cloud services are prominent within the private, public and commercial domains. Many of these services are expected to be always on and have a critical nature; therefore, security and resilience are increasingly important aspects. In order to remain resilient, a cloud needs to possess the ability to react not only to known threats, but also to new challenges that target cloud infrastructures. In this paper we introduce and discuss an online cloud anomaly detection approach, comprising dedicated detection components of our cloud resilience architecture. More specifically, we exhibit the applicability of novelty detection under the one-class Support Vector Machine (SVM) formulation at the hypervisor level, through the utilisation of features gathered at the system and network levels of a cloud node. We demonstrate that our scheme can reach a high detection accuracy of over 90% whilst detecting various types of malware and DoS attacks. Furthermore, we evaluate the merits of considering not only system-level data, but also network-level data depending on the attack type. Finally, the paper shows that our approach to detection using dedicated monitoring components per VM is particularly applicable to cloud scenarios and leads to a flexible detection system capable of detecting new malware strains with no prior knowledge of their functionality or their underlying instructions.
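The novelty-detection idea behind the one-class SVM is to learn a boundary around normal behaviour only, and flag anything outside it. As a minimal pure-Python stand-in (a centroid-plus-radius boundary rather than an actual SVM), the sketch below illustrates the principle; the feature names and values are invented, not the paper's.

```python
import math

# Simplified stand-in for one-class novelty detection: learn a centroid
# and radius from "normal" feature vectors (e.g. per-VM system and
# network counters) and flag points falling outside that region.
# This is NOT a one-class SVM; it only illustrates the same idea.

class NoveltyDetector:
    def fit(self, normal):
        dims = len(normal[0])
        self.centroid = [sum(v[i] for v in normal) / len(normal)
                         for i in range(dims)]
        self.radius = max(self._dist(v) for v in normal)

    def _dist(self, v):
        return math.sqrt(sum((a - b) ** 2
                             for a, b in zip(v, self.centroid)))

    def is_anomalous(self, v):
        return self._dist(v) > self.radius

# Hypothetical [cpu%, packets/s] samples from normal operation
det = NoveltyDetector()
det.fit([[20, 100], [25, 120], [22, 110]])
print(det.is_anomalous([90, 5000]))  # True: far outside the normal region
print(det.is_anomalous([23, 115]))   # False
```

The key property, which the one-class SVM shares, is that no attack samples are needed for training, which is what allows detection of previously unseen malware strains.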
Index Terms—Security, resilience, invasive software, multi-agent systems, network-level security and protection
Cloud-based cyber-physical intrusion detection for vehicles using Deep Learning
Detection of cyber attacks against vehicles is of growing interest. As vehicles typically afford limited processing resources, proposed solutions are rule-based or lightweight machine learning techniques. We argue that this limitation can be lifted with computational offloading, commonly used for resource-constrained mobile devices. The increased processing resources available in this manner allow access to more advanced techniques. Using as a case study a small four-wheel robotic land vehicle, we demonstrate the practicality and benefits of offloading the continuous task of intrusion detection that is based on deep learning. This approach achieves high accuracy much more consistently than with standard machine learning techniques and is not limited to a single type of attack or to the in-vehicle CAN bus, as in previous work. As input, it uses data captured in real time that relate to both cyber and physical processes, which it feeds as time series data to a neural network architecture. We use both a deep multilayer perceptron and a recurrent neural network architecture, with the latter benefitting from a long short-term memory hidden layer, which proves very useful for learning the temporal context of different attacks. We employ denial of service, command injection and malware as examples of cyber attacks that are meaningful for a robotic vehicle. The practicality of the latter depends on the resources afforded onboard and remotely, as well as the reliability of the communication means between them. Using detection latency as the criterion, we have developed a mathematical model to determine when computation offloading is beneficial, given parameters related to the operation of the network and the processing demands of the deep learning model. The more reliable the network and the greater the processing demands, the greater the reduction in detection latency achieved through offloading.
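The offloading trade-off described above can be sketched as a simple latency comparison: offloading wins when network transfer plus remote compute time beats local compute time. The formula and parameter names below are an illustrative simplification under stated assumptions, not the paper's actual model.

```python
def should_offload(model_flops, local_flops_s, remote_flops_s,
                   input_bytes, bandwidth_bytes_s, rtt_s):
    """Offload when end-to-end remote detection latency beats local.
    Assumes a fixed round-trip time and a steady transfer rate; a real
    model would also account for network reliability."""
    local_latency = model_flops / local_flops_s
    remote_latency = (rtt_s
                      + input_bytes / bandwidth_bytes_s
                      + model_flops / remote_flops_s)
    return remote_latency < local_latency

# A heavy deep-learning model on a weak onboard CPU over a fast link:
# ~10 s locally vs. ~0.16 s (0.05 RTT + 0.1 transfer + 0.01 compute)
print(should_offload(model_flops=1e9, local_flops_s=1e8,
                     remote_flops_s=1e11,
                     input_bytes=1e5, bandwidth_bytes_s=1e6,
                     rtt_s=0.05))  # True
```

This captures the qualitative conclusion of the abstract: the heavier the model relative to onboard compute, and the better the link, the more offloading reduces detection latency.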
Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis
Traditional data centers are designed with a rigid architecture of fit-for-purpose servers that provision resources beyond the average workload in order to deal with occasional peaks of data. Heterogeneous data centers are pushing towards more cost-efficient architectures with better resource provisioning. In this paper we study the feasibility of using disaggregated architectures for intensive data applications, in contrast to the monolithic approach of server-oriented architectures. In particular, we have tested a proactive network analysis system in which the workload demands are highly variable. In the context of the dReDBox disaggregated architecture, the results show that the overhead caused by using remote memory resources is significant, between 66% and 80%, but we have also observed that the memory usage is one order of magnitude higher for the stress case with respect to average workloads. Therefore, dimensioning memory for the worst case in conventional systems will result in a notable waste of resources. Finally, we found that, for the selected use case, parallelism is limited by memory. Therefore, using a disaggregated architecture will allow for increased parallelism, which, at the same time, will mitigate the overhead caused by remote memory.
Presented at the IEEE International Conference on High Performance Computing and Communications, Bangkok, Thailand, 18-20 December 2017; to be published in the conference proceedings.
Transactions and data management in NoSQL cloud databases
NoSQL databases have become the preferred option for storing and processing data in cloud computing, as they are capable of providing high data availability, scalability and efficiency. However, in order to achieve these attributes, NoSQL databases make certain trade-offs. First, NoSQL databases cannot guarantee strong consistency of data; they only guarantee a weaker consistency based on the eventual consistency model. Second, NoSQL databases adopt a simple data model which makes it easy for data to be scaled across multiple nodes. Third, NoSQL databases do not support table joins and referential integrity, which by implication means they cannot implement complex queries. The combination of these factors implies that NoSQL databases cannot support transactions. Motivated by these crucial issues, this thesis investigates transaction and data management in NoSQL databases.
It presents a novel approach that implements transactional support for NoSQL databases in order to ensure stronger data consistency and provide an appropriate level of performance. The novelty lies in the design of a Multi-Key transaction model that guarantees the standard properties of transactions in order to ensure stronger consistency and integrity of data. The model is implemented in a novel loosely-coupled architecture that separates the implementation of transactional logic from the underlying data, thus ensuring transparency and abstraction in cloud and NoSQL databases. The proposed approach is validated through the development of a prototype system using a real MongoDB system. An extended version of the standard Yahoo! Cloud Serving Benchmark (YCSB) has been used in order to test and evaluate the proposed approach. Various experiments have been conducted and sets of results have been generated. The results show that the proposed approach meets the research objectives. It maintains stronger consistency of cloud data as well as an appropriate level of reliability and performance.
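The essence of a multi-key transaction layer over a key-value store is that writes to several keys are staged and then applied all-or-nothing at commit. The sketch below illustrates that idea over an in-memory dictionary; the API and class names are hypothetical and do not reflect the thesis's actual architecture, which sits above MongoDB.

```python
# Hypothetical sketch: buffered multi-key writes with atomic commit
# over a toy in-memory key-value store. A real implementation would
# also need concurrency control and durable logging.

class KVStore:
    def __init__(self):
        self.data = {}

class MultiKeyTxn:
    def __init__(self, store):
        self.store, self.staged = store, {}

    def put(self, key, value):
        self.staged[key] = value              # buffered, not yet visible

    def get(self, key):
        # read-your-own-writes: staged values shadow the store
        return self.staged.get(key, self.store.data.get(key))

    def commit(self):
        self.store.data.update(self.staged)   # all writes land together
        self.staged = {}

    def abort(self):
        self.staged = {}                      # discard every buffered write

store = KVStore()
txn = MultiKeyTxn(store)
txn.put("alice", 50)
txn.put("bob", 150)
txn.abort()                    # nothing reaches the store
print(store.data)              # {}
txn.put("alice", 50); txn.put("bob", 150); txn.commit()
print(store.data)              # {'alice': 50, 'bob': 150}
```

Keeping the transactional logic in a layer separate from the store, as here, mirrors the loosely-coupled design the abstract describes: the underlying database stays unmodified.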