
    Detecting and defending against cyber attacks in a smart home Internet of Things ecosystem

    The proliferation of Internet of Things (IoT) devices is demonstrated by their prominence in our daily lives. Although such devices simplify and automate everyday tasks, they also introduce serious security flaws. Current security measures are insufficient, making IoT devices one of the weakest links for breaking into an otherwise secure infrastructure, which can have serious consequences. Consequently, this thesis is motivated by the need to develop and further enhance novel mechanisms tailored towards strengthening the overall security of IoT ecosystems. To estimate the degree to which a hub can improve the overall security of the ecosystem, this thesis presents the design and prototype implementation of a novel secure IoT hub, consisting of various built-in security mechanisms that satisfy key security properties (e.g. authentication, confidentiality, access control) applicable to a range of devices. The effectiveness of the hub was evaluated within a smart home IoT network against which popular cyber attacks were deployed. To further enhance the security of the IoT environment, initial experiments towards the development of a three-layered Intrusion Detection System (IDS) are presented. The IDS aims to: 1) classify IoT devices, 2) identify malicious or benign network packets, and 3) identify the type of attack which has occurred. To support the classification experiments, real network data was collected from a smart home testbed, where a range of cyber attacks from four main attack types were targeted towards the devices. Lastly, the robustness of the IDS was evaluated against Adversarial Machine Learning (AML) attacks. Such attacks target models by generating adversarial samples which exploit the weaknesses of the pre-trained model and consequently bypass the detector. This thesis presents a first approach towards automatically generating adversarial malicious DoS IoT network packets, and the analysis further explores how adversarial training can enhance the robustness of the IDS.
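
    A minimal sketch of the three-layered IDS pipeline described above, assuming scikit-learn; the feature matrix, label sets, and the idea of feeding the device prediction into later stages are illustrative placeholders, not the thesis implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 10))            # placeholder packet/flow features
device = rng.integers(0, 5, 1000)     # layer 1 labels: device type
malicious = rng.integers(0, 2, 1000)  # layer 2 labels: benign (0) / malicious (1)
attack = rng.integers(0, 4, 1000)     # layer 3 labels: attack family

# Layer 1: classify which device generated the traffic
dev_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, device)

# Layer 2: flag malicious packets, using the device prediction as an extra feature
X2 = np.column_stack([X, dev_clf.predict(X)])
mal_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X2, malicious)

# Layer 3: identify the attack type, trained only on traffic labelled malicious
mask = malicious == 1
atk_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X2[mask], attack[mask])
```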

    Secure data sharing and analysis in cloud-based energy management systems

    Analysing data acquired from one or more buildings (through specialist sensors, energy generation capability such as PV panels, or smart meters) via a cloud-based Local Energy Management System (LEMS) is increasingly gaining in popularity. In a LEMS, various smart devices within a building are monitored and/or controlled, either to investigate energy usage trends within the building or to investigate mechanisms to reduce total energy demand. However, connecting externally monitored/controlled smart devices raises security and privacy concerns. We describe the architecture and components of a LEMS and provide a survey of the security and privacy concerns associated with data acquisition and control within a LEMS. Our scenarios specifically focus on the integration of Electric Vehicles (EVs) and Energy Storage Units (ESUs) at the building premises, to identify how EVs/ESUs can be used to store energy and reduce the electricity costs of the building. We review security strategies and identify potential security attacks that could be carried out on such a system, while exploring its vulnerable points. Additionally, we systematically categorize each vulnerability and examine potential attacks exploiting that vulnerability in a LEMS. Finally, we evaluate current countermeasures used against these attacks and suggest possible mitigation strategies.

    A scalable and automated framework for tracking the likely adoption of emerging technologies

    While new technologies are expected to revolutionise and become game-changers in improving the efficiency and practices of our daily lives, it is also critical to investigate and understand the barriers and opportunities faced by their adopters. Such findings can serve as an additional feature in the decision-making process when analysing the risks, costs, and benefits of adopting an emerging technology in a particular setting. Although several studies have attempted to perform such investigations, these approaches adopt a qualitative data collection methodology which is limited in terms of the size of the targeted participant group and is associated with significant manual overhead when transcribing and inferring results. This paper presents a scalable and automated framework for tracking the likely adoption and/or rejection of new technologies across a large landscape of adopters. In particular, a large corpus of social media texts containing references to emerging technologies was compiled, and text mining techniques were applied to extract the sentiments expressed towards technology aspects. In the context of the problem definition herein, we hypothesise that the expression of positive sentiment signals an increased likelihood that a technology user will accept, integrate, and/or use the technology, and that negative sentiment signals an increased likelihood that adopters will reject the emerging technology. To quantitatively test our hypothesis, a ground truth analysis was performed to validate that the sentiment captured by the text mining approach is comparable to the results given by human annotators when asked to label whether such texts positively or negatively impact their outlook towards adopting an emerging technology. The collected annotations demonstrated comparable results to those of the text mining approach, illustrating that automatically extracted sentiment expressed towards technologies is a useful feature in understanding the landscape faced by technology adopters, as well as an important decision-making component when, for example, recognising shifts in user behaviours, new demands, and emerging uncertainties.
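
    A minimal sketch of sentiment scoring over technology-related social media texts, assuming NLTK's VADER analyser; the paper does not specify its lexicon or model, and the example posts and thresholds below are invented for illustration.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

posts = [
    "Smart meters cut our energy bill noticeably",
    "The new IoT hub keeps dropping its connection",
]

for text in posts:
    score = sia.polarity_scores(text)["compound"]   # -1 (negative) .. +1 (positive)
    outlook = ("likely adoption" if score > 0.05
               else "likely rejection" if score < -0.05 else "neutral")
    print(f"{score:+.2f}  {outlook}  {text}")
```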

    Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems

    The proliferation and application of machine learning based Intrusion Detection Systems (IDS) have allowed for more flexibility and efficiency in the automated detection of cyber attacks in Industrial Control Systems (ICS). However, the introduction of such IDSs has also created an additional attack vector: the learning models themselves may be subject to cyber attacks, otherwise referred to as Adversarial Machine Learning (AML). Such attacks may have severe consequences in ICS, as adversaries could potentially bypass the IDS. This could lead to delayed attack detection, which may result in infrastructure damage, financial loss, and even loss of life. This paper explores how adversarial learning can be used to target supervised models by generating adversarial samples using the Jacobian-based Saliency Map Attack (JSMA) and exploring classification behaviours. The analysis also explores how such samples can support the robustness of supervised models through adversarial training. An authentic power system dataset was used to support the experiments presented herein. Overall, the classification performance of two widely used classifiers, Random Forest and J48, decreased by 16 and 20 percentage points respectively when adversarial samples were present. Their performance improved following adversarial training, demonstrating their robustness towards such attacks.
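
    A minimal sketch of the attack-then-harden loop described above, assuming the Adversarial Robustness Toolbox (ART): JSMA samples are crafted against a differentiable surrogate (here a logistic regression, since JSMA needs class gradients), tested on a Random Forest, and then used for adversarial training. The data, surrogate choice, and parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import SaliencyMapMethod

rng = np.random.default_rng(0)
X = rng.random((500, 20)).astype(np.float32)   # placeholder power-system features in [0, 1]
y = rng.integers(0, 2, 500)                    # 0 = normal, 1 = attack

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# JSMA requires class gradients, so the attack targets a differentiable surrogate
surrogate = SklearnClassifier(LogisticRegression(max_iter=1000).fit(X, y), clip_values=(0, 1))
jsma = SaliencyMapMethod(surrogate, theta=0.1, gamma=0.1)
X_adv = jsma.generate(X[y == 1])               # perturb attack samples to evade detection

print("accuracy on adversarial samples:", forest.score(X_adv, y[y == 1]))

# Adversarial training: append the crafted samples (still labelled as attacks) and refit
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, np.ones(len(X_adv), dtype=int)])
hardened = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
print("hardened accuracy:", hardened.score(X_adv, y[y == 1]))
```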

    Comparing hierarchical approaches to enhance supervised emotive text classification

    The performance of emotive text classification using affective hierarchical schemes (e.g. WordNet-Affect) is often evaluated using the same traditional measures used when a finite set of isolated classes is assumed. However, applying such measures means that the full characteristics and structure of the emotive hierarchical scheme are not considered. Thus, the overall performance of emotive text classification using emotion hierarchical schemes is often inaccurately reported and may lead to ineffective information retrieval and decision making. This paper provides a comparative investigation into how methods used for hierarchical classification problems in other domains, which extend traditional evaluation metrics to consider the characteristics of the hierarchical classification scheme, can be applied to and subsequently improve the classification of emotive texts. This study investigates the classification performance of three widely used classifiers, Naive Bayes, J48 Decision Tree, and SVM, following the application of the aforementioned methods. The results demonstrated that all methods improved the emotion classification. However, the most notable improvement was recorded when a depth-based method was applied to both the testing and validation data, where the precision, recall, and F1-score were significantly improved by around 70 percentage points for each classifier.
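
    A minimal sketch of the kind of hierarchy-aware precision/recall the paper compares: each predicted and true label is expanded with its ancestors in the emotion hierarchy before set overlap is measured, so near-misses in the same branch earn partial credit. The toy hierarchy below is illustrative, not WordNet-Affect or the paper's exact scheme.

```python
TOY_HIERARCHY = {            # child -> parent
    "joy": "positive-emotion", "love": "positive-emotion",
    "anger": "negative-emotion", "fear": "negative-emotion",
    "positive-emotion": "emotion", "negative-emotion": "emotion",
}

def ancestors(label):
    """Return the label together with all of its ancestors in the hierarchy."""
    out = {label}
    while label in TOY_HIERARCHY:
        label = TOY_HIERARCHY[label]
        out.add(label)
    return out

def hierarchical_prf(true_labels, pred_labels):
    tp = fp = fn = 0
    for t, p in zip(true_labels, pred_labels):
        t_set, p_set = ancestors(t), ancestors(p)
        tp += len(t_set & p_set)
        fp += len(p_set - t_set)
        fn += len(t_set - p_set)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Predicting "love" for a true "joy" still scores via the shared ancestors
print(hierarchical_prf(["joy", "anger"], ["love", "anger"]))
```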

    Enhancing Enterprise Network Security: Comparing Machine-Level and Process-Level Analysis for Dynamic Malware Detection

    Analysing malware is important for understanding how malicious software works and for developing appropriate detection and prevention methods. Dynamic analysis can overcome the evasion techniques commonly used to bypass static analysis and provide insights into malware runtime activities. Much research on dynamic analysis has focused on investigating machine-level information (e.g., CPU, memory, network usage) to identify whether a machine is running malicious activities. However, a malicious machine does not necessarily mean that all processes running on it are also malicious. If the malicious process can be isolated rather than the whole machine, that process can be killed while the machine keeps doing its job. Another challenge in dynamic malware detection research is that samples are typically executed on a machine without any background applications running; this is unrealistic, as a computer usually runs many benign (background) applications when a malware incident happens. Our experiment with machine-level data shows that the presence of background applications decreases the accuracy of the previous state of the art by about 20.12% on average. We also propose a process-level Recurrent Neural Network (RNN)-based detection model. Our proposed model performs better than the machine-level detection model, with a 0.049 increase in detection rate and a false-positive rate below 0.1. Dataset link: https://github.com/bazz-066/cerberus-trac
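
    A minimal sketch of a process-level RNN detector of the kind described above, assuming PyTorch: each sample is a sequence of per-process resource-usage snapshots, and an LSTM scores that individual process as benign or malicious. The feature sizes, architecture, and random data are illustrative, not the paper's model or dataset.

```python
import torch
import torch.nn as nn

class ProcessRNN(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1]))   # probability the process is malicious

model = ProcessRNN()
snapshots = torch.rand(16, 30, 8)                # 16 processes, 30 time steps, 8 features each
scores = model(snapshots)
print((scores > 0.5).squeeze(1))                 # per-process verdicts, not a per-machine one
```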

    Hardening machine learning Denial of Service (DoS) defences against adversarial attacks in IoT smart home networks

    Machine learning based Intrusion Detection Systems (IDS) allow for flexible and efficient automated detection of cyberattacks in Internet of Things (IoT) networks. However, this has also created an additional attack vector; the machine learning models which support the IDS's decisions may themselves be subject to cyberattacks known as Adversarial Machine Learning (AML). In the context of IoT, AML can be used to manipulate the data and network traffic that traverse such devices. These perturbations increase the confusion in the decision boundaries of the machine learning classifier, where malicious network packets are often misclassified as benign. Consequently, such packets bypass machine learning based detectors, which increases the potential for significantly delayed attack detection and further consequences such as personal information leakage, damaged hardware, and financial loss. Given the impact that these attacks may have, this paper proposes a rule-based approach towards generating AML attack samples and explores how they can be used to target a range of supervised machine learning classifiers used for detecting Denial of Service (DoS) attacks in an IoT smart home network. The analysis explores which DoS packet features to perturb and how such adversarial samples can support increasing the robustness of supervised models using adversarial training. The results demonstrated that the performance of all the top performing classifiers was affected, decreasing by a maximum of 47.2 percentage points when adversarial samples were present. Their performance improved following adversarial training, demonstrating their robustness towards such attacks.
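
    A minimal sketch of rule-based perturbation of DoS flow features, assuming a tabular flow representation; the column names, bounds, and perturbation rules below are illustrative assumptions, not the feature set or rules used in the paper.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
flows = pd.DataFrame({
    "pkt_rate": rng.uniform(500, 2000, 100),        # packets/s typical of a flooding attack
    "pkt_size": rng.uniform(60, 120, 100),          # bytes
    "inter_arrival": rng.uniform(0.0005, 0.002, 100),
})

def perturb(df, rate_scale=0.5, size_jitter=40.0):
    """Nudge DoS flows towards benign-looking statistics while keeping fields valid."""
    adv = df.copy()
    adv["pkt_rate"] = np.clip(adv["pkt_rate"] * rate_scale, 1, None)    # slow the flood
    adv["pkt_size"] = np.clip(adv["pkt_size"] + size_jitter, 60, 1500)  # pad packets
    adv["inter_arrival"] = 1.0 / adv["pkt_rate"]                        # keep fields consistent
    return adv

adv_flows = perturb(flows)
# adv_flows can be replayed against the trained classifiers and, labelled as attacks,
# appended to the training set for adversarial training.
print(adv_flows.describe().loc[["mean"]])
```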