15 research outputs found

    Just-in-Time Adaptive Algorithm for Optimal Parameter Setting in 802.15.4 WSNs

    Recent studies have shown that the IEEE 802.15.4 MAC protocol suffers from severe limitations, in terms of reliability and energy efficiency, when the CSMA/CA parameter setting is not appropriate. However, selecting the optimal setting that guarantees the application reliability requirements with minimum energy consumption is not a trivial task in wireless sensor networks, especially when the operating conditions change over time. In this paper we propose a Just-in-Time LEarning-based Adaptive Parameter tuning (JIT-LEAP) algorithm that adapts the CSMA/CA parameter setting to the time-varying operating conditions, also exploiting past history to find the most appropriate setting for the current conditions. Following the approach of active adaptive algorithms, the adaptation mechanism of JIT-LEAP is triggered by a change detection test only when needed (i.e., in response to a change in the operating conditions). Simulation results show that the proposed algorithm outperforms other similar algorithms in both stationary and dynamic scenarios.
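    A minimal sketch of the kind of change-triggered tuning loop the abstract describes may help fix ideas. The CUSUM detector, the delivery-ratio metric, the history lookup and the helper callbacks below are illustrative assumptions, not the authors' implementation.

```python
class CusumDetector:
    """One-sided CUSUM test on a stream of reliability samples (e.g., delivery ratio)."""
    def __init__(self, drift=0.01, threshold=0.5):
        self.drift, self.threshold = drift, threshold
        self.cusum, self.mean, self.n = 0.0, 0.0, 0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n                  # running mean of the metric
        self.cusum = max(0.0, self.cusum + (self.mean - x) - self.drift)
        if self.cusum > self.threshold:                        # sustained degradation detected
            self.cusum, self.mean, self.n = 0.0, 0.0, 0
            return True
        return False

def adaptive_tuning_loop(history, setting, observe, search_best_setting):
    """history maps an observed operating condition to the best setting found for it;
    observe(setting) returns (delivery_ratio, current_condition) -- both hypothetical helpers."""
    detector = CusumDetector()
    while True:
        ratio, condition = observe(setting)
        if detector.update(ratio):                             # adapt only when a change is detected
            setting = history.get(condition) or search_best_setting(condition)
            history[condition] = setting
```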

    A Hierarchical Temporal Memory Sequence Classifier for Streaming Data

    Real-world data streams often contain concept drift and noise. Additionally, it is often the case that, due to their very nature, these real-world data streams also include temporal dependencies between data. Classifying data streams with one or more of these characteristics is exceptionally challenging. Classification of data within data streams is currently the primary focus of research efforts in many fields (e.g., intrusion detection, data mining, machine learning). Hierarchical Temporal Memory (HTM) is a type of sequence memory that exhibits some of the predictive and anomaly detection properties of the neocortex. HTM algorithms are trained through exposure to a stream of sensory data and are thus suited for continuous online learning. This research developed an HTM sequence classifier aimed at classifying streaming data that contain concept drift, noise, and temporal dependencies. The HTM sequence classifier was fed both artificial and real-world data streams and evaluated using the prequential evaluation method. Cost measures for accuracy, CPU time, and RAM usage were calculated for each data stream and compared against a variety of modern classifiers (e.g., Accuracy Weighted Ensemble, Adaptive Random Forest, Dynamic Weighted Majority, Leverage Bagging, Online Boosting ensemble, and Very Fast Decision Tree). The HTM sequence classifier performed well when the data streams contained concept drift, noise, and temporal dependencies, but was not the most suitable of the compared classifiers when the data streams did not include temporal dependencies. Finally, this research explored the suitability of the HTM sequence classifier for detecting stalling code within evasive malware. The results were promising, showing that the HTM sequence classifier is capable of predicting the coding sequences of an executable file by learning the sequence patterns of the x86 EFLAGS register. The HTM classifier plotted these predictions in a cardiogram-like graph for quick analysis by malware reverse engineers. This research highlights the potential of HTM technology for application in online classification problems and the detection of evasive malware.
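    Since the evaluation relies on the prequential method, a short sketch of that protocol (interleaved test-then-train) may be useful. The classifier interface (predict/learn) and the fading factor below are assumptions for illustration, not the study's exact setup.

```python
def prequential_accuracy(stream, classifier, fading_factor=0.999):
    """Prequential (interleaved test-then-train) evaluation of a streaming classifier:
    each instance is used for testing first, then for training.
    `stream` yields (features, label) pairs; returns the fading-factor accuracy."""
    correct = seen = 0.0
    for x, y in stream:
        # 1) test on the instance before the model has seen its label ...
        correct = fading_factor * correct + (1.0 if classifier.predict(x) == y else 0.0)
        seen = fading_factor * seen + 1.0
        # 2) ... then use the labelled instance for incremental training
        classifier.learn(x, y)
    return correct / seen if seen else 0.0
```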

    Using High-Order Prior Belief Predictions in Hierarchical Temporal Memory for Streaming Anomaly Detection

    Autonomous streaming anomaly detection can have a significant impact in any domain where continuous, real-time data is common. Often in these domains, datasets are too large or complex to hand-label. Algorithms that require expensive global training procedures and large training datasets impose strict demands on data and are accordingly not fit to scale to real-time applications that are noisy and dynamic. Unsupervised algorithms that learn continuously, as humans do, therefore boast increased applicability to these real-world scenarios. Hierarchical Temporal Memory (HTM) is a biologically constrained theory of machine intelligence inspired by the structure, activity, organization and interaction of pyramidal neurons in the neocortex of the primate brain. At the core of HTM are spatio-temporal learning algorithms that store, learn, recall and predict temporal sequences in an unsupervised and continuous fashion to meet the demands of real-time tasks. Unlike traditional machine learning and deep learning, which amount to complex function approximation, HTM with the proposed surrounding framework requires no offline training procedures, no massive stores of training data, and no data labels; it does not catastrophically forget previously learned information, and it needs to make only one pass through the temporal data. Proposed in this thesis is an algorithmic framework built upon HTM for intelligent streaming anomaly detection. Unseen in earlier streaming anomaly detection work, the proposed framework uses high-order prior belief predictions in time in an effort to increase the fault tolerance and complex temporal anomaly detection capabilities of the underlying time-series model. Experimental results suggest that the framework, when built upon HTM, redefines state-of-the-art performance on a popular streaming anomaly benchmark. Comparative results with and without the framework on several third-party datasets collected from real-world scenarios also show a clear performance benefit. In principle, the proposed framework can be applied to any time-series modeling algorithm capable of producing high-order predictions.
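    One way to read "high-order prior belief predictions" is that the input at each step is scored against predictions issued several steps earlier. The sketch below is a hedged illustration of that idea under assumed set-based encodings and a simple overlap score; it is not the thesis's exact algorithm.

```python
def streaming_anomaly_scores(inputs, predict_k_steps, k=3):
    """`inputs` yields the set of active units at each step;
    `predict_k_steps(active, k)` is an assumed model hook returning k predicted sets,
    one per future step. Yields a score in [0, 1]: 0 = fully predicted, 1 = fully novel."""
    pending = []                                   # (due_step, predicted_set) pairs
    for t, active in enumerate(inputs):
        # union of all prior beliefs that targeted this step
        predicted = set().union(*[p for due, p in pending if due == t])
        overlap = len(active & predicted) / max(len(active), 1)
        yield 1.0 - overlap
        # drop stale beliefs and enqueue fresh ones for steps t+1 .. t+k
        pending = [(due, p) for due, p in pending if due > t]
        pending += [(t + i, p) for i, p in enumerate(predict_k_steps(active, k), start=1)]
```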

    Corporate information risk : an information security governance framework

    Information Security is currently viewed from a technical point of view only. Some authors believe that Information Security is a process that involves more than merely Risk Management at the department level, as it is also a strategic and potentially legal issue. Hence, there is a need to elevate the importance of Information Security to a governance level through Information Security Governance and to propose a framework to help guide the Board of Directors in their Information Security Governance efforts. IT is a major facilitator of organizational business processes, and these processes manipulate and transmit sensitive customer and financial information. IT, which involves major risks, may threaten the security of corporate information assets. Therefore, IT requires attention at board level to ensure that technology-related information risks are within an organization’s accepted risk appetite. However, IT issues are a neglected topic at board level, and this could bring about Enron-like disasters. Therefore, there is a need for the Board of Directors to direct and control IT-related risks effectively to reduce the potential for Information Security breaches and bring about a stronger system of internal control. The IT Oversight Committee is a proven means of achieving this, and this study further motivates the necessity for such a committee to solidify an organization’s Information Security posture among other IT-related issues.

    Change detection for activity recognition.

    Activity recognition is concerned with identifying the physical state of a user at a particular point in time. The activity recognition task requires training a classification algorithm using processed sensor data from a representative population of users. The accuracy of the generated model often drops when classifying new instances, due to non-stationary sensor data and variations in user characteristics. Thus, there is a need to adapt the classification model to new user characteristics. However, the existing approaches to model adaptation in activity recognition are blind: they continuously adapt a classification model at regular intervals without specifically and precisely detecting indicators of the model's degrading performance. This approach can waste the system resources dedicated to continuous adaptation. This thesis addresses the problem of detecting changes in the accuracy of an activity recognition model. The thesis developed a classifier for activity recognition. The classifier uses three statistical summaries, which can be generated from any dataset, for similarity-based classification of new samples. The weighted ensemble combination of the classification decisions from each statistical summary results in better performance than three existing benchmark classification algorithms. The thesis also presents change detection approaches that can detect changes in the accuracy of the underlying recognition model without access to the ground-truth label of each activity being recognised. The first approach, called 'UDetect', computes change statistics from a window of classified data and employs a statistical process control method to detect variations between the classified data and the reference data of a class. Evaluation of the approach indicates consistent detection that correlates with the error rate of the model. The second approach is a distance-based change detection technique that relies on the developed statistical summaries to compare newly classified samples and detect any drift from the original class of the activity. It uses a distance function and a threshold parameter to detect changes in the accuracy of the classifier as it classifies new instances. Evaluation of the approach yields above 90% detection accuracy. Finally, a layered framework for activity recognition is proposed to make model adaptation in activity recognition informed, using the techniques developed in this thesis.
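    As a rough illustration of the statistical-process-control idea behind UDetect, the sketch below learns 3-sigma control limits for a per-window statistic from reference data and flags a change when newly classified, unlabelled data fall outside those limits. The choice of statistic (mean distance to a class centroid) and the 3-sigma limits are assumptions for the example, not the thesis's exact parameters.

```python
import numpy as np

def window_statistic(window, centroid):
    """Mean Euclidean distance from the samples in a window to the class centroid."""
    return float(np.mean(np.linalg.norm(np.asarray(window) - centroid, axis=1)))

def control_limits(reference_windows, centroid):
    """3-sigma control limits learned from windows of reference data for one class."""
    stats = [window_statistic(w, centroid) for w in reference_windows]
    mu, sigma = np.mean(stats), np.std(stats)
    return mu - 3 * sigma, mu + 3 * sigma

def change_detected(classified_window, centroid, limits):
    """Flag a drift when the statistic of a window of newly classified
    (unlabelled) samples leaves the control region learned for that class."""
    lower, upper = limits
    return not (lower <= window_statistic(classified_window, centroid) <= upper)
```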

    Applying a framework for IT governance in South African higher education institutions

    Background: Higher Education (HE), through HE Institutions, plays a very important role in society. There is thus a need for this sector to be well managed, especially with regard to planning, organising, and controlling. Corporate Governance has received a lot of attention in recent times, especially to engender trust on the part of stakeholders. There are many similarities, but also significant differences, in the governance of HE institutions and public companies. Information Technology (IT) plays an extremely important role in the modern organisation, creating huge opportunities, but also increasing the risk to the organisation. Therefore, effective governance of IT in HE Institutions is of great importance.

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deal with Old Saxon or Old English. This project aims to be a first attempt at creating a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. The evaluation of the algorithm's performance reached 97% accuracy and a 99% weighted average for precision, recall and F1 score. In addition, we tested the model on some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
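    For readers unfamiliar with the architecture, a minimal BiLSTM sequence labeller of the kind described above can be sketched as follows; the vocabulary size, label set, dimensions and the use of PyTorch are placeholder assumptions, not details taken from the thesis.

```python
import torch
import torch.nn as nn

class ScansionBiLSTM(nn.Module):
    """Maps each syllable (or token) of a verse to a metrical label."""
    def __init__(self, vocab_size=2000, num_labels=6, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_labels)       # one label per position

    def forward(self, syllable_ids):                       # (batch, seq_len)
        h, _ = self.lstm(self.embed(syllable_ids))         # (batch, seq_len, 2*hidden)
        return self.out(h)                                  # (batch, seq_len, num_labels)

# Training step on toy data: cross-entropy between predictions and the annotated scansion.
model = ScansionBiLSTM()
verses = torch.randint(1, 2000, (8, 20))                   # 8 verses, 20 syllables each
logits = model(verses)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 6), torch.randint(0, 6, (8 * 20,)))
loss.backward()
```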

    Improving cybercrime reporting in Scotland : a systematic literature review

    Background: The UK system for reporting economic cybercrime is called Action Fraud (AF). AF has been found to prioritise high-value, low-volume crimes. Therefore, people who have been scammed out of less than £100 000 are less likely to have their crime investigated via AF. Consequently, Scotland severed its ties with AF and proceeded to develop its own systems for reporting low-value, high-volume crimes. Another problem with AF was that its reports were inaccurate and incomplete. Interestingly, since the 1930s the compilation and investigation of crime reports has always suffered from inaccuracies and discrepancies. This pattern has not been reversed by rapid technological development. Instead, the trend is preserved, not just in the UK, but across the globe. Aim: An exploration of how to improve cybercrime reporting in Scotland was carried out via a systematic literature review, the results of which will inform upcoming fieldwork. Due to the lack of data on Scotland, frequent extrapolations were made from both the UK and the West. The research questions were: 1. What is known about cybercrime in the UK to date? 2. What is known about cybercrime victims in the UK to date? 3. What is known about cybercrime reporting to date? Method and Analysis: The answers were retrieved by combining Boolean operators with keywords in Scopus, Web of Science and ProQuest. This resulted in the inclusion of 100 peer-reviewed articles (after the exclusion of unsuitable ones). The articles were analysed using inductive thematic analysis (ITA). The underlying principle of ITA is data immersion to identify the themes within. This analysis revealed a common trend, a novel taxonomy, and an original conclusion. Results: The common trend is that of responsibilisation, which is the shifting of responsibility for policing cybercrime from the government onto citizens and the private sector. For example, the government educating citizens about the risks of cybercrime and disengaging with them thereafter is a case of responsibilisation, because the government sees it as the victims’ responsibility to follow its advice. One problem with responsibilisation in cybercrime is that if one person is attacked, then many computers can become infected through that person's error. Therefore, the government should step up to the task of protecting its citizens. The novel taxonomy classifies cybercrime reporting systems according to three pillars, which I refer to as Human-To-Human (H2H), Human-To-Machine (H2M) and Machine-To-Machine (M2M). The advantage of this classification is parsimony; the disadvantage is reductionism. The risk of reductionism applies specifically to crimes that sit between pillars. Conclusion: To improve cybercrime reporting in Scotland, the process needs to be treated as a social one as well, rather than a purely mathematical one. This can be achieved by engaging with psychological principles of how emotionally charged social interactions are encoded into memory. Understanding memory will help the police record cybercrime reports effectively. This research will impact society because it serves as a foundation for fieldwork with victims of cybercrime and the police tasked with those investigations. The results of the upcoming fieldwork will inform national guidance on how to improve the reporting of cybercrime, which will reduce it and give victims living in Scotland a sense of closure.