417 research outputs found

    A new unified intrusion anomaly detection in identifying unseen web attacks

    The global use of sophisticated web-based application systems is growing rapidly, and much of it involves storing and transporting sensitive data over the Internet. This growth has created a pressing need for stronger network and application security protection. Security experts normally equip their databases with large numbers of signatures to detect known web-based threats. In reality, it is almost impossible to keep the database updated with newly identified web vulnerabilities, so new attacks remain invisible to signature-based detection. This research presents a novel Intrusion Detection System (IDS) approach, Unified Intrusion Anomaly Detection (UIAD), for detecting unknown attacks on web servers. The unified approach consists of three components: preprocessing, statistical analysis, and classification. The process starts with the removal of irrelevant and redundant features using a novel hybrid feature selection method, and then applies a statistical approach to identify traffic abnormality. We combine the Relative Percentage Ratio (RPR) with Euclidean Distance Analysis (EDA) and the Chebyshev Inequality Theorem (CIT) to calculate a normality score and derive a suitable threshold. Finally, LogitBoost (LB) is employed with Random Forest (RF) as its weak learner, with the aim of minimising the final false alarm rate. Experiments demonstrate that our approach identifies unknown attacks with a detection rate above 95% and a false alarm rate below 1% on both the DARPA 1999 and ISCX 2012 datasets.
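
    To make the statistical step concrete, the following minimal Python sketch shows how a Euclidean-distance normality score can be thresholded using Chebyshev's inequality. It is illustrative only, not the UIAD implementation: the reference profile, the alpha value, and the synthetic feature vectors are assumptions.

```python
# Illustrative sketch of a Chebyshev-bounded anomaly threshold on
# Euclidean-distance scores (not the UIAD authors' code).
import numpy as np

def euclidean_scores(X, reference):
    """Distance of each sample from a reference (e.g. mean "normal") profile."""
    return np.linalg.norm(X - reference, axis=1)

def chebyshev_threshold(scores, alpha=0.01):
    """Chebyshev's inequality: P(|S - mu| >= k*sigma) <= 1/k^2.
    Choosing k = sqrt(1/alpha) bounds the expected false-alarm rate by alpha."""
    mu, sigma = scores.mean(), scores.std()
    k = np.sqrt(1.0 / alpha)
    return mu + k * sigma

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 10))        # synthetic "normal" traffic features
reference = X_train.mean(axis=0)
threshold = chebyshev_threshold(euclidean_scores(X_train, reference))

X_new = rng.normal(loc=4.0, size=(5, 10))    # shifted, attack-like samples
print(euclidean_scores(X_new, reference) > threshold)
```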

    Improving the Generalisability of Brain Computer Interface Applications via Machine Learning and Search-Based Heuristics

    Brain Computer Interfaces (BCI) are a domain of hardware/software in which a user can interact with a machine without the need for motor activity, communicating instead via signals generated by the nervous system. These interfaces provide life-altering benefits to users, and refinement will both allow their application to a much wider variety of disabilities and increase their practicality. The primary method of acquiring these signals is electroencephalography (EEG). This technique is susceptible to a variety of sources of noise, which compounds the inherent problems in BCI training data: large dimensionality, low numbers of samples, and non-stationarity between users and recording sessions. Feature Selection and Transfer Learning have been used to overcome these problems, but they fail to account for several characteristics of BCI. This thesis extends both approaches through the use of search-based algorithms.

    Feature Selection techniques known as Wrappers use 'black box' evaluation of feature subsets, leading to higher classification accuracies than ranking methods known as Filters. However, Wrappers are more computationally expensive and prone to over-fitting to training data. In this thesis, we applied Iterated Local Search (ILS) to the BCI field for the first time in the literature, and demonstrated results competitive with state-of-the-art methods such as the Least Absolute Shrinkage and Selection Operator and Genetic Algorithms. We then developed ILS variants with guided perturbation operators. Linkage was used to develop a multivariate metric, Intrasolution Linkage, which takes into account pair-wise dependencies of features with the label, in the context of the solution. Intrasolution Linkage was then integrated into two ILS variants. The Intrasolution Linkage Score was found to have a stronger correlation with a solution's predictive accuracy on unseen data than Cross-Validation Error (CVE) on the training set, the typical approach to feature subset evaluation.

    Mutual Information was used to create Minimum Redundancy Maximum Relevance Iterated Local Search (MRMR-ILS). In this algorithm, the perturbation operator is guided by an existing Mutual Information measure, and the method was compared with current Filter and Wrapper methods. It achieved generally lower CVE and higher predictive accuracy on unseen data than existing algorithms. We also noted that solutions found by MRMR-ILS yielded CVE rates more strongly correlated with accuracy on unseen data than solutions found by other algorithms; we suggest that this may be due to the guided perturbation leading to solutions that are richer in Mutual Information.

    Feature Selection reduces computational demands and can increase the accuracy of the desired models, as evidenced in this thesis. However, limited quantities of training samples restrict these models and greatly reduce their generalisability. For this reason, utilising data from a wide range of users is an ideal solution, but the differences in neural structure between users make creating adequate models difficult. We adopted an existing state-of-the-art ensemble technique, Ensemble Learning Generic Information (ELGI), and developed an initial optimisation phase in which search is used to transplant instances between user subsets, increasing the generalisability of each subset before combination in the ELGI. We termed this Evolved Ensemble Learning Generic Information (eELGI). The eELGI achieved higher accuracy than user-specific BCI models across all eight users. Optimisation of the training dataset allowed smaller training sets to be used, offered protection against neural drift, and created models that performed similarly across participants, regardless of neural impairment.

    Through the introduction and hybridisation of search-based algorithms for several problems in BCI, we have shown improvements in modelling accuracy and efficiency. Ultimately, this represents a step towards more practical BCI systems that will provide life-altering benefits to users.
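
    As a concrete illustration of the wrapper approach described above, the following minimal Python sketch implements a plain Iterated Local Search over feature subsets, using cross-validation accuracy as the black-box evaluator. It is a simplified, assumption-laden sketch (synthetic data, a k-NN classifier, an unguided random perturbation), not the thesis's ILS variants or the Intrasolution Linkage / MRMR guidance.

```python
# Illustrative wrapper-style Iterated Local Search for feature selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def cv_accuracy(mask, X, y):
    """Black-box evaluation of a feature subset via 5-fold CV accuracy."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def local_search(mask, X, y):
    """Greedy bit-flip hill climbing until no single flip improves the score."""
    best = cv_accuracy(mask, X, y)
    improved = True
    while improved:
        improved = False
        for j in range(len(mask)):
            mask[j] = not mask[j]                 # try flipping one feature
            score = cv_accuracy(mask, X, y)
            if score > best:
                best, improved = score, True      # keep the flip
            else:
                mask[j] = not mask[j]             # undo the flip
    return mask, best

def iterated_local_search(X, y, iterations=10, perturb=3, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape[1]) < 0.5
    mask, best = local_search(mask, X, y)
    for _ in range(iterations):
        candidate = mask.copy()
        flip = rng.choice(len(mask), size=perturb, replace=False)
        candidate[flip] = ~candidate[flip]        # random (unguided) perturbation
        candidate, score = local_search(candidate, X, y)
        if score > best:
            mask, best = candidate, score
    return mask, best

X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)
mask, acc = iterated_local_search(X, y)
print(mask.sum(), "features selected, CV accuracy %.3f" % acc)
```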

    An Intelligent Radiomic Approach for Lung Cancer Screening

    Funding: This project is supported by the Ministerio de Ciencia e Innovación (MCI), Agencia Estatal de Investigación (AEI) and Fondo Europeo de Desarrollo Regional (FEDER), RTI2018-095209-B-C21 (MCI/AEI/FEDER, UE), Generalitat de Catalunya, 2017-SGR-1624 and the CERCA Programme, with support from the Barcelona Respiratory Network (BRN), Acadèmia de Ciències Mèdiques de Catalunya i Balears, and Fundació Ramon Pla i Armengol. Debora Gil is supported by a Serra Hunter Fellowship.
    The efficiency of lung cancer screening for reducing mortality is hindered by the high rate of false positives. Artificial intelligence applied to radiomics could help discard benign cases early in the analysis of CT scans. The amount of available data, together with the fact that benign cases are a minority, constitutes a main challenge for the successful use of state-of-the-art methods (such as deep learning), which can be biased, over-fitted, and lack clinical reproducibility. We present a hybrid approach combining the potential of radiomic features to characterize nodules in CT scans with the generalization ability of feed-forward networks. To obtain maximal reproducibility with minimal training data, we propose an embedding of nodules based on the statistical significance of radiomic features for malignancy detection. This representation space of lesions is the input to a feed-forward network, whose architecture and hyperparameters are optimized using our own metrics of the diagnostic power of the whole system. On an independent set of patients, the best model achieves 100% sensitivity and 83% specificity (AUC = 0.94) for malignancy detection.
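
    A minimal sketch of the overall idea, under stated assumptions, is given below: keep only the features whose distributions differ significantly between benign and malignant cases, then train a small feed-forward network on that reduced representation. The Mann-Whitney U test, scikit-learn's MLPClassifier, and the synthetic data are illustrative stand-ins, not the paper's radiomic pipeline or its optimisation of architecture and hyperparameters.

```python
# Illustrative significance-based feature embedding + feed-forward classifier.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for tabular radiomic features and malignancy labels.
X, y = make_classification(n_samples=300, n_features=50, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Keep only features whose distributions differ significantly between classes.
pvals = np.array([mannwhitneyu(X_tr[y_tr == 0, j], X_tr[y_tr == 1, j]).pvalue
                  for j in range(X_tr.shape[1])])
selected = pvals < 0.05

# Small feed-forward network on the reduced representation.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_tr[:, selected], y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te[:, selected])[:, 1]))
```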

    One-Shot Learning of Ensembles of Temporal Logic Formulas for Anomaly Detection in Cyber-Physical Systems

    Cyber-Physical Systems (CPS) are prevalent in critical infrastructures and a prime target for cyber-attacks. Multivariate time series data generated by sensors and actuators of a CPS can be monitored for detecting cyber-attacks that introduce anomalies in those data. We use Signal Temporal Logic (STL) formulas to tightly describe the normal behavior of a CPS, identifying data instances that do not satisfy the formulas as anomalies. We learn an ensemble of STL formulas based on observed data, without any specific knowledge of the CPS being monitored. We propose an algorithm based on Grammar-Guided Genetic Programming (G3P) that learns the ensemble automatically in a single evolutionary run. We test the effectiveness of our data-driven proposal on two real-world datasets, finding that the proposed one-shot algorithm provides good detection performance
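
    To make the idea of STL-based monitoring concrete, here is a minimal Python sketch that checks hand-written STL-style properties over a multivariate trace using quantitative (robustness) semantics, flagging traces with negative robustness as anomalous. The signal names, bounds, and formulas are illustrative assumptions; the paper learns such formulas automatically with G3P rather than writing them by hand.

```python
# Illustrative STL-style monitoring of a multivariate CPS trace.
import numpy as np

def always_below(signal, bound):
    """Robustness of G(signal <= bound): minimum slack over the trace (negative => violated)."""
    return np.min(bound - signal)

def eventually_above(signal, bound):
    """Robustness of F(signal >= bound): maximum slack over the trace."""
    return np.max(signal - bound)

def anomalous(window):
    # Conjunction of two toy formulas: pressure always under 3.0 AND flow eventually over 0.5.
    robustness = min(always_below(window["pressure"], 3.0),
                     eventually_above(window["flow"], 0.5))
    return robustness < 0.0

rng = np.random.default_rng(1)
normal = {"pressure": rng.normal(2.0, 0.2, 100), "flow": rng.normal(1.0, 0.1, 100)}
attack = {"pressure": rng.normal(3.5, 0.2, 100), "flow": rng.normal(1.0, 0.1, 100)}
print(anomalous(normal), anomalous(attack))   # the attack trace violates the formulas
```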

    Average-case analysis via incompressibility


    A combined data mining approach using rough set theory and case-based reasoning in medical datasets

    Case-based reasoning (CBR) solves new cases by retrieving the most relevant ones from an existing knowledge base. Since irrelevant or redundant features markedly increase both memory requirements and the time complexity of case retrieval, reducing the number of dimensions is worth considering. This paper uses rough set theory (RST) to reduce the number of dimensions in a CBR classifier, with the aim of increasing accuracy and efficiency. The CBR component uses a co-occurrence-based distance for categorical data to measure the similarity of cases; this distance is based on the proportional distribution of the different categorical values of features, and the weight used for a feature is the average of its co-occurrence values. The combination of RST and CBR is applied to the real categorical datasets Wisconsin Breast Cancer, Lymphography, and Primary cancer. Five-fold cross-validation is used to evaluate the performance of the proposed approach. The results show that this combined approach lowers computational costs and improves performance metrics, including accuracy and interpretability, compared with other approaches in the literature.
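
    The co-occurrence-based similarity can be illustrated with a small Python sketch: estimate, for every categorical value, how it is distributed over the class labels, and define the distance between two cases as the accumulated difference of those distributions (a value-difference-style metric) used for nearest-neighbour retrieval. This is an assumption-laden simplification for illustration, not the paper's exact weighting scheme or its rough-set reduction step.

```python
# Illustrative value-difference-style distance for categorical CBR retrieval.
import numpy as np
from collections import Counter

def value_class_distributions(X, y):
    """P(class | feature value) for every feature, estimated from the case base."""
    classes = sorted(set(y))
    dists = []
    for j in range(X.shape[1]):
        table = {}
        for value in set(X[:, j]):
            rows = y[X[:, j] == value]
            counts = Counter(rows)
            table[value] = np.array([counts.get(c, 0) / len(rows) for c in classes])
        dists.append(table)
    return dists

def case_distance(a, b, dists):
    """Sum over features of |P(class | a_j) - P(class | b_j)|."""
    return sum(np.abs(dists[j][a[j]] - dists[j][b[j]]).sum() for j in range(len(a)))

def retrieve(query, X, y, dists, k=3):
    d = np.array([case_distance(query, X[i], dists) for i in range(len(X))])
    nearest = d.argsort()[:k]
    return Counter(y[nearest]).most_common(1)[0][0]   # majority label of k nearest cases

# Toy case base with two categorical features.
X = np.array([["red", "small"], ["red", "big"], ["blue", "small"], ["blue", "big"]])
y = np.array(["benign", "benign", "malignant", "malignant"])
dists = value_class_distributions(X, y)
print(retrieve(np.array(["blue", "small"]), X, y, dists))
```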

    Impossibility Results in AI: A Survey

    An impossibility theorem demonstrates that a particular problem, or set of problems, cannot be solved as described in the claim. Such theorems put limits on what artificial intelligence, especially superintelligent AI, can be expected to do. As such, these results serve as guidelines, reminders, and warnings for AI safety, AI policy, and governance researchers, and may enable progress on some long-standing questions by formalizing theories within a constraint-satisfaction framework without committing to any single option. In this paper, we categorize impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability. We find that certain theorems are too specific, or rest on implicit assumptions, which limits their application. We also add a new result (theorem) on the unfairness of explainability, the first explainability-related result in the induction category. We conclude that deductive impossibilities rule out 100% guarantees of security. Finally, we outline ideas with potential in explainability, controllability, value alignment, ethics, and group decision-making that merit further investigation.

    Michelson Interferometry with the Keck I Telescope

    We report the first use of Michelson interferometry on the Keck I telescope for diffraction-limited imaging in the near-infrared JHK and L bands. By using an aperture mask located close to the f/25 secondary, the 10 m Keck primary mirror was transformed into a separate-element, multiple-aperture interferometer. This has allowed diffraction-limited imaging of a large number of bright astrophysical targets, including the geometrically complex dust envelopes around a number of evolved stars. The successful restoration of these images, with dynamic ranges in excess of 200:1, highlights the significant capabilities of sparse-aperture imaging compared with more conventional filled-pupil speckle imaging for the class of bright targets considered here. In particular, the enhancement of the signal-to-noise ratio of the Fourier data, brought about by the reduction in atmospheric noise, allows high-fidelity imaging of complex sources from far fewer short-exposure images than speckle imaging requires. Multi-epoch measurements confirm the reliability of this imaging technique, and our full dataset provides a powerful demonstration of the capabilities of aperture-masking methods when used with the current generation of large-aperture telescopes. The relationship between these new results and recent advances in interferometry and adaptive optics is briefly discussed. Comment: Accepted into Publications of the Astronomical Society of the Pacific; to appear in vol. 112. Paper contains 10 pages and 8 figures.

    Botnets and how to automatically detect them: exploring new ways of dealing with botnet classification

    Threats such as botnets have become widespread on today's Internet, enabling attacks such as distributed denial of service (DDoS) that can significantly impact the use of technology. One way to mitigate these issues is to use intelligent models that attempt to identify botnet activity in network traffic early. This work therefore evaluates the current state of the art on botnet-related threats and on how intelligent techniques have been applied under real-world constraints such as real-time deadlines and growing network traffic. Our findings indicate that real-time botnet detection remains a significant challenge, because computing power has not grown at the same rate as Internet traffic. They also point to further constraints that must be considered, such as privacy legislation and the use of cryptography for all communications. In this context, we discuss the next steps for dealing with the identified issues.