422 research outputs found

    A Neural Network Approach for Intrusion Detection Systems

    Intrusion detection systems, alongside firewalls and gateways, represent the first line of defense against computer network attacks. Various commercial and open-source intrusion detection systems are on the market; nevertheless, they perform poorly in several situations, such as detecting novel attacks or monitoring user activity, and in some cases generate false positive or negative alerts. This performance is probably due to reliance on purely signature-based checks and a high degree of dependence on human interaction. A neural network approach may be well suited to tackling these issues. Neural networks have already been applied successfully to many problems in pattern recognition, data mining, and data compression, and research into their use in intrusion detection systems is still underway. Unsupervised learning and fast network convergence are features that neural networks can bring to an IDS. Such networks can be designed to process a variety of data, although there are constraints on input formatting. For this reason, data encoding represents a challenging task in the integration process, since it must be optimised for the IDS domain. This paper discusses the integration of IDS and neural networks, including data encoding and performance issues.
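As a concrete illustration of the encoding problem the abstract mentions, the sketch below (illustrative only, not the paper's scheme; the record fields and values are hypothetical) one-hot encodes categorical connection fields and min-max scales numeric ones into a matrix a neural network could consume:

```python
import numpy as np

# Hypothetical connection records: (protocol, service, duration, bytes).
# Categorical fields must be one-hot encoded and numeric fields scaled
# into [0, 1] before a neural network can process them.
RECORDS = [
    ("tcp", "http", 0.2, 1500.0),
    ("udp", "dns", 0.01, 64.0),
    ("tcp", "ftp", 12.0, 98000.0),
]

def encode(records):
    protocols = sorted({r[0] for r in records})
    services = sorted({r[1] for r in records})
    numeric = np.array([[r[2], r[3]] for r in records])
    # Min-max scale each numeric column into [0, 1].
    lo, hi = numeric.min(axis=0), numeric.max(axis=0)
    scaled = (numeric - lo) / np.where(hi > lo, hi - lo, 1.0)
    rows = []
    for proto, svc, *_ in records:
        one_hot = [1.0 if proto == p else 0.0 for p in protocols]
        one_hot += [1.0 if svc == s else 0.0 for s in services]
        rows.append(one_hot)
    return np.hstack([np.array(rows), scaled])

X = encode(RECORDS)  # shape: (3, 2 proto + 3 svc + 2 numeric) = (3, 7)
```

In practice the category vocabularies would be fixed from training data rather than inferred per batch, but the shape of the problem is the same.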

    The Analysis of Network Manager’s Behaviour using a Self-Organising Neural Network

    We present a novel neural network method for the analysis and interpretation of data describing user interaction with a training tool. The method is applied to the interaction between trainee network managers and a simulated network management system. A simulation-based approach to the task of efficiently training network managers, through the use of a simulated network, was originally presented by Pattinson [2000]. The motivation was to provide a tool for exposing trainee network managers to a lifelike situation, in which both normal network operation and ‘fault’ scenarios could be simulated in order to train the network manager. The data logged by this system describes the detailed interaction between the trainee network manager and the simulated network. The work presented here provides an analysis of this interaction data that enables an assessment of the capabilities of the network manager as well as an understanding of how the network management tasks are being approached. A neural network architecture [Lee, Palmer-Brown, Roadknight 2004] is adapted and implemented to perform an exploratory analysis of the interaction data. The architecture employs a novel form of continuous self-organisation to discover key features, and thus provides new insights into the data.
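For readers unfamiliar with self-organisation, a standard 1-D self-organising map is sketched below on toy "interaction log" vectors. This is not the continuous self-organisation of the cited architecture, merely the classical algorithm it builds on; all data and parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=4, epochs=50, lr=0.5):
    """Train a 1-D self-organising map on the row vectors in `data`."""
    w = rng.random((n_units, data.shape[1]))
    for epoch in range(epochs):
        # Neighbourhood radius shrinks over training.
        sigma = max(1.0, n_units / 2 * (1 - epoch / epochs))
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))     # neighbourhood kernel
            w += lr * (1 - epoch / epochs) * h[:, None] * (x - w)
    return w

# Toy "interaction logs": two behavioural clusters in a 3-D feature space.
logs = np.vstack([rng.normal(0.2, 0.05, (20, 3)),
                  rng.normal(0.8, 0.05, (20, 3))])
weights = train_som(logs)
# Map each log entry to its winning unit to see how behaviours group.
clusters = {int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in logs}
```

After training, inspecting which unit each session maps to gives the kind of exploratory grouping of user behaviours the abstract describes.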

    Diagnostic and adaptive redundant robotic planning and control

    Neural networks and fuzzy logic are combined into a hierarchical structure capable of planning, diagnosis, and control for a redundant, nonlinear robotic system in a real-world scenario. Throughout this work, levels of this hierarchy are demonstrated for a redundant robot-and-hand combination as it is commanded to approach, grasp, and successfully manipulate objects for a wheelchair-bound user in a crowded, unpredictable environment. Four levels of hierarchy are developed and demonstrated, from the lowest level upward: diagnostic individual motor control; optimal redundant joint allocation for trajectory planning; grasp planning with tip and slip control; and high-level task planning for multiple arms and manipulated objects. Given the expectations of the user and the constantly changing nature of processes, the robot hierarchy learns from its experiences in order to execute the next related task more efficiently, and allocates this knowledge to the appropriate levels of planning and control. These approaches are then extended to automotive and space applications.

    Analyzing Granger causality in climate data with time series classification methods

    Attribution studies in climate science aim to scientifically ascertain the influence of natural or anthropogenic factors on climatic variations. Many of those studies adopt the concept of Granger causality to infer statistical cause-effect relationships, utilizing traditional autoregressive models. In this article, we investigate the potential of state-of-the-art time series classification techniques to enhance causal inference in climate science. We conduct a comparative experimental study of different types of algorithms on a large test suite that comprises a unique collection of datasets from the area of climate-vegetation dynamics. The results indicate that specialized time series classification methods are able to improve existing inference procedures. Substantial differences are observed among the methods that were tested.
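The traditional autoregressive baseline that such studies build on can be sketched directly: x Granger-causes y if adding lagged x to an autoregression of y reduces the residual error. The sketch below (synthetic data; `granger_rss` is a name invented here) computes the two residual sums of squares an F-test would compare:

```python
import numpy as np

def granger_rss(y, x, p=2):
    """Residual sums of squares for the restricted (y-lags only) and
    unrestricted (y-lags + x-lags) autoregressions of order p -- the
    two quantities a Granger F-test compares."""
    n = len(y)
    rows_r, rows_u, target = [], [], []
    for t in range(p, n):
        y_lags = y[t - p:t][::-1]
        x_lags = x[t - p:t][::-1]
        rows_r.append(np.r_[1.0, y_lags])          # intercept + own lags
        rows_u.append(np.r_[1.0, y_lags, x_lags])  # ... plus x lags
        target.append(y[t])
    target = np.array(target)
    rss = []
    for rows in (rows_r, rows_u):
        A = np.array(rows)
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        resid = target - A @ beta
        rss.append(float(resid @ resid))
    return rss[0], rss[1]

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):  # y is driven by lagged x, so x "Granger-causes" y
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
rss_restricted, rss_unrestricted = granger_rss(y, x)
```

The article's point is that replacing these linear autoregressions with modern time series classifiers can strengthen exactly this kind of inference.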

    Visual saliency computation for image analysis

    Visual saliency computation is about detecting and understanding salient regions and elements in a visual scene. Algorithms for visual saliency computation can give clues to where people will look in images, what objects are visually prominent in a scene, and so on. Such algorithms could be useful in a wide range of applications in computer vision and graphics. In this thesis, we study the following visual saliency computation problems. 1) Eye Fixation Prediction. Eye fixation prediction aims to predict where people look in a visual scene. For this problem, we propose a Boolean Map Saliency (BMS) model which leverages the global surroundedness cue using a Boolean map representation. We draw a theoretic connection between BMS and the Minimum Barrier Distance (MBD) transform to provide insight into our algorithm. Experimental results show that BMS compares favorably with state-of-the-art methods on seven benchmark datasets. 2) Salient Region Detection. Salient region detection entails computing a saliency map that highlights the regions of dominant objects in a scene. We propose a salient region detection method based on the Minimum Barrier Distance (MBD) transform. We present a fast approximate MBD transform algorithm with an error bound analysis. Powered by this fast MBD transform algorithm, our method can run at about 80 FPS and achieve state-of-the-art performance on four benchmark datasets. 3) Salient Object Detection. Salient object detection aims to localize each salient object instance in an image. We propose a method using a Convolutional Neural Network (CNN) model for proposal generation and a novel subset optimization formulation for bounding box filtering. In experiments, our subset optimization formulation consistently outperforms heuristic bounding box filtering baselines, such as Non-maximum Suppression, and our method substantially outperforms previous methods on three challenging datasets. 4) Salient Object Subitizing. We propose a new visual saliency computation task, called Salient Object Subitizing, which is to predict the existence and the number of salient objects in an image using holistic cues. To this end, we present an image dataset of about 14K everyday images that are annotated using an online crowdsourcing marketplace. We show that an end-to-end trained CNN subitizing model can achieve promising performance without requiring any localization process. A method is proposed to further improve the training of the CNN subitizing model by leveraging synthetic images. 5) Top-down Saliency Detection. Unlike the aforementioned tasks, top-down saliency detection entails generating task-specific saliency maps. We propose a weakly supervised top-down saliency detection approach by modeling the top-down attention of a CNN image classifier. We propose Excitation Backprop and the concept of contrastive attention to generate highly discriminative top-down saliency maps. Our top-down saliency detection method achieves superior performance in weakly supervised localization tasks on challenging datasets. The usefulness of our method is further validated in the text-to-region association task, where our method provides state-of-the-art performance using only weakly labeled web images for training.
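The global surroundedness cue behind BMS can be illustrated in miniature: threshold the image at several levels, and in each resulting Boolean map mark the regions that are not reachable from the image border — they are "surrounded" and hence likely salient. This is a toy single-channel sketch, not the full BMS algorithm:

```python
import numpy as np
from collections import deque

def boolean_map_saliency(img, n_thresholds=8):
    """Toy Boolean-map surroundedness: average, over several thresholds,
    the pixels of each Boolean map that are not connected to the border."""
    h, w = img.shape
    saliency = np.zeros((h, w))
    for thr in np.linspace(img.min(), img.max(), n_thresholds + 2)[1:-1]:
        for bmap in (img > thr, img <= thr):
            # BFS flood fill from all border pixels that are "on".
            reach = np.zeros((h, w), dtype=bool)
            q = deque((i, j) for i in range(h) for j in range(w)
                      if (i in (0, h - 1) or j in (0, w - 1)) and bmap[i, j])
            for i, j in q:
                reach[i, j] = True
            while q:
                i, j = q.popleft()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and bmap[ni, nj] and not reach[ni, nj]:
                        reach[ni, nj] = True
                        q.append((ni, nj))
            saliency += bmap & ~reach  # "on" but not border-connected = surrounded
    return saliency / saliency.max() if saliency.max() > 0 else saliency

# A bright blob on a dark background is fully surrounded, hence salient.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
sal = boolean_map_saliency(img)
```

BMS proper operates on multiple color channels with efficient morphological operations; this sketch only conveys why surroundedness separates the blob from the background.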

    Data driven techniques for modal decomposition and reduced-order modelling of fluids

    In this thesis, a number of data-driven techniques are proposed for the analysis and extraction of reduced-order models of fluid flows. Throughout the thesis, emphasis is placed on the practicality and interpretability of data-driven feature-extraction techniques, to aid practitioners in flow control and estimation. The first contribution uses a graph-theoretic approach to analyse the similarity of modes extracted using data-driven modal decomposition algorithms, giving a more intuitive understanding of the degrees of freedom in the underlying system. The method extracts clusters of spatially and spectrally similar modes by post-processing the modes extracted using DMD and its variants. The second contribution proposes a method for extracting coherent structures, using snapshots of high-dimensional measurements, that can be mapped to a low-dimensional output of the system. The importance of finding such coherent structures is that, in the context of active flow control and estimation, the practitioner often has to rely on a limited number of measurable outputs to estimate the state of the flow. Therefore, ensuring that the extracted flow features can be mapped to the measured outputs of the system can be beneficial for estimating the state of the flow. The third contribution concentrates on using neural networks to exploit the nonlinear relationships amongst linearly extracted modal time series in order to find a reduced-order state, which can then be used for modelling the dynamics of the flow. The method utilises recurrent neural networks to find an encoding of a high-dimensional set of modal time series, and fully connected neural networks to find a mapping between the encoded state and the physically interpretable modal coefficients. As a result of this architecture, the significantly reduced-order representation maintains an automatically extracted relationship to a higher-dimensional, interpretable state.
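The modes that the first contribution clusters come from decompositions such as DMD. As background, a minimal exact-DMD sketch (standard algorithm, synthetic snapshot data) recovers the eigenvalues and modes of the best-fit linear operator between consecutive snapshots:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: for snapshot matrices with Y ≈ A X (columns are states
    at consecutive times), return eigenvalues and modes of the rank-r
    best-fit linear operator A."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]   # rank-r truncation
    YVs = Y @ Vh.conj().T / s            # Y V Σ^{-1}
    A_tilde = U.conj().T @ YVs           # operator projected onto POD basis
    eigvals, W = np.linalg.eig(A_tilde)
    modes = YVs @ W                      # exact DMD modes
    return eigvals, modes

# Synthetic data from a known linear system with eigenvalues 0.9 ± 0.2i.
rng = np.random.default_rng(2)
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
states = [rng.normal(size=2)]
for _ in range(50):
    states.append(A_true @ states[-1])
S = np.array(states).T                   # 2 x 51 snapshot matrix
eigvals, modes = dmd(S[:, :-1], S[:, 1:], r=2)
```

The thesis's graph-theoretic clustering and neural-network encodings operate downstream of modes like these; real flow snapshots would have thousands of spatial dimensions rather than two.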

    Tracking the Temporal-Evolution of Supernova Bubbles in Numerical Simulations

    The study of low-dimensional, noisy manifolds embedded in a higher-dimensional space has been extremely useful in many applications, from the chemical analysis of multi-phase flows to simulations of galactic mergers. Building a probabilistic model of the manifolds has helped in describing their essential properties and how they vary in space. However, when the manifold is evolving through time, a joint spatio-temporal modelling is needed in order to fully comprehend its nature. We propose a first-order Markovian process that propagates the spatial probabilistic model of a manifold at a fixed time to its adjacent temporal stages. The proposed methodology is demonstrated using a particle simulation of an interacting dwarf galaxy to describe the evolution of a cavity generated by a Supernova.
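The first-order Markov idea can be shown in a drastically simplified form: fit a spatial model to the particles at one snapshot, carry it forward with inflated uncertainty, and refit on the next snapshot. The single-Gaussian model and process noise below are stand-ins for the paper's richer probabilistic manifold model:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_gaussian(points):
    """Crude spatial model of a particle cloud: mean and covariance."""
    return points.mean(axis=0), np.cov(points.T)

def propagate(mu, cov, q=0.1):
    """First-order Markov prediction: the model at time t is carried to
    t+1 unchanged in the mean, with uncertainty inflated by noise q*I."""
    return mu, cov + q * np.eye(len(mu))

# Toy "bubble": particles drifting outward between two snapshots.
snap_t = rng.normal(0.0, 1.0, size=(500, 2))
snap_t1 = snap_t * 1.5                     # the cavity expands
mu, cov = fit_gaussian(snap_t)
mu_pred, cov_pred = propagate(mu, cov)     # prediction for the next stage
mu_new, cov_new = fit_gaussian(snap_t1)    # refit on the next snapshot
```

The prediction step encodes the Markov assumption that the next stage depends only on the current one; the refit plays the role of the update against new data.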

    Timbral Learning for Musical Robots

    The tradition of building musical robots and automata is thousands of years old. Despite this rich history, even today musical robots do not play with as much nuance and subtlety as human musicians. In particular, most instruments allow the player to manipulate timbre while playing; if a violinist is told to sustain an E, they will select which string to play it on, how much bow pressure and velocity to use, whether to use the entire bow or only the portion near the tip or the frog, how close to the bridge or fingerboard to contact the string, whether or not to use a mute, and so forth. Each one of these choices affects the resulting timbre, and navigating this timbre space is part of the art of playing the instrument. Nonetheless, this type of timbral nuance has been largely ignored in the design of musical robots. Therefore, this dissertation introduces a suite of techniques that deal with timbral nuance in musical robots. Chapter 1 provides the motivating ideas and introduces Kiki, a robot designed by the author to explore timbral nuance. Chapter 2 provides a long history of musical robots, establishing the under-researched nature of timbral nuance. Chapter 3 is a comprehensive treatment of dynamic timbre production in percussion robots and, using Kiki as a case study, provides a variety of techniques for designing striking mechanisms that produce a range of timbres similar to those produced by human players. Chapter 4 introduces a machine-learning algorithm for recognizing timbres, so that a robot can transcribe timbres played by a human during live performance. Chapter 5 introduces a technique that allows a robot to learn how to produce isolated instances of particular timbres by listening to a human play examples of those timbres. Chapter 6, the final chapter, introduces a method that allows a robot to learn the musical context of different timbres; this is done in real time during interactive improvisation between a human and robot, wherein the robot builds a statistical model of which timbres the human plays in which contexts, and uses this to inform its own playing.
    Doctoral Dissertation, Media Arts and Sciences 201
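A statistical model of "which timbres in which contexts" can be as simple as a first-order conditional-probability table estimated from counts. The sketch below is purely illustrative (the timbre labels and the function names are invented, and the dissertation's actual model may differ):

```python
from collections import Counter, defaultdict

def build_context_model(timbre_sequence):
    """Estimate P(next timbre | previous timbre) from observed counts,
    treating the previous timbre as the musical context."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(timbre_sequence, timbre_sequence[1:]):
        counts[prev][nxt] += 1
    return {prev: {t: c / sum(ctr.values()) for t, c in ctr.items()}
            for prev, ctr in counts.items()}

def most_likely_response(model, current_timbre):
    """Pick the timbre the human most often plays after `current_timbre`."""
    dist = model.get(current_timbre, {})
    return max(dist, key=dist.get) if dist else None

# Hypothetical percussion-timbre labels logged during an improvisation.
performance = ["rim", "center", "center", "rim", "center", "muted", "rim", "center"]
model = build_context_model(performance)
```

Updating the counts in real time during improvisation gives the robot a running estimate it can sample from to inform its own playing.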