8,639 research outputs found

    Analysis on NSAW Reminder Based on Big Data Technology

    Get PDF
    NSAWS is an intelligent, real-time large-database management system. By analyzing the user identity data and access rights contained in the collected information, the NSAWS identifies potential risks and issues an alarm notice in a timely manner. This paper studies how to strengthen the prevention of network attacks in a big data (BD) environment. It first introduces the Internet technologies commonly used in the authors' country and their application status; it then describes the architecture, deployment mode and operation mode of a large database system built on basic security facilities such as a cloud computing platform and a firewall. Finally, the traditional NSAWS is analyzed and the simulation platform is tested. The results show that the platform alerts on network security hazards with high accuracy and stability and can effectively protect network security.
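    The abstract does not give implementation details, but the core idea it describes, checking collected identity and access-rights data and raising an alarm when an access falls outside what was granted, can be illustrated with a minimal sketch. This is not the paper's system; the record fields ("user", "resource", "action") and the permission table are assumptions for illustration.

    ```python
    # Hedged sketch: flag accesses that exceed a user's granted rights and
    # raise an alarm record, in the spirit of the warning system described.
    GRANTED = {
        ("alice", "payroll-db"): {"read"},
        ("bob", "payroll-db"): {"read", "write"},
    }

    def check_access(event):
        """Return an alarm record if the event exceeds the user's granted rights."""
        allowed = GRANTED.get((event["user"], event["resource"]), set())
        if event["action"] not in allowed:
            return {"alert": "unauthorized-access", **event}
        return None

    events = [
        {"user": "alice", "resource": "payroll-db", "action": "write"},
        {"user": "bob", "resource": "payroll-db", "action": "read"},
    ]

    for e in events:
        alarm = check_access(e)
        if alarm:
            print("ALARM:", alarm)
    ```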

    Tiresias: Predicting Security Events Through Deep Learning

    Full text link
    With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack. However, this is still an open research problem, and previous research on predicting malicious events only looked at binary outcomes (e.g., whether an attack would happen or not), not at the specific steps that an attacker would undertake. To fill this gap we present Tiresias, a system that leverages Recurrent Neural Networks (RNNs) to predict future events on a machine based on previous observations. We test Tiresias on a dataset of 3.4 billion security events collected from a commercial intrusion prevention system, and show that our approach is effective in predicting the next event that will occur on a machine with a precision of up to 0.93. We also show that the models learned by Tiresias are reasonably stable over time, and provide a mechanism that can identify sudden drops in precision and trigger a retraining of the system. Finally, we show that the long-term memory typical of RNNs is key to performing event prediction, rendering simpler methods not up to the task.
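    The abstract describes RNN-based next-event prediction: given the sequence of security events observed on a machine, predict the identifier of the event that will occur next. A minimal sketch of that setup with an LSTM is shown below; it is not the authors' code, and the vocabulary size, dimensions, and toy training data are assumptions for illustration.

    ```python
    # Hedged sketch: next-event prediction with an LSTM over event IDs.
    import torch
    import torch.nn as nn

    VOCAB = 50          # number of distinct security-event types (assumed)
    EMBED, HIDDEN = 32, 64

    class NextEventLSTM(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, EMBED)
            self.lstm = nn.LSTM(EMBED, HIDDEN, batch_first=True)
            self.out = nn.Linear(HIDDEN, VOCAB)

        def forward(self, seq):                 # seq: (batch, time) of event IDs
            h, _ = self.lstm(self.embed(seq))
            return self.out(h[:, -1])           # logits for the next event

    model = NextEventLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Toy data: predict the event that followed a window of 10 observed events.
    history = torch.randint(0, VOCAB, (8, 10))     # 8 sequences, window of 10
    target = torch.randint(0, VOCAB, (8,))         # the event that came next

    for _ in range(5):                             # a few illustrative steps
        opt.zero_grad()
        loss = loss_fn(model(history), target)
        loss.backward()
        opt.step()

    pred = model(history).argmax(dim=-1)           # predicted next event per machine
    ```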

    Weight Semi Hidden Markov Model and Driving Situation Classification for Driver Behavior Diagnostic

    Get PDF
    In this study, we propose to use statistical modelling to analyze, model, and categorize driving activity. To achieve this objective, we develop a new statistical model by adding a weight feature to the classic Semi Hidden Markov Model (SHMM) framework. To assess its capacity, we conduct an experiment in which we record 718 driving sequences categorized into 36 situations. We then use our model to identify the driver's aim and the driving situation he is in. Furthermore, we adapt the ascending hierarchical classification technique to this model, which allows us to understand which situations are close to one another and to define partitions of the whole set of driving situations. Finally, on these sequences, our modelling choice allows us to predict the driver's situation with, on average, an 85% success rate. These results show the effectiveness of HMMs in managing temporal and multidimensional data for modelling and predicting drivers' behavior.
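    The classification step the abstract describes, picking the driving situation whose model best explains an observed sequence, can be sketched with a plain discrete HMM scored by the forward algorithm. This is not the paper's weighted semi-Markov variant, and the situation names, parameters, and observation symbols below are invented for illustration.

    ```python
    # Hedged sketch: classify a driving sequence by the HMM with the highest
    # forward log-likelihood (scaled forward algorithm, discrete observations).
    import numpy as np

    def forward_loglik(obs, pi, A, B):
        """Scaled forward algorithm: log P(obs | model) for a discrete HMM."""
        alpha = pi * B[:, obs[0]]
        logp = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            logp += np.log(alpha.sum())
            alpha /= alpha.sum()
        return logp

    # Two hypothetical driving situations, each with its own HMM parameters
    # (initial distribution pi, transitions A, emission probabilities B).
    situations = {
        "lane-change": (np.array([0.6, 0.4]),
                        np.array([[0.7, 0.3], [0.4, 0.6]]),
                        np.array([[0.8, 0.1, 0.1], [0.2, 0.5, 0.3]])),
        "overtaking":  (np.array([0.3, 0.7]),
                        np.array([[0.5, 0.5], [0.2, 0.8]]),
                        np.array([[0.1, 0.6, 0.3], [0.3, 0.3, 0.4]])),
    }

    obs = [0, 1, 1, 2, 1]   # discretised sensor readings (assumed symbols)
    best = max(situations, key=lambda s: forward_loglik(obs, *situations[s]))
    print("most likely situation:", best)
    ```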

    Reinforcement learning for efficient network penetration testing

    Get PDF
    Penetration testing (also known as pentesting or PT) is a common practice for actively assessing the defenses of a computer network by planning and executing all possible attacks to discover and exploit existing vulnerabilities. Current penetration testing methods are increasingly non-standard, composite and resource-consuming, despite the use of evolving tools. In this paper, we propose and evaluate an AI-based pentesting system which makes use of machine learning techniques, namely reinforcement learning (RL), to learn and reproduce average and complex pentesting activities. The proposed system, named the Intelligent Automated Penetration Testing System (IAPTS), consists of a module that integrates with industrial PT frameworks to enable them to capture information, learn from experience, and reproduce tests in future similar testing cases. IAPTS aims to save human resources while producing much-enhanced results in terms of time consumption, reliability and frequency of testing. IAPTS takes the approach of modeling PT environments and tasks as a partially observable Markov decision process (POMDP) problem, which is solved by a POMDP solver. Although the scope of this paper is limited to PT planning for network infrastructures and not the entire practice, the obtained results support the hypothesis that RL can enhance PT beyond the capabilities of any human PT expert in terms of time consumed, covered attack vectors, and the accuracy and reliability of the outputs. In addition, this work tackles the complex problem of expertise capturing and re-use by allowing the IAPTS learning module to store and re-use PT policies in the same way that a human PT expert would learn, but in a more efficient way.
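    The abstract frames pentest planning as a sequential decision problem solved with RL. As a simplified illustration of that framing (the paper itself uses a POMDP and a dedicated POMDP solver, not the tabular, fully observable reduction below), the sketch learns an attack policy over a toy network; states, actions, and rewards are invented.

    ```python
    # Hedged sketch: tabular Q-learning on a tiny, fully observable abstraction
    # of a pentest, where states are footholds gained and actions are attack steps.
    import random

    STATES = ["start", "dmz_shell", "internal_shell", "domain_admin"]
    ACTIONS = ["scan", "exploit_web", "pivot", "dump_creds"]

    # Toy transition model: (state, action) -> (next_state, reward)
    MODEL = {
        ("start", "scan"): ("start", -1),
        ("start", "exploit_web"): ("dmz_shell", 10),
        ("dmz_shell", "pivot"): ("internal_shell", 20),
        ("internal_shell", "dump_creds"): ("domain_admin", 100),
    }

    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    alpha, gamma, eps = 0.5, 0.9, 0.2

    for _ in range(2000):
        s = "start"
        for _ in range(10):                         # episode of bounded length
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda a: Q[(s, a)]))
            s2, r = MODEL.get((s, a), (s, -1))      # unlisted actions just cost time
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2
            if s == "domain_admin":
                break

    policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
    print(policy)   # learned attack step per foothold
    ```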

    Survey of Attack Projection, Prediction, and Forecasting in Cyber Security

    Get PDF
    This paper provides a survey of prediction and forecasting methods used in cyber security. Four main tasks are discussed: attack projection and intention recognition, in which there is a need to predict the next move or the intentions of the attacker; intrusion prediction, in which there is a need to predict upcoming cyber attacks; and network security situation forecasting, in which we project the cybersecurity situation across the whole network. Methods and approaches for addressing these tasks often share a theoretical background and are often complementary. In this survey, both methods based on discrete models, such as attack graphs, Bayesian networks, and Markov models, and methods based on continuous models, such as time series and grey models, are surveyed, compared, and contrasted. We further discuss machine learning and data mining approaches, which have gained a lot of attention recently and appear promising for such a constantly changing environment as cyber security. The survey also focuses on the practical usability of the methods and on problems related to their evaluation.
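    Among the discrete models the survey covers, a first-order Markov chain over attack stages is one of the simplest attack-projection techniques: estimate transition probabilities from observed alert sequences, then project the attacker's most likely next step. The sketch below illustrates only that general idea; the stage names and sequences are invented, not data from the survey.

    ```python
    # Hedged sketch: Markov-chain attack projection fit by counting transitions.
    from collections import Counter, defaultdict

    sequences = [
        ["recon", "exploit", "privilege-escalation", "exfiltration"],
        ["recon", "exploit", "lateral-movement", "exfiltration"],
        ["recon", "brute-force", "exploit", "lateral-movement"],
    ]

    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1

    def predict_next(stage):
        """Most likely next attack stage and its estimated probability."""
        total = sum(counts[stage].values())
        nxt, c = counts[stage].most_common(1)[0]
        return nxt, c / total

    print(predict_next("exploit"))   # e.g. ('lateral-movement', 0.67)
    ```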

    Context Exploitation in Data Fusion

    Get PDF
    Complex and dynamic environments constitute a challenge for existing tracking algorithms. For this reason, modern solutions try to utilize any available information that could help to constrain, improve or explain the measurements. So-called Context Information (CI) is understood as information that surrounds an element of interest, knowledge of which may help in understanding the (estimated) situation and in reacting to that situation. However, context discovery and exploitation are still largely unexplored research topics. Until now, context has been extensively exploited as a parameter in system and measurement models, which has led to the development of numerous approaches for linear or non-linear constrained estimation and target tracking. More specifically, spatial or static context is the most common source of ambient information, i.e. features, utilized for recursive enhancement of the state variables in either the prediction or the measurement update of the filters. In the case of multiple-model estimators, context can be related not only to the state but also to a certain mode of the filter. Common practice for multiple-model scenarios is to represent states and context as a joint distribution of Gaussian mixtures. These approaches are commonly referred to as joint tracking and classification. Alternatively, the usefulness of context has also been demonstrated in aiding measurement data association. The process of formulating a hypothesis that assigns a particular measurement to a track is traditionally governed by empirical knowledge of the noise characteristics of the sensors and the operating environment, i.e. the probability of detection, false alarms and clutter noise, which can be further enhanced by conditioning on context. We believe that interactions between the environment and the object can be classified into actions, activities and intents, and formed into structured graphs with contextual links translated into arcs. By learning the environment model we will be able to make predictions about the target's future actions based on its past observations. The probability of a target's future action could be utilized in the fusion process to adjust the tracker's confidence in measurements. By incorporating contextual knowledge of the environment, in the form of a likelihood function, into the filter measurement update step, we have been able to reduce uncertainties of the tracking solution and improve the consistency of the track. The promising results demonstrate that the fusion of CI brings a significant performance improvement in comparison to regular tracking approaches.
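    The abstract's central mechanism, folding a contextual likelihood into the filter measurement update, can be illustrated with a small sketch: candidate measurements are scored by the filter's innovation likelihood multiplied by a context likelihood (here, closeness to a known road) before a standard Kalman update. This is not the authors' tracker; the models, the road-at-y=0 context, and all numbers are assumptions for illustration.

    ```python
    # Hedged sketch: context-aided measurement association plus a Kalman update.
    import numpy as np

    H = np.eye(2)                        # we measure position directly
    R = np.eye(2) * 0.5                  # measurement noise
    x = np.array([0.0, 0.0])             # predicted state (x, y position)
    P = np.eye(2) * 1.0                  # predicted covariance

    def gaussian_like(z, mean, cov):
        d = z - mean
        return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / np.sqrt(
            (2 * np.pi) ** len(z) * np.linalg.det(cov))

    def context_like(z, road_y=0.0, sigma=0.3):
        """Contextual likelihood: targets are expected to stay near the road."""
        return np.exp(-0.5 * ((z[1] - road_y) / sigma) ** 2)

    candidates = [np.array([1.0, 0.1]), np.array([0.9, 2.5])]   # two hypotheses

    S = H @ P @ H.T + R                                         # innovation covariance
    scores = [gaussian_like(z, H @ x, S) * context_like(z) for z in candidates]
    z = candidates[int(np.argmax(scores))]                      # context-aided association

    K = P @ H.T @ np.linalg.inv(S)                              # Kalman gain
    x = x + K @ (z - H @ x)                                     # measurement update
    P = (np.eye(2) - K @ H) @ P
    print("updated state:", x)
    ```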