
    Development of a Response Planner Using the UCT Algorithm for Cyber Defense

    The need for a quick response to cyber attacks is a prevalent problem for computer network operators today. There is only a small window in which to respond to a cyber attack before it causes significant damage to a computer network. Automated response planners offer one solution to this problem. This work presents the Network Defense Planner System (NDPS), a planner whose performance depends on the effectiveness of cyber attack detection. The research first explores making the classification of network attacks fast enough for real-time detection, the basic function an Intrusion Detection System (IDS) provides. After the attack type is identified, learning the rewards used by NDPS is the second major area of this research. For NDPS to assemble the optimal plan, learning the rewards for the resulting network states is critical and often depends on the preferences of the network operator. The second area of this research demonstrates that these preferences can be captured from samples using neural networks; once trained, the neural network provides a model for obtaining reward estimates. The work in these two areas complements the final portion of the research: assembling the optimal plan using the Upper Confidence bounds applied to Trees (UCT) algorithm. NDPS is implemented with UCT, which allows quick plan formulation by searching through the predicted network states reachable via the available network actions. UCT can create a plan quickly and, given enough time, is guaranteed to converge to the optimal plan with respect to the rewards used. NDPS is tested against eight random attack scenarios; for each, the plan is polled at specific time intervals to measure how quickly the optimal plan can be formulated. Results demonstrate the feasibility of using NDPS in real-world scenarios, since the optimal plan for each attack type can be formulated in real time, allowing a rapid system response.
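    The following is a minimal, illustrative sketch of the kind of UCT search the abstract describes: actions are tried from the current network state, successor states are predicted, and estimated rewards are backed up the tree. It is not the NDPS implementation; actions, predict_next_state, and estimate_reward are hypothetical placeholders for the planner's action set, state-prediction model, and learned reward model.

```python
import math
import random

class Node:
    """One node in the UCT search tree over predicted network states."""
    def __init__(self, state, parent=None, action=None):
        self.state = state
        self.parent = parent
        self.action = action          # action that led to this node
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

    def ucb1(self, c=1.4):
        # Unvisited children are explored first.
        if self.visits == 0:
            return float("inf")
        exploit = self.total_reward / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def uct_plan(root_state, actions, predict_next_state, estimate_reward,
             iterations=1000, horizon=10):
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: descend by UCB1 while nodes are fully expanded.
        node, depth = root, 0
        while node.children and len(node.children) == len(actions):
            node = max(node.children, key=lambda n: n.ucb1())
            depth += 1
        # 2. Expansion: add one untried action, predicting the next state.
        if depth < horizon:
            tried = {c.action for c in node.children}
            untried = [a for a in actions if a not in tried]
            if untried:
                a = random.choice(untried)
                node.children.append(Node(predict_next_state(node.state, a), node, a))
                node = node.children[-1]
                depth += 1
        # 3. Rollout: random actions to the horizon, summing estimated rewards.
        state, reward = node.state, 0.0
        for _ in range(horizon - depth):
            a = random.choice(actions)
            state = predict_next_state(state, a)
            reward += estimate_reward(state)
        # 4. Backpropagation: push the rollout reward up to the root.
        while node is not None:
            node.visits += 1
            node.total_reward += reward
            node = node.parent
    # The recommended first action is the most-visited child of the root.
    return max(root.children, key=lambda n: n.visits).action
```

    Because the tree statistics improve monotonically with more iterations, a planner built this way can be polled at any time for its current best action, which matches the anytime behavior the abstract relies on.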

    A Real-time Strategy Agent Framework and Strategy Classifier for Computer Generated Forces

    This research effort is concerned with advancing the artificial intelligence of computer generated forces for Department of Defense (DoD) military training and education. The vision of this work is agents capable of perceiving and intelligently responding to opponent strategies in real time, and our goal is to lay the foundations for such an agent. Six research objectives are defined: 1) formulate a strategy definition schema capable of defining a range of RTS strategies; 2) create eight strategy definitions via the schema; 3) design a real-time agent framework that plays the game according to a given strategy definition; 4) generate an RTS data set; 5) create an accurate and fast-executing strategy classifier; and 6) find the best counter-strategies for each strategy definition. The agent framework is used to play the eight strategies against each other and generate a data set of game observations. To classify the data, we first perform feature reduction using principal component analysis or linear discriminant analysis; two classifier techniques are then employed, k-means clustering with k-nearest neighbor and a support vector machine. The resulting classifier is 94.1% accurate with an average classification execution time of 7.14 µs. Our research effort has successfully laid the foundations for a dynamic strategy agent.
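    As an illustration of the classification pipeline described above, the sketch below uses scikit-learn to reduce features with PCA or LDA and then classify with a nearest-neighbor or SVM model. It substitutes a plain k-nearest-neighbor classifier for the paper's k-means-plus-k-NN combination, and the data arrays are randomly generated placeholders rather than the actual RTS data set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: feature vectors per game observation, 8 strategy labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 30))
y = rng.integers(0, 8, size=4000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Option 1: PCA feature reduction followed by k-nearest neighbor.
knn = make_pipeline(StandardScaler(), PCA(n_components=10),
                    KNeighborsClassifier(n_neighbors=5))
knn.fit(X_train, y_train)
print("kNN accuracy:", knn.score(X_test, y_test))

# Option 2: LDA feature reduction (at most n_classes - 1 components)
# followed by a support vector machine.
svm = make_pipeline(StandardScaler(),
                    LinearDiscriminantAnalysis(n_components=7),
                    SVC(kernel="rbf"))
svm.fit(X_train, y_train)
print("SVM accuracy:", svm.score(X_test, y_test))
```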

    A Real-Time Opponent Modeling System For Rush Football

    One drawback of using plan recognition in adversarial games is that players must often commit to a plan before it is possible to infer the opponent's intentions. In such cases it is valuable to couple plan recognition with plan repair, particularly in multi-agent domains where complete replanning is not computationally feasible. This paper presents a method for learning plan repair policies in real time using Upper Confidence Bounds applied to Trees (UCT). We demonstrate how these policies can be coupled with plan recognition in an American football game (Rush 2008) to create an autonomous offensive team capable of responding to unexpected changes in defensive strategy. Our real-time version of UCT learns play modifications that result in significantly higher average yardage and fewer interceptions than either the baseline game or domain-specific heuristics. Although it is possible to use the actual game simulator to measure reward offline, executing UCT in real time demands a different approach; here we describe two modules that reuse data from offline UCT searches to learn accurate state and reward estimators.
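    A rough sketch of that second idea, reusing offline UCT search data to train a fast reward estimator for online use, might look like the following. The feature encoding, file names, and regressor choice are assumptions for illustration, not the paper's actual modules.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Offline phase (hypothetical log format): each record pairs a state feature
# vector, e.g. player positions, ball position, and play step, with the reward
# UCT observed when it ran the full game simulation.
offline_states = np.load("uct_offline_states.npy")    # shape (N, d), hypothetical file
offline_rewards = np.load("uct_offline_rewards.npy")  # shape (N,), hypothetical file

# Fit a small regressor to map state features to expected reward.
reward_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
reward_model.fit(offline_states, offline_rewards)

def estimate_reward(state_features):
    """Fast online estimate used by real-time UCT instead of the simulator."""
    return float(reward_model.predict(state_features.reshape(1, -1))[0])
```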

    Air Force Institute of Technology Research Report 2011

    This report summarizes the research activities of the Air Force Institute of Technology's Graduate School of Engineering and Management. It describes research interests and faculty expertise; lists student theses and dissertations; identifies research sponsors and contributions; and outlines the procedures for contacting the school. Included in the report are faculty publications, conference presentations, consultations, and funded research projects. Research was conducted in the areas of Aeronautical and Astronautical Engineering, Electrical Engineering and Electro-Optics, Computer Engineering and Computer Science, Systems and Engineering Management, Operational Sciences, Mathematics, Statistics, and Engineering Physics.