
    Feature Set Selection for Improved Classification of Static Analysis Alerts

    With the extreme growth in third-party cloud applications, the increased exposure of applications to the internet, and the impact of successful breaches, improving the security of the software being produced is imperative. Static analysis tools can alert developers to quality issues and security vulnerabilities in an application; however, they present developers and analysts with a high rate of false positives and unactionable alerts. This can lead to a loss of confidence in the scanning tools and, possibly, to the tools not being used; discontinued use may in turn increase the likelihood of insecure software being released into production. Insecure software can be successfully attacked, compromising one or more information security principles such as confidentiality, availability, and integrity. Feature selection methods have the potential to improve the classification of static analysis alerts and thereby reduce false positive rates. Thus, the goal of this research effort was to improve the classification of static analysis alerts by proposing and testing a novel method leveraging feature selection. The proposed model was developed and then tested on three open source PHP applications spanning several years, and the results were compared to a classification model utilizing all features to gauge the improvement achieved by feature selection. The presented model improved classification accuracy and reduced the false positive rate on a reduced feature set. This work contributes a real-world static analysis dataset based upon three open source PHP applications and enhances an existing dataset generation framework to include additional predictive software features. Its main contribution, however, is a feature selection methodology that may be used to discover optimal feature sets that increase the classification accuracy of static analysis alerts.
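
    As a rough illustration of the approach described above, the sketch below trains one classifier on all alert features and another on a reduced feature set, then compares accuracy and false positive rate. The dataset file, the mutual-information-based SelectKBest step, and the random forest classifier are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch: compare a full-feature classifier against a reduced feature set.
# Dataset name and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

alerts = pd.read_csv("alerts.csv")            # hypothetical static-analysis alert dataset
X = alerts.drop(columns=["actionable"])       # software/alert features
y = alerts["actionable"]                      # 1 = actionable alert, 0 = false positive

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Baseline: all features.
baseline = RandomForestClassifier(random_state=42).fit(X_train, y_train)
base_pred = baseline.predict(X_test)

# Reduced set: keep the k features with the highest mutual information with the label.
selector = SelectKBest(mutual_info_classif, k=10).fit(X_train, y_train)
reduced = RandomForestClassifier(random_state=42).fit(selector.transform(X_train), y_train)
red_pred = reduced.predict(selector.transform(X_test))

for name, pred in [("all features", base_pred), ("selected features", red_pred)]:
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.3f}, "
          f"false positive rate={fp / (fp + tn):.3f}")
```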

    Assessing the effectiveness of defensive cyber operations

    Enormous amounts of resources are being allocated to defensive cyber programs. The White House's Cybersecurity National Action Plan proposes a 35% increase in federal spending on cybersecurity during Fiscal Year 2017. Without an appropriate understanding of how well the people, processes, defenses, and risk are measured, there will naturally be unproductive tasking, inefficient spending, and ineffective reporting. In 2016, the White House established the Commission on Enhancing National Cybersecurity to assess the state of our nation's cybersecurity posture. The report recognized both the difficulty and the need to develop meaningful metrics for cybersecurity in order to better secure the cyber landscape as it pertains to the broader digital ecosystem and its connections to our economy, government, and defense. The commission focused on both the private sector and the government and suggested the need to perfect policies, practices, and technologies. Additionally, the Marine Corps University recently released research topics addressing some of the most important concerns affecting warfighters. One of these concerns was the lack of a methodology for determining the performance of Defensive Cyber Operations (DCO); specifically, a need to better understand how actions taken by network defenders facilitate network protection. Previous analysis of this topic led to a reactive and unactionable approach tied to negative events such as the quantity and category of incident reports. As there is currently no framework or scorecard built to evaluate DCO as a whole effort, a methodical approach was taken to scope the problem, compare existing frameworks, develop a framework, and present a scorecard. The first phase of research required scoping exactly what is involved in DCO at the most basic level and understanding how the DoD evaluates performance. This resulted in an understanding of the actionability of metrics, the levels of warfare, and the counterbalance of cyber asymmetry. Also identified was the military doctrine for assessments, which frames evaluations in terms of Measures of Effectiveness and Measures of Performance and supports continuous assessments that provide actionable information to decision makers. The second phase required a detailed analysis of existing frameworks that measure related functions of cybersecurity, specifically industry-accepted compliance, incident handling, governance, and risk management frameworks. The outcome identified four functional areas common to most frameworks: people, processes, defenses, and risk. The third phase involved developing a framework that evaluates these four functional areas of DCO, utilizing the most appropriate features of the already established frameworks. A key facet of this evaluation is that assessments should be weighed over time to demonstrate progress but also measured against standards, peers, and the adversary. The final phase identified the continuous reporting criteria and the tangible mechanism for evaluating an organization in terms of a scorecard. The framework is not a static list of measurements; rather, it supports tailoring metrics to the organization's specific requirements. The fundamentals of the framework are organized into elements, levels, categories, ends/ways, and measures, and these metrics should be documented using a standardized rubric that assesses their capability and performance. The results should be reviewed and analyzed to determine trends, areas for improvement or investment, and actionable information to support decision making. Additionally, a modified Delphi analysis with expert consensus validated the major concepts put forward in this paper. Overall, this research provides a comprehensive framework to evaluate the performance of Defensive Cyber Operations in terms of people, processes, defenses, and risk, filling a knowledge gap that is increasingly vital.
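
    To make the scorecard structure concrete, the sketch below models metrics grouped into the four functional areas, tagged as Measures of Effectiveness or Measures of Performance, and rolled up into a weighted area score. The field names, rubric scale, and weights are illustrative assumptions rather than the paper's actual rubric.

```python
# Sketch of a DCO scorecard data structure: metrics per functional area with a
# simple weighted roll-up. Values and weights are placeholders.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    kind: str          # "MOE" (Measure of Effectiveness) or "MOP" (Measure of Performance)
    weight: float      # relative importance within its functional area
    score: float       # rubric score, e.g. 0-5, assessed each reporting period

@dataclass
class FunctionalArea:
    name: str                                    # people, processes, defenses, or risk
    metrics: list[Metric] = field(default_factory=list)

    def weighted_score(self) -> float:
        total_weight = sum(m.weight for m in self.metrics)
        return sum(m.weight * m.score for m in self.metrics) / total_weight

defenses = FunctionalArea("defenses", [
    Metric("mean time to detect", "MOP", weight=2.0, score=3.5),
    Metric("intrusion attempts blocked and acted upon", "MOE", weight=1.0, score=4.0),
])
print(f"{defenses.name}: {defenses.weighted_score():.2f} / 5")
```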

    ML + FV = ♡? A Survey on the Application of Machine Learning to Formal Verification

    Formal Verification (FV) and Machine Learning (ML) can seem incompatible due to their opposite mathematical foundations and their use in real-life problems: FV mostly relies on discrete mathematics and aims at ensuring correctness; ML often relies on probabilistic models and consists of learning patterns from training data. In this paper, we postulate that they are complementary in practice, and explore how ML helps FV in its classical approaches: static analysis, model checking, theorem proving, and SAT solving. We draw a landscape of the current practice and catalog some of the most prominent uses of ML inside FV tools, thus offering a new perspective on FV techniques that can help researchers and practitioners to better locate possible synergies. We discuss lessons learned from our work, point to possible improvements, and offer visions for the future of the domain in light of the science of software and systems modeling. Comment: 13 pages, no figures, 3 tables
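
    One ML-for-SAT synergy that surveys of this kind typically catalog is portfolio-based algorithm selection, where a learned model picks a solver from cheap structural features of the input formula (in the spirit of SATzilla-style systems). The sketch below is a toy illustration with made-up features and training data, not a technique taken from this specific paper.

```python
# Toy sketch of ML-based solver selection for SAT: extract cheap features of a
# CNF formula and let a classifier choose which portfolio solver to run.
from sklearn.ensemble import RandomForestClassifier

def cnf_features(clauses: list[list[int]]) -> list[float]:
    """Cheap structural features of a CNF formula given as lists of literals."""
    n_vars = len({abs(lit) for clause in clauses for lit in clause})
    n_clauses = len(clauses)
    mean_len = sum(len(clause) for clause in clauses) / n_clauses
    return [n_vars, n_clauses, n_clauses / n_vars, mean_len]

# Hypothetical training set: formula features paired with the solver that was fastest.
X_train = [[50, 210, 4.2, 3.0], [1000, 3000, 3.0, 2.1], [80, 700, 8.75, 3.0]]
y_train = ["cdcl_solver", "local_search", "cdcl_solver"]

selector = RandomForestClassifier(random_state=0).fit(X_train, y_train)

new_formula = [[1, -2, 3], [-1, 2], [2, 3, -4]]
print("chosen solver:", selector.predict([cnf_features(new_formula)])[0])
```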

    APHRODITE: an Anomaly-based Architecture for False Positive Reduction

    We present APHRODITE, an architecture designed to reduce false positives in network intrusion detection systems. APHRODITE works by detecting anomalies in the output traffic and correlating them with the alerts raised by the NIDS working on the input traffic. Benchmarks show a substantial reduction of false positives and show that APHRODITE is also effective after a "quick setup", i.e. in the realistic case in which it has not been "trained" and set up optimally.
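
    A minimal sketch of the correlation idea described above: an inbound NIDS alert is kept only if an anomaly is also observed in the outbound traffic of the targeted host shortly afterwards. The data structures, time window, and anomaly threshold are illustrative assumptions, not APHRODITE's actual design.

```python
# Sketch: confirm inbound alerts only when the targeted host also shows an
# outbound-traffic anomaly within a short window. All parameters are placeholders.
from dataclasses import dataclass

@dataclass
class Alert:
    timestamp: float      # seconds since epoch
    dst_host: str         # internal host the inbound traffic targeted

@dataclass
class Anomaly:
    timestamp: float
    src_host: str         # internal host producing anomalous outbound traffic
    score: float          # anomaly score from the output-traffic model

def correlate(alerts: list[Alert], anomalies: list[Anomaly],
              window: float = 60.0, threshold: float = 0.8) -> list[Alert]:
    """Keep only alerts backed by a sufficiently anomalous outbound response."""
    confirmed = []
    for alert in alerts:
        for anomaly in anomalies:
            if (anomaly.src_host == alert.dst_host
                    and 0 <= anomaly.timestamp - alert.timestamp <= window
                    and anomaly.score >= threshold):
                confirmed.append(alert)
                break
    return confirmed

alerts = [Alert(100.0, "10.0.0.5"), Alert(200.0, "10.0.0.7")]
anomalies = [Anomaly(130.0, "10.0.0.5", 0.93)]
print(correlate(alerts, anomalies))   # only the first alert survives
```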

    Mining Fix Patterns for FindBugs Violations

    In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal discrepancies between the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread, and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in Defects4J, a major benchmark for software testing and automated repair. Comment: Accepted for IEEE Transactions on Software Engineering
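
    A rough sketch of the learn-then-cluster pipeline described above: tokenized violation-fixing changes are embedded by a small convolutional encoder, and the embeddings are grouped with k-means so that each cluster is a candidate fix pattern. The tokenization, network shape, and clustering choice are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: CNN feature extraction over tokenized fix diffs, then clustering of the
# resulting embeddings into candidate fix patterns. Inputs are random placeholders.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

VOCAB_SIZE, EMBED_DIM, MAX_TOKENS = 500, 32, 40

class FixEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.conv = nn.Conv1d(EMBED_DIM, 16, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)

    def forward(self, token_ids):                     # (batch, MAX_TOKENS)
        x = self.embed(token_ids).transpose(1, 2)     # (batch, EMBED_DIM, MAX_TOKENS)
        x = torch.relu(self.conv(x))
        return self.pool(x).squeeze(-1)               # (batch, 16)

# Hypothetical tokenized diffs of violation fixes (ids into a token vocabulary).
diff_tokens = torch.randint(0, VOCAB_SIZE, (20, MAX_TOKENS))

encoder = FixEncoder().eval()
with torch.no_grad():
    embeddings = encoder(diff_tokens).numpy()

# Group similar fixes; each cluster is a candidate fix pattern.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)
print("cluster sizes:", np.bincount(labels))
```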

    Situation fencing: making geo-fencing personal and dynamic

    Geo-fencing has recently been applied to multiple applications including media recommendation, advertisements, wildlife monitoring, and recreational activities. However, current geo-fencing systems work with static geographical boundaries. Situation fencing allows these boundaries to vary automatically based on situations derived from a combination of global and personal data streams. We present a generic approach for situation fencing and demonstrate how it can be operationalized in practice. The results obtained in a personalized allergy alert application are encouraging and open the door to building thousands of similar applications using the same framework in the near future.
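
    To illustrate the idea with the allergy scenario above, the toy sketch below computes a fence radius that widens with a global pollen index and the user's personal sensitivity, then alerts when the user enters the resulting dynamic boundary. The coordinates, scaling formula, and thresholds are made-up assumptions, not the paper's model.

```python
# Toy sketch of a situation fence: the boundary around a pollen source varies with
# a global data stream (pollen index) and a personal one (user sensitivity).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def fence_radius_km(base_km: float, pollen_index: float, sensitivity: float) -> float:
    """Dynamic radius: higher pollen levels and higher sensitivity widen the fence."""
    return base_km * (1 + pollen_index) * (1 + sensitivity)

def should_alert(user_pos, source_pos, pollen_index, sensitivity, base_km=1.0) -> bool:
    distance = haversine_km(*user_pos, *source_pos)
    return distance <= fence_radius_km(base_km, pollen_index, sensitivity)

user = (40.7128, -74.0060)          # user location
park = (40.7190, -74.0020)          # hypothetical high-pollen area
print(should_alert(user, park, pollen_index=0.8, sensitivity=0.5))
```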