14 research outputs found

    Cybersecurity: Time Series Predictive Modeling of Vulnerabilities of Desktop Operating System Using Linear and Non-Linear Approach

    Get PDF
    Vulnerability forecasting models help us predict the number of vulnerabilities that may occur in the future for a given Operating System (OS). The few existing models quantify future vulnerabilities without considering the trend, level, seasonality, and non-linear components of the vulnerability data. Unlike these traditional models, we propose a vulnerability analytic prediction model based on both linear and non-linear approaches via time series analysis. We have developed models based on Autoregressive Integrated Moving Average (ARIMA), Artificial Neural Network (ANN), and Support Vector Machine (SVM) settings; the model that yields the minimum error rate is selected for predicting future vulnerabilities. Using this time series approach, the study develops a predictive analytic model for three popular desktop operating systems, namely Windows 7, Mac OS X, and the Linux kernel, using their vulnerabilities as reported in the National Vulnerability Database (NVD). Based on these reported vulnerabilities, we predict their future behavior so that OS vendors can make strategic and operational decisions such as secure OS deployment, backup provisioning, disaster recovery, diversity planning, and maintenance scheduling. The model also helps in assessing current security risks, estimating the resources needed to handle potential security breaches, and foreseeing future releases of security patches. The proposed non-linear analytic models produce very good prediction results in comparison to the linear time series models.
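
    A minimal sketch of the paper's model-comparison step, assuming monthly vulnerability counts mined from the NVD: fit a linear ARIMA model and a non-linear SVM regressor on the same series and keep whichever yields the lower test error. The synthetic series, ARIMA order, window width, and error metric are illustrative assumptions, and the paper's ANN variant is omitted for brevity.

```python
# Sketch: select between a linear (ARIMA) and non-linear (SVR) forecaster
# by out-of-sample error, in the spirit of the paper. All numbers are
# illustrative stand-ins, not the authors' configuration or NVD data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Stand-in for 10 years of monthly vulnerability counts (level + seasonality + noise)
counts = 20 + 5 * np.sin(np.arange(120) * 2 * np.pi / 12) + rng.poisson(4, 120)
train, test = counts[:108].astype(float), counts[108:].astype(float)

# Linear model: ARIMA with an arbitrarily chosen order
arima_pred = ARIMA(train, order=(2, 1, 2)).fit().forecast(steps=len(test))

# Non-linear model: SVR over a sliding window of the last 12 observations
def windows(series, w=12):
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    return X, series[w:]

X_train, y_train = windows(train)
svr = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)

# Iterated one-step-ahead SVR forecast over the test horizon
history = list(train)
svr_pred = []
for _ in range(len(test)):
    nxt = svr.predict(np.array(history[-12:]).reshape(1, -1))[0]
    svr_pred.append(nxt)
    history.append(nxt)

# Keep whichever model minimizes the test error, as the paper does
for name, pred in [("ARIMA", arima_pred), ("SVR", np.array(svr_pred))]:
    print(name, "MSE:", mean_squared_error(test, pred))
```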

    A New View on Classification of Software Vulnerability Mitigation Methods

    Get PDF
    Software vulnerability mitigation is a well-known research area, and many methods have been proposed for it. Some papers classify these methods from different specific points of view. In this paper, we aggregate all proposed classifications and present a comprehensive classification of vulnerability mitigation methods. We define a software vulnerability as a kind of software fault and map the classes of software vulnerability mitigation methods accordingly. The methods are classified into vulnerability prevention, vulnerability tolerance, vulnerability removal, and vulnerability forecasting. We define each vulnerability mitigation method from this new point of view and indicate representative methods for each class; this general point of view allows the review to consider all of the proposed methods. We also identify fault mitigation methods that might be effective in mitigating software vulnerabilities but have not yet been applied in this area. Based on that, new directions are suggested for future research.
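
    A minimal sketch of the paper's four-class taxonomy expressed as a data structure; the representative methods listed under each class are common techniques assumed here for illustration, not necessarily the paper's own examples.

```python
# Sketch: the four mitigation classes as an enum, with assumed example methods.
from enum import Enum

class MitigationClass(Enum):
    PREVENTION = "vulnerability prevention"    # avoid introducing flaws
    TOLERANCE = "vulnerability tolerance"      # operate safely despite flaws
    REMOVAL = "vulnerability removal"          # find and fix existing flaws
    FORECASTING = "vulnerability forecasting"  # predict future flaws

EXAMPLES = {  # illustrative methods, not the paper's definitive mapping
    MitigationClass.PREVENTION: ["secure coding standards", "memory-safe languages"],
    MitigationClass.TOLERANCE: ["sandboxing", "address space layout randomization"],
    MitigationClass.REMOVAL: ["static analysis", "fuzzing", "penetration testing"],
    MitigationClass.FORECASTING: ["vulnerability discovery models"],
}

for cls, methods in EXAMPLES.items():
    print(f"{cls.value}: {', '.join(methods)}")
```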

    Quantifying the security risk of discovering and exploiting software vulnerabilities

    Get PDF
    Most attacks on computer systems and networks are enabled by vulnerabilities in software, so assessing the security risk associated with those vulnerabilities is important. Risk models such as the Common Vulnerability Scoring System (CVSS), the Open Web Application Security Project (OWASP) model, and the Common Weakness Scoring System (CWSS) have been used to qualitatively assess the security risk presented by a vulnerability. CVSS metrics are the de facto standard, and they need to be independently evaluated. In this dissertation, we propose a quantitative approach that uses actual data, mathematical and statistical modeling, data analysis, and measurement. We introduce a novel vulnerability discovery model, the Folded model, that estimates the risk of vulnerability discovery based on the number of residual vulnerabilities in a given software product. In addition to estimating the discovery risk of a whole system, the dissertation introduces a novel metric, termed time to vulnerability discovery, to assess the risk of an individual vulnerability being discovered. We also propose a novel vulnerability exploitability risk measure, termed Structural Severity, based on software properties, namely attack entry points, vulnerability location, the presence of dangerous system calls, and reachability analysis. In addition to measurement, the dissertation proposes predicting vulnerability exploitability risk using internal software metrics. We further propose two approaches for evaluating the CVSS Base metrics. Using the availability of exploits, we first evaluate the performance of the CVSS Exploitability factor and compare it to the Microsoft (MS) rating system. The results show that the exploitability metrics of CVSS and MS have a high false-positive rate. This finding motivated further investigation, so we introduce vulnerability reward programs (VRPs) as a novel ground truth for evaluating CVSS Base scores. The results show that the notable lack of exploits for high-severity vulnerabilities may be the result of prioritized fixing of vulnerabilities.
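
    In the spirit of the dissertation's evaluation of the CVSS Exploitability factor against exploit availability, here is a minimal sketch using the published CVSS v2 exploitability subscore formula; the toy vulnerability records and the "high exploitability" cutoff are assumptions, not the dissertation's data or threshold.

```python
# Sketch: CVSS v2 Exploitability = 20 * AccessVector * AccessComplexity * Authentication,
# checked against (hypothetical) exploit-availability labels.
ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}

def exploitability(av, ac, au):
    """CVSS v2 Exploitability subscore (0..10 scale)."""
    return 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]

# (vector, exploit_exists) pairs -- hypothetical records, not real NVD data
records = [(("N", "L", "N"), True), (("N", "M", "S"), False),
           (("L", "H", "N"), False), (("N", "L", "N"), False)]

threshold = 5.0  # assumed "high exploitability" cutoff
flagged = [exploited for v, exploited in records
           if exploitability(*v) >= threshold]
false_positives = flagged.count(False)
print("false positive rate among flagged:", false_positives / len(flagged))
```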

    The global vulnerability discovery and disclosure system: a thematic system dynamics approach

    Get PDF
    Vulnerabilities within software are the fundamental issue that provides both the means and the opportunity for malicious threat actors to compromise critical IT systems (Younis et al., 2016). Consequently, the reduction of vulnerabilities within software should be of paramount importance; however, it is argued that software development practitioners have historically failed to reduce the risks associated with software vulnerabilities. This failure is illustrated by the growth of software vulnerabilities over the past 20 years. This increase, which is both unprecedented and unwelcome, has led to an acknowledgement that novel and radical approaches are needed both to understand the vulnerability discovery and disclosure system (VDDS) and to mitigate software-vulnerability-centred risk (Bradbury, 2015; Marconato et al., 2012). The findings from this research show that whilst technological mitigations are vital, the social and economic features of the VDDS are of critical importance. For example, hitherto unknown systemic themes identified by this research include: Perception of Punishment; Vendor Interactions; Disclosure Stance; Ethical Considerations; Economic Factors for Discovery and Disclosure; and Emergence of New Vulnerability Markets. Each theme uniquely impacts the system and, ultimately, the scale of vulnerability-based risks. Within the research, each theme within the VDDS is represented by several key variables which interact and shape the system, specifically: Vendor Sentiment; Vulnerability Removal Rate; Time to Fix; Market Share; Participants within the VDDS; Full and Coordinated Disclosure Ratio; and Participant Activity. Each variable is quantified and explored, defining both the parameter space and its progression over time. These variables are used within a system dynamics model to simulate differing policy strategies and assess their impact upon the VDDS. Three simulated vulnerability disclosure futures are hypothesised and presented, characterised as depletion, steady, and exponential, with each scenario dependent upon the parameter space of the key variables.
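
    A minimal stock-and-flow sketch of the kind of system dynamics model the thesis describes, assuming a single stock of undisclosed vulnerabilities with a discovery inflow and a removal outflow; the parameter values and their mapping onto the depletion, steady, and exponential scenarios are illustrative assumptions, not the thesis's calibrated model.

```python
# Sketch: Euler integration of one vulnerability stock under three
# assumed parameterisations of discovery and removal.
def simulate(base_inflow, coupling, removal_fraction,
             stock=100.0, years=20, dt=0.25):
    """Integrate the undisclosed-vulnerability stock; returns its trajectory."""
    path = [stock]
    for _ in range(int(years / dt)):
        inflow = base_inflow + coupling * stock  # discovery, partly stock-driven
        outflow = removal_fraction * stock       # vendor fix / removal rate
        stock += (inflow - outflow) * dt
        path.append(stock)
    return path

scenarios = {  # (base_inflow, coupling, removal_fraction) -- assumed values
    "depletion": (5.0, 0.0, 0.30),    # removal outpaces discovery
    "steady": (15.0, 0.0, 0.15),      # inflow and outflow balance
    "exponential": (10.0, 0.20, 0.05) # discovery compounds faster than fixing
}
for name, params in scenarios.items():
    print(name, "final stock:", round(simulate(*params)[-1], 1))
```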

    Network Security Metrics: Estimating the Resilience of Networks Against Zero Day Attacks

    Get PDF
    Computer networks play the role of nervous systems in many critical infrastructures, governmental and military organizations, and enterprises today. Protecting such mission-critical networks means more than just patching known vulnerabilities and deploying firewalls or IDSs: proper metrics are needed to evaluate the security level of networks and to provide security-enhanced solutions. However, without considering unknown zero-day vulnerabilities, security metrics are insufficient to capture the true security level of a network. My Ph.D. work aims to develop a series of novel network security metrics, with a special focus on modeling zero-day attacks and on studying the relationships between software features and vulnerabilities. In the first work, we take the first step toward formally modeling network diversity as a security metric by designing and evaluating a series of diversity metrics. In particular, we first devise a biodiversity-inspired metric based on the effective number of distinct resources. We then propose two complementary diversity metrics, based on the least and the average attacking effort, respectively. In the second topic, we lift the attack surface concept, which captures the intrinsic properties of software applications, to the network level as a security metric for evaluating the resilience of networks against potential zero-day attacks. First, we develop models for aggregating the attack surface among different resources inside a network. Second, we design heuristic algorithms to avoid the costly calculation of attack surface. Predicting and studying software vulnerabilities helps administrators improve security deployment for their organizations and choose the right applications among those with similar functionality, and helps software vendors estimate the security level of their applications. In the third topic, we therefore perform a large-scale empirical study on datasets from GitHub and different versions of Chrome to study the relationship between software features and the number of vulnerabilities. Based on machine learning techniques, this study quantitatively demonstrates the importance of features in the vulnerability discovery process; those features could serve as inputs for future network-level security metrics.
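
    A minimal sketch of the biodiversity-inspired metric mentioned above, assuming the "effective number of distinct resources" is computed as a true diversity of order 1, i.e. the exponential of the Shannon entropy of the resource distribution; the host inventory below is a made-up example, not data from the dissertation.

```python
# Sketch: effective number of distinct resources as exp(Shannon entropy).
import math
from collections import Counter

def effective_resource_number(resources):
    """exp(H), where H is the Shannon entropy of the resource distribution."""
    counts = Counter(resources)
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return math.exp(h)

# Ten hosts running a handful of web-server packages (hypothetical network)
network = ["apache"] * 6 + ["nginx"] * 3 + ["iis"] * 1
print(effective_resource_number(network))          # ~2.45 effective resources
print(effective_resource_number(["apache"] * 10))  # 1.0: a monoculture
```

    The metric behaves like species diversity in ecology: a monoculture scores 1 regardless of host count, while an evenly mixed deployment scores close to the raw number of distinct packages.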

    Vulnerability Scrying Method for Software Vulnerability Discovery Prediction without a Vulnerability Database

    No full text
    Predicting software vulnerability discovery trends can help improve the secure deployment of software applications and facilitate backup provisioning, disaster recovery, diversity planning, and maintenance scheduling. Vulnerability discovery models (VDMs) have been studied in the literature as a means of capturing the underlying stochastic process, and a few vulnerability prediction schemes have been proposed based on them. Unfortunately, all of these schemes suffer from the same weaknesses: they require a large amount of historical vulnerability data from a database (hence they are not applicable to a newly released software application), their precision depends on the amount of training data, and they have a significant amount of error in their estimates. In this work, we propose vulnerability scrying, a new paradigm for vulnerability discovery prediction based on code properties. Using compiler-based static analysis of a codebase, we extract code properties such as code complexity (cyclomatic complexity) and, more importantly, code quality (compliance with secure coding rules) from the source code of a software application. We then propose a stochastic model which uses these code properties as its parameters to predict vulnerability discovery. We have studied the impact of code properties on vulnerability discovery trends by performing static analysis on the source code of four real-world software applications, and have used our scheme to predict vulnerability discovery in three other software applications. The results show that even though we use no historical data in our prediction, vulnerability scrying can predict vulnerability discovery with better precision and less divergence over time.
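
    A minimal sketch of the scrying idea: parameterize a discovery-trend model with statically measured code properties instead of historical vulnerability data. The saturating functional form and every constant below are illustrative stand-ins for the paper's actual stochastic model.

```python
# Sketch: expected cumulative discoveries driven by code metrics, not history.
import math

def expected_discoveries(t, avg_cyclomatic, violation_density, kloc,
                         a=0.002, b=0.5):
    """Expected cumulative discoveries by month t: a saturating curve whose
    scale grows with complexity, secure-coding-rule violations, and size.
    The form and the constants a, b are assumed for illustration."""
    n_latent = a * kloc * avg_cyclomatic * (1 + violation_density)
    return n_latent * (1 - math.exp(-b * t))

# Two hypothetical 200-KLOC codebases: one complex and non-compliant, one clean
for name, cc, viol in [("messy", 14.0, 2.5), ("clean", 4.0, 0.2)]:
    print(name, [round(expected_discoveries(t, cc, viol, kloc=200), 1)
                 for t in (6, 12, 24)])
```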

    Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming in the Wild

    Full text link
    Engaging in the deliberate generation of abnormal outputs from large language models (LLMs) by attacking them is a novel human activity. This paper presents a thorough exposition of how and why people perform such attacks. Using a formal qualitative methodology, we interviewed dozens of practitioners from a broad range of backgrounds, all contributors to this novel work of attempting to cause LLMs to fail. We connect the practitioners' motivations and goals, the strategies and techniques they deploy, and the crucial role the community plays. As a result, this paper presents a grounded theory of how and why people attack large language models: LLM red teaming in the wild.

    Sensory Deprivation as an Experimental Model of Psychosis

    Get PDF
    The development of novel experimental models of schizophrenia and psychosis is critical to developing a better understanding of these complex and poorly understood disorders, and existing approaches such as animal and drug models have major limitations. An alternative approach to modelling psychosis is proposed, built upon the premise of continuum theory and focusing on ‘high-risk’, hallucination-prone individuals within the healthy population. A systematic review considered existing non-pharmacological approaches for inducing psychosis-like experiences (PLEs) in such individuals. The thesis then addressed how one such method, short-term sensory deprivation, can successfully induce transient PLEs in this population. The Revised Hallucinations Scale (RHS; Morrison et al., 2002) was found to accurately predict the individuals most likely to experience PLEs in sensory deprivation. Individual differences that may contribute to reports of PLEs were explored: the most powerful predictor of PLEs in sensory deprivation was verified to be hallucination proneness, while additional personality traits such as fantasy proneness and suggestibility were not implicated. A revised four-factor structure for the RHS was also developed using Exploratory Structural Equation Modelling (ESEM). This model showed improved fit over the original, non-replicable factor structure; the ESEM approach is arguably more appropriate than traditional factor analysis for modelling data with high inter-factor correlations. Quantitative electroencephalogram (EEG) data were collected to establish whether this approach could provide a robust neurophysiological correlate of psychosis-like experiences. Initial pilot data suggested that hallucination-prone individuals may be characterised by reduced levels of theta, alpha, and beta activity, alongside elevated levels of cortical hyper-excitability. These findings support weakened-inhibitory-processing theories of psychosis. Overall, sensory deprivation was found to have the potential to contribute significantly to our understanding of psychosis, and could be utilised effectively on a stand-alone basis or as an adjunct to existing animal and drug models.

    Reality Hackers: The Next Wave of Media Revolutionaries

    Get PDF
    Just as the printing press gave rise to the nation-state, emerging technologies are reshaping collective identities and challenging our understanding of what it means to be human. Should citizens have the right to be truly anonymous online? Should we be concerned that so many people are choosing to migrate to virtual worlds? Are injectable microscopic radio-frequency ID chips a blessing or a curse? Is the use of cognition-enhancing nootropics a human right or an unforgivable transgression? Should genomic data about human beings be hidden away behind commercial patents or open-sourced like software? Should hobbyists known as biohackers be allowed to experiment with genetic engineering in their home laboratories? The time-frame for acting on such questions is relatively short, and these decisions are too important to be left to a small handful of scientists and policymakers. If democracy is to continue as a viable alternative to technocracy, the average citizen must become more involved in these debates. To borrow a line from the computer visionary Ted Nelson, all of us can -- and must -- understand technology now. Challenging the popular stereotype of hackers as criminal sociopaths, reality hackers uphold the basic tenets of what Steven Levy (1984) terms the hacker ethic. These core principles include a commitment to sharing, openness, decentralization, public access to information, and the use of new technologies to make the world a better place.