
    Formal Analysis of Security Metrics and Risk

    Abstract. Security metrics are usually defined informally and, therefore, rigorous analysis of these metrics is a hard task. This analysis is required to identify the existing relations between security metrics, which all try to quantify the same quality: security. Risk, computed as Annualised Loss Expectancy, is often used to give an overall assessment of security as a whole. Risk and security metrics are usually defined separately, and the relation between these indicators has not been considered thoroughly. In this work we fill this gap by providing a formal definition of risk and a formal analysis of the relations between security metrics and risk.
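The Annualised Loss Expectancy mentioned in the abstract has a standard textbook form, ALE = SLE × ARO, where the single loss expectancy (SLE) is the asset value times the exposure factor. A minimal sketch with illustrative figures (the asset value, exposure factor and occurrence rate below are assumptions, not values from the paper):

```python
def annualised_loss_expectancy(asset_value, exposure_factor, annual_rate_of_occurrence):
    """ALE = SLE * ARO, where SLE (single loss expectancy)
    is the asset value times the exposure factor."""
    sle = asset_value * exposure_factor
    return sle * annual_rate_of_occurrence

# Illustrative figures: a 200k asset losing 25% of its value per incident,
# with three incidents expected per year.
ale = annualised_loss_expectancy(200_000, 0.25, 3)  # 150000.0
```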

    International Conference on Computer Systems and Technologies -CompSysTech'11 Risk Analysis supported by Information Security Metrics

    INTRODUCTION The importance of assuring the security of information assets is becoming more critical every year. Discussion of information security issues is necessary for business enterprises, and companies are becoming aware of it. However, the key areas of information security risk management and risk metrics still do not receive enough attention. Despite the many documents describing a managed approach to risk, they do not clearly define a proper risk analysis and assessment process. ISO standards exist which explain the theoretical risk analysis approach and provide generic guidance on choosing security objectives, such as the ISO 27000 family; however, they do not describe the practical aspects, and they fall short of evaluating the sufficiency of security mechanisms in a formal way. The knowledge base has improved in the past few years, but there is still a need for standardization of the whole risk assessment process. In this paper we propose a formal model for quantitative risk assessment using measures and metrics, which minimizes the subjective factors of the security evaluation. The model is designed to make the risk analysis process more automated, so that it can be easily repeated and the results are consistent and comparable.
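One common way such a metric-driven model reduces subjectivity is to derive likelihood from measured quantities rather than expert opinion. A minimal sketch of that idea, assuming hypothetical metric names, weights and impact value (none of these come from the paper, which defines its own model):

```python
# Hypothetical measured security metrics, each normalised to [0, 1]
# (higher = worse). The names and weights below are illustrative only.
metrics = {"patch_latency": 0.8, "open_ports": 0.3, "failed_logins": 0.5}
weights = {"patch_latency": 0.5, "open_ports": 0.2, "failed_logins": 0.3}

# Likelihood is a weighted combination of measured metrics,
# so repeating the measurement repeats the assessment.
likelihood = sum(weights[m] * metrics[m] for m in metrics)  # ~0.61

impact = 0.7  # normalised business impact of the threat scenario (assumed)
risk = likelihood * impact
```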

    Method of Information Security Risk Analysis for Virtualized System

    The growth in the use of Information Technology (IT) in the daily operations of enterprises has put the value, and the vulnerability, of information at the peak of interest. Moreover, distributed computing has revolutionized the outsourcing of computing functions, allowing flexible IT solutions. Since the concept of information goes beyond traditional text documents, reaching manufacturing, machine control and, to a certain extent, reasoning, maintaining appropriate information security is a great responsibility. Information Security (IS) risk analysis and maintenance require extensive knowledge about the possessed assets, as well as the technologies behind them, in order to recognize the threats and vulnerabilities the infrastructure is facing. A formal description of the infrastructure, the Enterprise Architecture (EA), offers a multi-perspective view of the whole enterprise, linking together business processes and infrastructure. Several IS risk analysis solutions based on the EA exist; however, the lack of IS risk analysis methods for virtualization technologies complicates the procedure, reducing the availability of such analysis. The dissertation consists of an introduction, three main chapters and general conclusions. The first chapter introduces the problem of information security risk analysis and its automation, and discusses state-of-the-art methodologies and their implementations for automated information security risk analysis. The second chapter proposes a novel method for risk analysis of virtualization components based on the most recent data, including threat classification and specification, control means, and metrics of the impact. The third chapter presents an experimental evaluation of the proposed method, implementing it in the Cyber Security Modeling Language (CySeMoL) and comparing the analysis results to well-calibrated expert knowledge. It was concluded that the automation of virtualization-solution risk analysis provides sufficient data for the adjustment and implementation of security controls to maintain an optimum security level.

    Information Modeling for Automated Risk Analysis

    Abstract. Systematic security risk analysis requires an information model which integrates the system design, the security environment (the attackers, security goals, etc.) and the proposed security requirements. Such a model must be scalable to accommodate large systems, and must support the efficient discovery of threat paths and the production of risk-based metrics; the modeling approach must balance complexity, scalability and expressiveness. This paper describes such a model; novel features include combining formal information modeling with informal requirements traceability, to support the specification of security requirements on incompletely specified services, and the typing of information flows to quantify path exploitability and model communications security.
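The combination of threat-path discovery and typed information flows can be pictured as path enumeration over a graph whose edges carry flow types mapped to exploitability weights. A toy sketch of that idea, not the paper's actual model; the component names, flow types and weights are all invented for illustration:

```python
# Edges are typed information flows; each flow type maps to an assumed
# exploitability weight in [0, 1] (illustrative values only).
FLOW_WEIGHT = {"plaintext": 1.0, "internal": 0.6, "tls": 0.2}

edges = {
    "internet":   [("web_server", "plaintext")],
    "web_server": [("app_server", "internal")],
    "app_server": [("database", "tls")],
    "database":   [],
}

def threat_paths(src, dst, path=None, score=1.0):
    """Enumerate acyclic attack paths from src to dst, scoring each
    path by the product of the exploitability weights of its flows."""
    path = (path or []) + [src]
    if src == dst:
        yield path, score
        return
    for nxt, flow in edges.get(src, []):
        if nxt not in path:  # avoid cycles
            yield from threat_paths(nxt, dst, path, score * FLOW_WEIGHT[flow])

paths = list(threat_paths("internet", "database"))
# one path: internet -> web_server -> app_server -> database, score 1.0*0.6*0.2
```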

    The Effect of Security Education and Expertise on Security Assessments: the Case of Software Vulnerabilities

    In spite of the growing importance of software security and the industry demand for more cyber security expertise in the workforce, the effect of security education and experience on the ability to assess complex software security problems has only recently been investigated. As a proxy for the full range of software security skills, we considered the problem of assessing the severity of software vulnerabilities by means of a structured analysis methodology widely used in industry (the Common Vulnerability Scoring System (CVSS) v3), and designed a study to compare how accurately individuals with a background in information technology but different professional experience and education in cyber security are able to assess the severity of software vulnerabilities. Our results provide some structural insights into the complex relationship between the education or experience of assessors and the quality of their assessments. In particular, we find that individual characteristics matter more than professional experience or formal education; apparently it is the combination of skills that one owns (including actual knowledge of the system under study), rather than the specialization or the years of experience, that most influences assessment quality. Similarly, we find that the overall advantage conferred by professional expertise depends significantly on the composition of the individual's security skills as well as on the available information. Comment: Presented at the Workshop on the Economics of Information Security (WEIS 2018), Innsbruck, Austria, June 2018.
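The CVSS v3 base score the assessors compute combines fixed metric values from the specification. A sketch of the calculation, simplified to the scope-unchanged case, using the all-high vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H (the metric constants are from the CVSS v3 specification; the scope-changed branch is omitted):

```python
import math

def roundup(x):
    """CVSS v3 Roundup: smallest number to one decimal place >= x."""
    return math.ceil(x * 10) / 10

# Metric values for AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # exploitability metrics
c, i, a = 0.56, 0.56, 0.56                # impact metrics (High)

iss = 1 - (1 - c) * (1 - i) * (1 - a)     # impact sub-score
impact = 6.42 * iss                        # scope unchanged
exploitability = 8.22 * av * ac * pr * ui
base_score = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
# base_score == 9.8 (Critical)
```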

    Trust economics feasibility study

    We believe that enterprises and other organisations currently lack sophisticated methods and tools to determine if and how IT changes should be introduced in an organisation such that objective, measurable goals are met. This is especially true when dealing with security-related IT decisions. We report on a feasibility study, Trust Economics, conducted to demonstrate that such a methodology can be developed. Assuming a deep understanding of the IT involved, the main components of our trust economics approach are to: (i) assess the economic or financial impact of IT security solutions; (ii) determine how humans interact with or respond to IT security solutions; and (iii) based on the above, use probabilistic and stochastic modelling tools to analyse the consequences of IT security decisions. In the feasibility study we apply the trust economics methodology to address how enterprises should protect themselves against accidental or malicious misuse of USB memory sticks, an acute problem in many industries.
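Step (iii), probabilistic modelling of the consequences of a security decision, can be sketched as a Monte Carlo comparison of two policies. This is a toy illustration only; the incident probability, user count and loss figures below are assumptions, not data from the study:

```python
import random

random.seed(7)  # deterministic for reproducibility

def simulate_annual_loss(p_incident_per_user, users, loss_per_incident, trials=10_000):
    """Monte Carlo estimate of expected annual loss from USB-stick misuse.
    All parameters are illustrative assumptions, not figures from the study."""
    total = 0
    for _ in range(trials):
        incidents = sum(random.random() < p_incident_per_user for _ in range(users))
        total += incidents * loss_per_incident
    return total / trials

# Compare a baseline policy against mandatory encryption, which is assumed
# here to shrink the loss per incident (sticks are lost either way).
baseline = simulate_annual_loss(0.02, 500, 40_000)
with_encryption = simulate_annual_loss(0.02, 500, 5_000)
```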

    Towards Validating Risk Indicators Based on Measurement Theory (Extended version)

    Due to the lack of quantitative information, and for cost-efficiency, most risk assessment methods use partially ordered values (e.g. high, medium, low) as risk indicators. In practice it is common to validate risk indicators by asking stakeholders whether they make sense. This way of validation is subjective and thus error-prone. If the metrics are wrong (not meaningful), they may lead system owners to distribute security investments inefficiently. For instance, in an extended enterprise this may mean over-investing in service level agreements, or obtaining a contract that provides a lower security level than the system requires. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk indicators they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a risk assessment method specially developed for assessing confidentiality risks in networks of organizations.
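In measurement theory, a statement about ordinal values like high/medium/low is meaningful only if it is invariant under every order-preserving re-encoding of the scale. A small sketch of why this matters for risk indicators (the systems and label sets are invented for illustration): comparing two systems by the *mean* of their ordinal risk labels can flip when the encoding changes, while a median-based comparison cannot.

```python
from statistics import median

# Two admissible numeric encodings of the same ordinal scale low < medium < high.
scale_a = {"low": 1, "medium": 2, "high": 3}
scale_b = {"low": 1, "medium": 2, "high": 10}  # order-preserving rescaling

system_1 = ["low", "low", "high"]        # illustrative indicator readings
system_2 = ["medium", "medium", "medium"]

def mean_risk(labels, scale):
    return sum(scale[l] for l in labels) / len(labels)

# The mean-based ranking of the two systems flips between encodings...
flip = (mean_risk(system_1, scale_a) < mean_risk(system_2, scale_a)) != \
       (mean_risk(system_1, scale_b) < mean_risk(system_2, scale_b))

# ...while the median-based ranking is invariant, hence meaningful.
stable = (median(scale_a[l] for l in system_1) < median(scale_a[l] for l in system_2)) == \
         (median(scale_b[l] for l in system_1) < median(scale_b[l] for l in system_2))
```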

    Seeking Anonymity in an Internet Panopticon

    Obtaining and maintaining anonymity on the Internet is challenging. The state of the art in deployed tools, such as Tor, uses onion routing (OR) to relay encrypted connections on a detour passing through randomly chosen relays scattered around the Internet. Unfortunately, OR is known to be vulnerable, at least in principle, to several classes of attacks for which no solution is known or believed to be forthcoming soon. Current approaches to anonymity also appear unable to offer accurate, principled measurement of the level or quality of anonymity a user might obtain. Toward this end, we offer a high-level view of the Dissent project, the first systematic effort to build a practical anonymity system based purely on foundations that offer measurable and formally provable anonymity properties. Dissent builds on two key pre-existing primitives, verifiable shuffles and dining cryptographers, but for the first time shows how to scale such techniques to offer measurable anonymity guarantees to thousands of participants. Further, Dissent represents the first anonymity system designed from the ground up to incorporate a systematic countermeasure for each of the major classes of known vulnerabilities in existing approaches, including global traffic analysis, active attacks, and intersection attacks. Finally, because no anonymity protocol alone can address risks such as software exploits or accidental self-identification, we introduce WiNon, an experimental operating system architecture to harden the uses of anonymity tools such as Tor and Dissent against such attacks. Comment: 8 pages, 10 figures.
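The dining-cryptographers primitive underlying Dissent rests on a simple XOR-cancellation property: each participant announces the XOR of the pads it shares with every other participant, the sender additionally XORs in the message, and each pad appears exactly twice in the combined announcements, so only the message survives. A toy one-round sketch (this illustrates the bare primitive, not Dissent's scalable protocol):

```python
import secrets

n = 3  # participants
# Each unordered pair (i, j) shares a random one-time pad.
pads = {(i, j): secrets.randbits(32) for i in range(n) for j in range(i + 1, n)}

message = 0xCAFE
sender = 1  # which participant transmits this round

def announcement(i):
    """XOR of all pads participant i holds; the sender also XORs the message."""
    a = 0
    for (x, y), pad in pads.items():
        if i in (x, y):
            a ^= pad
    if i == sender:
        a ^= message
    return a

# Every pad is held by exactly two participants, so XOR-ing all the
# announcements cancels every pad and reveals only the message,
# without revealing which participant sent it.
combined = 0
for i in range(n):
    combined ^= announcement(i)
```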