
    Towards Validating Risk Indicators Based on Measurement Theory (Extended version)

    Due to the lack of quantitative information and for cost-efficiency, most risk assessment methods use partially ordered values (e.g. high, medium, low) as risk indicators. In practice it is common to validate risk indicators by asking stakeholders whether they make sense. This way of validating is subjective and thus error prone. If the metrics are wrong (not meaningful), they may lead system owners to distribute security investments inefficiently; in an extended enterprise, for instance, this may mean over-investing in service level agreements or obtaining a contract that provides a lower security level than the system requires. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk indicators they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a risk assessment method developed specifically for assessing confidentiality risks in networks of organizations.
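    A minimal sketch (illustrative, not from the paper) of the measurement-theoretic point: a statement about ordinal indicators such as low/medium/high is meaningful only if its truth survives every order-preserving recoding of the scale, which rules out arithmetic means but permits medians.

```python
# Illustrative only: "meaningfulness" of statistics on an ordinal risk scale.
RISKS = ["low", "medium", "high", "high"]

# Two equally valid numeric codings of the same ordinal scale.
coding_a = {"low": 1, "medium": 2, "high": 3}
coding_b = {"low": 1, "medium": 10, "high": 11}  # monotone recoding

def mean(values, coding):
    nums = [coding[v] for v in values]
    return sum(nums) / len(nums)

def median(values, coding):
    # Upper median; always lands on an actual scale point.
    nums = sorted(coding[v] for v in values)
    return nums[len(nums) // 2]

# "Average risk is above medium" flips truth value between codings,
# so the mean is not a meaningful statistic on this scale.
print(mean(RISKS, coding_a) > coding_a["medium"])  # True  (2.25 > 2)
print(mean(RISKS, coding_b) > coding_b["medium"])  # False (8.25 < 10)

# The median selects the same scale point ("high") under both codings.
print(median(RISKS, coding_a), median(RISKS, coding_b))  # 3 11
```

    Validating a risk indicator against measurement theory then amounts to checking that every conclusion drawn from it survives such recodings.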

    Software reliability and dependability: a roadmap

    This roadmap proposes: shifting the focus from software reliability to user-centred measures of dependability in complete software-based systems; influencing design practice to facilitate dependability assessment; propagating awareness of dependability issues and the use of existing, useful methods; injecting some rigour into the use of process-related evidence for dependability assessment; and better understanding issues of diversity and variation as drivers of dependability. Bev Littlewood is founder-Director of the Centre for Software Reliability, and Professor of Software Engineering at City University, London. Prof Littlewood has worked for many years on problems associated with the modelling and evaluation of the dependability of software-based systems; he has published many papers in international journals and conference proceedings and has edited several books. Much of this work has been carried out in collaborative projects, including the successful EC-funded projects SHIP, PDCS, PDCS2 and DeVa. He has been employed as a consultant.

    Assessing the Value of Travel Time Savings – A Feasibility Study on Humberside.

    It is expected that the opening of the Humber Bridge will cause major changes to travel patterns around Humberside; given the level of tolls as currently stated, many travellers will face decisions involving a trade-off between travel time, money outlay on tolls or fares, and money outlay on private vehicle running costs, whether in the context of destination choice, mode choice or route choice. This report sets out the conclusions of a preliminary study of the feasibility of inferring values of travel time savings from observations made on the outcomes of these decisions. Methods based on aggregate data on destination choice are found to be inefficient; a disaggregate mode choice study is recommended, subject to caveats on sample size.
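    The report predates modern estimation software, but the disaggregate approach it recommends is, in today's terms, typically a discrete choice (logit) model in which the value of travel time savings falls out as the ratio of the time and cost coefficients. A hedged sketch with invented observations:

```python
# Sketch with invented data: infer the value of travel time savings (VOT)
# from mode choices via a binary logit fitted by gradient ascent.
import numpy as np

# Hypothetical travellers: time difference (car minus bus, minutes), cost
# difference (car minus bus, pence), and observed choice (1 = car, 0 = bus).
dtime = np.array([-20., -15., -30., -10., -10., -5., -10., -20., -8., -25.])
dcost = np.array([ 20.,  20.,  35.,  12.,  30., 15.,  25.,  50., 30.,  30.])
chose_car = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

X = np.column_stack([dtime, dcost])
beta = np.zeros(2)  # (beta_time, beta_cost)

# Gradient ascent on the logit log-likelihood, P(car) = sigmoid(X @ beta).
for _ in range(50_000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 1e-4 * X.T @ (chose_car - p)

beta_time, beta_cost = beta
# The marginal rate of substitution between time and cost is the implied VOT.
print("implied value of time (pence per minute saved):", beta_time / beta_cost)
```

    The report's sample-size caveat shows up here directly: with only a handful of observations, the coefficient ratio is very imprecisely estimated.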

    Using a Machine Learning Approach to Implement and Evaluate Product Line Features

    Bike-sharing systems are a means of smart transportation in urban environments with a positive impact on urban mobility. In this paper we are interested in studying and modeling the behavior of features that let the end user check, from her/his web browser, the status of the bike-sharing system, and in particular features able to make predictions about the system state. We propose a machine learning approach that analyzes usage patterns and learns computational models of such features from logs of system usage. On the one hand, machine learning methodologies provide a powerful and general means to implement a wide range of predictive features. On the other hand, trained machine learning models come with a measure of predictive performance that can be used as a metric to assess the cost-performance trade-off of the feature. This provides a principled way to assess the runtime behavior of different components before putting them into operation. Comment: In Proceedings WWV 2015, arXiv:1508.0338
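    As a concrete (hypothetical) instance of the idea, the sketch below trains a classifier on synthetic usage-log records to predict whether a station will soon be empty, and reports held-out accuracy as the performance side of the cost-performance trade-off. The feature names and data are invented, not taken from the paper.

```python
# Hypothetical sketch: learn a predictive product-line feature from usage
# logs and score it by held-out predictive performance.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "log": hour of day and bikes currently docked at a station;
# label: will the station be empty within the next 15 minutes?
hours = rng.integers(0, 24, size=500)
bikes = rng.integers(0, 20, size=500)
will_be_empty = ((bikes < 4) & (hours >= 7) & (hours <= 9)).astype(int)

X = np.column_stack([hours, bikes])
X_tr, X_te, y_tr, y_te = train_test_split(X, will_be_empty, random_state=0)

model = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)

# Held-out accuracy is the performance term weighed against the feature's
# implementation and runtime cost.
print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
```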

    Fair Inputs and Fair Outputs: The Incompatibility of Fairness in Privacy and Accuracy

    Fairness concerns about algorithmic decision-making systems have mainly focused on the outputs (e.g., the accuracy of a classifier across individuals or groups). However, one may additionally be concerned with fairness in the inputs. In this paper, we propose and formulate two properties regarding the inputs of (the features used by) a classifier. In particular, we claim that fair privacy (whether individuals are all asked to reveal the same information) and need-to-know (whether users are only asked for the minimal information required for the task at hand) are desirable properties of a decision system. We explore the interaction between these properties and fairness in the outputs (fair prediction accuracy). We show that for an optimal classifier these three properties are in general incompatible, and we explain which common properties of data make them incompatible. Finally, we provide an algorithm to verify whether the trade-off between the three properties exists in a given dataset, and use it to show that this trade-off is common in real data.
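    The paper's verification algorithm is not reproduced here, but a toy experiment can show the flavor of the incompatibility. In the synthetic data below (all names invented), feature f0 is informative only for group A and f1 only for group B, so asking everyone for the same single feature (fair privacy) yields unequal accuracy, while equalizing accuracy requires collecting more than each group's need-to-know minimum.

```python
# Synthetic probe (not the paper's algorithm) of the inputs/outputs tension.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)   # 0 = group A, 1 = group B
y = rng.integers(0, 2, size=n)

# f0 predicts y for group A only; f1 predicts y for group B only.
f0 = np.where(group == 0, y, rng.integers(0, 2, size=n))
f1 = np.where(group == 1, y, rng.integers(0, 2, size=n))

def group_accuracies(feature_columns):
    X = np.column_stack(feature_columns + [group])
    pred = LogisticRegression().fit(X, y).predict(X)
    return [round(accuracy_score(y[group == g], pred[group == g]), 2)
            for g in (0, 1)]

# Same single input for everyone: near-perfect for A, near-chance for B.
print("inputs {f0}:    ", group_accuracies([f0]))
# Both inputs: accuracies roughly equalize, but each group now reveals
# more than the task strictly needs for it.
print("inputs {f0, f1}:", group_accuracies([f0, f1]))
```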

    Multi-criteria analysis: a manual
