
    Towards Validating Risk Indicators Based on Measurement Theory (Extended version)

    Due to the lack of quantitative information and for cost-efficiency, most risk assessment methods use partially ordered values (e.g. high, medium, low) as risk indicators. In practice it is common to validate risk indicators by asking stakeholders whether they make sense. This way of validation is subjective and thus error-prone. If the metrics are wrong (not meaningful), they may lead system owners to distribute security investments inefficiently. For instance, in an extended enterprise this may mean over-investing in service level agreements or obtaining a contract that provides a lower security level than the system requires. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk indicators they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a risk assessment method specially developed for assessing confidentiality risks in networks of organizations.

    Towards Validating Risk Indicators Based on Measurement Theory

    Due to the lack of quantitative information and for cost-efficiency purposes, most risk assessment methods use partially ordered values (e.g. high, medium, low) as risk indicators. In practice it is common to validate risk scales by asking stakeholders whether they make sense. This way of validation is subjective and thus error-prone. If the metrics are wrong (not meaningful), they may lead system owners to distribute security investments inefficiently. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk scales they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a particular risk assessment method specially developed for assessing confidentiality risks in networks of organizations.
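    To make the measurement-theoretic notion of meaningfulness concrete, here is a minimal Python sketch (not from the papers; the two systems and the numeric codings are invented for illustration). On an ordinal scale, a statement is meaningful only if it stays true under every order-preserving recoding of the scale: the sketch shows a comparison of mean risk levels flipping between two admissible codings, while a comparison of medians does not.

```python
# Illustrative sketch: meaningfulness of statistics on an ordinal risk scale.
# Systems and codings below are assumptions made up for this example.
from statistics import mean, median

system_a = ["medium", "medium", "medium"]
system_b = ["low", "low", "high"]

# Two admissible numeric codings of the same ordinal scale
# (both preserve the order low < medium < high).
codings = [
    {"low": 1, "medium": 2, "high": 3},
    {"low": 1, "medium": 2, "high": 100},
]

for coding in codings:
    a = [coding[r] for r in system_a]
    b = [coding[r] for r in system_b]
    # The comparison of means flips between codings, so "A is riskier on
    # average than B" is NOT a meaningful statement on an ordinal scale.
    print(f"coding {coding}: mean(A) > mean(B) is {mean(a) > mean(b)}")
    # The comparison of medians is invariant under both codings: meaningful.
    print(f"coding {coding}: median(A) > median(B) is {median(a) > median(b)}")
```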

    Analyzing eye movement patterns to improve map design

    Get PDF
    Recently, eye tracking systems have been introduced in the field of cartography and GIS to support evaluating the quality of maps for the user. Quantitative eye movement metrics relate to, for example, the duration or the number of fixations, which are subsequently (statistically) compared to detect significant differences between map designs or between user groups. Besides these standard eye movement metrics, however, other, more spatial, measurements and visual interpretations of the data are better suited to investigating how users process, store and retrieve information from a (dynamic and/or) interactive map. This information is crucial for gaining insight into how users construct their cognitive map: for example, is there a general search pattern on a map and which elements influence it, how do users orient a map, and what is the influence of, say, a pan operation? These insights are in turn crucial for constructing maps that are more effective for the user, since the visualisation of the information on the map can be keyed to the user's cognitive processes. The study focuses on a qualitative and visual approach to the eye movement data resulting from a user study in which 14 participants were tested while working on 20 different dynamic and interactive demo maps. Since maps are essentially spatial objects, the analysis of these eye movement data is directed towards the locations of the fixations, the visual representation of the scanpaths, and the clustering and aggregation of the scanpaths. The results of this study show interesting patterns in the search strategies of users on dynamic and interactive maps.
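    As one concrete illustration of such a spatial treatment of fixation data, the Python sketch below clusters fixation locations into shared areas of interest on a map. This is not the study's actual pipeline; the synthetic fixations, the map elements, and the DBSCAN parameters are all assumptions chosen for the example.

```python
# Illustrative sketch: density-based clustering of fixation locations into
# areas of interest. Data and parameters are invented for this example.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Synthetic fixations from several participants, as (x, y) map pixel
# coordinates, concentrated around two hypothetical map elements.
legend = rng.normal(loc=(100, 400), scale=15, size=(40, 2))
pan_control = rng.normal(loc=(620, 80), scale=15, size=(40, 2))
scatter = rng.uniform(low=0, high=(800, 500), size=(20, 2))
fixations = np.vstack([legend, pan_control, scatter])

# DBSCAN groups spatially dense fixations into clusters (areas of interest);
# isolated fixations are labelled -1 (noise) and ignored below.
labels = DBSCAN(eps=30, min_samples=5).fit_predict(fixations)

for label in sorted(set(labels) - {-1}):
    cluster = fixations[labels == label]
    cx, cy = cluster.mean(axis=0)
    print(f"area of interest {label}: {len(cluster)} fixations "
          f"around ({cx:.0f}, {cy:.0f})")
```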

    Characterizing and Subsetting Big Data Workloads

    Big data benchmark suites must include a diversity of data and workloads to be useful in fairly evaluating big data systems and architectures. However, using truly comprehensive benchmarks poses great challenges for the architecture community. First, we need to thoroughly understand the behaviors of a variety of workloads. Second, our usual simulation-based research methods become prohibitively expensive for big data. As big data is an emerging field, more and more software stacks are being proposed to facilitate the development of big data applications, which aggravates these challenges. In this paper, we first use Principal Component Analysis (PCA) to identify the most important characteristics from 45 metrics in order to characterize big data workloads from BigDataBench, a comprehensive big data benchmark suite. Second, we apply a clustering technique to the principal components obtained from the PCA to investigate the similarity among big data workloads, and we verify the importance of including different software stacks for big data benchmarking. Third, we select seven representative big data workloads by removing redundant ones and release the BigDataBench simulation version, which is publicly available from http://prof.ict.ac.cn/BigDataBench/simulatorversion/.

    Comment: 11 pages, 6 figures, 2014 IEEE International Symposium on Workload Characterization
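    A minimal Python sketch of this kind of pipeline (standardize the metrics, reduce them with PCA, cluster in the reduced space, and pick the workload nearest each cluster centroid as its representative) might look as follows. The random matrix stands in for BigDataBench's actual 45 metrics, and the variance threshold and cluster count are assumptions for the example, not the paper's exact settings.

```python
# Illustrative sketch: PCA + clustering to pick representative workloads.
# The data and hyperparameters are placeholders, not the paper's values.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
metrics = rng.random((30, 45))  # 30 workloads x 45 characterization metrics

# Standardize each metric, then keep the principal components that together
# explain 90% of the variance.
scaled = StandardScaler().fit_transform(metrics)
components = PCA(n_components=0.9).fit_transform(scaled)

# Cluster workloads in PCA space; similar workloads share a cluster.
kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(components)

# For each cluster, pick the workload closest to the centroid as its
# representative; the rest are treated as redundant.
for c in range(kmeans.n_clusters):
    members = np.flatnonzero(kmeans.labels_ == c)
    dists = np.linalg.norm(components[members] - kmeans.cluster_centers_[c],
                           axis=1)
    print(f"cluster {c}: representative workload index "
          f"{members[np.argmin(dists)]}")
```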