
    Informatics Security Metrics Comparative Analysis

    Get PDF
    The concept of informatics security is defined. For informatics applications with a classical structure, the particularities of development, current use, maintenance, and reengineering are described for distributed systems and m-applications. Metrics are built for the security of open informatics applications, and a method for their validation is proposed. To determine when a metric is adequate, a comparative analysis is carried out for each indicator using a representative diversity of test data sets. Keywords: security, metrics, informatics, evaluation

    A comparative analysis of web-based GIS applications using usability metrics

    Get PDF
    With the rapid expansion of the internet, Web-based Geographic Information System (WGIS) applications have gained popularity, even though the WGIS interface is difficult to learn and understand because special functions are needed to manipulate the maps. Hence, it is essential to evaluate the usability of WGIS applications. Usability is an important factor in ensuring the development of quality, usable software products. At the same time, there are a number of standards and models in the literature, each of which describes usability in terms of a different set of attributes, and these models are vague and difficult to understand. Therefore, the primary purpose of this study is to compare five common usability models (Shackel, Nielsen, ISO 9241-11, ISO 9126-1 and QUIM) to identify the usability metrics that have been used most frequently in previous models. The questionnaire method and an automated usability evaluation method using the Loop11 tool were employed to evaluate these usability metrics on three case studies of commonly used WGIS applications: Google Maps, Yahoo Maps, and MapQuest. Finally, the case studies were compared and analysed based on the identified usability metrics. The comparison of models yielded four usability metrics (Effectiveness, Efficiency, Satisfaction, and Learnability), which are consistent, comprehensive, unambiguous, and appropriate for evaluating the usability of WGIS applications. In addition, there was a positive correlation between these usability metrics. The comparative analysis indicates that Effectiveness, Satisfaction, and Learnability were higher, and Efficiency lower, with the Loop11 tool than with the questionnaire method for all three case studies. In addition, Yahoo Maps and MapQuest scored lower on the usability metrics than Google Maps under both methods. Therefore, Google Maps is more usable than Yahoo Maps and MapQuest.
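
    As an illustration of how such metrics are commonly operationalised, the sketch below uses ISO 9241-11 style definitions (effectiveness as task completion rate, efficiency as completed tasks per unit time, satisfaction from questionnaire scores). These are generic formulas and hypothetical task data, not the exact instruments used in the study.

    # Minimal sketch of ISO 9241-11 style usability metrics from task-level data.
    # The input format and example values are illustrative assumptions.

    def effectiveness(completed_tasks, attempted_tasks):
        """Task completion rate, in percent."""
        return 100.0 * completed_tasks / attempted_tasks

    def efficiency(completed_tasks, total_time_seconds):
        """Completed tasks per minute of interaction time."""
        return completed_tasks / (total_time_seconds / 60.0)

    def satisfaction(questionnaire_scores, scale_max=5):
        """Mean questionnaire rating normalised to a 0-100 scale."""
        mean_score = sum(questionnaire_scores) / len(questionnaire_scores)
        return 100.0 * mean_score / scale_max

    # Example: one participant attempting 10 map-search tasks in a WGIS application.
    print(effectiveness(8, 10))           # 80.0
    print(efficiency(8, 600))             # 0.8 tasks per minute
    print(satisfaction([4, 5, 3, 4, 4]))  # 80.0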

    Comparative Noninformativities of Quantum Priors Based on Monotone Metrics

    Full text link
    We consider a family of prior probability distributions of particular interest, all being defined on the three-dimensional convex set of two-level quantum systems. Each distribution is, following recent work of Petz and Sudar, taken to be proportional to the volume element of a monotone metric on that Riemannian manifold. We apply an entropy-based test (a variant of one recently developed by Clarke) to determine which of two priors is more noninformative in nature. This involves converting them to posterior probability distributions based on some set of hypothesized outcomes of measurements of the quantum system in question. It is, then, ascertained whether or not the original relative entropy (Kullback-Leibler distance) between a pair of priors increases or decreases when one of them is exchanged with its corresponding posterior. The findings lead us to assert that the maximal monotone metric yields the most noninformative (prior) distribution and the minimal monotone (that is, the Bures) metric, the least. Our conclusions both agree and disagree, in certain respects, with ones recently reached by Hall, who relied upon a less specific test criterion than our entropy-based one. Comment: 7 pages, LaTeX, minor changes, to appear in Physics Letters
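
    For reference, the test the abstract describes can be written out as follows; the notation (p_1, p_2 for the two priors, p_1^post for the posterior of p_1) is ours, and this is a sketch of the general criterion rather than the paper's exact construction. In LaTeX notation:

    % Relative entropy (Kullback-Leibler distance) between two prior densities
    % p_1, p_2 on the convex set Q of two-level quantum systems:
    D(p_1 \,\|\, p_2) = \int_{Q} p_1(\rho)\, \log \frac{p_1(\rho)}{p_2(\rho)}\, d\rho .

    % Convert one prior to a posterior via Bayes' rule on a set of hypothesized
    % measurement outcomes x, then check whether the relative entropy to the
    % other prior increases or decreases:
    p_1^{\mathrm{post}}(\rho \mid x) \propto \Pr(x \mid \rho)\, p_1(\rho),
    \qquad
    \Delta = D(p_1^{\mathrm{post}} \,\|\, p_2) - D(p_1 \,\|\, p_2) .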

    Reactivity to sustainability metrics: A configurational study of motivation and capacity

    Get PDF
    Previous research on reactivity – defined as changing organisational behaviour to better conform to the criteria of measurement in response to being measured – has found significant variation in company responses to sustainability metrics. We propose that reactivity is driven by dialogue, motivation, and capacity in a configurational way. Empirically, we use fuzzy-set Qualitative Comparative Analysis (fsQCA) to analyse company responses to the sustainability index FTSE4Good. We find evidence of complementary and substitute effects between motivation and capacity. Based on these effects, we develop a typology of reactivity to sustainability metrics, which also theorises the use of metrics as tools for performance feedback and for building calculative capacity. We show that when reactivity is studied configurationally, we can identify previously underacknowledged types of responses. We discuss the theoretical and practical implications for studying and using sustainability metrics as governance tools for responsible behaviour.
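
    For readers unfamiliar with fsQCA, the calculation behind sufficiency claims of the form "this configuration of motivation and capacity leads to reactivity" rests on consistency and coverage of fuzzy-set membership scores. The sketch below uses the standard textbook formulas with made-up membership values; it is not the paper's calibration or data.

    # Generic fsQCA sufficiency measures over fuzzy membership scores in [0, 1].
    # X is a candidate configuration, Y the outcome (here: reactivity).

    def consistency(x, y):
        """Degree to which cases in the configuration also show the outcome."""
        return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

    def coverage(x, y):
        """Share of the outcome accounted for by the configuration."""
        return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

    # Hypothetical membership scores; the configuration "high motivation AND
    # high capacity" uses the fuzzy AND (minimum).
    motivation = [0.9, 0.7, 0.2, 0.8]
    capacity   = [0.8, 0.4, 0.9, 0.7]
    reactivity = [0.9, 0.5, 0.3, 0.8]

    config = [min(m, c) for m, c in zip(motivation, capacity)]
    print(consistency(config, reactivity))  # 1.0: consistent with sufficiency
    print(coverage(config, reactivity))     # 0.84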

    Temporal similarity metrics for latent network reconstruction: The role of time-lag decay

    Full text link
    When investigating the spreading of a piece of information or the diffusion of an innovation, we often lack information on the underlying propagation network. Reconstructing the hidden propagation paths based on the observed diffusion process is a challenging problem which has recently attracted attention from diverse research fields. To address this reconstruction problem, building on the static similarity metrics commonly used in the link prediction literature, we introduce new node-node temporal similarity metrics. The new metrics take as input the time series of multiple independent spreading processes, based on the hypothesis that two nodes are more likely to be connected if they were often infected at similar points in time. This hypothesis is implemented by introducing a time-lag function which penalizes distant infection times. We find that the choice of this time-lag function strongly affects the metrics' reconstruction accuracy, depending on the network's clustering coefficient, and we provide an extensive comparative analysis of static and temporal similarity metrics for network reconstruction. Our findings shed new light on the notion of similarity between pairs of nodes in complex networks.
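
    A minimal sketch of the kind of metric the abstract describes, assuming an exponential time-lag penalty exp(-|dt|/tau) applied to co-infection events; the abstract does not fix this particular functional form, so it is illustrative only.

    import math

    # Illustrative node-node temporal similarity: two nodes score higher the
    # more often they are infected at nearby times across independent spreading
    # processes. The exponential decay is one possible time-lag function; the
    # paper studies how such choices affect reconstruction accuracy.

    def temporal_similarity(times_i, times_j, tau=1.0):
        """times_i[r], times_j[r]: infection times of nodes i and j in
        spreading process r (None if the node was not infected in that run)."""
        score = 0.0
        for t_i, t_j in zip(times_i, times_j):
            if t_i is None or t_j is None:
                continue
            score += math.exp(-abs(t_i - t_j) / tau)
        return score

    # Node pairs with consistently close infection times across runs are then
    # ranked as more likely to share a hidden link.
    print(temporal_similarity([1.0, 4.0, None], [1.5, 3.5, 2.0], tau=1.0))  # ~1.21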

    Axiomatic Foundations for Metrics of Distributive Justice Shown by the Example of Needs-Based Justice

    Get PDF
    Distributive justice deals with the allocation of goods and bads within a group. Different principles and outcomes of distributions are regarded as possible ideals. Often these normative approaches are framed only verbally, which complicates their application to the concrete distribution situations that are to be evaluated with regard to justice. One way to frame them precisely, and to allow for a fine-grained evaluation of justice, is to model these ideals formally as metrics. The choice of a metric that is supposed to capture a certain ideal has to be justified. Such a justification can be given by demanding specific, well-founded axioms that a metric has to satisfy. This paper introduces such axioms for metrics of distributive justice, using needs-based justice as an example. Furthermore, some exemplary metrics of needs-based justice and a three-dimensional method for visualising non-comparative justice axioms and evaluations are presented. This provides a basis for discussing the evaluation and modelling of metrics of distributive justice.
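
    As a purely illustrative example of what such a metric can look like (this formula is our own sketch, not one of the metrics proposed in the paper): for k individuals with needs n_i > 0 and allocations x_i, a simple needs-based justice metric could score a distribution by its average relative need shortfall, in LaTeX notation

    J(x, n) = 1 - \frac{1}{k} \sum_{i=1}^{k} \frac{\max(0,\, n_i - x_i)}{n_i},

    so that J = 1 when every need is met and J decreases towards 0 as shortfalls grow. Axioms of the kind discussed in the paper would then constrain admissible choices of J; one natural candidate, for example, is monotonicity in each x_i up to n_i.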

    How Do Static and Dynamic Test Case Prioritization Techniques Perform on Modern Software Systems? An Extensive Study on GitHub Projects

    Full text link
    Test Case Prioritization (TCP) is an increasingly important regression testing technique for reordering test cases according to a pre-defined goal, particularly as agile practices gain adoption. To better understand these techniques, we perform the first extensive study aimed at empirically evaluating four static TCP techniques, comparing them with state-of-research dynamic TCP techniques across several quality metrics. This study was performed on 58 real-world Java programs encompassing 714 KLoC and results in several notable observations. First, our results across two effectiveness metrics (the Average Percentage of Faults Detected, APFD, and its cost-cognizant variant, APFDc) illustrate that at test-class granularity these metrics tend to correlate, but this correlation does not hold at test-method granularity. Second, our analysis shows that static techniques can be surprisingly effective, particularly when measured by APFDc. Third, we found that TCP techniques tend to perform better on larger programs, but that program size does not affect comparative performance measures between techniques. Fourth, software evolution does not significantly impact comparative performance results between TCP techniques. Fifth, neither the number nor the type of mutants utilized dramatically impacts measures of TCP effectiveness under typical experimental settings. Finally, our similarity analysis illustrates that highly prioritized test cases tend to uncover dissimilar faults. Comment: Preprint of accepted paper to IEEE Transactions on Software Engineering
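
    For context, APFD has a standard closed form: for n tests and m detected faults, with TF_i the 1-based position of the first test in the prioritized order that reveals fault i, APFD = 1 - (sum_i TF_i)/(n*m) + 1/(2n). The sketch below computes it from a hypothetical fault matrix; APFDc additionally weights faults by severity and tests by cost, which is omitted here.

    # Standard APFD computation; the example fault matrix is made up and not
    # taken from the study's subject programs.

    def apfd(prioritized_order, faults_detected_by):
        """prioritized_order: list of test ids in execution order.
        faults_detected_by: dict mapping test id -> set of fault ids it detects."""
        n = len(prioritized_order)
        fault_ids = set().union(*faults_detected_by.values())
        m = len(fault_ids)
        first_detection = {}
        for rank, test in enumerate(prioritized_order, start=1):
            for fault in faults_detected_by.get(test, set()):
                first_detection.setdefault(fault, rank)
        return 1 - sum(first_detection[f] for f in fault_ids) / (n * m) + 1 / (2 * n)

    # Hypothetical example: 4 tests, 3 faults.
    faults = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1"}}
    print(apfd(["t2", "t1", "t3", "t4"], faults))  # ~0.79
    print(apfd(["t3", "t4", "t1", "t2"], faults))  # ~0.29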