
    Monitoring Networked Applications With Incremental Quantile Estimation

    Networked applications have software components that reside on different computers. Email, for example, has database, processing, and user interface components that can be distributed across a network and shared by users in different locations or work groups. End-to-end performance and reliability metrics describe the software quality experienced by these groups of users, taking into account all the software components in the pipeline. Each user produces only some of the data needed to understand the quality of the application for the group, so group performance metrics are obtained by combining summary statistics that each end computer periodically (and automatically) sends to a central server. The group quality metrics usually focus on medians and tail quantiles rather than on averages. Distributed quantile estimation is challenging, though, especially when passing large amounts of data around the network solely to compute quality metrics is undesirable. This paper describes an Incremental Quantile (IQ) estimation method that is designed for performance monitoring at arbitrary levels of network aggregation and time resolution when only a limited amount of data can be transferred. Applications to both real and simulated data are provided.
    Comment: This paper is commented in [arXiv:0708.0317], [arXiv:0708.0336], and [arXiv:0708.0338]; rejoinder in [arXiv:0708.0339]. Published at http://dx.doi.org/10.1214/088342306000000583 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
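
    As a rough illustration of the kind of computation involved (this is not the paper's IQ method), the sketch below maintains a running tail-quantile estimate on an end host with a simple stochastic-approximation update, so that only the current estimate, rather than the raw measurements, needs to be reported to the central server. The class name, step size, and simulated response times are assumptions made for the example.

    ```python
    # Hedged sketch: a streaming quantile tracker of the kind an end host could
    # maintain and periodically report upstream. This is a plain stochastic-
    # approximation update, not the Incremental Quantile (IQ) method of the paper.
    import random

    class StreamingQuantile:
        def __init__(self, p, step=1.0):
            self.p = p            # target quantile, e.g. 0.95 for a tail quantile
            self.step = step      # update step size (a tuning assumption)
            self.estimate = None

        def update(self, x):
            if self.estimate is None:
                self.estimate = float(x)
            elif x > self.estimate:
                self.estimate += self.step * self.p
            else:
                self.estimate -= self.step * (1.0 - self.p)
            return self.estimate

    # Simulated response times (ms); each end computer would stream its own data.
    random.seed(0)
    q95 = StreamingQuantile(p=0.95, step=2.0)
    for _ in range(10000):
        q95.update(random.expovariate(1 / 100))   # exponential, mean 100 ms

    print(f"approximate 95th percentile: {q95.estimate:.1f} ms")
    ```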

    Rigorously assessing software reliability and safety

    This paper summarises the state of the art in the assessment of software reliability and safety ("dependability") and describes some promising developments. A sound demonstration of very high dependability before the software enters operation is still impossible, but research is finding ways to make rigorous assessment increasingly feasible. While refined mathematical techniques cannot take the place of factual knowledge, they can allow the decision-maker to draw more accurate conclusions from the knowledge that is available.

    An Empirical analysis of Open Source Software Defects data through Software Reliability Growth Models

    The purpose of this study is to analyze the reliability growth of Open Source Software (OSS) using Software Reliability Growth Models (SRGM). The study uses defect data from twenty-five releases of five OSS projects. For each release of the selected projects, two types of datasets have been created: datasets built with respect to the defect creation date (created date DS) and datasets built with respect to the defect update date (updated date DS). These defect datasets are modelled by eight SRGMs: Musa-Okumoto, Inflection S-Shaped, Goel-Okumoto, Delayed S-Shaped, Logistic, Gompertz, Yamada Exponential, and the Generalized Goel Model. These models are chosen due to their widespread use in the literature. The SRGMs are fitted to both types of defect datasets for each project, and their fitting and prediction capabilities are analysed in order to study OSS reliability growth with respect to defect creation and defect update times, since defect analysis can be used as a constructive reliability predictor. Results show that the fitting and prediction quality of the SRGMs improves when the defect creation date is used to build the OSS defect datasets that characterize reliability growth. Hence, OSS reliability growth can be better characterized with SRGMs if the defect creation date, rather than the defect update (fix) date, is used when building OSS defect datasets for reliability modelling.
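
    To give a concrete sense of how such growth models are fitted to defect data, the sketch below fits the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) to a cumulative defect count by non-linear least squares. The synthetic weekly counts, the starting values, and the use of scipy.optimize.curve_fit are illustrative assumptions, not the study's actual datasets or estimation procedure.

    ```python
    # Hedged sketch: fitting the Goel-Okumoto SRGM to cumulative defect counts.
    # The defect data below is synthetic; a real study would use per-release
    # defect datasets ordered by creation (or update) date.
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        """Mean value function m(t) = a * (1 - exp(-b * t))."""
        return a * (1.0 - np.exp(-b * t))

    # Weeks since release and cumulative defects observed by each week (synthetic).
    t = np.arange(1, 13, dtype=float)
    cumulative_defects = np.array([5, 11, 18, 24, 29, 33, 36, 39, 41, 42, 43, 44],
                                  dtype=float)

    # Estimate a (expected total defects) and b (detection rate) by least squares.
    (a_hat, b_hat), _ = curve_fit(goel_okumoto, t, cumulative_defects, p0=[50.0, 0.1])

    print(f"estimated total defects a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
    print(f"predicted cumulative defects at week 20: {goel_okumoto(20.0, a_hat, b_hat):.1f}")
    ```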

    Evaluation of mobile health education applications for health professionals and patients

    Paper presented at the 8th International Conference on e-Health (EH 2016), 1-3 July 2016, Funchal, Madeira, Portugal.
    Mobile applications for health education are commonly used to support patients and health professionals. A critical evaluation framework is required to ensure the usability and reliability of mobile health education applications and to save time and effort for the various user groups; thus, the aim of this paper is to describe a framework for evaluating mobile applications for health education. The intended outcome of this framework is to meet the needs and requirements of the different user categories and to improve the development of mobile health education applications with software engineering approaches, by creating new and more effective techniques to evaluate such software. This paper first highlights the importance of mobile health education apps, then explains the need to establish an evaluation framework for them. The paper describes the evaluation framework along with some specific evaluation metrics: an efficient hybrid of selected heuristic evaluation (HE) and usability evaluation (UE) factors that enables the determination of the usefulness and usability of health education mobile apps. Finally, initial results obtained by applying the framework to the Medscape mobile app are explained. The proposed framework, An Evaluation Framework for Mobile Health Education Apps, is a hybrid of five metrics selected from a larger set in heuristic and usability evaluation, filtered based on interviews with patients and health professionals. These five metrics correspond to specific facets of usability identified through a requirements analysis of typical users of mobile health apps. They were decomposed into 21 specific questionnaire questions, which are available on request from the first author.
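
    Purely as a hypothetical illustration of how a hybrid metric set like this might be scored (the framework's actual five metrics and 21 questions are available from the authors and are not reproduced here), the sketch below averages Likert-style questionnaire responses into per-metric and overall scores. The metric names, question groupings, and the 1-5 scale are placeholders.

    ```python
    # Hypothetical sketch: aggregating questionnaire responses into per-metric
    # usability scores. The metric names, the grouping of questions, and the 1-5
    # response scale are placeholders, not the framework's actual content.
    from statistics import mean

    # Map each (placeholder) metric to the responses for its questionnaire items.
    responses = {
        "learnability": [4, 5, 4],
        "error_prevention": [3, 4],
        "content_reliability": [5, 4, 4, 5],
    }

    scores = {metric: mean(items) for metric, items in responses.items()}
    overall = mean(scores.values())

    for metric, score in scores.items():
        print(f"{metric}: {score:.2f} / 5")
    print(f"overall usability score: {overall:.2f} / 5")
    ```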

    Estimating ToE Risk Level using CVSS

    Security management is about calculated risk and requires continuous evaluation to ensure cost, time, and resource effectiveness. Part of this is making future-oriented, cost-benefit investments in security. Security investments must adhere to sound business principles, in which both security and financial aspects play an important role. Information on the current and potential risk level is essential to successfully trade off security and financial aspects. Risk level is the combination of the frequency and impact of a potential unwanted event, often referred to as a security threat or misuse. The paper presents a risk level estimation model that derives risk level as a conditional probability over frequency and impact estimates. The frequency and impact estimates are derived from a set of attributes specified in the Common Vulnerability Scoring System (CVSS). The model works at the level of vulnerabilities (just as CVSS does) and is able to compose vulnerabilities into service levels. The service levels define the potential risk levels and are modelled as a Markov process, which is then used to predict the risk level at a particular time.
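
    As an illustration of the Markov-process part of the model, the sketch below propagates a distribution over service/risk levels forward in time with a transition matrix. The three levels and the transition probabilities are invented for the example; in the paper these quantities would be derived from CVSS-based frequency and impact estimates for the target of evaluation's vulnerabilities.

    ```python
    # Hedged sketch: predicting a risk-level distribution over time with a Markov
    # chain over service levels. States and transition probabilities are invented
    # for illustration, not taken from the paper.
    import numpy as np

    states = ["low risk", "medium risk", "high risk"]

    # transition[i][j] = probability of moving from state i to state j per step.
    transition = np.array([
        [0.90, 0.08, 0.02],
        [0.10, 0.80, 0.10],
        [0.05, 0.25, 0.70],
    ])

    def risk_distribution(initial, steps):
        """Distribution over service/risk levels after `steps` time steps."""
        return initial @ np.linalg.matrix_power(transition, steps)

    initial = np.array([1.0, 0.0, 0.0])       # start in the low-risk level
    print(dict(zip(states, np.round(risk_distribution(initial, 12), 3))))
    ```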

    An LSPI based reinforcement learning approach to enable network cooperation in cognitive wireless sensor networks

    The number of wirelessly communicating devices increases every day, along with the number of communication standards and technologies they use to exchange data. A relatively new line of research is trying to find a way to make all these co-located devices not only capable of detecting each other's presence, but to go one step further: to make them cooperate. One recently proposed way to tackle this problem is to engage in cooperation by activating 'network services' (such as internet sharing, interference avoidance, etc.) that offer benefits for other co-located networks. This approach reduces the problem to the following research question: how to determine which network services would be beneficial for all the cooperating networks. In this paper we analyze and propose a conceptual solution for this problem using the reinforcement learning technique known as Least Squares Policy Iteration (LSPI). The proposed solution uses a self-learning entity that negotiates between different independent and co-located networks. First, the reasoning entity uses self-learning techniques to determine which service configuration should be used to optimize the network performance of each single network. Afterwards, this performance is used as a reference point, and LSPI is used to deduce whether cooperating with other co-located networks can lead to even further performance improvements.
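
    To make the LSPI idea concrete, the sketch below implements LSTD-Q policy evaluation with greedy policy improvement over linear features, which is the core loop of Least Squares Policy Iteration. The state encoding, the two-action space, the sample transitions, and the rewards are placeholders; in the paper's setting, actions would correspond to activating network services and rewards to the measured network performance.

    ```python
    # Hedged sketch: LSTD-Q plus greedy improvement, the core of LSPI.
    # Samples, features, and the action space are invented placeholders.
    import numpy as np

    ACTIONS = [0, 1]          # e.g. 0 = keep a service off, 1 = activate it
    GAMMA = 0.9               # discount factor

    def features(state, action):
        """Simple linear features phi(s, a); a placeholder design."""
        s = np.atleast_1d(state).astype(float)
        phi = np.zeros(2 * len(s) + 2)
        offset = action * (len(s) + 1)
        phi[offset:offset + len(s)] = s
        phi[offset + len(s)] = 1.0         # per-action bias term
        return phi

    def lstdq(samples, policy, k):
        """Solve A w = b for the Q-function weights of the given policy."""
        A = 1e-6 * np.eye(k)               # small ridge term for invertibility
        b = np.zeros(k)
        for s, a, r, s_next in samples:
            phi = features(s, a)
            phi_next = features(s_next, policy(s_next))
            A += np.outer(phi, phi - GAMMA * phi_next)
            b += r * phi
        return np.linalg.solve(A, b)

    def greedy_policy(w):
        return lambda s: max(ACTIONS, key=lambda a: features(s, a) @ w)

    # Alternate policy evaluation (LSTD-Q) and greedy improvement.
    samples = [([0.2], 0, 0.1, [0.3]), ([0.3], 1, 0.8, [0.7]), ([0.7], 1, 1.0, [0.9])]
    k = len(features([0.0], 0))
    w = np.zeros(k)
    for _ in range(10):
        w = lstdq(samples, greedy_policy(w), k)
    print("learned weights:", np.round(w, 3))
    ```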

    Quality measures for ETL processes: from goals to implementation

    Extraction-transformation-loading (ETL) processes play an increasingly important role in supporting modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and there is a need for a more human-centric approach to bring these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model for ETL process quality characteristics and quantitative measures for each characteristic, based on existing literature. Our model shows dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, where we employ a goal model that includes quantitative components (i.e., indicators) for the evaluation and analysis of alternative design decisions.
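
    As a hypothetical sketch of what a quality model with quantitative indicators could look like in code, consider the structure below. The characteristic names, measures, dependencies, and targets are assumptions for illustration, not the model defined in the paper.

    ```python
    # Hypothetical sketch: ETL quality characteristics with quantitative measures
    # (indicators) and dependencies between characteristics.
    from dataclasses import dataclass, field

    @dataclass
    class QualityCharacteristic:
        name: str
        measures: dict                      # measure name -> current value
        depends_on: list = field(default_factory=list)

    freshness = QualityCharacteristic(
        name="data freshness",
        measures={"max_staleness_minutes": 15.0},
    )
    performance = QualityCharacteristic(
        name="performance",
        measures={"rows_per_second": 12000.0},
        depends_on=[freshness],             # tighter freshness goals constrain throughput
    )

    # Goal-model style check: flag indicators that miss their target.
    targets = {
        "max_staleness_minutes": (10.0, "max"),   # lower is better
        "rows_per_second": (10000.0, "min"),      # higher is better
    }
    for qc in (freshness, performance):
        for measure, value in qc.measures.items():
            threshold, kind = targets[measure]
            ok = value <= threshold if kind == "max" else value >= threshold
            print(f"{qc.name}: {measure}={value} ({'meets' if ok else 'misses'} target {threshold})")
    ```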

    Exploratory analysis of high-resolution power interruption data reveals spatial and temporal heterogeneity in electric grid reliability

    Modern grid monitoring equipment enables utilities to collect detailed records of power interruptions. These data are aggregated to compute publicly reported metrics describing high-level characteristics of grid performance. The current work explores the depth of insights that can be gained from public data, and the implications of losing visibility into heterogeneity in grid performance through aggregation. We present an exploratory analysis examining three years of high-resolution power interruption data collected by archiving information posted in real time on the public-facing website of a utility in the Western United States. We report on the size, frequency and duration of individual power interruptions, and on spatio-temporal variability in aggregate reliability metrics. Our results show that metrics of grid performance can vary spatially and temporally by orders of magnitude, revealing heterogeneity that is not evidenced in publicly reported metrics. We show that limited access to granular information presents a substantive barrier to conducting detailed policy analysis, and discuss how more widespread data access could help to answer questions that remain unanswered in the literature to date. Given open questions about whether grid performance is adequate to support societal needs, we recommend establishing pathways to make high-resolution power interruption data available to support policy research.
    Comment: Journal submission (in review), 22 pages, 8 figures, 1 table.
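
    As a small illustration of how individual interruption records aggregate into reliability metrics, the sketch below computes SAIFI and SAIDI, the standard publicly reported reliability indices in the United States, for two hypothetical areas. The record format, areas, and customer counts are invented; the paper's dataset and aggregation choices may differ.

    ```python
    # Hedged sketch: aggregating individual power interruption records into
    # SAIFI (interruptions per customer) and SAIDI (interruption minutes per
    # customer). All records and customer counts below are synthetic.
    from datetime import datetime

    # (area, customers affected, start, end) for each interruption (synthetic).
    interruptions = [
        ("north", 120, datetime(2021, 7, 1, 14, 0), datetime(2021, 7, 1, 16, 30)),
        ("north", 800, datetime(2021, 7, 3, 2, 15), datetime(2021, 7, 3, 2, 45)),
        ("south", 40,  datetime(2021, 7, 9, 18, 0), datetime(2021, 7, 9, 23, 0)),
    ]
    customers_served = {"north": 5000, "south": 3000}

    def saifi(records, served):
        """Average number of interruptions per customer served."""
        return sum(n for _, n, _, _ in records) / served

    def saidi(records, served):
        """Average interruption duration (minutes) per customer served."""
        return sum(n * (end - start).total_seconds() / 60
                   for _, n, start, end in records) / served

    for area, served in customers_served.items():
        records = [r for r in interruptions if r[0] == area]
        print(f"{area}: SAIFI={saifi(records, served):.3f}, "
              f"SAIDI={saidi(records, served):.1f} min")
    ```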