
    Trustworthy Experimentation Under Telemetry Loss

    Failure to accurately measure the outcomes of an experiment can lead to bias and incorrect conclusions. Online controlled experiments (a.k.a. A/B tests) are increasingly used to make decisions that improve websites as well as mobile and desktop applications. We argue that loss of telemetry data (during upload or post-processing) can skew the results of experiments, leading to loss of statistical power and inaccurate or erroneous conclusions. By systematically investigating the causes of telemetry loss, we argue that it is not practical to eliminate it entirely; consequently, experimentation systems need to be robust to its effects. Furthermore, we note that it is nontrivial to measure the absolute level of telemetry loss in an experimentation system. In this paper, we take a top-down approach to solving this problem. We motivate the impact of loss qualitatively using experiments in real applications deployed at scale, and formalize the problem by presenting a theoretical breakdown of the bias introduced by loss. Based on this foundation, we present a general framework for quantitatively evaluating the impact of telemetry loss, along with two solutions for measuring absolute levels of loss. This framework is used by well-known applications at Microsoft, with millions of users and billions of sessions. These general principles can be adopted by any application to improve the overall trustworthiness of experimentation and data-driven decision making.
    Comment: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, October 2018
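    As a toy illustration of the bias the paper formalizes, the following sketch simulates an A/B test in which telemetry loss is correlated with the outcome metric and occurs at different base rates in the two arms; the metric, the loss model, and all rates are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True per-user metric: the treatment adds +0.02 on average.
control = rng.normal(1.00, 0.5, n)
treatment = rng.normal(1.02, 0.5, n)

def observed(values, base_keep):
    # Hypothetical loss model: sessions with higher metric values are more
    # likely to upload their telemetry, so loss correlates with the outcome,
    # and the two arms lose telemetry at different base rates.
    keep_prob = np.clip(base_keep + 0.2 * (values - 1.0), 0.0, 1.0)
    return values[rng.random(len(values)) < keep_prob]

obs_control = observed(control, base_keep=0.90)
obs_treatment = observed(treatment, base_keep=0.80)

print(f"true effect:     {treatment.mean() - control.mean():+.4f}")
print(f"observed effect: {obs_treatment.mean() - obs_control.mean():+.4f}")
```

    Because the surviving sessions are a non-random subset in each arm, the naive difference of observed means no longer estimates the true treatment effect, which is exactly the failure mode the paper's framework is designed to quantify.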

    VR-PMS: a new approach for performance measurement and management of industrial systems

    A new performance measurement and management framework based on value and risk is proposed. The framework is applied to the a priori performance evaluation of manufacturing processes and to deciding among their alternatives. To this end, it consistently integrates concepts relevant to objectives, activity, and risk in a single framework comprising a conceptual value/risk model, and it conceptualises the idea of value- and risk-based performance management in a process context. In addition, a methodological framework is developed to provide guidelines for decision-makers or performance evaluators of the processes. To facilitate the performance measurement and management process, this latter framework is organized in four phases: context establishment, performance modelling, performance assessment, and decision-making, with a ranking sketch shown below. Each phase of the framework is instrumented with state-of-the-art quantitative analysis tools and methods. For process design and evaluation, the deliverable of the value- and risk-based performance measurement and management system (VR-PMS) is a set of ranked solutions (i.e. alternative business processes) evaluated against the developed value and risk indicators. The proposed VR-PMS is illustrated with a case study from discrete parts manufacturing but is applicable to a wide range of processes or systems.
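    The decision-making phase ends with a ranked set of alternatives; a minimal sketch of that final step might look as follows, with purely illustrative indicator values and weights (the actual value and risk indicators are developed in the paper).

```python
# Hypothetical aggregated indicator values per alternative process,
# normalised to [0, 1]: higher value is better, higher risk is worse.
alternatives = {
    "process_A": {"value": 0.82, "risk": 0.35},
    "process_B": {"value": 0.74, "risk": 0.15},
    "process_C": {"value": 0.90, "risk": 0.60},
}
w_value, w_risk = 0.6, 0.4  # illustrative decision-maker trade-off weights

scores = {
    name: w_value * ind["value"] - w_risk * ind["risk"]
    for name, ind in alternatives.items()
}
ranking = sorted(scores.items(), key=lambda kv: -kv[1])
for rank, (name, score) in enumerate(ranking, 1):
    print(f"{rank}. {name}: {score:.3f}")
```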

    Ontology-based metrics computation for business process analysis

    Business Process Management (BPM) aims to support the whole life-cycle necessary to deploy and maintain business processes in organisations. Crucial within the BPM life-cycle is the analysis of deployed processes. Analysing business processes requires computing metrics that can help determine the health of business activities and thus of the whole enterprise. However, the degree of automation currently achieved cannot support the level of reactivity and adaptation demanded by businesses. In this paper we argue and show how the use of Semantic Web technologies can significantly increase the level of automation in analysing business processes. We present a domain-independent ontological framework for Business Process Analysis (BPA) with support for automatically computing metrics. In particular, we define a set of ontologies for specifying metrics, and we describe a domain-independent metrics computation engine that can interpret and compute them. Finally, we illustrate and evaluate our approach with a set of general-purpose metrics.
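    The key idea, metrics specified declaratively and evaluated by a generic engine rather than hard-coded, can be sketched as follows. Plain Python dictionaries stand in for the ontologies the paper uses, and all process data is invented.

```python
from statistics import mean

# Hypothetical process-execution log; in the paper both the data and the
# metric definitions live in ontologies, here plain dicts stand in for both.
instances = [
    {"process": "claim_handling", "duration_h": 4.5, "outcome": "approved"},
    {"process": "claim_handling", "duration_h": 9.0, "outcome": "rejected"},
    {"process": "claim_handling", "duration_h": 6.2, "outcome": "approved"},
]

# Declarative metric definitions that the engine interprets at run time.
metrics = {
    "avg_duration_h": {"select": "duration_h", "aggregate": mean},
    "approval_rate": {
        "select": lambda i: i["outcome"] == "approved",
        "aggregate": lambda xs: sum(xs) / len(xs),
    },
}

def compute(metric, data):
    # A generic engine: it knows nothing about any specific metric, it only
    # applies a selector to each instance and aggregates the results.
    sel = metric["select"]
    values = [sel(i) if callable(sel) else i[sel] for i in data]
    return metric["aggregate"](values)

for name, definition in metrics.items():
    print(f"{name}: {compute(definition, instances):.2f}")
```

    Adding a new metric requires only a new declarative entry, not a change to the engine, which is the automation gain the paper attributes to the ontology-based approach.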

    Business-driven IT Management

    Business-driven IT management (BDIM) aims at ensuring successful alignment of business and IT through thorough understanding of the impact of IT on business results, and vice versa. In this dissertation, we review the state of the art of BDIM research and we position our intended contribution within the BDIM research space along the dimensions of decision support (as opposed to automation) and its application to IT service management processes. Within these research dimensions, we advance the state of the art by 1) contributing a decision-theoretical framework for BDIM and 2) presenting two novel BDIM solutions in the IT service management space. First, we present a simpler BDIM solution for prioritizing incidents, which can be used as a template for creating BDIM solutions in other IT service management processes. Then, we present a more comprehensive solution for optimizing the business-related performance of an IT support organization in dealing with incidents.

    Our decision-theoretical framework and models for BDIM bring the concepts of business impact and risk to the fore, and are able to cope with both monetizable and intangible aspects of business impact. We start from a constructive and quantitative re-definition of some terms that are widely used in IT service management but for which a rigorous definition was never given: business impact, cost, benefit, risk and urgency. On top of that, we build a coherent methodology for linking IT-level metrics with business-level metrics and make progress toward solving the business-IT alignment problem. Our methodology uses a constructive and quantitative definition of alignment with business objectives, taken as the likelihood, to the best of one's knowledge, that such objectives will be met. That definition is used as the basis for building an engine for business impact calculation that is in fact an alignment computation engine. We show a sample BDIM solution for incident prioritization that is built using this decision-theoretical framework, methodology and tools; a simplified sketch of the prioritization idea follows this abstract. We show how the sample solution could be used as a blueprint to build BDIM solutions for decision support in other IT service management processes, such as change management.

    However, the full power of BDIM is best understood by studying the second, fully fledged BDIM application presented in this thesis. While incident management serves as the scenario for this second application as well, its main contribution is to provide a solution for business-driven organizational redesign to optimize the performance of an IT support organization. The solution is quite rich, and features components that orchestrate advanced techniques in visualization, simulation, data mining and operations research. We show that the techniques we use, in particular the simulation of an IT organization enacting the incident management process, bring considerable benefits both when performance is measured in terms of traditional IT metrics (mean time to resolution of incidents), and even more so when business impact metrics are brought into the picture, thereby providing a justification for investing time and effort in creating BDIM solutions.

    In terms of impact, the work presented in this thesis produced about twenty conference and journal publications, and has so far resulted in three patent applications. Moreover, this work has greatly influenced the design and implementation of the Business Impact Optimization module of HP DecisionCenterℱ, a leading commercial software product for IT optimization, whose core has been re-designed to work as described here.
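    As a rough illustration of business-driven incident prioritization in the spirit described above, the sketch below ranks incidents by a simple expected-impact product. The impact model and all figures are hypothetical and far simpler than the decision-theoretic framework developed in the thesis.

```python
# Hypothetical incident queue. Expected business impact is modelled here as
# (probability the incident causes a business-objective breach)
#   x (monetized cost of that breach per hour)
#   x (expected hours to resolve).
# The thesis develops a much richer model that also covers intangible impact.
incidents = [
    {"id": "INC-101", "p_breach": 0.70, "cost_per_h": 1200.0, "eta_h": 2.0},
    {"id": "INC-102", "p_breach": 0.10, "cost_per_h": 9000.0, "eta_h": 1.5},
    {"id": "INC-103", "p_breach": 0.95, "cost_per_h": 300.0,  "eta_h": 6.0},
]

def expected_impact(inc):
    return inc["p_breach"] * inc["cost_per_h"] * inc["eta_h"]

# Work the queue in decreasing order of expected business impact rather than
# arrival order or raw severity.
for inc in sorted(incidents, key=expected_impact, reverse=True):
    print(f"{inc['id']}: expected impact ${expected_impact(inc):,.0f}")
```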

    Two-Layer Feed Forward Neural Network (TLFN) in Predicting Loan Default Probability

    The main objective of the thesis is to apply a neural network (NN) approach to estimating the probability of default (PD) used to assess whether a credit operation should be granted. That is, given an operation, the NN model should predict whether it is granted [0] or not granted [1]. Credit risk models and deep learning concepts are also explained.
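    A two-layer feed-forward network of the kind named in the title can be sketched in a few lines of NumPy: one hidden layer followed by a sigmoid output that yields the default probability. The synthetic features, labels, and hyperparameters below are illustrative and not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for credit data: 2 features (e.g., debt ratio, income),
# label 1 = not granted, 0 = granted, matching the thesis' encoding.
X = rng.normal(size=(500, 2))
y = ((X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, 500)) > 0)
y = y.astype(float).reshape(-1, 1)

# Two-layer feed-forward network: tanh hidden layer + sigmoid output.
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward pass: p is the predicted probability of default.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of binary cross-entropy w.r.t. the logits.
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = dz2 @ W2.T * (1 - h**2)  # tanh derivative
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2%}")
```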

    Moderating Effects of Management Control Systems and Innovation on Performance. Simple Methods for Correcting the Effects of Measurement Error for Interaction Effects in Small Samples

    In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial least squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on data of Bisbe and Otley (in press), who examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
    Keywords: measurement error; interaction effects; disattenuation; small samples; moderated regression; reliability; Cronbach's alpha
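    The classic disattenuation formula divides the observed correlation by the square root of the product of the two reliabilities, r* = r_xy / sqrt(r_xx * r_yy). The sketch below demonstrates it on simulated data; in practice the reliabilities would be estimated with Cronbach's alpha rather than known exactly, and all figures here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 150  # small-sample setting, as in the article

# Hypothetical latent scores and their noisy measured versions.
true_x = rng.normal(size=n)
true_y = 0.5 * true_x + rng.normal(0, 0.9, n)
x = true_x + rng.normal(0, 0.6, n)  # measurement error in x
y = true_y + rng.normal(0, 0.6, n)  # measurement error in y

# Reliabilities would normally come from Cronbach's alpha of the scales;
# here we use the known simulation values var(true) / var(observed).
rel_x = true_x.var() / x.var()
rel_y = true_y.var() / y.var()

r_obs = np.corrcoef(x, y)[0, 1]
r_corrected = r_obs / np.sqrt(rel_x * rel_y)  # disattenuation formula

print(f"observed r:      {r_obs:.3f}")
print(f"disattenuated r: {r_corrected:.3f}")
print(f"true r:          {np.corrcoef(true_x, true_y)[0, 1]:.3f}")
```

    The observed correlation is attenuated toward zero by the measurement error in both variables; dividing by the reliabilities recovers an estimate much closer to the correlation between the latent scores.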