
    An Ontology Based Approach to Data Quality Initiatives Cost-Benefit Evaluation

    In order to achieve higher data quality targets, organizations need to identify the data quality dimensions affected by poor quality, assess them, and evaluate which improvement techniques are suitable to apply. The data quality literature provides methodologies that support complete data quality management by providing guidelines that organizations should contextualize and apply to their own scenario. Only a few of these methodologies use cost-benefit analysis as a tool to evaluate the feasibility of a data quality improvement project. In this paper, we present an ontological description of cost-benefit analysis that incorporates the most important contributions already proposed in the literature. The use of ontologies improves knowledge of the domain by making the interdependencies between costs and benefits explicit, and it enables different complex evaluations. The feasibility and usefulness of the proposed ontology-based tool have been tested by means of a real case study.
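    The ontological machinery the abstract describes can be pictured with a small RDF graph. The following is a minimal Python sketch using rdflib; the dq namespace, the class names, and the enables property are illustrative assumptions, not the paper's actual ontology.

    from rdflib import Graph, Namespace, RDF, RDFS

    # Hypothetical namespace and taxonomy for illustration only.
    DQ = Namespace("http://example.org/dq#")
    g = Graph()
    g.bind("dq", DQ)

    # Cost and benefit categories as RDFS classes.
    g.add((DQ.Cost, RDF.type, RDFS.Class))
    g.add((DQ.Benefit, RDF.type, RDFS.Class))
    g.add((DQ.DataCleansingCost, RDFS.subClassOf, DQ.Cost))
    g.add((DQ.ReducedReworkBenefit, RDFS.subClassOf, DQ.Benefit))

    # An interdependency: incurring a cost enables a benefit.
    g.add((DQ.enables, RDF.type, RDF.Property))
    g.add((DQ.DataCleansingCost, DQ.enables, DQ.ReducedReworkBenefit))

    # A complex evaluation then becomes a graph query over the
    # explicit cost-benefit interdependencies.
    q = """PREFIX dq: <http://example.org/dq#>
           SELECT ?c ?b WHERE { ?c dq:enables ?b }"""
    for cost, benefit in g.query(q):
        print(cost, "enables", benefit)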

    Measuring Service Quality: The Opinion of Europeans about Utilities

    This paper provides a comparative analysis of statistical methods to evaluate consumer perception of the quality of Services of General Interest. The evaluation of the service quality perceived by users is usually based on Customer Satisfaction Survey data, on which an ex-post evaluation is then performed. Another approach, consisting in evaluating consumers' preferences, supplies ex-ante information on service quality. Here, the ex-post approach is considered: two non-standard techniques, the Rasch Model and Nonlinear Principal Component Analysis, are presented and the potential of both methods is discussed. These methods are applied to Eurobarometer Survey data to assess consumer satisfaction across European countries and in different years.
    Keywords: Service Quality, Eurobarometer, Non Linear Principal Component Analysis, Rasch Analysis, Conjoint Analysis
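    To make the first of the two techniques concrete: the dichotomous Rasch model places each respondent's latent satisfaction theta and each item's difficulty beta on a common scale, with P(positive response) = exp(theta - beta) / (1 + exp(theta - beta)). Below is a minimal numpy sketch fitted by joint maximum likelihood; the function names, learning rate, and toy data are assumptions for illustration, not the paper's estimation procedure.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def fit_rasch(X, n_iter=500, lr=0.05):
        """Joint maximum-likelihood fit by gradient ascent.
        X: 0/1 response matrix of shape (n_persons, n_items)."""
        n_persons, n_items = X.shape
        theta = np.zeros(n_persons)    # person measures (satisfaction)
        beta = np.zeros(n_items)       # item difficulties
        for _ in range(n_iter):
            P = sigmoid(theta[:, None] - beta[None, :])
            resid = X - P              # gradient of the log-likelihood
            theta += lr * resid.sum(axis=1)
            beta -= lr * resid.sum(axis=0)
            beta -= beta.mean()        # identification: centre the item scale
        # Note: perfect all-0/all-1 response patterns have no finite
        # JML estimate; production implementations treat them specially.
        return theta, beta

    # Toy dichotomized survey responses: 6 respondents x 4 items.
    rng = np.random.default_rng(0)
    X = (rng.random((6, 4)) < 0.6).astype(float)
    theta, beta = fit_rasch(X)
    print("person measures:  ", np.round(theta, 2))
    print("item difficulties:", np.round(beta, 2))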

    Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications

    We present an overview and evaluation of a new, systematic approach to generating highly realistic, annotated synthetic data for training deep neural networks on computer vision tasks. The main contribution is a procedural world modeling approach that enables high variability coupled with physically accurate image synthesis, a departure from the hand-modeled virtual worlds and approximate image synthesis methods used in real-time applications. The benefits of our approach include flexible, physically accurate and scalable image synthesis, implicit wide coverage of classes and features, and complete data introspection for annotations, all of which contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data, with and without fine-tuning on organic (i.e. real-world) data. The evaluation shows that our approach improves the neural networks' performance and that even modest implementation efforts produce state-of-the-art results.
    Comment: The project web page at http://vcl.itn.liu.se/publications/2017/TKWU17/ contains a version of the paper with high-resolution images as well as additional material.
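    The train-on-synthetic-then-fine-tune-on-organic protocol mentioned above follows a common pattern, sketched below in PyTorch. Everything here (the toy network, the random stand-in datasets, the epoch counts and learning rates) is an illustrative assumption; the paper's architectures and training setup are not reproduced.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    # Tiny stand-in segmentation net producing per-pixel class logits.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 5, 1),           # 5 hypothetical classes
    )

    def train(loader, epochs, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for images, labels in loader:
                opt.zero_grad()
                loss_fn(model(images), labels).backward()
                opt.step()

    # Random stand-ins for the synthetic and organic (real-world) sets.
    synthetic = TensorDataset(torch.randn(64, 3, 32, 32),
                              torch.randint(0, 5, (64, 32, 32)))
    real = TensorDataset(torch.randn(16, 3, 32, 32),
                         torch.randint(0, 5, (16, 32, 32)))

    # Pre-train on abundant synthetic data, then fine-tune briefly on
    # scarcer real data with a smaller learning rate so the learned
    # features adapt rather than being overwritten.
    train(DataLoader(synthetic, batch_size=8, shuffle=True), epochs=3, lr=1e-3)
    train(DataLoader(real, batch_size=8, shuffle=True), epochs=1, lr=1e-4)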

    Supporting mediated peer-evaluation to grade answers to open-ended questions

    We show an approach to the semi-automatic grading of answers given by students to open-ended questions (open answers). We use both peer evaluation and teacher evaluation. A learner is modeled by her Knowledge and the quality of her assessments (Judgment). The data generated by the peer and teacher evaluations, and by the learner models, are represented by a Bayesian Network, in which the grades of the answers and the elements of the learner models are variables with values in a probability distribution. The initial state of the network is determined by the peer-assessment data. Then, each teacher's grading of an answer triggers evidence propagation in the network. The framework is implemented in a web-based system. We also present an experimental activity, set up to verify the effectiveness of the approach in terms of the correctness of system grading, the amount of teacher's work required, and the correlation of system outputs with teachers' grades and students' final exam grades.
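    The evidence-propagation step can be pictured with a much smaller model than the paper's full Bayesian network. In the Python sketch below, each peer reports the true grade with probability equal to her Judgment reliability, a teacher grade is just a highly reliable report, and both update the posterior by Bayes' rule; the grade scale, reliabilities, and simplified likelihood are assumptions for illustration.

    import numpy as np

    GRADES = [0, 1, 2]          # hypothetical discrete grade levels

    def posterior_grade(peer_grades, peer_reliability, prior=None):
        """Posterior over an answer's true grade given graded reports.
        Each grader reports the true grade with probability equal to
        her reliability and a uniformly wrong grade otherwise; this is
        a simplification of the paper's network."""
        n = len(GRADES)
        p = np.ones(n) / n if prior is None else np.array(prior, float)
        for g, r in zip(peer_grades, peer_reliability):
            like = np.full(n, (1 - r) / (n - 1))
            like[g] = r
            p *= like               # Bayes: prior x likelihood
        return p / p.sum()

    # Three peers grade one answer; the more reliable peers say 2.
    p = posterior_grade(peer_grades=[2, 2, 1],
                        peer_reliability=[0.8, 0.7, 0.5])
    print(np.round(p, 3))       # belief before any teacher grading

    # A teacher grade acts as near-certain evidence on the same node.
    p = posterior_grade(peer_grades=[2], peer_reliability=[0.98], prior=p)
    print(np.round(p, 3))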

    Evaluation of complex integrated care programmes: the approach in North West London

    Background: Several local attempts have been made to introduce integrated care in the English National Health Service, with limited success. The North West London Integrated Care Pilot attempts to improve the quality of care of the elderly and of people with diabetes by providing a novel integration process across primary, secondary and social care organisations. It involves predictive risk modelling, care planning, multidisciplinary management of complex cases and an information technology tool to support information sharing. This paper sets out the evaluation approach adopted to measure its effect. Study design: We present a mixed-methods evaluation methodology. It includes a quantitative approach measuring changes in service utilisation, costs, clinical outcomes and quality of care using routine primary and secondary data sources. It also contains a qualitative component, involving observations, interviews and focus groups with patients and professionals, to understand participant experiences and to place the pilot within the national policy context. Theory and discussion: This study considers the complexity of evaluating a large, multi-organisational intervention in a changing healthcare economy. We locate the evaluation within the theory of evaluation of complex interventions. We present the specific challenges faced in evaluating an intervention of this sort, and the responses made to mitigate them. Conclusions: We hope this broad, dynamic and responsive evaluation will allow us to clarify the contribution of the pilot and provide a potential model for the evaluation of other similar interventions. Given the priority accorded to the integration agenda by governments internationally, the need to develop and improve strong evaluation methodologies remains strikingly important.

    Identifying Mislabeled Training Data

    This paper presents a new approach to identifying and eliminating mislabeled training instances for supervised learning. The goal of this approach is to improve the classification accuracies produced by learning algorithms by improving the quality of the training data. Our approach uses a set of learning algorithms to create classifiers that serve as noise filters for the training data. We evaluate single-algorithm, majority-vote and consensus filters on five datasets that are prone to labeling errors. Our experiments illustrate that filtering significantly improves classification accuracy for noise levels up to 30 percent. An analytical and empirical evaluation of the precision of our approach shows that consensus filters are conservative at throwing away good data at the expense of retaining bad data, and that majority filters are better at detecting bad data at the expense of throwing away good data. This suggests that for situations in which there is a paucity of data, consensus filters are preferable, whereas majority-vote filters are preferable for situations with an abundance of data.
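    The filtering scheme described above maps directly onto cross-validated predictions from several learners. The sketch below follows that recipe in Python with scikit-learn; the particular learners, the fold count, and the toy data with 10 percent flipped labels are assumptions for illustration, not the paper's experimental setup.

    import numpy as np
    from sklearn.base import clone
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    def noise_filter(X, y, learners, mode="majority", n_splits=5):
        """Flag training instances whose label the filters disagree with.
        Each learner predicts every instance from models trained on the
        other folds; 'majority' flags an instance when most learners
        mispredict it, 'consensus' only when all of them do."""
        errors = np.zeros((len(learners), len(y)), dtype=bool)
        for i, learner in enumerate(learners):
            kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
            for train_idx, test_idx in kf.split(X):
                model = clone(learner).fit(X[train_idx], y[train_idx])
                errors[i, test_idx] = model.predict(X[test_idx]) != y[test_idx]
        votes = errors.sum(axis=0)
        if mode == "consensus":
            return votes == len(learners)   # conservative: keeps more bad data
        return votes > len(learners) / 2    # majority: discards more good data

    # Toy data with 10% artificially flipped labels (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    flip = rng.random(300) < 0.10
    y[flip] ^= 1

    learners = [DecisionTreeClassifier(random_state=0),
                LogisticRegression(max_iter=1000),
                KNeighborsClassifier()]
    mask = noise_filter(X, y, learners, mode="majority")
    X_clean, y_clean = X[~mask], y[~mask]
    print(f"flagged {mask.sum()} of {len(y)} instances as mislabeled")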