
    Verification of Query Completeness over Processes [Extended Version]

    Data completeness is an essential aspect of data quality, and it in turn has a huge impact on the effective management of companies. For example, statistics are computed and audits are conducted in companies under the implicit, strong assumption that the analysed data are complete. In this work, we study the problem of completeness of data produced by business processes, with the aim of automatically assessing whether a given database query can be answered with complete information in a certain state of the process. We formalize so-called quality-aware processes that create data in the real world and store it in the company's information system, possibly at a later point. Comment: Extended version of a paper that was submitted to BPM 201
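    As a rough illustration of the kind of check the paper aims to automate (not the authors' formalism), the sketch below assumes a simplistic rule: a query can be answered completely in a given process state only if every table it reads has already been fully populated by the process. All names are hypothetical.

```python
# Hypothetical sketch: decide whether a query can be answered with complete
# information, given which tables the process has fully populated so far.
# The "all referenced tables must be complete" rule is illustrative only.

def query_is_complete(query_tables, complete_tables):
    """Return True if every table the query reads is known to be complete."""
    return set(query_tables) <= set(complete_tables)

# Example: after a "register student" activity has finished, suppose the
# process guarantees that 'students' is complete, but 'exams' may still
# be missing records stored at a later point.
state_complete = {"students"}
print(query_is_complete({"students"}, state_complete))           # True
print(query_is_complete({"students", "exams"}, state_complete))  # False
```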

    Assessing the Research Process Improves the Product: Results of a Faculty-Librarian Collaboration

    When an education professor and a reference librarian sought to improve the quality of undergraduate student research, their partnership led to a new focus on assessing the research process in addition to the product. In this study, we reflect on our collaborative experience introducing information literacy as the foundation for undergraduate teacher education research. We examine the outcomes of this collaboration, focusing on the assessment of the process. Using a mixed-methods approach, we found that direct instruction supporting effective research strategies positively impacted student projects. Our data also suggest that undergraduate students benefit not only from sound research strategies but also from organization strategies.

    Breathing Life into Information Literacy Skills: Results of a Faculty-Librarian Collaboration

    When an education professor and a reference librarian sought to improve the quality of undergraduate student research, their partnership led to a new focus on assessing the research process in addition to the product. In this study, we reflect on our collaborative experience introducing information literacy as the foundation for undergraduate teacher education research. We examine the outcomes of this collaboration, focusing on the assessment of the process. Using a mixed-methods approach, we found that direct instruction supporting effective research strategies positively impacted student projects. Our data also suggest that undergraduate students benefit not only from sound research strategies but also from organization strategies.

    The VVDS data reduction pipeline: introducing VIPGI, the VIMOS Interactive Pipeline and Graphical Interface

    The VIMOS VLT Deep Survey (VVDS), designed to measure 150,000 galaxy redshifts, requires a dedicated data reduction and analysis pipeline to process the large amount of spectroscopic data being produced in a timely fashion. This requirement has led to the development of the VIMOS Interactive Pipeline and Graphical Interface (VIPGI), a new software package designed to greatly simplify the task of reducing astronomical data obtained with VIMOS, the imaging spectrograph built by the VIRMOS Consortium for the European Southern Observatory and mounted on Unit 3 (Melipal) of the Very Large Telescope (VLT) at Paranal Observatory (Chile). VIPGI provides the astronomer with specially designed VIMOS data reduction functions, a VIMOS-centric data organizer, and dedicated data browsing and plotting tools that can be used to verify the quality and accuracy of the various stages of the data reduction process. The quality and accuracy of the data reduction pipeline are comparable to those obtained using well-known IRAF tasks, but the speed of the data reduction process is significantly increased thanks to the large set of dedicated features. In this paper we discuss the details of the MOS data reduction pipeline implemented in VIPGI, as applied to the reduction of some 20,000 VVDS spectra, assessing quantitatively the accuracy of the various reduction steps. We also provide a more general overview of the capabilities of VIPGI, a tool that can be used for the reduction of any kind of VIMOS data. Comment: 10 pages, submitted to Astronomy and Astrophysics
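    The sketch below is a minimal, hypothetical example of the kind of quality check such a pipeline reports for each reduced slit (it is not VIPGI's actual API): comparing the measured positions of well-known [OI] night-sky emission lines against their reference wavelengths to gauge wavelength-calibration accuracy. The measured values are made up.

```python
# Hypothetical sketch (not VIPGI's API): quick wavelength-calibration check
# for one slit of a reduced multi-object spectrum, using bright [OI] sky lines.
import numpy as np

REFERENCE_SKY_LINES = np.array([5577.34, 6300.30, 6363.78])  # Angstrom

def wavelength_calibration_rms(measured_lines):
    """RMS residual (Angstrom) between measured and reference sky-line positions."""
    residuals = np.asarray(measured_lines) - REFERENCE_SKY_LINES
    return float(np.sqrt(np.mean(residuals ** 2)))

# Example: line positions recovered from one slit of a reduced VIMOS-like frame.
measured = [5577.10, 6300.55, 6363.60]
print(f"calibration RMS: {wavelength_calibration_rms(measured):.2f} A")
```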

    Aerothermal modeling program. Phase 2, element B: Flow interaction experiment

    NASA has instituted an extensive effort to improve the design process and database for the hot-section components of gas turbine engines. The purpose of element B is to establish a benchmark-quality data set consisting of measurements of the interaction of circular jets with swirling flow. Such flows are typical of those that occur in the primary zone of modern annular combustion liners. Extensive computations of the swirling flows are to be compared with the measurements for the purpose of assessing the accuracy of current physical models used to predict such flows.
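    One simple way to quantify the model-versus-measurement comparison the abstract describes is a normalized RMS error over a measured profile. The sketch below is purely illustrative; the profile values are invented, and the metric is a generic choice rather than the program's stated procedure.

```python
# Illustrative sketch: normalized RMS error between a computed swirling-flow
# prediction and benchmark measurements (e.g. an axial velocity profile).
import numpy as np

def normalized_rms_error(predicted, measured):
    """RMS difference between model and data, normalized by the measured range."""
    predicted, measured = np.asarray(predicted), np.asarray(measured)
    rms = np.sqrt(np.mean((predicted - measured) ** 2))
    return float(rms / (measured.max() - measured.min()))

measured_velocity  = [2.1, 5.4, 9.8, 12.0, 9.5, 5.2, 2.0]   # m/s at radial stations
predicted_velocity = [2.4, 5.0, 10.5, 11.2, 10.1, 4.8, 2.3]
print(f"normalized RMS error: {normalized_rms_error(predicted_velocity, measured_velocity):.1%}")
```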

    Generic unified modelling process for developing semantically rich, dynamic and temporal models

    Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested in numerous areas of modelling, including support for model semantics, dynamic states and behaviour, and temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define from the literature the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, and temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes, and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.
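    As a minimal sketch of what "temporal" support can mean in practice (my own illustration, not the paper's proposed process or data structures), the example below stores every time-stamped state change of a model object so that earlier states remain queryable alongside the current one.

```python
# Illustrative sketch: a model object that retains its full time-stamped history.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any

@dataclass
class TemporalObject:
    name: str
    history: list = field(default_factory=list)  # (timestamp, attribute, value) records

    def set_state(self, attribute: str, value: Any, when: datetime) -> None:
        self.history.append((when, attribute, value))

    def state_at(self, when: datetime) -> dict:
        """Reconstruct the object's attributes as they were at a given time."""
        state = {}
        for t, attr, value in sorted(self.history, key=lambda rec: rec[0]):
            if t <= when:
                state[attr] = value
        return state

valve = TemporalObject("valve-1")
valve.set_state("status", "open",   datetime(2024, 1, 1))
valve.set_state("status", "closed", datetime(2024, 6, 1))
print(valve.state_at(datetime(2024, 3, 1)))  # {'status': 'open'}
```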

    A national facilitation project to improve primary palliative care : impact of the Gold Standards Framework on process and self-ratings of quality

    Background: Improving the quality of end-of-life care is a key driver of UK policy. The Gold Standards Framework (GSF) for Palliative Care aims to strengthen primary palliative care by facilitating the implementation of systematic clinical and organisational processes. Objectives: To describe the general practices that participated in the GSF programme in 2003–5 and the changes in process and perception of quality that occurred in the year following entry into the programme, and to identify factors associated with the extent of change. Methods: Participating practices completed a questionnaire at baseline and another approximately 12 months later. Data were derived from categorical questions about the implementation of 35 organisational and clinical processes, and self-rated assessments of quality, associated with palliative care provision. Participants: 1305 practices (total registered population almost 10 million). Follow-up questionnaire completed by 955 (73.2%) practices (after mean (SD) 12.8 (2.8) months; median 13 months). Findings: Mean increase in total number of processes implemented (maximum = 35) was 9.6 (95% CI 9.0 to 10.2; p<0.001; baseline: 15.7 (SD 6.4), follow-up: 25.2 (SD 5.2)). Extent of change was largest for practices with low baseline scores. Aspects of process related to coordination and communication showed the greatest change. All dimensions of quality improved following GSF implementation; change was highest for the "quality of palliative care for cancer patients" and "confidence in assessing, recording and addressing the physical and psychosocial areas of patient care". Conclusion: Implementation of the GSF seems to have resulted in substantial improvements in process and quality of palliative care. Further research is required into the extent to which this has enhanced care (physical, practical and psychological outcomes) for patients and carers.
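    The headline finding is a mean within-practice increase in implemented processes with a 95% confidence interval. The sketch below shows the standard arithmetic for a paired mean change and its normal-approximation CI; the scores are synthetic, not the study's data (which cover 1305 practices scored out of 35).

```python
# Sketch of the arithmetic behind "mean increase 9.6 (95% CI 9.0 to 10.2)":
# paired mean difference with a normal-approximation confidence interval.
import math

def mean_change_ci(baseline, followup, z=1.96):
    diffs = [f - b for b, f in zip(baseline, followup)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    half_width = z * sd / math.sqrt(n)
    return mean, (mean - half_width, mean + half_width)

baseline = [12, 18, 9, 20, 15, 17, 11, 22]   # processes implemented (max 35), synthetic
followup = [24, 27, 20, 29, 26, 25, 21, 30]
mean, (low, high) = mean_change_ci(baseline, followup)
print(f"mean increase = {mean:.1f}, 95% CI {low:.1f} to {high:.1f}")
```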

    An AIHW framework for assessing data sources for population health monitoring: working paper

    This paper outlines the Australian Institute of Health and Welfare's (AIHW) assessment framework for determining the suitability of specific data sources for population health monitoring. When identifying potential data sources for population health monitoring, it is important to ensure they are 'fit-for-purpose'. The AIHW has developed a 3-step process to assess potential data sources for population health monitoring: Step 1 collects information about the data source; Step 2 identifies the potential to inform key monitoring areas; Step 3 assesses the quality of the data, using a modified version of the Australian Bureau of Statistics (ABS) Data Quality Framework (ABS 2009), to determine its 'fitness-for-purpose' by establishing its utility, strengths and limitations. The assessment framework has been designed for use by the AIHW and others with an interest in assessing new data sources for use in population health monitoring. With adaptation, it may also have wider applications in other sectors or subject areas. For an example of the application of the assessment framework, see the AIHW working paper Assessment of the Australian Rheumatology Association Database for national population health monitoring (AIHW 2014a).
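    A simple way to picture the 3-step assessment is as a structured record per candidate data source. The sketch below is hypothetical: the field names, quality dimensions and the crude fit-for-purpose rule are illustrative, not the AIHW's actual instrument.

```python
# Hypothetical sketch of a 3-step data-source assessment record.
from dataclasses import dataclass, field

@dataclass
class DataSourceAssessment:
    name: str
    # Step 1: basic information about the data source
    custodian: str = ""
    collection_method: str = ""
    # Step 2: which monitoring areas it could inform
    monitoring_areas: list = field(default_factory=list)
    # Step 3: quality dimensions scored against a data quality framework
    quality: dict = field(default_factory=dict)  # e.g. {"accuracy": "high", ...}

    def fit_for_purpose(self) -> bool:
        """Crude illustrative rule: informs at least one area and no dimension rated 'poor'."""
        return bool(self.monitoring_areas) and "poor" not in self.quality.values()

source = DataSourceAssessment(
    name="Example clinical registry",
    custodian="Example custodian",
    collection_method="clinician-entered registry",
    monitoring_areas=["disease prevalence", "treatment patterns"],
    quality={"accuracy": "high", "timeliness": "medium", "coherence": "high"},
)
print(source.fit_for_purpose())  # True
```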

    Statistical mechanics of ontology based annotations

    We present a statistical mechanical theory of the process of annotating an object with terms selected from an ontology. The term selection process is formulated as an ideal lattice gas model, but in a highly structured inhomogeneous field. The model enables us to explain patterns recently observed in real-world annotation data sets in terms of the underlying graph structure of the ontology. By relating the external field strengths to the information content of each node in the ontology graph, the statistical mechanical model also allows us to propose a number of practical metrics for assessing the quality of both the ontology and the annotations that arise from its use. Using the statistical mechanical formalism we also study an ensemble of ontologies of differing size and complexity; an analysis not readily performed using real data alone. Focusing on regular tree ontology graphs, we uncover a rich set of scaling laws describing the growth in the optimal ontology size as the number of objects being annotated increases. In doing so we provide a further possible measure for assessment of ontologies. Comment: 27 pages, 5 figures
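    As a rough sketch of the ingredients named in the abstract (my reading, not the authors' code), the example below computes the standard information content of each ontology term from annotation frequencies, IC(t) = -log2 p(t), and the mean occupancy of an independent lattice-gas site in a local field h, <n> = 1 / (1 + exp(-beta*h)). The term IDs and counts are made up; treating IC directly as the field strength is an assumption for illustration.

```python
# Rough sketch: information content of ontology terms and the corresponding
# single-site lattice-gas occupation probability in a local field.
import math

def information_content(term_counts):
    """IC(t) = -log2 p(t), with p(t) the fraction of annotations using term t."""
    total = sum(term_counts.values())
    return {t: -math.log2(c / total) for t, c in term_counts.items()}

def occupation_probability(field, beta=1.0):
    """Mean occupancy <n> = 1 / (1 + exp(-beta * h)) of an independent lattice-gas site."""
    return 1.0 / (1.0 + math.exp(-beta * field))

counts = {"GO:0008150": 500, "GO:0003674": 300, "GO:0016301": 20}  # example counts
for term, h in information_content(counts).items():
    print(term, f"IC={h:.2f}", f"<n>={occupation_probability(h):.2f}")
```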
