    A Quality Model for Actionable Analytics in Rapid Software Development

    Background: Accessing relevant data on the product, process, and usage perspectives of software, as well as integrating and analyzing such data, is crucial for getting reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called the Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. Also, they considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables detecting problems that would take more time to find manually and adds transparency among the perspectives of system, process, and usage. Comment: This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online.
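
    The abstract describes a quality model that aggregates metrics into product and process factors but does not give its form. The sketch below is a minimal Python illustration of one way such an aggregation could work, assuming metrics are normalized to [0, 1] and combined into factors by weighted averaging; the metric names, ranges, and weights are hypothetical and not taken from the Q-Rapids model.

        # Minimal sketch of a quality model that aggregates normalized metrics
        # into factors, in the spirit of the Q-Rapids quality model described above.
        # Metric names, ranges, weights, and the aggregation rule are assumptions.

        def normalize(value, worst, best):
            """Map a raw metric value onto [0, 1], where 1 is the desired end."""
            span = best - worst
            score = (value - worst) / span if span else 0.0
            return max(0.0, min(1.0, score))

        # Hypothetical raw metrics gathered from heterogeneous sources
        # (e.g. CI server, static analysis, issue tracker).
        raw_metrics = {
            "test_coverage": 0.71,        # fraction of lines covered
            "complexity_violations": 14,  # files above a complexity threshold
            "blocking_issues_days": 3.5,  # average days an issue stays blocked
        }

        # Hypothetical factor definitions: metric -> (worst, best, weight).
        factors = {
            "code_quality": {
                "test_coverage": (0.0, 1.0, 0.6),
                "complexity_violations": (50, 0, 0.4),   # fewer violations is better
            },
            "blocking": {
                "blocking_issues_days": (10, 0, 1.0),    # shorter blocks are better
            },
        }

        for factor, parts in factors.items():
            score = sum(normalize(raw_metrics[m], worst, best) * w
                        for m, (worst, best, w) in parts.items())
            total_weight = sum(w for _, _, w in parts.values())
            print(f"{factor}: {score / total_weight:.2f}")

    In a deployment of this kind, such factor scores would be recomputed continuously as new data arrive from the connected sources.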

    Evaluation of complex integrated care programmes: the approach in North West London

    Background: Several local attempts have been made to introduce integrated care in the English National Health Service, with limited success. The Northwest London Integrated Care Pilot attempts to improve the quality of care of the elderly and people with diabetes by providing a novel integration process across primary, secondary and social care organisations. It involves predictive risk modelling, care planning, multidisciplinary management of complex cases and an information technology tool to support information sharing. This paper sets out the evaluation approach adopted to measure its effect. Study design: We present a mixed-methods evaluation methodology. It includes a quantitative approach measuring changes in service utilization, costs, clinical outcomes and quality of care using routine primary and secondary data sources. It also contains a qualitative component, involving observations, interviews and focus groups with patients and professionals, to understand participant experiences and to understand the pilot within the national policy context. Theory and discussion: This study considers the complexity of evaluating a large, multi-organisational intervention in a changing healthcare economy. We locate the evaluation within the theory of evaluation of complex interventions. We present the specific challenges faced in evaluating an intervention of this sort, and the responses made to mitigate them. Conclusions: We hope this broad, dynamic and responsive evaluation will allow us to clarify the contribution of the pilot, and provide a potential model for evaluation of other similar interventions. Because of the priority given to the integrated care agenda by governments internationally, the need to develop and improve strong evaluation methodologies remains strikingly important.

    Identifying and appraising promising sources of UK clinical, health and social care data for use by NICE

    This report aimed to aid the National Institute for Health and Care Excellence (NICE) in identifying opportunities for greater use of real-world data within its work. NICE identified five key ways in which real-world data was currently informing its work, or could do so in the future: (i) researching the effectiveness of interventions or practice in real-world (UK) settings; (ii) auditing the implementation of guidance; (iii) providing information on resource use and evaluating the potential impact of guidance; (iv) providing epidemiological information; and (v) providing information on current practice to inform the development of NICE quality standards. This report took a broad definition of ‘real-world’ data and created a map of UK sources, informed by a number of experts in real-world data as well as a literature search, to highlight where some of the opportunities may lie for NICE within its clinical, public health and social care remit. The report was commissioned by NICE, although the findings are likely to be of wider interest to a range of stakeholders interested in the role of real-world data in informing clinical, social care and public health decision-making. Most of the issues raised surrounding the use and appraisal of real-world data are likely to be generic, although the choice of datasets that were profiled in depth reflected the interests of NICE. We discovered 275 sources that were named as real-world data sources for clinical, social care or public health investigation, 233 of which were deemed active. The real-world data landscape is therefore highly complex and heterogeneous, composed of sources with different purposes, structures and collection methods. Some real-world data sources are purposefully either set up or redeveloped to enhance their data linkages and to examine the presence, absence or effectiveness of integrated patient care; however, such sources are in the minority. Furthermore, the small number of real-world data sources that are designed to enable the monitoring of care across providers, or at least have the capability to do so at a national level, have been utilised infrequently for this purpose in the literature. Data that offer the capacity to monitor transitions between health and social care do not currently exist at a national level, despite the increasing recognition of the interdependency between these sectors. Among the data sources we included, it was clear that no one data source represented a panacea for NICE’s real-world data needs. This highlights the merits and importance of data linkage projects and suggests a need to triangulate evidence across different data sources, particularly in order to understand the feasibility and impact of guidance. There exists no overall catalogue or repository of real-world data sources for health, public health and social care, and previous initiatives aimed at creating such a resource have not been maintained. As much as there is a need for enhanced usage of the data, there is also a need for taking stock, integration, standardisation and quality assurance of different sources. This research highlights a need for a systematic approach to creating an inventory of sources with detailed metadata, and the funding to maintain this resource. This would represent an essential first step to support future initiatives aimed at enhancing the use of real-world data.
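
    The report recommends a systematic inventory of real-world data sources with detailed metadata but does not prescribe a schema. The following is a purely illustrative Python sketch of what a minimal machine-readable inventory record could look like; every field name and value here is an assumption, not a NICE specification.

        # Illustrative sketch of a metadata record for an inventory of real-world
        # data sources of the kind the report recommends. The schema is assumed,
        # not one proposed by NICE or the report.
        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class DataSourceRecord:
            name: str                          # e.g. a national audit or registry
            custodian: str                     # organisation holding the data
            sectors: List[str]                 # "clinical", "social care", "public health"
            active: bool                       # still collecting data?
            linkage_ready: bool                # identifiers suitable for record linkage?
            coverage: str                      # geographic/population coverage
            access_route: Optional[str] = None # how researchers apply for access
            notes: str = ""

        inventory: List[DataSourceRecord] = [
            DataSourceRecord(
                name="Hypothetical Primary Care Registry",
                custodian="Example Data Custodian",
                sectors=["clinical"],
                active=True,
                linkage_ready=True,
                coverage="England",
            ),
        ]

        # Simple query over the inventory: active sources suitable for linkage.
        linkable = [r.name for r in inventory if r.active and r.linkage_ready]
        print(linkable)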

    Mining Authoritativeness in Art Historical Photo Archives. Semantic Web Applications for Connoisseurship

    The purpose of this work is threefold: (i) to facilitate knowledge discovery in art historical photo archives, (ii) to support users' decision-making process when evaluating contradictory artwork attributions, and (iii) to provide policies for information quality improvement in art historical photo archives. The approach is to leverage Semantic Web technologies in order to aggregate, assess, and recommend the most documented authorship attributions. In particular, the findings of this work offer art historians an aid for retrieving relevant sources, assessing the textual authoritativeness (i.e. internal grounds) of sources of attribution, and evaluating the cognitive authoritativeness of cited scholars. At the same time, the retrieval process allows art historical data providers to define a low-cost data integration process to update and enrich their collection data. The contributions of this thesis are the following: (1) a methodology for representing questionable information by means of ontologies; (2) a conceptual framework of Information Quality measures addressing the dimensions of textual and cognitive authoritativeness that characterise art historical data; (3) a number of policies for metadata quality improvement in art historical photo archives, derived from the application of the framework; (4) a ranking model leveraging the conceptual framework; (5) a semantic crawler, called mAuth, that harvests authorship attributions in the Web of Data; and (6) an API and a Web Application to serve information to applications and end users for consuming data. Although the findings are limited to a restricted number of photo archives and datasets, the research is relevant to a broader range of stakeholders, such as archives, museums, and libraries, which can reuse the conceptual framework for assessing questionable information, mutatis mutandis, in other neighbouring fields in the Humanities.
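
    The thesis mentions a ranking model that weighs the textual authoritativeness of the citing sources against the cognitive authoritativeness of the scholars, without detailing it in the abstract. The toy sketch below shows one way competing attributions might be scored; the features, weights, and scoring rule are assumptions for illustration only, not the mAuth model.

        # Toy sketch of ranking competing artwork attributions, loosely inspired by
        # the idea of combining textual authoritativeness of the citing sources with
        # the cognitive authoritativeness of the scholars. Features, weights, and the
        # scoring rule are illustrative assumptions.

        attributions = [
            # each candidate attribution with hypothetical evidence features
            {"artist": "Painter A", "n_sources": 5, "avg_source_reliability": 0.8,
             "scholar_citations": 120, "most_recent_year": 1998},
            {"artist": "Painter B", "n_sources": 2, "avg_source_reliability": 0.9,
             "scholar_citations": 430, "most_recent_year": 2015},
        ]

        def score(a, weights=(0.3, 0.3, 0.2, 0.2)):
            w_n, w_rel, w_cit, w_rec = weights
            return (w_n * min(a["n_sources"], 10) / 10                   # documentation volume
                    + w_rel * a["avg_source_reliability"]                # textual authoritativeness
                    + w_cit * min(a["scholar_citations"], 500) / 500     # cognitive authoritativeness
                    + w_rec * (a["most_recent_year"] - 1900) / 125)      # recency of scholarship

        for a in sorted(attributions, key=score, reverse=True):
            print(f'{a["artist"]}: {score(a):.2f}')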

    Canadian Approaches to Optimizing Quality of Administrative Data for Health System Use, Research, and Linkage

    Theme: Data and Linkage Quality
    Objectives:
    • To define health data quality from clinical, data science, and health system perspectives
    • To describe some of the international best practices related to quality and how they are being applied to Canada’s administrative health data
    • To compare methods for health data quality assessment and improvement in Canada (automated logical checks, chart quality indicators, reabstraction studies, coding manager perspectives)
    • To highlight how data linkage can be used to provide new insights into the quality of original data sources
    • To highlight current international initiatives for improving coded data quality, including results from current ICD-11 field trials
    Dr. Keith Denny: Director of Clinical Data Standards and Quality, Canadian Institute for Health Information (CIHI), and Adjunct Research Professor, Carleton University, Ottawa, ON. He provides leadership for CIHI’s information quality initiatives and for the development and application of clinical classifications and terminology standards.
    Maureen Kelly: Manager of Information Quality at CIHI, Ottawa, ON. She leads CIHI’s corporate quality program, which is focused on enhancing the quality of CIHI’s data sources and information products and on fostering CIHI’s quality culture.
    Dr. Cathy Eastwood: Scientific Manager and Associate Director of the Alberta SPOR Methods & Development Platform, Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB. She has expertise in clinical data collection, evaluation of local and systemic data quality issues, and disease classification coding with ICD-10 and ICD-11.
    Dr. Hude Quan: Professor, Community Health Sciences, Cumming School of Medicine, University of Calgary; Director of the Alberta SPOR Methods Platform; Co-Chair of Hypertension Canada; and Co-Chair of the Person to Population Health Collaborative of the Libin Cardiovascular Institute in Calgary, AB. He has expertise in assessing, validating, and linking administrative data sources for conducting data science research, including artificial intelligence methods for evaluating and improving data quality.
    Intended Outcomes: “What is quality health data?” The panel of experts will address this common question by discussing how to define high-quality health data and the measures being taken to ensure that such data are available in Canada. Optimizing the quality of clinical-administrative data, and their use-value, first requires an understanding of the processes used to create the data. Subsequently, we can address the limitations in data collection and use these data for diverse applications. Current advances in digital data collection are providing more solutions to improve health data quality at lower cost. This panel will describe a number of quality assessment and improvement initiatives aimed at ensuring that health data are fit for a range of secondary uses, including data linkage. It will also discuss how the need for linkage and integration of data sources can influence views of a data source’s fitness for use.
    CIHI content will include:
    • Methods for optimizing the value of clinical-administrative data
    • The CIHI Information Quality Framework
    • Reabstraction studies (e.g. physician documentation/coders’ experiences)
    • Linkage analytics for data quality
    University of Calgary content will include:
    • Defining and measuring health data quality
    • Automated methods for quality assessment and improvement
    • ICD-11 features and coding practices
    • Electronic health record initiative
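
    Among the methods listed above are automated logical checks on administrative data. The sketch below illustrates what such checks can look like in practice; the record fields and rules are illustrative assumptions, not CIHI’s actual edit checks.

        # Minimal sketch of automated logical checks on administrative health
        # records, in the spirit of the "automated logical checks" mentioned above.
        # Field names and rules are illustrative assumptions.
        from datetime import date

        records = [
            {"id": 1, "admit": date(2023, 3, 1), "discharge": date(2023, 3, 5),
             "age": 34, "sex": "F", "diagnosis": "O80"},   # O80: obstetric delivery code
            {"id": 2, "admit": date(2023, 4, 10), "discharge": date(2023, 4, 8),
             "age": 61, "sex": "M", "diagnosis": "I21"},
        ]

        def logical_checks(rec):
            issues = []
            if rec["discharge"] < rec["admit"]:
                issues.append("discharge date precedes admission date")
            if not 0 <= rec["age"] <= 120:
                issues.append("implausible age")
            if rec["diagnosis"].startswith("O") and rec["sex"] != "F":
                issues.append("obstetric code on non-female record")
            return issues

        for rec in records:
            for issue in logical_checks(rec):
                print(f'record {rec["id"]}: {issue}')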

    Linked Data Quality Assessment and its Application to Societal Progress Measurement

    In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented. With the emergence of the Web of Linked Data, there are several use cases which are possible due to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore the existing data and uncover meaningful outcomes that they might not have been aware of previously. In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent or inaccurate data affects the end results gravely, thus making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case. There are cases in which datasets containing quality problems are still useful for certain applications; it depends on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused by the LD publication process or can be intrinsic to the data source itself. A key challenge is to assess the quality of datasets published on the Web and make this quality information explicit. Assessing data quality is particularly a challenge in LD, as the underlying data stem from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes quality assessment crucial for measuring the accuracy of representing real-world data. On the document Web, data quality can only be indirectly or vaguely defined, but there is a requirement for more concrete and measurable data quality metrics for LD. Such data quality metrics include correctness of facts with respect to the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness, or consistency with regard to implicit information. Even though data quality is an important concept in LD, there are few methodologies proposed to assess the quality of these datasets. Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology includes the employment of LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process, in which the first phase involves the detection of common quality problems by the automatic creation of an extended schema for DBpedia. The second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e. workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology. Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only with the results of the assessment but also with the specific entities that cause the errors, which helps users understand the quality issues and fix them. Finally, we take into account a domain-specific use case that consumes LD and relies on data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
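
    The thesis formalizes 69 concrete metrics across 18 dimensions. As a small hedged illustration of the general idea (not one of those metrics as formally defined there), the sketch below computes a simple completeness-style indicator over an RDF graph with the rdflib package: the share of subjects that carry an rdf:type statement.

        # Small illustration of a Linked Data quality indicator, computed with
        # rdflib (third-party package). This is a generic completeness-style check,
        # not one of the 69 metrics as formally defined in the thesis.
        from rdflib import Graph, RDF

        data = """
        @prefix ex: <http://example.org/> .
        @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

        ex:alice rdf:type ex:Person ;
                 ex:knows ex:bob .
        ex:bob   ex:name "Bob" .
        """

        g = Graph()
        g.parse(data=data, format="turtle")

        subjects = set(g.subjects())
        typed = {s for s in subjects if (s, RDF.type, None) in g}

        coverage = len(typed) / len(subjects) if subjects else 1.0
        print(f"subjects with an rdf:type: {coverage:.0%}")   # 50% for this toy graph

    The same pattern extends to other dimensions, for example checking for dereferenceable URIs or for conflicting values of functional properties.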

    User Satisfaction with Wearables

    This study investigates user satisfaction with wearable technologies. It proposes that integrating expectation confirmation theory with affordance theory sheds light on the sources of users’ (dis)confirmation when evaluating technology performance experiences and explains the origins of satisfaction ratings. A qualitative and quantitative analysis of online user reviews of a popular fitness wristband supports the research model. Since the band lacks buttons and numeric displays, users need to interact with the companion software to obtain the information they need. Findings indicate that satisfaction depends on the quality of this interaction, the value of digitalizing physical activity, and the extent to which the informational feedback meets users’ needs. Moreover, the results suggest that digitalizing physical activity has different effects for different users. While some appreciate data availability in general, regardless of accuracy, those who look for precision do not find such quantification useful. Thus, their evaluative judgments depend on the wearable system’s actual performance and the influence that the feedback has on the pursuit of their fitness goals. These results provide theoretical and practical contributions that advance our understanding of wearable technologies.

    Quality Measures in Uncertain Data Management

    Many applications deal with data that is uncertain. Examples include applications dealing with sensor information, data integration applications, and healthcare applications. Instead of these applications having to deal with the uncertainty themselves, it should be the responsibility of the DBMS to manage all data, including uncertain data. Several research projects address this topic. In this paper, we introduce four measures that can be used to assess and compare important characteristics of data and systems.
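
    The four measures themselves are not named in the abstract. As a hedged illustration of the kind of measure one might define over uncertain data, the sketch below quantifies attribute-level uncertainty as the entropy of a probabilistic value; this is an assumed example, not one of the paper’s measures.

        # Hedged illustration of an uncertainty measure over uncertain data: the
        # entropy of an attribute whose value is a probability distribution over
        # alternatives. An assumed example, not one of the paper's four measures.
        import math

        def entropy(dist):
            """Shannon entropy (bits) of a {value: probability} distribution."""
            return -sum(p * math.log2(p) for p in dist.values() if p > 0)

        # A probabilistic tuple: the sensor reading is one of several alternatives.
        uncertain_tuple = {
            "sensor_id": "s42",                                 # certain attribute
            "temperature": {20.1: 0.6, 20.4: 0.3, 25.0: 0.1},   # uncertain attribute
        }

        h = entropy(uncertain_tuple["temperature"])
        h_max = math.log2(len(uncertain_tuple["temperature"]))
        print(f"entropy: {h:.2f} bits (max {h_max:.2f}); "
              f"normalized uncertainty: {h / h_max:.2f}")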