
    Using theory to inform capacity-building: Bootstrapping communities of practice in computer science education research

    In this paper, we describe our efforts in the deliberate creation of a community of practice of researchers in computer science education (CSEd). We understand community of practice in the sense in which Wenger describes it, whereby the community is characterized by mutual engagement in a joint enterprise that gives rise to a shared repertoire of knowledge, artefacts, and practices. We first identify CSEd as a research field in which no shared paradigm exists, and then we describe the Bootstrapping project, its metaphor, structure, rationale, and delivery, as designed to create a community of practice of CSEd researchers. We also outline features of other projects with similar aims of capacity building in discipline-specific pedagogic enquiry. A theoretically derived framework for evaluating the success of endeavours of this type is then presented, and we report the results from an empirical study. We conclude with four open questions for our project and others like it: Where is the locus of a community of practice? Who are the core members? Do capacity-building models transfer to other disciplines? Can our theoretically motivated measures of success apply to other projects of the same nature?

    COOPER-framework: A Unified Standard Process for Non-parametric Projects

    Practitioners assess the performance of entities in increasingly large and complicated datasets. If non-parametric models, such as Data Envelopment Analysis (DEA), were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the 'COOPER-framework', a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each phase describes the necessary steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure, and advice for a sound non-parametric analysis; the more experienced analyst benefits from a checklist that ensures important issues are not overlooked. In addition, through the use of a standardized framework, non-parametric assessments become more reliable, more repeatable, more manageable, faster, and less costly. Keywords: DEA, non-parametric efficiency, unified standard process, COOPER-framework.
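
    Since the abstract names Data Envelopment Analysis as the canonical non-parametric model, the following is a minimal sketch of the kind of computation the framework's 'Operational models' phase would organize: an input-oriented CCR efficiency score solved as a linear program with scipy. The toy data and variable names are assumptions for illustration, not part of the COOPER-framework itself.

        # Minimal input-oriented CCR DEA sketch (illustrative toy data only).
        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, o):
            """Efficiency of DMU o, given inputs X (m x n) and outputs Y (s x n)."""
            m, n = X.shape
            s = Y.shape[0]
            # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
            c = np.r_[1.0, np.zeros(n)]
            # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
            A_in = np.hstack([-X[:, [o]], X])
            # Outputs: -sum_j lambda_j * y_rj <= -y_ro
            A_out = np.hstack([np.zeros((s, 1)), -Y])
            res = linprog(c,
                          A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[:, o]],
                          bounds=[(0, None)] * (n + 1))
            return res.fun  # theta in (0, 1]; 1 means efficient

        X = np.array([[2.0, 3.0, 6.0], [3.0, 1.0, 4.0]])  # 2 inputs, 3 DMUs
        Y = np.array([[3.0, 3.0, 5.0]])                   # 1 output
        print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])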

    Confidence in assessing the effectiveness of bath treatments for the control of sea lice on Norwegian salmon farms

    The salmon louse Lepeophtheirus salmonis is the most important ectoparasite of farmed salmonids in the Northern hemisphere, having a major economic and ecological impact on the sustainability of this sector of the aquaculture industry. To a large extent, control of L. salmonis relies on the use of topical delousing chemical treatments in the form of baths. Improvements in methods for the administration and assessment of bath treatments have not kept pace with the rapid modernization and intensification of the salmon industry. Bath treatments present technical and biological challenges, including best-practice methods for estimating the effect of lice treatment interventions. In this communication, we compare and contrast methods to calculate and interpret treatment effectiveness at pen and site level. The methods are illustrated for the calculation of the percentage reduction in mean abundance of mobile lice with a measure of confidence. Six different methods for the calculation of confidence intervals across different probability levels were compared. We found the quasi-Poisson method with a 90% confidence interval to be informative and robust for the measurement of bath treatment performance.
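
    The quasi-Poisson interval the authors favour can be approximated with off-the-shelf GLM tooling. Below is a minimal sketch in Python, assuming hypothetical pre- and post-treatment mobile-lice counts; it fits a quasi-Poisson GLM with a treatment indicator and converts the coefficient's 90% confidence interval into a percentage reduction. This is a generic reconstruction of the approach, not the paper's exact procedure.

        # Sketch: percentage reduction in mean mobile-lice abundance with a
        # quasi-Poisson 90% CI (hypothetical counts, not the paper's data).
        import numpy as np
        import statsmodels.api as sm

        pre  = np.array([12, 8, 15, 10, 9, 14])  # mobile lice per fish, before bath
        post = np.array([3, 1, 4, 2, 2, 5])      # mobile lice per fish, after bath

        y = np.concatenate([pre, post])
        treated = np.r_[np.zeros(len(pre)), np.ones(len(post))]
        X = sm.add_constant(treated)

        # scale='X2' estimates dispersion from the Pearson chi-square statistic,
        # i.e. a quasi-Poisson fit rather than a plain Poisson one.
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale='X2')

        beta = fit.params[1]
        lo, hi = fit.conf_int(alpha=0.10)[1]     # 90% CI for the log rate ratio
        print(f"reduction: {100 * (1 - np.exp(beta)):.1f}% "
              f"(90% CI: {100 * (1 - np.exp(hi)):.1f}% to {100 * (1 - np.exp(lo)):.1f}%)")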

    Sample- and segment-size specific Model Selection in Mixture Regression Analysis

    As mixture regression models increasingly receive attention from both theory and practice, the question of selecting the correct number of segments gains urgency. A misspecification can lead to under- or oversegmentation, thus resulting in flawed management decisions on customer targeting or product positioning. This paper presents the results of an extensive simulation study that examines the performance of commonly used information criteria in a mixture regression context with normal data. Unlike previous studies, the performance is evaluated across a broad range of sample-size/segment-size combinations, these being the most critical factors for the effectiveness of the criteria from both a theoretical and a practical point of view. In order to assess the absolute performance of each criterion with respect to chance, the performance is reviewed against so-called chance criteria, derived from discriminant analysis. The results yield recommendations on criterion selection when a certain sample size is given, and help to judge what sample size is needed in order to guarantee an accurate decision based on a certain criterion. Keywords: Mixture Regression; Model Selection; Information Criteria.
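
    The criteria the study compares are penalised-likelihood statistics, so the selection logic is easy to state in code. A minimal sketch follows: the formulas are standard, but the log-likelihood values, parameter count, and sample size are made-up placeholders standing in for an actual EM fit of a mixture regression.

        # Sketch: choosing the number of segments with AIC / BIC / CAIC.
        import numpy as np

        def info_criteria(logL, k, n):
            """Standard penalised-likelihood criteria for a fitted model."""
            return {
                "AIC":  -2 * logL + 2 * k,
                "BIC":  -2 * logL + k * np.log(n),
                "CAIC": -2 * logL + k * (np.log(n) + 1),
            }

        # Hypothetical maximised log-likelihoods for 1..4 segments, n = 200.
        n, params_per_segment = 200, 4        # coefficients + variance per segment
        fits = {1: -512.3, 2: -471.8, 3: -468.9, 4: -466.5}  # made-up values

        for s, logL in fits.items():
            k = s * params_per_segment + (s - 1)             # + mixing proportions
            print(s, {c: round(v, 1) for c, v in info_criteria(logL, k, n).items()})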

    Partial least squares (PLS) in Operations Management research: Insights from a systematic literature review

    Purpose: This paper analyzes the Operations Management (OM) research published between 2014 and 2018 that has made use of Partial Least Squares (PLS), to determine whether the trends shown in previous literature reviews on this topic are maintained and whether the analyzed papers comply with the recommendations for reporting. Design/methodology/approach: A systematic literature review has been carried out of OM articles that use PLS as an analysis tool. A total of 102 references from 45 journals from 2014 to 2018, indexed in WOS and Scopus, have been analyzed. A bibliometric analysis and a review of the PLS reporting standards applied in the context of OM have been developed. Findings: PLS is gaining importance and is widely adopted in OM as a statistical analysis method of choice. In general, certain aspects of PLS are correctly applied and adequately reported in the publications. However, some shortcomings continue to be observed in terms of its application and the reporting of results. A detailed comparison has been developed between the current research and previous OM research (as well as previous research in other disciplines) on this topic. Research limitations/implications: OM researchers making use of PLS should be aware of the importance of correctly reasoning and justifying their choice and of fully reporting the main parameters, in order to provide other researchers with useful information and enable them to reproduce the performed analysis. Originality/value: This article builds a study with results based on a greater number of articles and journals than the two previous literature reviews focused on this topic. It therefore provides a richer and more up-to-date evaluation of trends in the use and reporting of PLS. Additionally, the present paper assesses whether the studies follow the indications suggested in recent years, triggered by significant changes in the standards for reporting results obtained through the use of PLS. This study has been conducted within the frameworks of the following funded competitive projects: PID2019-105001GB-I00 (Ministerio de Ciencia e Innovacion, Spain); 1381039 (Programa Operativo Feder Andalucia 2014/2020, Junta de Andalucia); and PY20_01209 (PAIDI 2020, Junta de Andalucia). Citation: Bayonne, E.; Marin-Garcia, J.A.; Alfalla-Luque, R. (2020). Partial least squares (PLS) in Operations Management research: Insights from a systematic literature review. Journal of Industrial Engineering and Management, 13(3), 565-597. https://doi.org/10.3926/jiem.3416
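
    Most of the OM applications the review covers use dedicated PLS-SEM software, but the underlying idea can be illustrated with scikit-learn's PLS regression. The following is a minimal sketch on made-up data; it is a stand-in for, not a replica of, the PLS-SEM workflows the review assesses, and explicitly reporting choices such as the number of components is part of what the authors recommend.

        # Sketch: Partial Least Squares regression with scikit-learn.
        # (PLS-SEM tools such as SmartPLS differ; this only illustrates the
        # latent-component idea on made-up data.)
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 120
        X = rng.normal(size=(n, 6))           # e.g. six survey items (assumed)
        y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=n)

        pls = PLSRegression(n_components=2)   # report this choice explicitly
        pls.fit(X, y)
        print("in-sample R^2:", round(pls.score(X, y), 3))
        print("5-fold CV R^2:", round(cross_val_score(pls, X, y, cv=5).mean(), 3))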

    Relative Abundance of Transcripts (RATs): Identifying differential isoform abundance from RNA-seq [version 1; referees: 1 approved, 2 approved with reservations]

    The biological importance of changes in RNA expression is reflected by the wide variety of tools available to characterise these changes from RNA-seq data. Several tools exist for detecting differential transcript isoform usage (DTU) from aligned or assembled RNA-seq data, but few exist for DTU detection from alignment-free RNA-seq quantifications. We present RATs, an R package that identifies DTU transcriptome-wide directly from transcript abundance estimates. RATs is unique in applying bootstrapping to estimate the reliability of detected DTU events, and it shows good performance at all replication levels (median false positive fraction < 0.05). We compare RATs to two existing DTU tools, DRIM-Seq and SUPPA2, using two publicly available simulated RNA-seq datasets and a published human RNA-seq dataset in which 248 genes have previously been identified as displaying significant DTU. RATs with default threshold values on the simulated human data has a sensitivity of 0.55, a Matthews correlation coefficient of 0.71, and a false discovery rate (FDR) of 0.04, outperforming both other tools. Applying the same thresholds to SUPPA2 results in a higher sensitivity (0.61) but poorer FDR performance (0.33). RATs and DRIM-Seq use different methods for measuring DTU effect sizes, complicating the comparison of results between these tools; however, for a likelihood-ratio threshold of 30, DRIM-Seq has similar FDR performance to RATs (0.06) but worse sensitivity (0.47). These differences persist for the simulated Drosophila dataset. On the published human RNA-seq dataset, the greatest agreement between the tools tested is 53%, observed between RATs and SUPPA2. The bootstrapping quality filter in RATs is responsible for removing the majority of DTU events called by SUPPA2 that are not reported by RATs. All methods, including the previously published qRT-PCR of three of the 248 detected DTU events, were found to be sensitive to annotation differences between Ensembl v60 and v87.
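
    The benchmark figures quoted above (sensitivity, Matthews correlation coefficient, FDR) are all derived from a confusion matrix of called versus truly DTU genes. A minimal sketch of those definitions in Python, with hypothetical gene counts chosen only to roughly echo the quoted values:

        # Sketch: benchmark metrics from a DTU confusion matrix
        # (hypothetical counts, chosen only to roughly echo the figures above).
        from math import sqrt

        tp, fp, fn, tn = 137, 6, 111, 9746    # true/false positives/negatives

        sensitivity = tp / (tp + fn)          # fraction of true DTU genes recovered
        fdr = fp / (tp + fp)                  # false discovery rate
        mcc = (tp * tn - fp * fn) / sqrt(     # Matthews correlation coefficient
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

        print(f"sensitivity={sensitivity:.2f}  FDR={fdr:.2f}  MCC={mcc:.2f}")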