
    What Developers Want and Need from Program Analysis: An Empirical Study

    Program Analysis has been a rich and fruitful field of research for many decades, and countless high-quality program analysis tools have been produced by academia. Though there are some well-known examples of tools that have found their way into routine use by practitioners, a common challenge faced by researchers is knowing how to achieve broad and lasting adoption of their tools. In an effort to understand what makes a program analyzer most attractive to developers, we mounted a multi-method investigation at Microsoft. Through interviews and surveys of developers, as well as analysis of defect data, we provide insight and answers to four high-level research questions that can help researchers design program analyzers that meet the needs of software developers. First, we explore what barriers hinder the adoption of program analyzers, such as poorly expressed warning messages. Second, we shed light on what functionality developers want from analyzers, including the types of code issues that developers care about. Next, we answer what non-functional characteristics an analyzer should have to be widely used, how the analyzer should fit into the development process, and how its results should be reported. Finally, we investigate defects in one of Microsoft's flagship software services to understand what types of code issues are most important to minimize, potentially through program analysis.

    The differential effects of concurrent planning practice elements on reunification and adoption

    Objective: The child welfare practice of concurrent planning attempts to shorten children's stays in foster care. There is very little quantitative research on concurrent planning's effects. This study examines the influence of concurrent planning practice elements (reunification prognosis, concurrent plan, full disclosure, and discussion of voluntary relinquishment) on reunification and adoption. Method: Using a sample of 885 children, an observational design, and statistical controls, children who received concurrent planning elements were compared to those who did not. Results: Findings show discussion of voluntary relinquishment to be positively associated with adoption and full disclosure to be negatively associated with reunification. Conclusions: Concurrent planning's benefits may require more intensive services to be fully realized. Care should be taken to ensure activities achieve their intended effects.
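    The adjusted comparison described in the Method section can be sketched as a covariate-adjusted logistic regression. The snippet below is a hypothetical illustration only: the variable names (relinquish_discussed, child_age, prior_removals), the choice of controls, and the simulated data are assumptions, not the study's actual specification.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 885                                       # sample size reported in the abstract
        relinquish_discussed = rng.integers(0, 2, n)  # practice element received (0/1), hypothetical
        child_age = rng.uniform(0, 10, n)
        prior_removals = rng.poisson(0.5, n)
        p = 1 / (1 + np.exp(-(-1.0 + 0.5 * relinquish_discussed - 0.05 * child_age)))
        adopted = rng.binomial(1, p)
        df = pd.DataFrame(dict(adopted=adopted, relinquish_discussed=relinquish_discussed,
                               child_age=child_age, prior_removals=prior_removals))

        # Covariate-adjusted comparison of children who received the element vs. those who did not
        model = smf.logit("adopted ~ relinquish_discussed + child_age + prior_removals",
                          data=df).fit(disp=False)
        print(np.exp(model.params["relinquish_discussed"]))  # adjusted odds ratio for the element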

    Reunifying from behind bars: A quantitative study of the relationship between parental incarceration, service use, and foster care reunification

    Incarcerated parents attempting to reunify with their children in foster care can find it difficult to complete the activities on their court-ordered case plans, such as drug treatment services and visitation with children. Although much has been written regarding the obstacles that are likely to interfere with reunification for incarcerated parents, very little quantitative research has examined the topic. This study uses secondary data to examine the incarceration experiences and reunification outcomes of a sample of 225 parents in one large urban California county. In multivariate analysis controlling for problems and demographics, incarcerated parents were less likely to reunify with their children; however, service use appeared to mediate this relationship, as the negative association between incarceration and reunification did not persist when service use was included as a variable in the model. Suggestions are made for policy and practice changes to improve reunification outcomes for this population of parents.
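    The mediation pattern reported above (the incarceration effect disappearing once service use enters the model) is typically checked by comparing nested regression models. The sketch below is a hedged illustration with simulated data and hypothetical variable names (reunified, incarcerated, service_use, age); it is not the study's actual model specification.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 225                                       # sample size reported in the abstract
        incarcerated = rng.integers(0, 2, n)
        # Incarceration is assumed here to lower service use, which in turn drives reunification
        service_use = np.clip(rng.normal(3.0 - 1.5 * incarcerated, 1.0, n), 0, None)
        age = rng.normal(30, 6, n)
        p = 1 / (1 + np.exp(-(-1.5 + 0.6 * service_use)))
        reunified = rng.binomial(1, p)
        df = pd.DataFrame(dict(reunified=reunified, incarcerated=incarcerated,
                               service_use=service_use, age=age))

        # Model 1: incarceration plus a demographic control only
        m1 = smf.logit("reunified ~ incarcerated + age", data=df).fit(disp=False)
        # Model 2: add the hypothesised mediator (service use)
        m2 = smf.logit("reunified ~ incarcerated + service_use + age", data=df).fit(disp=False)
        # A coefficient that is negative in m1 but attenuates toward zero in m2 is
        # consistent with mediation by service use.
        print(m1.params["incarcerated"], m2.params["incarcerated"])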

    Quantification of Myocardial Blood Flow in Absolute Terms Using (82)Rb PET Imaging: The RUBY-10 Study.

    OBJECTIVES: The purpose of this study was to compare myocardial blood flow (MBF) and myocardial flow reserve (MFR) estimates from rubidium-82 positron emission tomography ((82)Rb PET) data using 10 software packages (SPs) based on 8 tracer kinetic models. BACKGROUND: It is unknown how well MBF and MFR values from existing SPs agree for (82)Rb PET. METHODS: Rest and stress (82)Rb PET scans of 48 patients with suspected or known coronary artery disease were analyzed in 10 centers. Each center used 1 of 10 SPs to analyze global and regional MBF using the different kinetic models implemented. Values were considered to agree if they simultaneously had an intraclass correlation coefficient >0.75 and a difference <20% of the median across all programs. RESULTS: The most common model evaluated was the Ottawa Heart Institute 1-tissue compartment model (OHI-1-TCM). MBF values from 7 of the 8 SPs implementing this model agreed best. Values from 2 other models (an alternative 1-TCM and an axially distributed model) also agreed well, with occasional differences. The MBF results from other models (e.g., 2-TCM and retention) were less in agreement with values from OHI-1-TCM. CONCLUSIONS: SPs using the most common kinetic model (OHI-1-TCM) provided consistent results in measuring global and regional MBF values, suggesting that they may be used interchangeably to process data acquired with a common imaging protocol.
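    The agreement criterion used in the Methods (intraclass correlation coefficient >0.75 together with a difference below 20% of the median across all programs) can be made concrete with a short sketch. This is a minimal illustration assuming paired global MBF values from two software packages; the one-way random-effects ICC formulation and the use of the mean absolute difference are illustrative choices, not necessarily the study's exact implementation.

        import numpy as np

        def icc_oneway(a, b):
            """One-way random-effects ICC(1,1) for two raters over n subjects."""
            data = np.column_stack([a, b])               # shape (n_subjects, 2)
            n, k = data.shape
            grand_mean = data.mean()
            subject_means = data.mean(axis=1)
            ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
            ms_within = np.sum((data - subject_means[:, None]) ** 2) / (n * (k - 1))
            return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

        def packages_agree(mbf_sp1, mbf_sp2, all_values, icc_cut=0.75, rel_cut=0.20):
            """Joint criterion: ICC > 0.75 AND difference < 20% of the median across programs."""
            icc = icc_oneway(mbf_sp1, mbf_sp2)
            rel_diff = np.mean(np.abs(mbf_sp1 - mbf_sp2)) / np.median(all_values)
            return icc > icc_cut and rel_diff < rel_cut

        rng = np.random.default_rng(2)
        true_mbf = rng.uniform(0.5, 3.0, 48)             # 48 patients, as in the study
        sp1 = true_mbf + rng.normal(0, 0.1, 48)          # simulated estimates from two packages
        sp2 = true_mbf + rng.normal(0, 0.1, 48)
        print(packages_agree(sp1, sp2, np.concatenate([sp1, sp2])))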

    A Replication of Failure, Not a Failure to Replicate

    Purpose: The increasing role of systematic reviews in knowledge production demands greater rigor in the literature search process. The performance of the Social Work Abstracts (SWA) database has been examined multiple times over the past three decades, and the current study is a replication within this line of research. Method: Issue-level coverage was examined for the same 33 SWA core journals and the same time period as our 2009 study. Results: The mean percentage of issues missing in the current study was 20%, significantly greater than the mean percentage missing in the 2009 study. Discussion: The research of other groups, as well as our own, has failed to prompt NASW Press to act. SWA was failing, it is failing, and NASW Press has failed to correct those failures.
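    Because the same 33 journals are examined at two time points, the comparison of missing-issue percentages is naturally a paired one. The snippet below is a hypothetical illustration using a paired t-test from SciPy on made-up per-journal percentages; it is not the study's data, nor necessarily its exact test.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        # Hypothetical per-journal percentage of issues missing from SWA (33 core journals)
        missing_2009 = np.clip(rng.normal(12, 6, 33), 0, 100)
        missing_now = np.clip(missing_2009 + rng.normal(8, 5, 33), 0, 100)

        print("mean % missing, 2009 study:", missing_2009.mean())
        print("mean % missing, current study:", missing_now.mean())
        res = stats.ttest_rel(missing_now, missing_2009)   # paired across the same journals
        print("t =", round(float(res.statistic), 2), "p =", round(float(res.pvalue), 4))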

    Effective self-regulated science learning through multimedia-enriched skeleton concept maps

    This study combines work on concept mapping with scripted collaborative learning. Purpose: The objective was to examine the effects of self-regulated science learning through scripting students’ argumentative interactions during collaborative ‘multimedia-enriched skeleton concept mapping’ on meaningful science learning and retention. Programme description: Each concept in the enriched skeleton concept map (ESCoM) contained annotated multimedia-rich content (pictures, text, animations or video clips) that elaborated the concept, and an embedded collaboration script to guide students’ interactions. Sample: The study was performed in a Biomolecules course in the Bachelor of Applied Science programme in the Netherlands. All first-year students (N=93; 31 women, 62 men; aged 17–33 years) took part in this study. Design and methods: The design used a control group that received the regular course and an experimental group working together in dyads on an ESCoM under the guidance of collaboration scripts. To investigate meaningful understanding and retention, a retention test was administered a month after the final exam. Results: Analysis of covariance demonstrated a significant difference in Biomolecules exam scores between the experimental group and the control group, and the difference between the groups on the retention test also reached statistical significance. Conclusions: Scripted collaborative multimedia ESCoM mapping resulted in meaningful understanding and retention of the conceptual structure of the domain, the concepts, and their relations. Not only was scripted collaborative multimedia ESCoM mapping more effective than the traditional teaching approach, it was also more efficient, requiring far less teacher guidance.
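    The analysis of covariance reported in the Results can be sketched as follows. This is a hedged illustration with simulated scores; the pretest covariate, the variable names, and the effect sizes are assumptions, since the abstract does not specify which covariate was used.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 93                                           # all first-year students in the sample
        group = rng.integers(0, 2, n)                    # 0 = regular course, 1 = scripted ESCoM dyads
        pretest = rng.normal(6.0, 1.0, n)                # assumed covariate
        exam_score = 2.0 + 0.5 * pretest + 0.8 * group + rng.normal(0, 1.0, n)
        df = pd.DataFrame(dict(exam_score=exam_score, group=group, pretest=pretest))

        # ANCOVA: exam score by group, adjusting for the covariate
        model = smf.ols("exam_score ~ C(group) + pretest", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))           # F-test for the group effect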

    Citizen crowds and experts: observer variability in image-based plant phenotyping

    Background: Image-based plant phenotyping has become a powerful tool in unravelling genotype–environment interactions. The utilization of image analysis and machine learning has become paramount in extracting data from phenotyping experiments. Yet we rely on observer (a human expert) input to perform the phenotyping process. We assume such input to be a ‘gold standard’ and use it to evaluate software and algorithms and to train learning-based algorithms. However, we should consider whether any variability exists among experienced and non-experienced (including plain citizen) observers. Here we design a study that measures such variability in an annotation task of an integer-quantifiable phenotype: the leaf count. Results: We compare several experienced and non-experienced observers annotating leaf counts in images of Arabidopsis thaliana to measure intra- and inter-observer variability, both in a controlled study using specially designed annotation tools and with citizens using a distributed citizen-powered web-based platform. In the controlled study, observers counted leaves by looking at top-view images taken with low- and high-resolution optics. We assessed whether tools specifically designed for this task can help reduce such variability. We found that the presence of tools helps to reduce intra-observer variability, and that although intra- and inter-observer variability is present, it does not have any effect on longitudinal leaf count trend statistical assessments. We examined the variability of citizen-provided annotations (from the web-based platform) and found that plain citizens can provide statistically accurate leaf counts. We also compared a recent machine-learning-based leaf counting algorithm and found that, while close in performance, it is still not within inter-observer variability. Conclusions: While the expertise of the observer plays a role, if sufficient statistical power is present, a collection of non-experienced users and even citizens can be included in image-based phenotyping annotation tasks as long as the tasks are suitably designed. We hope these findings help re-evaluate the expectations we have of automated algorithms: as long as they perform within observer variability, they can be considered a suitable alternative. In addition, we hope to invigorate interest in introducing suitably designed tasks on citizen-powered platforms, not only to obtain useful information for research but also to help engage the public in this societally important problem.
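    A simple way to express the intra- and inter-observer variability discussed above is the mean absolute difference between repeated leaf-count annotations. The sketch below is an illustrative reconstruction, not the authors' pipeline: the array layout (images × observers × repeats), the simulated counts, and the choice of mean absolute difference as the variability measure are assumptions.

        import numpy as np

        rng = np.random.default_rng(5)
        # counts[i, o, r]: leaf count for image i by observer o on repeat r (hypothetical layout)
        true_counts = rng.integers(4, 15, 40)                          # 40 images
        counts = true_counts[:, None, None] + rng.integers(-1, 2, (40, 5, 2))

        def intra_observer_mad(counts):
            """Mean absolute difference between each observer's own repeated annotations."""
            return np.abs(counts[:, :, 0] - counts[:, :, 1]).mean(axis=0)

        def inter_observer_mad(counts):
            """Mean absolute difference between every pair of observers (first repeat only)."""
            first = counts[:, :, 0]
            pairs = [(a, b) for a in range(first.shape[1]) for b in range(a + 1, first.shape[1])]
            return np.mean([np.abs(first[:, a] - first[:, b]).mean() for a, b in pairs])

        print("intra-observer MAD per observer:", intra_observer_mad(counts))
        print("mean inter-observer MAD:", inter_observer_mad(counts))
        # An automated counter could be judged acceptable if its MAD against human observers
        # falls within this inter-observer range, mirroring the argument in the abstract.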