
    Hack Weeks as a model for Data Science Education and Collaboration

    Across almost all scientific disciplines, the instruments that record our experimental data and the methods required for storage and data analysis are rapidly increasing in complexity. This gives rise to the need for scientific communities to adapt on shorter time scales than traditional university curricula allow for, and therefore requires new modes of knowledge transfer. The universal applicability of data science tools to a broad range of problems has generated new opportunities to foster exchange of ideas and computational workflows across disciplines. In recent years, hack weeks have emerged as an effective tool for fostering these exchanges by providing training in modern data analysis workflows. While there are variations in hack week implementation, all events share a common core of three components: tutorials in state-of-the-art methodology, peer learning, and project work in a collaborative environment. In this paper, we present the concept of a hack week in the larger context of scientific meetings and point out similarities to and differences from traditional conferences. We motivate the need for such an event and present in detail its strengths and challenges. We find that hack weeks are successful at cultivating collaboration and the exchange of knowledge. Participants self-report that these events help them both in their day-to-day research and in their careers. Based on our results, we conclude that hack weeks present an effective, easy-to-implement, fairly low-cost tool to positively impact data analysis literacy in academic disciplines, foster collaboration, and cultivate best practices.
    Comment: 15 pages, 2 figures, submitted to PNAS, all relevant code available at https://github.com/uwescience/HackWeek-Writeu

    Summary of the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1)

    Challenges related to the development, deployment, and maintenance of reusable software for science are a growing concern. Many scientists’ research increasingly depends on the quality and availability of the software upon which their work is built. To highlight some of these issues and share experiences, the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1) was held in November 2013 in conjunction with the SC13 Conference. The workshop featured keynote presentations and a large number (54) of solicited extended abstracts that were grouped into three themes and presented via panels. A set of collaborative notes on the presentations and discussion was taken during the workshop. Unique perspectives were captured about issues such as comprehensive documentation, development and deployment practices, software licenses, and career paths for developers. Attribution systems that account for evidence of software contribution and impact were also discussed. These include mechanisms such as Digital Object Identifiers, publication of “software papers”, and the use of online systems, for example source code repositories like GitHub. This paper summarizes the issues and shared experiences that were discussed, including cross-cutting issues and use cases. It joins a nascent literature seeking to understand what drives software work in science and how it is impacted by the reward systems of science. These incentives can determine the extent to which developers are motivated to build software for the long term, for the use of others, and whether to work collaboratively or separately. It also explores community building, leadership, and dynamics in relation to successful scientific software.

    Measuring the Use of the Active and Assisted Living Prototype CARIMO for Home Care Service Users: Evaluation Framework and Results

    To address the challenges of aging societies, various information and communication technology (ICT)-based systems for older people have been developed in recent years. Currently, the evaluation of these so-called active and assisted living (AAL) systems usually focuses on analyses of usability and acceptance, while some also assess their impact. Little is known about the actual take-up of these assistive technologies. This paper presents a framework for measuring take-up by analyzing the actual usage of AAL systems. This evaluation framework covers detailed information regarding the entire process, including usage data logging, data preparation, and usage data analysis. We applied the framework to the AAL prototype CARIMO to measure its take-up during an eight-month field trial in Austria and Italy. The framework was designed to guide systematic, comparable, and reproducible usage data evaluation in the AAL field; however, the general applicability of the framework has yet to be validated.

    The Critical Role of Statistics in Demonstrating the Reliability of Expert Evidence

    Federal Rule of Evidence 702, which covers testimony by expert witnesses, allows a witness to testify “in the form of an opinion or otherwise” if “the testimony is based on sufficient facts or data” and “is the product of reliable principles and methods” that have been “reliably applied.” The determination of what counts as “sufficient” (facts or data), and whether the “reliable principles and methods” relate to the scientific question at hand, involves more discrimination than the current Rule 702 may suggest. Using examples from latent fingerprint matching and trace evidence (bullet lead and glass), I offer some criteria that scientists often consider in assessing the “trustworthiness” of evidence, to enable courts to better distinguish between “trustworthy” and “questionable” evidence. The codification of such criteria may ultimately strengthen the current Rule 702 so that courts can better distinguish between demonstrably scientific sufficiency and “opinion” based on inadequate (or inappurtenant) methods.