Skills and Knowledge for Data-Intensive Environmental Research.
The scale and magnitude of complex and pressing environmental issues lend urgency to the need for integrative and reproducible analysis and synthesis, facilitated by data-intensive research approaches. However, the recent pace of technological change has been such that appropriate skills to accomplish data-intensive research are lacking among environmental scientists, who more than ever need greater access to training and mentorship in computational skills. Here, we provide a roadmap for raising data competencies of current and next-generation environmental researchers by describing the concepts and skills needed for effectively engaging with the heterogeneous, distributed, and rapidly growing volumes of available data. We articulate five key skills: (1) data management and processing, (2) analysis, (3) software skills for science, (4) visualization, and (5) communication methods for collaboration and dissemination. We provide an overview of the current suite of training initiatives available to environmental scientists and models for closing the skill-transfer gap.
Hack Weeks as a model for Data Science Education and Collaboration
Across almost all scientific disciplines, the instruments that record our experimental data and the methods required for storage and data analysis are rapidly increasing in complexity. This gives rise to the need for scientific communities to adapt on shorter time scales than traditional university curricula allow for, and therefore requires new modes of knowledge transfer. The universal applicability of data science tools to a broad range of problems has generated new opportunities to foster exchange of ideas and computational workflows across disciplines. In recent years, hack weeks have emerged as an effective tool for fostering these exchanges by providing training in modern data analysis workflows. While there are variations in hack week implementation, all events consist of a common core of three components: tutorials in state-of-the-art methodology, peer learning, and project work in a collaborative environment. In this paper, we present the concept of a hack week in the larger context of scientific meetings and point out similarities and differences to traditional conferences. We motivate the need for such an event and present in detail its strengths and challenges. We find that hack weeks are successful at cultivating collaboration and the exchange of knowledge. Participants self-report that these events help them both in their day-to-day research and in their careers. Based on our results, we conclude that hack weeks present an effective, easy-to-implement, fairly low-cost tool to positively impact data analysis literacy in academic disciplines, foster collaboration, and cultivate best practices.
Comment: 15 pages, 2 figures, submitted to PNAS, all relevant code available at https://github.com/uwescience/HackWeek-Writeu
Statistical competencies for medical research learners: What is fundamental?
Introduction: It is increasingly essential for medical researchers to be literate in statistics, but the requisite degree of literacy is not the same for every statistical competency in translational research. Statistical competency can range from 'fundamental' (necessary for all) to 'specialized' (necessary for only some). In this study, we determine the degree to which each competency is fundamental or specialized. Methods: We surveyed members of 4 professional organizations, targeting doctorally trained biostatisticians and epidemiologists who taught statistics to medical research learners in the past 5 years. Respondents rated 24 educational competencies on a 5-point Likert scale anchored by 'fundamental' and 'specialized.' Results: There were 112 responses. Nineteen of the 24 competencies were rated fundamental. The competencies considered most fundamental were assessing sources of bias and variation (95%), recognizing one's own limits with regard to statistics (93%), and identifying the strengths and limitations of study designs (93%). The least endorsed items were meta-analysis (34%) and stopping rules (18%). Conclusion: We have identified the statistical competencies needed by all medical researchers. These competencies should be considered when designing statistical curricula for medical researchers and should inform which topics are taught in graduate programs and evidence-based medicine courses where learners need to read and understand the medical research literature.
Summary of the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1)
Challenges related to development, deployment, and maintenance of reusable software for science are becoming a growing concern. Many scientists’ research increasingly depends on the quality and availability of the software upon which their work is built. To highlight some of these issues and share experiences, the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1) was held in November 2013 in conjunction with the SC13 Conference. The workshop featured keynote presentations and a large number (54) of solicited extended abstracts that were grouped into three themes and presented via panels. A set of collaborative notes of the presentations and discussion was taken during the workshop.
Unique perspectives were captured about issues such as comprehensive documentation, development and deployment practices, software licenses, and career paths for developers. Attribution systems that account for evidence of software contribution and impact were also discussed. These include mechanisms such as Digital Object Identifiers, publication of “software papers”, and the use of online systems, for example source code repositories like GitHub. This paper summarizes the issues and shared experiences that were discussed, including cross-cutting issues and use cases. It joins a nascent literature seeking to understand what drives software work in science, and how it is impacted by the reward systems of science. These incentives can determine the extent to which developers are motivated to build software for the long-term, for the use of others, and whether to work collaboratively or separately. It also explores community building, leadership, and dynamics in relation to successful scientific software.
Measuring the Use of the Active and Assisted Living Prototype CARIMO for Home Care Service Users: Evaluation Framework and Results
To address the challenges of aging societies, various information and communication technology (ICT)-based systems for older people have been developed in recent years. Currently, the evaluation of these so-called active and assisted living (AAL) systems usually focuses on analyses of usability and acceptance, while some also assess their impact. Little is known about the actual take-up of these assistive technologies. This paper presents a framework for measuring take-up by analyzing the actual usage of AAL systems. The evaluation framework covers the entire process in detail, including usage data logging, data preparation, and usage data analysis. We applied the framework to the AAL prototype CARIMO to measure its take-up during an eight-month field trial in Austria and Italy. The framework was designed to guide systematic, comparable, and reproducible usage data evaluation in the AAL field; however, its general applicability has yet to be validated.
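The pipeline the abstract describes — logging raw usage events, preparing the data, and analyzing usage to quantify take-up — can be illustrated with a minimal sketch. The event format, field names, and the "active days per user" metric below are assumptions for illustration only, not the actual CARIMO log schema or the framework's prescribed metrics:

```python
from collections import defaultdict
from datetime import datetime

def parse_events(lines):
    """Data preparation: turn raw 'timestamp,user,feature' log lines
    into typed (datetime, user, feature) records."""
    events = []
    for line in lines:
        ts, user, feature = line.strip().split(",")
        events.append((datetime.fromisoformat(ts), user, feature))
    return events

def active_days_per_user(events):
    """Usage analysis: one simple take-up indicator — the number of
    distinct days on which each user generated at least one event."""
    days = defaultdict(set)
    for ts, user, _ in events:
        days[user].add(ts.date())
    return {user: len(d) for user, d in days.items()}

# Hypothetical log excerpt (feature names are invented):
log = [
    "2019-03-01T09:15:00,u1,fitness_game",
    "2019-03-01T17:40:00,u1,messaging",
    "2019-03-02T08:05:00,u1,fitness_game",
    "2019-03-01T10:00:00,u2,messaging",
]
print(active_days_per_user(parse_events(log)))  # {'u1': 2, 'u2': 1}
```

Separating parsing from analysis, as here, is what makes such evaluations comparable and reproducible across trials: the same analysis functions can be rerun unchanged on logs from different field sites.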
The Critical Role of Statistics in Demonstrating the Reliability of Expert Evidence
Federal Rule of Evidence 702, which covers testimony by expert witnesses, allows a witness to testify “in the form of an opinion or otherwise” if “the testimony is based on sufficient facts or data” and “is the product of reliable principles and methods” that have been “reliably applied.” The determination of “sufficient” (facts or data) and whether the “reliable principles and methods” relate to the scientific question at hand involve more discrimination than the current Rule 702 may suggest. Using examples from latent fingerprint matching and trace evidence (bullet lead and glass), I offer some criteria that scientists often consider in assessing the “trustworthiness” of evidence to enable courts to better distinguish between “trustworthy” and “questionable” evidence. The codification of such criteria may ultimately strengthen the current Rule 702 so courts can better distinguish between demonstrably scientific sufficiency and “opinion” based on inadequate (or inappurtenant) methods.