
    Current State of Microplastic Pollution Research Data: Trends in Availability and Sources of Open Data

    The rapid growth in microplastic pollution research is influencing funding priorities, environmental policy, and public perceptions of risks to water quality and environmental and human health. Ensuring that environmental microplastics research data are findable, accessible, interoperable, and reusable (FAIR) is essential to inform policy and mitigation strategies. We present a bibliographic analysis of data sharing practices in the environmental microplastics research community, highlighting the state of openness of microplastics data. A stratified (by year) random subset of 785 of the 6,608 microplastics articles indexed in Web of Science indicates that, since 2006, less than a third (28.5%) contained a data sharing statement. These statements further show that the data were most often provided in the articles’ supplementary material (38.8%) and only 13.8% via a data repository. Of the 279 microplastics datasets found in online data repositories, 20.4% presented only metadata, with access to the data requiring additional approval. Although increasing, the rate of microplastic data sharing still lags behind the rate of publication of peer-reviewed articles on environmental microplastics. About a quarter of the repository data originated from North America (12.8%) and Europe (13.4%). Marine and estuarine environments are the most frequently sampled systems (26.2%); sediments (18.8%) and water (15.3%) are the predominant media. Of the accessible datasets, 15.4% and 18.2% lack adequate metadata to determine the sampling location and media type, respectively. We discuss five recommendations to strengthen data sharing practices in the environmental microplastics research community.
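
    The stratified-by-year sampling mentioned above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' actual procedure: the record structure (dicts with a "year" key), proportional allocation per year, and the random seed are all assumptions made for the example.

```python
import random
from collections import defaultdict

def stratified_sample_by_year(articles, total_sample_size, seed=0):
    """Draw a random subset of articles, stratified by publication year.

    Assumes `articles` is a list of dicts with a "year" key (hypothetical
    structure). Each year's stratum contributes roughly in proportion to
    its share of the full corpus.
    """
    rng = random.Random(seed)
    by_year = defaultdict(list)
    for article in articles:
        by_year[article["year"]].append(article)

    total = len(articles)
    sample = []
    for year, group in sorted(by_year.items()):
        # Proportional allocation, rounded; keep at least one article per year.
        k = max(1, round(total_sample_size * len(group) / total))
        sample.extend(rng.sample(group, min(k, len(group))))
    return sample

# Example with synthetic records (illustrative only; not the study's data).
articles = [{"id": i, "year": 2006 + (i % 18)} for i in range(6608)]
subset = stratified_sample_by_year(articles, total_sample_size=785)
print(len(subset))  # close to 785, up to per-year rounding
```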

    Versatile thinking and the learning of statistical concepts

    Statistics was for a long time a domain where calculation dominated to the detriment of statistical thinking. In recent years, the latter concept has come much more to the fore, and is now being both researched and promoted in school and tertiary courses. In this study, we consider the application of the concept of flexible, or versatile, thinking to statistical inference, as a key attribute of statistical thinking. Whilst this versatility comprises process/object, visuo/analytic and representational versatility, we concentrate here on the last aspect, which includes the ability to work within a representation system (or semiotic register) and to transform seamlessly between systems for a given concept, as well as to engage in procedural and conceptual interactions with specific representations. To exemplify the theoretical ideas, we consider two examples based on the concepts of relative comparison and sampling variability, as cases where representational versatility may be crucial to understanding. We outline the qualitative thinking involved in representations of relative density and of sample and population distributions, including mathematical models and their precursor, diagrammatic forms.
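
    As an illustration of the sampling variability concept named above, the short simulation below draws repeated samples from one population and shows how the spread of the sample means shrinks as the sample size grows. It is a minimal sketch, not taken from the article; the population parameters, sample sizes, and number of replications are arbitrary assumptions chosen for the example.

```python
import random
import statistics

# Illustrative sketch of sampling variability (assumed parameters, not from
# the article): repeated samples from the same population give different
# sample means, and the spread of those means narrows as n increases.
random.seed(1)
population = [random.gauss(mu=100, sigma=15) for _ in range(100_000)]

for n in (5, 30, 200):
    sample_means = [
        statistics.mean(random.sample(population, n)) for _ in range(1_000)
    ]
    print(
        f"n={n:>3}: mean of sample means = {statistics.mean(sample_means):6.2f}, "
        f"SD of sample means = {statistics.stdev(sample_means):5.2f}"
    )
# Expected pattern: the SD of the sample means is roughly 15 / sqrt(n).
```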

    The LHCb Upgrade I

    The LHCb upgrade represents a major change of the experiment. The detectors have been almost completely renewed to allow running at an instantaneous luminosity five times larger than that of the previous running periods. Readout of all detectors into an all-software trigger is central to the new design, facilitating the reconstruction of events at the maximum LHC interaction rate and their selection in real time. The experiment’s tracking system has been completely upgraded with a new pixel vertex detector, a silicon tracker upstream of the dipole magnet and three scintillating fibre tracking stations downstream of the magnet. The whole photon detection system of the RICH detectors has been renewed, and the readout electronics of the calorimeter and muon systems have been fully overhauled. The first stage of the all-software trigger is implemented on a GPU farm. The output of the trigger provides a combination of fully reconstructed physics objects, such as tracks and vertices, ready for final analysis, and of entire events which need further offline reprocessing. This scheme required a complete revision of the computing model and rewriting of the experiment’s software.