
    Leveraging Semantics to Improve Reproducibility in Scientific Workflows

    Reproducibility of published results is a cornerstone of scientific publishing and progress. The scientific community has therefore been encouraging authors and editors to publish their contributions in a verifiable and understandable way. Efforts such as the Reproducibility Initiative [1] or the Reproducibility Projects in the Biology [2] and Psychology [3] domains have been defining standards and patterns to assess whether an experimental result is reproducible.

    BEAT: An Open-Source Web-Based Open-Science Platform

    With the increased interest in computational sciences, machine learning (ML), pattern recognition (PR) and big data, governmental agencies, academia and manufacturers are overwhelmed by the constant influx of new algorithms and techniques promising improved performance, generalization and robustness. Sadly, result reproducibility is often an overlooked feature accompanying original research publications, competitions and benchmark evaluations. The main reasons behind such a gap arise from natural complications in research and development in this area: the distribution of data may be a sensitive issue; software frameworks are difficult to install and maintain; test protocols may involve a potentially large set of intricate steps which are difficult to handle. Given the rising complexity of research challenges and the constant increase in data volume, the conditions for achieving reproducible research in the domain are also increasingly difficult to meet. To bridge this gap, we built an open platform for research in computational sciences related to pattern recognition and machine learning, to help with the development, reproducibility and certification of results obtained in the field. By making use of such a system, academic, governmental or industrial organizations enable users to easily and socially develop processing toolchains, re-use data, algorithms and workflows, and compare results from distinct algorithms and/or parameterizations with minimal effort. This article presents such a platform and discusses some of its key features, uses and limitations. We overview a currently operational prototype and provide design insights.

    Assessing the reproducibility of discriminant function analyses

    Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. Even if data files are securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and therefore provides an opportunity to test reproducibility among datasets of varying ages. Of the 100 papers we initially surveyed, fourteen were excluded because they either did not present the common quantitative results of a DFA or gave insufficient details of the analysis. Of the remaining 86 datasets, there were 15 cases for which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons ranged from incomprehensible or absent variable labels to the DFA being performed on an unspecified subset of the data or the dataset we received being incomplete. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned, and the largest discriminant function coefficient. The reproducibility of the first two was fairly high (20 of 26, and 44 of 60 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 26 datasets). When considering all three summary statistics, we were able to completely reproduce 46 (65%) of 71 datasets. While our results show that a majority of studies are reproducible, they highlight the fact that many studies still fall short of the carefully curated research that the scientific community and the public expect.
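The second of the three summary statistics, the percentage of specimens correctly assigned under resubstitution, can be illustrated with a minimal sketch. The snippet below is not the authors' pipeline: a nearest-centroid classifier stands in for a full discriminant function analysis, and the toy morphometric dataset (three groups, two variables) is invented.

```python
# Minimal sketch, NOT the authors' pipeline: a nearest-centroid
# classifier as a simplified stand-in for a discriminant function
# analysis, recomputing one summary statistic discussed above, the
# percentage of specimens correctly assigned under resubstitution.
import math
import random

random.seed(0)

def make_group(mean, n=30):
    """Generate n specimens scattered around a 2-D group mean."""
    return [(mean[0] + random.gauss(0, 1), mean[1] + random.gauss(0, 1))
            for _ in range(n)]

# Invented toy data: three well-separated morphometric groups.
groups = {
    "A": make_group((0.0, 0.0)),
    "B": make_group((4.0, 0.0)),
    "C": make_group((0.0, 4.0)),
}

# Centroids are estimated from the same data they classify
# (resubstitution), as in many published DFA classification tables.
centroids = {g: (sum(p[0] for p in pts) / len(pts),
                 sum(p[1] for p in pts) / len(pts))
             for g, pts in groups.items()}

def assign(point):
    """Assign a specimen to the group with the nearest centroid."""
    return min(centroids, key=lambda g: math.dist(point, centroids[g]))

total = sum(len(pts) for pts in groups.values())
correct = sum(assign(p) == g for g, pts in groups.items() for p in pts)
pct_correct = 100.0 * correct / total
print(f"{pct_correct:.1f}% of specimens correctly assigned")
```

Reproducing this statistic from a published paper only requires the dataset, the group labels, and knowledge of which specimens were included, which is exactly the metadata the survey above often found missing.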

    A model for the fragmentation kinetics of crumpled thin sheets

    As a confined thin sheet crumples, it spontaneously segments into flat facets delimited by a network of ridges. Despite the apparent disorder of this process, statistical properties of crumpled sheets exhibit striking reproducibility. Experiments have shown that the total crease length accrues logarithmically when a sheet of paper is repeatedly compacted and unfolded. Here, we offer insight into this unexpected result by exploring the correspondence between crumpling and fragmentation processes. We identify a physical model for the evolution of the facet area and ridge length distributions of crumpled sheets, and propose a mechanism for re-fragmentation driven by geometric frustration. This mechanism establishes a feedback loop in which the facet size distribution informs the subsequent rate of fragmentation under repeated confinement, thereby producing a new size distribution. We then demonstrate the capacity of this model to reproduce the characteristic logarithmic scaling of total crease length, thereby supplying a missing physical basis for the observed phenomenon.
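The feedback loop described above, in which the current facet-size distribution sets the rate of further fragmentation, can be caricatured in a few lines. The toy below is not the paper's model: the splitting probability and the ridge-length rule are invented for illustration. Its only point is that once facets become small they stop fragmenting, so the accrual of total crease length slows sharply over repeated compaction cycles.

```python
# Toy sketch of a fragmentation feedback loop (invented rules, not the
# paper's model). Each cycle, every facet splits with probability equal
# to its area, so small facets rarely fragment further; the crease
# length added by a split is taken proportional to the facet's area.
# Under these rules the growth of total crease length slows markedly,
# qualitatively echoing the logarithmic accrual described above.
import random

random.seed(1)

facets = [1.0]          # facet areas; start with one uncreased unit sheet
total_crease = 0.0
history = []            # total crease length after each compaction cycle

for cycle in range(200):
    new_facets = []
    for area in facets:
        # Feedback: splitting probability shrinks with facet area,
        # mimicking geometric frustration of small facets.
        if random.random() < area:
            f = random.uniform(0.3, 0.7)            # uneven split
            new_facets += [area * f, area * (1.0 - f)]
            total_crease += area                    # invented ridge rule
        else:
            new_facets.append(area)
    facets = new_facets
    history.append(total_crease)

print(f"{len(facets)} facets, total crease length {total_crease:.3f}")
```

Because the total facet area is conserved while individual facets shrink, the expected crease increment per cycle decays over time, which is the qualitative signature of the feedback mechanism.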

    Development and characterization of composites consisting of woven fabrics with integrated prismatic shaped cavities

    Composites are extensively used in automotive, construction, aerospace and wind-energy applications because of their good mechanical properties, such as high specific stiffness, high specific strength and resistance to fatigue. The main issues with composites are delamination and the manual labour required in the production process. If hollow structures such as stiffeners need to be manufactured, these problems become even more apparent. As a result, there is considerable interest in woven fabrics with integrated prismatic shaped cavities for composites, as they reduce manual labour, have a higher resistance to delamination and can enable special properties and applications. In this work, several such woven fabrics with integrated prismatic shaped cavities are designed and produced in high-tenacity polyester yarns. The possibility of using these fabrics in composites is then explored: the reproducibility of the production process is assessed and static testing is performed. A reproducible production process is developed, and static testing shows promising results.

    Transient cognitive dynamics, metastability, and decision making

    Transient Cognitive Dynamics, Metastability, and Decision Making. Rabinovich et al. PLoS Computational Biology. 2008. 4(5). doi:10.1371/journal.pcbi.1000072
    The idea that cognitive activity can be understood using nonlinear dynamics has been intensively discussed for the last 15 years. One of the popular points of view is that metastable states play a key role in the execution of cognitive functions. Experimental and modeling studies suggest that most of these functions are the result of transient activity of large-scale brain networks in the presence of noise. Such transients may consist of sequential switching between different metastable cognitive states. The main problem faced when using dynamical theory to describe transient cognitive processes is the fundamental contradiction between the reproducibility and the flexibility of transient behavior. In this paper, we propose a theoretical description of transient cognitive dynamics based on the interaction of functionally dependent metastable cognitive states. The mathematical image of such transient activity is a stable heteroclinic channel, i.e., a set of trajectories in the vicinity of a heteroclinic skeleton that consists of saddles and the unstable separatrices that connect their surroundings. We suggest a basic mathematical model, a strongly dissipative dynamical system, and formulate the conditions for the robustness and reproducibility of cognitive transients that satisfy the competing requirements for stability and flexibility. Based on this approach, we describe an effective solution to the problem of sequential decision making, represented as a fixed-time game: a player takes sequential actions in a changing noisy environment so as to maximize a cumulative reward. As we predict and verify in computer simulations, noise plays an important role in optimizing the gain.
    This work was supported by ONR N00014-07-1-0741. PV acknowledges support from Spanish BFU2006-07902/BFI and CAM S-SEM-0255-2006.
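Stable heteroclinic channels of the kind described above are often illustrated with winnerless-competition models of generalized Lotka-Volterra type. The sketch below is a hedged toy with invented parameters, not the paper's model: three competing "cognitive states" form a heteroclinic cycle of saddles, and weak additive noise produces reproducible sequential switching among them.

```python
# Hedged sketch (parameters invented): a three-state winnerless-
# competition system of generalized Lotka-Volterra type, the family of
# strongly dissipative models behind stable heteroclinic channels.
# Each saddle is a metastable "cognitive state"; weak noise kicks the
# trajectory along the heteroclinic skeleton, producing sequential
# switching of the dominant state.
import random

random.seed(42)

ALPHA, BETA = 1.5, 0.7           # asymmetric competition (ALPHA+BETA > 2
RHO = [[1.0, ALPHA, BETA],       # makes the heteroclinic cycle stable)
       [BETA, 1.0, ALPHA],
       [ALPHA, BETA, 1.0]]
DT, NOISE = 0.01, 1e-4

a = [0.5, 0.1, 0.05]             # activity of each metastable state
sequence = []                    # order in which states become dominant

for step in range(60_000):       # Euler-Maruyama integration
    rates = [a[i] * (1.0 - sum(RHO[i][j] * a[j] for j in range(3)))
             for i in range(3)]
    a = [max(a[i] + DT * rates[i] + NOISE * random.gauss(0, 1), 1e-9)
         for i in range(3)]
    dominant = max(range(3), key=lambda i: a[i])
    if not sequence or sequence[-1] != dominant:
        sequence.append(dominant)

print("switching sequence:", sequence[:10])
```

The switching order is fixed by the competition matrix (reproducibility), while the dwell time in each state depends on the noise level (flexibility), which is the trade-off the paper formalizes.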