4,220 research outputs found

    Sciunits: Reusable Research Objects

    Science is conducted collaboratively, often requiring knowledge sharing about computational experiments. When experiments include only datasets, they can be shared using Uniform Resource Identifiers (URIs) or Digital Object Identifiers (DOIs). An experiment, however, seldom includes only datasets; more often it also includes software, its past executions, provenance, and associated documentation. The Research Object has recently emerged as a comprehensive and systematic method for aggregating and identifying the diverse elements of computational experiments. While necessary, mere aggregation is not sufficient for sharing computational experiments: other users must be able to easily recompute on these shared research objects. In this paper, we present the sciunit, a reusable research object whose aggregated content is recomputable. We describe a Git-like client that efficiently creates, stores, and repeats sciunits. We show through analysis that sciunits repeat computational experiments with minimal storage and processing overhead. Finally, we provide an overview of a sharing and reproducibility cyberinfrastructure, based on sciunits, that is gaining adoption in the geosciences.
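
    To make the idea concrete, the following Python sketch shows one way a recomputable research object could aggregate code, data, and the producing command into a content-addressed (Git-like) bundle that can later be re-executed. All names, the manifest layout, and the storage scheme are illustrative assumptions, not the actual sciunit client API.

```python
# Minimal sketch of a recomputable research object in the spirit of a sciunit:
# a content-addressed bundle of code, data, and the command that produced a
# result, which can later be re-executed. Illustrative only, not the sciunit CLI.
import hashlib
import json
import shutil
import subprocess
from pathlib import Path

STORE = Path("sciunit_store")  # hypothetical local store location

def create_unit(name: str, command: list[str], files: list[Path]) -> str:
    """Aggregate files and the producing command into a content-addressed unit."""
    unit_dir = STORE / name
    unit_dir.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, unit_dir / f.name)
    manifest = {"command": command, "files": sorted(f.name for f in files)}
    blob = json.dumps(manifest, sort_keys=True).encode()
    (unit_dir / "manifest.json").write_bytes(blob)
    return hashlib.sha256(blob).hexdigest()  # Git-like content identifier

def repeat_unit(name: str) -> int:
    """Re-execute the recorded command inside the unit's directory."""
    unit_dir = STORE / name
    manifest = json.loads((unit_dir / "manifest.json").read_text())
    return subprocess.run(manifest["command"], cwd=unit_dir).returncode
```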

    Summary of the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1)

    Challenges related to the development, deployment, and maintenance of reusable software for science are a growing concern. Many scientists' research increasingly depends on the quality and availability of the software on which their work is built. To highlight some of these issues and share experiences, the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1) was held in November 2013 in conjunction with the SC13 Conference. The workshop featured keynote presentations and a large number (54) of solicited extended abstracts, which were grouped into three themes and presented via panels. A set of collaborative notes on the presentations and discussion was taken during the workshop. Unique perspectives were captured on issues such as comprehensive documentation, development and deployment practices, software licenses, and career paths for developers. Attribution systems that account for evidence of software contribution and impact were also discussed, including mechanisms such as Digital Object Identifiers, the publication of “software papers”, and the use of online systems, for example source code repositories such as GitHub. This paper summarizes the issues and shared experiences that were discussed, including cross-cutting issues and use cases. It joins a nascent literature seeking to understand what drives software work in science and how it is affected by the reward systems of science: these incentives can determine the extent to which developers are motivated to build software for the long term and for the use of others, and whether they work collaboratively or separately. It also explores community building, leadership, and dynamics in relation to successful scientific software.

    Data harmonisation for information fusion in digital healthcare: A state-of-the-art systematic review, meta-analysis and future research directions

    Removing the bias and variance of multicentre data has always been a challenge in large-scale digital healthcare studies, which require the ability to integrate clinical features extracted from data acquired by different scanners and protocols to improve stability and robustness. Previous studies have described various computational approaches to fuse single-modality multicentre datasets. However, these surveys rarely focused on evaluation metrics and lacked a checklist for computational data harmonisation studies. In this systematic review, we summarise the computational data harmonisation approaches for multi-modality data in the digital healthcare field, including harmonisation strategies and evaluation metrics based on different theories. In addition, we propose a comprehensive checklist that summarises common practices for data harmonisation studies, to guide researchers in reporting their findings more effectively. Finally, we provide flowcharts presenting possible routes for methodology and metric selection, and survey the limitations of different methods to inform future research.
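
    As a minimal illustration of the simplest class of harmonisation strategies covered by such reviews, the Python sketch below removes per-site location and scale differences from a feature matrix. It is a toy analogue of ComBat-style methods (which additionally model and preserve biological covariates); the function name and interface are assumptions.

```python
# Toy location-scale harmonisation across acquisition sites: standardise each
# feature within a site, then rescale to the pooled distribution. Real methods
# (e.g. ComBat) also model covariates to preserve biological signal.
import numpy as np

def harmonise(features: np.ndarray, sites: np.ndarray) -> np.ndarray:
    """features: (n_samples, n_features); sites: (n_samples,) site labels."""
    out = features.astype(float).copy()
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0)
    for site in np.unique(sites):
        mask = sites == site
        site_mean = features[mask].mean(axis=0)
        site_std = features[mask].std(axis=0) + 1e-8  # avoid division by zero
        out[mask] = (features[mask] - site_mean) / site_std * grand_std + grand_mean
    return out
```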

    The dire disregard of measurement invariance testing in psychological science

    In psychological science, self-report scales are widely used to compare means on targeted latent constructs across time points, groups, or experimental conditions. For these scale mean comparisons (SMCs) to be meaningful and unbiased, the scales should be measurement invariant across the compared time points or (experimental) groups. Measurement invariance (MI) testing checks whether the latent constructs are measured equivalently across groups or time points. Since MI is essential for meaningful comparisons, we conducted a systematic review to check whether MI is taken seriously in psychological research. Specifically, we sampled 426 psychology articles with openly available data, involving a total of 918 SMCs, to (1) investigate common practices in conducting and reporting MI testing, (2) check whether reported MI test results can be reproduced, and (3) conduct MI tests for the SMCs that enabled sufficiently powered MI testing with the shared data. Our results indicate that (1) only 4% of the 918 scales underwent MI testing across groups or time, and these tests were generally poorly reported; (2) none of the reported MI tests could be successfully reproduced; and (3) of 161 newly performed MI tests, a mere 46 (29%) reached sufficient MI (scalar invariance), and MI often failed completely (89; 55%). Thus, MI tests were rarely done and poorly reported in psychological studies, and the frequent violations of MI indicate that reported group differences cannot be attributed solely to group differences in the latent constructs. We offer recommendations on reporting MI tests and improving computational reproducibility practices.
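
    For context, MI testing in the multi-group confirmatory factor analysis framework compares increasingly constrained versions of the measurement model. The following summary uses standard CFA notation (a general sketch of the framework, not the specific models of the reviewed studies):

```latex
% Multi-group CFA measurement model for item vector x in group g,
% latent construct \xi, loadings \Lambda, intercepts \tau:
\[
  x^{(g)} = \tau^{(g)} + \Lambda^{(g)} \xi^{(g)} + \varepsilon^{(g)}
\]
% Nested invariance levels tested across groups g and g':
%   configural:      same pattern of free/fixed loadings in \Lambda^{(g)}
%   metric (weak):   \Lambda^{(g)} = \Lambda^{(g')}   (equal loadings)
%   scalar (strong): additionally \tau^{(g)} = \tau^{(g')}  (equal intercepts)
% Only under scalar invariance do observed scale-mean differences
% reflect differences in the latent construct means.
```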

    Quantum reservoir computing with repeated measurements on superconducting devices

    Reservoir computing is a machine learning framework that uses artificial or physical dissipative dynamics to predict time-series data, exploiting the nonlinearity and memory properties of dynamical systems. Quantum systems are considered promising reservoirs, but conventional quantum reservoir computing (QRC) models suffer from long execution times. In this paper, we develop a quantum reservoir (QR) system that exploits repeated measurements to generate a time series, which can effectively reduce the execution time. We experimentally implement the proposed QRC on an IBM superconducting quantum device and show that it achieves higher accuracy as well as shorter execution time than the conventional QRC method. Furthermore, we study the temporal information processing capacity to quantify the computational capability of the proposed QRC; in particular, we use this quantity to identify the measurement strength that best trades off the amount of available information against the strength of dissipation. An experimental demonstration with a soft robot is also provided, in which repeated measurements over 1,000 timesteps were effectively applied. Finally, a preliminary result on a 120-qubit device is discussed.
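
    As a concrete illustration of the framework described above, the following Python sketch implements a classical echo-state reservoir with a trained linear readout; the leak rate loosely plays the role that the measurement strength plays in the proposed QRC, trading retained memory against dissipation. It is a classical toy analogue under assumed parameters, not the authors' quantum scheme.

```python
# Classical toy reservoir computer: a fixed dissipative dynamical system whose
# state is read out by a trained linear map. Task: recall the input 2 steps back.
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 1000                      # reservoir size, number of timesteps
u = rng.uniform(-1, 1, T)             # input time series
y = np.roll(u, 2)                     # target: input delayed by 2 steps

W_in = rng.uniform(-0.5, 0.5, N)      # fixed input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

a = 0.5                               # leak rate: dissipation vs. memory tradeoff
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = (1 - a) * x + a * np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Train the linear readout on post-washout states (first 100 steps discarded)
W_out = np.linalg.lstsq(X[100:], y[100:], rcond=None)[0]
pred = X[100:] @ W_out
print("NMSE:", np.mean((pred - y[100:]) ** 2) / np.var(y[100:]))
```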

    Leveraging Container Technologies in a GIScience Project: A Perspective from Open Reproducible Research

    Scientific reproducibility is essential for the advancement of science. It allows the results of previous studies to be reproduced, validates their conclusions, and supports new contributions that build on previous research. Nowadays, more and more authors consider that the ultimate product of academic research is the scientific manuscript together with all the elements (i.e., code and data) necessary for others to reproduce the results. However, numerous difficulties can prevent a study from being easily reproduced (e.g., biased results, the pressure to publish, and proprietary data). In this context, we describe our experience in attempting to improve the reproducibility of a GIScience project. Based on our project's needs, we evaluated a list of practices, standards, and tools that may facilitate open and reproducible research in the geospatial domain, contextualising them on Peng's reproducibility spectrum. Among these resources, we focused on containerisation technologies and performed a shallow review to reflect on the level of adoption of these technologies in combination with OSGeo software. Containerisation technologies proved to enhance reproducibility, and we used UML diagrams to describe representative workflows deployed in our GIScience project. A minimal containerisation sketch is given below.

    This work has been funded by the Generalitat Valenciana through the “Subvenciones para la realización de proyectos de I+D+i desarrollados por grupos de investigación emergentes” programme (GV/2019/016) and by the Spanish Ministry of Economy and Competitiveness under the subprogrammes Challenges-Collaboration 2014 (RTC-2014-1863-8) and Challenges R+D+I 2016 (CSO2016-79420-R AEI/FEDER, EU). Sergio Trilles has been funded by the postdoctoral programme PINV2018 - Universitat Jaume I (POSDOC-B/2018/12) and the stays programme PINV2018 - Universitat Jaume I (E/2019/031).
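
    As a minimal illustration of the containerisation approach described above, the following Dockerfile sketch pins a geospatial environment so an analysis can be rebuilt and re-run later. The base image, packages, and script names are assumptions for the sketch, not the project's actual configuration.

```dockerfile
# Illustrative Dockerfile for a reproducible geospatial workflow of the kind
# the paper describes. All names and versions are placeholders.
FROM python:3.11-slim

# Install system geospatial libraries (OSGeo stack) in one pinned layer
RUN apt-get update && apt-get install -y --no-install-recommends \
        gdal-bin libgdal-dev \
    && rm -rf /var/lib/apt/lists/*

# Pin Python dependencies exactly so the environment can be rebuilt later
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Bundle the analysis code and data needed to reproduce the results
COPY analysis/ /analysis/
WORKDIR /analysis
CMD ["python", "run_analysis.py"]
```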