
    The Experiment Factory: Standardizing Behavioral Experiments

    The administration of behavioral and experimental paradigms for psychology research is hindered by the lack of a coordinated effort to develop and deploy standardized paradigms. While several frameworks (de Leeuw, 2015; McDonnell et al., 2012; Mason & Suri, 2011; Lange et al., 2015) have provided infrastructure and methods for individual research groups to develop paradigms, a coordinated effort to develop paradigms linked with a system for easily deploying them has been missing. This disorganization leads to redundancy in development, divergent implementations of conceptually identical tasks, disorganized and error-prone code lacking documentation, and difficulty in replication. The ongoing reproducibility crisis in psychology and neuroscience research (Baker, 2015; Open Science Collaboration, 2015) highlights the urgency of this challenge: reproducible research in behavioral psychology is conditional on the deployment of equivalent experiments. A large, accessible repository of experiments for researchers to develop collaboratively is most efficiently accomplished through an open source framework. Here we present the Experiment Factory, an open source framework for the development and deployment of web-based experiments. The modular infrastructure includes experiments, virtual machines for local or cloud deployment, and an application to drive these components and provide developers with functions and tools for further extension. We release this infrastructure with a deployment (http://www.expfactory.org) that researchers are currently using to run a set of over 80 standardized web-based experiments on Amazon Mechanical Turk. By providing open source tools for both deployment and development, this novel infrastructure holds promise to bring reproducibility to the administration of experiments and to accelerate scientific progress by providing a shared community resource of psychological paradigms.

    NeuroVault.org : a web-based repository for collecting and sharing unthresholded statistical maps of the human brain

    Here we present NeuroVault, a web-based repository that allows researchers to store, share, visualize, and decode statistical maps of the human brain. NeuroVault is easy to use and employs modern web technologies to provide informative visualization of data without the need to install additional software. In addition, it leverages the power of the Neurosynth database to provide cognitive decoding of deposited maps. The data are exposed through a public REST API, enabling other services and tools to take advantage of them. NeuroVault is a new resource for researchers interested in conducting meta- and coactivation analyses.
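
    As a minimal sketch of using the REST API mentioned above (not an authoritative client), the following Python snippet lists image metadata for one collection. It assumes the /api/collections/<id>/images/ endpoint with paginated JSON whose records live under "results", and the "name" and "file" fields; consult https://neurovault.org/api/ for the actual schema. The collection id is a placeholder.

        # Minimal sketch of querying the NeuroVault REST API. Endpoint shape,
        # pagination, and field names are assumptions; see https://neurovault.org/api/.
        import json
        from urllib.request import urlopen

        def list_images(collection_id):
            """Fetch image metadata for one NeuroVault collection."""
            url = "https://neurovault.org/api/collections/%s/images/" % collection_id
            with urlopen(url) as response:
                payload = json.load(response)
            # Paginated JSON: "results" holds the image records for this page.
            return payload.get("results", [])

        for image in list_images(1337):  # placeholder collection id
            print(image.get("name"), image.get("file"))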

    AuthorSynth: a collaboration network and behaviorally-based visualization tool of activation reports from the neuroscience literature

    Targeted collaboration is becoming more challenging with the ever-increasing number of publications, conferences, and academic responsibilities that the modern-day researcher must synthesize. Specifically, the field of neuroimaging saw roughly 10,000 new papers in PubMed for the year 2013, representing tens of thousands of international authors, each a potential collaborator working on some sub-domain of the field. To remove the burden of synthesizing an entire corpus of publications, talks, and conference interactions to find and assess collaborations, we combine meta-analytical neuroimaging informatics methods with machine learning and network analysis. We present AuthorSynth (http://www.vbmis.com/bmi/authorSynth), a novel application prototype that includes 1) a collaboration network to identify researchers with similar results reported in the literature, and 2) a 2D plot, the brain lattice, to visually summarize a single author's contribution to the field and allow for searching of authors based on behavioral terms. This method capitalizes on intelligent synthesis of the neuroimaging literature, and demonstrates that data-driven approaches can be used to confirm existing collaborations, reveal potential ones, and identify gaps in published knowledge. We believe this tool exemplifies how methods from neuroimaging informatics can better inform researchers about progress and knowledge in the field, and enhance the modern workflow of finding collaborations.
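
    The collaboration network described above could, in outline, be built as follows. This is an illustrative Python sketch, not the AuthorSynth implementation: it assumes each author is summarized by a vector of behavioral term weights derived from their reported activations, and it links authors whose cosine similarity exceeds a threshold.

        # Illustrative sketch: link authors with similar literature-derived profiles.
        import numpy as np
        import networkx as nx

        def similarity_graph(authors, vectors, threshold=0.8):
            """authors: list of names; vectors: (n_authors, n_terms) array."""
            # Normalize rows so the dot product equals cosine similarity.
            norms = np.linalg.norm(vectors, axis=1, keepdims=True)
            unit = vectors / np.clip(norms, 1e-12, None)
            sim = unit @ unit.T
            graph = nx.Graph()
            graph.add_nodes_from(authors)
            for i in range(len(authors)):
                for j in range(i + 1, len(authors)):
                    if sim[i, j] >= threshold:
                        graph.add_edge(authors[i], authors[j], weight=float(sim[i, j]))
            return graph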

    Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers.

    Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing the reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First, we review Singularity Hub's primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency; reproducibility metric performance and interpretability; and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as in the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building and deploying scientific containers.
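
    A minimal sketch of the content-hashing idea behind these metrics follows; it is not the singularity-python API. It assumes the container has been exported as a tar archive and fingerprints each regular file with an md5 content hash, so two containers can later be compared member by member.

        # Fingerprint a container image tar by hashing each regular file.
        import hashlib
        import tarfile

        def container_hashes(tar_path):
            """Map each file path in the image tar to an md5 content hash."""
            hashes = {}
            with tarfile.open(tar_path) as tar:
                for member in tar:
                    if not member.isfile():
                        continue
                    fileobj = tar.extractfile(member)
                    if fileobj is not None:
                        hashes[member.name] = hashlib.md5(fileobj.read()).hexdigest()
            return hashes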

    Reproducibility assessment algorithm: A comparison between two containers comes down to comparing the members of the tar stream, first based on an md5sum of the file member itself and then, in the case of a mismatch, looking at the content hash (non-root owned files) or using a size heuristic (root owned files).

    The final counts of overlapping versus different files are then used to calculate an information coefficient over a subset of files particular to a filter (Levels of Reproducibility of Containers) to describe the similarity of the two containers.
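
    One plausible form of this scoring step, building on the hash maps from the previous sketch, is a Dice-style coefficient restricted to a filter's file subset. The filter sets and weighting used by Singularity Hub may differ; this is a sketch of the idea, not the production algorithm.

        # Score two containers from their {path: md5} maps, optionally
        # restricted to the file subset selected by a reproducibility filter.
        def information_coefficient(hashes_a, hashes_b, file_filter=None):
            paths_a = set(hashes_a)
            paths_b = set(hashes_b)
            if file_filter is not None:
                paths_a = {p for p in paths_a if file_filter(p)}
                paths_b = {p for p in paths_b if file_filter(p)}
            # Count overlapping paths whose content hashes also agree.
            same = sum(1 for p in paths_a & paths_b if hashes_a[p] == hashes_b[p])
            total = len(paths_a) + len(paths_b)
            return 2.0 * same / total if total else 1.0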

    The collection tree provides the researcher with an immediate comparison of the latest version of all containers across Singularity Hub, and an easy way to find similar containers using the software and files inside as the metric for comparison.

    In the example above, a gray node represents a group of containers, and a red node a single container. The user can hover over a node to see all the containers that are represented.
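
    Such a tree could be derived, for example, by hierarchical clustering on pairwise container similarity scores. The sketch below is illustrative rather than the Singularity Hub implementation: it assumes a precomputed symmetric similarity matrix (e.g., from the coefficient above) and an arbitrary distance cut for grouping.

        # Group containers by hierarchically clustering a similarity matrix.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform

        def group_containers(names, similarity, cut=0.2):
            """names: container names; similarity: symmetric (n, n) scores in [0, 1]."""
            distance = 1.0 - similarity
            np.fill_diagonal(distance, 0.0)
            tree = linkage(squareform(distance, checks=False), method="average")
            labels = fcluster(tree, t=cut, criterion="distance")
            groups = {}
            for name, label in zip(names, labels):
                groups.setdefault(label, []).append(name)
            return groups  # multi-member groups ~ gray nodes, singletons ~ red nodes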

    Operating system estimation: Each container is compared to a set of 46 operating systems, including multiple versions of Ubuntu, CentOS, Debian, openSUSE, Alpine, BusyBox, Fedora, and others.

    In the example above, the user is highlighting one of the columns to inspect the score, and the build was for a container bootstrapping a CentOS 6 image.
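
    A simple version of this estimation is sketched below: compare the container's file paths against precomputed path sets for each reference operating system and report the best overlap. The Jaccard score and the reference sets are assumptions for illustration; the actual Singularity Hub scoring may differ.

        # Estimate the base OS as the reference image with the best path overlap.
        def estimate_os(container_paths, reference_sets):
            """reference_sets: dict mapping OS name to a set of file paths."""
            scores = {}
            for os_name, ref_paths in reference_sets.items():
                union = container_paths | ref_paths
                scores[os_name] = len(container_paths & ref_paths) / len(union)
            return max(scores, key=scores.get), scores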