6,735 research outputs found

    Open-Source Workflows for Reproducible Molecular Simulation

    We apply molecular simulation to predict the equilibrium structure of organic molecular aggregates and how these structures determine material properties, with a focus on software engineering practices for ensuring correctness. Because simulations are implemented in software, there is potential for authentic scientific reproducibility in such work: an entire experimental apparatus (codebase) can be given to another investigator, who should be able to use the same processes to find the same answers. Yet in practice, many barriers stand in the way of reproducible molecular simulations, which we address through automation, generalization, and software packaging. Collaboration on and application of the Molecular Simulation and Design Framework (MoSDeF) features prominently. We present structural investigations of organic molecule aggregates and the development of infrastructure and workflows that help manage, initialize, and analyze molecular simulation results through the following scientific applications: (1) a screening study wherein we validate that self-assembled poly-3-hexylthiophene (P3HT) morphologies show the same state dependency as in prior work, and (2) a multi-university collaborative reproducibility study wherein we examine modeling choices that give rise to differences between simulation engines. In aggregate, we reinforce the need for pipelines and practices emphasizing transferability, reproducibility, usability, and extensibility in molecular simulation.

    BSL: An R Package for Efficient Parameter Estimation for Simulation-Based Models via Bayesian Synthetic Likelihood

    Bayesian synthetic likelihood (BSL; Price, Drovandi, Lee, and Nott 2018) is a popular method for estimating the parameter posterior distribution for complex statistical models and stochastic processes that possess a computationally intractable likelihood function. Instead of evaluating the likelihood directly, BSL approximates the likelihood of a judiciously chosen summary statistic of the data via model simulation and density estimation. Compared to alternative methods such as approximate Bayesian computation (ABC), BSL requires little tuning and fewer model simulations than ABC when the chosen summary statistic is high-dimensional. The original synthetic likelihood relies on a multivariate normal approximation of the intractable likelihood, where the mean and covariance are estimated by simulation. One extension of BSL replaces the sample covariance with a penalized covariance estimator to reduce the number of required model simulations. Further, a semi-parametric approach has been developed to relax the normality assumption. Finally, another extension of BSL aims to develop a more robust synthetic likelihood estimator that acknowledges possible model misspecification. In this paper, we present the R package BSL, which amalgamates the aforementioned methods and more into a single, easy-to-use, and coherent piece of software. The package also includes several examples to illustrate use of the package and the utility of the methods.
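The core computation in the original synthetic likelihood described above can be sketched in a few lines: simulate the model repeatedly at a candidate parameter, estimate the mean and covariance of the summary statistic, and evaluate a Gaussian log-density at the observed summary. The toy model, summary statistic, and function names below are illustrative assumptions for this sketch only, not the BSL package's API (which is in R).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n_obs=50):
    # Toy stochastic model (assumed for illustration): normal draws
    # with unknown mean theta and unit variance.
    return rng.normal(theta, 1.0, size=n_obs)

def summary(data):
    # A judiciously chosen low-dimensional summary statistic:
    # the sample mean and the log of the sample variance.
    return np.array([data.mean(), np.log(data.var())])

def synthetic_loglik(theta, s_obs, n_sims=200):
    # Estimate the mean and covariance of the summary statistic by
    # repeated model simulation at theta, then evaluate the resulting
    # multivariate normal log-density at the observed summary.
    sims = np.array([summary(simulate(theta)) for _ in range(n_sims)])
    mu = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False)
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    quad = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (len(s_obs) * np.log(2 * np.pi) + logdet + quad)

# Observed data generated at a "true" parameter of 1.0; the synthetic
# log-likelihood should favour parameters near the truth.
s_obs = summary(simulate(theta=1.0))
ll_near = synthetic_loglik(1.0, s_obs)
ll_far = synthetic_loglik(5.0, s_obs)
```

In a full BSL run this estimator would be embedded in an MCMC loop over theta; the extensions mentioned above swap out the sample covariance (penalized estimation) or the Gaussian form itself (semi-parametric and robust variants).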

    Testing the spirit of the information age

    Every age has a 'spirit,' and the Information Age seems a more extreme case than most eras, with its constant barrage of messages promising social and individual salvation. Information and information technology are heralded as great, new possibilities not just for reform but for perfection, with some even predicting the end of physical death (using information technology) by the end of the next century. The intensity of our current period's fascination with technology is partly due to the technology itself: ideas or sales pitches get out to more people more quickly than ever before in history, and, as a result, it is easy to be blinded by all the promises and hype. It is no accident that ideas like 'e-commerce' and 'knowledge management' are unifying concepts for many in this era; although there is nothing intrinsically wrong with them, there is something amiss with how they are discussed. This essay comments on the latter issue, the hyperbole of the Information Age, from three perspectives: 1) as a consumer of information technology; 2) as an educator in a field (archives and records management) utilizing information technology; and 3) as an individual convinced about the relevancy of basic Judaic-Christian beliefs as one means to sift critically the many conflicting and confusing messages promulgated by the so-called modern Information Age.

    Spartan Daily, February 20, 2004

    Volume 122, Issue 15

    The Computer Science Ontology: A Large-Scale Taxonomy of Research Areas

    Ontologies of research areas are important tools for characterising, exploring, and analysing the research landscape. Some fields of research are comprehensively described by large-scale taxonomies, e.g., MeSH in Biology and PhySH in Physics. Conversely, current Computer Science taxonomies are coarse-grained and tend to evolve slowly. For instance, the ACM classification scheme contains only about 2K research topics and the last version dates back to 2012. In this paper, we introduce the Computer Science Ontology (CSO), a large-scale, automatically generated ontology of research areas, which includes about 26K topics and 226K semantic relationships. It was created by applying the Klink-2 algorithm on a very large dataset of 16M scientific articles. CSO presents two main advantages over the alternatives: i) it includes a very large number of topics that do not appear in other classifications, and ii) it can be updated automatically by running Klink-2 on recent corpora of publications. CSO powers several tools adopted by the editorial team at Springer Nature and has been used to enable a variety of solutions, such as classifying research publications, detecting research communities, and predicting research trends. To facilitate the uptake of CSO, we have developed the CSO Portal, a web application that enables users to download, explore, and provide granular feedback on CSO at different levels. Users can use the portal to rate topics and relationships, suggest missing relationships, and visualise sections of the ontology. The portal will support the publication of and access to regular new releases of CSO, with the aim of providing a comprehensive resource to the various communities engaged with scholarly data.

    Mobile support in CSCW applications and groupware development frameworks

    No full text
    Computer Supported Cooperative Work (CSCW) is an established subset of the field of Human-Computer Interaction that deals with how people use computing technology to enhance group interaction and collaboration. Mobile CSCW has emerged as a result of the progression from personal desktop computing to the mobile device platforms that are ubiquitous today. CSCW aims not only to connect people and facilitate communication through computers; it also aims to provide conceptual models coupled with technology to manage, mediate, and assist collaborative processes. Mobile CSCW research looks to fulfil these aims through the adoption of mobile technology and consideration for the mobile user. Facilitating collaboration using mobile devices brings new challenges. Some of these challenges are inherent to the nature of the device hardware, while others concern how to engineer software to maximize effectiveness for end-users. This paper reviews seminal and state-of-the-art cooperative software applications and development frameworks, and their support for mobile devices.