
    Literate Statistical Practice

    Literate Statistical Practice (LSP; Rossini, 2001) describes an approach for creating self-documenting statistical results. It applies literate programming (Knuth, 1992) and related techniques in a natural fashion to the practice of statistics. In particular, documentation, specification, and descriptions of results are written concurrently with the writing and evaluation of statistical programs. We discuss how and where LSP can be integrated into practice and illustrate this with an example derived from an actual statistical consulting project. The approach is simplified through the use of a comprehensive, open-source toolset incorporating Noweb, Emacs Speaks Statistics (ESS), and Sweave (Ramsey, 1994; Rossini et al., 2002; Leisch, 2002; Ihaka and Gentleman, 1996). We conclude with an assessment of LSP for the construction of reproducible, auditable, and comprehensible statistical analyses.
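
    The tangle/weave machinery that Noweb-style tools provide can be pictured with a minimal Python sketch that separates a literate source file into documentation and code chunks. It is illustrative only: real Noweb and Sweave also handle chunk cross-references, chunk options, and output capture, which this sketch omits.

        # Minimal sketch of the tangle/weave idea behind Noweb-style literate
        # programming; not the actual Noweb or Sweave implementation.
        import re
        import sys

        CHUNK_START = re.compile(r"^<<(.*)>>=\s*$")  # noweb chunk header, e.g. <<load-data>>=
        CHUNK_END = "@"  # noweb returns to documentation text at a lone '@'

        def parse(lines):
            """Split literate source into ('doc', name, lines) and ('code', name, lines) parts."""
            parts, mode, name, buf = [], "doc", None, []
            for line in lines:
                m = CHUNK_START.match(line)
                if m and mode == "doc":  # entering a named code chunk
                    parts.append((mode, name, buf))
                    mode, name, buf = "code", m.group(1), []
                elif mode == "code" and line.rstrip() == CHUNK_END:
                    parts.append((mode, name, buf))
                    mode, name, buf = "doc", None, []
                else:
                    buf.append(line)
            parts.append((mode, name, buf))
            return parts

        def tangle(parts):
            """Extract only the executable code, in order (the 'tangle' direction)."""
            return "".join(line for kind, _, buf in parts if kind == "code" for line in buf)

        def weave(parts):
            """Render the documentation with code chunks set off (the 'weave' direction)."""
            out = []
            for kind, name, buf in parts:
                if kind == "code":
                    out.append(f"--- chunk <<{name}>> ---\n")
                out.extend(buf)
            return "".join(out)

        if __name__ == "__main__":
            parts = parse(open(sys.argv[1]).readlines())
            sys.stdout.write(tangle(parts) if "--tangle" in sys.argv else weave(parts))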

    Reproducible Econometric Research. A Critical Review of the State of the Art.

    Recent software developments are reviewed from the vantage point of reproducible econometric research. We argue that the emergence of new tools, particularly in the open-source community, has greatly eased the burden of documenting and archiving both empirical and simulation work in econometrics. Some of these tools are highlighted in the discussion of three small replication exercises. Series: Research Report Series / Department of Statistics and Mathematics

    User and Developer Interaction with Editable and Readable Ontologies

    The process of building ontologies is a difficult task that involves collaboration between ontology developers and domain experts and requires an ongoing interaction between them. This collaboration is made more difficult because they tend to use different tool sets, which can hamper this interaction. In this paper, we propose to decrease this distance between domain experts and ontology developers by creating more readable forms of ontologies, and further to enable editing in normal office environments. Building on a programmatic ontology development environment, such as Tawny-OWL, we are now able to generate these readable/editable forms from the raw ontological source and its embedded comments. We have implemented this translation to HTML for reading; this environment provides rich hyperlinking as well as active features such as hiding the source code in favour of comments. We are now working on a translation to a Word document that also enables editing. Taken together, this should provide a significant new route for collaboration between the ontologist and domain specialist. Comment: 5 pages, 5 figures, accepted at ICBO 2017, License update
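
    The generation step the abstract describes can be pictured with a short sketch: walk a commented source file, promote comment blocks to prose, and wrap code in a collapsible element so it can be hidden in favour of the comments. Tawny-OWL sources are Clojure, so ';;' is taken as the comment marker here; this is a generic illustration, not the authors' tool.

        # Render a commented source file as HTML: comments become prose,
        # code becomes collapsible <details> blocks (generic sketch only).
        import html

        def render(source_path, out_path):
            blocks, current_kind, buf = [], None, []
            for line in open(source_path):
                kind = "prose" if line.lstrip().startswith(";;") else "code"
                if kind != current_kind and buf:
                    blocks.append((current_kind, buf))
                    buf = []
                current_kind = kind
                buf.append(line.lstrip(" ;") if kind == "prose" else line)
            if buf:
                blocks.append((current_kind, buf))

            with open(out_path, "w") as out:
                out.write("<html><body>\n")
                for kind, lines in blocks:
                    text = html.escape("".join(lines))
                    if kind == "prose":
                        out.write(f"<p>{text}</p>\n")
                    else:
                        # <details> gives the 'hide source in favour of comments' behaviour
                        out.write(f"<details><summary>source</summary><pre>{text}</pre>"
                                  "</details>\n")
                out.write("</body></html>\n")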

    Building-in quality rather than assessing quality afterwards: a technological solution to ensuring computational accuracy in learning materials

    Quality encompasses a very broad range of ideas in learning materials, yet the accuracy of the content is often overlooked as a measure of quality. Various aspects of accuracy are briefly considered, and the issue of computational accuracy is then considered further. When learning materials are produced containing the results of mathematical computations, accuracy is essential; but how can the results of these computations be known to be correct? A solution is to embed the instructions for performing the calculations in the materials, and let the computer calculate the result and place it in the text. In this way, quality is built into the learning materials by design, not evaluated after the event. This is all accomplished using the ideas of literate programming, applied to the learning-materials context. A small example demonstrates how remarkably easy the ideas are to apply in practice using the appropriate technology. Given that the technology is available and easy to use, it would appear imperative that the approach discussed be adopted to improve quality in learning materials containing computational results.
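
    To make the idea concrete, here is a minimal sketch of embedding the calculation rather than the answer. Sweave's \Sexpr{} inline expressions in R and LaTeX are the canonical tool for this; the {{...}} placeholder syntax below is invented for illustration only.

        # The material's source carries the computation, not a hand-typed answer,
        # so the printed result can never disagree with the data.
        import re

        TEMPLATE = ("The sample consists of n = {{len(scores)}} students. "
                    "Their mean score is {{round(sum(scores) / len(scores), 2)}}.")

        def fill(template, context):
            """Evaluate each {{expression}} in the template against the given data."""
            return re.sub(r"\{\{(.+?)\}\}",
                          lambda m: str(eval(m.group(1), {}, context)),
                          template)

        scores = [72, 85, 90, 65, 88]
        print(fill(TEMPLATE, {"scores": scores}))
        # Change 'scores' and the reported n and mean are recomputed,
        # rather than silently going stale in the text.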

    lpEdit: an editor to facilitate reproducible analysis via literate programming

    There is evidence to suggest that a surprising proportion of published experiments in science are difficult, if not impossible, to reproduce. The concepts of data sharing, leaving an audit trail, and extensive documentation are fundamental to reproducible research, whether in the laboratory or as part of an analysis. In this work, we introduce a tool for documentation that aims to make analyses more reproducible in the general scientific community. The application, lpEdit, is a cross-platform editor, written with PyQt4, that enables a broad range of scientists to carry out the analytic component of their work in a reproducible manner through the use of literate programming. Literate programming mixes code and prose to produce a final report that reads like an article or book. lpEdit targets researchers getting started with statistics or programming, so the hurdles associated with setting up a proper pipeline are kept to a minimum and the learning burden is reduced through the use of templates and documentation. The documentation for lpEdit is centered around learning by example, and accordingly we use several increasingly involved examples to demonstrate the software's capabilities. We first consider applications of lpEdit to analyses mixing R and Python code with the LaTeX documentation system. Finally, we illustrate the use of lpEdit to conduct a reproducible functional analysis of high-throughput sequencing data, using the transcriptome of the butterfly species Pieris brassicae.
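
    The core mechanism behind such editors can be sketched briefly: each embedded chunk is handed to the right interpreter and its output captured for the woven report. The chunk dispatch below is a hypothetical simplification, not lpEdit's actual internals.

        # Run one embedded code chunk and capture its output for the report.
        # A hypothetical sketch; lpEdit's own execution model may differ.
        import subprocess
        import sys

        def run_chunk(code, language):
            """Execute a code chunk in a fresh interpreter and return its stdout."""
            if language == "python":
                result = subprocess.run([sys.executable], input=code, text=True,
                                        capture_output=True, check=True)
            elif language == "r":
                # Rscript takes the program text as an argument rather than on stdin
                result = subprocess.run(["Rscript", "-e", code], text=True,
                                        capture_output=True, check=True)
            else:
                raise ValueError(f"unsupported chunk language: {language}")
            return result.stdout

        fragment = run_chunk("print(sum(range(10)))", "python")
        print(fragment)  # -> 45, ready to be spliced into the LaTeX report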

    Views, Program Transformations, and the Evolutivity Problem in a Functional Language

    We report on an experiment in supporting multiple views of programs to address the tyranny of the dominant decomposition in a functional setting. We consider two possible architectures in Haskell for the classical example of the expression problem. We show how the Haskell Refactorer can be used to transform one view into the other, and back again. That transformation is automated, and we discuss how the Haskell Refactorer has been adapted to support this automated transformation. Finally, we compare our implementation of views with some of the literature. Comment: 19 pages
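
    The tension the abstract addresses is easy to see in miniature. Below, the same tiny expression language is decomposed both ways; the sketch is written in Python for brevity, whereas the paper itself works in Haskell (data types versus type classes).

        # The two rival 'views' of the expression problem. View 1 groups by data
        # variant (adding a variant is easy; adding an operation touches every
        # class); View 2 groups by operation (the reverse). Neither makes both
        # extensions easy at once: the tyranny of the dominant decomposition.

        # View 1: data-centred - each variant owns its operations.
        class Lit:
            def __init__(self, n): self.n = n
            def eval(self): return self.n
            def show(self): return str(self.n)

        class Add:
            def __init__(self, l, r): self.l, self.r = l, r
            def eval(self): return self.l.eval() + self.r.eval()
            def show(self): return f"({self.l.show()} + {self.r.show()})"

        # View 2: operation-centred - each operation handles every variant.
        def eval_expr(e):
            return e.n if isinstance(e, Lit) else eval_expr(e.l) + eval_expr(e.r)

        def show_expr(e):
            return str(e.n) if isinstance(e, Lit) else f"({show_expr(e.l)} + {show_expr(e.r)})"

        expr = Add(Lit(1), Add(Lit(2), Lit(3)))
        assert expr.eval() == eval_expr(expr) == 6  # both views agree on the same tree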

    TEI and LMF crosswalks

    The present paper explores various arguments in favour of making the Text Encoding Initiative (TEI) guidelines an appropriate serialisation for ISO standard 24613:2008 (LMF, Lexical Markup Framework). It also identifies the issues that would have to be resolved in order to reach an appropriate implementation of these ideas, in particular in terms of informational coverage. We show how the customisation facilities offered by the TEI guidelines can provide an adequate background, not only to cover missing components within the current Dictionary chapter of the TEI guidelines, but also to allow specific lexical projects to deal with local constraints. We expect this proposal to be a basis for a future ISO project in the context of the ongoing revision of LMF.
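
    A toy serialisation makes the crosswalk idea concrete: one LMF-style lexical entry rendered with elements from the TEI Dictionary chapter. The mapping shown (LexicalEntry to entry, Lemma to form/orth, PartOfSpeech to gramGrp/pos, Sense/Definition to sense/def) is one plausible rendering for illustration, not the normative crosswalk the paper develops.

        # Emit a TEI dictionary <entry> from an LMF-style record (illustrative mapping).
        import xml.etree.ElementTree as ET

        lmf_entry = {"lemma": "bank", "pos": "noun",
                     "senses": ["the land alongside a river", "a financial institution"]}

        entry = ET.Element("entry")
        form = ET.SubElement(entry, "form", {"type": "lemma"})
        ET.SubElement(form, "orth").text = lmf_entry["lemma"]
        gram = ET.SubElement(entry, "gramGrp")
        ET.SubElement(gram, "pos").text = lmf_entry["pos"]
        for i, definition in enumerate(lmf_entry["senses"], start=1):
            sense = ET.SubElement(entry, "sense", {"n": str(i)})
            ET.SubElement(sense, "def").text = definition

        print(ET.tostring(entry, encoding="unicode"))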