
    Help me describe my data: A demonstration of the Open PHACTS VoID Editor

    Abstract. The Open PHACTS VoID Editor helps non-Semantic-Web experts create machine-interpretable descriptions of their datasets. The web app guides the user, an expert in the domain of the data, through a series of questions to capture details of their dataset and then generates a VoID dataset description. The generated description conforms to the Open PHACTS dataset description guidelines, which ensure that suitable provenance information is available about the dataset to enable its discovery and reuse.
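The kind of output such an editor produces can be illustrated with a short sketch: take a few answers about a dataset and emit a VoID description in Turtle. The field names, URIs and example values below are invented for illustration; they are not the actual questions or output of the Open PHACTS VoID Editor.

```python
# Minimal sketch of generating a VoID dataset description from user answers.
# All URIs and example values are placeholders.

VOID_TEMPLATE = """@prefix void:    <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

<{uri}> a void:Dataset ;
    dcterms:title       "{title}" ;
    dcterms:creator     "{creator}" ;
    dcterms:issued      "{issued}"^^<http://www.w3.org/2001/XMLSchema#date> ;
    void:sparqlEndpoint <{endpoint}> .
"""

def make_void_description(uri, title, creator, issued, endpoint):
    """Fill the Turtle template with the answers captured from the user."""
    return VOID_TEMPLATE.format(uri=uri, title=title, creator=creator,
                                issued=issued, endpoint=endpoint)

print(make_void_description(
    uri="http://example.org/dataset/chembl-slice",
    title="Example ChEMBL slice",
    creator="Jane Researcher",
    issued="2013-05-01",
    endpoint="http://example.org/sparql",
))
```

A real editor would of course validate the answers and cover many more of the guideline-mandated provenance fields; the point here is only the shape of the generated description.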

    A Unifying Hypothesis for Familial and Sporadic Alzheimer's Disease

    Alzheimer's disease (AD) is characterised by the aggregation of two quite different proteins, namely, amyloid-beta (Aβ), which forms extracellular plaques, and tau, the main component of cytoplasmic neurofibrillary tangles. The amyloid hypothesis proposes that Aβ plaques precede tangle formation, but there is still much controversy concerning the order of events, and the link between Aβ and tau alterations remains unknown. Mathematical modelling has become an essential tool for generating and evaluating hypotheses involving complex systems. We have therefore used this approach to discover the most probable pathway linking Aβ and tau. The model supports a complex pathway linking Aβ and tau via GSK3β, p53, and oxidative stress. Importantly, the pathway contains a cycle with multiple points of entry. It is this property of the pathway that enables the model to be consistent with both the amyloid hypothesis for familial AD and a more complex pathway for sporadic forms.

    PAV ontology: provenance, authoring and versioning

    Provenance is a critical ingredient for establishing trust in published scientific content. This is true whether we are considering a data set, a computational workflow, a peer-reviewed publication or a simple scientific claim with supportive evidence. Existing vocabularies such as DC Terms and the W3C PROV-O are domain-independent and general-purpose, and they allow and encourage extensions to cover more specific needs. We identify the specific need to distinguish between the various roles assumed by agents manipulating digital artifacts, such as author, contributor and curator. We present the Provenance, Authoring and Versioning ontology (PAV): a lightweight ontology for capturing just those descriptions essential for tracking the provenance, authoring and versioning of web resources. We argue that such descriptions are essential for digital scientific content. PAV distinguishes between contributors, authors and curators of content and creators of representations, in addition to the provenance of originating resources that have been accessed, transformed and consumed. We explore five projects (and communities) that have adopted PAV, illustrating their usage through concrete examples. Moreover, we present mappings that show how PAV extends the PROV-O ontology to support broader interoperability. The authors strove to keep PAV lightweight and compact by including only those terms that have been demonstrated to be pragmatically useful in existing applications, and by recommending terms from existing ontologies where plausible. We analyze and compare PAV with related approaches, namely the Provenance Vocabulary, DC Terms and BIBFRAME. We identify similarities and analyze their differences with PAV, outlining strengths and weaknesses of our proposed model. We specify SKOS mappings that align PAV with DC Terms.
    Comment: 22 pages (incl. 5 tables and 19 figures). Submitted to Journal of Biomedical Semantics 2013-04-26 (#1858276535979415). Revised article submitted 2013-08-30. Second revised article submitted 2013-10-06. Accepted 2013-10-07. Author proofs sent 2013-10-09 and 2013-10-16. Published 2013-11-22. Final version 2013-12-06. http://www.jbiomedsem.com/content/4/1/3
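A flavour of how PAV separates these roles can be given with a small, hand-written Turtle snippet. The resource and agent URIs below are placeholders invented for illustration; only the pav: terms themselves come from the ontology.

```python
# Illustrative Turtle using PAV to distinguish content roles (author,
# contributor, curator) from the creator of a particular representation,
# plus the provenance of an originating resource. All URIs are placeholders.
pav_example = """@prefix pav: <http://purl.org/pav/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://example.org/doc/report.html>
    pav:authoredBy    <http://example.org/people/alice> ;   # wrote the content
    pav:contributedBy <http://example.org/people/bob> ;     # smaller contributions
    pav:curatedBy     <http://example.org/people/carol> ;   # maintains/annotates it
    pav:createdBy     <http://example.org/people/dave> ;    # made this HTML representation
    pav:importedFrom  <http://example.org/source/report.docx> ;
    pav:version       "2.1" ;
    pav:createdOn     "2013-11-22T10:00:00Z"^^xsd:dateTime .
"""
print(pav_example)
```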

    The virtual design studio: developing new tools for learning, practice and research in design.

    The emergence of new networked technologies such as virtual learning environments (VLEs) and digital libraries is providing opportunities for the development of new virtual tools that assist the design researcher in exploring ideas with the aid of visualising and mapping tools, and that provide interfaces supporting interdisciplinary collaboration between design teams. In 1998 a research project was initiated to evaluate the potential of computer-assisted learning within Art and Design. This resulted in the development of a virtual learning environment designed to support Art and Design students and staff (www.studio-space.net). This paper describes the design process used to develop this VLE and its underlying principles, based on a constructivist approach to experiential learning. The ongoing research uses the metaphor of the design studio to explore a range of technologies that provide generative tools for the representation of design practice and related research, including the development and evaluation of an online Personal Development Planning (PDP) tool and other information management systems. The paper explores some of the ways in which tools such as information retrieval applications, whiteboards, visual mapping and digital archives can be combined to provide a virtual online design research studio. A further extension of the metaphor provides opportunities for developing new facilities, for example the portfolio, drawing board, bookcase and model-making area. The virtual design studio has two potential uses: first, to provide a toolbox for the design researcher/educator to undertake collaborative design practice using CAD/CAM applications; second, to provide systems that help to externalise design methodologies, thus making it possible to gain an insight into the design process itself. This latter outcome can be achieved through the use of metadata (such as author, date/time created, version number, i.e. design iteration, and note pad) and the representation of critical decision paths and reflection points.

    Experimental and Computational Analysis of Polyglutamine-Mediated Cytotoxicity

    Expanded polyglutamine (polyQ) proteins are known to be the causative agents of a number of human neurodegenerative diseases, but the molecular basis of their cytotoxicity is still poorly understood. PolyQ tracts may impede the activity of the proteasome, and evidence from single-cell imaging suggests that the sequestration of polyQ into inclusion bodies can reduce the proteasomal burden and promote cell survival, at least in the short term. The presence of misfolded protein also leads to activation of stress kinases such as p38MAPK, which can be cytotoxic. The relationships between these systems are not well understood. We have used fluorescent reporter systems imaged in living cells, together with stochastic computer modelling, to explore the relationships between polyQ, p38MAPK activation, generation of reactive oxygen species (ROS), proteasome inhibition, and inclusion body formation. In cells expressing a polyQ protein, inclusion body formation was preceded by proteasome inhibition, but cytotoxicity was greatly reduced by administration of a p38MAPK inhibitor. Computer simulations suggested that, without the generation of ROS, the toxicity resulting from proteasome inhibition and activation of p38MAPK would have been significantly reduced. Our data suggest a vicious cycle of stress kinase activation and proteasome inhibition that is ultimately lethal to cells. There was close agreement between experimental data and the predictions of a stochastic computer model, supporting a central role for proteasome inhibition and p38MAPK activation in inclusion body formation and ROS-mediated cell death.
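The stochastic modelling approach mentioned here can be sketched with a toy Gillespie (stochastic simulation algorithm) example. The two species and all rate constants below are invented purely to illustrate a self-amplifying loop in which ROS promotes proteasome inhibition and inhibited proteasomes in turn promote ROS production; this is not the authors' published model.

```python
import random

def gillespie(t_end=50.0, seed=1):
    """Toy Gillespie simulation of a ROS / proteasome-inhibition feedback loop.

    Species are molecule counts: `ros` (reactive oxygen species) and
    `inhibited` (inhibited proteasomes). Rates are arbitrary illustrative
    constants, not fitted parameters.
    """
    rng = random.Random(seed)
    ros, inhibited = 10, 0
    t = 0.0
    trace = [(t, ros, inhibited)]
    while t < t_end:
        rates = [
            1.0 + 0.5 * inhibited,   # ROS production, boosted by inhibition
            0.2 * ros,               # ROS removal
            0.05 * ros,              # proteasome inhibition driven by ROS
            0.1 * inhibited,         # proteasome recovery
        ]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)      # exponential waiting time to next event
        r = rng.random() * total         # pick which reaction fires
        if r < rates[0]:
            ros += 1
        elif r < rates[0] + rates[1]:
            ros -= 1
        elif r < rates[0] + rates[1] + rates[2]:
            inhibited += 1
        else:
            inhibited -= 1
        trace.append((t, ros, inhibited))
    return trace

trace = gillespie()
print(f"{len(trace)} events; final (t, ROS, inhibited) = {trace[-1]}")
```

Because a zero-count species contributes a zero-width interval to the reaction choice, counts can never go negative; repeated runs with different seeds give a feel for the cell-to-cell variability that motivates stochastic rather than deterministic modelling here.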

    The ‘Epistemic Object’ in the Creative Process of Doctoral Inquiry

    Within the framework of practice-led doctoral research in the art and design sector, there has long been debate about the role of the artefact/creative works in the process of inquiry and in the final submission for Ph.D. examination. Their status can be ambiguous, and the concept of ‘exhibition’ is – we would argue – problematic in this context. In this chapter we want to suggest an alternative way of considering the role of artefacts/creative works in a doctoral submission, by discussing the liberating concept of ‘epistemic objects’ – their possible forms and agencies, and the alternative display/sharing of the understandings generated from these through ‘exposition’, not exhibition. Whilst our experience and expertise lie within the sector of art and design, we suggest that some ideas in this chapter may resonate with, and be relevant to, other creative disciplines in the revealing and sharing of doctoral research outcomes. This process can be difficult and provoke many anxieties for the practitioner-researcher and their supervisors, so some clarity on this might help everyone involved in the examination of doctoral work to approach it with integrity and confidence, and see it as a valuable learning experience for all involved.

    Developing a research procedures programme for artists & designers.

    This paper builds on our earlier research concerned with describing a contextual framework for the development of artistic research procedures [1, 2], and attempts to move forward from this somewhat philosophical stage into more practical territory. Over the last year research personnel at the Centre for Research in Art & Design (CRiAD) at Gray's School of Art have been involved in developing a Research Procedures Programme for Artists & Designers. Parts of this programme have already been piloted and evaluated as part of the Robert Gordon University's Research Methods Course for doctoral students from all disciplines within the institution. There has been a good response from these research students, who are beginning to recognise the contribution to research methodologies that the visual arts can make. A draft programme (currently under development) at CRiAD contains six phases, travelling from the general to the particular, from beginning research to achieving a higher degree. The programme contains seventeen sessions (or modules) and is a mixture of lectures, seminars, and participatory workshops, using video as a documentation and reinforcement learning tool, as well as various group learning techniques. The programme combines nitty-gritty common-sense advice (pertinent to all research paradigms) with particular and distinctive techniques for Fine Artists and Designers operating in a postmodern context. A set of key references and a glossary of research terms (including visual exemplars) are also being developed as part of this programme. Therefore, this paper, after a resumé of our earlier work, and some brief examples of methodologies and methods used in completed and ongoing research, sets out a suggested Research Procedures Programme for Artists & Designers, which we intend to develop in a variety of formats.

    NiftyNet: a deep-learning platform for medical imaging

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. Thus, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. NiftyNet enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or to extend the platform to new applications.
    Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures. Update includes additional applications, updated author list and formatting for journal submission.
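One of the segmentation evaluation metrics such a platform typically provides is the Dice overlap coefficient. The sketch below is a minimal, framework-free illustration of the computation in plain Python; it is not NiftyNet's actual API.

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks given as flat 0/1 sequences.

    Dice = 2 * |pred ∩ truth| / (|pred| + |truth|), ranging from 0 (no
    overlap) to 1 (identical masks).
    """
    assert len(pred) == len(truth)
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Example: two 8-voxel masks of 4 voxels each, overlapping on 3 voxels.
pred  = [1, 1, 1, 1, 0, 0, 0, 0]
truth = [0, 1, 1, 1, 1, 0, 0, 0]
print(dice_coefficient(pred, truth))  # 2*3 / (4+4) = 0.75
```

In practice the same quantity (as a differentiable "soft Dice" over predicted probabilities) is also popular as a training loss for medical image segmentation, since it is robust to the heavy class imbalance between organ and background voxels.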

    Using an Audience Response System in Library Instruction

    An audience response system (sometimes referred to as a clicker system) can help turn a library instruction presentation into an interactive learning experience. In this session, presenters from three libraries will discuss how they've implemented an audience response system in their instruction programs, as well as for other uses in their libraries, such as presentations to administrators and faculty workshops. You will also get to experience an audience response system as a participant and then learn how to create your own audience response presentation using a standard PowerPoint presentation.