
    Sustaining Quality Assessment Processes in User-Centred Health Information Portals

    Information portals are quality-controlled intermediaries through which consumers can access online information of high relevance and quality. Developing and maintaining a portal's content repository involves resource identification, selection and description processes undertaken by domain experts. Among these processes, the less standardised, manual quality assessment procedures stand out: new solutions are needed to resolve their scalability and sustainability problems. Results of a qualitative analysis indicate that quality assessment is fundamentally a subjective task that requires human intervention. For this reason, this research proposes a semi-automated quality assessment approach, in which a user-centred quality framework, an indicator-based quality model and a decision support tool are devised to address the identified domain expert needs for intelligent support. The research adopts the system development methodology within the design science framework, and tool prototyping in the context of health information portals is underway to evaluate the feasibility and usefulness of the proposed approach.
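
    To make the indicator-based idea concrete, the sketch below shows one way a semi-automated scorer might combine machine-checkable indicators with indicators routed to a domain expert. The indicator names, weights and aggregation rule are illustrative assumptions, not the quality model proposed in the paper.

    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        weight: float        # relative importance in the quality model
        score: float | None  # None when the indicator needs expert review

    def aggregate_quality(indicators):
        # Weighted average over the automatically scored indicators;
        # also returns the indicators flagged for expert review.
        scored = [i for i in indicators if i.score is not None]
        pending = [i.name for i in indicators if i.score is None]
        total = sum(i.weight for i in scored)
        auto = sum(i.weight * i.score for i in scored) / total if total else None
        return auto, pending

    resource = [
        Indicator("currency", 0.3, 1.0),            # machine-checkable: last-updated date
        Indicator("authorship", 0.3, 0.5),          # partially machine-checkable
        Indicator("clinical_accuracy", 0.4, None),  # subjective: route to a domain expert
    ]
    print(aggregate_quality(resource))  # (0.75, ['clinical_accuracy'])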

    Machine Science in Biomedicine: Practicalities, Pitfalls and Potential

    Machine Science, or data-driven research, is an emerging scientific methodology that uses advanced computational techniques to identify, retrieve, classify and analyse data in order to generate hypotheses and develop models. In this paper we describe three recent biomedical Machine Science studies, and use these to assess the current state of the art, with specific emphasis on data mining, data assessment, costs, limitations, skills and tool support.

    Integrated process of images and acceleration measurements for damage detection

    The use of mobile robots and UAVs to capture otherwise unobtainable images, together with on-site automated acceleration measurements made easily achievable by wireless sensors capable of remote data transfer, has strongly enhanced the capability for defect and damage evaluation in bridges. A sequential procedure is proposed here for damage monitoring and bridge condition assessment, based on digital image processing for survey and defect evaluation, and on structural identification from acceleration measurements. A steel bridge was simultaneously inspected by UAV, acquiring images in visible light and infrared, and monitored through a wireless sensor network (WSN) measuring structural vibrations. First, image processing was used to construct a geometrical model and to quantify the extent of corrosion. Then, the resulting structural model was updated based on the modal quantities identified from the acceleration measurements acquired by the deployed WSN.
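
    As a sketch of the structural-identification step, the snippet below picks natural frequencies, the kind of modal quantity used for model updating, from a single acceleration channel via Welch PSD peak picking. The sampling rate, record length and synthetic two-mode signal are assumptions for illustration; the paper's actual identification method is not specified in the abstract.

    import numpy as np
    from scipy.signal import welch, find_peaks

    def natural_frequencies(acc, fs, n_modes=3):
        # Estimate the first few natural frequencies from one acceleration
        # channel by picking the most prominent peaks of the Welch PSD.
        f, pxx = welch(acc, fs=fs, nperseg=4096)
        peaks, props = find_peaks(pxx, prominence=pxx.max() * 0.05)
        top = np.argsort(props["prominences"])[::-1][:n_modes]
        return np.sort(f[peaks[top]])

    fs = 200.0                    # Hz, an assumed WSN sampling rate
    t = np.arange(0, 60, 1 / fs)  # 60 s record
    acc = (np.sin(2 * np.pi * 2.4 * t)          # synthetic first mode
           + 0.5 * np.sin(2 * np.pi * 7.1 * t)  # synthetic second mode
           + 0.1 * np.random.randn(t.size))     # measurement noise
    print(natural_frequencies(acc, fs))         # ~ [2.4, 7.1] Hz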

    Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software.

    Objective: The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. Materials and Methods: MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to the contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients (ICC), cluster consensus, and the Rand statistic. Results: Most of the radiomic features in GBM were highly stable. Over 90% of the 180 features showed good stability (ICC ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first-order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥ 1), while over 35% of the texture features showed poor NDR (< 1). Features clustered into only 5 groups, indicating that they were highly redundant. Conclusion: The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability, thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics.
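
    A minimal sketch of the two quoted stability and dynamic-range metrics: ICC(2,1) computed from a subjects-by-raters matrix, plus an NDR stand-in. The abstract does not give the paper's NDR formula, so the one below (feature range across subjects divided by mean absolute inter-rater difference) is an assumption, and the synthetic data are for illustration only.

    import numpy as np

    def icc_2_1(Y):
        # Two-way random-effects, absolute-agreement ICC(2,1) for an
        # (n subjects x k raters) matrix of one feature's values.
        n, k = Y.shape
        grand = Y.mean()
        msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
        msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters
        sse = ((Y - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    def ndr(Y):
        # Assumed definition: feature range across subjects divided by the
        # mean absolute inter-rater difference (the paper's formula may differ).
        return (Y.max() - Y.min()) / np.abs(Y[:, 0] - Y[:, 1]).mean()

    rng = np.random.default_rng(0)
    truth = rng.normal(0.0, 1.0, 45)  # 45 cases, matching the study size
    Y = np.column_stack([truth + 0.1 * rng.normal(size=45) for _ in range(2)])
    print(f"ICC(2,1) = {icc_2_1(Y):.3f}, NDR = {ndr(Y):.1f}")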

    An investigation into the perspectives of providers and learners on MOOC accessibility

    An effective open eLearning environment should consider the target learner's abilities, learning goals, where learning takes place, and which specific device(s) the learner uses. MOOC platforms struggle to take these factors into account and are typically not accessible, inhibiting access to environments that are intended to be open to all. A series of research initiatives are described that are intended to help MOOC providers achieve greater accessibility, and to help disabled learners improve their lifelong learning and re-skilling. In this paper, we first outline the rationale, the research questions, and the methodology. The research approach includes interviews, online surveys and a MOOC accessibility audit; we also cover factors such as the risk management of the research programme and the ethical considerations of conducting research with vulnerable learners. Preliminary results are presented from interviews with providers and experts and from analysis of learner surveys. Finally, we outline future research opportunities. This paper is framed within the context of the Doctoral Consortium organised at the TEEM'17 conference.

    Trying to break new ground in aerial archaeology

    Aerial reconnaissance continues to be a vital tool for landscape-oriented archaeological research. Although a variety of remote sensing platforms operate within the earth’s atmosphere, the majority of aerial archaeological information is still derived from oblique photographs collected during observer-directed reconnaissance flights, a prospection approach that has dominated archaeological aerial survey for the past century. The resulting highly biased imagery is generally catalogued in sub-optimal (spatial) databases, if at all, after which a small selection of images is orthorectified and interpreted. For decades, this has been the standard approach. Although many innovations, including digital cameras, inertial units, photogrammetry and computer vision algorithms, geographic(al) information systems and computing power have emerged, their potential has not yet been fully exploited in order to re-invent and highly optimise this crucial branch of landscape archaeology. The authors argue that a fundamental change is needed to transform the way aerial archaeologists approach data acquisition and image processing. By addressing the very core concepts of geographically biased aerial archaeological photographs and proposing new imaging technologies, data handling methods and processing procedures, this paper gives a personal opinion on how the methodological components of aerial archaeology, and specifically aerial archaeological photography, should evolve during the next decade if developing a more reliable record of our past is to be our central aim. In this paper, a possible practical solution is illustrated by outlining a turnkey aerial prospection system for total coverage survey together with a semi-automated back-end pipeline that takes care of photograph correction and image enhancement as well as the management and interpretative mapping of the resulting data products. In this way, the proposed system addresses one of many bias issues in archaeological research: the bias we impart to the visual record as a result of selective coverage. While the total coverage approach outlined here may not altogether eliminate survey bias, it can vastly increase the amount of useful information captured during a single reconnaissance flight while mitigating the discriminating effects of observer-based, on-the-fly target selection. Furthermore, the information contained in this paper should make it clear that with current technology it is feasible to do so. This can radically alter the basis for aerial prospection and move landscape archaeology forward, beyond the inherently biased patterns that are currently created by airborne archaeological prospection.
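
    One possible shape for such a back-end pipeline, sketched under heavy assumptions: a single enhancement step (percentile contrast stretch) and a coarse spatial index stand in for the photograph-correction and data-management stages the authors describe. Nothing here reflects the authors' actual implementation; the coordinates and images are synthetic.

    import numpy as np

    def contrast_stretch(img, low=2, high=98):
        # Percentile-based contrast stretch, a basic enhancement step.
        lo, hi = np.percentile(img, [low, high])
        return np.clip((img - lo) / (hi - lo), 0, 1)

    def tile_key(lat, lon, cell=0.001):
        # Coarse spatial index key (~100 m cells) for retrieval by location.
        return (round(lat / cell), round(lon / cell))

    def ingest(frames, index):
        # Enhance each georeferenced frame and file it under its spatial key.
        for lat, lon, img in frames:
            index.setdefault(tile_key(lat, lon), []).append(contrast_stretch(img))

    rng = np.random.default_rng(1)
    frames = [(48.2082 + rng.normal(0, 1e-3), 16.3738 + rng.normal(0, 1e-3),
               rng.random((64, 64))) for _ in range(5)]  # synthetic aerial frames
    index = {}
    ingest(frames, index)
    print({k: len(v) for k, v in index.items()})  # frames per spatial cell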

    AutoDiscern: Rating the Quality of Online Health Information with Hierarchical Encoder Attention-based Neural Networks

    Patients increasingly turn to search engines and online content before, or in place of, talking with a health professional. Low-quality health information, which is common on the internet, presents risks to the patient in the form of misinformation and a possibly poorer relationship with their physician. To address this, the DISCERN criteria (developed at the University of Oxford) are used to evaluate the quality of online health information. However, patients are unlikely to take the time to apply these criteria to the health websites they visit. We built an automated implementation of the DISCERN instrument (Brief version) using machine learning models. We compared the performance of a traditional model (Random Forest) with that of a hierarchical encoder attention-based neural network (HEA) model using two language embeddings, BERT and BioBERT. The HEA BERT and BioBERT models achieved average F1-macro scores across all criteria of 0.75 and 0.74, respectively, outperforming the Random Forest model (average F1-macro = 0.69). Overall, the neural-network-based models achieved 81% and 86% average accuracy at 100% and 80% coverage, respectively, compared to 94% manual rating accuracy. The attention mechanism implemented in the HEA architectures not only provided 'model explainability' by identifying reasonable supporting sentences for the documents fulfilling the Brief DISCERN criteria, but also boosted F1 performance by 0.05 compared to the same architecture without an attention mechanism. Our research suggests that it is feasible to automate online health information quality assessment, which is an important step towards empowering patients to become informed partners in the healthcare process.
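
    To illustrate the hierarchical encoder-attention idea, here is a minimal PyTorch sketch: precomputed BERT-sized sentence vectors are contextualised by a GRU, pooled with additive attention (whose weights serve as the per-sentence explanations mentioned above), and classified for a single Brief DISCERN criterion. Dimensions, layer choices and the binary head are assumptions; this is not the authors' published architecture.

    import torch
    import torch.nn as nn

    class AttentionPool(nn.Module):
        # Additive attention over sentence encodings; the weights double
        # as a per-sentence explanation of the document-level prediction.
        def __init__(self, dim):
            super().__init__()
            self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))
        def forward(self, sents):                 # (batch, n_sents, dim)
            w = torch.softmax(self.score(sents), dim=1)
            return (w * sents).sum(dim=1), w      # pooled doc vector + weights

    class HEAClassifier(nn.Module):
        # GRU over precomputed sentence vectors, attention pooling,
        # then a binary head for one Brief DISCERN criterion.
        def __init__(self, dim=768, hidden=128):
            super().__init__()
            self.gru = nn.GRU(dim, hidden, batch_first=True, bidirectional=True)
            self.pool = AttentionPool(2 * hidden)
            self.head = nn.Linear(2 * hidden, 2)
        def forward(self, sent_vecs):
            ctx, _ = self.gru(sent_vecs)
            doc, attn = self.pool(ctx)
            return self.head(doc), attn

    # toy batch: 4 documents, 30 sentences each, 768-dim (BERT-sized) vectors
    x = torch.randn(4, 30, 768)
    logits, attn = HEAClassifier()(x)
    print(logits.shape, attn.shape)  # torch.Size([4, 2]) torch.Size([4, 30, 1])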