
    Applying quantitative bias analysis to estimate the plausible effects of selection bias in a cluster randomised controlled trial: secondary analysis of the Primary care Osteoarthritis Screening Trial (POST).

    BACKGROUND: Selection bias is a concern when designing cluster randomised controlled trials (c-RCTs). Despite addressing potential issues at the design stage, bias cannot always be eradicated from a trial design. The application of bias analysis presents an important step forward in evaluating whether trial findings are credible. The aim of this paper is to give an example of the technique to quantify potential selection bias in c-RCTs. METHODS: This analysis uses data from the Primary care Osteoarthritis Screening Trial (POST). The primary aim of this trial was to test whether screening patients consulting their GP with osteoarthritis for anxiety and depression, and providing appropriate care, would improve clinical outcomes. Quantitative bias analysis is a seldom-used technique that can quantify the types of bias present in studies. A simple bias analysis was applied to the study. Because information on the selection probability was lacking, probabilistic bias analysis with a range of triangular distributions was also used, applied at all three follow-up time points: 3, 6, and 12 months post consultation. RESULTS: Worse pain outcomes were observed among intervention participants than control participants (crude odds ratios at 3, 6, and 12 months: 1.30 (95% CI 1.01, 1.67), 1.39 (95% CI 1.07, 1.80), and 1.17 (95% CI 0.90, 1.53), respectively). Probabilistic bias analysis suggested that the observed effect became statistically non-significant if the selection probability ratio was between 1.2 and 1.4. Selection probability ratios of > 1.8 would have been needed to mask a statistically significant benefit of the intervention. CONCLUSIONS: The use of probabilistic bias analysis in this c-RCT suggested that the worse outcomes observed in the intervention arm could plausibly be attributed to selection bias. A very large degree of selection bias would have been needed to mask a beneficial effect of the intervention, making this interpretation less plausible.
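
    To make the method concrete, the sketch below shows one common way a probabilistic bias analysis for selection bias can be set up: selection-probability ratios are drawn from a triangular distribution and combined with the observed odds ratio and its random error, and the adjusted odds ratios are summarised over many draws. This is a minimal illustration rather than the POST analysis itself; the observed odds ratio reuses the 3-month result above only as an example, and the triangular-distribution parameters are assumptions.

```python
# Minimal sketch of a probabilistic bias analysis for selection bias.
# Not the POST analysis itself: the triangular-distribution parameters are
# illustrative assumptions; the observed OR/CI reuse the 3-month result above.
import numpy as np

rng = np.random.default_rng(42)

# Observed (potentially biased) crude odds ratio and its 95% CI.
or_obs, ci_low, ci_high = 1.30, 1.01, 1.67
se_log_or = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)

n_draws = 100_000

# Selection-probability ratio (bias factor): drawn from a triangular
# distribution because only a plausible range, not the true value, is known.
sp_ratio = rng.triangular(left=1.0, mode=1.2, right=1.4, size=n_draws)

# Simple correction applied on each draw: adjusted OR = observed OR / bias factor,
# with random error added on the log-odds scale.
log_or_adj = rng.normal(np.log(or_obs), se_log_or, n_draws) - np.log(sp_ratio)
or_adj = np.exp(log_or_adj)

lo, med, hi = np.percentile(or_adj, [2.5, 50, 97.5])
print(f"Bias-adjusted OR: {med:.2f} (95% simulation interval {lo:.2f}, {hi:.2f})")
```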

    Hierarchical Re-estimation of Topic Models for Measuring Topical Diversity

    A high degree of topical diversity is often considered to be an important characteristic of interesting text documents. A recent proposal for measuring topical diversity identifies three elements for assessing diversity: words, topics, and documents as collections of words. Topic models play a central role in this approach. Using standard topic models for measuring diversity of documents is suboptimal due to generality and impurity. General topics only include common information from a background corpus and are assigned to most of the documents in the collection. Impure topics contain words that are not related to the topic; impurity lowers the interpretability of topic models, and impure topics are likely to be assigned to documents erroneously. We propose a hierarchical re-estimation approach for topic models to combat generality and impurity; the proposed approach operates at three levels: words, topics, and documents. Our re-estimation approach for measuring documents' topical diversity outperforms the state of the art on the PubMed dataset, which is commonly used for diversity experiments. Comment: Proceedings of the 39th European Conference on Information Retrieval (ECIR 2017).
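
    As a rough illustration of the generality and impurity problems described above (not the paper's re-estimation procedure), the sketch below removes words from topic-word distributions that are not clearly above a background distribution, so that a fully general topic collapses, and then scores a document's topical diversity as the entropy of its topic mixture; the toy matrices and threshold are invented.

```python
# Rough sketch of the generality/impurity idea: suppress words explained by a
# background distribution, then score topical diversity via the entropy of a
# document's topic mixture. Not the paper's method; all toy values are invented.
import numpy as np

def purify_topics(topic_word, background, threshold=1.5):
    """Keep only words whose topic probability clearly exceeds the background."""
    purified = np.where(topic_word > threshold * background, topic_word, 0.0)
    sums = purified.sum(axis=1, keepdims=True)
    # A fully general topic loses all of its words and collapses to zeros.
    return np.divide(purified, sums, out=np.zeros_like(purified), where=sums > 0)

def topical_diversity(doc_topic):
    """Shannon entropy of a document's topic mixture (higher = more diverse)."""
    p = np.asarray(doc_topic)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Toy data: three topics over five words; the third topic equals the background.
topic_word = np.array([[0.40, 0.30, 0.10, 0.10, 0.10],
                       [0.10, 0.10, 0.40, 0.30, 0.10],
                       [0.20, 0.20, 0.20, 0.20, 0.20]])
background = np.array([0.20, 0.20, 0.20, 0.20, 0.20])

print(purify_topics(topic_word, background))
print("diversity:", topical_diversity([0.5, 0.4, 0.1]))
```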

    Impaired hypertrophy in myoblasts is improved with testosterone administration

    We investigated the ability of testosterone (T) to restore differentiation in multiple population doubled (PD) murine myoblasts, previously shown to have reduced differentiation in monolayer and bioengineered skeletal muscle cultures vs. their parental controls (CON) (Sharples et al., 2011, 2012 [7] and [26]). Cells were exposed to low serum conditions in the presence or absence of T (100 nM) ± PI3K inhibitor (LY294002) for 72 h and 7 days (early and late muscle differentiation, respectively). Morphological analyses were performed to determine myotube number, diameter (μm) and myonuclear accretion as indices of differentiation and myotube hypertrophy. Changes in gene expression for myogenin, mTOR and myostatin were also assessed. Myotube diameter in CON and PD cells increased from 17.32 ± 2.56 μm to 21.02 ± 1.89 μm and from 14.58 ± 2.66 μm to 18.29 ± 3.08 μm (P ≤ 0.05), respectively, after 72 h of T exposure. The increase was comparable in both PD (+25%) and CON cells (+21%), suggesting a similar intrinsic ability to respond to exogenous T administration. T treatment also significantly increased myonuclear accretion (% of myotubes expressing 5+ nuclei) in both cell types after 7 days of exposure (P ≤ 0.05). Addition of the PI3K inhibitor (LY294002) in the presence of T attenuated these effects on myotube morphology (in both cell types), suggesting a role for the PI3K pathway in T-stimulated hypertrophy. Finally, PD myoblasts showed reduced responsiveness to T-stimulated mRNA expression of mTOR vs. CON cells, and T reduced myostatin expression in PD myoblasts only. The present study demonstrates that testosterone administration improves hypertrophy in myoblasts that basally display impaired differentiation and hypertrophic capacity vs. their parental controls; the action of testosterone in this model was mediated by the PI3K/Akt pathway.

    T²K²: The Twitter Top-K Keywords Benchmark

    Information retrieval from textual data focuses on the construction of vocabularies that contain weighted term tuples. Such vocabularies can then be exploited by various text analysis algorithms to extract new knowledge, e.g., top-k keywords, top-k documents, etc. Top-k keywords are commonly used for various purposes, are often computed on-the-fly, and thus must be computed efficiently. To compare competing weighting schemes and database implementations, benchmarking is customary. To the best of our knowledge, no benchmark currently addresses these problems. Hence, in this paper, we present a top-k keywords benchmark, T²K², which features a real tweet dataset and queries with various complexities and selectivities. T²K² helps evaluate weighting schemes and database implementations in terms of computing performance. To illustrate T²K²'s relevance and genericity, we successfully performed tests on the TF-IDF and Okapi BM25 weighting schemes, on the one hand, and on different relational (Oracle, PostgreSQL) and document-oriented (MongoDB) database implementations, on the other hand.
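
    As a small, self-contained illustration of the workload this benchmark targets (not the benchmark's own code), the sketch below computes aggregate TF-IDF weights over a handful of toy tweets and returns the top-k keywords; the example tweets, the particular aggregation convention, and k are assumptions made for the example.

```python
# Simple illustration of a top-k keywords workload using TF-IDF weighting.
# Not the T²K² benchmark implementation; the toy tweets and k are made up.
import math
from collections import Counter

tweets = [
    "new benchmark for top k keywords on tweets",
    "tf idf and bm25 are classic weighting schemes",
    "postgresql and mongodb store the tweet vocabulary",
]

def top_k_keywords(docs, k=5):
    tokenized = [doc.split() for doc in docs]
    n_docs = len(tokenized)
    # Document frequency of each term.
    df = Counter(term for doc in tokenized for term in set(doc))
    scores = Counter()
    for doc in tokenized:
        tf = Counter(doc)
        for term, freq in tf.items():
            # Aggregate TF-IDF over the corpus (one common convention among several).
            scores[term] += (freq / len(doc)) * math.log(n_docs / df[term])
    return scores.most_common(k)

print(top_k_keywords(tweets, k=5))
```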

    The design-by-adaptation approach to universal access: learning from videogame technology

    This paper proposes an alternative approach to the design of universally accessible interfaces to that provided by formal design frameworks applied ab initio to the development of new software. This approach, design-by-adaptation, involves the transfer of interface technology and/or design principles from one application domain to another, in situations where the recipient domain is similar to the host domain in terms of modelled systems, tasks and users. Using the example of interaction in 3D virtual environments, the paper explores how principles underlying the design of videogame interfaces may be applied to a broad family of visualization and analysis software which handles geographical data (virtual geographic environments, or VGEs). One of the motivations behind the current study is that VGE technology lags some way behind videogame technology in the modelling of 3D environments, and has a less-developed track record in providing the variety of interaction methods needed to undertake varied tasks in 3D virtual worlds by users with varied levels of experience. The current analysis extracted a set of interaction principles from videogames which were used to devise a set of 3D task interfaces that have been implemented in a prototype VGE for formal evaluation.

    Keele Aches and Pains Study Protocol: validity, acceptability and feasibility of the Keele STarT MSK Tool for subgrouping musculoskeletal patients in primary care

    Musculoskeletal conditions represent a considerable burden worldwide, and are predominantly managed in primary care. Evidence suggests that many musculoskeletal conditions share similar prognostic factors. Systematically assessing patients' prognosis, and matching treatments based on prognostic subgroups (stratified care), has been shown to be clinically and cost effective. This study (Keele Aches and Pains Study: KAPS) aims to refine and examine the validity of a brief questionnaire (the Keele STarT MSK Tool), designed to enable risk-stratification of primary care patients with the five most common musculoskeletal pain presentations. We will also describe the subgroups of patients, and explore the acceptability and feasibility of using the tool, and how the tool is best implemented in clinical practice. The study design is mixed methods: a prospective, quantitative observational cohort study with a linked qualitative focus group and interview study. Patients who have consulted their General Practitioner or Healthcare Practitioner (GP/HCP) about a relevant musculoskeletal condition will be recruited from general practice. Participating patients will complete a baseline questionnaire (shortly after consultation), plus questionnaires 2 and 6 months later. A sub-sample of patients, along with participating GPs and HCPs, will be invited to take part in qualitative focus groups and interviews. The Keele STarT MSK Tool will be refined based on face, discriminant, construct and predictive validity at baseline and 2 months, and validated using data from the 6-month follow-up. Patient and clinician perspectives about using the tool will be explored. This study will provide a validated prognostic tool (the Keele STarT MSK Tool) with established cut-points to stratify patients with the five most common musculoskeletal presentations into low, medium and high risk subgroups. The qualitative analysis of patient and healthcare perspectives will inform how to embed the tool into clinical practice using established general practice IT systems and clinician support packages.

    The INCLUDE study: INtegrating and improving Care for patients with infLammatory rheUmatological DisordErs in the community; identifying multimorbidity: Protocol for a pilot randomized controlled trial.

    Background: Patients with inflammatory rheumatic conditions such as rheumatoid arthritis, polymyalgia rheumatica and ankylosing spondylitis are at increased risk of common comorbidities such as cardiovascular disease, osteoporosis, and anxiety and depression, which lead to increased morbidity and mortality. These associated morbidities are often unrecognized and undertreated. While patients with other long-term conditions such as diabetes are invited for routine reviews in primary care, which may include identification and management of comorbidities, at present this does not occur for patients with inflammatory conditions, and thus opportunities to diagnose and optimally manage these comorbidities are missed. Objective: To evaluate the feasibility and acceptability of a nurse-led integrated care review (the INtegrating and improving Care for patients with infLammatory rheUmatological DisordErs in the community (INCLUDE) review) for people with inflammatory rheumatological conditions in primary care. Design: A pilot cluster randomized controlled trial will be undertaken to test the feasibility and acceptability of a nurse-led integrated primary care review for identification, assessment and initial management of common comorbidities including cardiovascular disease, osteoporosis, and anxiety and depression. A process evaluation will be undertaken using a mixed methods approach including participant self-reported questionnaires, a medical record review, an INCLUDE EMIS template, intervention fidelity checking using audio-recordings of the INCLUDE review consultation, and qualitative interviews with patient participants, study nurses and study general practitioners (GPs). Discussion: Success of the pilot study will be measured against the engagement, recruitment and study retention rates of both general practices and participants. Acceptability of the INCLUDE review to patients and practitioners and treatment fidelity will be explored using a parallel process evaluation. Trial Registration: ISRCTN12765345.

    Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline

    From medical charts to national censuses, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques. Comment: Healthcare Informatics, Machine Learning, Knowledge Discovery: 20 pages, 1 figure.
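
    To make the pipeline framing concrete, the sketch below strings together the preprocessing and model-building phases for the kind of messy, mixed-type tabular data healthcare produces (missing values, categorical codes). It is a minimal, hedged illustration rather than anything taken from the chapter; the toy records, feature names and model choice are all assumptions.

```python
# Minimal sketch of a preprocessing + modelling pipeline for messy, mixed-type
# tabular data of the kind healthcare produces (missing values, categorical codes).
# Not from the chapter; the toy records and feature names are invented.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical records: age and a lab value with gaps, plus a diagnosis code.
X = pd.DataFrame({
    "age":       [54, 61, np.nan, 47, 70, 38],
    "lab_value": [1.2, np.nan, 0.8, 1.5, np.nan, 0.9],
    "dx_code":   ["I10", "E11", "I10", np.nan, "E11", "I10"],
})
y = [1, 1, 0, 1, 0, 0]   # toy outcome labels

numeric = ["age", "lab_value"]
categorical = ["dx_code"]

preprocess = ColumnTransformer([
    # Impute missing numeric values, then standardise.
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    # Impute and one-hot encode categorical codes, tolerating unseen categories.
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical),
])

model = Pipeline([("preprocess", preprocess),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(X, y)
print(model.predict(X))
```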

    VEZF1 elements mediate protection from DNA methylation

    There is growing consensus that genome organization and long-range gene regulation involves partitioning of the genome into domains of distinct epigenetic chromatin states. Chromatin insulator or barrier elements are key components of these processes as they can establish boundaries between chromatin states. The ability of elements such as the paradigm β-globin HS4 insulator to block the range of enhancers or the spread of repressive histone modifications is well established. Here we have addressed the hypothesis that a barrier element in vertebrates should be capable of defending a gene from silencing by DNA methylation. Using an established stable reporter gene system, we find that HS4 acts specifically to protect a gene promoter from de novo DNA methylation. Notably, protection from methylation can occur in the absence of histone acetylation or transcription. There is a division of labor at HS4; the sequences that mediate protection from methylation are separable from those that mediate CTCF-dependent enhancer blocking and USF-dependent histone modification recruitment. The zinc finger protein VEZF1 was purified as the factor that specifically interacts with the methylation protection elements. VEZF1 is a candidate CpG island protection factor as the G-rich sequences bound by VEZF1 are frequently found at CpG island promoters. Indeed, we show that VEZF1 elements are sufficient to mediate demethylation and protection of the APRT CpG island promoter from DNA methylation. We propose that many barrier elements in vertebrates will prevent DNA methylation in addition to blocking the propagation of repressive histone modifications, as either process is sufficient to direct the establishment of an epigenetically stable silent chromatin state.

    The role of the right temporoparietal junction in perceptual conflict: detection or resolution?

    The right temporoparietal junction (rTPJ) is a polysensory cortical area that plays a key role in perception and awareness. Neuroimaging evidence shows activation of the rTPJ in intersensory and sensorimotor conflict situations, but it remains unclear whether this activity reflects detection or resolution of such conflicts. To address this question, we manipulated the relationship between touch and vision using the so-called mirror-box illusion. Participants' hands lay on either side of a mirror, which occluded their left hand and reflected their right hand, but created the illusion that they were looking directly at their left hand. The experimenter simultaneously touched either the middle (D3) or the ring finger (D4) of each hand. Participants judged which finger was touched on their occluded left hand. The visual stimulus corresponding to the touch on the right hand was therefore either congruent (same finger as touch) or incongruent (different finger from touch) with the task-relevant touch on the left hand. Single-pulse transcranial magnetic stimulation (TMS) was delivered to the rTPJ immediately after touch. Accuracy in localizing the left touch was worse for D4 than for D3, particularly when visual stimulation was incongruent. However, following TMS, accuracy improved selectively for D4 in incongruent trials, suggesting that the effects of the conflicting visual information were reduced. These findings suggest a role of the rTPJ in detecting, rather than resolving, intersensory conflict.