3,370 research outputs found

    Towards an Ontology-Based Phenotypic Query Model

    Clinical research based on data from patient or study data management systems plays an important role in transferring basic findings into the daily practice of physicians. To support study recruitment, diagnostic processes, and risk factor evaluation, search queries for such management systems can be used. Typically, both the query syntax and the underlying data structure vary greatly between different data management systems. This makes it difficult for domain experts (e.g., clinicians) to build and execute search queries. In this work, the Core Ontology of Phenotypes is used as a general model for phenotypic knowledge. This knowledge is required to create search queries that determine and classify individuals (e.g., patients or study participants) whose morphology, function, behaviour, or biochemical and physiological properties match specific phenotype classes. A specific model describing a set of particular phenotype classes is called a Phenotype Specification Ontology. Such an ontology can be automatically converted into search queries on data management systems. The methods described have already been used successfully in several projects. Using ontologies to model phenotypic knowledge for patient or study data management systems is a viable approach: it allows clinicians to model phenotypes from a domain perspective without knowing the actual data structure or query language.
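    The paper's own tooling is not shown in this abstract; purely as a minimal, hypothetical sketch of the general idea, the Python snippet below (with made-up class and column names) turns a phenotype class defined as a set of attribute restrictions into a parameterised SQL query against a patient table:

    from dataclasses import dataclass

    # Hypothetical, simplified stand-in for a phenotype class from a
    # Phenotype Specification Ontology: a named set of attribute restrictions.
    @dataclass
    class Restriction:
        attribute: str   # a column in the patient data management system
        operator: str    # one of: =, <, <=, >, >=
        value: object

    @dataclass
    class PhenotypeClass:
        name: str
        restrictions: list

    def to_sql(phenotype, table="patients"):
        """Translate a phenotype class into a parameterised SQL query."""
        clauses = [f"{r.attribute} {r.operator} ?" for r in phenotype.restrictions]
        params = [r.value for r in phenotype.restrictions]
        return "SELECT patient_id FROM {} WHERE {}".format(table, " AND ".join(clauses)), params

    # Example: an invented "obese adult" phenotype class.
    obese_adult = PhenotypeClass("ObeseAdult", [Restriction("age_years", ">=", 18),
                                                Restriction("bmi", ">=", 30.0)])
    print(to_sql(obese_adult))
    # ('SELECT patient_id FROM patients WHERE age_years >= ? AND bmi >= ?', [18, 30.0])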

    ASCOT: a text mining-based web-service for efficient search and assisted creation of clinical trials

    Clinical trials are mandatory protocols describing medical research on humans and are among the most valuable sources of medical practice evidence. Searching for trials relevant to a given query is laborious due to the immense number of existing protocols. Apart from search, writing new trial protocols involves composing detailed eligibility criteria, which can be time-consuming, especially for new researchers. In this paper we present ASCOT, an efficient search application customised for clinical trials. ASCOT uses text mining and data mining methods to enrich clinical trials with metadata, which in turn serve as effective tools to narrow down search. In addition, ASCOT integrates a component for recommending eligibility criteria based on a set of selected protocols.
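    The abstract does not detail ASCOT's internals; as an illustrative sketch only (with invented protocol IDs and criteria), the idea of recommending eligibility criteria from a set of selected protocols can be reduced to ranking criteria by how often they occur across that set:

    from collections import Counter

    # Hypothetical eligibility criteria text-mined from a set of selected
    # protocols (illustrative only, not ASCOT's actual data model).
    selected_protocols = {
        "NCT-A": ["age >= 18", "septic shock", "no pregnancy"],
        "NCT-B": ["age >= 18", "septic shock", "informed consent"],
        "NCT-C": ["age >= 18", "informed consent", "no malignancy"],
    }

    def recommend_criteria(protocols, top_n=3):
        """Recommend criteria that occur most often across the selected protocols."""
        counts = Counter(c for criteria in protocols.values() for c in criteria)
        return counts.most_common(top_n)

    print(recommend_criteria(selected_protocols))
    # [('age >= 18', 3), ('septic shock', 2), ('informed consent', 2)]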

    Standardizing data exchange for clinical research protocols and case report forms: An assessment of the suitability of the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM)

    Efficient communication of a clinical study protocol and case report forms during all stages of a human clinical study is important for many stakeholders. An electronic and structured study representation format that can be used throughout the whole study life-span can improve such communication and potentially lower total study costs. The most relevant standard for representing clinical study data, applicable to unregulated as well as regulated studies, is the Operational Data Model (ODM), in development since 1999 by the Clinical Data Interchange Standards Consortium (CDISC). ODM's initial objective was the exchange of case report form data, but it is increasingly utilized in other contexts. An ODM extension called Study Design Model, introduced in 2011, provides additional protocol representation elements. Using a case study approach, we evaluated ODM's ability to capture all necessary protocol elements during a complete clinical study lifecycle in the Intramural Research Program of the National Institutes of Health. ODM offers the advantage of a single format for institutions that deal with hundreds or thousands of concurrent clinical studies and maintain a data warehouse for these studies. For each study stage, we present a list of gaps in the ODM standard and identify necessary vendor or institutional extensions that can compensate for such gaps. The current version of ODM (1.3.2) has only partial support for study protocol and study registration data, mainly because these are outside the original development goal. ODM provides comprehensive support for representation of case report forms (both in the design stage and with patient-level data). Inclusion of the requirements of observational, non-regulated, or investigator-initiated studies (outside Food and Drug Administration (FDA) regulation) can further improve future revisions of the standard.
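    As a rough illustration of the kind of metadata ODM carries, the following deliberately simplified fragment is loosely modelled on ODM's Study/MetaDataVersion/FormDef/ItemDef structure (it is not a complete or validated ODM 1.3.2 document) and is read with Python's standard XML parser:

    import xml.etree.ElementTree as ET

    # Simplified, illustrative fragment loosely modelled on ODM's structure;
    # element and attribute choices are minimal and not schema-validated.
    ODM_SNIPPET = """
    <ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
      <Study OID="ST.001">
        <MetaDataVersion OID="MDV.1" Name="Baseline CRFs">
          <FormDef OID="F.DEMOG" Name="Demographics" Repeating="No"/>
          <ItemDef OID="I.AGE" Name="Age" DataType="integer"/>
          <ItemDef OID="I.SEX" Name="Sex" DataType="text"/>
        </MetaDataVersion>
      </Study>
    </ODM>
    """

    NS = {"odm": "http://www.cdisc.org/ns/odm/v1.3"}
    root = ET.fromstring(ODM_SNIPPET)

    # List every case report form and item defined in the metadata version.
    for form in root.findall(".//odm:FormDef", NS):
        print("Form:", form.get("OID"), form.get("Name"))
    for item in root.findall(".//odm:ItemDef", NS):
        print("Item:", item.get("OID"), item.get("Name"), item.get("DataType"))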

    Effect of antenatal milk expression education on lactation outcomes in birthing people with pre-pregnancy body mass index ≥ 25: Protocol for a randomized, controlled trial

    Background: Birthing people with pre-pregnancy body mass indices (BMIs) ≥ 25 kg/m², particularly those without prior breastfeeding experience, are at increased risk for suboptimal lactation outcomes. Antenatal milk expression (AME) may be one way to counteract the negative effects of early infant formula supplementation common in this population. Methods: This ongoing, randomized controlled trial in the United States evaluates the efficacy of a telelactation-delivered AME education intervention versus an attention control condition on lactation outcomes to 1 year postpartum among 280 nulliparous-to-primiparous, non-diabetic birthing people with pre-pregnancy BMI ≥ 25 kg/m². The assigned study treatment is delivered via four weekly online video consultations between gestational weeks 37-40. Participants assigned to AME meet with study personnel and a lactation consultant to learn and practice AME. Instructions are provided for home practice of AME between study visits. Control group participants view videos on infant care/development at study visits. Participants complete emailed surveys at enrollment (34 0/7 to 36 6/7 weeks of gestation) and 2 weeks, 6 weeks, 12 weeks, 6 months, and 12 months postpartum. Surveys assess lactation and infant feeding practices; breastfeeding self-efficacy, attitudes, and satisfaction; perception of insufficient milk; onset of lactogenesis-II; lactation support and problems; and reasons for breastfeeding cessation. Surveys also assess factors associated with lactation outcomes, including demographic characteristics, health problems, birth trauma, racial discrimination, and weight stigma. Health information and infant feeding data are abstracted from the pregnancy and birth center electronic health record. Milk samples are collected from the intervention group at each study visit and from both groups at each postpartum follow-up for future analyses. Qualitative interviews are conducted at 6 weeks postpartum to understand AME experiences. Primary outcomes of interest are breastfeeding exclusivity and breastfeeding self-efficacy scores at 2 weeks postpartum. Outcomes will be examined longitudinally with generalized linear mixed-effects modeling. Discussion: This is the first adequately powered trial evaluating the effectiveness of AME among U.S. birthing people and within a non-diabetic population with pre-pregnancy BMI ≥ 25 kg/m². This study will also provide the first evidence of acceptability and effectiveness of telelactation-delivered AME. Trial registration: ClinicalTrials.gov: NCT04258709.
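    The trial's statistical analysis plan is not given here; as a hedged sketch of the kind of longitudinal mixed-effects analysis mentioned, the snippet below fits a linear mixed model with a random intercept per participant to simulated self-efficacy scores (all variable names, effect sizes and data are invented; the binary exclusivity outcome would instead require a logistic mixed model):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated long-format data standing in for repeated self-efficacy scores.
    rng = np.random.default_rng(0)
    n, visits = 280, [2, 6, 12, 26, 52]            # weeks postpartum
    rows = []
    for pid in range(n):
        group = pid % 2                            # 0 = control, 1 = AME
        base = rng.normal(50, 8)                   # participant-level random intercept
        for week in visits:
            score = base + 0.1 * week + 3 * group + rng.normal(0, 4)
            rows.append({"pid": pid, "week": week, "group": group, "score": score})
    df = pd.DataFrame(rows)

    # Linear mixed-effects model: fixed effects for time, group and their
    # interaction; random intercept per participant.
    model = smf.mixedlm("score ~ week * group", df, groups=df["pid"])
    print(model.fit().summary())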

    Natural language processing for mimicking clinical trial recruitment in critical care: a semi-automated simulation based on the LeoPARDS trial

    Clinical trials often fail to recruit an adequate number of appropriate patients. Identifying eligible trial participants is resource-intensive when relying on manual review of clinical notes, particularly in critical care settings where the time window is short. Automated review of electronic health records (EHR) may help, but much of the information is in free text rather than a computable form. We applied natural language processing (NLP) to free-text EHR data using the CogStack platform to simulate recruitment into the LeoPARDS study, a clinical trial aiming to reduce organ dysfunction in septic shock. We applied an algorithm to identify eligible patients using a moving 1-hour time window, and compared patients identified by our approach with those actually screened and recruited for the trial, for the time period during which data were available. We manually reviewed records of a random sample of patients identified by the algorithm but not screened in the original trial. Our method identified 376 patients, including 34 patients with EHR data available who were actually recruited to LeoPARDS in our centre. The sensitivity of CogStack for identifying patients screened was 90% (95% CI 85%, 93%). Of the 203 patients identified by both manual screening and CogStack, the index date matched in 95 (47%) and CogStack was earlier in 94 (47%). In conclusion, analysis of EHR data using NLP could effectively replicate recruitment in a critical care trial, and identify some eligible patients at an earlier stage, potentially improving trial recruitment if implemented in real time.
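    The CogStack pipeline itself is not reproduced here; as a toy sketch of the moving 1-hour window idea, assume an NLP step has already turned free-text notes into timestamped eligibility flags (all data and criterion names below are invented):

    from datetime import datetime, timedelta

    # Toy, timestamped eligibility flags that an NLP pipeline might have
    # extracted from free-text EHR notes.
    observations = [
        ("2019-03-01 10:05", "septic_shock"),
        ("2019-03-01 10:40", "on_vasopressor"),
        ("2019-03-01 13:00", "on_vasopressor"),
    ]
    REQUIRED = {"septic_shock", "on_vasopressor"}

    def first_eligible_time(obs, required, window=timedelta(hours=1)):
        """Return the earliest time at which all required criteria co-occur
        within a moving time window, or None if they never do."""
        events = sorted((datetime.strptime(t, "%Y-%m-%d %H:%M"), c) for t, c in obs)
        for t_i, _ in events:
            seen = {c for t, c in events if t_i <= t <= t_i + window}
            if required <= seen:
                return t_i + window  # index date at the end of the qualifying window
        return None

    print(first_eligible_time(observations, REQUIRED))
    # 2019-03-01 11:05:00 -> both criteria co-occur within one hour of 10:05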

    Automation of legal precedents retrieval: Findings from a literature review

    Mentzingen, H., António, N., & Bação, F. (2023). Automation of legal precedents retrieval: Findings from a literature review. International Journal of Intelligent Systems, 2023, 1-22, Article 6660983. https://doi.org/10.21203/rs.3.rs-2292464/v1, https://doi.org/10.21203/rs.3.rs-2292464/v2, https://doi.org/10.1155/2023/6660983. This work was supported by national funds through FCT (Fundação para a Ciência e a Tecnologia) under the project UIDB/04152/2020, Centro de Investigação em Gestão de Informação (MagIC)/NOVA IMS.

    Judges frequently base their reasoning on precedents. Courts must preserve uniformity in decisions and, depending on the legal system, previous cases may compel rulings. The search for methods to accurately identify similar previous cases is not new and has been a vital input, for example, to case-based reasoning (CBR) methodologies. This literature review offers a comprehensive analysis of the advancements in automating the identification of legal precedents, primarily focusing on the paradigm shift from manual knowledge engineering to the incorporation of Artificial Intelligence (AI) technologies such as natural language processing (NLP) and machine learning (ML). While multiple approaches harnessing NLP and ML show promise, none has emerged as definitively superior, and further validation through statistically significant samples and expert-provided ground truth is imperative. Additionally, this review employs text-mining techniques to streamline the survey process, providing an accurate and holistic view of the current research landscape. By delineating extant research gaps and suggesting avenues for future exploration, this review serves as both a summation and a call for more targeted, empirical investigations.
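    None of the reviewed systems is reproduced here; as a minimal illustration of the retrieval idea the review surveys, the sketch below ranks invented case summaries against a query case by TF-IDF cosine similarity using scikit-learn:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy case summaries (invented); real systems work on full judgments.
    precedents = [
        "Contract breach due to late delivery of goods; damages awarded.",
        "Dismissal of employee without notice; compensation ordered.",
        "Late delivery under a supply contract; penalty clause enforced.",
    ]
    query = "Supplier delivered goods late, buyer claims contract damages."

    vec = TfidfVectorizer(stop_words="english")
    doc_matrix = vec.fit_transform(precedents)
    query_vec = vec.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()

    # Rank precedents by similarity to the query case, most similar first.
    for idx in scores.argsort()[::-1]:
        print(f"{scores[idx]:.2f}  {precedents[idx]}")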

    Timely and reliable evaluation of the effects of interventions: a framework for adaptive meta-analysis (FAME)

    Most systematic reviews are retrospective and use aggregate data (AD) from publications, meaning they can be unreliable, lag behind therapeutic developments, and fail to influence ongoing or new trials. Commonly, the potential influence of unpublished or ongoing trials is overlooked when interpreting results, or when determining the value of updating the meta-analysis or the need to collect individual participant data (IPD). Therefore, we developed a Framework for Adaptive Meta-analysis (FAME) to determine prospectively the earliest opportunity for reliable AD meta-analysis. We illustrate FAME using two systematic reviews in men with metastatic (M1) and non-metastatic (M0) hormone-sensitive prostate cancer (HSPC).
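    FAME itself is a planning framework rather than an algorithm, but the aggregate-data meta-analysis it schedules commonly reduces to inverse-variance pooling of trial-level estimates; a toy fixed-effect example with invented hazard ratios:

    import math

    # Toy trial-level results: (hazard ratio, 95% CI lower, upper); numbers invented.
    trials = [(0.78, 0.64, 0.95), (0.85, 0.70, 1.03), (0.72, 0.55, 0.94)]

    # Fixed-effect inverse-variance pooling on the log scale.
    log_hrs, weights = [], []
    for hr, lo, hi in trials:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE recovered from the CI width
        log_hrs.append(math.log(hr))
        weights.append(1 / se**2)

    pooled = sum(w * x for w, x in zip(weights, log_hrs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    print(f"Pooled HR = {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}"
          f"-{math.exp(pooled + 1.96 * se_pooled):.2f})")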

    Guideline-based decision support in medicine: modeling guidelines for the development and application of clinical decision support systems

    The number and use of decision support systems that incorporate guidelines with the goal of improving care are rapidly increasing. Although developing systems that are both effective in supporting clinicians and accepted by them has proven to be a difficult task, the majority of the systems that were evaluated in a controlled trial showed impact. The work described in this thesis aims to develop a methodology and framework that facilitate all stages of the guideline development process, ranging from the definition of models that represent guidelines to the implementation of run-time systems that provide decision support based on the guidelines developed during the previous stages. The framework consists of 1) a guideline representation formalism that uses the concepts of primitives, Problem-Solving Methods (PSMs) and ontologies to represent guidelines of varying complexity and granularity and from different application domains, 2) a guideline authoring environment that enables guideline authors to define guidelines based on the newly developed guideline representation formalism, and 3) a guideline execution environment that translates defined guidelines into a more efficient symbol-level representation, which can be read in and processed by an execution-time engine. The described methodology and framework were used to develop and validate a number of guidelines and decision support systems in various clinical domains such as Intensive Care, Family Practice, Psychiatry, and the areas of Diabetes and Hypertension control.
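    The thesis' representation formalism (primitives, Problem-Solving Methods, ontologies) is far richer than anything shown here; purely as a sketch of the general pattern of executing a declaratively defined guideline against patient data, with invented rules and thresholds that are not clinical advice:

    # A tiny, declarative stand-in for a guideline: ordered if-then rules over
    # patient findings (illustrative only; thresholds and actions are made up).
    HYPERTENSION_GUIDELINE = [
        {"if": lambda p: p["systolic_bp"] >= 180, "then": "Refer urgently"},
        {"if": lambda p: p["systolic_bp"] >= 140 and p["on_treatment"],
         "then": "Intensify antihypertensive treatment"},
        {"if": lambda p: p["systolic_bp"] >= 140, "then": "Start antihypertensive treatment"},
        {"if": lambda p: True, "then": "Routine follow-up"},
    ]

    def execute(guideline, patient):
        """Return the recommendation of the first rule whose condition holds."""
        for rule in guideline:
            if rule["if"](patient):
                return rule["then"]

    print(execute(HYPERTENSION_GUIDELINE, {"systolic_bp": 152, "on_treatment": False}))
    # Start antihypertensive treatment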