Development and Validation of Clinical Whole-Exome and Whole-Genome Sequencing for Detection of Germline Variants in Inherited Disease
Context: With the decrease in the cost of sequencing, the clinical testing paradigm has shifted from single-gene to gene-panel and now to whole-exome and whole-genome sequencing. Clinical laboratories are rapidly implementing next-generation sequencing-based whole-exome and whole-genome sequencing. Because a large number of targets are covered by whole-exome and whole-genome sequencing, it is critical that a laboratory perform appropriate validation studies, develop a quality assurance and quality control program, and participate in proficiency testing.
Objective: To provide recommendations for whole-exome and whole-genome sequencing assay design, validation, and implementation for the detection of germline variants associated with inherited disorders.
Data Sources: An example of trio sequencing, filtration and annotation of variants, and phenotypic consideration to arrive at a clinical diagnosis is discussed.
Conclusions: It is critical that clinical laboratories planning to implement whole-exome and whole-genome sequencing design and validate the assay to specifications and ensure adequate performance prior to implementation. Test design specifications, including variant filtering and annotation, phenotypic consideration, guidance on consenting options, and reporting of incidental findings, are provided. These are important steps a laboratory must take to validate and implement whole-exome and whole-genome sequencing in a clinical setting for germline variants in inherited disorders.
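The trio filtration step mentioned under Data Sources can be sketched in a few lines of code. The Python below shows one common filtering pattern (keep rare variants, then flag candidate de novo calls in a parent-child trio); the record fields, allele-frequency cutoff, and example variants are hypothetical illustrations, not the validated assay's actual pipeline.

```python
# Illustrative sketch of trio-based variant filtration, not the assay
# described above: record fields (e.g. "gnomad_af"), the cutoff, and
# the example variants are all hypothetical.

AF_CUTOFF = 0.001  # retain only rare variants (population AF < 0.1%)

def is_rare(variant):
    """Keep variants below the population allele-frequency cutoff."""
    return variant.get("gnomad_af", 0.0) < AF_CUTOFF

def is_de_novo(child_gt, father_gt, mother_gt):
    """Candidate de novo: variant allele in child, absent in both parents."""
    return "1" in child_gt and "1" not in father_gt and "1" not in mother_gt

trio_calls = [
    {"chrom": "2", "pos": 166848646, "gene": "SCN1A", "gnomad_af": 0.0,
     "genotypes": {"child": "0/1", "father": "0/0", "mother": "0/0"}},
    {"chrom": "7", "pos": 117559590, "gene": "CFTR", "gnomad_af": 0.02,
     "genotypes": {"child": "0/1", "father": "0/1", "mother": "0/0"}},
]

candidates = [
    v for v in trio_calls
    if is_rare(v) and is_de_novo(v["genotypes"]["child"],
                                 v["genotypes"]["father"],
                                 v["genotypes"]["mother"])
]
for v in candidates:
    print(f"candidate de novo variant in {v['gene']} at {v['chrom']}:{v['pos']}")
```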
Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study
BACKGROUND: Many new clinical prediction rules are derived and validated, but the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with overestimation of clinical prediction rules' performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics.
METHODS: Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. Each validation study was also assessed to determine whether 7 design and 7 reporting characteristics were properly described.
RESULTS: A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2-4.3) larger than validation studies using cohort or unclear designs. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2-3.1) compared to complete, partial and unclear verification. The summary relative DOR (RDOR) of validation studies with inadequate sample size was 1.9 (95% CI: 1.2-3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule itself were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively.
CONCLUSION: Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved.
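For readers unfamiliar with the summary statistic used throughout these results, the diagnostic odds ratio is computed directly from a 2x2 validation table, and the RDOR is simply a ratio of two such values. A minimal Python sketch with invented counts:

```python
# Diagnostic odds ratio (DOR) from a 2x2 validation table, and the
# ratio of two DORs (RDOR) used to compare study designs.
# All counts below are invented for illustration.

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN) = (TP*TN) / (FP*FN)."""
    return (tp * tn) / (fp * fn)

# Hypothetical case-control validation of a prediction rule
dor_case_control = diagnostic_odds_ratio(tp=90, fp=10, fn=15, tn=85)

# Hypothetical cohort validation of the same rule
dor_cohort = diagnostic_odds_ratio(tp=70, fp=25, fn=30, tn=75)

rdor = dor_case_control / dor_cohort
print(f"case-control DOR = {dor_case_control:.1f}")   # 51.0
print(f"cohort DOR       = {dor_cohort:.1f}")          # 7.0
print(f"RDOR             = {rdor:.1f}  # >1: the case-control design looks better")
```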
The role of laboratory medicine in healthcare: quality requirements of immunoassays, standardisation and data management in prospective medicine
In the last 10 years, ELISA and protein-chip technology has developed rapidly and been enthusiastically applied to an enormous variety of biological questions. However, the degree of stringency required in data analysis appears to have been underestimated. As a result, there are numerous published findings of questionable quality that require further confirmation and/or validation. In the course of feasibility and validation studies, a number of key issues in research, development and clinical trial studies must be outlined, including those associated with laboratory design, analytical validation strategies, analytical completeness and data management. The following review should provide assistance in defining key parameters for assay evaluation and validation in research and clinical trial projects in prospective medicine.
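As a concrete instance of the analytical validation parameters such a review would define, assay precision is conventionally summarised as a coefficient of variation. A minimal Python sketch follows, with invented replicate measurements and a commonly cited (but assay-dependent) acceptance threshold:

```python
# Intra-assay precision expressed as coefficient of variation (CV%).
# Replicate values and the acceptance threshold are invented for
# illustration; real acceptance criteria depend on the assay.
from statistics import mean, stdev

def cv_percent(replicates):
    """CV% = (sample standard deviation / mean) * 100."""
    return stdev(replicates) / mean(replicates) * 100

# Hypothetical ELISA optical-density replicates for one control sample
replicates = [0.82, 0.85, 0.79, 0.84, 0.81]

cv = cv_percent(replicates)
threshold = 15.0  # commonly used ceiling for intra-assay CV%
print(f"intra-assay CV = {cv:.1f}% ({'pass' if cv <= threshold else 'fail'})")
```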
E-infrastructures fostering multi-centre collaborative research into the intensive care management of patients with brain injury
Clinical research is becoming ever more collaborative, with multi-centre trials now common practice. With this in mind, never has it been more important to have secure access to data and, in so doing, to tackle the challenges of inter-organisational data access and usage. This is especially the case for research conducted within the brain injury domain, owing to the complicated multi-trauma nature of the disease and the associated complex collation of time-series data of varying resolution and quality. It is now widely accepted that advances in treatment within this group of patients will only be delivered if the technical infrastructures underpinning the collection and validation of multi-centre research data for clinical trials are improved. In recognition of this need, IT-based multi-centre e-Infrastructures such as the Brain Monitoring with Information Technology group (BrainIT - www.brainit.org) and the Cooperative Study on Brain Injury Depolarisations (COSBID - www.cosbid.de) have been formed. A serious impediment to the effective implementation of these networks is access to the know-how and experience needed to install, deploy and manage security-oriented middleware systems that provide secure access to distributed hospital-based datasets, and especially the linkage of these datasets across sites. The recently funded EU Framework VII ICT project Advanced Arterial Hypotension Adverse Event prediction through a Novel Bayesian Neural Network (AVERT-IT) is focused upon tackling these challenges. This chapter describes the problems inherent to data collection within the brain injury medical domain, the current IT-based solutions designed to address these problems, and how they perform in practice. We outline how the authors have collaborated towards developing Grid solutions to address the major technical issues, and describe a prototype solution which ultimately formed the basis for the AVERT-IT project. We then describe the design of the underlying Grid infrastructure for AVERT-IT and how it will be used to produce novel approaches to data collection, data validation and clinical trial design.
Validating archetypes for the Multiple Sclerosis Functional Composite
Background: Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects have not yet been given sufficient attention. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions.
Methods: A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment. After an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process.
Results: Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming, mostly owing to difficult choices between alternative modelling approaches. The archetype review itself was a straightforward team process with the goal of validating archetypes pragmatically.
Conclusions: The quality of medical information models is crucial to guarantee standardised semantic representation and thereby improve interoperability. The validation process is a practical way to better harmonise models that diverge because of the flexibility deliberately left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.
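To make the idea of tool-enabled review concrete, the following Python sketch runs toy structural checks over a simplified clinical model. It is not the openEHR Clinical Knowledge Manager validator; the model structure, required metadata, and type names are hypothetical simplifications.

```python
# A toy structural check in the spirit of tool-enabled archetype review.
# This is NOT the openEHR/Clinical Knowledge Manager validator; the
# model structure and rules here are hypothetical and much simplified.

REQUIRED_METADATA = {"concept", "purpose", "author"}
ALLOWED_TYPES = {"DV_QUANTITY", "DV_COUNT", "DV_TEXT", "DV_ORDINAL"}

def review_model(model):
    """Return a list of human-readable findings for reviewers."""
    findings = []
    missing = REQUIRED_METADATA - model.get("metadata", {}).keys()
    if missing:
        findings.append(f"missing metadata: {sorted(missing)}")
    for element in model.get("elements", []):
        if element.get("type") not in ALLOWED_TYPES:
            findings.append(f"element '{element.get('name')}' has unknown type "
                            f"{element.get('type')!r}")
    return findings

# Hypothetical model for a timed walk test (one of the MSFC components)
model = {
    "metadata": {"concept": "Timed 25-foot walk", "purpose": "MS assessment"},
    "elements": [
        {"name": "time_to_complete", "type": "DV_QUANTITY"},
        {"name": "assistive_device", "type": "FREE_TEXT"},  # flagged below
    ],
}
for finding in review_model(model):
    print("REVIEW:", finding)  # missing 'author'; unknown type 'FREE_TEXT'
```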
The Sleep Condition Indicator: a clinical screening tool to evaluate insomnia disorder
Objective: Describe the development and psychometric validation of a brief scale (the Sleep Condition Indicator (SCI)) to evaluate insomnia disorder in everyday clinical practice.
Design: The SCI was evaluated across five study samples. Content validity, internal consistency and concurrent validity were investigated.
Participants: 30,941 individuals (71% female) completed the SCI along with other descriptive demographic and clinical information.
Setting: Data acquired on dedicated websites.
Results: The eight-item SCI (concerns about getting to sleep, remaining asleep, sleep quality, daytime personal functioning, daytime performance, duration of sleep problem, nights per week having a sleep problem and extent troubled by poor sleep) had robust internal consistency (α≥0.86) and showed convergent validity with the Pittsburgh Sleep Quality Index and Insomnia Severity Index. A two-item short form (SCI-02: nights per week having a sleep problem, extent troubled by poor sleep), derived using linear regression modelling, correlated strongly with the SCI total score (r=0.90).
Conclusions: The SCI has potential as a clinical screening tool for appraising insomnia symptoms against Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria.
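The short-form derivation reported above amounts to predicting the full-scale total from two items and checking how strongly the two scores correlate. A Python sketch with invented responses (not the SCI's actual items, scoring, or derivation sample):

```python
# Correlating a two-item short form with a full-scale total score.
# The response data below are invented; this does not reproduce the
# SCI's actual items, scoring, or derivation sample.
from statistics import mean

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Each row: one respondent's answers to 8 items, each scored 0-4
responses = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [2, 2, 1, 2, 1, 2, 2, 1],
    [3, 4, 3, 3, 4, 3, 4, 4],
    [1, 0, 1, 1, 0, 1, 0, 0],
    [4, 3, 4, 4, 3, 4, 3, 4],
]

full_totals = [sum(r) for r in responses]
short_totals = [r[6] + r[7] for r in responses]  # the two short-form items

print(f"r(short form, full scale) = {pearson_r(short_totals, full_totals):.2f}")
```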
The LONI QC System: A Semi-Automated, Web-Based and Freely-Available Environment for the Comprehensive Quality Control of Neuroimaging Data.
Quantifying, controlling, and monitoring image quality is an essential prerequisite for ensuring the validity and reproducibility of many types of neuroimaging data analyses. Implementation of quality control (QC) procedures is key to ensuring that neuroimaging data are of high quality and valid for subsequent analyses. We introduce the QC system of the Laboratory of Neuro Imaging (LONI): a web-based system featuring a workflow for the assessment of brain imaging data of various modalities and contrasts. The design allows users to anonymously upload imaging data to the LONI-QC system. The system then computes an exhaustive set of QC metrics, generating a range of scalar and vector statistics that help users perform standardized QC. These procedures are performed in parallel using a large compute cluster. Finally, the system offers an automated QC procedure for structural MRI, which can flag each QC metric as 'good' or 'bad.' Validation using various sets of data acquired from a single scanner and from multiple sites demonstrated the reproducibility of our QC metrics, and the sensitivity and specificity of the proposed Auto QC in identifying 'bad'-quality images in comparison to visual inspection. To the best of our knowledge, LONI-QC is the first online QC system that both computes numerous QC metrics and performs visual and automated image QC of multi-contrast and multi-modal brain imaging data. The LONI-QC system has been used to assess the quality of large neuroimaging datasets acquired as part of various multi-site studies such as the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) Study and the Alzheimer's Disease Neuroimaging Initiative (ADNI). LONI-QC's functionality is freely available to users worldwide, and its adoption by imaging researchers is likely to contribute substantially to upholding high standards of brain image data quality and to implementing these standards across the neuroimaging community.
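The automated flagging described above reduces, per metric, to computing a scalar statistic and thresholding it. The Python sketch below uses a crude, hypothetical signal-to-noise definition and cutoff on a synthetic volume; it is not one of LONI-QC's actual metrics.

```python
# Thresholding a scalar image-quality metric, in the spirit of the
# automated QC described above. The SNR definition and cutoff here are
# hypothetical simplifications, not LONI-QC's actual metrics.
import numpy as np

def foreground_snr(volume, background_frac=0.1):
    """Crude SNR: mean of bright voxels over std of the dimmest voxels."""
    flat = np.sort(volume.ravel())
    n_bg = max(int(flat.size * background_frac), 2)
    background = flat[:n_bg]                  # dimmest voxels approximate noise
    foreground = flat[flat > np.median(flat)]  # bright half approximates signal
    return foreground.mean() / (background.std() + 1e-9)

SNR_CUTOFF = 20.0  # hypothetical acceptance threshold

rng = np.random.default_rng(0)
volume = rng.normal(loc=100.0, scale=4.0, size=(64, 64, 32))  # synthetic scan

snr = foreground_snr(volume)
print(f"SNR = {snr:.1f} -> {'good' if snr >= SNR_CUTOFF else 'bad'}")
```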