Inter- and intra-rater reliability of the Chicago Classification in pediatric high-resolution esophageal manometry recordings
Authors
Rammy Abu-Assi
Marc Alexander Benninga
I E Heijting
Daniel Robin Hoekman
Stamatiki Kritas
Sophie Kuizenga-Wessel
Samuel J Nurko
Taher Omari
Rachel L Rosen
Grace Seiboth
Maartje M J Singendonk
Marije Smits
Michiel P van Wijk
Pim W Wijenborg
Publication date
17 December 2014
Publisher
Wiley
Abstract
This article may be used for non-commercial purposes in accordance with the Wiley Terms and Conditions for Self-Archiving. Copyright © 2015 John Wiley & Sons, Inc. All rights reserved.

Background: The Chicago Classification (CC) facilitates the interpretation of high-resolution manometry (HRM) recordings. Whether this adult-based algorithm can be applied to the pediatric population is unknown. We therefore assessed the intra- and interrater reliability of software-based CC diagnosis in a pediatric cohort.

Methods: Thirty pediatric solid-state HRM recordings (13 male; mean age 12.1 ± 5.1 years), each comprising 10 liquid swallows per patient, were analyzed twice by 11 raters (six experts, five non-experts). Software-placed anatomical landmarks required manual adjustment or removal. The integrated relaxation pressure (IRP4s), distal contractile integral (DCI), contractile front velocity (CFV), distal latency (DL), break size (BS), and an overall CC diagnosis were software-generated. In addition, raters provided their subjective CC diagnosis. Reliability was calculated with Cohen's and Fleiss' kappa (κ) and the intraclass correlation coefficient (ICC).

Key Results: Intra- and interrater reliability of the software-generated CC diagnosis after manual adjustment of landmarks was substantial (mean κ = 0.69 and 0.77, respectively) and moderate to substantial for the subjective CC diagnosis (mean κ = 0.70 and 0.58, respectively). Reliability of both the software-generated and subjective diagnosis of normal motility was high (κ = 0.81 and κ = 0.79). Intra- and interrater reliability were excellent for IRP4s, DCI, and BS. Experts had higher interrater reliability than non-experts for DL (ICC = 0.65 vs ICC = 0.36, respectively) and for the software-generated diagnosis of diffuse esophageal spasm (DES; κ = 0.64 vs κ = 0.30). Among experts, reliability for the subjective diagnosis of achalasia and esophagogastric junction outflow obstruction was moderate to substantial (κ = 0.45–0.82).

Conclusions & Inferences: Inter- and intrarater reliability of software-based CC diagnosis of pediatric HRM recordings was high overall. However, experience influenced the diagnosis of some motility disorders, particularly DES and achalasia.
Available Versions
Flinders Academic Commons
oai:dspace.flinders.edu.au:232...
Last updated on 30/04/2017