
    Effect of surgical experience and spine subspecialty on the reliability of the AO Spine Upper Cervical Injury Classification System

    OBJECTIVE The objective of this study was to determine the interobserver reliability and intraobserver reproducibility of the AO Spine Upper Cervical Injury Classification System by surgeon experience (< 5 years, 5–10 years, 10–20 years, and > 20 years) and surgical subspecialty (orthopedic spine surgery, neurosurgery, and "other" surgery).

    METHODS A total of 11,601 assessments of upper cervical spine injuries were graded according to the AO Spine Upper Cervical Injury Classification System. Reliability and reproducibility scores were obtained twice, with a 3-week interval between assessments. Descriptive statistics were used to examine the percentage of accurately classified injuries, and Pearson's chi-square or Fisher's exact test was used to screen for potentially relevant differences between study participants. Kappa coefficients (κ) quantified interobserver reliability and intraobserver reproducibility.

    RESULTS Intraobserver reproducibility was substantial across all experience levels (< 5 years: 0.74; 5–10 years: 0.69; 10–20 years: 0.69; > 20 years: 0.70) and surgical subspecialties (orthopedic spine: 0.71; neurosurgery: 0.69; other: 0.68). Interobserver reliability was substantial for all experience groups on assessment 1 (< 5 years: 0.67; 5–10 years: 0.62; 10–20 years: 0.61; > 20 years: 0.62); on assessment 2, only surgeons with > 20 years of experience fell short of substantial reliability (< 5 years: 0.62; 5–10 years: 0.61; 10–20 years: 0.61; > 20 years: 0.59). Orthopedic spine surgeons and neurosurgeons had substantial interobserver reliability on both assessment 1 (0.64 vs 0.63) and assessment 2 (0.62 vs 0.63), while other surgeons had moderate reliability on assessment 1 (0.43) and fair reliability on assessment 2 (0.36).
CONCLUSIONS In this international cohort, the AO Spine Upper Cervical Injury Classification System demonstrated substantial intraobserver reproducibility and interobserver reliability regardless of surgical experience and spine subspecialty. These results support the global application of this classification system.
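The agreement scores above are kappa coefficients, which correct raw percent agreement for agreement expected by chance. The abstract does not specify the exact multi-rater formulation used; as a minimal illustration of the underlying idea, the two-rater Cohen's kappa (which directly models intraobserver reproducibility, i.e. one rater's two assessments) can be sketched as:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two sets of ratings.

    ratings_a / ratings_b are equal-length lists of category labels,
    e.g. the classifications a surgeon assigned on assessment 1 and 2.
    """
    n = len(ratings_a)
    # Observed agreement: fraction of cases rated identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement: product of each category's marginal frequencies.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: 5 injuries classified twice by the same rater.
first_pass  = ["A", "A", "B", "B", "C"]
second_pass = ["A", "A", "B", "C", "C"]
kappa = cohens_kappa(first_pass, second_pass)  # 0.8 observed, 0.32 by chance
```

With 80% raw agreement but 32% chance agreement, kappa here is about 0.71, in the "substantial" band, which illustrates why kappa is a stricter criterion than percent agreement alone.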

    An international validation of the AO spine subaxial injury classification system

    Purpose To validate the AO Spine Subaxial Injury Classification System with participants of various experience levels, subspecialties, and geographic regions.

    Methods A live webinar was organized in 2020 for validation of the AO Spine Subaxial Injury Classification System. The validation consisted of 41 unique subaxial cervical spine injuries with associated computed tomography scans and key images. Intraobserver reproducibility and interobserver reliability of the AO Spine Subaxial Injury Classification System were calculated for injury morphology, injury subtype, and facet injury. The reliability and reproducibility of the classification system were categorized as slight (κ = 0–0.20), fair (κ = 0.21–0.40), moderate (κ = 0.41–0.60), substantial (κ = 0.61–0.80), or excellent (κ > 0.80), as determined by the Landis and Koch classification.

    Results A total of 203 AO Spine members participated in the AO Spine Subaxial Injury Classification System validation. The percentage of participants accurately classifying each injury was over 90% for fracture morphology and fracture subtype on both assessments. The interobserver reliability for fracture morphology was excellent (κ = 0.87), while fracture subtype (κ = 0.80) and facet injury (κ = 0.74) were substantial. The intraobserver reproducibility for fracture morphology and subtype was excellent (κ = 0.85 and 0.88, respectively), while reproducibility for facet injuries was substantial (κ = 0.76).

    Conclusion The AO Spine Subaxial Injury Classification System demonstrated excellent interobserver reliability and intraobserver reproducibility for fracture morphology, substantial reliability and reproducibility for facet injuries, and excellent reproducibility with substantial reliability for injury subtype.
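The Landis and Koch bands quoted above map directly onto a small threshold lookup. A sketch following the cut-points exactly as stated in the abstract (note that Landis and Koch also define a "poor" band for κ < 0, which the abstract omits):

```python
def landis_koch(kappa):
    """Map a kappa coefficient to the Landis and Koch agreement band
    using the cut-points given in the abstract."""
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "excellent"

# Values reported in the abstract:
band_morphology = landis_koch(0.87)  # "excellent"
band_subtype    = landis_koch(0.80)  # "substantial" (0.80 is the band's upper bound)
```

The boundary behavior matters here: fracture subtype's κ = 0.80 sits exactly on the 0.61–0.80 upper bound, which is why the abstract reports it as substantial rather than excellent.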