
    Effect of surgical experience and spine subspecialty on the reliability of the AO Spine Upper Cervical Injury Classification System.

    OBJECTIVE The objective of this paper was to determine the interobserver reliability and intraobserver reproducibility of the AO Spine Upper Cervical Injury Classification System based on surgeon experience (< 5 years, 5-10 years, 10-20 years, and > 20 years) and surgical subspecialty (orthopedic spine surgery, neurosurgery, and "other" surgery). METHODS A total of 11,601 assessments of upper cervical spine injuries were evaluated based on the AO Spine Upper Cervical Injury Classification System. Reliability and reproducibility scores were obtained twice, with a 3-week time interval. Descriptive statistics were utilized to examine the percentage of accurately classified injuries, and Pearson's chi-square or Fisher's exact test was used to screen for potentially relevant differences between study participants. Kappa coefficients (κ) determined the interobserver reliability and intraobserver reproducibility. RESULTS The intraobserver reproducibility was substantial across surgeon experience levels (< 5 years: 0.74 vs 5-10 years: 0.69 vs 10-20 years: 0.69 vs > 20 years: 0.70) and surgical subspecialties (orthopedic spine: 0.71 vs neurosurgery: 0.69 vs other: 0.68). Furthermore, the interobserver reliability was substantial for all surgical experience groups on assessment 1 (< 5 years: 0.67 vs 5-10 years: 0.62 vs 10-20 years: 0.61 vs > 20 years: 0.62), and only surgeons with > 20 years of experience did not have substantial reliability on assessment 2 (< 5 years: 0.62 vs 5-10 years: 0.61 vs 10-20 years: 0.61 vs > 20 years: 0.59). Orthopedic spine surgeons and neurosurgeons had substantial interobserver reliability on both assessment 1 (0.64 vs 0.63) and assessment 2 (0.62 vs 0.63), while other surgeons had moderate reliability on assessment 1 (0.43) and fair reliability on assessment 2 (0.36). CONCLUSIONS The international reliability and reproducibility scores for the AO Spine Upper Cervical Injury Classification System demonstrated substantial intraobserver reproducibility and interobserver reliability regardless of surgical experience and spine subspecialty. These results support the global application of this classification system.
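
    For context on how the kappa values quoted above are typically computed, below is a minimal Python sketch of intraobserver reproducibility: one rater's classifications from the two assessment rounds are compared with Cohen's kappa and mapped onto the conventional Landis-Koch bands ("fair", "moderate", "substantial") used in the abstract. The example ratings, the landis_koch helper, and the use of scikit-learn are illustrative assumptions, not the study's actual analysis code.

    # Minimal sketch (assumed, not the study's code): intraobserver
    # reproducibility as Cohen's kappa between one surgeon's two readings
    # of the same injuries, taken 3 weeks apart.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical classifications of 10 injuries (labels are illustrative).
    round_1 = ["IIA", "IIIB", "IA", "IIC", "IIIB", "IB", "IIA", "IIIA", "IC", "IIB"]
    round_2 = ["IIA", "IIIB", "IA", "IIB", "IIIB", "IB", "IIA", "IIIA", "IC", "IIB"]

    kappa = cohen_kappa_score(round_1, round_2)

    def landis_koch(k: float) -> str:
        """Conventional interpretation bands for kappa (Landis & Koch, 1977)."""
        if k < 0.00:
            return "poor"
        if k < 0.21:
            return "slight"
        if k < 0.41:
            return "fair"
        if k < 0.61:
            return "moderate"
        if k < 0.81:
            return "substantial"
        return "almost perfect"

    print(f"intraobserver kappa = {kappa:.2f} ({landis_koch(kappa)})")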

    The AO spine upper cervical injury classification system: Do work setting or trauma center affiliation affect classification accuracy or reliability?

    PURPOSE To assess the accuracy and reliability of the AO Spine Upper Cervical Injury Classification System based on a surgeon's work setting and trauma center affiliation. METHODS A total of 275 AO Spine members participated in a validation of 25 upper cervical spine injuries, which were evaluated on computed tomography (CT) scans. Each participant was grouped based on their work setting (academic, hospital-employed, or private practice) and their trauma center affiliation (Level I, Level II or III, and Level IV or no trauma center). Classification accuracy was calculated as the percentage of correct classifications, while interobserver reliability and intraobserver reproducibility were evaluated with Fleiss' kappa coefficient. RESULTS The overall classification accuracy for surgeons affiliated with a Level I trauma center was significantly greater than that of participants affiliated with a Level II/III center or a Level IV/no trauma center on assessment one (p1 < 0.0001) and two (p2 = 0.0003). On both assessments, surgeons affiliated with a Level I or a Level II/III trauma center were significantly more accurate at identifying IIIB injury types (p1 = 0.0007; p2 = 0.0064). Academic and hospital-employed surgeons were significantly more likely to correctly classify type IIIB injuries on assessment one (p1 = 0.0146) and two (p2 = 0.0015). When evaluating classification reliability, the largest differences between work settings and trauma center affiliations were identified in type IIIB injuries. CONCLUSION Type B injuries are the most difficult injury type to correctly classify. They are classified with greater reliability and accuracy when evaluated by academic surgeons, hospital-employed surgeons, and surgeons associated with higher-level trauma centers (I or II/III).
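
    The interobserver figures in this abstract are based on Fleiss' kappa, which generalizes chance-corrected agreement to many raters scoring the same cases. The sketch below shows one way to compute it in Python with statsmodels; the rating matrix is hypothetical and only stands in for the study's 275 raters and 25 CT-based cases.

    # Minimal sketch (hypothetical data): Fleiss' kappa across several raters
    # classifying the same injuries, as a model of interobserver reliability.
    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # rows = injuries, columns = raters; entries are the assigned classes.
    ratings = np.array([
        ["IIIB", "IIIB", "IIIA", "IIIB"],
        ["IA",   "IA",   "IA",   "IA"],
        ["IIC",  "IIB",  "IIC",  "IIC"],
        ["IIIB", "IIIB", "IIIB", "IIIC"],
    ])

    # aggregate_raters turns the subject-by-rater label matrix into the
    # subject-by-category count table that fleiss_kappa expects.
    counts, _categories = aggregate_raters(ratings)
    print(f"Fleiss' kappa = {fleiss_kappa(counts, method='fleiss'):.2f}")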

    Global Validation of the AO Spine Upper Cervical Injury Classification.

    STUDY DESIGN Global Cross-Sectional Survey. OBJECTIVE To determine the classification accuracy, interobserver reliability, and intraobserver reproducibility of the AO Spine Upper Cervical Injury Classification System based on an international group of AO Spine members. SUMMARY OF BACKGROUND DATA Previous upper cervical spine injury classifications have primarily been descriptive without incorporating a hierarchical injury progression within the classification system. Further, upper cervical spine injury classifications have focused on distinct anatomical segments within the upper cervical spine. The AO Spine Upper Cervical Injury Classification System incorporates all injuries of the upper cervical spine into a single classification system focused on a hierarchical progression from isolated bony injuries (type A) to fracture dislocations (type C). METHODS A total of 275 AO Spine members participated in a validation aimed at classifying 25 upper cervical spine injuries via computed tomography (CT) scans according to the AO Spine Upper Cervical Classification System. The validation occurred on two separate occasions, three weeks apart. Descriptive statistics for percent agreement with the gold standard were calculated, and Pearson's chi-square test evaluated significance between validation groups. Kappa coefficients (κ) determined the interobserver reliability and intraobserver reproducibility. RESULTS The accuracy of AO Spine members in appropriately classifying upper cervical spine injuries was 79.7% on assessment 1 (AS1) and 78.7% on assessment 2 (AS2). The overall intraobserver reproducibility was substantial (κ = 0.70), while the overall interobserver reliability was substantial on both AS1 and AS2 (κ = 0.63 and κ = 0.61, respectively). Injury location had higher interobserver reliability (AS1: κ = 0.85; AS2: κ = 0.83) than injury type (AS1: κ = 0.59; AS2: κ = 0.57) on both assessments. CONCLUSION The global validation of the AO Spine Upper Cervical Injury Classification System demonstrated substantial interobserver reliability and intraobserver reproducibility. These results support the universal applicability of the AO Spine Upper Cervical Injury Classification System.
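
    As a rough illustration of the accuracy and chi-square comparisons described in the methods above, the sketch below computes percent agreement with a gold standard for two hypothetical validation groups and tests the difference with Pearson's chi-square. The counts, group names, and use of SciPy are assumptions for the example only, not the study's data or code.

    # Minimal sketch (hypothetical counts): percent agreement with the gold
    # standard and a Pearson chi-square test comparing two validation groups.
    from scipy.stats import chi2_contingency

    #                 correct  incorrect
    contingency = [[   550,      140],   # validation group 1
                   [   480,      210]]   # validation group 2

    for name, (correct, incorrect) in zip(["group 1", "group 2"], contingency):
        total = correct + incorrect
        print(f"{name}: accuracy = {100 * correct / total:.1f}%")

    chi2, p, dof, _expected = chi2_contingency(contingency)
    print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")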

    Global Validation of the AO Spine Upper Cervical Injury Classification: Geographic Region Affects Reliability and Reproducibility.

    STUDY DESIGN Global Survey. OBJECTIVE To determine the accuracy, interobserver reliability, and intraobserver reproducibility of the AO Spine Upper Cervical Injury Classification System based on surgeons' AO Spine region of practice (Africa, Asia, Central/South America, Europe, Middle East, and North America). METHODS A total of 275 AO Spine members assessed 25 upper cervical spine injuries and classified them according to the AO Spine Upper Cervical Injury Classification System. Reliability, reproducibility, and accuracy scores were obtained over two assessments administered three weeks apart. Kappa coefficients (κ) determined the interobserver reliability and intraobserver reproducibility. RESULTS On both assessments, participants from Europe and North America had the highest classification accuracy, while participants from Africa and Central/South America had the lowest accuracy (P < 0.0001). Participants from Africa (assessment 1 (AS1): κ = 0.487; assessment 2 (AS2): κ = 0.491), Central/South America (AS1: κ = 0.513; AS2: κ = 0.511), and the Middle East (AS1: κ = 0.591; AS2: κ = 0.599) achieved moderate reliability, while participants from North America (AS1: κ = 0.673; AS2: κ = 0.648) and Europe (AS1: κ = 0.682; AS2: κ = 0.681) achieved substantial reliability. Asian participants obtained substantial reliability on AS1 (κ = 0.632) but moderate reliability on AS2 (κ = 0.566). Although the effect size was large, the low number of participants in certain regions did not provide adequate certainty that AO region affected the likelihood of participants having excellent reproducibility (P = 0.342). CONCLUSIONS The AO Spine Upper Cervical Injury Classification System can be applied with high accuracy, interobserver reliability, and intraobserver reproducibility. However, lower classification accuracy and reliability were found in Africa and Central/South America, especially for severe atlas injuries (IIB and IIC) and atypical hangman's-type fractures (IIIB injuries).

    AO Spine Upper Cervical Injury Classification System: A Description and Reliability Study.

    BACKGROUND CONTEXT Prior upper cervical spine injury classification systems have focused on injuries to the craniocervical junction (CCJ), atlas, and dens independently. However, no previous system has classified upper cervical spine injuries using a comprehensive system incorporating all injuries from the occiput to the C2-3 joint. PURPOSE To (1) determine the accuracy of experts in correctly classifying upper cervical spine injuries based on the recently proposed AO Spine Upper Cervical Injury Classification System, (2) determine their interobserver reliability, and (3) identify their intraobserver reproducibility. STUDY DESIGN/SETTING International Multi-Center Survey. PATIENT SAMPLE A survey of international spine surgeons on 29 unique upper cervical spine injuries. OUTCOME MEASURES Classification accuracy, interobserver reliability, and intraobserver reproducibility. METHODS Thirteen international AO Spine Knowledge Forum Trauma members participated in two live webinar-based classifications of 29 upper cervical spine injuries presented in random order, four weeks apart. Percent agreement with the gold standard and kappa coefficients (κ) were calculated to determine the interobserver reliability and intraobserver reproducibility. RESULTS Raters demonstrated 80.8% and 82.7% accuracy in identifying the injury classification (combined location and type) on the first and second assessment, respectively. Injury classification intraobserver reproducibility was excellent (mean κ = 0.82, range 0.58-1.00). Interobserver reliability was excellent for injury location (κ = 0.922 and κ = 0.912) and substantial for injury type (κ = 0.689 and 0.699) on both assessments. This corresponded to substantial overall interobserver reliability (κ = 0.729 and 0.732). CONCLUSION Early-phase validation demonstrated classification of upper cervical spine injuries using the AO Spine Upper Cervical Injury Classification System to be accurate, reliable, and reproducible. Greater than 80% accuracy was detected for injury classification. The intraobserver reproducibility was excellent, while the interobserver reliability was substantial.

    The Diagnostic Process of Spinal Post-Traumatic Deformity: An Expert Survey of 7 Cases, Consensus on Clinical Relevance Does Exist

    STUDY DESIGN: Survey of cases. OBJECTIVE: To evaluate the opinion of experts in the diagnostic process of clinically relevant spinal post-traumatic deformity (SPTD). SUMMARY OF BACKGROUND DATA: SPTD is a potential complication of spine trauma that can cause decreased function and impaired quality of life. The question of when SPTD becomes clinically relevant is yet to be resolved. METHODS: The survey of 7 cases was sent to 31 experts. The case presentation consisted of medical history, diagnostic assessment, evaluation of the diagnostic assessment, diagnosis, and treatment options. Means, ranges, percentages of participants, and descriptive statistics were calculated. RESULTS: Seventeen spinal surgeons reviewed the presented cases. The items 'fracture type' and 'complaints' were rated by the participants as more important, but no agreement existed on the other items of the medical history. In patients with possible SPTD in the cervical spine (C) area, participants requested a conventional radiograph (CR) (76%-83%), a flexion/extension CR (61%-71%), a computed tomography (CT) scan (76%-89%), and a magnetic resonance (MR) scan (89%-94%). In thoracolumbar spine (ThL) cases, a full-spine CR (89%-100%), CT scan (72%-94%), and MR scan (65%-94%) were requested most often. There was consensus on 5 out of 7 cases with clinically relevant SPTD (82%-100%). When consensus existed on the diagnosis of SPTD, there was also consensus on whether the case was compensated or decompensated and symptomatic or asymptomatic. CONCLUSIONS: There was strong agreement in 5 out of 7 cases on the presence of clinically relevant SPTD. Among spine experts, there is a strong consensus to use a CT scan and an MR scan, a cervical CR for C-cases, and a full-spine CR for ThL-cases. The lack of agreement on items of the medical history suggests that a Delphi study could help reach consensus on the essential items of clinically relevant SPTD. LEVEL OF EVIDENCE: Level V.
