20 research outputs found

    Development of a Fluoroscopic Radiostereometric Analysis System With an Application to Glenohumeral Joint Kinematics

    Ideally, joint kinematics should be measured with high accuracy, void of skin motion artefact, in three dimensions, and under dynamic conditions. Radiostereometric analysis (RSA) has the potential to fulfill all of these requirements. The objectives of this thesis were (1) to implement and validate a fluoroscopy-based RSA system, (2) to determine the effect of varying the calibration frame, (3) to correct image distortion, (4) to investigate errors in coordinate system creation for glenohumeral (shoulder) joint kinematics, (5) to introduce a new coordinate system definition for the scapula with limited radiation exposure, and (6) to use RSA to examine glenohumeral joint motions in vivo. An RSA system consisting of two portable C-arm fluoroscopy units and two personal computers was assembled. Calibration was performed using a custom-made calibration frame. Images were digitized and RSA reconstruction was performed using custom-written software. Images taken using fluoroscopy under ideal conditions can produce reconstructions that are as accurate as those taken with digital radiography, with standard errors of measurement of 43 µm and 0.23° and 36 µm and 0.12°, respectively. RSA is more accurate than optical tracking for rigid body motion. The fluoroscopes may be positioned at angles less than 135° without affecting the accuracy of reconstruction. A global polynomial approach to distortion correction is appropriate for use with RSA; however, the polynomial degree must be determined for each system with an independent accuracy measure. An alternative scapular coordinate system was introduced to decrease the required radiation exposure for coordinate system creation by approximately half. The kinematic angles obtained using the alternative coordinate system were different from those obtained using the International Society of Biomechanics standard; however, the differences are not clinically significant.
As a first clinical application, glenohumeral joint translation was examined. The preliminary data suggest that humeral head position does not differ in active and static joint positioning. Fluoroscopy allows subjects to be examined while in motion and should enable substantial improvements to the study of even subtle in vivo kinematics. It is likely that the RSA system will lead to an increased understanding of the effects of disease progression, surgical techniques, and rehabilitation protocols on joint motion.

    The centre of rotation of the shoulder complex and the effect of normalisation

    Shoulder motions consist of a composite movement of three joints and one pseudo-joint, which together dictate the humerothoracic motion. The purpose of this work was to quantify the location of the centre of rotation (CoR) of the shoulder complex as a whole. Dynamic motion of 12 participants was recorded using optical motion tracking during coronal, scapular and sagittal plane elevation. The instantaneous CoR was found for each angle of elevation using helical axes projected onto the three planes of motion. The location of an average CoR for each plane was evaluated using digitised and anthropometric measures for normalisation. When conducting motion in the coronal, scapular, and sagittal planes, respectively, the coefficients for locating the CoRs of the shoulder complex are −61%, −61%, and −65% of the anterior–posterior dimension – the vector between the midpoint of the incisura jugularis and the xiphoid process and the midpoint of the seventh cervical vertebra and the eighth thoracic vertebra; 0%, −1%, and −2% of the superior–inferior dimension – the vector between the midpoint of the acromioclavicular joints and the midpoint of the anterior superior iliac spines; and 57%, 57%, and 78% of the medial–lateral dimension – −0.129 times the height of the participant. Knowing the location of the CoR of the shoulder complex as a whole enables improved participant positioning for evaluation and rehabilitation activities that involve movement of the hand with a fixed radius, such as those that employ isokinetic dynamometers.
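As an illustration, the coronal-plane coefficients above can be applied to digitised and anthropometric measures to locate the shoulder-complex CoR. The participant measures below are assumed values for the sketch; only the coefficients and normalisation rules come from the abstract:

```python
# Assumed participant measures (metres); only the coefficients are from the study
height = 1.75
ap_dim = 0.18  # incisura jugularis/xiphoid midpoint to C7/T8 midpoint (assumed)
si_dim = 0.45  # midpoint of AC joints to midpoint of ASISs (assumed)
ml_dim = -0.129 * height  # medial-lateral normalisation dimension

# Coronal-plane coefficients: -61% AP, 0% SI, 57% ML
cor = (-0.61 * ap_dim, 0.00 * si_dim, 0.57 * ml_dim)
print(tuple(round(c, 4) for c in cor))  # CoR offset along each dimension (m)
```

The same calculation applies in the scapular and sagittal planes with the corresponding coefficients.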

    In vivo knee contact force prediction using patient-specific musculoskeletal geometry in a segment-based computational model

    Segment-based musculoskeletal models allow the prediction of muscle, ligament, and joint forces without making assumptions regarding joint degrees-of-freedom (DOF). The dataset published for the “Grand Challenge Competition to Predict in vivo Knee Loads” provides directly measured tibiofemoral contact forces for activities of daily living (ADL). For the Sixth Grand Challenge Competition to Predict in vivo Knee Loads, blinded results for “smooth” and “bouncy” gait trials were predicted using a customized patient-specific musculoskeletal model. For an unblinded comparison, the following modifications were made to improve the predictions: further customizations, including modifications to the knee center of rotation; reductions to the maximum allowable muscle forces to represent known loss of strength in knee arthroplasty patients; and a kinematic constraint to the hip joint to address the sensitivity of the segment-based approach to motion tracking artifact. For validation, the improved model was applied to normal gait, squat, and sit-to-stand for three subjects. Comparisons of the predictions with measured contact forces showed that segment-based musculoskeletal models using patient-specific input data can estimate tibiofemoral contact forces with root mean square errors (RMSEs) of 0.48–0.65 times body weight (BW) for normal gait trials. Comparisons between measured and predicted tibiofemoral contact forces yielded an average coefficient of determination of 0.81 and RMSEs of 0.46–1.01 times BW for squatting and 0.70–0.99 times BW for sit-to-stand tasks. This is comparable to the best validations in the literature using alternative models.
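For context, expressing a contact force in multiples of body weight is a simple normalisation by the subject's weight. The force and mass below are hypothetical values for illustration, not data from the study:

```python
def to_body_weight(force_newtons, mass_kg, g=9.81):
    """Express a joint contact force as a multiple of body weight (BW)."""
    return force_newtons / (mass_kg * g)

# Hypothetical peak tibiofemoral contact force for a 75 kg subject
print(round(to_body_weight(2200.0, 75.0), 2))  # 2.99 (i.e., ~3 BW)
```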

    Epidemiological Characteristics of Foot and Ankle Injuries in 2 Professional Ballet Companies: A 3-Season Cohort Study of 588 Medical Attention Injuries and 255 Time-Loss Injuries.

    The foot and ankle are often reported as the most common sites of injury in professional ballet dancers; however, epidemiological research focusing on foot and ankle injuries in isolation and investigating specific diagnoses is limited. To investigate the incidence rate, severity, burden, and mechanisms of foot and ankle injuries that (1) required visiting a medical team (medical attention foot and ankle injuries; MA-FAIs) and (2) prevented a dancer from fully participating in all dance-related activities for at least 24 hours after the injury (time-loss foot and ankle injuries; TL-FAIs) in 2 professional ballet companies. Descriptive epidemiological study. Foot and ankle injury data across 3 seasons (2016-2017 to 2018-2019) were extracted from the medical databases of 2 professional ballet companies. Injury-incidence rate (per dancer-season), severity, and burden were calculated and reported with reference to the mechanism of injury. A total of 588 MA-FAIs and 255 TL-FAIs were observed across 455 dancer-seasons. The incidence rates of MA-FAIs and TL-FAIs were significantly higher in women (1.20 MA-FAIs and 0.55 TL-FAIs per dancer-season) than in men (0.83 MA-FAIs and 0.35 TL-FAIs per dancer-season) (MA-FAIs, P = .002; TL-FAIs, P = .008). The highest incidence rates for any specific injury pathology were ankle impingement syndrome and synovitis for MA-FAIs (women 0.27 and men 0.25 MA-FAIs per dancer-season) and ankle sprain for TL-FAIs (women 0.15 and men 0.08 TL-FAIs per dancer-season). Pointe work and jumping actions in women and jumping actions in men were the most common mechanisms of injury. The primary mechanism of injury of ankle sprains was jumping activities, but the primary mechanisms of ankle synovitis and impingement in women were related to dancing en pointe. The results of this study highlight the importance of further investigation of injury prevention strategies targeting pointe work and jumping actions in ballet dancers.
Further research on injury prevention and rehabilitation strategies targeting posterior ankle impingement syndromes and ankle sprains is warranted. [Abstract copyright: © The Author(s) 2023.]
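As a minimal sketch, the headline incidence rates follow directly from injury counts divided by exposure; the totals below are taken from the abstract above:

```python
def incidence_per_dancer_season(n_injuries, dancer_seasons):
    """Injury incidence rate = injury count / exposure (dancer-seasons)."""
    return n_injuries / dancer_seasons

# Totals reported in the study: 588 MA-FAIs and 255 TL-FAIs over 455 dancer-seasons
print(round(incidence_per_dancer_season(588, 455), 2))  # 1.29 MA-FAIs per dancer-season
print(round(incidence_per_dancer_season(255, 455), 2))  # 0.56 TL-FAIs per dancer-season
```

The sex-specific rates in the abstract use the same calculation with each group's own injury counts and dancer-seasons.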

    Validation of two-dimensional video-based inference of finger kinematics with pose estimation

    Accurate capture of finger movements for biomechanical assessments has typically been achieved within laboratory environments through the use of physical markers attached to a participant’s hands. However, such requirements can narrow the broader adoption of movement tracking for kinematic assessment outside these laboratory settings, such as in the home. Thus, there is the need for markerless hand motion capture techniques that are easy to use and accurate enough to evaluate the complex movements of the human hand. Several recent studies have validated lower-limb kinematics obtained with a marker-free technique, OpenPose. This investigation examines the accuracy of OpenPose, when applied to images from single RGB cameras, against a ‘gold standard’ marker-based optical motion capture system that is commonly used for hand kinematics estimation. Participants completed four single-handed activities with right and left hands, including hand abduction and adduction, radial walking, metacarpophalangeal (MCP) joint flexion, and thumb opposition. The accuracy of finger kinematics was assessed using the root mean square error. Mean total active flexion was compared using the Bland–Altman approach, and the coefficient of determination of linear regression. Results showed good agreement for abduction and adduction and thumb opposition activities. Lower agreement between the two methods was observed for radial walking (mean difference between the methods of 5.03°) and MCP flexion (mean difference of 6.82°) activities, due to occlusion. This investigation demonstrated that OpenPose, applied to videos captured with monocular cameras, can be used for markerless motion capture for finger tracking with an error below 11° and on the order of that which is accepted clinically.
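The two agreement measures used above (RMSE and Bland–Altman bias with limits of agreement) are straightforward to compute. The angle series below are hypothetical, not data from the study:

```python
import math

def rmse(a, b):
    """Root mean square error between two angle series (degrees)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical MCP flexion angles (degrees): marker-based vs. pose-estimated
marker = [45.0, 60.2, 30.1, 75.4]
pose = [41.3, 52.9, 25.0, 67.8]
error = rmse(marker, pose)
bias, loa = bland_altman(marker, pose)
print(round(error, 2))  # 6.14
```

A systematic offset between methods appears as a nonzero bias; random disagreement widens the limits of agreement.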

    A 3DCNN-LSTM Multi-Class Temporal Segmentation for Hand Gesture Recognition

    This paper introduces a multi-class hand gesture recognition model developed to identify a set of hand gesture sequences from two-dimensional RGB video recordings, using both the appearance and spatiotemporal parameters of consecutive frames. The classifier utilizes a convolutional-based network combined with a long short-term memory unit. To address the need for a large-scale dataset, the model is first trained on a public dataset and then fine-tuned on the hand gestures of relevance, a technique known as transfer learning. Validation curves performed over a batch size of 64 indicate an accuracy of 93.95% (±0.37) with a mean Jaccard index of 0.812 (±0.105) for 22 participants. The fine-tuned architecture illustrates the possibility of refining a model with a small set of data (113,410 fully labelled image frames) to cover previously unknown hand gestures. The main contribution of this work includes a custom hand gesture recognition network driven by monocular RGB video sequences that outperforms previous temporal segmentation models, embracing a small-sized architecture that facilitates wide adoption.
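The described pipeline (3D convolutions extracting spatiotemporal features from consecutive RGB frames, feeding an LSTM that scores every frame for temporal segmentation) can be sketched in PyTorch. The layer counts, kernel sizes, and 10-class head below are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CNN3DLSTM(nn.Module):
    """Sketch of a 3D-CNN + LSTM multi-class temporal segmenter (sizes assumed)."""
    def __init__(self, n_classes=10, hidden=128):
        super().__init__()
        # 3D convolutions capture short-range spatiotemporal appearance
        self.cnn = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d((1, 4, 4)),  # pool space only; keep temporal resolution
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # collapse space, keep time
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)  # long-range temporal context
        self.head = nn.Linear(hidden, n_classes)           # per-frame class scores

    def forward(self, clip):                            # clip: (batch, 3, T, H, W)
        feats = self.cnn(clip).squeeze(-1).squeeze(-1)  # (batch, 32, T)
        out, _ = self.lstm(feats.transpose(1, 2))       # (batch, T, hidden)
        return self.head(out)                           # (batch, T, n_classes)

x = torch.randn(2, 3, 16, 64, 64)  # 2 clips of 16 RGB frames at 64x64
logits = CNN3DLSTM()(x)
print(logits.shape)  # torch.Size([2, 16, 10])
```

Emitting one class score per frame, rather than one per clip, is what makes this a temporal segmentation model: gesture boundaries fall where the per-frame argmax changes.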