
    The reliability and reproducibility of sagittal spinal curvature measurement using the Microsoft Kinect V2

    BACKGROUND: Abnormal sagittal spinal curvature is associated with pain, decreased mobility, respiratory problems and increased mortality. The time-of-flight technology of the Microsoft Kinect sensor can reconstruct a three-dimensional image of the back quickly and inexpensively. OBJECTIVE: To estimate the reproducibility of sagittal spinal curvature measurement using the Microsoft Kinect sensor. METHODS: The thoracic and lumbar spine of 37 participants was measured simultaneously with the Microsoft Kinect sensor. Two investigators gave standardised instructions and each captured three images. Thoracic kyphosis and lumbar lordosis angle indexes were calculated as the maximum height of the curve divided by its length. RESULTS: In the adult participants (mean age 51.7 years (SD 20.6); 57% female; mean BMI 24.9 kg/m2 (SD 3.3)), the kyphosis and lordosis indexes showed high intra-rater and inter-rater ICC values (0.960–0.973). The means of the first images from both raters showed significantly larger kyphosis indexes than the second and third images, whereas the lordosis means did not differ. CONCLUSIONS: The results indicate that measurement with the Microsoft Kinect sensor is reproducible, with high intra-rater and inter-rater reliability. The difference between the means over repeated measures suggests that the second image capture is more consistent. The method is reproducible and quick for use in clinical and research settings.
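    As a rough illustration of how such a height-over-length index can be computed, the sketch below takes a sampled sagittal profile of one spinal segment and returns the maximum height of the curve above its chord, divided by the chord length. This is one plausible reading of "maximum height divided by the length"; the paper's exact landmark definitions are not given in the abstract, and the profile data here are synthetic.

```python
import numpy as np

def curvature_index(profile):
    """Height-over-length curvature index: the maximum perpendicular
    distance of the sagittal profile from its chord, divided by the
    chord length. `profile` is an (N, 2) array of points sampled along
    one spinal segment in the sagittal plane."""
    p = np.asarray(profile, dtype=float)
    start, end = p[0], p[-1]
    chord = end - start
    length = np.linalg.norm(chord)
    unit = chord / length
    rel = p - start
    along = rel @ unit                      # component along the chord
    perp = rel - np.outer(along, unit)      # component orthogonal to it
    height = np.linalg.norm(perp, axis=1).max()
    return height / length

# Synthetic example: a shallow arc 300 mm long with 20 mm of sagittal
# deviation gives an index of 20 / 300 = 0.067.
x = np.linspace(0.0, 300.0, 50)
y = 20.0 * np.sin(np.pi * x / 300.0)
print(f"index = {curvature_index(np.column_stack([x, y])):.3f}")
```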

    Minimal Required Resolution to Capture the 3D Shape of the Human Back: A Practical Approach

    Adolescent idiopathic scoliosis (AIS) is a prevalent musculoskeletal disorder that causes abnormal spinal deformities. The early screening of children and adolescents is crucial to identify AIS and prevent its further progression. In clinical examinations, scoliometers are often used to noninvasively estimate the primary Cobb angle, and optical 3D scanning systems have also emerged as alternative noninvasive approaches for this purpose. Recent advances in low-cost 3D scanners have led to their use in several studies to estimate the primary Cobb angle or even internal spinal alignment. However, none of these studies demonstrates whether such a low-cost scanner satisfies the minimal requirements for capturing the relevant deformities of the human back. To quantify, in practical terms, the minimal spatial resolution and camera resolution required to capture the geometry and shape of the deformities of the human back, we used multiple 3D scanning methodologies and systems. The results from an evaluation of 30 captures of AIS patients and 76 captures of healthy subjects showed that the minimal required spatial resolution is between 2 mm and 5 mm, depending on the chosen error tolerance. Therefore, a minimal camera resolution of 640 × 480 pixels is recommended for use in future studies.
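    The 640 × 480 recommendation can be sanity-checked with simple arithmetic: the number of pixels needed along one axis is the field of view divided by the target spatial resolution. The footprint of roughly 1000 mm × 750 mm used below is an assumption for illustration, not the paper's actual scan geometry.

```python
import math

def min_pixels(fov_mm: float, spatial_res_mm: float) -> int:
    """Pixels needed along one axis so that adjacent samples on the back
    are no further apart than the target spatial resolution (assumes
    roughly uniform sampling across the field of view)."""
    return math.ceil(fov_mm / spatial_res_mm)

# Assumed capture footprint on the patient's back: 1000 mm x 750 mm.
for res_mm in (2.0, 5.0):
    w = min_pixels(1000.0, res_mm)
    h = min_pixels(750.0, res_mm)
    print(f"{res_mm} mm resolution -> at least {w} x {h} pixels")
# 2 mm -> 500 x 375 (within a 640 x 480 sensor); 5 mm -> 200 x 150.
```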

    Investigation Of The Microsoft Kinect V2 Sensor As A Multi-Purpose Device For A Radiation Oncology Clinic

    For a radiation oncology clinic, the devices available to assist in the radiotherapy workflow are quite numerous. Processes such as patient verification, motion management, and respiratory motion tracking can all be improved by devices currently on the market. These three processes can directly impact patient safety and treatment efficacy and, as such, are important to track and quantify. Most available products address only one of these processes and may be out of reach for a typical radiation oncology clinic because of difficult implementation and integration with existing hardware. This manuscript investigates the use of the Microsoft Kinect v2 sensor to provide solutions for all three processes while maintaining a relatively simple, easy-to-use implementation.

    To assist with patient verification, the Kinect system was programmed to provide facial recognition and recall. The facial recognition algorithm was built on a facial mapping library distributed by Microsoft within the Software Development Kit (SDK). The system extracts 31 fiducial points representing various facial landmarks, creates 3D vectors between each pair of the 31 points, and calculates the magnitude of each vector. This allows a face to be defined as a collection of 465 specific vector magnitudes. These 465 magnitudes are used both in the creation of a facial reference data set and in subsequent evaluations of real-time sensor data by the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data for the facial identification system. In total, 5299 trials were performed and threshold parameters were created for match determination. Optimization of these parameters via ROC curves indicated a system sensitivity of 96.5% and a specificity of 96.7%. These results indicate a fairly robust methodology for verifying a specific face in real time against a pre-collected reference data set. In its current implementation, the process of data collection for each face and the subsequent matching session averaged approximately 30 seconds, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants. Ambient light played a crucial role in the accuracy and reproducibility of the facial recognition system: testing at various light levels found that ambient light above 200 lux produced the most accurate results. The acquisition process should therefore be set up to ensure consistent ambient light conditions across both the reference recording session and subsequent real-time identification sessions.
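    A minimal sketch of this face-signature idea follows: 31 fiducial points yield 465 pairwise distances (31 choose 2 = 465), and a match is declared when enough of the live magnitudes agree with the reference within a tolerance. The tolerance and agreement fraction below are placeholders, not the thresholds the manuscript derives from its ROC analysis.

```python
import numpy as np
from itertools import combinations

def face_signature(points):
    """31 fiducial points in 3D -> 465 pairwise vector magnitudes."""
    pts = np.asarray(points, dtype=float)          # shape (31, 3)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])

def is_match(live_sig, ref_sig, tol_mm=2.0, min_fraction=0.9):
    """Declare a match when enough of the 465 magnitudes agree within
    tolerance. Both thresholds here are assumed placeholder values."""
    agree = np.abs(live_sig - ref_sig) < tol_mm
    return agree.mean() >= min_fraction

# Toy check: a slightly noisy re-capture of the same landmarks matches.
rng = np.random.default_rng(0)
face = rng.uniform(0.0, 150.0, size=(31, 3))       # synthetic landmarks, mm
recapture = face + rng.normal(0.0, 0.3, size=face.shape)
print(is_match(face_signature(recapture), face_signature(face)))  # expected: True
```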
    In developing a motion management process with the Kinect, two separate but complementary processes were created. First, to track large-scale anatomical movements, the automatic skeletal tracking capabilities of the Kinect were utilized: 25 specific body joints (head, elbow, knee, etc.) make up the skeletal frame and are locked to relative positions on the body. Using code written in C#, these joints are tracked in 3D space and compared to an initial state of the patient, giving an indication of anatomical motion. Additionally, to track smaller, more subtle movements of a specific area of the body, a user-drawn ROI can be created. Here, the depth values of all pixels associated with the body in the ROI are compared to the initial state. The system counts the live pixels whose depth differs from the initial state by more than a specified threshold, and the area of each of those pixels is calculated from its depth. The percentage of area moved (PAM) relative to the ROI area then becomes an indication of gross movement within the ROI. In this study, 9 specific joints proved stable during data acquisition. When moved in orthogonal directions, each recorded coordinate showed a relatively linear trend of movement, but not the expected 1:1 relationship to couch movement. Instead, the vector magnitude between the initial and current positions proved a better indicator of movement. Five of the 9 joints (left/right elbow, left/right hip, and spine base) showed relatively consistent values for radial movements of 5 mm and 10 mm, achieving coefficients of variation of 20–25%. For these 5 joints, thresholds of 3 mm and 7.5 mm on the calculated radial distance were set to detect 5 mm and 10 mm of actual movement, respectively. When monitoring a drawn ROI, the depth sensor showed very little sensitivity to movement in the X (left/right) or Y (superior/inferior) direction, but exceptional sensitivity in the Z (anterior/posterior) direction. As such, PAM values could only be correlated with motion in the Z direction. PAM values over 60% were indicative of Z-direction movement equal to the set threshold, for movement as small as 3 mm.
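    The PAM computation can be sketched as follows: within the ROI, flag the pixels whose depth changed by more than the tolerance, weight each flagged pixel by its physical footprint (which grows with depth), and express the moved area as a percentage of the ROI area. The focal-length values below are typical Kinect v2 numbers used as an assumption, not parameters reported in the manuscript.

```python
import numpy as np

def percent_area_moved(ref_depth, live_depth, roi_mask,
                       depth_tol_mm=3.0, fx=365.0, fy=365.0):
    """Percentage of area moved (PAM) within an ROI of a depth image.
    A pixel at depth z covers roughly (z / fx) * (z / fy) mm^2, where
    fx and fy are the depth camera's focal lengths in pixels (the
    ~365 px values are typical Kinect v2 numbers, assumed here)."""
    ref = np.asarray(ref_depth, dtype=float)
    live = np.asarray(live_depth, dtype=float)
    pixel_area = (live / fx) * (live / fy)          # mm^2 per pixel
    moved = roi_mask & (np.abs(live - ref) > depth_tol_mm)
    return 100.0 * pixel_area[moved].sum() / pixel_area[roi_mask].sum()

# Toy check: shift a quarter of a flat ROI 5 mm toward the camera.
ref = np.full((40, 40), 1000.0)                     # depth in mm
live = ref.copy()
live[:20, :20] -= 5.0
roi = np.ones_like(ref, dtype=bool)
print(f"PAM = {percent_area_moved(ref, live, roi):.1f}%")   # ~25%
```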
    Lastly, the Kinect was utilized to create a marker-less respiratory motion tracking system. Code was written to access the Kinect's depth sensor and track the respiratory motion of a subject by recording the depth (distance) values at several user-selected points, each point representing one pixel of the depth image. As a patient breathes, a specific anatomical point on the chest/abdomen moves slightly across a number of pixels in the depth image. By tracking how the depth value of a specific pixel changes, rather than how the anatomical point moves through the image, a respiratory trace can be obtained from the changing depth values of the selected pixel, with no markers required for setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare the respiratory traces obtained by each for two different subjects. The depth information from the Kinect correlated well with the RPM and Anzai systems for both phase-based and amplitude-based binning. IQR values were obtained that compared, across products, the times corresponding to specific amplitude and phase percentage values. The IQR spans indicated that the Kinect would measure a specific percentage value within 0.077 s for Subject 1 and 0.164 s for Subject 2 of the values obtained with RPM or Anzai. For 4D-CT scans, these times correspond to less than 1 mm of couch movement and would create an offset of one half of an acquired slice. These minimal deviations between the traces produced by the Kinect and by RPM or Anzai indicate that, by tracking the depth values of user-selected pixels within the depth image rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized with the Kinect, with results comparable to those of commercially available products.
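    The pixel-depth idea behind this respiratory trace can be illustrated with a short sketch: rather than following an anatomical point across the image, sample the depth value of fixed, user-selected pixels in every frame, so that chest or abdomen motion appears as a periodic depth signal. The frame size and breathing signal below are synthetic (real Kinect v2 depth frames are 512 × 424).

```python
import numpy as np

def respiratory_trace(depth_frames, pixels):
    """Mean depth of fixed, user-selected pixels in each frame.

    depth_frames: iterable of (H, W) depth images in mm.
    pixels: list of (row, col) coordinates chosen by the user."""
    rows, cols = zip(*pixels)
    return np.array([frame[rows, cols].mean() for frame in depth_frames])

# Toy data: 10 s of breathing at 0.25 Hz sampled at 30 fps, on small
# synthetic frames; the whole surface rises and falls by +/- 8 mm.
t = np.arange(0.0, 10.0, 1.0 / 30.0)
frames = [np.full((60, 80), 900.0) + 8.0 * np.sin(2 * np.pi * 0.25 * ti)
          for ti in t]
trace = respiratory_trace(frames, [(30, 40), (32, 42)])
print(f"{trace.min():.1f} mm to {trace.max():.1f} mm")   # ~892 to ~908
```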

    Intelligent Sensors for Human Motion Analysis

    The book "Intelligent Sensors for Human Motion Analysis" contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects of the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems.

    Development of a mobile technology system to measure shoulder range of motion

    In patients with shoulder movement impairment, assessing and monitoring shoulder range of motion (ROM) is important for determining the severity of impairment due to disease or injury and for evaluating the effects of interventions. Current clinical methods of goniometry and visual estimation require an experienced user and suffer from low inter-rater reliability. More sophisticated techniques such as optical or electromagnetic motion capture exist but are expensive and restricted to a specialised laboratory environment.

    Inertial measurement units (IMUs), such as those within smartphones and smartwatches, show promise as tools to bridge the gap between laboratory and clinical techniques and to accurately measure shoulder range of motion both during clinic assessments and in daily life.

    This study aims to develop an Android mobile application for both a smartphone and a smartwatch to assess shoulder range of motion. Initial performance characterisation of the inertial sensing capabilities of both devices running the application was conducted against an industrial inclinometer, a free-swinging pendulum, and a custom-built servo-powered gimbal. An initial validation study comparing the smartwatch application with a universal goniometer for shoulder ROM assessment was conducted with twenty healthy participants. An impaired condition was simulated by applying kinesiology tape across the participant's shoulder girdle. Agreement and intra- and inter-day reliability were assessed in both the healthy and impaired states.

    Both the phone and the watch performed with acceptable accuracy and repeatability during static conditions (within ±1.1°) and during dynamic conditions, where they were strongly correlated with the pendulum and gimbal data (ICC > 0.9). Both devices performed accurately within the range of angular velocities typical of humerus movement during activities of daily living (frequency response of 377°/s and 358°/s for the phone and watch, respectively). The concurrent agreement between the watch and the goniometer was high in both healthy and impaired states (ICC > 0.8) and between measurement days (ICC > 0.8). The mean absolute difference between the watch and the goniometer was within the accepted minimal clinically important difference for shoulder movement (5.11° to 10.58°).

    The results show promise for the use of the developed Android application as a goniometry tool for assessment of shoulder ROM. However, the limits of agreement across all tests fell outside the acceptable margin, and further investigation is required to establish validity. Evaluation in patients with clinical impairment is also required to assess the feasibility of using the application in clinical practice.
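    As background on the sensing principle, a single static accelerometer sample is enough to compute device inclination relative to gravity, which is the basis of smartphone and smartwatch goniometry. The sketch below shows only that textbook calculation; the application's actual sensor fusion and axis conventions are not described in the abstract.

```python
import math

def inclination_deg(ax: float, ay: float, az: float) -> float:
    """Angle between the device's y-axis and the gravity vector,
    computed from a static accelerometer sample (any consistent unit).
    Valid only when the device is stationary, so the accelerometer
    reading is dominated by gravity."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    cos_angle = max(-1.0, min(1.0, ay / g))
    return math.degrees(math.acos(cos_angle))

# With the watch's y-axis pointing down the arm: an arm hanging at rest
# reads roughly (0, -9.81, 0); raised to horizontal, gravity shifts to
# another axis and the computed inclination changes by ~90 degrees.
print(inclination_deg(0.0, -9.81, 0.0))   # 180.0 (arm down)
print(inclination_deg(0.0, 0.0, -9.81))   # 90.0  (arm horizontal)
```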

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improving access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.