
    Camera Calibration from Dynamic Silhouettes Using Motion Barcodes

    Computing the epipolar geometry between cameras with very different viewpoints is often problematic, as matching points are hard to find. In these cases, it has been proposed to use information from dynamic objects in the scene to suggest point and line correspondences. We propose a speed-up of about two orders of magnitude, as well as an increase in robustness and accuracy, for methods that compute epipolar geometry from dynamic silhouettes. This improvement is based on a new temporal signature: the motion barcode for lines. A motion barcode is a binary temporal sequence indicating, for each frame, whether at least one foreground pixel lies on the line. The motion barcodes of two corresponding epipolar lines are very similar, so the search for corresponding epipolar lines can be limited to lines with similar barcodes. The use of motion barcodes leads to increased speed, accuracy, and robustness in computing the epipolar geometry.
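    The barcode idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the line representation (a list of pixel coordinates), the foreground-mask input, and the similarity measure (zero-mean normalized correlation) are simplifying assumptions.

```python
import numpy as np

def motion_barcode(foreground_masks, line_pixels):
    """Binary sequence over frames: 1 if the line contains at least
    one foreground pixel in that frame, else 0."""
    rows = [r for r, _ in line_pixels]
    cols = [c for _, c in line_pixels]
    return np.array([int(mask[rows, cols].any()) for mask in foreground_masks])

def barcode_similarity(b1, b2):
    """Zero-mean normalized correlation of two barcodes; near 1.0
    for corresponding epipolar lines, lower otherwise."""
    b1 = b1 - b1.mean()
    b2 = b2 - b2.mean()
    denom = np.linalg.norm(b1) * np.linalg.norm(b2)
    return float(b1 @ b2 / denom) if denom else 0.0
```

    Candidate line pairs whose barcodes score highly under such a similarity measure are the only ones that need to be tested as epipolar-line correspondences, which is the source of the reported speed-up.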

    An Epipolar Line from a Single Pixel

    Computing the epipolar geometry from feature points between cameras with very different viewpoints is often error-prone, as an object's appearance can vary greatly between images. For such cases, it has been shown that using motion extracted from video can achieve much better results than using a static image. This paper extends these earlier works based on scene dynamics. We propose a new method to compute the epipolar geometry from a video stream by exploiting the following observation: for a pixel p in Image A, all pixels corresponding to p in Image B lie on the same epipolar line. Equivalently, the image of the line through camera A's center and p is an epipolar line in B. Therefore, when cameras A and B are synchronized, the momentary images of two objects projecting to the same pixel p in camera A at times t1 and t2 lie on an epipolar line in camera B. Based on this observation we achieve fast and precise computation of epipolar lines. Calibrating cameras with our method of finding epipolar lines is much faster and more robust than previous methods.
    Comment: WACV 201
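    The geometric observation can be made concrete with a small sketch: given the image positions in camera B of objects that projected to the same pixel in camera A at different times, the epipolar line is the homogeneous line best fitting those points. This is an illustration under assumed perfectly collinear inputs, not the paper's full pipeline (which must also handle noise and outliers).

```python
import numpy as np

def line_through_points(points_b):
    """Least-squares homogeneous line l = (a, b, c) with l . x ~ 0
    for every observed point x = (u, v, 1) in camera B."""
    X = np.array([[u, v, 1.0] for (u, v) in points_b])
    # The line is the right singular vector for the smallest
    # singular value of the stacked homogeneous points.
    _, _, vt = np.linalg.svd(X)
    return vt[-1]

# Detections in B of objects that shared one pixel in A (collinear: v = u + 1).
pts = [(0.0, 1.0), (2.0, 3.0), (4.0, 5.0)]
l = line_through_points(pts)
```

    With enough such lines, the fundamental matrix can then be estimated from line correspondences rather than point matches.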

    Calibration of Multiple Sparsely Distributed Cameras Using a Mobile Camera

    In sports science research, many topics utilize the body motion of athletes extracted by motion capture systems, since motion information is valuable data for improving an athlete's skills. However, one of the unsolved challenges in motion capture is the extraction of athletes' motion information during an actual game or match, as placing markers on athletes is impractical during game play. In this research, the authors propose a method for acquiring motion information without attaching markers, using computer vision technology. In the proposed method, the three-dimensional world joint positions of the athlete's body can be acquired using just two cameras and no visual markers. Furthermore, the athlete's three-dimensional joint positions during game play can be obtained without complicated preparation. Camera calibration, which estimates the projective relationship between three-dimensional world space and two-dimensional image space, is one of the principal processes for three-dimensional image processing tasks such as three-dimensional reconstruction and three-dimensional tracking. A strong-calibration method, which requires landmarks with known three-dimensional positions, is a common technique. However, as the target space expands, landmark placement becomes increasingly complicated. Although a weak-calibration method does not need known landmarks, its estimation precision depends on the accuracy of correspondences between captured images. When multiple cameras are arranged sparsely, detecting sufficient corresponding points is difficult. In this research, the authors propose a calibration method that bridges multiple sparsely distributed cameras using mobile camera images. Appropriate spacing between the bridging images was confirmed through comparative experiments that evaluated camera calibration accuracy while varying the number of bridging images.
    Furthermore, the proposed method was applied to multiple capturing experiments in a large-scale space to verify its robustness. As a relevant example, the proposed method was applied to the three-dimensional skeleton estimation of badminton players, and a quantitative evaluation of the camera calibration for the three-dimensional skeleton was conducted. The reprojection error of each part of the skeleton and its standard deviation were approximately 2.72 and 0.81 mm, respectively, confirming that the proposed method is highly accurate when applied to camera calibration. The proposed calibration method was also quantitatively compared with a calibration method using the coordinates of eight manually specified points. In conclusion, the proposed method stabilizes calibration accuracy in the vertical direction of the world coordinate system.
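    The reprojection error used as the evaluation metric above can be computed as follows. This is a minimal sketch assuming a known 3x4 projection matrix P per camera; the study's calibration pipeline that estimates P from bridging images is not reproduced here.

```python
import numpy as np

def reprojection_error(P, points_3d, points_2d):
    """Mean Euclidean distance between observed 2D joints and
    3D joints projected through the 3x4 camera matrix P."""
    X = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous
    proj = (P @ X.T).T
    proj = proj[:, :2] / proj[:, 2:3]  # perspective divide
    return float(np.linalg.norm(proj - points_2d, axis=1).mean())
```

    A per-joint version of this quantity, averaged over frames, yields error statistics of the kind reported (mean and standard deviation in millimetres after scaling to world units).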

    Machine Learning-based Detection of Compensatory Balance Responses and Environmental Fall Risks Using Wearable Sensors

    Falls are the leading cause of fatal and non-fatal injuries among seniors worldwide, with serious and costly consequences. Compensatory balance responses (CBRs) are reactions to recover stability following a loss of balance, potentially resulting in a fall if sufficient recovery mechanisms are not activated. While the performance of CBRs is a demonstrated risk factor for falls in seniors, the frequency, type, and underlying causes of these incidents in everyday life have not been well investigated. This study was motivated by the lack of research on fall risk assessment methods that can be used for continuous, long-term mobility monitoring of the geriatric population during activities of daily living and in their dwellings. Wearable sensor systems (WSS) offer a promising approach for continuous real-time detection of gait and balance behavior to assess the risk of falling during activities of daily living. To detect CBRs, we record movement signals (e.g., acceleration) and the activity patterns of four muscles involved in maintaining balance, using wearable inertial measurement units (IMUs) and surface electromyography (sEMG) sensors. To develop more robust detection methods, we investigate machine learning approaches (e.g., support vector machines, neural networks) and successfully detect lateral CBRs during normal gait with accuracies of 92.4% and 98.1% using sEMG and IMU signals, respectively. Moreover, to detect environmental fall-related hazards that are associated with CBRs and affect the balance control behavior of seniors, we employ an egocentric mobile vision system mounted on the participants' chests. Two algorithms, based on Gabor Barcodes and Convolutional Neural Networks, are developed. Our vision-based method detects 17 classes of environmental risk factors (e.g., stairs, ramps, curbs) with 88.5% accuracy.
    To the best of the authors' knowledge, this study is the first to develop and evaluate an automated vision-based method for fall hazard detection.
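    The IMU-plus-SVM detection pipeline can be sketched as follows. Everything here is a stand-in: the window features (mean, standard deviation, peak magnitude), the synthetic acceleration data, and the use of scikit-learn's SVC are illustrative assumptions, not the study's actual feature set or training data.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(window):
    """Toy per-window statistics of one acceleration channel."""
    return [window.mean(), window.std(), np.abs(window).max()]

rng = np.random.default_rng(0)
# Synthetic windows: "normal gait" is low-amplitude noise; a "CBR"
# (compensatory balance response) adds a large transient spike.
normal = [rng.normal(0, 0.3, 100) for _ in range(40)]
cbr = [rng.normal(0, 0.3, 100) + np.where(np.arange(100) == 50, 5.0, 0.0)
       for _ in range(40)]

X = np.array([window_features(w) for w in normal + cbr])
y = np.array([0] * 40 + [1] * 40)  # 0 = normal gait, 1 = CBR

clf = SVC(kernel="rbf").fit(X, y)  # binary CBR detector
```

    The reported accuracies come from evaluating classifiers of this general form on held-out sEMG and IMU recordings; the vision-based hazard branch replaces the feature step with Gabor Barcodes or a CNN over egocentric frames.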