    Inertial sensor-based knee flexion/extension angle estimation

    A new method for estimating knee joint flexion/extension angles from segment acceleration and angular velocity data is described. The approach uses a combination of Kalman filters and biomechanical constraints based on anatomical knowledge. In contrast to many recently published methods, the proposed approach does not use the earth's magnetic field and is hence insensitive to the complex field distortions commonly found in modern buildings. The method was validated experimentally by calculating knee angle from measurements taken from two IMUs placed on adjacent body segments. In contrast to many previous studies, which validated their approaches during relatively slow activities or over short durations, the performance of the algorithm was evaluated during both walking and running over 5-minute periods. Seven healthy subjects were tested at speeds from 1 to 5 miles/hour. Errors were estimated by comparing the results against data obtained simultaneously from a 10-camera motion tracking system (Qualisys). The average measurement error ranged from 0.7 degrees for slow walking (1 mph) to 3.4 degrees for running (5 mph). The joint constraint used in the IMU analysis was derived from the Qualisys data. Limitations of the method, its clinical application and its possible extension are discussed.
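
    The paper's filter itself is not reproduced in the abstract, so purely as an illustration of magnetometer-free joint-angle estimation: the sketch below blends integrated relative gyroscope data with accelerometer-derived segment inclinations in a complementary filter. All names and the filter structure are assumptions made for this sketch, standing in for, not reproducing, the paper's Kalman-filter-plus-constraints method.

        import numpy as np

        def complementary_knee_angle(gyro_rel, accel_thigh, accel_shank, dt, alpha=0.98):
            """Planar knee flexion/extension angle (radians) from two IMUs.

            gyro_rel    : (N,) relative angular velocity about the flexion axis
                          (shank gyro minus thigh gyro), rad/s
            accel_thigh : (N, 2) sagittal-plane thigh accelerometer components
            accel_shank : (N, 2) sagittal-plane shank accelerometer components
            """
            angle = np.zeros(len(gyro_rel))
            for k in range(1, len(gyro_rel)):
                # Gyro path: integrate relative angular velocity (smooth, but drifts).
                predicted = angle[k - 1] + gyro_rel[k] * dt
                # Accel path: segment inclinations from gravity (drift-free, but
                # noisy during dynamic phases such as running).
                incl_thigh = np.arctan2(accel_thigh[k, 0], accel_thigh[k, 1])
                incl_shank = np.arctan2(accel_shank[k, 0], accel_shank[k, 1])
                measured = incl_shank - incl_thigh
                # Complementary blend: trust the gyro at high frequency,
                # the accelerometer at low frequency.
                angle[k] = alpha * predicted + (1 - alpha) * measured
            return angle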

    Visual-inertial self-calibration on informative motion segments

    Environmental conditions and external effects, such as shocks, have a significant impact on the calibration parameters of visual-inertial sensor systems. Thus, long-term operation of these systems cannot rely fully on factory calibration. Since the observability of certain parameters depends strongly on the motion of the device, using short data segments at device initialization may yield poor results. When such systems are additionally subject to energy constraints, full-batch approaches on a large dataset are infeasible and careful selection of the data becomes highly important. In this paper, we present a novel approach for resource-efficient self-calibration of visual-inertial sensor systems. This is achieved by casting the calibration as a segment-based optimization problem that can be run on a small subset of informative segments. Consequently, the computational burden is limited, as only a predefined number of segments is used. We also propose an efficient information-theoretic selection scheme to identify such informative motion segments. In evaluations on a challenging dataset, we show that our approach significantly outperforms the state of the art in terms of computational burden while maintaining comparable accuracy.
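
    As a schematic of the information-theoretic selection idea (not the paper's exact criterion), the sketch below scores candidate segments by the log-determinant gain of an approximate Fisher information matrix and greedily keeps the most informative ones. The per-segment Jacobians are assumed to be supplied by the calibration problem.

        import numpy as np

        def select_informative_segments(jacobians, k, prior=1e-6):
            """Greedily pick k motion segments by information gain.

            jacobians : list of (m_i, p) arrays mapping the p calibration
                        parameters to segment i's residuals (assumed given).
            """
            p = jacobians[0].shape[1]
            info = prior * np.eye(p)                # regularizing prior
            fishers = [J.T @ J for J in jacobians]  # per-segment information
            chosen = []
            for _ in range(k):
                base = np.linalg.slogdet(info)[1]
                best, best_gain = None, -np.inf
                for i, F in enumerate(fishers):
                    if i in chosen:
                        continue
                    # D-optimality: gain in log-determinant of total information.
                    gain = np.linalg.slogdet(info + F)[1] - base
                    if gain > best_gain:
                        best, best_gain = i, gain
                chosen.append(best)
                info = info + fishers[best]
            return chosen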

    Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor

    Context: Image processing and computer vision are rapidly becoming more commonplace, and the amount of information about a scene, such as 3D geometry, that can be obtained from one or more images is steadily increasing, driven by rising sensor resolutions, the wide availability of imaging devices and an active research community. In parallel, advances in hardware design and manufacturing allow devices such as gyroscopes, accelerometers, magnetometers and GPS receivers to be included alongside imaging devices at the consumer level. Aims: This work investigates the use of orientation sensors in computer vision as sources of data to aid image processing and the determination of a scene's geometry, in particular the epipolar geometry of a pair of images, and devises a hybrid methodology from two sets of previous works in order to exploit the information available from orientation sensors alongside data gathered from image processing techniques. Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes and record the orientation of the camera. The fundamental matrix of each image pair was calculated using a variety of techniques, both incorporating and excluding the orientation sensor data. Results: Some methodologies could not produce an acceptable fundamental matrix for certain image pairs, whereas a method described in the literature that used an orientation sensor always produced a result; however, in cases where the hybrid or purely computer-vision methods also produced a result, the sensor-only result was found to be the least accurate. Conclusion: The results show that capturing orientation-sensor information alongside an imaging device can improve both the accuracy and the reliability of calculations of the scene's geometry; however, noise from the orientation sensor can limit this accuracy, and further research is needed to determine the magnitude of this problem and methods of mitigation.
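
    One way such a hybrid can work, sketched below under the assumption of known, shared camera intrinsics: if the orientation sensor supplies the relative rotation R between the two views, the essential matrix E = [t]_x R becomes linear in the unknown translation t, which can then be recovered from point correspondences by homogeneous least squares. This is an illustrative reconstruction, not the thesis's exact method, and all names are hypothetical.

        import numpy as np

        def skew(v):
            """Cross-product matrix [v]_x, so that skew(v) @ w == np.cross(v, w)."""
            return np.array([[0.0, -v[2], v[1]],
                             [v[2], 0.0, -v[0]],
                             [-v[1], v[0], 0.0]])

        def fundamental_with_known_rotation(pts1, pts2, K, R):
            """Fundamental matrix given the relative rotation R from a sensor.

            pts1, pts2 : (N, 2) matched pixel coordinates, N >= 2
            K          : (3, 3) shared camera intrinsics (assumed known)
            With R fixed, E = [t]_x R is linear in t: each match gives
            one equation t . ((R x1) x x2) = 0.
            """
            Kinv = np.linalg.inv(K)
            rows = []
            for (u1, v1), (u2, v2) in zip(pts1, pts2):
                x1 = Kinv @ np.array([u1, v1, 1.0])  # normalized coordinates
                x2 = Kinv @ np.array([u2, v2, 1.0])
                rows.append(np.cross(R @ x1, x2))
            # t (up to scale) is the null vector of the stacked constraints.
            _, _, Vt = np.linalg.svd(np.asarray(rows))
            t = Vt[-1]
            F = Kinv.T @ skew(t) @ R @ Kinv
            return F / np.linalg.norm(F)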

    It's the Human that Matters: Accurate User Orientation Estimation for Mobile Computing Applications

    The ubiquity of Internet-connected and sensor-equipped portable devices has sparked a new set of mobile computing applications that leverage the proliferating sensing capabilities of smartphones. For many of these applications, accurate estimation of the user heading, as opposed to the phone heading, is of paramount importance. This is especially true for crowd-sensing applications, where the phone can be carried in arbitrary positions and orientations relative to the user's body. Current state-of-the-art approaches focus mainly on estimating the phone orientation, require the phone to be placed in a particular position, require user intervention, and/or do not work accurately indoors, which limits their usability across applications. In this paper we present Humaine, a novel system to reliably and accurately estimate the user orientation relative to the Earth coordinate system. Humaine requires no prior configuration or user intervention and works accurately indoors and outdoors for arbitrary cell phone positions and orientations relative to the user's body. The system applies statistical analysis techniques to the inertial sensors widely available on today's cell phones to estimate both the phone and user orientation. An implementation of the system on different Android devices, with 170 experiments performed at different indoor and outdoor testbeds, shows that Humaine significantly outperforms the state of the art in diverse scenarios, achieving a median accuracy of 15° averaged over a wide variety of phone positions, 558% better than the state of the art. The accuracy is bounded by the error in the inertial sensor readings and can be enhanced with more accurate sensors and sensor fusion. Comment: Accepted for publication in the 11th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (Mobiquitous 2014).
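
    Humaine's statistical pipeline is not detailed in the abstract; purely as an illustration of separating user heading from phone orientation, the sketch below projects accelerometer samples onto the horizontal plane and takes the principal axis of the walking-induced oscillation as the direction of travel. This is a generic stand-in, not the paper's method, and the 180° forward/backward ambiguity of this simple approach is left unresolved.

        import numpy as np

        def walking_heading(accel, gravity):
            """Direction of travel from phone accelerometer data while walking.

            accel   : (N, 3) accelerometer samples in the phone frame
            gravity : (3,) low-pass-filtered gravity estimate, same frame
            Returns a unit vector in the horizontal plane; the sign is
            ambiguous (PCA cannot tell forward from backward on its own).
            """
            g = gravity / np.linalg.norm(gravity)
            # Project out the vertical component to isolate horizontal motion.
            horiz = accel - np.outer(accel @ g, g)
            horiz -= horiz.mean(axis=0)
            # The principal axis of the horizontal oscillation approximates
            # the walking direction, regardless of how the phone is held.
            _, _, Vt = np.linalg.svd(horiz, full_matrices=False)
            return Vt[0]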

    Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

    We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker, Sparse Inertial Poser (SIP), enables 3D human pose estimation using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall. Comment: 12 pages, accepted at Eurographics 2017.
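
    The shape of SIP's joint optimization can be sketched as a nonlinear least-squares problem over a window of frames, with orientation and acceleration residuals against the sparse IMU measurements. In the sketch below the body model (SMPL in the paper) is abstracted behind a user-supplied forward-kinematics callable, so this is a structural sketch under stated assumptions rather than the published objective.

        import numpy as np
        from scipy.optimize import least_squares

        def sip_style_residuals(pose, fk, meas_R, meas_a, w_ori=1.0, w_acc=0.1):
            """Stacked residuals for a SIP-style window optimization.

            fk(pose) -> (pred_R, pred_a): predicted sensor orientations
            (T, S, 3, 3) and accelerations (T, S, 3) for T frames and S
            sensors; the body model behind fk is a placeholder here.
            """
            pred_R, pred_a = fk(pose)
            # Orientation term: matrix-difference surrogate for the
            # rotational discrepancy at each sensor and frame.
            r_ori = (pred_R - meas_R).ravel()
            # Acceleration term: model accelerations vs. IMU readings.
            r_acc = (pred_a - meas_a).ravel()
            return np.concatenate([w_ori * r_ori, w_acc * r_acc])

        # Hypothetical usage, given a forward-kinematics function fk and
        # an initial pose estimate pose0:
        #   sol = least_squares(sip_style_residuals, pose0,
        #                       args=(fk, measured_R, measured_a))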

    The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM

    New vision sensors, such as the Dynamic and Active-pixel Vision Sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called "events") and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows the pose accuracy of ego-motion estimation algorithms to be compared quantitatively. All the data are released as both standard text files and binary (i.e., rosbag) files. This paper provides an overview of the available data and describes a simulator that we release open source to create synthetic event-camera data. Comment: 7 pages, 4 figures, 3 tables.
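
    Assuming the released text files follow the common one-event-per-line convention ("timestamp x y polarity", with integer pixel coordinates, 0/1 polarity, and the DAVIS240's 240x180 resolution), a minimal reader that accumulates events into a signed image might look like the sketch below; the function name and defaults are assumptions for illustration.

        import numpy as np

        def accumulate_events(path, width=240, height=180, t0=0.0, t1=float("inf")):
            """Integrate a plain-text event stream into a signed event image.

            Assumes one "timestamp x y polarity" record per line, with
            time-ordered events, integer pixel coordinates and 0/1 polarity.
            """
            img = np.zeros((height, width), dtype=np.int32)
            with open(path) as f:
                for line in f:
                    t, x, y, p = line.split()
                    t = float(t)
                    if t < t0:
                        continue
                    if t > t1:
                        break  # events are time-ordered
                    # Map polarity {0, 1} to {-1, +1} and accumulate per pixel.
                    img[int(y), int(x)] += 1 if p == "1" else -1
            return img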

    A factorization approach to inertial affine structure from motion

    We consider the problem of reconstructing a 3-D scene from a moving camera with a high frame rate using the affine projection model. This problem is traditionally known as Affine Structure from Motion (Affine SfM) and can be solved using an elegant low-rank factorization formulation. In this paper, we assume that an accelerometer and a gyroscope are rigidly mounted with the camera, so that synchronized linear acceleration and angular velocity measurements are available together with the image measurements. We extend the standard Affine SfM algorithm to integrate these measurements through the use of image derivatives.
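
    The low-rank factorization this abstract builds on is the classical affine (Tomasi-Kanade-style) one: after removing per-frame centroids, the 2F x P measurement matrix has rank at most 3 and splits into motion and structure via a truncated SVD. A minimal image-only version, without the paper's inertial extension:

        import numpy as np

        def affine_factorization(W):
            """Classical rank-3 affine SfM factorization (image-only baseline).

            W : (2F, P) measurement matrix stacking the x- and y-coordinates
                of P points tracked over F frames.
            Returns (M, S) with W - t ~= M @ S, up to an affine ambiguity.
            """
            # Per-frame translation is the centroid of the tracked points.
            t = W.mean(axis=1, keepdims=True)
            W0 = W - t
            # The centered matrix has rank <= 3; split it with a truncated SVD.
            U, s, Vt = np.linalg.svd(W0, full_matrices=False)
            M = U[:, :3] * np.sqrt(s[:3])         # (2F, 3) affine motion
            S = np.sqrt(s[:3])[:, None] * Vt[:3]  # (3, P) structure
            return M, S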
