
    3D Orientation Estimation Using Inertial Sensors

    Recently, inertial sensors have been widely used to measure 3D orientation because of their small size and relatively low cost. One useful application in neurorehabilitation is assessing the upper limb motion of patients undergoing treatment. In this paper, the computation of 3D orientation from the outputs of accelerometers, gyroscopes and magnetometers is discussed. Different 3D orientation representations are compared to give recommendations for different use scenarios. Based on the resulting 3D orientations, 2D and 3D position tracking techniques are also derived by considering the joint links and kinematic constraints of the upper limb segments. The results showed that a complementary filter yields a good estimate of the orientation.
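    The complementary-filter fusion mentioned above is a standard technique; a minimal sketch is given below, restricted to roll and pitch, assuming gyroscope rates in rad/s and raw accelerometer readings as inputs. The gain alpha and the variable names are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def complementary_filter(gyro, accel, dt, alpha=0.98):
        """Blend integrated gyro rates (good at high frequency) with the
        accelerometer's gravity direction (good at low frequency)."""
        roll, pitch = 0.0, 0.0
        angles = []
        for w, a in zip(gyro, accel):
            # Tilt from gravity; valid only when external acceleration is small.
            roll_acc = np.arctan2(a[1], a[2])
            pitch_acc = np.arctan2(-a[0], np.hypot(a[1], a[2]))
            # High-pass the gyro path, low-pass the accelerometer path.
            roll = alpha * (roll + w[0] * dt) + (1 - alpha) * roll_acc
            pitch = alpha * (pitch + w[1] * dt) + (1 - alpha) * pitch_acc
            angles.append((roll, pitch))
        return np.array(angles)
    ```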

    3D Orientation Estimation with Multiple 5G mmWave Base Stations

    We consider the problem of estimating the 3D orientation of a user from the downlink mmWave signals received from multiple base stations. We show that the signals received from several base stations with known positions can be used to estimate the unknown orientation of the user. We formulate the problem as maximum likelihood estimation on the manifold of rotation matrices. To provide an initial estimate for this non-linear, non-convex optimization problem, we resort to a least squares estimator that exploits the underlying geometry. Our numerical results show that the orientation can be estimated when signals from at least two base stations are received. We also derive a lower bound on the orientation error, showing a narrow gap between the performance of the proposed estimators and the bound.
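    The abstract does not give the least-squares initializer in closed form, but a common geometry-exploiting choice for rotation estimation from paired direction vectors is the SVD solution of the orthogonal Procrustes (Wahba) problem. The sketch below is that generic construction, not necessarily the authors' exact estimator.

    ```python
    import numpy as np

    def initial_rotation(local_dirs, global_dirs):
        """local_dirs, global_dirs: (N, 3) unit vectors with
        local_dirs[i] ~= R @ global_dirs[i]; returns R in SO(3)."""
        B = local_dirs.T @ global_dirs            # attitude profile matrix
        U, _, Vt = np.linalg.svd(B)
        d = np.sign(np.linalg.det(U @ Vt))        # enforce det(R) = +1
        return U @ np.diag([1.0, 1.0, d]) @ Vt
    ```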

    Implicit 3D Orientation Learning for 6D Object Detection from RGB Images

    We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: It does not require real, pose-annotated training data, generalizes to various test sensors and inherently handles object and view symmetries. Instead of learning an explicit mapping from input images to object poses, it provides an implicit representation of object orientations defined by samples in a latent space. Our pipeline achieves state-of-the-art performance on the T-LESS dataset both in the RGB and RGB-D domain. We also evaluate on the LineMOD dataset where we can compete with other synthetically trained approaches. We further increase performance by correcting 3D orientation estimates to account for perspective errors when the object deviates from the image center and show extended results. Comment: Code available at: https://github.com/DLR-RM/AugmentedAutoencode
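    The "implicit representation" boils down to nearest-neighbor retrieval in the latent space: a codebook of latent codes is precomputed from rendered views with known rotations, and a test crop is matched by cosine similarity. A minimal sketch follows; `encoder`, `codebook` and `rotations` are hypothetical placeholders standing in for the trained Augmented Autoencoder and its precomputed views.

    ```python
    import numpy as np

    def estimate_orientation(crop, encoder, codebook, rotations):
        """codebook: (K, D) latent codes of rendered views;
        rotations: (K, 3, 3) rotation matrices of those views."""
        z = encoder(crop)                          # (D,) latent code of the crop
        sims = (codebook @ z) / (np.linalg.norm(codebook, axis=1)
                                 * np.linalg.norm(z) + 1e-9)
        return rotations[np.argmax(sims)]          # pose of closest rendered view
    ```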

    3D Bounding Box Estimation Using Deep Learning and Geometry

    We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark, both on the official metric of 3D orientation estimation and on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance-level segmentation, flat ground priors, and sub-category detection. Our discrete-continuous loss also produces state-of-the-art results for 3D viewpoint estimation on the Pascal 3D+ dataset. Comment: To appear in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 201
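    The hybrid discrete-continuous ("MultiBin") idea is to classify the heading angle into one of several overlapping bins and regress a (cos, sin) residual within the chosen bin. The decoding step might look like the sketch below; the bin layout and output format are assumptions for illustration.

    ```python
    import numpy as np

    def decode_multibin(conf, residuals, num_bins=2):
        """conf: (num_bins,) bin confidences; residuals: (num_bins, 2)
        holding (cos, sin) of the offset from each bin center."""
        centers = 2 * np.pi * np.arange(num_bins) / num_bins
        i = int(np.argmax(conf))                               # discrete choice
        offset = np.arctan2(residuals[i, 1], residuals[i, 0])  # continuous part
        return (centers[i] + offset) % (2 * np.pi)
    ```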

    Automatic 3D seed location and orientation in CT images for prostate brachytherapy

    In prostate brachytherapy, analysis of the 3D pose of each individual implanted seed is critical for dose calculation and procedure quality assessment. This paper addresses the development of an automatic image processing solution for the separation, localization and 3D orientation estimation of prostate seeds. The solution combines an initial detection of a set of seed candidates in CT images (using a thresholding and connected-component method) with an orientation estimation using principal component analysis (PCA). The main originality of the work is the ability to classify the detected objects based on a priori intensity and volume information and to separate groups of seeds using a modified k-means method. Experiments were carried out on CT images of a phantom and a patient, comparing the proposed solution with manual segmentation and previous work in terms of detection performance and computation time.
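    The PCA step is simple enough to state concretely: the long axis of a seed is the dominant eigenvector of the covariance of its voxel coordinates. A minimal sketch, assuming the voxels of one detected component are already available:

    ```python
    import numpy as np

    def seed_axis(voxel_coords):
        """voxel_coords: (N, 3) coordinates of one connected component;
        returns a unit vector along the seed's long axis."""
        centered = voxel_coords - voxel_coords.mean(axis=0)
        cov = centered.T @ centered / len(centered)
        eigvals, eigvecs = np.linalg.eigh(cov)
        return eigvecs[:, np.argmax(eigvals)]   # principal axis
    ```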

    A Complementary Filter Design on SE(3) to Identify Micro-Motions during 3D Motion Tracking

    In 3D motion capture, multiple methods have been developed to optimize the quality of the captured data. While certain technologies, such as inertial measurement units (IMUs), are mostly suitable for 3D orientation estimation at relatively high frequencies, other technologies, such as marker-based motion capture, are more suitable for 3D position estimation at a lower frequency range. In this work, we introduce a complementary filter that complements 3D motion capture data with high-frequency acceleration signals from an IMU. While the local optimization reduces the error of the motion tracking, the additional accelerations can help to detect micro-motions that are useful when dealing with high-frequency human motions or robotic applications. The combination with high-frequency accelerometers improves the accuracy of the data and helps to overcome limitations of motion capture when micro-motions are not traceable with a 3D motion tracking system. In our experimental evaluation, we demonstrate the improvements of the motion capture results during translational, rotational, and combined movements.
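    As a rough illustration of the position half of such a fusion (the paper works on SE(3); this sketch covers translation only, with an illustrative first-order blend and no treatment of velocity drift), low-rate marker positions can be combined with doubly integrated, gravity-compensated IMU accelerations:

    ```python
    import numpy as np

    def fuse_position(mocap_pos, imu_acc, dt, alpha=0.95):
        """mocap_pos: (N, 3) marker positions resampled to the IMU rate;
        imu_acc: (N, 3) gravity-compensated accelerations, world frame."""
        pos, vel = mocap_pos[0].copy(), np.zeros(3)
        fused = []
        for p_meas, a in zip(mocap_pos, imu_acc):
            vel += a * dt                      # integrate acceleration
            pred = pos + vel * dt              # high-frequency prediction
            pos = alpha * pred + (1 - alpha) * p_meas   # pull back to mocap
            fused.append(pos.copy())
        return np.array(fused)
    ```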

    GPU-accelerated ray-casting for 3D fiber orientation analysis

    Orientation analysis of fibers is widely applied in the medical, material and life sciences. The orientation information allows predicting properties and behavior of materials in order to validate and guide the fabrication of materials with controlled fiber orientation. Meanwhile, the development of detector systems for high-resolution non-invasive 3D imaging techniques has led to a significant increase in the amount of data generated per sample, up to dozens of gigabytes. Although plenty of 3D orientation estimation algorithms have been developed in recent years, none of them can process large datasets in a reasonable amount of time. This complicates further analysis and makes fast feedback to adjust fabrication parameters impossible. In this work, we present a new method for quantifying the 3D orientation of fibers. The GPU implementation of the proposed method surpasses another popular method for 3D orientation analysis in both accuracy and speed. The validation of both methods was performed on a synthetic dataset with varying fiber parameters. Moreover, the proposed method was applied to the orientation analysis of scaffolds with different fibrous micro-architectures studied with a synchrotron μCT imaging setup. Each acquired dataset of 600x600x450 voxels was analyzed in less than 2 minutes on a standard PC equipped with a single GPU.
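    The abstract does not spell out the ray-casting rule, but one plausible reading is: from each fiber voxel, march along a set of candidate directions and take the direction with the longest in-fiber run as the local orientation. The sketch below follows that assumption; the direction set, step count and intensity threshold are all illustrative.

    ```python
    import numpy as np

    def fiber_direction(volume, voxel, directions, max_steps=20, thr=0.5):
        """volume: 3D array scaled to [0, 1]; voxel: (3,) index;
        directions: (K, 3) unit vectors to test."""
        best_len, best_dir = -1, None
        for d in directions:
            n = 0
            for step in range(1, max_steps + 1):
                p = np.round(voxel + step * d).astype(int)
                if (np.any(p < 0) or np.any(p >= volume.shape)
                        or volume[tuple(p)] < thr):
                    break                      # left the fiber or the volume
                n += 1
            if n > best_len:                   # keep the longest run
                best_len, best_dir = n, d
        return best_dir
    ```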

    Segmentation, separation and pose estimation of prostate brachytherapy seeds in CT images.

    In this paper, we address the development of an automatic approach for computing the pose (position + orientation) of loose prostate brachytherapy seeds from 3D CT images. Starting from an initial detection of a set of seed candidates in CT images using a threshold and connected-component method, the orientation of each individual seed is estimated using principal component analysis (PCA). The main originality of this approach is the ability to classify the detected objects based on a priori intensity and volume information and to separate groups of closely spaced seeds using three competing clustering methods: the standard and a modified k-means method, and a Gaussian mixture model with an expectation-maximization algorithm. Experiments were carried out on a series of CT images of two phantoms and of patients; the fourteen patients correspond to a total of 1063 implanted seeds. Detections are compared to manual segmentation and to related work in terms of detection performance and computation time. The automatic method proved to be accurate and fast, able to separate groups of seeds reliably and to determine the orientation of each seed. Such a method is essential for precisely computing the dose actually delivered to the patient post-operatively, instead of assuming the seeds are aligned along the theoretical insertion direction of the brachytherapy needles.
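    The separation step can be sketched concretely: when a component's voxel count suggests it holds k fused seeds, split it with k-means (here via scikit-learn) and fit one axis per cluster. The volume-based choice of k is an assumption for illustration; the paper also compares a modified k-means and a GMM with EM.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def separate_seeds(voxel_coords, single_seed_volume):
        """voxel_coords: (N, 3) voxels of one connected component;
        returns one (N_i, 3) array of voxels per estimated seed."""
        k = max(1, round(len(voxel_coords) / single_seed_volume))
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(voxel_coords)
        return [voxel_coords[labels == i] for i in range(k)]
    ```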