20 research outputs found

    Automated 3D burr detection in cast manufacturing using sparse convolutional neural networks

    For automating deburring of cast parts, this paper proposes a general method for estimating burr height using a 3D vision sensor that is robust to missing data and sensor noise in the scans. Specifically, we present a novel data-driven method that learns features for aligning clean CAD models from a workpiece database to the noisy and incomplete geometry of an RGBD scan. Using the learned features with random sample consensus (RANSAC) for CAD-to-scan registration improves registration over traditional approaches by Δ18.47 mm in translation error and Δ43∘ in rotation error, and by 35% in accuracy. Furthermore, a 3D-vision-based automatic burr detection and height estimation technique is presented. The estimated burr heights were verified against measurements from a high-resolution industrial CT scanning machine. Together with registration, our approach estimates burr heights comparable to those from high-resolution CT scans (Z-statistic z = 0.279).
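    The learned descriptors themselves are specific to the paper, but the RANSAC-based CAD-to-scan registration step it builds on can be sketched generically. Below is a minimal NumPy illustration, assuming putative CAD-to-scan point correspondences (e.g. from nearest-neighbour matching of some descriptor) are already given; the inlier threshold and iteration count are placeholder values, not the paper's settings.

        import numpy as np

        def rigid_transform(src, dst):
            """Kabsch fit: rotation R and translation t mapping src -> dst (both N x 3)."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:      # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = c_dst - R @ c_src
            return R, t

        def ransac_register(cad_pts, scan_pts, n_iter=1000, inlier_thresh=5.0, seed=0):
            """RANSAC over putative correspondences cad_pts[i] <-> scan_pts[i] (units: mm).

            Feature matching (assumed done upstream) supplies the correspondences;
            RANSAC rejects outliers caused by scan noise and missing data."""
            rng = np.random.default_rng(seed)
            best_R, best_t = np.eye(3), np.zeros(3)
            best_inliers = np.zeros(len(cad_pts), bool)
            for _ in range(n_iter):
                idx = rng.choice(len(cad_pts), size=3, replace=False)
                R, t = rigid_transform(cad_pts[idx], scan_pts[idx])
                resid = np.linalg.norm((cad_pts @ R.T + t) - scan_pts, axis=1)
                inliers = resid < inlier_thresh
                if inliers.sum() > best_inliers.sum():
                    best_R, best_t, best_inliers = R, t, inliers
            if best_inliers.sum() >= 3:   # refit on all inliers for the final estimate
                best_R, best_t = rigid_transform(cad_pts[best_inliers], scan_pts[best_inliers])
            return best_R, best_t, best_inliers

    The returned transform would place the CAD model in the scan's coordinate frame, after which burr height can be read off as the deviation of scan points from the aligned CAD surface.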

    Principal Feature Visualisation in Convolutional Neural Networks


    Synthetic data for DNN-based DOA estimation of indoor speech

    This paper investigates the use of different room impulse response (RIR) simulation methods for synthesizing training data for deep neural network-based direction of arrival (DOA) estimation of speech in reverberant rooms. Different sets of synthetic RIRs are obtained using the image source method (ISM) and more advanced methods that include diffuse reflections and/or source directivity. Multi-layer perceptron (MLP) deep neural network (DNN) models are trained on generalized cross correlation (GCC) features extracted for each set. Finally, the models are tested on features obtained from measured RIRs. This study shows the importance of training with RIRs from directive sources: the resulting DOA models achieved up to 51% error reduction compared to the steered response power with phase transform (SRP-PHAT) baseline (significant with p << .01), while models trained with RIRs from omnidirectional sources performed worse than the baseline. The performance difference was observed specifically when estimating the azimuth of speakers not facing the array directly.
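    The paper's exact feature pipeline is not reproduced here, but the generalized cross correlation with phase transform (GCC-PHAT) computation that such features are typically built from is standard; below is a minimal NumPy sketch for one microphone pair (the interpolation factor and the choice of feeding the correlation curve to the DNN are assumptions for illustration, not the paper's configuration).

        import numpy as np

        def gcc_phat(sig, ref, fs, max_tau=None, interp=1):
            """GCC-PHAT between two microphone signals.

            Returns the time-delay estimate (seconds) and the cross-correlation
            curve; the curve around zero lag can serve as a DNN input feature."""
            n = sig.shape[0] + ref.shape[0]
            SIG = np.fft.rfft(sig, n=n)
            REF = np.fft.rfft(ref, n=n)
            R = SIG * np.conj(REF)
            R /= np.abs(R) + 1e-12        # phase transform: keep phase, drop magnitude
            cc = np.fft.irfft(R, n=interp * n)
            max_shift = interp * n // 2
            if max_tau is not None:
                max_shift = min(int(interp * fs * max_tau), max_shift)
            cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
            tau = (np.argmax(np.abs(cc)) - max_shift) / float(interp * fs)
            return tau, cc

    In a DOA pipeline, the correlation curve for each microphone pair, cropped to the physically possible delay range for the array geometry, would typically be stacked into the network's input vector.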

    6D pose estimation for subsea intervention in turbid waters

    Manipulation tasks on subsea installations require extremely precise detection and localization of objects of interest. This problem is referred to as “pose estimation”. In this work, we present a framework for detecting and predicting the 6DoF pose of relevant objects (fish-tail, gauges, and valves) on a subsea panel under varying water turbidity. A deep learning model that takes 3D vision data as input is developed, providing a more robust 6D pose estimate. Compared to a 2D vision deep learning model, the proposed method reduces rotation and translation prediction error by Δ0.39∘ and Δ6.5 mm, respectively, in highly turbid waters. The proposed approach provides object detection as well as 6D pose estimation with an average precision of 91%. The 6D pose estimation results show 2.59∘ and 6.49 cm total average deviation in rotation and translation, respectively, compared to ground-truth data at varying unseen turbidity levels. Furthermore, our approach runs at over 16 frames per second and does not require pose refinement steps. Finally, to facilitate the training of such a model, we also collected and automatically annotated a new underwater 6D pose estimation dataset spanning seven levels of turbidity.
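    The reported rotation and translation deviations correspond to standard pose-error metrics; the sketch below shows one common way to compute them against ground truth, assuming 4x4 homogeneous pose matrices (illustrative only, not the paper's evaluation code).

        import numpy as np

        def pose_errors(T_pred, T_gt):
            """Rotation error (degrees, geodesic angle) and translation error
            (same units as the poses) between predicted and ground-truth 4x4 poses."""
            R_pred, R_gt = T_pred[:3, :3], T_gt[:3, :3]
            t_pred, t_gt = T_pred[:3, 3], T_gt[:3, 3]
            R_delta = R_pred @ R_gt.T
            cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
            rot_err_deg = np.degrees(np.arccos(cos_angle))
            trans_err = np.linalg.norm(t_pred - t_gt)
            return rot_err_deg, trans_err

    Averaging these two errors over all test frames at each turbidity level yields summary numbers of the kind quoted above.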
