Micro-object pose estimation with sim-to-real transfer learning using small dataset

Abstract

Three-dimensional (3D) pose estimation of micro/nano-objects is essential for the implementation of automatic manipulation in micro/nano-robotic systems. However, out-of-plane pose estimation of a micro/nano-object is challenging, since the images are typically obtained in 2D using a scanning electron microscope (SEM) or an optical microscope (OM). Traditional deep learning-based methods require the collection of a large amount of labeled data for model training to estimate the 3D pose of an object from a monocular image. Here we present a sim-to-real learning-to-match approach for 3D pose estimation of micro/nano-objects. Instead of collecting large training datasets, simulated data are generated to enlarge the limited experimental data obtained in practice, while the domain gap between the generated and experimental data is minimized via image translation based on a generative adversarial network (GAN) model. A learning-to-match approach is used to map the generated data and the experimental data to a low-dimensional space with the same data distribution for different pose labels, which ensures effective feature embedding. Combining the labeled data obtained from experiments and simulations, a new training dataset is constructed for robust pose estimation. The proposed method is validated with images from both SEM and OM, facilitating the development of closed-loop control of micro/nano-objects with complex shapes in micro/nano-robotic systems.
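The learning-to-match idea described above can be illustrated with a minimal sketch: an encoder maps images from both the simulated and real domains into a shared low-dimensional embedding space, and the pose of a query image is estimated by nearest-neighbor matching against labeled template embeddings. The random linear projection, feature dimensions, and pose labels below are placeholder assumptions standing in for a trained network, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Project a feature vector into the shared embedding space and
    L2-normalize it, so matching reduces to cosine similarity."""
    z = W @ x
    return z / np.linalg.norm(z)

# Hypothetical encoder: a random linear projection in place of a
# trained network shared by both domains.
dim_in, dim_embed = 256, 16
W = rng.standard_normal((dim_embed, dim_in))

# Labeled "simulated" templates: one feature vector per pose label
# (e.g. out-of-plane rotation angle in degrees; values are illustrative).
pose_labels = [0.0, 30.0, 60.0, 90.0]
templates = [rng.standard_normal(dim_in) for _ in pose_labels]
template_embeds = np.stack([encode(t, W) for t in templates])

# A "real" query close to the 60-degree template (template plus noise),
# mimicking a domain-translated experimental image.
query = templates[2] + 0.05 * rng.standard_normal(dim_in)
q = encode(query, W)

# Nearest neighbor in the embedding space gives the pose estimate.
best = int(np.argmax(template_embeds @ q))
estimated_pose = pose_labels[best]
print(estimated_pose)  # -> 60.0
```

In the paper's setting the encoder would be trained so that embeddings cluster by pose label regardless of domain, which is what makes this nearest-neighbor matching reliable across the sim-to-real gap.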
