
    Generative Model with Coordinate Metric Learning for Object Recognition Based on 3D Models

    Given a large amount of real photos for training, convolutional neural networks show excellent performance on object recognition tasks. However, collecting such data is tedious, and the limited variety of backgrounds makes it hard to build a comprehensive database. In this paper, our generative model is trained on synthetic images rendered from 3D models, reducing both the workload of data collection and the limitations on capture conditions. Our architecture is composed of two sub-networks: a semantic foreground-object reconstruction network based on Bayesian inference, and a classification network based on a multi-triplet cost function. The latter avoids over-fitting on monotone surfaces and fully exploits pose information by establishing a sphere-like distribution of descriptors within each category, which aids recognition of regular photos according to the pose, lighting condition, background, and category information of the rendered images. First, our conjugate structure, a generative model with metric learning, uses additional foreground-object channels generated by Bayesian rendering as the joint between the two sub-networks. A pose-based multi-triplet cost function drives the metric learning, making it possible to train a category classifier purely on synthetic data. Second, we design a coordinated training strategy in which adaptive noise corrupts the input images, helping both sub-networks benefit from each other and avoiding inharmonious parameter tuning caused by their different convergence speeds. Our model achieves state-of-the-art accuracy of over 50% on the ShapeNet database despite the data-migration obstacle from synthetic images to real photos. This pipeline makes it possible to recognize real images based only on 3D models. Comment: 14 pages
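    The pose-based multi-triplet idea above can be sketched as a hinge loss that pulls an anchor descriptor toward same-category positives and pushes it away from other-category negatives by a margin. This is a minimal illustration, not the paper's implementation; the function name, toy descriptors, and margin value are assumptions.

```python
import numpy as np

def multi_triplet_loss(anchor, positives, negatives, margin=0.2):
    """Hinge-style multi-triplet loss: the anchor should be closer to
    every positive (same category, nearby pose) than to every negative
    (different category) by at least `margin`."""
    losses = []
    for p in positives:
        d_pos = np.sum((anchor - p) ** 2)   # squared distance to a positive
        for n in negatives:
            d_neg = np.sum((anchor - n) ** 2)  # squared distance to a negative
            losses.append(max(0.0, d_pos - d_neg + margin))
    return float(np.mean(losses))

# Toy 2-D descriptors: positives cluster near the anchor, negatives lie far away,
# so every triplet already satisfies the margin and the loss is zero.
anchor = np.array([1.0, 0.0])
positives = [np.array([0.9, 0.1]), np.array([1.1, -0.1])]
negatives = [np.array([-1.0, 0.0]), np.array([0.0, -1.0])]
loss = multi_triplet_loss(anchor, positives, negatives)
```

    Minimizing this loss over many (anchor, positive, negative) groups is what shapes the sphere-like per-category descriptor distributions the abstract describes.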

    One-shot ultraspectral imaging with reconfigurable metasurfaces

    One-shot spectral imaging, which obtains spectral information from thousands of different points in space at one time, has always been difficult to achieve. Its realization makes it possible to acquire real-time dynamic spectral information in space, which is extremely important for both fundamental scientific research and many practical applications. In this study, a one-shot ultraspectral imaging device fitting thousands of micro-spectrometers (6336 pixels) on a chip no larger than 0.5 cm² is proposed and demonstrated. Exotic light modulation is achieved using a unique reconfigurable metasurface supercell with 158400 metasurface units, which enables 6336 micro-spectrometers with dynamic, image-adaptive performance to simultaneously guarantee the density of spectral pixels and the quality of spectral reconstruction. Additionally, by constructing a new algorithm based on compressive sensing, the snapshot device can reconstruct ultraspectral imaging information (Δλ/λ ≈ 0.001) covering a broad (300-nm-wide) visible spectrum with an ultra-high center-wavelength accuracy of 0.04-nm standard deviation and a spectral resolution of 0.8 nm. This reconfigurable-metasurface scheme allows the device to be directly extended to almost any commercial camera with different spectral bands, seamlessly switching between image and spectral image, and will open up a new space for applications of spectral analysis combined with image recognition and intellisense.
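    The compressive-sensing recovery described above can be sketched as solving an underdetermined linear system: each metasurface unit contributes one broadband intensity reading, and the dense spectrum is reconstructed from far fewer readings than spectral channels. The sketch below uses a random sensing matrix and plain Tikhonov-regularized least squares as a stand-in for the paper's solver (which also exploits sparsity priors); all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels = 64   # spectral channels to reconstruct
n_filters = 16    # broadband metasurface responses per spectral pixel

# Hypothetical sensing matrix: each row is one metasurface unit's
# broadband transmission spectrum (the real device calibrates these).
A = rng.uniform(0.0, 1.0, size=(n_filters, n_channels))

# Ground-truth spectrum: a single Gaussian peak across the band.
wl = np.linspace(0.0, 1.0, n_channels)
x_true = np.exp(-((wl - 0.5) ** 2) / (2 * 0.05 ** 2))

y = A @ x_true  # the n_filters intensity readings the camera records

# Tikhonov-regularized least squares: minimize ||A x - y||^2 + lam ||x||^2.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_channels), A.T @ y)
```

    With 16 measurements recovering 64 channels, the reconstruction hinges on the prior encoded by the regularizer; the reconfigurability of the supercell is what lets the device adapt the rows of A to the image content.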

    Single-shot compressed ultrafast photography: a review

    Compressed ultrafast photography (CUP) is a burgeoning single-shot computational imaging technique that provides an imaging speed as high as 10 trillion frames per second and a sequence depth of up to a few hundred frames. This technique synergizes compressed sensing and the streak camera technique to capture nonrepeatable ultrafast transient events with a single shot. With recent unprecedented technical developments and extensions of this methodology, it has been widely used in ultrafast optical imaging and metrology, ultrafast electron diffraction and microscopy, and information security protection. We review the basic principles of CUP, its recent advances in data acquisition and image reconstruction, its fusion with other modalities, and its unique applications in multiple research fields.
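    The CUP forward model combines the two ingredients named above: a pseudo-random spatial code (compressed sensing) and temporal shearing (the streak camera), with all sheared frames integrating onto one detector exposure. The following is a minimal sketch of that forward model under assumed toy dimensions, not an implementation of any published CUP pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

T, H, W = 8, 16, 16                               # frames, height, width
scene = rng.random((T, H, W))                     # nonrepeatable transient event
mask = (rng.random((H, W)) > 0.5).astype(float)   # static pseudo-random binary code

# Streak-camera shearing: frame t is encoded by the mask, shifted down
# by t rows, and all frames integrate onto a single snapshot of shape
# (H + T - 1, W) during one exposure.
snapshot = np.zeros((H + T - 1, W))
for t in range(T):
    snapshot[t:t + H, :] += mask * scene[t]
```

    Reconstruction then inverts this encode-shear-integrate operator, typically with a sparsity-regularized solver, to recover the T frames from the single snapshot.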

    Applying Deep Bidirectional LSTM and Mixture Density Network for Basketball Trajectory Prediction

    Full text link
    Data analytics helps basketball teams create tactics. However, manual data collection and analysis are costly and inefficient. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but can also generate new trajectory samples. It is an excellent application for helping coaches and players decide when and where to shoot. Its structure is particularly suitable for time-series problems. A BLSTM receives forward and backward information at the same time, and stacking multiple BLSTMs further increases the learning ability of the model. Combined with the BLSTMs, an MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of the output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVU data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
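    The MDN component above turns the network's output into the parameters of a Gaussian mixture and trains by negative log-likelihood, which is what lets it represent multi-modal trajectory distributions. Below is a minimal 1-D sketch of that likelihood; the function name, component count, and toy parameters are assumptions, not the paper's configuration.

```python
import numpy as np

def mdn_nll(y, pi, mu, sigma):
    """Negative log-likelihood of target y under a 1-D Gaussian mixture
    with mixing weights pi, means mu, and std-devs sigma (all length K)."""
    comp = pi * np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.log(np.sum(comp))

# Two-component mixture: a target sitting on one mode is far more likely
# (lower loss) than a target halfway between the modes.
pi = np.array([0.5, 0.5])
mu = np.array([0.0, 3.0])
sigma = np.array([0.5, 0.5])
nll_on_mode = mdn_nll(0.0, pi, mu, sigma)
nll_between = mdn_nll(1.5, pi, mu, sigma)
```

    In the full model, pi, mu, and sigma are produced per time step by the stacked BLSTMs, and sampling from the resulting mixture yields the generated trajectories.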