
    Geodesic Distance Histogram Feature for Video Segmentation

    This paper proposes a geodesic-distance-based feature that encodes global information for improved video segmentation. The feature is a joint histogram of intensity and geodesic distances, where the geodesic distance between two superpixels is computed as the shortest path between them via their boundaries. We also incorporate adaptive voting weights and spatial pyramid configurations to add spatial information to the geodesic histogram feature, and show that this further improves results. The feature is generic and can be used as part of various algorithms. In experiments, we test the geodesic histogram feature by incorporating it into two existing video segmentation frameworks, which leads to significantly better performance on 3D video segmentation benchmarks across two datasets.
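    As a rough illustration of the core computation only (not the authors' code), the sketch below builds a joint intensity/geodesic-distance histogram over a toy superpixel graph, assuming the boundary-crossing costs are already given as edge weights:

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import shortest_path

    def geodesic_histogram(intensities, edges, edge_costs, source, bins=(8, 8)):
        """Joint histogram of superpixel intensity and geodesic distance
        from a source superpixel (sketch; boundary costs are assumed given)."""
        n = len(intensities)
        rows, cols = zip(*edges)
        w = csr_matrix((edge_costs, (rows, cols)), shape=(n, n))
        # Geodesic distance = shortest path over the undirected superpixel graph.
        dist = shortest_path(w, directed=False, indices=source)
        hist, _, _ = np.histogram2d(intensities, dist, bins=bins)
        return hist / hist.sum()  # normalised joint histogram

    # Toy example: four superpixels in a chain, with hypothetical costs.
    h = geodesic_histogram(
        intensities=np.array([0.1, 0.4, 0.6, 0.9]),
        edges=[(0, 1), (1, 2), (2, 3)],
        edge_costs=[1.0, 2.0, 1.0],
        source=0,
        bins=(2, 2),
    )
    ```

    In the paper the feature is computed per superpixel and combined with adaptive voting weights and spatial pyramids; the sketch shows only the histogram itself.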

    Implementation of Convolutional Neural Network Method in Identifying Fashion Image

    The fashion industry has changed considerably over the years, which makes it hard to compare different kinds of fashion; many styles of clothing must be tried out to find the exact look desired. We therefore employ the Convolutional Neural Network (CNN) method for fashion classification, one of the approaches used to make computers recognise and categorise items. The goal of this research is to evaluate how well the CNN method classifies the Fashion-MNIST dataset compared with the methods, models, and classification processes used in previous research. The dataset covers different types of clothes and accessories, divided into 10 categories: ankle boots, bags, coats, dresses, pullovers, sandals, shirts, sneakers, t-shirts, and trousers. The proposed classification method outperforms earlier work on the test dataset, with an accuracy of 95.92%, higher than in previous research. This research also uses an image data generator to augment the Fashion-MNIST images, which helps prevent overfitting and makes the results more accurate.
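    To make the pipeline concrete, here is a minimal NumPy sketch of the basic CNN building blocks (convolution, ReLU, max pooling, dense softmax) run on a random 28x28 image of Fashion-MNIST's size; the paper's actual architecture, training loop, and image-data-generator augmentation are not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def conv2d(img, kernel):
        """Valid 2D convolution (single channel) -- the core CNN operation."""
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.empty((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        return out

    def max_pool(x, size=2):
        """Non-overlapping max pooling."""
        h, w = x.shape[0] // size * size, x.shape[1] // size * size
        return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

    # Toy forward pass: conv -> ReLU -> pool -> dense -> softmax over 10 classes.
    img = rng.random((28, 28))                                # stand-in for one image
    feat = max_pool(np.maximum(conv2d(img, rng.random((3, 3))), 0))  # 26x26 -> 13x13
    logits = feat.ravel() @ rng.random((13 * 13, 10))         # dense layer, 10 classes
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                      # softmax probabilities
    ```

    A trained model would learn the kernel and dense weights from the 10-class labels; here they are random, so the output is only shape-correct, not meaningful.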

    TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation

    In this paper, we introduce neural texture learning for 6D object pose estimation from synthetic data and a few unlabelled real images. Our major contribution is a novel learning scheme which removes the drawbacks of previous works, namely the strong dependency on co-modalities or additional refinement, which were previously necessary to provide training signals for convergence. We formulate the scheme as two sub-optimisation problems, one on texture learning and one on pose learning: we separately learn to predict realistic object textures from real image collections and learn pose estimation from pixel-perfect synthetic data. Combining these two capabilities then allows us to synthesise photorealistic novel views that supervise the pose estimator with accurate geometry. To alleviate the pose noise and segmentation imperfections present during the texture learning phase, we propose a surfel-based adversarial training loss together with texture regularisation from synthetic data. We demonstrate that the proposed approach significantly outperforms recent state-of-the-art methods without ground-truth pose annotations and shows substantial generalisation improvements towards unseen scenes. Remarkably, our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance.
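    The two sub-optimisation problems are solved in alternation. As an illustration of that alternating structure only, the toy below takes gradient steps on a coupled quadratic, with one scalar standing in for the texture model and the other for the pose estimator; the paper's actual sub-problems are neural networks:

    ```python
    # Hypothetical stand-in objective coupling a "texture" variable t and a
    # "pose" variable p; it only demonstrates the alternating scheme.
    def objective(t, p):
        return (t - 1.0) ** 2 + (p - 2.0) ** 2 + 0.1 * (t - p) ** 2

    t, p = 5.0, -3.0   # deliberately poor initialisation
    lr = 0.1
    for _ in range(200):
        # Sub-problem 1: update "texture" with "pose" held fixed.
        grad_t = 2 * (t - 1.0) + 0.2 * (t - p)
        t -= lr * grad_t
        # Sub-problem 2: update "pose" with the new "texture" held fixed.
        grad_p = 2 * (p - 2.0) - 0.2 * (t - p)
        p -= lr * grad_p
    ```

    Even from a poor start, alternating the two updates drives the joint objective to its minimum, mirroring how the paper's pose estimator improves despite inferior initialisation.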

    3D-Aware Ellipse Prediction for Object-Based Camera Pose Estimation

    In this paper, we propose a method for coarse camera pose computation which is robust to viewing conditions and does not require a detailed model of the scene. It meets the growing need for easy deployment of robotics or augmented reality applications in any environment, especially those for which no accurate 3D model nor large amount of ground-truth data is available. The method exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions. Previous works have also shown that abstracting the geometry of a scene of objects by an ellipsoid cloud makes it possible to compute the camera pose accurately enough for various application needs. Though promising, these approaches use the ellipses fitted to the detection bounding boxes as an approximation of the imaged objects. In this paper, we go one step further and propose a learning-based method which detects improved elliptic approximations of objects that are coherent with the 3D ellipsoids in terms of perspective projection. Experiments prove that the accuracy of the computed pose increases significantly thanks to our method and is more robust to the variability of the detection-box boundaries. This is achieved with very little effort in terms of training data acquisition: a few hundred calibrated images, of which only three need manual object annotation. Code and models are released at https://github.com/zinsmatt/3D-Aware-Ellipses-for-Visual-Localization
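    The coherence between a 3D ellipsoid and its elliptic image rests on the standard dual-quadric projection C* = P Q* P^T from multiple-view geometry. Below is a small NumPy sketch of that projection, using an illustrative axis-aligned ellipsoid and pinhole camera (not the paper's learned detector):

    ```python
    import numpy as np

    def ellipsoid_dual_quadric(center, axes):
        """Dual quadric Q* of an axis-aligned ellipsoid (illustrative; no rotation)."""
        D = np.diag([axes[0] ** 2, axes[1] ** 2, axes[2] ** 2, -1.0])
        T = np.eye(4)
        T[:3, 3] = center                 # translate the ellipsoid to its centre
        return T @ D @ T.T

    def project_dual_quadric(P, Q_dual):
        """Project a dual quadric to a dual conic: C* = P Q* P^T."""
        C = P @ Q_dual @ P.T
        return C / C[2, 2]                # normalise so that C*[2,2] = 1

    # Toy pinhole camera at the origin looking down +z (assumed intrinsics).
    K = np.array([[500.0,   0.0, 320.0],
                  [  0.0, 500.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    Qd = ellipsoid_dual_quadric(center=[0.0, 0.0, 5.0], axes=[1.0, 0.5, 0.5])
    Cd = project_dual_quadric(P, Qd)
    ellipse_center = Cd[:2, 2]            # centre of the projected ellipse
    ```

    An ellipsoid on the optical axis projects to an ellipse centred at the principal point; the paper's contribution is detecting image ellipses consistent with this projection, rather than ellipses merely inscribed in detection boxes.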