553 research outputs found

    Tracking a Line for an Automatic Piloting System of a Drone

    Nowadays, the FAA is discussing opening low-altitude airspace to registered flights, so Amazon-style drone delivery may become possible. Amazon is already developing its own drone for future delivery. In this project, we implement an automatic piloting system for the AR.Drone that tracks a line with the on-board camera. We use the AR.Drone for our implementation of the line tracker because several SDKs based on its Wi-Fi network are available and it carries two HD-quality cameras. For the SDK, we employ YADrone, developed at a university in Hamburg, Germany, and combine it with image-processing techniques so that the drone can track the line. Preliminary results show that the drone can successfully track various types of lines, such as straight lines, 90-degree turns, cranks, circles, and arbitrary curves. Using this automatic piloting system, a drone can carry an item from one room to another and could eventually deliver items outdoors.
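
    To make the tracking step concrete, here is a minimal sketch of how a steering offset could be extracted from a camera frame with OpenCV. This is an illustration only, not the project's actual YADrone-based implementation; the threshold value and the dark-line-on-light-floor assumption are ours.

```python
# A minimal sketch (not the project's YADrone implementation): threshold a
# camera frame, locate the line's centroid, and derive a steering offset.
from typing import Optional

import cv2
import numpy as np

def line_offset(frame_bgr: np.ndarray) -> Optional[float]:
    """Return the line's horizontal offset in [-1, 1], or None if no line."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Assumes a dark line on a light floor; invert so line pixels are white.
    _, mask = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] == 0:              # no line pixels detected
        return None
    cx = m["m10"] / m["m00"]       # centroid x-coordinate in pixels
    half_width = frame_bgr.shape[1] / 2.0
    return (cx - half_width) / half_width  # <0: steer left, >0: steer right
```

    The returned offset could then feed a simple proportional controller that sends roll or yaw commands to the drone over the SDK's Wi-Fi channel.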

    Gait based gender classification using Kinect sensor

    In this project, we propose a novel method to recognize human gender from gait. We collect walking-silhouette samples with the Microsoft Kinect sensor and extract gait features from the Gait Energy Image (GEI). The samples are divided into a training set and a testing set. We train an SVM classifier on the training set and evaluate it on the testing set, using a low-dimensional feature vector. The experimental results show that our method achieves an accuracy above 80%.
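
    A hedged sketch of the described pipeline follows, assuming silhouettes have already been segmented and aligned from the Kinect depth stream; the 0/1 label encoding and the 32-component PCA are illustrative choices, not necessarily the project's exact parameters.

```python
# Sketch of the GEI + SVM gender-classification pipeline described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gait_energy_image(silhouettes: np.ndarray) -> np.ndarray:
    """Average a (T, H, W) stack of aligned binary silhouettes into one GEI."""
    return silhouettes.astype(float).mean(axis=0)

def train_gender_classifier(geis: np.ndarray, labels: np.ndarray):
    """geis: (N, H, W) array of GEIs; labels: 0/1 gender (assumed encoding)."""
    features = geis.reshape(len(geis), -1)
    # PCA keeps the feature vector low-dimensional, as in the abstract.
    model = make_pipeline(PCA(n_components=32), SVC(kernel="linear"))
    model.fit(features, labels)
    return model
```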

    Analyzing Precipitation to Seek a Good Place for a Farm

    In this poster, we present how to employ the Hadoop system, including HDFS and MapReduce, to analyze precipitation data and find good places for farming. The precipitation data are collected from the National Oceanic and Atmospheric Administration (NOAA), and formulas from the Food and Agriculture Organization of the United Nations (FAO) help identify places with suitable precipitation for a specific plant. To address this, we employ the Hadoop Distributed File System (HDFS) and MapReduce programming, with banana as an example. Combining the weather data with the precipitation data, we can identify places that are good for growing bananas. The implemented system uses two MapReduce programs and Google Earth for visualization.
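
    The poster does not include the MapReduce code itself, so the following is a minimal sketch of how one such job could look with Hadoop Streaming in Python. The input layout (station,date,precip_mm) and the banana rainfall range (roughly 1200-2200 mm/year, loosely based on FAO guidance) are assumptions for illustration.

```python
#!/usr/bin/env python3
# job.py -- sketch of one MapReduce program, runnable via Hadoop Streaming.
import sys

BANANA_MM_MIN, BANANA_MM_MAX = 1200.0, 2200.0  # assumed suitable annual range

def mapper():
    for line in sys.stdin:
        parts = line.rstrip("\n").split(",")
        if len(parts) < 3:
            continue
        station, date, mm = parts[0], parts[1], parts[2]
        # Key on (station, year) so the reducer can total annual rainfall.
        print(f"{station},{date[:4]}\t{mm}")

def reducer():
    key, total = None, 0.0

    def emit(k, t):
        ok = BANANA_MM_MIN <= t <= BANANA_MM_MAX
        print(f"{k}\t{t:.1f}\t{'SUITABLE' if ok else 'UNSUITABLE'}")

    for line in sys.stdin:
        k, v = line.rstrip("\n").split("\t")
        if k != key and key is not None:
            emit(key, total)
            total = 0.0
        key = k
        total += float(v)
    if key is not None:
        emit(key, total)

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

    Locally the job can be tested with a pipe that mimics the Hadoop shuffle: cat noaa.csv | python3 job.py map | sort | python3 job.py reduce.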

    Look, Cast and Mold: Learning 3D Shape Manifold from Single-view Synthetic Data

    Inferring the stereo structure of objects in the real world is a challenging yet practical task. Equipping deep models with this ability usually requires abundant 3D supervision, which is hard to acquire. A promising alternative is to benefit from synthetic data, where pairwise ground truth is easy to access. Nevertheless, the domain gaps are nontrivial given the variations in texture, shape, and context. To overcome these difficulties, we propose a Visio-Perceptual Adaptive Network for single-view 3D reconstruction, dubbed VPAN. To generalize the model to real scenarios, we propose to fulfill several aspects: (1) Look: visually incorporate spatial structure from the single view to enhance the expressiveness of the representation; (2) Cast: perceptually align the 2D image features to the 3D shape priors with cross-modal semantic contrastive mapping; (3) Mold: reconstruct the stereo shape of the target by transforming embeddings into the desired manifold. Extensive experiments on several benchmarks demonstrate the effectiveness and robustness of the proposed method in learning the 3D shape manifold from synthetic data via a single view. The proposed method outperforms the state of the art on the Pix3D dataset with IoU 0.292 and CD 0.108, and reaches IoU 0.329 and CD 0.104 on Pascal 3D+.
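
    The "Cast" step suggests a cross-modal contrastive objective. Below is a minimal InfoNCE-style sketch of aligning 2D image embeddings with 3D shape-prior embeddings in PyTorch; VPAN's actual loss, encoders, and temperature may differ, so treat this as an illustration of the general technique.

```python
# Sketch of cross-modal contrastive alignment in the spirit of "Cast";
# not the paper's exact formulation.
import torch
import torch.nn.functional as F

def contrastive_align(img_emb: torch.Tensor,
                      shape_emb: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE loss pulling each image embedding toward its paired 3D
    shape-prior embedding and away from the other pairs in the batch."""
    img = F.normalize(img_emb, dim=-1)      # (B, D)
    shp = F.normalize(shape_emb, dim=-1)    # (B, D)
    logits = img @ shp.t() / temperature    # (B, B) similarity matrix
    targets = torch.arange(len(img), device=img.device)
    # Symmetric loss: image-to-shape and shape-to-image directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```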

    Entangled View-Epipolar Information Aggregation for Generalizable Neural Radiance Fields

    Generalizable NeRF can directly synthesize novel views across new scenes, eliminating the need for the scene-specific retraining of vanilla NeRF. A critical enabling factor in these approaches is the extraction of a generalizable 3D representation by aggregating source-view features. In this paper, we propose an Entangled View-Epipolar Information Aggregation method, dubbed EVE-NeRF. Different from existing methods that consider cross-view and along-epipolar information independently, EVE-NeRF conducts view-epipolar feature aggregation in an entangled manner by injecting scene-invariant appearance-continuity and geometry-consistency priors into the aggregation process. Our approach effectively mitigates the potential lack of inherent geometric and appearance constraints resulting from one-dimensional interactions, further boosting the generalizability of the 3D representation. EVE-NeRF attains state-of-the-art performance across various evaluation scenarios. Extensive experiments demonstrate that, compared to prevailing single-dimensional aggregation, the entangled network excels in the accuracy of 3D scene geometry and appearance reconstruction. Our code is publicly available at https://github.com/tatakai1/EVENeRF.
    Comment: Accepted by CVPR-2024
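
    As an illustration of combining the two aggregation dimensions, here is a heavily simplified sketch that alternates attention across source views and along epipolar samples. EVE-NeRF's actual entangled network additionally injects the appearance and geometry priors and couples the two dimensions more tightly, so this is only a shape-level outline under our own assumptions.

```python
# Simplified view/epipolar aggregation sketch; not EVE-NeRF's architecture.
import torch
import torch.nn as nn

class ViewEpipolarAggregator(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.view_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.epi_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (rays R, samples S, views V, dim D) source-view features.
        R, S, V, D = feats.shape
        x = feats.reshape(R * S, V, D)               # attend across views
        x, _ = self.view_attn(x, x, x)
        x = x.reshape(R, S, V, D).transpose(1, 2).reshape(R * V, S, D)
        x, _ = self.epi_attn(x, x, x)                # attend along epipolar samples
        return x.reshape(R, V, S, D).transpose(1, 2)  # back to (R, S, V, D)
```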