
    Use of Time Varying Dynamics in Neural Network to Solve Multi-Target Classification

    Several types of solutions exist for multiple-target tracking. These techniques are computation-intensive and in some cases very difficult to operate online. The authors report on a backpropagation neural network that has been successfully used to identify multiple moving targets, using kinematic data (time, range, range-rate and azimuth angle) from sensors to train the network. Preliminary results from simulated scenarios show that neural networks are capable of learning target identification for three targets during the time period used for training and a time period shortly after. This effective classification period can be extended by using the networks in coordination with smart logic systems.
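    A minimal sketch of the idea described above: a small feed-forward (backpropagation) network classifying targets from per-report kinematic features (time, range, range-rate, azimuth). The network size, synthetic data and training setup below are illustrative assumptions, not the authors' configuration.

```python
# Sketch: feed-forward classifier on kinematic features -> target identity.
# All dynamics below are fabricated for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_target = 500

def simulate_target(target_id, n):
    t = rng.uniform(0.0, 60.0, n)                                   # time [s]
    range_rate = -5.0 - 2.0 * target_id + rng.normal(0, 0.5, n)     # range-rate [m/s]
    range_km = 40.0 + range_rate * t / 1000.0 + rng.normal(0, 0.2, n)  # range [km]
    azimuth = 0.5 * target_id + 0.01 * t + rng.normal(0, 0.02, n)   # azimuth [rad]
    X = np.column_stack([t, range_km, range_rate, azimuth])
    y = np.full(n, target_id)
    return X, y

X, y = map(np.concatenate, zip(*[simulate_target(k, n_per_target) for k in range(3)]))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```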

    Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection

    Selective weeding is one of the key challenges in the field of agricultural robotics. To accomplish this task, a farm robot should be able to accurately detect plants and to distinguish between crop and weeds. Most of the promising state-of-the-art approaches make use of appearance-based models trained on large annotated datasets. Unfortunately, creating large agricultural datasets with pixel-level annotations is an extremely time-consuming task, which effectively penalizes the usage of data-driven techniques. In this paper, we face this problem by proposing a novel and effective approach that aims to dramatically minimize the human intervention needed to train the detection and classification algorithms. The idea is to procedurally generate large synthetic training datasets by randomizing the key features of the target environment (i.e., crop and weed species, type of soil, light conditions). More specifically, by tuning these model parameters and exploiting a few real-world textures, it is possible to render a large number of realistic views of an artificial agricultural scenario with no effort. The generated data can be directly used to train the model or to supplement real-world images. We validate the proposed methodology by using as a testbed a modern deep learning based image segmentation architecture. We compare the classification results obtained using both real and synthetic images as training data. The reported results confirm the effectiveness and the potential of our approach. Comment: To appear in IEEE/RSJ IROS 201
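    A toy sketch of the domain-randomization idea described in the abstract: randomize scene parameters (soil colour, lighting, plant count and species) and emit an image together with its pixel-level crop/weed mask. The paper renders a 3D scenario with real-world textures; the flat compositor below is only an assumed stand-in to show where the "free" annotations come from.

```python
# Toy model-based dataset generation: randomized parameters -> (image, mask),
# with labels 0 = soil, 1 = crop, 2 = weed. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
H = W = 256

def random_scene():
    soil = rng.uniform(0.2, 0.5, 3)            # random soil colour
    light = rng.uniform(0.7, 1.3)              # global illumination factor
    image = np.ones((H, W, 3)) * soil
    mask = np.zeros((H, W), dtype=np.uint8)

    for _ in range(rng.integers(5, 15)):       # random number of plants
        label = rng.choice([1, 2])             # 1 = crop, 2 = weed
        cy, cx = rng.integers(0, H), rng.integers(0, W)
        radius = rng.integers(5, 25)
        colour = np.array([0.1, 0.6, 0.1]) if label == 1 else np.array([0.3, 0.5, 0.1])

        yy, xx = np.ogrid[:H, :W]
        blob = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        image[blob] = colour + rng.normal(0, 0.03, 3)
        mask[blob] = label

    image = np.clip(image * light + rng.normal(0, 0.01, image.shape), 0, 1)
    return image, mask

# Each synthetic pair carries its annotation by construction and can train a
# segmentation network directly or supplement real annotated images.
dataset = [random_scene() for _ in range(100)]
```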

    Unsupervised Learning of Complex Articulated Kinematic Structures combining Motion and Skeleton Information

    In this paper we present a novel framework for unsupervised kinematic structure learning of complex articulated objects from a single-view image sequence. In contrast to prior motion-information-based methods, which estimate relatively simple articulations, our method can generate arbitrarily complex kinematic structures with skeletal topology through a successive iterative merge process. The iterative merge process is guided by a skeleton distance function, which is generated by a novel object boundary generation method from sparse points. Our main contributions can be summarised as follows: (i) unsupervised complex articulated kinematic structure learning combining motion and skeleton information; (ii) an iterative fine-to-coarse merging strategy for adaptive motion segmentation and structure smoothing; (iii) skeleton estimation from sparse feature points; (iv) a new highly articulated object dataset containing multi-stage complexity with ground truth. Our experiments show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
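    A compact sketch of the fine-to-coarse merging idea: start from an over-segmentation of tracked feature-point trajectories, then repeatedly merge the pair of segments with the smallest combined motion + skeleton distance until no pair falls below a threshold. The distance definitions below are simple placeholders, not the paper's exact motion affinity or skeleton distance function.

```python
# Iterative fine-to-coarse merging of trajectory segments (assumed stand-in
# distances). Each segment is an array of shape (n_points, n_frames, 2).
import numpy as np

def motion_distance(traj_a, traj_b):
    """Mean difference between the average frame-to-frame displacements."""
    return np.linalg.norm(np.diff(traj_a.mean(axis=0), axis=0)
                          - np.diff(traj_b.mean(axis=0), axis=0), axis=1).mean()

def skeleton_distance(traj_a, traj_b):
    """Spatial gap between segment centroids in the first frame (placeholder
    for a distance measured along an estimated skeleton)."""
    return np.linalg.norm(traj_a[:, 0].mean(axis=0) - traj_b[:, 0].mean(axis=0))

def iterative_merge(segments, threshold=1.0, alpha=0.5):
    segments = list(segments)
    while len(segments) > 1:
        pairs = [(i, j) for i in range(len(segments)) for j in range(i + 1, len(segments))]
        dists = [alpha * motion_distance(segments[i], segments[j])
                 + (1 - alpha) * skeleton_distance(segments[i], segments[j])
                 for i, j in pairs]
        k = int(np.argmin(dists))
        if dists[k] > threshold:
            break                      # no pair is similar enough: stop merging
        i, j = pairs[k]
        merged = np.concatenate([segments[i], segments[j]], axis=0)
        segments = [s for idx, s in enumerate(segments) if idx not in (i, j)] + [merged]
    return segments
```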

    Double-peaked Narrow Emission-line Galaxies in LAMOST Survey

    We outline a full-scale search for galaxies exhibiting double-peaked profiles of prominent narrow emission lines, motivated by the prospect of finding objects related to merging galaxies, and even dual active galactic nucleus candidates as a by-product, from the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) Data Release 4. We assemble a large sample of 325 candidates with double-peaked or strongly asymmetric narrow emission lines; 33 of these objects show optically resolved dual-cored structures, close companions or signs of recent interaction in the Sloan Digital Sky Survey images. One candidate from LAMOST (J074810.95+281349.2) is also highlighted here based on the kinematic and spatial decompositions of its double-peaked narrow emission lines, with analysis from the cross-referenced Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey datacube. MaNGA enables us to constrain the origin of the double peaks for these sources, and from the IFU data we infer that the most likely origin of the double-peaked profiles of LAMOST J074810.95+281349.2 is a `Rotation Dominated + Disturbance' structure. Comment: 13 pages, 9 figures, accepted by MNRA
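    A minimal sketch of how a double-peaked narrow line can be flagged: fit both a single-Gaussian and a double-Gaussian model to the continuum-subtracted line profile and keep the candidate when the double model fits markedly better and the peaks are well separated. The thresholds and the synthetic spectrum below are illustrative assumptions; the paper's selection criteria are more involved.

```python
# Flag a double-peaked narrow emission line via single vs. double Gaussian fits.
import numpy as np
from scipy.optimize import curve_fit

def gauss1(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def gauss2(x, a1, mu1, s1, a2, mu2, s2):
    return gauss1(x, a1, mu1, s1) + gauss1(x, a2, mu2, s2)

# Fabricated [O III]-like profile with two velocity components plus noise.
rng = np.random.default_rng(1)
wave = np.linspace(4990.0, 5025.0, 200)                     # wavelength [Angstrom]
flux = gauss2(wave, 1.0, 5004.0, 1.5, 0.8, 5010.0, 1.5) + rng.normal(0, 0.03, wave.size)

p1, _ = curve_fit(gauss1, wave, flux, p0=[1.0, 5007.0, 2.0])
p2, _ = curve_fit(gauss2, wave, flux, p0=[1.0, 5004.0, 2.0, 1.0, 5010.0, 2.0])

chi2_1 = np.sum((flux - gauss1(wave, *p1)) ** 2)
chi2_2 = np.sum((flux - gauss2(wave, *p2)) ** 2)

# Require a large fit improvement and peaks separated by more than their widths.
separation = abs(p2[4] - p2[1])
double_peaked = chi2_2 < 0.5 * chi2_1 and separation > max(p2[2], p2[5])
print("double-peaked candidate:", double_peaked)
```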

    Cascaded 3D Full-body Pose Regression from Single Depth Image at 100 FPS

    Real-time live applications in virtual reality increasingly depend on capturing and retargeting 3D human pose, yet it is still challenging to estimate accurate 3D pose from consumer imaging devices such as depth cameras. This paper presents a novel cascaded 3D full-body pose regression method that estimates accurate pose from a single depth image at 100 fps. The key idea is to train cascaded regressors based on the gradient boosting algorithm from a pre-recorded human motion capture database. By incorporating a hierarchical kinematic model of human pose into the learning procedure, we can directly estimate accurate 3D joint angles instead of joint positions. The biggest advantage of this model is that bone lengths are preserved throughout the 3D pose estimation procedure, which leads to more effective features and higher pose estimation accuracy. Our method can also be used as an initialization step when combined with tracking methods. We demonstrate the power of our method on a wide range of synthesized human motion data from the CMU mocap database, the Human3.6M dataset, and real human movements captured in real time. In comparison against previous 3D pose estimation methods and commercial systems such as Kinect 2017, we achieve state-of-the-art accuracy.
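    An illustrative sketch of the cascaded-regression idea: each stage predicts a correction to the current pose estimate from features that depend on that estimate, so later stages refine earlier ones. The paper's depth-image features, hierarchical kinematic model and bone-length preservation are not reproduced; the toy problem below simply chains gradient-boosted regressors on a fabricated joint angle.

```python
# Cascaded residual regression with gradient-boosted trees (toy 1-D problem).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-1.0, 1.0, (n, 5))                  # stand-in for depth features
theta = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2        # fabricated joint angle [rad]

n_stages = 3
stages = []
estimate = np.zeros(n)                              # start from a neutral pose

for _ in range(n_stages):
    # Stage input: raw features plus the current estimate (the cascade coupling).
    stage_input = np.column_stack([X, estimate])
    reg = GradientBoostingRegressor(n_estimators=100, max_depth=3)
    reg.fit(stage_input, theta - estimate)          # learn the residual correction
    estimate = estimate + reg.predict(stage_input)
    stages.append(reg)

def predict(x_new, stages):
    est = np.zeros(len(x_new))
    for reg in stages:
        est = est + reg.predict(np.column_stack([x_new, est]))
    return est

print("training MAE after final stage:", np.abs(predict(X, stages) - theta).mean())
```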