
    Advanced Feedback Linearization Control for Tiltrotor UAVs: Gait Plan, Controller Design, and Stability Analysis

    Three challenges, however, can hinder the application of Feedback Linearization: over-intensive control signals, a singular decoupling matrix, and saturation. Encountering any of these three issues can undermine the stability proof. To address these challenges, this research first proposed a drone gait plan. The gait plan was originally developed to address control problems in quadruped (four-legged) robots; applying this approach alongside Feedback Linearization enhanced the quality of the control signals. We then proposed the concept of unacceptable attitude curves, attitudes that the tiltrotor is not allowed to traverse, and subsequently established the Two Color Map Theorem to enlarge the attitude region supported by the tiltrotor. These theories were applied to the tiltrotor tracking problem with different references, and notable improvements in the control signals were observed in the tiltrotor simulator. Finally, we explored the control theory: the stability proof of a novel mobile robot (tilt vehicle) stabilized by Feedback Linearization with saturation. Instead of adopting the over-complicated tiltrotor model, we designed a conceptual mobile robot (tilt-car) to analyze stability. Stability in the sense of Lyapunov was proven, for the first time, for a mobile robot (tilt vehicle) controlled by Feedback Linearization with saturation. The successful tracking results with well-behaved control signals in the tiltrotor simulator demonstrate the advantages of our control method, and the Lyapunov candidate and the tracking results in the mobile robot (tilt-car) simulator confirm our deductions in the stability proof. These results show that the three challenges in Feedback Linearization are resolved, to some extent. Comment: Doctoral Thesis at The University of Tokyo
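    The abstract centers on Feedback Linearization under input saturation. As a loose illustration of that idea (not the thesis's tiltrotor or tilt-car model), the sketch below applies feedback linearization with a saturated input to a toy nonlinear second-order plant; the plant, gains, and limits are all assumed for illustration.

    ```python
    import numpy as np

    # Minimal sketch of feedback linearization with input saturation on a toy
    # nonlinear plant:  x_ddot = a*sin(x) + b*u  (NOT the thesis's tiltrotor model).
    # The plant parameters, gains, and saturation limit are illustrative assumptions.

    a, b = 2.0, 1.0          # assumed plant parameters
    k1, k2 = 4.0, 4.0        # assumed tracking gains
    u_max = 3.0              # assumed saturation limit

    def controller(x, x_dot, r, r_dot, r_ddot):
        # Desired linear error dynamics: e_ddot + k2*e_dot + k1*e = 0
        e, e_dot = x - r, x_dot - r_dot
        v = r_ddot - k1 * e - k2 * e_dot          # virtual input
        u = (v - a * np.sin(x)) / b               # cancel the nonlinearity
        return np.clip(u, -u_max, u_max)          # saturation (the challenge studied)

    # Simple Euler simulation tracking a sinusoidal reference
    dt, T = 0.001, 10.0
    x, x_dot = 0.5, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        r, r_dot, r_ddot = np.sin(t), np.cos(t), -np.sin(t)
        u = controller(x, x_dot, r, r_dot, r_ddot)
        x_ddot = a * np.sin(x) + b * u
        x, x_dot = x + dt * x_dot, x_dot + dt * x_ddot

    print(f"final tracking error: {x - np.sin(T):.4f}")
    ```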

    Inner and Inter Label Propagation: Salient Object Detection in the Wild

    In this paper, we propose a novel label-propagation-based method for saliency detection. A key observation is that saliency in an image can be estimated by propagating labels extracted from the most certain background and object regions. For most natural images, some boundary superpixels serve as background labels, and the saliency of the other superpixels is determined by ranking their similarities to the boundary labels through an inner propagation scheme. For images of complex scenes, we further deploy a 3-cue center-biased objectness measure to pick out and propagate foreground labels. A co-transduction algorithm is devised to fuse both boundary and objectness labels through an inter propagation scheme. A compactness criterion decides whether the incorporation of objectness labels is necessary, which greatly enhances computational efficiency. Results on five benchmark datasets with pixel-wise accurate annotations show that the proposed method achieves superior performance compared with the latest state-of-the-art methods in terms of different evaluation metrics. Comment: The full version of the TIP 2015 publication
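    As a rough sketch of the "inner propagation" idea, boundary superpixels can seed a graph-based ranking: labels diffuse from boundary seeds over a superpixel affinity graph, and superpixels that end up dissimilar to the boundary are scored as salient. The manifold-ranking-style formulation, features, and parameters below are illustrative assumptions rather than the paper's exact affinity or co-transduction scheme.

    ```python
    import numpy as np

    # Illustrative "inner propagation" sketch: rank superpixels by similarity to
    # boundary (background) seeds via manifold ranking. Features, sigma, and alpha
    # are assumptions; the paper's exact formulation is not reproduced here.

    def inner_propagation(features, boundary_idx, sigma=0.1, alpha=0.99):
        n = len(features)
        # Gaussian affinity between superpixel feature vectors
        d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1) + 1e-12))
        S = D_inv_sqrt @ W @ D_inv_sqrt                  # normalized affinity
        y = np.zeros(n)
        y[boundary_idx] = 1.0                            # background seed labels
        f = np.linalg.solve(np.eye(n) - alpha * S, y)    # propagate seed labels
        saliency = 1.0 - (f - f.min()) / (f.max() - f.min() + 1e-12)
        return saliency                                  # high = dissimilar to boundary

    # Toy usage with random "superpixel" features; the first 10 act as boundary seeds
    feats = np.random.rand(50, 3)
    print(inner_propagation(feats, boundary_idx=np.arange(10))[:5])
    ```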

    Skeleton Key: Image Captioning by Skeleton-Attribute Decomposition

    Recently, there has been a lot of interest in automatically generating descriptions for an image. Most existing language-model-based approaches for this task learn to generate an image description word by word in its original word order. However, for humans it is more natural to locate the objects and their relationships first, and then elaborate on each object, describing its notable attributes. We present a coarse-to-fine method that decomposes the original image description into a skeleton sentence and its attributes, and generates the skeleton sentence and attribute phrases separately. Through this decomposition, our method can generate more accurate and novel descriptions than the previous state-of-the-art. Experimental results on the MS-COCO and larger-scale Stock3M datasets show that our algorithm yields consistent improvements across different evaluation metrics, especially on the SPICE metric, which has a much higher correlation with human ratings than the conventional metrics. Furthermore, our algorithm can generate descriptions of varied length, benefiting from the separate control of the skeleton and attributes. This enables image description generation that better accommodates user preferences. Comment: Accepted by CVPR 2017
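    The core idea is to split a caption into a skeleton sentence plus per-object attribute phrases and generate the two parts separately. The toy snippet below only illustrates the decomposition side with a hand-made modifier list; the actual method derives the decomposition from parses and uses learned decoders for both parts, so everything here is an assumed simplification.

    ```python
    # Toy sketch of skeleton/attribute decomposition: strip modifier words from a
    # caption to form a "skeleton sentence", keeping the removed modifiers as
    # per-object attribute phrases. The modifier list is purely an assumption.

    ATTRIBUTE_WORDS = {"young", "red", "wooden", "small", "fluffy"}   # assumed

    def decompose(caption):
        skeleton, attributes, pending = [], {}, []
        for word in caption.lower().split():
            if word in ATTRIBUTE_WORDS:
                pending.append(word)          # hold modifiers for the next head word
            else:
                skeleton.append(word)
                if pending:
                    attributes[word] = pending
                    pending = []
        return " ".join(skeleton), attributes

    skel, attrs = decompose("a young girl sits on a small wooden bench")
    print(skel)    # -> "a girl sits on a bench"
    print(attrs)   # -> {'girl': ['young'], 'bench': ['small', 'wooden']}
    ```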

    Joint Object and Part Segmentation using Deep Learned Potentials

    Segmenting semantic objects from images and parsing them into their respective semantic parts are fundamental steps towards detailed object understanding in computer vision. In this paper, we propose a joint solution that tackles semantic object and part segmentation simultaneously, in which higher object-level context is provided to guide part segmentation, and more detailed part-level localization is utilized to refine object segmentation. Specifically, we first introduce the concept of semantic compositional parts (SCP), in which similar semantic parts are grouped and shared among different objects. A two-channel fully convolutional network (FCN) is then trained to provide the SCP and object potentials at each pixel. At the same time, a compact set of segments can also be obtained from the SCP predictions of the network. Given the potentials and the generated segments, we finally construct an efficient fully connected conditional random field (FCRF) to jointly predict the final object and part labels while exploring long-range context. Extensive evaluation on three different datasets shows that our approach can mutually enhance the performance of object and part segmentation, and outperforms the current state-of-the-art on both tasks.
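    To give a flavor of how object-level context can constrain part labels, the sketch below fuses per-pixel object and part potentials with an object-part compatibility table and takes a joint per-pixel argmax. The shapes, class counts, compatibility table, and additive fusion are assumptions; the paper instead couples the two FCN channels through a fully connected CRF.

    ```python
    import numpy as np

    # Minimal sketch of fusing per-pixel object and part potentials with an
    # object-part compatibility table (e.g. a "head" part only belongs to certain
    # objects). All numbers below are illustrative assumptions.

    n_obj, n_part, H, W = 3, 5, 4, 4
    obj_pot = np.random.rand(H, W, n_obj)       # object unaries (stand-in for one FCN channel)
    part_pot = np.random.rand(H, W, n_part)     # part unaries (stand-in for the other channel)
    compat = np.array([[1, 1, 0, 0, 0],         # object 0 may contain parts 0, 1
                       [0, 0, 1, 1, 0],         # object 1 may contain parts 2, 3
                       [1, 0, 0, 0, 1]],        # object 2 may contain parts 0, 4
                      dtype=bool)

    # Joint score for every (object, part) pair at each pixel; invalid pairs masked out
    joint = obj_pot[..., :, None] + part_pot[..., None, :]
    joint = np.where(compat[None, None], joint, -np.inf)

    flat_idx = joint.reshape(H, W, -1).argmax(axis=-1)          # best valid pair per pixel
    obj_label, part_label = np.unravel_index(flat_idx, (n_obj, n_part))
    print(obj_label.shape, part_label.shape)                    # (4, 4) (4, 4)
    ```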

    Unconstrained salient object detection via proposal subset optimization

    We aim at detecting salient objects in unconstrained images. In unconstrained images, the number of salient objects (if any) varies from image to image and is not given. We present a salient object detection system that directly outputs a compact set of detection windows, if any, for an input image. Our system leverages a convolutional neural network model to generate location proposals of salient objects. Location proposals tend to be highly overlapping and noisy. Based on the maximum a posteriori principle, we propose a novel subset optimization framework to generate a compact set of detection windows out of noisy proposals. In experiments, we show that our subset optimization formulation greatly enhances the performance of our system, and our system attains a 16-34% relative improvement in Average Precision compared with the state-of-the-art on three challenging salient object datasets. http://openaccess.thecvf.com/content_cvpr_2016/html/Zhang_Unconstrained_Salient_Object_CVPR_2016_paper.html Published version
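    A minimal way to picture the subset-selection step is a greedy rule that keeps a proposal only if its score outweighs a penalty for overlapping windows already kept, and that may return an empty set when no proposal is confident. The greedy rule, IoU penalty, and thresholds below are simplifying assumptions, not the paper's MAP objective.

    ```python
    import numpy as np

    # Greedy sketch of picking a compact subset of salient-object windows from
    # overlapping, scored proposals. The score-minus-overlap rule and thresholds
    # are assumptions standing in for the paper's MAP subset optimization.

    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-12)

    def select_windows(boxes, scores, overlap_penalty=1.0, min_gain=0.3):
        order = np.argsort(scores)[::-1]
        chosen = []
        for i in order:
            # marginal gain: proposal score minus penalty for overlap with chosen boxes
            gain = scores[i] - overlap_penalty * sum(iou(boxes[i], boxes[j]) for j in chosen)
            if gain > min_gain:
                chosen.append(int(i))
        return chosen       # may be empty if no proposal is confident enough

    boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [80, 80, 120, 130]])
    scores = np.array([0.9, 0.85, 0.7])
    print(select_windows(boxes, scores))   # -> [0, 2]
    ```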