
    Integration of parent-child unmanned air vehicle focusing on control system development

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2001. Includes bibliographical references (p. 115). As part of the PCUAV (Parent Child Unmanned Air Vehicle) project, the author contributed to three areas. First, a study of vehicle integration concepts between a larger UAV and two smaller UAVs is described; various integration concepts were considered and compared in terms of performance and stability. The project also attempted mid-air reintegration of the larger and smaller UAVs, and the modeling, controller design, and simulation procedure for this reintegration is described next. Finally, a vision-based positioning system was developed to serve as the three-axis position sensor for the reintegration; the procedure and the lessons learned during this development are also presented. by Sanghyuk Park. S.M.

    Avionics and control system development for mid-air rendezvous of two unmanned aerial vehicles

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2004. Includes bibliographical references (p. 177-181). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. A flight control system was developed to achieve mid-air rendezvous of two unmanned aerial vehicles (UAVs) as part of the Parent Child Unmanned Aerial Vehicle (PCUAV) project at MIT and the Draper Laboratory. A lateral guidance logic was developed for tightly tracking a desired flight path. The guidance logic is derived from geometric and kinematic properties and has been demonstrated to work better than the conventional aircraft guidance method in waypoint navigation. A simple, low-order attitude estimator was also developed that combines aircraft kinematics, GPS, and low-quality rate gyros; simulations show that its performance matches that of more advanced, complex methods when the aircraft bank angle is relatively small (<40 degrees). An end-game control strategy for the final phase of the rendezvous was developed as well, using proportional navigation guidance in conjunction with an optical sensor, and the associated miss distance was analyzed with respect to wind effects and initial conditions. A series of flight tests was performed using two UAVs built as part of the project. Each aircraft was demonstrated to follow a desired flight path within a position accuracy of 2 meters (based on sensor data) while tracking the airspeed command to within 1 m/s.
    At the time of writing, the developed control system had been demonstrated to bring the two UAVs from arbitrary initial positions into tight formation flight, in which one vehicle trails the other at a commanded separation of 12 meters while keeping the relative position error within 2 meters, both horizontally and vertically, for 85% of the flight time. by Sanghyuk Park. Ph.D.
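    The lateral guidance logic summarized in this abstract can be illustrated with a minimal sketch. Park's published form of the logic commands a lateral acceleration toward a reference point on the desired path a fixed distance L1 ahead of the vehicle; the law below (a = 2 V^2 sin(eta) / L1, where eta is the angle between the velocity vector and the line of sight to the reference point) is that standard form. The numbers used here are illustrative only, not values from the thesis.

```python
import math

def lateral_accel_cmd(v: float, l1: float, eta: float) -> float:
    """Lateral acceleration command (m/s^2) toward a reference point
    at distance l1 (m) along the desired path.

    v   : ground speed in m/s
    eta : angle (rad) between the velocity vector and the line of
          sight to the reference point
    """
    return 2.0 * v ** 2 / l1 * math.sin(eta)

# Illustrative case: flying straight at the reference point commands
# zero lateral acceleration; a small line-of-sight angle commands a
# proportionally small correction.
a_on_path = lateral_accel_cmd(20.0, 50.0, 0.0)
a_offset = lateral_accel_cmd(20.0, 50.0, 0.1)
```

For small eta this behaves like a proportional controller on cross-track error, while for large eta it commands a hard turn back toward the path, which is what makes the logic suitable for tight tracking.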

    SeiT: Storage-Efficient Vision Training with Tokens Using 1% of Pixel Storage

    Full text link
    We need billion-scale images to achieve more generalizable and ground-breaking vision models, as well as massive dataset storage to ship the images (e.g., the LAION-4B dataset needs 240TB of storage space). However, it has become challenging to handle ever-growing dataset storage with limited storage infrastructure. A number of storage-efficient training methods have been proposed to tackle the problem, but they are rarely scalable or suffer from severe performance degradation. In this paper, we propose a storage-efficient training strategy for vision classifiers on large-scale datasets (e.g., ImageNet) that uses only 1024 tokens per instance without the raw pixels; our token storage needs <1% of the original JPEG-compressed raw pixels. We also propose token augmentations and a Stem-adaptor module that let our approach use the same architecture as pixel-based approaches, with only minimal modifications to the stem layer and carefully tuned optimization settings. Our experimental results on ImageNet-1k show that our method outperforms other storage-efficient training methods by a large margin. We further show the effectiveness of our method in other practical scenarios: storage-efficient pre-training and continual learning. Code is available at https://github.com/naver-ai/seit
    Comment: ICCV 2023; first two authors contributed equally; code url: https://github.com/naver-ai/seit; 17 pages, 1.2M
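    The "<1% of JPEG storage" claim in this abstract can be checked with back-of-envelope arithmetic. The sketch below assumes each of the 1024 tokens is an index into a discrete codebook and takes roughly 10 bits, and assumes a typical ImageNet JPEG of around 150 KB; both figures are assumptions for illustration, not numbers from the paper.

```python
def token_storage_bytes(n_tokens: int = 1024, bits_per_token: int = 10) -> float:
    """Storage needed per image when each image is stored as discrete
    token indices (assumed 10-bit indices into a codebook)."""
    return n_tokens * bits_per_token / 8

AVG_JPEG_BYTES = 150_000  # rough size of one ImageNet JPEG (assumption)

per_image = token_storage_bytes()       # 1280 bytes per image
ratio = per_image / AVG_JPEG_BYTES      # fraction of JPEG storage used
```

Under these assumptions a tokenized image takes about 1.3 KB, under 1% of the JPEG size, which is consistent with the storage claim in the abstract.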

    Few-shot Font Generation with Localized Style Representations and Factorization

    Full text link
    Automatic few-shot font generation is a practical and widely studied problem because manual design is expensive and sensitive to the expertise of designers. Existing few-shot font generation methods aim to disentangle style and content elements from a few reference glyphs, and mainly focus on a universal style representation for each font style. However, such an approach limits the model's ability to represent diverse local styles, making it unsuitable for the most complex writing systems, e.g., Chinese, whose characters consist of a varying number of components (often called "radicals") with highly complex structure. In this paper, we propose a novel font generation method that learns localized styles, namely component-wise style representations, instead of universal styles. The proposed style representations enable us to synthesize complex local details in text designs. However, learning component-wise styles solely from reference glyphs is infeasible in the few-shot font generation scenario when a target script has a large number of components, e.g., over 200 for Chinese. To reduce the number of required reference glyphs, we simplify component-wise styles as a product of a component factor and a style factor, inspired by low-rank matrix factorization. Thanks to the combination of a strong representation and a compact factorization strategy, our method shows remarkably better few-shot font generation results (with only 8 reference glyph images) than other state-of-the-art methods, without utilizing strong locality supervision, e.g., the location of each component, skeletons, or strokes. The source code is available at https://github.com/clovaai/lffont.
    Comment: Accepted at AAAI 2021, 12 pages, 11 figures, the first two authors contributed equally
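    The low-rank factorization idea in this abstract can be sketched in a few lines. Instead of storing one embedding per (component, style) pair, the table is factored into a per-component factor and a per-style factor sharing a small rank dimension. All sizes below (200 components, 50 styles, embedding dim 128, rank 8) are hypothetical choices for illustration, not the paper's actual configuration.

```python
import numpy as np

n_components, n_styles, dim, rank = 200, 50, 128, 8  # hypothetical sizes

# Low-rank factors replacing a full (n_components x n_styles x dim) table:
component_factor = np.random.randn(n_components, rank, dim)  # per-component basis
style_factor = np.random.randn(n_styles, rank)               # per-style weights

def localized_style(component_id: int, style_id: int) -> np.ndarray:
    """Component-wise style embedding: the style's rank weights combine
    the component's rank basis vectors into one dim-sized vector."""
    return style_factor[style_id] @ component_factor[component_id]

# Parameter count: factors vs. the full table they replace.
factored = component_factor.size + style_factor.size   # 205,200
full_table = n_components * n_styles * dim             # 1,280,000
```

The factorization is what makes 8 reference glyphs sufficient: a new style only needs its small rank-sized weight vector to be estimated, while the component bases are shared across all styles.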