
    Investigation of Reduced-Order Modeling for Aircraft Stability and Control Prediction

    High-fidelity computational fluid dynamics tools offer the potential to approximate increments for ground-to-flight scaling effects, as well as to augment the dynamic damping derivative data for motion-based flight simulators. Unfortunately, the computational expense is currently prohibitive for populating a complete simulator database. This work investigates an existing surrogate-based, indicial response reduced-order model methodology as a means to efficiently augment a flight simulator database with high-fidelity nonlinear aerodynamic damping derivatives. Creation of the reduced-order model is based on the superposition integrals of the step response with the derivative of its corresponding input signal. Step responses are calculated using a computational grid motion approach that separates the effects of angle of attack and sideslip angle from angular rates, and rates from angle of attack and sideslip. It is demonstrated that the transients produced during the start of a forced-oscillation motion are captured by the reduced-order model to the level of fidelity of a comparable computational solution. Aerodynamic coefficients computed within minutes by the reduced-order model for an aircraft undergoing an 18-second half Lazy-8 maneuver and a 25-second Immelmann turn maneuver are compared with those from full computational flight solutions that required days to complete. Finally, a cost-benefit assessment is included that demonstrates a compelling advantage for this approach.
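    The superposition idea underlying an indicial-response model can be sketched numerically: the response to an arbitrary input is assembled from scaled, time-shifted copies of the step response, weighted by the input's increments (a discrete Duhamel sum). The sketch below is illustrative only, not the paper's surrogate-based ROM; the first-order step response A(t) = 1 − e^(−t) is a hypothetical stand-in for a CFD-computed step response.

    ```python
    import numpy as np

    def superpose(step_resp, u):
        """Discrete Duhamel superposition: build the response to an
        arbitrary input u(t) from the step response A(t), assuming the
        input starts from zero."""
        du = np.diff(u, prepend=0.0)            # increments of the input
        n = len(u)
        # Each increment du[k] launches a scaled copy of A shifted to t_k.
        return np.convolve(du, step_resp)[:n]

    # Example: first-order lag with step response A(t) = 1 - exp(-t).
    t = np.linspace(0.0, 10.0, 1001)
    A = 1.0 - np.exp(-t)
    u = np.ones_like(t)                         # unit step input
    C = superpose(A, u)                         # reproduces A itself
    ```

    With a unit step input, the superposition reduces to the step response itself, which makes the scheme easy to sanity-check before feeding it the derivative of a maneuver time history.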

    Sharing visual features for multiclass and multiview object detection

    We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity, and the (training-time) sample complexity, scales linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. The features selected jointly are closer to edges and generic features typical of many natural structures, rather than specific object parts. Such generic features generalize better and considerably reduce the computational cost of multi-class object detection.
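    The core step of joint boosting is selecting, each round, a single weak learner (a regression stump on one feature) together with the subset of classes that will share it; classes outside the subset receive a class-specific constant. The toy sketch below enumerates all class subsets exhaustively, which is only feasible for a handful of classes (the actual procedure uses a greedy subset search); the function name and data layout are hypothetical.

    ```python
    import itertools
    import numpy as np

    def best_shared_stump(X, Z, W):
        """One round of a toy joint-boosting search (gentle-boost style):
        find the (feature, threshold, class subset) whose shared
        regression stump minimises weighted squared error. Z and W are
        (n, C) arrays of +/-1 labels and per-class example weights."""
        n, d = X.shape
        C = Z.shape[1]
        best = (np.inf, None)
        for f in range(d):
            for theta in np.unique(X[:, f]):
                gt = X[:, f] > theta                     # stump split
                for r in range(1, C + 1):
                    for S in itertools.combinations(range(C), r):
                        S = list(S)
                        w, z = W[:, S], Z[:, S]
                        # Shared stump values: weighted label means on
                        # each side of the split, pooled over classes in S.
                        den_a, den_b = w[gt].sum(), w[~gt].sum()
                        a = (w * z)[gt].sum() / den_a if den_a > 0 else 0.0
                        b = (w * z)[~gt].sum() / den_b if den_b > 0 else 0.0
                        pred = np.where(gt, a, b)[:, None]
                        J = (w * (z - pred) ** 2).sum()
                        # Classes outside S get a constant prior k_c.
                        for c in set(range(C)) - set(S):
                            k = (W[:, c] * Z[:, c]).sum() / W[:, c].sum()
                            J += (W[:, c] * (Z[:, c] - k) ** 2).sum()
                        if J < best[0]:
                            best = (J, (f, theta, tuple(S), a, b))
        return best[1]
    ```

    When two classes are separable by the same feature and threshold, the search prefers the shared stump over per-class ones, which is the mechanism behind the sub-linear growth in total feature count.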

    Unsupervised Discovery of Parts, Structure, and Dynamics

    Humans easily recognize object parts and their hierarchical structure by watching how they move; they can then predict how each part moves in the future. In this paper, we propose a novel formulation that simultaneously learns a hierarchical, disentangled object representation and a dynamics model for object parts from unlabeled videos. Our Parts, Structure, and Dynamics (PSD) model learns to, first, recognize the object parts via a layered image representation; second, predict hierarchy via a structural descriptor that composes low-level concepts into a hierarchical structure; and third, model the system dynamics by predicting the future. Experiments on multiple real and synthetic datasets demonstrate that our PSD model works well on all three tasks: segmenting object parts, building their hierarchical structure, and capturing their motion distributions. Comment: ICLR 2019. The first two authors contributed equally to this work.

    Assessment of Executive Function Using a Series of Operant Conditioning Based Tasks in T1DM Rodents

    This study examined the impact of Type 1 Diabetes Mellitus (T1DM) on executive function using a series of operant conditioning based tasks in rats. Sprague Dawley rats were randomized to either non-diabetic (n = 12; 6 male) or diabetic (n = 14; 6 male) groups. Diabetes was induced using multiple low-dose streptozotocin injections. All diabetic rodents were insulin-treated using subcutaneous insulin pellet implants. At week 14 of the study, rats were placed on a food-restricted diet to induce 5-10% weight loss. Rodents were familiarized and tested on a series of tasks that required continuous adjustments to novel stimulus-reward paradigms in order to receive food rewards. No differences were observed between groups in the number of trials, or in the number and type of errors, made to successfully complete each task. Therefore, we report no differences in executive function, or more specifically in set-shifting abilities, between non-diabetic and diabetic rodents.

    Contextual models for object detection using boosted random fields

    We seek to both detect and segment objects in images. To exploit both local image data as well as contextual information, we introduce Boosted Random Fields (BRFs), which use Boosting to learn the graph structure and local evidence of a conditional random field (CRF). The graph structure is learned by assembling graph fragments in an additive model. The connections between individual pixels are not very informative, but by using dense graphs, we can pool information from large regions of the image; dense models also support efficient inference. We show how contextual information from other objects can improve detection performance, both in terms of accuracy and speed, by using a computational cascade. We apply our system to detect "stuff" and "things" in office and street scenes.

    Efficient Unsteady Model Estimation Using Computational and Experimental Data

    Improving aircraft simulations for pilot training in loss-of-control and stalled conditions is one goal of NASA research in the System Wide Safety Program. One part of this effort is to develop appropriate generic aerodynamic models that provide representative responses in simulation for a given class of aircraft. In this part of the flight envelope, nonlinear unsteady responses are often present and may require an extended aerodynamic model compared to that used in the conventional flight envelope. This preliminary study addresses two objectives: first, to obtain a representative model for a NASA generic aircraft at an unsteady condition in the flight envelope, and second, to evaluate the techniques involved. To meet these objectives, two different generic aircraft configurations are modeled using both experimental and analytical data. With these results, an initial assessment of the efficiency and quality of the tools and test techniques is made to develop guidance for analytical and experimental approaches to unsteady modeling.

    Computational Study of a Generic T-tail Transport

    This paper presents a computational study on the static and dynamic stability characteristics of a generic transport T-tail configuration under a NASA research program to improve stall models for civil transports. The NASA Tetrahedral Unstructured Software System (TetrUSS) was used to obtain both static and periodic dynamic solutions at low speed conditions for three Reynolds number conditions up to 60 deg angle of attack. The computational results are compared to experimental data. The dominant effects of Reynolds number for the static conditions were found to occur in the stall region. The pitch and roll damping coefficients compared well to experimental results up to 40 deg angle of attack, whereas the yaw damping coefficient agreed only up to 20 deg angle of attack.

    Context-Based Vision System for Place and Object Recognition

    While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. In this paper we present a context-based vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street), and to use that information to provide contextual priors for object recognition (e.g., table, chair, car, computer). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides real-time feedback to the user.
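    The idea of a low-dimensional global image representation can be illustrated with a gist-like descriptor: average gradient-orientation energy on a coarse spatial grid, yielding a short vector that summarizes the scene's overall layout rather than individual objects. This is a simplified, hypothetical stand-in for the paper's representation, not its actual implementation.

    ```python
    import numpy as np

    def gist_like(img, grid=4, n_orient=4):
        """Tiny sketch of a global scene descriptor: gradient-orientation
        energy pooled over a grid x grid spatial layout, L2-normalized.
        Returns a vector of length grid * grid * n_orient."""
        gy, gx = np.gradient(img.astype(float))      # row, column gradients
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)      # orientation in [0, pi)
        bins = np.minimum((ang / np.pi * n_orient).astype(int), n_orient - 1)
        H, W = img.shape
        ys = (np.arange(H) * grid) // H              # coarse cell indices
        xs = (np.arange(W) * grid) // W
        desc = np.zeros((grid, grid, n_orient))
        for i in range(H):
            for j in range(W):
                desc[ys[i], xs[j], bins[i, j]] += mag[i, j]
        desc = desc.ravel()
        return desc / (np.linalg.norm(desc) + 1e-8)
    ```

    A 64-dimensional vector of this kind is cheap enough to compare against stored location signatures in real time, which is what makes such global representations attractive for place recognition before any object detector runs.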