
    Generic hypersonic vehicle performance model

    An integrated computational model of a generic hypersonic vehicle was developed for the purpose of determining the vehicle's performance characteristics, which include the lift, drag, thrust, and moment acting on the vehicle at a specified altitude, flight condition, and vehicle configuration. The lift, drag, thrust, and moment are expressed in the body-fixed coordinate system and arise from both aerodynamic and propulsive sources. SCRAMjet engine performance characteristics, such as fuel flow rate, can also be determined. The vehicle is assumed to be a lifting body with a single aerodynamic control surface; the body shape and control surface location are arbitrary and must be defined. The aerodynamics are calculated using either 2-dimensional Newtonian or modified Newtonian theory together with approximate high-Mach-number Prandtl-Meyer expansion theory. Skin-friction drag is also accounted for: the skin-friction drag coefficient is a function of the freestream Mach number, with values taken from NASA Technical Memorandum 102610. The modeling of the vehicle's SCRAMjet engine is based on quasi-1-dimensional gas dynamics for the engine diffuser, nozzle, and combustor with heat addition. The engine has three variable inputs for control: the engine inlet diffuser area ratio, the total temperature rise through the combustor due to combustion of the fuel, and the engine internal expansion nozzle area ratio. The pressure distribution over the vehicle's lower aft body surface, which acts as an external nozzle, is calculated using a combination of quasi-1-dimensional gas dynamic theory and Newtonian or modified Newtonian theory. The exhaust plume shape is determined by matching the pressure inside the plume, calculated from the gas dynamic equations, with the freestream pressure, calculated from Newtonian or modified Newtonian theory. In this manner, the pressure distribution along the vehicle's afterbody expansion surface is determined. The aerodynamic modeling, the engine modeling, and the exhaust plume analysis are described in more detail. A description of the computer code used to perform these calculations is given, along with an input/output example. The computer code is available on a Macintosh floppy disk.
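
    The abstract does not spell out the panel-pressure formulas the code uses; as a rough sketch of the modified Newtonian surface pressure model it references, the following Python snippet computes the local pressure coefficient from the surface inclination angle. The function name and the shadowed-panel convention are assumptions, and the stagnation value comes from the standard Rayleigh pitot formula.

        import math

        def cp_modified_newtonian(delta_rad, mach_inf, gamma=1.4):
            """Surface pressure coefficient from modified Newtonian theory.

            delta_rad: local surface inclination to the freestream (radians).
            Panels shadowed from the flow (delta <= 0) are assigned Cp = 0.
            """
            if delta_rad <= 0.0:
                return 0.0
            m2 = mach_inf ** 2
            # Rayleigh pitot formula: stagnation pressure behind a normal shock
            # divided by the freestream static pressure.
            p02_pinf = (((gamma + 1) ** 2 * m2 / (4 * gamma * m2 - 2 * (gamma - 1)))
                        ** (gamma / (gamma - 1))
                        * (1 - gamma + 2 * gamma * m2) / (gamma + 1))
            cp_max = 2.0 / (gamma * m2) * (p02_pinf - 1.0)  # stagnation-point Cp
            # Classical Newtonian theory replaces cp_max by 2.
            return cp_max * math.sin(delta_rad) ** 2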

    A Combinatorial Solution to Non-Rigid 3D Shape-to-Image Matching

    We propose a combinatorial solution for the problem of non-rigidly matching a 3D shape to 3D image data. To this end, we model the shape as a triangular mesh and allow each triangle of this mesh to be rigidly transformed to achieve a suitable matching to the image. By penalising the distance and the relative rotation between neighbouring triangles, our matching compromises between image and shape information. In this paper, we resolve two major challenges: firstly, we address the resulting large and NP-hard combinatorial problem with a suitable graph-theoretic approach; secondly, we propose an efficient discretisation of the unbounded 6-dimensional Lie group SE(3). To our knowledge, this is the first combinatorial formulation for non-rigid 3D shape-to-image matching. In contrast to existing local (gradient descent) optimisation methods, we obtain solutions that do not require a good initialisation and that are within a bound of the optimal solution. We evaluate the proposed method on the two problems of non-rigid 3D shape-to-shape and non-rigid 3D shape-to-image registration and demonstrate that it provides promising results. Comment: 10 pages, 7 figures
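
    The exact form of the pairwise energy is not given in the abstract; the following is only a minimal sketch of the kind of penalty described above, assuming each triangle's rigid transform is given as a rotation matrix and a translation vector (the function name and the weights are illustrative, not the paper's).

        import numpy as np

        def pairwise_cost(R1, t1, R2, t2, w_trans=1.0, w_rot=1.0):
            """Smoothness penalty between the rigid transforms (R1, t1) and (R2, t2)
            assigned to two neighbouring triangles.

            Translation term: Euclidean distance between the translations.
            Rotation term: geodesic angle of the relative rotation R1^T R2 on SO(3).
            The weights trade shape rigidity against image fidelity.
            """
            d_trans = np.linalg.norm(t1 - t2)
            R_rel = R1.T @ R2
            # Angle of the relative rotation, clipped for numerical safety.
            cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
            d_rot = np.arccos(cos_theta)
            return w_trans * d_trans + w_rot * d_rot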

    Efficient Methods for Continuous and Discrete Shape Analysis

    When interpreting an image of a given object, humans are able to abstract from the presented color information in order to see the presented object itself. This abstraction is also known as shape. The concept of shape is not defined exactly in Computer Vision, and in this work we use three different forms of its definition in order to acquire and analyze shapes. This work is devoted to improving the efficiency of methods that solve important applications of shape analysis. The most important problem in shape analysis is that of shape acquisition. To simplify this very challenging problem, numerous researchers have incorporated prior knowledge into the acquisition of shapes. We present the first approach to acquiring shapes under prior shape knowledge that always computes the global minimum of the involved functional, which combines a Mumford-Shah-like functional with a certain class of shape priors, including statistical and dynamical shape priors. In order to analyze shapes, it is important not only to acquire them but also to classify them. In this work, we follow the concept of defining a distance function that measures the dissimilarity of two given shapes, and we address two different ways of obtaining such a distance function. Firstly, we model the set of all shapes as a metric space induced by the shortest path on an orbifold. The shortest path provides a shape morphing, i.e., a continuous transformation from one shape into another. Secondly, we address the problem of shape matching, which finds corresponding points on two shapes with respect to a preselected feature. Our main contribution to shape morphing is an immense acceleration of the morphing computation. Instead of solving partial or ordinary differential equations, we solve this problem via a gradient descent approach that successively shortens the length of a path on the given manifold. In our runtime tests, we observed an acceleration of up to a factor of 1000. Shape matching is a classical discrete problem. If each shape is discretized by N shape points, most Computer Vision methods require a cubic run time. We provide two approaches that reduce this worst-case complexity to O(N² log(N)). One approach exploits the planarity of the involved graph in order to efficiently compute N shortest paths in a graph with O(N²) vertices; the other computes a minimal cut in a planar graph in O(N log(N)). In order to make the latter applicable to shape matching, we improved the run time of a recently developed graph cut approach by an empirical factor of 2–4.
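
    The thesis's planarity-based speedups are not reproduced here; as a baseline sketch of the shortest-path formulation of shape matching it builds on, the following Python snippet finds an optimal monotone correspondence between two discretised contours from a fixed starting match. The feature descriptors, step set, and function name are assumptions for illustration only.

        import heapq
        import numpy as np

        def match_contours(F, G, start=(0, 0)):
            """Match two discretised contours by a shortest path in their product graph.

            F, G: (N, d) and (M, d) arrays of per-point feature descriptors.
            A matching is a monotone path from 'start' to (N-1, M-1); each step advances
            on one or both contours and pays the feature dissimilarity of the reached pair.
            One Dijkstra run costs O(N*M log(N*M)); the thesis obtains the full cyclic
            matching faster by exploiting planarity, which this sketch does not do.
            """
            N, M = len(F), len(G)
            cost = lambda i, j: np.linalg.norm(F[i] - G[j])
            dist = {start: cost(*start)}
            heap = [(dist[start], start)]
            while heap:
                d, (i, j) = heapq.heappop(heap)
                if (i, j) == (N - 1, M - 1):
                    return d
                if d > dist.get((i, j), np.inf):
                    continue  # stale heap entry
                for di, dj in ((1, 0), (0, 1), (1, 1)):  # monotone steps only
                    ni, nj = i + di, j + dj
                    if ni < N and nj < M:
                        nd = d + cost(ni, nj)
                        if nd < dist.get((ni, nj), np.inf):
                            dist[(ni, nj)] = nd
                            heapq.heappush(heap, (nd, (ni, nj)))
            return dist.get((N - 1, M - 1), np.inf)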

    Bimetric gravity is cosmologically viable

    Bimetric theory describes gravitational interactions in the presence of an extra spin-2 field. Previous work has suggested that its cosmological solutions are generically plagued by instabilities. We show that by taking the Planck mass for the second metric, M_f, to be small, these instabilities can be pushed back to unobservably early times. In this limit, the theory approaches general relativity with an effective cosmological constant which is, remarkably, determined by the spin-2 interaction scale. This provides a late-time expansion history which is extremely close to ΛCDM, but with a technically natural value for the cosmological constant. We find M_f should be no larger than the electroweak scale in order for cosmological perturbations to be stable by big-bang nucleosynthesis. We further show that in this limit the helicity-0 mode is no longer strongly coupled at low energy scales. Comment: 8+2 pages, 2 tables. Version published in PLB. Minor typo corrections from v

    Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs

    As deep learning models continue to advance and are increasingly utilized in real-world systems, robustness remains a major challenge. Existing certified training methods produce models that achieve high provable robustness guarantees at certain perturbation levels. However, the main problem with such models is dramatically low standard accuracy, i.e., accuracy on clean, unperturbed data, which makes them impractical. In this work, we consider a more realistic perspective of maximizing the robustness of a model at certain (high) levels of standard accuracy. To this end, we propose a novel certified training method based on the key insight that training with adaptive certified radii helps to improve both the accuracy and the robustness of the model, advancing state-of-the-art accuracy-robustness tradeoffs. We demonstrate the effectiveness of the proposed method on the MNIST, CIFAR-10, and TinyImageNet datasets. In particular, on CIFAR-10 and TinyImageNet, our method yields models with up to two times higher robustness, measured as the average certified radius over a test set, at the same levels of standard accuracy compared to baseline approaches. Comment: Presented at the ICML 2023 workshop "New Frontiers in Adversarial Machine Learning"
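
    The paper's actual adaptation rule is not given in the abstract; the PyTorch sketch below only illustrates the adaptive-radius idea, assigning a larger certification radius to examples the model already classifies confidently. The function name, the confidence-based rule, and the certified_loss helper in the usage comment are all assumptions, not the published method.

        import torch

        def adaptive_radii(model, x, y, eps_max, alpha=0.5):
            """Assign a per-example certification radius for one training batch.

            Illustration only: confidently classified examples get a radius near
            eps_max, uncertain ones a smaller radius, so that certified training
            does not crush standard accuracy.
            """
            with torch.no_grad():
                probs = torch.softmax(model(x), dim=1)
                conf = probs.gather(1, y.unsqueeze(1)).squeeze(1)  # true-class confidence
            return eps_max * torch.clamp(conf, min=alpha)          # radius in [alpha*eps_max, eps_max]

        # Usage inside a training step, with certified_loss() standing in for any
        # certified training objective (e.g. an interval-bound loss) -- hypothetical helper:
        #   eps = adaptive_radii(model, x, y, eps_max=8 / 255)
        #   loss = certified_loss(model, x, y, eps).mean()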

    Discrete-Continuous ADMM for Transductive Inference in Higher-Order MRFs

    This paper introduces a novel algorithm for transductive inference in higher-order MRFs, where the unary energies are parameterized by a variable classifier. The considered task is posed as a joint optimization problem over the continuous classifier parameters and the discrete label variables. In contrast to prior approaches such as convex relaxations, we propose an advantageous decoupling of the objective function into discrete and continuous subproblems and a novel, efficient optimization method related to ADMM. This approach preserves the integrality of the discrete label variables and guarantees global convergence to a critical point. We demonstrate the advantages of our approach in several experiments, including video object segmentation on the DAVIS data set and interactive image segmentation.
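
    The paper's exact splitting is not spelled out in the abstract; the Python skeleton below only illustrates a generic consensus-ADMM alternation between a discrete labelling step and a continuous classifier update, with all subproblem solvers supplied as assumed callables rather than the paper's own subroutines.

        import numpy as np

        def consensus_admm(solve_discrete, solve_continuous, predict, n, K, iters=50):
            """Generic consensus-ADMM skeleton for a joint discrete/continuous objective.

            Assumed callables (the quadratic proximal weight is taken to be fixed inside them):
              solve_discrete(Q)    -> (n, K) one-hot labelling minimising the discrete energy
                                      plus a quadratic proximity term to Q (e.g. an MRF solver)
              solve_continuous(T)  -> classifier parameters theta fitted so that
                                      predict(theta) stays close to the target T
              predict(theta)       -> (n, K) soft label scores of the current classifier
            """
            theta = solve_continuous(np.full((n, K), 1.0 / K))  # start from uniform targets
            U = np.zeros((n, K))                                 # scaled dual variable
            Z = None
            for _ in range(iters):
                S = predict(theta)
                Z = solve_discrete(S - U)        # discrete step: labels remain integral
                theta = solve_continuous(Z + U)  # continuous step: fit classifier to relaxed targets
                U = U + Z - predict(theta)       # dual update enforcing Z ≈ predict(theta)
            return theta, Z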