2,652 research outputs found

    Role of Computational Fluid Dynamics and Wind Tunnels in Aeronautics R and D

    The purpose of this report is to assess the status of, and future projections for, the replacement of wind tunnels by computation in design, and to gauge the potential impact of computational approaches on wind-tunnel utilization, all with an eye toward reducing infrastructure costs at aeronautics R&D centers. Wind tunnels have been closing for myriad reasons, and such closings have reduced infrastructure costs. Further cost reductions are desired, and the work herein attempts to project which wind-tunnel capabilities can be replaced in the future and, where possible, the likely timing of such replacements. If it is possible to project when a facility could be closed, then maintenance and other associated costs could be rescheduled accordingly (i.e., before the fact) to obtain an even greater infrastructure cost reduction.

    Spray automated balancing of rotors: Methods and materials

    The work described consists of two parts. In the first part, a survey is performed to assess the state of the art in rotor balancing technology as it applies to Army gas turbine engines and associated power transmission hardware. The second part evaluates thermal spray processes for balancing weight addition in an automated balancing procedure. The industry survey reveals that: (1) computerized balancing equipment is valuable to reduce errors, improve balance quality, and provide documentation; (2) slow-speed balancing is used exclusively, with no foreseeable need for production high-speed balancing; (3) automated procedures are desired; and (4) thermal spray balancing is viewed with cautious optimism, whereas laser balancing is viewed with concern for flight propulsion hardware. The FARE (Fuel/Air Repetitive Explosion) method was selected for experimental evaluation of bond strength and fatigue strength. Material combinations tested were tungsten carbide on stainless steel (17-4), Inconel 718 on Inconel 718, and Tribaloy 800 on Inconel 718. Bond strengths were entirely adequate for use in balancing. Material combinations have been identified for use in hot and cold sections of an engine, with fatigue strengths equivalent to those for hand-ground materials.

    A Tuned and Scalable Fast Multipole Method as a Preeminent Algorithm for Exascale Systems

    Among the algorithms that are likely to play a major role in future exascale computing, the fast multipole method (FMM) appears as a rising star. Our recent work showed scaling of an FMM on GPU clusters, with problem sizes on the order of billions of unknowns. That work led to a highly parallel FMM, scaling to thousands of GPUs or tens of thousands of CPUs. This paper reports on a campaign of performance tuning and scalability studies using multi-core CPUs on the Kraken supercomputer. All kernels in the FMM were parallelized using OpenMP, and a test using 10^7 particles randomly distributed in a cube showed 78% efficiency on 8 threads. Tuning of the particle-to-particle kernel using SIMD instructions resulted in a 4x speed-up of the overall algorithm on single-core tests with 10^3 - 10^7 particles. Parallel scalability was studied in both strong and weak scaling. The strong scaling test used 10^8 particles and resulted in 93% parallel efficiency on 2048 processes for the non-SIMD code and 54% for the SIMD-optimized code (which was still 2x faster). The weak scaling test used 10^6 particles per process and resulted in 72% efficiency on 32,768 processes, with the largest calculation taking about 40 seconds to evaluate more than 32 billion unknowns. This work builds up evidence for our view that FMM is poised to play a leading role in exascale computing, and we end the paper with a discussion of the features that make it a particularly favorable algorithm for the emerging heterogeneous and massively parallel architectural landscape.
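
    As a rough illustration of the operation being tuned, the sketch below is a minimal numpy version of the direct particle-to-particle (P2P) evaluation of a 1/r potential, the near-field kernel that the paper vectorizes with SIMD. The function and parameter names are illustrative assumptions, not the paper's code, which targets multi-core CPUs in a compiled language.

```python
import numpy as np

def p2p_kernel(x, q, eps=1e-12):
    """Direct O(N^2) particle-to-particle (P2P) evaluation of a 1/r potential.

    x: (N, 3) particle positions; q: (N,) source strengths. The paper tunes
    this kernel with SIMD instructions; this numpy version is illustrative.
    """
    r = x[:, None, :] - x[None, :, :]          # (N, N, 3) pairwise offsets
    d = np.sqrt((r ** 2).sum(axis=-1) + eps)   # softened pairwise distances
    np.fill_diagonal(d, np.inf)                # exclude self-interaction
    return (q[None, :] / d).sum(axis=1)        # potential at each target

# Example: potentials for 10^3 unit-strength particles in a unit cube
phi = p2p_kernel(np.random.rand(1000, 3), np.ones(1000))
```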

    Designing and manufacturing assemblies

    Resilient Multi-range Radar Detection System for Autonomous Vehicles: A New Statistical Method

    Critical issues with current detection systems are their susceptibility to adverse weather conditions and the constraint on the vertical field of view of the radars, which limits the ability of such systems to accurately detect the height of targets. In this paper, a novel multi-range radar (MRR) arrangement (i.e., a triple of long-range, medium-range, and short-range radars) based on the sensor fusion technique is investigated that can detect objects of different sizes in a level 2 advanced driver-assistance system. To improve the accuracy of the detection system, the resilience of the MRR approach is investigated using the Monte Carlo (MC) method for the first time. By adopting the MC framework, this study shows that only a handful of fine-scaled computations are required to accurately predict the statistics of radar detection failure, compared to many expensive trials. The results show large computational gains for such a complex problem. The MRR approach improved detection reliability, with an increased mean detection distance (4.9% over the medium-range and 13% over the long-range radar) and a reduced standard deviation compared with existing methods (30% over the medium-range and 15% over the long-range radar). This will help establish a new path toward faster and cheaper development of modern vehicle detection systems.
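
    The core of the MC idea can be sketched in a few lines: sample detection outcomes from an assumed distribution and estimate failure statistics from the sample, with the standard error of the estimate shrinking as 1/sqrt(n). The Gaussian range model and every parameter value below are illustrative assumptions, not the paper's actual simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_detection_failure(n_trials, mean_range_m, sigma_m, required_m):
    """Monte Carlo estimate of the probability that the radar first detects
    a target closer than the required range (a detection failure).
    The Gaussian range model is assumed purely for illustration."""
    ranges = rng.normal(mean_range_m, sigma_m, n_trials)  # sampled detection distances
    p_fail = np.mean(ranges < required_m)                 # fraction of failed trials
    std_err = np.sqrt(p_fail * (1 - p_fail) / n_trials)   # shrinks as 1/sqrt(n)
    return p_fail, std_err

# Example with hypothetical medium-range-radar numbers
p, se = mc_detection_failure(10_000, mean_range_m=150.0, sigma_m=20.0, required_m=100.0)
print(f"estimated failure probability: {p:.4f} +/- {se:.4f}")
```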

    Modeling Through

    Theorists of justice have long imagined a decision-maker capable of acting wisely in every circumstance. Policymakers seldom live up to this ideal. They face well-understood limits, including an inability to anticipate the societal impacts of state intervention along a range of dimensions and values. Policymakers cannot see around corners or address societal problems at their roots. When it comes to regulation and policy-setting, policymakers are often forced, in the memorable words of political economist Charles Lindblom, to “muddle through” as best they can. Powerful new affordances, from supercomputing to artificial intelligence, have arisen in the decades since Lindblom’s 1959 article that stand to enhance policymaking. Computer-aided modeling holds promise in delivering on the broader goals of forecasting and systems analysis developed in the 1970s, arming policymakers with the means to anticipate the impacts of state intervention along several lines: to model, instead of muddle. A few policymakers have already dipped a toe into these waters; others are being told that the water is warm. The prospect that economic, physical, and even social forces could be modeled by machines confronts policymakers with a paradox. Society may expect policymakers to avail themselves of techniques already usefully deployed in other sectors, especially where statutes or executive orders require the agency to anticipate the impact of new rules on particular values. At the same time, “modeling through” holds novel perils that policymakers may be ill-equipped to address. Concerns include privacy, brittleness, and automation bias, of which law and technology scholars are keenly aware. They also include the extension and deepening of the quantifying turn in governance, a process that obscures normative judgments and recognizes only that which the machines can see. The water may be warm, but there are sharks in it. These tensions are not new. And there is danger in hewing to the status quo. (We should still pursue renewable energy even though wind turbines as presently configured waste energy and kill wildlife.) As modeling through gains traction, however, policymakers, constituents, and academic critics must remain vigilant. This being early days, American society is uniquely positioned to shape the transition from muddling to modeling.

    Design for assembly: re-design of an electrical switch for the ease of automatic assembly

    A design for assembly (DFA) method is used to analyze the existing design of parts of an electrical switch, and to reduce and re-design them for ease of automatic assembly. The procedure for the selection of a suitable and economical assembly method is presented, based on the Boothroyd & Dewhurst methods. Analysis of the initial design for manual assembly and the re-design for automatic assembly are presented. An algorithmic approach for simplified generation of all mechanical assembly sequences and selection of good assembly sequences is presented with graphical representations. A workstation needed for the automatic assembly is developed, which incorporates vibratory bowl feeders, an indexing machine, and other equipment.
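
    A brute-force version of the sequence-generation step can be sketched as filtering permutations against precedence constraints. The parts and constraint pairs below are hypothetical stand-ins for the switch components, and a real DFA tool would prune the search rather than enumerate every permutation.

```python
from itertools import permutations

# Hypothetical precedence pairs: (a, b) means part a must be placed before b.
PRECEDENCE = {("base", "contact"), ("contact", "spring"), ("base", "cover")}
PARTS = ["base", "contact", "spring", "cover"]

def valid_sequences(parts, precedence):
    """Yield every assembly order consistent with the precedence pairs."""
    for seq in permutations(parts):
        pos = {p: i for i, p in enumerate(seq)}
        if all(pos[a] < pos[b] for a, b in precedence):
            yield seq

for seq in valid_sequences(PARTS, PRECEDENCE):
    print(" -> ".join(seq))
```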

    On the Integration of Skilled Robot Motions for Productivity in Manufacturing

    Robots used in manufacturing today are tailored to their tasks by system integration based on expert knowledge concerning both production and machine control. For upcoming generations of even more flexible robot solutions, in applications such as dexterous assembly, robot setup and programming become even more challenging. Reuse of solutions in terms of parameters, controls, process tuning, and software modules in general then becomes increasingly important. There has been valuable progress in the reuse of automation solutions when machines comply with standards and behave according to nominal models. However, more flexible robots with sensor-based manipulation skills and cognitive functions for human interaction are far too complex to manage, and solutions are rarely reusable, since knowledge is either implicit in imperative software or not captured in machine-readable form. We propose techniques that build on existing knowledge by converting structured data into an RDF-based knowledge base. By enhancements of industrial control systems and available engineering tools, such knowledge can be gradually extended as part of the interaction during the definition of the robot task.
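
    As a sketch of the conversion step, the snippet below turns one structured record into RDF triples using the rdflib library. The namespace, property names, and the skill record itself are invented for illustration and are not the paper's actual ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Illustrative namespace; the paper's actual skill ontology is not shown here.
SKILL = Namespace("http://example.org/robot-skill#")

g = Graph()
g.bind("skill", SKILL)

# Convert one structured record (a tuned parameter of a hypothetical
# assembly skill) into triples so it can be queried and reused across tasks.
task = SKILL["snap_fit_assembly"]
g.add((task, RDF.type, SKILL.AssemblySkill))
g.add((task, SKILL.forceThreshold, Literal(12.5, datatype=XSD.double)))
g.add((task, SKILL.tunedFor, Literal("switch housing")))

print(g.serialize(format="turtle"))
```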