
    Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    Coordinating agents to complete a set of tasks with intercoupled temporal and resource constraints is computationally challenging, yet human domain experts can solve these difficult scheduling problems using paradigms learned through years of apprenticeship. A process for manually codifying this domain knowledge within a computational framework is necessary to scale beyond the "single-expert, single-trainee" apprenticeship model. However, human domain experts often have difficulty describing their decision-making processes, causing the codification of this knowledge to become laborious. We propose a new approach for capturing domain-expert heuristics through a pairwise ranking formulation. Our approach is model-free and does not require enumerating or iterating through a large state space. We empirically demonstrate that this approach accurately learns multifaceted heuristics on a synthetic data set incorporating job-shop scheduling and vehicle routing problems, as well as on two real-world data sets consisting of demonstrations of experts solving a weapon-to-target assignment problem and a hospital resource allocation problem. We also demonstrate that policies learned from human scheduling demonstrations via apprenticeship learning can substantially improve the efficiency of a branch-and-bound search for an optimal schedule. We employ this human-machine collaborative optimization technique on a variant of the weapon-to-target assignment problem. We demonstrate that this technique generates solutions substantially superior to those produced by human domain experts at a rate up to 9.5 times faster than an optimization approach and can be applied to optimally solve problems twice as complex as those solved by a human demonstrator.
    Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables.
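    The pairwise ranking formulation described above can be sketched roughly as follows. This is a hypothetical minimal version, not the authors' actual implementation: at each demonstrated decision point, the feature difference between the expert's chosen task and each unchosen alternative becomes a positive training example (and the reverse difference a negative one), and a simple classifier learned on these differences then scores candidate tasks without enumerating any state space.

```python
import numpy as np

def pairwise_examples(decision_points):
    # decision_points: list of (chosen_features, [alternative_features, ...]).
    # Each expert choice yields symmetric positive/negative difference examples.
    X, y = [], []
    for chosen, alternatives in decision_points:
        for alt in alternatives:
            X.append(chosen - alt); y.append(1)
            X.append(alt - chosen); y.append(0)
    return np.array(X), np.array(y)

def train_logistic(X, y, lr=0.1, epochs=500):
    # Plain batch gradient ascent on the logistic log-likelihood.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

def best_task(w, candidates):
    # Score each schedulable candidate and return the highest-ranked key.
    names = list(candidates)
    feats = np.array([candidates[n] for n in names])
    return names[int(np.argmax(feats @ w))]
```

    At scheduling time, `best_task` greedily applies the learned ranking to whichever tasks are currently feasible; the feature vectors here are placeholders for whatever task descriptors a given domain provides.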

    A Hybrid Multiobjective Discrete Particle Swarm Optimization Algorithm for Cooperative Air Combat DWTA


    Application of a D Number based LBWA Model and an Interval MABAC Model in Selection of an Automatic Cannon for Integration into Combat Vehicles

    A decision-making procedure for the selection of a weapon system involves different, often contradictory criteria and requires reaching decisions under conditions of uncertainty. This paper proposes a novel multi-criteria methodology based on D numbers which enables efficient analysis of the information used for decision making. The proposed methodology has been developed in order to enable the selection of an efficient weapon system under conditions in which a large number of hierarchically structured evaluation criteria have to be processed. A novel D number based Level Based Weight Assessment – Multi Attributive Border Approximation area Comparison (D LBWA-MABAC) model is used for the selection of an automatic cannon for integration into combat vehicles. Criteria weights are determined based on the improved LBWA-D model. The traditional MABAC method has been further developed by the integration of interval numbers. A hybrid D LBWA-MABAC framework is used for the evaluation of an automatic cannon for integration into combat vehicles. Nine weapon systems used worldwide have been ranked in this paper. This multi-criteria approach allows decision makers to assess options objectively and reach a rational decision regarding the selection of an optimal weapon system. Validation of the proposed methodology is performed through a sensitivity analysis which studies how changes in the weights of the best criterion and the elasticity coefficient affect the ranking results.
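    For reference, the classical (crisp) MABAC ranking step that the paper extends with D numbers and interval arithmetic can be sketched as below; the decision matrix, weights, and criterion directions used to exercise it would be hypothetical stand-ins, not the paper's cannon data.

```python
import numpy as np

def mabac(X, weights, benefit):
    # X: (alternatives x criteria) decision matrix; weights sum to 1;
    # benefit[j] is True for benefit criteria, False for cost criteria.
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Min-max normalisation, direction depending on criterion type.
    N = np.where(benefit, (X - lo) / (hi - lo), (X - hi) / (lo - hi))
    V = weights * (N + 1.0)                   # weighted normalised matrix
    G = np.prod(V, axis=0) ** (1.0 / len(X))  # border approximation area (geometric mean)
    return (V - G).sum(axis=1)                # summed distances from the border area
```

    A higher score places an alternative further above the border approximation area, and the alternatives are ranked by score in descending order.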

    Aeronautical Engineering: A Continuing Bibliography with Indexes

    This report lists reports, articles and other documents recently announced in the NASA STI Database.

    Aeronautical Engineering. A continuing bibliography with indexes, supplement 156

    This bibliography lists 288 reports, articles and other documents introduced into the NASA scientific and technical information system in December 1982.

    A Framework For Measuring The Value-added Of Knowledge Processes With Analysis Of Process Interactions And Dynamics

    The best-known and most widely used methods use cash flows and tangible assets to measure the impact of investments on an organization's outputs. In the last decade, however, many newer organizations whose outputs are heavily dependent on information technology have come to utilize knowledge as their main asset. These organizations' market values rest on the knowledge of their employees and their technological capabilities. In the current technology-based business landscape, the value added by assets utilized for the generation of outputs cannot be appropriately measured and managed without considering the role that intangible assets and knowledge play in executing processes. The analysis of processes for comparison and decision making based on intangible value added can be accomplished using the knowledge required to execute processes. The measurement of the value added by knowledge can provide a more realistic framework for the analysis of processes where traditional cost methods are not appropriate, enabling managers to better allocate and control knowledge-based processes. Further consideration of the interactions and complexity between proposed process alternatives can yield answers about where and when investments can improve value added while dynamically providing higher returns on investment.

    Design of a hypersonic airbreathing cruise missile

    In this project, a hypersonic airbreathing cruise missile is designed through an optimization process. The main use of this futuristic weapon technology is to be employed against enemy ships while being launched from a Mk-41 VLS launcher, the most widespread VLS in the world. For this type of missile, Genetic Algorithms and Monte Carlo simulations are used in the optimization process to find an optimal solution for the rocket booster engine, the scramjet engine, the aerodynamics and the warhead sizing. With this data, the trajectory, stability and manoeuvrability are studied to determine the performance of the missile. Other aspects concerning the missile materials of the airframe and dome are discussed in a qualitative way, and no structural analysis is performed within this project. It should be noted that hypersonic weapons are not yet mature technologies, and the only information available about them comes from research papers regarding their aerodynamics and propulsion systems. Moreover, the only hypersonic vehicles built to date are experimental, meaning that the baseline data from which to start the design process are scarce. This poses a great challenge for this project, where creativity and the use of analytical expressions for rapid missile synthesis and conceptual design are a must. On the other hand, the complex behaviour of hypersonic flows adds further difficulty, since most of the applicable methods are numerical: they can be computed easily with CFD in the final stages of design, but they are not fast enough to be employed in the initial stages. The final results prove that the methods employed for initial sizing are accurate enough to produce a CAD design which can be used as a baseline for future research in this technology.
    To conclude, the final results obtained with the algorithms help us to understand their effects on the design, especially when flying at high Mach numbers, where the classical configuration of airframe, wings and tail must be radically redesigned into more aerodynamically efficient bodies such as waveriders. Regarding the optimization process, the objective functions were changed manually to find better results at each iteration, but better techniques, such as the local search methods used in artificial intelligence, could be employed.
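    The genetic-algorithm side of such a sizing study can be sketched as a minimal evolutionary loop like the one below. The objective function and bounds are hypothetical stand-ins for the project's actual booster/scramjet/warhead sizing model, and the operator choices (elitist truncation selection, one-point crossover, single-gene Gaussian mutation) are one common arrangement, not necessarily the one used in the thesis.

```python
import random

def genetic_minimize(objective, bounds, pop=30, gens=60, seed=0):
    # Minimise objective(x) over box-bounded design variables.
    rng = random.Random(seed)
    dim = len(bounds)
    population = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=objective)
        elite = population[: pop // 2]        # truncation selection keeps the best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(dim)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(dim)            # Gaussian mutation of one gene, clipped to bounds
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        population = elite + children
    return min(population, key=objective)
```

    In a real sizing loop the objective would wrap the rapid-synthesis performance model, and a Monte Carlo inner loop could average the objective over uncertain flight conditions.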

    Reduced-order modelling for high-speed aerial weapon aerodynamics

    In this work, a high-fidelity low-cost surrogate of a computational fluid dynamics analysis tool was developed. This computational tool is composed of general and physics-based approximation methods by which three-dimensional high-speed aerodynamic flow-field predictions are made with high efficiency and an accuracy which is comparable with that of CFD. The tool makes use of reduced-basis methods that are suitable for both linear and non-linear problems, whereby the basis vectors are computed via the proper orthogonal decomposition (POD) of a training dataset or a set of observations. The surrogate model was applied to two flow problems related to high-speed weapon aerodynamics. Comparisons of surrogate model predictions with high-fidelity CFD simulations suggest that POD-based reduced-order modelling together with response surface methods provides a reliable and robust approach for efficient and accurate predictions. In contrast to the many modelling efforts reported in the literature, this surrogate model provides access to information about the whole flow-field. In an attempt to reduce the up-front cost necessary to generate the training dataset from which the surrogate model is subsequently developed, a variable-fidelity POD-based reduced-order modelling method is proposed in this work for the first time. In this model, the scalar coefficients, which are obtained by projecting the solution vectors onto the basis vectors, are mapped between spaces of low and high fidelities to achieve high-fidelity predictions with complete flow-field information. In general, this technique offers an automatic way of fusing variable-fidelity data through interpolation and extrapolation schemes together with reduced-order modelling (ROM). Furthermore, a study was undertaken to investigate the possibility of modelling the transonic flow over an aerofoil using a kernel POD-based reduced-order modelling method.
    By using this type of ROM it was noticed that the weak non-linear features of the transonic flow are accurately modelled using a small number of basis vectors. The strong non-linear features are only modelled accurately by using a large number of basis vectors.
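    The POD skeleton underlying such a surrogate can be sketched in a few lines via the singular value decomposition of a snapshot matrix: the leading left singular vectors are the POD modes, and a flow-field is compressed to, and reconstructed from, its modal coefficients. This is a generic illustration, not the thesis code; the response-surface interpolation of coefficients and the variable-fidelity coefficient mapping are omitted.

```python
import numpy as np

def pod_basis(snapshots, r):
    # snapshots: columns are flow-field solution vectors (the training set).
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]          # first r POD modes, ordered by energy content

def project(basis, field):
    return basis.T @ field   # scalar (modal) coefficients of the field

def reconstruct(basis, coeffs):
    return basis @ coeffs    # low-order reconstruction of the flow-field
```

    A surrogate then interpolates the coefficient vectors over the design or flow-condition space, so a new prediction costs one small matrix product rather than a CFD solve.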

    Dynamic Agent Based Modeling Using Bayesian Framework for Addressing Intelligence Adaptive Nuclear Nonproliferation Analysis

    Realistically, no two nuclear proliferating or defensive entities are exactly identical; Agent Based Modeling (ABM) is a computational methodology that addresses the uniqueness of those facilitating or preventing nuclear proliferation. The modular Bayesian ABM Nonproliferation Enterprise (BANE) tool has been developed at Texas A&M University for nuclear nonproliferation analysis. Entities engaged in nuclear proliferation cover a range of activities and fall within proliferating, defensive, and neutral agent classes. In BANE, proliferating agents pursue nuclear weapons, or at least a latent nuclear weapons capability. Defensive nonproliferation agents seek to uncover, hinder, reverse, or dismantle any proliferation networks they discover. The vast majority of agents are neutral agents, of which only a small subset can significantly enable proliferation. BANE facilitates intelligent agent actions by employing entropy and mutual information for proliferation pathway determinations. Factors including technical success, resource expenditures, and detection probabilities are assessed by agents seeking optimal proliferation postures. Coupling ABM with Bayesian analysis is powerful from an omniscience-limitation perspective. Bayesian analysis supports linking crucial knowledge and technology requirements into relationship networks for each proliferation category. With a Bayesian network, gaining information on proliferator actions in one category informs defensive agents where to expend limited counter-proliferation impeding capabilities. Correlating incomplete evidence for pattern recognition in BANE using Bayesian inference draws upon technical supply-side proliferation linkages grounded in physics. Potential or current proliferator security, economic trajectory, or other factors modify the demand drivers for undertaking proliferation. Using Bayesian inference, the coupled demand and supply proliferation drivers are connected to create feedback interactions.
    Verification and some validation of BANE are performed using scenarios and historical case studies. Restrictive export controls, swings in global soft-power affinity, and past proliferation program assessments for entities ranging from the Soviet Union to Iraq demonstrate BANE's flexibility and applicability. As a newly developed tool, BANE has room for future contributions from computer scientists, engineers, and social scientists. Through BANE, the framework exists for detailed nonproliferation expansion into broader weapons-of-mass-effect analysis, since nuclear proliferation is but one option for addressing international security concerns.
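    The Bayesian-inference step at the core of such a framework reduces, for a single hypothesis and one piece of evidence, to the familiar Bayes'-rule update; chaining updates over several indicators is how incomplete evidence accumulates into a posture assessment. The sketch below is a toy illustration with made-up numbers, not BANE's actual network or data.

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    # Posterior P(H | e) from one piece of evidence via Bayes' rule.
    num = prior * likelihood_h
    return num / (num + (1.0 - prior) * likelihood_not_h)

# Hypothetical example: prior belief that an entity pursues enrichment,
# updated by two observed indicators (all likelihoods are invented).
posterior = 0.05
for p_if_prolif, p_if_not in [(0.8, 0.1), (0.6, 0.3)]:
    posterior = bayes_update(posterior, p_if_prolif, p_if_not)
```

    In a full Bayesian network the indicators are not independent, so the joint structure (the supply-side technology linkages the abstract mentions) replaces this naive chained update.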
