17,155 research outputs found

    Mapping customer needs to engineering characteristics: an aerospace perspective for conceptual design

    Designing complex engineering systems, such as an aircraft or an aero-engine, is immensely challenging. Formal Systems Engineering (SE) practices are widely used in the aerospace industry throughout the design process to minimise overall design effort, corrective re-work, and ultimately overall development and manufacturing costs. Incorporating the needs and requirements of customers and other stakeholders into the conceptual and early design process is vital for the success and viability of any development programme. This paper presents a formal methodology, the Value-Driven Design (VDD) methodology, that has been developed for collaborative and iterative use in the Extended Enterprise (EE) within the aerospace industry, and that has been applied using the Concept Design Analysis (CODA) method to map captured Customer Needs (CNs) to Engineering Characteristics (ECs) and to model an overall ‘design merit’ metric for use in design assessments, sensitivity analyses, and engineering design optimisation studies. Two case studies of increasing complexity are presented to elucidate the application areas of the CODA method in the context of the VDD methodology for the EE within the aerospace sector.
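The abstract's idea of aggregating EC performance into a single design-merit score can be sketched as a weighted roll-up. This is a minimal illustration only: the customer needs, weights, correlation scores, and satisfaction values below are invented for the example, not taken from the paper.

```python
# Hedged sketch of a CODA-style design-merit calculation. All numbers are
# hypothetical; real applications would derive them from stakeholder
# elicitation and concept simulation.

# Customer Needs (CNs) with normalised importance weights (assumed values).
cn_weights = {"low fuel burn": 0.5, "low noise": 0.3, "low maintenance cost": 0.2}

# Correlation of each Engineering Characteristic (EC) with each CN,
# QFD-style scoring (0 = none, 0.3 = moderate, 0.9 = strong) -- assumed.
correlation = {
    ("low fuel burn", "specific fuel consumption"): 0.9,
    ("low noise", "fan tip speed"): 0.9,
    ("low maintenance cost", "part count"): 0.3,
}

# Normalised satisfaction (0..1) of each EC for one candidate concept,
# e.g. the output of a value function applied to a simulated EC value.
ec_satisfaction = {"specific fuel consumption": 0.8,
                   "fan tip speed": 0.6,
                   "part count": 0.7}

def design_merit(cn_weights, correlation, ec_satisfaction):
    """Importance-weighted, correlation-weighted EC satisfaction."""
    merit = 0.0
    for cn, w in cn_weights.items():
        num = sum(c * ec_satisfaction[ec]
                  for (cn_, ec), c in correlation.items() if cn_ == cn)
        den = sum(c for (cn_, ec), c in correlation.items() if cn_ == cn)
        merit += w * (num / den if den else 0.0)
    return merit

print(round(design_merit(cn_weights, correlation, ec_satisfaction), 3))  # 0.72
```

Because the score is a smooth function of the EC values, it can feed directly into the sensitivity analyses and optimisation studies the abstract mentions.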

    A robust data driven approach to quantifying common-cause failure in power networks

    The standard alpha-factor model for common-cause failure assumes symmetry, in that all components must have identical failure rates. In this paper, we generalise the alpha-factor model to deal with asymmetry, in order to apply the model to power networks, which are typically asymmetric. For parameter estimation, we propose a set of conjugate Dirichlet-Gamma priors, and we discuss how posterior bounds can be obtained. Finally, we demonstrate our methodology on a simple yet realistic example.
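The conjugate-updating step behind alpha-factor estimation can be sketched in a few lines. Note this shows only the standard symmetric Dirichlet-multinomial case, not the paper's asymmetric Dirichlet-Gamma extension, and the prior hyperparameters and observed counts are invented for illustration.

```python
# Hedged sketch of conjugate Bayesian updating for alpha factors.
# counts[k-1] = number of failure events involving exactly k of the m
# components (assumed data); alpha_k is the long-run fraction of events
# with multiplicity k.

m = 3
counts = [20, 4, 1]          # 20 single, 4 double, 1 triple failure (hypothetical)
prior = [1.0, 1.0, 1.0]      # flat Dirichlet prior hyperparameters (assumed)

# Conjugacy: Dirichlet prior + multinomial counts -> Dirichlet posterior.
posterior = [a + n for a, n in zip(prior, counts)]
total = sum(posterior)

# Posterior mean of each alpha factor.
alpha_hat = [a / total for a in posterior]
print([round(a, 3) for a in alpha_hat])  # [0.75, 0.179, 0.071]
```

Posterior bounds of the kind the paper discusses could then be obtained by varying the prior hyperparameters over a set rather than fixing them, though that step is not shown here.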

    Beyond the DSGE Straitjacket

    Academic macroeconomics and the research departments of central banks have come to be dominated by Dynamic Stochastic General Equilibrium (DSGE) models based on micro-foundations of optimising representative agents with rational expectations. We argue that the dominance of this particular sort of DSGE model, and the resistance of some in the profession to alternatives, has become a straitjacket that restricts empirical and theoretical experimentation and inhibits innovation, and that the profession should embrace a more flexible approach to macroeconometric modelling. We describe one possible approach.
    Keywords: macroeconometric models, DSGE, VARs, long run theory

    On Similarities between Inference in Game Theory and Machine Learning

    In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains, so as to facilitate developments at the intersection of both fields, and as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. First, we show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
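The smooth best response at the heart of this line of work can be sketched as a logit (softmax) reply to an empirical belief about the opponent. This is a minimal self-play illustration of standard smooth fictitious play in a 2x2 coordination game, not the paper's moderated variant; the payoff matrix, temperature, and iteration count are all invented for the example.

```python
import math

# Hedged sketch of smooth (logit) fictitious play in a symmetric 2x2
# coordination game. Payoffs are hypothetical: action A coordinates for 3,
# action B for 2, miscoordination pays 0.
payoff = [[3.0, 0.0],   # row A vs opponent's (A, B)
          [0.0, 2.0]]   # row B vs opponent's (A, B)

def smooth_best_response(belief, temperature=0.5):
    """Softmax reply to a belief (probability vector) over opponent actions."""
    utils = [sum(p * q for p, q in zip(row, belief)) for row in payoff]
    exps = [math.exp(u / temperature) for u in utils]
    z = sum(exps)
    return [e / z for e in exps]

# Symmetric self-play: track the running empirical average of observed
# (mixed) play, as standard fictitious play does.
belief = [0.5, 0.5]
for t in range(1, 501):
    play = smooth_best_response(belief)
    belief = [b + (p - b) / t for b, p in zip(belief, play)]

print([round(b, 2) for b in belief])  # converges toward the A,A equilibrium
```

Replacing the running empirical average with a posterior distribution over opponent strategies, and integrating the best response over it, is the moderation step the abstract describes; the skeleton above is the baseline such a variant would modify.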