5,434 research outputs found

    Mapping customer needs to engineering characteristics: an aerospace perspective for conceptual design

    Designing complex engineering systems, such as an aircraft or an aero-engine, is immensely challenging. Formal Systems Engineering (SE) practices are widely used in the aerospace industry throughout the design process to minimise the overall design effort, corrective re-work, and ultimately overall development and manufacturing costs. Incorporating the needs and requirements of customers and other stakeholders into the conceptual and early design process is vital for the success and viability of any development programme. This paper presents a formal methodology, the Value-Driven Design (VDD) methodology, that has been developed for collaborative and iterative use in the Extended Enterprise (EE) within the aerospace industry, and that has been applied using the Concept Design Analysis (CODA) method to map captured Customer Needs (CNs) into Engineering Characteristics (ECs) and to model an overall ‘design merit’ metric to be used in design assessments, sensitivity analyses, and engineering design optimisation studies. Two case studies of increasing complexity are presented to elucidate the application areas of the CODA method in the context of the VDD methodology for the EE within the aerospace sector.
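The CODA-style mapping described in this abstract can be sketched, in highly simplified form, as a weighted aggregation of normalised engineering-characteristic scores into a single design-merit figure. The weights, scores, and the linear merit function below are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of a CODA-style design-merit calculation.
# Weights, needs, and merit scores are illustrative, not the paper's data.

def design_merit(weights, scores):
    """Overall design merit as a weighted sum of normalised EC scores.

    weights: relative importance of each customer need (must sum to 1)
    scores:  merit value in [0, 1] for each engineering characteristic
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * s for w, s in zip(weights, scores))

# Example: three hypothetical customer needs mapped to three ECs
weights = [0.5, 0.3, 0.2]   # e.g. fuel burn, weight, noise (assumed)
scores = [0.8, 0.6, 0.9]    # normalised merit of each EC value (assumed)
print(round(design_merit(weights, scores), 2))  # → 0.76
```

A scalar merit of this form is what makes the metric usable in sensitivity analyses and optimisation loops: perturbing one EC score perturbs the overall merit linearly through its weight.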

    A dimensioning and tolerancing methodology for concurrent engineering applications II: comprehensive solution strategy

    Dimensioning and tolerancing (D&T) is a multidisciplinary problem which requires the fulfillment of a large number of dimensional requirements. However, almost all of the currently available D&T tools are intended for use only by the designer, and they typically provide solutions for the requirements one at a time. This paper presents a methodology for determining the dimensional specifications of the component parts and sub-assemblies of a product by satisfying all of its requirements. The comprehensive solution strategy presented here includes: a strategy for separating D&T problems into groups, the determination of an optimum solution order for coupled functional equations, a generic tolerance allocation strategy, and strategies for solving different types of D&T problems. A number of commonly used cost minimization strategies, such as the use of standard parts, preferred sizes, preferred fits, and preferred tolerances, have also been incorporated into the proposed methodology. The methodology is interactive and intended for use in a concurrent engineering environment by members of a product development team.
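As one hedged illustration of the tolerance allocation step, a classic textbook strategy (not necessarily the paper's own method) uses the reciprocal cost model C_i(t_i) = A_i / t_i: minimising total cost subject to a worst-case stack-up constraint sum(t_i) = T gives, via Lagrange multipliers, tolerances proportional to sqrt(A_i). The cost coefficients below are assumed values.

```python
import math

# Sketch of cost-based tolerance allocation under the reciprocal cost
# model C_i(t) = A_i / t. Minimising sum(A_i / t_i) subject to
# sum(t_i) = T yields t_i = T * sqrt(A_i) / sum_j(sqrt(A_j)).

def allocate_tolerances(A, T):
    """A: per-dimension cost coefficients; T: total assembly tolerance."""
    roots = [math.sqrt(a) for a in A]
    s = sum(roots)
    return [T * r / s for r in roots]

A = [4.0, 1.0, 1.0]   # illustrative machining-cost coefficients (assumed)
T = 0.40              # total worst-case stack-up tolerance, mm (assumed)
print(allocate_tolerances(A, T))
```

The dimension that is costliest to tighten (largest A_i) receives the largest share of the tolerance budget, which is the intuition behind most cost-minimising allocation strategies.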

    Determination of the Joint Confidence Region of Optimal Operating Conditions in Robust Design by Bootstrap Technique

    Robust design has been widely recognized as a leading method for reducing variability and improving quality. Most of the engineering statistics literature focuses on finding "point estimates" of the optimum operating conditions for robust design, and various procedures for calculating such point estimates are considered. Although point estimation is important for continuous quality improvement, the immediate question is "how accurate are these optimum operating conditions?" The answer is to consider interval estimation for a single variable or joint confidence regions for multiple variables. In this paper, with the help of the bootstrap technique, we develop procedures for obtaining joint "confidence regions" for the optimum operating conditions. Two different procedures, using the Bonferroni and multivariate normal approximations, are introduced. The proposed methods are illustrated and substantiated using a numerical example.
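A percentile-bootstrap version of the Bonferroni approach can be sketched as follows: resample the data with replacement, re-estimate the operating conditions on each resample, and take per-coordinate percentile intervals at level alpha/k so the box covers all k coordinates jointly at level 1 - alpha. The data and the estimator here are illustrative assumptions, not the paper's numerical example.

```python
import random
import statistics

# Sketch: percentile-bootstrap Bonferroni box for the optimum operating
# conditions. The estimator and data below are illustrative assumptions.

def bootstrap_bonferroni_region(data, estimator, n_boot=2000, alpha=0.05, seed=0):
    """Joint (1 - alpha) region as per-coordinate Bonferroni percentile intervals."""
    rng = random.Random(seed)
    n = len(data)
    boots = []
    for _ in range(n_boot):
        sample = [data[rng.randrange(n)] for _ in range(n)]
        boots.append(estimator(sample))
    k = len(boots[0])          # number of operating conditions
    a = alpha / k              # Bonferroni correction per coordinate
    region = []
    for j in range(k):
        coord = sorted(b[j] for b in boots)
        lo = coord[int((a / 2) * n_boot)]
        hi = coord[int((1 - a / 2) * n_boot) - 1]
        region.append((lo, hi))
    return region

# Illustrative bivariate data: two operating conditions per observation
data = [(x, 2 * x + 0.5) for x in [0.9, 1.1, 1.0, 1.2, 0.8, 1.05, 0.95, 1.15]]
est = lambda s: (statistics.mean(p[0] for p in s), statistics.mean(p[1] for p in s))
print(bootstrap_bonferroni_region(data, est))
```

The Bonferroni box is conservative (it over-covers when coordinates are correlated); the paper's multivariate-normal alternative trades that conservatism for a distributional approximation.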

    Cost Estimation Method for Variation Management


    AI and OR in management of operations: history and trends

    The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems that are increasing in complexity and scale. This paper begins by setting the context for the survey through a historical perspective of OR and AI. An extensive survey of applications of AI techniques for operations management, covering over 1,200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as its source; hence it may not cover all relevant journals, but it includes a sufficiently wide range of publications to be representative of research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control, and (d) quality, maintenance and fault diagnosis. Each of the four areas is further categorized in terms of the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic and hybrid techniques. The trends over the last decade are identified and discussed with respect to expectations, and directions for future work are suggested.

    Performance Modeling and Resource Management for Mapreduce Applications

    Big Data analytics is increasingly performed using the MapReduce paradigm and its open-source implementation Hadoop as a platform choice. Many applications associated with live business intelligence are written as complex data analysis programs defined by directed acyclic graphs of MapReduce jobs. An increasing number of these applications have additional requirements for completion time guarantees. The advent of cloud computing brings a competitive alternative solution for data analytics problems, while it also introduces new challenges in provisioning clusters that provide the best cost-performance trade-offs. In this dissertation, we aim to develop a performance evaluation framework that enables automatic resource management for MapReduce applications in achieving different optimization goals. It consists of the following components: (1) a performance modeling framework that estimates the completion time of a given MapReduce application when executed on a Hadoop cluster, according to its input data sets, the job settings, and the amount of resources allocated for processing it; (2) a resource allocation strategy for deadline-driven MapReduce applications that automatically tailors and controls the resource allocation on a shared Hadoop cluster to different applications to achieve their (soft) deadlines; (3) a simulator-based solution to the resource provisioning problem in the public cloud environment that guides users in determining the types and amount of resources that they should lease from the service provider for achieving different goals; (4) an optimization strategy to automatically determine the optimal job settings within a MapReduce application for efficient execution and resource usage. We validate the accuracy, efficiency, and performance benefits of the proposed framework using a set of realistic MapReduce applications in both private cluster and public cloud environments.
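A completion-time model of the kind component (1) describes can be sketched with the classic greedy-scheduling makespan bounds: n tasks of mean duration avg on s slots finish no sooner than n*avg/s and no later than (n-1)*avg/s + max. This is a generic textbook bound, not necessarily the dissertation's exact model, and the task counts and durations below are assumed values.

```python
# Sketch of a bounds-based MapReduce completion-time estimate using the
# classic greedy-scheduling makespan result. Numbers are illustrative.

def phase_bounds(n_tasks, n_slots, avg_task, max_task):
    """Lower/upper bounds on the makespan of one phase (map or reduce)
    when n_tasks of mean duration avg_task run on n_slots slots."""
    lower = n_tasks * avg_task / n_slots
    upper = (n_tasks - 1) * avg_task / n_slots + max_task
    return lower, upper

def job_completion_estimate(map_spec, reduce_spec):
    """Estimate completion time as the midpoint of the summed phase bounds."""
    m_lo, m_up = phase_bounds(*map_spec)
    r_lo, r_up = phase_bounds(*reduce_spec)
    return ((m_lo + r_lo) + (m_up + r_up)) / 2

# 200 map tasks on 20 map slots; 40 reduce tasks on 10 reduce slots (assumed)
print(job_completion_estimate((200, 20, 12.0, 30.0), (40, 10, 25.0, 60.0)))
```

Inverting such a model (solve for the slot counts that meet a target completion time) is what turns it into the deadline-driven resource allocation strategy of component (2).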