
    Comparative Analysis of Requirements Change Prediction Models: Manual, Linguistic, and Neural Network

    Requirement change propagation, if not managed, may lead to monetary losses or project failure. The a posteriori tracking of requirement dependencies is a well-established practice in project and change management. The identification of these dependencies often requires manual input by one or more individuals with intimate knowledge of the project. Moreover, definitions of the dependencies that help predict requirement change are not currently found in the literature. This paper presents two industry case studies of predicting system requirement change propagation through three approaches: manual, linguistic, and bag-of-words. Dependencies between requirements are developed manually and automatically from textual data and processed computationally to build surrogate models that predict change. Two types of relationship generation, manual keyword selection and part-of-speech tagging, are compared. Artificial neural networks are used to create the surrogate models. These approaches are evaluated on three connectedness metrics: shortest path, path count, and maximum flow rate. The results are given in terms of the search depth needed within a requirements document to identify the subsequent changes. The semi-automated approach yielded the most accurate results, requiring a search depth of 11%, but sacrifices automation. The fully automated approach predicts requirement change within a search depth of 15% and offers the benefit of requiring minimal human input.
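
    As a rough illustration of the three connectedness metrics named above (shortest path, path count, and maximum flow rate), the sketch below computes them on a toy requirements dependency graph with networkx; the requirement names, edges, and capacities are invented for the example and are not taken from the paper's case studies.

```python
# Illustrative only: a toy requirements dependency graph; nodes, edges and
# capacities are invented and do not come from the paper's case studies.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from(
    [("R1", "R2", 1.0), ("R2", "R3", 2.0), ("R1", "R4", 1.0),
     ("R4", "R3", 1.0), ("R2", "R4", 0.5)],
    weight="capacity",
)

source, target = "R1", "R3"

# Shortest path: fewest dependency hops between the changed and candidate requirement.
shortest = nx.shortest_path_length(G, source, target)

# Path count: number of distinct simple paths linking the two requirements.
path_count = sum(1 for _ in nx.all_simple_paths(G, source, target))

# Maximum flow rate: total dependency "capacity" connecting them.
max_flow, _ = nx.maximum_flow(G, source, target, capacity="capacity")

print(shortest, path_count, max_flow)
```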

    Intermodal Transfer Coordination in Logistic Networks

    Increasing awareness that globalization and information technology affect the patterns of transport and logistic activities has increased interest in the integration of intermodal transport resources. Integrating multiple transport schedules offers significant advantages, such as: (1) eliminating direct routes connecting all origin-destination pairs and concentrating cargo on major routes; (2) improving the utilization of existing transportation infrastructure; (3) reducing the requirements for warehouses and storage areas caused by poor connections; and (4) reducing other impacts, including traffic congestion, fuel consumption and emissions. This dissertation examines a series of optimization problems for transfer coordination in intermodal and intra-modal logistic networks. The first optimization model is developed for coordinating vehicle schedules and cargo transfers at freight terminals in order to improve system operational efficiency. A mixed integer nonlinear programming (MINLP) problem for the studied multi-mode, multi-hub, and multi-commodity network is formulated and solved using sequential quadratic programming (SQP), genetic algorithms (GA) and a hybrid GA-SQP heuristic algorithm. This is done primarily by optimizing service frequencies and slack times for system coordination, while also considering loading and unloading, storage and cargo processing operations at the transfer terminals. Through a series of case studies, the model has shown its ability to optimize service frequencies (or headways) and slack times based on given input information. The second model is developed for countering schedule disruptions within intermodal freight systems operating in time-dependent, stochastic and dynamic environments. When routine disruptions occur (e.g., traffic congestion, vehicle failures or demand fluctuations) in pre-planned intermodal timed-transfer systems, the proposed dispatching control method determines, through an optimization process, whether each ready outbound vehicle should be dispatched immediately or held for late incoming vehicles with connecting freight. An additional sub-model is developed to deal with the freight left over due to missed transfers. During the disruption response, alleviation and management phases, the proposed real-time control model may also consider the propagation of delays to further downstream terminals. For attenuating delay propagation, an integrated dispatching control model and an analysis of sensitivity to slack times are presented.
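
    A minimal sketch of the hold-or-dispatch trade-off that the dispatching control method resolves: the ready outbound vehicle is held beyond its slack time only while the expected holding cost of the cargo already on board stays below the expected cost of missed connections. The cost model, parameters, and decision rule below are illustrative assumptions, not the dissertation's optimization model.

```python
# Illustrative hold-or-dispatch rule for a timed transfer; the cost model and
# parameters below are assumptions for the sketch, not the dissertation's model.
from dataclasses import dataclass

@dataclass
class Connection:
    expected_delay: float      # predicted lateness of the inbound vehicle (min)
    transfer_load: float       # cargo units waiting to transfer
    miss_penalty: float        # cost per unit if the connection is missed

def hold_or_dispatch(onboard_load: float,
                     delay_cost_rate: float,
                     slack: float,
                     connections: list[Connection]) -> str:
    """Return 'hold' if waiting beyond the built-in slack time is cheaper than
    re-handling the freight that would miss its connection."""
    # Extra holding time beyond the schedule's built-in slack.
    extra_wait = max(max(c.expected_delay for c in connections) - slack, 0.0)
    holding_cost = extra_wait * delay_cost_rate * onboard_load
    missed_cost = sum(c.transfer_load * c.miss_penalty
                      for c in connections if c.expected_delay > slack)
    return "hold" if holding_cost < missed_cost else "dispatch"

print(hold_or_dispatch(onboard_load=40, delay_cost_rate=0.2, slack=5,
                       connections=[Connection(12, 15, 3.0)]))
```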

    Enhancement of an MBSE-supported methodology for managing engineering changes using the example of a machine tool

    Engineering changes are often classified as critical and lead to high costs, the underlying reason being high system complexity. Model-based systems engineering (MBSE) is one approach to dealing with this complexity. However, suitable approaches are required to operate model-based engineering change management. The Advanced Engineering Change Impact Approach (AECIA) presents a holistic methodology for model-based change management, supporting change request validity checking, change propagation and change impact analysis, and change information communication in an agile development environment. In this publication, the methodology is extended to include a procedure for checking the validity of change requests, applied to a real change case using a machine tool as an example, and initially evaluated.
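
    To make the change propagation and impact analysis step concrete, here is a minimal, hypothetical sketch: a depth-limited traversal over a system model graph that collects the elements potentially affected by a change request. The element names and dependency links are invented and do not reproduce the AECIA model.

```python
# Hypothetical sketch of change impact collection on a system model graph;
# element names and dependency links are invented, not taken from AECIA.
from collections import deque

# element -> elements that depend on it (a change may propagate along these links)
model_links = {
    "spindle": ["spindle_drive", "tool_holder"],
    "spindle_drive": ["control_software"],
    "tool_holder": [],
    "control_software": [],
}

def impacted_elements(changed: str, max_depth: int = 2) -> set[str]:
    """Breadth-first collection of elements reachable from the changed one,
    limited to max_depth propagation steps."""
    impacted, frontier = set(), deque([(changed, 0)])
    while frontier:
        element, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for dependent in model_links.get(element, []):
            if dependent not in impacted:
                impacted.add(dependent)
                frontier.append((dependent, depth + 1))
    return impacted

print(impacted_elements("spindle"))  # {'spindle_drive', 'tool_holder', 'control_software'}
```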

    Fast and accurate front propagation for simulation of geological folds

    Front propagations described by static Hamilton-Jacobi equations can be used to simulate folded geological structures. Simulations of geological folds are a key ingredient in the Compound Earth Simulator (CES), an industrial software tool used in the exploration of oil and gas. In this thesis, local approximation techniques are investigated with respect to accuracy and efficiency. Several novel algorithms are also introduced, some of which are accelerated by parallel implementations on both multicore CPUs and Graphics Processing Units. These algorithms are able to simulate folds in a fraction of the time needed by the CES industry code, while retaining the same level of accuracy. Complicated tasks that previously took several minutes to compute can now be performed in a few seconds, significantly improving the CES user experience.
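
    For readers unfamiliar with this class of solvers, the sketch below shows the kind of local update such methods build on: a first-order Godunov fast-sweeping scheme for the isotropic eikonal equation |grad T| = 1/F on a uniform grid. It is a generic textbook scheme, shown only as background, not one of the thesis' accelerated algorithms or the CES code.

```python
# Generic first-order fast-sweeping solver for |grad T| = 1/F on a uniform grid;
# a textbook scheme shown for illustration, not the thesis' accelerated algorithms.
import numpy as np

def fast_sweep(speed: np.ndarray, sources, h: float = 1.0, sweeps: int = 4) -> np.ndarray:
    """Arrival times T from the source cells, first-order Godunov update."""
    T = np.full(speed.shape, np.inf)
    for i, j in sources:
        T[i, j] = 0.0
    ny, nx = speed.shape
    orders = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)), (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(sweeps):
        for ys, xs in orders:
            for i in ys:
                for j in xs:
                    a = min(T[max(i - 1, 0), j], T[min(i + 1, ny - 1), j])
                    b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, nx - 1)])
                    if a > b:
                        a, b = b, a                 # ensure a <= b
                    if np.isinf(a):
                        continue                    # no informed neighbor yet
                    f = h / speed[i, j]
                    if b - a >= f:                  # front arrives from one side only
                        t_new = a + f
                    else:                           # two-sided quadratic update
                        t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

T = fast_sweep(np.ones((50, 50)), sources=[(0, 0)])
print(T[-1, -1])  # roughly the Euclidean distance from the corner source
```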

    Solar Sail Trajectory Design In The Earth-Moon Circular Restricted Three Body Problem

    The quest to explore the Moon has helped resolve scientific questions, has spurred leaps in technology development, and has revealed Earth's celestial companion to be a gateway to other destinations. With a renewed focus on returning to the Moon in this decade, alternatives to chemical propulsion systems are becoming attractive methods to efficiently use scarce resources and support extended mission durations. Thus, an investigation is conducted to develop a general framework that facilitates propellant-free Earth-Moon transfers by exploiting sail dynamics in combination with advantageous transfer options offered in the Earth-Moon circular restricted multi-body dynamical model. Both periodic orbits in the vicinity of the Earth-Moon libration points and lunar-centric long-term capture orbits are incorporated as target destinations to demonstrate the applicability of the general framework to varied design scenarios, each incorporating a variety of complexities and challenges. The transfers are composed of three phases: a spiral Earth escape, a transit period, and, finally, the capture into a desirable orbit in the vicinity of the Moon. The Earth-escape phase consists of spiral trajectories constructed using three different sail steering strategies: locally optimal, on/off and velocity tangent. In the case of the Earth-libration point transfers, naturally occurring flow structures (e.g., invariant manifolds) arising from the mutual gravitational interaction of the Earth and Moon are exploited to link an Earth departure spiral with a destination orbit. In contrast, sail steering alone is employed to establish a link between the Earth-escape phase and capture orbits about the Moon, due to a lack of applicable natural structures for the required connection. Metrics associated with the transfers, including flight time and the influence of operational constraints such as occultation events, are investigated to determine the available capabilities for Earth-Moon transfers given current sail technology levels. Although the implemented steering laws suffice to generate baseline paths, infeasible turn-rate demands placed on the sail are also investigated to explore the technical hurdles in designing Earth-Moon transfers. The methodologies are suitable for a variety of mission scenarios and sail configurations, rendering the resulting trajectories valuable for a diverse range of applications.
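
    As background for the dynamical model mentioned above, here is a minimal sketch of the Earth-Moon circular restricted three-body equations of motion in the rotating frame with an ideal-sail acceleration term added. The characteristic acceleration, the fixed sun-line direction, and the sail-normal choice are simplifying assumptions for illustration, not the steering laws developed in the thesis.

```python
# Sketch of CR3BP dynamics with an ideal solar-sail term; the characteristic
# acceleration, fixed sun-line, and sail-normal choice are illustrative
# assumptions, not the steering laws studied in the thesis.
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.01215          # Earth-Moon mass ratio (approximate)
A_C = 0.07            # nondimensional characteristic sail acceleration (assumed)
S_HAT = np.array([1.0, 0.0, 0.0])   # assumed fixed sunlight direction in the rotating frame

def crtbp_sail_rhs(t, state, n_hat):
    """Time derivative of [x, y, z, vx, vy, vz] in the rotating frame."""
    x, y, z, vx, vy, vz = state
    r1 = np.array([x + MU, y, z])          # position relative to Earth
    r2 = np.array([x - 1 + MU, y, z])      # position relative to Moon
    d1, d2 = np.linalg.norm(r1), np.linalg.norm(r2)
    grav = -(1 - MU) * r1 / d1**3 - MU * r2 / d2**3
    # Ideal flat sail: acceleration along the normal, scaled by cos^2 of the cone angle.
    cone = max(np.dot(n_hat, S_HAT), 0.0)  # no thrust if the sail faces away from the Sun
    a_sail = A_C * cone**2 * np.asarray(n_hat)
    ax = 2 * vy + x + grav[0] + a_sail[0]
    ay = -2 * vx + y + grav[1] + a_sail[1]
    az = grav[2] + a_sail[2]
    return [vx, vy, vz, ax, ay, az]

# Example: propagate a short arc with the sail normal held along the sun-line.
sol = solve_ivp(crtbp_sail_rhs, (0.0, 1.0),
                [0.85, 0.0, 0.0, 0.0, 0.2, 0.0], args=(S_HAT,), rtol=1e-9)
print(sol.y[:3, -1])
```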

    Concepts of change propagation analysis in engineering design

    Interest in change propagation analysis for engineering design has increased rapidly since the topic gained prominence in the late 1990s. Although there are now many approaches and models, there is a smaller number of underlying key concepts. This article contributes a literature review and an organising framework that summarises and relates these key concepts. Approaches that have been taken to address each key concept are collected and discussed. A visual analysis of the literature is presented to uncover trends and gaps. The article thereby provides a thematic analysis of the state of the art in design change propagation analysis and highlights opportunities for further work.

    Modelling and analysing propagation risks in complex projects: application to the development of new vehicles

    The management of complex projects requires orchestrating the cooperation of hundreds of individuals from various companies, professions and backgrounds, working on thousands of activities, deliverables, and risks. Moreover, these numerous project elements are increasingly interconnected, and no decision or action is independent. This growing complexity is one of the greatest challenges of project management and one of the causes of project failure in terms of cost overruns and time delays. For instance, in the automotive industry, increasing market orientation and the growing complexity of automotive products have changed the management structure of vehicle development projects from a hierarchical to a networked structure, including the manufacturer but also numerous suppliers. Dependencies between project elements increase risks, since problems in one element may propagate to other directly or indirectly dependent elements. Complexity generates a number of phenomena, positive or negative, isolated or in chains, local or global, that interfere to varying degrees with the convergence of the project towards its goals. The aim of the thesis is thus to reduce the risks associated with the complexity of vehicle development projects by increasing the understanding of this complexity and the coordination of project actors. To do so, a first research question is how to prioritize actions to mitigate complexity-related risks. A second research question is how to organize and coordinate actors in order to cope efficiently with the previously identified complexity-related phenomena. The first question is addressed by modeling project complexity and by analyzing complexity-related phenomena within the project at two levels. First, a high-level, factor-based descriptive model is proposed; it makes it possible to measure and prioritize the project areas where complexity may have the most impact. Second, a low-level, graph-based model is proposed, based on a finer modeling of project elements and their interdependencies. Contributions are made to the complete modeling process, including the automation of some data-gathering steps, in order to increase performance and decrease effort and the risk of error. The two models can be used in sequence: a first high-level measure can focus attention on certain areas of the project, where the low-level model is then applied, with a gain in overall efficiency and impact. Based on these models, contributions are made to anticipate the potential behavior of the project. Topological and propagation analyses are proposed to detect and prioritize critical elements and critical interdependencies, while broadening the meaning of the polysemous word "critical." The second research question is addressed by introducing a clustering methodology that proposes groups of actors in new product development projects, especially for actors involved in many deliverable-related interdependencies across different phases of the project life cycle. This increases coordination between interdependent actors who are not always formally connected via the hierarchical structure of the project organization, and brings the project organization closer to a true networked structure. The automotive industrial application has shown promising results for the contributions to both research questions. Finally, the proposed methodology is discussed in terms of genericity and appears applicable to a wide range of complex projects for decision support.
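
    As a toy illustration of the clustering idea described above, the sketch below groups actors that share deliverable interdependencies, using a generic community-detection routine from networkx on an actor-actor graph weighted by shared deliverables. The actor names, deliverable links, and choice of algorithm are assumptions for the example, not the thesis' clustering methodology.

```python
# Toy illustration of grouping actors that share deliverable interdependencies;
# the data and the choice of community-detection algorithm are assumptions, not
# the thesis' clustering methodology.
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# actor -> deliverables the actor contributes to (invented example data)
involvement = {
    "supplier_A": {"D1", "D2"},
    "supplier_B": {"D2", "D3"},
    "oem_chassis": {"D1", "D2", "D4"},
    "oem_software": {"D3", "D5"},
    "supplier_C": {"D5"},
}

# Build an actor-actor graph weighted by the number of shared deliverables.
G = nx.Graph()
for a, b in itertools.combinations(involvement, 2):
    shared = len(involvement[a] & involvement[b])
    if shared:
        G.add_edge(a, b, weight=shared)

# Propose groups of strongly interdependent actors.
clusters = greedy_modularity_communities(G, weight="weight")
for group in clusters:
    print(sorted(group))
```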

    Distributed timing analysis

    As design complexities continue to grow, the need to efficiently analyze circuit timing with billions of transistors across multiple modes and corners is quickly becoming the major bottleneck in the overall chip design closure process. To alleviate the long runtimes, recent trends are driving the need for distributed timing analysis (DTA) in electronic design automation (EDA) tools. However, DTA has received little research attention so far and remains a critical problem. In this thesis, we introduce several methods to approach DTA problems. We present a near-optimal algorithm to speed up path-based timing analysis in Chapter 1. Path-based timing analysis is a key step in the overall timing flow to reduce unwanted pessimism, for example, common path pessimism removal (CPPR). In Chapter 2, we introduce a MapReduce-based distributed path-based timing analysis framework that can scale up to hundreds of machines. In Chapter 3, we introduce our standalone timer, OpenTimer, an open-source high-performance timing analysis tool for very large scale integration (VLSI) systems. OpenTimer efficiently supports (1) both block-based and path-based timing propagation, (2) CPPR, and (3) incremental timing. OpenTimer works on industry formats (e.g., .v, .spef, .lib, .sdc) and is designed to be parallel and portable. To further facilitate integration between timing and timing-driven optimizations, OpenTimer provides a user-friendly application programming interface (API) for incremental analysis. Experimental results on industry benchmarks released from the TAU 2015 timing analysis contest have demonstrated remarkable results achieved by OpenTimer, especially its order-of-magnitude speedup over existing timers. In Chapter 4, we present a DTA framework built on top of our standalone timer OpenTimer. We investigated existing cluster computing frameworks from the big data community and demonstrated that DTA is a difficult fit in terms of computation patterns and performance concerns. Our specialized DTA framework supports (1) general design partitions (logical, physical, hierarchical, etc.) stored in a distributed file system, (2) non-blocking IO with event-driven programming for effective communication and computation overlap, and (3) an efficient messaging interface between the application and network layers. The effectiveness and scalability of our framework have been evaluated on large hierarchical industry designs over a cluster with hundreds of machines. In Chapter 5, we present our system DtCraft, a distributed execution engine for compute-intensive applications. Motivated by our DTA framework, DtCraft introduces a high-level programming model that lets users without detailed experience of distributed computing utilize cluster resources. The major goal is to simplify the coding effort of building distributed applications on top of our system. In contrast to existing data-parallel cluster computing frameworks, DtCraft targets high-performance or compute-intensive applications including simulations, modeling, and most EDA applications. Users describe a program in terms of a sequential stream graph associated with computation units and data streams. The DtCraft runtime transparently deals with concurrency controls including work distribution, process communication, and fault tolerance. We have evaluated DtCraft on both micro-benchmarks and large-scale simulation and optimization problems, and showed promising performance from single multi-core machines to clusters of computers.
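
    To illustrate the block-based propagation that such timers perform, here is a minimal sketch of forward arrival-time propagation in topological order over a combinational timing graph. The netlist, delays, and single max-delay view are illustrative simplifications, not OpenTimer's data structures or algorithms.

```python
# Minimal sketch of block-based (worst-case) arrival-time propagation on a DAG;
# the netlist, delays, and single max-delay view are illustrative simplifications,
# not OpenTimer's data structures or algorithms.
import graphlib  # Python 3.9+: topological ordering

# edge list: (from_pin, to_pin, delay); invented example circuit
edges = [("in1", "u1", 1.0), ("in2", "u1", 1.2), ("u1", "u2", 0.8),
         ("in2", "u2", 0.5), ("u2", "out", 0.3)]
arrival = {"in1": 0.0, "in2": 0.2}      # arrival times at primary inputs

fanin = {}
sorter = graphlib.TopologicalSorter()
for src, dst, delay in edges:
    fanin.setdefault(dst, []).append((src, delay))
    sorter.add(dst, src)                 # dst depends on src

for pin in sorter.static_order():
    if pin in fanin:                     # worst (latest) arrival over all fan-in edges
        arrival[pin] = max(arrival[src] + d for src, d in fanin[pin])

print(arrival["out"])                    # latest arrival time at the output
```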