
    Continuous and discontinuous modelling of ductile fracture

    In many metal forming processes (e.g. blanking, trimming, clipping, machining, cutting), fracture is deliberately triggered in order to induce material separation along a desired product geometry. This type of fracture is preceded by a certain amount of plastic deformation and requires additional energy to be supplied for the crack to propagate. It is known as ductile fracture, as opposed to brittle fracture (as in ceramics, concrete, etc.). Ductile fracture originates at a microscopic level, as the result of voids initiated at inclusions in the material matrix. These microscopic degradation processes lead to the degradation of the macroscopic mechanical properties, causing softening, strain localisation and finally the formation of macroscopic cracks. The initiation and propagation of cracks have traditionally been studied by fracture mechanics. Yet the application of this theory to ductile fracture, where highly nonlinear degradation processes (material and geometrical) take place in the fracture process zone, raises many questions. To model these processes, continuum models can be used, either in the form of softening plasticity or continuum damage mechanics. However, continuous models cannot describe crack propagation, because displacements are no longer continuous across the crack. Hence, a proper model for ductile fracture requires a different approach, one that combines a continuous softening model with a strategy to represent cracks, i.e. displacement discontinuities. This has been the main goal of the present work. In a combined approach, the direction of crack propagation is automatically determined by the localisation pattern, and its rate depends strongly on the evolution of damage (or of the other internal variables responsible for the strain softening). This contrasts with fracture mechanics, where the material behaviour is not directly linked to the crack propagation criteria.
Softening materials have to be supplied with an internal length, which acts as a localisation limiter, thereby ensuring the well-posedness of the governing partial differential equations and mesh-independent results. For this purpose, a nonlocal gradient enhancement has been used in this work, which gives results similar to those of nonlocal models of an integral form. A number of numerical methods are available to model displacement discontinuities in a continuum. In the present context, we have used a remeshing strategy, since it has additional advantages when used with large-strain localising material models: it prevents excessive element distortions and allows the element distribution to be optimised through mesh adaptivity. As a first step towards a continuum-discontinuum approach, an uncoupled damage model is used, in which damage merely acts as a crack initiation-propagation indicator, without causing material softening. Since uncoupled models do not lead to material localisation, no regularisation is needed. Yet uncoupled approaches cannot capture the actual failure mechanisms and therefore, in general, give reliable results only when the size of the fracture process zone is so small that its effect can be neglected. When the size of the fracture process zone is large enough, a truly combined model must be used, which is developed in the second part of this study. Due to softening, the transition from the continuous damage material to the discrete crack occurs gradually, with little stress redistribution, in contrast with the previous uncoupled approach. The gradient-regularised softening behaviour is introduced in the yield behaviour of an elastoplastic material. The combined model has been applied satisfactorily to the prediction of ductile failure under shear loading conditions.
Third, to be able to apply the model to more general loading conditions, the material description has been improved by introducing the influence of stress triaxiality in the damage evolution of a gradient-regularised elastoplastic damage model. The model has been obtained using the continuum damage mechanics concept of effective stress. Results show how compressive (tensile) states of triaxiality may increase (decrease) the material ductility. Finally, the combined approach is applied to the modelling of actual metal forming processes, e.g. blanking, fine blanking and score forming. The gradient regularisation has been implemented in an operator-split manner, which can be very appealing for engineering purposes. To capture the large strain gradients in the localisation zones, a new mesh adaptivity criterion has been proposed. The results of the simulations are in good agreement with experimental data from the literature.
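The nonlocal gradient enhancement and the effective-stress concept invoked above are commonly written as follows. This is a sketch of the standard formulations from the continuum damage literature, not reproduced from the thesis itself; the symbols (local and nonlocal equivalent strains, gradient parameter c, damage variable omega) are assumptions:

```latex
% Implicit gradient enhancement: the nonlocal equivalent strain
% \bar{\varepsilon}_{eq} is obtained from the local one via a
% Helmholtz-type equation; c (dimension length squared) carries
% the internal length that limits localisation:
\bar{\varepsilon}_{eq} - c\,\nabla^{2}\bar{\varepsilon}_{eq}
  = \varepsilon_{eq} \quad \text{in } \Omega,
\qquad
\nabla\bar{\varepsilon}_{eq}\cdot\vec{n} = 0 \quad \text{on } \partial\Omega

% Effective stress in continuum damage mechanics: the damage
% variable \omega \in [0,1] reduces the load-carrying section,
% so the stress acting on the intact material is amplified:
\tilde{\sigma} = \frac{\sigma}{1-\omega}
```

Solving the Helmholtz equation alongside the equilibrium equations is what makes the boundary value problem well-posed after the onset of softening; the operator-split implementation mentioned in the abstract decouples the two solves within each increment.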

    Applying lean thinking to risk management in product development

    This paper re-conceptualizes risk management (RM) in product development (PD) through a lean thinking perspective. Arguably, risk management in PD projects became a victim of its own success. It is often implemented as a highly formalized, compliance-driven activity, ending up disconnected from the actual value creation of the engineering task. Cost overrun, delay and low-quality decision making are common in product development processes even when RM processes are in place. Product development is about reaching project objectives by gradually reducing uncertainty, but often fails to do so without delay or cost overrun. This paper explores the relationship between product development and risk management and proposes to make RM an integrated, value-adding part of PD. Through a literature review we identify the potential of re-conceptualizing RM through lean thinking. We then outline how one could apply lean thinking to RM to create a simple, value-focused and consensus-forming perspective on how to make RM a meaningful part of PD.

    Pair programming and the re-appropriation of individual tools for collaborative software development

    Although pair programming is becoming more prevalent in software development, and a number of reports have been written about it [10] [13], few have addressed the manner in which pairing actually takes place [12]. Even fewer consider the methods used to manage issues such as role change or the communication of complex issues. This paper highlights the way resources designed for individuals are re-appropriated and augmented by pair programmers to facilitate collaboration. It also illustrates that pair verbalisations can augment the benefits of the collocated team, providing examples from ethnographic studies of pair programmers 'in the wild'.

    Shared-Use Bus Priority Lanes On City Streets: Case Studies in Design and Management, MTI Report 11-10

    This report examines the policies and strategies governing the design and, especially, the operations of bus lanes in major congested urban centers. It focuses on bus lanes that operate in mixed traffic conditions; the study does not examine practices concerning bus priority lanes on urban highways or freeways. The key questions addressed in the paper are: How do the many public agencies within a city region that share authority over different aspects of the bus lanes coordinate their work in designing, operating, and enforcing the lanes? What is the physical design of the lanes? What is the scope of the priority use granted to buses? When is bus priority in effect, and what other users may share the lanes during these times? How are the lanes enforced? To answer these questions, the study developed detailed cases on the bus lane development and management strategies in seven cities that currently have shared-use bus priority lanes: Los Angeles, London, New York City, Paris, San Francisco, Seoul, and Sydney. Through the case studies, the paper examines the range of practices in use, thus providing planners and decision makers with an awareness of the wide variety of design and operational options available to them. In addition, where the research findings make this possible, the report highlights innovative practices that contribute to bus lanes’ success, such as mechanisms for integrating or jointly managing bus lane planning and operations across agencies.

    ETL for data science?: A case study

    Big data has driven data science development and research over the last years. However, there is a problem: most data science projects don't make it to production. This can happen because many data scientists don't use a reference data science methodology. Another aggravating element is the data itself, its quality and its processing. The problem can be mitigated through research, progress and the documentation of case studies on the topic, fostering knowledge dissemination and reuse. In particular, data mining can benefit from the knowledge of other mature fields that explore similar matters, like data warehousing. To address the problem, this dissertation performs a case study of the project “IA-SI - Artificial Intelligence in Incentives Management”, which aims to improve the management of European grant funds through data mining. The key contributions of this study, to academia and to the project's development and success, are: (1) a combined process model of the most used data mining process models and their tasks, extended with the ETL subsystems and other selected data warehousing best practices; (2) the application of this combined process model to the project and all its documentation; (3) a contribution to the project's prototype implementation, regarding the data understanding and data preparation tasks. This study concludes that CRISP-DM is still a reference, as it includes all the other data mining process models' tasks and detailed descriptions, and that its combination with data warehousing best practices is useful to the project IA-SI and potentially to other data mining projects.

    Constitutive relation error estimators for (visco) plastic finite element analysis with softening

    A posteriori error estimators based on constitutive relation residuals have been developed for 20 years, in particular at Cachan. This approach has a strong physical meaning and is quite general. Here, we introduce an extended family of constitutive relation error estimators able to measure the quality of finite element computations of structures which exhibit plastic or viscoplastic behavior with softening. These measures take into account, over the studied time interval, all the classical error sources involved in the computation: the space discretization (the mesh), the time discretization and the iterative technique used to solve the nonlinear discrete problem.
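For background, the classical constitutive relation error in the linear elastic case can be sketched as follows; this is the standard Ladevèze-style measure, given here as an assumed illustration, not the extended (visco)plastic estimator the paper introduces:

```latex
% Given a kinematically admissible displacement field \hat{u} and a
% statically admissible stress field \hat{\sigma}, the constitutive
% relation error measures how far the pair is from satisfying the
% constitutive law \sigma = \mathbf{K}\,\varepsilon(u):
e_{\mathrm{CRE}}^{2} = \int_{\Omega}
  \bigl(\hat{\sigma} - \mathbf{K}\,\varepsilon(\hat{u})\bigr)
  : \mathbf{K}^{-1} :
  \bigl(\hat{\sigma} - \mathbf{K}\,\varepsilon(\hat{u})\bigr)
  \, d\Omega
% e_{\mathrm{CRE}} = 0 if and only if the admissible pair satisfies
% the constitutive relation exactly, i.e. it is the exact solution.
```

The estimators discussed in the abstract generalise this idea to time-dependent (visco)plastic laws with softening, accumulating the residual over the studied time interval.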