
    Use of Petri Nets to Manage Civil Engineering Infrastructures

    Over recent years there has been a shift in investment and effort within the construction sector of the most developed countries. On the one hand, over the last decades these countries have built infrastructure able to respond to current needs, reducing the need for investment in new infrastructure now and in the near future. On the other hand, much of this infrastructure shows clear signs of deterioration, making it essential to invest correctly in its recovery. The ageing of infrastructure, together with the scarce budgets available for maintenance and rehabilitation, is the main driver for the development of decision support tools as a means to maximize the impact of investments. The objective of the present work is to develop a methodology for optimizing maintenance strategies, considering the available information on infrastructure degradation and the impact of maintenance in terms of cost and loss of functionality, enabling the implementation of a management system applicable across different types of civil engineering infrastructure. The deterioration model is based on the concept of timed Petri nets. The maintenance model was built from the deterioration model and includes the inspection, maintenance and renewal processes. The optimization of maintenance is performed through genetic algorithms. The deterioration and maintenance model was applied to components of two types of infrastructure: bridges (pre-stressed concrete decks and bearings) and buildings (ceramic claddings). The complete management system was used to analyse a section of a road network. All examples are based on Portuguese data.
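    The timed Petri net idea behind such a deterioration model can be illustrated with a minimal sketch (Python, not the authors' implementation): places stand for condition states, timed transitions fire after stochastic sojourn times, and Monte Carlo runs estimate the condition distribution over the planning horizon. The state names and sojourn-time parameters below are assumptions for illustration only.

```python
"""Minimal sketch (not the thesis's code) of a timed Petri net deterioration
model: places are condition states, timed transitions fire after stochastic
sojourn times, and Monte Carlo runs estimate state probabilities."""
import random

# Hypothetical condition states (places) and mean sojourn times in years.
STATES = ["A", "B", "C", "D", "E"]          # A = as new ... E = severe deterioration
MEAN_SOJOURN = {"A": 8.0, "B": 10.0, "C": 12.0, "D": 15.0}  # exponential timing assumed

def simulate_condition(horizon_years: float) -> str:
    """Fire timed transitions until the horizon is reached; return the final state."""
    state, clock = "A", 0.0
    while state != "E":
        dwell = random.expovariate(1.0 / MEAN_SOJOURN[state])
        if clock + dwell > horizon_years:
            break
        clock += dwell
        state = STATES[STATES.index(state) + 1]   # token moves to the next place
    return state

# Estimate the condition-state distribution at year 50 from 10,000 runs.
runs = [simulate_condition(50.0) for _ in range(10_000)]
print({s: runs.count(s) / len(runs) for s in STATES})
```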

    Multi-product cost and value stream modelling in support of business process analysis

    To remain competitive, most Manufacturing Enterprises (MEs) need cost-effective and responsive business processes with the capability to realise multiple value streams specified by changes in customer needs. To achieve this, there is a need to provide reusable computational representations of organisational structures, processes, information, resources and related cost and value flows, especially in enterprises realizing multiple products. Current best-practice process mapping techniques do not suitably capture the attributes of MEs and their systems, and thus the dynamics associated with multi-product flows, which impact cost and value generation, cannot be effectively modelled and used as a basis for decision making. Therefore, this study has developed an integrated multi-product dynamic cost and value stream modelling technique with the embedded capability to capture aspects of the dynamics associated with multi-product realization in MEs. The technique builds on well-proven technologies in the domains of process mapping, enterprise modelling, system dynamics and discrete event simulation modelling. Its applicability was tested in four case study scenarios. The results generated by applying the modelling technique to key problems in the case study companies showed that the derived technique offers better solutions for designing, analysing, estimating costs and values, and improving the processes required for the realization of multiple products in MEs, when compared with current lean-based value stream mapping techniques. The developed technique also provides new modelling constructs that describe process entities, variables and business indicators in support of enterprise systems design and business process (re)engineering. In addition to these benefits, an enriched approach for translating qualitative causal loop models into quantitative simulation models for parametric analysis of the impact of dynamic entities on processes has been introduced. Further work related to this research will include the extension of the technique to capture relevant strategic and tactical processes for in-depth analysis and improvement. Further research on the application of the dynamic producer unit concept in the design of MEs will also be required.
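    As a rough illustration of the kind of per-product cost and value-added figures such a technique produces (a toy calculation, not the thesis's modelling constructs), the sketch below pushes two hypothetical product families through shared process steps and reports unit cost and value-added ratio; all routings, times and cost rates are invented.

```python
"""Illustrative multi-product cost and value stream calculation with
assumed data: each product family flows through shared process steps,
and per-unit cost plus value-added ratio follow from cycle times,
queue times and resource cost rates."""

# Hypothetical routings: (step, cycle time min/unit, queue time min, cost rate €/min)
ROUTINGS = {
    "product_A": [("cut", 2.0, 15.0, 0.8), ("weld", 5.0, 30.0, 1.2), ("paint", 3.0, 20.0, 0.9)],
    "product_B": [("cut", 1.5, 15.0, 0.8), ("assemble", 6.0, 45.0, 1.0)],
}

for product, steps in ROUTINGS.items():
    value_added = sum(cycle for _, cycle, _, _ in steps)           # time that adds value
    lead_time = sum(cycle + queue for _, cycle, queue, _ in steps) # total flow time
    unit_cost = sum(cycle * rate for _, cycle, _, rate in steps)   # processing cost
    print(f"{product}: cost €{unit_cost:.2f}/unit, "
          f"value-added ratio {value_added / lead_time:.1%}")
```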

    Proceedings of the 5th Baltic Mechatronics Symposium - Espoo April 17, 2020

    The Baltic Mechatronics Symposium is an annual symposium with the objective of providing a forum for young scientists from the Baltic countries to exchange knowledge, experience, results and information in a large variety of fields in mechatronics. The symposium was organized in co-operation with Taltech and Aalto University. Due to the COVID-19 coronavirus pandemic, the symposium was organized as a virtual conference. The contents of the proceedings:
    1. Monitoring Cleanliness of Public Transportation with Computer Vision
    2. Device for Bending and Cutting Coaxial Wires for Cryostat in Quantum Computing
    3. Inertial Measurement Method and Application for Bowling Performance Metrics
    4. Mechatronics Escape Room
    5. Hardware-In-the-Loop Test Setup for Tuning Semi-Active Hydraulic Suspension Systems
    6. Newtonian Telescope Design for Stand-off Laser Induced Breakdown Spectroscopy
    7. Simulation and Testing of Temperature Behavior in Flat Type Linear Motor Carrier
    8. Powder Removal Device for Metal Additive Manufacturing
    9. Self-Leveling Spreader Beam for Adjusting the Orientation of an Overhead Crane Load

    Aggregate assembly process planning for concurrent engineering

    In today's consumer and economic climate, manufacturers are finding it increasingly difficult to produce finished products with increased functionality whilst fulfilling the aesthetic requirements of the consumer. To remain competitive, manufacturers must always look for ways to meet the faster, better, and cheaper mantra of today's economy. The ability of any industry to mirror the ideal world, where the design, manufacturing, and assembly process of a product would be perfected before it is put into production, will undoubtedly save a great deal of time and money. This thesis introduces the concept of aggregate assembly process planning for the conceptual stages of design, with the aim of providing the methodology behind such an environment. The methodology is based on an aggregate product model and a connectivity model. Together, they encompass all the requirements needed to fully describe a product in terms of its assembly processes, providing a suitable means for generating assembly sequences. Two general-purpose heuristic methods, namely simulated annealing and genetic algorithms, are used for the optimisation of the generated assembly sequences and for loading the optimal assembly sequences onto workstations, producing an optimal assembly process plan for any given product. The main novelty of this work is in the mapping of these optimisation methods to the problems of assembly sequence generation and line balancing. This includes the formulation of the objective functions for optimising assembly sequences and resource loading. Also novel to this work is the derivation of standard part assembly methodologies, used to establish and estimate functional times for standard assembly operations. The method is demonstrated using CAPABLE Assembly, a suite of interlinked modules that generates a pool of optimised assembly process plans using the concepts above. A total of nine industrial products have been modelled, four of which are conceptual product models. The process plans generated to date have been tested on industrial assembly lines and in some cases yielded an increase in the production rate.
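    A minimal sketch of how simulated annealing can be mapped to assembly sequence generation is given below (illustrative only, not the CAPABLE Assembly modules): a candidate sequence is a permutation of parts, and the objective penalises violations of hypothetical precedence constraints.

```python
"""Illustrative simulated annealing over assembly sequences: swap two
operations, accept worse candidates with a temperature-dependent
probability, and keep the best feasible sequence found."""
import math, random

PARTS = ["base", "shaft", "gear", "cover", "screw"]
# Hypothetical precedence constraints: (must_come_before, part)
PRECEDENCE = [("base", "shaft"), ("shaft", "gear"), ("gear", "cover"), ("cover", "screw")]

def cost(seq):
    violations = sum(1 for a, b in PRECEDENCE if seq.index(a) > seq.index(b))
    return violations * 100 + len(seq)      # placeholder assembly-time term

def anneal(seq, temp=50.0, cooling=0.95, steps=2000):
    best = current = seq[:]
    for _ in range(steps):
        i, j = random.sample(range(len(seq)), 2)
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]   # swap two operations
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current[:]
        temp *= cooling
    return best

print(anneal(random.sample(PARTS, len(PARTS))))
```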

    Data and Process Mining Applications on a Multi-Cell Factory Automation Testbed

    This paper presents applications of both data mining and process mining in a factory automation testbed. It mainly concentrates on the Manufacturing Execution System (MES) level of the production hierarchy. Unexpected failures might lead to vast losses of investment or irrecoverable damage. Predictive maintenance techniques, both active and passive, have shown high potential for preventing such losses. Condition monitoring of target equipment against defined thresholds forms the basis of the prediction. However, the monitored parameters must be independent of environmental changes; for example, the vibration of transportation equipment such as conveyor systems varies with workload. This work aims to propose and demonstrate an approach for identifying incipient faults of transportation systems in discrete manufacturing settings. The method correlates the energy consumption of the devices with their workloads. At runtime, machine learning is used to classify the input energy data into two pattern descriptions. Consecutive mismatches between the output of the classifier and the workloads observed in real time indicate the possibility of incipient failure at the device level. Currently, as a result of the high interaction between information systems and operational processes, and due to the increase in the number of embedded heterogeneous resources, information systems generate massive amounts of unstructured events. Organizations struggle to deal with such large amounts of unstructured data. Process mining, a relatively new research area, has shown strong capabilities to overcome such problems. It applies both process modelling and data mining techniques to extract knowledge from data by discovering models from event logs. Although process mining is mostly recognised as a business-oriented technique complementary to Business Process Management (BPM) systems, in this paper its capabilities are exploited on a factory automation testbed. Multiple perspectives of process mining are applied to the event logs produced by deploying a Service Oriented Architecture through Web Services in a real multi-robot factory automation industrial testbed, originally used for the assembly of mobile phones.
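    The runtime part of the described approach can be sketched as follows (assumed data and thresholds, using scikit-learn rather than the testbed's actual tooling): a classifier learns to map energy features to workload class, and consecutive mismatches between the predicted and observed class raise an incipient-fault warning.

```python
"""Sketch of classifier-mismatch fault detection with invented data:
train on energy features labelled by workload class, then flag a device
when predictions and observed workload disagree repeatedly."""
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [mean power kW, peak power kW] -> workload class (0=low, 1=high)
X_train = [[1.1, 1.8], [1.0, 1.7], [2.4, 3.1], [2.6, 3.3], [1.2, 1.9], [2.5, 3.2]]
y_train = [0, 0, 1, 1, 0, 1]
clf = LogisticRegression().fit(X_train, y_train)

def monitor(stream, mismatch_limit=3):
    """stream yields (energy_features, observed_workload_class) at runtime."""
    consecutive = 0
    for features, workload in stream:
        predicted = clf.predict([features])[0]
        consecutive = consecutive + 1 if predicted != workload else 0
        if consecutive >= mismatch_limit:
            print("Warning: possible incipient fault on transport device")
            consecutive = 0

# Example runtime stream: rising energy use at low workload triggers the warning.
monitor([([1.1, 1.8], 0), ([2.3, 3.0], 0), ([2.4, 3.1], 0), ([2.5, 3.2], 0)])
```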

    Modelling and Resolution of Dynamic Reliability Problems by the Coupling of Simulink and the Stochastic Hybrid Fault Tree Object Oriented (SHyFTOO) Library

    Dependability assessment is one of the most important activities for the analysis of complex systems. Classical techniques for safety, risk, and dependability analysis, like Fault Tree Analysis or Reliability Block Diagrams, are easy to implement, but they produce inaccurate dependability estimates because of their simplifying hypotheses, which assume that component malfunctions are independent of each other and of the system working conditions. Recent contributions under the umbrella of Dynamic Probabilistic Risk Assessment have shown the potential to improve the accuracy of classical dependability analysis methods. Among them, the Stochastic Hybrid Fault Tree Automaton (SHyFTA) is a promising methodology because it can combine a Dynamic Fault Tree model with the physics-based deterministic model of a system process, and it can generate dependability metrics along with performance indicators of the physical variables. This paper presents the Stochastic Hybrid Fault Tree Object Oriented (SHyFTOO) library, a Matlab® software library for the modelling and resolution of SHyFTA models. One of the novel features discussed in this contribution is the ease of coupling with a Matlab® Simulink model, which facilitates the design of complex system dynamics. To demonstrate the use of this software library and its augmented capability of generating further dependability indicators, three different case studies are discussed and solved, with a thorough description of the implementation of the corresponding SHyFTA models.
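    A generic sketch of the hybrid simulation idea follows (plain Python, not the SHyFTOO or Simulink API): each Monte Carlo run steps a deterministic process model in time while sampling stochastic component failures, so a single simulation yields both a dependability metric and a physical performance indicator. The pump/tank model and all parameters are invented for illustration.

```python
"""Generic hybrid Monte Carlo sketch: a deterministic process model is
stepped in time while a stochastic component failure is sampled, giving
both an unreliability estimate and a performance indicator."""
import random

def run(horizon_h=10_000.0, dt=1.0, pump_mttf=4_000.0):
    level, delivered = 50.0, 0.0                          # tank level (%) and delivered volume
    pump_up = True
    failure_time = random.expovariate(1.0 / pump_mttf)    # exponential pump failure assumed
    t = 0.0
    while t < horizon_h:
        if pump_up and t >= failure_time:
            pump_up = False                               # basic event fires: pump fails
        inflow = 0.6 if pump_up else 0.0                  # deterministic process dynamics
        level = max(0.0, min(100.0, level + (inflow - 0.5) * dt))
        delivered += min(level, 0.5) * dt
        t += dt
    system_failed = not pump_up and level == 0.0          # simplistic top-event condition
    return system_failed, delivered

results = [run() for _ in range(2_000)]
unreliability = sum(f for f, _ in results) / len(results)
mean_delivered = sum(d for _, d in results) / len(results)
print(f"Unreliability at horizon: {unreliability:.3f}, mean delivered volume: {mean_delivered:.0f}")
```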

    A systems engineering approach to servitisation system modelling and reliability assessment

    Companies are changing their business models in order to improve their long-term competitiveness. Where once they provided only products, they now provide a service with that product, resulting in a reduced cost of ownership. Such a business case benefits both customer and service supplier only if the availability of the product, and hence the service, is optimised. For highly integrated product and service offerings this means it is necessary to assess the reliability monitoring service which underpins service availability. Assessing the reliability monitoring service requires examination not only of the product monitoring capability but also of the effectiveness of the maintenance response prompted by the detection of fault conditions. In order to address these seemingly dissimilar aspects of the reliability monitoring service, a methodology is proposed which defines core aspects of both the product and the service organisation. These core aspects provide a basis from which models of both the product and the service organisation can be produced. The models themselves, though not functionally representative, portray the primary components of each type of system, the ownership of these system components, and how they are interfaced. These system attributes are then examined to establish system risks to reliability by inspection, by evaluation of the model, or by reference to the model source documentation. The result is a methodology that can be applied to such large-scale, highly integrated systems either at an early stage of development or in later development stages. The methodology will identify weaknesses in each system type, indicating areas which should be considered for system redesign, and will also help inform the analyst whether or not the reliability monitoring service as a whole meets the requirements of the proposed business case.