
    Progressive Reliability Method and Its Application to Offshore Mooring Systems

    Assessing the reliability of complex systems (e.g., structures) is essential for a reliability-based optimal design that balances the safety and costs of such systems. This paper proposes the Progressive Reliability Method (PRM) for quantifying the reliability of complex systems. The proposed method is a closed-form solution for calculating the probability of failure. The new method is flexible in its definition of “failure” (i.e., it can consider serviceability and ultimate-strength failures) and uses the rules of probability theory to estimate the failure probability of the system or its components. The method is first discussed in general and then illustrated in two examples, including a case study to find the safest configuration and orientation of a 12-line offshore mooring system. The PRM results are compared with the results of a similar assessment based on Monte Carlo simulation. The two-component example shows that, using PRM, the importance of each component to system safety can be quantified and compared as input for maintenance planning.
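
    The paper's closed-form PRM expressions are not reproduced in the abstract, but the underlying probability-theory composition can be sketched for a hypothetical two-component system; the failure probabilities below are assumed values for illustration only (Python):

        # Illustrative sketch, not the paper's PRM formulation: composing
        # component failure probabilities with the rules of probability
        # theory, assuming independent component failures.
        def series_failure(p1, p2):
            # A series system fails if either component fails:
            # P(F) = 1 - (1 - p1) * (1 - p2)
            return 1.0 - (1.0 - p1) * (1.0 - p2)

        def parallel_failure(p1, p2):
            # A redundant (parallel) system fails only if both fail.
            return p1 * p2

        p1, p2 = 0.02, 0.05                  # assumed failure probabilities
        print(series_failure(p1, p2))        # 0.069
        print(parallel_failure(p1, p2))      # 0.001
        # Component importance can be compared via sensitivities, e.g.
        # dP_series/dp1 = 1 - p2: the component paired with the more
        # reliable partner matters more to system safety.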

    Reliability model for component-based systems in cosmic (a case study)

    Software component technology has had a substantial impact on modern IT evolution. The benefits of this technology, such as reusability, complexity management, time and effort reduction, and increased productivity, have been key drivers of its adoption by industry. One of the main issues in building component-based systems is the reliability of the composed functionality of the assembled components. This paper proposes a reliability assessment model based on the architectural configuration of a component-based system and the reliability of the individual components, which is usage- and testing-independent. The goal of this research is to improve the reliability assessment process for large software component-based systems over time, and to compare alternative component-based system design solutions prior to implementation. The novelty of the proposed reliability assessment model lies in evaluating component reliability from behavior specifications, and system reliability from the system topology; the assessment is performed in the context of the implementation-independent ISO/IEC 19761:2003 International Standard on the COSMIC method, chosen to provide the component's behavior specifications. In essence, each component of the system is modeled as a discrete-time Markov chain derived from its behavior specifications, expressed as extended state machines. A probabilistic analysis by means of Markov chains is then performed to analyze any uncertainty in the component's behavior. Our hypothesis states that the less uncertainty there is in the component's behavior, the greater the reliability of the component. The system reliability assessment is derived from a typical component-based system architecture with composite reliability structures, which may include compositions of serial, parallel, and p-out-of-n reliability structures. The approach of assessing component-based system reliability in the COSMIC context is illustrated with the railroad crossing case study. © 2008 World Scientific Publishing Company
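
    A minimal sketch of the two assessment steps described above: component reliability from an absorbing discrete-time Markov chain, then composition over serial, parallel, and p-out-of-n structures. The states, transition probabilities, and numbers are invented for illustration and are not the paper's actual model (Python):

        import numpy as np
        from math import comb

        # Component step: transient states {start, busy}; absorbing states
        # {done = correct completion, fail}. Rows sum to 1 across Q and R.
        Q = np.array([[0.0, 0.9],            # start -> {start, busy}
                      [0.0, 0.1]])           # busy  -> {start, busy}
        R = np.array([[0.05, 0.05],          # start -> {done, fail}
                      [0.85, 0.05]])         # busy  -> {done, fail}
        N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix
        r = (N @ R)[0, 0]                    # P(absorb in "done" | start) = 0.9

        # System step: compose component reliabilities along the topology.
        def series(rs):
            return float(np.prod(rs))                        # all must work
        def parallel(rs):
            return 1.0 - float(np.prod([1 - x for x in rs])) # any works
        def p_out_of_n(r, p, n):                             # >= p of n work
            return sum(comb(n, i) * r**i * (1 - r)**(n - i)
                       for i in range(p, n + 1))

        print(r, series([r] * 3), p_out_of_n(r, 2, 3))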

    Heuristic Approach for a Combined Transfer Line Balancing and Buffer Allocation Problem Considering Uncertain Demand

    Featured Application: This research was initiated by an industrial project. The problem was the design and configuration of machining lines for engine blocks. The proposed approach was validated using four real cases provided by the industrial partners of the project, and it could easily be applied to the design and configuration of any machining line producing a single complex mechanical component. In this paper, we refer to a real case study of an industrial partner recently committed to a project on the design of a multi-unit, multi-product manufacturing system. Although the considered problem refers to an actual complex manufacturing system, it can be theoretically classified as the union of two key problems that must be solved during the transfer line design stage: the transfer line balancing problem (TLBP) and the buffer allocation problem (BAP). As two closely related problems, TLBP and BAP usually have similar optimizing directions and share the same purpose: striking a balance between the performance of the transfer line system and the investment costs. These problems are usually solved sequentially, but this leads to solutions close to a local optimum in the solution space rather than the global optimum of the overall problem. This paper presents a multi-objective optimization for concurrently solving the transfer line balancing and buffer allocation problems. The new approach is based on a combination of evolutionary and heuristic-based algorithms and takes into account the uncertainty of market demand. To validate the proposed approach, an industrial case study in a multi-unit manufacturing system producing multiple products (four engine blocks) is discussed.
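
    As a rough illustration of what "concurrently" means here, the sketch below evaluates a single candidate that jointly encodes a task-to-station assignment (TLBP) and a buffer-size vector (BAP) against cost and demand-satisfaction objectives sampled over uncertain demand. All task times, cost weights, and the throughput proxy are invented, and the paper's evolutionary/heuristic search loop is omitted (Python):

        import random

        task_time = {1: 4.0, 2: 3.0, 3: 5.0, 4: 2.0}   # assumed times (min)

        def evaluate(assignment, buffers, demand, cycle_time=8.0):
            # assignment: task -> station; buffers: sizes between stations
            load = {}
            for task, st in assignment.items():
                load[st] = load.get(st, 0.0) + task_time[task]
            if any(t > cycle_time for t in load.values()):
                return float("inf")                    # infeasible balance
            cost = 10.0 * len(load) + sum(buffers)     # assumed cost weights
            # crude throughput proxy: bottleneck rate, eased by buffering
            rate = (1.0 / max(load.values())) * (1.0 + 0.02 * min(buffers))
            shortfall = sum(max(0.0, d - rate) for d in demand) / len(demand)
            return cost + 100.0 * shortfall            # joint objective

        demand = [random.gauss(0.15, 0.03) for _ in range(200)]  # scenarios
        print(evaluate({1: 0, 2: 0, 3: 1, 4: 1}, [3, 2], demand))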

    A High Availability platform design using Heartbeat and integration in a production environment

    Nowadays, the number of services on the Internet keeps growing. Hardware vendors bring increasingly powerful and stable servers to market, and operating systems are becoming more robust and flexible, offering many possibilities. Nevertheless, a service outage can still happen if one of these components crashes. Where critical applications are concerned, high availability techniques are a must: with them in place, critical applications remain available even in case of hardware, connectivity, or operating system failures. On one hand, a functional description, the component architecture, and a comparison of three software-layer high availability solutions are presented. On the other hand, the SNMP protocol must be enabled on every server, as the platform is to be installed in a production environment within the mobile operator’s network, and integration with the SNMP manager used by the customer must be achieved; a brief study of the protocol and its components is included. The platform design and implementation were done in a development scenario. Besides, the client has an identical scenario to approve software and platform; this demo scenario has the same configuration as the development and production scenarios. Once the system is approved, the production scenario is configured in the same way as the development and demo scenarios. To deploy configuration and software releases, installation scripts were implemented and packaged as RPMs. Lastly, a high availability platform running critical services with SNMP monitoring has been achieved, and integration within the mobile operator’s network has been completed successfully. Since October 2007, the system has been offering services running on the high availability platform implemented in this project.
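
    For reference, a minimal Heartbeat (Linux-HA, version 1 style) two-node configuration looks roughly like the snippet below; the hostnames, interface, virtual IP, and service name are placeholders, not the configuration actually deployed in this project:

        # /etc/ha.d/ha.cf -- cluster communication settings
        logfacility local0
        keepalive 2              # heartbeat interval (seconds)
        deadtime 30              # declare peer dead after 30 s of silence
        bcast eth0               # send heartbeats over broadcast on eth0
        auto_failback on         # move resources back when primary recovers
        node node1
        node node2

        # /etc/ha.d/haresources -- primary node, virtual IP, managed service
        node1 IPaddr::192.168.1.100/24 myservice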

    Automated extraction of architecture-level performance models of distributed component-based systems

    Modern enterprise applications have to satisfy increasingly stringent Quality-of-Service requirements. To ensure that a system meets its performance requirements, the ability to predict its performance under different configurations and workloads is essential. Architecture-level performance models describe performance-relevant aspects of software architectures and execution environments, allowing different usage profiles as well as system deployment and configuration options to be evaluated. However, building performance models manually requires a lot of time and effort. In this paper, we present a novel automated method for the extraction of architecture-level performance models of distributed component-based systems, based on monitoring data collected at run-time. The method is validated in a case study with the industry-standard SPECjEnterprise2010 Enterprise Java benchmark, a representative software system executed in a realistic environment. The obtained performance predictions match the measurements on the real system within an error margin of mostly 10-20 percent.
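
    The paper's full extraction method is not detailed in the abstract, but one typical ingredient of such approaches can be sketched: estimating per-operation resource demands from run-time monitoring data via the service demand law D = U / X. The operation names and numbers below are assumptions for illustration (Python):

        # Estimate CPU service demands from measured utilization shares and
        # throughputs, using the service demand law D_i = U_i / X_i.
        monitoring = {
            # operation: (CPU utilization share, throughput in req/s)
            "placeOrder":    (0.30, 120.0),
            "browseCatalog": (0.45, 900.0),
            "checkout":      (0.10,  60.0),
        }

        for op, (util, xput) in monitoring.items():
            demand_ms = 1000.0 * util / xput
            print(f"{op}: {demand_ms:.2f} ms CPU per request")

        # Demands like these parameterize the resource-demand annotations
        # of an architecture-level model, which can then predict behavior
        # under new workloads, deployments, and configurations.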

    Human System Modelling For Labour Utilisation And Man-Machine Configuration At Cellular Manufacturing

    Manufacturing complexity has become more challenging with increases in demand fluctuation, product customisation, and shorter lead time expectations. It is becoming more crucial to measure manufacturing complexity in order to better recognise and control the various manufacturing components and achieve optimum manufacturing performance. Cellular manufacturing, or group technology, is a method used to manage manufacturing complexity by clustering different types of equipment to process parts. The organizational structure of cellular manufacturing always needs to be flexible for reconfiguration to address rapid changes in customer requirements, especially in managing its dual constraints: human and machine. Very often, the human component is overlooked or overestimated owing to a poor understanding of the effects of human constraints, and the lack of study is linked to the difficulty of modelling human behaviour. The purpose of this study is to develop a human system model to fill the gap in the study of human constraints on cellular manufacturing performance. A new human system framework focusing on human dynamics and attributes was designed to be integrated with the predetermined time standards system in an expert system, eMOST. The new human system model was evaluated for applicability in an actual manufacturing environment through five case studies, in which accurate labour utilisation and man-machine configuration information was obtained. The newly defined approach was thus able to efficiently improve data capture and analysis and to model human constraints. The human information from the model was integrated with other manufacturing resources using the WITNESS simulation modelling tool, focusing on the bottleneck area, to further evaluate the dynamic impact of these components on manufacturing performance. The use of simulation modelling experiments has also proven advantageous for changing manufacturing configurations and running alternative scenarios to improve the efficiency of the system in terms of throughput, cycle time, operator utilisation, and man-machine configuration. The findings of this study enabled the management to make good decisions to efficiently manage human resources and better predict how to reconfigure and competently manage resource allocation.
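
    The eMOST human system model itself is not reproduced here, but a textbook man-machine assignment calculation illustrates the kind of configuration question the study addresses; the times below are assumed values (Python):

        # Classic machine-interference estimate of how many machines one
        # operator can tend (a standard formula, not the thesis's model).
        Ts = 1.0   # operator service time per machine per cycle (min)
        Tm = 5.0   # unattended machine run time per cycle (min)
        Tr = 0.5   # operator walk/repositioning time between machines (min)

        n_ideal = (Ts + Tm) / (Ts + Tr)      # ideal machines per operator
        n = int(n_ideal)                     # whole machines assigned

        operator_util = n * (Ts + Tr) / (Ts + Tm)
        print(n_ideal, n, operator_util)     # 4.0 4 1.0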

    Potentials and challenges of the fuel cell technology for ship applications. A comprehensive techno-economic and environmental assessment of maritime power system configurations

    The decarbonization of global ship traffic is one of the industry’s greatest challenges for the coming decades and will likely only be achieved with new, energy-efficient power technologies. To evaluate the performance of such technologies, a system modeling and optimization approach is introduced and tested, covering three elementary topics: shipboard solid oxide fuel cells (SOFCs), the benefits of decentralizing ship power systems, and the assessment of potential future power technologies and synthetic fuels. In the following, the analyses’ motivations, scopes, and derived conclusions are presented. SOFCs are a much-discussed technology with promising efficiency, fuel versatility, and few operating emissions. However, complex processes and high temperature levels inhibit their stand-alone dynamic operation. Therefore, their operability in a hybrid system is investigated, focusing on component configurations and evaluation approach corrections. It is demonstrated that moderate storage support satisfies the requirements for uninterrupted ship operation. Depending on the load characteristics, energy-intensive and power-intensive storage applications with diverging challenges are identified. The analysis also emphasizes that degradation modeling must be treated with particular care, since technically optimal and cost-optimal design solutions differ meaningfully when assessing annual expenses. Decentralizing a power system with modular components in accordance with the load demand reduces both grid size and transmission losses, leading to a decrease in investment and operating costs. A cruise-ship-based case study considering variable installation locations and potential component failures is used to quantify these benefits. Transmission costs in a distributed system are reduced meaningfully, with and without consideration of component failures, when compared to a central configuration. Minor modifications also ensure the component redundancy requirements, resulting in comparably marginal extra expenses. Nowadays, numerous synthetic fuels are seen as candidates for future ship applications in combination with either combustion engines or fuel cells. To drive the ongoing technology discussion, performance indicators for envisioned system configurations are assessed in dependence on mission characteristics and critical price trends. Although gaseous hydrogen is often considered unsuitable for ship applications due to its low volumetric energy density, the resulting low operating costs account for its superior performance on short passages. For extended missions, fuel cells operating on methanol or ammonia surpass hydrogen economically.
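
    The economic conclusion in the final sentences can be illustrated with a back-of-the-envelope mission fuel-cost comparison. The lower heating values are standard figures, but the mission energy, system efficiencies, and fuel prices are assumptions, and the paper's assessment additionally covers storage volume, investment costs, and emissions (Python):

        mission_energy_kwh = 50_000.0        # assumed energy for one passage

        fuels = {
            # fuel: (LHV in kWh/kg, assumed efficiency, assumed $/kg)
            "hydrogen": (33.3, 0.50, 4.50),
            "methanol": (5.5,  0.45, 0.60),
            "ammonia":  (5.2,  0.45, 0.75),
        }

        for name, (lhv, eta, price) in fuels.items():
            fuel_kg = mission_energy_kwh / (lhv * eta)   # fuel mass needed
            print(f"{name}: {fuel_kg:,.0f} kg, ${fuel_kg * price:,.0f}")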

    Significant Feature Identification Mechanism For Ipv6 In Enhancing Intrusion Detection System


    An integrated methodology for the performance and reliability evaluation of fault-tolerant systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (leaves 220-224). This thesis proposes a new methodology for the integrated performance and reliability evaluation of embedded fault-tolerant systems used in aircraft, space, tactical, and automotive applications. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers when designing the control system, but incorporates additional artifacts to model the failure behavior of the system components. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on an analysis of the dynamics of the different configurations the system can reach after component failures occur. For each possible system configuration, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet the system requirements. Markov chains are used to model the stochastic process associated with the different configurations that a system can adopt when failures occur. Reliability and unreliability measures, as well as probabilistic measures of performance, can be quantified by merging the values of the performance metrics for each configuration with the system configuration probabilities yielded by the corresponding Markov model. This methodology is used not only for system evaluation, but also for guiding the design process and further optimization. Thus, within the context of the new methodology, we define new importance measures to rank the contributions of model parameters to system reliability and performance. To support this methodology, we developed a MATLAB/SIMULINK® tool, which also provides a common environment with a common language for control engineers and reliability engineers to develop fault-tolerant systems. We illustrate the use of the methodology and the capabilities of the tool with two case studies. The first corresponds to the lateral-directional control system of an advanced fighter aircraft. This case study shows how the methodology can identify weak points in the system design and point out possible solutions to eliminate them; compare different architecture alternatives from different perspectives; and test different failure detection, isolation, and reconfiguration (FDIR) techniques. It also shows the effectiveness of the MATLAB/SIMULINK® tool for analyzing large and complex systems. The second case study compares two very different solutions for achieving fault tolerance in a steer-by-wire (SbW) system. The first solution is based on the replication of components and the introduction of failure detection, isolation, and reconfiguration mechanisms. In the second solution, a dissimilar backup mechanism called brake-actuated steering (BAS) is used to achieve fault tolerance rather than replicating each component within the system. This case study complements the flight control system study by showing how the methodology and the MATLAB/SIMULINK® tool can be used to compare very different architectural approaches to achieving fault tolerance, and therefore how the methodology can be used to choose the best design in terms of performance and reliability. by Alejandro D. Domínguez-García. Ph.D.
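
    A minimal sketch of the merging step the abstract describes: configuration probabilities from a small Markov failure model are weighted by a per-configuration performance evaluation. The failure rates, states, and metric values are invented for illustration (Python):

        import numpy as np
        from scipy.linalg import expm

        # States: 0 = nominal, 1 = degraded (one component failed), 2 = failed.
        lam1, lam2 = 1e-4, 5e-4              # assumed failure rates per hour
        Q = np.array([[-lam1,  lam1,  0.0 ],
                      [ 0.0,  -lam2,  lam2],
                      [ 0.0,   0.0,   0.0 ]])   # system failure is absorbing

        t = 1000.0                           # mission time (hours)
        p = np.array([1.0, 0.0, 0.0]) @ expm(Q * t)  # config probabilities

        # Performance metric per configuration, e.g. 1 if settling-time
        # requirements are met in that configuration, 0 otherwise.
        meets = np.array([1.0, 1.0, 0.0])    # assumed evaluation results
        print("unreliability:", p[2])
        print("P(performance requirement met):", p @ meets)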