
    Capacity flexibility of a maintenance service provider in specialized and commoditized system environments

    In the last decades, after-sales services have become increasingly important: service is a source of differentiation as well as a lucrative business opportunity, owing to the substantial revenue that can be generated from products in use throughout their life cycle. Following this trend, many after-sales service providers have emerged in the market or evolved as semi-autonomous units within OEM (Original Equipment Manufacturer) companies. In this thesis, we focus on the maintenance aspect of after-sales services. We assume that a maintenance service provider (MSP) runs a repair shop in an environment with numerous operating systems that are prone to failure. The MSP is responsible for keeping all systems in the environment up and working. We focus on two types of environments: 1) the specialized system environment and 2) the commoditized system environment. The systems in the first environment are highly customized: they are designed and built specifically to the owners' precise requirements. Defense systems, specific lithography systems, mission aircraft and other advanced/complex, engineer-to-order capital goods are examples of such specialized systems. Due to the diversity of owners' requirements, each system develops many unique characteristics, which make it hard, if not impossible, to find a substitute for the system in the market as a whole. In the second environment, the systems are more generic in terms of their functionality. Trucks, cranes, printers, copy machines, forklifts, computer systems, cooling towers, some common medical devices (e.g., anesthesia, x-ray and ultrasound machines) and power systems are examples of such more commoditized systems. Due to the more generic nature of the owners' requirements, it is easier to find a substitute system in the market, with more or less the same functionality, for short-term hiring purposes.
Upon a system breakdown, the defective unit (system/subsystem) is sent to the repair shop. The MSP is responsible for the repair and also liable for the costs related to the downtime. To alleviate downtime costs, there are chiefly two downtime service strategies that the MSP can follow, depending on the environment the repair shop operates in. In the specialized system environment, the MSP holds a spare unit inventory for the critical subsystem that causes most of the failures; the downtime-service-related decision in this case is the inventory level of the critical spare subsystems. In the commoditized system environment, rather than keeping a spare unit inventory, the MSP hires a substitute system from an agreed rental store/third-party supplier; the downtime-service-related decision here is the hiring duration. Next to these downtime service decisions, the repair shop's capacity level is the other primary determinant of the systems' uptime/availability. Since maintenance is a labor-intensive industry, capacity costs constitute a large portion of the total costs. Increasing pressure on profitability and the growing role of external labor supplier agencies motivate service provider firms to scrutinize the prospects of capacity flexibility through a contingent workforce. For various reasons, flexible capacity practices in real life are often periodic, and the period length is both a decision parameter and a metric for flexibility. A shorter period length implies more frequent adaptation possibilities and a better tailoring of the capacity. On the other hand, the flexible capacity cost per unit time is higher for shorter period lengths due to compensating wage differentials, which model the relation between the wage rate and the unpleasantness, risk or other undesirable attributes of a job.
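    To make the spare-inventory trade-off concrete, the following is a minimal sketch (our simplification, not the thesis's actual model): the repair pipeline is approximated as an M/M/1 queue of defective units, and a base-stock level S for the critical spare subsystem is chosen by balancing assumed holding and downtime (backorder) cost rates.

```python
# Hypothetical illustration: base-stock level for the critical spare subsystem.
# The number N of units in repair is modelled as an M/M/1 queue length, so
# P(N = n) = (1 - rho) * rho**n; all parameter values below are assumed.
rho = 0.8          # repair shop utilisation (failure rate / repair rate)
h, b = 1.0, 20.0   # holding vs. downtime (backorder) cost per unit per time

def avg_cost(S: int) -> float:
    EN = rho / (1 - rho)               # E[N], mean number of units in repair
    EBO = rho ** (S + 1) / (1 - rho)   # E[(N - S)^+], expected backorders
    EOH = S - EN + EBO                 # E[(S - N)^+], expected on-hand spares
    return h * EOH + b * EBO

best_S = min(range(50), key=avg_cost)
print(best_S, round(avg_cost(best_S), 3))  # → 13 13.618
```

Under these assumed costs, the optimum balances the holding cost of one more spare against the marginal reduction in expected downtime.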
Certainly, a short period length in this context is an undesirable attribute for the flexible capacity resource, as it requires the resource to switch tasks and to be ready/available more frequently, without the guarantee of actually being employed. Therefore, we propose several empirically testable functional forms for the cost rate of a flexible capacity unit, which are decreasing in the period length and, in the limit, approach the cost rate of a permanent capacity unit from above. In light of the discussion above, we investigate three different capacity modes in this dissertation:
    • Fixed Capacity Mode: In this mode, all of the capacity is permanent and ready for use in the repair shop. This mode serves as a reference point for assessing the benefits of the flexible capacity modes. The relevant capacity decision in this mode is the single capacity level of the repair shop.
    • Periodic Two-Level Capacity Mode: In this mode, we assume two levels of repair shop capacity: permanent, and permanent plus contingent. The permanent capacity is always available in the system, whereas the deployment of the contingent capacity is decided at the start of each period based on the number of units waiting to be repaired in the shop. The relevant capacity decisions in this mode are the permanent and contingent capacity levels, the period length, and the states (in terms of the number of defective units waiting) in which the contingent capacity is deployed.
    • Periodic Capacity Sell-Back Mode: In this mode, the failed units are sent to the repair shop at regular intervals in time. Due to this admission structure, when the repair of all the defective units in the shop is completed within a period, it is known that no new defective units will arrive at the shop until the start of the next period.
This certainty in idle times allows for a contract whereby the repair shop capacity is sold at a reduced price to the capacity agency, where it is assigned to other tasks until the start of the next period. The original cost of the multi-skilled repair shop capacity per time unit is higher than the permanent capacity cost mentioned in the previous modes, due to compensation factors such as additional skills, frequent task switching and transportation/transaction costs. Similar to the previous capacity mode, the compensation decreases with the period length. The relevant capacity decisions in this mode are the capacity level and the period length. The primary goal of this thesis is to develop quantitative models and methods for taking optimal capacity decisions for the repair shop under the capacity modes described above, and to integrate these decisions with the other downtime service decisions of the MSP for the two types of system environments (specialized vs. commoditized). After the introduction of the problem, the concepts and literature review are given in Chapter 1. In Chapter 2, we focus on the use of capacity flexibility in the repair operations of the MSP in the specialized system environment. The capacity-related decisions are integrated with the decision on the stock level of the spare unit inventory for all three capacity modes. In Chapter 3, we investigate the same three capacity modes in a (partially) commoditized system environment, where hiring a substitute system for a pre-determined, uniform duration becomes the conventional method upon a failure. In this chapter, the decision on the hiring duration is integrated with the other capacity-related decisions. We then provide some preliminary analysis and early results on the hybrid strategy, in which both the "keep stock" and "hire substitute" strategies are followed.
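    The kind of flexible-capacity cost rate described above can be sketched as follows; the two functional forms and all parameter values are illustrative stand-ins, not the specific forms proposed in the thesis. Both are decreasing in the period length T and approach the permanent-capacity cost rate from above.

```python
import math

c_p = 1.0  # cost rate of a permanent capacity unit (normalised, assumed)

def c_f_hyperbolic(T: float, a: float = 0.5) -> float:
    # compensation premium proportional to 1/T: vanishes as T grows
    return c_p * (1.0 + a / T)

def c_f_exponential(T: float, a: float = 0.5, b: float = 1.0) -> float:
    # exponentially decaying premium in the period length T
    return c_p * (1.0 + a * math.exp(-b * T))

for T in (0.5, 1.0, 4.0, 16.0):
    print(T, round(c_f_hyperbolic(T), 4), round(c_f_exponential(T), 4))
```

Either form captures the compensating wage differential: the shorter the period, the larger the premium over the permanent rate c_p.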
Finally, in Chapter 4, we summarize our results, draw conclusions and discuss the topics covered in this thesis, with a brief exploration of future research. The numerical results reveal that, in both specialized and commoditized system environments, substantial cost savings (up to 70%) can be achieved under the periodic two-level capacity and periodic capacity sell-back modes compared to the fixed capacity mode. However, both the period length and the compensation scheme of the capacity resources greatly influence the savings; in some cost instances, the flexible modes (periodic two-level and capacity sell-back) even become less economical than the fixed capacity mode. The cost parameter instances in which each of the three capacity modes is cost-optimal, the characteristics of the cost savings, and the sensitivity analysis of the cost/policy parameters are investigated for the two system environments in Chapter 2 and Chapter 3, respectively. In the commoditized system environment, under the same cost parameter settings, hiring a substitute from an external supplier for a fixed duration provides better, more refined and more certain control than keeping an inventory. The hybrid strategy, in which a substitute is hired after a stock-out, is applicable in commoditized as well as commoditizing (previously specialized systems that are undergoing commoditization) system environments. The hybrid strategy outperforms both the "only keep stock" and "only hire substitute" alternatives; however, in the commoditized system environment, an MSP may still prefer the "hire substitute" strategy alone, because it does not require any initial investment, which is convenient for SMEs. These issues are explicated further in Chapter 5.
We believe that the framework, the design and analysis of the problems addressed, as well as the results and insights obtained in this dissertation can help and motivate other researchers and practitioners to further investigate the cost-saving prospects of capacity flexibility in maintenance service operations. We also anticipate that the commoditization framework described in this thesis will become increasingly useful in the future, as the commoditization of parts/machines becomes much more widespread, pushing all after-sales service providers to compete on the efficiency of their operations.

    Performance Evaluation of Stochastic Multi-Echelon Inventory Systems: A Survey

    Globalization, product proliferation, and fast product innovation have significantly increased the complexity of supply chains in many industries. One of the most important advancements in supply chain management in recent years is the development of models and methodologies for controlling inventory in general supply networks under uncertainty, and their widespread application in industry. These developments are based on three generic methods: the queueing-inventory method, the lead-time demand method and the flow-unit method. In this paper, we compare and contrast these methods by discussing their strengths and weaknesses, their differences and connections, and by showing how to apply them systematically to characterize and evaluate various supply networks with different supply processes, inventory policies, and demand processes. Our objective is to forge links among research strands on different methods and various network topologies so as to develop unified methodologies. (Funding: Masdar Institute of Science and Technology; National Science Foundation (U.S.), NSF Contract CMMI-0758069; National Science Foundation (U.S.), CAREER Award CMMI-0747779; Bayer Business Services; SAP A)

    Analysis of discrete-time queueing systems with vacations


    Discrete Event Simulations

    Discrete Event Simulation (DES) is regarded by many authors as a technique for modelling stochastic, dynamic and discretely evolving systems, and it has gained widespread acceptance among practitioners who want to represent and improve complex systems. Since DES is applied in widely different areas, this book reflects many different points of view about DES: each set of authors describes how it is understood and applied within their context of work, providing an extensive understanding of what DES is. It can be said that the name of the book itself reflects the plurality of these points of view. The book embraces a number of topics covering theory, methods and applications in a wide range of sectors and problem areas, categorised into five groups. Beyond this variety of viewpoints, one additional thing is worth remarking about this book: its richness in actual data and in analyses based on actual data. While many academic works lack application cases, roughly half of the chapters in this book deal with actual problems or are at least based on actual data. The editor firmly believes that this book will be of interest to both beginners and practitioners in the area of DES.
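    The common core of every DES, whatever the application area, is a time-ordered event list driving state changes. A minimal sketch (our generic example with illustrative parameters, not taken from any particular chapter) is an M/M/1 queue simulated via a heap-based event list:

```python
import heapq
import random

# Minimal discrete-event simulation: M/M/1 queue (assumed parameters).
random.seed(42)
lam, mu, horizon = 0.8, 1.0, 10_000.0  # arrival rate, service rate, run length

events = [(random.expovariate(lam), "arrival")]  # time-ordered event list
t, queue, busy, served = 0.0, 0, False, 0
while events:
    t, kind = heapq.heappop(events)  # advance clock to next event
    if t > horizon:
        break
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(lam), "arrival"))
        if busy:
            queue += 1
        else:
            busy = True
            heapq.heappush(events, (t + random.expovariate(mu), "departure"))
    else:  # departure: start next service if anyone is waiting
        served += 1
        if queue:
            queue -= 1
            heapq.heappush(events, (t + random.expovariate(mu), "departure"))
        else:
            busy = False
print(served)
```

With arrival rate 0.8 over a horizon of 10,000 time units, the completed-service count comes out near 8,000, as expected for a stable queue.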

    Stochastic Processes with Applications

    Stochastic processes have wide relevance in mathematics, both for theoretical aspects and for their numerous real-world applications in various domains. They represent a very active research field that is attracting the growing interest of scientists from a range of disciplines. This Special Issue aims to present a collection of current contributions concerning various topics related to stochastic processes and their applications. In particular, the focus here is on applications of stochastic processes as models of dynamic phenomena in research areas of broad interest, such as economics, statistical physics, queueing theory, biology, theoretical neurobiology, and reliability theory. Various contributions dealing with theoretical issues in stochastic processes are also included.

    A PROTOCOL SUITE FOR WIRELESS PERSONAL AREA NETWORKS

    A Wireless Personal Area Network (WPAN) is an ad hoc network that consists of devices that surround an individual or an object. Bluetooth® technology is especially suitable for the formation of WPANs due to the pervasiveness of devices with Bluetooth® chipsets, its operation in the unlicensed Industrial, Scientific, Medical (ISM) frequency band, and its interference resilience. Bluetooth® technology has great potential to become the de facto standard for communication between heterogeneous devices in WPANs. The piconet, the basic Bluetooth® networking unit, utilizes a Master/Slave (MS) configuration that permits only a single master and up to seven active slave devices. This structural limitation prevents Bluetooth® devices from directly participating in larger Mobile Ad Hoc Networks (MANETs) and WPANs. In order to build larger Bluetooth® topologies, called scatternets, individual piconets must be interconnected. Since each piconet has a unique frequency hopping sequence, piconets are interconnected by allowing some nodes, called bridges, to participate in more than one piconet. These bridge nodes divide their time between piconets by switching between Frequency Hopping (FH) channels and synchronizing to each piconet's master. In this dissertation we address scatternet formation, routing, and security to make Bluetooth® scatternet communication feasible. We define criteria for efficient scatternet topologies, describe the characteristics of different scatternet topology models and compare and contrast their properties, classify existing scatternet formation approaches based on the aforementioned models, and propose a distributed scatternet formation algorithm that efficiently forms a scatternet topology and is resilient to node failures.
    We propose a hybrid routing algorithm, using a bridge-link-agnostic approach, that provides on-demand discovery of destination devices by their address or by the services that the devices provide to their peers, by extending the Service Discovery Protocol (SDP) to scatternets. We also propose a link-level security scheme that provides secure communication between adjacent piconet masters, within what we call an Extended Scatternet Neighborhood (ESN).

    Robust and secure resource management for automotive cyber-physical systems

    Spring 2022. Includes bibliographical references. Modern vehicles are examples of complex cyber-physical systems with tens to hundreds of interconnected Electronic Control Units (ECUs) that manage various vehicular subsystems. With the shift towards autonomous driving, emerging vehicles are being characterized by an increase in the number of hardware ECUs, greater complexity of applications (software), and more sophisticated in-vehicle networks. These advances have resulted in numerous challenges that impact the reliability, security, and real-time performance of these emerging automotive systems. Some of the challenges include coping with computation and communication uncertainties (e.g., jitter), developing robust control software, detecting cyber-attacks, ensuring data integrity, and enabling confidentiality during communication. However, solutions to overcome these challenges incur additional overhead, which can catastrophically delay the execution of real-time automotive tasks and message transfers. Hence, there is a need for a holistic approach to a system-level solution for resource management in automotive cyber-physical systems that enables robust and secure automotive system design while satisfying a diverse set of system-wide constraints. ECUs in vehicles today run a variety of automotive applications ranging from simple vehicle window control to highly complex Advanced Driver Assistance System (ADAS) applications. The aggressive attempts of automakers to make vehicles fully autonomous have increased the complexity and data rate requirements of applications and further led to the adoption of advanced artificial intelligence (AI) based techniques for improved perception and control. Additionally, modern vehicles are becoming increasingly connected with various external systems to realize more robust vehicle autonomy.
These paradigm shifts have resulted in significant overheads on resource-constrained ECUs and increased the complexity of the overall automotive system (including heterogeneous ECUs, network architectures, communication protocols, and applications), which has severe performance and safety implications for modern vehicles. The increased complexity of automotive systems introduces several computation and communication uncertainties in automotive subsystems that can cause delays in applications and messages, resulting in missed real-time deadlines. Missing deadlines for safety-critical automotive applications can be catastrophic, and this problem will be further aggravated in the case of future autonomous vehicles. Additionally, due to the harsh operating conditions (such as high temperatures, vibrations, and electromagnetic interference (EMI)) of automotive embedded systems, there is a significant risk to the integrity of the data that is exchanged between ECUs, which can lead to faulty vehicle control. These challenges demand a more reliable design of automotive systems that is resilient to uncertainties and supports data integrity goals. Additionally, the increased connectivity of modern vehicles has made them highly vulnerable to various kinds of sophisticated security attacks. Hence, it is also vital to ensure the security of automotive systems, and this will become crucial as connected and autonomous vehicles become more ubiquitous. However, imposing security mechanisms on resource-constrained automotive systems can result in additional computation and communication overhead, potentially leading to further missed deadlines. Therefore, it is crucial to design lightweight techniques that incur minimal overhead while achieving the above-mentioned goals and ensuring the real-time performance of the system.
We address these issues by designing a holistic resource management framework called ROSETTA that enables robust and secure automotive cyber-physical system design while satisfying a diverse set of constraints related to reliability, security, real-time performance, and energy consumption. To achieve reliability goals, we have developed several techniques for reliability-aware scheduling and multi-level monitoring of signal integrity. To achieve security objectives, we have proposed a lightweight security framework that provides confidentiality and authenticity while meeting both security and real-time constraints. We have also introduced multiple deep learning based intrusion detection systems (IDS) to monitor and detect cyber-attacks in the in-vehicle network. Lastly, we have introduced novel techniques for jitter management and security management and deployed lightweight IDSs on resource-constrained automotive ECUs while ensuring the real-time performance of the automotive systems.
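    To give a flavour of the kind of lightweight authenticity mechanism discussed above, here is a generic sketch (our illustration, not the ROSETTA framework itself): a truncated HMAC tag keeps per-message overhead small enough for a short CAN-style payload. The key and frame contents below are made up.

```python
import hmac
import hashlib

# Assumed pre-shared key between sender and receiver ECUs (illustrative).
KEY = b"shared-ecu-key"

def tag(msg: bytes, length: int = 4) -> bytes:
    # Truncated HMAC-SHA256: a 4-byte tag fits alongside a small CAN payload.
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:length]

frame = b"\x01\x42" + b"speed=88"        # hypothetical message ID + payload
t = tag(frame)
# Receiver re-computes the tag and compares in constant time.
assert hmac.compare_digest(t, tag(frame))
# A tampered frame yields a different tag, so the modification is detected.
assert not hmac.compare_digest(t, tag(frame + b"!"))
print(t.hex())
```

Truncation trades collision resistance for bandwidth; real deployments also need freshness (e.g., counters) to stop replays, which this sketch omits.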

    An Application of Matrix Analytic Methods to Queueing Models with Polling

    We review what it means to model a queueing system, and highlight several components of interest which govern the behaviour of customers, as well as the server(s) who tend to them. Our primary focus is on polling systems, which involve one or more servers who must serve multiple queues of customers according to their service policy, made up of an overall polling order and a service discipline defined at each queue. The most common polling orders and service disciplines are discussed, and some examples are given to demonstrate their use. Classic matrix analytic method theory is built up and illustrated on models of increasing complexity, to provide context for the analyses of later chapters. The original research contained within this thesis is divided into two halves: finite population maintenance models and infinite population cyclic polling models. In the first half, we investigate a 2-class maintenance system with a single server, expressed as a polling model. In Chapter 2, the model we study considers a total of C machines which are at risk of failing when working. Depending on the failure that a machine experiences, it is sorted into either the class-1 or class-2 queue, where it awaits service among other machines suffering from similar failures. The service policies considered include exhaustive, non-preemptive priority, and preemptive resume priority. In Chapter 3, this model is generalized to allow for a maintenance float of f spare machines that can be turned on to replace a failed machine, and the possible server behaviours are greatly generalized. In both chapters, among other topics, we discuss the optimization of server behaviour as well as the limiting number of working machines as C goes to infinity.
As these are systems with a finite population (for a given C and f), their steady-state distributions can be solved for using the algorithm for level-dependent quasi-birth-and-death processes without loss of accuracy. When a class of customers is impatient, the algorithms covered in this thesis require their queue length to be truncated in order for us to approximate the steady-state distribution for all but the simplest model. In Chapter 4, we model a 2-queue polling system with impatient customers and k_i-limited service disciplines. Finite buffers are assumed for both queues, such that if a customer arrives to find their queue full, they are blocked and lost forever. Finite buffers are a way to interpret a necessary truncation level, since we can simply assume that it is impossible to observe the removed states. However, if we are interested in approximating an infinite buffer system, this truncation will bias the steady-state probabilities if the blocking probabilities are not negligible. In Chapter 5, we introduce the Unobserved Waiting Customer approximation as a way to reduce this natural bias incurred when approximating an infinite buffer system. Among the queues considered within this chapter is an N-queue system with exhaustive service and customers who may or may not be impatient. In Chapter 6, we extend this approximation to allow for reneging rates that depend on a customer's place in their queue. This is applied to an N-queue polling system which generalizes the model of Chapter 4.
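    The matrix-analytic recursion underlying such algorithms can be illustrated in its simplest (1x1) instance, where the quasi-birth-and-death process reduces to an M/M/1 queue and the rate matrix R reduces to the utilisation. Parameters below are assumed for illustration; for level-independent QBDs, R is the minimal non-negative solution of A0 + R A1 + R^2 A2 = 0.

```python
# 1x1 QBD blocks for an M/M/1 queue: level-up, local, and level-down rates.
lam, mu = 0.7, 1.0
A0, A1, A2 = lam, -(lam + mu), mu

# Fixed-point iteration R <- -(A0 + R^2 A2) * A1^{-1}, starting from R = 0,
# converges to the minimal solution (here, rho = lam/mu).
R = 0.0
for _ in range(200):
    R = -(A0 + R * R * A2) / A1

pi0 = 1.0 - R              # boundary probability (scalar case)
mean_level = R / (1.0 - R) # matrix-geometric mean level, E[N] = R/(1-R)
print(round(R, 6), round(mean_level, 4))  # → 0.7 2.3333
```

With genuine matrix blocks the same iteration runs on matrices (with A1 inverted), and the stationary vector is matrix-geometric: pi_n = pi_0 R^n.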

    Performance modelling of replication protocols

    PhD Thesis. This thesis is concerned with the performance modelling of data replication protocols. Data replication is used to provide fault tolerance and to improve the performance of a distributed system. Replication not only needs extra storage but also carries an extra cost when performing an update. It is not always clear which algorithm will give the best performance in a given scenario, how many copies should be maintained, or where these copies should be located to yield the best performance. The consistency requirements also change with the application. One has to choose these parameters to maximize reliability and speed and to minimize cost. A study showing the effect of changes in different parameters on the performance of these protocols is helpful in making these decisions. With the use of data replication techniques in wide-area systems, where hundreds or even thousands of sites may be involved, it has become important to evaluate the performance of the schemes maintaining copies of data. This thesis evaluates the performance of replication protocols that provide different levels of data consistency, ranging from strong to weak. Protocols that try to integrate strong and weak consistency are also examined. Queueing theory techniques are used to evaluate the performance of these protocols. The performance measures of interest are the response times of read and write jobs. These times are evaluated both when replicas are reliable and when they are subject to random breakdowns and repairs. (Funding: Commonwealth Scholarship)
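    One trade-off behind such protocol and placement choices can be illustrated with a small generic sketch (our example with assumed numbers, not a model from the thesis): the probability that a quorum-based operation succeeds when each of n replicas is independently up with probability p.

```python
from math import comb

def quorum_availability(n: int, q: int, p: float) -> float:
    # P(at least q of n replicas are up), replicas failing independently.
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(q, n + 1))

p = 0.9  # assumed per-replica availability
for n in (3, 5, 7):
    maj = n // 2 + 1
    print(n,
          round(quorum_availability(n, maj, p), 4),  # majority-quorum operation
          round(quorum_availability(n, n, p), 4))    # write-all (ROWA-style)
```

Majority quorums become more available as n grows, while operations that must reach every copy become less so; this is one reason the number and placement of copies interact with the chosen consistency protocol.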