
    Automated process modelling and continuous improvement.

    This thesis discusses and demonstrates the benefits of simulating and optimising a manufacturing control system in order to improve the flow of production material through a system with high-variety, low-volume output requirements. The need for, and factors affecting, synchronous flow are discussed, along with the consequences of poor flow and various solutions for overcoming it. A study and comparison of various planning and control methodologies designed to promote the flow of material through a manufacturing system was carried out to identify a suitable system to model. The research objectives are to: (i) identify the best system to model that will promote flow; (ii) identify the potential failure mechanisms within that system that have not yet been resolved; and (iii) produce a model that can fully resolve, or reduce the probability of, the identified failure mechanisms having an effect. This research led to an investigation into the main elements of a Drum-Buffer-Rope (DBR) environment in order to generate a comprehensive description of the requirements for DBR implementation and operation, and to attempt to address the limitations identified in the research literature. These requirements have been grouped into three areas: a. plant layout and kanban controls, b. planning and control, and c. DBR infrastructure. A DBR model combined with genetic algorithms was developed with the aim of maximising the throughput level for an individual product mix. The results of the experiments have identified new knowledge on how DBR processes facilitate and impede material flow synchronisation within high-variety/low-volume manufacturing environments. The results were limited by the assumptions and constraints of the model; the research has highlighted that as such a model becomes more complex it also becomes more volatile and more difficult to control. It concludes that further research is required, extending the complexity of the model by adding more product mixes and system variability and comparing the outcomes with the results of this research. The model is then expected to be useful for enabling a quick system response to large variations in product demand within the mixed-model manufacturing industry.
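
    As a rough illustration of the optimisation step described in this abstract, the sketch below shows how a genetic algorithm might search over a job release sequence to maximise the throughput reported by a DBR-style model. The simulate_dbr_throughput function, the job names, and all times and parameters are hypothetical placeholders; in the thesis the fitness function would be the full DBR simulation, not this toy drum-capacity check.

        import random

        # Hypothetical stand-in for a DBR simulation model: it scores a release
        # sequence of jobs by how many the drum (bottleneck) completes in one period.
        def simulate_dbr_throughput(sequence, process_time, drum_capacity=480):
            used, completed = 0.0, 0
            for job in sequence:
                if used + process_time[job] > drum_capacity:
                    break                      # drum time exhausted for this period
                used += process_time[job]
                completed += 1
            return completed

        def crossover(a, b):
            # Order crossover: keep a slice of parent a, fill the rest in b's order.
            i, j = sorted(random.sample(range(len(a)), 2))
            middle = a[i:j]
            rest = [x for x in b if x not in middle]
            return rest[:i] + middle + rest[i:]

        def mutate(seq, rate=0.1):
            seq = seq[:]
            if random.random() < rate:
                i, j = random.sample(range(len(seq)), 2)
                seq[i], seq[j] = seq[j], seq[i]
            return seq

        def ga_optimise(jobs, process_time, pop_size=30, generations=50):
            population = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=lambda s: simulate_dbr_throughput(s, process_time),
                                reverse=True)
                parents = population[:pop_size // 2]          # truncation selection
                children = [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(pop_size - len(parents))]
                population = parents + children
            return max(population, key=lambda s: simulate_dbr_throughput(s, process_time))

        # Hypothetical jobs for three product families and their drum processing times.
        jobs = [f"{p}{i}" for p in "ABC" for i in range(10)]
        process_time = {j: {"A": 30, "B": 45, "C": 60}[j[0]] for j in jobs}
        best = ga_optimise(jobs, process_time)
        print("jobs released within drum capacity:", simulate_dbr_throughput(best, process_time))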

    Towards Power- and Energy-Efficient Datacenters

    As the Internet evolves, cloud computing is now a dominant form of computation in modern life. Warehouse-scale computers (WSCs), or datacenters, comprising the foundation of this cloud-centric web, have been able to deliver satisfactory performance to both the Internet companies and their customers. With the increased focus on and popularity of the cloud, however, datacenter loads rise and grow rapidly, and Internet companies need more computing capacity to serve such demand. Unfortunately, power and energy are often the major limiting factors prohibiting datacenter growth: it is often the case that no more servers can be added to a datacenter without surpassing the capacity of the existing power infrastructure. This dissertation investigates the issues of power and energy usage in a modern datacenter environment. We identify the sources of power and energy inefficiency at three levels and provide insights and solutions to address each of these problems, aiming to prepare datacenters for critical future growth. We start at the datacenter level and find that peak provisioning and improper service placement in multi-level power delivery infrastructures fragment the power budget inside production datacenters, degrading the compute capacity the existing infrastructure can support. We find that the heterogeneity among datacenter workloads is key to addressing this issue and design systematic methods to reduce the fragmentation and improve the utilization of the power budget. The dissertation then narrows its focus to examine the energy usage of individual servers running cloud workloads. In particular, we examine the power management mechanisms employed in these servers and find that the coarse time granularity of these mechanisms is one critical factor that leads to excessive energy consumption. We propose an intelligent, low-overhead solution on top of emerging finer-granularity voltage/frequency boosting circuits that pinpoints and boosts queries that are likely to lengthen the tail of the latency distribution and can reap more benefit from the voltage/frequency boost, improving energy efficiency without sacrificing quality of service. The final part of this dissertation investigates how a fundamentally more efficient computing substrate, field-programmable gate arrays (FPGAs), benefits datacenter power and energy efficiency. Unlike other types of hardware accelerators, FPGAs can be reconfigured on the fly to provide fine-grained control over hardware resource allocation, and they present a unique set of challenges for optimal workload scheduling and resource allocation. We aim to design a set of coordinated algorithms to manage these two key factors simultaneously and fully explore the benefit of deploying FPGAs in the highly varying cloud environment.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144043/1/hsuch_1.pd
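
    To make the per-query boosting idea above concrete, here is a minimal sketch of a policy that boosts the CPU only for requests predicted to miss a tail-latency target. The latency objective, frequencies, predictor, and field names are all assumptions for illustration, not the dissertation's actual mechanism or settings.

        import time

        TAIL_TARGET_MS = 20.0               # assumed service-level latency objective
        BASE_FREQ, BOOST_FREQ = 2.0, 3.0    # GHz, illustrative operating points

        def predicted_total_ms(query, elapsed_ms):
            # Placeholder predictor: elapsed time plus an estimate that scales with
            # the query's remaining work (e.g. documents still to score).
            return elapsed_ms + 0.05 * query["remaining_work"]

        def choose_frequency(query, start_time):
            elapsed_ms = (time.monotonic() - start_time) * 1000.0
            if predicted_total_ms(query, elapsed_ms) > TAIL_TARGET_MS:
                return BOOST_FREQ           # likely tail query: spend power to catch up
            return BASE_FREQ                # on track: stay at the efficient frequency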

    An Efficient Execution Model for Reactive Stream Programs

    Stream programming is a paradigm where a program is structured as a set of computational nodes connected by streams. Focusing on data moving between computational nodes via streams, this programming model fits well for applications that process long sequences of data. We call such applications reactive stream programs (RSPs) to distinguish them from stream programs with rather small and finite input data. In stream programming, concurrency is expressed implicitly via communication streams. This helps to reduce the complexity of parallel programming. For this reason, stream programming has gained popularity as a programming model for parallel platforms. However, it is also challenging to analyse and improve the performance without an understanding of the program's internal behaviour. This thesis targets an efficient execution model for deploying RSPs on parallel platforms. This execution model includes a monitoring framework to understand the internal behaviour of RSPs, scheduling strategies for RSPs on uniform shared-memory platforms, and mapping techniques for deploying RSPs on heterogeneous distributed platforms. The foundation of the execution model is a study of the performance of RSPs in terms of throughput and latency. This study includes quantitative formulae for throughput and latency, and the identification of factors that influence these performance metrics. Based on the study of RSP performance, this thesis exploits characteristics of RSPs to derive effective scheduling strategies on uniform shared-memory platforms. Aiming to optimise both throughput and latency, these scheduling strategies are implemented in two heuristic-based schedulers. Both are designed to be centralised in order to provide load balancing for RSPs with dynamic behaviour as well as dynamic structures. The first uses the notion of positive and negative data demands on each stream to determine scheduling priorities; this scheduler is independent of the runtime system. The second requires the runtime system to provide the position information of each computational node in the RSP and uses that to decide scheduling priorities. Our experiments show that both schedulers provide similar performance while being significantly better than a reference implementation without dynamic load balancing. Also based on the study of RSP performance, we present two new heuristic partitioning algorithms which are used to map RSPs onto heterogeneous distributed platforms: Kernighan-Lin Adaptation (KLA) and Congestion Avoidance (CA), where the main objective is to optimise throughput. This is a multi-parameter optimisation problem where existing graph partitioning algorithms are not applicable. Compared to the generic meta-heuristic simulated annealing algorithm, both proposed algorithms achieve equally good or better results. KLA is faster for small benchmarks while slower for large ones. In contrast, CA is always orders of magnitude faster, even for very large benchmarks.
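
    The sketch below is one possible reading of the first scheduler described above: a runnable node whose output streams show high positive demand (starved consumers) is run first. The Stream and Node structures, the demand rule, and all capacities are illustrative assumptions, not the thesis's actual scheduler implementation.

        # A node is runnable when all its input streams hold data; among runnable
        # nodes, the one whose output streams are most demanded runs next.
        class Stream:
            def __init__(self, capacity=8):
                self.items, self.capacity = [], capacity
            def demand(self):
                # Positive when consumers are starved, negative when the buffer is full.
                return (self.capacity // 2) - len(self.items)

        class Node:
            def __init__(self, name, inputs, outputs, work):
                self.name, self.inputs, self.outputs, self.work = name, inputs, outputs, work
            def ready(self):
                return all(s.items for s in self.inputs)
            def priority(self):
                # Higher downstream demand => more urgent to run this node.
                return sum(s.demand() for s in self.outputs)

        def schedule_step(nodes):
            runnable = [n for n in nodes if n.ready()]
            if not runnable:
                return None
            node = max(runnable, key=lambda n: n.priority())
            node.work()                     # execute one firing of the chosen node
            return node.name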

    A methodology for software performance modeling and its application to a border inspection system

    It is essential that software systems meet their performance objectives. Many factors affect software performance, and it is fundamental to identify those factors and the magnitude of their effects early in the software lifecycle to avoid costly and extensive changes to software design, implementation, or requirements. In the last decade the development of techniques and methodologies to carry out performance analysis in the early stages of the software lifecycle has gained a lot of attention within the research community. Different approaches to evaluating software performance have been developed, each characterized by a certain software specification and performance modeling notation. In this thesis we present a methodology for predictive performance modeling and analysis of software systems. We use the Unified Modeling Language (UML) as a software modeling notation and Layered Queuing Networks (LQN) as a performance modeling notation. Our focus is on the definition of a UML-to-LQN transformation. We extend existing approaches by applying the transformation to a different set of UML diagrams, and propose a few extensions to the current UML Profile for Schedulability, Performance, and Time, which we use to annotate UML diagrams with performance-related information. We test the applicability of our methodology to the performance evaluation of a complex software system used at border entry ports to grant or deny access to incoming travelers.
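
    As a very rough illustration of how per-step performance annotations feed a queueing model, the snippet below turns annotated service demands into a crude end-to-end response-time estimate by treating each step as an independent M/M/1 queue. A real LQN model also captures layered software contention, and the step names, demands, and arrival rate here are invented, not taken from the thesis's case study.

        def step_response_time(service_time_s, arrival_rate_per_s):
            utilization = arrival_rate_per_s * service_time_s
            if utilization >= 1.0:
                raise ValueError("step is saturated (utilization >= 1)")
            return service_time_s / (1.0 - utilization)   # M/M/1 response time

        # Hypothetical annotated scenario: (step name, service demand in seconds).
        scenario = [("scan passport", 0.5), ("query watch list", 1.2), ("print record", 0.3)]
        arrival_rate = 0.4   # travellers per second (assumed)

        total = sum(step_response_time(demand, arrival_rate) for _, demand in scenario)
        print(f"estimated end-to-end response time: {total:.2f} s")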

    Production line: effect of different inspection station allocation under accepts reject inspection policy

    The manufacturing system is one of the most important parts of any organization, as it produces the output of the company which generates its profit. It consists in part of the production line, which acts as the centre of production, creating the end product, whether half finished or fully finished. It is a significant problem for a company to determine the better arrangement and combination of the tools or machines available in this area of the organization, as different combinations greatly affect the productivity of the production line and, with it, the profit of the company. This research analyzes a new production line in a metal stamping company, prompted by a complaint from the company, and explores a better layout or arrangement of the production line with respect to the reported problem and the constraint of the provided accept-the-defect-and-repair inspection policy. The production line is first analyzed, in response to the complaint, through computer simulation. After the problem has been identified, different alternatives are tried in an attempt to find a better layout or arrangement of the production line. The effect of different inspection station allocation layouts is then evaluated in terms of production time. The research found the cause of the long production time in the factory to be the lengthy inspection steps, which consume much of the production time. After a few alternatives for allocating the inspection stations were explored, it became clear that the current approach of the production line is the better one. Interestingly, even when the number of inspection stations is reduced, the production time does not decrease but instead increases. This finding contradicts the intuition that fewer stations mean shorter time. It could form the basis of future research on the allocation of inspection stations under a given policy, and it is also helpful in practice for companies seeking to improve their production time. To date, no research has addressed this issue for the given inspection policy.
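
    The toy model below illustrates the kind of what-if comparison the simulation study enables: it estimates total production time for different numbers of parallel inspection stations on a single-machine line. All processing and inspection times are invented, the structure is greatly simplified, and it only reproduces the naive intuition that fewer stations take longer, whereas the real line studied in the thesis behaved differently.

        import heapq

        def makespan(n_parts, n_inspectors, process_time=2.0, inspect_time=5.0):
            # Single stamping machine feeding a pool of parallel inspection stations.
            inspectors = [0.0] * n_inspectors        # time each station becomes free
            heapq.heapify(inspectors)
            machine_free, finish = 0.0, 0.0
            for _ in range(n_parts):
                machine_free += process_time         # part leaves the stamping machine
                free_at = heapq.heappop(inspectors)  # earliest available inspector
                start = max(machine_free, free_at)
                done = start + inspect_time
                heapq.heappush(inspectors, done)
                finish = max(finish, done)
            return finish

        for k in (1, 2, 3):
            print(f"{k} inspection station(s): total time {makespan(100, k):.0f} min")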

    Ethernet Networks for Real-Time Use in the ATLAS Experiment

    Ethernet became today's de-facto standard technology for local area networks. Defined by the IEEE 802.3 and 802.1 working groups, the Ethernet standards cover technologies deployed at the first two layers of the OSI protocol stack. The architecture of modern Ethernet networks is based on switches. The switches are devices usually built using a store-and-forward concept. At the highest level, they can be seen as a collection of queues and mathematically modelled by means of queuing theory. However, the traffic profiles on modern Ethernet networks are rather different from those assumed in classical queuing theory. The standard recommendations for evaluating the performance of network devices define the values that should be measured but do not specify a way of reconciling these values with the internal architecture of the switches. The introduction of the 10 Gigabit Ethernet standard provided a direct gateway from the LAN to the WAN by means of the WAN PHY. Certain aspects related to the actual use of WAN PHY technology were only vaguely defined by the standard. The ATLAS experiment at CERN is scheduled to start operation in 2007. The communication infrastructure of the Trigger and Data Acquisition System will be built using Ethernet networks. The real-time operational needs impose a requirement for predictable performance on the network part. In view of the diversity of the architectures of Ethernet devices, testing and modelling are required in order to make sure the full system will operate predictably. This thesis focuses on the testing part of the problem and addresses issues in determining the performance for both LAN and WAN connections. The problem of reconciling results from measurements with architectural details of the switches is also tackled. We developed a scalable traffic generator system based on commercial-off-the-shelf Gigabit Ethernet network interface cards. The generator was able to transmit traffic at the nominal Gigabit Ethernet line rate for all frame sizes specified in the Ethernet standard. The calculation of latency was performed with accuracy in the range of +/- 200 ns. We indicate how certain features of switch architectures may be identified through accurate throughput and latency values measured for specific traffic distributions. At this stage, we present a detailed analysis of Ethernet broadcast support in modern switches. We use a similar hands-on approach to address the problem of extending Ethernet networks over long distances. Based on the 1 Gbit/s traffic generator used in the LAN, we develop a methodology to characterise point-to-point connections over long-distance networks. At higher speeds, a combination of commercial traffic generators and high-end servers is employed to determine the performance of the connection. We demonstrate that the new 10 Gigabit Ethernet technology can interoperate with the installed base of SONET/SDH equipment through a series of experiments on point-to-point circuits deployed over long-distance network infrastructure in a multi-operator domain. In this process, we provide a holistic view of the end-to-end performance of 10 Gigabit Ethernet WAN PHY connections through a sequence of measurements starting at the physical transmission layer and continuing up to the transport layer of the OSI protocol stack.
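
    As a back-of-the-envelope companion to the line-rate figures mentioned above, the snippet below computes the theoretical maximum frame rate and frame throughput of Gigabit Ethernet for a given frame size, accounting for the 8-byte preamble/SFD and 12-byte inter-frame gap that accompany every frame on the wire. It is a generic calculation, not the thesis's measurement code.

        LINE_RATE_BPS = 1_000_000_000       # 1 Gbit/s
        PER_FRAME_OVERHEAD_BYTES = 8 + 12   # preamble + SFD, plus inter-frame gap

        def max_frame_rate(frame_bytes):
            wire_bits = (frame_bytes + PER_FRAME_OVERHEAD_BYTES) * 8
            return LINE_RATE_BPS / wire_bits        # frames per second at line rate

        for size in (64, 512, 1518):                # standard Ethernet frame sizes
            fps = max_frame_rate(size)
            frame_bps = fps * size * 8
            print(f"{size:5d} B frames: {fps:10.0f} fps, {frame_bps/1e6:7.1f} Mbit/s of frames")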

    A review on equipment protection and system protection relay in power system

    Power system equipment is configured and connected together at multiple voltage levels in the existing electrical power system. There is a wide variety of electrical equipment in the power system, from the generation side to the distribution side. Consequently, appropriate protection must be in place to prevent disturbances that lead to voltage instability, voltage collapse and, eventually, a total blackout of the power system. Understanding each component of the system protection is critical, so that any abnormal condition or failure can be analyzed and resolved effectively as the power system network changes and develops rapidly. The enhancement of power quality can therefore be achieved by protecting the equipment with protection relays. Moreover, the design of a systematic network is crucial for the system protection itself. Several types of protective equipment and protection techniques are considered in this paper. Hence, the existing types and methods of system protection in the power system network are reviewed.
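
    One common relay function covered by reviews of this kind is inverse-time overcurrent protection. As a small worked illustration, the snippet below evaluates the IEC 60255 "standard inverse" characteristic, where trip time falls as fault current rises; the pickup current and time-multiplier setting are illustrative values, not settings taken from the paper.

        # IEC 60255 standard-inverse curve: t = TMS * 0.14 / ((I/Is)^0.02 - 1)
        def standard_inverse_trip_time(fault_current_a, pickup_current_a, tms):
            multiple = fault_current_a / pickup_current_a
            if multiple <= 1.0:
                return float("inf")             # below pickup: relay does not operate
            return tms * 0.14 / (multiple ** 0.02 - 1.0)

        pickup, tms = 400.0, 0.2                # assumed relay settings
        for fault in (800.0, 2000.0, 4000.0):   # fault currents in amperes
            t = standard_inverse_trip_time(fault, pickup, tms)
            print(f"fault {fault:6.0f} A -> trip in {t:.2f} s")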
