494 research outputs found

    A Quality Systems Economic-Risk Design Theoretical Framework

    Quality systems, including control chart theory and sampling plans, have become essential tools for developing business processes. Since 1928, research has been conducted on economic-risk designs for specific types of control charts or sampling plans. However, no theoretical or applied research has attempted to combine these related theories into a synthesized theoretical framework of quality systems economic-risk design. This research proposes to develop such a framework through a qualitative research synthesis of economic-risk design models for sampling plans and control charts. The resulting framework will be useful in guiding future research into the theory and application of economic-risk quality systems design.
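    To make the notion of economic-risk design concrete, the sketch below evaluates a simplified expected-cost-per-hour function for a Shewhart-type control chart and searches a small design grid, in the spirit of classic economic-design cost models (for example, Duncan's formulation). The cost structure, parameter names, and values are illustrative assumptions, not models drawn from the abstract above.

```python
# Simplified expected-cost-per-hour model for the economic design of a
# Shewhart-type control chart. All parameters are hypothetical placeholders.
from scipy.stats import norm

def hourly_cost(n, h, k,
                sampling_fixed=1.0,      # fixed cost per sample taken
                sampling_per_unit=0.1,   # cost per unit inspected
                false_alarm_cost=50.0,   # cost of investigating a false alarm
                ooc_cost_per_hour=100.0, # cost of running out of control
                shift=2.0,               # assignable-cause shift (sigma units)
                failure_rate=0.02):      # assignable causes per hour
    """Approximate expected cost per hour for sample size n, sampling
    interval h (hours), and control-limit width k (sigma units)."""
    alpha = 2 * norm.sf(k)                        # false-alarm probability per sample
    beta = norm.cdf(k - shift * n**0.5) - norm.cdf(-k - shift * n**0.5)
    arl_ooc = 1.0 / (1.0 - beta)                  # samples needed to detect the shift
    sampling = (sampling_fixed + sampling_per_unit * n) / h
    false_alarms = false_alarm_cost * alpha / h   # roughly alpha false alarms per sample
    detection_delay = h * arl_ooc                 # expected hours out of control per incident
    out_of_control = ooc_cost_per_hour * failure_rate * detection_delay
    return sampling + false_alarms + out_of_control

# Crude grid search over the three design parameters.
best = min((hourly_cost(n, h, k), n, h, k)
           for n in range(2, 11)
           for h in (0.5, 1.0, 2.0, 4.0)
           for k in (2.5, 3.0, 3.5))
print("cost/hour %.2f at n=%d, h=%.1f, k=%.1f" % best)
```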

    Vol. 7, No. 2 (Full Issue)


    Analysis of life cycle management leading to pharmaceutical process improvement by computer simulation

    The pharmaceutical industry operates in a rapidly changing environment with increasing demands and competition while itself becoming less innovative. The development of medicinal products and their shift in value toward ordinary goods force manufacturers to produce high-quality products in the most cost-effective way. By analyzing the life cycle of medicinal products and its management, the present challenges as well as appropriate solutions were identified. One such solution is computer simulation, which is why two approved production processes of film-coated tablets were optimized by discrete-event simulation. Through this, a methodological approach was developed to build, verify, and validate models of the as-is production processes. Afterwards, the models were modified into different optimization scenarios to test different shift systems. These shift systems were evaluated with respect to campaign duration, production costs, and the capacity utilization of employees and machines. The implemented model changes halved the campaign duration and reduced production costs by a double-digit percentage. Thus, process optimization by computer simulation proved to be a remarkable strategy in the life cycle management of medicinal products.
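    As a rough illustration of the discrete-event simulation approach described above, the sketch below models a two-stage film-coated-tablet campaign (compression, then coating) under a configurable shift system using the simpy library. The stage durations, batch count, and shift lengths are invented placeholders, not figures from the study.

```python
# Minimal discrete-event sketch of a two-stage tablet campaign under a shift
# calendar, using simpy. All durations and capacities are hypothetical.
import simpy

SHIFT_HOURS = 8                 # length of one working shift
SHIFTS_PER_DAY = 2              # e.g. a two-shift system
BATCHES = 20                    # batches in the campaign
PRESS_H, COATER_H = 3.0, 5.0    # processing hours per batch and stage

def working(env):
    """True while the current hour falls inside a working shift."""
    return (env.now % 24) < SHIFT_HOURS * SHIFTS_PER_DAY

def wait_for_shift(env):
    """Idle until the next shift starts if the plant is currently closed."""
    while not working(env):
        yield env.timeout(1)

def batch(env, name, press, coater, done):
    # Each batch passes through compression and then coating; in this
    # simplification a step started late in a shift may run past its end.
    for machine, hours in ((press, PRESS_H), (coater, COATER_H)):
        with machine.request() as req:
            yield req
            yield from wait_for_shift(env)
            yield env.timeout(hours)
    done.append((name, env.now))

def campaign(env):
    press = simpy.Resource(env, capacity=1)
    coater = simpy.Resource(env, capacity=1)
    done = []
    for i in range(BATCHES):
        env.process(batch(env, f"batch-{i}", press, coater, done))
    return done

env = simpy.Environment()
finished = campaign(env)
env.run()
print(f"campaign duration: {max(t for _, t in finished):.1f} h "
      f"for {len(finished)} batches")
```

    Comparing runs with different SHIFTS_PER_DAY values gives the kind of campaign-duration and utilization comparison the abstract describes, though the real study's models and cost figures are of course far more detailed.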

    Quantitative evaluation of the effectiveness of official animal disease surveillance programmes


    NUC BMAS


    Statistical process control by quantile approach.

    Most quality control and quality improvement procedures involve making assumptions about the distributional form of the data used, usually that the data are normally distributed. It is commonplace to find processes that generate non-normally distributed data; Weibull, logistic, and mixture data, for example, are increasingly encountered. Any method that seeks to avoid transforming non-normal data requires techniques for identifying the appropriate distributions, and even in cases where the appropriate distributions are known, such methods are often intractable to implement.

    This research is concerned with statistical process control (SPC), which can be applied to both variable and attribute data. The objective of SPC is to control a process in an ideal situation with respect to a particular product specification. One of the several measurement tools of SPC is the control chart, and this research is mainly concerned with control charts for process monitoring and quality improvement. We believe the control chart is a useful process-monitoring technique when a source of variability is present, since it provides a signal that the process must be investigated. In general, Shewhart control charts assume that the data follow a normal distribution; hence, most SPC techniques have been derived and constructed using a concept of quality that depends on the normal distribution. In reality, data such as chemical process data and lifetime data are often not normal, so if a control chart for x̄ or R is constructed under the assumption of normality when the data are in fact non-normal, it will give inaccurate results. Schilling and Nelson (1976) investigated, under the central limit theorem, the effect of non-normality on control charts and concluded that non-normality is usually not a problem for subgroup sizes of four or more; however, for smaller subgroup sizes, and especially for individual measurements, non-normality can be a serious problem.

    The literature review indicates that there are real problems in dealing with statistical process control for non-normal and mixture distributions. This thesis provides a quantile approach for dealing with non-normal distributions in order to construct a median rankit control chart. The quantile approach is also used to calculate the process capability index and the average run length (ARL), and to construct multivariate control charts and control charts for mixture distributions in non-normal situations. This methodology can easily be adopted by practitioners of statistical process control.
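    A minimal sketch of the quantile idea follows, assuming the usual convention of placing control limits at the 0.135% and 99.865% quantiles (the non-normal analogue of 3-sigma limits). The Weibull model and fitted-quantile limits are illustrative assumptions; this is not the thesis's median rankit construction.

```python
# Quantile-based control limits for non-normal data: instead of mean +/- 3*sigma,
# place limits at the 0.135% and 99.865% quantiles of a fitted distribution.
# The Weibull choice and all parameters are illustrative assumptions.
import numpy as np
from scipy import stats

# In-control data from a skewed (Weibull) process.
data = stats.weibull_min.rvs(c=1.5, scale=10.0, size=500, random_state=1)

# Fit a Weibull distribution to the in-control data (location fixed at 0).
c_hat, loc_hat, scale_hat = stats.weibull_min.fit(data, floc=0)

# Quantile-based limits with the same tail probabilities as 3-sigma limits.
p_low, p_high = 0.00135, 0.99865
lcl = stats.weibull_min.ppf(p_low, c_hat, loc_hat, scale_hat)
ucl = stats.weibull_min.ppf(p_high, c_hat, loc_hat, scale_hat)
center = stats.weibull_min.ppf(0.5, c_hat, loc_hat, scale_hat)  # median center line
print(f"LCL={lcl:.2f}  CL={center:.2f}  UCL={ucl:.2f}")

# Monitor new observations from a shifted process and flag signals.
new_obs = stats.weibull_min.rvs(c=1.5, scale=13.0, size=20, random_state=2)
signals = np.where((new_obs < lcl) | (new_obs > ucl))[0]
print("out-of-control points at indices:", signals)
```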

    Characterizing, managing and monitoring the networks for the ATLAS data acquisition system

    Particle physics studies the constituents of matter and the interactions between them. Many of the elementary particles do not exist under normal circumstances in nature; however, they can be created and detected during energetic collisions of other particles, as is done in particle accelerators. The Large Hadron Collider (LHC) being built at CERN will be the world's largest circular particle accelerator, colliding protons at energies of 14 TeV. Only a very small fraction of the interactions will give rise to interesting phenomena. The collisions produced inside the accelerator are studied using particle detectors. ATLAS is one of the detectors built around the LHC accelerator ring. During its operation, it will generate a data stream of 64 Terabytes/s. A Trigger and Data Acquisition System (TDAQ) is connected to ATLAS; its function is to acquire digitized data from the detector and apply trigger algorithms to identify the interesting events. Achieving this requires the power of over 2000 computers plus an interconnecting network capable of sustaining a throughput of over 150 Gbit/s with minimal loss and delay. The implementation of this network required a detailed study of the available switching technologies in order to choose the appropriate components. We developed an FPGA-based platform (the GETB) for testing network devices. The GETB system proved to be flexible enough to be used as the basis of three different network-related projects, and an analysis of the traffic pattern generated by the ATLAS data-taking applications was also possible thanks to the GETB. Then, while the network was being assembled, parts of the ATLAS detector began commissioning; this task relied on a functional network, so it was imperative to be able to continuously identify existing and usable infrastructure and manage its operation. In addition, monitoring was required to detect any overload conditions, with an indication of where the excess demand was being generated. We developed tools to ease the maintenance of the network and to automatically produce inventory reports. We created a system that discovers the network topology, which allowed us to verify the installation and track its progress. A real-time traffic visualization system has been built, allowing us to see at a glance which network segments are heavily utilized. Later, as the network reaches production status, it will be necessary to extend the monitoring to identify individual applications' use of the available bandwidth. We studied a traffic monitoring technology that will give us a better understanding of how the network is used. This technology, based on packet sampling, makes it possible to have a complete view of the network: not only its total capacity utilization, but also how this capacity is divided among users and software applications. This thesis describes the establishment of a set of tools designed to characterize, monitor and manage complex, large-scale, high-performance networks. We describe in detail how these tools were designed, calibrated, deployed and exploited. The work that led to this thesis spans more than four years and closely follows the development phases of the ATLAS network: its design, its installation and, finally, its current and future operation.
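    The packet-sampling estimation mentioned above can be illustrated with a generic sFlow-style calculation: if one in every N packets is sampled, scaling each sampled packet's size by N yields an estimate of per-application traffic. The sampling rate, ports, and records below are hypothetical and are not data from the ATLAS network.

```python
# Generic packet-sampling estimator (sFlow-style): one of every N packets is
# sampled, so each sampled packet's byte count is scaled by N to estimate the
# total traffic per application. Ports, rate, and records are hypothetical.
from collections import defaultdict

SAMPLING_RATE = 1000  # one packet sampled per 1000 forwarded (assumed)

# Each sampled record: (source application port, packet size in bytes)
sampled_packets = [(9001, 1500), (9001, 1500), (8443, 400), (9001, 900)]

estimated_bytes = defaultdict(int)
for port, size in sampled_packets:
    estimated_bytes[port] += size * SAMPLING_RATE  # scale up by the sampling rate

total = sum(estimated_bytes.values())
for port, est in sorted(estimated_bytes.items(), key=lambda kv: -kv[1]):
    print(f"port {port}: ~{est / 1e6:.1f} MB "
          f"({100 * est / total:.0f}% of estimated traffic)")
```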

    Sampling Algorithms for Evolving Datasets

    Perhaps the most flexible synopsis of a database is a uniform random sample of the data; such samples are widely used to speed up the processing of analytic queries and data-mining tasks, to enhance query optimization, and to facilitate information integration. Most of the existing work on database sampling focuses on how to create or exploit a random sample of a static database, that is, a database that does not change over time. The assumption of a static database, however, severely limits the applicability of these techniques in practice, where data is often not static but continuously evolving. In order to maintain the statistical validity of the sample, any changes to the database have to be appropriately reflected in the sample.

    In this thesis, we study efficient methods for incrementally maintaining a uniform random sample of the items in a dataset in the presence of an arbitrary sequence of insertions, updates, and deletions. We consider instances of the maintenance problem that arise when sampling from an evolving set, from an evolving multiset, from the distinct items in an evolving multiset, or from a sliding window over a data stream. Our algorithms completely avoid any accesses to the base data and can be several orders of magnitude faster than algorithms that do rely on such expensive accesses. The improved efficiency of our algorithms comes at virtually no cost: the resulting samples are provably uniform and only a small amount of auxiliary information is associated with the sample. We show that the auxiliary information not only facilitates efficient maintenance, but it can also be exploited to derive unbiased, low-variance estimators for counts, sums, averages, and the number of distinct items in the underlying dataset.

    In addition to sample maintenance, we discuss methods that greatly improve the flexibility of random sampling from a system's point of view. More specifically, we initiate the study of algorithms that resize a random sample upwards or downwards. Our resizing algorithms can be exploited to dynamically control the size of the sample when the dataset grows or shrinks; they facilitate resource management and help to avoid under- or oversized samples. Furthermore, in large-scale databases with data being distributed across several remote locations, it is usually infeasible to reconstruct the entire dataset for the purpose of sampling. To address this problem, we provide efficient algorithms that directly combine the local samples maintained at each location into a sample of the global dataset. We also consider a more general problem, where the global dataset is defined as an arbitrary set or multiset expression involving the local datasets, and provide efficient solutions based on hashing.
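    As a hedged illustration of the basic maintenance problem, the sketch below implements classic reservoir sampling (Algorithm R), which keeps a provably uniform fixed-size sample over an insertion-only stream without touching the base data. It does not cover the deletions, multisets, sliding windows, or resizing algorithms studied in the thesis.

```python
# Classic reservoir sampling (Algorithm R): maintains a uniform random sample
# of fixed size k over an insertion-only stream, without accessing base data.
import random

class ReservoirSample:
    def __init__(self, k, seed=None):
        self.k = k                      # target sample size
        self.n = 0                      # number of items seen so far
        self.sample = []                # the current uniform sample
        self.rng = random.Random(seed)

    def insert(self, item):
        """Process one inserted item; after n insertions, every item seen so
        far is in the sample with probability k/n."""
        self.n += 1
        if len(self.sample) < self.k:
            self.sample.append(item)
        else:
            j = self.rng.randrange(self.n)   # uniform index in [0, n)
            if j < self.k:
                self.sample[j] = item        # replace a random slot

# Usage: sample 5 items uniformly from a stream of 10,000 insertions.
rs = ReservoirSample(k=5, seed=1)
for item in range(10_000):
    rs.insert(item)
print(rs.sample)
```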