10 research outputs found

    The use of statistical process control in the pharmaceutical industry

    Statistical process control has gained major importance in recent years, owing to the very good results it provides and to the ease with which those results can be interpreted, even by people who are not specialists in the field. An essential quality that distinguishes statistical process control from other statistical quality-analysis methods is that it examines the process at all stages, not only at the final stage. Increasing competitiveness in all areas of industry has pushed quality-control methods to become more capable, and no organization can maintain a high standard without effective quality control. The pharmaceutical industry is one of the most important industries, playing an essential role in human health in particular and in the welfare of society as a whole. This application illustrates, by means of statistical indices, control charts and process capability indices, how statistical process control is used in the pharmaceutical industry, and highlights both the advantages and disadvantages of using it.
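
    As a minimal illustration of the kind of analysis this abstract describes, the sketch below computes Shewhart individuals-chart limits and the Cp/Cpk capability indices for a hypothetical tablet-weight series; the data and specification limits are invented for the example, not taken from the study.

        import numpy as np

        # Hypothetical tablet weights (mg); illustrative values only.
        rng = np.random.default_rng(1)
        weights = rng.normal(loc=500.0, scale=4.0, size=25)

        LSL, USL = 485.0, 515.0  # assumed specification limits (mg)
        mean, sigma = weights.mean(), weights.std(ddof=1)

        # Shewhart control limits: centre line +/- 3 standard deviations.
        ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

        # Capability indices under the normality assumption.
        cp = (USL - LSL) / (6 * sigma)
        cpk = min(USL - mean, mean - LSL) / (3 * sigma)

        print(f"UCL={ucl:.1f}  LCL={lcl:.1f}  Cp={cp:.2f}  Cpk={cpk:.2f}")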

    Use of Product Quality Review to Evaluate Quality and Process Capability: A Case Study of Ibuprofen in a Model Tablet Manufacture

    Product quality review in the pharmaceutical industry is a regulatory requirement comprising periodic evaluation of licensed pharmaceutical products to verify the consistency of the manufacturing process and the appropriateness of specifications. In this study, product quality and process capability in the manufacture of ibuprofen tablets were evaluated. A quality review of 39 batches produced in the year 2019 was conducted. Components for review included starting materials, critical in-process controls, finished product results, non-conformances, deviations and quality-relevant product complaints. Control charts and statistical analysis were used to trend results and compute process capability indices. Starting materials, in-process controls and finished product results complied with quality specifications. Process capability indices for tablet weight, size, dissolution and assay were greater than 1.0. The study showed that the established quality attributes of ibuprofen tablets were consistently achieved, and it was concluded that the manufacturing process was controlled and sufficient to assure reproducible outcomes.
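
    As context for the "greater than 1.0" benchmark, the short sketch below converts a capability index into the approximate nonconforming fraction implied under normality; the index values are illustrative, not figures from the study.

        from scipy.stats import norm

        def ppm_nonconforming(cpk: float) -> float:
            """Approximate parts-per-million beyond the nearer specification limit."""
            return norm.sf(3 * cpk) * 1e6

        for cpk in (0.8, 1.0, 1.33):
            print(f"Cpk={cpk:.2f} -> ~{ppm_nonconforming(cpk):.0f} ppm out of spec")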

    Use of Statistics Boot Camps to Encourage Success of Students with Diverse Background Knowledge

    This study analyzes the effects of targeted interventions on students with varying levels of statistics background knowledge. The interventions include homework assignments that remediate assessment results, and Maine Learning Assistant (MLA)-led statistics boot camps. They were carried out in an undergraduate Lean Six Sigma course in which students initially had a wide variety of prior statistics experience; such a large dispersion of background knowledge is paralleled in many STEM entry-level courses. Data on student participation in the interventions and later success on course exams were analyzed using a General Linear Model protocol to determine whether any intervention produced a statistically significant change in student success measures. Several models were run, each concentrating on a particular statistics background concept addressed in the boot camps and essential for course success. Examination success rates increased significantly from the cohort without these interventions (2016) to the cohort with these interventions (2018), and this improvement was maintained with the second intervention cohort in 2019. After analyzing reported backgrounds and major demographics, the statistics backgrounds of the 2016, 2018 and 2019 cohorts were not found to differ significantly from each other. However, the General Linear Model analysis found no strong effect of any single intervention on student success. Further collection of participation and success data in subsequent course offerings is encouraged, to improve the chance of detecting subtler intervention effects; qualitative data from student and MLA interviews may also help show how perceptions of statistics and teaching are influenced by the interventions.
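
    A rough sketch of the kind of General Linear Model analysis described, with invented records rather than the authors' data: regress exam score on cohort while controlling for a prior-statistics indicator.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical records: exam score, cohort year, prior-statistics indicator.
        df = pd.DataFrame({
            "score":  [72, 88, 65, 91, 84, 79, 95, 70, 86, 90],
            "cohort": ["2016", "2016", "2016", "2018", "2018",
                       "2018", "2019", "2019", "2019", "2019"],
            "prior_stats": [0, 1, 0, 1, 0, 1, 1, 0, 1, 1],
        })

        # General linear model: score ~ cohort + prior background.
        model = smf.ols("score ~ C(cohort) + prior_stats", data=df).fit()
        print(model.summary())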

    Estimation of process capability indices using the generalized Pareto distribution

    Process capability indices (PCIs) are a highly effective means of determining product quality and process performance. Among the many process capability indices that have been developed, Cp, Cpk, Cpm and Cpmk are the four most popular for normally distributed processes. However, when these traditional indices are used to assess a non-normally distributed process, they often lead to inaccurate results; PCIs based on the Clements percentile method and on the Burr percentile method were proposed to overcome this deficiency for non-normally distributed processes. The aim of this work is therefore to determine the performance and reliability of the bootstrap method for estimating confidence intervals for the indices based on the Clements percentile technique, the Burr percentile technique and the normal index, using the two-parameter generalized Pareto distribution. The standard bootstrap, percentile bootstrap and bias-corrected percentile bootstrap confidence intervals are then compared with one another. A series of simulations using the generalized Pareto distribution under different conditions showed that, in general, the index estimated with the Burr percentile method is the better estimator in terms of coverage percentage and average interval width. On the other hand, when Cpu = 0.50, the percentile bootstrap interval is, on average, a better estimator than the standard bootstrap and bias-corrected percentile intervals, while for the other conditions (Cpu = 1.0 and 1.5) the bias-corrected percentile bootstrap method gives the better confidence interval estimates under the Clements and, particularly, the Burr percentile methods.
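
    A minimal sketch of one strand of this comparison, with invented parameters: draw a generalized Pareto sample, estimate a Clements-style Cpu from percentiles, and form a percentile-bootstrap confidence interval. The specification limit and distribution parameters are assumptions for illustration.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(7)
        data = genpareto.rvs(c=0.1, scale=1.0, size=200, random_state=rng)
        USL = 8.0  # assumed upper specification limit

        def cpu_clements(x):
            # Clements-style Cpu: distance to the upper spec over the
            # median-to-99.865th-percentile spread.
            median = np.quantile(x, 0.5)
            upper = np.quantile(x, 0.99865)
            return (USL - median) / (upper - median)

        # Percentile-bootstrap confidence interval for Cpu.
        boot = [cpu_clements(rng.choice(data, size=data.size, replace=True))
                for _ in range(2000)]
        lo, hi = np.quantile(boot, [0.025, 0.975])
        print(f"Cpu={cpu_clements(data):.2f}  95% CI=({lo:.2f}, {hi:.2f})")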


    Analyzing Process Capability Indices (PCI) and Cost of Poor Quality (COPQ) to Improve Performance of Supply Chain

    Many ports have inefficient and ineffective activities across the entire supply chain

    Six Sigma Model to Improve the Lean Supply Chain in Ports by System Dynamics Approach

    Ports are one of the important sectors of a country's national economy, primarily involved in the import and export of goods and services from one point to another, for instance between sea, river, road and rail. The quality of a port is one of the key aspects that make it attractive, and a lean supply chain is one of these attractive aspects. This research aims to design a Six Sigma model to improve the lean supply chain in ports. The Six Sigma model is built using a system dynamics approach, which makes it possible to take dynamic variables into account. The lean supply chain in ports focuses on eliminating sources of waste in the entire flow of material in the cargo-handling process; the types of waste in ports were identified as the delay time of equipment and transporters, lost and damaged cargo, and equipment and transporter breakdowns. The research begins with the problem formulation and definition of objectives. The model conceptualization is then constructed as a causal loop diagram based on the objectives, a literature study and a field study, with the causal relationships between variables determined from historical data in real cases, from the literature and from expert judgement. The model is validated against a real case at CDG Port in Indonesia and simulated using Powersim software. By simulating the process from the base-case model, it is possible to propose policies as improvement scenarios. The base-case simulation showed that the high berth occupancy ratio (BOR), which influences the congestion indicated by vessel waiting time, is one of the key performance indicators in port operation, and that demurrage and repair costs contribute most to the total cost of poor quality, followed by the cost of lost cargo; the demurrage cost is caused by the delay time of equipment and transporters, and the repair cost by equipment and transporter breakdowns. The improvement scenarios show that increasing the operating cycle of the crane along with its lifting capacity can reduce vessel waiting time as a key performance indicator in the port, and that increasing transporter maintenance, the number of inspectors, and safety and security spending can reduce the costs arising from demurrage, repair and lost cargo. Port performance is measured by the sigma value and the process capability indices as performance metrics; these metrics are used to target the elimination of waste in order to improve the lean supply chain in the port. With this model, by changing the sigma value and the process capability indices of the waste, the results can be identified and analyzed.
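
    As a small illustration of the sigma-value metric the model uses (with invented numbers, not the CDG Port data), the conventional conversion from defects per million opportunities to a short-term sigma level, including the customary 1.5-sigma shift, is:

        from scipy.stats import norm

        def sigma_level(dpmo: float, shift: float = 1.5) -> float:
            """Short-term sigma level from defects per million opportunities."""
            return norm.isf(dpmo / 1e6) + shift

        # 6210 DPMO corresponds to roughly 4 sigma under this convention.
        print(f"{sigma_level(6210):.2f}")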

    Statistical process control by quantile approach.

    Most quality control and quality improvement procedures involve making assumptions about the distributional form of the data they use, usually that the data are normally distributed. It is commonplace to find processes that generate non-normally distributed data; Weibull, logistic and mixture data, for example, are increasingly encountered. Any method that seeks to avoid transforming non-normal data requires techniques for identifying the appropriate distributions, and even where the appropriate distributions are known, implementation is often intractable. This research is concerned with statistical process control (SPC), which can be applied to both variable and attribute data. The objective of SPC is to control a process, in the ideal situation, with respect to a particular product specification. One of the several measurement tools of SPC is the control chart, and this research is mainly concerned with control charts for process monitoring and quality improvement: the control chart is a useful monitoring technique when a source of variability is present, providing a signal that the process must be investigated. In general, Shewhart control charts assume that the data follow a normal distribution, and most SPC techniques have accordingly been derived and constructed around a concept of quality that depends on normality. In reality, data sets such as chemical process data and lifetime data are often not normal, so a control chart for x̄ or R constructed under the normality assumption will give inaccurate results when the data are in fact non-normal. Schilling and Nelson (1976) investigated, under the central limit theorem, the effect of non-normality on x̄ charts and concluded that non-normality is usually not a problem for subgroup sizes of four or more; for smaller subgroup sizes, however, and especially for individual measurements, non-normality can be a serious problem. The literature review indicates that there are real problems in applying statistical process control to non-normal and mixture distributions. This thesis provides a quantile approach for dealing with non-normal distributions, in order to construct a median rankit control chart. The quantile approach is also used to calculate the process capability index, the average run length (ARL), multivariate control charts and control charts for mixture distributions in non-normal situations. This methodology can easily be adopted by practitioners of statistical process control.
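
    A minimal sketch of the quantile idea with invented Weibull data: rather than mean +/- 3-sigma limits, take the control limits directly from the 0.00135 and 0.99865 quantiles of a fitted distribution, which is what a normal-theory chart implicitly targets.

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(3)
        data = weibull_min.rvs(c=1.5, scale=10.0, size=300, random_state=rng)  # skewed data

        # Fit a Weibull distribution and place the limits at the 0.135% tail
        # quantiles, with the median as the centre line.
        shape, loc, scale = weibull_min.fit(data, floc=0)
        lcl = weibull_min.ppf(0.00135, shape, loc=loc, scale=scale)
        cl = weibull_min.ppf(0.5, shape, loc=loc, scale=scale)
        ucl = weibull_min.ppf(0.99865, shape, loc=loc, scale=scale)

        print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")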

    Quality deviation requirements in residential buildings: predictive modeling of the interaction between deviation and cause

    To address construction defects, sub-task requirements (STRs) were generated alongside a Bayesian belief network (BBN) quantification, towards the modelling of a unique causation pattern. The study found that the pattern of direct causes of deviation from quality norms is unique to each STR, and that causation patterns cannot be generalised. The work provides building quality managers with a new visualization tool that clarifies the STR-specific pathways from cause to quality deviation when creating the built environment.
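
    As a toy illustration of BBN quantification (the structure and probabilities below are invented, not the study's model), a single direct cause feeding one deviation node could be encoded and queried with the pgmpy library as follows:

        from pgmpy.models import BayesianNetwork
        from pgmpy.factors.discrete import TabularCPD
        from pgmpy.inference import VariableElimination

        # Invented two-node network: one direct cause influencing one STR deviation.
        model = BayesianNetwork([("PoorWorkmanship", "Deviation")])
        model.add_cpds(
            TabularCPD("PoorWorkmanship", 2, [[0.8], [0.2]]),        # P(cause)
            TabularCPD("Deviation", 2, [[0.95, 0.4], [0.05, 0.6]],   # P(deviation | cause)
                       evidence=["PoorWorkmanship"], evidence_card=[2]),
        )
        assert model.check_model()

        # Probability of a quality deviation given the cause is present.
        print(VariableElimination(model).query(["Deviation"],
                                               evidence={"PoorWorkmanship": 1}))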