
    Estimation of the location and the scale parameters of Burr Type XII distribution

    The aim of this paper is to estimate the location and scale parameters of the Burr Type XII distribution. For this purpose, different estimation methods, namely maximum likelihood (ML), modified maximum likelihood (MML), least squares (LS) and the method of moments (MM), are used. The performances of these estimation methods are compared via a Monte Carlo simulation study under different sample sizes and parameter settings. At the end of the study, a wind speed data set and annual flow data sets are analyzed to illustrate the modeling performance of the Burr Type XII distribution.
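    As a rough companion to this abstract (not the paper's ML/MML/LS/MM estimators), the sketch below fits a four-parameter Burr Type XII model (two shape parameters plus location and scale) by plain maximum likelihood with SciPy; the parameter values and sample size are illustrative assumptions.

```python
# Illustrative only: a generic ML fit of Burr XII with location and scale via SciPy,
# not the estimators compared in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical "true" parameters, chosen only for this demonstration.
c, d, loc, scale = 2.0, 3.0, 1.0, 2.0
sample = stats.burr12.rvs(c, d, loc=loc, scale=scale, size=500, random_state=rng)

# Maximum-likelihood estimates of all four parameters (shape c, shape d, location, scale).
c_hat, d_hat, loc_hat, scale_hat = stats.burr12.fit(sample)
print(f"ML estimates: c={c_hat:.2f}, d={d_hat:.2f}, loc={loc_hat:.2f}, scale={scale_hat:.2f}")
```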

    Classes of Ordinary Differential Equations Obtained for the Probability Functions of Burr XII and Pareto Distributions

    In this paper, differential calculus was used to obtain some classes of ordinary differential equations (ODEs) for the probability density function, quantile function, survival function, inverse survival function, hazard function and reversed hazard function of the Burr XII and Pareto distributions. This was made easier since the latter distribution is a special case of the former. The stated necessary conditions required for the existence of the ODEs are consistent with the various parameters that define the distributions. Solutions of these ODEs by the numerous available methods are new ways of understanding the nature of the probability functions that characterize the distributions.
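    As a worked illustration of the kind of ODE obtained (a sketch using the standard two-parameter Burr XII density, which may differ from the paper's parameterisation), take $f(x) = c\,k\,x^{c-1}(1+x^{c})^{-(k+1)}$ for $x>0$. Differentiating the log-density gives

\[
\frac{f'(x)}{f(x)} = \frac{c-1}{x} - \frac{(k+1)\,c\,x^{c-1}}{1+x^{c}},
\]

and clearing denominators yields the first-order homogeneous ODE satisfied by the pdf:

\[
x\,(1+x^{c})\,f'(x) + \bigl[(kc+1)\,x^{c} - (c-1)\bigr]\,f(x) = 0, \qquad x > 0 .
\]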

    Monitoring and performance analysis of regression profiles

    There are many cases in industrial and non-industrial sectors where the quality characteristics take the form of profiles. Profile monitoring is a relatively new set of techniques in statistical quality control, used in situations where the state of a product or process is represented by a regression model. In the past few years, most research in the field of profile monitoring has focused on effective statistical charting methods, more general shapes of profiles, and the effects of violations of assumptions in profile monitoring. Despite several studies on the application of artificial neural networks to statistical quality control, no research has investigated the application of neural networks to monitoring profiles. Likewise, there is no research in the literature on process capability analysis in profile processes. Process capability analysis evaluates the ability of a process to meet customer/engineering specifications and must be done in Phase I of profile monitoring. In a review study on profile monitoring, Woodall (2007) pointed out the importance of process capability analysis in profiles.
    In this research, we use artificial neural networks (ANN) to detect and classify shifts in linear profiles. Three ANN-based monitoring methods are developed to monitor linear profiles in Phase II. We compare the results for different shift scenarios with existing methods in linear profile monitoring and discuss the findings. Furthermore, in this thesis, we evaluate the estimation of process capability indices (PCIs) in linear profiles. We propose a method based on the relationship between the proportion of non-conformance and the process capability indices in the profile process. Most existing profile monitoring methods in the literature assume that the profile design points are deterministic (fixed), so that they remain unchanged from one profile to the next. In this research, we investigate the estimation of the PCI in normal linear profiles for different scenarios of deterministic and arbitrary (random) data acquisition schemes, as well as fixed or linear functional specification limits. We apply the proposed method to estimate the PCI in a yogurt production process.
    This thesis also investigates process capability analysis in profiles with non-normal error terms. We review the methods for estimating the PCI from non-normal data and carry out a comprehensive comparison study to evaluate their performance. These methods are then applied to a leukocyte filtering process to evaluate the PCI under non-normality in a blood services setting. In addition, we develop a new method based on neural networks to estimate the parameters of the Burr XII distribution, which is required by some of the PCI estimation methods in non-normal environments. Finally, we propose five methods to estimate the process capability index in profiles whose residuals follow non-normal distributions. In a comparison study using Monte Carlo simulations, we evaluate the performance of the proposed methods in terms of their precision and accuracy. We provide conclusions and recommendations for future research at the end.
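    A minimal sketch of the link between the proportion of non-conformance and a capability index (the thesis's own estimator for linear profiles may differ): estimate the proportion p empirically and convert it through the textbook centred-normal relation C = Φ⁻¹(1 − p/2)/3. The data, limits and function name below are illustrative assumptions.

```python
# Sketch only: an "equivalent" capability index backed out of an empirical
# proportion of non-conformance, using the centred-normal relation.
import numpy as np
from scipy.stats import norm

def equivalent_pci(values, lsl, usl):
    """Equivalent capability index from an empirical non-conformance proportion."""
    p = np.mean((values < lsl) | (values > usl))   # observed proportion out of spec
    p = min(max(p, 1e-12), 1 - 1e-12)              # guard against p = 0 or 1
    return norm.ppf(1.0 - p / 2.0) / 3.0

# Example with synthetic profile responses scattered around a fitted line.
rng = np.random.default_rng(0)
y = 10.0 + rng.normal(0.0, 0.5, size=2000)
print(round(equivalent_pci(y, lsl=8.5, usl=11.5), 3))
```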

    Employee engagement model for the multi-family rental housing industry

    The multi-family rental housing industry has faced numerous challenges in the past decade. Increased competition, declining occupancy rates and higher operating expenses have forced management companies to re-examine their organizational strategies, particularly as they apply to human capital. Employee engagement has become an emerging topic, and research shows that engaged employees perform better, put in extra effort to help get the job done, show a strong level of commitment to the organization, and are more motivated and optimistic about their work goals. Companies now recognize the value of fostering a climate in which engaged employees drive sales by creating loyal customers. However, despite documented support identifying the link between engaged employees and better business outcomes, little research has concentrated on the special needs and challenges of the multi-family rental housing industry. Further, there are limited tools available to assist owners and managers with the task of identifying the drivers affecting employee engagement. An Employee Engagement Model (EEM) was developed to allow multi-family apartment rental property owners and managers to determine the percentage of satisfied residents for a given average engagement score. This research utilized statistical analysis, neural network techniques, and probabilistic modeling to develop the model. The EEM offers new knowledge on the relationship between employee engagement and resident satisfaction in the multi-family rental housing industry. New knowledge may also be derived from correlations between certain aspects of employee engagement and the likelihood of residents extending their leases or referring others to their community, thus improving business performance. It is expected that the EEM will provide useful feedback to multi-family professionals in their process of talent management, and that this research will prompt further discussion of improvements in measuring employee engagement and its impact on satisfaction.

    Multivariate process variability monitoring for high dimensional data

    In today's competitive market, the quality of a product or service is no longer measured by a single variable but by a number of variables that define the quality of the final product or service. These quality variables are known to be correlated with each other, and it is therefore important to monitor the correlated quality characteristics simultaneously. Multivariate quality control charts are capable of such monitoring. Multivariate monitoring of industrial or clinical procedures often involves more than three correlated quality characteristics, and the status of the process is judged using a sample of size one. The majority of existing control charts for monitoring multivariate process variability for individual observations can handle at most three quality characteristics. One of the hurdles in designing optimal variability control charts for high-dimensional data is the enormous computing resources and time required by the simulation algorithm to estimate the chart parameters. In this research, a novel algorithm based on parallelised Monte Carlo simulation has been developed to improve the ability of the Multivariate Exponentially Weighted Mean Squared Deviation (MEWMS) and Multivariate Exponentially Weighted Moving Variance (MEWMV) charts to monitor multivariate process variability with a greater number of quality characteristics. Different techniques have been deployed to reduce the computing space and time complexity of the algorithm. The novelty of this algorithm is its ability to estimate the optimal control limit L (optimal L) for any given number of correlated quality characteristics, size of shift to be detected (via the smoothing constant), and in-control average run length in a computationally efficient way. The optimal L for the MEWMS and MEWMV charts to detect small, medium and large shifts in the covariance matrix of up to fifteen correlated quality characteristics has been provided. Furthermore, the large number of optimal L values generated by the algorithm has enabled us to develop two mathematical functions capable of predicting L values for the MEWMS and MEWMV charts, eliminating the need for further execution of the parallelised Monte Carlo simulation for high-dimensional data. One of the main challenges in deploying multivariate control charts is to identify which characteristics are responsible for the out-of-control signal detected by the charts, and what the extent of their contribution to the signal is. In this research, a smart diagnostic technique has been developed using a hybrid wrapper-filter approach to effectively identify the variables responsible for process faults and to classify the percentage of their contribution to the faults. The robustness of the proposed techniques has been demonstrated through their application to a range of clinical and industrial multivariate processes, where the percentage of correct classifications is presented for different scenarios. The majority of existing multivariate control charts have been developed to monitor processes that follow a multivariate normal distribution. In this thesis, the author proposes a control chart for non-normal high-dimensional multivariate processes based on percentile points of the Burr XII distribution.
    Geometric distance variables are fitted to subsets of correlated quality characteristics to reduce the dimension of the data, followed by fitting the Burr XII distribution to each geometric distance variable. Since the individual distance variables are independent, each can be monitored by an individual control chart based on the percentile points of its fitted Burr XII distribution. A simulated annealing approach is used to estimate the parameters of the Burr XII distribution, and the proposed hybrid is utilised to identify and rank the variables responsible for the out-of-control signals of the geometric distance variables.
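    A minimal sketch of the Burr XII fitting step for a single geometric distance variable, assuming a two-parameter Burr XII density and using SciPy's dual_annealing as the simulated-annealing optimiser; the stand-in data, parameter bounds and percentile choices are illustrative assumptions rather than the thesis's settings.

```python
# Sketch: fit a two-parameter Burr XII to one geometric-distance (GD) variable by
# minimising the negative log-likelihood with simulated annealing, then derive
# percentile-based control limits. Data and bounds are illustrative.
import numpy as np
from scipy import stats
from scipy.optimize import dual_annealing

rng = np.random.default_rng(42)
gd = stats.burr12.rvs(3.0, 2.0, size=1000, random_state=rng)  # stand-in GD variable

def neg_log_lik(params, x):
    c, k = params
    # Burr XII log-density: ln c + ln k + (c-1) ln x - (k+1) ln(1 + x^c)
    return -np.sum(np.log(c) + np.log(k) + (c - 1) * np.log(x) - (k + 1) * np.log1p(x ** c))

result = dual_annealing(neg_log_lik, bounds=[(0.1, 10.0), (0.1, 10.0)], args=(gd,), seed=7)
c_hat, k_hat = result.x

# Control limits from the 0.135th and 99.865th percentiles of the fitted distribution.
lcl, ucl = stats.burr12.ppf([0.00135, 0.99865], c_hat, k_hat)
print(f"fitted c={c_hat:.2f}, k={k_hat:.2f}, limits=({lcl:.3f}, {ucl:.3f})")
```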

    Monitoring Pollen Counts and Pollen Allergy Index Using Satellite Observations in East Coast of the United States

    Allergic diseases have become increasingly common around the world during the last four decades, and they affect millions of people. Pollination is an important process in the life cycle of plants; however, pollen exposure is associated with allergic diseases such as asthma and seasonal allergic rhinitis (hay fever). As a result, the total annual expenditure for asthma-associated morbidity is about $56 billion in the United States, and the overall cost of allergic diseases is over $18 billion annually. For allergic rhinitis, the annual medical cost is approximately $3.4 billion. The intensity and frequency of pollen exposure are easily affected by many factors, such as climate, vegetation, and topography, which are difficult to predict at large scales. Vegetation is very important as a pollen source, and the amount and timing of pollination depend on the flowering and growth of plants. With optimal water and temperature, vegetation can reach maximum growth and flowering during a growing season, which means the maximum amount of pollen can be released from the plants. However, if the water and temperature requirements are not met at specific times within the growing season, pollen dispersal is affected negatively. There is an urgent need to develop models or systems for predicting pollen events at large scales and providing early warning to mitigate pollen effects on people. Unlike manual pollen counting at local sites, remote sensing facilitates pollen estimates at large scales with temporally and spatially distributed observations, which significantly reduces time and labor costs. With remotely sensed observations, an Artificial Neural Network (ANN) helps fill the gaps in our understanding of the relationships between environmental variables and pollen concentration. In this study, I investigated pollen estimates from satellite observations in the East Coast states of the United States using short- and long-term data. This region is highly populated, with a population of 104 million, and has a great variety of temperature, precipitation, and vegetation. The final goal of this project is to investigate the relationships between satellite-derived variables (precipitation, land surface temperature (LST), and enhanced vegetation index (EVI2)) and pollen counts, and further to generate a model for the prediction of pollen counts at high temporal and spatial resolutions. To predict pollen concentration from these environmental variables, a neural network analysis was performed. The results showed strong correlations between pollen counts and the environmental variables, except for precipitation at most locations. The validation analysis using regression models revealed strongly significant relationships between the observed and predicted pollen concentrations for both short- and long-term data. The R-squared (R²) values for long-term pollen counts were mostly higher than 0.5, ranging from 0.5542 for Olean, NY to 0.8589 for Savannah, GA. For short-term predictions of the pollen allergy index, R² ranged from 0.53 to 0.966 except for a few sites, especially in southern Florida. The pollen distribution was mostly affected by precipitation in the southern part of the region, whereas it was influenced by temperature in the northern part.
    Moreover, the results demonstrated that an ANN is a suitable tool for such complex statistical analysis and that EVI2 combined with LST and precipitation is a reliable predictor of pollen variation. Overall, the results provide a better understanding of pollen variation with vegetation seasonality and climate variables, which could support the establishment of an early warning system for allergy patients.
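    The sketch below illustrates the general ANN setup described above: a small feed-forward network mapping precipitation, LST and EVI2 to pollen counts, here with scikit-learn. The synthetic data, network size and train/test split are assumptions for illustration, not the study's configuration.

```python
# Sketch: a small feed-forward regressor from satellite-derived predictors to pollen counts.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.gamma(2.0, 5.0, n),        # precipitation (mm), synthetic
    rng.normal(295.0, 8.0, n),     # land surface temperature (K), synthetic
    rng.uniform(0.1, 0.7, n),      # EVI2, synthetic
])
# Synthetic pollen counts driven by vegetation and temperature, plus noise.
y = 50 * X[:, 2] * np.maximum(X[:, 1] - 280, 0) / 10 + rng.normal(0, 10, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=1))
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(r2_score(y_test, model.predict(X_test)), 3))
```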

    Network monitoring and performance assessment: from statistical models to neural networks

    In the last few years, computer networks have been playing a key role in many different fields. Companies have also evolved around the internet, taking advantage of its huge capacity for diffusion. Nevertheless, this also means that computer networks and IT systems have become a critical element for business: an interruption or malfunction of these systems could have a devastating economic impact. In this light, it is necessary to provide models to properly evaluate and characterize computer networks. Focusing on modeling, there are many different alternatives, from classical options based on statistics to recent alternatives based on machine learning and deep learning. In this work, we study the different models available for each context, paying attention to their advantages and disadvantages in order to provide the best solution for each case. To cover the majority of the spectrum, three cases have been studied: time-unaware phenomena, where we look at the bias-variance trade-off; time-dependent phenomena, where we pay attention to the trends of the time series; and text processing of attributes obtained by deep packet inspection (DPI). For each case, several alternatives have been studied, and the solutions have been tested with both synthetic and real-world data, demonstrating the success of the proposal.
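    For the time-unaware case, the bias-variance trade-off mentioned above can be illustrated with a simple model-complexity sweep (a sketch with synthetic data, not the thesis's network measurements): low-degree models underfit, high-degree models overfit, and cross-validation exposes both.

```python
# Sketch: polynomial degree sweep showing the bias-variance trade-off via cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 200)[:, None]
y = np.sin(2 * np.pi * x[:, 0]) + rng.normal(0, 0.3, 200)   # noisy nonlinear response

for degree in (1, 3, 9, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    mse = -cross_val_score(model, x, y, cv=5, scoring="neg_mean_squared_error").mean()
    print(f"degree {degree:2d}: CV MSE = {mse:.3f}")   # low degree underfits, high degree overfits
```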

    Development and application of process capability indices

    In order to measure the performance of manufacturing processes, several process capability indices have been proposed. A process capability index (PCI) is a unitless number used to measure the ability of a process to consistently produce products that meet customer specifications. These indices have helped practitioners understand and improve their production systems, but no single index can fully measure the performance of any observed process: each index has its own drawbacks, which can be offset by using others. The advantages of commonly used indices in assessing different aspects of process performance are highlighted. Quality cost is also a function of shifts in mean, variance and yield. A hybrid is developed that combines the strengths of these individual indices and provides the smallest set of indices that gives the practitioner detailed information on the shift in mean or variance, the location of the mean, yield and potential capability. It is validated that while no single index can fully assess and measure the performance of a univariate normal process, the optimal set of indices selected by the proposed hybrid can simultaneously provide precise information on the shift in mean or variance, the location of the mean, yield and potential capability. In a simulation study, the process variability was increased by 100% and then reduced by 50%, and the optimal set was able to detect both shifts. The asymmetric ratio was able to detect both the 10% decrease and the 20% increase in µ, but did not change significantly with a 50% decrease or a 100% increase in σ, which means it is not sensitive to shifts in σ. The implementation of the hybrid provides the quality practitioner, or a computer-aided manufacturing system, with a guideline on the prioritised tasks needed to improve process capability and reduce the cost of poor quality. The author extended the proposed hybrids to fully measure the performance of a process with multiple quality characteristics that follow a normal distribution and are correlated. Furthermore, for multivariate normal processes with correlated quality characteristics, process capability analysis is not complete without fault diagnostics, that is, the identification and ranking of the quality characteristics responsible for poor multivariate process performance. Quality practitioners wish to identify and rank the quality characteristics responsible for poor performance in order to prioritise resources for process quality improvement tasks, thereby speeding up the process and minimising quality costs. To date, none of the existing commonly used source identification approaches can classify whether the process behaviour is caused by a shift in mean or a change in variance. The author has proposed a source identification algorithm based on mean and variance impact factors to address this shortcoming. Furthermore, the author developed a novel fault diagnostic hybrid based on the proposed optimal set selection algorithm, principal component analysis, machine learning, and the proposed impact factors. The novelty of this hybrid is that it can carry out a full multivariate process capability analysis and provides a robust tool to precisely identify and rank the quality characteristics responsible for shifts in mean, variance and yield.
    The fault diagnostic hybrid can guide practitioners to identify and prioritise the quality characteristics responsible for poor process performance, thereby reducing the quality cost by effectively speeding up multivariate process improvement tasks. Simulated scenarios were generated to increase/decrease some components of the mean vector (µ2/µ4) and to increase/reduce the variability of some components (σ1 reduced to close to zero, σ6 increased by 100%). The hybrid ranked X2 and X6 as the variables contributing most to the poor process performance and X1 and X4 as the major contributors to process yield. Carrying out process capability analysis and fault diagnostics on a high-dimensional multivariate non-normal process with multiple correlated quality characteristics in a timely manner is a great challenge. The author has developed a multivariate non-normal fault diagnostic hybrid capable of assessing performance and performing fault diagnostics on multivariate non-normal processes. The proposed hybrid first utilizes the Geometric Distance (GD) approach to reduce the dimensionality of the correlated data into a smaller number of independent GD variables, which can be assessed using univariate process capability indices. This is followed by fitting the Burr XII distribution to the independent GD variables. The independent fitted distributions are used to estimate both yield and multivariate process capability in a time-efficient way. Finally, a machine learning approach is deployed to carry out the fault diagnostic task by identifying and ranking the correlated quality characteristics responsible for the poor performance of the least performing GD variable. The results show that the proposed hybrid is robust in estimating both yield and multivariate process capability, carrying out fault diagnostics beyond the GD variables, and identifying the original characteristics responsible for poor performance. The novelty of the proposed non-normal fault diagnostic hybrid is that it considers only the quality characteristics related to the least performing GD variable, instead of investigating all the quality characteristics of the multivariate non-normal process. The efficacy of the proposed hybrid is assessed through real manufacturing examples and simulated scenarios. Variables X1, X2 and X3 were shifted away from the target by 25%, 15% and 35%, respectively, and the hybrid identified X3 as contributing the most to the corresponding geometric distance variable's poor performance.
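    As a hedged sketch of the machine-learning fault-diagnostic step (the thesis's own learner and features may differ), the snippet below ranks the quality characteristics behind a poorly performing geometric-distance variable by training a random forest to separate in-control data from observed data and reading the feature importances; the data and shift sizes are illustrative assumptions.

```python
# Sketch: rank the characteristics of the worst-performing GD variable by
# classifier feature importance (illustrative stand-in for the thesis's ML step).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n, p = 500, 3                                 # three correlated characteristics in this GD subset
cov = np.eye(p) + 0.4                         # simple positive-definite correlation structure
in_control = rng.multivariate_normal(np.zeros(p), cov, size=n)

shifted = in_control.copy()
shifted[:, 2] += 1.0                          # X3 shifted away from target (largest shift)
shifted[:, 0] += 0.4                          # X1 shifted by a smaller amount

X = np.vstack([in_control, shifted])
y = np.r_[np.zeros(n), np.ones(n)]            # 0 = in control, 1 = observed (faulty) state

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("Ranked characteristics (most suspect first):", [f"X{i+1}" for i in ranking])
```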