10,546 research outputs found

    Reliability assessment of cutting tool life based on surrogate approximation methods

    A novel reliability estimation approach for cutting tools based on advanced approximation methods is proposed. Methods such as stochastic response surfaces and surrogate modeling are tested, starting from a few sample points obtained through fundamental experiments and extending them to models able to estimate tool wear as a function of the key process parameters. Subsequently, different reliability analysis methods are employed, such as Monte Carlo simulation and first- and second-order reliability methods, and assessed for estimating the reliability of cutting tools. The results show that the proposed approach efficiently assesses the reliability of the cutting tool from a minimum number of experimental results. Experimental verification for the case of high-speed turning confirms the findings of the present study for cutting tools under flank wear.
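
    The abstract does not specify the surrogate form, the process-parameter distributions, or the wear limit, so the following Python sketch only illustrates the general idea of surrogate-based Monte Carlo reliability estimation: a hypothetical polynomial response surface for flank wear, sampled process parameters, and the probability that wear stays below an assumed criterion. All coefficients, distributions, and the 0.25 mm limit are invented for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic response surface for flank wear VB (mm) as a function
# of cutting speed v (m/min) and feed f (mm/rev). The coefficients are
# illustrative placeholders, not values fitted in the paper.
def wear_surrogate(v, f):
    return 0.02 + 4e-4 * v + 0.5 * f + 1e-7 * v**2 + 4e-4 * v * f

# Assumed scatter in the process parameters (means and spreads are made up).
n = 100_000
v = rng.normal(250.0, 25.0, n)    # cutting speed, m/min
f = rng.normal(0.15, 0.02, n)     # feed, mm/rev

VB_limit = 0.25                   # assumed flank-wear failure criterion, mm
reliability = np.mean(wear_surrogate(v, f) < VB_limit)
print(f"Estimated P(VB < {VB_limit} mm) = {reliability:.4f}")
```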

    A Reliability Case Study on Estimating Extremely Small Percentiles of Strength Data for the Continuous Improvement of Medium Density Fiberboard Product Quality

    The objective of this thesis is to better estimate extremely small percentiles of strength distributions for measuring the failure process in continuous improvement initiatives. These percentiles are of great interest for companies, oversight organizations, and consumers concerned with product safety and reliability. The thesis investigates the lower percentiles for the quality of medium density fiberboard (MDF). The international industrial standard for measuring quality for MDF is internal bond (IB, a tensile strength test). The results of the thesis indicated that the smaller percentiles are crucial, especially the first percentile and lower ones. The thesis starts by introducing the background, study objectives, and previous work done in the area of MDF reliability. The thesis also reviews key components of total quality management (TQM) principles, strategies for reliability data analysis and modeling, information and data quality philosophy, and data preparation steps that were used in the research study. Like many real-world cases, the internal bond data in material failure analysis do not perfectly follow the normal distribution. There was evidence from the study to suggest that MDF has potentially different failure modes for early failures. Forcing the normality assumption may lead to inaccurate predictions and poor product quality. We introduce a novel forced censoring technique that more closely fits the lower tails of strength distributions, where these smaller percentiles are impacted most. In this thesis, such a forced censoring technique is implemented as a software module, using the JMP® Scripting Language (JSL) to expedite data processing, which is key for real-time manufacturing settings. Results show that the Weibull distribution models the data best and provides percentile estimates that are neither too conservative nor too risky. Further analyses are performed to build an accelerated common-shape Weibull model for these two product types using the JMP® Survival and Reliability platform. The use of the JMP® Scripting Language helps to automate the task of fitting an accelerated Weibull model and testing model homogeneity in the shape parameter. At the end of the modeling stage, a package script is written to readily provide field engineers with customized reporting for model visualization, parameter estimation, and percentile forecasting. Furthermore, using the powerful tools of SPLIDA and S-PLUS, bootstrap estimates of the small percentiles demonstrate the improved intervals obtained by our forced censoring approach and the fitted model, including the common shape assumption. Additionally, relatively more advanced Bayesian methods are employed to predict the low percentiles of this particular product type, which has a rather limited number of observations. Model interpretability, cross-validation strategy, result comparisons, and habitual assessment of practical significance are particularly stressed and exercised throughout the thesis. Overall, the approach in the thesis is parsimonious and suitable for real-time manufacturing settings. The approach follows a consistent strategy in statistical analysis, which leads to more accuracy for product conformance evaluation. Such an approach may also potentially reduce the cost of destructive testing and data management due to reduced frequency of testing. If adopted, the approach may prevent field failures and improve product safety. The philosophy and analytical methods presented in the thesis also apply to other strength distributions and lifetime data.
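
    The thesis works in the JMP Scripting Language, SPLIDA, and S-PLUS; as a language-neutral illustration of the underlying idea only, the Python sketch below fits a two-parameter Weibull by maximum likelihood to simulated strength data in which observations above a chosen cutoff are treated as right-censored (one plausible reading of "forced censoring", assumed here rather than taken from the thesis, so that the fit is driven by the lower tail) and then reports a first-percentile estimate. The data, cutoff, and parameter values are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(1)

# Simulated internal-bond strengths (psi); the Weibull(shape=4, scale=120)
# generator is purely illustrative, not MDF data from the thesis.
strengths = weibull_min.rvs(4.0, scale=120.0, size=200, random_state=rng)

# "Forced censoring" is paraphrased here as: right-censor everything above a
# chosen cutoff so the likelihood is driven by the lower tail.
cutoff = np.quantile(strengths, 0.30)            # assumed cutoff
observed = strengths <= cutoff
times = np.where(observed, strengths, cutoff)

def neg_loglik(params):
    log_k, log_lam = params
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = times / lam
    ll_exact = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # log density
    ll_cens = -z**k                                           # log survival at cutoff
    return -np.sum(np.where(observed, ll_exact, ll_cens))

res = minimize(neg_loglik, x0=[0.0, np.log(times.mean())], method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)

# First-percentile estimate from the fitted Weibull.
p01 = lam_hat * (-np.log(1 - 0.01)) ** (1 / k_hat)
print(f"shape={k_hat:.2f}, scale={lam_hat:.1f}, estimated 1st percentile={p01:.1f}")
```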

    Understanding 'The Essential Fact about Capitalism': markets, competition and creative destruction

    This paper examines two ways in which competition works in modern capitalist economies to improve productivity. The first is through incentives: encouraging improvements in technology, organisation and effort on the part of existing establishments and firms. The second is through selection: replacing less productive with more productive establishments and firms, whether smoothly via the transfer of market shares from less to more productive firms, or roughly through the exit of some firms and the entry of others. We report evidence from the UK suggesting that selection is responsible for a large proportion of aggregate productivity growth in manufacturing, and that much of this is due in turn to selection between plants belonging to multi-plant firms. We also investigate whether the nature of the selection process varies across the business cycle and report evidence suggesting that it is less effective in booms and recessions. Finally, although in principle productivity catch-up by low-income countries ought to be easier than innovation at the frontier, in the absence of a well-functioning competitive infrastructure (a predicament that characterises many poor countries), selection may be associated with much more turbulence and a lower rate of productivity growth than in relatively prosperous societies. We report results of a survey of firms in transition economies suggesting that, particularly in the former Soviet states (excluding the Baltic states), poor output and productivity performance has not been due to an unwillingness on the part of firms to change and adapt. On the contrary, there has been a great deal of restructuring, much new entry and large reallocations of output between firms; but such activity has been much more weakly associated with improved performance than we would expect in established market economies.

    An Applied Statistical Reliability Analysis of the Internal Bond of Medium Density Fiberboard

    The forest products industry has seen tremendous growth in recent years and has a huge impact on the economies of many countries. For example, in the state of Maine in 1997, the forest products industry accounted for 9 billion U.S. dollars for that year. In the state of Tennessee, for example in 2000, this figure was 22 billion U.S. dollars for that year. It has, therefore, become more important in this industry to focus on producing higher quality products. Statistical reliability methods, among other techniques, have been employed to help monitor and improve the quality of forest products. With such a large focus on quality improvement, data is quite plentiful, allowing for more useful analyses and examples. In this thesis, we demonstrate the usefulness of statistical reliability tools and apply them to help assess, manage, and improve the internal bond (IB) of medium density fiberboard (MDF). MDF is a high quality engineered wood composite that undergoes destructive testing during production. Workers can test cross sections of MDF panels and measure the IB in pounds per square inch. IB is a key metric of quality since it provides a direct measurement of the strength of MDF, which is important to customers and manufacturers. Graphical procedures such as histograms, scatter plots, probability plots, and survival curves are explored to help the practitioner gain insights regarding the distributions of IB and strengths of different MDF product types. Much information can be revealed from a graphical approach. Though useful, probability plots can be a subjective way to assess the parametric distribution of a data set. Insightful developments in information criteria, in particular Akaike's Information Criterion and Bozdogan's Information Complexity Criterion, have made probability plotting more objective by assigning numeric scores to each plot. The plot with the lowest score is deemed the best among competing models. In application to MDF, we will see that initial intuitions are not always confirmed. Therefore, information criteria prove to be useful tools for the practitioner seeking more clarity regarding distributional assumptions. We recommend more usage of these helpful information criteria. Estimating lower percentiles in failure data analysis can provide valuable assistance to the practitioner for understanding product warranties and their costs. Since data may not be plentiful in the lower tails, estimation of these percentiles may not be an easy task. Indeed, we stress that at times one should not even try to estimate the lowest percentiles. If samples are large and parametric assumptions are weak or not available, asymptotic approximations can be utilized. However, unless the sample size is sufficiently large, such approximations will not be accurate. Bootstrap techniques provide one solution for the estimation of lower percentiles when asymptotic approximations should not be utilized. This computer-intensive resampling scheme provides a method for estimating the true sampling distribution of these percentiles, or any population parameter of interest. This can be used for various parametric models or for nonparametric settings, when the parametric model might be imperfect or misspecified. The empirical bootstrap distribution can then be used for inferences such as determining standard errors and constructing confidence intervals. Helpful applications of the bootstrap to the MDF data show this procedure's advantages and limitations in order to aid the practitioner in their decision-making. Graphics can readily warn the practitioner when even certain bootstrap procedures are not advisable. To be able to say that improvements have been made, we must be able to measure reliability expressed in percentiles that allow for statistical variation. We need to make comparisons of these reliability measures between products and within products before and after process improvement interventions. Knowing when to trust confidence intervals and when not to trust them is crucial for managers and users of MDF to make successful decisions.
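
    As a rough illustration of the bootstrap idea described above (not the thesis's actual workflow or data), the Python sketch below resamples a simulated strength sample, refits a two-parameter Weibull to each resample, and reports a percentile interval for the estimated fifth percentile; the simulated data, the Weibull choice, and the percentile level are assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)

# Simulated internal-bond strengths (psi); illustrative only, not MDF data.
sample = weibull_min.rvs(4.0, scale=120.0, size=150, random_state=rng)

def weibull_p05(data):
    """Fit a two-parameter Weibull (location fixed at 0) and return its 5th percentile."""
    shape, _, scale = weibull_min.fit(data, floc=0)
    return weibull_min.ppf(0.05, shape, scale=scale)

B = 1000
boot = np.array([weibull_p05(rng.choice(sample, size=sample.size, replace=True))
                 for _ in range(B)])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate of 5th percentile: {weibull_p05(sample):.1f} psi")
print(f"95% bootstrap percentile interval: ({lo:.1f}, {hi:.1f}) psi")
```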

    How petty is petty corruption? Evidence from firm survey in Africa

    Recent firm-level surveys suggest that petty corruption is a serious problem for African firms, costing the average firm in many countries between 2.5 and 4.5 percent of sales. However, a minor difference in the way firms answer the question has a large effect on estimates of the size of the burden. On average, firms report payments that are between four and fifteen times higher when they report them as a percent of sales than when they report them in monetary terms. This paper discusses several possible reasons why there might be a difference, including outliers, differences between firms that report bribes in monetary terms and firms that report them as a percent of sales, and the sensitivity of the corruption question. But none of these explanations explains the discrepancy. One plausible remaining reason is that firm managers overestimate bribes when they report them in percentage terms. If this is the case, petty corruption might be far less costly than the raw data suggest.
    Keywords: Corruption; Africa; Firm Surveys

    Liquidity needs and vulnerability to financial underdevelopment

    The author provides evidence of a causal and economically important effect of financial development on volatility. In contrast to the existing literature, the identification strategy is based on the differences in sensitivities to financial conditions across industries. The results show that sectors with larger liquidity needs are more volatile and experience deeper crises in financially underdeveloped countries. At the macroeconomic level, the results suggest that changes in financial development can generate important differences in aggregate volatility. The author also finds that financially underdeveloped countries partially protect themselves from volatility by concentrating less output in sectors with large liquidity needs. Nevertheless, this insulation mechanism seems to be insufficient to reverse the effects of financial underdevelopment on within-sector volatility. Finally, the author provides new evidence that: 1) Financial development affects volatility mainly through the intensive margin (output per firm). 2) Both the quality of information generated by firms and the development of financial intermediaries have independent effects on sectoral volatility. 3) The development of financial intermediaries is more important than the development of equity markets for the reduction of volatility.
    Keywords: Payment Systems & Infrastructure; Economic Theory & Research; Markets and Market Access; Fiscal & Monetary Policy; Financial Crisis Management & Restructuring; Development Economics & Aid Effectiveness

    Statistical and Economic Evaluation of Time Series Models for Forecasting Arrivals at Call Centers

    Call center managers are interested in obtaining accurate point and distributional forecasts of call arrivals in order to achieve an optimal balance between service quality and operating costs. We present a strategy for selecting forecast models of call arrivals which is based on three pillars: (i) flexibility of the loss function; (ii) statistical evaluation of forecast accuracy; (iii) economic evaluation of forecast performance using money metrics. We implement fourteen time series models and seven forecast combination schemes on three series of daily call arrivals. Although we focus mainly on point forecasts, we also analyze density forecast evaluation. We show that modeling second moments is important for both point and density forecasting and that the simple Seasonal Random Walk model is always outperformed by more general specifications. Our results suggest that call center managers should invest in the use of forecast models which describe both first and second moments of call arrivals.
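
    The fourteen models and seven combination schemes cannot be reconstructed from the abstract; the Python sketch below only illustrates the kind of point-forecast comparison described, scoring a Seasonal Random Walk (same weekday one week earlier) against a simple weekday-mean benchmark on simulated daily arrival counts using mean absolute error. The simulated data and the benchmark model are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated daily call arrivals with a weekly pattern (illustrative data only).
weeks = 52
weekday_level = np.array([900, 850, 820, 840, 880, 400, 300])   # Mon..Sun
y = rng.poisson(np.tile(weekday_level, weeks)).astype(float)

train, test = y[:-28], y[-28:]
weekday_idx = np.arange(len(y)) % 7

# Model 1: Seasonal Random Walk -- forecast equals the same weekday one week earlier.
srw_forecast = y[-28 - 7:-7]

# Model 2: weekday mean -- forecast equals the training-sample mean for that weekday.
means = np.array([train[weekday_idx[:len(train)] == d].mean() for d in range(7)])
mean_forecast = means[weekday_idx[-28:]]

mae = lambda f: np.mean(np.abs(test - f))
print(f"MAE Seasonal Random Walk: {mae(srw_forecast):.1f}")
print(f"MAE weekday mean:         {mae(mean_forecast):.1f}")
```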

    Labor effort over the business cycle

    Unobservable labor utilization is recognized as a crucial feature of economic fluctuations. Yet very little is known about the behavior of work effort over the business cycle. Using firm-level panel data drawn from two high-quality sources, we obtain a microeconomic estimate of variable labor effort from a dynamic cost minimization set-up. We argue that, contrary to common assumptions, the relationship between effort and hours is not monotonic. During a recovery, if a critical level of hours per capita is reached (say, because of labor market rigidities), every additional hour is worked with decreasing effort, due to physical fatigue. We provide supporting evidence by estimating the structural parameters of a Taylor approximation of the effort function. Corroborating evidence has been obtained by estimating the elasticity of effort with respect to hours under different business cycle conditions.
    Keywords: labor effort, factor hoarding, business cycles

    A Theoretical Foundation for the Development of Process Capability Indices and Process Parameters Optimization under Truncated and Censoring Schemes

    Process capability indices (PCIs) provide a measure of the output of an in-control process that conforms to a set of specification limits. These measures, which assume that process output is approximately normally distributed, are intended for measuring process capability in manufacturing systems. After implementing inspections, however, non-conforming products are typically scrapped when units fail to meet the specification limits; hence, after inspections, the actual resulting distribution of shipped products that customers perceive is truncated. In this research, a set of customer-perceived PCIs is developed based on the truncated normal distribution, as an extension of traditional manufacturer-based indices. Comparative studies and numerical examples reveal considerable differences between the traditional PCIs and the proposed PCIs. The comparison results suggest using the proposed PCIs for capability analyses when non-conforming products are scrapped prior to shipping to customers. Confidence interval approximations for the proposed PCIs are also developed. A simulation technique is implemented to compare the proposed PCIs with their traditional counterparts across multiple performance scenarios. Robust parameter design (RPD), a systematic method for determining the optimum operating conditions that achieve quality improvement goals, is also studied within the realm of censored data. Data censoring occurs in time-oriented observations when some data are unmeasurable outside a predetermined study period. The underlying conceptual basis of current RPD studies is random sampling from a normal distribution, assuming that all the data points are uncensored. However, censoring schemes are widely implemented in lifetime testing, survival analysis, and reliability studies. As such, this study develops detailed guidelines for a new RPD method that accounts for type I right-censoring. The response functions are developed using nonparametric methods, including the Kaplan-Meier estimator, Greenwood's formula, and the Cox proportional hazards regression method. Various response-surface-based robust parameter design optimization models are proposed and demonstrated through a numerical example. Further, a process capability index for type I right-censored data using these nonparametric methods is also developed for assessing the performance of a product based on its lifetime.
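
    As a small numerical illustration of the truncation idea (the dissertation's exact index definitions are not given in the abstract), the Python sketch below contrasts a conventional Cpk computed from an assumed in-control normal process with a "customer-perceived" Cpk recomputed from the mean and standard deviation of the same distribution truncated to the specification limits, reflecting the scrapping of non-conforming units before shipment. The process parameters and limits are assumed values, and the index definition is the textbook Cpk, not necessarily the dissertation's proposed form.

```python
import numpy as np
from scipy.stats import truncnorm

# Assumed in-control process and specification limits (illustrative values).
mu, sigma = 10.0, 1.0
LSL, USL = 8.0, 13.0

def cpk(mean, std):
    return min(USL - mean, mean - LSL) / (3 * std)

# Traditional (manufacturer-based) index from the untruncated process.
cpk_traditional = cpk(mu, sigma)

# Customer-perceived view: out-of-spec units are scrapped before shipping,
# so shipped output follows the normal distribution truncated to [LSL, USL].
a, b = (LSL - mu) / sigma, (USL - mu) / sigma
mean_t, var_t = truncnorm.stats(a, b, loc=mu, scale=sigma, moments="mv")
cpk_truncated = cpk(float(mean_t), float(np.sqrt(var_t)))

print(f"traditional Cpk:        {cpk_traditional:.3f}")
print(f"customer-perceived Cpk: {cpk_truncated:.3f}")
```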