Short- and long-term wind turbine power output prediction
In the wind energy industry, it is of great importance to develop models that
accurately forecast the power output of a wind turbine, as such predictions are
used for wind farm location assessment, power pricing and bidding,
monitoring, and preventive maintenance. As a first step, and following the
guidelines of the existing literature, we use the supervisory control and data
acquisition (SCADA) data to model the wind turbine power curve (WTPC). We
explore various parametric and non-parametric approaches for the modeling of
the WTPC, such as parametric logistic functions, and non-parametric piecewise
linear, polynomial, or cubic spline interpolation functions. We demonstrate
that all aforementioned classes of models are rich enough (with respect to
their relative complexity) to accurately model the WTPC, as their mean squared
error (MSE) is close to the MSE lower bound calculated from the historical
data. We further enhance the accuracy of our proposed model, by incorporating
additional environmental factors that affect the power output, such as the
ambient temperature, and the wind direction. However, all aforementioned
models, when it comes to forecasting, seem to have an intrinsic limitation, due
to their inability to capture the inherent auto-correlation of the data. To
avoid this conundrum, we show that adding a properly scaled ARMA modeling layer
increases short-term prediction performance while preserving the model's
long-term prediction capability.
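As a sketch of the parametric approach, the logistic WTPC can be fitted to SCADA-style (wind speed, power) pairs by nonlinear least squares. This is only an illustration, not the authors' implementation: the data below are synthetic, and the parameters (rated power 2000 kW, slope 0.8, inflection at 9 m/s) are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_wtpc(v, p_max, k, v_half):
    """Three-parameter logistic wind turbine power curve."""
    return p_max / (1.0 + np.exp(-k * (v - v_half)))

rng = np.random.default_rng(0)
# synthetic SCADA-like sample: wind speed (m/s) -> power output (kW)
v = rng.uniform(0.0, 25.0, 2000)
p = logistic_wtpc(v, 2000.0, 0.8, 9.0) + rng.normal(0.0, 50.0, v.size)

# nonlinear least-squares fit of the logistic WTPC
popt, _ = curve_fit(logistic_wtpc, v, p, p0=[1500.0, 0.5, 8.0])
mse = np.mean((p - logistic_wtpc(v, *popt)) ** 2)
```

In a full pipeline, the ARMA layer described in the abstract would then be fitted to the time-ordered residuals `p - logistic_wtpc(v, *popt)`, e.g. with a standard time-series library.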
Determining Optimal Reliability Targets Through Analysis of Product Validation Cost and Field Warranty Data
This work develops a new methodology to minimize the life cycle cost of a product using the decision variables controlled by a reliability/quality professional during a product development process. This methodology incorporates all product dependability-related activities into a comprehensive probabilistic cost model that enables minimization of the product's life cycle cost using the product dependability control variables. The primary model inputs include the cost of ownership of test equipment, forecasted cost of warranty returns, and environmental test parameters of a product validation program. Among these parameters, an emphasis is placed upon test duration and test sample size for durability related environmental tests. The warranty forecasting model is based on data mining of past warranty claims, parametric probabilistic analysis of the existing field data, and a piecewise application of several statistical distributions.
The modeling process is complicated by insufficient knowledge about the relationship between product quality and product reliability. This can be attributed to the lack of studies establishing the effect of product validation activities on future field failures, the overall lack of comprehensive field failure studies, and the market's dictation of warranty terms as opposed to warranties based on engineering rationale. As a result of these complicating factors, an innovative approach to estimating the quality-reliability relationship using probabilistic methods and stochastic simulation has been developed. The overall cost model and its minimization are generated using a Monte Carlo method that accounts for the propagation of uncertainties from the model inputs and their parameters to the life cycle cost solution.
This research provides reliability and quality professionals with a methodology to evaluate the efficiency of a product validation program from a life cycle cost point of view and identifies ways to improve the validation test flow by adjusting test durations, sample sizes, and equipment utilization. Solutions balance a rigorous theoretical treatment and practical applications and are specifically applied to the electronics industry
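The cost-minimization loop can be sketched with a toy Monte Carlo model. Everything below is hypothetical: a single latent-defect mechanism and made-up costs stand in for the dissertation's comprehensive probabilistic model. The sketch only illustrates how uncertainty propagates from the inputs to a life cycle cost that is then minimized over one dependability control variable (test sample size).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost model: a durability test with n samples either catches a
# latent design defect before launch (cheap fix) or misses it (warranty cost).
TEST_COST = 1_000        # cost per test sample (illustrative)
FIX_COST = 50_000        # cost to fix a defect found during validation
WARRANTY_COST = 500_000  # field cost if the defect escapes to customers
P_DEFECT = 0.3           # prior probability that a latent defect exists
P_DETECT = 0.4           # per-sample probability a test unit exposes it

def simulate_cost(n, trials=20_000):
    """Monte Carlo draws of total cost for a test program of n samples."""
    defect = rng.random(trials) < P_DEFECT
    caught = defect & (rng.random(trials) < 1.0 - (1.0 - P_DETECT) ** n)
    escaped = defect & ~caught
    return n * TEST_COST + caught * FIX_COST + escaped * WARRANTY_COST

sizes = np.arange(0, 21)
mean_cost = np.array([simulate_cost(n).mean() for n in sizes])
best_n = sizes[np.argmin(mean_cost)]   # cost-optimal sample size
```

Under these numbers the expected-cost curve has an interior minimum (around eight samples analytically), showing how too little testing inflates warranty cost while too much inflates validation cost.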
Expert Elicitation for Reliable System Design
This paper reviews the role of expert judgement to support reliability
assessments within the systems engineering design process. Generic design
processes are described to give the context and a discussion is given about the
nature of the reliability assessments required in the different systems
engineering phases. It is argued that, as far as meeting reliability
requirements is concerned, the whole design process is more akin to a
statistical control process than to a straightforward statistical problem of
assessing an unknown distribution. This leads to features of the expert
judgement problem in the design context which are substantially different from
those seen, for example, in risk assessment. In particular, the role of experts
in problem structuring and in developing failure mitigation options is much
more prominent, and there is a need to take into account the reliability
potential for future mitigation measures downstream in the system life cycle.
An overview is given of the stakeholders typically involved in large scale
systems engineering design projects, and this is used to argue the need for
methods that expose potential judgemental biases in order to generate analyses
that can be said to provide rational consensus about uncertainties. Finally, a
number of key points are developed with the aim of moving toward a framework
that provides a holistic method for tracking reliability assessment through the
design process.

Comment: This paper was commented on in [arXiv:0708.0285], [arXiv:0708.0287],
and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at
http://dx.doi.org/10.1214/088342306000000510 in Statistical Science
(http://www.imstat.org/sts/) by the Institute of Mathematical Statistics
(http://www.imstat.org).
Simulation of Automotive Warranty Data
This thesis investigates the prediction of the number of claims in a two-dimensional automotive warranty claim model for the case of minimal repair. The method involves fitting marginal distributions for claim age and claim mileage separately. Next, various copulas are fitted to establish the correlation between age and mileage and assessed for goodness of fit; the Gumbel copula is chosen as optimal. From this Gumbel copula, a simulation of warranty claims is undertaken. The method produces a good fit for claim age but performs less well for claim mileage, due to the asymmetry of the correlation between mileage and age. Further research directions to improve the accuracy and usefulness of this model are suggested.
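The simulation step can be sketched directly: a Gumbel copula can be sampled with the Marshall-Olkin algorithm using only a positive-stable draw, after which the uniform margins are pushed through fitted marginal distributions. The marginal models and all parameters below are invented for illustration (θ = 2 corresponds to a Kendall's τ of 0.5), not taken from the thesis.

```python
import numpy as np
from scipy.special import ndtri  # standard normal quantile function

def sample_gumbel_copula(n, theta, rng):
    """Draw n pairs from a Gumbel copula via the Marshall-Olkin method:
    V is positive stable with Laplace transform exp(-t**(1/theta)), and
    U_i = exp(-(E_i / V)**(1/theta)) for independent E_i ~ Exp(1)."""
    alpha = 1.0 / theta
    u = rng.uniform(0.0, np.pi, n)
    w = rng.exponential(1.0, n)
    v = (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)) \
        * (np.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha)
    e = rng.exponential(1.0, (2, n))
    uu = np.exp(-(e / v) ** alpha)          # uniform margins, Gumbel-coupled
    return np.clip(uu, 1e-12, 1.0 - 1e-12)  # guard against float under/overflow

rng = np.random.default_rng(1)
u_age, u_mileage = sample_gumbel_copula(50_000, theta=2.0, rng=rng)

# push the uniforms through invented marginal models (parameters hypothetical):
age_months = 24.0 * (-np.log(1.0 - u_age)) ** (1.0 / 1.5)  # Weibull age margin
mileage_km = np.exp(10.0 + 0.8 * ndtri(u_mileage))         # lognormal mileage margin
```

Each simulated claim is then the pair (age, mileage), with the dependence structure supplied entirely by the copula and the marginal behaviour by the two quantile transforms.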
Hazard rate models for early warranty issue detection using upstream supply chain information
This research presents a statistical methodology for constructing an early automotive warranty issue detection model based on upstream supply chain information. This is in contrast to extant methods, which are mostly reactive and rely only on data available from the OEMs (original equipment manufacturers). For upstream supply chain information with a direct history of warranty claims, the research proposes hazard rate models that link upstream supply chain information as explanatory covariates for early detection of warranty issues. For upstream supply chain information without a direct warranty claims history, we introduce Bayesian hazard rate models to account for uncertainties in the explanatory covariates. In doing so, the approach improves both the accuracy of warranty issue detection and the lead time for detection. The proposed methodology is illustrated and validated using real-world data from a leading global Tier-one automotive supplier.
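The covariate-linking idea can be illustrated with the simplest possible proportional-hazards setup: an exponential hazard h(t | x) = h0·exp(βx), where x flags parts built from a deviating upstream batch. Everything here (rates, warranty length, the supplier flag) is invented, and the paper's actual models, including the Bayesian variants, are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(2)

# h(t | x) = H0 * exp(BETA * x): x = 1 flags parts from a supplier batch with
# an upstream process deviation (all names and numbers are illustrative)
H0, BETA, CENSOR = 0.01, np.log(2.0), 36.0  # monthly baseline, ratio 2, 3-yr warranty
n = 10_000
x = np.repeat([0, 1], n)
t = rng.exponential(1.0 / (H0 * np.exp(BETA * x)))  # true failure times
obs = np.minimum(t, CENSOR)   # claims are only observed inside the warranty window
event = t < CENSOR

def rate(mask):
    """Exponential MLE of the hazard: events divided by total exposure."""
    return event[mask].sum() / obs[mask].sum()

hazard_ratio = rate(x == 1) / rate(x == 0)  # recovers exp(BETA) ~ 2
```

The point of the sketch is that the upstream flag shifts the claim hazard multiplicatively, so elevated claim rates in the flagged population become detectable well before the usual OEM-side signals accumulate.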
A General Approach to Electrical Vehicle Battery Remanufacturing System Design
One of the major difficulties the electric vehicle (EV) industry faces today is the production and lifetime cost of battery packs. Studies show that using remanufactured batteries can dramatically lower this cost. The major difference between remanufacturing and traditional manufacturing lies in the variability and uncertainty of supply and demand. The returned cores for remanufacturing operations (the supply side) can vary considerably in the timing of returns and the quality of returned products. In traditional manufacturing, by contrast, supply is almost always assumed to have zero uncertainty and variability, because contracts can be used to regulate suppliers. Similarly, customers expect traditional manufacturers to sell newly produced products of consistently high quality, whereas remanufacturers usually sell in the aftermarket, where the quality demanded can vary depending on price range, usage, customer segment, and many other factors. The key is to match supply-side and demand-side variabilities so that the overlap between them is maximized. Because of these differences, a new framework is needed for remanufacturing system design.
This research aims at developing a new approach that uses remanufactured battery packs to fulfill EV warranties and customer aftermarket demands and matches supply- and demand-side variabilities. First, a market lifetime EV battery return (supply side) forecasting method is developed and validated using Monte Carlo simulation. Second, a discrete event simulation method is developed to estimate EV battery lifetime cost for both the customer and the manufacturer/remanufacturer. Third, a new remanufacturing business model and a simulation framework are developed so that both the quality and quantity aspects of supply and demand can be adjusted and the lifetime cost for both the customer and the manufacturer/remanufacturer can be minimized.
The business models and methodologies developed in this dissertation provide managerial insights that benefit both the manufacturer/remanufacturer and customers in the EV industry. Many of the findings and methodologies can also be readily used in other remanufacturing settings. The effectiveness of the proposed models is illustrated and validated by case studies.

PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143955/1/xrliang_1.pd
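The supply-side return forecast can be sketched as a small Monte Carlo simulation: each sold pack draws a service life from an assumed lifetime distribution, and returns are binned by calendar month. The fleet size, horizon, and Weibull parameters below are all invented for illustration and are not the dissertation's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical fleet: 1,000 packs sold in each of 24 months; pack service life
# drawn from Weibull(shape=2, scale=48 months) -- all parameters illustrative
SALES_MONTHS, UNITS, HORIZON = 24, 1_000, 96
SHAPE, SCALE = 2.0, 48.0

returns = np.zeros(HORIZON)
for m in range(SALES_MONTHS):
    life = SCALE * rng.weibull(SHAPE, UNITS)   # months in service until return
    month = np.floor(m + life).astype(int)     # calendar month of the return
    month = month[month < HORIZON]             # beyond horizon: not yet returned
    np.add.at(returns, month, 1)               # bin core returns by month
```

The resulting `returns` profile (slow ramp-up, broad peak, long tail) is the supply-side input that the remanufacturing capacity and demand-matching models would then consume.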
Nonparametric change point estimation for survival distributions with a partially constant hazard rate
We present a new method for estimating a change point in the hazard function of a survival distribution, assuming a constant hazard rate after the change point and a decreasing hazard rate before it. Our method is based on fitting a stump regression to p-values for tests of the hazard rate in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for defining hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points, and the reliable estimation of a change point will help researchers to identify them. Knowing this change point precisely, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point) and the chronic phase (time elapsed after the change point), in which the hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy, including bootstrap confidence intervals and finite-sample bias correction.
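A simplified version of the procedure can be sketched as follows. As a stand-in for the paper's stump regression on interval p-values, this sketch fits the stump directly to interval hazard estimates (events divided by exposure); the hazard values, grid, and sample size are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# piecewise-constant hazard: decreasing before the change point at t = 2,
# constant (0.2) afterwards; follow-up censored at t = 5 (values illustrative)
n = 100_000
e = rng.exponential(1.0, n)  # unit-rate exponentials, inverted through H(t)
# cumulative hazard H(t) with rates 1.2 on [0,1), 0.7 on [1,2), 0.2 on [2,inf)
t = np.where(e < 1.2, e / 1.2,
    np.where(e < 1.9, 1.0 + (e - 1.2) / 0.7, 2.0 + (e - 1.9) / 0.2))
obs, event = np.minimum(t, 5.0), t < 5.0

# interval hazard estimates (events / person-time exposure) on a width-0.5 grid
edges = np.arange(0.0, 5.5, 0.5)
h = np.array([
    (event & (obs >= a) & (obs < b)).sum() / np.clip(obs - a, 0.0, b - a).sum()
    for a, b in zip(edges[:-1], edges[1:])])

# stump (single-split, two-constant) fit over the interval estimates
sse = lambda x: ((x - x.mean()) ** 2).sum()
split = min(range(1, len(h)), key=lambda k: sse(h[:k]) + sse(h[k:]))
change_point = edges[split]   # boundary separating acute and chronic phases
```

With the hazard genuinely constant after t = 2, the two-constant fit places its split at the true change point, separating the decreasing acute-phase estimates from the flat chronic-phase ones.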
A framework for real-time product quality monitoring system with consideration of process-induced variations
Department of Human and Systems Engineering.

As industrial technologies develop, the manufacturing industry is globally becoming more automated and complex, and the prediction of real-time product quality has become an essential issue. Although many physical manufacturing activities are more automated than ever, there still exist many uncovered parameters that, either directly or indirectly, affect product quality. At many manufacturing sites, quality tests still rely on a few skilled operators and quality experts, which requires a great deal of time and human effort to manage product quality issues. This thesis therefore proposes a real-time, in-process quality monitoring system for small and medium-sized manufacturing environments, providing a data-driven product quality monitoring framework. The proposed framework consists of a product quality ontology model for complex manufacturing supply chain environments and a real-time quality prediction tool using the support vector machine (SVM) algorithm, which enables the monitoring system to classify product quality patterns from in-process production data. Additionally, we propose a framework for analyzing the quality inspection results from the monitoring system with respect to quality costs, including inspection and warranty costs. The thesis also establishes a relationship between warranty cost and the severity of customer-perceived quality. Finally, as future work, we suggest a prescriptive product quality assessment concept using Hidden Markov Models (HMMs) to analyze and forecast possible future product quality problems from production data based on data flow analysis. Door trim production data from an automotive company are used to verify the proposed quality monitoring/prediction model.
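The SVM-based classification step can be illustrated with a minimal linear soft-margin SVM trained by the Pegasos stochastic sub-gradient method on synthetic process data. A production system would use a full SVM library with kernels; every feature, labeling rule, and hyperparameter below is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic in-process data: two process features per part (e.g. press force
# and mold temperature, standardized); label +1 = good part, -1 = defective.
# The labeling rule and all numbers are invented for the illustration.
n = 1_000
X = rng.normal(0.0, 1.0, (n, 2))
y = np.where(X[:, 0] + 0.8 * X[:, 1] > 0.0, 1, -1)

# Pegasos: stochastic sub-gradient descent on the regularized hinge loss
lam, T = 0.01, 20_000
w = np.zeros(2)
for t in range(1, T + 1):
    i = rng.integers(n)
    eta = 1.0 / (lam * t)
    if y[i] * (w @ X[i]) < 1.0:   # margin violated: hinge-loss sub-gradient
        w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
    else:                         # margin satisfied: only the regularizer acts
        w = (1.0 - eta * lam) * w

acc = np.mean(np.sign(X @ w) == y)  # training accuracy of the linear rule
```

In the monitoring framework described above, the learned decision rule would score each part's in-process feature vector in real time, flagging likely-defective parts before they reach inspection.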
Management. A continuing bibliography with indexes
This bibliography cites 604 reports, articles, and other documents introduced into the NASA scientific and technical information system in 1979, covering the management of research and development, contracts, production, logistics, personnel, safety, and reliability and quality control. Program, project, and systems management; management policy, philosophy, tools, and techniques; decision-making processes for managers; technology assessment; management of urban problems; and information for managers on Federal resources, expenditures, financing, and budgeting are also covered. Abstracts are provided, as well as subject, personal author, and corporate source indexes.