
    Determination of critical puncture conditions of almond kernels using response surface methodology and a genetic algorithm

    In this study, the effect of seed moisture content, probe diameter and loading velocity (the puncture conditions) on selected mechanical properties of almond kernels and peeled almond kernels is considered in order to model the relationship between the puncture conditions and rupture energy. Furthermore, the distribution of the mechanical properties is determined. The main objective is to determine the critical values of the mechanical properties relevant to peeling machines. Response surface methodology was used to find the relationship between the input parameters and the output responses, and a fitness function was applied within a genetic algorithm to find the optimal values. A two-parameter Weibull function was used to describe the distribution of the mechanical properties. Based on the Weibull parameter values, i.e. the shape parameter (β) and scale parameter (η) calculated for each property, the variations in the mechanical property distributions were fully described, and it was confirmed that the mechanical properties are rule-governed, which makes the Weibull function suitable for estimating their distributions. The energy model estimated using response surface methodology shows that the mechanical properties relate exponentially to the moisture content and polynomially to the loading velocity and probe diameter, which enabled successful estimation of the rupture energy (R²=0.94). The genetic algorithm calculated the critical values of seed moisture, probe diameter and loading velocity to be 18.11 % on a dry mass basis, 0.79 mm and 0.15 mm/min, respectively, with an optimum rupture energy of 1.97·10⁻³ J. These conditions were used for comparison with new samples, for which the rupture energy was experimentally measured to be 2.68·10⁻³ J for kernels and 2.21·10⁻³ J for peeled kernels, in close agreement with the model results.
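
    A minimal sketch of the distribution-fitting step described above, assuming hypothetical rupture-energy values and SciPy's `weibull_min` fit; it is not the study's code, only an illustration of estimating the shape (β) and scale (η) parameters.

```python
# Minimal sketch (not the study's code): estimating the two Weibull parameters,
# shape (beta) and scale (eta), from rupture-energy measurements.
# The sample values below are hypothetical placeholders in millijoules.
import numpy as np
from scipy.stats import weibull_min

rupture_energy_mJ = np.array([1.8, 2.1, 2.4, 1.9, 2.7, 2.2, 2.0, 2.5])

# Fixing the location parameter at zero gives the two-parameter Weibull form.
beta, _, eta = weibull_min.fit(rupture_energy_mJ, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.2f} mJ")
```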

    A neural network approach to audio-assisted movie dialogue detection

    A novel framework for audio-assisted dialogue detection based on indicator functions and neural networks is investigated. An indicator function specifies whether an actor is present at a particular time instant. The cross-correlation function of a pair of indicator functions and the magnitude of the corresponding cross-power spectral density are fed as input to neural networks for dialogue detection. Several types of artificial neural networks, including multilayer perceptrons, voted perceptrons, radial basis function networks, support vector machines, and particle swarm optimization-based multilayer perceptrons, are tested. Experiments are carried out to validate the feasibility of this approach using ground-truth indicator functions determined by human observers on 6 different movies. A total of 41 dialogue instances and another 20 non-dialogue instances are employed. The average detection accuracy achieved is high, ranging between 84.78%±5.499% and 91.43%±4.239%.
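
    A sketch of the feature-extraction step, under assumed toy indicator functions: the cross-correlation of two actor indicator functions and the magnitude of their cross-power spectral density are concatenated into a feature vector, which one of the compared network types (here a multilayer perceptron) would then classify as dialogue or non-dialogue. Signal lengths, spectral settings and network sizes are illustrative, not the paper's.

```python
# Sketch of the feature-extraction step with toy indicator functions.
import numpy as np
from scipy.signal import csd
from sklearn.neural_network import MLPClassifier

fs = 1.0                                   # one sample per time instant (assumed)
actor_a = np.array([1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0])
actor_b = np.array([0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1])

# Cross-correlation of the two indicator functions.
xcorr = np.correlate(actor_a - actor_a.mean(),
                     actor_b - actor_b.mean(), mode="full")
# Magnitude of the cross-power spectral density.
_, pxy = csd(actor_a, actor_b, fs=fs, nperseg=8)
features = np.concatenate([xcorr, np.abs(pxy)])

# A multilayer perceptron (one of the classifiers compared in the paper) would
# then be trained on many such vectors with dialogue / non-dialogue labels.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
```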

    Failure Prognosis of Wind Turbine Components

    Wind energy is playing an increasingly significant role in the world's energy supply mix. In North America, many utility-scale wind turbines are approaching, or are beyond, the half-way point of their originally anticipated lifespan. Accurate estimation of the times to failure of major turbine components can give wind farm owners insight into how to optimize the life and value of their farm assets. This dissertation deals with fault detection and failure prognosis of critical wind turbine sub-assemblies, including generators, blades, and bearings, based on data-driven approaches. The main aim of the data-driven methods is to use measurement data from the system to forecast the Remaining Useful Life (RUL) of faulty components accurately and efficiently. The main contributions of this dissertation are the application of ALTA lifetime analysis to help illustrate a possible relationship between varying loads and generator reliability, a wavelet-based Probability Density Function (PDF) for effectively detecting incipient wind turbine blade failure, an adaptive Bayesian algorithm for modeling the uncertainty inherent in the bearing RUL prediction horizon, and a Hidden Markov Model (HMM) that characterizes bearing damage progression across varying operating states, mimicking the real conditions in which wind turbines operate and recognizing that damage progression is a function of the stress applied to each component, using data from historical failures across three different Canadian wind farms.
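
    A minimal sketch of the HMM idea for bearing damage progression, assuming the hmmlearn library, a single synthetic vibration feature and three hidden damage states; the dissertation's actual features, state definitions and data are not reproduced here.

```python
# Minimal sketch: a Gaussian HMM whose hidden states stand for damage stages
# (healthy, degrading, near-failure). Data and state count are assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Hypothetical RMS-vibration feature drifting upward as damage progresses.
feature = np.concatenate([rng.normal(1.0, 0.1, 200),
                          rng.normal(2.0, 0.2, 200),
                          rng.normal(4.0, 0.4, 100)]).reshape(-1, 1)

model = GaussianHMM(n_components=3, covariance_type="diag",
                    n_iter=100, random_state=0)
model.fit(feature)
states = model.predict(feature)            # inferred damage-stage sequence
```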

    Analysis of interval‐grouped data in weed science: The binnednp Rcpp package

    Weed scientists are usually interested in the distribution and density functions of the random variable that relates weed emergence to environmental indices such as the hydrothermal time (HTT). However, in many situations experimental data are reported in a grouped way and, therefore, the standard nonparametric kernel estimators cannot be computed. Kernel estimators of the density and distribution functions for interval‐grouped data, as well as bootstrap confidence bands for these functions, have been proposed and implemented in the binnednp package. Analyses with different treatments can also be performed using a bootstrap approach and a Cramér‐von Mises type distance. Several bandwidth selection procedures were also implemented. The package also allows estimation of different emergence indices that measure the shape of the data distribution. The values of these indices are useful for selecting the soil depth at which HTT should be measured, which, in turn, maximizes the predictive power of the proposed methods. This paper presents the functions of the package and provides an example using an emergence data set of Avena sterilis (wild oat). The binnednp package provides the weed science research community with a unique set of tools for analyzing interval‐grouped data. Funding: Ministerio de Economía y Competitividad (AGL2015-64130-R, MTM2014-52876-R, MTM2017-82724-R, AGL2012-33736); Xunta de Galicia (ED431C-2016-015, ED431G/0).
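
    An illustrative sketch of the general idea behind kernel estimation from interval-grouped data (plain Python, not the binnednp R API): each bin midpoint receives a Gaussian kernel weighted by the bin's relative frequency. Bin edges, counts and the bandwidth are made-up values for a hypothetical emergence experiment.

```python
# Illustrative binned kernel density estimate (not the binnednp implementation).
import numpy as np

edges = np.array([0, 50, 100, 150, 200, 250])    # hydrothermal-time bins (assumed)
counts = np.array([3, 12, 25, 9, 1])              # emerged seedlings per bin (assumed)
mids = 0.5 * (edges[:-1] + edges[1:])
weights = counts / counts.sum()
h = 25.0                                          # bandwidth (assumed)

def binned_kde(x):
    """Weighted Gaussian-kernel density estimate evaluated at the points x."""
    u = (x[:, None] - mids[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (k * weights).sum(axis=1) / h

grid = np.linspace(0, 250, 251)
density = binned_kde(grid)
```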

    Modified Maximum Entropy Method and Estimating the AIF via DCE-MRI Data Analysis

    Background: For the kinetic models used in contrast-based medical imaging, specification of the arterial input function (AIF) is essential for estimating the physiological parameters of the tissue by solving an optimization problem. Objective: In the current study, we estimate the AIF based on a modified maximum entropy method. The effectiveness of several numerical methods for determining the kinetic parameters and the AIF is evaluated in situations where sufficient information about the AIF is not available. The purpose of this study is to identify an appropriate method for estimating this function. Materials and Methods: The modified algorithm combines the maximum entropy approach with an optimization method known as the teaching-learning method. Here, we applied this algorithm in a Bayesian framework to estimate the kinetic parameters while specifying the unique form of the AIF by the maximum entropy method. We assessed the ability of the proposed method to assign the kinetic parameters in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) when determining the AIF with several other parameter-estimation methods and a standard fixed-AIF method. A previously analyzed dataset consisting of contrast agent concentrations in tissue and plasma was used. Results and Conclusions: We compared the accuracy of the parameter estimates obtained from the modified maximum entropy method (MMEM) with those of the empirical method, the maximum likelihood method, moment matching ("method of moments"), the least-squares method, the modified maximum likelihood approach, and our previous work. Since the current algorithm does not suffer from the starting-point problem in the parameter estimation phase, it could find the model closest to the empirical model of the data; the results indicated the Weibull distribution as an appropriate and robust AIF and illustrated the power and effectiveness of the proposed method for estimating the kinetic parameters.
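
    A minimal sketch, assuming a Weibull-shaped AIF (the form the study's results point to) and a standard Tofts kinetic model; all parameter values (β, η, Ktrans, ve) are hypothetical placeholders, and the entropy-based fitting and optimization steps are omitted.

```python
# Sketch: a Weibull-form AIF convolved with a Tofts impulse response to give
# the tissue concentration. Parameter values are hypothetical placeholders.
import numpy as np

t = np.linspace(0, 5, 301)                 # time in minutes
dt = t[1] - t[0]

def weibull_aif(t, beta=2.0, eta=0.8, scale=5.0):
    """Weibull-shaped arterial input function (arbitrary units)."""
    return scale * (beta / eta) * (t / eta) ** (beta - 1) * np.exp(-(t / eta) ** beta)

def tofts_tissue_conc(t, aif, ktrans=0.25, ve=0.4):
    """C_t(t) = Ktrans * [AIF * exp(-Ktrans/ve * t)](t), discrete convolution."""
    irf = ktrans * np.exp(-(ktrans / ve) * t)
    return np.convolve(aif, irf)[: len(t)] * dt

c_p = weibull_aif(t)                       # plasma concentration (AIF)
c_t = tofts_tissue_conc(t, c_p)            # modeled tissue concentration
```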

    A systematic review of data quality issues in knowledge discovery tasks

    The volume of data is growing because organizations continuously capture large amounts of data to support better decision-making. The most fundamental challenge is to explore these large volumes of data and extract useful knowledge for future actions through knowledge discovery tasks; nevertheless, much of the data is of poor quality. We present a systematic review of data quality issues in knowledge discovery tasks and a case study applied to the agricultural disease known as coffee rust.

    Merging Data Sources to Predict Remaining Useful Life – An Automated Method to Identify Prognostic Parameters

    The ultimate goal of most prognostic systems is accurate prediction of the remaining useful life (RUL) of individual systems or components based on their use and performance. This class of prognostic algorithms is termed Degradation-Based, or Type III Prognostics. As equipment degrades, measured parameters of the system tend to change; these sensed measurements, or appropriate transformations thereof, may be used to characterize degradation. Traditionally, individual-based prognostic methods use a measure of degradation to make RUL estimates. Degradation measures may include sensed measurements, such as temperature or vibration level, or inferred measurements, such as model residuals or physics-based model predictions. Often, it is beneficial to combine several measures of degradation into a single parameter. Selection of an appropriate parameter is key to making useful individual-based RUL estimates, but methods to aid in this selection are absent in the literature. This dissertation introduces a set of metrics that characterize the suitability of a prognostic parameter. Parameter features such as trendability, monotonicity, and prognosability can be used to compare candidate prognostic parameters to determine which is most useful for individual-based prognosis. Trendability indicates the degree to which the parameters of a population of systems have the same underlying shape. Monotonicity characterizes the underlying positive or negative trend of the parameter. Finally, prognosability gives a measure of the variance in the critical failure value of a population of systems. By quantifying these features for a given parameter, the metrics can be used with any traditional optimization technique, such as Genetic Algorithms, to identify the optimal parameter for a given system. An appropriate parameter may be used with a General Path Model (GPM) approach to make RUL estimates for specific systems or components. A dynamic Bayesian updating methodology is introduced to incorporate prior information in the GPM methodology. The proposed methods are illustrated with two applications: first, to the simulated turbofan engine data provided in the 2008 Prognostics and Health Management Conference Prognostics Challenge and, second, to data collected in a laboratory milling equipment wear experiment. The automated system was shown to identify appropriate parameters in both situations and facilitate Type III prognostic model development.
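
    A short sketch of two of the suitability metrics named above, computed over a population of run-to-failure parameter paths. The formulations follow common definitions in the prognostics literature and are assumptions here; they may differ in detail from the dissertation's exact metrics, and the degradation paths are synthetic.

```python
# Sketch of monotonicity and prognosability for a population of run-to-failure
# parameter paths (one 1-D array per unit). Definitions are common in the
# prognostics literature and assumed here, not quoted from the dissertation.
import numpy as np

def monotonicity(paths):
    """Mean absolute difference between the fractions of positive and negative
    increments in each path (1 = perfectly monotonic)."""
    scores = []
    for p in paths:
        d = np.diff(p)
        scores.append(abs((d > 0).sum() - (d < 0).sum()) / (len(p) - 1))
    return float(np.mean(scores))

def prognosability(paths):
    """exp(-std of failure values / mean absolute range of each path):
    close to 1 when all units fail at a similar parameter value."""
    finals = np.array([p[-1] for p in paths])
    ranges = np.array([abs(p[-1] - p[0]) for p in paths])
    return float(np.exp(-finals.std() / ranges.mean()))

# Hypothetical degradation paths for three units.
rng = np.random.default_rng(1)
paths = [np.cumsum(rng.normal(0.1, 0.05, 100)) for _ in range(3)]
print(monotonicity(paths), prognosability(paths))
```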

    Support vector machine based classification in condition monitoring of induction motors

    Continuous and trouble-free operation of induction motors is an essential part of modern power and production plants. Faults and failures of electrical machinery may cause remarkable economic losses but also highly dangerous situations. In addition to analytical and knowledge-based models, the application of data-based models has established a firm position in induction motor fault diagnostics during the last decade. For example, pattern recognition with Neural Networks (NN) is widely studied. The Support Vector Machine (SVM) is a novel machine learning method introduced in the early 1990s. It is based on the statistical learning theory presented by V.N. Vapnik, and it has been successfully applied to numerous classification and pattern recognition problems such as text categorization, image recognition and bioinformatics. An SVM-based classifier is built to minimize the structural misclassification risk, whereas conventional classification techniques often minimize the empirical risk. Therefore, SVM is claimed to yield enhanced generalisation properties. Further, application of SVM results in a global solution to the classification problem. Finally, SVM-based classification is attractive because its efficiency does not directly depend on the dimension of the classified entities. This property is very useful in fault diagnostics, because the number of fault classification features does not have to be drastically limited. However, SVM has not yet been widely studied in the area of fault diagnostics; specifically, in the condition monitoring of induction motors it does not seem to have been considered before this research. In this thesis, an SVM-based classification scheme is designed for different tasks in induction motor fault diagnostics and for partial discharge analysis in insulation condition monitoring. Several variables are compared as fault indicators, and forces on the rotor are found to be important for fault detection, in contrast to the motor current that is currently widely studied. The measurement of forces is difficult, but easily measurable vibrations are directly related to the forces. Hence, vibration monitoring is considered in more detail as the medium for motor fault diagnostics. SVM classifiers are essentially two-class classifiers. In addition to the induction motor fault diagnostics, the results of this thesis cover various methods for coupling SVMs to carry out multi-class classification.
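
    A brief sketch of the multi-class coupling idea using scikit-learn: binary SVMs combined one-vs-one to separate several motor conditions from vibration-based features. The feature values, class definitions and kernel settings are synthetic assumptions, not the thesis's data or code.

```python
# Sketch: binary SVMs coupled one-vs-one for multi-class motor fault diagnosis.
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical vibration features (e.g. band RMS levels) for three conditions:
# 0 = healthy, 1 = broken rotor bar, 2 = bearing fault.
X = np.vstack([rng.normal(loc, 0.2, size=(30, 4)) for loc in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 30)

clf = OneVsOneClassifier(SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```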

    Uncertainty-Integrated Surrogate Modeling for Complex System Optimization

    Approximation models such as surrogate models provide a tractable substitute for expensive physical simulations and an effective solution to the potential lack of quantitative models of system behavior. These capabilities not only enable the efficient design of complex systems but are also essential for the effective analysis of physical phenomena and characteristics in Engineering, Material Science, Biomedical Science, and various other disciplines. Since these models provide an abstraction of the real system behavior (often a low-fidelity representative), it is important to quantify their accuracy and reliability without investing additional expensive system evaluations (simulations or physical experiments). Standard error measures, such as the mean squared error, the cross-validation error, and Akaike's information criterion, however, provide limited (often inadequate) information regarding the accuracy of the final surrogate model, while other, more effective, dedicated error measures are tailored towards only one class of surrogate models. This lack of accuracy information, and of the ability to compare and test diverse surrogate models, reduces confidence in model application, restricts appropriate model selection, and undermines the effectiveness of surrogate-based optimization. A key contribution of this dissertation is the development of a new model-independent approach to quantify the fidelity of a trained surrogate model in a given region of the design domain, called Predictive Estimation of Model Fidelity (PEMF). The PEMF method is derived from the hypothesis that the accuracy of an approximation model is related to the amount of data resources leveraged to train the model. In PEMF, intermediate surrogate models are iteratively constructed over heuristic subsets of sample points. The median and maximum errors estimated over the remaining points are used to determine the respective error distributions at each iteration. The estimated modes of the error distributions are represented as functions of the density of intermediate training points through nonlinear regression, assuming a smooth decreasing trend of errors with increasing sample density. These regression functions are then used to predict the expected median and maximum errors in the final surrogate model. When applied to a variety of benchmark problems, the model fidelities estimated by PEMF are observed to be up to two orders of magnitude more accurate and statistically more stable than those based on the popular leave-one-out cross-validation method. By leveraging this new paradigm for quantifying the fidelity of surrogate models, a novel automated surrogate model selection framework is also developed. This PEMF-based framework is called Concurrent Surrogate Model Selection (COSMOS). Unlike existing model selection methods, COSMOS coherently operates at all three levels necessary to facilitate optimal selection, i.e., (1) selecting the model type, (2) selecting the kernel function type, and (3) determining the optimal values of the typically user-prescribed parameters. The selection criteria that guide optimal model selection are determined by PEMF, and the search is performed using a MINLP solver.
    The effectiveness of COSMOS is demonstrated by successfully applying it to different benchmark and practical engineering problems, where it offers a first-of-its-kind globally competitive model selection. In this dissertation, the knowledge about the accuracy of a surrogate estimated using PEMF is also applied to develop a novel model management approach for engineering optimization. This approach adaptively selects the computational models (both physics-based models and surrogate models) of differing levels of fidelity and computational cost to be used during optimization, with the overall objective of yielding optimal designs with high-fidelity function estimates at a reasonable computational expense. In this technique, a new adaptive model switching (AMS) metric is defined to guide the switching from one model to the next higher-fidelity model during the optimization process. The switching criterion is based on whether the uncertainty associated with the current model output dominates the latest improvement of the relative fitness function, where both the model output uncertainty and the function improvement (across the population) are expressed as probability distributions. This adaptive model switching technique is applied to two practical problems through Particle Swarm Optimization to illustrate: (i) the computational advantage of this method over purely high-fidelity model-based optimization, and (ii) the accuracy advantage of this method over purely low-fidelity model-based optimization. Motivated by the unique capabilities of the model switching concept, a new model refinement approach is also developed in this dissertation. The model refinement approach can be perceived as an adaptive sequential sampling approach applied in surrogate-based optimization. Decisions regarding when to perform additional system evaluations to refine the model are guided by the same model-uncertainty principles as in the adaptive model switching technique. The effectiveness of this new model refinement technique is illustrated through application to practical surrogate-based optimization in the area of energy sustainability.
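
    A rough, heavily simplified sketch of the PEMF idea (not the published implementation): intermediate surrogates are trained on growing random subsets, the median held-out error is recorded for each, a smooth decreasing error-versus-sample-size trend is fitted, and that trend is extrapolated to the full sample size to predict the fidelity of the final surrogate. The toy function, Gaussian-process surrogate, subset sizes and power-law trend are all assumptions.

```python
# Heavily simplified PEMF-style sketch: extrapolate a fitted error-vs-size
# trend of intermediate surrogates to predict the final surrogate's fidelity.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def expensive_model(x):
    """Toy stand-in for an expensive simulation."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

X = rng.uniform(-1, 1, size=(60, 2))
y = expensive_model(X)

sizes, med_errors = [], []
for n in (20, 30, 40, 50):                 # growing intermediate training sizes
    for _ in range(5):                     # heuristic random subsets
        idx = rng.permutation(len(X))
        gp = GaussianProcessRegressor().fit(X[idx[:n]], y[idx[:n]])
        err = np.abs(gp.predict(X[idx[n:]]) - y[idx[n:]])
        sizes.append(n)
        med_errors.append(np.median(err))

def decay(n, a, b):
    """Assumed smooth decreasing error trend versus number of training points."""
    return a * np.asarray(n, dtype=float) ** (-b)

(a, b), _ = curve_fit(decay, sizes, med_errors, p0=(1.0, 1.0))
print("predicted median error of the final surrogate:", decay(len(X), a, b))
```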