
    A Regression Algorithm for the Smart Prognosis of a Reversed Polarity Fault in a Photovoltaic Generator

    This paper deals with a smart algorithm for reversed-polarity fault diagnosis and prognosis in PV generators. The proposed prognosis (prediction) approach is based on hybridizing a support vector regression (SVR) technique with a k-NN regression tool (K-NNR) that handles undetermined outputs. To test the algorithm's performance, a PV generator database containing sample data is used for simulation purposes.
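
The fallback role of the K-NNR for undetermined outputs can be sketched in a few lines. This is a minimal illustration under assumptions: 1-D features, a stand-in for the trained SVR, and invented PV data; none of it is from the paper.

```python
# Minimal sketch of the SVR/K-NNR hybrid: use the SVR prediction when
# it is defined, and fall back to k-NN regression when the output is
# undetermined. Data and the SVR stub are illustrative assumptions.

def knn_regress(train_x, train_y, query, k=3):
    """Predict the mean target of the k nearest training samples
    (1-D features, absolute distance)."""
    nearest = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - query))[:k]
    return sum(y for _, y in nearest) / k

def hybrid_predict(svr_predict, train_x, train_y, query):
    """SVR first; K-NNR fallback for the undetermined-output case."""
    y = svr_predict(query)
    return y if y is not None else knn_regress(train_x, train_y, query)

# Toy PV data: string current vs. normalized irradiance.
irradiance = [0.2, 0.4, 0.6, 0.8, 1.0]
current = [1.0, 2.1, 2.9, 4.2, 5.0]
svr_stub = lambda q: None  # stands in for an SVR that abstains here
estimate = hybrid_predict(svr_stub, irradiance, current, 0.5)
```

In a real pipeline the stub would be a trained SVR; the point is only the branch between the two regressors.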

    Using Machine Learning to Predict Hypervelocity Fragment Propagation of Space Debris Collisions

    The future of spaceflight is threatened by the increasing amount of space debris, especially in the near-Earth environment. To continue operations, accurate characterization of hypervelocity fragment propagation following collisions and explosions is imperative. While large debris particles can be tracked by current methods, small particles are often missed. This paper presents a method to estimate fragment fly-out properties, such as fragment position, velocity, and mass distributions, using machine learning. Previous work was performed on terrestrial data and associated simulations representing space debris collisions. High-velocity fragmentation can be modeled by terrestrial fragmentation tests, such as static detonations. Recently, stereoscopic imaging techniques have been added to static arena testing. Collecting data with this method provides position vector and mass information faster and more accurately than previous manual-collection methods, and little space debris data of similar accuracy exists for individual fragments; this imaging technique was therefore the primary collection method for the previous research data. Now, two-line element (TLE) sets for Iridium 33 are also used. Machine learning methodologies are leveraged to predict fragmentation fly-out from Iridium 33's collision with Cosmos 2251. First, Gaussian mixture models (GMMs) are used to model the probability distribution of the particles for a given characteristic at Julian dates following the event. Once this training data is generated, regression techniques can be used to predict these characteristics. K-nearest neighbor (K-NN) regressors are used to estimate the spatial distribution of the satellite fragments. Monte Carlo simulations are then used to validate the results, finding that this technique accurately estimates the total number of fragments expected to intersect a region of interest at a given time.
    Following this work, the same technique can be used to estimate the velocity and mass distributions of the debris. This information can then be used to estimate a particle's kinetic energy and classify it to help avoid future debris collisions.
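
The K-NN regression step can be sketched as a toy (not the study's code): the training pairs of days-since-event and fragment counts below are invented rather than derived from TLE propagations.

```python
# Illustrative sketch: k-NN regression over epochs to estimate the
# number of fragments expected in a region of interest. In the study
# the training data would come from GMM-modeled TLE propagations;
# here the (day, count) pairs are made up.

def knn_count(train, query_day, k=3):
    """Average the fragment counts of the k nearest training epochs."""
    nearest = sorted(train, key=lambda p: abs(p[0] - query_day))[:k]
    return sum(c for _, c in nearest) / k

history = [(1, 120), (5, 98), (10, 91), (20, 80), (40, 72)]
estimate = knn_count(history, query_day=8)
```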

    Metode K-nearest Neighbor Berbasis Forward Selection Untuk Prediksi Harga Komoditi Lada (Forward-Selection-Based K-Nearest Neighbor for Predicting Pepper Commodity Prices)

    Many researchers are motivated to improve prediction performance. K-Nearest Neighbor (KNN) is an algorithm for both regression and classification that has been successfully applied in many fields. At the same time, choosing appropriate variables can further improve a model's performance. This study aims to develop a prediction model that combines the K-Nearest Neighbor algorithm with an attribute selection method, specifically forward selection, to predict pepper commodity prices. The proposed model is evaluated on black pepper and white pepper time series data. The results show that forward-selection-based K-Nearest Neighbor performs best compared with backward-elimination-based KNN and attribute-selection-based SVM.
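
The forward-selection wrapper around a k-NN regressor can be sketched as follows. This is a hedged illustration with invented features and leave-one-out absolute error as the selection criterion; the paper's actual criterion and data differ.

```python
# Greedy forward selection around a k-NN regressor: repeatedly add the
# feature that most reduces leave-one-out error, stopping when no
# feature helps. Feature names and data are illustrative.

def knn_predict(rows, targets, q, feats, k=2):
    d = lambda r: sum((r[f] - q[f]) ** 2 for f in feats) ** 0.5
    idx = sorted(range(len(rows)), key=lambda i: d(rows[i]))[:k]
    return sum(targets[i] for i in idx) / k

def loo_error(rows, targets, feats, k=2):
    """Mean absolute leave-one-out prediction error."""
    err = 0.0
    for i in range(len(rows)):
        tr, ty = rows[:i] + rows[i + 1:], targets[:i] + targets[i + 1:]
        err += abs(knn_predict(tr, ty, rows[i], feats, k) - targets[i])
    return err / len(rows)

def forward_select(rows, targets, all_feats, k=2):
    chosen, best, improved = [], float("inf"), True
    while improved:
        improved = False
        for f in all_feats:
            if f in chosen:
                continue
            e = loo_error(rows, targets, chosen + [f], k)
            if e < best:
                best, pick, improved = e, f, True
        if improved:
            chosen.append(pick)
    return chosen, best

# Toy data: feature 'x' tracks the target, 'n' is noise.
rows = [{'x': 1, 'n': 5}, {'x': 2, 'n': 1}, {'x': 3, 'n': 4}, {'x': 4, 'n': 2}]
targets = [1.0, 2.0, 3.0, 4.0]
selected, err = forward_select(rows, targets, ['x', 'n'])
```

On this toy set the procedure keeps only the informative feature 'x'.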

    A New Efficiency Improvement of Ensemble Learning for Heart Failure Classification by Least Error Boosting

    Heart failure is a very common disease and often a silent threat. It is costly to detect and treat, and its incidence continues to rise. Researchers have applied various ensemble learning methods to cardiovascular disease data, but classification efficiency has not been high enough, owing to the cumulative error any weak learner can introduce and to the accuracy of the vote-predicted class label. The objective of this research is to develop a new algorithm that improves the efficiency of classifying patients with heart failure. This paper proposes Least Error Boosting (LEBoosting), a new algorithm that improves adaboost.m1's performance for higher classification accuracy. The learning algorithm finds the lowest error among various weak learners, using the lowest possible error to update the distribution and create the best final hypothesis. The experiments use the heart failure clinical records dataset, which contains 13 features of cardiac patients. Performance is measured through precision, recall, f-measure, accuracy, and the ROC curve. The experiments found that the proposed method performed well compared to naïve Bayes, k-NN, and decision tree, and outperformed other ensembles including bagging, logitBoost, LPBoost, and adaboost.m1, with an accuracy of 98.89%; it also accurately classified patients who died, whereas decision tree and bagging could not distinguish them at all. The findings show that LEBoosting maximizes error reduction in the weak learners' training process, maximizing the effectiveness of cardiology classifiers and providing theoretical guidance for developing models for the analysis and prediction of heart disease.
    The novelty of this research is improving the original ensemble learning by finding the weak learner with the lowest error in order to update the best distribution to the final hypothesis, which gives LEBoosting the highest classification efficiency. DOI: 10.28991/ESJ-2023-07-01-010
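
The round-wise least-error selection can be illustrated with a binary AdaBoost-style toy using decision stumps. This is a hedged sketch with invented data, not the published LEBoosting implementation: each round evaluates a pool of weak learners and keeps only the one with the lowest weighted error before the usual weight update.

```python
import math

# Toy least-error boosting: per round, pick the stump with minimum
# weighted error, then apply an AdaBoost-style weight update.
# Labels are +/-1; data and the stump pool are illustrative.

def stump(threshold, sign):
    return lambda x: sign if x >= threshold else -sign

def weighted_error(h, xs, ys, w):
    return sum(wi for xi, yi, wi in zip(xs, ys, w) if h(xi) != yi)

def le_boost(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n
    pool = [stump(t, s) for t in xs for s in (1, -1)]
    ensemble = []
    for _ in range(rounds):
        h = min(pool, key=lambda g: weighted_error(g, xs, ys, w))  # least error
        err = max(weighted_error(h, xs, ys, w), 1e-12)
        alpha = 0.5 * math.log((1 - err) / err)
        w = [wi * math.exp(-alpha * yi * h(xi)) for xi, yi, wi in zip(xs, ys, w)]
        z = sum(w)
        w = [wi / z for wi in w]
        ensemble.append((alpha, h))
    def predict(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return predict

xs = [1, 2, 3, 4, 5, 6]
ys = [-1, -1, -1, 1, 1, 1]
clf = le_boost(xs, ys)
```

AdaBoost.M1 proper handles multiclass voting; this binary sketch only shows the "choose the least-error learner each round" idea.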

    Application of Unsupervised K Nearest Neighbor (UNN) and Learning Vector Quantization (LVQ) Methods in Predicting Rupiah to Dollar

    One of the factors in a country's economy is the exchange rate of its currency against other currencies. The exchange rate of the Rupiah against the US Dollar can change quickly depending on environmental conditions and has a huge impact on the Indonesian Government. In this research, Learning Vector Quantization (LVQ) and Unsupervised K Nearest Neighbor (UNN) were implemented to predict the currency's value against the dollar: the UNN method was used to predict the selling rate, and the LVQ method the buying rate. The input data are the selling-rate, buying-rate, and interest time series from the central bank of the United States. From the results and discussion, UNN achieved the lowest MAPE, 1.544%, with 25 data points, and the LVQ algorithm achieved accurate forecasts with 25 data points and a learning rate of 0.075. The amount of training data and the number of patterns in one LVQ class can affect the results of the study and of the system.
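
The MAPE metric quoted above and an LVQ prototype update can be sketched as follows. The exchange-rate numbers are invented; the learning-rate default mirrors the reported 0.075, but the update form is the standard LVQ1 rule, assumed here rather than taken from the paper.

```python
# Helpers for the two methods above: MAPE (the reported error metric)
# and a single LVQ1 prototype update step. Toy data only.

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def lvq1_update(proto, x, same_class, lr=0.075):
    """Move the winning prototype toward x if classes match, away otherwise."""
    step = lr if same_class else -lr
    return [p + step * (xi - p) for p, xi in zip(proto, x)]

# Invented Rupiah/USD selling rates vs. forecasts.
sell_actual = [14100, 14150, 14200]
sell_pred = [14000, 14200, 14150]
error_pct = mape(sell_actual, sell_pred)
```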

    Predicting Dynamic Fragmentation Characteristics from High-Impact Energy Events Utilizing Terrestrial Static Arena Test Data and Machine Learning

    To continue space operations amid increasing space debris, accurate characterization of fragment fly-out properties from hypervelocity impacts is essential. However, with realistic experimentation limited and data needed, available static arena test data, collected using a novel stereoscopic imaging technique, is the primary dataset for this paper. This research leverages machine learning methodologies to predict fragmentation characteristics using combined data from this imaging technique and simulations produced under dynamic impact conditions. Gaussian mixture models (GMMs), fit via expectation maximization (EM), are used to model fragment track intersections on a defined surface of intersection. After modeling the fragment distributions, k-nearest neighbor (K-NN) regressors are used to predict the desired characteristics. Using Monte Carlo simulations, the K-NN regression is shown to predict the distributions of the total number of fragments intersecting a given surface, as well as the expected total fragment velocity and mass associated with that surface. This information can then be used to estimate a particle's kinetic energy, classify the particle, and avoid debris collisions.
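
The GMM-fitting step can be illustrated with a 1-D EM loop. This is a hedged sketch: the paper models 2-D track intersections on a surface, while the data below are synthetic and the extremes-based initialization is an assumption made for determinism.

```python
import math

# Fit a 2-component 1-D Gaussian mixture by expectation maximization.

def em_gmm_1d(data, iters=50):
    mus = [min(data), max(data)]   # deterministic initialization (assumption)
    sigmas = [1.0, 1.0]
    pis = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            dens = [pis[j] / (sigmas[j] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((x - mus[j]) / sigmas[j]) ** 2)
                    for j in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, standard deviations
        for j in range(2):
            nj = sum(r[j] for r in resp)
            pis[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)   # floor avoids collapse
    return pis, mus, sigmas

# Synthetic "intersection positions" with two well-separated clusters.
positions = [0.1, -0.2, 0.05, 5.1, 4.9, 5.2]
pis, mus, sigmas = em_gmm_1d(positions)
```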

    Simple Sensitivity Analysis for Orion GNC

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, covering everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool, or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can show where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimated success probability, and a technique for determining whether pairs of factors interact dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, along with a summary and physics discussion of the EFT-1 driving factors the tool found.
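
One form the estimated-success-probability measure could take is a conditional pass rate over Monte Carlo runs. This is purely illustrative: the CFT's actual statistic is not specified here, and the data and the median-split choice below are invented.

```python
# Illustrative sensitivity measure: estimate the probability of meeting
# a requirement conditioned on an input variable falling in the low or
# high half of its dispersion. A large gap flags the input as a driver.

def success_prob_by_half(inputs, passed):
    """Split runs at the input's median; return pass rate per half."""
    med = sorted(inputs)[len(inputs) // 2]
    lo = [p for x, p in zip(inputs, passed) if x < med]
    hi = [p for x, p in zip(inputs, passed) if x >= med]
    rate = lambda g: sum(g) / len(g)
    return rate(lo), rate(hi)

# Toy: heavier vehicles tend to miss a touchdown-distance requirement.
mass = [100, 101, 102, 103, 110, 111, 112, 113]
ok = [1, 1, 1, 1, 0, 1, 0, 0]
lo_rate, hi_rate = success_prob_by_half(mass, ok)
```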

    Machine Learning to Predict Warhead Fragmentation In-Flight Behavior from Static Data

    Accurate characterization of fragment fly-out properties from high-speed warhead detonations is essential for estimating the collateral damage and lethality of a given weapon. Real warhead dynamic detonation tests are rare, costly, and often unrealizable with current technology, leaving fragmentation experiments limited to static arena tests and numerical simulations. Stereoscopic imaging techniques can now provide static arena tests with time-dependent tracks of individual fragments, each with characteristics such as a fragment ID and its position vector. Simulation methods can account for the dynamic case but can exclude relevant dynamics experienced in real-life warhead detonations. This research leverages machine learning methodologies to predict fragmentation characteristics using data from this imaging technique combined with simulation data. Gaussian mixture models (GMMs), fit via expectation maximization (EM), are used to model fragment track intersections on a defined surface of intersection. After modeling the fragment distributions, k-nearest neighbor (K-NN) regressors are used to predict the desired fragmentation characteristics. Using Monte Carlo simulations, the K-NN regression is shown to predict the distributions of the total number of fragments intersecting a given surface and of the total fragment velocity and mass associated with that surface. An ability to predict fragment fly-out characteristics accurately and quickly would provide information for evaluating the collateral damage and lethality of a given weapon.
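
The Monte Carlo validation step can be sketched as sampling impact positions from a fitted mixture and counting how many land in a surface window. A 1-D stand-in for the surface of intersection is used; the mixture parameters, window, and fragment count are all invented.

```python
import random

# Monte Carlo check: draw fragment positions from a Gaussian mixture
# and estimate the expected count inside a window on the surface.

def sample_mixture(pis, mus, sigmas, n, rng):
    """Draw n points from a Gaussian mixture."""
    points = []
    for _ in range(n):
        u, j, acc = rng.random(), 0, pis[0]
        while u > acc:          # pick a component by its weight
            j += 1
            acc += pis[j]
        points.append(rng.gauss(mus[j], sigmas[j]))
    return points

def expected_count_in_window(pis, mus, sigmas, lo, hi, n_frag,
                             trials=2000, seed=1):
    """Monte Carlo estimate of fragments expected inside [lo, hi]."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pts = sample_mixture(pis, mus, sigmas, n_frag, rng)
        total += sum(lo <= p <= hi for p in pts)
    return total / trials

# Two invented fragment clouds; window covers most of the first one.
est = expected_count_in_window([0.6, 0.4], [0.0, 5.0], [0.5, 0.5],
                               -1.0, 1.0, n_frag=100)
```

Analytically, about 0.6 × 95.45% of 100 fragments (≈57) should fall in the window, so the estimate doubles as a sanity check on the sampler.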

    Enhanced Virtual Metrology on Chemical Mechanical Planarization Process using an Integrated Model and Data-Driven Approach

    As an essential process in semiconductor manufacturing, Chemical Mechanical Planarization has been studied for decades, and the material removal rate has proved to be a critical performance indicator. Compared with after-process metrology, virtual metrology offers production time savings and quick response for process control. This paper presents an enhanced material removal rate prediction algorithm based on an integrated model- and data-driven method. The proposed approach combines the physical mechanism with the influence of nearest neighbors and extracts relevant features. The features are then used to construct multiple regression models, which are integrated to obtain the final prognosis. The method was evaluated on the PHM 2016 Data Challenge data sets and achieved the best mean squared error score among the competitors.
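
The model- and data-driven integration can be sketched as a physics prior blended with data-driven regressors. This is a hedged illustration: Preston's law is a standard CMP removal-rate model, but the coefficient, the example predictions, and the inverse-error weighting below are assumptions, not the paper's method.

```python
# Sketch: combine a physics-based removal-rate estimate with multiple
# data-driven predictions, weighting each model by its inverse
# validation error. All numbers are illustrative.

def preston_mrr(pressure, velocity, kp=1.0e-3):
    """Physics prior: Preston's law, MRR = kp * pressure * velocity."""
    return kp * pressure * velocity

def blend(predictions, val_errors):
    """Inverse-error weighted average of model predictions."""
    weights = [1.0 / e for e in val_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

# Physics estimate plus two hypothetical data-driven predictions.
physics = preston_mrr(pressure=2.0e4, velocity=1.5)  # arbitrary units
final = blend([physics, 31.0, 29.5], [2.0, 1.0, 1.5])
```

Weighting by inverse validation error is one common way to integrate heterogeneous regressors; stacking with a meta-learner is another.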