353 research outputs found

    A Fault-Tolerant Regularizer for RBF Networks


    Hybrid Learning Algorithm of Radial Basis Function Networks for Reliability Analysis

    With the wide application of industrial robots in precision machining, reliability analysis of positioning accuracy is becoming increasingly important. Since an industrial robot is a complex nonlinear system, traditional approximate reliability methods often produce unreliable results when analyzing its positioning accuracy. To study the positioning-accuracy reliability of industrial robots more efficiently and accurately, a radial basis function network is used to construct the mapping between the uncertain parameters and the position coordinates of the end-effector. Combined with the Monte Carlo simulation method, the positioning-accuracy reliability is then evaluated. A novel hybrid learning algorithm for training the radial basis function network, which integrates a clustering learning algorithm and the orthogonal least squares learning algorithm, is proposed in this article. Examples are presented to illustrate the efficiency and reliability of the proposed method.
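    The following minimal sketch (not the authors' implementation) illustrates the general idea described above: a Gaussian RBF surrogate maps uncertain parameters to a positioning error, with centers chosen by k-means clustering and output weights by least squares, loosely mirroring the clustering/orthogonal-least-squares hybrid, and reliability is then estimated by Monte Carlo sampling of the surrogate. All data, function names, and the tolerance value are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, n_centers=20, width=1.0):
    """Fit a Gaussian-RBF surrogate: k-means centers + least-squares weights."""
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w, width

def rbf_predict(X, centers, w, width):
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2 / (2 * width ** 2))
    return Phi @ w

# Illustrative data: uncertain joint parameters -> end-effector position error (toy stand-in
# for the kinematic model, not a real robot)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))                      # 6 uncertain parameters
y_train = np.sin(X_train).sum(axis=1) * 0.1              # stand-in for positioning error
centers, w, width = fit_rbf(X_train, y_train)

# Monte Carlo reliability estimate: P(|position error| < tolerance)
X_mc = rng.normal(size=(100_000, 6))
err = rbf_predict(X_mc, centers, w, width)
tolerance = 0.3                                          # assumed accuracy requirement
print("estimated reliability:", np.mean(np.abs(err) < tolerance))
```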

    Non-Gaussian Hybrid Transfer Functions: Memorizing Mine Survivability Calculations

    Hybrid algorithms and models have received significant interest in recent years and are increasingly used to solve real-world problems. Departing from existing methods of radial basis transfer function construction, this study proposes a novel nonlinear-weight hybrid algorithm involving non-Gaussian radial basis transfer functions. The speed and simplicity of the non-Gaussian type are combined with the accuracy of the radial basis function to produce a fast and accurate on-the-fly model for the survivability of emergency mine rescue operations; that is, survivability under all conditions is precalculated and used to train the neural network. The proposed hybrid uses a genetic algorithm as the learning method, performing parameter optimization within an integrated analytic framework to improve network efficiency. Finally, the network parameters, including mean iteration count, standard variation, standard deviation, convergence time, and optimized error, are evaluated using the mean squared error. The results demonstrate that the hybrid model reduces computational complexity, increases robustness, and optimizes its parameters. This novel hybrid model shows outstanding performance and is competitive with other existing models.
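    As a hedged illustration of the idea of pairing a non-Gaussian radial basis transfer function with a genetic-style parameter search, the sketch below fits an RBF model with an inverse multiquadric basis (one common non-Gaussian choice; the paper's exact transfer function and GA settings are not reproduced here) and tunes the width parameter by simple selection and mutation, scoring candidates by mean squared error.

```python
import numpy as np

rng = np.random.default_rng(1)

def inv_multiquadric(r, w):
    """One common non-Gaussian radial basis function (illustrative choice)."""
    return 1.0 / np.sqrt(r ** 2 + w ** 2)

def rbf_mse(width, X, y, centers):
    """Fit linear output weights by least squares and return the mean squared error."""
    r = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
    Phi = inv_multiquadric(r, width)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.mean((Phi @ coef - y) ** 2)

# Toy training data and randomly chosen centers
X = rng.uniform(-3, 3, size=(300, 2))
y = np.exp(-np.linalg.norm(X, axis=1))
centers = X[rng.choice(len(X), 15, replace=False)]

# Minimal genetic-style search over the width parameter (selection + mutation only)
pop = rng.uniform(0.1, 3.0, size=20)
for generation in range(30):
    fitness = np.array([rbf_mse(w, X, y, centers) for w in pop])
    parents = pop[np.argsort(fitness)[:5]]                  # keep the best widths
    children = parents.repeat(3) + rng.normal(0, 0.1, 15)   # mutate copies of them
    pop = np.clip(np.concatenate([parents, children]), 0.05, None)
best = min(pop, key=lambda w: rbf_mse(w, X, y, centers))
print("best width:", best, "MSE:", rbf_mse(best, X, y, centers))
```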

    Data Mining Applications to Fault Diagnosis in Power Electronic Systems: A Systematic Review


    Intelligent energy management system: techniques and methods

    Our environment is an asset to be managed carefully and is not an expendable resource to be taken for granted. The main original contribution of this thesis is in formulating intelligent techniques and simulating case studies to demonstrate the significance of the present approach for achieving a low carbon economy. Energy boosts crop production, drives industry and increases employment. Wise energy use is the first step to ensuring sustainable energy for present and future generations. Energy services are essential for meeting internationally agreed development goals. The energy management system lies at the heart of all infrastructure, from communications and the economy to transportation and society at large. This has made the system more complex and more interdependent. The increasing number of disturbances occurring in the system has raised the priority of the energy management infrastructure, which has been improved with the aid of technology and investment; suitable methods for optimizing the system are presented in this thesis. Since the current system faces various problems, such as increasing disturbances, operation at its limits, aging equipment, and load changes, an improvement is essential to minimize these problems. To enhance the current system and resolve these issues, the smart grid has been proposed as a solution to power problems and a means of preventing future failures. This thesis argues that the smart grid combines computational intelligence and smart meters to improve the reliability, stability and security of power. In comparison with the current system, it is more intelligent, reliable, stable and secure, and will reduce the number of blackouts and other failures that occur on the power grid. The thesis also reports that smart metering is a technically feasible way to improve energy efficiency. In the thesis, a new hybrid model based on wavelet transforms, a floating-point genetic algorithm and an artificial neural network is developed for accurate short-term load forecasting. The new model is more accurate than a radial basis function network. Actual data are used to test the proposed method, and it is demonstrated that this integrated intelligent technique is very effective for load forecasting. Choosing an appropriate algorithm is important for implementing optimization in daily power system tasks. The potential application of swarm intelligence to Optimal Reactive Power Dispatch (ORPD) is also shown in this thesis. After comparing the results obtained from swarm intelligence, an improved genetic algorithm and a conventional gradient-based optimization method, it is concluded that swarm intelligence performs better in terms of performance and precision in solving optimal reactive power dispatch problems.
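    The wavelet/GA/neural-network hybrid itself is not reproduced here; as a rough sketch of the decomposition idea under stated assumptions (the PyWavelets library, a 'db4' wavelet, and a least-squares autoregression standing in for the neural network), the example below splits a toy load series into a smooth wavelet approximation and a residual and forecasts each component one step ahead.

```python
import numpy as np
import pywt  # PyWavelets (assumed available)

rng = np.random.default_rng(2)

# Toy hourly load series: daily cycle plus noise (stand-in for real demand data)
t = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

# Wavelet decomposition: keep the approximation as the smooth "trend" component
coeffs = pywt.wavedec(load, 'db4', level=3)
smooth = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], 'db4')[:t.size]
residual = load - smooth

def ar_forecast(series, order=24):
    """One-step-ahead linear autoregression fitted by least squares (stand-in for the ANN)."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return series[-order:] @ w

forecast = ar_forecast(smooth) + ar_forecast(residual)
print("next-hour load forecast:", forecast)
```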

    Design and implementation of resilient attitude estimation algorithms for aerospace applications

    Satellite attitude estimation is a critical component of satellite attitude determination and control systems, relying on highly accurate sensors such as IMUs, star trackers, and sun sensors. However, the complex space environment can cause sensor performance degradation or even failure. To address this issue, FDIR (fault detection, isolation, and recovery) systems are necessary. This thesis presents a novel approach to satellite attitude estimation that utilizes an Inertial Navigation System (INS) to achieve high accuracy with a low computational load. The algorithm is based on a two-layer Kalman filter, which incorporates the quaternion estimator (QUEST) algorithm, the FQA, linear interpolation (LERP) algorithms, and a Kalman filter (KF). Moreover, the thesis proposes an FDIR system for the INS that can detect and isolate faults and recover the system safely. This system includes two-layer fault detection with isolation and two-layered recovery, which utilizes an Adaptive Unscented Kalman Filter (AUKF), the QUEST algorithm, residual generators, Radial Basis Function (RBF) neural networks, and an adaptive complementary filter (ACF). The two fault detection layers aim to isolate and identify faults while decreasing the rate of false alarms. An FPGA-based FDIR system is also designed and implemented in this thesis to reduce latency while maintaining normal resource consumption. Finally, a Fault Tolerance Federated Kalman Filter (FTFKF) is proposed to fuse the output from the INS and the CNS to achieve high-precision and robust attitude estimation. The findings of this study provide a solid foundation for the development of FDIR systems for various applications such as robotics, autonomous vehicles, and unmanned aerial vehicles, particularly for satellite attitude estimation. The proposed INS-based approach with the FDIR system has demonstrated high accuracy, fault tolerance, and a low computational load, making it a promising solution for satellite attitude estimation in harsh space environments.
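    This is not the thesis implementation; as a minimal sketch of the static attitude determination step that QUEST solves, the code below implements Davenport's q-method (a close relative of QUEST), recovering the attitude quaternion as the dominant eigenvector of the K matrix built from weighted vector observations. The reference vectors, weights, and test rotation are illustrative.

```python
import numpy as np

def q_method(body_vecs, ref_vecs, weights):
    """Davenport's q-method: optimal quaternion from weighted vector observation pairs."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    eigvals, eigvecs = np.linalg.eigh(K)
    q = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)       # quaternion [qx, qy, qz, qw]

# Illustrative check: two reference directions (e.g. sun vector and magnetic field)
# observed in a body frame rotated 30 degrees about the z axis
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
refs = [np.array([1.0, 0, 0]), np.array([0, 0, 1.0])]
body = [R.T @ v for v in refs]         # measurements expressed in the body frame
print(q_method(body, refs, weights=[0.7, 0.3]))
```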

    Detection of network anomalies and novel attacks in the internet via statistical network traffic separation and normality prediction

    With the advent and explosive growth of the global Internet and the electronic commerce environment, adaptive/automatic network and service anomaly detection is fast gaining critical research and practical importance. If the next generation of network technology is to operate beyond the levels of current networks, it will require a set of well-designed tools for its management that provide the capability of dynamically and reliably identifying network anomalies. Early detection of network anomalies and performance degradations is key to rapid fault recovery and robust networking, and has been receiving increasing attention lately. In this dissertation we present a network anomaly detection methodology, which relies on the analysis of network traffic and the characterization of the dynamic statistical properties of traffic normality, in order to detect network anomalies accurately and in a timely manner. Anomaly detection is based on the concept that perturbations of normal behavior suggest the presence of anomalies, faults, attacks, etc. This methodology can be uniformly applied to detect network attacks, especially in cases where novel attacks are present and the nature of the intrusion is unknown. Specifically, in order to provide an accurate identification of normal network traffic behavior, we first develop an anomaly-tolerant non-stationary traffic prediction technique, which is capable of removing both pulse and continuous anomalies. Furthermore, we introduce and design dynamic thresholds, and based on them we define adaptive anomaly violation conditions as a combined function of both the magnitude and duration of the traffic deviations. Numerical results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach under different anomaly traffic scenarios and attacks, such as mail-bombing and UDP flooding attacks. In order to improve the prediction accuracy of the statistical network traffic normality, especially in cases where high burstiness is present, we propose, study, and analyze a new network traffic prediction methodology, based on frequency-domain traffic analysis and filtering, with the objective of enhancing the network anomaly detection capabilities. Our approach is based on the observation that the various network traffic components are better identified, represented, and isolated in the frequency domain. As a result, the traffic can be effectively separated into a baseline component, which includes most of the low-frequency traffic and presents low burstiness, and the short-term traffic, which includes the most dynamic part. The baseline traffic is a mean non-stationary periodic time series, and the Extended Resource-Allocating Network (BRAN) methodology is used for its accurate prediction. The short-term traffic is shown to be a time-dependent series, and the Autoregressive Moving Average (ARMA) model is proposed for the accurate prediction of this component. Furthermore, it is demonstrated that the proposed enhanced traffic prediction strategy can be combined with the use of dynamic thresholds and adaptive anomaly violation conditions in order to improve the network anomaly detection effectiveness. The performance evaluation of the proposed overall strategy, in terms of the achievable network traffic prediction accuracy and anomaly detection capability, and the corresponding numerical results demonstrate and quantify the significant improvements that can be achieved.
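    A hedged sketch of the frequency-domain separation and dynamic-threshold ideas described above (not the dissertation's BRAN/ARMA implementation): the low-frequency part of a toy traffic trace is taken as the baseline, the remainder as the short-term component, and an anomaly is flagged only when the short-term signal exceeds a rolling mean plus a multiple of the rolling standard deviation for a sustained duration. The cutoff, window, and threshold parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy traffic: daily periodic baseline plus bursty noise, with an injected flood anomaly
t = np.arange(7 * 288)                                   # one week of 5-minute samples
traffic = 500 + 200 * np.sin(2 * np.pi * t / 288) + rng.gamma(2, 15, t.size)
traffic[1500:1530] += 600                                # simulated UDP-flood burst

# Frequency-domain separation: low frequencies -> baseline, remainder -> short-term traffic
spectrum = np.fft.rfft(traffic)
baseline_spec = np.zeros_like(spectrum)
baseline_spec[:20] = spectrum[:20]                       # assumed low-pass cutoff (20 bins)
baseline = np.fft.irfft(baseline_spec, n=t.size)
short_term = traffic - baseline

# Dynamic threshold: rolling mean + k * rolling std on the short-term component,
# with an anomaly declared only if the violation persists for several samples
win, k, min_duration = 144, 4.0, 5
flagged = set()
for i in range(win, t.size):
    hist = short_term[i - win:i]
    if short_term[i] > hist.mean() + k * hist.std():
        flagged.add(i)
persistent = sorted(i for i in flagged
                    if all(j in flagged for j in range(i, i + min_duration)))
print("persistent anomaly start indices:", persistent[:10])
```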

    Algorithms for Fault Detection and Diagnosis

    Due to the increasing demand for security and reliability in manufacturing and mechatronic systems, early detection and diagnosis of faults are key to reducing economic losses caused by unscheduled maintenance and downtime, increasing safety, preventing the endangerment of human beings involved in process operations, and improving the reliability and availability of autonomous systems. The development of algorithms for health monitoring and fault and anomaly detection, capable of the early detection, isolation, or even prediction of technical component malfunctioning, is becoming more and more crucial in this context. This Special Issue is devoted to new research efforts and results concerning recent advances and challenges in the application of “Algorithms for Fault Detection and Diagnosis”, spanning a wide range of sectors. The aim is to provide a collection of some of the current state-of-the-art algorithms within this context, together with new advanced theoretical solutions.

    Data driven process monitoring based on neural networks and classification trees

    Process monitoring in the chemical and other process industries is of great practical importance. Early detection of faults is critical to avoiding product quality deterioration, equipment damage, and personal injury. The goal of this dissertation is to develop process monitoring schemes that can be applied to complex process systems. Neural networks have been a popular tool for modeling and pattern classification in the monitoring of process systems. However, due to the prohibitive computational cost caused by high dimensionality and frequently changing operating conditions in batch processes, their application has been difficult. The first part of this work tackles this problem by employing a polynomial-based data preprocessing step that greatly reduces the dimensionality of the neural network process model. The process measurements and manipulated variables go through a polynomial regression step, and the polynomial coefficients, which are usually of far lower dimensionality than the original data, are used to build a neural network model that produces residuals for fault classification. Case studies show a significant reduction in neural model construction time and sometimes better classification results as well. The second part of this research investigates classification trees as a promising approach to fault detection and classification. It is found that the underlying principles of classification trees often result in complicated trees even for rather simple problems, and construction time can be excessive for high-dimensional problems. Fisher Discriminant Analysis (FDA), which provides an optimal linear discrimination between different faults and projects the original data onto perpendicular score directions, is used as a dimensionality reduction tool. Classification trees then use the scores to separate observations into different fault classes. A procedure identifies the order of FDA scores that results in a minimum tree cost as the optimal order. Comparisons with other popular multivariate statistical analysis based methods indicate that the new scheme exhibits better performance on a benchmarking problem.
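    A hedged sketch of the second scheme's general flow, using scikit-learn: Fisher discriminant analysis (approximated here via LinearDiscriminantAnalysis) reduces the dimensionality of the process measurements, and a classification tree separates fault classes in the resulting score space. The synthetic data and all parameters are illustrative, not the benchmark problem used in the dissertation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Synthetic process data: 50 measured variables, normal operation plus 3 fault classes
n_per_class, n_vars, n_classes = 200, 50, 4
X = rng.normal(size=(n_per_class * n_classes, n_vars))
y = np.repeat(np.arange(n_classes), n_per_class)
for c in range(1, n_classes):                       # shift a few variables per fault class
    X[y == c, c * 3:(c * 3) + 3] += 2.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# FDA-style dimensionality reduction followed by a classification tree on the scores
fda = LinearDiscriminantAnalysis(n_components=n_classes - 1).fit(X_tr, y_tr)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(fda.transform(X_tr), y_tr)
print("fault classification accuracy:", tree.score(fda.transform(X_te), y_te))
```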
    • …