
    A Review of Fault Diagnosing Methods in Power Transmission Systems

    Transient stability is essential in power systems, and disturbances such as faults must be isolated quickly to restore it. This paper presents a comprehensive review of fault diagnosis methods for power transmission systems, which typically analyse voltage and current samples. Three tasks are treated separately to convey a more logical and comprehensive understanding of the concepts: fault detection, fault classification, and fault location. Feature extraction, signal transformations, and dimensionality reduction methods are discussed. Fault classification and location techniques rely largely on artificial intelligence (AI) and signal processing methods. After the discussion of the overall methods and concepts, recent advances and future directions are examined, and the general strengths and weaknesses of different AI and machine learning algorithms are assessed. A comparison of fault detection, classification, and location methods is also presented in terms of features, inputs, complexity, test system, and results. This paper may serve as a guideline for researchers seeking to understand the methods and techniques in this field.
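
    As a hedged illustration of the kind of pipeline such reviews cover, the sketch below classifies simulated single-phase current records using hand-crafted RMS, peak, and high-frequency-energy features and an off-the-shelf classifier. The synthetic signals, the feature set, and the random-forest choice are illustrative assumptions, not a method from the paper.

        # Minimal sketch: fault detection from current samples.
        # Synthetic signals, features, and classifier are illustrative
        # assumptions; the reviewed literature covers many alternatives.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        FS, F0, N = 4000, 50, 400      # sample rate (Hz), mains frequency, samples per record
        t = np.arange(N) / FS

        def make_record(fault: int) -> np.ndarray:
            """Simulate one phase-current record; fault=1 adds over-current plus a transient."""
            i = np.sin(2 * np.pi * F0 * t) + 0.05 * rng.standard_normal(N)
            if fault:
                i[N // 2:] *= 3.0                           # crude fault over-current
                i += 0.3 * np.exp(-t * 200) * rng.standard_normal(N)
            return i

        def features(x: np.ndarray) -> np.ndarray:
            """Hand-crafted features: RMS, peak, and high-frequency energy."""
            hf = np.diff(x)                                 # rough high-pass proxy
            return np.array([np.sqrt(np.mean(x**2)), np.abs(x).max(), np.mean(hf**2)])

        X = np.array([features(make_record(k % 2)) for k in range(600)])
        y = np.array([k % 2 for k in range(600)])
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
        print("held-out accuracy:", clf.score(Xte, yte))

    In practice, as the review discusses, the raw samples are usually transformed first (e.g. wavelet or Fourier features) and the classifier choice varies widely across the surveyed methods.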

    Designing an expert knowledge-based Systemic Importance Index for financial institutions

    Defining whether a financial institution is systemically important (or not) is challenging due to (i) the inevitability of combining complex importance criteria such as institutions’ size, connectedness and substitutability; (ii) the ambiguity of what an appropriate threshold for those criteria may be; and (iii) the involvement of expert knowledge as a key input for combining those criteria. The proposed method, a Fuzzy Logic Inference System, uses four key systemic importance indicators that capture institutions’ size, connectedness and substitutability, together with a convenient deconstruction of expert knowledge, to obtain a Systemic Importance Index. This method allows dissimilar concepts to be combined in a non-linear, consistent and intuitive manner, whilst treating them as continuous (non-binary) functions. Results reveal that the method imitates the way experts themselves reason about what a systemically important financial institution is within the financial system under analysis. The Index is a comprehensive relative assessment of each financial institution’s systemic importance. It may serve financial authorities as a quantitative tool for focusing their attention and resources where the severity resulting from an institution failing or nearly failing is estimated to be greatest. It may also support enhanced policy-making (e.g. prudential regulation, oversight and supervision) and decision-making (e.g. resolving, restructuring or providing emergency liquidity).
    Keywords: systemic importance, systemic risk, fuzzy logic, approximate reasoning, too-connected-to-fail, too-big-to-fail. JEL classification: D85, C63, E58, G28.
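
    The abstract does not specify the rule base or membership functions, so the sketch below is only a minimal Sugeno-style illustration of the idea: normalized indicators pass through triangular membership functions, expert-style rules fire to degrees, and a weighted average yields a single index. Three hypothetical indicators stand in for the paper’s four; all names, memberships, rules, and output levels are assumptions.

        # Minimal Sugeno-style fuzzy inference sketch for a systemic
        # importance index. Indicators, memberships, rules, and output
        # levels are illustrative assumptions, not the paper's calibration.
        from typing import Dict

        def tri(x: float, a: float, b: float, c: float) -> float:
            """Triangular membership function peaking at b on [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def importance_index(ind: Dict[str, float]) -> float:
            """ind: indicators normalized to [0, 1] (size, connectedness, ...)."""
            # Degree to which each indicator is "high" (shoulder at 1.0).
            high = {k: tri(v, 0.3, 1.0, 1.7) for k, v in ind.items()}
            low = {k: 1.0 - h for k, h in high.items()}
            # Hypothetical expert rules: (firing strength, output level).
            rules = [
                (max(high["size"], high["connectedness"]), 1.0),  # large OR central -> important
                (min(high["size"], high["substitutability_lack"]), 1.0),
                (min(low["size"], low["connectedness"]), 0.1),    # small AND peripheral -> unimportant
            ]
            num = sum(w * out for w, out in rules)
            den = sum(w for w, _ in rules)
            return num / den if den else 0.0

        print(importance_index({"size": 0.9, "connectedness": 0.8,
                                "substitutability_lack": 0.6}))   # near 1
        print(importance_index({"size": 0.1, "connectedness": 0.2,
                                "substitutability_lack": 0.3}))   # near 0

    The appeal of this structure, as the abstract argues, is that thresholds become gradual memberships rather than hard cut-offs, and the rule base is a direct, auditable encoding of expert judgement.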

    Overview of Remaining Useful Life prediction techniques in Through-life Engineering Services

    Through-life Engineering Services (TES) are essential in the manufacture and servicing of complex engineering products. TES improve support services by providing on-demand run-to-failure and time-to-failure prognosis data for better decision making. The concept of Remaining Useful Life (RUL) is used to predict the life-span of components of a service system, with the aim of minimising catastrophic failure events in both the manufacturing and service sectors. The purpose of this paper is to identify failure mechanisms and to highlight failure prediction approaches that can effectively reduce uncertainty. It classifies the techniques used in RUL prediction for optimising the future use of products, based on products currently in service, with regard to predictability, availability and reliability. It also maps degradation mechanisms against knowledge acquisition techniques, with the objective of showing designers and manufacturers ways to improve the life-span of components.
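
    As a hedged illustration of one family of techniques such overviews cover, the sketch below fits a linear trend to a noisy degradation signal and extrapolates it to a failure threshold to estimate RUL. The signal model, the threshold value, and the linear fit are illustrative assumptions; model-based and data-driven alternatives abound.

        # Minimal trend-extrapolation RUL sketch. The degradation signal,
        # failure threshold, and linear model are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        THRESHOLD = 10.0                   # health index at which failure is declared

        t = np.arange(120.0)               # operating hours observed so far
        health = 0.06 * t + 0.4 * rng.standard_normal(t.size)  # noisy degradation

        slope, intercept = np.polyfit(t, health, 1)  # fit linear degradation trend
        assert slope > 0, "no degradation trend detected"
        t_fail = (THRESHOLD - intercept) / slope     # trend crosses threshold here
        rul = t_fail - t[-1]

        print(f"estimated slope {slope:.3f}/h, RUL ~ {rul:.0f} h")

    Real prognostics replace the straight line with physics-based degradation models or learned ones, and report a RUL distribution rather than a point estimate, precisely to quantify the uncertainties the paper emphasises.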

    Qualitative Answering Surveys and Soft Computing

    In this work, we reflect on some questions about the measurement problem in economics and, especially, its relationship with the scientific method. Statistical sources frequently used by economists contain qualitative information obtained from individuals’ verbal expressions by means of surveys, and we discuss the reasons why such information would be more adequately analysed with soft computing methods than with traditional ones. Some comments on the techniques most commonly applied to data with verbal answers are followed by our proposal to compute with words. In our view, an alternative use of the well-known Income Evaluation Question seems especially suggestive for a computing-with-words approach, since it would facilitate an empirical estimation of the adjectives of the corresponding linguistic variable. A new treatment of the information contained in such surveys would avoid some questions incorporated in the so-called Leyden approach that do not fit the actual world.
    Keywords: computing with words, Leyden approach, qualitative answering surveys, fuzzy logic.
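
    The abstract gives no formal model, so the sketch below is only a minimal illustration of the linguistic-variable idea behind computing with words: verbal labels from an income evaluation question are represented as trapezoidal fuzzy sets over income, and a given income is mapped to a degree of membership in each label. The labels, breakpoints, and scale are hypothetical.

        # Minimal linguistic-variable sketch for verbal survey answers.
        # Labels and trapezoid breakpoints are hypothetical assumptions.
        def trap(x: float, a: float, b: float, c: float, d: float) -> float:
            """Trapezoidal membership: rises from a, is 1 on [b, c], falls to d."""
            if x <= a or x >= d:
                return 0.0
            if b <= x <= c:
                return 1.0
            return (x - a) / (b - a) if x < b else (d - x) / (d - c)

        # Hypothetical fuzzy sets for answers about monthly income; the
        # shoulder labels use wide plateaus to stay one-sided.
        INCOME_LABELS = {
            "insufficient": (-1.0, 0, 1200, 2000),
            "sufficient":   (1200, 2000, 3000, 4000),
            "good":         (3000, 4000, 1e9, 1e9 + 1),
        }

        def fuzzify(income: float) -> dict:
            """Degree to which a given income fits each verbal label."""
            return {label: round(trap(income, *abcd), 2)
                    for label, abcd in INCOME_LABELS.items()}

        print(fuzzify(1600))   # partly "insufficient", partly "sufficient"
        print(fuzzify(4500))   # clearly "good"

    Estimating the breakpoints empirically from respondents’ own answers, rather than assuming them, is the kind of treatment the proposal points towards.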

    Forecasting Long-Term Government Bond Yields: An Application of Statistical and AI Models

    This paper evaluates several artificial intelligence and classical algorithms on their ability to forecast the monthly yield of US 10-year Treasury bonds from a set of four economic indicators. Due to the complexity of the prediction problem, the task represents a challenging test for the algorithms under evaluation. At the same time, the study is of particular significance given the important and paradigmatic role played by the US market in the world economy. Four data-driven artificial intelligence approaches are considered: a manually built fuzzy logic model, a machine-learned fuzzy logic model, a self-organising map model and a multi-layer perceptron model. Their performance is compared with that of two classical approaches: a statistical ARIMA model and an econometric error correction model. The algorithms are evaluated on a complete series of end-of-month US 10-year Treasury bond yields and economic indicators from 1986:1 to 2004:12. In terms of prediction accuracy and reliability of the modelling procedure, the best results are obtained by the three parametric regression algorithms, namely the econometric, the statistical and the multi-layer perceptron model. Due to the sparseness of the learning data samples, the manual and the automatic fuzzy logic approaches fail to follow with adequate precision the range of variation of the US 10-year Treasury bond yields. For similar reasons, the self-organising map model gives an unsatisfactory performance. Analysis of the results indicates that the econometric model has a slight edge over the statistical and multi-layer perceptron models. This suggests that pure data-driven induction may not fully capture the complicated mechanisms governing changes in interest rates. Overall, the prediction accuracy of the best models is only marginally better than that of a basic one-step lag predictor. This result highlights the difficulty of the modelling task and, in general, the difficulty of building reliable predictors for financial markets.
    Keywords: interest rates, forecasting, neural networks, fuzzy logic.
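
    The paper’s closing observation, that the best models barely beat a one-step lag predictor, is easy to demonstrate on a random-walk-like series. The sketch below compares a simple AR(1)-on-differences forecaster with the naive lag baseline on synthetic data; the data generator and model are illustrative assumptions, not the study’s dataset or specifications.

        # Minimal sketch: one-step-ahead AR(1)-on-differences vs a naive
        # one-step lag predictor on a synthetic random-walk-like "yield"
        # series. Data and model are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 240                                     # ~20 years of monthly data
        dy = 0.02 + 0.1 * rng.standard_normal(n)    # small drift + noise in changes
        y = 6.0 + np.cumsum(dy)                     # near-random-walk yield level

        train, test = y[:180], y[180:]

        # AR(1) on first differences: dy_t ~ phi * dy_{t-1}, phi by least squares.
        d = np.diff(train)
        phi = np.dot(d[1:], d[:-1]) / np.dot(d[:-1], d[:-1])

        prev = np.concatenate(([train[-1]], test[:-1]))               # last observed level
        prev2 = np.concatenate((train[-2:], test[:-2]))[:len(test)]   # level before that
        ar1_pred = prev + phi * (prev - prev2)   # extrapolate the last change
        naive_pred = prev                        # one-step lag baseline

        rmse = lambda p: np.sqrt(np.mean((test - p) ** 2))
        print(f"AR(1)-diff RMSE: {rmse(ar1_pred):.4f}")
        print(f"naive lag RMSE:  {rmse(naive_pred):.4f}")   # often nearly as good

    When the level series is close to a random walk, the latest observation already carries most of the predictable information, which is consistent with the paper’s finding that sophisticated models gain only a marginal edge.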