    Comparison between Duval Triangle and Duval Pentagon methods for dissolved gas analysis of power transformers

    Power transformers are the highest-value equipment installed in high-voltage substations, comprising up to 60% of total investment. Economic and financial reports are needed to support asset decisions and to ensure a balance between investment, maintenance costs and operational performance. The health index (HI) is the most common approach used to determine the condition of transformers; it is a tool that processes information to produce a score describing the condition of an asset. A comparative analysis is made between HI calculation models that allow the evaluation of the condition of a power transformer. Through this index it is possible to objectively determine the condition of power transformers in order to make maintenance or reinvestment decisions, to detect assets at risk before they fail, and thereby to extend their lifetime. Several studies have examined power transformer condition assessment and life management techniques, including the measurement or monitoring of dissolved gas analysis (DGA) using the Duval Triangle method. DGA is a reliable and widely used technique for detecting incipient faults that may occur in transformers, such as partial discharges, thermal faults and electrical faults. This paper focuses on DGA using the Duval Triangle and Duval Pentagon methods; its objective is to compare the two and identify which provides the more accurate interpretation of DGA test results. The comparative study is based on real data provided by a Malaysian utility company. The analysis shows that the Duval Pentagon method gives accurate fault diagnoses that exactly match the interpretation given by the IEC 60599 standard. An accurate fault analysis using the Duval Pentagon method in turn yields better lifetime prediction, identification of possible fault types, and recommendations for future maintenance actions.
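
    As a concrete illustration of the Duval Triangle side of the comparison, the sketch below normalizes the three gas concentrations to relative percentages and maps them to a fault zone. This is a minimal sketch in Python: the zone boundaries are simplified from commonly cited Duval Triangle 1 references and are illustrative only, not the authoritative IEC 60599 limits or the paper's implementation.

    ```python
    # Minimal Duval Triangle 1 sketch: normalize CH4, C2H4 and C2H2 (ppm) to
    # relative percentages, then map to a fault zone. Boundaries below are a
    # simplified, illustrative approximation of the published triangle.

    def duval_triangle_1(ch4_ppm: float, c2h4_ppm: float, c2h2_ppm: float) -> str:
        """Classify a fault from CH4, C2H4 and C2H2 concentrations (ppm)."""
        total = ch4_ppm + c2h4_ppm + c2h2_ppm
        if total == 0:
            return "no combustible gases"
        ch4 = 100.0 * ch4_ppm / total     # %CH4
        c2h4 = 100.0 * c2h4_ppm / total   # %C2H4
        c2h2 = 100.0 * c2h2_ppm / total   # %C2H2

        if ch4 >= 98:
            return "PD (partial discharge)"
        if c2h2 > 13 and c2h4 < 23:
            return "D1 (low-energy discharge)"
        if c2h2 > 13 and 23 <= c2h4 < 40:
            return "D2 (high-energy discharge)"
        if c2h2 < 4 and c2h4 < 20:
            return "T1 (thermal fault < 300 C)"
        if c2h2 < 4 and 20 <= c2h4 < 50:
            return "T2 (thermal fault 300-700 C)"
        if c2h2 < 15 and c2h4 >= 50:
            return "T3 (thermal fault > 700 C)"
        return "DT (mixed thermal/electrical fault)"

    # Example: an ethylene-dominated sample points to a high-temperature fault.
    print(duval_triangle_1(ch4_ppm=120, c2h4_ppm=450, c2h2_ppm=30))
    ```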

    Application of support vector machines on the basis of the first Hungarian bankruptcy model

    In our study we rely on a data mining procedure known as the support vector machine (SVM), applied to the database of the first Hungarian bankruptcy model. The models constructed are then contrasted with the results of earlier bankruptcy models using classification accuracy and the area under the ROC curve. In applying the SVM technique, we examine not only conventional kernel functions but also the possibilities of the ANOVA kernel function, and we take a detailed look at the data preparation tasks recommended when using the SVM method (handling of outliers). The results of the models assembled suggest that a significant improvement in classification accuracy can be achieved on the database of the first Hungarian bankruptcy model when using the SVM method as opposed to neural networks.
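
    Since scikit-learn ships no ANOVA kernel, it can be supplied as a custom Gram-matrix callable. The sketch below is a minimal illustration assuming the common textbook form k(x, y) = (sum_k exp(-sigma * (x_k - y_k)^2))^d; the data, sigma and d are hypothetical stand-ins, not the paper's bankruptcy database or tuned values.

    ```python
    # Minimal SVM-with-ANOVA-kernel sketch, assuming scikit-learn.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score

    def anova_kernel(X, Y, sigma=1.0, d=2):
        """Gram matrix of the ANOVA kernel between the rows of X and Y."""
        # Pairwise squared differences per feature: shape (n_x, n_y, n_features)
        diff2 = (X[:, None, :] - Y[None, :, :]) ** 2
        return np.exp(-sigma * diff2).sum(axis=2) ** d

    # Stand-in data: the original bankruptcy database is not public.
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    clf = SVC(kernel=anova_kernel)  # SVC accepts a Gram-matrix callable
    print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())
    ```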

    A critical assessment of imbalanced class distribution problem: the case of predicting freshmen student attrition

    Predicting student attrition is an intriguing yet challenging problem for any academic institution. Class-imbalanced data is common in the field of student retention, mainly because many students enroll while comparatively few drop out. Classification techniques applied to imbalanced datasets can yield deceivingly high prediction accuracy, where the overall predictive accuracy is driven by the majority class at the expense of very poor performance on the crucial minority class. In this study, we compared different data-balancing techniques to improve the predictive accuracy on the minority class while maintaining satisfactory overall classification performance. Specifically, we tested three balancing techniques (oversampling, under-sampling and synthetic minority over-sampling, SMOTE) along with four popular classification methods (logistic regression, decision trees, neural networks and support vector machines). We used a large and feature-rich institutional student dataset (covering the years 2005 to 2011) to assess the efficacy of the balancing techniques as well as the prediction methods. The results indicated that the support vector machine combined with the SMOTE data-balancing technique achieved the best classification performance, with a 90.24% overall accuracy on the 10-fold holdout sample. All three data-balancing techniques improved the prediction accuracy for the minority class. Applying sensitivity analyses to the developed models, we also identified the most important variables for accurate prediction of student attrition. Application of these models has the potential to accurately identify at-risk students and help reduce student dropout rates.
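
    The winning combination reported above (SVM + SMOTE) can be reproduced in outline as follows. This is a minimal sketch assuming scikit-learn and imbalanced-learn, with synthetic stand-in data rather than the study's student records; placing SMOTE inside the pipeline ensures oversampling touches only the training folds.

    ```python
    # Minimal SVM + SMOTE sketch, assuming scikit-learn and imbalanced-learn.
    from imblearn.over_sampling import SMOTE
    from imblearn.pipeline import Pipeline
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Imbalanced stand-in data: roughly 90% retained vs 10% attrition.
    X, y = make_classification(n_samples=2000, n_features=20,
                               weights=[0.9, 0.1], random_state=0)

    # SMOTE sits inside the pipeline, so synthetic minority samples are
    # generated from the training folds only, never from the held-out fold.
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("smote", SMOTE(random_state=0)),
        ("svm", SVC()),
    ])

    # Minority-class recall is the metric that imbalance hurts most.
    print(cross_val_score(pipe, X, y, cv=10, scoring="recall").mean())
    ```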

    Neural Networks in Bankruptcy Prediction - A Comparative Study on the Basis of the First Hungarian Bankruptcy Model

    The article attempts to answer the question of whether the latest bankruptcy prediction techniques are more reliable than traditional mathematical–statistical ones in Hungary. Simulation experiments carried out on the database of the first Hungarian bankruptcy prediction model clearly show that bankruptcy models built using artificial neural networks have higher classification accuracy than models created in the 1990s based on discriminant analysis and logistic regression analysis. The article presents the main results, analyses the reasons for the differences and makes constructive proposals for the further development of Hungarian bankruptcy prediction.
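
    The comparison itself is straightforward to set up. The sketch below is a minimal illustration assuming scikit-learn, pitting a small neural network against discriminant analysis and logistic regression on synthetic stand-in data (the Hungarian database is not reproduced here), scored by cross-validated classification accuracy.

    ```python
    # Minimal three-way model comparison sketch, assuming scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=1000, n_features=15, random_state=1)

    models = {
        "discriminant analysis": LinearDiscriminantAnalysis(),
        "logistic regression": LogisticRegression(max_iter=1000),
        "neural network": MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=2000, random_state=1),
    }
    for name, model in models.items():
        # Scaling matters for the neural network; apply it uniformly.
        pipe = make_pipeline(StandardScaler(), model)
        acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
        print(f"{name}: {acc:.3f}")
    ```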

    A comparative analysis of decision trees vis-a-vis other computational data mining techniques in automotive insurance fraud detection

    The development and application of computational data mining techniques in financial fraud detection and business failure prediction has become a popular cross-disciplinary research area in recent times, involving financial economists, forensic accountants and computational modellers. Some of the computational techniques popularly used in the context of financial fraud detection and business failure prediction can also be applied effectively to the detection of fraudulent insurance claims, and can therefore be of immense practical value to the insurance industry. We provide a comparative analysis of the prediction performance of a battery of data mining techniques using real-life automotive insurance fraud data. While the data used in our paper is US-based, the computational techniques we have tested can be adapted and generally applied to detect similar insurance fraud in other countries where an organized automotive insurance industry exists.
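
    Benchmarking such a battery of techniques follows the same pattern regardless of domain. Below is a minimal sketch assuming scikit-learn, comparing a decision tree against two other common classifiers on synthetic, fraud-like (highly imbalanced) stand-in data, scored by ROC AUC, which is less misleading than raw accuracy when positives are rare.

    ```python
    # Minimal classifier-battery benchmark sketch, assuming scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    # Fraudulent claims are rare: roughly 5% positives in this stand-in data.
    X, y = make_classification(n_samples=5000, n_features=25,
                               weights=[0.95, 0.05], random_state=2)

    battery = [
        ("decision tree", DecisionTreeClassifier(random_state=2)),
        ("random forest", RandomForestClassifier(random_state=2)),
        ("naive Bayes", GaussianNB()),
    ]
    for name, clf in battery:
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: AUC {auc:.3f}")
    ```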

    Intelligent Financial Fraud Detection Practices: An Investigation

    Financial fraud is an issue with far-reaching consequences in the finance industry, government, corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing in recent years has compounded the problem. Traditional methods of detection involve extensive use of auditing, where a trained individual manually observes reports or transactions in an attempt to discover fraudulent behaviour. This method is not only time-consuming, expensive and inaccurate, but in the age of big data it is also impractical. Not surprisingly, financial institutions have turned to automated processes using statistical and computational methods. This paper presents a comprehensive investigation of financial fraud detection practices using such data mining methods, with a particular focus on computational intelligence-based techniques. Classification of the practices based on key aspects such as the detection algorithm used, the fraud type investigated, and the success rate is covered. Issues and challenges associated with current practices and potential future directions of research are also identified.
    Comment: Proceedings of the 10th International Conference on Security and Privacy in Communication Networks (SecureComm 2014)

    Service Quality and Customer Loyalty in a Post-Crisis Context. Prediction-Oriented Modeling to Enhance the Particular Importance of a Social and Sustainable Approach

    Research into the influence of service quality on customer loyalty has typically focused on confirming isolated direct causal influences of particular dimensions of quality, usually in the context of positive firm-customer relations. The present study extends the analysis of these factors through a new lens. First, the study was undertaken in a market context following a crisis that has had far-reaching consequences for customers' relational behaviors. We explore the case of the Spanish banking industry, a sector that accurately reflects these new relational conditions, including a rising demand for more socially responsible banking. Second, we propose a holistic model that combines the effects of four key factors associated with service quality (outcome, personnel, servicescape and social qualities). We also apply an innovative predictive methodological approach combining partial least squares (PLS) and qualitative comparative analysis (QCA), which enables us not only to determine the direct causal effects among variables but also to consider different scenarios in which to predict customer loyalty. The results highlight the role of outcome and social qualities. The novel social qualities factor underscores the importance of social, ethical and sustainable practices for customer loyalty, although personnel and servicescape qualities must also be present to improve the predictive capability of service quality on loyalty.
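
    The prediction-oriented side of this approach can be sketched loosely in code. The example below is a hypothetical illustration only, assuming scikit-learn: it uses PLSRegression to predict a loyalty score from simulated indicators of the four quality factors, whereas the paper itself uses PLS path modeling (PLS-SEM) combined with QCA, which this simple stand-in does not reproduce.

    ```python
    # Loose, hypothetical PLS prediction sketch, assuming scikit-learn.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n = 400
    # Four simulated latent qualities (outcome, personnel, servicescape,
    # social), each measured by three noisy survey items.
    quality = rng.normal(size=(n, 4))
    X = np.repeat(quality, 3, axis=1) + rng.normal(scale=0.5, size=(n, 12))
    # Loyalty driven mostly by outcome and social qualities, mirroring the
    # reported result; the weights are made up for illustration.
    loyalty = quality @ np.array([0.6, 0.2, 0.2, 0.5]) \
              + rng.normal(scale=0.3, size=n)

    pls = PLSRegression(n_components=4)
    print(cross_val_score(pls, X, loyalty, cv=5, scoring="r2").mean())
    ```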

    Statistical modelling to predict corporate default for Brazilian companies in the context of Basel II using a new set of financial ratios

    This paper deals with statistical modelling to predict the failure of Brazilian companies in the light of the Basel II definition of default, using a new set of explanatory variables. A rearrangement of the official format of the balance sheet is put forward, and from this rearrangement a framework of complementary non-conventional ratios is proposed. Initially, a model using 22 traditional ratios is constructed; problems associated with multicollinearity were found in this model. Adding a group of 6 non-conventional ratios alongside the traditional ratios improves the model substantially. The main findings of this study are: (a) logistic regression performs well in the context of Basel II, yielding a sound model applicable in the decision-making process; (b) the complementary list of financial ratios plays a critical role in the proposed model; (c) the variables selected in the model show that when current assets and current liabilities are split into two sub-groups, financial and operational, they are more effective in explaining default than the traditional ratios associated with liquidity; and (d) those variables also indicate that high interest rates in Brazil adversely affect the performance of companies with a higher dependency on borrowing.
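
    The workflow of fitting a default model on financial ratios and screening for the multicollinearity mentioned above can be sketched as follows. This is a minimal sketch assuming statsmodels, with hypothetical data and ratio names (including two stand-ins for the financial/operational split), not the paper's Brazilian dataset.

    ```python
    # Minimal default-prediction sketch, assuming statsmodels and pandas.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(4)
    n = 500
    # Hypothetical ratios; the last two mimic the financial/operational
    # split of liabilities proposed in the paper.
    ratios = pd.DataFrame({
        "current_ratio": rng.normal(1.5, 0.5, n),
        "fin_liab_over_assets": rng.normal(0.3, 0.1, n),
        "op_liab_over_assets": rng.normal(0.2, 0.1, n),
    })
    # Simulated Basel II default flag, driven by financial leverage.
    default = ((4 * ratios["fin_liab_over_assets"]
                - ratios["current_ratio"]
                + rng.normal(0, 1, n)) > 0).astype(int)

    # Logistic regression of default on the ratios.
    X = sm.add_constant(ratios)
    model = sm.Logit(default, X).fit(disp=0)
    print(model.params)

    # Variance inflation factors: values well above ~5-10 would flag the
    # multicollinearity problem reported for the 22 traditional ratios.
    for i, col in enumerate(ratios.columns, start=1):
        print(col, round(variance_inflation_factor(X.values, i), 2))
    ```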