2,018 research outputs found

    Coherent Asset Allocation and Diversification in the Presence of Stress Events

    Get PDF
    We propose a method to integrate frequentist and subjective probabilities in order to obtain a coherent asset allocation in the presence of stress events. Our working assumption is that in normal market conditions asset returns are sufficiently regular for frequentist statistical techniques to identify their joint distribution, once the outliers have been removed from the data set. We also argue, however, that the exceptional events facing the portfolio manager at any point in time are specific to each individual crisis, and that past regularities cannot be relied upon. We therefore deal with exceptional returns by eliciting subjective probabilities, and by employing Bayesian-net technology to ensure logical consistency. The portfolio allocation is then obtained by utility maximization over the combined (normal plus exceptional) distribution of returns. We show the procedure in detail in a stylized case.
    Keywords: stress tests, asset allocation, Bayesian networks
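The combined-distribution optimization step can be illustrated with a toy sketch (not the authors' procedure): two assets, a Gaussian "normal" regime standing in for the frequentist fit, two hard-coded stress scenarios with subjective probabilities standing in for the Bayesian-net output, and exponential utility maximized by grid search over a long-only weight. All numbers and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" regime: joint returns estimated frequentistly (here, a toy Gaussian).
normal = rng.multivariate_normal([0.06, 0.03], [[0.04, 0.01], [0.01, 0.02]], size=5000)

# "Exceptional" regime: subjectively elicited stress outcomes and probabilities
# (in the paper these come from a Bayesian net; here they are assumptions).
stress_outcomes = np.array([[-0.40, -0.10], [-0.25, 0.05]])
p_stress_each = np.array([0.02, 0.03])
p_normal = 1.0 - p_stress_each.sum()

def expected_utility(w, risk_aversion=5.0):
    """Expected exponential utility over the combined (normal + stress) distribution."""
    u = lambda r: -np.exp(-risk_aversion * r)
    normal_part = p_normal * u(normal @ w).mean()
    stress_part = (p_stress_each * u(stress_outcomes @ w)).sum()
    return normal_part + stress_part

# Fully invested, long-only grid search over the weight of asset 1.
grid = np.linspace(0.0, 1.0, 101)
best_w1 = max(grid, key=lambda w1: expected_utility(np.array([w1, 1.0 - w1])))
```

Because the stress scenarios punish asset 1 hardest, the optimizer trades some normal-regime expected return for stress-regime protection, which is exactly the trade-off the combined distribution is meant to expose.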

    A Survey on Explainable Anomaly Detection

    Full text link
    In the past two decades, most research on anomaly detection has focused on improving detection accuracy, while largely ignoring the explainability of the corresponding methods and thus leaving the explanation of outcomes to practitioners. As anomaly detection algorithms are increasingly used in safety-critical domains, providing explanations for the high-stakes decisions made in those domains has become an ethical and regulatory requirement. This work therefore provides a comprehensive and structured survey of state-of-the-art explainable anomaly detection techniques. We propose a taxonomy based on the main aspects that characterize each explainable anomaly detection technique, aiming to help practitioners and researchers find the method that best suits their needs.
    Comment: Paper accepted for publication in ACM Transactions on Knowledge Discovery from Data (TKDD) (preprint version)
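One of the simplest forms such explainability can take is a detector that reports *which* feature triggered each flag. The sketch below is a generic illustration, not a method from the survey: a robust z-score detector (median/MAD per feature) whose output names the responsible feature; data and thresholds are assumptions.

```python
import statistics

def explainable_detector(rows, threshold=5.0):
    """Flag rows whose largest robust z-score (median/MAD) exceeds `threshold`,
    and explain each flag by naming the responsible feature."""
    features = list(rows[0].keys())
    robust = {}
    for f in features:
        vals = [r[f] for r in rows]
        med = statistics.median(vals)
        mad = statistics.median(abs(v - med) for v in vals) or 1e-9
        robust[f] = (med, mad)
    findings = []
    for i, r in enumerate(rows):
        scores = {f: abs(r[f] - robust[f][0]) / robust[f][1] for f in features}
        worst = max(scores, key=scores.get)
        if scores[worst] > threshold:
            findings.append({"row": i, "feature": worst, "score": round(scores[worst], 1)})
    return findings

data = [{"amount": a, "latency": l}
        for a, l in [(10, 1.0), (12, 1.1), (11, 0.9), (9, 1.2), (500, 1.0)]]
```

On this data the detector flags row 4 and attributes the flag to `amount`, which is the kind of per-decision explanation the survey argues practitioners need.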

    A conceptual model for proactive detection of potential fraud in enterprise systems: exploiting SAP audit trails to detect asset misappropriation

    Get PDF
    Fraud costs the Australian economy approximately $3 billion annually, and its frequency and financial impact continue to grow. Many organisations are poorly prepared to prevent and detect fraud. Fraud prevention is not perfect; therefore, fraud detection is crucial. Fraud detection strategies are intended to quickly and efficiently identify frauds that circumvent preventative measures so that an organisation can take appropriate corrective action. Enhancing the ability of organisations to detect potential fraud may have a positive impact on the economy. An effective model that facilitates proactive detection of potential fraud may save costs and reduce the propensity of future fraud through early detection of suspicious user activities. Enterprise systems generate millions of transactions annually. While most of these are legal and routine transactions, a small number may be fraudulent. The enormous number of transactions makes it difficult to find these few instances among legitimate transactions. Without proactive fraud detection tools, investigating suspicious activities becomes overwhelming. This study explores and develops innovative methods for proactive detection of potential fraud in enterprise systems. The intention is to build a model for detection of potential fraud based on the analysis of patterns or signatures, building on theories and concepts of continuous fraud detection. This objective is addressed by answering the main question: can a generalised model for proactive detection of potential fraud in enterprise systems be developed? The study proposes a methodology for proactive detection of potential fraud that exploits audit trails in enterprise systems. The concept of proactive detection of potential fraud is demonstrated by developing a prototype. The prototype is a near real-time, web-based application that uses SAS for its analytics processes.
    The aim of the prototype is to confirm the feasibility of implementing proactive detection of potential fraud in practice. Verification of the prototype is achieved by performing a series of tests involving simulated activity, followed by a full-scale case study with a large international manufacturing company. Validation is achieved by obtaining independent reviews from the case study's senior staff, auditing practitioners and a panel of experts. Timing experiments confirm that the prototype is able to handle real data volumes from a real organisation without difficulty, thereby providing evidence in support of enhanced auditor productivity. This study makes a number of contributions to both the literature and auditing practice.
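The signature-based detection the abstract describes can be sketched generically. The example below is an illustration, not the thesis's model: it scans hypothetical audit-trail rows for one classic asset-misappropriation signature, a segregation-of-duties violation where the same user both creates a vendor and approves a payment to it. The field names and actions are assumptions, not actual SAP table fields.

```python
from collections import defaultdict

# Hypothetical audit-trail rows; in SAP these would be derived from change
# documents or the security audit log (field names here are assumptions).
audit_trail = [
    {"user": "u1", "action": "CREATE_VENDOR",   "vendor": "V9"},
    {"user": "u2", "action": "APPROVE_PAYMENT", "vendor": "V9"},
    {"user": "u3", "action": "CREATE_VENDOR",   "vendor": "V7"},
    {"user": "u3", "action": "APPROVE_PAYMENT", "vendor": "V7"},  # same user: red flag
]

def segregation_of_duties_alerts(rows):
    """Flag (vendor, user) pairs where one user both created the vendor
    and approved a payment to it."""
    by_key = defaultdict(set)
    for r in rows:
        by_key[(r["vendor"], r["action"])].add(r["user"])
    alerts = []
    for vendor in {r["vendor"] for r in rows}:
        overlap = by_key[(vendor, "CREATE_VENDOR")] & by_key[(vendor, "APPROVE_PAYMENT")]
        for user in sorted(overlap):
            alerts.append((vendor, user))
    return sorted(alerts)
```

Run continuously over incoming audit-trail records, a library of such signatures is one way to realise the near real-time detection the prototype demonstrates.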

    New Spark solutions for distributed frequent itemset and association rule mining algorithms

    Get PDF
    Funding for open access publishing: Universidad de Granada/CBUA. The research reported in this paper was partially supported by the BIGDATAMED project, which has received funding from the Andalusian Government (Junta de Andalucía) under grant agreement No P18-RT-1765, by Grants PID2021-123960OB-I00 and TED2021-129402B-C21 funded by Ministerio de Ciencia e Innovación, by ERDF A way of making Europe, and by the European Union NextGenerationEU. In addition, this work has been partially supported by the Ministry of Universities through the EU-funded Margarita Salas programme NextGenerationEU.
    The large amount of data generated every day makes it necessary to re-implement methods capable of handling massive data efficiently. This is the case for association rules, an unsupervised data mining tool capable of extracting information in the form of IF-THEN patterns. Although several methods have been proposed for the extraction of frequent itemsets (the phase preceding association rule mining) in very large databases, high computational cost and lack of memory remain major problems when processing large data. Therefore, the aim of this paper is threefold: (1) to review existing algorithms for frequent itemset and association rule mining, (2) to develop new efficient frequent itemset Big Data algorithms using distributed computation, as well as a new association rule mining algorithm in Spark, and (3) to compare the proposed algorithms with existing proposals, varying the number of transactions and the number of items. For this purpose, we have used the Spark platform, which has been demonstrated to outperform existing distributed algorithmic implementations.
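The quantities these algorithms compute, itemset support and rule confidence, can be shown on a single machine. The sketch below is a brute-force illustration of those definitions, not the paper's Spark algorithms (which exist precisely to avoid enumerating all candidates); the transactions and thresholds are made up.

```python
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
min_support, min_confidence = 0.6, 0.7

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Frequent itemsets: brute force over all candidates (Apriori/FP-Growth prune this).
items = sorted(set().union(*transactions))
frequent = {frozenset(c)
            for k in range(1, len(items) + 1)
            for c in combinations(items, k)
            if support(set(c)) >= min_support}

# Association rules X -> Y with confidence = support(X ∪ Y) / support(X).
rules = []
for itemset in frequent:
    for k in range(1, len(itemset)):
        for lhs in map(frozenset, combinations(sorted(itemset), k)):
            conf = support(set(itemset)) / support(set(lhs))
            if conf >= min_confidence:
                rules.append((set(lhs), set(itemset - lhs), round(conf, 2)))
```

A distributed implementation parallelises the support counts (the `support` calls above), which dominate the cost on large transaction sets.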

    Real-time big data processing for anomaly detection : a survey

    Get PDF
    The advent of connected devices and the omnipresence of the Internet have paved the way for intruders to attack networks, leading to cyber-attacks, financial loss, information theft in healthcare, and cyber war. Hence, network security analytics has become an important area of concern and has gained intensive attention among researchers of late, specifically in the domain of network anomaly detection, which is considered crucial for network security. However, preliminary investigations have revealed that existing approaches to detecting anomalies in networks are not effective enough, particularly in real time. The inefficacy of current approaches is mainly due to the amassment of massive volumes of data through the connected devices. Therefore, it is crucial to propose a framework that effectively handles real-time big data processing and detects anomalies in networks. In this regard, this paper attempts to address the issue of detecting anomalies in real time. Accordingly, this paper surveys the state-of-the-art real-time big data processing technologies related to anomaly detection and the vital characteristics of the associated machine learning algorithms. The paper begins with an explanation of the essential contexts and a taxonomy of real-time big data processing, anomaly detection, and machine learning algorithms, followed by a review of big data processing technologies. Finally, the identified research challenges of real-time big data processing in anomaly detection are discussed. © 2018 Elsevier Ltd.
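What distinguishes the real-time setting the survey covers is that each point must be judged as it arrives, against statistics maintained online. A minimal generic sketch (not a method from the survey): a sliding-window detector that flags points deviating more than k standard deviations from the rolling mean; window size, k, and the stream values are assumptions.

```python
from collections import deque
import math

class StreamingAnomalyDetector:
    """Online detector: flags a point whose deviation from the rolling mean
    exceeds k standard deviations of a sliding window."""
    def __init__(self, window=20, k=3.0):
        self.buf = deque(maxlen=window)
        self.k = k

    def observe(self, x):
        anomalous = False
        if len(self.buf) >= 5:  # wait for a minimally filled window
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            std = math.sqrt(var) or 1e-9
            anomalous = abs(x - mean) > self.k * std
        if not anomalous:       # keep confirmed anomalies out of the window
            self.buf.append(x)
        return anomalous

det = StreamingAnomalyDetector()
stream = [10, 11, 10, 12, 11, 10, 11, 12, 10, 95, 11, 10]
flags = [det.observe(x) for x in stream]
```

In a big data pipeline, the same per-point logic would run inside a stream processor, with the window state partitioned by entity (host, flow, user).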

    IMPROVE - Innovative Modelling Approaches for Production Systems to Raise Validatable Efficiency

    Get PDF
    This open access work presents selected results from the European research and innovation project IMPROVE, which yielded novel data-based solutions to enhance machine reliability and efficiency in the fields of simulation and optimization, condition monitoring, alarm management, and quality prediction.

    Privacy & law enforcement

    Get PDF

    Internet banking fraud detection using prudent analysis

    Get PDF
    The threat posed by cybercrime to individuals, banks and other online financial service providers is real and serious. Through phishing, unsuspecting victims' Internet banking usernames and passwords are stolen and their accounts robbed. In addressing this issue, commercial banks and other financial institutions use a generically similar approach in their Internet banking fraud detection systems. This common approach involves the use of a rule-based system combined with an Artificial Neural Network (ANN). The approach used by commercial banks has limitations that affect its efficiency in curbing new fraudulent transactions. Firstly, the banks' security systems are focused on preventing unauthorized entry and have no way of conclusively detecting an imposter using stolen credentials. Also, updating these systems is slow and their maintenance is labour-intensive and ultimately costly to the business. A major limitation of these rule bases is brittleness: an inability to recognise the limits of their own knowledge. To address the limitations highlighted above, this thesis proposes, develops and evaluates a new system for use in Internet banking fraud detection using Prudence Analysis, a technique through which a system can detect when its knowledge is insufficient for a given case. Specifically, the thesis proposes several contributions to this end.
    Doctor of Philosophy
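The core idea of prudence, declining to classify when a case lies outside the system's knowledge, can be shown with a toy sketch. This is a generic illustration, not the thesis's system: a 1-nearest-neighbour classifier that refers the case to an analyst when the query is farther than a threshold from every known case. The training data, features and threshold are assumptions.

```python
import math

# Hypothetical labelled transactions: (feature vector, label).
training = [((1.0, 0.2), "ok"), ((0.9, 0.3), "ok"),
            ((0.2, 2.5), "fraud"), ((0.3, 2.4), "fraud")]

def prudent_classify(x, max_distance=1.0):
    """1-NN classifier that declines to answer (a prudence warning) when
    the query lies farther than `max_distance` from all known cases."""
    dist, label = min((math.dist(x, v), lab) for v, lab in training)
    if dist > max_distance:
        return "refer-to-analyst"   # knowledge insufficient for this case
    return label
```

A query near the known "ok" cases, such as `(0.95, 0.25)`, is classified normally, whereas a query far from everything, such as `(5.0, 5.0)`, triggers the prudence warning instead of a brittle guess.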
