85,418 research outputs found

    A dynamic analysis of stock markets using a hidden Markov model

    This paper proposes a framework to detect financial crises, pinpoint the end of a crisis in stock markets, and support investment decision-making processes. The proposal is based on a hidden Markov model (HMM) and allows for a specific focus on conditional mean returns. By analysing weekly changes in US stock market indexes over a period of 20 years, the study obtains an accurate detection of stable and turmoil periods and a probabilistic measure of switching between different stock market conditions. The results contribute to the discussion of the capabilities of Markov-switching models for analysing stock market behaviour. In particular, we find evidence that the HMM outperforms a threshold GARCH model with Student-t innovations both in-sample and out-of-sample, giving financial operators some appealing investment strategies.
    L. De Angelis; L.J. Paas
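    The regime-detection idea behind the paper can be sketched with a forward filter for a two-state Gaussian HMM, which turns a return series into a probabilistic measure of being in a stable or turmoil regime. All parameter values below are invented for illustration; they are not the paper's estimates.

```python
import math

# Two hypothetical regimes: state 0 = "stable", state 1 = "turmoil".
TRANS = [[0.95, 0.05],   # P(next state | current state): regimes are persistent
         [0.10, 0.90]]
MEANS = [0.002, -0.005]  # illustrative weekly mean return per state
STDS  = [0.010, 0.035]   # illustrative weekly volatility per state
PRIOR = [0.5, 0.5]

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def filter_states(returns):
    """Forward algorithm: P(state | returns up to t) for each observation."""
    belief = PRIOR[:]
    path = []
    for r in returns:
        # predict step: propagate belief through the transition matrix
        pred = [sum(belief[i] * TRANS[i][j] for i in range(2)) for j in range(2)]
        # update step: weight by the Gaussian emission likelihood, renormalise
        post = [pred[j] * normal_pdf(r, MEANS[j], STDS[j]) for j in range(2)]
        total = sum(post)
        belief = [p / total for p in post]
        path.append(belief[:])
    return path

calm = [0.003, 0.001, 0.002]                 # small positive weekly returns
crash = calm + [-0.06, -0.04, -0.05]         # followed by large losses
probs = filter_states(crash)
```

    The filtered probability of the turmoil state jumps once the large negative returns arrive, which is the kind of switching signal the paper uses to time entry and exit.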

    Intelligent Financial Fraud Detection Practices: An Investigation

    Financial fraud is an issue with far-reaching consequences in the finance industry, government, and corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing in recent years has compounded the problem. Traditional methods of detection involve extensive use of auditing, in which a trained individual manually observes reports or transactions in an attempt to discover fraudulent behaviour. This method is not only time-consuming, expensive and inaccurate, but in the age of big data it is also impractical. Not surprisingly, financial institutions have turned to automated processes using statistical and computational methods. This paper presents a comprehensive investigation of financial fraud detection practices using such data mining methods, with a particular focus on computational intelligence-based techniques. The practices are classified by key aspects such as the detection algorithm used, the fraud type investigated, and the success rate. Issues and challenges associated with current practices, and potential future directions of research, are also identified.
    Comment: Proceedings of the 10th International Conference on Security and Privacy in Communication Networks (SecureComm 2014)
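    The simplest statistical screen that automated processes build on can be sketched as an outlier flag: a transaction is suspicious when its amount lies far outside an account's history. The threshold and data here are illustrative only; real systems layer far richer features and classifiers on top of such rules.

```python
import statistics

def flag_outliers(history, new_amounts, k=3.0):
    """Flag amounts more than k standard deviations from the account mean."""
    mu = statistics.mean(history)
    sd = statistics.stdev(history)
    return [abs(a - mu) > k * sd for a in new_amounts]

# invented account history: typical spend around 40-55
history = [42.0, 51.0, 39.0, 47.0, 55.0, 44.0, 49.0]
flags = flag_outliers(history, [48.0, 900.0])
```

    The 48.0 transaction passes unflagged while the 900.0 one is marked for review, mirroring the auditor's judgement the paper describes, but at machine speed.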

    Use of a Bayesian belief network to predict the impacts of commercializing non-timber forest products on livelihoods

    Commercialization of non-timber forest products (NTFPs) has been widely promoted as a means of sustainably developing tropical forest resources, in a way that promotes forest conservation while supporting rural livelihoods. In practice, however, NTFP commercialization has often failed to deliver the expected benefits. Progress in analyzing the causes of such failure has been hindered by the lack of a suitable framework for the analysis of NTFP case studies, and by the lack of predictive theory. We address these needs by developing a probabilistic model based on a livelihood framework, enabling the impact of NTFP commercialization on livelihoods to be predicted. The framework considers five types of capital asset needed to support livelihoods: natural, human, social, physical, and financial. Commercialization of NTFPs is represented in the model as the conversion of one form of capital asset into another, which is influenced by a variety of socio-economic, environmental, and political factors. Impacts on livelihoods are determined by the availability of the five types of assets following commercialization. The model, implemented as a Bayesian belief network, was tested using data from participatory research into 19 NTFP case studies undertaken in Mexico and Bolivia. The model provides a novel tool for diagnosing the causes of success and failure in NTFP commercialization, and can be used to explore the potential impacts of policy options and other interventions on livelihoods. The potential value of this approach for the development of NTFP theory is discussed.
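    The kind of discrete probabilistic reasoning a Bayesian belief network supports can be sketched with a toy fragment: a single "financial capital improves" node conditioned on two invented parents, market access and resource availability. All probabilities are made up for illustration; the paper's network spans the five capital-asset types and many more factors.

```python
P_MARKET = 0.6            # P(good market access) -- invented
P_RESOURCE = 0.7          # P(resource still available after harvest) -- invented
# P(financial capital improves | market access, resource availability)
P_FINANCE = {(True, True): 0.8, (True, False): 0.3,
             (False, True): 0.2, (False, False): 0.05}

def p_finance_improves():
    """Marginal P(finance improves), summing over both parent states."""
    total = 0.0
    for m in (True, False):
        for r in (True, False):
            pm = P_MARKET if m else 1 - P_MARKET
            pr = P_RESOURCE if r else 1 - P_RESOURCE
            total += pm * pr * P_FINANCE[(m, r)]
    return total

def p_market_given_finance():
    """Posterior P(market access | finance improved), via Bayes' rule."""
    joint = sum(P_MARKET * (P_RESOURCE if r else 1 - P_RESOURCE)
                * P_FINANCE[(True, r)] for r in (True, False))
    return joint / p_finance_improves()
```

    Observing a livelihood outcome raises the posterior belief in good market access above its prior, which is the diagnostic direction of reasoning the model uses to explain success and failure in the case studies.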

    Towards a Quantum-Like Cognitive Architecture for Decision-Making

    We propose an alternative and unifying framework for decision-making that, by using quantum mechanics, provides more generalised cognitive and decision models able to represent more information than classical models. This framework can accommodate and predict several cognitive biases reported by Lieder & Griffiths without heavy reliance on heuristics or on assumptions about the computational resources of the mind.
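    The core quantum-like mechanism can be sketched numerically: outcome probabilities come from squared magnitudes of complex amplitudes, so two paths to the same outcome can interfere, letting the model deviate from the classical law of total probability. The amplitudes below are invented for illustration and are not taken from the paper.

```python
import cmath
import math

# Two hypothetical reasoning paths to the same outcome, as complex amplitudes.
a = cmath.rect(math.sqrt(0.3), 0.0)          # amplitude via path 1
b = cmath.rect(math.sqrt(0.3), math.pi / 3)  # amplitude via path 2, phase-shifted

classical = abs(a) ** 2 + abs(b) ** 2        # classical law of total probability
quantum = abs(a + b) ** 2                    # quantum rule: interference included
interference = quantum - classical           # 2|a||b|cos(phase difference)
```

    When the phase difference is nonzero the interference term shifts the outcome probability away from the classical sum, which is how such models accommodate biases like the conjunction or disjunction effects.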

    Life-cycle of fatigue sensitive structures under uncertainty

    Fatigue is one of the main contributors to problems related to the structural safety of civil and marine structures. Life-cycle management (LCM) techniques that account for various uncertainties can be used to predict the safe service life of fatigue-sensitive structures, plan their future inspections, and support the decision-making process regarding maintenance and repair actions. This paper provides a brief overview of the LCM of fatigue-sensitive civil and marine structures under uncertainty. Probabilistic performance prediction, inspection scheduling, and maintenance optimization for such structures are discussed.
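    Probabilistic fatigue-life prediction of this kind is often done by Monte Carlo simulation over an S-N relation. The sketch below uses a Basquin-type law N = A / S^m with lognormal uncertainty on capacity A and stress range S; every parameter value is invented for illustration only.

```python
import math
import random

random.seed(1)  # fixed seed so the estimate is reproducible

def simulate_life():
    """One Monte Carlo draw of cycles to fatigue failure, N = A / S**m."""
    A = math.exp(random.gauss(27.0, 0.3))  # lognormal material capacity (invented)
    S = math.exp(random.gauss(4.0, 0.1))   # lognormal stress range (invented)
    m = 3.0                                # illustrative S-N slope
    return A / S ** m

target = 2.0e6  # hypothetical required service life in load cycles
samples = [simulate_life() for _ in range(20000)]
p_fail = sum(n < target for n in samples) / len(samples)
```

    The estimated probability that the fatigue life falls short of the target is the quantity that drives inspection scheduling and maintenance optimization in the LCM framework the paper reviews.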

    Using graphical models and multi-attribute utility theory for probabilistic uncertainty handling in large systems, with application to nuclear emergency management

    Although many decision-making problems involve uncertainty, uncertainty handling within large decision support systems (DSSs) is challenging. One domain where uncertainty handling is critical is emergency response management, in particular nuclear emergency response, where decision making takes place in an uncertain, dynamically changing environment. Assimilation and analysis of data can help to reduce these uncertainties, but it is critical to do this in an efficient and defensible way. After briefly introducing the structure of a typical DSS for nuclear emergencies, the paper sets up a theoretical structure that enables a formal Bayesian decision analysis to be performed for environments like this within a DSS architecture. In such probabilistic DSSs, many input conditional probability distributions are provided by different sets of experts overseeing different aspects of the emergency. These probabilities are then used by the decision maker (DM) to find her optimal decision. We demonstrate in this paper that unless due care is taken in such a composite framework, coherence and rationality may be compromised in a sense made explicit below. The technology we describe here builds a framework around which Bayesian data updating can be performed in a modular way, ensuring both coherence and efficiency, and provides sufficient unambiguous information to enable the DM to discover her expected-utility-maximizing policy.
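    The Bayes-then-maximise-expected-utility loop the paper formalises can be sketched on an invented toy problem: three severity states, one noisy sensor reading supplied as expert conditional probabilities, and two countermeasures. All numbers are illustrative, not drawn from any real emergency-response system.

```python
STATES = ["minor", "moderate", "severe"]
PRIOR = {"minor": 0.7, "moderate": 0.2, "severe": 0.1}
# P(sensor reads "high" | state) -- expert-supplied conditional probabilities
LIK_HIGH = {"minor": 0.05, "moderate": 0.4, "severe": 0.9}
# utility of each countermeasure in each state (invented)
UTILITY = {"shelter":  {"minor": -1, "moderate": -1, "severe": -20},
           "evacuate": {"minor": -5, "moderate": -3, "severe":  0}}

def posterior(prior, likelihood):
    """Coherent Bayesian update of the DM's beliefs on new evidence."""
    unnorm = {s: prior[s] * likelihood[s] for s in STATES}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

def best_action(beliefs):
    """Action maximising expected utility under the current beliefs."""
    eu = {a: sum(beliefs[s] * u[s] for s in STATES)
          for a, u in UTILITY.items()}
    return max(eu, key=eu.get)

post = posterior(PRIOR, LIK_HIGH)
```

    Under the prior the DM shelters; after assimilating the "high" sensor reading the posterior shifts enough weight to the severe state that evacuation becomes the expected-utility-maximizing policy, illustrating why the updating step must be performed coherently before the decision step.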