10,789 research outputs found

    Learning Machines Supporting Bankruptcy Prediction

    In many economic applications it is desirable to predict the future financial status of a company, chiefly whether it will default or not. A support vector machine (SVM) is one learning method that uses historical data to establish a classification rule, called a score or an SVM: companies with scores above zero belong to one group and the rest to the other. Probability of default (PD) values can be estimated from the scores an SVM provides. The transformation used in this paper combines rank weighting with smoothing by the pool-adjacent-violators (PAV) algorithm, so the resulting conversion is monotone. This discussion paper is based on the Creditreform database from 1997 to 2002. The indicator variables were converted to financial ratios; it transpired that eight of the 25 were useful for training the SVM, and these ratios belong to the activity, profitability, liquidity and leverage categories. Finally, we conclude that SVMs are capable of extracting the necessary information from financial balance sheets and then predicting the future solvency or insolvency of a company. Banks in particular will benefit from these results, which allow them to be more aware of their risk when lending money.
    Keywords: Support Vector Machine, Bankruptcy, Default Probabilities Prediction, Profitability
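    As a rough illustration of the monotone score-to-PD step, the sketch below uses scikit-learn's IsotonicRegression, which implements the PAV algorithm. The rank-weighting stage described in the abstract is omitted, the scores and default labels are invented, and the sketch assumes higher scores indicate higher default risk (the orientation in the paper depends on how the score is defined).

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical SVM scores and observed default outcomes (1 = defaulted).
scores = np.array([-1.8, -1.1, -0.4, 0.2, 0.7, 1.5, 2.3])
defaults = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0])

# IsotonicRegression fits the best monotone (here: non-decreasing) map
# from score to default frequency via pool-adjacent-violators, yielding
# PD estimates that never decrease as the score increases.
pav = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
pd_estimates = pav.fit_transform(scores, defaults)

for s, pd in zip(scores, pd_estimates):
    print(f"score {s:+.1f} -> estimated PD {pd:.2f}")
```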

    Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

    Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining the decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis, which provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem it was conceived for. Finally, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
    Comment: Accepted for publication in Nature Communications
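    A loose sketch of the clustering step behind such a relevance analysis, under the assumption that explanation heatmaps (e.g., from layer-wise relevance propagation) have already been computed; here they are random stand-ins, and scikit-learn's spectral clustering groups them. Clusters of similar heatmaps would correspond to distinct prediction strategies, and small or surprising clusters are the candidates for "Clever Hans" behavior.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Stand-in for precomputed explanation heatmaps, one per prediction,
# e.g. produced by Layer-wise Relevance Propagation (LRP):
# shape (n_samples, height, width), random here for illustration.
rng = np.random.default_rng(0)
relevance_maps = rng.random((200, 32, 32))

# Flatten each heatmap into a feature vector and cluster the vectors
# spectrally; each cluster gathers predictions explained the same way.
features = relevance_maps.reshape(len(relevance_maps), -1)
clustering = SpectralClustering(
    n_clusters=4, affinity="nearest_neighbors", n_neighbors=10,
    random_state=0,
)
labels = clustering.fit_predict(features)

print(np.bincount(labels))  # cluster sizes, to be inspected manually
```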

    Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines, a class of neural network models that uses synaptic stochasticity as a mechanism for Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where the stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling and a regularizer during learning, akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an online fashion. Synaptic sampling machines perform equally well using discrete-time artificial units (as in Hopfield networks) or continuous-time leaky integrate-and-fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and to synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based synaptic sampling machines outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for online learning in brain-inspired hardware.
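    The following toy sketch conveys the core idea in plain discrete-time Python, not the paper's event-driven spiking formulation: each synapse transmits only with some probability (a DropConnect-like blank-out mask), and a one-step contrastive-divergence update is applied to the synapses that transmitted. All sizes, rates, and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy RBM-style network with "blank-out" synapses: whenever the network
# is used, each connection transmits only with probability p.
n_visible, n_hidden, p = 20, 10, 0.5
W = rng.normal(0.0, 0.1, (n_visible, n_hidden))

def sample_hidden(v, W_masked):
    return (sigmoid(v @ W_masked) > rng.random(n_hidden)).astype(float)

def sample_visible(h, W_masked):
    return (sigmoid(h @ W_masked.T) > rng.random(n_visible)).astype(float)

def cd1_step(v0, lr=0.05):
    # Independent Bernoulli mask over connections: the stochastic synapses.
    mask = (rng.random(W.shape) < p).astype(float)
    W_masked = W * mask / p  # rescale so the expected drive is unchanged
    h0 = sample_hidden(v0, W_masked)
    v1 = sample_visible(h0, W_masked)
    h1 = sample_hidden(v1, W_masked)
    # One-step contrastive divergence, applied only to the synapses that
    # actually transmitted, loosely mimicking event-driven learning.
    return lr * mask * (np.outer(v0, h0) - np.outer(v1, h1))

# Train on random binary patterns for illustration.
data = (rng.random((100, n_visible)) < 0.3).astype(float)
for v in data:
    W += cd1_step(v)
```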

    Reply to determining structural identifiability of parameter learning machines

    The paper by Ran and Hu (2014, Neurocomputing) examines identifiability and parameter redundancy in classes of models used in machine learning. This note discusses their results on global identifiability and also clarifies that the paper's results on parameter redundancy already appear in Cole et al. (2010, Mathematical Biosciences).
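    For context, a minimal sympy sketch of the derivative-matrix test used in this literature: a model is parameter redundant when the Jacobian of its mean function with respect to the parameters has rank below the number of parameters. The toy model below, in which only the product a*b enters, is invented for illustration and is not taken from either paper.

```python
import sympy as sp

a, b, c, x1, x2, x3 = sp.symbols("a b c x1 x2 x3")

# Deliberately redundant parameterization: a and b appear only as a*b,
# so they cannot be identified separately.
mu = sp.Matrix([a * b * x1, a * b * x2, a * b * x3 + c])

# Jacobian of the mean function with respect to the parameter vector.
D = mu.jacobian([a, b, c])

# Symbolic rank 2 < 3 parameters, so the model is parameter redundant.
print(D.rank())
```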