
    Personalized bank campaign using artificial neural networks

    Internship report presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. Nowadays, high market competition requires banks to focus more on individual customers' behaviour. Specifically, customers prefer a personal relationship with the financial institution and want to receive exclusive offers. A successful personalized cross-sell and up-sell campaign therefore requires knowing each client's interest in the offer. The aim of this project is to create a model that is able to estimate the probability that a customer will buy a product of the bank. The strategic plan is to run a long-term personalized campaign, and the challenge is to create a model that remains accurate over this period. The source datasets consist of 12 data marts, each representing a monthly snapshot of the bank's data warehouse between April 2016 and March 2017. They contain 191 original variables with personal and transactional information on around 1,400,000 clients each. The selected modelling technique is Artificial Neural Networks, specifically a Multilayer Perceptron trained with back-propagation. The results showed that the model performs well and that the business can use it to optimize profitability. Despite the good results, the business must monitor the model's outputs to check its performance over time.
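    The abstract does not describe the report's exact architecture or data pipeline, so the following is only a minimal sketch of the setup it describes: a multilayer perceptron trained with back-propagation (here scikit-learn's MLPClassifier) scoring each client's probability of buying a product. The file name, column names, and layer sizes are hypothetical placeholders.

        # Minimal sketch, not the report's actual pipeline: an MLP propensity model
        # trained with back-propagation scores each client's purchase probability.
        # File name and column names are hypothetical placeholders.
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        df = pd.read_csv("datamart_2016_04.csv")               # hypothetical monthly snapshot
        X = df.drop(columns=["client_id", "bought_product"])   # personal + transactional variables
        y = df["bought_product"]                                # 1 if the client bought the product

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                   stratify=y, random_state=0)

        model = make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0),
        )
        model.fit(X_tr, y_tr)

        # Purchase probabilities used to rank clients for the personalized campaign.
        proba = model.predict_proba(X_te)[:, 1]
        print("ROC AUC:", roc_auc_score(y_te, proba))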

    Application of neural networks and sensitivity analysis to improved prediction of trauma survival

    Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    Radial Basis Function Artificial Neural Network for the Investigation of Thyroid Cytological Lesions

    Objective. This study investigates the potential of an artificial intelligence (AI) methodology, the radial basis function (RBF) artificial neural network (ANN), in the evaluation of thyroid lesions. Study Design. The study was performed on 447 patients whose cytological and histological evaluations were in agreement. Cytological specimens were prepared using liquid-based cytology, and the histological result was based on subsequent surgical samples. Each specimen was digitized, and nuclear morphology features were measured on these images with an image analysis system. The extracted measurements (41,324 nuclei) were separated into two sets: a training set used to create the RBF ANN and a test set used to evaluate its performance. The system aimed to predict the histological status as benign or malignant. Results. On the training set the RBF ANN achieved sensitivity 82.5%, specificity 94.6%, and overall accuracy 90.3%; on the test set these indices were 81.4%, 90.0%, and 86.9%, respectively. When an algorithm was used to classify patients on the basis of the RBF ANN, the overall sensitivity was 95.0%, the specificity was 95.5%, and no statistically significant difference was observed. Conclusion. AI techniques, and especially ANNs, have been studied extensively only in recent years. The proposed approach is promising for avoiding misdiagnoses and assisting everyday cytopathology practice. The major drawback of this approach is the automation of a procedure to accurately detect and measure cell nuclei in the digitized images.
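    The paper's implementation is not given in the abstract, so the following is a minimal RBF-network sketch under common assumptions: Gaussian basis functions centred on k-means prototypes with a logistic output layer, evaluated with the sensitivity, specificity, and accuracy indices quoted above. A feature matrix X of nuclear-morphology measurements with labels y (0 = benign, 1 = malignant) is assumed to be available.

        # Minimal RBF-network sketch, not the authors' implementation: Gaussian basis
        # functions centred on k-means prototypes, followed by a logistic output layer.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import confusion_matrix

        def rbf_features(X, centers, gamma):
            """Gaussian activation of each sample against each RBF centre."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-gamma * d2)

        def fit_rbf_net(X_train, y_train, n_centers=20, gamma=0.5):
            """Pick centres with k-means, then fit the logistic output layer."""
            km = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X_train)
            out = LogisticRegression(max_iter=1000).fit(
                rbf_features(X_train, km.cluster_centers_, gamma), y_train)
            return km.cluster_centers_, out

        def predict(X, centers, out, gamma=0.5):
            return out.predict(rbf_features(X, centers, gamma))

        def report(y_true, y_pred):
            """Sensitivity, specificity, and overall accuracy as quoted in the study."""
            tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
            print(f"sensitivity {tp / (tp + fn):.3f}  specificity {tn / (tn + fp):.3f}  "
                  f"accuracy {(tp + tn) / len(y_true):.3f}")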

    A comparison of supervised learning algorithms

    Today there are many different machine learning algorithms available. As a consequence, when it comes to a specific problem, more than one algorithm can be applied to solve it. This raises the question of how different the results of those algorithms would be in such a case. In an attempt to answer this question, this thesis compares the results of four well-known supervised learning algorithms applied to a binary classification problem.
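    The abstract does not name the four algorithms, so the sketch below uses four common stand-ins (logistic regression, k-nearest neighbours, a decision tree, and a linear SVM) on a public binary-classification dataset, purely to illustrate the kind of comparison the thesis performs.

        # Illustrative sketch only: the thesis does not list its four algorithms here,
        # so these four are assumed stand-ins, compared with 5-fold cross-validation.
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import LinearSVC

        X, y = load_breast_cancer(return_X_y=True)   # stand-in binary problem

        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "k-nearest neighbours": KNeighborsClassifier(),
            "decision tree": DecisionTreeClassifier(random_state=0),
            "linear SVM": LinearSVC(dual=False),
        }

        for name, clf in models.items():
            pipe = make_pipeline(StandardScaler(), clf)
            scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
            print(f"{name:22s} mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")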

    NeuralSens: Sensitivity Analysis of Neural Networks

    Neural networks are important tools for data-intensive analysis and are commonly applied to model non-linear relationships between dependent and independent variables. However, neural networks are usually seen as "black boxes" that offer minimal information about how the input variables are used to predict the response in a fitted model. This article describes the \pkg{NeuralSens} package that can be used to perform sensitivity analysis of neural networks using the partial derivatives method. Functions in the package can be used to obtain the sensitivities of the output with respect to the input variables, evaluate variable importance based on sensitivity measures, and characterize relationships between input and output variables. Methods to calculate sensitivities are provided for objects from common neural network packages in \proglang{R}, including \pkg{neuralnet}, \pkg{nnet}, \pkg{RSNNS}, \pkg{h2o}, \pkg{neural}, \pkg{forecast} and \pkg{caret}. The article presents an overview of the techniques for obtaining information from neural network models, a theoretical foundation for how the partial derivatives of the output with respect to the inputs of a multi-layer perceptron model are calculated, a description of the package structure and functions, and applied examples comparing \pkg{NeuralSens} functions with analogous functions from other available \proglang{R} packages.
    Comment: 28 pages, 12 figures, submitted to Journal of Statistical Software (JSS) https://www.jstatsoft.org/inde
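    NeuralSens itself is an R package; as a language-neutral illustration of the partial-derivatives method it builds on (not of the package's API), the sketch below computes the Jacobian of a one-hidden-layer tanh MLP with respect to its inputs via the chain rule and aggregates it into mean and root-mean-square sensitivities per input variable. The toy data and network size are placeholders.

        # Sketch of the partial-derivatives method behind such sensitivity analysis
        # (illustration only; NeuralSens itself is an R package with its own API).
        # For y = tanh(x @ W1 + b1) @ W2 + b2, the chain rule gives
        # dy/dx_i = sum_k W1[i, k] * (1 - tanh(z1_k)**2) * W2[k, 0].
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def input_sensitivities(mlp, X):
            """Mean and RMS partial derivatives of the output w.r.t. each input."""
            W1, W2 = mlp.coefs_          # shapes: (n_inputs, n_hidden), (n_hidden, 1)
            b1 = mlp.intercepts_[0]
            jac = []
            for x in X:
                h_prime = 1.0 - np.tanh(x @ W1 + b1) ** 2   # derivative of tanh layer
                jac.append((W1 * h_prime) @ W2)             # dy/dx, shape (n_inputs, 1)
            jac = np.squeeze(np.array(jac), axis=-1)
            return jac.mean(axis=0), np.sqrt((jac ** 2).mean(axis=0))

        # Toy usage: y depends strongly on x0, weakly on x1, and not at all on x2.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))
        y = 3.0 * X[:, 0] + 0.3 * X[:, 1] ** 2
        mlp = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                           max_iter=5000, random_state=0).fit(X, y)
        mean_sens, rms_sens = input_sensitivities(mlp, X)
        print("mean dy/dx_i:", mean_sens.round(3))
        print("rms  dy/dx_i:", rms_sens.round(3))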

    Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness

    In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks. Intuitively, robustness means that small perturbations to an input do not cause the network to misclassify it. In this paper, we present a novel algorithm for verifying robustness properties of neural networks. Our method synergistically combines gradient-based optimization methods for counterexample search with abstraction-based proof search to obtain a sound and (δ-)complete decision procedure. Our method also employs a data-driven approach to learn a verification policy that guides abstract interpretation during proof search. We have implemented the proposed approach in a tool called Charon and experimentally evaluated it on hundreds of benchmarks. Our experiments show that the proposed approach significantly outperforms three state-of-the-art tools, namely AI^2, Reluplex, and ReluVal.
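    Only the gradient-based counterexample-search half of such a procedure is easy to sketch; the abstraction-based proof search and the learned verification policy are not shown. The toy ReLU network, perturbation radius, and step size below are placeholders and not Charon's actual configuration.

        # Sketch of gradient-based counterexample search for local robustness:
        # look for a point inside the L-infinity ball of radius eps around x0
        # whose predicted class differs from that of x0 (PGD-style ascent).
        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)   # placeholder weights
        W2, b2 = rng.normal(size=(8, 2)), rng.normal(size=2)

        def logits(x):
            return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

        def logit_gap_grad(x, true_cls, adv_cls):
            """Gradient of (logit_adv - logit_true) with respect to the input x."""
            mask = (x @ W1 + b1 > 0).astype(float)              # ReLU derivative
            w = W2[:, adv_cls] - W2[:, true_cls]
            return (W1 * mask) @ w

        def search_counterexample(x0, eps, steps=200, lr=0.05):
            true_cls = int(np.argmax(logits(x0)))
            adv_cls = 1 - true_cls
            x = x0.copy()
            for _ in range(steps):                              # projected gradient ascent
                x = x + lr * np.sign(logit_gap_grad(x, true_cls, adv_cls))
                x = np.clip(x, x0 - eps, x0 + eps)              # stay inside the ball
                if np.argmax(logits(x)) != true_cls:
                    return x                                    # robustness violated
            return None                                         # no counterexample found

        x0 = np.array([0.5, -0.2])
        print(search_counterexample(x0, eps=0.1))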