
    Optimizing Credit Limit Adjustments Under Adversarial Goals Using Reinforcement Learning

    Reinforcement learning has been explored for many problems, from video games with deterministic environments to portfolio and operations management, where scenarios are stochastic; however, there have been few attempts to apply these methods to banking problems. In this study, we sought to find and automate an optimal credit card limit adjustment policy using reinforcement learning techniques. In particular, given the historical data available, we considered two possible actions per customer: increasing or maintaining an individual's current credit limit. To find this policy, we first formulated the decision-making question as an optimization problem that maximizes expected profit, balancing two adversarial goals: maximizing the portfolio's revenue and minimizing the portfolio's provisions. Second, given the particularities of our problem, we used an offline learning strategy, simulating the impact of each action from historical data from a super-app (i.e., a mobile application that offers various services, from goods deliveries to financial products) in Latin America, to train our reinforcement learning agent. Our results show that a Double Q-learning agent with optimized hyperparameters can outperform other strategies and generate a non-trivial optimal policy that reflects the complex nature of this decision. Our research not only establishes a conceptual structure for applying the reinforcement learning framework to credit limit adjustment, presenting an objective, primarily data-driven technique for making these decisions rather than relying only on expert-driven systems, but also provides insights into the effect of alternative data usage in determining these modifications.
    Comment: 29 pages, 16 figures
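    The tabular Double Q-learning update the abstract refers to can be sketched as follows. The state discretization, reward signal, and hyperparameter values below are toy placeholders, not the study's actual formulation; only the two-action structure (maintain vs. increase) comes from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 10, 2   # hypothetical discretized customer states; actions: 0=maintain, 1=increase
ALPHA, GAMMA = 0.1, 0.95      # illustrative hyperparameters, not the paper's tuned values

q_a = np.zeros((N_STATES, N_ACTIONS))
q_b = np.zeros((N_STATES, N_ACTIONS))

def double_q_update(s, a, r, s_next):
    """One offline Double Q-learning step on a logged transition (s, a, r, s')."""
    if rng.random() < 0.5:
        best = int(np.argmax(q_a[s_next]))   # select the action with table A ...
        q_a[s, a] += ALPHA * (r + GAMMA * q_b[s_next, best] - q_a[s, a])  # ... evaluate it with table B
    else:
        best = int(np.argmax(q_b[s_next]))   # and symmetrically for table B
        q_b[s, a] += ALPHA * (r + GAMMA * q_a[s_next, best] - q_b[s, a])

# Replay a synthetic batch of logged transitions; the reward stands in for
# the paper's revenue-minus-provisions signal.
for _ in range(5000):
    s, a = rng.integers(N_STATES), rng.integers(N_ACTIONS)
    r = rng.normal(loc=0.5 if a == 1 else 0.2)   # toy reward, not real portfolio data
    double_q_update(s, a, r, rng.integers(N_STATES))

policy = np.argmax((q_a + q_b) / 2, axis=1)      # greedy policy from the averaged tables
```

    Using two tables and cross-evaluating the argmax is what distinguishes Double Q-learning from plain Q-learning: it reduces the overestimation bias of the max operator, which matters when rewards are noisy, as portfolio outcomes are.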

    Monetizing Explainable AI: A Double-edged Sword

    Algorithms used by organizations increasingly wield power in society as they decide the allocation of key resources and basic goods. In order to promote fairer, more just, and more transparent uses of such decision-making power, explainable artificial intelligence (XAI) aims to provide insights into the logic of algorithmic decision-making. Despite much research on the topic, consumer-facing applications of XAI remain rare. A central reason may be that a viable platform-based monetization strategy for this new technology has yet to be found. We introduce and describe a novel monetization strategy for fusing algorithmic explanations with programmatic advertising via an explanation platform. We claim the explanation platform represents a new, socially-impactful, and profitable form of human-algorithm interaction and estimate its potential for revenue generation in the high-risk domains of finance, hiring, and education. We then consider possible undesirable and unintended effects of monetizing XAI and simulate these scenarios using real-world credit lending data. Ultimately, we argue that monetizing XAI may be a double-edged sword: while monetization may incentivize industry adoption of XAI in a variety of consumer applications, it may also conflict with the original legal and ethical justifications for developing XAI. We conclude by discussing whether there may be ways to responsibly and democratically harness the potential of monetized XAI to provide greater consumer access to algorithmic explanations.

    DockStream: a docking wrapper to enhance de novo molecular design

    Recently, we have released the de novo design platform REINVENT in version 2.0. This improved and extended iteration supports far more features and scoring function components, allowing bespoke, tailor-made protocols to maximize impact in small molecule drug discovery projects. A major obstacle for generative models is producing active compounds, and predictive (QSAR) models have been applied to enrich for target activity. However, QSAR models are inherently limited by their applicability domains. To overcome these limitations, we introduce a structure-based scoring component for REINVENT. DockStream is a flexible, stand-alone molecular docking wrapper that provides access to a collection of ligand embedders and docking backends. Using the benchmarking and analysis workflow provided in DockStream, execution and subsequent analysis of a variety of docking configurations can be automated. Docking algorithms vary greatly in performance depending on the target, and the benchmarking and analysis workflow provides a streamlined solution for identifying productive docking configurations. We show that a productive docking configuration can guide the REINVENT agent to optimize towards improved docking scores using public data. With docking activated, REINVENT is able to retain key interactions in the binding site, discard molecules that do not fit the binding cavity, harness unused (sub-)pockets, and improve overall performance in the scaffold-hopping scenario. The code is freely available at https://github.com/MolecularAI/DockStream
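    A structure-based scoring component of this kind typically maps a raw docking score (more negative = better pose) onto the [0, 1] reward scale a generative agent consumes, then combines it with other components. The sketch below uses a reverse sigmoid and a geometric-mean aggregator; the function names and thresholds are illustrative assumptions, not DockStream's actual API or defaults:

```python
import math

def docking_reward(score, low=-12.0, high=-6.0):
    """Map a docking score to [0, 1] with a reverse sigmoid:
    scores near `low` map close to 1, scores near `high` close to 0.
    The window (-12, -6) is illustrative, not a DockStream default."""
    k = 10.0 / (high - low)                      # steepness scaled to the window
    return 1.0 / (1.0 + math.exp(k * (score - (low + high) / 2.0)))

def aggregate_score(components):
    """Geometric mean of component scores, a common way to combine scoring
    function components so that a zero in any one component vetoes the molecule."""
    prod = 1.0
    for c in components:
        prod *= max(c, 1e-6)                     # floor to avoid a hard zero
    return prod ** (1.0 / len(components))

# e.g. combine a docking reward with a hypothetical QSAR activity score
total = aggregate_score([docking_reward(-10.5), 0.7])
```

    The geometric mean is one plausible aggregation choice; a weighted sum is the other common option when a strong component should be allowed to compensate for a weak one.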

    Adaptive simulations, towards interactive protein-ligand modeling

    Modeling the dynamic nature of protein-ligand binding with atomistic simulations is one of the main challenges in computational biophysics, with important implications for the drug design process. Although hardware and software advances have significantly revamped the use of molecular simulations in the past few years, we still lack a fast and accurate ab initio description of the binding mechanism in complex systems; it is available only with state-of-the-art techniques and requires several hours or days of heavy computation. Such delay is one of the main limiting factors for a larger penetration of protein dynamics modeling in the pharmaceutical industry. Here we present a game-changing technology that opens the way for fast, reliable simulations of protein dynamics by combining an adaptive reinforcement learning procedure with Monte Carlo sampling on modern multi-core computational resources. We show remarkable performance in mapping the protein-ligand energy landscape, reproducing the full binding mechanism in less than half an hour and the active-site induced fit in less than 5 minutes. We exemplify our method by studying diverse complex targets, including nuclear hormone receptors and GPCRs, demonstrating the potential of the new adaptive technique in screening and lead optimization studies.
    We thank Drs Anders Hogner and Christoph Grebner, from AstraZeneca, and Jorge Estrada, from BSC, for fruitful discussions and feedback on the manuscript. We acknowledge the BSC-CRG-IRB Joint Research Program in Computational Biology. This work was supported by the CTQ2016-79138-R grant from the Spanish Government. D.L. acknowledges the support of SEV-2011-00067, awarded by the Spanish Government.
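    The core of an adaptive sampling scheme like the one described is the rule that chooses, after each epoch, which explored states the next batch of Monte Carlo trajectories should restart from. A minimal epsilon-greedy sketch of that selection step follows; the data layout, parameter values, and exploit-the-minimum rule are illustrative assumptions, not the platform's actual algorithm:

```python
import random

def select_spawn_states(explored, n_workers, epsilon=0.25):
    """Adaptive (epsilon-greedy) choice of restart states for the next
    Monte Carlo epoch: mostly exploit the lowest-energy pose found so far,
    occasionally explore a random visited state to avoid getting trapped.
    `explored` is a list of (state_id, energy) pairs; names are illustrative."""
    by_energy = sorted(explored, key=lambda se: se[1])   # lowest (best) energy first
    spawns = []
    for _ in range(n_workers):
        if random.random() < epsilon:
            spawns.append(random.choice(explored)[0])    # explore: any visited state
        else:
            spawns.append(by_energy[0][0])               # exploit: best pose so far
    return spawns

# e.g. respawn 4 parallel workers from the states seen in the last epoch
poses = [("pose_a", -5.0), ("pose_b", -9.0), ("pose_c", -2.0)]
restarts = select_spawn_states(poses, n_workers=4)
```

    Biasing restarts toward low-energy regions while reserving some workers for exploration is what lets many short trajectories on a multi-core machine substitute for one long simulation.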

    EVALUATING ARTIFICIAL INTELLIGENCE METHODS FOR USE IN KILL CHAIN FUNCTIONS

    Current naval operations require sailors to make time-critical and high-stakes decisions based on uncertain situational knowledge in dynamic operational environments. Recent tragic events have resulted in unnecessary casualties; they represent the decision complexity involved in naval operations and specifically highlight challenges within the OODA loop (Observe, Orient, Decide, and Act). Kill chain decisions involving the use of weapon systems are a particularly stressing category within the OODA loop, with unexpected threats that are difficult to identify with certainty, shortened decision reaction times, and lethal consequences. An effective kill chain requires the proper setup and employment of shipboard sensors; the identification and classification of unknown contacts; the analysis of contact intentions based on kinematics and intelligence; an awareness of the environment; and decision analysis and resource selection. This project explored the use of automation and artificial intelligence (AI) to improve naval kill chain decisions. The team studied naval kill chain functions and developed specific evaluation criteria for each function for determining the efficacy of specific AI methods. The team then identified and studied AI methods and applied the evaluation criteria to map specific AI methods to specific kill chain functions.
    Approved for public release. Distribution is unlimited.
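    Mapping AI methods to kill chain functions via per-function criteria is essentially a weighted-criteria decision matrix. The sketch below shows the mechanics for one hypothetical function; the criteria, weights, method names, and 1-5 scores are all illustrative placeholders, not the project's actual evaluation data:

```python
# Hypothetical weighted-criteria matrix for one kill chain function
# (e.g. contact classification); all numbers are illustrative.
criteria_weights = {"explainability": 0.3, "latency": 0.3,
                    "accuracy": 0.25, "data_needs": 0.15}

method_scores = {  # 1 (poor) .. 5 (strong) against each criterion
    "rule_based_expert_system": {"explainability": 5, "latency": 5, "accuracy": 2, "data_needs": 5},
    "deep_neural_network":      {"explainability": 1, "latency": 4, "accuracy": 5, "data_needs": 1},
    "bayesian_network":         {"explainability": 4, "latency": 3, "accuracy": 4, "data_needs": 3},
}

def weighted_score(scores, weights):
    """Weighted sum of criterion scores for one AI method."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank candidate methods for this function, best first
ranking = sorted(method_scores,
                 key=lambda m: weighted_score(method_scores[m], criteria_weights),
                 reverse=True)
```

    Because the weights differ per kill chain function (a fire-control decision might weight latency and explainability far above raw accuracy), the same method can rank first for one function and last for another, which is exactly why a per-function criteria mapping is needed.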

    A CREDIT ANALYSIS OF THE UNBANKED AND UNDERBANKED: AN ARGUMENT FOR ALTERNATIVE DATA

    The purpose of this study is to ascertain the statistical and economic significance of non-traditional credit data for individuals who do not have sufficient economic data, collectively known as the unbanked and underbanked. The consequences of not having sufficient economic information often determine whether unbanked and underbanked individuals will pay a higher price for credit or be denied entirely. In terms of regulation, there is a strong interest in credit models that will inform policies on how to gradually move sections of the unbanked and underbanked population into the general financial network. In Chapter 2 of the dissertation, I take a statistical approach to establish the role of non-traditional credit data, known as alternative data, in modeling borrower default behavior for unbanked and underbanked individuals. Further, using combined traditional and alternative auto loan data, I am able to make statements about which alternative data variables contribute to borrower default behavior. Additionally, I devise a way to statistically test the goodness-of-fit metric for some machine learning classification models to ascertain whether the alternative data truly helps in the credit building process. In Chapter 3, I discuss the economic significance of incorporating alternative data in the credit modeling process. Using a maximum utility approach, I show that combining alternative and traditional data yields a higher profit for the lender than using either data source alone. Additionally, Chapter 3 advocates for the use of loss functions that align with a lender's business objective of making a profit.
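    The maximum utility idea, evaluating a credit model by lender profit rather than by a statistical accuracy metric, can be sketched in a few lines. The gain and loss amounts and the toy predictions below are illustrative assumptions, not figures from the dissertation's auto loan data:

```python
def expected_profit(probs_default, outcomes, threshold, gain=300.0, loss=1000.0):
    """Profit from approving applicants whose predicted default probability
    is below `threshold`: earn `gain` on each repaid loan, lose `loss` on
    each default. Dollar amounts are illustrative placeholders."""
    profit = 0.0
    for p, defaulted in zip(probs_default, outcomes):
        if p < threshold:                       # approve the application
            profit += -loss if defaulted else gain
    return profit

def best_threshold(probs_default, outcomes, grid=None):
    """Maximum-utility cutoff: choose the approval threshold that maximizes
    profit, which generally differs from the accuracy-maximizing cutoff
    because a default costs the lender far more than a repaid loan earns."""
    grid = grid or [i / 100 for i in range(1, 100)]
    return max(grid, key=lambda t: expected_profit(probs_default, outcomes, t))

# toy model output on four applicants (0 = repaid, 1 = defaulted)
probs = [0.05, 0.10, 0.60, 0.90]
labels = [0, 0, 1, 1]
t_star = best_threshold(probs, labels)
```

    Comparing `expected_profit` under a model trained on traditional data alone versus one trained on combined data is a direct way to express the chapter's claim that the combined data yields higher lender profit.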

    A Pervasive Computational Intelligence based Cognitive Security Co-design Framework for Hyper-connected Embedded Industrial IoT

    The amplified connectivity of everyday IoT devices exposes numerous attack trajectories that cybercriminals can exploit to execute malicious attacks. These dangers are further amplified by the resource limitations and heterogeneity of low-budget IoT/IIoT nodes, which make existing host-centered, fixed-perimeter security tools inappropriate for dynamic IoT settings. The innovative feature of our system is its ability to cope with intermittent network connectivity as well as node limitations in terms of scarce computational ability, limited buffer space (at edge nodes), and finite energy. Based on real-time analytical data, the proposed scheme selects the most suitable end-to-end security option that complies with a given set of node constraints. Our emulation assessment demonstrates the benefits of implementing a context-aware, co-design-oriented cognitive security method in integrated IIoT settings and delivers useful insights into strategy execution to drive future study. The paper achieves its goals by identifying gaps in the security specific to the node subclasses that are vital to our system's operations.
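    The constraint-aware selection step, choosing the strongest end-to-end security option a node's resources can sustain, can be sketched as a simple feasibility check over a strength-ordered list. The profile names, resource costs, and telemetry fields below are hypothetical, not taken from the paper:

```python
# Illustrative security profiles, ordered strongest-first; the names and
# resource costs are hypothetical placeholders, not the paper's catalogue.
PROFILES = [
    {"name": "AES-256-GCM+mutual-TLS", "cpu": 80, "buffer": 64, "energy": 70},
    {"name": "AES-128-CCM+PSK",        "cpu": 40, "buffer": 24, "energy": 35},
    {"name": "lightweight-MAC-only",   "cpu": 10, "buffer": 8,  "energy": 10},
]

def select_profile(node):
    """Return the most capable profile whose resource costs fit within the
    node's measured constraints, degrading gracefully on weaker hardware
    instead of failing to secure the link at all."""
    for p in PROFILES:                       # strongest-first scan
        if (p["cpu"] <= node["cpu"]
                and p["buffer"] <= node["buffer"]
                and p["energy"] <= node["energy"]):
            return p["name"]
    return None                              # nothing fits: flag node for isolation

# toy real-time telemetry for one edge node
edge_node = {"cpu": 45, "buffer": 30, "energy": 40}
chosen = select_profile(edge_node)
```

    Re-running the selection as telemetry changes is what makes the scheme adaptive: a node whose energy budget drops mid-operation is stepped down to a lighter profile rather than dropped from the network.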