
    Fourteenth Biennial Status Report: March 2017 - February 2019

    No full text

    Machine Learning in Transaction Monitoring: The Prospect of xAI

    Full text link
    Banks hold a societal responsibility and regulatory requirements to mitigate the risk of financial crimes. Risk mitigation primarily happens through monitoring customer activity via Transaction Monitoring (TM). Recently, Machine Learning (ML) has been proposed to identify suspicious customer behavior, which raises complex socio-technical implications around trust and the explainability of ML models and their outputs. However, little research is available due to the topic's sensitivity. We aim to fill this gap by presenting empirical research exploring how ML-supported automation and augmentation affect the TM process and stakeholders' requirements for building eXplainable Artificial Intelligence (xAI). Our study finds that xAI requirements depend on the liable party in the TM process, which changes depending on whether TM is augmented or automated. Context-relatable explanations can provide much-needed support for auditing and may diminish bias in the investigator's judgement. These results suggest a use-case-specific approach for xAI to adequately foster the adoption of ML in TM. Comment: Proceedings of the 56th Hawaii International Conference on System Sciences, 2023. https://hdl.handle.net/10125/10305

    The promise and challenges of vegetable home gardening for improving nutrition and household welfare: new evidence from Kasese District, Uganda

    Get PDF
    Nearly eighty percent of Kasese District residents in Western Uganda pursue subsistence farming on the slopes of the Rwenzori Mountains, where soil erosion and poverty contribute to declining agricultural yields, food insecurity, and high rates of stunting and wasting in children. In 2017, the Rwenzori Center for Research and Advocacy (RCRA) began a pilot home garden program aimed at sustainably improving nutrition for vulnerable households in Kasese. In 2019, the research team investigated whether a home garden intervention for nutritional benefit is an effective entry point to achieve broad household welfare. Data were collected from fifty randomly selected households in four sites with varied degrees of exposure to the garden intervention. Methods included a questionnaire, an innovative card sorting game (CSG), a 24-hour recall nutrition survey, in-depth interviews, and case stories of diverse Kasese women. Findings show that households experience diverse garden benefits and challenges depending upon baseline conditions, such as access to land, water, and money, as well as the quality and consistency of the technical and material support received. The frequency of vegetable consumption per day showed the most consistently positive results across households, while the 24-hour nutrition survey showed increased consumption of leafy green vegetables high in iron and vitamin A among families with gardens, leading to 'stronger children' (CSG scenario) and improved family health. Further, over seventy percent of families generated income from their gardens, though their capacity to sell year-round varied widely. The garden income earned by women gardeners is spent almost entirely on child welfare. On average, more than ninety percent of garden households save ten percent of their income, primarily through Village Savings Groups.
    In sum, the evidence affirms our research question: a home garden intervention for nutritional benefit can be an effective entry point to achieve broad household welfare. This conclusion is supported by numerous previous studies on garden initiatives for improved nutrition around the world.

    Broadening the Horizon of Adversarial Attacks in Deep Learning

    Get PDF
    152 p. Machine Learning models such as Deep Neural Networks currently form the core of a wide range of technologies applied to critical tasks, such as facial recognition or autonomous driving, in which both predictive capability and reliability are fundamental requirements. However, these models can be easily fooled by inputs manipulated in ways imperceptible to humans, known as adversarial examples, which constitutes a security breach that an attacker can exploit for illicit purposes. Since these vulnerabilities directly affect the integrity and reliability of many systems that are progressively being deployed in real-world applications, it is crucial to determine the scope of such vulnerabilities in order to guarantee a more responsible, informed, and safe use of these systems. For these reasons, the main objective of this doctoral thesis is to investigate new notions of adversarial attacks and vulnerabilities in Deep Neural Networks. As a result of this research, this thesis presents new attack paradigms that exceed or extend the capabilities of the methods currently available in the literature, as they are able to achieve more general, complex, or ambitious goals. At the same time, new security breaches are exposed in use cases and scenarios in which the consequences of adversarial attacks had not previously been investigated. Our work also sheds light on various properties of these models that make them more vulnerable to adversarial attacks, contributing to a better understanding of these phenomena.

    Impact of Human-AI Interaction on User Trust and Reliance in AI-Assisted Qualitative Coding

    Full text link
    While AI shows promise for enhancing the efficiency of qualitative analysis, the unique human-AI interaction resulting from varied coding strategies makes it challenging to develop a trustworthy AI-assisted qualitative coding system (AIQCs) that supports coding tasks effectively. We bridge this gap by exploring the impact of varying coding strategies on user trust and reliance on AI. We conducted a mixed-methods split-plot 3x3 study involving 30 participants, and a follow-up study with 6 participants, exploring varying text selection and code length in the use of our AIQCs system for qualitative analysis. Our results indicate that qualitative open coding should be conceptualized as a series of distinct subtasks, each with differing levels of complexity, and should therefore be given tailored design considerations. We further observed a discrepancy between perceived and behavioral measures, and emphasize the potential challenges of under- and over-reliance on AIQCs systems. Additional design implications are also proposed for consideration. Comment: 27 pages with references, 9 figures, 5 tables