8,073 research outputs found

    Scalable and Weakly Supervised Bank Transaction Classification

    This paper aims to categorize bank transactions using weak supervision, natural language processing, and deep neural network techniques. Our approach minimizes the reliance on expensive and difficult-to-obtain manual annotations by leveraging heuristics and domain knowledge to train accurate transaction classifiers. We present an effective and scalable end-to-end data pipeline, covering data preprocessing, transaction text embedding, anchoring, label generation, and discriminative neural network training, together with an overview of the system architecture. We demonstrate the effectiveness of our method by showing that it outperforms existing market-leading solutions, achieves accurate categorization, and can be quickly extended to novel and composite use cases. This can in turn unlock many financial applications such as financial health reporting and credit risk assessment.
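
    To make the weak-supervision idea concrete, here is a minimal sketch of heuristic labeling functions over transaction text. The categories, keywords, and majority-vote rule are illustrative assumptions, not the paper's actual anchors or label model.

        # Minimal sketch of heuristic (weak) labeling for bank transactions.
        # Categories, keywords, and the voting rule are illustrative assumptions,
        # not the anchors or label model described in the paper.

        ABSTAIN = None

        def lf_groceries(text: str):
            return "groceries" if any(k in text.lower() for k in ("supermarket", "grocer")) else ABSTAIN

        def lf_transport(text: str):
            return "transport" if any(k in text.lower() for k in ("uber", "metro", "fuel")) else ABSTAIN

        def lf_income(text: str):
            return "income" if "payroll" in text.lower() else ABSTAIN

        LABELING_FUNCTIONS = [lf_groceries, lf_transport, lf_income]

        def weak_label(text: str):
            """Majority vote over non-abstaining heuristics; None if all abstain."""
            votes = [v for v in (lf(text) for lf in LABELING_FUNCTIONS) if v is not ABSTAIN]
            return max(set(votes), key=votes.count) if votes else None

        if __name__ == "__main__":
            for tx in ("UBER TRIP 1234", "WHOLE FOODS SUPERMARKET", "ACME CORP PAYROLL"):
                print(tx, "->", weak_label(tx))

    Labels produced this way would then train the discriminative neural classifier, so the model can generalize beyond the heuristics themselves.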

    AgileML: A Machine Learning Project Development Pipeline Incorporating Active Consumer Engagement


    Life Sucks: Classifying Transgressive Fiction

    This thesis explores the genre of transgressive fiction and locates my own creative work within this genre. Los Angeles Times writer Michael Silverblatt popularized the term transgressive fiction in 1993 and associates authors such as Dennis Cooper, Kathy Acker, Roland Barthes, Bret Easton Ellis, Michel Foucault, William Gass, Jean Genet, and William Burroughs with the genre; however, the definition of transgressive fiction remains elusive. Therefore, in my first chapter, I analyze three transgressive novels: Bret Easton Ellis’s American Psycho, Chuck Palahniuk’s Fight Club, and Tony Tulathimutte’s Private Citizens. My second chapter consists of selections from my creative work, a transgressive novel entitled Life Sucks. Finally, in my third chapter I reflect on my creative writing process and describe how my creative work engages with, contributes to, and departs from the genre of transgressive fiction.

    The Care2Report System: Automated Medical Reporting as an Integrated Solution to Reduce Administrative Burden in Healthcare

    Documenting patient medical information in the electronic medical record is a time-consuming task that comes at the expense of direct patient care. We propose an integrated solution to automate the process of medical reporting. This vision is enabled through the integration of speech and action recognition technology with semantic interpretation based on knowledge graphs. This paper presents our dialogue summarization pipeline that transforms speech into a medical report via transcription and formal representation. We discuss the functional and technical architecture of our Care2Report system along with an initial system evaluation on data from real consultation sessions.
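
    As a rough illustration of the transcription-to-report pipeline the abstract describes, the sketch below mocks each stage with plain functions. The function names, toy extraction rules, and triple format are assumptions for illustration, not the Care2Report interfaces.

        # Illustrative transcribe -> interpret -> report pipeline in the spirit
        # of the abstract. Every stage is a stand-in; none of these names or
        # rules come from the Care2Report system itself.

        def transcribe(audio_path: str) -> str:
            # Stand-in for a speech recognition call; returns a canned transcript.
            return "patient reports headache for three days; prescribed ibuprofen"

        def interpret(transcript: str):
            # Stand-in for knowledge-graph-based semantic interpretation:
            # extract (subject, relation, object) triples with toy keyword rules.
            triples = []
            if "headache" in transcript:
                triples.append(("patient", "has_symptom", "headache"))
            if "ibuprofen" in transcript:
                triples.append(("patient", "prescribed", "ibuprofen"))
            return triples

        def generate_report(triples) -> str:
            # Formal representation -> readable report text.
            lines = [f"- {s} {r.replace('_', ' ')} {o}" for s, r, o in triples]
            return "Consultation summary:\n" + "\n".join(lines)

        if __name__ == "__main__":
            print(generate_report(interpret(transcribe("consult.wav"))))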

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; and (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them.
    Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.
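
    The survey's definition of NLG, generating text from non-linguistic input, is easiest to see in a small data-to-text example. The toy realiser below performs content determination followed by template-based surface realisation over a weather record; it is purely illustrative and not drawn from the survey.

        # Toy data-to-text NLG: non-linguistic input (a weather record) is
        # turned into text via content determination + template realisation.
        # The record, fields, and thresholds are invented for illustration.

        record = {"city": "Utrecht", "temp_c": 21, "condition": "sunny", "wind_kph": 40}

        def select_content(rec: dict) -> dict:
            # Content determination: keep only fields worth mentioning.
            selected = {"city": rec["city"], "temp_c": rec["temp_c"], "condition": rec["condition"]}
            if rec["wind_kph"] >= 30:  # mention wind only when notable
                selected["wind_kph"] = rec["wind_kph"]
            return selected

        def realise(sel: dict) -> str:
            # Surface realisation with a simple template.
            text = f"In {sel['city']} it is {sel['condition']} and {sel['temp_c']} degrees Celsius."
            if "wind_kph" in sel:
                text += f" Winds reach {sel['wind_kph']} km/h."
            return text

        print(realise(select_content(record)))

    Data-driven NLG systems replace the hand-written selection rules and templates with learned models, but the two core tasks sketched here remain the same.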

    Establishing a resource center: A guide for organizations supporting community foundations

    Maintaining a resource center such as a library is a central task of an association in serving its members, though it is one of the first to be neglected. WINGS-CF commissioned this guide to assist organizations supporting community foundations in reviewing and organizing their resource items, and to propose several classification systems / taxonomies.

    The Data Conversion Bottleneck in Analog Computing Accelerators

    Most modern computing tasks have digital electronic input and output data. Due to these constraints imposed by real-world use cases of computer systems, any analog computing accelerator, whether analog electronic or optical, must perform a digital-to-analog conversion on its input data and a subsequent analog-to-digital conversion on its output data. The energy and latency costs incurred by data conversion place performance limits on analog computing accelerators. To avoid this overhead, analog hardware must replace the full functionality of traditional digital electronic computer hardware. This is not currently possible for optical computing accelerators due to limitations in gain, input-output isolation, and information storage in optical hardware. This article presents a case study that profiles 27 benchmarks for an analog optical Fourier transform and convolution accelerator which we designed and built. The case study shows that an ideal optical Fourier transform and convolution accelerator can produce an average speedup of 9.4 times and a median speedup of 1.9 times for the set of benchmarks. The optical Fourier transform and convolution accelerator only produces significant speedup for pure Fourier transform (45.3 times) and convolution (159.4 times) applications.
    Comment: Accepted to the First Workshop on Machine Learning with New Compute Paradigms at NeurIPS 2023 (MLNPCP 2023).
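
    The conversion bottleneck is essentially an Amdahl's-law argument: only the accelerable fraction of a workload benefits, while conversion and the remaining digital work bound the overall gain. The sketch below models this; all numbers are invented for illustration and are not the paper's benchmark measurements.

        # Amdahl's-law view of an analog accelerator with conversion overhead.
        # All parameters and numbers are invented for illustration.

        def accelerator_speedup(accel_fraction: float,
                                accel_gain: float,
                                conversion_overhead: float) -> float:
            """Overall speedup when only `accel_fraction` of the runtime
            benefits from `accel_gain`, and DAC/ADC conversion adds
            `conversion_overhead` (as a fraction of the original runtime)."""
            new_time = (1 - accel_fraction) + accel_fraction / accel_gain + conversion_overhead
            return 1 / new_time

        # A nearly pure convolution workload vs. a mixed one:
        print(accelerator_speedup(0.99, 1000.0, 0.001))  # ~83x: conversion is cheap
        print(accelerator_speedup(0.30, 1000.0, 0.05))   # ~1.3x: serial work + conversion dominate

    This is consistent with the abstract's finding that only pure Fourier transform and convolution applications see large speedups.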

    Special Libraries, December 1962

    Volume 53, Issue 10

    A grounded theory of requirements documentation in the practice of software development

    This thesis is concerned with the concept of a “good enough” requirements document. It takes the position, based on empirical observations, that standard prescriptive approaches have failed to identify the necessary and sufficient characteristics of a good requirements document, because what is good enough in one situation may not be desirable or acceptable in another. Therefore, no single set of criteria can define “a good requirements document”. The thesis presents a grounded theory which attempts to explain the diversity of styles of requirements documents found in practice, in relation to the variety of situations in which software products and systems are developed. It identifies the factors that might be useful to categorise situations from the point of view of requirements documentation. Requirements documents are widely used in software development, an activity typically carried out in an organisational context. Organisational theory suggests that the best approach in any situation depends on the factors that affect that situation. In the research, it was found that experienced practitioners employ a wide variety of constituent elements, structures, and styles when documenting requirements. This is in contrast with much of the literature on requirements engineering. The contribution of this research is in three parts: (a) an analysis of requirements documents as texts, (b) a scheme for classifying system development situations with respect to the requirements documentation process, and (c) a framework matching typical requirements documents with the types of situations identified in (b). As a grounded theory, it is the result of a detailed and systematic investigation into the role of requirements documents in the practice of software development. Its status as a theory implies that it is tentative and provisional. An outline of how the theory might be validated for its usefulness, applicability, and generality is presented in the concluding chapter.