    Cost-utility of adjuvant zoledronic acid in patients with breast cancer and low estrogen levels

    BACKGROUND: Adjuvant zoledronic acid (ZA) appears to improve disease-free survival (DFS) in women with early-stage breast cancer and low levels of estrogen (LLE) because of induced or natural menopause. Characterizing the cost-utility (CU) of this therapy could help to determine its role in clinical practice. METHODS: Using the perspective of the Canadian health care system, we examined the CU of adjuvant endocrine therapy with or without ZA in women with early-stage endocrine-sensitive breast cancer and LLE. A Markov model was used to compute the cumulative costs in Canadian dollars and the quality-adjusted life-years (QALYs) gained from each adjuvant strategy, discounted at a rate of 5% annually. The model incorporated the DFS and fracture benefits of adjuvant ZA. Probabilistic and one-way sensitivity analyses were conducted to examine key model parameters. RESULTS: Compared with a no-ZA strategy, adjuvant ZA in the induced and natural menopause groups was associated with, respectively, $7,825 and $7,789 in incremental costs and 0.46 and 0.34 in QALY gains, for CU ratios of $17,007 and $23,093 per QALY gained. In one-way sensitivity analyses, the results were most sensitive to changes in the ZA DFS benefit. Probabilistic sensitivity analysis suggested a 100% probability of adjuvant ZA being a cost-effective strategy at a threshold of $100,000 per QALY gained. CONCLUSIONS: Based on available data, adjuvant ZA appears to be a cost-effective strategy in women with endocrine-sensitive breast cancer and LLE, having CU ratios well below accepted thresholds.
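    The CU ratios above follow from simple incremental arithmetic: reported incremental cost divided by reported QALY gain. A minimal Python sketch of that calculation, using only the figures quoted in the abstract (the underlying Markov model and its 5% annual discounting are not reproduced here):

    # Cost-utility ratio = incremental cost / incremental QALYs gained.
    # Inputs are the abstract's reported values; small differences from the
    # published ratios reflect rounding in the quoted figures.

    def cu_ratio(incremental_cost_cad, qaly_gain):
        """Cost (CAD) per quality-adjusted life-year gained."""
        return incremental_cost_cad / qaly_gain

    induced = cu_ratio(7825, 0.46)   # induced-menopause group
    natural = cu_ratio(7789, 0.34)   # natural-menopause group

    print(f"Induced menopause: ${induced:,.0f} per QALY")  # ~$17,011 (reported: $17,007)
    print(f"Natural menopause: ${natural:,.0f} per QALY")  # ~$22,909 (reported: $23,093)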

    The REVERE project: Experiments with the application of probabilistic NLP to systems engineering

    Despite natural language’s well-documented shortcomings as a medium for precise technical description, its use in software-intensive systems engineering remains inescapable. This poses many problems for engineers who must derive problem understanding and synthesise precise solution descriptions from free text. This is true both for the largely unstructured textual descriptions from which system requirements are derived, and for more formal documents, such as standards, which impose requirements on system development processes. This paper describes experiments that we have carried out in the REVERE project to investigate the use of probabilistic natural language processing techniques to provide systems engineering support.
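    The abstract does not detail which probabilistic NLP techniques REVERE applied, so the following is only an illustrative sketch of one technique in that family: using NLTK's probabilistically trained part-of-speech tagger to flag modal-verb sentences ("shall", "must"), which often carry requirements in free text. The sample sentences are invented for the example.

    # Illustrative only: flag candidate requirement sentences via POS tagging.
    import nltk

    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    text = ("The system shall log every transaction. "
            "Users may export reports in CSV format. "
            "This section provides background material.")

    for sentence in nltk.sent_tokenize(text):
        tags = nltk.pos_tag(nltk.word_tokenize(sentence))
        # "MD" marks modal verbs in the Penn Treebank tagset used by nltk.pos_tag.
        if any(tag == "MD" for _, tag in tags):
            print("candidate requirement:", sentence)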

    BOSS: Bayesian Optimization over String Spaces

    This article develops a Bayesian optimization (BO) method that acts directly over raw strings, proposing the first uses of string kernels and genetic algorithms within BO loops. Recent applications of BO over strings have been hindered by the need to map inputs into a smooth and unconstrained latent space; learning this projection is computationally and data intensive. Our approach instead builds a powerful Gaussian process surrogate model based on string kernels, naturally supporting variable-length inputs, and performs efficient acquisition function maximization for spaces with syntactical constraints. Experiments demonstrate considerably improved optimization over existing approaches across a broad range of constraints, including the popular setting where syntax is governed by a context-free grammar.
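    As a rough sketch of the loop the abstract describes (not the authors' implementation): the surrogate below substitutes a toy normalized n-gram count kernel for the paper's string kernels, and a mutation-only genetic step for its grammar-constrained operators; the black-box objective is an invented toy function.

    # Sketch: GP surrogate over raw strings + genetic acquisition maximization.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    ALPHABET = list("ab")
    LENGTH = 10

    def objective(s):
        # Toy black box: count of "ab" substrings (true optimum is 5).
        return float(sum(s[i:i + 2] == "ab" for i in range(len(s) - 1)))

    def ngram_counts(s, n=2):
        counts = {}
        for i in range(len(s) - n + 1):
            counts[s[i:i + n]] = counts.get(s[i:i + n], 0) + 1
        return counts

    def kern(a, b):
        # Normalized n-gram count kernel: an inner product, hence a valid PSD kernel.
        ca, cb = ngram_counts(a), ngram_counts(b)
        dot = sum(v * cb.get(g, 0) for g, v in ca.items())
        na = sum(v * v for v in ca.values()) ** 0.5
        nb = sum(v * v for v in cb.values()) ** 0.5
        return dot / (na * nb + 1e-12)

    def gram(A, B):
        return np.array([[kern(a, b) for b in B] for a in A])

    def gp_posterior(X, y, Xq, jitter=1e-6):
        L = np.linalg.cholesky(gram(X, X) + jitter * np.eye(len(X)))
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        Ks = gram(Xq, X)
        mu = Ks @ alpha
        v = np.linalg.solve(L, Ks.T)
        var = 1.0 - np.sum(v * v, axis=0)  # kern(x, x) == 1 after normalization
        return mu, np.sqrt(np.maximum(var, 1e-12))

    def mutate(s):
        i = rng.integers(len(s))
        return s[:i] + str(rng.choice(ALPHABET)) + s[i + 1:]

    X = ["".join(rng.choice(ALPHABET, LENGTH)) for _ in range(5)]
    y = np.array([objective(s) for s in X])

    for _ in range(15):
        # Genetic proposal step: mutate random incumbents, dedupe, score by EI.
        candidates = list({mutate(str(rng.choice(X))) for _ in range(200)} - set(X))
        mu, sd = gp_posterior(X, y, candidates)
        z = (mu - y.max()) / sd
        ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
        s = candidates[int(np.argmax(ei))]
        X.append(s)
        y = np.append(y, objective(s))

    print("best string found:", X[int(np.argmax(y))], "value:", y.max())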

    A Benchmark for Iris Location and a Deep Learning Detector Evaluation

    The iris is considered the biometric trait with the highest distinctiveness. Iris location is an important task for biometric systems, directly affecting the results obtained in specific applications such as iris recognition, spoofing detection, and contact lens detection, among others. This work defines the iris location problem as the delimitation of the smallest square window that encompasses the iris region. To build a benchmark for iris location, we annotate four databases from different biometric applications with square iris bounding boxes and make the annotations publicly available to the community. Besides these four annotated databases, we include two others from the literature, and we perform experiments on all six: five obtained with near-infrared sensors and one with a visible-light sensor. We compare the classical and outstanding Daugman iris location approach with two window-based detectors: 1) a sliding-window detector based on Histogram of Oriented Gradients (HOG) features and a linear Support Vector Machine (SVM) classifier; 2) a deep-learning detector fine-tuned from the YOLO object detector. Experimental results show that the deep-learning detector outperforms the others in both accuracy and runtime (GPU version) and should be chosen whenever possible. Comment: Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 201
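    A compact sketch of detector (1) above: HOG features scored by a linear SVM over a sliding window. To keep the example self-contained, it trains on synthetic bright-disc patches rather than real annotated iris crops, so it only illustrates the shape of the pipeline.

    # Sketch of the HOG + linear SVM sliding-window detector (illustration only).
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    WIN = 64  # side of the square search window, in pixels

    def features(patch):
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    def fake_iris():
        # Synthetic stand-in for an annotated iris crop: a bright disc on noise.
        img = rng.random((WIN, WIN))
        yy, xx = np.mgrid[:WIN, :WIN]
        img[(yy - WIN // 2) ** 2 + (xx - WIN // 2) ** 2 < (WIN // 3) ** 2] += 1.0
        return img

    pos = [features(fake_iris()) for _ in range(20)]
    neg = [features(rng.random((WIN, WIN))) for _ in range(20)]
    clf = LinearSVC(max_iter=10000).fit(pos + neg, [1] * 20 + [0] * 20)

    def locate_iris(image, stride=16):
        """Return (row, col) of the best-scoring WIN x WIN window."""
        best, best_score = None, -np.inf
        for r in range(0, image.shape[0] - WIN + 1, stride):
            for c in range(0, image.shape[1] - WIN + 1, stride):
                score = clf.decision_function([features(image[r:r + WIN, c:c + WIN])])[0]
                if score > best_score:
                    best, best_score = (r, c), score
        return best

    scene = rng.random((128, 128))
    scene[32:96, 32:96] = fake_iris()
    print("best window top-left corner:", locate_iris(scene))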

    From digital resources to historical scholarship with the British Library 19th Century Newspaper Collection

    It is increasingly acknowledged that the Digital Humanities have placed too much emphasis on data creation and that the major priority should now be turning digital sources into contributions to knowledge. While this sounds relatively simple, doing it involves intermediate stages of research that enhance digital sources, develop new methodologies, and explore their potential to generate new knowledge from the source. While these stages are familiar in the social sciences, they are less so in the humanities. In this paper we explore these stages based on research on the British Library’s Nineteenth Century Newspaper Collection, a corpus of many billions of words that has much to offer to our understanding of the nineteenth century but whose size and complexity make it difficult to work with.

    Negotiations and Love Songs

    Witness

    Sweet Time Unafflicted
