
    Understanding and Mitigating Multi-sided Exposure Bias in Recommender Systems

    Fairness is a critical system-level objective in recommender systems and has been the subject of extensive recent research. It is especially important in multi-sided recommendation platforms, where it may be crucial to optimize utilities not just for the end user but also for other actors, such as item sellers or producers, who desire fair representation of their items. Existing solutions do not properly address the various aspects of multi-sided fairness in recommendation: they either take a one-sided view (improving fairness for only one side) or do not appropriately measure fairness for each actor involved in the system. In this thesis, I first investigate the impact of unfair recommendations on the system and how they can negatively affect its major actors. I then propose solutions to tackle this unfairness. The first is a rating transformation technique that works as a pre-processing step, applied before building the recommendation model, to alleviate the inherent popularity bias in the input data and consequently mitigate exposure unfairness for items and suppliers in the recommendation lists. The second is a general graph-based solution that works as a post-processing approach, applied after recommendation generation, to mitigate multi-sided exposure bias in the recommendation results. For evaluation, I introduce several metrics for measuring exposure fairness for items and suppliers, and show that these metrics better capture the fairness properties of recommendation results. Extensive experiments on several publicly available datasets, with comparisons against various baselines, confirm that the proposed solutions improve exposure fairness for items and suppliers. Comment: Doctoral thesis.
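
    As a rough illustration of the kind of exposure-fairness measurement described in this abstract, the Python sketch below computes position-discounted exposure per item and per supplier over a set of top-k recommendation lists and summarizes supplier inequality with a Gini coefficient. The log-discount weighting, the Gini summary, and all names are illustrative assumptions, not the thesis's actual metrics.

        # Minimal sketch of an exposure-fairness measurement (assumed form).
        import math
        from collections import defaultdict

        def exposure(recommendations, item_supplier):
            """Position-discounted exposure per item and per supplier.

            recommendations: dict user -> ranked list of item ids
            item_supplier:   dict item id -> supplier id
            """
            item_exp = defaultdict(float)
            supplier_exp = defaultdict(float)
            for user, ranked in recommendations.items():
                for rank, item in enumerate(ranked, start=1):
                    w = 1.0 / math.log2(rank + 1)   # top ranks get more exposure
                    item_exp[item] += w
                    supplier_exp[item_supplier[item]] += w
            return item_exp, supplier_exp

        def gini(values):
            """Gini coefficient of a list of exposures (0 = perfectly equal)."""
            vals = sorted(values)
            n, total = len(vals), sum(vals)
            cum = sum((i + 1) * v for i, v in enumerate(vals))
            return (2 * cum) / (n * total) - (n + 1) / n

        recs = {"u1": ["i1", "i2"], "u2": ["i1", "i3"]}
        suppliers = {"i1": "s1", "i2": "s2", "i3": "s2"}
        _, sup_exp = exposure(recs, suppliers)
        print(round(gini(list(sup_exp.values())), 3))  # lower = fairer exposure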

    Sensitive Attribute Association Bias in Latent Factor Recommendation Algorithms: Theory and In Practice

    This dissertation presents methods for evaluating and mitigating a relatively unexplored bias in recommendation systems, which we refer to as attribute association bias (AAB). AAB can be introduced when leveraging latent factor recommendation models, due to their ability to entangle explicit and implicit attributes into the trained latent space. This type of bias occurs when entity embeddings show significant levels of association with specific explicit or implicit entity attributes, with the potential to introduce representational harms for both consumer and provider stakeholders. We present a novel analysis framework to help practitioners evaluate their latent factor recommendation models for AAB. The framework consists of three main techniques for gaining insight into sensitive AAB in the recommendation latent space: bias direction creation, bias evaluation metrics, and multi-group evaluation. The methods in our evaluation framework were inspired by techniques from the natural language processing research community for measuring gender bias in learned language representations. Additionally, we explore how this bias can be reinforced, producing feedback loops via retraining, and we explore possible mitigation techniques for addressing it. We demonstrate our methodology primarily with two case studies that evaluate user gender association bias in latent factor recommendation. With these methods, we uncover the existence of user gender association bias and compare the approaches we propose, to help guide practitioners in how best to apply our techniques to their systems. In addition to user gender, we experiment with measuring user age association bias as a means of evaluating non-binary AAB.
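
    To make the bias-direction idea concrete, here is a minimal Python sketch in the spirit of the NLP gender-bias techniques the dissertation cites: a difference-of-means bias direction between two attribute groups of user embeddings, and cosine-similarity association scores for entity embeddings against that direction. The synthetic data and all names are illustrative assumptions, not the dissertation's actual framework.

        # Minimal sketch of bias-direction creation and association scoring.
        import numpy as np

        def bias_direction(group_a, group_b):
            """Unit vector from the mean of group A embeddings to group B's."""
            d = np.mean(group_b, axis=0) - np.mean(group_a, axis=0)
            return d / np.linalg.norm(d)

        def association_scores(embs, direction):
            """Cosine similarity of each embedding with the bias direction;
            a score distribution far from zero suggests association bias."""
            norms = np.linalg.norm(embs, axis=1, keepdims=True)
            return (embs / norms) @ direction

        rng = np.random.default_rng(0)
        users_a = rng.normal(0.0, 1.0, size=(100, 32))  # one attribute group
        users_b = rng.normal(0.2, 1.0, size=(100, 32))  # the other group
        items = rng.normal(0.0, 1.0, size=(500, 32))    # item embeddings
        scores = association_scores(items, bias_direction(users_a, users_b))
        print(scores.mean(), scores.std())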

    Continuity of object tracking

    Demand for object tracking (OT) applications has been increasing for the past few decades in many areas of interest: security, surveillance, intelligence gathering, and reconnaissance. Lately, newly defined requirements for unmanned vehicles have heightened interest in OT. Advances in machine learning, data analytics, and deep learning have facilitated the recognition and tracking of objects of interest; however, continuous tracking remains an open problem for many research projects. This dissertation presents a system that continuously tracks an object and predicts its trajectory from its previous path, even when the object is partially or fully concealed for a period of time. The system is divided into two phases: the first exploits a single fixed camera, and the second uses a mesh of multiple fixed cameras. The first phase comprises six main subsystems: Image Processing, Detection Algorithm, Image Subtractor, Image Tracking, Tracking Predictor, and Feedback Analyzer. The second phase adds two more: Coordination Manager and Camera Controller Manager. Combined, these subsystems allow for reasonable object continuity in the face of concealment.
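
    One standard way to realize the "predict through concealment" behaviour described above is a constant-velocity Kalman filter that keeps predicting while the detector returns nothing and re-anchors when the object reappears. The Python sketch below is a generic stand-in for such a predictor, not the dissertation's actual Tracking Predictor subsystem; the state model and noise parameters are assumptions.

        # Minimal constant-velocity Kalman filter that coasts through occlusion.
        import numpy as np

        dt = 1.0
        F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy]
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        H = np.array([[1, 0, 0, 0],    # only position is measured
                      [0, 1, 0, 0]], dtype=float)
        Q = np.eye(4) * 0.01           # process noise (assumed)
        R = np.eye(2) * 1.0            # measurement noise (assumed)

        def step(x, P, z):
            """One predict/update cycle; z is a detection or None if concealed."""
            x, P = F @ x, F @ P @ F.T + Q          # predict
            if z is not None:                      # update only when visible
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ (z - H @ x)
                P = (np.eye(4) - K @ H) @ P
            return x, P

        x, P = np.zeros(4), np.eye(4)
        detections = [np.array([0.0, 0.0]), np.array([1.0, 0.5]),
                      None, None,                  # concealed for two frames
                      np.array([4.1, 2.1])]
        for z in detections:
            x, P = step(x, P, z)
            print(x[:2])                           # predicted position per frame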

    Proceedings of the XXVth TELEMAC-MASCARET User Conference, 9th to 11th October 2018, Norwich


    Deep learning for prediction of colorectal cancer outcome: a discovery and validation study

    Background: Improved markers of prognosis are needed to stratify patients with early-stage colorectal cancer and refine the selection of adjuvant therapy. The aim of the present study was to develop a biomarker of patient outcome after primary colorectal cancer resection by directly analysing scanned conventional haematoxylin and eosin stained sections using deep learning.

    Methods: More than 12 000 000 image tiles from patients with a distinctly good or poor disease outcome, drawn from four cohorts, were used to train a total of ten convolutional neural networks purpose-built for classifying supersized heterogeneous images. A prognostic biomarker integrating the ten networks was determined using patients with a non-distinct outcome. The marker was tested on 920 patients with slides prepared in the UK, and then independently validated according to a predefined protocol in 1122 patients treated with single-agent capecitabine using slides prepared in Norway. All cohorts included only patients with resectable tumours and a formalin-fixed, paraffin-embedded tumour tissue block available for analysis. The primary outcome was cancer-specific survival.

    Findings: 828 patients from four cohorts had a distinct outcome and were used as a training cohort to obtain clear ground truth; 1645 patients had a non-distinct outcome and were used for tuning. The biomarker provided a hazard ratio for poor versus good prognosis of 3·84 (95% CI 2·72–5·43; p<0·0001) in the primary analysis of the validation cohort, and 3·04 (2·07–4·47; p<0·0001) after adjusting for established prognostic markers significant in univariable analyses of the same cohort (pN stage, pT stage, lymphatic invasion, and venous vascular invasion).

    Interpretation: A clinically useful prognostic marker was developed using deep learning allied to digital scanning of conventional haematoxylin and eosin stained tumour tissue sections. The assay has been extensively evaluated in large, independent patient populations, correlates with and outperforms established molecular and morphological prognostic markers, and gives consistent results across tumour and nodal stage. The biomarker stratified stage II and III patients into sufficiently distinct prognostic groups that it could potentially be used to guide selection of adjuvant treatment, avoiding therapy in very-low-risk groups and identifying patients who would benefit from more intensive treatment regimes.
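
    As a hedged illustration of how tile-level outputs from an ensemble of networks might be combined into one patient-level marker, the Python sketch below averages per-tile probabilities within each network and then across networks, applying an arbitrary cutoff. The aggregation rule, the cutoff, and all names are assumptions; the paper's actual integration of the ten networks is not reproduced here.

        # Minimal sketch of tile-to-patient aggregation for an ensemble (assumed).
        import numpy as np

        def patient_score(tile_probs_per_network):
            """tile_probs_per_network: one array per network, holding that
            network's per-tile probabilities of poor outcome for one patient."""
            per_network = [np.mean(p) for p in tile_probs_per_network]
            return float(np.mean(per_network))     # average over the ensemble

        rng = np.random.default_rng(1)
        tiles = [rng.uniform(0, 1, size=200) for _ in range(10)]  # 10 networks
        score = patient_score(tiles)
        print(score, "poor" if score > 0.5 else "good")  # illustrative cutoff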

    Exploring the use of speech in audiology: A mixed methods study

    This thesis aims to advance the understanding of how speech testing is, and can be, used for hearing device users within the audiological test battery. To address this, I engaged with clinicians and patients to understand the current role that speech testing plays in audiological testing in the UK, and developed a new listening test that combines speech testing with localisation judgements in a dual-task design. Normal-hearing listeners and hearing aid users were tested, and a series of technical measurements were made to understand how advanced hearing aid settings might determine task performance. A questionnaire completed by public- and private-sector hearing healthcare professionals in the UK explored the use of speech testing. Overall, results revealed that this assessment tool is underutilised by UK clinicians, though use was significantly greater in the private sector. Through a focus group and semi-structured interviews with hearing aid users, I identified a mismatch between their common listening difficulties and the assessment tools used in audiology, and highlighted a lack of deaf awareness in UK adult audiology. The Spatial Speech in Noise Test (SSiN) is a dual-task paradigm that simultaneously assesses relative localisation and word identification performance. Testing normal-hearing listeners to investigate the impact of the dual-task design showed that the SSiN increases cognitive load and therefore better reflects challenging listening situations. A comparison of relative localisation and word identification performance showed that hearing aid users benefitted less than normal-hearing listeners from spatially separating speech and noise in the SSiN. To investigate how the SSiN could be used to assess advanced hearing aid features, a subset of hearing aid users were fitted with the same hearing aid type and completed the SSiN once with adaptive directionality and once with omnidirectionality. The SSiN results differed between conditions, but a larger sample size is needed to confirm these effects. Hearing aid technical measurements were used to quantify how hearing aid output changed in response to the SSiN paradigm.

    Variable autonomy assignment algorithms for human-robot interactions.

    As robotic agents become increasingly present in human environments, task completion rates during human-robot interaction have grown into an increasingly important topic of research. Safe collaborative robots executing tasks under human supervision often augment their perception and planning capabilities through traded or shared control schemes. However, such systems are often specified only at the most abstract level, with the meticulous details of implementation left to the designer's prerogative. Without a rigorous structure for implementing controls, the design work is frequently left to ad hoc mechanisms with only bespoke guarantees of systematic efficacy, if any such proof is forthcoming at all. Herein, I present two quantitatively defined models for implementing sliding-scale variable autonomy, in which levels of autonomy are determined by the relative efficacy of autonomous subroutines. I experimentally test the resulting Variable Autonomy Planning (VAP) algorithm against a traditional traded control scheme in a pick-and-place task, and apply the Variable Autonomy Tasking algorithm to the implementation of a robot performing a complex sanitation task in real-world environs. Results show that prioritizing autonomy levels with higher success rates, as encoded into VAP, allows users to effectively and intuitively select optimal autonomy levels for efficient task completion. Further, the Pareto-optimal design structure of the VAP+ algorithm allows significant performance improvements through intervention planning based on systematic input, with failure probabilities determined through sensorized measurements. This thesis describes the design, analysis, and implementation of these two algorithms, with a particular focus on VAP+. The core conceit is that both are methods for rigorously defining locally optimal plans for traded control shared between a human and one or more autonomous processes. VAP+ is derived from the earlier VAP algorithm, developed to address the issue of rigorous, repeatable assignment of autonomy levels based on system data, which provides guarantees on the basis of failure-rate sorting of paired autonomous and manual subtask achievement systems. Using only probability ranking to define levels of autonomy, the VAP algorithm can sort modules into optimizable ordered sets, but is limited to solving sequential task assignments. By constructing a joint cost metric for the entire plan, and by implementing a back-to-front calculation scheme for this metric, the VAP+ algorithm can generate planning solutions that minimize the expected cost, amortized over time, funds, accuracy, or any combination of such metrics. The algorithm is also very efficient, and can perform on-line assessments of environmental changes to the conditional probabilities associated with plan choices, provided a suitable model for determining these probabilities is available. This system, as a paired set of two algorithms and a design augmentation, forms the VAP+ algorithm in full.
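
    To illustrate the back-to-front expected-cost calculation attributed to VAP+, here is a minimal Python sketch: each subtask offers an autonomous and a manual mode with a cost and a success probability, failures are retried in the same mode, and the planner sweeps from the last subtask to the first, picking the mode with the lowest expected cost-to-go. The retry model and all names are simplifying assumptions, not the thesis's definitions; in richer models where failures hand off between modes, the backward sweep does real work rather than reducing to per-task minimization.

        # Minimal sketch of back-to-front expected-cost autonomy assignment.
        def plan(subtasks):
            """subtasks: list of dicts {mode: (cost, success_prob)} in order.
            Returns (expected total cost, chosen mode per subtask)."""
            cost_to_go, choices = 0.0, []
            for task in reversed(subtasks):        # back-to-front sweep
                best_mode, best_cost = None, float("inf")
                for mode, (cost, p) in task.items():
                    expected = cost / p            # retry-until-success expectation
                    if expected < best_cost:
                        best_mode, best_cost = mode, expected
                cost_to_go += best_cost
                choices.append(best_mode)
            return cost_to_go, list(reversed(choices))

        tasks = [
            {"auto": (1.0, 0.9), "manual": (3.0, 0.99)},
            {"auto": (2.0, 0.5), "manual": (3.0, 0.95)},  # low auto success
        ]
        total, modes = plan(tasks)
        print(round(total, 2), modes)              # 4.27 ['auto', 'manual']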