
    Identifying Users with Atypical Preferences to Anticipate Inaccurate Recommendations

    The social approach in recommender systems relies on the hypothesis that users' preferences are coherent across users. To recommend items to a given user, it exploits the preferences of other users whose preferences are similar to those of this user. Although this approach has been shown to produce high-quality recommendations on average, which makes it the most commonly used approach, some users are not satisfied. Being able to anticipate whether a recommender will provide a given user with inaccurate recommendations would be a major advantage. Nevertheless, little attention has been paid in the literature to this particular point. In this work, we assume that some of the dissatisfied users do not respect the assumption made by the social approach: their preferences are not coherent with those of others; they have atypical preferences. We propose measures to identify these users upstream of the recommendation process, based on their profile only (their preferences). Experiments conducted on a state-of-the-art corpus show that these measures reliably identify a subset of users with atypical preferences who will receive inaccurate recommendations.
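    As a concrete illustration of the idea, the sketch below flags users whose preferences disagree with the rest of the population, using each user's mean Pearson correlation with other users over co-rated items as an atypicality score. The specific measure, the threshold, and the toy ratings are assumptions for illustration; the paper defines its own profile-based measures.

```python
# A minimal sketch of one possible "atypicality" measure, not the paper's.
import numpy as np

def mean_agreement(ratings, min_overlap=2):
    """ratings: (n_users, n_items) array with np.nan for missing entries.
    Returns each user's average correlation with the rest of the population."""
    n_users = ratings.shape[0]
    scores = np.full(n_users, np.nan)
    for u in range(n_users):
        cors = []
        for v in range(n_users):
            if u == v:
                continue
            # Items rated by both users.
            mask = ~np.isnan(ratings[u]) & ~np.isnan(ratings[v])
            if mask.sum() >= min_overlap:
                ru, rv = ratings[u, mask], ratings[v, mask]
                if ru.std() > 0 and rv.std() > 0:
                    cors.append(np.corrcoef(ru, rv)[0, 1])
        if cors:
            scores[u] = np.mean(cors)
    return scores

# Toy example: user 2 rates against the consensus.
R = np.array([[5, 4, 1, np.nan],
              [5, 5, 2, 1],
              [1, 2, 5, 5],
              [4, 4, 1, 2]], dtype=float)
scores = mean_agreement(R)
atypical = np.where(scores < 0.0)[0]   # the 0.0 threshold is an assumption
print(scores, atypical)
```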

    Predictive Customer Lifetime Value Modeling: Improving Customer Engagement and Business Performance

    CookUnity, a meal subscription service, has witnessed substantial annual revenue growth over the past three years. However, this growth has primarily been driven by the acquisition of new users to expand the customer base, rather than by an evident increase in customers' spending levels. If it were not for raised subscription prices, the company's customer lifetime value (CLV) would have remained the same as it was three years ago. Consequently, the company's leadership recognizes the need to adopt a holistic approach to unlock an enhancement in CLV. The objective of this thesis is to develop a comprehensive understanding of CLV, its implications, and how companies leverage it to inform strategic decisions. Throughout this study, our central focus is to deliver a fully functional and efficient machine learning solution to CookUnity, with strong predictive capabilities enabling accurate forecasting of each customer's future CLV. By equipping CookUnity with this tool, our aim is to empower the company to strategically leverage CLV for sustained growth. To achieve this objective, we analyze various methodologies and approaches to CLV analysis, evaluating their applicability and effectiveness within the context of CookUnity. We explore available data sources that can serve as predictors of CLV, ensuring the incorporation of the most relevant and meaningful variables in our model. Additionally, we assess different research methodologies to identify the top-performing approach and examine its implications for implementation at CookUnity. By implementing data-driven strategies based on our predictive CLV model, CookUnity will be able to optimize order levels and maximize the lifetime value of its customer base. The outcome of this thesis is a robust ML solution with strong prediction accuracy and practical usability within the company. Furthermore, the insights gained from our research contribute to a broader understanding of CLV in the subscription-based business context, stimulating further exploration and advancement in this field of study.
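    To make the modeling task concrete, here is a minimal sketch of the kind of supervised CLV model the thesis describes: historical behavioral features predicting each customer's future value. The feature names, prediction horizon, synthetic data, and gradient-boosting choice are illustrative assumptions, not CookUnity's actual schema or final model.

```python
# A hedged sketch of a predictive CLV model on synthetic subscription data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2_000
X = np.column_stack([
    rng.integers(1, 52, n),        # tenure_weeks (hypothetical feature)
    rng.poisson(3, n),             # orders_last_4_weeks
    rng.normal(60, 15, n),         # avg_order_value
    rng.uniform(0, 1, n),          # pct_weeks_skipped
])
# Synthetic target: future 12-month spend, a stand-in for observed CLV labels.
y = X[:, 1] * X[:, 2] * 8 * (1 - 0.5 * X[:, 3]) + rng.normal(0, 50, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```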

    Can Matrix Factorization Improve the Accuracy of Recommendations Provided to Grey Sheep Users?

    While Matrix Factorization (MF)-based recommender systems provide accurate recommendations on average, they consistently fail on some users. The literature has shown that this can be explained by the characteristics of these users' preferences: they only partially agree with others. These users are referred to as Grey Sheep Users (GSU). This paper studies whether it is possible to design an MF-based recommender that improves the accuracy of the recommendations provided to GSU. We introduce three MF-based models that focus on original ways to exploit the ratings of GSU during the training phase (by selecting, weighting, etc.). Experiments conducted on a state-of-the-art dataset show that it is indeed possible to design an MF-based model that significantly improves the accuracy of the recommendations for most GSU.
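    A minimal sketch of the weighting idea mentioned above, under the assumption of a plain SGD-trained matrix factorization: ratings contributed by GSU are given extra weight during training. The weighting scheme, hyper-parameters, and toy data are illustrative assumptions, not the paper's three exact models.

```python
# A hedged sketch: matrix factorization with per-example weights for GSU.
import numpy as np

def weighted_mf(triples, n_users, n_items, gsu, k=8, lr=0.01, reg=0.05,
                gsu_weight=2.0, epochs=50, seed=0):
    """SGD matrix factorization; ratings by users in `gsu` get extra weight."""
    rng = np.random.default_rng(seed)
    P = rng.normal(0, 0.1, (n_users, k))   # user latent factors
    Q = rng.normal(0, 0.1, (n_items, k))   # item latent factors
    for _ in range(epochs):
        rng.shuffle(triples)
        for u, i, r in triples:
            w = gsu_weight if u in gsu else 1.0
            err = r - P[u] @ Q[i]
            pu = P[u].copy()               # use pre-update factors for Q's step
            P[u] += lr * (w * err * Q[i] - reg * P[u])
            Q[i] += lr * (w * err * pu - reg * Q[i])
    return P, Q

# Toy ratings: (user, item, rating); user 1 is treated as a grey sheep here.
triples = [(0, 0, 5.0), (0, 1, 4.0), (1, 0, 1.0), (1, 1, 5.0), (2, 0, 4.0)]
P, Q = weighted_mf(triples, n_users=3, n_items=2, gsu={1})
print((P @ Q.T).round(2))   # predicted rating matrix
```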

    The Safe and Effective Clinical Deployment of Artificial Intelligence Tools

    Approximately 18 million new cancer cases are diagnosed each year. Roughly half of these patients will be treated with radiation therapy, a complex technique that requires an interdisciplinary team of clinical staff and expensive equipment to be delivered safely. Cancer centers in Low- and Middle-Income Countries (LMIC) have an especially difficult time meeting the demands of radiation therapy as the complexity of treatment techniques increases, with only 37% of patients in these regions having access to the care they need. Artificial Intelligence (AI)-based tools are being developed to simplify the treatment planning and quality assurance processes, both to increase the number of patients who can be treated and to improve the quality of their treatment plans. While AI techniques have shown great promise, with any new technology it is important to assess not only the potential benefits but also the associated risks. To this end, we performed a risk assessment of our in-house automated treatment planning system, the Radiation Planning Assistant (RPA), to identify points of risk and subsequently develop appropriate quality assurance and training resources to minimize patient risk. To identify points of risk, a failure mode and effects analysis (FMEA) was performed by a multidisciplinary team of clinicians and software developers. Changes were then made to limit the risk of 76% of high-risk failures. These risk points were then incorporated into hazard testing, and we found that 62% of errors could be detected before a plan was created in the RPA. The user interface was then modified to limit the number of errors propagated into the automatic planning process. Following the changes made to optimize the safety of the user interface, the efficacy of error detection during the plan review process was assessed. A custom checklist was developed to guide the review of automatically generated treatment plans, based on the results of our FMEA and AAPM TG-275. During final physics plan checks, use of the customized checklist increased the rate of error detection by 20% for physicists and 17% for medical physics residents. An end-to-end test was then performed to evaluate the entirety of the RPA training and deployment procedure for new users. Users were asked to review training materials and generate 10 treatment plans covering all treatment sites available in the RPA. Following training, 100% of the errors present in these plans were detected, and users reported that the training materials provided all the information needed to generate safe, high-quality treatment plans. Finally, a real-time contour monitoring system was developed to limit the risk of systematic errors and to detect abnormalities in the contouring process that could be attributed to software error, off-label use, or automation bias. In conclusion, we have optimized the safety and efficacy of the RPA training, quality assurance, and deployment processes. This evaluation has allowed us to maximize the impact of our automated treatment planning tool and has generated results that should inform the development of safe AI software and clinical deployment procedures in future clinical environments.
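    For readers unfamiliar with FMEA scoring, the sketch below shows the standard risk priority number calculation (RPN = severity × occurrence × detectability) used to rank failure modes. The example failure modes and scores are invented for illustration; they are not drawn from the RPA risk assessment.

```python
# A hedged sketch of the scoring step in a failure mode and effects analysis.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detectability: int  # 1 (always caught) .. 10 (never caught)

    @property
    def rpn(self) -> int:
        """Risk priority number: severity * occurrence * detectability."""
        return self.severity * self.occurrence * self.detectability

# Hypothetical failure modes, not from the actual RPA FMEA.
modes = [
    FailureMode("Wrong CT scan selected by user", 9, 3, 6),
    FailureMode("Contour truncated at image edge", 7, 4, 4),
    FailureMode("Prescription entered in wrong units", 10, 2, 3),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}  {m.description}")
```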

    An assessment of PenSim2

    The Department for Work and Pensions (DWP)’s Pensim2 model is a dynamic microsimulation model. The principal purpose of this model is to estimate the future distribution of pensioner incomes, thus enabling analysis of the distributional effects of proposed changes to pension policy. This paper presents the results of an assessment of Pensim2 by researchers at the IFS. We start by looking at the overall structure of the model and how it compares with other dynamic policy analysis models across the world. We make recommendations at this stage as to how the overall modelling strategy could be improved. We then go on to analyse the characteristics of most of the individual modules which make up Pensim2, examining the data used and the regressions and predictions used in each step. The results from this examination are used to formulate a set of short- and medium-term recommendations for developing and improving the model. Finally, we look at what might become possible for the model over a much longer time frame, looking towards developing a ‘Pensim3’ model over the next decade or so.
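    To illustrate what "dynamic microsimulation" means in practice, here is a toy sketch: a synthetic population is aged year by year, with a stochastic employment transition module and a simple pension accrual module. The transition probabilities and accrual rule are invented for illustration; PenSim2's actual modules are estimated from survey data.

```python
# A hedged, toy sketch of a dynamic microsimulation loop.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
age = rng.integers(25, 60, n)            # synthetic starting population
employed = rng.random(n) < 0.75
pension_pot = np.zeros(n)

for year in range(2025, 2055):
    working_age = age < 66
    # Transition module: probability of being employed next year (assumed).
    p_emp = np.where(employed, 0.95, 0.30)
    employed = working_age & (rng.random(n) < p_emp)
    # Accrual module: employed individuals contribute a flat amount (assumed).
    pension_pot[employed] += 2_000
    age += 1

retired = age >= 66
print("mean pot at/after pension age:", pension_pot[retired].mean().round())
```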

    Weather Communication on Twitter: Identifying Popular Content and Optimal Warning Format Via Case Studies and a Survey Analysis

    The use of Twitter as a channel for weather information inspires a deeper analysis of key information nodes during episodes of high-impact weather, especially local meteorologists. To optimize communication on the channel, it is important to understand what kinds of messages produce exposure and attention among users, knowledge that could improve the reach of a warning. The literature identifies two key models that describe the cognitive processing of tweets and warnings. The Protective Action Decision Model (PADM) describes risk perception and the factors that enable or prevent one from acting on a warning. Particularly through environmental and social cues, the first steps of the PADM can be aided or impeded by a tweet. The Extended Parallel Process Model (EPPM) describes the components of an effective warning message. Even in a tweet, ignoring one or both of the two critical components of a warning, threat and efficacy, could inhibit a user from taking the correct protective action, if any at all. Two case studies of tweets during high-impact weather events in southeast Louisiana show that messages containing photos and videos are most likely to appear in Twitter timelines and therefore generate the greatest exposure. Similarly, followers of a local meteorologist's Twitter account are most likely to retweet, and therefore pay attention to, messages containing photos and videos. The case studies also revealed that, particularly with warnings, tweets containing equal levels of threat and efficacy, as well as some personalizing factor such as a map or geographic indicator, generate more retweets and therefore attention. In a subsequent survey, the case study results were not replicated in respondents' self-reported interests: an example photo was less popular, and an example warning with minimal actionable information was most popular. The survey also revealed that Louisianans prefer websites and Facebook for receiving weather information, while mobile phone apps and Twitter scored lower.

    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs), which focus on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from the outside rather than taking it apart (pedagogical versus decompositional explanations), in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (the “right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
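    As an illustration of the pedagogical (learning a model from outside) style of explanation the article contrasts with decompositional ones, the sketch below fits a simple surrogate to a black-box classifier's behaviour in the neighbourhood of one query, in the spirit of subject-centric explanations. The black-box model, synthetic data, and neighbourhood scale are stand-ins, not the authors' implementation.

```python
# A hedged sketch of a pedagogical, query-local surrogate explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # synthetic ground truth
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Sample a local neighbourhood around one query and label it with the model,
# then learn a simple, inspectable model of the black box's local behaviour.
query = np.zeros(4)
neighbourhood = query + rng.normal(scale=0.5, size=(500, 4))
labels = black_box.predict(neighbourhood)
surrogate = LogisticRegression().fit(neighbourhood, labels)
print("local feature weights:", surrogate.coef_.round(2))
```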

    Passenger Flows in Underground Railway Stations and Platforms, MTI Report 12-43

    Urban rail systems are designed to carry large volumes of people into and out of major activity centers. As a result, the stations at these major activity centers are often crowded with boarding and alighting passengers, resulting in passenger inconvenience, delays, and at times danger. This study examines the planning and analysis of station passenger queuing and flows to offer rail transit station designers and transit system operators guidance on how best to accommodate and manage their rail passengers. The objectives of the study are to: 1) understand the particular infrastructural, operational, behavioral, and spatial factors that affect and may constrain passenger queuing and flows in different types of rail transit stations; 2) identify, compare, and evaluate practices for efficient, expedient, and safe passenger flows in different types of station environments and during typical (rush hour) and atypical (evacuations, station maintenance/refurbishment) situations; and 3) compile short-, medium-, and long-term recommendations for optimizing passenger flows in different station environments.
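    A minimal sketch of the kind of queuing calculation that informs such guidance, assuming a standard M/M/c model of a faregate array (Erlang C). The arrival rate, gate service rate, and gate count below are made-up inputs, not figures from the report.

```python
# A hedged sketch: expected wait at a faregate array via the Erlang C formula.
from math import factorial

def mmc_wait(lam, mu, c):
    """Mean wait in queue (seconds) for arrival rate lam (pax/s),
    per-gate service rate mu (pax/s), and c gates."""
    a = lam / mu            # offered load in erlangs
    rho = a / c             # utilization; must be < 1 for a stable queue
    if rho >= 1:
        raise ValueError("demand exceeds capacity")
    # Erlang C: probability an arriving passenger has to wait.
    erlang_c = (a**c / factorial(c)) / (1 - rho) / (
        sum(a**k / factorial(k) for k in range(c))
        + (a**c / factorial(c)) / (1 - rho))
    lq = erlang_c * rho / (1 - rho)   # mean queue length
    return lq / lam                   # Little's law: Wq = Lq / lambda

# 90 passengers/min arriving, each gate serves 25 passengers/min, 5 gates.
print(f"mean wait: {mmc_wait(90 / 60, 25 / 60, 5):.1f} s")
```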

    The View From Here: User-Centered Perspectives on Social Network Privacy

    A great deal of personal information is released in online social network profiles, and this information is increasingly being sought as evidence in criminal, administrative and civil legal proceedings. Determination of the admissibility of social network profile information rests in part on the issue of subjective expectations of privacy: to what extent do online social network participants expect privacy in their social network profiles? This question is examined through a combination of interviews and focus groups. The results suggest that Facebook as a whole is characterized as a space where participants construct and display a produced version of the self to a large and indeterminate social network. The common perspective is that information posted on social network profiles is selected for social broadcast, and further dissemination (beyond the online social network to which information is disclosed) is therefore both acceptable and to be expected. Although they would prefer profile access to be restricted to a broadly defined social network of friends and acquaintances, online social network participants do not in general expect to control the audience for their profiles, and they therefore typically include only information that ‘everyone’ can know in their online profiles. They thus require and exercise control over the content that is associated with their online profiles, and actions that undermine this control run contrary to privacy expectations.

    Cervical Cancer Screening Management in Primary Care: A Quality Improvement Project

    Cervical cancer screening has evolved over the years into the current, highly effective algorithms for screening and management. The success of improved early detection of cervical cancer has saved many lives (Lees, Erickson, & Huh, 2016). The addition of human papillomavirus testing and genotyping has allowed for more efficient, and less invasive, management of cervical cancer screening (Cox, 2009). While these new guidelines offer significant advantages, there are barriers to applying them in practice. The clinical site for the project was identified as needing a quality improvement project aimed at creating an improved patient notification, tracking, and reminder system, as well as improving provider adherence to the evidence-based guidelines. A total of 48 eligible providers were included in the project. After identification of the problem, a review of the literature was undertaken to identify evidence-based strategies for addressing practice gaps. This literature review focused on provider adherence to cervical cancer screening guidelines and on patient notification, tracking, and reminder systems. Current literature demonstrates a nationwide gap in provider guideline adherence, as well as strategies aimed at improving both provider and patient adherence with the recommendations. These include use of consistent patient notification processes, implementation of an electronic tracking and reminder system, and provider educational sessions aimed at improving guideline compliance. Donabedian’s (2005) quality improvement framework was used to divide the literature findings into interventions that affect the outcomes, structure, and process of care in order to form the project plan and methods. Following this in-depth look at the background and existing literature, the project plan was established. The plan consisted of two phases: the first focused on creating project materials and preparing for implementation, and the second on rolling out the new process and collecting data for project analysis. Two objectives were identified for this project: to improve provider adherence to the 2012 American Society for Colposcopy and Cervical Pathology guidelines, and to implement an electronic patient notification, tracking, and reminder system. A plan for data collection and analysis through pre- and post-implementation provider surveys and chart audits was established. After project implementation, data collection and analysis occurred. Objective One was evaluated to determine whether project implementation correlated with an increase in provider guideline adherence. The quality improvement project did find an improvement in guideline adherence in recommending appropriate follow-up for patients following receipt of cervical cancer screening results. On survey responses to a series of patient vignettes, and on whether patients were actually screened at an appropriate interval according to the recommendations, providers did not show a statistically significant improvement following implementation of the project. In evaluating Objective Two, providers showed moderate compliance with the new process in the weeks following implementation. Nursing participants in the new process were 100% compliant. No statistical difference was found in provider beliefs regarding the practice’s tracking and reminder system pre- and post-intervention.
Limitations existed in this study that restrict the researcher’s ability to draw conclusions from the findings. Regardless, this project served to address the need for a robust notification, tracking, and reminder system. Such a system helps ensure that patients receive timely, clear, and concise communication regarding their cervical cancer screening results and what these results mean for them. Additionally, patients are notified and reminded to follow up as needed. This is all done in an effort to continue to drive down cervical cancer rates while also reducing unnecessary, and costly, procedures and testing.
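A minimal sketch of the pre/post comparison used to evaluate adherence, assuming a two-proportion z-test on chart-audit counts. The counts below are invented for illustration and are not the project's data.

```python
# A hedged sketch of a pre/post two-proportion test on adherence rates.
from statsmodels.stats.proportion import proportions_ztest

adherent = [62, 78]    # charts meeting guidelines: [pre, post] (hypothetical)
audited = [100, 100]   # charts audited in each phase (hypothetical)
stat, p = proportions_ztest(adherent, audited)
print(f"z = {stat:.2f}, p = {p:.4f}")
```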