588 research outputs found

    What Should We Teach in Information Retrieval?


    Fairness in Recommendation: Foundations, Methods and Applications

    As one of the most pervasive applications of machine learning, recommender systems play an important role in assisting human decision making. The satisfaction of users and the interests of platforms are closely related to the quality of the generated recommendation results. However, as highly data-driven systems, recommender systems can be affected by data or algorithmic bias and thus produce unfair results, which can weaken users' reliance on them. It is therefore crucial to address potential unfairness problems in recommendation settings. Recently, fairness considerations in recommender systems have received growing attention, with a rapidly expanding literature on approaches to promote fairness in recommendation. However, the studies are rather fragmented and lack a systematic organization, making the domain difficult for new researchers to enter. This motivates us to provide a systematic survey of existing work on fairness in recommendation. The survey focuses on the foundations of the fairness-in-recommendation literature. It first gives a brief introduction to fairness in basic machine learning tasks such as classification and ranking, both to provide a general overview of fairness research and to introduce the more complex situations and challenges that arise when studying fairness in recommender systems. It then covers fairness in recommendation with a focus on taxonomies of current fairness definitions, typical techniques for improving fairness, and datasets for fairness studies in recommendation. The survey also discusses challenges and opportunities in fairness research, with the hope of promoting the fair recommendation research area and beyond. Comment: Accepted by ACM Transactions on Intelligent Systems and Technology (TIST).
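
    To make the survey's notion of a fairness definition concrete, the sketch below (not taken from the survey itself) computes one commonly studied quantity: position-discounted exposure per item group in a ranked recommendation list, and the gap between the most- and least-exposed groups. The group labels, the log-based position discount, and the toy data are illustrative assumptions.

        import math
        from collections import defaultdict

        def group_exposure(ranked_items, item_group):
            """Sum position-discounted exposure (1 / log2(rank + 1)) per item group."""
            exposure = defaultdict(float)
            for rank, item in enumerate(ranked_items, start=1):
                exposure[item_group[item]] += 1.0 / math.log2(rank + 1)
            return dict(exposure)

        def exposure_disparity(ranked_items, item_group):
            """Absolute gap between the most- and least-exposed groups (0 means parity)."""
            exposure = group_exposure(ranked_items, item_group)
            return max(exposure.values()) - min(exposure.values())

        # Toy usage: two "popular" items ranked above one "niche" item.
        groups = {"a": "popular", "b": "popular", "c": "niche"}
        print(exposure_disparity(["a", "b", "c"], groups))  # larger value = further from parity

    The survey itself covers far richer definitions (user-side and item-side, group and individual) and mitigation techniques; this only illustrates the flavor of an exposure-based group metric.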

    Question Answering over Curated and Open Web Sources

    The last few years have seen an explosion of research on automated question answering (QA), spanning the information retrieval, natural language processing, and artificial intelligence communities. This tutorial would cover the highlights of this very active period of growth for QA to give the audience a grasp of the families of algorithms currently in use. We partition research contributions by the underlying source from which answers are retrieved: curated knowledge graphs, unstructured text, or hybrid corpora. We choose this dimension of partitioning as it is the most discriminative when it comes to algorithm design. Other key dimensions are covered within each sub-topic, such as the complexity of the questions addressed and the degrees of explainability and interactivity introduced in the systems. We would conclude the tutorial with the most promising emerging trends in QA, which would help new entrants to this field make the best decisions to take the community forward. Much has changed in the community since the last tutorial on QA at SIGIR 2016, and we believe that this timely overview will benefit a large number of conference participants.
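
    As a rough illustration of the tutorial's partitioning dimension, the sketch below routes a question either to a curated knowledge-graph lookup or to a crude word-overlap retrieval over unstructured text, falling back from one to the other in a "hybrid" setting. The toy knowledge graph, toy corpus, and overlap scoring are assumptions made purely for illustration and do not represent any system described in the tutorial.

        from typing import Optional

        KG = {("albert einstein", "born in"): "Ulm"}           # toy curated knowledge graph
        CORPUS = ["Albert Einstein was born in Ulm in 1879."]  # toy unstructured text corpus

        def answer_from_kg(entity: str, relation: str) -> Optional[str]:
            # Curated source: exact lookup of an (entity, relation) fact.
            return KG.get((entity.lower(), relation.lower()))

        def answer_from_text(question: str) -> Optional[str]:
            # Unstructured source: return the sentence with the largest word overlap.
            q_words = set(question.lower().split())
            return max(CORPUS, key=lambda s: len(q_words & set(s.lower().split())), default=None)

        def hybrid_answer(question: str, entity: str, relation: str) -> Optional[str]:
            # Hybrid: prefer the curated answer, fall back to text retrieval.
            return answer_from_kg(entity, relation) or answer_from_text(question)

        print(hybrid_answer("Where was Albert Einstein born?", "Albert Einstein", "born in"))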

    THE RISE OF AI IN CONTENT MANAGEMENT: REIMAGINING INTELLIGENT WORKFLOWS

    As content management systems (CMS) become indispensable for managing digital experiences, AI integration promises to bring new levels of automation and intelligence to streamline workflows. This paper surveys how AI techniques such as machine learning, natural language processing, computer vision, and knowledge graphs are transforming CMS capabilities across the content lifecycle. We analyze key use cases such as automated metadata tagging, natural language generation, smart recommendations, predictive search, personalized experiences, and conversational interfaces. The benefits include enhanced content discoverability, accelerated creation, improved optimization, simplified governance, and amplified team productivity. However, adoption remains low due to challenges such as opaque AI, poor workflow integration, unrealistic expectations, bias risks, and skills gaps. Strategic priorities include starting with focused pilots, evaluating multiple AI approaches, emphasizing transparent and fair AI models, and upskilling teams. Benefits are maximized through hybrid human-AI collaboration rather than full automation. While AI integration is maturing, the outlook is cautiously optimistic. Leading CMS platforms are accelerating the development of no-code AI tools, but mainstream adoption may take 2-5 years as skills and best practices evolve around transparent and ethical AI. Wise data practices, change management, and participatory design will be key. If implemented thoughtfully, AI can reimagine workflows by expanding human creativity rather than replacing it. The future points to creative synergies between empowered users and AI assistants, but pragmatic pilots, continuous improvement, and participatory strategies are necessary to navigate the hype and deliver value. The promise warrants measured experimentation.
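
    As a hedged sketch of just one of the use cases listed above, automated metadata tagging, the snippet below suggests candidate tags for a CMS document using simple term-frequency keyword extraction. The stopword list, minimum word length, and tag count are assumptions made for illustration; real CMS integrations would use richer NLP models and keep a human editor in the loop, in line with the hybrid human-AI collaboration the paper recommends.

        import re
        from collections import Counter

        # Assumed, minimal stopword list for illustration only.
        STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "for", "is", "are", "with", "from"}

        def suggest_tags(body, max_tags=5):
            """Return the most frequent non-stopword terms as candidate metadata tags."""
            words = re.findall(r"[a-z]+", body.lower())
            counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
            return [word for word, _ in counts.most_common(max_tags)]

        # A human editor reviews and approves these suggestions before publishing.
        print(suggest_tags("Content management systems manage digital content "
                           "across the content lifecycle, from creation to governance."))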