
    Generalized knowledge-based semantics for multi-valued logic programs

    A generalized logic programming system is presented which uses bilattices as the underlying framework for the semantics of programs. The two orderings of the bilattice reflect the concepts of truth and knowledge. Programs are interpreted according to their knowledge content, resulting in a monotonic semantic operator even in the presence of negation. A special case, namely logic programming based on the four-valued bilattice, is carefully studied in its own right. In the four-valued case, a version of the Closed World Assumption is incorporated into the semantics. Soundness and completeness results are given with and without the Closed World Assumption. The concepts studied in the four-valued case are then generalized to arbitrary bilattices. The resulting logic programming systems are well suited for representing incomplete or conflicting information. Depending on the choice of the underlying bilattice, the knowledge-based logic programming language can provide a general framework for other languages based on probabilistic logics, intuitionistic logics, modal logics based on possible-worlds semantics, and other useful non-classical logics. A novel procedural semantics is given which extends SLDNF-resolution and can retrieve both negative and positive information about a particular goal in a uniform setting. The proposed procedural semantics is based on an AND-parallel computational model for logic programs. The concept of substitution unification is introduced and many of its properties are studied in the context of the proposed computational model. Some of these properties may be of independent interest, particularly in the implementation of parallel and distributed logic programs. Finally, soundness and completeness results are proved for the proposed logic programming system. It is further shown that for finite distributive bilattices (and, more generally, bilattices with the descending chain property), an alternate procedural semantics can be developed based on a small subset of special truth values which turn out to be the join-irreducible elements of the knowledge part of the bilattice. The algebraic properties of these elements and their relevance to the corresponding logic programming system are extensively studied.
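    The two orderings mentioned above can be made concrete on the four-valued (Belnap-style) bilattice the abstract singles out. The sketch below is illustrative only, assuming the usual encoding of each value as a pair (evidence for, evidence against); the names NONE, TRUE, FALSE, BOTH and the helper functions are not taken from the paper.

    # Minimal sketch of the four-valued bilattice: values encoded as
    # (evidence-for, evidence-against) pairs in {0, 1}.
    NONE  = (0, 0)   # no information
    TRUE  = (1, 0)
    FALSE = (0, 1)
    BOTH  = (1, 1)   # conflicting information

    def leq_truth(a, b):
        """Truth ordering: b has no less evidence for, and no more against, than a."""
        return a[0] <= b[0] and a[1] >= b[1]

    def leq_knowledge(a, b):
        """Knowledge ordering: b carries at least as much information (for and against) as a."""
        return a[0] <= b[0] and a[1] <= b[1]

    def join_knowledge(a, b):
        """Knowledge join: accept everything either source claims."""
        return (max(a[0], b[0]), max(a[1], b[1]))

    # Combining a source asserting TRUE with one asserting FALSE yields BOTH,
    # and the combination only moves upward in the knowledge ordering.
    assert join_knowledge(TRUE, FALSE) == BOTH
    assert leq_knowledge(NONE, TRUE) and leq_knowledge(TRUE, BOTH)
    assert leq_truth(FALSE, NONE) and leq_truth(NONE, TRUE)

    Interpreting programs along the knowledge ordering, as the abstract describes, is what keeps the semantic operator monotonic even when negation flips values in the truth ordering.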

    Fairness of Exposure in Dynamic Recommendation

    Exposure bias is a well-known issue in recommender systems in which exposure is not fairly distributed among the items in the recommendation results. This is especially problematic when the bias is amplified over time: a few items (e.g., popular ones) are repeatedly over-represented in recommendation lists, and users' interactions with those items further amplify the bias towards them, resulting in a feedback loop. This issue has been extensively studied in the literature in the static recommendation setting, where a single round of recommendation results is processed to improve exposure fairness. However, less work has been done on addressing exposure bias in a dynamic recommendation setting, where the system operates over time and the recommendation model and input data are updated at each round with ongoing user feedback on recommended items. In this paper, we study exposure bias in a dynamic recommendation setting. Our goal is to show that existing bias mitigation methods designed for a static recommendation setting are unable to satisfy fairness of exposure for items in the long run. In particular, we empirically study one of these methods and show that repeatedly applying it fails to fairly distribute exposure among items in the long run. To address this limitation, we show how this method can be adapted to operate effectively in a dynamic recommendation setting and achieve exposure fairness for items in the long run. Experiments on a real-world dataset confirm that our solution is superior in achieving long-term exposure fairness for the items while maintaining recommendation accuracy.
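    As a rough illustration of the feedback loop described above, the following sketch simulates a toy popularity-based recommender over many rounds and tracks how unevenly cumulative exposure ends up distributed. The item counts, the click model, and the Gini-style fairness measure are assumptions made for illustration, not the paper's experimental setup.

    import random

    N_ITEMS, TOP_K, ROUNDS = 50, 5, 200
    clicks   = [1] * N_ITEMS      # interaction counts driving the toy model
    exposure = [0] * N_ITEMS      # cumulative exposure per item

    def gini(values):
        """Gini coefficient of exposure: 0 = evenly spread, near 1 = concentrated."""
        xs = sorted(values)
        n, total = len(xs), sum(xs)
        cum = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * cum) / (n * total) - (n + 1) / n

    for _ in range(ROUNDS):
        # "Model update": rank items by the interactions observed so far.
        ranked = sorted(range(N_ITEMS), key=lambda i: clicks[i], reverse=True)
        for item in ranked[:TOP_K]:
            exposure[item] += 1
            # Simulated feedback: recommended items attract more interactions,
            # which feeds back into the next round's ranking.
            if random.random() < 0.5:
                clicks[item] += 1

    print("exposure Gini after", ROUNDS, "rounds:", round(gini(exposure), 3))

    In this toy setting the same handful of items ends up absorbing all of the exposure, which illustrates the kind of long-run unfairness that a single-round, static mitigation step does not by itself prevent.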

    Bias Disparity in Collaborative Recommendation: Algorithmic Evaluation and Comparison

    Research on fairness in machine learning has recently been extended to recommender systems. One of the factors that may impact fairness is bias disparity, the degree to which a group's preferences on various item categories fail to be reflected in the recommendations they receive. In some cases, biases in the original data may be amplified or reversed by the underlying recommendation algorithm. In this paper, we explore how different recommendation algorithms reflect the tradeoff between ranking quality and bias disparity. Our experiments include neighborhood-based, model-based, and trust-aware recommendation algorithms.
    Comment: Workshop on Recommendation in Multi-Stakeholder Environments (RMSE) at ACM RecSys 2019, Copenhagen, Denmark.
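    For intuition, the bias disparity notion mentioned above can be sketched as the relative change of a group's preference ratio for a category between its input data and its recommendation lists. The toy data and the exact relative-difference formulation below are illustrative assumptions, not necessarily the paper's precise definition.

    def preference_ratio(interactions, category):
        """Fraction of a group's interactions that fall in the given category."""
        return sum(1 for c in interactions if c == category) / len(interactions)

    def bias_disparity(source_interactions, recommended, category):
        """Relative change of the category's share from input data to recommendations.

        Negative values mean the group's preference for the category is
        under-represented in its recommendations; positive values mean it is amplified.
        """
        b_source = preference_ratio(source_interactions, category)
        b_recs   = preference_ratio(recommended, category)
        return (b_recs - b_source) / b_source

    # A group that interacts with "drama" 60% of the time but sees it in only 40%
    # of its recommendations has a bias disparity of about -0.33 for "drama".
    history = ["drama"] * 6 + ["action"] * 4
    recs    = ["drama"] * 4 + ["action"] * 6
    print(round(bias_disparity(history, recs, "drama"), 2))   # -0.33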