
    Recognition of Graphological Wartegg Hand-drawings

    The Wartegg Test is a drawing-completion task designed to reflect the personal characteristics of test takers. A complete Wartegg Test has eight 4 cm x 4 cm boxes, each containing a printed hint. The test subjects are required to use a pencil to draw eight pictures in the boxes after seeing these printed hints. In recent years, the trend of utilizing high-speed hardware and deep-learning-based models for object detection has made it possible to recognize hand-drawn objects in images. However, recognition is not an easy task: like other hand-drawn images, Wartegg Test images are abstract and diverse. Moreover, Wartegg Test images are multi-object images; the number of objects in one image, as well as their distribution and size, are all unpredictable. These factors make the recognition task on Wartegg Test images more difficult. In this thesis, we present a complete framework that uses PCC (Pearson's Correlation Coefficient) to extract lines and curves, SLIC (Simple Linear Iterative Clustering) to select key feature points, and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for object clustering, and finally applies transfer learning to speed up convergence during training and deploys a YOLOv3-SPP model (a deep learning network) to detect shapes and objects. Our system achieved an accuracy of 87.9% for single-object detection and 75% for multi-object detection, surpassing previous results by a wide margin.
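    The clustering stage named in the abstract (DBSCAN over selected key feature points) can be illustrated with a minimal pure-Python sketch. The point coordinates, eps, and min_pts values below are illustrative placeholders, not taken from the thesis.

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: return a cluster label per point, -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        # All points within eps of point i (including i itself).
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # noise (may later become a border point)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:  # j is a core point: keep expanding
                seeds.extend(j_nbrs)
    return labels

# Two well-separated groups of "key points" plus one stray mark:
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (30, 30)]
labels = dbscan(pts, eps=2.0, min_pts=2)
```

    With these toy inputs, the two tight groups receive cluster ids 0 and 1 and the isolated point is marked as noise, which is the grouping behavior the framework relies on to separate drawn objects.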

    Causal Collaborative Filtering

    Recommender systems are important and valuable tools for many personalized services. Collaborative Filtering (CF) algorithms -- among others -- are fundamental algorithms driving the underlying mechanism of personalized recommendation. Many of the traditional CF algorithms are designed based on the fundamental idea of mining or learning correlative patterns from data for matching, including memory-based methods such as user/item-based CF as well as learning-based methods such as matrix factorization and deep learning models. However, advancing from correlative learning to causal learning is an important problem, because causal/counterfactual modeling can help us to think outside of the observational data for user modeling and personalization. In this paper, we propose Causal Collaborative Filtering (CCF) -- a general framework for modeling causality in collaborative filtering and recommendation. We first provide a unified causal view of CF and mathematically show that many of the traditional CF algorithms are actually special cases of CCF under simplified causal graphs. We then propose a conditional intervention approach for do-calculus so that we can estimate the causal relations based on observational data. Finally, we further propose a general counterfactual constrained learning framework for estimating the user-item preferences. Experiments are conducted on two types of real-world datasets -- traditional and randomized trial data -- and results show that our framework can improve the recommendation performance of many CF algorithms. Comment: 14 pages, 5 figures, 3 tables.
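    For contrast with the causal approach, the correlative memory-based baseline the abstract mentions (user-based CF driven by Pearson correlation between users' rating histories) can be sketched as follows. The toy rating dictionary and function names are illustrative, not taken from the paper.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def predict(ratings, user, item):
    """Predict a rating as the user's mean plus correlation-weighted
    mean-centered deviations of the neighbours who rated the item."""
    base = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        shared = [i for i in ratings[user] if i in r]
        if len(shared) < 2:
            continue  # too little overlap to estimate similarity
        w = pearson([ratings[user][i] for i in shared],
                    [r[i] for i in shared])
        other_base = sum(r.values()) / len(r)
        num += w * (r[item] - other_base)
        den += abs(w)
    return base + num / den if den else base

ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 5, "b": 3, "c": 4, "d": 4},
    "carol": {"a": 1, "b": 5, "c": 2, "d": 1},
}
est = predict(ratings, "alice", "d")
```

    This is exactly the "matching by correlative patterns" paradigm the paper argues should be lifted to causal estimates: the prediction only reflects co-occurrence in the observed logs, with no notion of intervention.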

    Tetherin inhibits prototypic foamy virus release

    Background: Tetherin (also known as BST-2, CD317, and HM1.24) is an interferon-induced protein that blocks the release of a variety of enveloped viruses, such as retroviruses, filoviruses and herpesviruses. However, the relationship between tetherin and foamy viruses has not been clearly demonstrated. Results: In this study, we found that tetherin of human, simian, bovine or canine origin inhibits the production of infectious prototypic foamy virus (PFV). The inhibition of PFV by human tetherin is counteracted by human immunodeficiency virus type 1 (HIV-1) Vpu. Furthermore, we generated a human tetherin transmembrane domain deletion mutant (delTM), a glycosyl phosphatidylinositol (GPI) anchor deletion mutant (delGPI), and dimerization- and glycosylation-deficient mutants. Compared with wild-type tetherin, the delTM and delGPI mutants only moderately inhibited PFV production. In contrast, the dimerization- and glycosylation-deficient mutants inhibited PFV production as efficiently as wild-type tetherin. Conclusions: These results demonstrate that tetherin inhibits the release and infectivity of PFV, and that this inhibition is antagonized by HIV-1 Vpu. Both the transmembrane domain and the GPI anchor of tetherin are important for the inhibition of PFV, whereas the dimerization and the glycosylation of tetherin are dispensable.

    Universal entropy bound and discrete space-time

    Starting from the universal entropy bounds suggested by Bekenstein and Susskind and applying them to the black-body radiation situation, we obtain a cut-off of space $\Delta x \geq \chi l_{\mathrm{P}}$ with $\chi \geq 0.1$. We go further to obtain a cut-off of time $\Delta t \geq \chi l_{\mathrm{P}}/c$; thus, a discrete space-time structure is obtained. With discrete space-time, we can explain the uncertainty principle. Based on the hypothesis of information theory and the entropy of black holes, we obtain the precise value of the parameter $\chi$ and demonstrate why the entropy bounds hold. Comment: 9 LaTeX pages, no figures, final version for journal publication.
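    The cut-offs quoted in the abstract can be put into numbers. The sketch below plugs CODATA constants (assumed here; the abstract does not list them) into the standard definition of the Planck length, $l_{\mathrm{P}} = \sqrt{\hbar G / c^3}$, and applies the abstract's bound $\chi \geq 0.1$.

```python
from math import sqrt

# CODATA values in SI units (assumed inputs, not given in the abstract):
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light in vacuum, m/s

l_P = sqrt(hbar * G / c**3)  # Planck length, ~1.616e-35 m
t_P = l_P / c                # Planck time,   ~5.391e-44 s

chi = 0.1                    # lower bound on chi from the abstract
dx_min = chi * l_P           # spatial cut-off:  Delta x >= chi * l_P
dt_min = chi * t_P           # temporal cut-off: Delta t >= chi * l_P / c
```

    With $\chi = 0.1$ the spatial cut-off is about $1.6 \times 10^{-36}$ m and the temporal cut-off about $5.4 \times 10^{-45}$ s, i.e. one order of magnitude below the Planck scale.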

    Fairness in Recommendation: Foundations, Methods and Applications

    As one of the most pervasive applications of machine learning, recommender systems are playing an important role in assisting human decision making. The satisfaction of users and the interests of platforms are closely related to the quality of the generated recommendation results. However, as highly data-driven systems, recommender systems can be affected by data or algorithmic bias and thus generate unfair results, which can undermine users' trust in these systems. As a result, it is crucial to address the potential unfairness problems in recommendation settings. Recently, there has been growing attention on fairness considerations in recommender systems, with more and more literature on approaches to promote fairness in recommendation. However, the studies are rather fragmented and lack a systematic organization, making it difficult for new researchers to enter the domain. This motivates us to provide a systematic survey of existing works on fairness in recommendation. This survey focuses on the foundations of the fairness-in-recommendation literature. It first presents a brief introduction to fairness in basic machine learning tasks such as classification and ranking in order to provide a general overview of fairness research, and introduces the more complex situations and challenges that need to be considered when studying fairness in recommender systems. After that, the survey introduces fairness in recommendation with a focus on the taxonomies of current fairness definitions, the typical techniques for improving fairness, and the datasets for fairness studies in recommendation. The survey also discusses the challenges and opportunities in fairness research, with the hope of promoting the fair recommendation research area and beyond. Comment: Accepted by ACM Transactions on Intelligent Systems and Technology (TIST).