18 research outputs found

    Query Suggestion and Data Fusion in Contextual Disambiguation


    Lifted graphical models: a survey

    Lifted graphical models provide a language for expressing dependencies between different types of entities, their attributes, and their diverse relations, as well as techniques for probabilistic reasoning in such multi-relational domains. In this survey, we review a general form for a lifted graphical model, a par-factor graph, and show how a number of existing statistical relational representations map to this formalism. We discuss inference algorithms, including lifted inference algorithms, that efficiently compute the answers to probabilistic queries over such models. We also review work in learning lifted graphical models from data. There is a growing need for statistical relational models (whether they go by that name or another), as we are inundated with data that is a mix of structured and unstructured, with entities and relations extracted in a noisy manner from text, and with the need to reason effectively with this data. We hope that this synthesis of ideas from many different research groups will provide an accessible starting point for new researchers in this expanding field.
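
    The survey's central object is the par-factor graph: a potential template over logical variables that, once grounded over a domain of objects, yields one propositional factor per substitution. The minimal Python sketch below illustrates that representation only; the Parfactor class, its ground method, and the smokes/friends potential are all assumed names for illustration, not code from the survey.

```python
# Illustrative sketch of a par-factor (parametric factor): a factor template
# over logical variables, grounded over a domain into propositional factors.
from itertools import product

class Parfactor:
    def __init__(self, logvars, atoms, potential):
        self.logvars = logvars      # e.g. ["X", "Y"]
        self.atoms = atoms          # atom templates like ("friends", "X", "Y")
        self.potential = potential  # potential shared by ALL groundings

    def ground(self, domain):
        """Yield one concrete factor per substitution of logvars by objects."""
        for objs in product(domain, repeat=len(self.logvars)):
            sub = dict(zip(self.logvars, objs))
            yield [tuple(sub.get(t, t) for t in atom) for atom in self.atoms]

# One par-factor compactly encodes O(|domain|^2) ground factors:
pf = Parfactor(["X", "Y"],
               [("friends", "X", "Y"), ("smokes", "X"), ("smokes", "Y")],
               lambda f, sx, sy: 2.0 if (not f or sx == sy) else 1.0)

for factor in pf.ground(["alice", "bob"]):
    print(factor)  # e.g. [('friends','alice','bob'), ('smokes','alice'), ...]
```

    Because every ground factor shares the same potential, lifted inference algorithms can reason about whole groups of interchangeable groundings at once rather than enumerating them, which is the efficiency gain the abstract refers to.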

    Representation and Evolution of User Profile in Information Retrieval Based on Bayesian Approach


    Zero-Shot Task Transfer

    In this work, we present a novel meta-learning algorithm, TTNet, that regresses model parameters for novel tasks for which no ground truth is available (zero-shot tasks). In order to adapt to novel zero-shot tasks, our meta-learner learns from the model parameters of known tasks (with ground truth) and the correlation of known tasks to zero-shot tasks. Such intuition finds its foothold in cognitive science, where a subject (a human baby) can adapt to a novel concept (depth understanding) by correlating it with old concepts (hand movement or self-motion), without receiving explicit supervision. We evaluated our model on the Taskonomy dataset, with four tasks as zero-shot: surface normal, room layout, depth, and camera pose estimation. These tasks were chosen based on the data acquisition complexity and the complexity associated with the learning process using a deep network. Our proposed methodology outperforms state-of-the-art models (which use ground truth) on each of our zero-shot tasks, showing promise for zero-shot task transfer. We also conducted extensive experiments to study the various design choices of our methodology and showed how the proposed method can also be used in transfer learning. To the best of our knowledge, this is the first such effort on zero-shot learning in the task space.
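
    As a rough sketch of the abstract's core idea, the snippet below regresses zero-shot task parameters from known-task parameters weighted by task correlations. A fixed linear combination stands in for TTNet's learned meta-network, and every task name, parameter shape, and correlation value is an illustrative assumption, not the paper's implementation.

```python
# Hedged sketch: combine known-task model parameters into parameters for a
# zero-shot task, weighted by (here: made-up) task correlations.
import numpy as np

rng = np.random.default_rng(0)

# Flattened parameter vectors of models trained on known tasks (with ground truth).
known_task_params = {
    "edge_detection": rng.normal(size=1000),
    "curvature":      rng.normal(size=1000),
    "reshading":      rng.normal(size=1000),
}

# Correlation of each known task to the zero-shot task (e.g. surface normals).
# The paper learns these relationships; these numbers are placeholders.
correlation = {"edge_detection": 0.2, "curvature": 0.5, "reshading": 0.3}

def regress_zero_shot_params(params, corr):
    """Correlation-weighted combination of known-task parameter vectors."""
    weights = np.array([corr[t] for t in params])
    weights = weights / weights.sum()
    stacked = np.stack([params[t] for t in params])
    return weights @ stacked  # (n_tasks,) @ (n_tasks, d) -> (d,)

theta_zero_shot = regress_zero_shot_params(known_task_params, correlation)
print(theta_zero_shot.shape)  # (1000,)
```

    The point of the sketch is the shape of the computation, not the combiner itself: the zero-shot model is produced from existing models and inter-task structure, with no gradient steps on zero-shot ground truth.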

    Cross-Domain Activity Recognition

    In activity recognition, one major challenge is the huge manual effort required for labeling when a new domain of activities is to be tested. In this paper, we ask an interesting question: can we transfer the available labeled data from a set of existing activities in one domain to help recognize the activities in another different but related domain? Our answer is “yes”, provided that the sensor data from the two domains are related in some way. We develop a bridge between the activities in the two domains by learning a similarity function via Web search, under the condition that the sensor data are from the same feature space. Based on the learned similarity measures, our algorithm interprets the data from the source domain as data in the target domain with different confidence levels, thus accomplishing the cross-domain knowledge transfer task. Our algorithm is evaluated on several real-world datasets to demonstrate its effectiveness.
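
    The transfer step described here can be illustrated with a small sketch: each labeled source-domain instance is reinterpreted as a confidence-weighted, pseudo-labeled target-domain instance, with confidence given by a similarity function between activity labels. The paper mines that similarity from Web search results; the hard-coded values, activity names, and feature dict below are all placeholder assumptions.

```python
# Illustrative sketch: reuse labeled source-domain instances as
# confidence-weighted training data for target-domain activities.
similarity = {  # similarity(source_activity, target_activity), assumed precomputed
    ("making coffee", "making tea"): 0.8,
    ("making coffee", "vacuuming"):  0.1,
}

source_data = [({"kitchen_motion": 1.0}, "making coffee")]
target_activities = ["making tea", "vacuuming"]

# Each source instance becomes a pseudo-labeled target instance whose weight
# is the label similarity; any learner that accepts sample weights can consume this.
pseudo_labeled = [
    (features, tgt, similarity.get((src, tgt), 0.0))
    for features, src in source_data
    for tgt in target_activities
]

for features, label, confidence in pseudo_labeled:
    print(f"{label}: confidence {confidence:.2f}")
```

    Note the precondition from the abstract: this only makes sense when both domains share a feature space, so a source instance's features are directly meaningful to the target-domain learner.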