
    Typicality and Familiarity Effects in Children's Memory: The Interaction of Processing and the Knowledge Base.

    Third- and sixth-graders and adults participated in an experiment based upon Hunt and Einstein's (1981) theory, which relates study activities, or processing tasks, to subsequent memory performance. Participants performed a processing task designed to emphasize either relational or item-specific information. In addition, the information about the words available in each subject's knowledge base was measured in two ways: relational information was assessed with a typicality rating task, and item-specific information was assessed with an attribute listing task. The experiment consisted of three phases. In the first phase, subjects performed one of two processing tasks on a list containing typical, atypical, and unfamiliar exemplars of a semantic category: one group of subjects sorted the words into categories (the relational task), while the other group rated the words for pleasantness (the item-specific task). In the second phase, subjects' memory for the words was tested with a free recall test. In the third phase, the knowledge base assessment tasks were performed. The knowledge base measures indicated that the relative amount of relational versus item-specific information differs among typical, atypical, and unfamiliar words, and that the amount of relational and item-specific information in the knowledge base changes with age. As predicted by the theory, recall was influenced by the interaction of word type with processing task. Finally, parallels between the free recall results and the knowledge base measures indicated that knowledge base development interacts with the processing task to influence what is recalled by subjects at the three age levels.

    Advanced Semantics for Commonsense Knowledge Extraction

    Commonsense knowledge (CSK) about concepts and their properties is useful for AI applications such as robust chatbots. Prior works like ConceptNet, TupleKB and others compiled large CSK collections, but are restricted in their expressiveness to subject-predicate-object (SPO) triples with simple concepts for S and monolithic strings for P and O. Also, these projects have either prioritized precision or recall, but hardly reconcile these complementary goals. This paper presents a methodology, called Ascent, to automatically build a large-scale knowledge base (KB) of CSK assertions, with advanced expressiveness and both better precision and recall than prior works. Ascent goes beyond triples by capturing composite concepts with subgroups and aspects, and by refining assertions with semantic facets. The latter are important to express temporal and spatial validity of assertions and further qualifiers. Ascent combines open information extraction with judicious cleaning using language models. Intrinsic evaluation shows the superior size and quality of the Ascent KB, and an extrinsic evaluation for QA-support tasks underlines the benefits of Ascent. Comment: Web interface available at https://ascent.mpi-inf.mpg.d
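    To make the expressiveness gap concrete, the sketch below shows one plausible way an assertion with composite subjects and semantic facets could be represented; the class and field names (Subject, Assertion, facets) are assumptions for illustration, not Ascent's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

# Hypothetical illustration of an assertion that goes beyond plain SPO triples:
# the subject can carry a subgroup or aspect, and the assertion can be refined
# with semantic facets such as temporal or spatial validity. Field names are
# assumptions, not Ascent's actual schema.

@dataclass
class Subject:
    concept: str                    # primary concept, e.g. "elephant"
    subgroup: Optional[str] = None  # e.g. "baby elephant"
    aspect: Optional[str] = None    # e.g. "trunk of elephant"

@dataclass
class Assertion:
    subject: Subject
    predicate: str                  # open phrase, e.g. "live in"
    obj: str                        # open phrase, e.g. "herds"
    facets: Dict[str, str] = field(default_factory=dict)  # e.g. {"location": "Africa"}

# Toy example: "Elephants in Africa live in herds during the dry season."
a = Assertion(
    subject=Subject(concept="elephant"),
    predicate="live in",
    obj="herds",
    facets={"location": "Africa", "time": "dry season"},
)
print(a)
```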

    SIFT and color feature fusion using localized maximum-margin learning for scene classification

    The 3rd International Conference on Machine Vision (ICMV 2010), Hong Kong, China, 28-30 December 2010. In Proceedings of 3rd ICMV, 2010, p. 56-6

    Where do hypotheses come from?

    Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? One notable instance of this discrepancy is that tasks where the candidate hypotheses are explicitly available result in close to rational inference over the hypothesis space, whereas tasks requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes approximating Bayes' rule. Specifically, in our account, hypotheses are generated stochastically from a sampling process, such that the sampled hypotheses form a Monte Carlo approximation of the posterior. While this approximation converges to the true posterior in the limit of infinite samples, we assume only a small number of samples, since the number of samples humans take is limited by time pressure and cognitive resource constraints. We show that this model recreates several well-documented experimental findings, such as anchoring and adjustment, subadditivity, superadditivity, the crowd within, the self-generation effect, and the weak evidence and dud alternative effects. Additionally, in two experiments, we confirm the model's prediction that superadditivity and subadditivity can be induced within the same paradigm by manipulating the unpacking and typicality of hypotheses. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216
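    As a rough illustration of the sampling account (not the authors' implementation), the sketch below draws a small number of hypotheses from a prior and normalizes prior times likelihood over the sampled set only, so that the approximation converges to the full posterior as the sample count grows; the toy hypotheses, prior, and likelihood are invented for the example.

```python
import random

# Minimal sketch of a Monte Carlo approximation of the posterior built from a
# small set of self-generated (sampled) hypotheses. All names and numbers here
# are illustrative assumptions, not the paper's model parameters.

def approximate_posterior(hypotheses, prior, likelihood, data, k=5, rng=random):
    """Sample k hypotheses from the prior, then normalize prior * likelihood
    over the sampled set only. With small k the result can deviate
    systematically from the full posterior; as k grows it converges to it."""
    sampled = rng.choices(hypotheses, weights=[prior[h] for h in hypotheses], k=k)
    support = set(sampled)  # only the self-generated hypotheses enter the calculation
    unnorm = {h: prior[h] * likelihood(data, h) for h in support}
    z = sum(unnorm.values())
    return {h: w / z for h, w in unnorm.items()}

# Toy example: three hypotheses, with data favoring "h2".
hypotheses = ["h1", "h2", "h3"]
prior = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
likelihood = lambda data, h: {"h1": 0.1, "h2": 0.8, "h3": 0.1}[h]

print(approximate_posterior(hypotheses, prior, likelihood, data=None, k=3))
```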