    Team Semantics and Recursive Enumerability

    It is well known that dependence logic captures the complexity class NP, and it has recently been shown that inclusion logic captures P on ordered models. These results demonstrate that team semantics offers interesting new possibilities for descriptive complexity theory. In order to properly understand the connection between team semantics and descriptive complexity, we introduce an extension D* of dependence logic that can define exactly the recursively enumerable classes of finite models. Thus D* offers an alternative to Turing machines as an approach to computation. The essential novel feature of D* is an operator that can extend the domain of the considered model by a finite number of fresh elements. Owing to the close relationship between generalized quantifiers and oracles, we also investigate generalized quantifiers in team semantics. We show that monotone quantifiers of type (1) can be canonically eliminated from quantifier extensions of first-order logic by introducing corresponding generalized dependence atoms.

    The Relationship between Working Memory and Cognitive Functioning in Children

    One hundred and forty-four Year 1 children (51% boys and 49% girls, mean age 6 years) from Queensland State primary schools participated in a study to investigate the relationship between working memory and cognitive functioning. Children were given two tests of cognitive functioning (the School-Years Screening Test for the Evaluation of Mental Status (SYSTEMS) and the Kaufman Brief Intelligence Test (K-BIT)) and six subtests of working memory from the Working Memory Test Battery for Children (WMTB-C): Backward Digit Recall, Listening Recall, Digit Recall, Word List Matching, Word List Recall and Non-word List Recall. The two cognitive tests correlated at r = .50. Results showed a high correlation between SYSTEMS and the Phonological Loop (PL) component of working memory. The K-BIT also correlated highly with the PL component. The SYSTEMS and K-BIT showed various levels of correlation with the working memory subtests. A measurement model using Confirmatory Factor Analysis showed a strong relationship between working memory and cognitive functioning; the degree of fit for the model was very high at GFI = .996.
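    The correlations reported above are Pearson product-moment coefficients. A minimal sketch of the computation, using hypothetical test scores rather than the study's data:

    ```python
    import math

    def pearson_r(x, y):
        """Pearson product-moment correlation between two equal-length score lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical scores on two cognitive tests (illustrative only)
    systems_scores = [12, 15, 9, 20, 14, 18]
    kbit_scores = [30, 38, 25, 44, 33, 40]
    r = pearson_r(systems_scores, kbit_scores)
    ```

    A coefficient near 1 indicates the two tests rank children similarly; the study's observed r = .50 indicates a moderate positive relationship.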

    Parameterized Neural Network Language Models for Information Retrieval

    Information Retrieval (IR) models need to deal with two difficult issues: vocabulary mismatch and term dependencies. Vocabulary mismatch corresponds to the difficulty of retrieving relevant documents that do not contain exact query terms but semantically related terms. Term dependencies refer to the need to consider the relationships between the words of the query when estimating the relevance of a document. A multitude of solutions has been proposed to solve each of these two problems, but no principled model solves both. In parallel, in the last few years, language models based on neural networks have been used to cope with complex natural language processing tasks like emotion and paraphrase detection. Although they cope well with both term dependencies and vocabulary mismatch, thanks to the distributed representation of words they are based upon, such models could not be used readily in IR, where the estimation of one language model per document (or query) is required. This is both computationally unfeasible and prone to over-fitting. Based on a recent work that proposed to learn a generic language model that can be modified through a set of document-specific parameters, we explore the use of new neural network models that are adapted to ad-hoc IR tasks. Within the language model IR framework, we propose and study the use of a generic language model as well as a document-specific language model. Both can be used as a smoothing component, but the latter is better adapted to the document at hand and has the potential of being used as a full document language model. We experiment with such models and analyze their results on TREC-1 to 8 datasets.
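    The classical language-model IR framework the abstract builds on scores a document by the likelihood of the query under a document model interpolated with a generic (collection-wide) smoothing model. A minimal sketch with Jelinek-Mercer interpolation; the toy documents and the interpolation weight are illustrative assumptions, not the paper's models:

    ```python
    import math
    from collections import Counter

    def query_likelihood(query_terms, doc_tokens, collection_tokens, lam=0.5):
        """Log-likelihood of the query under a document language model
        smoothed with a generic collection model (Jelinek-Mercer)."""
        doc = Counter(doc_tokens)
        coll = Counter(collection_tokens)
        dlen, clen = len(doc_tokens), len(collection_tokens)
        score = 0.0
        for t in query_terms:
            p_doc = doc[t] / dlen       # maximum-likelihood document model
            p_coll = coll[t] / clen     # generic smoothing model
            score += math.log(lam * p_doc + (1 - lam) * p_coll)
        return score

    # Toy corpus (illustrative)
    docs = [["neural", "network", "language", "model"],
            ["boolean", "retrieval", "model"]]
    collection = [t for d in docs for t in d]
    scores = [query_likelihood(["language", "model"], d, collection) for d in docs]
    ```

    Smoothing lets a document that misses a query term still receive a finite score, which is one reason a generic model is useful; the paper's contribution is to make the document-specific component a parameterized neural model rather than a count-based one.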

    On the Effect of Semantically Enriched Context Models on Software Modularization

    Many of the existing approaches for program comprehension rely on the linguistic information found in source code, such as identifier names and comments. Semantic clustering is one such technique for modularization of a system that relies on the informal semantics of the program, encoded in the vocabulary used in the source code. Treating the source code as a collection of tokens loses the semantic information embedded within the identifiers. We try to overcome this problem by introducing context models for source code identifiers to obtain a semantic kernel, which can be used both for deriving the topics that run through the system and for clustering. In the first model, we abstract an identifier to its type representation and build on this notion of context to construct a contextual vector representation of the source code. The second notion of context is defined based on the flow of data between identifiers: a module is represented as a dependency graph where the nodes correspond to identifiers and the edges represent the data dependencies between pairs of identifiers. We have applied our approach to 10 medium-sized open source Java projects, and show that by introducing contexts for identifiers, the quality of the modularization of the software systems is improved. Both of the context models give results that are superior to the plain vector representation of documents. In some cases, the authoritativeness of decompositions is improved by 67%. Furthermore, a more detailed evaluation of our approach on JEdit, an open source editor, demonstrates that topics inferred from the contextual representations are more meaningful than those from the plain representation of the documents.
    The proposed approach of introducing a context model for source code identifiers paves the way for building tools that support developers in program comprehension tasks such as application and domain concept location, software modularization and topic analysis.
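    The second context model above represents a module as a graph of identifiers connected by data dependencies. A minimal sketch of that data structure; the identifier names and dependencies below are hypothetical, not drawn from the paper's corpus:

    ```python
    from collections import defaultdict

    class DependencyGraph:
        """Module as a dependency graph: nodes are identifiers,
        directed edges are data dependencies between them."""

        def __init__(self):
            self.edges = defaultdict(set)

        def add_dependency(self, src, dst):
            # Record that data flows from identifier `src` into `dst`.
            self.edges[src].add(dst)

        def context(self, identifier):
            """Identifiers that `identifier` directly feeds data into."""
            return sorted(self.edges[identifier])

    g = DependencyGraph()
    g.add_dependency("rawInput", "parsedTokens")
    g.add_dependency("parsedTokens", "symbolTable")
    g.add_dependency("parsedTokens", "astRoot")
    ctx = g.context("parsedTokens")
    ```

    In this representation, the context of an identifier is its graph neighborhood rather than a bag of co-occurring tokens, which is what preserves the semantic information a flat token view discards.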

    The Game Transfer Phenomena Scale: An Instrument for Investigating the Nonvolitional Effects of Video Game Playing

    A variety of instruments have been developed to assess different dimensions of playing videogames and its effects on cognitions, affect, and behaviors. The present study examined the psychometric properties of the Game Transfer Phenomena Scale (GTPS), which assesses non-volitional phenomena experienced after playing videogames (i.e., altered perceptions, automatic mental processes, and involuntary behaviors). A total of 1,736 gamers participated in an online survey used as the basis for the analysis. Confirmatory factor analysis (CFA) was performed to confirm the factorial structure of the GTPS. The five-factor structure, using the 20 indicators based on the analysis of gamers' self-reports, fitted the data well. Population cross-validity was also achieved, and the positive associations between session length and overall scores indicate that the GTPS has criterion-related validity. Although the understanding of GTP is still in its infancy, the GTPS appears to be a valid and reliable instrument for assessing non-volitional gaming-related phenomena. The GTPS can be used for understanding the phenomenology of the post-effects of playing videogames.