
    Syntactic and Semantic Understanding of Conceptual Data Models

    Conceptual data models serve as a discovery and validation communication tool between analysts and users; as a communication tool between analysts and designers; as a basis for end-user developed applications; and as part of the systems documentation (e.g., Batra and Davis 1992; Juhn and Naumann 1985; Siau et al. 1997). A goal of creating a conceptual model is to develop a database schema that can be used to implement a database meeting the information needs of its intended users. To develop a suitable database schema, the designer must be able to use the conceptual data model as a communication tool to verify the assumptions made in its creation. Batra and Davis state that the conceptual model must be capable of providing a structure for the database along with the semantic constraints for communication with users. The conceptual data model also serves as a representation of the database after its completion: it is part of the systems documentation, and hence can be used for system evaluation by auditors or others. Conceptual data models include several components, each of which provides information content. Siau et al. examined the use of two components in entity-relationship data models: the surface semantics and the structural constraints (participation cardinality) of the relationships.

    Having Your Cake and Eating It Too: Autonomy and Interaction in a Model of Sentence Processing

    Is the human language understander a collection of modular processes operating with relative autonomy, or is it a single integrated process? This ongoing debate has polarized the language processing community, with two fundamentally different types of model posited, and with each camp concluding that the other is wrong. One camp puts forth a model with separate processors and distinct knowledge sources to explain one body of data, and the other proposes a model with a single processor and a homogeneous, monolithic knowledge source to explain the other body of data. In this paper we argue that a hybrid approach, which combines a unified processor with separate knowledge sources, provides an explanation of both bodies of data, and we demonstrate the feasibility of this approach with the computational model COMPERE. We believe that this approach brings the language processing community significantly closer to offering human-like language processing systems. Comment: 7 pages, uses aaai.sty macros.

    The Structured Process Modeling Method (SPMM) : what is the best way for me to construct a process model?

    More and more organizations turn to the construction of process models to support strategic and operational tasks. At the same time, reports indicate quality issues, caused by modeling errors, in a considerable share of these models. The research described in this paper therefore investigates the development of a practical method to determine and train an optimal process modeling strategy that aims to decrease the number of cognitive errors made during modeling. Such cognitive errors originate in inadequate cognitive processing caused by the inherent complexity of constructing process models. The method helps modelers derive their personal cognitive profile and the related optimal cognitive strategy that minimizes these cognitive failures. The contribution of the research consists of the conceptual method and an automated modeling strategy selection and training instrument. These two artefacts were positively evaluated in a laboratory experiment covering multiple modeling sessions and involving a total of 149 master students at Ghent University.

    Ontologies and Information Extraction

    This report argues that, even in the simplest cases, IE is an ontology-driven process. It is not a mere text filtering method based on simple pattern matching and keywords, because the extracted pieces of text are interpreted with respect to a predefined partial domain model. The report shows that, depending on the nature and depth of the interpretation required to extract the information, more or less knowledge must be involved. The report is mainly illustrated with examples from biology, a domain in which there are critical needs for content-based exploration of the scientific literature and which is becoming a major application domain for IE.

    On the Effect of Semantically Enriched Context Models on Software Modularization

    Many of the existing approaches for program comprehension rely on the linguistic information found in source code, such as identifier names and comments. Semantic clustering is one such technique for modularization of a system; it relies on the informal semantics of the program, encoded in the vocabulary used in the source code. Treating the source code as a collection of tokens, however, loses the semantic information embedded within the identifiers. We try to overcome this problem by introducing context models for source code identifiers to obtain a semantic kernel, which can be used both for deriving the topics that run through the system and for clustering. In the first model, we abstract an identifier to its type representation and build on this notion of context to construct a contextual vector representation of the source code. The second notion of context is based on the flow of data between identifiers: a module is represented as a dependency graph whose nodes correspond to identifiers and whose edges represent the data dependencies between pairs of identifiers. We have applied our approach to 10 medium-sized open source Java projects, and show that by introducing contexts for identifiers, the quality of the modularization of the software systems is improved. Both context models give results that are superior to the plain vector representation of documents. In some cases, the authoritativeness of decompositions is improved by 67%. Furthermore, a more detailed evaluation of our approach on JEdit, an open source editor, demonstrates that the topics inferred through topic analysis on the contextual representations are more meaningful than those obtained from the plain representation of the documents.
    The proposed approach of introducing a context model for source code identifiers paves the way for building tools that support developers in program comprehension tasks such as application and domain concept location, software modularization, and topic analysis.
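    The data-flow context model sketched in the abstract (nodes = identifiers, edges = data dependencies) can be illustrated in miniature. The snippet below is not the authors' implementation (which targets Java and richer dependency kinds); it is a minimal sketch, assuming Python sources and treating only simple assignments as data dependencies, using the standard-library `ast` module.

    ```python
    # Minimal sketch of an identifier dependency graph: for each
    # assignment `x = f(a, b)` we record that the values of `a` and `b`
    # flow into `x`. Only plain assignments to simple names are handled.
    import ast
    from collections import defaultdict

    def identifier_dependency_graph(source: str) -> dict:
        """Map each identifier to the set of identifiers whose values flow into it."""
        graph = defaultdict(set)
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Assign):
                # Assigned-to names on the left-hand side.
                targets = [t.id for t in node.targets if isinstance(t, ast.Name)]
                # Names read on the right-hand side.
                used = {n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)}
                for target in targets:
                    graph[target] |= used
        return dict(graph)

    snippet = """
    total = price * quantity
    discounted = total - coupon
    """
    graph = identifier_dependency_graph(snippet)
    # graph["total"] == {"price", "quantity"}; graph["discounted"] == {"total", "coupon"}
    ```

    A graph like this could then feed a clustering step (e.g., grouping identifiers by connected components or community detection) to recover module boundaries, which is the role the dependency-based context model plays in the abstract.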

    Grip Force Reveals the Context Sensitivity of Language-Induced Motor Activity during “Action Words”

    Studies demonstrating the involvement of motor brain structures in language processing typically focus on time windows beyond the latencies of lexical-semantic access. Consequently, such studies remain inconclusive regarding whether motor brain structures are recruited directly in language processing or through post-linguistic conceptual imagery. In the present study, we introduce a grip-force sensor that allows online measurements of language-induced motor activity during sentence listening. We use this tool to investigate whether language-induced motor activity remains constant or is modulated in negative, as opposed to affirmative, linguistic contexts. Our findings demonstrate that this simple experimental paradigm can be used to study the online crosstalk between language and the motor systems in an ecological and economical manner. Our data further confirm that the motor brain structures that can be called upon during action word processing are not mandatorily involved; the crosstalk is asymmetrically governed by the linguistic context and not vice versa.