
    Mutual Impact - On the Relationship of Technology and Language Learning and Teaching


    Generating natural language specifications from UML class diagrams

    Early phases of software development are known to be problematic and difficult to manage, and errors occurring during these phases are expensive to correct. Many systems have been developed to aid the transition from informal Natural Language requirements to semi-structured or formal specifications. Furthermore, consistency checking is seen by many software engineers as the solution for reducing the number of errors occurring during the software development life cycle and for allowing early verification and validation of software systems. However, this is confined to the models developed during analysis and design and fails to include the early Natural Language requirements. This excludes proper user involvement and creates a gap between the original requirements and the updated and modified models and implementations of the system. To improve this process, we propose a system that generates Natural Language specifications from UML class diagrams. We first investigate the variation of the input language used in naming the components of a class diagram, based on a study of a large number of examples from the literature, and then develop rules for removing ambiguities in the subset of Natural Language used within UML. We use WordNet, a linguistic ontology, to disambiguate the lexical structures of the UML string names and generate semantically sound sentences. Our system is developed in Java and is tested on an independent, though academic, case study.
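
    The abstract only outlines the approach; as a rough illustration of the general idea (not the paper's Java implementation), the Python sketch below splits a camelCase UML identifier into words, queries WordNet via NLTK to guess each word's dominant part of speech, and renders an attribute as a simple sentence. The function names, the sentence template, and the example identifiers are all hypothetical.

```python
# Illustrative sketch only: turning UML string names into natural-language
# phrases with WordNet as the lexical resource (requires nltk.download('wordnet')).
import re
from nltk.corpus import wordnet as wn

def split_identifier(name):
    """Split camelCase / snake_case names, e.g. 'customerAddress' -> ['customer', 'address']."""
    parts = re.split(r'_|(?<=[a-z0-9])(?=[A-Z])', name)
    return [p.lower() for p in parts if p]

def dominant_pos(word):
    """Guess the most frequent part of speech for a word from its WordNet synsets."""
    synsets = wn.synsets(word)
    if not synsets:
        return None
    counts = {}
    for s in synsets:
        counts[s.pos()] = counts.get(s.pos(), 0) + 1
    return max(counts, key=counts.get)

def describe_attribute(class_name, attr_name):
    """Render a class attribute as a simple sentence (hypothetical template)."""
    words = split_identifier(attr_name)
    return f"Each {class_name} has a {' '.join(words)}."

if __name__ == "__main__":
    print(split_identifier("customerAddress"))            # ['customer', 'address']
    print(dominant_pos("order"))                          # often 'n' (noun) for identifiers
    print(describe_attribute("Customer", "deliveryAddress"))
```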

    The re-historicisation and increased contextualisation of curriculum and its associated pedagogies

    Curriculum has traditionally been an ahistorical and technical field. The consequence has been to view curriculum and its associated pedagogical practices as neutral entities, devoid of meaning - in essence arising ex nihilo. However, this naïve assumption has fatefully resulted in revisiting the same swamps over and over again. Standardised curriculum and pedagogy function invisibly to reproduce class and inequality and to institutionalise cultural norms. Despite lingering attempts to maintain this technocratic approach that ignores subcutaneous meanings, a strong movement has emerged to reconceptualise curriculum in terms of its historical and sociopolitical context. While it is conceded that this is a step into a larger quagmire, it is a necessary one if true progress is to be made. Nevertheless, this larger quagmire provides the possibility of escape, unlike the fatal determinism of forever returning to the swamps. Expectedly, this move to reconceptualise curriculum has its critics. Their arguments are also addressed, in particular the perceived tendency to separate theory and practice. Although curriculum and curriculum practices can be contextualised in many ways, this paper focuses primarily on key political concepts and concealed constructs such as hegemony, reproduction and resistance, resilience of the institution, the non-neutral nature of knowledge, the inclusion/exclusion principle, slogan systems and the hidden curriculum. Only by understanding the complex historical and political nature of curriculum can teaching professionals understand the hidden meaning of their practices. This is the first step for professionals to take in order to achieve Giroux's (1979, 1985, 1992) vision of teachers as transformative professionals (particularly through collaborative frameworks like the IDEAS project) in a climate of standardised curriculum and testing.

    Modelling the acquisition of syntactic categories

    This research represents an attempt to model the child’s acquisition of syntactic categories. A computational model, based on the EPAM theory of perception and learning, is developed. The basic assumptions are that (1) syntactic categories are actively constructed by the child using distributional learning abilities; and (2) cognitive constraints in learning rate and memory capacity limit these learning abilities. We present simulations of the syntax acquisition of a single subject, where the model learns to build up multi-word utterances by scanning a sample of the speech addressed to the subject by his mother.
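
    The abstract does not detail the mechanism, but as a loose illustration of the distributional idea behind assumption (1), and not of the EPAM-based model itself, the toy snippet below groups words that share (previous word, next word) frames in child-directed utterances. All names and example data are invented.

```python
# Minimal sketch of distributional category learning: words occurring in
# similar (left, right) frames are treated as candidates for the same category.
from collections import defaultdict
from itertools import combinations

def frame_profiles(utterances):
    """Map each word to the set of (left, right) frames it appears in."""
    profiles = defaultdict(set)
    for utterance in utterances:
        tokens = ["<s>"] + utterance.lower().split() + ["</s>"]
        for i in range(1, len(tokens) - 1):
            profiles[tokens[i]].add((tokens[i - 1], tokens[i + 1]))
    return profiles

def similar_pairs(profiles, threshold=1):
    """Return word pairs sharing at least `threshold` frames (a crude category cue)."""
    pairs = []
    for w1, w2 in combinations(profiles, 2):
        if len(profiles[w1] & profiles[w2]) >= threshold:
            pairs.append((w1, w2))
    return pairs

if __name__ == "__main__":
    child_directed = ["you want the ball", "you want the cup", "you see the ball"]
    prof = frame_profiles(child_directed)
    print(similar_pairs(prof))  # e.g. ('ball', 'cup') share the frame ('the', '</s>')
```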

    A comparative analysis: QA evaluation questions versus real-world queries

    This paper presents a comparative analysis of user queries to a web search engine, questions to a Q&A service (answers.com), and questions employed in question answering (QA) evaluations at TREC and CLEF. The analysis shows that user queries to search engines contain mostly content words (i.e. keywords) but lack structure words (i.e. stopwords) and capitalization. Thus, they resemble natural language input after case folding and stopword removal. In contrast, topics for QA evaluation and questions to answers.com mainly consist of fully capitalized and syntactically well-formed questions. Classification experiments using a naïve Bayes classifier show that stopwords play an important role in determining the expected answer type. A classification based on stopwords is considerably more accurate (47.5% accuracy) than a classification based on all query words (40.1% accuracy) or on content words (33.9% accuracy). To simulate user input, questions are preprocessed by case folding and stopword removal. Additional classification experiments aim at reconstructing the syntactic wh-word frame of a question, i.e. the embedding of the interrogative word. Results indicate that this part of questions can be reconstructed with moderate accuracy (25.7%), but for a classification problem with a much larger number of classes compared to classifying queries by expected answer type (2096 classes vs. 130 classes). Furthermore, eliminating stopwords can lead to multiple reconstructed questions with a different or with the opposite meaning (e.g. if negations or temporal restrictions are included). In conclusion, question reconstruction from short user queries can be seen as a new realistic evaluation challenge for QA systems.
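
    As a rough illustration of the kind of experiment described (not the paper's actual code, data, or features), a scikit-learn sketch along these lines could compare stopword-only features with content-word features for answer-type prediction. The questions, answer-type labels, and model choices below are invented.

```python
# Hedged sketch: naive Bayes answer-type classification, once with features
# restricted to stopwords and once with stopwords removed (content words only).
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set for illustration only.
questions = [
    "Who wrote the novel Dracula?",
    "When was the Eiffel Tower built?",
    "Where is the Louvre located?",
    "Who painted the Mona Lisa?",
]
answer_types = ["PERSON", "DATE", "LOCATION", "PERSON"]

# Features restricted to stopwords (wh-words, auxiliaries, prepositions, ...).
stopword_model = make_pipeline(
    CountVectorizer(vocabulary=sorted(ENGLISH_STOP_WORDS)),
    MultinomialNB(),
).fit(questions, answer_types)

# Features restricted to content words, as after keyword-style query preprocessing.
content_model = make_pipeline(
    CountVectorizer(stop_words="english"),
    MultinomialNB(),
).fit(questions, answer_types)

print(stopword_model.predict(["Who discovered penicillin?"]))  # likely 'PERSON'
print(content_model.predict(["Who discovered penicillin?"]))
```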