    Code or (Not Code): Separating Formal and Natural Language in CS Education

    This paper argues that the "institutionalised understanding" of pseudo-code as a blend of formal and natural languages makes it an unsuitable choice for national assessment where the intention is to test program comprehension skills. This blend allows question-setters to inadvertently introduce ambiguity and consequent confusion. That is not in keeping with good assessment practice, nor with an argument developed in the paper that CS education should clearly foster the skills needed for understanding formal, as distinct from natural, languages. The argument is backed up by an analysis of 49 questions drawn from the national school CS examinations of a single country, spanning a period of six years and two phases -- the first in which no formal pseudo-code was defined, the second in which a formal reference language, referred to as a "formally-defined pseudo-code", was provided for teachers and exam setters. The analysis demonstrates that in both phases, incorrect, confusing or ambiguous code was presented in questions. The paper concludes by recommending that the term "reference language" be used in place of "pseudo-code", and an appropriate formally-defined language specified, in national exam settings where a common language of assessment is required. This change of terms emphasises the characteristics required of a language to be used for assessment of program comprehension. The reference language used in the study is outlined. It was designed by the authors for human readability and also to make absolutely explicit the demarcation between formal and informal language, in such a way that automated checking can be carried out on programs written in the language. Formal specifications and a checker for the language are available.
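
    The claim that automated checking is possible hinges on the language making the formal/informal split machine-detectable. Below is a minimal sketch of such a checker, assuming a hypothetical reference language in which informal text is only legal after a "--" marker and formal statements must match a small fixed grammar; the keywords and patterns are illustrative, not the authors' actual specification.

    import re

    # Hypothetical grammar: the handful of statement forms permitted by this
    # toy reference language. Anything else on a formal line is an error.
    FORMAL_PATTERNS = [
        re.compile(r"^SET [A-Za-z_]\w* TO .+$"),
        re.compile(r"^IF .+ THEN$"),
        re.compile(r"^ELSE$"),
        re.compile(r"^END IF$"),
        re.compile(r"^OUTPUT .+$"),
    ]

    def check_program(source: str) -> list[str]:
        """Report lines that are neither well-formed formal statements
        nor explicitly marked informal text (after the '--' marker)."""
        errors = []
        for lineno, raw in enumerate(source.splitlines(), start=1):
            line = raw.split("--", 1)[0].strip()  # drop the informal trailer
            if not line:
                continue  # blank or purely informal line
            if not any(p.match(line) for p in FORMAL_PATTERNS):
                errors.append(f"line {lineno}: not a formal statement: {line!r}")
        return errors

    program = "SET total TO 0\nadd one to total  -- natural language posing as code\n"
    print(check_program(program))  # flags line 2 as neither formal nor marked informal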

    Improving Assessment of Students through Semantic Space Construction

    Assessment is one of the hardest tasks an Intelligent Tutoring System has to perform. It involves different and sometimes uncorrelated sub-tasks: building a student model to define her needs, defining tools and procedures to perform tests, understanding students' replies to system prompts, defining suitable procedures to evaluate the correctness of those replies, and devising strategies to improve students' abilities after the assessment session. In this work we present an improvement of our system, TutorJ, with particular attention to the assessment phase. Many tutoring systems offer only a limited set of assessment options, such as multiple-choice questions, fill-in-the-blank tests, or other types of predefined replies obtained through graphical widgets (radio buttons, text areas). This limited set of solutions makes interaction poor and unable to satisfy users' needs. Our interest is to enrich interaction with dialogue in natural language. In this respect, the assessment problem is closely connected to natural language understanding: the preliminary step is to understand the questions and replies of the student. We have revised the system design within the framework of a cognitive architecture, with a double aim: reducing the effort needed to construct the knowledge base and improving the system's capabilities in the assessment process. To this end, a new common semantic space has been defined and implemented. The entire architecture is oriented towards intuitive and natural interaction.
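
    One common way to realise such a shared semantic space, offered here as a hedged sketch rather than TutorJ's actual implementation, is to project the reference answer and the student's free-text reply into the same vector space and use their similarity as a correctness signal.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def score_reply(reference_answer: str, student_reply: str) -> float:
        """Embed both texts in one TF-IDF space and return their cosine
        similarity as a crude correctness score in [0, 1]."""
        vectors = TfidfVectorizer().fit_transform([reference_answer, student_reply])
        return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

    reference = "A stack is a last-in, first-out data structure."
    reply = "It stores items so the last one pushed is the first one popped."
    print(f"similarity: {score_reply(reference, reply):.2f}")

    A production system would replace TF-IDF with richer sentence embeddings, but the principle is the same: both utterances live in one semantic space, so evaluation reduces to a distance computation.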

    Mixed Language Speech Recognition

    Users who provide spoken input in a mix of languages are common in many geographies and application domains. Automatic speech recognition (ASR) in such a context requires multiple natural language understanding (NLU) models to be run in parallel and their outputs to be combined. This disclosure describes techniques to improve the performance of such ASR models through a ranking unit that determines the language and assesses whether the voice input makes sense. A response to the query is provided to the user in the language determined by the ranking unit.
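
    A hedged sketch of that pipeline follows, with stub callables standing in for real per-language ASR/NLU models; the combination weights and the stub outputs are illustrative assumptions, not part of the disclosure.

    from concurrent.futures import ThreadPoolExecutor
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        language: str
        text: str
        asr_confidence: float  # recognition confidence from the per-language model
        lm_score: float        # "does this make sense" plausibility score

    def recognize_all(audio, recognizers):
        # Run one recognizer per candidate language in parallel.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(lambda rec: rec(audio), recognizers))

    def rank(hypotheses):
        # Toy ranking unit: blend recognition confidence with plausibility.
        # The 0.6/0.4 weights are arbitrary illustrative choices.
        return max(hypotheses, key=lambda h: 0.6 * h.asr_confidence + 0.4 * h.lm_score)

    # Stub recognizers standing in for real per-language ASR + NLU pipelines.
    recognizers = [
        lambda audio: Hypothesis("en", "play the next song", 0.71, 0.90),
        lambda audio: Hypothesis("hi", "agla gaana chalao", 0.64, 0.95),
    ]
    best = rank(recognize_all(b"...", recognizers))
    print(best.language, best.text)  # respond in the language the ranker chose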

    Reading Comprehension Assessment Using LLM-based Chatbot

    Users who are learning to read, or learning about a topic by viewing content on a device, can benefit from conversational activities such as question-answer turns about the viewed content. This disclosure describes techniques to perform natural language assessments of content that is being consumed on a user device. A chatbot is implemented using suitable technology, such as a large language model. With user permission, the model is used to generate questions that evaluate the user's understanding of the content viewed. User-provided answers are evaluated, and suitable responses are provided to the user. The techniques enable automated assessment and feedback. The described features for assessment via chatbot (or virtual assistant) can be built into any application. Assessment is performed on-device and in a confidential manner.
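
    A minimal sketch of the question/answer loop described above, assuming a placeholder generate() call that stands in for whatever on-device LLM runtime is actually used; the prompts and function names are illustrative.

    def generate(prompt: str) -> str:
        """Placeholder for an on-device LLM call; swap in the real local
        runtime here. Hypothetical API, not a specific library's."""
        raise NotImplementedError

    def make_question(passage: str) -> str:
        # Ask the model for one comprehension question about the passage.
        return generate(
            "Write one comprehension question about the passage below.\n\n"
            f"Passage:\n{passage}"
        )

    def evaluate_answer(passage: str, question: str, answer: str) -> str:
        # Ask the model to judge the answer and give brief feedback.
        return generate(
            "Given the passage, the question, and the learner's answer, say "
            "whether the answer is correct and give one sentence of feedback.\n\n"
            f"Passage:\n{passage}\n\nQuestion: {question}\nAnswer: {answer}"
        )

    # Usage: q = make_question(passage); feedback = evaluate_answer(passage, q, reply)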

    Relevant Scenario Generation for an Intelligent Transport Service

    This paper addresses risk assessment issues that arise while designing complex systems. Project stakeholders must share the same understanding of the problems in order to make rational, optimal decisions. We propose an approach based on Natural Language Processing (NLP) techniques to improve system requirements quality attributes such as consistency and completeness. We assess the relevance of our approach through experiments and through feedback from project stakeholders and players.
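
    One simple instance of such an NLP quality check, offered as an assumption-laden sketch rather than the authors' method, is flagging vague wording that undermines the consistency and completeness of requirement statements.

    import re

    # Words and phrases that commonly signal ambiguity in requirements
    # (an illustrative list, not an exhaustive standard).
    VAGUE_TERMS = {"appropriate", "adequate", "fast", "user-friendly",
                   "as needed", "and/or", "sufficient"}

    def audit_requirement(text: str) -> list[str]:
        """Flag vague terms that hurt consistency and completeness."""
        words = re.findall(r"[a-z/-]+", text.lower())
        found = [w for w in words if w in VAGUE_TERMS]
        # Multi-word terms need a substring check.
        found += [t for t in VAGUE_TERMS if " " in t and t in text.lower()]
        return found

    req = "The system shall respond fast and log events as needed."
    print(audit_requirement(req))  # ['fast', 'as needed']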

    A Conversational AI Approach to Architecture Framework Reviews

    Architecture framework reviews (AFRs) are valuable tools offered by cloud infrastructure providers to enable customers to validate their implementations against a set of curated architectural best practices. AFRs are currently offered via self-service questionnaires, which tend to be inflexible, or via reviews by human experts, which, although guided, are less accessible and more expensive. This disclosure describes a conversational artificial intelligence (AI) interface (chatbot) that enables dialog-based architecture framework reviews and alignment assessment. The described automated self-service tool has natural language capabilities that enable dialog-based interaction and guidance, and it draws from a pool of static questions to pose architecture framework review questions (AFRQs). The user provides answers in a natural language, unstructured format. The answers are interpreted by AI using natural language understanding (NLU) and mapped to a level of alignment or maturity. Since NLU is used, neither an exact text match nor a static logical mapping is required.
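
    A toy sketch of the answer-to-maturity mapping, using token overlap in place of a real NLU model; the levels, descriptors, and scoring rule are invented for illustration.

    # Illustrative level descriptors for one hypothetical AFRQ about
    # backup and recovery practices.
    LEVEL_DESCRIPTORS = {
        "high": "automated backups tested restore multi-region failover",
        "medium": "manual backups documented recovery procedure",
        "low": "no backups no recovery plan",
    }

    def assess(answer: str) -> str:
        """Map an unstructured answer to the maturity level whose
        descriptor shares the most tokens with it."""
        tokens = set(answer.lower().replace(".", "").split())
        def overlap(level: str) -> int:
            return len(tokens & set(LEVEL_DESCRIPTORS[level].split()))
        return max(LEVEL_DESCRIPTORS, key=overlap)

    answer = "We take manual backups weekly and keep a documented recovery procedure."
    print(assess(answer))  # medium

    A production system would use a trained NLU model for this mapping, as the disclosure states; the point of the sketch is only that free-text answers are scored against level descriptions rather than matched exactly.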

    Predicting uncertainty and risk in the natural sciences: bridging the gap between academics and industry

    The increase in large-scale disasters in recent years, such as the 2007 floods in the UK, has caused disruption of livelihoods, enormous economic losses, and an increase in fatalities. Losses from natural hazards derive only partially from the physical event itself; they are also caused by society's vulnerability to it. In the first three months of 2010, an unprecedented US$16 billion in losses occurred from natural hazards caused by events such as the Haiti and Chilean earthquakes and the European storm Xynthia. This made it the worst ever first quarter for natural hazard losses and left the insurance industry financially exposed to the more loss-prone third and fourth quarters. NERC science has a central role to play in the forecasting and mitigation of natural hazards. Research in this area forms the basis for technological solutions to early warning systems, for designing mitigation strategies, and for providing critical information that helps decision makers save lives and avoid economic losses. Understanding uncertainty is essential if reliable forecasting and risk assessments are to be made. However, the quantification and assessment of uncertainty in natural hazards has in general been limited, particularly in terms of model limitations and multiplicity. There are several reasons for this, most notably the fragmented nature of natural hazard research, which is split both across science areas and between research, risk management, and policy. Because of this, each sector has developed its own concepts and language, which has acted as a barrier to effective communication and prevented the production of generic methods with the potential to be used across sectors. It is clear, therefore, that by bringing the natural hazard community together, significant breakthroughs in the visualisation and understanding of risk and uncertainty could be achieved. To accomplish this, the research programme has four prime objectives: (1) to improve communication and networking between researchers and risk managers within the financial services sector; (2) to provide a platform for the dissemination of information on uncertainty and risk analysis between a range of researchers and practitioners; (3) to generate a portfolio of best practice in uncertainty and risk analysis; and (4) to act as a focal point between the financial sector and natural hazard research in NERC. This paper discusses how the Natural Environment Research Council, in partnership with other organisations such as the TSB, the EA, and the EPSRC, is working with academics and industry to bring about a step change in the way that uncertainty and risk assessments are achieved throughout the natural hazard community.