9,589 research outputs found

    In search of finalizing and validating digital learning tools supporting all in acquiring full literacy

    Contrary to what many believe, an accurate and fluent basic reading skill (i.e., the ability to decode text) is not enough for learning knowledge through reading. More than 10 years ago, a digital learning game supporting the first step towards full literacy, GraphoGame (GG), was developed by the first author with his colleagues at the University of Jyväskylä, Finland. It trains the acquisition of basic reading skills, i.e., learning to sound out written language. Now that almost everyone in the world has an opportunity to use GG, it is time to start supporting the acquisition of full literacy (FL). FL is necessary for efficient learning in school, where successful reading of schoolbooks is essential. The present plan aims to help almost all readers globally, whatever the orthography, as early as possible: in the grade during which children have mastered the basic reading skill, they should immediately take the next step towards FL. Contrary to common belief, support for FL is most needed among those who read transparent orthographies (the majority of readers of alphabetic writing systems), which are easier to sound out due to the consistency between spoken and written units at the grapheme-phoneme level. This consistency enables readers to sound out any pronounceable written item with only a little help from knowing what it means. Children therefore tend not to pay enough attention to meaning, concentrating instead on decoding the text letter by letter. They should instead learn from the beginning to approach the goal of reading: mediation of the meaning of the text. Readers of non-transparent English, by contrast, need to attend to morphology for correct sounding. The continuing fall of results in the OECD's Programme for International Student Assessment (PISA), e.g., in Finland, reveals that boys especially are no longer interested in reading outside school, which would be a natural way to reach the main goal of reading, FL.
What could be a better way to help boys towards FL than motivating them to play computer games that require reading comprehension? The new digital ComprehensionGame, designed by the first author, motivates pupils to read effectively while concurrently raising their school achievement by connecting the training to daily reading lessons. This article describes our efforts to elaborate and validate this new digital tool, starting from the populations of learners who need it most in Africa and in Finland.

    An exploration of the attitudes and beliefs of teacher trainers and teacher trainees concerning the use of the L1 in the EFL classroom

    One important conflict within English language teaching methodology concerns the use or exclusion of learners' first languages (L1) when learning English. Perspectives on the topic range from favouring complete avoidance of the L1 in the EFL classroom, constantly striving for an exclusively L2 classroom, to believing in the value and learning benefit of allowing and, to some extent, encouraging the use of all manner of languages available to the learner. This thesis used interviews and surveys to provide an in-depth exploration of the attitudes and beliefs of teacher trainers and teacher trainees in North Rhine-Westphalia concerning the use of the L1, as well as other potential languages, in the English language classroom. Although the two groups of participants held many similar attitudes and beliefs concerning L1 use, some significant and interesting differences were found. Teacher trainees showed themselves to be more open to the use of the L1 than their more experienced counterparts. It remains unclear, however, exactly what the reason for these differences is. A further aspect which became apparent is how strongly the pressures of language choice and of exclusive L2 instruction in the EFL classroom during observed and examination lessons are felt by teacher trainees, potentially adding to the overall burden of the teacher training period in NRW. The thesis concludes that an increase in evidence-based teacher education, concerning not only L1 use in the EFL classroom but also many other aspects of language teaching, could be prudent for the continued development of well-informed best-practice approaches. This thesis holds the standpoint that complete eradication of the L1 in the EFL classroom is counterproductive to successful language learning.
Judicious use of the L1 and the development of a more plurilinguistic attitude to language learning, enabling learners to make use of any available linguistic resources, can offer both learners and teachers helpful scaffolding which can facilitate the successful learning of further languages.

    Knowledge Graph Building Blocks: An easy-to-use Framework for developing FAIREr Knowledge Graphs

    Knowledge graphs and ontologies provide promising technical solutions for implementing the FAIR Principles for Findable, Accessible, Interoperable, and Reusable data and metadata. However, they also come with their own challenges. Nine such challenges are discussed and associated with the criterion of cognitive interoperability and the specific FAIREr principles (FAIR + Explorability raised) that they fail to meet. We introduce an easy-to-use, open-source knowledge graph framework that is based on knowledge graph building blocks (KGBBs). KGBBs are small information modules for knowledge processing, each based on a specific type of semantic unit. By interrelating several KGBBs, one can specify a KGBB-driven FAIREr knowledge graph. Besides implementing semantic units, the KGBB Framework clearly distinguishes and decouples an internal in-memory data model from data storage, data display, and data access/export models. We argue that this decoupling is essential for solving many problems of knowledge management systems. We discuss the architecture of the KGBB Framework as we envision it, comprising (i) an openly accessible KGBB-Repository for different types of KGBBs; (ii) a KGBB-Engine for managing and operating FAIREr knowledge graphs (including automatic provenance tracking, an editing changelog, and versioning of semantic units); (iii) a repository for KGBB-Functions; and (iv) a low-code KGBB-Editor with which domain experts can create new KGBBs and specify their own FAIREr knowledge graph without having to think about semantic modelling. We conclude by discussing the nine challenges and how the KGBB Framework provides solutions for the issues they raise. While most of what we discuss here is entirely conceptual, we can point to two prototypes that demonstrate the feasibility in principle of using semantic units and KGBBs to manage and structure knowledge graphs.
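The core ideas of the abstract — small modules each tied to one type of semantic unit, composed into a graph whose in-memory model is decoupled from storage and display — can be sketched roughly as below. This is a minimal illustration, not the KGBB Framework's actual API; the class and field names are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticUnit:
    """Smallest statement unit: a single subject-predicate-object triple."""
    subject: str
    predicate: str
    obj: str

@dataclass
class KGBB:
    """A knowledge graph building block: a small module that handles
    statements of one specific semantic-unit type (names are illustrative)."""
    name: str
    unit_type: str
    units: list = field(default_factory=list)

    def add(self, subject: str, predicate: str, obj: str) -> None:
        self.units.append(SemanticUnit(subject, predicate, obj))

class KnowledgeGraph:
    """In-memory data model only; storage, display, and access/export
    would live in separate, decoupled adapters, as the framework proposes."""
    def __init__(self):
        self.blocks = {}

    def register(self, block: KGBB) -> None:
        self.blocks[block.name] = block

    def triples(self):
        """Flatten all registered building blocks into plain triples,
        e.g. for export to a triple store."""
        return [(u.subject, u.predicate, u.obj)
                for b in self.blocks.values() for u in b.units]
```

Because the graph only ever sees building blocks, swapping the export format or the storage backend would not touch the KGBBs themselves, which is the decoupling the authors argue for.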

    An end-to-end, interactive Deep Learning based Annotation system for cursive and print English handwritten text

    With the surging inclination towards carrying out tasks on computational devices and digital mediums, any method that converts a previously manual task into a digitized version is welcome. Despite the many documentation tasks that can be done online today, there are still many applications and domains where handwritten text is inevitable, which makes the digitization of handwritten documents an essential task. Over the past decades, there has been extensive research on offline handwritten text recognition. In the recent past, most of these attempts have shifted to machine learning and deep learning based approaches. In order to design more complex and deeper networks, and to ensure stellar performance, it is essential to have larger quantities of annotated data. Most of the databases available for offline handwritten text recognition today have either been manually annotated or semi-automatically annotated with substantial manual involvement. These processes are very time-consuming and prone to human error. To tackle this problem, we present an innovative, complete end-to-end pipeline that annotates offline handwritten manuscripts written in both print and cursive English, using deep learning and user interaction techniques. This novel method, which architecturally combines a detection system built upon a state-of-the-art text detection model with a custom-made deep learning model for the recognition system, is paired with an easy-to-use interactive interface, aiming to improve the accuracy of the detection, segmentation, serialization, and recognition phases in order to ensure high-quality annotated data with minimal human interaction. Comment: 17 pages, 8 figures, 2 tables
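The detect-recognize-review loop described above can be sketched as follows. The model stages are stubbed out with hard-coded stand-ins so the sketch is runnable; the real system plugs a state-of-the-art text detector and a custom recognition network into these slots, and the function names are assumptions, not the paper's actual interface.

```python
def detect(image):
    """Stand-in for the text-detection stage; returns word regions
    (bounding box plus cropped image) in reading order."""
    # Hard-coded regions so the sketch runs without a trained model.
    return [{"box": (0, 0, 40, 12), "crop": "hello"},
            {"box": (44, 0, 88, 12), "crop": "world"}]

def recognize(crop):
    """Stand-in for the recognition model; here it simply echoes the crop."""
    return crop

def annotate(image, review=lambda box, text: text):
    """One end-to-end pass: detect regions, recognize each one, then let a
    human reviewer confirm or correct the prediction before it is stored
    as ground truth. The default reviewer accepts every prediction."""
    annotations = []
    for region in detect(image):
        predicted = recognize(region["crop"])
        corrected = review(region["box"], predicted)   # interactive step
        annotations.append({"box": region["box"], "text": corrected})
    return annotations
```

The point of the design is that the human only edits the (usually small) fraction of predictions that are wrong, rather than transcribing every word from scratch.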

    Technical Dimensions of Programming Systems

    Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art. In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too. We present a framework of technical dimensions which captures the underlying characteristics of programming systems and provides a means for conceptualizing and comparing them. We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared, and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated, rather than relying solely on personal impressions. Our framework is derived using a qualitative analysis of past programming systems. We outline two concrete ways of using our framework. First, we show how it can be used to analyze a recently developed novel programming system. Then, we use it to identify an interesting unexplored point in the design space of programming systems. Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable, and/or powerful, but such efforts are disconnected. They are informal, guided by the personal vision of their authors, and can thus only be evaluated and compared on the basis of individual experience of using them.
By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.
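The comparison the framework enables can be illustrated with a toy sketch: each system is a point in a space of named dimensions, and two systems can be compared dimension by dimension. The dimension names and values below are invented for illustration and are not the paper's actual catalogue of technical dimensions.

```python
# Illustrative dimensions; the paper's real framework defines its own set.
DIMENSIONS = ("feedback-loop", "state-persistence", "notation", "error-handling")

def differing_dimensions(a: dict, b: dict) -> list:
    """Return the dimensions on which two systems take different values,
    making a comparison explicit instead of impressionistic."""
    return [d for d in DIMENSIONS if a.get(d) != b.get(d)]

# Two caricatured example systems described as points in the dimension space.
smalltalk = {"feedback-loop": "immediate", "state-persistence": "image-based",
             "notation": "textual", "error-handling": "live-debugger"}
unix_c = {"feedback-loop": "edit-compile-run", "state-persistence": "file-based",
          "notation": "textual", "error-handling": "crash-and-inspect"}
```

An unexplored point in the design space is then simply a combination of dimension values that no surveyed system occupies.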

    DIN Spec 91345 RAMI 4.0 compliant data pipelining: An approach to support data understanding and data acquisition in smart manufacturing environments

    Today, data scientists in the manufacturing domain are confronted with a set of challenges associated with data acquisition as well as data processing, including the extraction of valuable information to support both the operation of the manufacturing equipment and the manufacturing processes behind it. One essential aspect of data acquisition is the pipelining, which involves various communication standards, protocols, and technologies for saving and transferring heterogeneous data. These circumstances make it hard to understand, find, access, and extract data from the sources, depending on use cases and applications. To support this data pipelining process, this thesis proposes the use of a semantic model. The selected semantic model should be able to describe the smart manufacturing assets themselves as well as the access to their data along their life cycle. Many research contributions in smart manufacturing have already produced reference architectures, such as RAMI 4.0, or standards for semantic metadata description and asset classification. This research builds upon these outcomes and introduces a novel semantic-model-based data pipelining approach that uses the Reference Architecture Model for Industry 4.0 (RAMI 4.0) as its basis, taking the smart manufacturing domain as an exemplary use case to enable easy exploration, understanding, discovery, selection, and extraction of data.
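The role the semantic model plays in the pipeline — describing each asset so that its data sources can be found without inspecting every machine by hand — can be sketched roughly as below. The field names loosely follow the RAMI 4.0 axes but are assumptions for illustration, not the thesis's actual model.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """Minimal semantic description of a manufacturing asset.
    Field names are illustrative, loosely echoing the RAMI 4.0 axes."""
    asset_id: str
    hierarchy_level: str   # hierarchy axis, e.g. "Field Device", "Station"
    life_cycle: str        # life-cycle axis: "type" or "instance"
    endpoints: dict        # protocol -> address, e.g. {"opcua": "opc.tcp://..."}

def find_endpoints(assets, protocol):
    """Discover which assets expose data over a given protocol, so a data
    pipeline can be wired up from the semantic descriptions alone."""
    return {a.asset_id: a.endpoints[protocol]
            for a in assets if protocol in a.endpoints}
```

Given such descriptions, a data scientist can query for, say, all OPC UA endpoints and feed them to an acquisition pipeline, instead of reverse-engineering each machine's communication setup individually.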