    Tracing the creation and evaluation of accessible Open Educational Resources through learning analytics

    The adoption of Open Educational Resources (OER) has been growing continuously, and with it the need to address the diversity of students’ learning needs. OER should therefore exhibit characteristics such as web accessibility and quality, and teachers, as the creators of OER, need supporting tools and specialized competences. The main contribution of this thesis is LAMTCE, a Learning Analytics Model to Trace the Creation and Evaluation of OER that takes web accessibility and quality into account. LAMTCE also includes a user model of the teacher’s competences in the creation and evaluation of OER. In addition, we developed ATCE, a learning analytics tool based on the LAMTCE model. Finally, we carried out an evaluation with teachers involving the use of the tool, and we found that the tool genuinely helped teachers acquire competences in the creation and evaluation of accessible, high-quality OER.
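
    To make the user-model idea concrete, here is a minimal Python sketch of how a teacher-competence model like the one LAMTCE describes could be represented. All names (TeacherModel, CompetenceRecord, record_evidence) and the update rule are illustrative assumptions, not taken from the thesis.

        # Hypothetical sketch of a competence user model in the spirit of
        # LAMTCE; field names and the update rule are illustrative only.
        from dataclasses import dataclass, field

        @dataclass
        class CompetenceRecord:
            """Tracks one teacher competence, e.g. 'adds alt text to images'."""
            name: str
            evidence_count: int = 0   # traced authoring actions supporting it
            level: float = 0.0        # 0.0 (novice) .. 1.0 (proficient)

        @dataclass
        class TeacherModel:
            teacher_id: str
            competences: dict = field(default_factory=dict)

            def record_evidence(self, competence: str, weight: float = 0.1) -> None:
                # Each traced authoring event nudges the competence level up.
                rec = self.competences.setdefault(
                    competence, CompetenceRecord(competence))
                rec.evidence_count += 1
                rec.level = min(1.0, rec.level + weight)

        model = TeacherModel("t-001")
        model.record_evidence("image-alt-text")   # traced from an authoring event
        model.record_evidence("heading-structure")
        print(model.competences["image-alt-text"].level)  # 0.1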

    Mapping the Current Landscape of Research Library Engagement with Emerging Technologies in Research and Learning: Final Report

    The generation, dissemination, and analysis of digital information is a significant driver, and consequence, of technological change. As data and information stewards in physical and virtual space, research libraries are thoroughly entangled in the challenges presented by the Fourth Industrial Revolution: a societal shift powered not by steam or electricity, but by data, and characterized by a fusion of the physical and digital worlds. Organizing, structuring, preserving, and providing access to growing volumes of the digital data generated and required by research and industry will become a critically important function. As partners with the community of researchers and scholars, research libraries are also recognizing and adapting to the consequences of technological change in the practices of scholarship and scholarly communication. Technologies that have emerged or become ubiquitous within the last decade have accelerated information production and have catalyzed profound changes in the ways scholars, students, and the general public create and engage with information. The production of an unprecedented volume and diversity of digital artifacts, the proliferation of machine learning (ML) technologies, and the emergence of data as the “world’s most valuable resource,” among other trends, present compelling opportunities for research libraries to contribute in new and significant ways to the research and learning enterprise.

    Librarians are all too familiar with predictions of the research library’s demise in an era when researchers have so much information at their fingertips. A growing body of evidence provides a resounding counterpoint: the skills, experience, and values of librarians, and the persistence of libraries as an institution, will become more important than ever as researchers contend with the data deluge and the ephemerality and fragility of much digital content.

    This report identifies strategic opportunities for research libraries to adopt and engage with emerging technologies, with a roughly five-year time horizon. It considers the ways in which research library values and professional expertise inform and shape this engagement, the ways library and library worker roles will be reconceptualized, and the implications of a range of technologies for how the library fulfills its mission. The report builds on a literature review covering the last five years of published scholarship, primarily North American information science literature, and interviews with a dozen library field experts, completed in fall 2019. It begins with a discussion of four cross-cutting opportunities that permeate many or all aspects of research library services. Next, specific opportunities are identified in each of five core research library service areas: facilitating information discovery, stewarding the scholarly and cultural record, advancing digital scholarship, furthering student learning and success, and creating learning and collaboration spaces. Each section identifies key technologies shaping user behaviors and library services, and highlights exemplary initiatives.

    Underlying much of the discussion in this report is the idea that “digital transformation is increasingly about change management”: adoption of or engagement with emerging technologies must be part of a broader strategy for organizational change, for “moving emerging work from the periphery to the core,” and part of a broader shift in conceptualizing the research library and its services. Above all, libraries are benefiting from the ways in which emerging technologies offer opportunities to center users and to move from a centralized and often siloed service model to embedded, collaborative engagement with the research and learning enterprise.

    Obvious: a meta-toolkit to encapsulate information visualization toolkits. One toolkit to bind them all

    This article describes “Obvious”: a meta-toolkit that abstracts and encapsulates information visualization toolkits implemented in the Java language. It aims to unify their use and to postpone the choice of a concrete toolkit until later in the development of visual analytics applications. We also report the lessons we learned when wrapping popular toolkits with Obvious, namely Prefuse, the InfoVis Toolkit, partly Improvise, JUNG, and other data management libraries. We show several examples of the use of Obvious and of how the different toolkits can be combined, for instance by sharing their data models. We also show how Weka and RapidMiner, two popular machine-learning toolkits, have been wrapped with Obvious and can be used directly with all the other wrapped toolkits. We expect Obvious to start a co-evolution process: Obvious is meant to evolve as more components of information visualization systems become consensual. It is also designed to help information visualization systems adhere to best practices, providing a higher level of interoperability and leveraging the domain of visual analytics.
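
    Obvious itself is a Java library, but the wrapping idea can be illustrated with a short Python sketch: a shared, toolkit-agnostic data model behind an adapter interface, so the concrete toolkit can be chosen, or swapped, late in development. All names below (Table, Visualization, the two adapters) are hypothetical stand-ins, not Obvious’s actual API.

        # Minimal sketch of the meta-toolkit idea: one abstract data model,
        # several toolkit-specific adapters sharing it. Names are illustrative.
        from abc import ABC, abstractmethod

        class Table(ABC):
            """Toolkit-agnostic tabular data model shared by wrapped toolkits."""
            @abstractmethod
            def rows(self) -> list[dict]: ...

        class ListTable(Table):
            def __init__(self, rows: list[dict]):
                self._rows = rows
            def rows(self) -> list[dict]:
                return self._rows

        class Visualization(ABC):
            """Adapter boundary: concrete toolkits hide behind this API."""
            def __init__(self, table: Table):
                self.table = table
            @abstractmethod
            def render(self) -> str: ...

        class BarChartAdapterA(Visualization):  # would wrap e.g. Prefuse
            def render(self) -> str:
                return f"toolkit-A bars over {len(self.table.rows())} rows"

        class BarChartAdapterB(Visualization):  # would wrap e.g. the InfoVis Toolkit
            def render(self) -> str:
                return f"toolkit-B bars over {len(self.table.rows())} rows"

        data = ListTable([{"pkg": "core", "loc": 1200}, {"pkg": "ui", "loc": 800}])
        for viz in (BarChartAdapterA(data), BarChartAdapterB(data)):
            print(viz.render())  # two toolkits, one shared data model

    The design point this sketch tries to capture is late binding: application code programs against the abstract boundary, so a different wrapped toolkit can be substituted without rewriting the data layer.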

    A Review and Analysis of Process at the Nexus of Instructional and Software Design

    This dissertation includes a literature review and a single-case analysis at the nexus of instructional design and technology and software development. The purpose of this study is to explore the depth and breadth of educational software design and development processes, and of educational software reuse, with the intent of uncovering barriers to software development, software reuse, and software replication in educational contexts. First, a thorough review of the academic literature was conducted on a representative sample of educational technology studies. An examination of a 15-year period within four representative journals identified 72 studies that addressed educational software to some extent; a further sampling of those results identified 50 studies that discussed the software development process. These were then analyzed for evidence of software reuse and replication. The review found a lack of reusable and/or replication-focused reports of instructional software development in educational technology journals, but some reporting of educational technology reuse and replication in articles outside of educational technology. Based on the analysis, possible reasons for this are discussed. The author then proposes how a model for conducting and presenting instructional software design and development research, based on the constructs of design-based research and cultural-historical activity theory, might help close this gap. Finally, the author presents a qualitative analysis of the software development process within a large, design-based educational technology project, using cultural-historical activity theory (CHAT) as a lens. Using CHAT, the author seeks to uncover contradictions between the working worlds of instructional design and technology and software development, with the intent of demonstrating how to mitigate tensions between these systems and, ultimately, of increasing the likelihood of reusable and replicable educational technologies. Findings reveal myriad tensions and social contradictions centered on the translation of instructional goals and requirements into software design and development tasks. Based on these results, the researcher proposes an educational software development framework, the iterative and integrative instructional software design framework, that may help alleviate these tensions and thus make educational software design and development more productive, transparent, and replicable.

    StreamingHub: Interactive Stream Analysis Workflows

    Reusable data/code and reproducible analyses are foundational to quality research. This aspect, however, is often overlooked when designing interactive stream analysis workflows for time-series data (e.g., eye-tracking data). A mechanism to transmit informative metadata alongside data may allow such workflows to intelligently consume data, propagate metadata to downstream tasks, and thereby auto-generate reusable, reproducible analytic outputs with zero supervision. Moreover, a visual programming interface to design, develop, and execute such workflows may allow rapid prototyping for interdisciplinary research. Capitalizing on these ideas, we propose StreamingHub, a framework for building metadata-propagating, interactive stream analysis workflows using visual programming. We conduct two case studies to evaluate the generalizability of our framework, and in parallel use two heuristics to evaluate the workflows’ computational fluidity and data growth. Results show that our framework generalizes to multiple tasks with minimal performance overhead.
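
    The core mechanism, metadata traveling alongside data through every workflow node, can be sketched briefly in Python. This is an assumption-laden illustration, not the StreamingHub API; the names (node, Sample, the provenance field) are invented for the example.

        # Sketch of metadata propagation: each sample is a (data, metadata)
        # pair, and every node copies metadata forward with a provenance note.
        from typing import Callable, Iterable, Iterator, Tuple

        Sample = Tuple[dict, dict]  # (data, metadata)

        def node(name: str, fn: Callable[[dict], dict]):
            """Wrap a plain function so it propagates metadata downstream."""
            def run(stream: Iterable[Sample]) -> Iterator[Sample]:
                for data, meta in stream:
                    out_meta = {**meta,
                                "provenance": meta.get("provenance", []) + [name]}
                    yield fn(data), out_meta
            return run

        source = [({"gaze_x": 0.42},
                   {"stream": "eye-tracker", "rate_hz": 120, "provenance": []})]
        smooth = node("smooth", lambda d: {"gaze_x": round(d["gaze_x"], 1)})
        for data, meta in smooth(source):
            print(data, meta["provenance"])  # {'gaze_x': 0.4} ['smooth']

    Because every output carries its full upstream description plus a processing history, downstream tools could in principle regenerate captions, units, and reproducibility records without manual annotation, which is the zero-supervision property the abstract claims.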

    The Medium of Visualization for Software Comprehension

    Although numerous studies have shown how visualization can help software developers understand software systems, visualization is still not a common practice, since developers (i) have little support for finding a visualization that fits their needs and, once they find a suitable visualization tool, (ii) are unsure of its effectiveness. We aim to offer support for identifying proper visualizations and to increase the effectiveness of visualization techniques. In this dissertation, we characterize proposed software visualizations. To fill the gap between proposed visualizations and their practical application, we encapsulate those characteristics in an ontology and propose a meta-visualization approach for finding suitable visualizations. Among other characteristics of software visualizations, we identify that the medium used to display them can be a means to increase the effectiveness of visualization techniques for particular comprehension tasks. We implement visualization prototypes and validate our thesis via experiments. We found that even though developers using a physical 3D model medium required the least time for tasks that involve identifying outliers, they perceived the least difficulty when visualizing systems on the standard computer screen medium. Moreover, developers using immersive virtual reality obtained the highest recollection. We conclude that the effectiveness of software visualizations that use the city metaphor to support comprehension tasks can be increased when city visualizations are rendered in an appropriate medium, and, furthermore, that visualizing software visualizations themselves can be a suitable means of exploring their many characteristics, which can be properly encapsulated in an ontology.
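
    For readers unfamiliar with the city metaphor mentioned above: classes are drawn as buildings whose dimensions encode source-code metrics. The short Python sketch below uses one common mapping (method count to height, attribute count to footprint); this is a generic illustration, not necessarily the exact mapping used in the dissertation’s prototypes.

        # Generic code-city mapping: class metrics -> building dimensions.
        def building_for(cls: dict) -> dict:
            return {
                "name": cls["name"],
                "height": cls["methods"],        # taller = more methods
                "footprint": cls["attributes"],  # wider = more attributes
            }

        classes = [
            {"name": "Parser", "methods": 42, "attributes": 5},
            {"name": "Token", "methods": 3, "attributes": 2},
        ]
        for b in map(building_for, classes):
            print(f'{b["name"]}: height={b["height"]}, footprint={b["footprint"]}')
        # A rendering layer (screen, immersive VR, or a printed 3D model)
        # would then draw these buildings; the dissertation compares
        # comprehension across exactly such media.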

    A Framework for Seamless Variant Management and Incremental Migration to a Software Product-Line

    Context: Software systems often need to exist in many variants in order to satisfy varying customer requirements and operate under varying software and hardware environments. These variant-rich systems are most commonly realized using cloning, a convenient approach that creates new variants by reusing existing ones. Cloning is readily available; however, its non-systematic reuse leads to difficult maintenance. An alternative strategy is adopting platform-oriented development approaches, such as Software Product-Line Engineering (SPLE). SPLE offers systematic reuse and centralized control, and thus easier maintenance. However, adopting SPLE is a risky and expensive endeavor, often relying on significant developer intervention. Researchers have attempted to devise strategies to synchronize variants (change propagation) and to migrate from clone&own to an SPL, but these are limited in accuracy and applicability. Additionally, the process models for SPLE in the literature, as we will discuss, are obsolete and only partially reflect how adoption is approached in industry. Although many agile practices prescribe feature-oriented software development, features are still rarely documented and incorporated during actual development, making SPL migration risky and error-prone.

    Objective: The overarching goal of this PhD is to bridge the gap between clone&own and software product-line engineering in a risk-free, smooth, and accurate manner. Consequently, in the first part of the PhD, we focus on the conceptualization, formalization, and implementation of a framework for migrating from a lean architecture to a platform-based one.

    Method: Our objectives are met by means of (i) understanding the literature relevant to variant management and product-line migration and determining the research gaps, (ii) surveying the dominant process models for SPLE and comparing them against contemporary industrial practices, (iii) devising a framework for incremental SPL adoption, and (iv) investigating the benefit of using features beyond PL migration, namely for facilitating model comprehension.

    Results: Four main results emerge from this thesis. First, we present a qualitative analysis of the state-of-the-art frameworks for change propagation and product-line migration. Second, we compare contemporary industrial practices with those prescribed in the process models for SPL adoption, and provide an updated process model that unifies the two to accurately reflect real practices and guide future practitioners. Third, we devise a framework for incremental migration of variants into a fully integrated platform by exploiting explicitly recorded metadata pertaining to clone and feature-to-asset traceability. Last, we investigate the impact of using different variability mechanisms on the comprehensibility of various model-related tasks.

    Future work: As ongoing and future work, we aim to integrate our framework with existing IDEs and to conduct a developer study to determine the efficiency and effectiveness of using our framework. We also aim to incorporate safe evolution in our operators.
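
    The third result hinges on explicitly recorded clone and feature-to-asset traceability metadata. The Python sketch below shows what such records might minimally look like and how they could seed an incremental merge; the field names and query helper are invented for illustration, not the framework’s actual data model.

        # Hypothetical feature-to-asset traceability records for clone&own
        # variants; an integration step would start from queries like this.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Trace:
            feature: str   # e.g. "CruiseControl"
            asset: str     # file or fragment realizing the feature
            variant: str   # the cloned variant the asset lives in

        traces = [
            Trace("CruiseControl", "src/cruise.c", "variant-A"),
            Trace("CruiseControl", "src/cc_ctrl.c", "variant-B"),
            Trace("LaneAssist", "src/lane.c", "variant-B"),
        ]

        def assets_for(feature: str) -> dict[str, str]:
            """Map each variant to its realization of a feature -- the
            starting point for merging clones into one platform."""
            return {t.variant: t.asset for t in traces if t.feature == feature}

        print(assets_for("CruiseControl"))
        # {'variant-A': 'src/cruise.c', 'variant-B': 'src/cc_ctrl.c'}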