
    Teachers Know Best: Making Data Work For Teachers and Students

    The Teachers Know Best research project seeks to encourage innovation in K-12 education by helping product developers, and those who procure resources for teachers, better understand teachers' views. The intent of Making Data Work is to help educators, school leaders, and product developers better understand the challenges teachers face when working with this critical segment of digital instructional tools. More than 4,600 teachers from a nationally representative sample were surveyed about their use of data to drive instruction and their use of these tools. This study focuses on the potential of a specific subset of digital instructional tools: those that help teachers collect and make use of student data to tailor and improve instruction for individual students. The use of data is a crucial component of personalized learning, which ensures that student learning experiences -- what they learn and how, when, and where they learn it -- are tailored to their individual needs, skills, and interests, and enable them to take ownership of their learning. Personalized learning is critical to meeting all students where they are, so they are neither bored with assignments that are too easy nor overwhelmed by work that is too hard.

    Mashing up Visual Languages and Web Mash-ups

    Research on web mashups and visual languages shares an interest in human-centered computing. Both research communities are concerned with supporting programming by everyday, technically inexpert users. Visual programming environments have been a focus for both communities, and we believe that there is much to be gained by further discussion between them. In this paper we explore some connections between web mashups and visual languages, and try to identify what each might be able to learn from the other. Our goal is to establish a framework for a dialog between the communities, and to promote the exchange of ideas and our respective understandings of human-centered computing.
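    The kind of end-user composition the abstract describes can be illustrated with a toy sketch. The feeds, field names, and join key below are illustrative assumptions, inlined so the example is self-contained; a real mashup would fetch them over HTTP from separate web services.

```python
# Two hypothetical web "feeds" (illustrative data, not real services):
prices = {"ABC": 10.0, "XYZ": 25.5}  # e.g. a stock-price API response
news = [("ABC", "ABC ships v2"), ("XYZ", "XYZ earnings beat")]

# The mashup joins the feeds on ticker symbol to produce one merged view,
# the sort of composition a visual environment would let users wire up.
mashup = [{"ticker": t, "price": prices[t], "headline": h}
          for t, h in news if t in prices]
```

    The point of both research threads is that this join-and-present step should be buildable without textual programming at all.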

    A unified view of data-intensive flows in business intelligence systems: a survey

    Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time, operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and the challenges of moving towards next-generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing challenges still to be addressed, and show how current solutions can be applied to address these challenges.
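    The batched ETL pattern the abstract contrasts with runtime integration can be sketched minimally. Everything here is a hypothetical illustration (source names, schema, and the in-memory "warehouse" are assumptions, not the survey's architecture): extract pulls rows from two heterogeneous sources, transform integrates them on a shared key into an analysis-ready schema, and load appends them to the warehouse table.

```python
def extract():
    # Two heterogeneous sources, e.g. a CRM export and a web log (toy data).
    crm = [{"customer": "acme", "revenue_usd": "1200"}]
    weblog = [{"customer": "acme", "visits": 37}]
    return crm, weblog

def transform(crm, weblog):
    # Integrate on the shared key and cast fields to analysis-ready types.
    visits = {row["customer"]: row["visits"] for row in weblog}
    return [{"customer": row["customer"],
             "revenue_usd": float(row["revenue_usd"]),
             "visits": visits.get(row["customer"], 0)}
            for row in crm]

def load(warehouse, rows):
    # In a real DW this would be a bulk insert; here, an in-memory append.
    warehouse.extend(rows)

warehouse = []
load(warehouse, transform(*extract()))
```

    A runtime ("operational") flow differs mainly in that this pipeline runs per request against live sources instead of on a batch schedule against extracts.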

    A Quality Model for Actionable Analytics in Rapid Software Development

    Background: Accessing relevant data on the product, process, and usage perspectives of software as well as integrating and analyzing such data is crucial for getting reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. Also, they considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). 
Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables detecting problems that would take more time to find manually, and adds transparency across the system, process, and usage perspectives.
Comment: This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online.
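    The metric-to-factor aggregation such a quality model implies can be sketched as follows. The metric names, normalizations, and weights below are illustrative assumptions, not the published Q-Rapids model: raw metrics are normalized to [0, 1] and combined into a product or process factor by weighted average.

```python
def factor(metrics, weights):
    """Weighted average of normalized metrics, each already in [0, 1]."""
    total = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total

# Hypothetical normalized metrics gathered from heterogeneous sources:
metrics = {
    "test_coverage": 0.8,    # fraction of lines covered (from CI)
    "build_success": 0.95,   # fraction of green builds (from CI)
    "blocking_issues": 0.6,  # 1 - normalized count of blockers (from tracker)
}

# Illustrative weighting: coverage counts double in this product factor.
code_quality = factor(metrics, {"test_coverage": 2,
                                "build_success": 1,
                                "blocking_issues": 1})
```

    Making the weights and the per-metric interpretations explicit is what lets practitioners trace a low factor value back to a concrete deficiency, e.g. a blocking-code situation.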

    Innovation from user experience in Living Labs: revisiting the ‘innovation factory’-concept with a panel-based and user-centered approach

    This paper focuses on the problem of facilitating sustainable innovation practices with a user-centered approach. We do so by revisiting the knowledge-brokering cycle and Hargadon and Sutton’s ideas on building an ‘innovation factory’ in light of current Living Lab practices. Based on theoretical as well as practical evidence from a case study analysis of the LeYLab Living Lab, it is argued that Living Labs with a panel-based approach can act as innovation intermediaries where innovation takes shape through actual user experience in real-life environments, facilitating all four stages of the knowledge-brokering cycle. This finding is also in line with the recently emerging Quadruple Helix model of innovation, which stresses the crucial role of the end user as a stakeholder throughout the whole innovation process.