    Collaborative Development of Open Educational Resources for Open and Distance Learning

    Open and distance learning (ODL) is mostly characterised by the up-front development of self-study educational resources that have to be paid for over time through use with larger student cohorts (typically in the hundreds per annum) than for conventional face-to-face classes. This different level of up-front investment in educational resources, and increasing pressure to use more expensive formats such as rich media, means that collaborative development is necessary, first to draw on diverse professional skills and second to defray these costs across institutions. The Open University (OU) has over 40 years of experience of using multi-professional course teams to develop courses; of working with a wide range of other institutions to develop educational resources; and of licensing use of its educational resources to other HEIs. Many of these arrangements require formal contracts to work properly and to clearly identify IPR and partner responsibilities. With the emergence of open educational resources (OER) through the use of open licences, the OU and other institutions have been able to experiment with new ways of collaborating on the development of educational resources that are less dependent on tight legal contracts, because each partner effectively grants rights to the others to use the educational resources it supplies through the open licensing (Lane, 2011; Van Dorp and Lane, 2011). This set of case studies examines the many different collaborative models used for developing and using educational resources and explains how open licensing is making it easier to share the effort involved in developing educational resources between institutions, as well as how it may enable new institutions to start up open and distance learning programmes more easily and at less initial cost. It looks at three initiatives involving people from the OU (namely TESSA, LECH-e and openED2.0) and contrasts these with the Peer-2-Peer University and the OER University as exemplars of how OER may change some of the fundamental features of open and distance learning in a Web 2.0 world. It concludes that, while there may be multiple reasons and models for collaborating on the development of educational resources, the very openness provided by open licensing aligns not only with general academic values and practice but also with well-established principles of open innovation in business.

    OSCAR: A Collaborative Bandwidth Aggregation System

    The exponential increase in mobile data demand, coupled with growing user expectation to be connected in all places at all times, has introduced novel challenges for researchers to address. Fortunately, the widespread deployment of various network technologies and the increased adoption of multi-interface-enabled devices have enabled researchers to develop solutions to those challenges. Such solutions aim to exploit the available interfaces on such devices in both solitary and collaborative modes. These solutions, however, have faced a steep deployment barrier. In this paper, we present OSCAR, a multi-objective, incentive-based, collaborative, and deployable bandwidth aggregation system. We present the OSCAR architecture, which introduces no intermediate hardware and requires no changes to current applications or legacy servers. The architecture is designed to automatically estimate the system's context, dynamically schedule connections and/or packets to different interfaces, remain backward compatible with the current Internet architecture, and provide users with incentives for collaboration. We also formulate the OSCAR scheduler as a multi-objective, multi-modal scheduler that maximizes system throughput while minimizing energy consumption or financial cost. We evaluate OSCAR via a Linux implementation, as well as via simulation, and compare our results to the current optimal achievable throughput, cost, and energy consumption. Our evaluation shows that, in the throughput maximization mode, OSCAR provides up to a 150% enhancement in throughput compared to current operating systems, without any changes to legacy servers. Moreover, this performance gain increases further with the availability of connection-resume-supporting or OSCAR-enabled servers, reaching the maximum achievable upper-bound throughput.
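
    As a concrete illustration of the kind of trade-off such a scheduler makes, the following is a minimal, hypothetical Python sketch of a greedy multi-objective assignment of connections to interfaces. It is not the paper's algorithm: the interface model, the utility weights alpha and beta (selecting between throughput, energy, and cost objectives), and the crude load penalty are all assumptions made for illustration.

        # Hypothetical sketch of a multi-objective connection-to-interface
        # scheduler; NOT the OSCAR algorithm, just an illustration of trading
        # throughput against energy and monetary cost.
        from dataclasses import dataclass

        @dataclass
        class Interface:
            name: str
            throughput_mbps: float   # estimated achievable throughput
            energy_per_mb: float     # energy drain per MB on this interface
            cost_per_mb: float       # monetary cost per MB (e.g., cellular)

        def score(iface, alpha, beta):
            # Utility = throughput minus weighted energy and cost penalties;
            # alpha = beta = 0 corresponds to pure throughput maximization.
            return (iface.throughput_mbps
                    - alpha * iface.energy_per_mb
                    - beta * iface.cost_per_mb)

        def schedule(connections, ifaces, alpha=0.0, beta=0.0):
            # Greedily map each connection to the interface with the best
            # utility, shrinking an interface's residual throughput as load
            # is assigned to it (a crude stand-in for congestion).
            residual = {i.name: i.throughput_mbps for i in ifaces}
            assignment = {}
            for conn in connections:
                best = max(ifaces, key=lambda i: score(
                    Interface(i.name, residual[i.name],
                              i.energy_per_mb, i.cost_per_mb), alpha, beta))
                assignment[conn] = best.name
                residual[best.name] *= 0.5
            return assignment

        ifaces = [Interface("wifi", 50.0, 0.02, 0.0),
                  Interface("lte", 20.0, 0.05, 0.01)]
        print(schedule(["c1", "c2", "c3"], ifaces, alpha=100.0))
        # -> {'c1': 'wifi', 'c2': 'wifi', 'c3': 'lte'}

    A real scheduler would estimate these parameters online and reschedule per packet or per connection, as the paper describes; the point here is only the shape of the objective.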

    Theory and Practice of Data Citation

    Citations are the cornerstone of knowledge propagation and the primary means of assessing the quality of research, as well as of directing investments in science. Science is increasingly becoming "data-intensive": large volumes of data are collected and analyzed to discover complex patterns through simulations and experiments, and most scientific reference works have been replaced by online curated datasets. Yet, given a dataset, there is no quantitative, consistent, and established way of knowing how it has been used over time, who contributed to its curation, what results it has yielded, or what value it has. The development of a theory and practice of data citation is fundamental for treating data as first-class research objects with the same relevance and centrality as traditional scientific products. Many works in recent years have discussed data citation from different viewpoints: illustrating why data citation is needed, defining principles and outlining recommendations for data citation systems, and providing computational methods for addressing specific issues of data citation. The current panorama is many-faceted, and an overall view that brings together the diverse aspects of this topic is still missing. Therefore, this paper aims to describe the lay of the land for data citation, from both the theoretical (the why and what) and the practical (the how) angles.

    Comment: 24 pages, 2 tables, pre-print accepted in Journal of the Association for Information Science and Technology (JASIST), 201
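
    To make the practical side concrete, the following is a minimal, hypothetical Python sketch of a machine-actionable dataset citation: a record carrying the persistent identifier, version, and access date that dataset citations need but article citations typically omit. The field names and rendering format are assumptions for illustration, not any particular standard.

        # Hypothetical dataset-citation record and renderer; field names and
        # citation style are invented for illustration, not a standard.
        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class DatasetRecord:
            creators: list        # e.g., ["Doe, J.", "Roe, R."]
            title: str
            version: str          # datasets evolve, so the version is cited
            publisher: str
            year: int
            pid: str              # persistent identifier, e.g., a DOI URL

        def render_citation(rec, accessed):
            # Render a human-readable citation that preserves the fixity
            # information (version, identifier, access date).
            authors = "; ".join(rec.creators)
            return (f"{authors} ({rec.year}). {rec.title} "
                    f"(Version {rec.version}) [Data set]. {rec.publisher}. "
                    f"{rec.pid} (accessed {accessed.isoformat()})")

        rec = DatasetRecord(["Doe, J.", "Roe, R."], "Example Survey Data",
                            "2.1", "Example Data Archive", 2015,
                            "https://doi.org/10.0000/example")
        print(render_citation(rec, date(2016, 3, 1)))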

    Report on the Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2)

    This technical report records and discusses the Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2). The report includes a description of the alternative, experimental submission and review process, two workshop keynote presentations, a series of lightning talks, a discussion on sustainability, and five discussions from the topic areas of exploring sustainability; software development experiences; credit & incentives; reproducibility & reuse & sharing; and code testing & code review. For each topic, the report includes a list of tangible actions that were proposed and that would lead to potential change. The workshop recognized that reliance on scientific software is pervasive in all areas of world-leading research today. The workshop participants explored different perspectives on the concept of sustainability, and identified key enablers of and barriers to sustainable scientific software from their experiences. In addition, recommendations with new requirements, such as software credit files and software prize frameworks, were outlined for improving practices in sustainable software engineering. There was also broad consensus that formal training in software development or engineering is rare among practitioners. Significant strides need to be made in building a sense of community via training in software and technical practices, in increasing the size and scope of such training, and in integrating it directly into graduate education programs. Finally, journals can define and publish policies to improve reproducibility, and reviewers can insist that authors provide sufficient information and access to data and software to allow them to reproduce the results in the paper. Hence, a list of criteria is compiled for journals to provide to reviewers, so as to make it easier to review software submitted for publication as a “Software Paper.”
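
    As one concrete reading of the "software credit file" recommendation mentioned above, the following is a hypothetical Python sketch of what such a machine-readable file might contain, together with a trivial completeness check of the kind a journal or repository could run. The schema is invented for illustration; the workshop report itself does not fix one.

        # Hypothetical "software credit file" contents and a minimal
        # completeness check; the schema is invented for illustration.
        credit = {
            "software": "exampletool",
            "version": "1.4.2",
            "contributors": [
                {"name": "A. Author", "role": "maintainer"},
                {"name": "B. Builder", "role": "contributor"},
            ],
            "cite_as": "A. Author and B. Builder, exampletool v1.4.2, 2014.",
        }

        REQUIRED = {"software", "version", "contributors", "cite_as"}

        def validate(credit_file):
            # Report required fields that are missing so that tooling (or a
            # reviewer) can flag an incomplete credit file.
            return sorted(REQUIRED - credit_file.keys())

        print(validate(credit) or "credit file complete")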

    The role of data & program code archives in the future of economic research

    This essay examines the role of data and program-code archives in making economic research "replicable." Replication of published results is recognized as an essential part of the scientific method. Yet, historically, both the "demand for" and the "supply of" replicable results in economics have been minimal. "Respect for the scientific method" is not sufficient to motivate either economists or the editors of professional journals to ensure the replicability of published results. We enumerate the costs and benefits of mandatory data and code archives, and argue that the benefits far exceed the costs. Progress has been made since the gloomy assessment of Dewald, Thursby and Anderson some twenty years ago in the American Economic Review, but much remains to be done before empirical economics ceases to be a "dismal science" when judged by the replicability of its published results.

    Keywords: Econometrics; Research
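
    To illustrate what a mandatory archive could enforce mechanically, the following is a minimal, hypothetical Python sketch that checks whether a submitted replication package ships the pieces a reader needs to reproduce the results. The required entries and layout are assumptions for illustration, not any journal's actual policy.

        # Hypothetical completeness check for a replication package; the
        # required entries are an invented policy, for illustration only.
        from pathlib import Path

        REQUIRED_ENTRIES = ["README", "data", "code"]

        def check_replication_package(root):
            # Return the required entries missing from the package directory,
            # matching on name prefix (e.g., "README.txt" satisfies "README").
            base = Path(root)
            return [name for name in REQUIRED_ENTRIES
                    if not any(base.glob(name + "*"))]

        # Usage: missing = check_replication_package("submission_1234/")
        # An empty list means the package passes the (illustrative) check.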