
    FORGE: An eLearning Framework for Remote Laboratory Experimentation on FIRE Testbed Infrastructure

    The Forging Online Education through FIRE (FORGE) initiative provides educators and learners in higher education with access to world-class FIRE testbed infrastructure. FORGE supports experimentally driven research in an eLearning environment by complementing traditional classroom and online courses with interactive remote laboratory experiments. The project has achieved its objectives by defining and implementing a framework called FORGEBox, which offers the methodology, environment, tools and resources to support the creation of HTML-based online educational material capable of easily accessing virtualized and physical FIRE testbed infrastructure. FORGEBox also captures valuable quantitative and qualitative learning analytics data through questionnaires and Learning Analytics that can help optimise and support student learning. To date, FORGE has produced courses covering a wide range of networking and communication domains. These are freely available from FORGEBox.eu and have resulted in over 24,000 experiments undertaken by more than 1,800 students across 10 countries worldwide. This work has shown that the use of remote high-performance testbed facilities for hands-on remote experimentation can have a valuable impact on the learning experience of both educators and learners. Additionally, certain challenges in developing FIRE-based courseware have been identified, leading to a set of recommendations to support the use of FIRE facilities for teaching and learning purposes.

    Distributed Learning System Design: A New Approach and an Agenda for Future Research

    This article presents a theoretical framework designed to guide distributed learning design, with the goal of enhancing the effectiveness of distributed learning systems. The authors begin with a review of the extant research on distributed learning design; themes embedded in this literature are extracted and discussed to identify critical gaps that should be addressed by future work in this area. A conceptual framework that integrates instructional objectives, targeted competencies, instructional design considerations, and technological features is then developed to address the most pressing gaps in current research and practice, and the rationale and logic underlying this framework are explicated. The framework is designed to help guide trainers and instructional designers through critical stages of the distributed learning system design process. In addition, it is intended to help researchers identify critical issues that should serve as the focus of future research efforts. Recommendations and future research directions are presented and discussed.

    Improving Knowledge Retrieval in Digital Libraries Applying Intelligent Techniques

    Nowadays an enormous quantity of heterogeneous and distributed information is stored in university digital libraries. Exploring online collections to find knowledge relevant to a user’s interests is a challenging task. Artificial intelligence and the Semantic Web provide a common framework that allows knowledge to be shared and reused efficiently. In this work we propose a comprehensive approach for discovering E-learning objects in large digital collections, based on analysis of the semantic metadata recorded in those objects and the application of expert system technologies. We have used the Case-Based Reasoning methodology to develop a prototype that supports efficient knowledge retrieval from online repositories, and we suggest a conceptual architecture for a semantic search engine. OntoUS is a collaborative effort that proposes a new form of interaction between users and digital libraries, where the latter are adapted to users and their surroundings.
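    The retrieval step of a case-based approach like the one this abstract describes can be sketched with a minimal similarity search over learning-object metadata. This is an illustrative sketch only, not the OntoUS implementation; the object identifiers and keyword fields below are invented for the example.

    ```python
    # Minimal case-based retrieval sketch: rank stored learning objects
    # ("cases") by keyword-set similarity to a query. All identifiers
    # and metadata below are hypothetical examples.

    def jaccard(a, b):
        """Similarity of two keyword sets: |intersection| / |union|."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    cases = [
        {"id": "lo-1", "keywords": {"networking", "tcp", "lab"}},
        {"id": "lo-2", "keywords": {"algebra", "matrices"}},
        {"id": "lo-3", "keywords": {"networking", "routing"}},
    ]

    def retrieve(query_keywords, cases, k=2):
        """Return the ids of the k cases most similar to the query."""
        ranked = sorted(cases,
                        key=lambda c: jaccard(query_keywords, c["keywords"]),
                        reverse=True)
        return [c["id"] for c in ranked[:k]]

    print(retrieve({"networking", "lab"}, cases))
    ```

    A real system would replace the flat keyword sets with ontology-backed semantic metadata, but the retrieve-most-similar-case loop is the same.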

    The evolution of pedagogic models for work-based learning within a virtual university

    The process of designing a pedagogic model for work-based learning within a virtual university is not a simple matter of using ‘off the shelf’ good practice. Instead, it can be characterised as an evolutionary process that reflects the backgrounds, skills and experiences of the project partners. Within the context of a large-scale project that was building a virtual university for work-based learners, an ambitious goal was set: to base the development of learning materials on a pedagogic model that would be adopted across the project. However, the reality proved to be far more complex than simply putting together an appropriate model from existing research evidence. Instead, the project progressed through a series of redevelopments, each of which was prompted by the involvement of a different team from within the project consortium. The pedagogic models that evolved as part of the project are outlined, and the reasons for rejecting each are given. They moved from a simple model relying on core computer-based materials (assessed by multiple choice questions, with optional work-based learning) to a more sophisticated model that integrated different forms of learning. The challenges addressed included making learning flexible and suitable for work-based learning, the coherence of accreditation pathways, the appropriate use of the opportunities provided by online learning, and the learning curves and training needs of the different project teams. Although some of these issues were project-specific (being influenced by the needs of the learners, the aims of the project and the partners involved), the evolutionary process described in this case study illustrates that there can be a steep learning curve for the different collaborating groups within the project team. Whilst this example focuses on work-based learning, the process and the lessons may equally be applicable to a range of learning scenarios.

    Tackling Version Management and Reproducibility in MLOps

    The growing adoption of machine learning (ML) solutions requires advancements in applying best practices to maintain artificial intelligence systems in production. Machine Learning Operations (MLOps) incorporates DevOps principles into machine learning development, promoting automation, continuous delivery, monitoring, and training capabilities. Due to multiple factors, such as the experimental nature of the machine learning process or the need for model optimizations derived from changes in business needs, data scientists are expected to create multiple experiments to develop a model or predictor that satisfactorily addresses the main challenges of a given problem. Since the re-evaluation of models is a constant need, metadata is constantly produced by multiple experiment runs. This metadata is known as ML artifacts or assets. Proper lineage between these artifacts enables recreation of the environment in which they were developed, facilitating model reproducibility. Linking information from experiments, models, datasets, configurations, and code changes requires proper organization, tracking, maintenance, and version control of these artifacts. This work investigates the best practices, current issues, and open challenges related to artifact versioning and management, and applies this knowledge to develop an ML workflow that supports ML engineering and operationalization, applying MLOps principles that facilitate model reproducibility. Scenarios covering data preparation, model generation, comparison between model versions, deployment, monitoring, debugging, and retraining demonstrate how the selected frameworks and tools can be integrated to achieve that goal.
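    The artifact-lineage idea at the core of this abstract (linking a run to its dataset, parameters, and code version) can be sketched in a few lines. This is a minimal illustration of the general technique, not the thesis's actual workflow or tooling; the registry file name and metadata fields are assumptions made for the example.

    ```python
    # Illustrative sketch of recording ML artifact lineage: each experiment
    # run is logged with a content hash of its dataset, its hyperparameters,
    # and a code-version label, so the run can later be reproduced or audited.
    import hashlib
    import json
    import time

    def file_sha256(path):
        """Content hash of a dataset file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def record_run(dataset_path, params, code_version, registry="runs.jsonl"):
        """Append one lineage record (JSON Lines) to the run registry."""
        entry = {
            "timestamp": time.time(),
            "dataset_sha256": file_sha256(dataset_path),
            "params": params,
            "code_version": code_version,
        }
        with open(registry, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry
    ```

    Dedicated tools (e.g. experiment trackers and data-versioning systems) provide the same linkage with far richer metadata, but the principle — hash the inputs, record them alongside parameters and code version — is what makes a run reproducible.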

    Early Learning Innovation Fund Evaluation Final Report

    This is a formative evaluation of the Hewlett Foundation's Early Learning Innovation Fund that began in 2011 as part of the Quality Education in Developing Countries (QEDC) initiative.  The Fund has four overarching objectives, which are to: promote promising approaches to improve children's learning; strengthen the capacity of organizations implementing those approaches; strengthen those organizations' networks and ownership; and grow 20 percent of implementing organizations into significant players in the education sector. The Fund's original design was to create a "pipeline" of innovative approaches to improve learning outcomes, with the assumption that donors and partners would adopt the most successful ones. A defining feature of the Fund was that it delivered assistance through two intermediary support organizations (ISOs), rather than providing funds directly to implementing organizations. Through an open solicitation process, the Hewlett Foundation selected Firelight Foundation and TrustAfrica to manage the Fund. Firelight Foundation, based in California, was founded in 1999 with a mission to channel resources to community-based organizations (CBOs) working to improve the lives of vulnerable children and families in Africa. It supports 12 implementing organizations in Tanzania for the Fund. TrustAfrica, based in Dakar, Senegal, is a convener that seeks to strengthen African-led initiatives addressing some of the continent's most difficult challenges. The Fund was its first experience working specifically with early learning and childhood development organizations. Under the Fund, it supported 16 such organizations: one in Mali and five each in Senegal, Uganda and Kenya. 
At the end of 2014, the Hewlett Foundation commissioned Management Systems International (MSI) to conduct a mid-term evaluation assessing the implementation of the Fund, exploring the extent to which it achieved its intended outcomes and any factors that had limited or enabled its achievements. The evaluation analyzed the support that the ISOs provided to their implementing organizations, with a specific focus on monitoring and evaluation (M&E). It included an audit of the implementing organizations' M&E systems and a review of the feasibility of compiling the collected data to support an impact evaluation. Finally, the Foundation and the ISOs hoped that this evaluation would reveal the most promising innovations and inform planning for Phase II of the Fund. The evaluation findings sought to inform the Hewlett Foundation and other donors interested in supporting intermediary grant-makers, early learning innovations and the expansion of innovations. TrustAfrica and Firelight Foundation provided input to the evaluation's scope of work. Mid-term evaluation reports for each ISO provided findings about their management of the Fund's Phase I and recommendations for Phase II. This final evaluation report will inform donors, ISOs and other implementing organizations about the best approaches to support promising early learning innovations and their expansion. The full report outlines findings common across both ISOs' experience and includes recommendations in four key areas: adequate time; appropriate capacity building; advocacy and scaling up; and evaluating and documenting innovations. Overall, both Firelight Foundation and TrustAfrica supported a number of effective innovations, working through committed and largely competent implementing organizations. 
The program's open-ended design avoided being prescriptive in its approach, but based on the lessons learned in this evaluation and the broader literature, the Hewlett Foundation and other donors could have offered more guidance to ISOs to avoid the need to continually relearn some lessons. For example, over the evaluation period it became increasingly evident that the current context demands more focused advance planning to measure impact on beneficiaries and other stakeholders, and a more concrete approach to promoting and resourcing potential scale-up. The main findings from the evaluation and the resulting recommendations are summarized here.

    Online experimentation and interactive learning resources for teaching network engineering

    This paper presents a case study on teaching network engineering in conjunction with interactive learning resources. The case study has been developed in collaboration with the Cisco Networking Academy in the context of the FORGE project, which promotes online learning and experimentation by offering access to virtual and remote labs. The main goal of this work is to allow learners and educators to perform network simulations within a web browser or an interactive eBook on any mobile, tablet or desktop device. Learning Analytics are employed to monitor learning behaviour for further analysis of the learning experience offered to students.

    The Open Networking Lab: Hands-on Vocational Learning in Computer Networking

    An increasingly connected society demands people who can design, set up, monitor and maintain networks of computers and devices. Traditional classroom instruction cannot keep pace with demand, and networking hardware costs can be too high for widespread classroom use. This paper presents the Open Networking Lab, a new UK initiative for supporting hands-on vocational learning in computer networking. The Open Networking Lab will facilitate the development of introductory practical networking skills without using hardware, through the provision of a web-based network simulation package integrated into learning resources and activities. These learning resources will be evaluated by students and lecturers from a cluster of Further Education colleges in the UK and will subsequently be made available to learners worldwide via free and open courseware.

    The 1990 progress report and future plans

    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development and to fielded NASA applications, particularly those applications that are enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes, with a dual commitment to technical excellence and proven applicability to NASA short-, medium- and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.