
    Evidencing the development of distributed leadership capacity in the quality management of online learning environments (OLEs) in Australian higher education

    The poster will present findings from the first year of a two-year nationally funded Australian Learning and Teaching Council (ALTC) project, Building distributed leadership in designing and implementing a quality management framework for Online Learning Environments, undertaken over 2011-2012 by Deakin University, Macquarie University, the University of South Australia, the University of Southern Queensland and RMIT University. The project aims to design and implement a framework that uses a distributed leadership approach for the quality management of Online Learning Environments (OLEs) in Australian higher education. The distributed leadership approach enables the development of the framework and, in turn, contributes to its implementation; the framework is thus the vehicle for building leadership capacity. The national project team itself represents a broad range of educational, technical and managerial expertise.

    Social networks and performance in distributed learning communities

    Social networks play an essential role in learning environments as a key channel for knowledge sharing and student support. In distributed learning communities, knowledge sharing does not occur as spontaneously as when a working group shares the same physical space; it depends even more on students' informal connections. In this study we analyse the social networks of two distributed learning communities in order to understand how characteristics of the social structure can enhance students' success and performance. We used a monitoring system to gather social network data. Results from correlation analyses showed that students' social network characteristics are related to their performance.
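    As a purely illustrative sketch (not the authors' monitoring system or analysis pipeline), the kind of correlation analysis described above can be reproduced with standard tools: compute per-student network metrics from an interaction graph and correlate them with performance scores. The edge list, scores, and choice of metrics below are hypothetical.

```python
# Hypothetical sketch: relate students' network position to performance.
# Assumes an undirected interaction graph and one performance score per student.
import networkx as nx
from scipy.stats import pearsonr

# Toy interaction data (who communicated with whom) -- illustrative only.
edges = [("ana", "bo"), ("ana", "cy"), ("bo", "cy"), ("cy", "dee"),
         ("dee", "eli"), ("eli", "ana"), ("bo", "dee")]
scores = {"ana": 8.5, "bo": 7.0, "cy": 9.1, "dee": 6.2, "eli": 7.8}

graph = nx.Graph(edges)
degree = nx.degree_centrality(graph)          # one possible structural metric
betweenness = nx.betweenness_centrality(graph)

students = sorted(scores)
for name, metric in [("degree centrality", degree),
                     ("betweenness centrality", betweenness)]:
    r, p = pearsonr([metric[s] for s in students],
                    [scores[s] for s in students])
    print(f"{name}: r={r:.2f}, p={p:.2f}")
```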

    Towards collaborative learning via shared artefacts over the Grid

    The Web is the most pervasive collaborative technology in widespread use today, and its use to support eLearning has been highly successful. There are many web-based Virtual Learning Environments, such as WebCT, FirstClass, and BlackBoard, as well as associated web-based Managed Learning Environments. In the future, the Grid promises to provide an extremely powerful infrastructure allowing both learners and teachers to collaborate in various learning contexts and to share learning materials, learning processes, learning systems, and experiences. This position paper addresses the role of support for sharing artefacts in distributed systems such as the Grid. An analogy is made between collaborative software development and collaborative learning, with the goal of gaining insights into the requisite support for artefact sharing within the eLearning community.

    Consistency in Models for Distributed Learning under Communication Constraints

    Motivated by sensor networks and other distributed settings, several models for distributed learning are presented. The models differ from classical works in statistical pattern recognition by allocating observations of an independent and identically distributed (i.i.d.) sampling process amongst members of a network of simple learning agents. The agents are limited in their ability to communicate to a central fusion center and thus, the amount of information available for use in classification or regression is constrained. For several basic communication models in both the binary classification and regression frameworks, we question the existence of agent decision rules and fusion rules that result in a universally consistent ensemble. The answers to this question present new issues to consider with regard to universal consistency. Insofar as these models present a useful picture of distributed scenarios, this paper addresses the issue of whether or not the guarantees provided by Stone's Theorem in centralized environments hold in distributed settings. Comment: To appear in the IEEE Transactions on Information Theory.
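    For reference, the notion of universal consistency at issue is the standard one from statistical pattern recognition; the statement below uses generic notation, not necessarily the paper's.

```latex
% Universal consistency (binary classification): g_n is the rule learned
% from n observations, L^* is the Bayes risk.
\[
  L(g_n) = \Pr\{\, g_n(X) \neq Y \,\}, \qquad
  \lim_{n \to \infty} \mathbb{E}\,[L(g_n)] = L^{*}
  \quad \text{for every distribution of } (X, Y).
\]
% Regression analogue: the estimate m_n converges in mean squared error to
% the regression function m(x) = E[Y \mid X = x].
\[
  \lim_{n \to \infty} \mathbb{E}\!\left[ \big( m_n(X) - m(X) \big)^{2} \right] = 0 .
\]
```

    Stone's Theorem gives conditions under which such convergence holds for every distribution in the centralized setting; the paper asks when analogous guarantees survive the communication constraints of the distributed models.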

    Dynamic Control Flow in Large-Scale Machine Learning

    Many recent machine learning models rely on fine-grained dynamic control flow for training and inference. In particular, models based on recurrent neural networks and on reinforcement learning depend on recurrence relations, data-dependent conditional execution, and other features that call for dynamic control flow. These applications benefit from the ability to make rapid control-flow decisions across a set of computing devices in a distributed system. For performance, scalability, and expressiveness, a machine learning system must support dynamic control flow in distributed and heterogeneous environments. This paper presents a programming model for distributed machine learning that supports dynamic control flow. We describe the design of the programming model, and its implementation in TensorFlow, a distributed machine learning system. Our approach extends the use of dataflow graphs to represent machine learning models, offering several distinctive features. First, the branches of conditionals and bodies of loops can be partitioned across many machines to run on a set of heterogeneous devices, including CPUs, GPUs, and custom ASICs. Second, programs written in our model support automatic differentiation and distributed gradient computations, which are necessary for training machine learning models that use control flow. Third, our choice of non-strict semantics enables multiple loop iterations to execute in parallel across machines, and to overlap compute and I/O operations. We have done our work in the context of TensorFlow, and it has been used extensively in research and production. We evaluate it using several real-world applications, and demonstrate its performance and scalability. Comment: Appeared in EuroSys 2018. 14 pages, 16 figures.
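    As a minimal sketch of the kind of dynamic control flow the abstract describes, the snippet below expresses a data-dependent loop and conditional with TensorFlow's public tf.while_loop and tf.cond operations and differentiates through them. It illustrates the programming constructs only; it is not the paper's distributed implementation, and the partitioning of branches and loop bodies across devices is left to the runtime. The function name and toy computation are invented for illustration.

```python
# Illustrative sketch: data-dependent control flow in a TensorFlow dataflow graph.
import tensorflow as tf

@tf.function  # trace into a graph so control flow becomes dataflow operations
def bounded_iteration(w, x, max_steps):
    """Repeatedly apply a recurrence until a step or norm bound is reached."""
    def cond(i, v):
        # keep looping while steps remain and the state stays bounded
        return tf.logical_and(i < max_steps, tf.norm(v) < 1e3)

    def body(i, v):
        v = tf.linalg.matvec(w, v)  # recurrence relation on the state vector
        # data-dependent conditional: rescale only when the norm exceeds 1.0
        v = tf.cond(tf.norm(v) > 1.0, lambda: v / tf.norm(v), lambda: v)
        return [i + 1, v]

    _, v = tf.while_loop(cond, body, [tf.constant(0), x])
    return v

w = tf.random.normal([4, 4])
x = tf.random.normal([4])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.reduce_sum(bounded_iteration(w, x, tf.constant(20)))
grad = tape.gradient(y, x)  # automatic differentiation through the loop
```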

    CoCoA: A General Framework for Communication-Efficient Distributed Optimization

    The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for distributed computing environments, CoCoA, that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems like lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach in handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework has markedly improved performance over state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.
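    The convex regularized loss minimization objectives referred to above take the standard composite form written below; the notation is generic rather than necessarily CoCoA's, and the lasso, sparse logistic regression, and elastic net are recovered by particular choices of the losses and the regularizer.

```latex
% Regularized empirical loss minimization over data points (x_i, y_i).
\[
  \min_{w \in \mathbb{R}^{d}} \;
  \frac{1}{n} \sum_{i=1}^{n} \ell_i\!\left( x_i^{\top} w \right)
  \;+\; \lambda\, g(w)
\]
% e.g. g(w) = \|w\|_1 yields the lasso or sparse logistic regression
% (depending on the loss \ell_i), while
% g(w) = \alpha \|w\|_1 + \tfrac{1-\alpha}{2} \|w\|_2^{2} yields the elastic net;
% both are non-strongly-convex regularizers of the kind the framework covers.
```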