9,846 research outputs found

    Pedagogically informed metadata content and structure for learning and teaching

    No full text
    In order to be able to search, compare, gap-analyse, recommend, and visualise learning objects, learning resources, or teaching assets, the metadata structure and content must be able to support pedagogically informed reasoning, inference, and machine processing over the knowledge representations. In this paper, we present the difficulties with current metadata standards in education: the educational version of Dublin Core and IEEE LOM, using examples drawn from the areas of e-learning, institutional admissions, and learners seeking courses. The paper suggests expanded metadata components, based on an e-learning system engineering model, to support pedagogically informed interoperability. We illustrate examples of the metadata relevant to competency in the nurse-training domain.
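
    A minimal sketch of the kind of "expanded" record the abstract points to: descriptive fields of the Dublin Core / IEEE LOM sort extended with pedagogical fields (prerequisite and target competencies, assessment evidence) that software could reason over. The field names and the nurse-training example values are illustrative assumptions, not the authors' schema.

        # Hypothetical pedagogically informed metadata record (illustrative only).
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Competency:
            identifier: str          # e.g. a URI in a shared competency framework
            description: str
            proficiency_level: str   # "novice", "competent", "proficient", ...

        @dataclass
        class PedagogicalMetadata:
            title: str
            subject: str
            prerequisite_competencies: List[Competency] = field(default_factory=list)
            target_competencies: List[Competency] = field(default_factory=list)
            assessment_method: str = ""   # how attainment of the competency is evidenced

        # Example learning object from the nurse-training domain (values are made up).
        resource = PedagogicalMetadata(
            title="Safe administration of intravenous medication",
            subject="Nursing practice",
            prerequisite_competencies=[
                Competency("comp:drug-calculation", "Accurate drug dose calculation", "competent")],
            target_competencies=[
                Competency("comp:iv-administration", "Administer IV medication safely", "proficient")],
            assessment_method="observed clinical simulation",
        )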

    SMiT: Local System Administration Across Disparate Environments Utilizing the Cloud

    Get PDF
    System administration can be tedious. Most IT departments maintain several (if not several hundred) computers, each of which requires periodic housecleaning: updating of software, clearing of log files, removing old cache files, etc. Compounding the problem is the computing environment itself. Because of the distributed nature of these computers, system administration time is often consumed in repetitive tasks that should be automated. Although current system administration tools exist, they are often centralized, unscalable, unintuitive, or inflexible. To meet the needs of system administrators and IT professionals, we developed the Script Management Tool (SMiT). SMiT is a web-based tool that permits administration of distributed computers from virtually anywhere via a common web browser. SMiT consists of a cloud-based server running on Google App Engine, enabling users to intuitively create, manage, and deploy administration scripts. To support local execution of scripts, SMiT provides an execution engine that runs on the organization’s local machines and communicates with the server to fetch scripts, execute them, and deliver results back to the server. Because of its distributed, asynchronous architecture, SMiT is scalable to thousands of machines. SMiT is also extensible to a wide variety of system administration tasks via its plugin architecture.
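
    A minimal sketch of the execution-engine pattern described above: a local agent that periodically polls a cloud server for queued scripts, runs them, and posts the results back. The server URL, endpoint paths, and payload fields are hypothetical; SMiT's actual protocol is not given in the abstract.

        # Illustrative polling agent, assuming a hypothetical SMiT-like HTTP API.
        import subprocess
        import time
        import requests

        SERVER = "https://example-smit-server.appspot.com"   # placeholder server URL

        def poll_and_execute(machine_id: str, interval: int = 60) -> None:
            while True:
                # Ask the server whether any scripts are queued for this machine.
                resp = requests.get(f"{SERVER}/scripts/pending", params={"machine": machine_id})
                for job in resp.json():
                    # Run the fetched script locally and capture its output.
                    result = subprocess.run(["/bin/sh", "-c", job["script"]],
                                            capture_output=True, text=True)
                    # Report results back asynchronously; the server aggregates them.
                    requests.post(f"{SERVER}/scripts/{job['id']}/result",
                                  json={"machine": machine_id,
                                        "stdout": result.stdout,
                                        "stderr": result.stderr,
                                        "exit_code": result.returncode})
                time.sleep(interval)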

    Towards Coordination-Intensive Visualization Software

    Get PDF
    Most coordination realizations in current visualization systems are last-minute, ad hoc, and reliant on the richness of the chosen implementation language. Moreover, very few visualization models implicitly consider coordination. If coordination is contemplated from the design point of view, it is usually regarded only as part of the communication protocol and is generally dealt with within that restricted domain. Coordinated multiple views are beneficial, and a flexible model for coordination will ensure easy embedding of coordination in such exploratory environments. This paper compares different approaches to coordination in exploratory visualization (EV). We recognize the need for a coordination model and, to that end, formalize aspects of coordination in EV. Furthermore, our work draws on the findings of the interdisciplinary study of coordination by various researchers.
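
    A minimal sketch of the general idea of treating coordination as a first-class object rather than ad-hoc wiring between views: views register with a named coordination object, and any change to the shared value is propagated to all of them. This illustrates the concept only; it is not the model formalized in the paper.

        # Illustrative coordination object for multiple views (not the paper's model).
        from typing import Any, Callable, List

        class Coordination:
            def __init__(self, name: str) -> None:
                self.name = name
                self._value: Any = None
                self._views: List[Callable[[Any], None]] = []

            def register(self, update_view: Callable[[Any], None]) -> None:
                self._views.append(update_view)     # a view joins the coordination

            def set(self, value: Any) -> None:
                self._value = value
                for update in self._views:          # propagate the change to every view
                    update(value)

        # Two coordinated views sharing a selected time range.
        selection = Coordination("selected-range")
        selection.register(lambda v: print("scatter plot now shows", v))
        selection.register(lambda v: print("histogram now shows", v))
        selection.set((2001, 2003))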

    Integrative Use of Information Extraction, Semantic Matchmaking and Adaptive Coupling Techniques in Support of Distributed Information Processing and Decision-Making

    No full text
    In order to derive maximal cognitive benefit from their social, technological and informational environments, military coalitions need to understand how best to exploit available information assets as well as how best to organize their socially-distributed information processing activities. The International Technology Alliance (ITA) program is beginning to address the challenges associated with enhanced cognition in military coalition environments by integrating a variety of research and development efforts. In particular, research in one component of the ITA ('Project 4: Shared Understanding and Information Exploitation') is seeking to develop capabilities that enable military coalitions to better exploit and distribute networked information assets in the service of collective cognitive outcomes (e.g. improved decision-making). In this paper, we provide an overview of the various research activities in Project 4. We also show how these research activities complement one another in terms of supporting coalition-based collective cognition.

    Universal Reinforcement Learning Algorithms: Survey and Experiments

    Full text link
    Many state-of-the-art reinforcement learning (RL) algorithms typically assume that the environment is an ergodic Markov Decision Process (MDP). In contrast, the field of universal reinforcement learning (URL) is concerned with algorithms that make as few assumptions as possible about the environment. The universal Bayesian agent AIXI and a family of related URL algorithms have been developed in this setting. While numerous theoretical optimality results have been proven for these agents, there has been no empirical investigation of their behavior to date. We present a short and accessible survey of these URL algorithms under a unified notation and framework, along with results of experiments that qualitatively illustrate properties of the resulting policies and their relative performance on partially observable gridworld environments. We also present an open-source reference implementation of the algorithms, which we hope will facilitate further understanding of, and experimentation with, these ideas. (Comment: 8 pages, 6 figures; Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17.)
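
    A minimal sketch of the Bayesian-mixture idea underlying URL agents such as AIXI: maintain a posterior over a class of candidate environment models, update it from each percept, and act to maximise expected reward under the mixture. AIXI itself is incomputable and plans over full futures; this one-step-lookahead toy, and the model interface it assumes, are illustrative only and are not the paper's reference implementation.

        # Illustrative Bayesian-mixture agent step (toy version, assumed interface).
        from typing import Dict, List

        class EnvModel:
            """Abstract candidate model: percept likelihoods and expected reward."""
            def percept_prob(self, action: int, percept: int) -> float: ...
            def expected_reward(self, action: int) -> float: ...

        def bayes_mixture_step(models: Dict[EnvModel, float],
                               actions: List[int],
                               last_action: int, last_percept: int) -> int:
            # 1. Posterior update: reweight each model by how well it predicted
            #    the percept that actually followed the last action.
            for m in models:
                models[m] *= m.percept_prob(last_action, last_percept)
            total = sum(models.values()) or 1.0
            for m in models:
                models[m] /= total

            # 2. Action selection: maximise expected reward under the mixture
            #    (a one-step stand-in for the full expectimax planning in AIXI).
            return max(actions,
                       key=lambda a: sum(w * m.expected_reward(a)
                                         for m, w in models.items()))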

    Measuring the Global Research Environment: Information Science Challenges for the 21st Century

    Get PDF
    “What does the global research environment look like?” This paper presents a summary look at the results of efforts to address this question using available indicators on global research production. It was surprising how little information is available, how difficult some of it is to access, and how flawed the data are. The three most useful data sources were UNESCO (United Nations Educational, Scientific and Cultural Organization) Research and Development data (1996-2002), the Institute for Scientific Information publications listings for January 1998 through March 2003, and the World of Learning 2002 reference volume. The data showed that it is not easy to get a good overview of the global research situation from existing sources. Furthermore, inequalities between countries in research capacity are marked and challenging. Information science offers strategies for responding to both of these challenges. In both cases, improvements are likely if access to information can be facilitated and the process of integrating information from different sources can be simplified, allowing transformation into effective action. The global research environment thus serves as a case study for the focus of this paper – the exploration of information science responses to challenges in the management, exchange and implementation of knowledge globally.

    How the web continues to fail people with disabilities

    Get PDF
    The digital divide is most often understood as that between the IT haves and have-nots. However, if there is one minority group that can be, and often is, excluded from the World Wide Web, even if they have a computer, it is disabled people. The Special Educational Needs and Disability Act 2001 (SENDA) extended the provisions within the Disability Discrimination Act 1995 regarding the provision of services to the education sector. Yet accessible web design, dependent on professional coding standards, adherence to guidelines, and user testing, remains rare on the web. This paper examines the background to professional coding standards and adherence to guidelines in an attempt to find out why the web continues to fail people with disabilities. It begins by examining the progress of the transition in the 1990s from old-style HTML to strict XHTML. It applauds the vision behind that transition and charts its progress, identifying the principal constituencies involved and how well each has played its part. It then focuses on the further problem of the requirement for user testing to iron out anomalies not covered by standards and guidelines. It concludes that validating XHTML code is desirable, but that user testing also needs to be undertaken. It identifies the complex and heterogeneous network of interrelated concerns through which the needs of disabled web users remain unheeded. To support its argument, the paper details the results of two studies – 1) a study of the homepages of 778 public bodies and blue-chip companies, which found that only 8% of homepages validated against any declared Document Type Declaration (DTD), and 2) a wider research project on employment websites, which also included disabled user testing and a number of focus groups and interviews with disabled users and web development companies.
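
    A minimal sketch of the kind of automated check behind the first study: fetch a homepage and look for a declared Document Type Declaration. Full validation against the declared DTD would require a separate validator service; this only detects whether a DOCTYPE is declared at all. The URLs in the list are placeholders, not the sites surveyed.

        # Illustrative DOCTYPE check; does not perform full DTD validation.
        import re
        from typing import Optional
        import requests

        DOCTYPE_RE = re.compile(r"<!DOCTYPE\s+html[^>]*>", re.IGNORECASE)

        def declared_doctype(url: str) -> Optional[str]:
            html = requests.get(url, timeout=10).text
            match = DOCTYPE_RE.search(html)
            return match.group(0) if match else None

        for site in ["https://www.example.org", "https://www.example.com"]:
            dtd = declared_doctype(site)
            print(site, "->", dtd if dtd else "no DOCTYPE declared")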

    Scholars Forum: A New Model For Scholarly Communication

    Get PDF
    Scholarly journals have flourished for over 300 years because they successfully address a broad range of authors' needs: to communicate findings to colleagues, to establish precedence of their work, to gain validation through peer review, to establish their reputation, to know the final version of their work is secure, and to know their work will be accessible by future scholars. Eventually, the development of comprehensive paper and then electronic indexes allowed past work to be readily identified and cited. Just as the postal service made it possible to share scholarly work regularly and among a broad readership, the Internet now provides a distribution channel with the power to reduce publication time and to expand traditional print formats by supporting multi-media options and threaded discourse. Despite widespread acceptance of the web by the academic and research community, the incorporation of advanced network technology into a new paradigm for scholarly communication by the publishers of print journals has not materialized. Nor have journal publishers used the lower cost of distribution on the web to make online versions of journals available at lower prices than print versions. It is becoming increasingly clear to the scholarly community that we must envision and develop for ourselves a new, affordable model for disseminating and preserving results, one that synthesizes digital technology with the ongoing needs of scholars. In March 1997, with support from the Engineering Information Foundation, Caltech sponsored a Conference on Scholarly Communication to open a dialogue around key issues and to consider the feasibility of alternative undertakings. A general consensus emerged recognizing that the certification of scholarly articles through peer review could be "decoupled" from the rest of the publishing process, and that the peer review process is already supported by the universities whose faculty serve as editors, members of editorial boards, and referees. In the meantime, pressure to enact regressive copyright legislation has added another important element. The ease with which electronic files may be copied and forwarded has encouraged publishers and other owners of copyrighted material to seek means for denying access to anything they own in digital form to all but active subscribers or licensees. Furthermore, should publishers retain the only version of a publication in a digital form, there is a significant risk that this material may eventually be lost through culling little-used or unprofitable back-files, through not investing in conversion expense as technology evolves, through changes in ownership, or through catastrophic physical events. Such a scenario presents an intolerable threat to the future of scholarship.