13,813 research outputs found

    Towards an Intelligent Tutor for Mathematical Proofs

    Computer-supported learning is an increasingly important form of study since it allows for independent learning and individualized instruction. In this paper, we discuss a novel approach to developing an intelligent tutoring system for teaching textbook-style mathematical proofs. We characterize the particularities of the domain and discuss common ITS design models. Our approach is motivated by phenomena found in a corpus of tutorial dialogs that were collected in a Wizard-of-Oz experiment. We show how an intelligent tutor for textbook-style mathematical proofs can be built on top of an adapted assertion-level proof assistant by reusing representations and proof search strategies originally developed for automated and interactive theorem proving. The resulting prototype was successfully evaluated on a corpus of tutorial dialogs and yields good results.
    Comment: In Proceedings THedu'11, arXiv:1202.453
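
    For readers unfamiliar with assertion-level proof steps, the Lean snippet below (our own illustration, not taken from the paper or its dialog corpus) shows the kind of one-step, textbook-style inference such a tutor would need to recognise and check:

```lean
import Mathlib.Data.Set.Basic

-- Illustrative assumption only: a single assertion-level step of the kind a
-- textbook proof states as "since a ∈ A and A ⊆ B, it follows that a ∈ B".
example (α : Type) (A B : Set α) (a : α) (h1 : a ∈ A) (h2 : A ⊆ B) : a ∈ B :=
  h2 h1
```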

    Engineering a static verification tool for GPU kernels

    We report on practical experiences over the last 2.5 years related to the engineering of GPUVerify, a static verification tool for OpenCL and CUDA GPU kernels, plotting the progress of GPUVerify from a prototype to a fully functional and relatively efficient analysis tool. Our hope is that this experience report will serve the verification community by helping to inform future tooling efforts.

    Investigating the feasibility of a distributed, mapping-based, approach to solving subject interoperability problems in a multi-scheme, cross-service, retrieval environment

    The HILT project is researching the problems of facilitating interoperability of subject descriptions in a distributed multi-scheme environment. HILT Phase I found a UK community consensus in favour of utilising an inter-scheme mapping service to improve interoperability. HILT Phase II investigated the approach by building a pilot server, and identified a range of issues that would have to be tackled if an operational service were to be successful. HILT Phase III will implement a centralised version of a machine-to-machine (M2M) pilot, but will aim to design it so that the possibility of a move to a distributed service remains open. This aim will shape likely future research concerns in Phase III and beyond. Wide adoption of a distributed approach to the problem could lead to the creation of a framework within which regional, national, and international efforts in the area can be harmonised and co-ordinated.
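
    As a rough illustration of what an inter-scheme mapping lookup involves, the Python sketch below models a toy centralised mapping store; the schemes, terms, and interface are invented for exposition and do not reflect HILT's actual data model or M2M interface:

```python
# Hypothetical illustration of an inter-scheme terminology mapping service;
# all schemes, terms, and method names here are assumptions for exposition.
from collections import defaultdict


class MappingServer:
    """Toy centralised store: (scheme, term) -> equivalent terms in other schemes."""

    def __init__(self):
        self._maps = defaultdict(list)

    def add_mapping(self, src, dst):
        """Record that src (scheme, term) maps to dst (scheme, term)."""
        self._maps[src].append(dst)

    def translate(self, scheme, term, target_scheme):
        """Return candidate terms in target_scheme for a term from scheme."""
        return [t for (s, t) in self._maps[(scheme, term)] if s == target_scheme]


server = MappingServer()
server.add_mapping(("DDC", "004"), ("LCSH", "Computer science"))  # assumed example
print(server.translate("DDC", "004", "LCSH"))  # -> ['Computer science']
```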

    Dealing with mobility: Understanding access anytime, anywhere

    The rapid and accelerating move towards the adoption and use of mobile technologies has increasingly provided people and organisations with the ability to work away from the office and on the move. The new ways of working afforded by these technologies are often characterised in terms of access to information and people ‘anytime, anywhere’. This paper presents a study of mobile workers that highlights different facets of access to remote people and information, and different facets of anytime, anywhere. Four key factors in mobile work are identified from the study: the role of planning, working in ‘dead time’, accessing remote technological and informational resources, and monitoring the activities of remote colleagues. By reflecting on these issues, we can better understand the role of technology and artefact use in mobile work and identify the opportunities for the development of appropriate technological solutions to support mobile workers.

    Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning

    Deep Learning has recently become hugely popular in machine learning, providing significant improvements in classification accuracy in the presence of highly-structured and large databases. Researchers have also considered the privacy implications of deep learning. Models are typically trained in a centralized manner with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and only share a subset of the parameters in the attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process that allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level DP applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).
    Comment: ACM CCS'17, 16 pages, 18 figures
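
    The PyTorch-style sketch below illustrates the core of the described GAN attack from the adversary's side; the architecture, dimensions, optimiser, and training loop are simplified assumptions for exposition rather than the paper's exact construction:

```python
# Minimal sketch (assumption-laden, not the paper's exact setup) of the
# GAN-based attack in collaborative deep learning: the adversary treats the
# jointly trained model as a discriminator and optimises a local generator
# whose outputs the shared model classifies as the victim's class.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM, TARGET_CLASS = 64, 28 * 28, 3  # assumed toy sizes


class Generator(nn.Module):
    """Small fully connected generator mapping noise to flattened images."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)


def adversary_round(shared_model: nn.Module, generator: Generator, steps: int = 100):
    """One collaborative-learning round seen from the adversary.

    The shared model's parameters (as downloaded this round) stay fixed; only
    the local generator is optimised so its samples score as TARGET_CLASS.
    """
    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    shared_model.eval()  # the shared model plays the role of a fixed discriminator
    for _ in range(steps):
        z = torch.randn(32, LATENT_DIM)
        fake = generator(z)
        logits = shared_model(fake)  # assumed to output class logits
        target = torch.full((32,), TARGET_CLASS, dtype=torch.long)
        loss = loss_fn(logits, target)  # push fakes toward the victim's class
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The adversary would then mislabel such samples and share gradients as
    # usual, prompting the victim to reveal finer detail about that class.
    return generator
```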

    The role of artificial intelligence techniques in scheduling systems

    Artificial Intelligence (AI) techniques provide good solutions for many of the problems that are characteristic of scheduling applications. However, scheduling is a large, complex, heterogeneous problem. Different applications will require different solutions. Any individual application will require the use of a variety of techniques, including both AI and conventional software methods. The operational context of the scheduling system will also play a large role in design considerations. The key is to identify those places where a specific AI technique is in fact the preferable solution, and to integrate that technique into the overall architecture.