
    On computer-based assessment of mathematics

    This work explores some issues arising from the widespread use of computer-based assessment of Mathematics in primary and secondary education. In particular, it considers the potential of computer-based assessment for testing “process skills” and “problem solving”. This is discussed through a case study of the World Class Tests project, which set out to test problem-solving skills. The study also considers how on-screen “eAssessment” differs from conventional paper tests and how transferring established assessment tasks to the new medium might change their difficulty, or even alter what they assess. One source of evidence is a detailed comparison of the paper and computer versions of a commercially published test, nferNelson's Progress in Maths, including a new analysis of the publisher's own equating study. The other major aspect of the work is a design research exercise which starts by analysing tasks from Mathematics GCSE papers and proceeds to design, implement and trial a computer-based system for delivering and marking similar styles of tasks. This produces a number of insights into the design challenges of computer-based assessment, and also raises some questions about the design assumptions behind the original paper tests. One unanticipated finding was that, unlike younger pupils, some GCSE candidates expressed doubts about the idea of a computer-based examination. The study concludes that implementing a Mathematics test on a computer involves detailed decisions requiring expertise in both assessment and software design, particularly in the case of richer tasks targeting process skills. It closes with the proposal that, in contrast to its advantages in literacy-based subjects, the computer may not provide a “natural medium for doing mathematics”, and instead places an additional demand on students. The solution might be to reform the curriculum to better reflect the role of computing in modern Mathematics.

    Monocular SLAM Supported Object Recognition

    In this work, we develop a monocular SLAM-aware object recognition system that achieves considerably stronger recognition performance than classical object recognition systems that operate on a frame-by-frame basis. By incorporating several key ideas, including multi-view object proposals and efficient feature encoding methods, the proposed system detects and robustly recognizes objects in its environment using a single RGB camera in near-constant time. Through experiments, we illustrate the utility of such a system in detecting and recognizing objects, combining detections from multiple object viewpoints into a unified prediction hypothesis. The proposed system is evaluated on the UW RGB-D Dataset, showing strong recognition accuracy and scalable run-time performance compared to current state-of-the-art recognition systems. Comment: Accepted to appear at Robotics: Science and Systems 2015, Rome, Italy.
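    A minimal sketch of the multi-view fusion idea described above, assuming each viewpoint of a SLAM-tracked object yields a class-probability vector; the function name and the log-likelihood pooling rule are illustrative assumptions, not the paper's exact formulation:

        import numpy as np

        def fuse_viewpoint_detections(per_view_probs):
            # Combine per-view class probabilities for one tracked object into a
            # single prediction by pooling log-likelihoods across viewpoints.
            probs = np.clip(np.asarray(per_view_probs, dtype=float), 1e-9, 1.0)  # (n_views, n_classes)
            log_scores = np.log(probs).sum(axis=0)
            fused = np.exp(log_scores - log_scores.max())
            fused /= fused.sum()
            return int(fused.argmax()), fused

        # Example: three views of the same object, three candidate classes.
        views = [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.7, 0.2, 0.1]]
        label, confidence = fuse_viewpoint_detections(views)
        print(label, confidence)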

    Attend Refine Repeat: Active Box Proposal Generation via In-Out Localization

    Computing category-agnostic bounding box proposals is a core component of many computer vision tasks and has thus lately attracted a lot of attention. In this work we propose a new approach to this problem based on an active strategy for generating box proposals: it starts from a set of seed boxes uniformly distributed over the image and then progressively moves its attention to the promising image areas where it is more likely to discover well-localized bounding box proposals. We call our approach AttractioNet; a core component of it is a CNN-based, category-agnostic object location refinement module that yields accurate and robust bounding box predictions regardless of the object category. We extensively evaluate AttractioNet on several image datasets (COCO, PASCAL, ImageNet detection and NYU-Depth V2), reporting state-of-the-art results on all of them that surpass previous work in the field by a significant margin, and providing strong empirical evidence that our approach generalizes to unseen categories. Furthermore, we evaluate our AttractioNet proposals in the context of the object detection task using a VGG16-Net based detector; the detection performance achieved on COCO significantly surpasses all other VGG16-Net based detectors and is even competitive with a heavily tuned ResNet-101 based detector. Code as well as box proposals computed for several datasets are available at: https://github.com/gidariss/AttractioNet. Comment: Technical report.
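    A minimal sketch of the attend-refine-repeat loop described above, using a dummy stand-in for the CNN-based localization module; the grid size, number of iterations and scoring function are illustrative assumptions, not the paper's settings:

        import itertools
        import random

        def seed_boxes(img_w, img_h, grid=4):
            # Seed boxes uniformly distributed over the image as a grid.
            bw, bh = img_w / grid, img_h / grid
            return [(c * bw, r * bh, (c + 1) * bw, (r + 1) * bh)
                    for r, c in itertools.product(range(grid), repeat=2)]

        def attend_refine_repeat(img_w, img_h, refine_and_score, steps=5, keep=8):
            # Repeatedly refine the current boxes and keep the highest-scoring
            # ones, so attention concentrates on promising image areas.
            boxes = seed_boxes(img_w, img_h)
            for _ in range(steps):
                scored = [refine_and_score(box) for box in boxes]   # -> (refined_box, objectness)
                scored.sort(key=lambda pair: pair[1], reverse=True)
                boxes = [box for box, _ in scored[:keep]]
            return boxes

        # Dummy refinement module: jitters a box and assigns a random objectness score.
        def dummy_refine_and_score(box):
            x0, y0, x1, y1 = box
            shift = random.uniform(-2, 2)
            return (x0 + shift, y0 + shift, x1 + shift, y1 + shift), random.random()

        proposals = attend_refine_repeat(640, 480, dummy_refine_and_score)
        print(len(proposals))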

    Analog Solutions: E-discovery Spoliation Sanctions and the Proposed Amendments to FRCP 37(e)

    The ever-increasing importance of digital technology in today’s commercial environment has created several serious problems for courts operating under the Federal Rules of Civil Procedure’s (FRCP) discovery regime. As the volume of discoverable information has grown exponentially, so too have the opportunities for abuse and misinterpretation of the FRCP’s outdated e-discovery rules. Federal courts are divided over the criteria for imposing the most severe discovery sanctions as well as the practical ramifications of the preservation duty as applied to electronically stored information. As a result, litigants routinely feel pressured to overpreserve potentially discoverable data, often at great expense. At a conference at the Duke University School of Law in 2010, experts from all sides of the civil-litigation system concluded that the e-discovery rules were in desperate need of updating. The subsequent four years saw a flurry of rulemaking efforts. In 2014, a package of proposed FRCP amendments included a complete overhaul of Rule 37(e), the provision governing spoliation sanctions for electronically stored information. This Note analyzes the proposed Rule and argues that the amendment will fail to accomplish the Advisory Committee’s goals because it focuses too heavily on preserving the trial court’s discretion in imposing sanctions and too little on incentivizing efficient and cooperative pretrial discovery. The Note concludes by offering revisions and enforcement mechanisms that would allow the new Rule 37(e) to better address the e-discovery issues identified at the Duke Conference.

    Criteria for the Diploma qualifications in information technology at levels 1, 2 and 3


    Local and Global Trust Based on the Concept of Promises

    We use the notion of a promise to define local trust between agents possessing autonomous decision-making. An agent is trustworthy if it is expected to keep a promise. This definition satisfies most commonplace meanings of trust. Reputation is then an estimate of this expectation value that is passed on from agent to agent. Our definition distinguishes types of trust for different behaviours, and decouples the concept of agent reliability from the behaviour on which the judgement is based. We show, however, that trust is fundamentally heuristic, as it provides insufficient information for agents to make a rational judgement. A global trustworthiness, or community trust, can be defined by a proportional, self-consistent voting process, as a weighted eigenvector-centrality function of the promise-theoretical graph.
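    A minimal sketch of the eigenvector-centrality view of community trust described above, assuming a matrix of pairwise trust weights and a simple power iteration; the normalisation and stopping rule are illustrative choices, not the paper's exact construction:

        import numpy as np

        def community_trust(trust_matrix, iterations=100, tol=1e-9):
            # W[i, j] is the trust agent i places in agent j's promises.
            # Global trust of agent j is proportional to the trust-weighted
            # sum of the trust placed in it (a self-consistent vote).
            W = np.asarray(trust_matrix, dtype=float)
            v = np.ones(W.shape[0]) / W.shape[0]
            for _ in range(iterations):
                nxt = W.T @ v                 # votes weighted by the voters' own trust
                nxt /= nxt.sum()
                if np.abs(nxt - v).max() < tol:
                    v = nxt
                    break
                v = nxt
            return v

        # Three agents; row i gives how strongly agent i trusts the others.
        W = [[0.0, 0.8, 0.2],
             [0.6, 0.0, 0.4],
             [0.9, 0.1, 0.0]]
        print(community_trust(W))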

    KEMNAD: A Knowledge Engineering Methodology for Negotiating Agent Development

    Automated negotiation is widely applied in various domains. However, developing such systems is a complex knowledge- and software-engineering task, so a methodology would be helpful. Unfortunately, none of the existing methodologies offers sufficiently detailed support for such development. To remove this limitation, this paper develops a new methodology made up of (1) a generic framework (architectural pattern) for the main task and (2) a library of modular, reusable design patterns (templates) for subtasks. It is thus much easier to build a negotiating agent by assembling these standardised components than by reinventing the wheel each time. Moreover, since these patterns are identified from a wide variety of existing negotiating agents (especially high-impact ones), they can also improve the quality of the resulting systems. In addition, our methodology reveals what types of domain knowledge need to be supplied to a negotiating agent. This in turn provides a basis for developing techniques to acquire that knowledge from human users, which matters because negotiating agents act faithfully on behalf of their human users and the relevant domain knowledge must therefore come from them. Finally, our methodology is validated with one high-impact system.
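    A minimal sketch of the assemble-from-components idea described above, with hypothetical subtask slots (offer evaluation, concession tactic, acceptance rule); the component names and interfaces are illustrative assumptions, not the methodology's actual pattern library:

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class NegotiatingAgent:
            # Each field is a pluggable subtask component supplied with domain knowledge.
            evaluate: Callable[[float], float]        # utility of an incoming offer
            propose: Callable[[int], float]           # counter-offer for a given round
            accept: Callable[[float, int], bool]      # acceptance condition

            def respond(self, offer, round_no):
                if self.accept(self.evaluate(offer), round_no):
                    return "accept", offer
                return "counter", self.propose(round_no)

        # Assemble a buyer from standard components: linear utility over price,
        # a time-dependent concession tactic, and a reservation-utility rule.
        buyer = NegotiatingAgent(
            evaluate=lambda price: max(0.0, 1.0 - price / 100.0),
            propose=lambda t: 50.0 + 5.0 * t,
            accept=lambda utility, t: utility >= 0.4,
        )
        print(buyer.respond(70.0, 2))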

    Implementing TontineCoin

    Proof-of-stake (PoS) protocols are an alternative to proof-of-work (PoW) consensus protocols that addresses PoW's energy and cost issues. However, they suffer from the nothing-at-stake problem: validators (PoS miners) lose nothing if they support multiple blockchain forks. Tendermint, a PoS protocol, handles this problem by forcing validators to bond their stake and seizing a cheater's stake when it is caught signing multiple competing blocks. The seized stake is then distributed evenly among the remaining validators. However, as the number of validators increases, the benefit of finding a cheater shrinks relative to the cost of monitoring the other validators, weakening the system's defense against the problem. Previous work on TontineCoin addresses this by using the concept of tontines. A tontine is an investment scheme in which each participant receives a portion of the benefits based on their share; as the number of participants in a tontine decreases, the individual benefit increases, which motivates participants to eliminate each other. Using this feature, TontineCoin ensures that validators (the participants of a tontine) are highly motivated to monitor each other, strengthening the system against the nothing-at-stake problem. This project implements a prototype of Tendermint using the Spartan Gold codebase and builds TontineCoin on top of it. It is the first implementation of the protocol; it simulates and contrasts five normal operations in both the Tendermint and TontineCoin models, and it also simulates and discusses how a nothing-at-stake attack is handled in TontineCoin compared to Tendermint.
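    A minimal sketch of the stake-seizure behaviour described above, contrasting even redistribution with a tontine-style split; the data structures and the tontine reward rule are illustrative assumptions, not the project's actual implementation (which is built on the Spartan Gold codebase):

        def slash_evenly(stakes, cheater):
            # Tendermint-style as described above: seize the cheater's bonded stake
            # and split it evenly among all remaining validators.
            seized = stakes.pop(cheater)
            share = seized / len(stakes)
            return {v: s + share for v, s in stakes.items()}

        def slash_to_tontine(stakes, cheater, tontine_members):
            # Tontine-style: concentrate the seized stake on the cheater's small
            # tontine group, so each member's reward for policing stays large.
            seized = stakes.pop(cheater)
            peers = [v for v in tontine_members if v in stakes]
            share = seized / len(peers)
            return {v: s + (share if v in peers else 0.0) for v, s in stakes.items()}

        stakes = {"alice": 100.0, "bob": 80.0, "carol": 60.0, "dave": 40.0}
        print(slash_evenly(dict(stakes), "dave"))
        print(slash_to_tontine(dict(stakes), "dave", ["alice", "bob"]))

    The contrast illustrates the incentive argument in the abstract: with even redistribution the per-validator reward shrinks as the validator set grows, while a small tontine keeps each member's payoff for catching a cheater large.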