
    Security in online learning assessment towards an effective trustworthiness approach to support e-learning teams

    (c) 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    This paper proposes a trustworthiness model for the design of secure learning assessment in on-line collaborative learning groups. Although computer-supported collaborative learning has been widely adopted in many educational institutions over the last decade, drawbacks still exist that limit its potential in collaborative learning activities. Among these limitations, we investigate information security requirements in on-line assessment (e-assessment), which can be developed in collaborative learning contexts. Although information security enhancements have been developed in recent years, to the best of our knowledge, integrated and holistic security models have not yet been fully realized. Even when advanced security methodologies and technologies are deployed in Learning Management Systems, many types of vulnerabilities remain open and unsolved. Therefore, new models such as trustworthiness approaches can overcome these shortcomings and support e-assessment requirements for e-Learning. To this end, a trustworthiness model is designed to guide a holistic security model for on-line collaborative learning through effective trustworthiness approaches. In addition, since analyzing users' trustworthiness involves large amounts of ill-structured data, a parallel processing paradigm is proposed to build relevant information modeling trustworthiness levels for e-Learning. Peer Reviewed. Postprint (author's final draft).
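    The parallel processing paradigm mentioned in the abstract could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the event-record format, the indicator weights, and the use of a process pool are all assumptions.

```python
from multiprocessing import Pool

# Hypothetical event records: (student_id, indicator, value in [0, 1]).
EVENTS = [
    ("alice", "peer_rating", 0.9), ("alice", "forum_activity", 0.6),
    ("bob", "peer_rating", 0.4), ("bob", "forum_activity", 0.8),
]

# Assumed indicator weights -- placeholders, not taken from the paper.
WEIGHTS = {"peer_rating": 0.7, "forum_activity": 0.3}

def by_student(events):
    """Group raw events by student so each group can be processed independently."""
    groups = {}
    for e in events:
        groups.setdefault(e[0], []).append(e)
    return groups

def trust_level(item):
    """Reduce one student's events to a weighted trustworthiness level."""
    student, events = item
    total = sum(WEIGHTS[ind] * val for _, ind, val in events)
    norm = sum(WEIGHTS[ind] for _, ind, _ in events)
    return student, total / norm if norm else 0.0

if __name__ == "__main__":
    # Each student's events are independent, so they map cleanly onto workers.
    with Pool(2) as pool:
        levels = dict(pool.map(trust_level, by_student(EVENTS).items()))
    print(levels)
```

    The point of the sketch is that per-student aggregation has no cross-student dependencies, which is what makes a data-parallel treatment of large, ill-structured event logs plausible.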

    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1

    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance Overview and Issues; Resilience and Safety Requirements; Open Systems Perspective; and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing; Defence in Depth and Diversity; Security-Informed Safety Analysis; and Standards and Guidelines.

    A collective intelligence approach for building student's trustworthiness profile in online learning

    Information and communication technologies have been widely adopted in most educational institutions to support e-Learning through different learning methodologies such as computer-supported collaborative learning, which has become one of the most influential learning paradigms. In this context, e-Learning stakeholders are increasingly demanding new requirements; among them, information security is considered a critical factor in on-line collaborative processes. Information security determines the accurate development of learning activities, especially when a group of students carries out on-line assessment leading to grades or certificates; in these cases, information security is an essential issue that has to be considered. To date, even the most advanced security technologies have drawbacks that impede the development of overall e-Learning security frameworks. For this reason, this paper suggests enhancing technological security models with functional approaches; namely, we propose a functional security model based on trustworthiness and collective intelligence. Both of these topics are closely related to on-line collaborative learning and on-line assessment models. Therefore, the main goal of this paper is to discover how security can be enhanced with trustworthiness in an on-line collaborative learning scenario through the study of the collective intelligence processes that occur in on-line assessment activities.
    To this end, a peer-to-peer public student profile model based on trustworthiness is proposed, and the main collective intelligence processes involved in collaborative on-line assessment activities are presented. Peer Reviewed. Postprint (author's final draft).
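    A public student profile built from peer ratings, as described above, might take a shape like the following. This is a minimal sketch under stated assumptions: the class name, the per-activity rating store, and the mean-of-means aggregation are illustrative choices, not the paper's model.

```python
from collections import defaultdict

class StudentProfile:
    """Hypothetical public trustworthiness profile built from peer ratings."""

    def __init__(self, student_id):
        self.student_id = student_id
        # activity -> list of peer scores in [0, 1]
        self.ratings = defaultdict(list)

    def add_peer_rating(self, activity, score):
        """Record one peer's rating of this student for a given activity."""
        if not 0.0 <= score <= 1.0:
            raise ValueError("score must be in [0, 1]")
        self.ratings[activity].append(score)

    def trustworthiness(self):
        """Overall level as the mean of per-activity means (one assumed
        aggregation; many others are possible)."""
        if not self.ratings:
            return 0.0
        per_activity = [sum(s) / len(s) for s in self.ratings.values()]
        return sum(per_activity) / len(per_activity)

profile = StudentProfile("s1")
profile.add_peer_rating("assessment-1", 0.8)
profile.add_peer_rating("assessment-1", 0.6)
profile.add_peer_rating("assessment-2", 1.0)
print(round(profile.trustworthiness(), 2))  # 0.85
```

    Averaging per activity first, then across activities, keeps a heavily-rated assessment from dominating the profile; that is a design choice of the sketch, not a claim about the paper.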

    Trust beyond reputation: A computational trust model based on stereotypes

    Models of computational trust support users in taking decisions. They are commonly used to guide users' judgements in online auction sites, or to determine the quality of contributions in Web 2.0 sites. However, most existing systems require historical information about the past behavior of the specific agent being judged. In contrast, in real life, to anticipate and predict a stranger's actions in the absence of such behavioral history, we often use our "instinct": essentially stereotypes developed from our past interactions with other "similar" persons. In this paper, we propose StereoTrust, a computational trust model inspired by stereotypes as used in real life. A stereotype contains certain features of agents and an expected outcome of the transaction. When facing a stranger, an agent derives its trust by aggregating stereotypes matching the stranger's profile. Since stereotypes are formed locally, recommendations stem from the trustor's own personal experiences and perspective. Historical behavioral information, when available, can be used to refine the analysis. According to our experiments using an Epinions.com dataset, StereoTrust compares favorably with existing trust models that use different kinds of, and more complete, historical information.
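    The stereotype-aggregation idea in the abstract can be illustrated with a toy computation. This is not StereoTrust's actual algorithm: the stereotype encoding, the all-features matching rule, the unweighted averaging, and the fallback value are all assumptions made for the sketch.

```python
# Hypothetical stereotypes learned from past interactions:
# (features shared by a group of agents, expected transaction outcome in [0, 1]).
STEREOTYPES = [
    ({"domain": "electronics", "verified": True}, 0.9),
    ({"domain": "electronics", "verified": False}, 0.5),
    ({"domain": "books"}, 0.8),
]

def stereotype_trust(profile, stereotypes, default=0.5):
    """Trust in a stranger: average expected outcome of matching stereotypes.

    A stereotype matches when every one of its features appears in the
    stranger's profile. `default` is an assumed fallback for no matches.
    """
    matches = [outcome for features, outcome in stereotypes
               if all(profile.get(k) == v for k, v in features.items())]
    return sum(matches) / len(matches) if matches else default

stranger = {"domain": "electronics", "verified": True}
print(stereotype_trust(stranger, STEREOTYPES))  # 0.9
```

    The key property the sketch preserves is that no per-agent history is needed: trust for a complete stranger comes entirely from locally formed group-level experience.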

    Investing in Curation: A Shared Path to Sustainability

    No abstract available

    Towards an integrated model for citizen adoption of E-government services in developing countries: A Saudi Arabia case study

    This paper considers the challenges that face the widespread adoption of E-government in developing countries, using Saudi Arabia as our case study. E-government can be defined based on an existing set of requirements. In this paper we define E-government as a matrix of stakeholders: governments to governments, governments to business, and governments to citizens, using information and communications technology to deliver and consume services. E-government has been implemented for a considerable time in developed countries. However, E-government services still face many challenges to their implementation and general adoption in developing countries. Therefore, this paper presents an integrated model for ascertaining the intention to adopt E-government services and thereby aid governments in assessing what is required to increase adoption.

    TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large-scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking account of past interactions between agents; when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
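    The probabilistic trust calculation described above is commonly formulated with the beta distribution over counts of good and bad interaction outcomes. The sketch below shows that standard formulation, not TRAVOS itself; in particular, the simple summing of witness reports is an assumption, whereas TRAVOS additionally discounts reports it judges likely to be inaccurate.

```python
def beta_trust(successes, failures):
    """Expected probability of a good interaction under a Beta(s+1, f+1)
    posterior with a uniform prior -- the standard beta-family trust estimate."""
    return (successes + 1) / (successes + failures + 2)

def combined_trust(own, reports):
    """Pool direct experience with third-party reports by summing outcome
    counts (a simplification of how reputation is folded in)."""
    s = own[0] + sum(r[0] for r in reports)
    f = own[1] + sum(r[1] for r in reports)
    return beta_trust(s, f)

# Direct experience: 8 good and 2 bad interactions.
print(beta_trust(8, 2))  # 0.75

# Little direct experience (1 good), plus two witness reports.
print(combined_trust((1, 0), [(5, 1), (3, 1)]))
```

    Note that with no observations at all, `beta_trust(0, 0)` gives 0.5, i.e. complete uncertainty, which is why reputation reports matter when direct experience is scarce.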