
    A Graph-Based Approach to Address Trust and Reputation in Ubiquitous Networks

    The increasing popularity of virtual computing environments such as Cloud and Grid computing is helping to drive the realization of ubiquitous and pervasive computing. However, as computing becomes more entrenched in everyday life, the concepts of trust and risk become increasingly important. In this paper, we propose a new graph-based theoretical approach to address trust and reputation in complex ubiquitous networks. We formulate trust as a function of the quality of a task and the time required to authenticate an agent-to-agent relationship based on the Zero-Common Knowledge (ZCK) authentication scheme. This initial representation applies a graph-theoretic concept, accompanied by a mathematical formulation of trust metrics. The proposed approach increases agents' awareness and trustworthiness based on the values estimated for each requested task. We conclude by stating our plans for future work in this area.
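
    A minimal sketch of the general idea (not the authors' actual formulation): trust on an edge of an agent graph combines task quality with a penalty for slow ZCK authentication, and indirect trust is propagated along a path. The weights, decay constant and path rule below are illustrative assumptions.

```python
# Illustrative sketch only: graph-based trust from task quality and ZCK
# authentication time. Weights and functional form are assumptions.
import math
import networkx as nx

def edge_trust(quality: float, auth_time: float, alpha: float = 0.7, tau: float = 10.0) -> float:
    """Combine task quality (0..1) with a factor that decays as the time
    (seconds) needed to complete ZCK authentication grows."""
    time_factor = math.exp(-auth_time / tau)          # fast authentication -> closer to 1
    return alpha * quality + (1 - alpha) * time_factor

G = nx.DiGraph()
G.add_edge("agent_a", "agent_b", trust=edge_trust(quality=0.9, auth_time=2.5))
G.add_edge("agent_b", "agent_c", trust=edge_trust(quality=0.6, auth_time=20.0))

# One possible path-based reputation estimate: multiply trust along the path.
path = nx.shortest_path(G, "agent_a", "agent_c")
reputation = math.prod(G[u][v]["trust"] for u, v in zip(path, path[1:]))
print(f"indirect trust a->c: {reputation:.3f}")
```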

    A high-level semiotic trust agent scoring model for collaborative virtual organisations

    In this paper, we describe how a semiotic ladder, together with a supportive trust agent, can be used to address “soft” trust issues in the context of collaborative Virtual Organisations (VO). The intention is to offer all parties better support for trust (as reputation) management, including the reduction of risk and improved reliability of VO e-services. The semiotic ladder is intended to support the VO e-service lifecycle through the articulation of e-trust at various levels of system abstraction, including trust as measurable confidence. At the social level, reputation and reliability measures of e-trust are the relevant dimensions as regards the choice of VO partner, and are also relevant to the negotiation of service level agreements between the VO partners. By contrast, at the lower levels of the trust ladder, e-trust measures typically address the degree to which secure sign-on and message-level security conform to various tangible technological security protocols. The novel trust agent provides the e-service consumer with an objective measure of the trustworthiness of the e-service at run-time, just prior to its actual consumption. Specifically, the VO e-service consumer's confidence level is informed by leveraging third-party objective evidence. This evidence comprises a set of Corporate Governance (CG) scores, which are used as a trust proxy for the “real” owner of the VO. The inherent limitations associated with the use of CG scores are duly acknowledged.
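
    As a rough illustration of the run-time scoring step, the sketch below aggregates third-party CG scores into a single confidence figure for an e-service. The dimension names, weights and threshold are assumptions for illustration, not the paper's actual semiotic model.

```python
# Illustrative sketch only: CG scores as a "soft" trust proxy for the real
# owner of a VO, reported just before the e-service is consumed.
from dataclasses import dataclass

@dataclass
class CGScore:
    dimension: str      # e.g. "board accountability" (hypothetical dimension names)
    value: float        # normalised 0..1
    weight: float       # relative importance chosen by the trust agent

def service_confidence(scores: list[CGScore]) -> float:
    """Weighted average of CG scores used as a run-time trust indicator."""
    total_weight = sum(s.weight for s in scores)
    return sum(s.value * s.weight for s in scores) / total_weight

scores = [
    CGScore("board accountability", 0.82, 0.4),
    CGScore("audit and disclosure", 0.74, 0.35),
    CGScore("shareholder rights", 0.65, 0.25),
]
confidence = service_confidence(scores)
print("consume e-service" if confidence >= 0.7 else "flag for review", round(confidence, 2))
```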

    Data centric trust evaluation and prediction framework for IOT

    © 2017 ITU. Applying trust principles in the Internet of Things (IoT) makes it possible to provide more trustworthy services to the corresponding stakeholders. The most common method of assessing trust in IoT applications is to estimate the trust level of the end entities (entity-centric) relative to the trustor. In these systems, the trust level of the data is assumed to be the same as the trust level of the data source. However, most IoT-based systems are data-centric and operate in dynamic environments, which require immediate actions without waiting for a trust report from end entities. We address this challenge by extending our previous proposals on trust establishment for entities, based on their reputation, experience and knowledge, to trust estimation for data items [1-3]. First, we present a hybrid trust framework for evaluating both data trust and entity trust, which can be developed into a standard for a future data-driven society. The modules inside the proposed framework, including data trust metric extraction, data trust aggregation, evaluation and prediction, are elaborated. Finally, a possible design model is described to implement the proposed ideas.
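
    The following is a minimal sketch of the hybrid idea, i.e. judging data items on their own properties in addition to the trust of their source, with a simple smoothing step standing in for prediction. Feature names, weights and the smoothing rule are assumptions, not the framework's actual modules.

```python
# Illustrative sketch only: hybrid entity trust + data trust, with exponential
# smoothing as a stand-in for the framework's prediction module.
def entity_trust(reputation: float, experience: float, knowledge: float) -> float:
    # Equal weighting assumed; the authors' earlier work defines its own combination.
    return (reputation + experience + knowledge) / 3.0

def data_trust(source_trust: float, freshness: float, sensor_agreement: float) -> float:
    # Data items inherit some trust from their source but are also judged on
    # their own properties (how recent they are, how well peer readings agree).
    return 0.4 * source_trust + 0.3 * freshness + 0.3 * sensor_agreement

def predict_trust(history: list[float], alpha: float = 0.5) -> float:
    """Exponentially smoothed estimate of the next data-trust value."""
    estimate = history[0]
    for value in history[1:]:
        estimate = alpha * value + (1 - alpha) * estimate
    return estimate

source = entity_trust(reputation=0.8, experience=0.7, knowledge=0.9)
readings = [data_trust(source, f, a) for f, a in [(0.9, 0.8), (0.7, 0.9), (0.95, 0.85)]]
print(round(predict_trust(readings), 3))
```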

    On the Simulation of Global Reputation Systems

    Reputation systems evolve as a mechanism to build trust in virtual communities. In this paper we evaluate different metrics for computing reputation in multi-agent systems. We present a formal model for describing metrics in reputation systems and show how different well-known global reputation metrics are expressed by it. Based on the model, a generic simulation framework for reputation metrics was implemented. We used our simulation framework to compare different global reputation systems to find their strengths and weaknesses. The strength of a metric is measured by its resistance against different threat models, i.e. different types of hostile agents. Based on our results we propose a new metric for reputation systems.
    Keywords: Reputation System, Trust, Formalization, Simulation
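
    To make the simulation idea concrete, here is a tiny example in the same spirit: a generic global metric (a plain mean of ratings) evaluated under a "badmouthing" threat model, with the metric's strength measured as its deviation from the agents' true quality. The metric, threat model and parameters are generic examples, not the paper's specific ones.

```python
# Illustrative sketch only: simulating a global reputation metric against
# hostile (badmouthing) raters.
import random

random.seed(0)
NUM_AGENTS, NUM_LIARS, RATINGS_PER_PAIR = 20, 5, 50

true_quality = {a: random.uniform(0.2, 0.95) for a in range(NUM_AGENTS)}
liars = set(range(NUM_LIARS))  # hostile raters always report the minimum rating

def rate(rater: int, target: int) -> float:
    if rater in liars:
        return 0.0                                   # badmouthing attack
    return min(1.0, max(0.0, random.gauss(true_quality[target], 0.1)))

def global_reputation(target: int) -> float:
    raters = [r for r in range(NUM_AGENTS) if r != target]
    total = sum(rate(r, target) for r in raters for _ in range(RATINGS_PER_PAIR))
    return total / (len(raters) * RATINGS_PER_PAIR)

# Strength of the metric ~ how little computed reputation drifts from true quality.
error = sum(abs(global_reputation(a) - true_quality[a]) for a in true_quality) / NUM_AGENTS
print(f"mean absolute error under badmouthing: {error:.3f}")
```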

    Reputation Agent: Prompting Fair Reviews in Gig Markets

    Our study presents a new tool, Reputation Agent, to promote fairer reviews from requesters (employers or customers) on gig markets. Unfair reviews, created when requesters consider factors outside of a worker's control, are known to plague gig workers and can result in lost job opportunities and even termination from the marketplace. Our tool leverages machine learning to implement an intelligent interface that: (1) uses deep learning to automatically detect when an individual has included unfair factors in her review (factors outside the worker's control per the policies of the market); and (2) prompts the individual to reconsider her review if she has incorporated unfair factors. To study the effectiveness of Reputation Agent, we conducted a controlled experiment over different gig markets. Our experiment illustrates that, across markets, Reputation Agent, in contrast with traditional approaches, motivates requesters to review gig workers' performance more fairly. We discuss how tools that bring more transparency to employers about the policies of a gig market can help build empathy, resulting in reasoned discussions around potential injustices towards workers generated by these interfaces. Our vision is that with tools that promote truth and transparency we can bring fairer treatment to gig workers.
    Comment: 12 pages, 5 figures, The Web Conference 2020, ACM WWW 2020
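
    The paper itself trains a deep learning detector per market; the toy stand-in below only illustrates the interaction pattern (detect unfair factors, then prompt the requester). The phrase list and policy reasons are invented for illustration.

```python
# Toy stand-in for the detection + prompting steps of a fairness-aware review
# interface. The real tool uses a trained deep learning model per market.
UNFAIR_FACTORS = {
    "traffic": "road/traffic conditions",
    "weather": "weather",
    "app crashed": "platform software failure",
    "restaurant was slow": "third-party delay",
    "surge price": "platform pricing",
}

def flag_unfair_factors(review: str) -> list[str]:
    """Return policy reasons matched in the review text (keyword matching only)."""
    text = review.lower()
    return [reason for phrase, reason in UNFAIR_FACTORS.items() if phrase in text]

review = "Driver was polite, but traffic made the ride 20 minutes late. 2 stars."
issues = flag_unfair_factors(review)
if issues:
    print("Before posting, consider that these factors are outside the worker's control:")
    for reason in issues:
        print(" -", reason)
```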

    Customer-engineer relationship management for converged ICT service companies

    Thanks to the advent of converged communications services (often referred to as ‘triple play’), the next generation Service Engineer will need radically different skills, processes and tools from today’s counterpart, in order to meet the challenges of installing and maintaining services based on multi-vendor software and hardware components in an IP-based network environment. The converged services environment is likely to be ‘smart’ and support flexible and dynamic interoperability between appliances and computing devices. These radical changes in the working environment will inevitably force managers to rethink the role of Service Engineers in relation to customer relationship management. This paper aims to identify requirements for an information system to support converged communications service engineers with regard to customer-engineer relationship management. Furthermore, an architecture for such a system is proposed, and how it meets these requirements is discussed.

    Attack-Surface Metrics, OSSTMM and Common Criteria Based Approach to “Composable Security” in Complex Systems

    In recent studies on Complex Systems and Systems-of-Systems theory, a huge effort has been put into coping with behavioral problems, i.e. the possibility of controlling a desired overall or end-to-end behavior by acting on the individual elements that constitute the system itself. This problem is particularly important in “SMART” environments, where the huge number of devices, their significant computational capabilities as well as their tight interconnection produce a complex architecture for which it is difficult to predict (and control) a desired behavior; furthermore, if the scenario is allowed to dynamically evolve through the modification of both topology and subsystem composition, then the control problem becomes a real challenge. In this perspective, the purpose of this paper is to cope with a specific class of control problems in complex systems, the “composability of security functionalities”, recently introduced by European-funded research through the pSHIELD and nSHIELD projects (ARTEMIS-JU programme). In a nutshell, the objective of this research is to define a control framework that, given a target security level for a specific application scenario, is able to i) discover the system elements, ii) quantify the security level of each element as well as its contribution to the security of the overall system, and iii) compute the control action to be applied to such elements to reach the security target. The main innovations proposed by the authors are: i) the definition of a comprehensive methodology to quantify the security of a generic system independently from the technology and the environment, and ii) the integration of the derived metrics into a closed-loop scheme that allows real-time control of the system. The solution described in this work builds on the proof-of-concept performed in the early phase of the pSHIELD research and enriches it through an innovative metric with a sound foundation, able to potentially cope with any kind of application scenario (railways, automotive, manufacturing, ...).
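
    The sketch below illustrates only the closed-loop shape of such a framework under simplifying assumptions: overall security is taken as the minimum over the elements (weakest-link composition) and each control step hardens the weakest element. The real work derives its metric from attack-surface, OSSTMM and Common Criteria measures; the composition rule and numbers here are illustrative assumptions.

```python
# Illustrative sketch only: a closed-loop scheme driving a composed security
# level towards a target by acting on individual elements.
def composed_security(levels: dict[str, float]) -> float:
    return min(levels.values())                      # weakest-link composition (assumption)

def control_step(levels: dict[str, float], target: float, gain: float = 0.5) -> dict[str, float]:
    """One iteration of the loop: measure, compare with the target, act on the
    element contributing least to overall security."""
    weakest = min(levels, key=levels.get)
    error = target - levels[weakest]
    if error > 0:
        levels[weakest] = min(1.0, levels[weakest] + gain * error)
    return levels

levels = {"gateway": 0.55, "sensor_node": 0.40, "backend": 0.80}
target = 0.70
for _ in range(10):
    if composed_security(levels) >= target:
        break
    levels = control_step(levels, target)
print(levels, round(composed_security(levels), 3))
```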

    Reinforcement Learning for UAV Attitude Control

    Autopilot systems are typically composed of an "inner loop" providing stability and control, while an "outer loop" is responsible for mission-level objectives, e.g. way-point navigation. Autopilot systems for UAVs are predominantly implemented using Proportional Integral Derivative (PID) control systems, which have demonstrated exceptional performance in stable environments. However, more sophisticated control is required to operate in unpredictable and harsh environments. Intelligent flight control systems are an active area of research addressing the limitations of PID control, most recently through the use of reinforcement learning (RL), which has had success in other applications such as robotics. However, previous work has focused primarily on using RL at the mission-level controller. In this work, we investigate the performance and accuracy of the inner control loop providing attitude control when using intelligent flight control systems trained with the state-of-the-art RL algorithms Deep Deterministic Policy Gradient (DDPG), Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO). To investigate these unknowns, we first developed an open-source high-fidelity simulation environment to train a flight controller for attitude control of a quadrotor through RL. We then use our environment to compare their performance to that of a PID controller to identify whether using RL is appropriate in high-precision, time-critical flight control.
    Comment: 13 pages, 9 figures
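
    For context, a minimal sketch of the kind of PID inner-loop baseline the RL controllers are compared against: a single-axis attitude-rate loop acting on a crude first-order rotational model. Gains, model constants and the time step are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch only: PID attitude-rate control of a toy quadrotor axis.
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        """Standard PID law on the tracking error."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.05)
rate, setpoint, dt = 0.0, 1.0, 0.01          # rad/s roll-rate step command
for step in range(300):
    torque = pid.update(setpoint - rate, dt)
    rate += (torque - 0.8 * rate) * dt       # toy first-order rotational dynamics
print(f"roll rate after 3 s: {rate:.3f} rad/s (target {setpoint})")
```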

    Human-agent collectives

    We live in a world where a host of computer systems, distributed throughout our physical and information environments, are increasingly implicated in our everyday actions. Computer technologies impact all aspects of our lives and our relationship with the digital has fundamentally altered as computers have moved out of the workplace and away from the desktop. Networked computers, tablets, phones and personal devices are now commonplace, as are an increasingly diverse set of digital devices built into the world around us. Data and information are generated at unprecedented speeds and volumes from an increasingly diverse range of sources, and are then combined in unforeseen ways, limited only by human imagination. People’s activities and collaborations are becoming ever more dependent upon and intertwined with this ubiquitous information substrate. As these trends continue apace, it is becoming apparent that many endeavours involve the symbiotic interleaving of humans and computers. Moreover, the emergence of these close-knit partnerships is inducing profound change. Rather than issuing instructions to passive machines that wait until they are asked before doing anything, we will work in tandem with highly inter-connected computational components that act autonomously and intelligently (aka agents). As a consequence, greater attention needs to be given to the balance of control between people and machines. In many situations, humans will be in charge and agents will predominantly act in a supporting role. In other cases, however, the agents will be in control and humans will play the supporting role. We term this emerging class of systems human-agent collectives (HACs) to reflect the close partnership and the flexible social interactions between the humans and the computers. As well as exhibiting increased autonomy, such systems will be inherently open and social. This means the participants will need to continually and flexibly establish and manage a range of social relationships. Thus, depending on the task at hand, different constellations of people, resources, and information will need to come together, operate in a coordinated fashion, and then disband. The openness and presence of many distinct stakeholders means participation will be motivated by a broad range of incentives rather than diktat. This article outlines the key research challenges involved in developing a comprehensive understanding of HACs. To illuminate this agenda, a nascent application in the domain of disaster response is presented.