
    Trust models in ubiquitous computing

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. On this basis, we argue for the need for more formal and foundational trust models.

    Privacy, security, and trust issues in smart environments

    Recent advances in networking, handheld computing and sensor technologies have driven forward research towards the realisation of Mark Weiser's dream of calm and ubiquitous computing (variously called pervasive computing, ambient computing, active spaces, the disappearing computer or context-aware computing). In turn, this has led to the emergence of smart environments as one significant facet of research in this domain. A smart environment, or space, is a region of the real world that is extensively equipped with sensors, actuators and computing components [1]. In effect, the smart space becomes part of a larger information system: all actions within the space potentially affect the underlying computer applications, which may themselves affect the space through the actuators. Such smart environments have tremendous potential within many application areas to improve the utility of a space. Consider the potential offered by a smart environment that prolongs the time an elderly or infirm person can live an independent life, or one that supports vicarious learning.

    Towards self-protecting ubiquitous systems: monitoring trust-based interactions

    The requirement for spontaneous interaction in ubiquitous computing creates security issues over and above those present in other areas of computing, rendering traditional approaches ineffective. As a result, to support secure collaborations, entities must implement self-protective measures. Trust management is well suited to this task, as reasoning about future interactions is based on the outcome of past ones. This requires monitoring of interactions as they take place. Such monitoring also allows us to take corrective action when interactions are proceeding unsatisfactorily. In this vein, we first present a trust-based model of interaction based on event structures. We then describe our ongoing work on a monitor architecture which enables self-protective actions to be carried out at critical points during principal interaction. Finally, we discuss some potential directions for future work.
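
    The monitoring idea this abstract describes (observing an interaction as it unfolds and taking corrective action when it proceeds unsatisfactorily) can be sketched roughly as follows. The event names, the trust-update rule, and the abort threshold are illustrative assumptions, not details from the paper:

```python
# Illustrative sketch only: watch the outcome of each event in an
# ongoing interaction, adjust a running trust estimate, and abort
# (a self-protective action) when trust falls below a threshold.
# The update rule and default parameters are assumptions.

def monitor(events, trust=0.5, threshold=0.3, step=0.1):
    """Process (event_name, outcome_ok) pairs for one interaction.

    Returns (final_trust, aborted); aborted is True when the monitor
    cut the interaction short at some critical point.
    """
    for name, ok in events:
        # Reward satisfactory outcomes, penalise unsatisfactory ones.
        trust = min(1.0, trust + step) if ok else max(0.0, trust - step)
        if trust < threshold:
            return trust, True  # corrective action: stop interacting
    return trust, False

# A run where repeated failures trigger the self-protective abort.
trust, aborted = monitor([("handshake", True),
                          ("exchange", False),
                          ("exchange", False),
                          ("exchange", False),
                          ("exchange", False)])
```

    A real monitor would track structured event histories (the paper bases its model on event structures) rather than a single scalar, but the abort-at-a-critical-point shape is the same.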

    Trust-based model for privacy control in context-aware systems

    In context-aware systems, there is a high demand for privacy solutions that protect users when they interact and exchange personal information. Privacy in this context encompasses reasoning about the trust and risk involved in interactions between users. Trust, therefore, controls the amount of information that can be revealed, and risk analysis allows us to evaluate the expected benefit that would motivate users to participate in these interactions. In this paper, we propose a trust-based model for privacy control in context-aware systems that incorporates both trust and risk. This approach clarifies how to reason about trust and risk when designing and implementing context-aware systems that provide mechanisms to protect users' privacy. Our approach also includes experiential learning mechanisms that use past observations to reach better decisions in future interactions. The model outlined in this paper is an attempt to address the concerns of privacy control in context-aware systems. To validate the model, we are currently applying it to a context-aware system that tracks users' location. We hope to report on the performance evaluation and implementation experience in the near future.
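
    The trust-and-risk combination described in this abstract can be illustrated with a small sketch. The function name, the threshold rule, and the idea of returning a disclosure fraction are assumptions for illustration; the paper's actual model is not reproduced here:

```python
# Illustrative sketch (not the paper's model): trust controls how much
# personal context is revealed, while risk analysis weighs the expected
# benefit of the interaction against the exposure it entails.

def disclosure_level(trust, risk, expected_benefit):
    """Return the fraction of requested context to reveal, in [0, 1].

    All three inputs are assumed to lie in [0, 1]; the names and the
    threshold rule below are illustrative assumptions.
    """
    if expected_benefit <= risk:
        return 0.0  # interaction not worth the exposure: reveal nothing
    # Scale willingness to share by trust and by the benefit margin.
    return min(1.0, trust * (expected_benefit - risk) / expected_benefit)

# High-trust, low-risk request: reveal most of the requested context.
level = disclosure_level(trust=0.9, risk=0.2, expected_benefit=0.8)
```

    The point of the sketch is the structure the abstract names: risk gates whether to interact at all, and trust then scales how much is disclosed.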

    Architecture and Implementation of a Trust Model for Pervasive Applications

    Collaborative effort to share resources is a significant feature of pervasive computing environments. To achieve secure service discovery and sharing, and to distinguish between malevolent and benevolent entities, trust models must be defined. It is critical to estimate a device's initial trust value because of the transient nature of pervasive smart spaces; however, most prior research on trust models for pervasive applications used the notion of constant initial trust assignment. In this paper, we design and implement a trust model called DIRT. We categorize services into different security levels and, depending on the service requester's context information, we calculate the initial trust value. A trust value is assigned for each device and for each service. Our overall trust estimate for a service depends on the recommendations of neighbouring devices, inference from other service-trust values for that device, and direct trust experience. We provide an extensive survey of related work, and we demonstrate the distinguishing features of our proposed model with respect to existing models. We implement a healthcare-monitoring application and a location-based service prototype over DIRT. We also provide a performance analysis of the model with respect to some of its important characteristics, tested in various scenarios.
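
    The three evidence sources the abstract names (direct experience, recommendations from neighbouring devices, and inference from other service-trust values for the same device) suggest a weighted combination. The sketch below assumes a simple weighted average; the weights and the formula are illustrative, not DIRT's published computation:

```python
# Illustrative sketch of combining DIRT's three evidence sources into a
# per-device, per-service trust value. The weighted average and the
# default weights are assumptions, not the model's actual formula.

def estimate_trust(direct, recommendations, inferred,
                   w_direct=0.5, w_rec=0.3, w_inf=0.2):
    """Return a trust value in [0, 1] for one (device, service) pair."""
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0
    return (w_direct * direct
            + w_rec * mean(recommendations)  # neighbouring devices
            + w_inf * mean(inferred))        # other services, same device

# Strong direct experience, mixed recommendations from neighbours.
t = estimate_trust(direct=0.9, recommendations=[0.6, 0.8], inferred=[0.7])
```

    Context-dependent initial trust, the paper's distinguishing feature, would replace the constant defaults here with a value computed from the requester's context and the service's security level.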

    Context for Ubiquitous Data Management

    In response to the advance of ubiquitous computing technologies, we believe that for computer systems to be ubiquitous, they must be context-aware. In this paper, we address the impact of context-awareness on ubiquitous data management. To do this, we review different characteristics of context in order to develop a clear understanding of context, as well as its implications and requirements for context-aware data management. References to recent research activities and applicable techniques are also provided.

    Security models for trusting network appliances

    A significant characteristic of pervasive computing is the need for secure interactions between highly mobile entities and the services in their environment. Moreover, these decentralised systems are also characterised by partial views over the state of the global environment, implying that we cannot guarantee verification of the properties of a mobile entity entering an unfamiliar domain. Security in this context encompasses both the need for cryptographic security and the need for trust, on the part of both parties, that the interaction is functioning as expected. In this paper we make the broad assumption that trust and cryptographic security can be considered as orthogonal concerns (i.e. cryptographic measures do not ensure transmission of correct information). We assume the existence of reliable encryption techniques and focus on the characteristics of a model that supports the management of the trust relationships between two devices during ad-hoc interactions.

    The case of online trust

    The original publication is available at www.springerlink.com. Copyright Springer. This paper contributes to the debate on online trust by addressing the problem of whether an online environment satisfies the necessary conditions for the emergence of trust. The paper defends the thesis that online environments can foster trust, and it does so in three steps. Firstly, the arguments proposed by the detractors of online trust are presented and analysed. Secondly, it is argued that trust can emerge in uncertain and risky environments, and that it is possible to trust online identities when they are diachronic and sufficient data are available to assess their reputation. Finally, a definition of trust as a second-order property of a first-order relation is endorsed in order to present a new definition of online trust. According to this definition, online trust is an occurrence of trust that specifically qualifies the relation of communication ongoing among individuals in digital environments. On the basis of this analysis, the paper concludes by arguing that online trust promotes the emergence of social behaviours rewarding honest and transparent communications. Peer reviewed.

    Is a Semantic Web Agent a Knowledge-Savvy Agent?

    The issue of knowledge sharing has permeated the field of distributed AI and, in particular, its successor, multiagent systems. Through the years, many research and engineering efforts have tackled the problem of encoding and sharing knowledge without the need for a single, centralized knowledge base. However, the emergence of modern computing paradigms such as distributed, open systems has highlighted the importance of sharing distributed and heterogeneous knowledge at a larger scale, possibly at the scale of the Internet. The very characteristics that define the Semantic Web (dynamic, distributed, incomplete, and uncertain knowledge) suggest the need for autonomy in distributed software systems. Semantic Web research promises more than mere management of ontologies and data through the definition of machine-understandable languages. The openness and decentralization introduced by multiagent systems and service-oriented architectures give rise to new knowledge management models, in which we cannot make a priori assumptions about the type of interaction an agent or a service may be engaged in, nor about the message protocols and vocabulary used. We therefore discuss the problem of knowledge management for open multiagent systems, and highlight a number of challenges relating to the exchange and evolution of knowledge in open environments, which are pertinent to both the Semantic Web and multiagent systems communities alike.