355,702 research outputs found

    Modeling and verification of trust and reputation systems

    Trust is a basic soft-security condition influencing interactive and cooperative behaviors in online communities. Several systems and models have been proposed to enforce trust and to investigate its role in fostering successful cooperation while minimizing selfishness and failure. However, analyzing their effectiveness and efficiency is a challenging issue. This paper provides a formal approach to the design and verification of trust infrastructures used in software architectures and computer networks supporting online communities. The proposed framework encompasses a process calculus of concurrent systems, a temporal logic for trust, and model-checking techniques. Both functional and quantitative aspects can be modeled and analyzed, and several types of trust models can be integrated.
    Alessandro Aldini
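    As an illustrative sketch of the kind of quantitative trust model such a verification framework could integrate (this is a common beta-reputation heuristic, not the paper's own calculus), a peer's trust score can be computed from its interaction history:

    ```python
    # Illustrative sketch, not the paper's framework: a beta-reputation
    # trust score, one common quantitative trust model. The expected
    # trust value is E = (p + 1) / (p + n + 2), where p and n count
    # positive and negative interactions.

    def beta_trust(positive: int, negative: int) -> float:
        """Expected trust from counts of good and bad interactions."""
        return (positive + 1) / (positive + negative + 2)

    # A peer with 8 successful and 2 failed interactions:
    score = beta_trust(8, 2)
    print(score)  # 0.75
    ```

    With no history at all, the score starts at the neutral value 0.5, which is the kind of quantitative property a model checker could then verify against a temporal-logic specification.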

    Consumer Participation in Using Online Product Recommendation Agents: Effects of Trust, Perceived Control, and Perceived Risk in Providing Personal Information

    Online product recommendation agents are gaining greater strategic importance as an innovative technology for delivering value-added services to consumers. Yet the active role of consumers as participants in using this technology is not well understood. This dissertation builds on the technology-based self-service (TBSS) literature, the consumer participation literature, the service-dominant logic, and the trust literature on recommendation agents to develop a research framework that explains the role of consumer participation in using online product recommendation agents. The objective of this dissertation is threefold: (1) to examine the effects of consumer participation and privacy/security disclosures in using online product recommendation agents, (2) to explore the mediating effects of trust, perceived control, and perceived risk in providing personal information, and (3) to test the trust transference process within the current research context. A field experiment using existing recommendation agents was conducted over multiple sessions in computer labs to collect data from university students, a representative sample of the online population. 67 undergraduate students participated in the pretest, and 117 participated in the main study. Structural equation modeling with AMOS 7.0 was used to test the research hypotheses. The results showed that consumer participation was a contributing factor in building consumers' trust in recommendation agents and that privacy/security disclosures decreased consumers' perceived risk in providing personal information. Moreover, the trust transference process was validated among the three different types of consumer trust within the agent-mediated environment, that is, trust in the recommendation agent, trust in the Web site, and trust in recommendations. Finally, perceived control was shown to be a salient factor in increasing consumers' trust and motivating consumers to reuse the recommendation technology.

    How the Role-Based Trust Management Can Be Applied to Wireless Sensor Networks, Journal of Telecommunications and Information Technology, 2012, no. 4

    Trust plays an important role in human life, which is why researchers have been studying it for a long time. It allows us to delegate tasks and decisions to an appropriate person. Trust between humans has been studied in the social sciences, and it has also been analyzed in economic transactions. Many computer scientists from different areas, such as security, the semantic web, electronic commerce, and social networks, have tried to transfer this concept to their domains. Trust is an essential factor in any kind of network, whether social or computer. Wireless sensor networks (WSN) are characterized by severely constrained resources: limited power supplies and energy, low transmission bandwidth, and small memory sizes. Security techniques used in traditional wired networks therefore cannot be adopted directly. Some effort has been expended in this field, but the concept of trust is defined in slightly different ways by different researchers. In this paper we show how the family of Role-based Trust management languages (RT) can be used in WSN. RT is used for representing security policies and credentials in decentralized, distributed access-control systems. A credential provides information about the privileges of users and the security policies issued by one or more trusted authorities.
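    The core mechanism of role-based trust management, resolving which principals belong to a role from a set of credentials, can be sketched as follows. This is a hypothetical mini-resolver in the spirit of the simplest RT fragment (simple membership plus delegation), not the RT language implementation, and all names in it are invented for illustration:

    ```python
    # Hypothetical mini-resolver for RT-style credentials (a sketch of
    # the idea, not the actual RT languages). Two credential forms:
    #   ("member",  (A, r), X)       # A.r <- X   (X is a member of A.r)
    #   ("include", (A, r), (B, s))  # A.r <- B.s (delegation to role B.s)

    def members(target, credentials):
        """Compute the members of `target` by fixpoint iteration."""
        member_of = {}  # role -> set of principals
        changed = True
        while changed:
            changed = False
            for kind, head, body in credentials:
                s = member_of.setdefault(head, set())
                new = {body} if kind == "member" else member_of.get(body, set())
                if not new <= s:
                    s |= new
                    changed = True
        return member_of.get(target, set())

    # Example: the sink trusts every node vouched for by SensorNet.
    creds = [
        ("member", ("SensorNet", "node"), "n1"),
        ("include", ("Sink", "trusted"), ("SensorNet", "node")),
    ]
    print(members(("Sink", "trusted"), creds))  # {'n1'}
    ```

    The fixpoint loop terminates because role memberships only grow and are bounded by the finite set of principals mentioned in the credentials, a property that matters on memory-constrained sensor nodes.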

    Computer Modeling of Personal Autonomy and Legal Equilibrium

    Empirical studies of personal autonomy as a state and status of individual freedom, security, and capacity to control one's own life, particularly through independent legal reasoning, need dependable models and methods of precise computation. Three simple models of personal autonomy are proposed. The linear model of personal autonomy displays a relation between freedom, as an amount of the agent's action, and responsibility, as an amount of legal reaction, and shows legal equilibrium: the balance of rights and duties needed for sustainable development of any community. The model algorithm of a judge's personal autonomy shows that judicial decision making can be partly automated, like other human jobs. A model of machine learning for an autonomous lawyer robot under an operating-system constitution illustrates the idea of robot rights. Robots, i.e., material and virtual mechanisms serving people, deserve some legal guarantees of their rights, such as the rights to exist, to function properly, and to be protected by the law. Robots are in fact already protected, as any human property, by a wide scope of laws, starting with Article 17 of the Universal Declaration of Human Rights, but the current level of human trust in autonomous devices and their role in contemporary society calls for stronger legislation to guarantee robot rights.
    Comment: 8 pages, 6 figures, presented at Computer Science On-line Conference 201
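    A minimal reading of the linear model described above can be sketched as follows. The proportionality coefficient, the function names, and the equilibrium test are all assumptions made for illustration, not the paper's actual formulation:

    ```python
    # Highly simplified sketch of a linear freedom/responsibility model;
    # the coefficient k and the equilibrium tolerance are assumed for
    # illustration, not taken from the paper.

    def legal_reaction(freedom: float, k: float = 1.0) -> float:
        """Responsibility grows linearly with the amount of action."""
        return k * freedom

    def in_equilibrium(rights: float, duties: float, tol: float = 1e-9) -> bool:
        """Legal equilibrium: rights and duties balance."""
        return abs(rights - duties) <= tol

    # An agent's rights match the legal reaction to its actions:
    print(in_equilibrium(3.0, legal_reaction(3.0)))  # True
    ```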

    A framework for decentralised trust reasoning.

    Recent developments in the pervasiveness and mobility of computer systems in open computer networks have invalidated traditional assumptions about trust in computer communications security. In a fundamentally decentralised and open network such as the Internet, the responsibility for answering the question of whether one can trust another entity on the network now lies with the individual agent, rather than being a decision governed a priori by a central authority. Online agents represent users' digital identities. Thus, we believe that it is reasonable to explore social models of trust for secure agent communication. The thesis of this work is that it is feasible to design and formalise a dynamic model of trust for secure communications based on the properties of social trust. In showing this, we divide the work into two phases. The aim of the first is to understand the properties and dynamics of social trust and its role in computer systems. To this end, a thorough review of trust, and its supporting concept, reputation, in the social sciences was carried out. We followed this with a rigorous analysis of current trust models, comparing their properties with those of social trust. We found that current models were designed on an ad hoc basis with regard to trust properties. The aim of the second phase is to build a framework for trust reasoning in distributed systems. Knowledge from the previous phase is used to design and formally specify, in Z, a computational trust model. A simple model for the communication of recommendations, the recommendation protocol, is also outlined to complement the model. Finally, an analysis of possible threats to the model is carried out. Elements of this work have been incorporated into Sun's JXTA framework and Ericsson Research's prototype trust model.
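    The idea behind a recommendation protocol, deriving trust in a stranger from a recommender's report, can be illustrated with a common social-trust discounting heuristic. This is a generic sketch, not the thesis's formal Z specification, and the names and values are invented:

    ```python
    # Illustrative sketch of recommendation-based trust derivation (a
    # common discounting heuristic, not the thesis's formal model):
    # trust derived via a recommender is weighted by how much we trust
    # the recommender itself. Trust values lie in [0, 1].

    def derived_trust(trust_in_recommender: float, recommended_value: float) -> float:
        """Discount a recommendation by our trust in its source."""
        assert 0.0 <= trust_in_recommender <= 1.0
        assert 0.0 <= recommended_value <= 1.0
        return trust_in_recommender * recommended_value

    # We trust the recommender at 0.8; it reports 0.5 trust in a stranger:
    print(derived_trust(0.8, 0.5))  # 0.4
    ```

    Discounting captures the intuition that a recommendation is never worth more than our confidence in the agent making it, which is one reason a priori central authorities can be dispensed with.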

    Mitigating Insider Sabotage and Espionage: A Review of the United States Air Force's Current Posture

    The security threat from malicious insiders affects all organizations. Mitigating this problem is quite difficult because (1) there is no definitive profile for malicious insiders, (2) organizations have placed trust in these individuals, and (3) insiders have a vast knowledge of their organization's personnel, security policies, and information systems. The purpose of this research is to analyze to what extent United States Air Force (USAF) security policies address the insider threat problem. The policies are reviewed in terms of how well they align with best practices published by the Carnegie Mellon University Computer Emergency Readiness Team and with additional factors this research deems important, including motivations, organizational priorities, and social networks. Based on the findings of the policy review, this research offers actionable recommendations that the USAF could implement in order to better prevent, detect, and respond to malicious insider attacks. The most important course of action is to better utilize its workforce. All personnel should be trained on observable behaviors that can be precursors to malicious activity. Additionally, supervisors need to be empowered as the first line of defense, monitoring for stress, unmet expectations, and disgruntlement. Finally, this research proposes three new best practices regarding (1) screening for prior concerning behaviors, predispositions, and technical incidents, (2) issuing sanctions for inappropriate technical acts, and (3) requiring supervisors to take a proactive role.