249 research outputs found

    Man vs machine – Detecting deception in online reviews

    This study focused on three main research objectives: analyzing the methods used to identify deceptive online consumer reviews, evaluating the insights provided by multi-method automated approaches based on individual and aggregated review data, and formulating a review-interpretation framework for identifying deception. The theoretical framework is based on two critical deception-related models: information manipulation theory and self-presentation theory. The findings confirm that the various automated text-analysis methods are interchangeable in the insights they draw about review characteristics, and underline their significant complementary aspects. An integrative multi-method model that approaches the data at the individual and aggregate levels provides richer insights into the quantity and quality of review information, its sentiment, cues about its relevance and contextual information, perceptual aspects, and cognitive material.
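    The individual-plus-aggregate analysis described above can be illustrated with a small sketch. This is not the study's actual model; the cue names (review length, first-person pronoun ratio, exclamation count) and the aggregation by mean are illustrative assumptions about what such a pipeline might compute.

    ```python
    import statistics

    FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

    def review_cues(text: str) -> dict:
        """Extract simple deception-related linguistic cues from one review."""
        words = text.lower().split()
        return {
            "length": len(words),
            "first_person_ratio": sum(w in FIRST_PERSON for w in words) / max(len(words), 1),
            "exclamations": text.count("!"),
        }

    def aggregate_cues(reviews: list[str]) -> dict:
        """Aggregate cue values over all reviews by one reviewer (the 'aggregate level')."""
        cues = [review_cues(r) for r in reviews]
        return {key: statistics.mean(c[key] for c in cues) for key in cues[0]}

    reviews = ["I loved it! Best hotel ever!!", "My stay was great, I will return."]
    print(aggregate_cues(reviews))
    ```

    The point of the two levels is that a cue that looks unremarkable in one review (a single enthusiastic exclamation) can become a signal when aggregated across a reviewer's whole history.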

    The future of Cybersecurity in Italy: Strategic focus area

    This volume has been created as a continuation of the previous one, with the aim of outlining the set of focus areas and actions that the Italian national research community considers essential. The book touches on many aspects of cybersecurity, ranging from the definition of the infrastructure and controls needed to organize cyberdefence, to the actions and technologies to be developed to achieve better protection; and from the identification of the main technologies to be defended, to the proposal of a set of horizontal actions for training, awareness raising, and risk management.

    A framework for decentralised trust reasoning

    Recent developments in the pervasiveness and mobility of computer systems in open computer networks have invalidated traditional assumptions about trust in computer communications security. In a fundamentally decentralised and open network such as the Internet, the responsibility for answering the question of whether one can trust another entity on the network now lies with the individual agent, and is no longer an a priori decision governed by a central authority. Online agents represent users' digital identities. Thus, we believe that it is reasonable to explore social models of trust for secure agent communication. The thesis of this work is that it is feasible to design and formalise a dynamic model of trust for secure communications based on the properties of social trust. In showing this, we divide the work into two phases. The aim of the first is to understand the properties and dynamics of social trust and its role in computer systems. To this end, a thorough review of trust, and of its supporting concept, reputation, in the social sciences was carried out. We followed this with a rigorous analysis of current trust models, comparing their properties with those of social trust. We found that current models were designed on an ad-hoc basis with regard to trust properties. The aim of the second phase is to build a framework for trust reasoning in distributed systems. Knowledge from the first phase is used to design and formally specify, in Z, a computational trust model. A simple model for the communication of recommendations, the recommendation protocol, is also outlined to complement the model. Finally, an analysis of possible threats to the model is carried out. Elements of this work have been incorporated into Sun's JXTA framework and Ericsson Research's prototype trust model.
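    The core idea of a recommendation protocol, where an agent with no direct experience of a target discounts recommendations by its trust in each recommender, can be sketched as follows. This is an illustrative toy, not the thesis's Z specification; the class names, the [0, 1] trust scale, and the multiply-then-take-max combination rule are all assumptions.

    ```python
    class Agent:
        """An agent holding direct trust values for some other agents."""
        def __init__(self, name: str):
            self.name = name
            self.direct = {}   # target name -> direct trust value in [0, 1]

        def trust(self, target: str, network: dict) -> float:
            """Return direct trust if known; otherwise discount recommendations."""
            if target in self.direct:
                return self.direct[target]
            recs = []
            for recommender, t_rec in self.direct.items():
                r = network[recommender].direct.get(target)
                if r is not None:
                    # discount the recommendation by trust in the recommender
                    recs.append(t_rec * r)
            return max(recs) if recs else 0.0

    alice, bob, carol = Agent("alice"), Agent("bob"), Agent("carol")
    alice.direct["bob"] = 0.9
    bob.direct["carol"] = 0.8
    network = {a.name: a for a in (alice, bob, carol)}
    print(alice.trust("carol", network))   # ≈ 0.9 * 0.8
    ```

    Discounting captures the social intuition that a recommendation is only as good as the recommender: alice's derived trust in carol can never exceed her trust in bob.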

    Bridging the gap between human and machine trust: applying methods of user-centred design and usability to computer security

    This work presents methods for improving the usability of security, focusing on trust as part of computer security. Methods of usability and user-centred design provide the essential starting point for the research: the work uses the methods these fields offer to investigate differences between machine and human trust, and to examine how technical expressions of trust could be made more usable. The thesis is based on nine publications, which present various possibilities for researching trust with user-centric methods. The publications proceed chronologically and logically, from initial user interviews about trust, trusting attitudes, and behaviours in general, to the actual design and usability testing of user interfaces for security applications, and finally to the outcomes and conclusions of the research. The work also reviews relevant previous work in the area, concentrating on the fields of usability and user-centred design. The work is cross-disciplinary in nature, falling into the areas of human-computer interaction, computer science, and telecommunications. The ultimate goal of the research has been to find out 1) how trust is to be understood in this context; 2) what methods can be used to gain insight into trust thus defined; and, finally, 3) what means can be used to create trust in end users in online situations where trust is needed. The work aims to provide insight into how trust can be studied with the methods of user-centred design and usability, and investigates how to take account of trust formation in humans when attempting to design trust-inducing systems and applications. The work includes an analysis and comparison of the methods used: what kinds of methods for studying trust exist in the fields of usability and user-centred design. Further, by applying a variety of these methods, it evaluates what kinds of results can be reached with each, and when. Recommendations for the appropriate application of these methods when studying the various facets of trust are one of the outcomes. The results obtained with these methods have also been compared with results others have obtained by applying alternative methods to the same research questions. On a conceptual level, the work contains an analysis of the concept of trust, as well as a brief investigation into both technical and human ways of expressing trust, with a comparison between the two.

    Enhancing trustability in MMOGs environments

    Massively Multiplayer Online Games (MMOGs; e.g., World of Warcraft), virtual worlds (VWs; e.g., Second Life), and social networks (e.g., Facebook) strongly demand autonomous security and trust mechanisms, akin to those humans use in real life. As is well known, this is a difficult matter, because trust in humans and organizations depends on the perception and experience of each individual, which is difficult to quantify or measure. In fact, these societal environments lack trust mechanisms similar to those involved in human-to-human interactions. Besides, interactions mediated by computing devices are constantly evolving, requiring trust mechanisms that keep pace with these developments and assess risk situations. In VWs/MMOGs, it is widely recognized that users develop trust relationships from their in-world interactions with others. However, these trust relationships end up not being represented in the data structures (or databases) of such virtual worlds, though they sometimes appear associated with reputation and recommendation systems. In addition, as far as we know, the user is not provided with a personal trust tool to sustain his/her decision making while interacting with other users in the virtual or game world. In order to solve this problem, as well as those mentioned above, we propose herein a formal representation of these personal trust relationships, based on avatar-avatar interactions. The leading idea is to provide each avatar-impersonated player with a personal trust tool that follows a distributed trust model, i.e., the trust data is distributed over the societal network of a given VW/MMOG. Representing, manipulating, and inferring trust from the user/player point of view is certainly a grand challenge. When someone meets an unknown individual, the question is "Can I trust him/her or not?". It is clear that this requires the user to have access to a representation of trust about others, but, unless we are using an open-source VW/MMOG, it is difficult, not to say unfeasible, to get access to such data. Even in an open-source system, a number of users may refuse to share information about their friends, acquaintances, or others. Putting together their own data and data gathered from others, the avatar-impersonated player should be able to arrive at a trust result about the current trustee. For the trust assessment method used in this thesis, we use subjective logic operators and graph search algorithms to undertake such trust inference about the trustee. The proposed trust inference system has been validated using a number of OpenSimulator (opensimulator.org) scenarios, which showed an accuracy increase in evaluating the trustability of avatars. Summing up, our proposal aims to introduce a trust theory for virtual worlds, together with its trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g., graph search methods), on an individual basis, rather than relying on the usual centralized reputation systems. In particular, and unlike other trust discovery methods, our methods run at interactive rates.
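    The subjective-logic machinery mentioned above can be sketched briefly. In subjective logic, an opinion is a (belief, disbelief, uncertainty) triple summing to one, and the standard discounting and consensus operators combine opinions along and across trust paths. This sketch assumes the textbook forms of those operators; the thesis's exact formulation and field names may differ.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Opinion:
        b: float  # belief
        d: float  # disbelief
        u: float  # uncertainty (b + d + u = 1)

    def discount(ab: "Opinion", bx: "Opinion") -> "Opinion":
        """A's derived opinion about x through recommender B (transitivity)."""
        return Opinion(ab.b * bx.b, ab.b * bx.d, ab.d + ab.u + ab.b * bx.u)

    def consensus(o1: "Opinion", o2: "Opinion") -> "Opinion":
        """Fuse two independent opinions about the same target (assumes u > 0)."""
        k = o1.u + o2.u - o1.u * o2.u
        return Opinion((o1.b * o2.u + o2.b * o1.u) / k,
                       (o1.d * o2.u + o2.d * o1.u) / k,
                       (o1.u * o2.u) / k)

    # avatar A's opinion about recommender B, and B's opinion about avatar X
    ab = Opinion(0.8, 0.1, 0.1)
    bx = Opinion(0.6, 0.2, 0.2)
    print(discount(ab, bx))
    ```

    In the graph-search setting described above, discounting is applied along each trust path from the player to the trustee, and consensus fuses the resulting opinions from different paths into one trust result.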

    Architecture Supporting Computational Trust Formation

    Trust is a concept that has been used in computing to support better decision making. For example, trust can be used in access control; it can also support service selection. Although certain elements of trust, such as reputation, have gained widespread acceptance, a general model of trust has so far not seen widespread usage, owing to the challenges of implementing such a model. In this thesis, a middleware-based approach is proposed to address these implementation challenges. The thesis proposes a general trust model known as computational trust, based on research in social psychology. An individual's computational trust is formed with the support of the proposed computational trust architecture, which consists of a middleware and middleware clients. The middleware can be viewed as a representation of the individual that shares its knowledge with all the middleware clients, while each application uses its own middleware client to form computational trust for its decision-making needs. Computational trust formation can be adapted to changing circumstances. The thesis also proposes algorithms for computational trust formation. Experiments, evaluations, and scenarios are presented to demonstrate the feasibility of the middleware-based approach.
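    The middleware/client split described above can be illustrated with a minimal sketch: one shared middleware holds the individual's trust evidence, while each application's client derives a trust decision under its own policy. All names, the boolean-outcome evidence model, and the threshold rule are illustrative assumptions, not the thesis's actual architecture or API.

    ```python
    class TrustMiddleware:
        """Shared knowledge base representing one individual."""
        def __init__(self):
            self.evidence = {}   # partner -> list of interaction outcomes (True = positive)

        def record(self, partner: str, outcome: bool) -> None:
            self.evidence.setdefault(partner, []).append(outcome)

    class MiddlewareClient:
        """Per-application client that forms computational trust on demand."""
        def __init__(self, middleware: TrustMiddleware, threshold: float = 0.5):
            self.mw = middleware
            self.threshold = threshold   # application-specific trust policy

        def trusts(self, partner: str) -> bool:
            outcomes = self.mw.evidence.get(partner, [])
            if not outcomes:
                return False   # no evidence, no trust
            return sum(outcomes) / len(outcomes) >= self.threshold

    mw = TrustMiddleware()
    mw.record("serviceA", True)
    mw.record("serviceA", True)
    mw.record("serviceA", False)
    access_control = MiddlewareClient(mw, threshold=0.9)   # strict policy
    service_select = MiddlewareClient(mw, threshold=0.5)   # lenient policy
    print(access_control.trusts("serviceA"), service_select.trusts("serviceA"))
    ```

    The design point is that evidence is recorded once, centrally, but each application interprets it under its own policy: here the same history yields a refusal for strict access control and an acceptance for lenient service selection.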
