Contradictory information flow in networks with trust and distrust
We offer a proof system and a NetLogo simulation for trust and distrust in networks where contradictory information is shared by ranked lazy and sceptic agents. Trust and its negative are defined as properties of edges: the former is required when a message is passed bottom-up in the hierarchy or received by a sceptic agent; the latter is attributed to channels that require contradiction resolution, or whose terminal is a lazy agent. These procedures are associated with epistemic costs, respectively for confirmation and refutation. We describe the logic, illustrate the algorithms implemented in the model, and then focus on experimental results concerning the analysis of epistemic costs, the effect of the agents' epistemic attitude on the distribution of distrust, and the influence of (dis)trust on reaching consensus.
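The edge-labelling rule in the abstract can be sketched in a few lines: trust (with a confirmation cost) when a message travels bottom-up or reaches a sceptic agent, distrust (with a refutation cost) when the channel requires contradiction resolution or terminates at a lazy agent. The names, cost values, and rule precedence below are illustrative assumptions, not the paper's calculus or NetLogo model.

```python
from dataclasses import dataclass

CONFIRMATION_COST = 2  # assumed cost of confirming on a trusted channel
REFUTATION_COST = 1    # assumed cost of refuting on a distrusted channel

@dataclass(frozen=True)
class Agent:
    name: str
    rank: int       # position in the hierarchy (higher = superior)
    attitude: str   # "sceptic" or "lazy"

def label_edge(sender: Agent, receiver: Agent, contradicts: bool):
    """Return (label, epistemic_cost) for one message-passing step."""
    bottom_up = sender.rank < receiver.rank
    # Distrust: contradiction resolution needed, or lazy terminal.
    if contradicts or receiver.attitude == "lazy":
        return ("distrust", REFUTATION_COST)
    # Trust: bottom-up passage, or sceptic receiver.
    if bottom_up or receiver.attitude == "sceptic":
        return ("trust", CONFIRMATION_COST)
    return ("neutral", 0)

a = Agent("a", rank=1, attitude="sceptic")
b = Agent("b", rank=2, attitude="lazy")
print(label_edge(a, b, contradicts=False))  # lazy receiver -> distrust
print(label_edge(b, a, contradicts=False))  # sceptic receiver -> trust
```

Checking distrust conditions before trust conditions is one possible way to resolve overlapping cases (e.g. a contradictory message sent bottom-up); the paper's proof system may order them differently.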
A logic of negative trust
We present a logic to model the behaviour of an agent trusting or not trusting messages sent by another agent. The logic formalises trust as a consistency-checking function with respect to currently available information. Negative trust is modelled in two forms: distrust, as the rejection of incoming inconsistent information; mistrust, as the revision of previously held information that becomes undesirable in view of new incoming inconsistent information which the agent wishes to accept. We provide a natural deduction calculus and a relational semantics, and prove soundness and completeness results. We survey a number of applications that have been investigated for the proof-theoretical formulation of the logic.
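The trust/distrust/mistrust trichotomy can be illustrated over propositional literals only ("p" vs "not p"): a consistent message is trusted and accepted, an inconsistent one is either rejected (distrust) or triggers revision of the old belief (mistrust) when the agent wants the new information. The function names and literal encoding are toy assumptions, not the paper's calculus.

```python
def negate(lit: str) -> str:
    """Flip a literal between "p" and "not p"."""
    return lit[4:] if lit.startswith("not ") else "not " + lit

def receive(base: set, msg: str, wants_msg: bool):
    """Trust consistent messages; otherwise distrust or mistrust."""
    if negate(msg) not in base:                 # consistent: trust, accept
        return base | {msg}, "trust"
    if wants_msg:                               # inconsistent but desired:
        return (base - {negate(msg)}) | {msg}, "mistrust"  # revise old belief
    return base, "distrust"                     # inconsistent: reject

base = {"p", "q"}
print(receive(base, "r", wants_msg=False))      # trust: r added
print(receive(base, "not p", wants_msg=False))  # distrust: rejected
print(receive(base, "not p", wants_msg=True))   # mistrust: p revised away
```

A full treatment would check consistency against logical consequences of the base, not just stored literals; the sketch only captures the three behavioural outcomes.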
Towards Cross-Provider Analysis of Transparency Information for Data Protection
Transparency and accountability are indispensable principles for modern data protection, from both legal and technical viewpoints. Regulations such as the GDPR therefore require specific transparency information to be provided, including, e.g., purpose specifications, storage periods, or legal bases for personal data processing. However, it has repeatedly been shown that all too often this information is practically hidden in legalese privacy policies, hindering data subjects from exercising their rights. This paper presents a novel approach to enable large-scale transparency information analysis across service providers, leveraging machine-readable formats and graph data science methods. More specifically, we propose a general approach for building a transparency analysis platform (TAP) that is used to identify data transfers empirically, provide evidence-based analyses of sharing clusters of more than 70 real-world data controllers, or even to simulate network dynamics using synthetic transparency information for large-scale data-sharing scenarios. We provide the general approach for advanced transparency information analysis, an open-source architecture and implementation in the form of a queryable analysis platform, and versatile analysis examples. These contributions pave the way for more transparent data processing for data subjects, and evidence-based enforcement processes for data protection authorities. Future work can build upon our contributions to gain more insights into so-far hidden data-sharing practices.
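The core graph-analysis step such a platform performs can be sketched with the standard library alone: build a data-transfer graph from machine-readable transparency records and extract sharing clusters as connected components. The record format below is an invented stand-in for a machine-readable transparency format, and the controller names are fictitious.

```python
from collections import defaultdict

records = [  # each controller declares its recipients of personal data
    {"controller": "ShopA", "recipients": ["AdNet", "Payments Inc"]},
    {"controller": "ShopB", "recipients": ["AdNet"]},
    {"controller": "ClinicC", "recipients": ["LabD"]},
]

# Build an undirected adjacency map: sharing in either direction links nodes.
graph = defaultdict(set)
for r in records:
    for rec in r["recipients"]:
        graph[r["controller"]].add(rec)
        graph[rec].add(r["controller"])

def sharing_clusters(graph):
    """Return connected components of the transfer graph (iterative DFS)."""
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, cluster = [node], set()
        while stack:
            n = stack.pop()
            if n in cluster:
                continue
            cluster.add(n)
            stack.extend(graph[n] - cluster)
        seen |= cluster
        clusters.append(cluster)
    return clusters

print(sharing_clusters(graph))
# two clusters: {ShopA, ShopB, AdNet, Payments Inc} and {ClinicC, LabD}
```

At the scale of dozens of real controllers, a graph database or a library such as networkx would replace this hand-rolled traversal, but the clustering idea is the same.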
Privacy-centered authentication: a new framework and analysis
© 2023 Elsevier. This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). The usage of authentication schemes is increasing in our daily life with the ubiquitous spread of Internet services. The verification of a user's identity is still predominantly password-based, despite being susceptible to various attacks and openly hated by users. Bonneau et al. presented a framework, based on Usability, Deployability, and Security criteria (UDS), to evaluate authentication schemes and find a replacement for passwords. Although the UDS framework is a mature and comprehensive evaluation framework and has been extended by other authors, it does not analyse privacy aspects in the usage of authentication schemes. In the present work, we extend the UDS framework with a privacy category to allow a more comprehensive evaluation, yielding a UDSP framework. We provide a thorough, rigorous assessment of sample authentication schemes, including an analysis of novel behavioural biometrics. Our work also discusses implementation aspects regarding the new privacy dimension and sketches the prospect of future authentication schemes. Javier Parra-Arnau is the recipient of a "Ramón y Cajal" fellowship (ref. RYC2021-034256-I) funded by the Spanish Ministry of Science and Innovation and the European Union "NextGenerationEU"/PRTR (Plan de Recuperación, Transformación y Resiliencia). This work was also supported by the Spanish Government under the project "Enhancing Communication Protocols with Machine Learning while Protecting Sensitive Data (COMPROMISE)" (PID2020-113795RB-C31), funded by MCIN/AEI/10.13039/501100011033, and through the project "MOBILYTICS" (TED2021-129782B-I00), funded by MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/PRTR.
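The shape of a UDS-style comparison extended with a Privacy (P) category can be tabulated mechanically: each scheme gets a score per category and can be ranked along any axis. The scheme names, score scale, and values below are made up for demonstration and do not reflect the paper's actual assessment.

```python
CATEGORIES = ("usability", "deployability", "security", "privacy")

# Toy scores on a 1-3 scale (higher is better); purely illustrative.
schemes = {
    "passwords":       {"usability": 1, "deployability": 3, "security": 1, "privacy": 2},
    "hardware_token":  {"usability": 2, "deployability": 1, "security": 3, "privacy": 3},
    "behavioural_bio": {"usability": 3, "deployability": 2, "security": 2, "privacy": 1},
}

def rank_by(category: str):
    """Order schemes by their score in one UDSP category, best first."""
    assert category in CATEGORIES
    return sorted(schemes, key=lambda s: schemes[s][category], reverse=True)

print(rank_by("privacy"))  # under these toy scores, tokens lead on privacy
```

The original UDS methodology uses qualitative per-criterion ratings (offers / quasi-offers / does not offer) rather than numeric scores; the numeric table is only a convenient approximation for sorting.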
Agoric computation: trust and cyber-physical systems
In the past two decades, advances in miniaturisation and economies of scale have led to the emergence of billions of connected components that have provided both a spur and a blueprint for the development of smart products acting in specialised environments which are uniquely identifiable, localisable, and capable of autonomy. Adopting the computational perspective of multi-agent systems (MAS) as a technological abstraction, married with the engineering perspective of cyber-physical systems (CPS), has provided fertile ground for designing, developing and deploying software applications in smart automated contexts such as manufacturing, power grids, avionics, healthcare and logistics, capable of being decentralised, intelligent, reconfigurable, modular, flexible, robust, adaptive and responsive. Current agent technologies are, however, ill suited for information-based environments, making it difficult to formalise and implement multi-agent systems based on inherently dynamical functional concepts such as trust and reliability, which present special challenges when scaling from small to large systems of agents. To overcome such challenges, it is useful to adopt a unified approach which we term agoric computation, integrating logical, mathematical and programming concepts towards the development of agent-based solutions based on recursive, compositional principles, where smaller systems feed via directed information flows into larger hierarchical systems that define their global environment. Considering information as an integral part of the environment naturally defines a web of operations where components of a system are wired together and each set of inputs and outputs is allowed to carry some value.
These operations are stateless abstractions and procedures that act on stateful cells that accumulate partial information, and it is possible to compose such abstractions into higher-level ones, using a publish-and-subscribe interaction model that keeps track of update messages between abstractions and values in the data. In this thesis we review the logical and mathematical basis of such abstractions and take steps towards the software implementation of agoric modelling as a framework for simulation and verification of the reliability of increasingly complex systems, and report on experimental results related to a few select applications, such as stigmergic interaction in mobile robotics, integrating raw data into agent perceptions, trust and trustworthiness in orchestrated open systems, computing the epistemic cost of trust when reasoning in networks of agents seeded with contradictory information, and trust models for distributed ledgers in the Internet of Things (IoT); and provide a roadmap for future developments of our research.
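The publish-and-subscribe wiring described above can be sketched compactly: stateless operations connect stateful cells that accumulate partial information and notify subscribers on updates. The class and function names are illustrative assumptions, not the thesis framework's actual API.

```python
class Cell:
    """Stateful store of partial information with a subscriber list."""
    def __init__(self, value=None):
        self.value = value
        self.subscribers = []

    def publish(self, value):
        if value != self.value:          # propagate only genuine updates
            self.value = value
            for notify in self.subscribers:
                notify(value)

def lift(fn, *inputs, output):
    """Wire a stateless operation between cells (a directed information flow)."""
    def recompute(_):
        # Fire only once all inputs carry (partial) information.
        if all(c.value is not None for c in inputs):
            output.publish(fn(*(c.value for c in inputs)))
    for c in inputs:
        c.subscribers.append(recompute)

# Compose two input cells into a derived cell: total = a + b.
a, b, total = Cell(), Cell(), Cell()
lift(lambda x, y: x + y, a, b, output=total)
a.publish(2)
b.publish(3)
print(total.value)  # 5
```

Because `total` is itself a `Cell`, further operations can be lifted over it, giving the recursive, compositional layering of smaller systems into larger ones that the abstract describes.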
A method for the usability evaluation of transactional websites based on the heuristic inspection process
Usability is considered one of the most important factors in software product development. This quality attribute refers to the degree to which specific users of a given application can easily use the software to achieve their purpose. Given the importance of this aspect in the success of software applications, multiple evaluation methods have emerged as measurement instruments to determine whether the proposed interface design of a software system is understandable, easy to use, attractive, and pleasant for the user. Heuristic evaluation is one of the most widely used methods in the field of Human-Computer Interaction (HCI) for this purpose, owing to its low execution cost compared with other existing techniques. However, despite its extensive use in recent years, there is no formal procedure for carrying out this evaluation process. Jakob Nielsen, the author of this inspection technique, offers only general guidelines which, according to the research conducted, tend to be interpreted in different ways by specialists. For this reason, the present research project was developed with the aim of establishing a systematic, structured, organised, and formal process for conducting heuristic evaluations of software products. Based on an exhaustive analysis of the studies in the literature that report the use of heuristic evaluation as part of the software development process, a new evaluation method has been formulated, comprising five phases: (1) planning, (2) training, (3) evaluation, (4) discussion, and (5) reporting. Each of the proposed phases that make up the inspection protocol contains a well-defined set of activities to be carried out by the evaluation team as part of the inspection process. Likewise, specific roles have been established for the members of the team of inspectors to ensure the quality of the results and the proper conduct of the heuristic evaluation. The new proposal has been validated in two distinct academic settings (in Colombia, at a public university, and in Peru, at two universities, one public and one private), demonstrating in all cases that it is possible to identify more highly severe and critical usability problems when a structured inspection process is adopted by the evaluators. Another favourable aspect shown by the results is that evaluators tend to make fewer association errors (between the heuristic that is violated and the usability problems identified) and that the proposal is perceived as easy to use and useful. The validation of this new proposal, developed by the author of this study, consolidates new knowledge that contributes to the body of scientific knowledge.