
    Calculating and Aggregating Direct Trust and Reputation in Organic Computing Systems

    The growing complexity of current computer systems demands a degree of administration that is increasingly challenging to perform manually. The Autonomic and Organic Computing initiatives have introduced so-called self-x properties, including self-configuration, self-optimization, self-healing, and self-protection, to allow administration to become autonomous. Although research in this area has revealed promising results, it expects all participants to further the system goal, i.e., their benevolence is assumed. In open systems, where arbitrary participants can join the system, this benevolence assumption must be dropped, since such a participant may act maliciously and try to exploit the system. This introduces a previously unconsidered uncertainty, which needs to be addressed. In human society, trust relations are used to lower the uncertainty of transactions with unknown interaction partners. Trust is based on past experiences with someone, as well as on recommendations of trusted third parties. In this work, trust metrics for direct trust, reputation, confidence, and an aggregation of them are presented. While the presented metrics were primarily designed to improve the self-x properties of Organic Computing (OC) systems, they can also be used by applications in multi-agent systems to evaluate the behavior of other agents. Direct trust is calculated by the Delayed-Ack metric, which assesses the reliability of nodes in Organic Computing systems. The other metrics are general enough to be used with all kinds of contexts and facets to cover any trust requirements of a system, as long as corresponding direct trust values exist. These metrics include reputation (Neighbor-Trust), confidence, and an aggregation of them. Evaluations based on an automated Design Space Exploration are conducted to find the best configuration for each metric and, in particular, to identify the importance of direct trust, reputation, and confidence for the total trust value.
    They illustrate that reputation, i.e., the recommendations of others, is an important aspect in evaluating the trustworthiness of an interaction partner. In addition, it is shown that a gradual change of priority from reputation to direct trust is preferable to a sudden switch once enough confidence in the correctness of one's own experiences has accumulated. All evaluations focus on systems with volatile behavior, i.e., systems whose participants change their behavior over time. In such a system, the ability to adapt quickly to behavior changes has turned out to be the most important parameter.
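    The gradual, confidence-driven shift from reputation to direct trust described above can be sketched as a simple weighted aggregation (a minimal illustration; the linear weighting and the function names are assumptions, not the thesis's actual Delayed-Ack or Neighbor-Trust metrics):

```python
def total_trust(direct: float, reputation: float, confidence: float) -> float:
    """Aggregate direct trust and reputation, weighted by confidence.

    All inputs are in [0, 1]. As confidence in one's own experiences
    grows, priority shifts gradually from reputation to direct trust,
    with no sudden switch.
    """
    w = confidence  # weight on direct trust grows smoothly with confidence
    return w * direct + (1 - w) * reputation

# A node with little own experience relies mostly on reputation ...
print(total_trust(direct=0.9, reputation=0.4, confidence=0.1))  # 0.45
# ... and mostly on its own experience once confidence is high.
print(total_trust(direct=0.9, reputation=0.4, confidence=0.9))  # 0.85
```

    A smoother or thresholded weighting could replace the linear one; the point is only that the weight on direct trust grows continuously with confidence rather than switching abruptly.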

    Applying Trust for Operational States of ICT-Enabled Power Grid Services

    Digitalization enables the automation required to operate modern cyber-physical energy systems (CPESs), leading to a shift from hierarchical to organic systems. However, digitalization increases the number of factors affecting the state of a CPES (e.g., software bugs and cyber threats). In addition to established factors like functional correctness, others like security become relevant but are yet to be integrated into an operational viewpoint, i.e., a holistic perspective on the system state. Trust in Organic Computing is an approach to gain a holistic view of the state of systems. It consists of several facets (e.g., functional correctness, security, and reliability), which can be used to assess the state of a CPES. Therefore, a trust assessment on all levels can contribute to a coherent state assessment. This paper focuses on trust in ICT-enabled grid services in a CPES. These are essential for operating the CPES, and their performance relies on various data aspects like availability, timeliness, and correctness. This paper proposes to assess the trust in involved components and data to estimate data correctness, which is crucial for grid services. The assessment is presented considering two exemplary grid services, namely state estimation and coordinated voltage control. Furthermore, the interpretation of different trust facets is also discussed. Comment: Preprint of the article under revision for the ACM Transactions on Autonomous and Adaptive Systems, Special Issue on 20 Years of Organic Computing.
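    One way to picture such a facet-based assessment is a weighted combination of per-facet trust scores into a single component-level value (a hypothetical sketch; the facet names, weights, and combination rule are illustrative assumptions, not the paper's actual method):

```python
def component_trust(facets: dict, weights: dict) -> float:
    """Combine per-facet trust scores (each in [0, 1]) into one value.

    Facets deemed more important for the grid service at hand can be
    given larger weights.
    """
    total_w = sum(weights[f] for f in facets)
    return sum(facets[f] * weights[f] for f in facets) / total_w

# Hypothetical scores for one component feeding a state estimation service.
facets = {"functional_correctness": 0.95, "security": 0.7, "reliability": 0.9}
weights = {"functional_correctness": 2.0, "security": 1.0, "reliability": 1.0}
print(round(component_trust(facets, weights), 3))  # 0.875
```

    A weighted mean is only one choice; a pessimistic service might instead take the minimum over facets, so that a single weak facet (e.g., security) dominates the assessment.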

    WikiSensing: A collaborative sensor management system with trust assessment for big data

    Big Data for sensor networks and collaborative systems has become ever more important in the digital economy and is a focal point of technological interest, while posing many noteworthy challenges. This research addresses some of the challenges in the areas of online collaboration and Big Data for sensor networks. This research demonstrates WikiSensing (www.wikisensing.org), a high-performance, heterogeneous, collaborative data cloud for the management and analysis of real-time sensor data. The system is based on a Big Data architecture with comprehensive functionalities for smart city sensor data integration and analysis. The system is fully functional and served as the main data management platform for the 2013 UPLondon Hackathon. This system is unique as it introduced a novel methodology that incorporates online collaboration with sensor data. While there are other platforms available for sensor data management, WikiSensing is one of the first platforms that enable online collaboration by providing services to store and query dynamic sensor information without any restriction on the type and format of sensor data. An emerging challenge of collaborative sensor systems is modelling and assessing the trustworthiness of sensors and their measurements. This is of direct relevance to WikiSensing as an open collaborative sensor data management system. Thus, if the trustworthiness of the sensor data can be accurately assessed, WikiSensing will be more than just a collaborative data management system for sensors but also a platform that provides information to its users on the validity of its data. Hence this research presents a new generic framework for capturing and analysing sensor trustworthiness considering the different forms of evidence available to the user. It uses an extensible set of metrics that can represent such evidence and uses Bayesian analysis to develop a trust classification model.
    Several publications are based on this work, and others are at the final stage of submission. Further improvement is also planned to make the platform serve as a cloud service accessible to any online user, to build up a community of collaborators for smart city research.
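    A common form of such Bayesian evidence analysis is a Beta-Bernoulli model, in which each piece of positive or negative evidence about a sensor updates a Beta posterior whose mean serves as the trust estimate (a minimal sketch under that assumption; the framework's actual metrics and classification model may differ):

```python
class BetaTrust:
    """Beta-Bernoulli trust estimate for a single sensor.

    Each reading judged plausible (positive evidence) or implausible
    (negative evidence) updates a Beta(alpha, beta) posterior; the
    posterior mean is the trust score.
    """

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # 1 + count of positive evidence (uniform prior)
        self.beta = beta    # 1 + count of negative evidence

    def update(self, positive: bool) -> None:
        if positive:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        return self.alpha / (self.alpha + self.beta)

t = BetaTrust()
for outcome in [True, True, True, False, True]:
    t.update(outcome)
print(round(t.trust, 3))  # 0.714, i.e. 5/7 after 4 positive, 1 negative
```

    The prior Beta(1, 1) encodes no initial opinion; a classifier can then threshold the posterior mean, or use its variance to withhold judgement while evidence is scarce.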

    Towards a global participatory platform: Democratising open data, complexity science and collective intelligence

    The FuturICT project seeks to use the power of big data, analytic models grounded in complexity science, and the collective intelligence they yield for societal benefit. Accordingly, this paper argues that these new tools should not remain the preserve of restricted government, scientific or corporate élites, but be opened up for societal engagement and critique. Democratising such assets as a public good requires a sustainable ecosystem enabling different kinds of stakeholder in society, including, but not limited to, citizens and advocacy groups, school and university students, policy analysts, scientists, software developers, journalists and politicians. Our working name for envisioning a sociotechnical infrastructure capable of engaging such a wide constituency is the Global Participatory Platform (GPP). We consider what it means to develop a GPP at the different levels of data, models and deliberation, motivating a framework for different stakeholders to find their ecological niches at different levels within the system, serving the functions of (i) sensing the environment in order to pool data, (ii) mining the resulting data for patterns in order to model the past/present/future, and (iii) sharing and contesting possible interpretations of what those models might mean, and, in a policy context, possible decisions. A research objective is also to apply the concepts and tools of complexity science and social science to the project's own work. We therefore conceive the global participatory platform as a resilient, epistemic ecosystem, whose design will make it capable of self-organization and adaptation to a dynamic environment, and whose structure and contributions are themselves networks of stakeholders, challenges, issues, ideas and arguments whose structure and dynamics can be modelled and analysed.

    Social Data Mining for Crime Intelligence

    With the advancement of the Internet and related technologies, many traditional crimes have made the leap to digital environments. The successes of data mining in a wide variety of disciplines have given birth to crime analysis. Traditional crime analysis is mainly focused on understanding crime patterns; however, it is unsuitable for identifying and monitoring emerging crimes. The true nature of crime remains buried in unstructured content that represents the hidden story behind the data. User feedback leaves valuable traces that can be utilised to measure the quality of various aspects of products or services and can also be used to detect, infer, or predict crimes. Like any application of data mining, the data must be of a high quality standard in order to avoid erroneous conclusions. This thesis presents a methodology and practical experiments towards discovering whether (i) user feedback can be harnessed and processed for crime intelligence, (ii) criminal associations, structures, and roles can be inferred among entities involved in a crime, and (iii) methods and standards can be developed for measuring, predicting, and comparing the quality level of social data instances and samples. It contributes to the theory, design and development of a novel framework for crime intelligence and an algorithm for the estimation of social data quality by innovatively adapting the methods of monitoring water contaminants. Several experiments were conducted, and the results obtained revealed the significance of this study in mining social data for crime intelligence and in developing social data quality filters and decision support systems.

    Big data analytics tools for improving the decision-making process in agrifood supply chain

    Introduction: In the interest of ensuring long-term food security and safety in the face of changing circumstances, it is necessary to understand and take into consideration the environmental, social and economic aspects of food and beverage production in relation to consumer demand. Moreover, due to globalization, problems have been raised such as long agrifood supply chains, information asymmetry, counterfeiting, difficulty in tracing and tracking back the origin of products, and numerous related issues such as consumers' well-being and healthcare costs. Emerging technologies drive new socio-economic approaches, as they enable governments and individual agricultural producers to collect and analyze an ever-increasing amount of environmental, agronomic and logistic data, and they give consumers and quality-control authorities the possibility to access all necessary information easily and at short notice. Aim: The research concerns the study of ways to improve the production process by reducing information asymmetry, making information available to interested parties in a reasonable time, analyzing data about production processes while considering the environmental impact of production in terms of ecology, economy, food safety and food quality, building opportunities for stakeholders to make informed decisions, and simplifying the control of quality, counterfeiting and fraud.
    Therefore, the aim of this work is to study current supply chains, to identify their weaknesses and necessities, to investigate the emerging technologies, their characteristics and their impacts on supply chains, and to provide the industry, governments and policymakers with useful recommendations.

    Trust and reputation in multi-modal sensor networks for marine environmental monitoring

    Greater temporal and spatial sampling allows environmental processes and the well-being of our waterways to be monitored and characterised from previously unobtainable perspectives. It allows us to create models, make predictions and better manage our environments. New technologies are emerging to enable remote autonomous sensing of our water systems and subsequently meet the demands for high temporal and spatial monitoring. In particular, advances in communication and sensor technology have provided a catalyst for progress in remote monitoring of our water systems. However, despite continuous improvements, there are limitations with the use of this technology in marine environmental monitoring applications. We summarise these limitations in terms of scalability and reliability. In order to address these two main issues, our research proposes that environmental monitoring applications would strongly benefit from the use of a multi-modal sensor network utilising visual sensors, modelled outputs and context information alongside the more conventional in-situ wireless sensor networks. However, each of these additional data streams is unreliable. Hence we adapt a trust and reputation model for optimising their use in the network. For our research we use two test sites, the River Lee, Cork, and Galway Bay, each with a diverse range of multi-modal data sources. Firstly, we investigate the coordination of multiple heterogeneous information sources to allow more efficient operation of the more sophisticated in-situ analytical instrument in the network, rendering the deployment of such devices more scalable. Secondly, we address the issue of reliability. We investigate the ability of a multi-modal network to compensate for the failure of in-situ nodes in the network, where there is no redundant identical node in the network to replace their operation.
    We adapt a model from the literature for dealing with the unreliability associated with each of the alternative sensor streams, in order to monitor their behaviour over time and choose the most reliable output at a particular point in time in the network. We find that each of the alternative data streams proves to be a useful tool in the network. The trust and reputation model reflects their behaviour over time and demonstrates itself to be a useful tool for optimising their use in the network.
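    One simple way such a model can track stream behaviour over time is an exponentially weighted trust update per stream, selecting the most trusted stream at each point in time (an illustrative sketch; the stream names, forgetting factor, and update rule are assumptions, not the actual model adapted from the literature):

```python
LAMBDA = 0.8  # forgetting factor; illustrative value, so recent behaviour dominates

def update(trust: float, success: bool, lam: float = LAMBDA) -> float:
    """Blend the old trust value with the latest observed outcome."""
    return lam * trust + (1 - lam) * (1.0 if success else 0.0)

# Hypothetical alternative streams, each starting from a neutral trust of 0.5.
streams = {"visual": 0.5, "model_output": 0.5, "context": 0.5}
observations = {
    "visual": [True, True, False],
    "model_output": [True, True, True],
    "context": [False, True, False],
}
for name, outcomes in observations.items():
    for ok in outcomes:
        streams[name] = update(streams[name], ok)

# Pick the most reliable stream at this point in time.
best = max(streams, key=streams.get)
print(best)  # model_output
```

    The forgetting factor controls how quickly trust recovers or decays, which matters precisely when streams behave unreliably in bursts.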
