Machine Learning based Trust Computational Model for IoT Services
The Internet of Things has facilitated access to a large volume of sensitive information on each participating object in an ecosystem. This imposes many threats, ranging from the risks of data management to the potential discrimination enabled by data analytics over delicate information such as locations, interests, and activities. To address these issues, the concept of trust plays an important role in supporting both humans and services in overcoming the perception of uncertainty and risk before making any decisions. However, establishing trust in a cyber world is a challenging task due to the volume of diversified influential factors from cyber-physical systems. Hence, it is essential to have an intelligent trust computation model that is capable of generating accurate and intuitive trust values for prospective actors. Therefore, in this paper, a quantifiable trust assessment model is proposed. Built on this model, individual trust attributes are then calculated numerically. Moreover, a novel algorithm based on machine learning principles is devised to classify the extracted trust features and combine them to produce a final trust value to be used for decision making. Finally, our model's effectiveness is verified through a simulation. The results show that our method has advantages over other aggregation methods.
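The feature-extraction-then-aggregation step described above can be pictured with a minimal sketch. The attribute names and weights below are illustrative assumptions, not values from the paper, and a plain weighted average stands in for the paper's machine-learning combiner:

```python
# Hypothetical sketch: combining normalized trust attributes into a single
# trust value. Attribute names and weights are illustrative assumptions.

def aggregate_trust(attributes, weights):
    """Weighted aggregation of trust attributes, each scored in [0, 1]."""
    assert set(attributes) == set(weights), "every attribute needs a weight"
    total = sum(weights.values())
    return sum(attributes[k] * weights[k] for k in attributes) / total

features = {"honesty": 0.9, "cooperativeness": 0.7, "community_interest": 0.8}
weights = {"honesty": 0.5, "cooperativeness": 0.3, "community_interest": 0.2}
print(round(aggregate_trust(features, weights), 3))  # 0.82
```

A trained classifier would replace the fixed weights with learned ones, but the input/output shape, per-attribute scores in and one trust value out, is the same.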
Using Blockchain to Ensure Reputation Credibility in Decentralized Review Management
In recent years, there have been incidents that decreased people's trust in some organizations and authorities responsible for ratings and accreditation. Among prominent examples, there was a security breach at Equifax (2017), misconduct was found in the Standard & Poor's Ratings Services (2015), and the Accrediting Council for Independent Colleges and Schools (2022) validated some low-performing schools as meeting higher standards than they actually did. A natural solution to these types of issues is to decentralize the relevant trust management processes using blockchain technologies. The research problems tackled in this thesis consider the issue of trust in reputation for assessment and review credibility from different angles, in the context of blockchain applications.
We first explored the following question: how can we trust courses in one college to provide students with the type and level of knowledge needed in a specific workplace? Micro-accreditation on a blockchain is our solution, including a peer-review system that determines the rigor of a course through consensus. Rigor is the level of difficulty relative to a student's expected level of knowledge. Currently, we make assumptions about the quality and rigor of what is learned, but this is prone to human bias and misunderstanding. We present a decentralized approach that tracks student records throughout their academic progress at a school and helps match employers' requirements to students' knowledge. We do this by applying micro-accredited topics and Knowledge Units (KUs), defined by the NSA's Center of Academic Excellence, to courses and assignments. Using simulated datasets, we demonstrate that the system is successful in increasing the accuracy of hires, and that it is efficient as well as scalable. Another problem is how we can trust that the peer reviews are honest and reflect an accurate rigor score. Assigning reputation to peers is a natural method to ensure the correctness of these assessments. The reputation of the peers providing rigor scores needs to be taken into account in the overall rigor of a course, its topics, and its tasks. Specifically, those with a higher reputation should have more influence on the total score.
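The "higher reputation means more influence" idea amounts to a reputation-weighted average. The following sketch is an assumption about how such a score could be combined, not code from the thesis:

```python
# Hypothetical sketch: reputation-weighted consensus on a course's rigor
# score, so higher-reputation reviewers carry more influence on the total.

def weighted_rigor(reviews):
    """reviews: list of (rigor_score, reviewer_reputation) pairs."""
    total_rep = sum(rep for _, rep in reviews)
    if total_rep == 0:
        return 0.0  # no reputable reviewers yet
    return sum(score * rep for score, rep in reviews) / total_rep

# A high-reputation reviewer (0.9) pulls the result toward their score.
reviews = [(4.0, 0.9), (2.0, 0.3), (5.0, 0.6)]
print(round(weighted_rigor(reviews), 3))  # 4.0
```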
Hence, we focused on how a peer's reputation is managed. We explored decentralized reputation management for the peers, choosing a decentralized marketplace as a sample application. We presented an approach to ensuring review credibility, which is a particular aspect of trust in reviews and in the reputation of the parties who provide them. We use a Proof-of-Stake-based Algorand system as the base of our implementation, since this system is open source and has rich community support. Specifically, we directly map reputation to stake, which allows us to deploy Algorand at the blockchain layer. Reviews are analyzed by the proposed evaluation component using Natural Language Processing (NLP). In our system, NLP gauges the positivity of the written review, compares that value to the scaled numerical rating given, and determines adjustments to a peer's reputation from the result. We demonstrate that this architecture ensures credible and trustworthy assessments. It also efficiently manages the reputation of the peers, while keeping consensus times reasonable.
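The sentiment-versus-rating comparison can be sketched as follows. The toy word lexicon stands in for a real NLP sentiment model, and the agreement threshold and reputation deltas are assumptions, not the thesis's parameters:

```python
# Hedged sketch: compare a review's text sentiment with its scaled numeric
# rating, and adjust the reviewer's reputation by their consistency.
# The lexicon, threshold, and deltas are illustrative assumptions.

POSITIVE = {"great", "excellent", "good", "reliable"}
NEGATIVE = {"bad", "poor", "broken", "late"}

def sentiment(text):
    """Crude polarity in [-1, 1] from lexicon word counts."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def adjust_reputation(reputation, review_text, rating, max_rating=5):
    scaled = 2 * rating / max_rating - 1            # map rating to [-1, 1]
    agreement = 1 - abs(sentiment(review_text) - scaled) / 2
    delta = 0.05 if agreement > 0.7 else -0.05      # reward consistency
    return min(1.0, max(0.0, reputation + delta))   # clamp to [0, 1]

# A glowing review with a 5/5 rating is consistent, so reputation rises.
print(adjust_reputation(0.5, "great reliable product", 5))  # 0.55
```

Because reputation is mapped directly to stake, such an adjustment would also change the peer's weight in the underlying Proof-of-Stake consensus.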
We then turned our focus to ensuring that a peer's reputation is credible. This led us to introduce a new type of consensus called "Proof-of-Review". Our proposed implementation is again based on Algorand, since its modular architecture allows for easy modifications, such as adding extra components; this time, however, we modified the consensus engine itself. The proposed model then provides trust in evaluations (review and assessment credibility) and in those who provide them (reputation credibility) using a blockchain. We introduce a blacklisting component, which prevents malicious nodes from participating in the protocol, and a minimum-reputation component, which limits the influence of under-performing users. Our results show that the proposed blockchain system maintains liveness and completeness. Specifically, blacklisting and the minimum-reputation requirement (when properly tuned) do not affect these properties. We note that the Proof-of-Review concept can be deployed in other types of applications with similar needs for trust in assessments and in the players providing them, such as sensor arrays, autonomous car groups (caravans), marketplaces, and more.
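The blacklisting and minimum-reputation components both act as filters on who may participate in consensus. A minimal sketch of that filtering step, with hypothetical field names rather than the thesis's actual data structures:

```python
# Hypothetical sketch of the Proof-of-Review eligibility filter:
# blacklisted nodes are excluded outright, and nodes below a minimum
# reputation cannot influence consensus. Names are illustrative.

def eligible_voters(nodes, blacklist, min_reputation):
    """nodes: dict of node_id -> reputation (reputation stands in for stake)."""
    return {nid: rep for nid, rep in nodes.items()
            if nid not in blacklist and rep >= min_reputation}

nodes = {"a": 0.9, "b": 0.2, "c": 0.6, "d": 0.8}
committee = eligible_voters(nodes, blacklist={"d"}, min_reputation=0.5)
print(sorted(committee))  # ['a', 'c']
```

Tuning `min_reputation` too high would shrink the eligible set and threaten liveness, which is why the thesis stresses that these parameters must be properly tuned.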
Computational models of trust for cooperative evolution. Reputation-based game-theoretic models of trust for cooperative evolution in online business games.
Online services such as e-marketplaces, social networking sites, and online gaming environments have grown in popularity in recent years. These services represent situations where participants do not get to negotiate face to face before interacting, and most of the time the parties to a transaction remain anonymous. It is thus necessary to have a system that correctly assesses the trustworthiness of the other party in order to maintain quality assurance in such systems. Recent work on trust and reputation in online communities has focused on identifying probable defaulters, but less effort has been put into systems that make cooperation attractive over defection in order to achieve cooperation without enforcement. Our work in this regard concerns the design and investigation of trust assessment systems that not only filter out defaulters but also promote the evolution of cooperativeness in the player society.
Based on the concepts of game theory and the prisoner's dilemma, we model business games and design an incentive method, a compensation method, an acquaintance-based assessment method, and a decision-theoretic assessment method as mechanisms to assure trustworthiness in online business environments. The effectiveness of each of these methods in promoting the evolution of cooperation in the player society has been investigated. Our results show that these methods contribute positively to promoting cooperative evolution.
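The incentive idea can be illustrated on the standard prisoner's-dilemma payoff matrix. The payoff numbers and bonus below are the textbook values and an assumed parameter, not the thesis's actual game model:

```python
# Illustrative sketch: a prisoner's-dilemma payoff for one business
# transaction, plus an "incentive" bonus for mutual cooperation of the
# kind the methods above aim to create. Values are assumptions.

PAYOFF = {  # (my_move, other_move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def payoff(me, other, cooperation_bonus=0.0):
    base = PAYOFF[(me, other)]
    if me == "C" and other == "C":
        base += cooperation_bonus  # incentive makes cooperation attractive
    return base

# Without the bonus, unilateral defection (5) beats mutual cooperation (3);
# with a bonus above 2, mutual cooperation pays more, so cooperation can
# evolve without enforcement.
print(payoff("C", "C", cooperation_bonus=2.5))  # 5.5
print(payoff("D", "C"))                         # 5
```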
We have further extended our trust assessment model to suit the needs of a mobile ad-hoc network setting. The effectiveness of this model has been tested on its capability to reduce the packet drop rate and to conserve energy. In both of these respects the results show promise.
EU Asia-Link project TH/Asia Link/004(91712) East-West and its local host Kantipur Engineering College in Nepal, and Kathmandu University.
Trust Evaluation in the IoT Environment
Along with the many benefits of IoT, its heterogeneity brings a new challenge: establishing a trustworthy environment among objects in the absence of proper enforcement mechanisms. Further, it can be observed that these encounters are often addressed only with respect to the security and privacy matters involved. However, such common network security measures are not adequate to preserve the integrity of information and services exchanged over the internet. Hence, they remain vulnerable to threats ranging from the risks of data management at the cyber-physical layers to potential discrimination at the social layer. Therefore, trust in IoT can be considered a key property for guaranteeing trustworthy services among objects. Typically, trust revolves around assurance and confidence that people, data, entities, information, or processes will function or behave in expected ways. However, trust enforcement in an artificial society like the IoT is far more difficult, as things do not have an inherent judgmental ability to assess risks and other influencing factors to evaluate trust as humans do. Hence, it is important to quantify the perception of trust so that it can be understood by artificial agents. In computer science, trust is considered a computational value depicted by a relationship between trustor and trustee, described in a specific context, measured by trust metrics, and evaluated by a mechanism. Several mechanisms for trust evaluation can be found in the literature. Among them, most of the work has gravitated towards security and privacy issues instead of considering the universal meaning of trust and its dynamic nature. Furthermore, they lack a proper trust evaluation model and management platform that addresses all aspects of trust establishment. Hence, it is almost impossible to bring all these solutions to one place and develop a common platform that resolves end-to-end trust issues in a digital environment.
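The computational view of trust described above (a trustor-trustee relationship in a context, measured by metrics and evaluated by a mechanism) can be sketched as a data structure. The field names, metric names, and the plain-average evaluation mechanism are all assumptions for illustration:

```python
# Minimal data-structure sketch of trust as a computational value:
# a trustor-trustee relationship in a context, carrying metric scores
# that an evaluation mechanism combines. All names are assumptions.

from dataclasses import dataclass, field

@dataclass
class TrustRelationship:
    trustor: str
    trustee: str
    context: str
    metrics: dict = field(default_factory=dict)  # metric name -> score in [0, 1]

    def evaluate(self):
        """Evaluation mechanism: here, a plain average of metric scores."""
        if not self.metrics:
            return 0.0
        return sum(self.metrics.values()) / len(self.metrics)

rel = TrustRelationship("sensor_A", "gateway_B", "data_sharing",
                        {"experience": 0.8, "knowledge": 0.6, "reputation": 0.7})
print(round(rel.evaluate(), 3))  # 0.7
```

A fuller model would weight the metrics per context and update them over time, which is where the adaptive, machine-learning-based evaluation discussed next comes in.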
Therefore, this thesis attempts to fill these gaps through the following research work. First, it proposes concrete definitions to formally identify trust as a computational concept and its characteristics. Next, a well-defined trust evaluation model is proposed to identify, evaluate, and create trust relationships among objects for calculating trust. Then a trust management platform is presented, identifying the major tasks of the trust enforcement process, including trust data collection, trust data management, trust information analysis, dissemination of trust information, and trust information lifecycle management. Next, the thesis proposes several approaches to assess trust attributes and thereby the trust metrics of the above model for trust evaluation. Further, to minimize dependence on human interaction in evaluating trust, an adaptive trust evaluation model based on machine learning techniques is presented. From a standardization point of view, the scope of the current standards on network security and cybersecurity needs to be expanded to take trust issues into consideration. Hence, this thesis has provided several inputs towards standardization on trust, including a computational definition of trust, a trust evaluation model targeting both object and data trust, and a platform to manage the trust evaluation process.