M-STAR: A Modular, Evidence-based Software Trustworthiness Framework
Despite years of intensive research in the field of software vulnerability
discovery, exploits are becoming ever more common. Consequently, it is more
necessary than ever to choose software configurations that minimize a
system's exposure to these threats. To support users in assessing the
security risks induced by their software configurations and in making
informed decisions, we introduce M-STAR, a Modular Software Trustworthiness
ARchitecture and framework for probabilistically assessing the
trustworthiness of software systems based on evidence such as their
vulnerability history and source-code properties.
Integral to M-STAR is a software trustworthiness model consistent with the
concept of computational trust. Computational trust models are rooted in
Bayesian probability and Dempster-Shafer belief theory, lending mathematical
soundness and expressiveness to our framework. To evaluate the framework, we
instantiate M-STAR for Debian Linux packages and investigate real-world
deployment scenarios. In our experiments with real-world data, M-STAR could
assess the relative trustworthiness of complete software configurations with
an error of less than 10%. Due to its modular design, our proposed framework
is agile: it can incorporate future advances in the field of code analysis
and vulnerability prediction. Our results indicate that M-STAR can be a
valuable tool for system administrators, regular users and developers,
helping them assess and manage the risks associated with their software
configurations.
Comment: 18 pages, 13 figures
Trust Evaluation in the IoT Environment
Along with the many benefits of IoT, its heterogeneity brings a new challenge: establishing a trustworthy environment among objects in the absence of proper enforcement mechanisms. Further, these encounters are often addressed only with respect to the security and privacy matters involved. However, such common network security measures are not adequate to preserve the integrity of the information and services exchanged over the internet. Hence, they remain vulnerable to threats ranging from data-management risks at the cyber-physical layers to potential discrimination at the social layer. Therefore, trust can be considered a key property for establishing trustworthy relationships among IoT objects and guaranteeing trustworthy services. Typically, trust revolves around assurance and confidence that people, data, entities, information, or processes will function or behave in expected ways. However, trust enforcement in an artificial society like IoT is far more difficult, as things do not have an inherent judgmental ability to assess risks and other influencing factors when evaluating trust, as humans do. Hence, it is important to quantify the perception of trust so that it can be understood by artificial agents. In computer science, trust is considered a computational value depicted by a relationship between trustor and trustee, described in a specific context, measured by trust metrics, and evaluated by a mechanism. Several trust evaluation mechanisms can be found in the literature. Most of this work, however, has gravitated towards security and privacy issues rather than the universal meaning of trust and its dynamic nature. Furthermore, these mechanisms lack a proper trust evaluation model and a management platform that addresses all aspects of trust establishment. Hence, it is almost impossible to bring all these solutions together into a common platform that resolves end-to-end trust issues in a digital environment.
Therefore, this thesis attempts to fill these gaps through the following research work. First, it proposes concrete definitions to formally identify trust as a computational concept, together with its characteristics. Next, a well-defined trust evaluation model is proposed to identify, evaluate and create trust relationships among objects for calculating trust. Then a trust management platform is presented, identifying the major tasks of the trust enforcement process, including trust data collection, trust data management, trust information analysis, dissemination of trust information and trust information lifecycle management. Next, the thesis proposes several approaches to assess trust attributes and thereby the trust metrics of the above model for trust evaluation. Further, to minimize dependence on human interaction in evaluating trust, an adaptive trust evaluation model based on machine learning techniques is presented. From a standardization point of view, the scope of the current standards on network security and cybersecurity needs to be expanded to take trust issues into consideration. Hence, this thesis provides several inputs towards standardization on trust, including a computational definition of trust, a trust evaluation model targeting both object and data trust, and a platform to manage the trust evaluation process.
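The abstract does not specify how the assessed trust attributes combine into the model's trust metrics; a common baseline in the trust-evaluation literature is a normalized weighted sum over per-attribute scores in [0, 1]. A minimal sketch under that assumption (the attribute names and weights are illustrative, not taken from the thesis):

```python
def aggregate_trust(attributes: dict, weights: dict) -> float:
    """Combine per-attribute trust scores (each in [0, 1]) into a single
    trust metric via a weighted sum, normalized by the total weight."""
    total_weight = sum(weights[name] for name in attributes)
    weighted_sum = sum(attributes[name] * weights[name] for name in attributes)
    return weighted_sum / total_weight

# Hypothetical attribute scores for one trustee, as seen by a trustor:
attributes = {"honesty": 0.9, "cooperativeness": 0.6, "experience": 0.8}
weights = {"honesty": 0.5, "cooperativeness": 0.3, "experience": 0.2}
print(round(aggregate_trust(attributes, weights), 2))  # 0.79
```

Because the result stays in [0, 1], such a metric can be recomputed per context by swapping in context-specific weights, which matches the abstract's point that trust is described in a specific context.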
On the Application of Supervised Machine Learning to Trustworthiness Assessment
State-of-the-art trust and reputation systems seek to apply machine learning methods to overcome the generalizability issues of experience-based Bayesian trust assessment. These approaches, however, are often model-centric instead of focusing on the data and on the complex adaptive system that is driven by reputation-based service selection. This entails the risk of unrealistic model assumptions. We outline the requirements for robust probabilistic trust assessment using supervised learning and apply a selection of estimators to a real-world data set in order to show the effectiveness of supervised methods. Furthermore, we provide a representational mapping of estimator output to a belief logic representation for the modular integration of supervised methods with other trust assessment methodologies.
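The abstract does not detail its mapping to a belief logic representation; in subjective logic, a standard correspondence turns positive and negative evidence counts into a (belief, disbelief, uncertainty) opinion via the Beta distribution. A sketch of that well-known mapping (the function and type names are illustrative, not the paper's API):

```python
from typing import NamedTuple

class Opinion(NamedTuple):
    """A binomial subjective-logic opinion; components sum to 1."""
    belief: float
    disbelief: float
    uncertainty: float

def opinion_from_evidence(r: float, s: float) -> Opinion:
    """Map positive (r) and negative (s) evidence counts to an opinion,
    using the standard bijection with a Beta(r + 1, s + 1) distribution:
    uncertainty shrinks as total evidence grows."""
    denom = r + s + 2
    return Opinion(r / denom, s / denom, 2 / denom)

o = opinion_from_evidence(r=8, s=2)
print(round(o.belief, 3), round(o.uncertainty, 3))  # 0.667 0.167
```

An estimator whose output is expressed as such evidence counts can then be combined with other opinion-based trust sources using standard subjective-logic fusion operators, which is the kind of modular integration the abstract describes.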