
    Transparent, trustworthy and privacy-preserving supply chains

    Over the years, supply chains have evolved from a few regional traders to complex global chains of trade. Consequently, supply chain management systems have become heavily dependent on digitization for data storage and the traceability of goods. However, these traceability systems suffer from issues such as the scattering of information across multiple silos and susceptibility to erroneous or modified data, and are thus often unable to provide reliable information about a product. For proprietary reasons, end-to-end traceability is often not available to the general consumer. The second issue is ensuring the credibility of the collated information about a product. The digital data may not be a true representation of the physical events, which raises the issue of trusting the available information. If the source of the digital data is not trustworthy, the provenance or traceability of a product becomes questionable. The third issue in supply chain management is the trade-off between providing provenance information and protecting that data. The information is often associated with the identity of the contributing entity to ensure trust. However, this identity association makes it difficult to protect trade secrets such as shipments, pricing, and the trade frequency of traders while simultaneously ensuring provenance and traceability for consumers. Our work aims to address the above-mentioned challenges related to traceability, trustworthiness and privacy. To support traceability and provenance, a consortium blockchain-based framework, ProductChain, is proposed, which provides an immutable audit trail of the supply chain events pertaining to a product and its origin. The framework also presents a sharded network model to meet the scalability needs of complex supply chains. Simulation results for our Proof of Concept (PoC) implementation show that the query time for retrieving end-to-end traceability is of the order of a few milliseconds, even when the information is collated from multiple regional blockchains. Next, to ensure the credibility of data from supply chain entities, it is important to have an accountability mechanism which can penalise or reward entities for their dishonest or honest contributions, respectively. We propose the TrustChain framework, which calculates a trust score for entities contributing data to the blockchain using multiple observations. These observations include feedback from interactions among supply chain entities, inputs from third-party regulators and readings from IoT sensors. The reputation system integrated with the blockchain dynamically assigns trust and reputation scores to commodities and traders using smart contracts. A PoC implementation over Hyperledger Fabric shows that TrustChain incurs minimal overheads over a baseline. For protecting trade secrets while simultaneously ensuring traceability, PrivChain is proposed. PrivChain allows traders to share computations or proofs in support of provenance and traceability claims rather than sharing the data itself. The framework also proposes an integrated incentive mechanism for traders providing such proofs. A PoC implementation on Hyperledger Fabric reveals minimal overhead from using PrivChain, as the data-related computations are carried out off-chain. Finally, we propose TradeChain, which addresses the issue of preserving the privacy of identity-related information within the blockchain data and gives greater access control to the data owners, i.e. traders.
This framework decouples the identities of traders by managing two ledgers: one for managing decentralised identities and another for recording supply chain events. The information from both ledgers is then collated using access tokens provided by the data owners. In this way, traders can dynamically control access to the blockchain data at a granular level. A PoC implementation is developed on both Hyperledger Indy and Fabric, and we demonstrate minimal overheads for the different components of TradeChain.
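
The abstract names TrustChain's observation sources (peer feedback, regulator inputs, IoT sensor readings) but not its scoring formula. The following minimal Python sketch only illustrates how such observations could be blended into a trust score; the weights, the smoothing factor and all names are assumptions made for this example, not the thesis's actual design.

```python
# Hypothetical sketch of a TrustChain-style trust score aggregation.
# Weights and the smoothing factor are illustrative assumptions; the thesis's
# exact formula is not given in the abstract.

from dataclasses import dataclass

@dataclass
class Observations:
    peer_feedback: float       # mean rating from trading partners, in [0, 1]
    regulator_score: float     # third-party regulator assessment, in [0, 1]
    sensor_consistency: float  # fraction of IoT readings consistent with claims, in [0, 1]

def updated_trust(previous_trust: float, obs: Observations,
                  weights=(0.4, 0.3, 0.3), smoothing=0.7) -> float:
    """Blend new evidence into the previous trust score.

    smoothing controls how much history is retained; in a blockchain setting
    a smart contract could run the same arithmetic when observations arrive.
    """
    w_feedback, w_regulator, w_sensor = weights
    evidence = (w_feedback * obs.peer_feedback
                + w_regulator * obs.regulator_score
                + w_sensor * obs.sensor_consistency)
    return smoothing * previous_trust + (1 - smoothing) * evidence

if __name__ == "__main__":
    obs = Observations(peer_feedback=0.9, regulator_score=0.8, sensor_consistency=0.95)
    print(round(updated_trust(previous_trust=0.6, obs=obs), 3))  # roughly 0.69 here
```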

    Framework for Security Transparency in Cloud Computing

    The migration of sensitive data and applications from the on-premise data centre to a cloud environment increases cyber risks to users, mainly because the cloud environment is managed and maintained by a third party. In particular, the partial surrender of sensitive data and applications to a cloud environment creates numerous concerns related to a lack of security transparency. Security transparency involves the disclosure of information by cloud service providers about the security measures being put in place to protect assets and meet the expectations of customers. It establishes trust in the service relationship between cloud service providers and customers, and without evidence of continuous transparency, trust and confidence are affected and are likely to hinder extensive usage of cloud services. Also, insufficient security transparency is considered an added level of risk, and it increases the difficulty of demonstrating conformance to customer requirements and of ensuring that cloud service providers adequately implement their security obligations. The research community has acknowledged the pressing need to address security transparency concerns, and although technical aspects of ensuring security and privacy have been researched widely, the focus on security transparency is still scarce. The relatively few existing studies mostly approach the issue of security transparency from the cloud providers' perspective, while other works have contributed feasible techniques for the comparison and selection of cloud service providers using metrics such as transparency and trustworthiness. However, there is still a shortage of research that focuses on improving security transparency from the cloud users' point of view. In particular, there is still a gap in the literature that (i) dissects security transparency from the lens of conceptual knowledge up to implementation from organizational and technical perspectives; and (ii) supports continuous transparency by enabling the vetting and probing of cloud service providers' conformity to specific customer requirements. The significant growth in moving business to the cloud, due to its scalability and perceived effectiveness, underlines the dire need for research in this area. This thesis presents a framework that comprises the core conceptual elements that constitute security transparency in cloud computing. It contributes to the knowledge domain of security transparency in cloud computing by proposing the following. Firstly, the research analyses the basics of cloud security transparency by exploring the notion and foundational concepts that constitute security transparency. Secondly, it proposes a framework which integrates various concepts from the requirements engineering domain, together with an accompanying process that can be followed to implement the framework. The framework and its process provide an essential set of conceptual ideas, activities and steps that can be followed at an organizational level to attain security transparency, based on the principles of industry standards and best practices. Thirdly, to ensure continuous transparency, the thesis proposes an essential tool that supports the collection and assessment of evidence from cloud providers, including the establishment of remedial actions for redressing deficiencies in cloud provider practices. The tool serves as a supplementary component of the proposed framework that enables continuous inspection of how predefined customer requirements are being satisfied.
The thesis also validates the proposed security transparency framework and tool in terms of validity, applicability, adaptability, and acceptability using two different case studies. Feedback is collected from stakeholders and analysed using essential criteria such as ease of use, relevance and usability. The result of the analysis illustrates the validity and acceptability of both the framework and the tool in enhancing security transparency in a real-world environment.
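
The transparency tool described above collects evidence from cloud providers and checks it against predefined customer requirements, flagging deficiencies for remediation. The short Python sketch below shows one plausible shape for such a check; the requirement identifiers, the evidence format and the exact-match rule are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch of checking collected provider evidence against
# predefined customer security-transparency requirements.
# Requirement identifiers and the pass/fail rule are assumptions.

from typing import Dict, List

def assess_conformance(requirements: Dict[str, str],
                       evidence: Dict[str, str]) -> List[str]:
    """Return remedial actions for requirements the evidence does not satisfy."""
    remedial_actions = []
    for req_id, expected in requirements.items():
        provided = evidence.get(req_id)
        if provided != expected:
            remedial_actions.append(
                f"{req_id}: expected '{expected}', provider reported "
                f"'{provided or 'no evidence'}' - request corrective action"
            )
    return remedial_actions

customer_requirements = {
    "encryption-at-rest": "AES-256",
    "breach-notification": "within 24h",
}
provider_evidence = {
    "encryption-at-rest": "AES-256",
    "breach-notification": "within 72h",
}
for action in assess_conformance(customer_requirements, provider_evidence):
    print(action)
```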

    A customized semantic service retrieval methodology for the digital ecosystems environment

    With the emergence of the Web and its pervasive intrusion on individuals, organizations, businesses, etc., people now realize that they are living in a digital environment analogous to the ecological ecosystem. Consequently, no individual or organization can ignore the huge impact of the Web on social well-being, growth and prosperity, or the changes that it has brought about to the world economy, transforming it from a self-contained, isolated and static environment into an open, connected, dynamic environment. Recently, the European Union initiated a research vision in relation to this ubiquitous digital environment, known as Digital (Business) Ecosystems. In the Digital Ecosystems environment, there exist ubiquitous and heterogeneous species, and ubiquitous, heterogeneous, context-dependent and dynamic services provided or requested by these species. Nevertheless, existing commercial search engines lack sufficient semantic support, and so cannot disambiguate user queries or provide trustworthy and reliable service retrieval. Furthermore, current semantic service retrieval research focuses on service retrieval in the Web service field, and does not provide retrieval functions that take into account the features of Digital Ecosystem services. Hence, in this thesis, we propose a customized semantic service retrieval methodology enabling trustworthy and reliable service retrieval in the Digital Ecosystems environment, by considering the heterogeneous, context-dependent and dynamic nature of services and the heterogeneous and dynamic nature of service providers and service requesters in Digital Ecosystems.

The customized semantic service retrieval methodology comprises: 1) a service information discovery, annotation and classification methodology; 2) a service retrieval methodology; 3) a service concept recommendation methodology; 4) a quality of service (QoS) evaluation and service ranking methodology; and 5) a service domain knowledge updating, and service-provider-based Service Description Entity (SDE) metadata publishing, maintenance and classification methodology.

The service information discovery, annotation and classification methodology is designed for discovering ubiquitous service information from the Web, annotating the discovered service information with ontology mark-up languages, and classifying the annotated service information by means of specific service domain knowledge, taking into account the heterogeneous and context-dependent nature of Digital Ecosystem services and the heterogeneous nature of service providers. The methodology is realized by the prototype of a Semantic Crawler, the aim of which is to discover service advertisements and service provider profiles from webpages and to annotate the information with service domain ontologies.

The service retrieval methodology enables service requesters to precisely retrieve the annotated service information, taking into account the heterogeneous nature of Digital Ecosystem service requesters. The methodology is presented through the prototype of a Service Search Engine. Since service requesters can be divided into a group that has relevant knowledge with regard to their service requests and a group that does not, we provide two different service retrieval modules. The module for the first group enables service requesters to directly retrieve service information by querying its attributes. The module for the second group enables service requesters to interact with the search engine to denote their queries by means of service domain knowledge, and then retrieve service information based on the denoted queries.

The service concept recommendation methodology concerns the issue of incomplete or incorrect queries. It enables the search engine to recommend relevant concepts to service requesters once they find that the service concepts eventually selected cannot be used to denote their service requests. We premise that there is some overlap between the selected concepts and the concepts denoting the service requests, since the requesters' understanding of their requests influences the selected concepts through a series of human-computer interactions. Therefore, a semantic similarity model is designed that seeks semantically similar concepts based on the selected concepts.

The QoS evaluation and service ranking methodology is proposed to allow service requesters to evaluate the trustworthiness of a service advertisement and rank retrieved service advertisements based on their QoS values, taking into account the context-dependent nature of services in Digital Ecosystems. The core of this methodology is an extended set of CCCI (Correlation of Interaction, Correlation of Criterion, Clarity of Criterion, and Importance of Criterion) metrics, which allows a service requester to evaluate the performance of a service provider in a service transaction based on QoS evaluation criteria in a specific service domain. The evaluation result is then combined with previous results to produce the eventual QoS value of the service advertisement in a service domain. Service requesters can rank service advertisements by considering their QoS values under each criterion in a service domain.

The methodology for service domain knowledge updating, and service-provider-based SDE metadata publishing, maintenance and classification is intended to allow: 1) knowledge users to update the service domain ontologies employed in the service retrieval methodology, taking into account the dynamic nature of services in Digital Ecosystems; and 2) service providers to update their service profiles and manually annotate their published service advertisements by means of service domain knowledge, taking into account the dynamic nature of service providers in Digital Ecosystems. Service domain knowledge updating is realized by a voting system for proposals to change the service domain knowledge, with different weights assigned to the votes of domain experts and normal users.

In order to validate the customized semantic service retrieval methodology, we build a prototype, a Customized Semantic Service Search Engine. Based on the prototype, we test the mathematical algorithms involved in the methodology using a simulation approach and validate the proposed functions of the methodology using a functional testing approach.
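
The abstract names the extended CCCI metrics (Correlation of Interaction, Correlation of Criterion, Clarity of Criterion, Importance of Criterion) but not their formula. The sketch below therefore only illustrates the general pattern of criterion-weighted QoS scoring and ranking of service advertisements; the criteria, weights and scores are assumptions for the example and simplify whatever the thesis actually computes.

```python
# Simplified illustration of criterion-weighted QoS scoring and ranking.
# The real extended CCCI metrics are more elaborate; criteria, weights and
# scores below are assumed for demonstration only.

def qos_value(criterion_scores: dict, importance: dict) -> float:
    """Weighted average of per-criterion scores (scores and weights in [0, 1])."""
    total_weight = sum(importance[c] for c in criterion_scores)
    return sum(criterion_scores[c] * importance[c] for c in criterion_scores) / total_weight

importance = {"delivery_time": 0.5, "accuracy": 0.3, "support": 0.2}

service_ads = {
    "transport-service-A": {"delivery_time": 0.9, "accuracy": 0.8, "support": 0.6},
    "transport-service-B": {"delivery_time": 0.7, "accuracy": 0.9, "support": 0.9},
}

# Rank advertisements by their aggregated QoS value, highest first.
ranked = sorted(service_ads.items(),
                key=lambda item: qos_value(item[1], importance),
                reverse=True)
for name, scores in ranked:
    print(name, round(qos_value(scores, importance), 3))
```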

    Autonomy, Efficiency, Privacy and Traceability in Blockchain-enabled IoT Data Marketplace

    Personal data generated from IoT devices is a new economic asset that individuals can trade to generate revenue on the emerging data marketplaces. Blockchain technology can disrupt the data marketplace and make trading more democratic, trustworthy, transparent and secure. Nevertheless, the adoption of blockchain to create an IoT data marketplace requires consideration of autonomy and efficiency, privacy, and traceability. Conventional centralized approaches are built around a trusted third party that conducts and controls all management operations, such as managing contracts, pricing, billing and reputation mechanisms, raising the concern that providers lose control over their data. To tackle this issue, an efficient, autonomous and fully functional marketplace system is needed, with no trusted third party involved in operational tasks. Moreover, inefficient allocation of buyers' demands to battery-operated IoT devices makes it challenging for providers to serve multiple buyers' demands simultaneously in real time without disrupting their SLAs (service level agreements). Furthermore, a poor privacy decision that makes personal data accessible to unknown or arbitrary buyers may have adverse consequences and lead to privacy violations for providers. Lastly, a buyer could buy data from one marketplace and, without the knowledge of the provider, resell the bought data to users registered in other marketplaces. This may lead to either monetary loss or privacy violations for the provider. To address such issues, a data ownership traceability mechanism is essential, one that can track changes in the ownership of data as it is traded within and across marketplace systems. However, data ownership traceability is hard because of ownership ambiguity, undisclosed reselling, and the dispersal of ownership across multiple marketplaces. This thesis makes the following novel contributions. First, we propose an autonomous and efficient IoT data marketplace, MartChain, offering key marketplace mechanisms that leverage smart contracts to record agreement details, participant ratings, and data prices on the blockchain without involving any mediator. Second, MartChain is underpinned by an Energy-aware Demand Selection and Allocation (EDSA) mechanism for optimally selecting and allocating buyers' demands on the provider's IoT devices while satisfying battery, quality and allocation constraints. EDSA maximizes the revenue of the provider while meeting the buyers' requirements and ensuring the completion of the selected demands without any interruptions. The proof-of-concept implementation on the Ethereum blockchain shows that our approach is viable and benefits both the provider and the buyer by creating an autonomous and efficient real-time data trading model. Next, we propose KYBChain, a Know-Your-Buyer privacy-aware decentralized IoT data marketplace that performs a multi-faceted assessment of various characteristics of buyers and evaluates their privacy rating. The privacy rating empowers providers to make privacy-aware, informed decisions about data sharing. Quantitative analysis of the utility of the privacy rating demonstrates that its use by providers results in a decrease in both data leakage risk and generated revenue, consistent with the classical risk-utility trade-off.
Evaluation of KYBChain on Ethereum reveals that the overheads introduced by our privacy rating mechanism in terms of gas consumption, throughput and latency, compared to a marketplace that does not incorporate a privacy rating system, are insignificant relative to its privacy gains. Finally, we propose TrailChain, which generates a trusted trade trail for tracking data ownership spanning multiple decentralized marketplaces. Our solution includes mechanisms for detecting unauthorized data reselling to prevent privacy violations, and a fair resell payment sharing scheme to distribute payment among data owners for authorized reselling. We performed qualitative and quantitative evaluations to demonstrate the effectiveness of TrailChain in tracking data ownership using four private Ethereum networks. Qualitative security analysis demonstrates that TrailChain is resilient against several malicious activities and security attacks. Simulations show that our method detects undisclosed reselling both within the same marketplace and across different marketplaces. It also identifies whether the provider has authorized the reselling and fairly distributes the revenue among the data owners at marginal overhead.
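
The abstract states that EDSA selects and allocates buyers' demands to maximize the provider's revenue under battery, quality and allocation constraints, but it does not describe the algorithm. The greedy sketch below is merely one plausible way to phrase that selection problem for a single device; the energy-cost model, the revenue-per-energy heuristic and all identifiers are assumptions for illustration.

```python
# Illustrative greedy sketch of an EDSA-like demand selection: pick demands that
# maximize revenue per unit of battery energy until a device's budget is spent.
# The energy model and the greedy heuristic are assumptions, not the thesis's method.

from dataclasses import dataclass

@dataclass
class Demand:
    buyer: str
    revenue: float      # payment offered by the buyer
    energy_cost: float  # battery energy needed to serve the demand

def select_demands(demands: list, battery_budget: float) -> list:
    """Greedily pick demands by revenue-per-energy until the battery budget runs out."""
    selected = []
    remaining = battery_budget
    for d in sorted(demands, key=lambda d: d.revenue / d.energy_cost, reverse=True):
        if d.energy_cost <= remaining:
            selected.append(d)
            remaining -= d.energy_cost
    return selected

demands = [
    Demand("buyer-1", revenue=5.0, energy_cost=2.0),
    Demand("buyer-2", revenue=4.0, energy_cost=1.0),
    Demand("buyer-3", revenue=6.0, energy_cost=4.0),
]
chosen = select_demands(demands, battery_budget=3.0)
print([d.buyer for d in chosen])  # ['buyer-2', 'buyer-1'] with these numbers
```

An exact formulation would treat this as a knapsack-style optimization over all devices; the greedy ratio rule is used here only to keep the example short.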

    A Trust Management Framework for Vehicular Ad Hoc Networks

    The inception of Vehicular Ad Hoc Networks (VANETs) provides an opportunity for road users and public infrastructure to share information that improves the operation of roads and the driver experience. However, such systems can be vulnerable to both malicious external entities and legitimate users. Trust management is used to address attacks from legitimate users in accordance with a user's trust score. Trust models evaluate messages to assign rewards or punishments. This can be used to influence a driver's future behaviour or, in extremis, block the driver. With receiver-side schemes, various methods are used to evaluate trust, including reputation computation, neighbour recommendations, and storing historical information. However, they incur overhead and add a delay when deciding whether to accept or reject messages. In this thesis, we propose a novel Tamper-Proof Device (TPD) based trust framework for managing the trust of multiple drivers at the sender-side vehicle, which updates trust and stores and protects information from malicious tampering. The TPD also regulates, rewards, and punishes each specific driver, as required. Furthermore, the trust score determines the classes of message that a driver can access. Dissemination of feedback is only required when there is an attack (conflicting information). A Road-Side Unit (RSU) rules on a dispute, using either the sum of products of trust and feedback or official vehicle data if available. These "untrue attacks" are resolved by an RSU using collaboration, which then provides a fixed amount of reward or punishment, as appropriate. Repeated attacks are addressed by incremental punishments and, potentially, driver access-blocking when certain conditions are met. The lack of sophistication in this fixed RSU assessment scheme is then addressed by a novel fuzzy logic-based RSU approach. This determines a fairer level of reward and punishment based on the severity of the incident, the driver's past behaviour, and the RSU's confidence. The fuzzy RSU controller assesses judgements in such a way as to encourage drivers to improve their behaviour. Although any driver can lie in any situation, we believe that trustworthy drivers are more likely to remain so, and vice versa. We capture this behaviour in a Markov chain model of sender and reporter driver behaviour, where a driver's truthfulness is influenced by their trust score and trust state. For each trust state, the driver's likelihood of lying or being honest is set by a probability distribution that differs between states. This framework is analysed in Veins using various classes of vehicles under different traffic conditions. Results confirm that the framework operates effectively in the presence of untrue and inconsistent attacks. Correct functioning is confirmed by the system appropriately classifying incidents when clarifier vehicles send truthful feedback. The framework is also evaluated against a centralized reputation scheme, and the results demonstrate that it outperforms the reputation approach in terms of reduced communication overhead and shorter response time. Next, we perform a set of experiments to evaluate the performance of the fuzzy assessment in Veins. The fuzzy and fixed RSU assessment schemes are compared, and the results show that the fuzzy scheme produces better overall driver behaviour. The Markov chain driver behaviour model is also examined when changing the initial trust score of all drivers.
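
The abstract describes a Markov chain in which each trust state carries its own probability of a driver lying, with honest and dishonest reports moving the driver between states, but it gives neither the states nor the probabilities. The sketch below is a hypothetical three-state version intended only to illustrate that mechanism; the state names, lying probabilities and one-step transition rule are assumptions.

```python
# Hypothetical three-state Markov model of driver truthfulness.
# States, lying probabilities and the transition rule are illustrative assumptions.

import random

STATES = ["low_trust", "medium_trust", "high_trust"]
LIE_PROBABILITY = {"low_trust": 0.5, "medium_trust": 0.2, "high_trust": 0.05}

def step(state: str) -> tuple:
    """One interaction: the driver lies with a state-specific probability;
    truthful reports are rewarded (move up a state), lies are punished (move down)."""
    lied = random.random() < LIE_PROBABILITY[state]
    idx = STATES.index(state)
    idx = max(idx - 1, 0) if lied else min(idx + 1, len(STATES) - 1)
    return STATES[idx], lied

random.seed(1)
state = "medium_trust"
for _ in range(5):
    state, lied = step(state)
    print(state, "lied" if lied else "truthful")
```

The thesis analyses this behaviour in Veins; the loop here simply samples the chain to show how a state-specific distribution shapes behaviour over repeated interactions.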

    Blockchain-based reputation models for e-commerce: a systematic literature review

    The Digital Age is the present, and nobody can deny that. With it has come a digital transformation in various sectors of activity, and e-commerce is no exception. Over the last few decades, there has been a massive increase in its utilization rates, as it has several advantages over traditional commerce. At the same time, the rise in the number of crimes on the Internet and, consequently, the understanding of the risks involved in online shopping have led consumers to become more cautious, looking for information about the seller and taking it into account when making a purchase decision. The need to get to know the merchant better before making a purchase decision has encouraged the creation of reputation systems, whose services play an essential role in today's e-commerce context. Reputation systems act as mechanisms to reduce the information asymmetry between consumers and sellers and to establish rankings that attest to the fulfilment of standards and policies considered necessary for shops operating in the digital market. The critical problems in current reputation systems are the frauds and attacks that such systems have to deal with, which result in a lack of trust between users. These security and fraud issues are critical because users' trust is commonly based on reputation models, and many current systems are not immune to them, thus compromising e-commerce growth. The need for a better and safer model emerges with the development of e-commerce. Through reading the articles and pursuing answers to the primary research questions, blockchain emerges as a data-registering technology to be analysed in order to better understand its potential. More research and investigation must be done to fully understand how to create a more assertive reputation model. Thus, this study systematizes the knowledge generated by studies of reputation models in e-commerce found in the Scopus and WoS databases and Google Scholar, using the PRISMA methodology. A systematic approach was adopted in conducting the literature review, motivated by the knowledge that there are reputation systems that mitigate some of the problems. In addition to identifying some of the indicators used in reputation models, we conclude that these models could help provide some assurance to buyers and sellers, mitigating known problems such as collusion, Sybil attacks and laundering attacks, and preventing online fraud such as ballot stuffing and bad-mouthing. Nevertheless, the results of the present work demonstrate that these reputation models still cannot solve all of the problems: addressing one type of fraud can open the door to another attack. The architecture of the models was identified, along with the realization that a few gaps remain to be filled.

    Insights Into Global Engineering Education After the Birth of Industry 5.0

    Insights Into Global Engineering Education After the Birth of Industry 5.0 presents a comprehensive overview of recent developments in the fields of engineering and technology. The book comprises individual chapters authored by various researchers and edited by an expert active in the engineering education research area. It provides a thorough overview of the latest research efforts by international authors on engineering education and opens potential new research paths for further novel developments.

    Exploring Preconditions for Effective Global Responses to Climate Change

    The global response to climate change depends on you… and everyone else. The decisions we make, for better or worse, contribute to the global response. This study explores decision making, climate change signals and responses, actors and interests, and the "conditions" under which we might limit climate change and related impacts. Twenty-seven experts from around the world were asked to provide scenarios in which the global response succeeds or fails to limit climate change and related impacts (i.e. the UNFCCC objective). From their responses, 175 scenarios were compiled, forming a "searchable sample of possible futures". Themes included social change and behaviour, political will and policy, and business and economic activity. For these themes, multiple "pathways" were mapped. The study focused on pathways towards effective global responses (i.e. fulfilling the UNFCCC objective) and on understanding the most important elements of the response system. The study finds there is a "crisis of response" that risks becoming a "crisis of impacts". The signal that drives effective responses (i.e. impacts on people, property and livelihoods) was once undetectable, is detectable now, and is rapidly strengthening. As such, timely global responses at scale are essential. Other preconditions include a mix of ambition and serendipity. From the analysis of effective response scenarios, serendipitous preconditions include the scale of climate change and related impacts being limited and reversible, while unexpected events help limit climate change or related impacts. Ambition-driven preconditions include global responses being timely, with adaptation, mitigation and atmospheric GHG removals at scale, and having contingencies available in case of extreme climate change or other unexpected events. The transformative scale of the required responses means that social permissions and leadership are essential, as are coalitions of actors with the capacity to apply technologies and practices (policies included) and the power to ensure each of us contributes towards effective global responses.

    German-Russian gas relations: a special relationship in troubled waters

    In the context of the security crisis in and over Ukraine, natural gas imports from Russia have become a source of debate in Germany and the European Union. Natural gas relations with Russia are often analyzed either through the prism of commercial and market-based transactions or through that of foreign policy and geopolitics. In that respect, this Research Paper takes a holistic approach and seeks to analyze and define the dynamics of (geo)politics and economic/commercial logics from the early 1970s until today. The paper provides insights into the conduct of German-Russian gas relations at the levels of infrastructure development, trade, business-to-business and commercial ties, as well as political framing. It explains the nature and texture of the gas relations, which have been subject to change over time. (Author's abstract)

    Information security and assurance: Proceedings international conference, ISA 2012, Shanghai, China, April 2012
