Architecting a Blockchain-Based Framework for the Internet of Things
Traditionally, Internet-of-Things (IoT) solutions are based on centralized infrastructures, which necessitate high-end servers for handling and transferring data. Centralized solutions incur high costs associated with maintaining centralized servers, and provide no built-in guarantees against security threats and trust issues. Mitigating these problems by developing new methods for IoT decentralisation is therefore an essential research problem.
In recent years, blockchain technology, the underlying technology of Bitcoin, has attracted research interest as the potential missing link towards building a truly decentralized, trustless and secure environment for the IoT. Nevertheless, employing blockchains in the IoT raises significant scalability challenges, since all transactions logged in a blockchain undergo a decentralized consensus process.
This thesis presents the design and implementation of a blockchain-based decentralized IoT framework that can leverage the inherent security characteristics of blockchains, while addressing the challenges associated with developing such a framework. This decentralized IoT framework aims to employ blockchains in combination with other peer-to-peer mechanisms to provide: access control; secure IoT data transfer; peer-to-peer data-sharing business models; and secure end-to-end IoT communications, without depending upon a centralized intermediary for authentication or data handling.
This framework uses a multi-tiered blockchain architecture with a control-plane/data-plane split, in which bulk data is transferred through peer-to-peer data transfer mechanisms, while blockchains are used to enforce terms and conditions and store relevant timestamped metadata. Implementations of the blockchain-based framework have been presented in a multitude of use-cases, to observe the framework's viability and adaptability in real-world scenarios. These scenarios involved traceability in supply chains, IoT data monetization, and security in end-to-end communications. With all the potential applications of the blockchain-based framework within the IoT, this thesis takes a step towards the goal of a truly decentralized IoT.
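The control-plane/data-plane split described above can be sketched in a few lines: only a digest and timestamped metadata are logged on-chain, while the payload itself travels over a peer-to-peer channel. The function and field names below are illustrative assumptions, not the framework's actual schema.

```python
import hashlib
import json
import time

def make_onchain_record(payload: bytes, sender: str, receiver: str) -> dict:
    """Build the compact metadata record that would be logged on-chain,
    while the bulk payload is transferred peer-to-peer."""
    return {
        "sha256": hashlib.sha256(payload).hexdigest(),  # integrity anchor
        "size": len(payload),
        "sender": sender,
        "receiver": receiver,
        "timestamp": int(time.time()),
    }

def verify_delivery(record: dict, received: bytes) -> bool:
    """The receiver recomputes the digest and checks it against the
    on-chain record, without trusting any central intermediary."""
    return hashlib.sha256(received).hexdigest() == record["sha256"]

data = b"sensor readings ..."
rec = make_onchain_record(data, "device-A", "gateway-B")
print(json.dumps(rec, indent=2))
assert verify_delivery(rec, data)
```

Logging only the digest keeps on-chain storage constant-size per transfer, which is what makes the design tolerable despite the consensus overhead noted above.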
Mustang Daily, May 16, 2002
Student newspaper of California Polytechnic State University, San Luis Obispo, CA. https://digitalcommons.calpoly.edu/studentnewspaper/6892/thumbnail.jp
Cyber Security
This open access book constitutes the refereed proceedings of the 17th International Annual Conference on Cyber Security, CNCERT 2021, held in Beijing, China, in July 2021. The 14 papers presented were carefully reviewed and selected from 51 submissions. The papers are organized according to the following topical sections: data security; privacy protection; anomaly detection; traffic analysis; social network security; vulnerability detection; text classification.
Privacy-preserving decentralised collaborative applications
Cloud-based applications are problematic from a privacy perspective because they typically have access to large amounts of user data and metadata. This centralisation of user data creates an attractive target for actors such as criminals, suppressive governments, and companies selling the data. At the same time, the popularity of mobile and web applications has led to a growing amount of sensitive data being stored in the cloud.
This dissertation focuses on collaborative applications, such as Google Docs and Microsoft Office Online, where users currently rely on cloud-based solutions. It explores decentralised alternatives that allow the use of end-to-end encryption and anonymous communication systems to improve both information privacy and communication privacy.
One approach for a collaborative application to synchronise data in a privacy-preserving way is to use Tor hidden services, providing end-to-end encrypted communication while also hiding collaborators’ identities. However, running Tor comes at a cost. We explore the costs of running a hidden service on a smartphone. Smartphones are now the most frequently used computing devices, but they are also relatively resource-constrained. We build an empirical model of monthly cellular data traffic, and estimate a median of 198 MiB for a typical user. We further estimate that the network activity would cost at least 9.6% of daily battery capacity on a Nexus One using 3G Internet. We explore four optimisations that, in combination, reduce the estimated median data cost to 61 MiB.
We also consider the security and privacy properties of decentralised collaborative applications, and explore a challenge that is introduced by a decentralised design – the lack of a trusted server guaranteeing consistency between collaborators. We present a novel snapshot protocol that ensures consistency, whilst allowing the past edit history to be hidden from new collaborators, and without relying on a consensus mechanism.
Lastly, we evaluate the overhead of the snapshot protocol by replaying editing histories from 270 Wikipedia articles, and demonstrate how its correctness and security properties are achieved. Assuming the number of collaborators remains small, the protocol is scalable in terms of CPU, memory, and network usage. It substantially reduces the amount of data transferred to a new collaborator compared to a basic protocol that transmits the full history. The computational cost is on the order of milliseconds per operation, indicating the protocol is suitable for applications where the rate of edits is relatively low. Funding was provided by Microsoft Research, The Boeing Company, and the Computer Laboratory.
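A minimal sketch of the snapshot idea, under the simplifying assumptions that edits are append operations and that the history is committed to via a hash chain (the thesis's actual protocol is more involved):

```python
import hashlib

def apply(state: str, op: str) -> str:
    # Toy edit operation: append text to the document.
    return state + op

def chain_hash(prev: str, op: str) -> str:
    # Commit to the edit history without revealing the edits themselves.
    return hashlib.sha256((prev + op).encode()).hexdigest()

# Existing collaborators hold the full edit history.
history = ["Hello", ", ", "world"]
state, h = "", "genesis"
for op in history:
    state = apply(state, op)
    h = chain_hash(h, op)

# A snapshot transmits only (state, h): the new collaborator learns the
# current document plus a commitment to the past, but not the past edits,
# and receives far less data than a replay of the full history.
snapshot = (state, h)
print(snapshot[0])  # Hello, world
```

The hash chain lets existing collaborators later prove that the snapshot was consistent with the history they witnessed, without any trusted server or consensus mechanism.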
Availability, Integrity, and Confidentiality for Content Centric Network internet architectures
The Internet as we know it today, despite being "the result of a series of accidents of choices" in Prof. Jon Crowcroft's words, has undoubtedly been an amazing success story. However, it has been constantly challenged by the demands of the overwhelming evolution of data traffic types, non-functional needs of applications and users, and device diversity. The phrase "future internet architecture" can be interpreted as referring to a revised set of design principles. As Dr David Clark rightfully suggested, we need to "allow for the future in the face of the present". Content Centric Networking (CCN) is one of the candidates for a future internet architecture. Security is one of the most significant considerations when designing a future internet architecture. Availability, Integrity, and Confidentiality (AIC) are considered the three most crucial components of security: 1) availability is the assurance of continuous, reliable, and uninterrupted access to information by authorized people; 2) integrity is the preservation of information and the prevention of any change to it caused by accident or malicious intent; and 3) confidentiality is the ability to keep information secret from unintended audiences, intruders, and adversaries. This thesis discusses AIC-related security threats and corresponding remedies for Named Data Networking (NDN), a promising example of CCN. It also presents a system dynamics modelling approach to bridge the gap between technical solutions and business strategy by quantifying some of the qualitative variables salient to technology architects, policymakers, lawmakers, regulators, and internet service providers for the design of a future-proof internet architecture.
Managing Intellectual Property to Foster Agricultural Development
Over the past decades, consideration of IPRs has become increasingly important in many areas of agricultural development, including foreign direct investment, technology transfer, trade, investment in innovation, access to genetic resources, and the protection of traditional knowledge. The widening role of IPRs in governing the ownership of—and access to—innovation, information, and knowledge makes them particularly critical in ensuring that developing countries benefit from the introduction of new technologies that could radically alter the welfare of the poor. Failing to improve IPR policies and practices to support the needs of developing countries will eliminate significant development opportunities. The discussion in this note moves away from policy prescriptions to focus on investments to improve how IPRs are used in practice in agricultural development. These investments must be seen as complementary to other investments in agricultural development. IPRs are woven into the context of innovation and R&D. They can enable entrepreneurship and allow the leveraging of private resources for resolving the problems of poverty. Conversely, IPR issues can delay important scientific advancements, deter investment in products for the poor, and impose crippling transaction costs on organizations if the wrong tools are used or tools are badly applied. The central benefit of pursuing the investments outlined in this note is to build into the system a more robust capacity for strategic and flexible use of IPRs tailored to development goals.
SoK: Signatures With Randomizable Keys
Digital signature schemes with specific properties have recently seen various real-world applications with a strong emphasis on privacy-enhancing technologies. They have been extensively used to develop anonymous credentials schemes and to achieve an even more comprehensive range of functionalities in the decentralized web.
Substantial work has been done to formalize different types of signatures where an allowable set of transformations can be applied to message-signature pairs to obtain new related pairs. Most of the previous work focused on transformations with respect to the message being signed, but little has been done to study what happens when transformations apply to the signing keys. A first attempt to thoroughly formalize such aspects was carried out by Derler and Slamanig (ePrint '16, Designs, Codes and Cryptography '19), followed by more recent efforts by Backes et al. (ASIACRYPT '18) and Eaton et al. (ePrint '23). However, the literature on the topic is vast and different terminology is used across contributions, which makes it difficult to compare related works and understand the range of applications covered by a given construction.
In this work, we present a unified view of signatures with randomizable keys and revisit their security properties. We focus on state-of-the-art constructions and related applications, identifying existing challenges. Our systematization allows us to highlight gaps, open questions, and directions for future research on signatures with randomizable keys.
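To make the notion concrete, here is a toy sketch of additive key randomization for Schnorr signatures over a deliberately tiny discrete-log group. The parameters are illustrative and wholly insecure, and this is a generic textbook construction, not one of the paper's schemes; real deployments use elliptic-curve groups.

```python
import hashlib
import secrets

# Toy Schnorr group (far too small for real use): p = 2q + 1, g of order q.
p, q, g = 2039, 1019, 4

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1       # secret key
    return x, pow(g, x, p)                  # (sk, pk)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)
    return R, (k + H(R, msg) * x) % q

def verify(y, msg, sig):
    R, s = sig
    return pow(g, s, p) == (R * pow(y, H(R, msg), p)) % p

def randomize_keys(x, y, r):
    # Additive randomization: anyone knowing y and r can derive y',
    # and the holder of x can derive the matching x'.
    return (x + r) % q, (y * pow(g, r, p)) % p

x, y = keygen()
r = secrets.randbelow(q - 1) + 1
x2, y2 = randomize_keys(x, y, r)
sig = sign(x2, "hello")
assert verify(y2, "hello", sig)  # valid under the randomized key
# A verifier holding only the original key y cannot link sig to it,
# which is the unlinkability property exploited by anonymous credentials.
```

The compatibility of signing with `x + r` and verifying with `y · g^r` is exactly the kind of key-transformation behaviour the SoK above systematizes.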
An ant-inspired, deniable routing approach in ad hoc question & answer networks
The ubiquity of the Internet facilitates electronic question and answering (Q&A) between real people with ease via community portals and social networking websites. It is a useful service which allows users to appeal to a broad range of answerers. In most cases, however, Q&A services produce answers by presenting questions to the general public or associated digital community with little regard for the amount of time users spend examining and answering them. Ultimately, a question may receive large amounts of attention but still not be answered adequately.
Several existing pieces of research investigate the reasons why questions do not receive answers on Q&A services and suggest that it may be associated with users being afraid of expressing themselves. Q&A works well for solving information needs; however, it rarely takes into account the privacy requirements of the users who form the service.
This thesis was motivated by the need for a more targeted approach towards Q&A by distributing the service across ad hoc networks. The main contribution of this thesis is a novel routing technique and networking environment (distributed Q&A) which balances answer quality and user attention while protecting privacy through plausible deniability. Routing approaches are evaluated experimentally by statistics gained from peer-to-peer network simulations, composed of Q&A users modelled via features extracted from the analysis of a large Yahoo! Answers dataset. Suggestions for future directions to this work are presented from the knowledge gained from our results and conclusions.
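As a rough illustration of the ant-inspired idea (the thesis's actual routing rules are not reproduced here), a node might keep a pheromone table over its neighbours, choose next hops probabilistically, and reinforce hops that led to good answers. All names and parameters below are hypothetical.

```python
import random

class AntRouter:
    """Toy per-node pheromone table for forwarding questions."""

    def __init__(self, neighbours, evaporation=0.1):
        self.tau = {n: 1.0 for n in neighbours}  # pheromone per next hop
        self.rho = evaporation

    def choose_next_hop(self):
        # Probabilistic choice supports plausible deniability: any neighbour
        # may be picked, so forwarding a question reveals little about its
        # origin, yet well-reinforced hops are chosen more often.
        hops, weights = zip(*self.tau.items())
        return random.choices(hops, weights=weights)[0]

    def reinforce(self, hop, answer_quality):
        # Evaporate pheromone everywhere, then deposit extra pheromone on
        # the hop that led to a good answer, biasing future routing.
        for n in self.tau:
            self.tau[n] *= (1 - self.rho)
        self.tau[hop] += answer_quality

router = AntRouter(["alice", "bob", "carol"])
hop = router.choose_next_hop()
router.reinforce(hop, answer_quality=0.8)
```

Evaporation keeps stale routes from dominating, while reinforcement concentrates questions on answerers who have proven helpful, which is how this family of heuristics balances answer quality against wasted user attention.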
Virtual Machine Image Management for Elastic Resource Usage in Grid Computing
Grid Computing has evolved from an academic concept to a powerful paradigm in the area of high performance computing (HPC). Over the last few years, powerful Grid computing solutions were developed that allow the execution of computational tasks on distributed computing resources. Grid computing has recently attracted many commercial customers. To enable commercial customers to process sensitive data in the Grid, strong security mechanisms must be put in place to protect the customers' data.
In contrast, the development of Cloud Computing, which entered the scene in 2006, was driven by industry: it was designed with security in mind from the beginning. Virtualization technology is used to separate users, e.g., by putting each user of a system inside a separate virtual machine, which prevents them from accessing other users' data.
The use of virtualization in the context of Grid computing was examined early on and was found to be a promising approach to counter the security threats that have appeared with commercial customers.
One main part of the work presented in this thesis is the Image Creation Station (ICS), a component which allows users to administer their virtual execution environments (virtual machines) themselves and which is responsible for managing and distributing the virtual machines in the entire system.
In contrast to Cloud computing, which was designed to allow even inexperienced users to execute their computational tasks in the Cloud easily, Grid computing is much more complex to use. The ICS makes it easier to use the Grid by overcoming traditional limitations, such as having to install needed software on the compute nodes used to execute the computational tasks. This allows users to bring commercial software to the Grid for the first time, without the need for local administrators to install the software on computing nodes that are accessible by all users. Moreover, the administrative burden is shifted from the local Grid site's administrator to the users or to experienced software providers, which allows individually tailored virtual machines to be provided to each user. The ICS is not only responsible for enabling users to manage their virtual machines themselves; it also ensures that the virtual machines are available on every site that is part of the distributed Grid system.
A second aspect of the presented solution focuses on the elasticity of the system by automatically acquiring free external resources depending on the system's current workload. In contrast to existing systems, the presented approach allows the system's administrator to add or remove resource sets during runtime without needing to restart the entire system. Moreover, the presented solution allows users not only to use existing Grid resources but also to scale out to Cloud resources and use these resources on demand. By ensuring that unused resources are shut down as soon as possible, the computational costs of a given task are minimized. In addition, the presented solution allows each user to specify which resources can be used to execute a particular job. This is useful when a job processes sensitive data, e.g., data that is not allowed to leave the company. To obtain a comparable function in today's systems, a user must submit her computational task to a particular resource set, losing the ability to schedule automatically when more than one set of resources could be used.
In addition, the proposed solution prioritizes each set of resources by taking different metrics into account (e.g., the level of trust or computational costs) and tries to schedule the job to the resources with the highest priority first. Notably, the priority often mirrors the physical distance from the resources to the user: a locally available cluster usually has a higher priority due to its high level of trust and its computational costs, which are usually lower than the costs of using Cloud resources. This scheduling strategy therefore minimizes the costs of job execution while improving security at the same time, since data is not necessarily transferred to remote resources and the probability of attacks by malicious external users is minimized.
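The prioritized, user-constrained scheduling described here can be sketched as follows. The resource-set fields and the selection rule are illustrative assumptions, not the system's actual data model.

```python
def schedule(job, resource_sets):
    """Pick the highest-priority resource set that the job is allowed
    to use and that still has free slots (illustrative sketch only)."""
    candidates = [
        rs for rs in resource_sets
        if rs["name"] in job["allowed"] and rs["free_slots"] > 0
    ]
    if not candidates:
        return None  # job stays queued until capacity frees up
    best = max(candidates, key=lambda rs: rs["priority"])
    best["free_slots"] -= 1
    return best["name"]

# Priority encodes trust and cost: the local cluster ranks highest,
# external Grid sites next, on-demand Cloud resources last.
resource_sets = [
    {"name": "local-cluster", "priority": 3, "free_slots": 0},
    {"name": "grid-site-b",   "priority": 2, "free_slots": 4},
    {"name": "public-cloud",  "priority": 1, "free_slots": 100},
]
# A job with sensitive data excludes the Cloud from its allowed set.
job = {"allowed": {"local-cluster", "grid-site-b"}}
print(schedule(job, resource_sets))  # grid-site-b
```

Because the local cluster is full, the job falls back to the next-highest-priority permitted set rather than spilling to the Cloud, which is exactly the cost- and security-minimizing behaviour the paragraph describes.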
Bringing both components together results in a system that adapts automatically to the current workload by using external (e.g., Cloud) resources together with existing locally available resources or Grid sites, and that provides individually tailored virtual execution environments to the system's users.