Decentralized brokered enabled ecosystem for data marketplace in smart cities towards a data sharing economy
Presently, data are indispensably important, as cities consider data a commodity that can be traded to earn revenue. In urban environments, data generated from Internet of Things devices, smart meters, smart sensors, etc. can provide a new source of income for citizens and enterprises who are data owners. These data can be traded as digital assets, and digital data marketplaces have emerged to support such trading. Data marketplaces promote a data sharing economy, which is crucial for making data available to cities that aim to develop data-driven services. However, currently existing data marketplaces are mostly inadequate due to several issues such as security, efficiency, and adherence to privacy regulations. Likewise, there is no consolidated understanding of how to achieve trust and fairness among data owners and data sellers when trading data. Therefore, this study presents the design of an ecosystem which comprises a distributed ledger technology data marketplace enabled by Message Queuing Telemetry Transport (MQTT) to facilitate trust and fairness among data owners and data sellers. The designed ecosystem for data marketplaces is powered by IOTA technology and an MQTT broker to support the trading of data sources by automating trade agreements, negotiations, and payment settlement between data producers/sellers and data consumers/buyers. Overall, findings from this article discuss the issues associated with developing a decentralized data marketplace for smart cities and suggest recommendations to enhance the deployment of decentralized and distributed data marketplaces.
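As a rough illustration of the broker-mediated exchange described above, the sketch below shows a data producer publishing a priced offer to an MQTT topic and a consumer subscribing to it. The broker address, topic layout, and payload fields are assumptions made for illustration rather than the paper's actual protocol, and the IOTA settlement step is only stubbed out.

```python
# Minimal sketch of broker-mediated data offers over MQTT (assumed topics/fields).
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.org"   # hypothetical broker address
OFFER_TOPIC = "marketplace/offers"   # hypothetical topic layout

def publish_offer(stream_id, price_iota):
    """Data producer advertises a data stream for sale."""
    producer = mqtt.Client()
    producer.connect(BROKER_HOST, 1883)
    offer = {"stream_id": stream_id, "price": price_iota, "unit": "IOTA"}
    producer.publish(OFFER_TOPIC, json.dumps(offer))
    producer.disconnect()

def on_offer(client, userdata, msg):
    """Data consumer receives an offer; ledger settlement is only a placeholder."""
    offer = json.loads(msg.payload)
    print(f"Offer for {offer['stream_id']} at {offer['price']} {offer['unit']}")
    # settle_payment_on_iota(offer)  # hypothetical settlement step, not implemented here

def listen_for_offers():
    consumer = mqtt.Client()
    consumer.on_message = on_offer
    consumer.connect(BROKER_HOST, 1883)
    consumer.subscribe(OFFER_TOPIC)
    consumer.loop_forever()
```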
CamFlow: Managed Data-sharing for Cloud Services
A model of cloud services is emerging whereby a few trusted providers manage
the underlying hardware and communications whereas many companies build on this
infrastructure to offer higher level, cloud-hosted PaaS services and/or SaaS
applications. From the start, strong isolation between cloud tenants was seen
to be of paramount importance, provided first by virtual machines (VM) and
later by containers, which share the operating system (OS) kernel. Increasingly
it is the case that applications also require facilities to effect isolation
and protection of data managed by those applications. They also require
flexible data sharing with other applications, often across the traditional
cloud-isolation boundaries; for example, when government provides many related
services for its citizens on a common platform. Similar considerations apply to
the end-users of applications. But in particular, the incorporation of cloud
services within 'Internet of Things' architectures is driving the requirements
for both protection and cross-application data sharing.
These concerns relate to the management of data. Traditional access control
is application and principal/role specific, applied at policy enforcement
points, after which there is no subsequent control over where data flows; a
crucial issue once data has left its owner's control by cloud-hosted
applications and within cloud-services. Information Flow Control (IFC), in
addition, offers system-wide, end-to-end, flow control based on the properties
of the data. We discuss the potential of cloud-deployed IFC for enforcing
owners' dataflow policy with regard to protection and sharing, as well as
safeguarding against malicious or buggy software. In addition, the audit log
associated with IFC provides transparency, giving configurable system-wide
visibility over data flows. [...]
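To make the IFC idea concrete, here is a minimal sketch of tag-based flow checking in the style of decentralised IFC models such as the one CamFlow builds on; the label structure and example tags are illustrative assumptions, not CamFlow's kernel implementation. A flow from a source to a destination is allowed only if the destination carries all of the source's secrecy tags and the source carries all of the destination's integrity tags.

```python
# Sketch of a decentralised-IFC-style flow check (labels as sets of tags; illustrative only).
from dataclasses import dataclass, field

@dataclass
class Label:
    secrecy: frozenset = field(default_factory=frozenset)    # who may read the data
    integrity: frozenset = field(default_factory=frozenset)  # how trusted the data is

def flow_allowed(src: Label, dst: Label) -> bool:
    """A flow src -> dst is safe if no secrecy tag is dropped and
    no integrity tag is gained along the way."""
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

# Example: data tagged 'medical' may flow into a process that also carries
# the 'medical' secrecy tag, but not into an untagged public sink.
patient_data = Label(secrecy=frozenset({"medical"}))
analytics_app = Label(secrecy=frozenset({"medical"}))
public_sink = Label()
assert flow_allowed(patient_data, analytics_app)
assert not flow_allowed(patient_data, public_sink)
```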
Anonymity and trust in the electronic world
Privacy has never been an explicit goal of authorisation mechanisms. The traditional
approach to authorisation relies on strong authentication of a stable identity
using long-term credentials. Audit is then linked to authorisation via the same
identity. Such an approach compels users to enter into a trust relationship with
large parts of the system infrastructure, including entities in remote domains. In
this dissertation we advance the view that this type of compulsive trust relationship
is unnecessary and can have undesirable consequences. We examine in some
detail the consequences which such undesirable trust relationships can have on
individual privacy, and investigate the extent to which taking a unified approach
to trust and anonymity can actually provide useful leverage to address threats to
privacy without compromising the principal goals of authentication and audit. We
conclude that many applications would benefit from mechanisms which enabled
them to make authorisation decisions without using long-term credentials. We
next propose specific mechanisms to achieve this, introducing a novel notion of
a short-lived electronic identity, which we call a surrogate. This approach allows
a localisation of trust, so that entities are not compelled to transitively trust other entities
in remote domains. In particular, resolution of stable identities needs only
ever to be done locally to the entity named. Our surrogates allow delegation, enable
role-based access control policies to be enforced across multiple domains,
and permit the use of non-anonymous payment mechanisms, all without compromising
the privacy of a user. The localisation of trust resulting from the approach
proposed in this dissertation also has the potential to allow clients to control the
risks to which they are exposed by bearing the cost of relevant countermeasures
themselves, rather than forcing clients to trust the system infrastructure to protect
them and to bear an equal share of the cost of all countermeasures whether or not
effective for them. This consideration means that our surrogate-based approach
and mechanisms are of interest even in Kerberos-like scenarios where anonymity
is not a requirement, but the remote authentication mechanism is untrustworthy.
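As a loose illustration of the surrogate idea, the sketch below mints a short-lived, pseudonymous identity signed by the local domain, which can be verified locally without resolving the holder's stable identity; the token format, lifetime, and key handling are illustrative assumptions rather than the mechanisms defined in the dissertation.

```python
# Sketch of a short-lived "surrogate" identity: a fresh pseudonym bound to an
# expiry time and signed by the local domain (assumed token format).
import hashlib
import hmac
import os
import secrets
import time

DOMAIN_KEY = os.urandom(32)  # key held by the local issuing domain (illustrative)

def issue_surrogate(lifetime_s: int = 300) -> str:
    """Return a pseudonym|expiry|tag token valid for lifetime_s seconds."""
    pseudonym = secrets.token_hex(16)          # fresh and unlinkable per issue
    expiry = str(int(time.time()) + lifetime_s)
    tag = hmac.new(DOMAIN_KEY, f"{pseudonym}|{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{pseudonym}|{expiry}|{tag}"

def verify_surrogate(token: str) -> bool:
    """Verification is purely local: no lookup of a stable, long-term identity."""
    pseudonym, expiry, tag = token.split("|")
    expected = hmac.new(DOMAIN_KEY, f"{pseudonym}|{expiry}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and time.time() < int(expiry)

token = issue_surrogate()
assert verify_surrogate(token)
```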
Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control
Designing a safe, trusted, and ethical AI may be practically impossible;
however, designing AI with safe, trusted, and ethical use in mind is possible
and necessary in safety and mission-critical domains like aerospace. Safe,
trusted, and ethical use of AI are often used interchangeably; however, a
system can be safely used but not trusted or ethical, have a trusted use that
is not safe or ethical, and have an ethical use that is not safe or trusted.
This manuscript serves as a primer to illuminate the nuanced differences
between these concepts, with a specific focus on applications of Human-AI
teaming in aerospace system control, where humans may be in, on, or
out-of-the-loop of decision-making.
Temporary Access to Medical Records in Emergency Situations
Access to patients' Electronic Health Records (EHRs) is a daily operation in mainstream healthcare. However, while access to EHRs in emergencies is vitally important for saving patients' lives, it could potentially lead to security breaches and violations of patients' privacy. In this regard, getting access to patients' medical records in emergency situations is one of the issues that emergency responder teams are facing. This access can be temporary, lasting until patients reach hospitals or healthcare centers. In this paper, we explore different technology-based solutions to give responders temporary access to patients' medical records in emergency situations. The core of this study is patient and responder authentication methods that can save precious emergency time and protect the privacy and confidentiality of patients' data to the utmost. We also explore access control mechanisms and security audits to increase the security of the procedure and patient privacy.
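The sketch below illustrates the kind of time-limited, audited emergency ("break-glass") access the abstract alludes to; the roles, expiry window, and audit fields are illustrative assumptions, not the specific authentication methods the paper explores.

```python
# Sketch of temporary, audited emergency access to a medical record
# (assumed roles and expiry policy).
import time

AUDIT_LOG = []  # every grant and read attempt is recorded for later review

def grant_emergency_access(responder_id: str, patient_id: str, window_s: int = 3600) -> dict:
    """Issue a temporary grant that expires once the patient should have
    reached a hospital or healthcare centre."""
    grant = {"responder": responder_id, "patient": patient_id,
             "expires_at": time.time() + window_s}
    AUDIT_LOG.append({"event": "grant", **grant, "at": time.time()})
    return grant

def read_record(grant: dict, patient_id: str) -> bool:
    """Allow the read only while the grant is unexpired and for the named patient."""
    allowed = grant["patient"] == patient_id and time.time() < grant["expires_at"]
    AUDIT_LOG.append({"event": "read", "patient": patient_id,
                      "allowed": allowed, "at": time.time()})
    return allowed

g = grant_emergency_access("responder-42", "patient-7")
assert read_record(g, "patient-7")
assert not read_record(g, "patient-9")
```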
Trustworthy Federated Learning: A Survey
Federated Learning (FL) has emerged as a significant advancement in the field
of Artificial Intelligence (AI), enabling collaborative model training across
distributed devices while maintaining data privacy. As the importance of FL
increases, addressing trustworthiness issues in its various aspects becomes
crucial. In this survey, we provide an extensive overview of the current state
of Trustworthy FL, exploring existing solutions and well-defined pillars
relevant to Trustworthy FL. Despite the growth in literature on trustworthy
centralized Machine Learning (ML)/Deep Learning (DL), further efforts are
necessary to identify trustworthiness pillars and evaluation metrics specific
to FL models, as well as to develop solutions for computing trustworthiness
levels. We propose a taxonomy that encompasses three main pillars:
Interpretability, Fairness, and Security & Privacy. Each pillar represents a
dimension of trust, further broken down into different notions. Our survey
covers trustworthiness challenges at every level in FL settings. We present a
comprehensive architecture of Trustworthy FL, addressing the fundamental
principles underlying the concept, and offer an in-depth analysis of trust
assessment mechanisms. In conclusion, we identify key research challenges
related to every aspect of Trustworthy FL and suggest future research
directions. This comprehensive survey serves as a valuable resource for
researchers and practitioners working on the development and implementation of
Trustworthy FL systems, contributing to a more secure and reliable AI
landscape.
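Purely as an illustration of what computing a trustworthiness level over the survey's three pillars could look like, the sketch below combines per-pillar scores into a weighted overall score; the pillar names follow the taxonomy above, but the scores, weights, and aggregation rule are assumptions, not a method the survey defines.

```python
# Sketch: aggregating per-pillar trust scores for an FL system (assumed weights and rule).
PILLARS = ("interpretability", "fairness", "security_privacy")

def trust_level(scores: dict, weights: dict) -> float:
    """Weighted average of pillar scores in [0, 1]; weights must cover all pillars."""
    total = sum(weights[p] for p in PILLARS)
    return sum(weights[p] * scores[p] for p in PILLARS) / total

example_scores = {"interpretability": 0.6, "fairness": 0.8, "security_privacy": 0.9}
example_weights = {"interpretability": 1.0, "fairness": 1.0, "security_privacy": 2.0}
print(round(trust_level(example_scores, example_weights), 2))  # -> 0.8
```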
A blockchain-based framework for trusted quality data sharing towards zero-defect manufacturing
There is a current wave of a new generation of digital solutions based on intelligent systems, hybrid digital twins, and AI-driven optimization tools to assure quality in smart factories. Such digital solutions heavily depend on quality-related information within the supply chain business ecosystem to drive zero-waste value chains. To empower zero-waste value chain strategies with meaningful, reliable, and trustworthy data, there must be a solution for end-to-end industrial data traceability, trust, and security across multiple process chains or even inter-organizational supply chains. In this paper, we first present Product, Process, and Data quality services to drive zero-waste value chain strategies. Following this, we present the Trusted Framework (TF), which is a key enabler for the secure and effective sharing of quality-related information within the supply chain business ecosystem, and thus for quality optimization actions towards zero-defect manufacturing. The TF specification includes the data model and format of the Process/Product/Data (PPD) Quality Hallmark, the OpenAPI exposed to factory systems, and a comprehensive Identity Management layer for secure horizontal and vertical quality data integration. The PPD Hallmark and the TF already address some of the industrial needs for a trusted approach to sharing quality data between the different stakeholders of the production chain to empower zero-waste value chain strategies.
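To give a feel for what a hallmark-style quality record shared through such a framework might contain, the sketch below builds a small, hash-sealed record for a single process step; the field names and hashing scheme are illustrative assumptions, not the TF's published PPD Quality Hallmark data model.

```python
# Sketch of a hash-sealed quality record for traceable sharing (assumed fields).
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class QualityRecord:
    product_id: str
    process_step: str
    measurement: float       # e.g. measured tolerance in mm
    within_spec: bool
    timestamp: float

def seal(record: QualityRecord) -> str:
    """Deterministic digest that downstream partners can recompute to detect tampering."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

rec = QualityRecord("PRD-001", "milling", 0.012, True, time.time())
print(seal(rec))  # digest to be exchanged or anchored alongside the record
```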
CamFlow: Managed Data-Sharing for Cloud Services
A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications whereas many companies build on this infrastructure to offer higher level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen to be of paramount importance, provided first by virtual machines (VM) and later by containers, which share the operating system (OS) kernel. Increasingly it is the case that applications also require facilities to effect isolation and protection of data managed by those applications. They also require flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when government, consisting of different departments, provides services to its citizens through a common platform. These concerns relate to the management of data. Traditional access control is application and principal/role specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; a crucial issue once data has left its owner's control by cloud-hosted applications and within cloud-services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end, flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' data flow policy with regard to protection and sharing, as well as safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency and offers system-wide visibility over data flows. This helps those responsible to meet their data management obligations, providing evidence of compliance, and aids in the identification of policy errors and misconfigurations. We present our IFC model and describe and evaluate our IFC architecture and implementation (CamFlow). This comprises an OS level implementation of IFC with support for application management, together with an IFC-enabled middleware. This work was supported by UK Engineering and Physical Sciences Research Council grant EP/K011510 CloudSafetyNet: End-to-End Application Security in the Cloud. We acknowledge the support of Microsoft through the Microsoft Cloud Computing Research Centre.
Blockchain-Empowered Security Enhancement IoT Framework in Building Management System
Centralized architectures, like the cloud model, have their advantages, but they also come with drawbacks, such as higher upfront costs, longer deployment times, and a higher probability of catastrophic failure. Building Management Systems (BMS) are an application domain that can adopt Internet of Things (IoT) designs and services. However, implementing IoT in a highly modular environment with various moving parts and interdependencies between stakeholders can create security issues. Therefore, this paper proposes a system design that uses Blockchain technology as a means to protect and control the system, integrating IoT and BMS technologies. The paper also includes a broad discussion of current Blockchain-based IoT solutions and their limitations in Building Management Systems.
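As a rough sketch of how a blockchain can make BMS/IoT event data tamper-evident, the snippet below hash-chains sensor readings so that any later modification breaks verification; it is an illustrative minimal chain with assumed field names, not the architecture the paper proposes.

```python
# Sketch: hash-chained log of BMS sensor readings (illustrative, not the paper's design).
import hashlib
import json
import time

def make_block(reading: dict, prev_hash: str) -> dict:
    block = {"reading": reading, "timestamp": time.time(), "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"sensor": "hvac-temp-1", "value": 21.4}, prev_hash="0" * 64)]
chain.append(make_block({"sensor": "hvac-temp-1", "value": 21.6}, chain[-1]["hash"]))
assert verify_chain(chain)
chain[0]["reading"]["value"] = 99.9   # tampering with an earlier reading is detected
assert not verify_chain(chain)
```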