In Things We Trust? Towards trustability in the Internet of Things
This essay discusses the main privacy, security and trustability issues with
the Internet of Things.
CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020
The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and
Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market,
technical, ethical and governance challenges posed by the intersection of AI and cybersecurity,
focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder
by design and composed of academics, industry players from various sectors, policymakers and civil
society.
The Task Force is currently discussing issues such as the state and evolution of the application of AI
in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics
between cyber attackers and defenders; the increasing need for sharing information on threats and
how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and
possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.
As part of these activities, this report assesses the High-Level Expert Group (HLEG) on AI's Ethics
Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, it analyses and
makes suggestions on the Trustworthy AI Assessment List (pilot version), a non-exhaustive list
intended to help the public and private sectors operationalise Trustworthy AI. The list is composed
of 131 items meant to guide AI designers and developers throughout the process of
design, development, and deployment of AI, although it is not intended as guidance for ensuring
compliance with applicable laws. The list is in its piloting phase and is currently undergoing a
revision that will be finalised in early 2020.
This report aims to contribute to this revision by addressing, in particular, the interplay between
AI and cybersecurity. This evaluation has been made according to specific criteria: whether and how
the items of the Assessment List refer to existing legislation (e.g. GDPR, EU Charter of Fundamental
Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks
are fundamentally different from traditional cyberattacks; whether they are compatible with
different risk levels; whether they are flexible enough to allow clear, easy measurement and
implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles
for industry.
The HLEG is a diverse group, with more than 50 members representing different stakeholders, such
as think tanks, academia, EU Agencies, civil society, and industry, who were given the difficult task of
producing a simple checklist for a complex issue. The public engagement exercise looks successful
overall in that more than 450 stakeholders have signed in and are contributing to the process.
The next sections of this report present the items listed by the HLEG followed by the analysis and
suggestions raised by the Task Force (see the list of Task Force members in Annex 1).
Privacy, security, and trust issues in smart environments
Recent advances in networking, handheld computing and sensor technologies have driven research towards the realisation of Mark Weiser's dream of calm and ubiquitous computing (variously called pervasive computing, ambient computing, active spaces, the disappearing computer or context-aware computing). In turn, this has led to the emergence of smart environments as one significant facet of research in this domain. A smart environment, or space, is a region of the real world that is extensively equipped with sensors, actuators and computing components [1]. In effect, the smart space becomes part of a larger information system: all actions within the space potentially affect the underlying computer applications, which may themselves affect the space through the actuators. Such smart environments have tremendous potential in many application areas to improve the utility of a space. Consider the potential offered by a smart environment that prolongs the time an elderly or infirm person can live an independent life, or one that supports vicarious learning.
Intelligent Trust based Security Framework for Internet of Things
Trust models have recently been proposed for Internet of Things (IoT) applications as a significant mechanism of protection against external threats. This approach to IoT risk management is viable, trustworthy, and secure. At present, however, no trust-based security mechanism has been specified for immersive IoT applications. Several unfamiliar participants or machines share their resources through distributed systems to carry out a job or provide a service: a peer may gain access to tools, network routes, connections, processing power, and storage space. This puts IoT users at much greater risk of, for example, loss of anonymity, data leakage, and other safety violations. Measuring the trust of new nodes has therefore become crucial for mitigating threats from unknown peers. Trust must be evaluated in the application context using acceptable metrics based on the functional properties of nodes. Current trust models cannot explicitly capture multifaceted confidence parameterisation, and most of them model loss of confidence inadequately. Reputation ratings are frequently mis-weighted when previous confidence is taken into account, increasing the impact of harmful recommendations.
In this manuscript, a systematic method called Relationship History with cumulative trust value (a distributed confidence-management scheme) is proposed to evaluate interacting peers' trustworthiness in a specific context. It involves estimating confidence decline, gathering and weighting trust parameters, and calculating the cumulative trust value between nodes. Trust standards can rely on practical contextual resources to determine whether a service provider is trustworthy and delivers effective service. The simulation results suggest that the proposed model outperforms similar models in terms of security, routing and efficiency; its performance is further assessed in terms of derived utility, trust precision, convergence, and longevity.
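The core ingredients the abstract names, confidence decline over time plus weighted aggregation of ratings into a cumulative trust value, can be illustrated with a minimal sketch. This is not the paper's actual model; the function name, the exponential half-life decay, and the `(timestamp, rating, weight)` history format are all illustrative assumptions.

```python
import math

def decayed_trust(history, now, half_life=3600.0):
    """Aggregate a peer's cumulative trust from timestamped ratings.

    Illustrative sketch only (not the model proposed in the paper):
    each entry in `history` is (timestamp, rating in [0, 1], rater weight),
    and older evidence is discounted by exponential decay so that
    confidence declines over time.
    """
    num = den = 0.0
    for t, rating, weight in history:
        decay = math.exp(-math.log(2) * (now - t) / half_life)  # halves every half_life
        num += decay * weight * rating
        den += decay * weight
    return num / den if den else 0.5  # neutral prior for an unknown peer
```

With a half-life of 1000 s, a perfect rating from 1000 s ago counts half as much as a fresh zero rating, pulling the cumulative value to 1/3 rather than the unweighted mean of 0.5, which is the behaviour a decline-aware model needs.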
Charlie and the CryptoFactory: Towards Secure and Trusted Manufacturing Environments
The modernisation stemming from Industry 4.0 has begun populating the manufacturing sector with networked devices, complex sensors, and a significant proportion of physical actuation components. However, new capabilities in networked cyber-physical systems demand more complex infrastructure and algorithms, and often introduce new security flaws and operational risks that dramatically increase the attack surface. The interconnected nature of Industry 4.0-driven operations and the pace of digital transformation mean that cyber-attacks can have far more extensive effects than ever before. The core ideas of this paper are therefore driven by the observation that cyber security is one of the key enablers of Industry 4.0. With this in mind, we propose CryptoFactory, a forward-looking, layered architecture that can serve as a starting point for building secure and privacy-preserving smart factories. CryptoFactory aims to change the security outlook in smart manufacturing by discussing a set of fundamental requirements and functionality that modern factories should support in order to resist both internal and external attacks. To this end, CryptoFactory first focuses on how to build trust relationships between the hardware devices in the factory. We then examine how several cryptographic approaches can allow IoT devices to securely collect, store and share their data, and touch upon the emerging topic of secure and privacy-preserving communication and collaboration between manufacturing environments and value chains. Finally, we look into the problem of performing privacy-preserving analytics by leveraging Trusted Execution Environments and the promising concept of Functional Encryption.
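One building block of the secure data collection the abstract describes is making sensor readings verifiable by a gateway. The sketch below shows only a generic HMAC-based integrity check, not CryptoFactory's actual protocol; the function names, the JSON canonicalisation, and the per-device shared key are illustrative assumptions.

```python
import hashlib
import hmac
import json

def seal_reading(device_key: bytes, reading: dict) -> dict:
    """Attach an HMAC-SHA256 tag to a sensor reading so a factory gateway
    can verify it came from a device holding device_key.

    Integrity/authenticity sketch only; it does not encrypt the reading
    and is not the scheme proposed in the paper.
    """
    payload = json.dumps(reading, sort_keys=True).encode()  # canonical bytes
    tag = hmac.new(device_key, payload, hashlib.sha256).hexdigest()
    return {"payload": reading, "tag": tag}

def verify_reading(device_key: bytes, sealed: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(sealed["payload"], sort_keys=True).encode()
    expected = hmac.new(device_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["tag"])
```

A tampered payload or a different key makes verification fail, which is the property that lets a gateway reject readings injected by an external attacker; confidentiality and key distribution would need the additional mechanisms the paper discusses.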
Sensing as a Service Model for Smart Cities Supported by Internet of Things
The world population is growing at a rapid pace. Towns and cities are
accommodating half of the world's population thereby creating tremendous
pressure on every aspect of urban living. Cities are known to have large
concentration of resources and facilities. Such environments attract people
from rural areas. However, this unprecedented influx has become an
overwhelming issue for city governance and politics. The enormous pressure
towards efficient city management has triggered various Smart City initiatives
by both government and private sector businesses to invest in ICT to find
sustainable solutions to the growing issues. The Internet of Things (IoT) has
also gained significant attention over the past decade. The IoT envisions
connecting billions of sensors to the Internet and using them for
efficient and effective resource management in Smart Cities. Today,
infrastructure, platforms, and software applications are offered as services
using cloud technologies. In this paper, we explore the concept of sensing as a
service and how it fits with the Internet of Things. Our objective is to
investigate the sensing-as-a-service model from technological,
economic, and social perspectives and to identify the major open challenges and
issues.
Comment: Transactions on Emerging Telecommunications Technologies, 2014
(accepted for publication).