10,586 research outputs found
Deep Learning-Based Dynamic Watermarking for Secure Signal Authentication in the Internet of Things
Securing the Internet of Things (IoT) is a necessary milestone toward
expediting the deployment of its applications and services. In particular, the
functionality of the IoT devices is extremely dependent on the reliability of
their message transmission. Cyber attacks such as data injection,
eavesdropping, and man-in-the-middle attacks pose serious security challenges.
Securing IoT devices against such attacks requires accounting for their
stringent computational power and need for low-latency operations. In this
paper, a novel deep learning method is proposed for dynamic watermarking of IoT
signals to detect cyber attacks. The proposed learning framework, based on a
long short-term memory (LSTM) structure, enables the IoT devices to extract a
set of stochastic features from their generated signal and dynamically
watermark these features into the signal. This method enables the IoT's cloud
center, which collects signals from the IoT devices, to effectively
authenticate the reliability of the signals. Furthermore, the proposed method
prevents complicated attack scenarios such as eavesdropping in which the cyber
attacker collects the data from the IoT devices and aims to break the
watermarking algorithm. Simulation results show that, with an attack detection
delay of under 1 second, messages can be transmitted from IoT devices with
almost 100% reliability.
Comment: 6 pages, 9 figures
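As a rough illustration of the scheme's flow (not the paper's algorithm), the sketch below uses simple rolling statistics in place of the LSTM feature extractor and hides a keyed tag of those features in the quantization parity of the signal itself; the key, quantization step, and window framing are all illustrative assumptions.

```python
import hashlib
import statistics

# Hypothetical sketch of dynamic watermarking for signal authentication.
# A simple statistics extractor stands in for the paper's LSTM; the key,
# quantization step, and framing below are illustrative assumptions.

KEY = b"shared-device-key"   # assumed pre-shared between device and cloud
STEP = 0.01                  # quantization step used to hide tag bits

def extract_features(samples):
    """Stochastic features of a signal segment (here: mean and stdev)."""
    return (round(statistics.mean(samples), 2),
            round(statistics.stdev(samples), 2))

def tag_bits(feats, n):
    """First n bits of a keyed hash of the extracted features."""
    digest = hashlib.sha256(KEY + repr(feats).encode()).digest()
    return [(byte >> i) & 1 for byte in digest for i in range(8)][:n]

def embed(window):
    """Device side: extract features from the first half of the window
    and hide the keyed tag in the quantization parity of the second."""
    half = len(window) // 2
    bits = tag_bits(extract_features(window[:half]), len(window) - half)
    marked = list(window[:half])
    for s, b in zip(window[half:], bits):
        q = round(s / STEP)
        if q % 2 != b:       # nudge the sample so its parity encodes the bit
            q += 1
        marked.append(q * STEP)
    return marked

def verify(window):
    """Cloud side: recompute the expected tag from the first half and
    compare it with the parities read from the second half."""
    half = len(window) // 2
    expected = tag_bits(extract_features(window[:half]), len(window) - half)
    read = [round(s / STEP) % 2 for s in window[half:]]
    return read == expected
```

Tampering with the watermarked half flips parities, while tampering with the feature-bearing half changes the expected tag, so either is flagged at the cloud; an eavesdropper who observes the signal but lacks KEY cannot forge a consistent tag.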
The THREAT-ARREST Cyber-Security Training Platform
Cyber security is always a main concern for critical infrastructures and nation-wide safety and sustainability. Thus, advanced cyber ranges and security training are becoming imperative for the involved organizations. This paper presents a cyber security training platform, called THREAT-ARREST. The various platform modules can analyze an organization's system, identify the most critical threats, and tailor a training program to its personnel needs. Then, different training programmes are created based on the trainee types (i.e. administrator, simple operator, etc.), providing several teaching procedures and accomplishing diverse learning goals. One of the main novelties of THREAT-ARREST is the modelling of these programmes along with the runtime monitoring, management, and evaluation operations. The platform is generic. Nevertheless, its applicability in a smart energy case study is detailed.
Learning-based attacks in cyber-physical systems
We introduce the problem of learning-based attacks in a simple abstraction of
cyber-physical systems---the case of a discrete-time, linear, time-invariant
plant that may be subject to an attack that overrides the sensor readings and
the controller actions. The attacker attempts to learn the dynamics of the
plant and subsequently override the controller's actuation signal, to destroy
the plant without being detected. The attacker can feed fictitious sensor
readings to the controller using its estimate of the plant dynamics and mimic
the legitimate plant operation. The controller, on the other hand, is
constantly on the lookout for an attack; once the controller detects an attack,
it immediately shuts the plant off. In the case of scalar plants, we derive an
upper bound on the attacker's deception probability for any measurable control
policy when the attacker uses an arbitrary learning algorithm to estimate the
system dynamics. We then derive lower bounds for the attacker's deception
probability for both scalar and vector plants by assuming a specific
authentication test that inspects the empirical variance of the system
disturbance. We also show how the controller can improve the security of the
system by superimposing a carefully crafted privacy-enhancing signal on top of
the "nominal control policy." Finally, for nonlinear scalar dynamics that
belong to the Reproducing Kernel Hilbert Space (RKHS), we investigate the
performance of attacks based on nonlinear Gaussian-process (GP) learning
algorithms.
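The scalar version of the variance-based authentication test can be sketched as follows; the plant parameters, the specific attack (damping the sensor readings), and the detection threshold are illustrative assumptions, not the paper's construction.

```python
import random
import statistics

# Sketch of a variance-based authentication test for a scalar plant
# x' = A*x + u + w with w ~ N(0, SIGMA**2). The attack model (damping
# the sensor reading) and the threshold are illustrative assumptions.

A, SIGMA = 0.9, 1.0
THRESH = 0.3   # allowed deviation of the empirical disturbance variance

def run_plant(steps, attack=False, seed=0):
    rng = random.Random(seed)
    x, reported, inputs = 0.0, [0.0], []
    for _ in range(steps):
        u = -A * reported[-1]             # controller acts on reported state
        x = A * x + u + rng.gauss(0.0, SIGMA)
        reported.append(0.5 * x if attack else x)  # attacker damps readings
        inputs.append(u)
    return reported, inputs

def detect(reported, inputs):
    """Reconstruct the disturbance from reported states and flag an
    attack when its empirical variance drifts from the nominal SIGMA**2."""
    resid = [reported[k + 1] - A * reported[k] - inputs[k]
             for k in range(len(inputs))]
    return abs(statistics.pvariance(resid) - SIGMA ** 2) > THRESH
```

With honest readings the residuals equal the true disturbance, so their empirical variance stays near SIGMA**2; under the damping attack the residual variance drops well below it, and the test fires within a few hundred samples.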
Usability and Trust in Information Systems
The need for people to protect themselves and their assets is as old as humankind. People's physical safety and their possessions have always been at risk from deliberate attack or accidental damage. The advance of information technology means that many individuals, as well as corporations, have an additional range of physical (equipment) and electronic (data) assets that are at risk. Furthermore, the increased number and types of interactions in cyberspace have enabled new forms of attack on people and their possessions. Consider grooming of minors in chat-rooms, or Nigerian email cons: minors were targeted by paedophiles before the creation of chat-rooms, and Nigerian criminals sent the same letters by physical mail or fax before there was email. But the technology has decreased the cost of many types of attacks, or the degree of risk for the attackers. At the same time, cyberspace is still new to many people, which means they do not understand risks, or recognise the signs of an attack, as readily as they might in the physical world. The IT industry has developed a plethora of security mechanisms, which could be used to mitigate risks or make attacks significantly more difficult. Currently, many people are either not aware of these mechanisms, or are unable or unwilling to use them. Security experts have taken to portraying people as "the weakest link" in their efforts to deploy effective security [e.g. Schneier, 2000]. However, recent research has revealed that at least some of the problem may be that security mechanisms are hard to use, or ineffective. The review summarises current research on the usability of security mechanisms, and discusses options for increasing their usability and effectiveness.