408 research outputs found
Security risk modeling in smart grid critical infrastructures in the era of big data and artificial intelligence
Smart grids (SGs) emerged in response to the need to modernize the electricity grid. Current security tools are nearly perfect at identifying and preventing known attacks in the smart grid, but unfortunately they do not meet the requirements of advanced cybersecurity. Adequate protection against cyber threats requires a whole set of processes and tools. A more flexible mechanism is therefore needed to examine data sets holistically and detect otherwise unknown threats. This is possible with modern big data analytics based on deep learning, machine learning, and artificial intelligence. Machine learning, which can rely on adaptive baseline behavior models, effectively detects new, unknown attacks. Combining known and unknown data sets with predictive analytics and machine intelligence will decisively change the security landscape. This paper identifies the trends, problems, and challenges of cybersecurity in smart grid critical infrastructures in the era of big data and artificial intelligence. We present an overview of the SG with its architectures and functionalities and show how technology has shaped the modern electricity grid. A qualitative risk assessment method is presented. The most significant contributions to the reliability, safety, and efficiency of the electrical network are described. We examine the grid's architectural levels and propose suitable security countermeasures. Finally, cybersecurity risk assessment methods for the smart grid's supervisory control and data acquisition (SCADA) systems are presented.
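The qualitative risk assessment the abstract mentions can be illustrated with a generic likelihood-times-impact rating scheme. This is a minimal sketch only: the threat names, rating scales, and thresholds below are illustrative assumptions, not details taken from the paper.

```python
# Illustrative qualitative risk matrix for smart grid threats.
# Threat names and ratings are hypothetical examples, not from the paper.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_rating(likelihood: str, impact: str) -> str:
    """Combine qualitative likelihood and impact into a single risk rating."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

threats = [
    ("false data injection", "medium", "high"),
    ("SCADA protocol spoofing", "low", "high"),
    ("phishing of grid operators", "high", "medium"),
]

for name, likelihood, impact in threats:
    print(f"{name}: {risk_rating(likelihood, impact)} risk")
```

A qualitative matrix like this trades precision for auditability: each rating can be justified by operators without quantitative incident data, which is often unavailable for critical infrastructure.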
A Survey on Layer-Wise Security Attacks in IoT: Attacks, Countermeasures, and Open-Issues
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). Security is a mandatory requirement in any network where sensitive data must be transferred safely in the required direction. Wireless sensor networks (WSNs) are networks formed in hostile areas for different applications. Whatever the application, a WSN must gather a large amount of sensitive data and send it to an authorized body, generally a sink. WSNs have been integrated with the Internet of Things (IoT) via internet access in sensor nodes alongside internet-connected devices. The data gathered by IoT devices are enormous and are eventually collected by the WSN over the Internet. Due to several resource constraints, it is challenging to design a secure sensor network, and a secure IoT requires a secure WSN. Most traditional security techniques do not work well for WSNs. The merger of IoT and WSN has opened new challenges in designing a secure network. In this paper, we discuss the challenges of creating a secure WSN. This research reviews the layer-wise security protocols for WSN and IoT in the literature and addresses the open issues and challenges of securing both. It also pinpoints new research opportunities in the security of WSN and IoT. The survey culminates in an in-depth analysis of network layer attacks. Finally, various attacks on the network are simulated using Cooja, the simulator of ContikiOS.
Preliminaries of orthogonal layered defence using functional and assurance controls in industrial control systems
Industrial Control Systems (ICSs) are responsible for the automation of different processes and the overall control of systems that include highly sensitive potential targets such as nuclear facilities, energy-distribution, water-supply, and mass-transit systems. The increased complexity and rapid evolution of their threat landscape, together with the fact that these systems form part of the Critical National Infrastructure (CNI), make them an emerging domain of conflict and terrorist attacks, and a playground for cyber exploitation. Existing layered-defence approaches are increasingly criticised for their inability to adequately protect against resourceful and persistent adversaries. It is therefore essential that emerging techniques, such as orthogonality, be combined with existing security strategies to leverage defence advantages against adaptive and often asymmetrical attack vectors. The concept of orthogonality is relatively new and unexplored in an ICS environment; it consists of having an assurance control as well as a functional control at each layer. Our work seeks to partially articulate a framework in which multiple functional and assurance controls are introduced at each layer of the ICS architectural design to further enhance security while maintaining critical real-time transfer of command and control traffic.
A Survey of Enabling Technologies for Smart Communities
In 2016, the Japanese Government publicized an initiative and a call to action for the implementation of a Super Smart Society, announced as Society 5.0. The stated goal of Society 5.0 is to meet the various needs of the members of society through the provisioning of goods and services to those who require them, when they are required and in the amount required, thus enabling citizens to live an active and comfortable life. In spite of its genuine appeal, details of a feasible path to Society 5.0 are conspicuously missing. The first main goal of this survey is to suggest such an implementation path. Specifically, we define a Smart Community as a human-centric entity where technology is used to equip the citizenry with information and services that they can use to inform their decisions. The arbiter of this ecosystem of services is a Marketplace of Services that will reward services aligned with the wants and needs of the citizens, while discouraging the proliferation of those that are not. In the limit, the Smart Community we define will morph into Society 5.0. At that point, the Marketplace of Services will become a platform for the co-creation of services through close cooperation between the citizens and their government. The second objective and contribution of this survey is to review known technologies that, in our opinion, will play a significant role in the transition to Society 5.0. These technologies are surveyed in chronological order, as newer technologies often extend old technologies while avoiding their limitations.
Corporate influence and the academic computer science discipline. [4: CMU]
Prosopographical work on the four major centers for computer research in the United States has now been conducted, resulting in big questions about the independence of so-called computer science.
Models versus Datasets: Reducing Bias through Building a Comprehensive IDS Benchmark
Today, deep learning approaches are widely used to build Intrusion Detection Systems (IDSs) for securing IoT environments. However, the hidden and complex nature of these models raises various concerns, such as trusting the model output and understanding why the model made certain decisions. Researchers generally publish their proposed model's settings and performance results based on a specific dataset and classification model, but do not report the model's output and findings. Similarly, many researchers propose an IDS solution by focusing on only a single benchmark dataset and classifier. Such solutions are prone to generating inaccurate and biased results. This paper overcomes these limitations of previous work by analyzing various benchmark datasets and various individual and hybrid deep learning classifiers to find an IDS solution for IoT that is efficient, lightweight, and comprehensive in detecting network anomalies. We also show the models' localized predictions and analyze the top contributing features impacting the global performance of deep learning models. This paper aims to extract aggregate knowledge from various datasets and classifiers and to analyze the commonalities, in order to avoid possible bias in the results and increase the trust and transparency of deep learning models. We believe our findings will help future researchers build a comprehensive IDS based on well-performing classifiers, utilizing the aggregated knowledge and a minimum set of significantly contributing features.
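The cross-dataset aggregation idea can be sketched generically: score every classifier on every benchmark and rank by mean performance, so no single dataset dominates the conclusion. The classifier names, dataset names, and F1 scores below are hypothetical placeholders, not results reported by the paper.

```python
# Sketch of cross-dataset aggregation to reduce single-benchmark bias.
# All names and scores are hypothetical, for illustration only.

from statistics import mean

# f1[classifier][dataset] -> hypothetical F1 score on that benchmark
f1 = {
    "CNN":      {"NSL-KDD": 0.91, "CICIDS2017": 0.88, "BoT-IoT": 0.93},
    "LSTM":     {"NSL-KDD": 0.89, "CICIDS2017": 0.92, "BoT-IoT": 0.90},
    "CNN+LSTM": {"NSL-KDD": 0.93, "CICIDS2017": 0.94, "BoT-IoT": 0.95},
}

def rank_by_mean(scores: dict) -> list:
    """Rank classifiers by their mean score across all datasets."""
    return sorted(scores, key=lambda c: mean(scores[c].values()), reverse=True)

print(rank_by_mean(f1))  # best-generalizing classifier first
```

Ranking by the mean across benchmarks is the simplest aggregation; a real study would also report per-dataset variance, since a classifier that wins on average may still fail badly on one traffic profile.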
PhishReplicant: A Language Model-based Approach to Detect Generated Squatting Domain Names
Domain squatting is a technique used by attackers to create domain names for
phishing sites. In recent phishing attempts, we have observed many domain names
that use multiple techniques to evade existing methods for detecting domain squatting.
These domain names, which we call generated squatting domains (GSDs), are quite
different in appearance from legitimate domain names and do not contain brand
names, making them difficult to associate with phishing. In this paper, we
propose a system called PhishReplicant that detects GSDs by focusing on the
linguistic similarity of domain names. We analyzed newly registered and
observed domain names extracted from certificate transparency logs, passive
DNS, and DNS zone files. We detected 3,498 domain names acquired by attackers
in a four-week experiment, of which 2,821 were used for phishing sites within a
month of detection. We also confirmed that our proposed system outperformed
existing systems in both detection accuracy and number of domain names
detected. As an in-depth analysis, we examined 205k GSDs collected over 150
days and found that phishing using GSDs was distributed globally. However,
attackers intensively targeted brands in specific regions and industries. By
analyzing GSDs in real time, we can block phishing sites before or immediately
after they appear.
Comment: Accepted at ACSAC 202
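The core idea of flagging new domains by their linguistic similarity to known generated squatting domains (GSDs) can be sketched as follows. PhishReplicant itself is language-model-based; this sketch substitutes a much simpler proxy, character-trigram Jaccard similarity, and all domain names in it are made up.

```python
# Minimal sketch: flag a candidate domain if it is linguistically close
# to any known generated squatting domain (GSD). Character-trigram
# Jaccard similarity stands in for the paper's language-model approach.
# All domain names below are invented examples.

def trigrams(domain: str) -> set:
    d = domain.lower().rstrip(".")
    return {d[i:i + 3] for i in range(len(d) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two domains' character trigrams."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def is_suspicious(candidate: str, known_gsds: list, threshold: float = 0.5) -> bool:
    return any(similarity(candidate, g) >= threshold for g in known_gsds)

known = ["secure-login-verify.example", "login-verify-update.example"]
print(is_suspicious("secure-login-verify-now.example", known))  # True
print(is_suspicious("wikipedia.org", known))                    # False
```

A learned embedding generalizes far better than raw n-gram overlap (it can match GSDs that share style rather than substrings), which is presumably why the paper uses a language model; the pipeline shape, however, is the same: compare each newly observed domain against a corpus of known bad names and block above a threshold.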
Isolation Without Taxation: Near-Zero-Cost Transitions for WebAssembly and SFI
Software sandboxing or software-based fault isolation (SFI) is a lightweight
approach to building secure systems out of untrusted components. Mozilla, for
example, uses SFI to harden the Firefox browser by sandboxing third-party
libraries, and companies like Fastly and Cloudflare use SFI to safely co-locate
untrusted tenants on their edge clouds. While there have been significant
efforts to optimize and verify SFI enforcement, context switching in SFI
systems remains largely unexplored: almost all SFI systems use
heavyweight transitions that are not only error-prone but incur
significant performance overhead from saving, clearing, and restoring registers
when context switching. We identify a set of zero-cost conditions that
characterize when sandboxed code has sufficient structure to guarantee
security via lightweight zero-cost transitions (simple function calls).
We modify the Lucet Wasm compiler and its runtime to use zero-cost transitions,
eliminating the undue performance tax on systems that rely on Lucet for
sandboxing (e.g., we speed up image and font rendering in Firefox by up to
29.7% and 10%, respectively). To remove the Lucet compiler and its correct
implementation of the Wasm specification from the trusted computing base, we
(1) develop a static binary verifier, VeriZero, which (in seconds)
checks that binaries produced by Lucet satisfy our zero-cost conditions, and
(2) prove the soundness of VeriZero by developing a logical relation that
captures when a compiled Wasm function is semantically well-behaved with
respect to our zero-cost conditions. Finally, we show that our model is useful
beyond Wasm by describing a new, purpose-built SFI system, SegmentZero32, that
uses x86 segmentation and LLVM with mostly off-the-shelf passes to enforce our
zero-cost conditions; our prototype performs on par with the state-of-the-art
Native Client SFI system.
- …