12 research outputs found
Analytical attack modeling and security assessment based on the common vulnerability scoring system
The paper analyzes an approach to analytical attack modeling and security assessment based on the Common Vulnerability Scoring System (CVSS) format, considering the modifications introduced in the new version of the CVSS specification. The common approach to analytical attack modeling and security assessment was suggested by the authors earlier. The paper outlines the disadvantages of the previous CVSS version that negatively influenced the results of attack modeling and security assessment. Differences between the new and previous CVSS versions are analyzed. Modifications of the approach to analytical attack modeling and security assessment that follow from the CVSS modifications are suggested. Advantages of the modified approach are described. A case study that illustrates the enhanced approach is provided
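The abstract does not reproduce the scoring formulas, but the base-score computation such assessments build on is fixed by the CVSS specification. A minimal sketch of the CVSS v3.1 base score for Scope: Unchanged vulnerabilities, with metric values taken from the official specification:

```python
# CVSS v3.1 base metric values (subset), from the official specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required (Scope: Unchanged)
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # Confidentiality/Integrity/Availability

def roundup(x: float) -> float:
    """Round up to one decimal place, as defined in CVSS v3.1 Appendix A."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a Scope: Unchanged vulnerability."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8 (Critical)
```

The Scope: Changed case uses different coefficients and PR values and is omitted here for brevity.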
Automated asset identification and criticality assessment for the security analysis of information systems
The goal of the research is to develop a technique for automated identification of information system assets and comparative assessment of their criticality level for the subsequent security assessment of the analyzed target infrastructure. Assets here are understood as all information technology objects of the target infrastructure. The size, heterogeneity, complexity of interconnections, distribution, and dynamism of modern information systems complicate determining the target infrastructure and the criticality of its information technology assets for its correct functioning. Automated and adaptive determination of the composition of information technology assets and the connections between them, based on distinguishing static and dynamic objects of an initially undefined infrastructure, is a rather complex task. It is proposed to solve it by constructing an up-to-date dynamic model of the relations between objects of the target infrastructure using the developed technique, which implements an approach based on the correlation of events occurring in the system. The developed technique is based on the statistical analysis of empirical data on events in the system. The technique makes it possible to identify the main types of infrastructure objects, their characteristics, and a hierarchy based on the frequency of object usage and, consequently, reflecting the objects' relative criticality for system functioning. For this purpose, the paper introduces indicators characterizing the belonging of properties to one type and the joint use of properties, as well as dynamism indicators characterizing the variability of properties relative to each other. The resulting model is used for the comparative assessment of the criticality level of system object types. The paper describes the input data and models used, as well as the technique for determining the types and comparing the criticality of system assets. Experiments are presented that demonstrate the operability of the technique on the example of analyzing the security logs of the Windows operating system
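As a rough illustration of the frequency-based criticality ranking described above (the event records and object names here are invented; the paper works with real Windows security logs):

```python
from collections import Counter

# Hypothetical event records, shaped like fields extracted from a Windows
# security log: each event references the object (e.g. a process) involved.
events = [
    {"object": "lsass.exe"}, {"object": "svchost.exe"}, {"object": "lsass.exe"},
    {"object": "notepad.exe"}, {"object": "svchost.exe"}, {"object": "lsass.exe"},
]

def rank_by_usage(events):
    """Rank objects by how often they occur in the event stream; the premise
    is that usage frequency reflects relative criticality."""
    counts = Counter(e["object"] for e in events)
    total = sum(counts.values())
    # Relative criticality expressed as a share of all observed events.
    return {obj: n / total for obj, n in counts.most_common()}

print(rank_by_usage(events))
```

In the full technique this ranking is computed per object type rather than per individual object, after the types themselves are derived from co-occurring event properties.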
Federated Learning for Intrusion Detection in the Critical Infrastructures: Vertically Partitioned Data Use Case
One of the challenges in Internet of Things systems is the security of critical data, for example, data used for intrusion detection. The paper researches the construction of an intrusion detection system that ensures the confidentiality of critical data at a given level of intrusion detection accuracy. For this goal, federated learning is used to train an intrusion detection model. Federated learning is a computational model for distributed machine learning that allows different collaborating entities to train one global model without sharing data. This paper considers the case when entities have data that differ in attributes. The authors believe that this is a common situation for critical systems constructed using Internet of Things (IoT) technology, when industrial objects are monitored by different sets of sensors. To evaluate the applicability of federated learning to this case, the authors developed an approach and an architecture of an intrusion detection system for vertically partitioned data that follow the principles of federated learning, and conducted a series of experiments. To model vertically partitioned data, the authors used the Secure Water Treatment (SWaT) data set that describes the functioning of a water treatment facility. The conducted experiments demonstrate that the accuracy of the intrusion detection model trained using federated learning is comparable to the accuracy of the intrusion detection model trained using centralized machine learning. However, the computational efficiency of the learning and inference process is currently extremely low. This is explained by the application of homomorphic encryption for protecting input data from different data owners or data sources. This defines the necessity to elaborate techniques for generating attributes that could model horizontally partitioned data even for the cases when the collaborating entities share datasets that differ in their attributes
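A plaintext sketch of training over vertically partitioned data may clarify the setting: two parties hold different feature columns of the same samples and exchange only partial scores and residuals, never raw features. This omits the homomorphic encryption the paper applies to the exchanged values, and all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two parties hold different feature columns of the same 200 samples
# (vertical partitioning); labels live at the coordinator.
n = 200
X_a, X_b = rng.normal(size=(n, 3)), rng.normal(size=(n, 2))
y = (X_a[:, 0] + X_b[:, 1] > 0).astype(float)

w_a, w_b, lr = np.zeros(3), np.zeros(2), 0.5

for _ in range(300):
    # Each party computes a partial score on its own features only.
    z = X_a @ w_a + X_b @ w_b          # coordinator sums the partial scores
    p = 1 / (1 + np.exp(-z))           # logistic prediction
    err = p - y                        # residual is sent back to each party
    w_a -= lr * X_a.T @ err / n        # local gradient steps: raw features
    w_b -= lr * X_b.T @ err / n        # never leave their owners

accuracy = ((p > 0.5) == y).mean()
print(accuracy)
```

In the encrypted variant, the partial scores and residuals would be exchanged under homomorphic encryption, which is what makes training and inference costly.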
Attacker Behaviour Forecasting Using Methods of Intelligent Data Analysis: A Comparative Review and Prospects
Early detection of security incidents and correct forecasting of attack development are the basis for an efficient and timely response to cyber threats. The development of an attack depends on the future steps available to the attackers, their goals, and their motivation, that is, on the attacker “profile” that defines the malefactor's behaviour in the system. Usually, the “attacker profile” is a set of attacker attributes, both inner, such as motives and skills, and external, such as existing financial support and the tools used. The definition of the attacker's profile allows determining the type of the malefactor and the complexity of the countermeasures, and may significantly simplify the attacker attribution process when investigating security incidents. The goal of the paper is to analyze existing techniques for modelling attacker behaviour, specifications of the attacker's profile, and their application to forecasting the attacker's future steps. The implemented analysis allowed outlining the main advantages and limitations of the approaches to attack forecasting and attacker profile construction, as well as existing challenges and prospects in the area. An approach to attack forecasting implementation is suggested that specifies further research steps and is the basis for the development of an attacker behaviour forecasting technique
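The attribute decomposition described above can be captured by a simple data structure (a hypothetical sketch; the paper surveys richer profile specifications):

```python
from dataclasses import dataclass, field

@dataclass
class AttackerProfile:
    """Attacker profile as a set of inner and external attributes,
    following the decomposition given in the abstract."""
    motives: list = field(default_factory=list)   # inner attributes
    skill_level: str = "unknown"                  # e.g. low / medium / high
    financial_support: str = "unknown"            # external attributes
    tools: list = field(default_factory=list)

profile = AttackerProfile(
    motives=["espionage"], skill_level="high",
    financial_support="state-sponsored", tools=["custom malware"],
)
print(profile.skill_level)
```

A forecasting technique would condition the predicted next attack steps on such a profile, e.g. pruning attack paths that exceed the attacker's skill level or tooling.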
Determination of System Weaknesses Based on the Analysis of Vulnerability Indexes and the Source Code of Exploits
Currently, the problem of monitoring the security of information systems is highly relevant. One of the important security monitoring tasks is to automate the process of determining system weaknesses for their further elimination. The paper considers techniques for the analysis of vulnerability indexes and exploit source code, as well as their subsequent classification. The suggested approach uses open security sources and incorporates two techniques, depending on the available security data. The first technique is based on the analysis of publicly available vulnerability indexes of the Common Vulnerability Scoring System for classifying vulnerabilities by weaknesses. The second one complements the first in the case when there are exploits but no associated vulnerabilities, and therefore the indexes for classification are absent. It is based on the analysis of the exploit source code for features, i.e. indexes, using graph models. The extracted indexes are then used for weakness determination using the first technique. The paper provides experiments demonstrating the effectiveness and potential of the developed techniques. The obtained results and the methods for their enhancement are discussed
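The abstract does not specify the classifier, but the first technique amounts to mapping a vector of CVSS metric values to a weakness class. A toy nearest-neighbour sketch with invented training vectors and labels:

```python
# Hypothetical training pairs: CVSS base-metric value vectors
# (AV, AC, PR, C, I, A) labelled with a CWE weakness class.
# Both the vectors and the labels are invented for illustration.
train = [
    ((0.85, 0.77, 0.85, 0.56, 0.56, 0.56), "CWE-89"),   # SQL-injection-like
    ((0.85, 0.77, 0.62, 0.56, 0.22, 0.00), "CWE-79"),   # XSS-like
    ((0.55, 0.44, 0.27, 0.56, 0.56, 0.56), "CWE-416"),  # use-after-free-like
]

def classify(vector):
    """1-nearest-neighbour classification of a CVSS metric vector by weakness."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Return the label of the closest training vector.
    return min(train, key=lambda item: dist(item[0], vector))[1]

print(classify((0.85, 0.77, 0.85, 0.56, 0.56, 0.22)))
```

The second technique would feed this same classifier with index vectors extracted from exploit source code via graph models, when no CVSS record is available.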
Construction and Analysis of Integral User-Oriented Trustworthiness Metrics
Trustworthiness metrics help users to understand an information system's or a device's security, safety, privacy, resilience, and reliability level. These metrics have different types and natures. The challenge consists in integrating these metrics into one clear, scalable, sensitive, and reasonable metric representing the overall trustworthiness level, useful for understanding whether users can trust the system or for comparing devices and information systems. In this research, the authors propose a novel algorithm for calculating an integral trustworthiness risk score that is scalable to any number of metrics, considers their criticality, and does not perform averaging in the case when all metrics are of equal importance. The obtained trustworthiness risk score can be further transformed into a trustworthiness level. The authors analyze the sensitivity of the resulting integral metric and demonstrate its advantages in a series of experiments
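The paper's exact aggregation algorithm is not reproduced in the abstract; the sketch below only illustrates one way an integral score can weight metrics by criticality yet avoid plain averaging: a complementary product that is driven up by any single high-risk metric even when all weights are equal.

```python
def integral_risk(risks, weights=None):
    """Combine per-metric risk scores in [0, 1] into one integral score.
    Unlike the mean, the complementary product stays high when any single
    metric is high-risk, even with equal weights. (Illustrative aggregation;
    not the exact algorithm from the paper.)"""
    if weights is None:
        weights = [1.0] * len(risks)
    score = 1.0
    for r, w in zip(risks, weights):
        score *= (1.0 - w * r)       # each weight scales its metric's contribution
    return 1.0 - score

# One critical metric keeps the integral score high although the mean is 0.3.
print(round(integral_risk([0.9, 0.1, 0.1, 0.1]), 3))
```

This combiner is trivially scalable to any number of metrics, which is the property the abstract emphasizes.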
Automation of Asset Inventory for Cyber Security: Investigation of Event Correlation-Based Technique
Asset inventory is one of the essential steps in cyber security analysis and management. It is required for security risk identification. Current information systems are large-scale, heterogeneous, and dynamic. This complicates manual inventory of the assets as it requires a lot of time and human resources. At the same time, an asset inventory should be continuously repeated because continuous modifications of system objects and topology lead to changes in the cyber security situation. Thus, a technique for automated identification of system assets and connections between them is required. The paper proposes a technique for automated inventory of assets and connections between them in different organizations. The developed technique is constructed based on event correlation methods, namely linking the system events to each other. The essence of the technique consists of the investigation of event characteristics and identifying the characteristics that arise solely together. This allows determining system assets via assigning event characteristics to specific asset types. The security risks depend on the criticality of the assets; thus, a discussion of automated calculation of the outlined assets' criticality is provided. Outlined system objects and topology can be further used for restoring possible attack paths and security assessment. The applicability of the developed technique to reveal object properties and types is demonstrated in the experiments
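The core step, identifying event characteristics whose values arise solely together, can be sketched as a functional-dependency check over parsed events (the event fields and values below are invented):

```python
from collections import defaultdict

# Hypothetical parsed events: each event is a set of characteristic-value pairs.
events = [
    {"process": "svchost.exe", "user": "SYSTEM", "port": 135},
    {"process": "svchost.exe", "user": "SYSTEM", "port": 445},
    {"process": "chrome.exe", "user": "alice", "port": 443},
    {"process": "chrome.exe", "user": "alice", "port": 443},
]

def co_occurring(events, a, b):
    """True if values of characteristics a and b arise solely together,
    i.e. each value of a is always observed with the same value of b."""
    seen = defaultdict(set)
    for e in events:
        seen[e[a]].add(e[b])
    return all(len(values) == 1 for values in seen.values())

print(co_occurring(events, "process", "user"))  # process determines user
print(co_occurring(events, "process", "port"))  # svchost.exe uses several ports
```

Characteristics that always co-occur can then be grouped into one asset, and the group assigned to an asset type.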
Dynamic Security Assessment of Computer Networks in SIEM Systems
The paper suggests an approach to the security assessment of computer networks. The approach is based on attack graphs and is intended for Security Information and Event Management (SIEM) systems. The key feature of the approach consists in the application of a multilevel taxonomy of security metrics. The taxonomy allows defining the system profile according to the input data used for metrics calculation and the techniques of security metrics calculation. This allows specifying the security assessment in near real time, identifying previous and future attacker steps, and identifying the attackers' goals and characteristics. A security assessment system prototype implementing the suggested approach is presented. Analysis of its operation is conducted for several attack scenarios
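The abstract does not fix a concrete metric, but one of the simplest attack-graph metrics such a taxonomy includes is the minimal total difficulty of reaching a target asset, computable with Dijkstra's algorithm over a weighted attack graph (the graph below is a hypothetical example):

```python
import heapq

# Hypothetical attack graph: nodes are attacker states, edges are exploit
# steps weighted by difficulty (lower = easier for the attacker).
graph = {
    "internet":    [("web_server", 2), ("mail_server", 4)],
    "web_server":  [("db_server", 3)],
    "mail_server": [("db_server", 2)],
    "db_server":   [],
}

def easiest_attack_path_cost(graph, start, goal):
    """Dijkstra over the attack graph: minimal total difficulty of reaching
    the goal asset; one of the simplest attack-graph security metrics."""
    dist, queue = {start: 0}, [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, w in graph[node]:
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(queue, (d + w, nxt))
    return float("inf")                   # goal unreachable

print(easiest_attack_path_cost(graph, "internet", "db_server"))
```

In a SIEM setting, observed events would mark some states as already reached, and the metric would be recomputed from those states to assess remaining attacker effort in near real time.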
Synthesis and Analysis of the Fixed-Point Hodgkin–Huxley Neuron Model
In many tasks related to the simulation of realistic neurons and neural networks, the performance of desktop computers is nowhere near sufficient. To overcome this obstacle, researchers are developing FPGA-based simulators that naturally use fixed-point arithmetic. In these implementations, little attention is usually paid to the choice of the numerical method for the discretization of the continuous neuron model. In our study, the implementation accuracy of a neuron described by simplified Hodgkin–Huxley equations in fixed-point arithmetic is investigated. The principle of constructing a fixed-point neuron model with various numerical methods is described. Interspike diagrams and refractory period analysis are used for the experimental study of the synthesized discrete maps of the simplified Hodgkin–Huxley neuron model. We show that the explicit midpoint method is much better suited to simulating neuron dynamics on an FPGA than the explicit Euler method, which is in common use
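The comparison between the two numerical methods can be reproduced in miniature. The sketch below integrates a simple test ODE (not the Hodgkin-Huxley system itself) in 16-bit-fraction fixed-point arithmetic and shows that the explicit midpoint method incurs a smaller error than explicit Euler at the same step size:

```python
import math

# Fixed-point arithmetic with a 16-bit fractional part, as on an FPGA.
FRAC = 16
ONE = 1 << FRAC

def to_fx(x): return int(round(x * ONE))
def to_float(x): return x / ONE
def mul(a, b): return (a * b) >> FRAC       # fixed-point multiplication

# A simple test ODE dy/dt = -y stands in for the Hodgkin-Huxley right-hand side.
def f(y): return -y

def euler_step(y, h):
    return y + mul(h, f(y))                 # slope taken at the start point

def midpoint_step(y, h):
    half = y + mul(h >> 1, f(y))            # advance half a step first
    return y + mul(h, f(half))              # use the slope at the midpoint

h = to_fx(0.1)
y_euler = y_mid = to_fx(1.0)
for _ in range(10):                         # integrate from t = 0 to t = 1
    y_euler = euler_step(y_euler, h)
    y_mid = midpoint_step(y_mid, h)

exact = math.exp(-1.0)
err_euler = abs(to_float(y_euler) - exact)
err_mid = abs(to_float(y_mid) - exact)
print(err_mid < err_euler)                  # midpoint is more accurate
```

The midpoint method is second-order accurate versus first-order for Euler, which is why it tracks the continuous dynamics better at the same step size and word length.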