5 research outputs found
Detecting Security Leaks in Hybrid Systems with Information Flow Analysis
Information flow analysis is an effective way to check useful security properties, such as whether secret information can leak to adversaries. Despite being widely investigated in the realm of programming languages, information-flow-based security analysis has not been widely studied in the domain of cyber-physical systems (CPS). CPS pose interesting challenges to traditional type-based techniques, as they model mixed discrete-continuous behaviors and are usually expressed as compositions of state machines. In this paper, we propose a lightweight static analysis methodology for checking information security properties of CPS models. We introduce a set of security rules for hybrid automata that characterizes the property of non-interference. Based on those rules, we propose an algorithm that generates security constraints between the sub-components of hybrid automata and then transforms these constraints into a directed dependency graph to search for non-interference violations. The proposed algorithm can be applied directly to parallel compositions of automata without resorting to model-flattening techniques. Our static checker works on hybrid systems modeled in Simulink/Stateflow format and decides whether or not a model satisfies non-interference given a user-provided security annotation for each variable. Moreover, our approach can also infer the security labels of variables, allowing a designer to verify the correctness of partial security annotations. We demonstrate the potential benefits of the proposed methodology on two case studies.
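The core idea of the search step can be sketched in a few lines: label each variable High (secret) or Low (public), collect dependency edges, and flag any Low variable reachable from a High one. This is an illustrative sketch only, not the paper's checker; the variable names and flow edges are hypothetical.

```python
# Illustrative sketch (not the paper's tool): detecting non-interference
# violations by searching a variable-dependency graph for High -> Low flows.
from collections import defaultdict

def find_violations(flows, labels):
    """flows: list of (src, dst) dependency edges; labels: var -> 'High'/'Low'.
    Returns the Low-labeled variables reachable from any High-labeled variable."""
    graph = defaultdict(list)
    for src, dst in flows:
        graph[src].append(dst)
    violations = set()
    for var, label in labels.items():
        if label != "High":
            continue
        # Depth-first search from each High-labeled variable.
        stack, seen = [var], {var}
        while stack:
            node = stack.pop()
            for nxt in graph[node]:
                if nxt in seen:
                    continue
                seen.add(nxt)
                if labels.get(nxt) == "Low":
                    violations.add(nxt)
                stack.append(nxt)
    return violations

# Hypothetical automaton: secret 'key' feeds a guard 'g' that drives output 'out'.
flows = [("key", "g"), ("g", "out"), ("clk", "out")]
labels = {"key": "High", "g": "High", "out": "Low", "clk": "Low"}
print(find_violations(flows, labels))  # -> {'out'}
```

Working on the dependency graph directly is what lets the check run over a parallel composition of automata without flattening the product state space first.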
Understanding vulnerabilities in cyber physical production systems
The development of future manufacturing systems is characterized by flexibility, mass customization, intelligence, and context-based learning to produce smart products. These production systems are built from networked, cooperating objects called cyber-physical systems (CPSs). From the manufacturing perspective, the ability to communicate data and enable interaction among devices, manufacturing machinery, raw materials, working robots, humans, and the plant environment gives rise to the concept of cyber-physical production systems (CPPS). Human-robot collaboration is a technology area that will be an integral part of the future factory floor and the CPPS. With humans involved in automated industrial scenarios, practical safety issues are expected to arise in the connected environment: the large number of devices, sensors, and cloud services creates complex networks prone to IP conflicts, compromised nodes, and communication failures. All of this may lead to occupational safety issues on the factory floor in various ways and combinations. Overall, the system's physical vulnerability increases when the connected working space or its cyber-security is compromised. In this paper, the authors develop a risk assessment based on the system vulnerabilities of a CPPS built for a use-case requirement, and take a simulation-based approach: launching a cyber-attack and measuring its causal effects to identify implications for human worker safety.
Security is an Architectural Design Constraint
In the state-of-the-art design paradigm, time, space, and power efficiency are considered the primary design constraints. Quite often, this approach adversely impacts the security of the overall system, especially when security is adopted as a countermeasure only after some vulnerability is identified. In this position paper, we argue that security should also be considered an architectural design constraint, in addition to time, space, and power. We show that the efficiency objectives along the three design axes of time, space, and power are tightly coupled, and that security stands in direct conflict with them across all layers of architectural design. We support our case with a proof-by-evidence approach, referring to various works in the literature that explicitly demonstrate this persistent conflict between security and efficiency. Security therefore has to be treated as a design constraint from the very beginning. Additionally, we advocate a security-aware design flow, starting from the choice of cryptographic primitives, protocols, and system design.
Large Language Models for Programming Industrial Control Systems and Mitigating Real-World Software Vulnerabilities
This manuscript comprises two parts: automated code generation for Programmable Logic Controllers (PLCs) and vulnerability repair for Common Vulnerabilities & Exposures (CVEs), both with Large Language Models (LLMs). The application of LLMs to Industrial Control Systems (ICS) is a relatively unexplored area. State-of-the-art LLMs such as GPT-4 and Code Llama fail to produce valid programs for ICS operated by PLCs. As a result, there is abundant potential to incorporate LLMs into the PLC programming process to achieve end-to-end automation of common ICS tasks. We propose LLM4PLC, a user-guided iterative pipeline that leverages user feedback and external verification tools (grammar checkers, compilers, and SMV verifiers), as well as Parameter-Efficient Fine-Tuning and Prompt Engineering, to guide the LLM's generation. We run a complete test suite on GPT-3.5, GPT-4, Code Llama-7B, a fine-tuned Code Llama-7B model, Code Llama-34B, and a fine-tuned Code Llama-34B model. Ultimately, we demonstrate that the LLM4PLC pipeline improves the generation success rate from 47% to 72% and the Survey-of-Experts code quality from 2.25/10 to 7.75/10.

Software vulnerabilities continue to be ubiquitous, even in the era of AI-powered code assistants, advanced static analysis tools, and extensive testing frameworks. It has become apparent that we must not simply prevent these bugs but also eliminate them quickly and efficiently. Yet human code intervention is slow, costly, and can itself introduce further security vulnerabilities, especially in legacy codebases. The advent of highly capable LLMs has opened up the possibility of patching many software defects automatically. We propose LLM4CVE, an LLM-based iterative pipeline that robustly fixes vulnerable functions with high accuracy. We evaluate our pipeline with state-of-the-art LLMs, such as GPT-3.5, GPT-4o, Llama 3 8B, and Llama 3 70B, along with fine-tuned variants of selected models. We achieve an increase in ground-truth code similarity of 20% with Llama 3 70B.
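The shared skeleton of both pipelines, generate a candidate, run external verifiers, and feed the errors back into the prompt, can be sketched as follows. This is a minimal sketch in the spirit of LLM4PLC/LLM4CVE; `generate`, `verify`, and `max_iters` are hypothetical stand-ins, not the papers' actual APIs.

```python
# Minimal sketch of a user-guided generate-verify-repair loop.
# generate() and verify() are hypothetical callables standing in for an LLM
# and for external tools (grammar checker, compiler, SMV verifier, test suite).
def repair_loop(prompt, generate, verify, max_iters=5):
    """Repeatedly generate a candidate and feed verifier errors back
    into the prompt until verification passes or the budget runs out."""
    feedback = ""
    for _ in range(max_iters):
        candidate = generate(prompt + feedback)
        ok, errors = verify(candidate)
        if ok:
            return candidate
        feedback = "\nFix these verifier errors:\n" + errors
    return None  # budget exhausted without a verified candidate

# Toy stand-ins: the "model" only succeeds after it has seen verifier feedback.
def toy_generate(p):
    return "fixed" if "Fix" in p else "buggy"

def toy_verify(code):
    return (code == "fixed", "syntax error at line 3")

print(repair_loop("write a PLC program", toy_generate, toy_verify))  # -> fixed
```

The loop structure is what distinguishes this approach from one-shot generation: the verifier output, rather than a human, supplies the corrective signal on each iteration.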