    Self-Healing in Cyber–Physical Systems Using Machine Learning: A Critical Analysis of Theories and Tools

    The rapid advancement of networking, computing, sensing, and control systems has introduced a wide range of cyber threats, including those posed by new devices deployed as application scenarios evolve. With recent advancements in automobiles, medical devices, smart industrial systems, and other technologies, system failures resulting from external attacks or internal process malfunctions are increasingly common. Restoring the system’s stable state requires autonomous intervention through a self-healing process to maintain service quality. This paper therefore analyses the state of the art and identifies where self-healing using machine learning can be applied to cyber–physical systems to enhance security and prevent failures within the system. The paper describes three key components of self-healing functionality in computer systems: anomaly detection, fault alert, and fault auto-remediation. Their significance lies in the fact that self-healing functionality cannot be practical unless all three are considered together. Understanding the self-healing theories that form the guiding principles for implementing these functionalities, and their real-life implications, is crucial. There are strong indications that self-healing functionality in cyber–physical systems is an emerging area of research that holds great promise for the future of computing technology. It has the potential to provide seamless self-organising and self-restoration functionality to cyber–physical systems, leading to increased system security and an improved user experience. For instance, a functional self-healing system implemented on a power grid would react autonomously when a threat or fault occurs, restoring power to communities and preserving critical services after outages or defects without requiring human intervention. This paper presents the existing vulnerabilities, threats, and challenges, and critically analyses the current self-healing theories and methods that use machine learning for cyber–physical systems.
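    The abstract names three components of self-healing: anomaly detection, fault alert, and fault auto-remediation. The minimal Python sketch below shows how such a loop could be wired together; the statistical detector, the alert channel, and the remediation table are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of the three self-healing components named in the abstract:
# anomaly detection, fault alert, and fault auto-remediation.
# All names here are illustrative, not taken from the paper.

from dataclasses import dataclass
from statistics import mean, stdev
from typing import Callable, Optional

@dataclass
class Anomaly:
    metric: str
    value: float
    score: float

def detect_anomaly(history: list[float], latest: float, metric: str,
                   threshold: float = 3.0) -> Optional[Anomaly]:
    """Flag `latest` if it deviates from the history baseline by > threshold sigmas."""
    if len(history) < 2:
        return None
    mu, sigma = mean(history), stdev(history)
    sigma = sigma or 1e-9
    score = abs(latest - mu) / sigma
    return Anomaly(metric, latest, score) if score > threshold else None

def raise_alert(anomaly: Anomaly) -> None:
    """Fault alert: notify operators / logging infrastructure."""
    print(f"ALERT: {anomaly.metric}={anomaly.value:.2f} (score {anomaly.score:.1f})")

def auto_remediate(anomaly: Anomaly, actions: dict[str, Callable[[], None]]) -> None:
    """Fault auto-remediation: apply a pre-approved recovery action if one exists."""
    action = actions.get(anomaly.metric)
    if action is not None:
        action()

# Example wiring: restart a (hypothetical) sensor driver when its readings drift.
if __name__ == "__main__":
    history = [50.1, 49.8, 50.3, 50.0]
    latest = 91.7                                   # anomalous reading
    remediations = {"temperature": lambda: print("restarting sensor driver")}
    anomaly = detect_anomaly(history, latest, "temperature")
    if anomaly:
        raise_alert(anomaly)
        auto_remediate(anomaly, remediations)
```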

    Reliable massively parallel symbolic computing: fault tolerance for a distributed Haskell

    As the number of cores in manycore systems grows exponentially, the number of failures is also predicted to grow exponentially. Hence massively parallel computations must be able to tolerate faults. Moreover, new approaches to language design and system architecture are needed to address the resilience of massively parallel heterogeneous architectures. Symbolic computation has underpinned key advances in Mathematics and Computer Science, for example in number theory, cryptography, and coding theory. Computer algebra software systems facilitate symbolic mathematics. Developing these at scale has its own distinctive set of challenges, as symbolic algorithms tend to employ complex irregular data and control structures. SymGridParII is a middleware for parallel symbolic computing on massively parallel High Performance Computing platforms. A key element of SymGridParII is a domain specific language (DSL) called Haskell Distributed Parallel Haskell (HdpH). It is explicitly designed for scalable distributed-memory parallelism, and employs work stealing to load balance dynamically generated irregular task sizes. To investigate providing scalable fault tolerant symbolic computation, we design, implement and evaluate a reliable version of HdpH, HdpH-RS. Its reliable scheduler detects and handles faults, using task replication as a key recovery strategy. The scheduler supports load balancing with a fault tolerant work stealing protocol. The reliable scheduler is invoked with two fault tolerance primitives for implicit and explicit work placement, and 10 fault tolerant parallel skeletons that encapsulate common parallel programming patterns. The user is oblivious to many failures; they are instead handled by the scheduler. An operational semantics describes small-step reductions on states. A simple abstract machine for scheduling transitions and task evaluation is presented. It defines the semantics of supervised futures, and the transition rules for recovering tasks in the presence of failure. The transition rules are demonstrated with a fault-free execution, and three executions that recover from faults. The fault tolerant work stealing protocol has been abstracted into a Promela model. The SPIN model checker is used to exhaustively search the intersection of states in this automaton to validate a key resiliency property of the protocol: an initially empty supervised future on the supervisor node will eventually be full in the presence of all possible combinations of failures. The performance of HdpH-RS is measured using five benchmarks. Supervised scheduling achieves a speedup of 757 with explicit task placement and 340 with lazy work stealing when executing Summatory Liouville on up to 1400 cores of an HPC architecture. Moreover, supervision overheads remain consistently low when scaling up to 1400 cores. Low recovery overheads are observed in the presence of frequent failure when lazy on-demand work stealing is used. A Chaos Monkey mechanism has been developed for stress testing resiliency with random failure combinations. All unit tests pass in the presence of random failure, terminating with the expected results.
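    HdpH-RS itself is a Haskell DSL; the Python sketch below only illustrates the recovery idea behind supervised futures described above: a supervisor remembers where each task was placed and replicates (re-schedules) it if that node fails before the future is filled. All names are illustrative and do not reflect the HdpH-RS API.

```python
# Illustrative sketch of supervised futures with task replication on node failure.
# Not the HdpH-RS API; a plain sequential simulation of the recovery strategy.

import random

class SupervisedFuture:
    def __init__(self, task_id, thunk):
        self.task_id = task_id
        self.thunk = thunk          # the computation to run
        self.result = None          # "empty" until filled
        self.location = None        # node currently holding the task

class Supervisor:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.futures = []

    def spawn(self, task_id, thunk):
        fut = SupervisedFuture(task_id, thunk)
        fut.location = random.choice(sorted(self.nodes))   # initial placement
        self.futures.append(fut)
        return fut

    def on_node_failure(self, dead_node):
        """Key recovery strategy: replicate tasks lost with the failed node."""
        self.nodes.discard(dead_node)
        for fut in self.futures:
            if fut.result is None and fut.location == dead_node:
                fut.location = random.choice(sorted(self.nodes))  # re-schedule

    def run_surviving(self):
        """Eventually every supervised future placed on a live node becomes full."""
        for fut in self.futures:
            if fut.result is None and fut.location in self.nodes:
                fut.result = fut.thunk()

# Usage: tasks lost with node "B" are transparently re-scheduled and still complete.
sup = Supervisor(["A", "B", "C"])
futs = [sup.spawn(i, lambda i=i: i * i) for i in range(4)]
sup.on_node_failure("B")
sup.run_surviving()
print([f.result for f in futs])   # all futures are full despite the failure
```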

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource by providing a concise yet comprehensive introduction to readers outside this field, for those who are just entering the field, as well as providing future perspectives for those who are well established in the neuromorphic computing community.
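    As a generic illustration of the event-driven primitives neuromorphic circuits implement (not an example taken from the roadmap), the sketch below simulates a leaky integrate-and-fire neuron, where state is held locally in the unit rather than shuttled between separate processing and memory blocks.

```python
# Generic illustration: a leaky integrate-and-fire (LIF) neuron, a common
# neuromorphic primitive. Parameters are arbitrary and chosen for readability.

import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0):
    """Simulate a LIF neuron; returns spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_thresh:            # threshold crossing -> emit a spike event
            spikes.append(step * dt)
            v = v_rest               # reset after the spike
    return spikes

# Constant drive above threshold produces a regular spike train.
current = np.full(1000, 60.0)        # 1 s of input at 1 ms resolution
print(lif_neuron(current)[:5])
```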

    Coordination and Self-Adaptive Communication Primitives for Low-Power Wireless Networks

    The Internet of Things (IoT) is a recent trend where objects are augmented with computing and communication capabilities, often via low-power wireless radios. The Internet of Things is an enabler for a connected and more sustainable modern society: smart grids are deployed to improve energy production and consumption, wireless monitoring systems allow smart factories to detect faults early and reduce waste, while connected vehicles coordinate on the road to ensure our safety and save fuel. Many recent IoT applications have stringent requirements for their wireless communication substrate: devices must cooperate and coordinate, must perform efficiently under varying and sometimes extreme environments, while strict deadlines must be met. Current distributed coordination algorithms have high overheads and are unfit to meet the requirements of today's wireless applications, while current wireless protocols are often best-effort and lack the guarantees provided by well-studied coordination solutions. Further, many communication primitives available today lack the ability to adapt to dynamic environments, and are often tuned during their design phase to reach a target performance, rather than be continuously updated at runtime to adapt to reality. In this thesis, we study the problem of efficient and low-latency consensus in the context of low-power wireless networks, where communication is unreliable and nodes can fail, and we investigate the design of a self-adaptive wireless stack, where the communication substrate is able to adapt to changes to its environment. We propose three new communication primitives: Wireless Paxos brings fault-tolerant consensus to low-power wireless networking, STARC is a middleware for safe vehicular coordination at intersections, while Dimmer builds on reinforcement learning to provide adaptivity to low-power wireless networks. We evaluate in-depth each primitive on testbed deployments and we provide an open-source implementation to enable their use and improvement by the community.
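    Wireless Paxos adapts consensus to low-power wireless links; the sketch below shows only the classic single-decree Paxos round such primitives build on, in illustrative Python, not the thesis implementation or its radio-level transport.

```python
# Sketch of a single-decree Paxos round (illustrative, not the thesis code):
# a value is chosen once a majority of acceptors accept the same proposal number.

class Acceptor:
    def __init__(self):
        self.promised = -1           # highest proposal number promised
        self.accepted = None         # (number, value) accepted so far, if any

    def prepare(self, n):
        """Phase 1b: promise not to accept proposals numbered below n."""
        if n > self.promised:
            self.promised = n
            return True, self.accepted
        return False, None

    def accept(self, n, value):
        """Phase 2b: accept unless a higher-numbered promise was made."""
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return True
        return False

def propose(acceptors, n, value):
    """One proposer round; returns the chosen value or None if no majority."""
    majority = len(acceptors) // 2 + 1
    # Phase 1: gather promises; adopt any previously accepted value.
    promises = [a.prepare(n) for a in acceptors]
    granted = [acc for ok, acc in promises if ok]
    if len(granted) < majority:
        return None
    prior = [acc for acc in granted if acc is not None]
    if prior:
        value = max(prior)[1]        # highest-numbered accepted value wins
    # Phase 2: ask acceptors to accept the (possibly adopted) value.
    acks = sum(a.accept(n, value) for a in acceptors)
    return value if acks >= majority else None

acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, n=1, value="green-light"))   # -> "green-light"
```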

    Self-Adaptation in SDN-based IoT Networks

    In the digital age, frightening patterns in digital threats are emerging, and threats to IoT networks cannot be ignored. Threats can take any of the typical forms, including Denial-of-Service (DoS), Distributed Denial-of-Service (DDoS), virus attacks, man-in-the-middle attacks (MitM), Advanced Persistent Threats (APT), password attacks, and more. It is crucial to eliminate all threats from IoT networks and devices. This thesis, "Self-Adaptation of SDN-based IoT Networks", argues that reinforcement learning for detecting anomalies in an IoT network is the best option for correcting risks in a network and thereby fixing the afflicted nodes. Markov Decision Process (MDP) policies and the MAPE-K loop properties of self-aware systems are the basis of the design in this thesis. The network system exhibits self-adaptability features, which make it self-correcting and self-healing. The objective of this research is to propose a means to secure the devices in an IoT network by protecting them from any form of threat and ensuring that the devices function normally; even in the event of abnormal functioning of any node in the network, the system should be able to correct itself. A Software Defined Network (SDN) architecture is proposed for the design in a later section, which explains the kind of SDN that should be in place for the intrusion detection system. Further into the thesis, we give a general overview of deep reinforcement learning. The implementation section then describes the reinforcement learning policy used in this work and how the results were derived, followed by a results and discussion section, where the results of this work are compared with those of a traditional machine learning algorithm.
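    The design combines a MAPE-K loop with an MDP-based reinforcement learning policy. The sketch below is a hypothetical illustration of that combination: a tabular Q-learning policy chooses the mitigation action inside the Plan step, with the Q-table playing the role of shared Knowledge. The thresholds, action set, and controller call are assumptions for illustration, not the thesis design.

```python
# Hypothetical MAPE-K loop with a tabular Q-learning (MDP) policy in the Plan step.
# Names, thresholds, and actions are illustrative only.

import random
from collections import defaultdict

ACTIONS = ["allow", "rate_limit", "block_flow", "reroute"]
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})   # Knowledge (K)

def monitor(flow_stats):
    """Monitor: condense raw flow statistics into a coarse state label."""
    pkt_rate = flow_stats["packets_per_s"]
    return "attack" if pkt_rate > 1000 else "suspicious" if pkt_rate > 300 else "normal"

def analyze(state):
    """Analyze: decide whether adaptation is needed at all."""
    return state != "normal"

def plan(state, epsilon=0.1):
    """Plan: epsilon-greedy action selection from the learned Q-values."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def execute(action, flow):
    """Execute: push the chosen rule to the (hypothetical) SDN controller."""
    print(f"install rule: {action} for flow {flow}")

def learn(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Update Knowledge with the standard Q-learning rule."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += alpha * (reward + gamma * best_next - q_table[state][action])

# One pass of the loop on a simulated DoS burst.
stats = {"packets_per_s": 2500}
state = monitor(stats)
if analyze(state):
    action = plan(state)
    execute(action, flow="10.0.0.7->10.0.0.1")
    learn(state, action, reward=+1.0 if action == "block_flow" else -0.1, next_state="normal")
```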

    Large Language Models for Forecasting and Anomaly Detection: A Systematic Literature Review

    This systematic literature review comprehensively examines the application of Large Language Models (LLMs) in forecasting and anomaly detection, highlighting the current state of research, inherent challenges, and prospective future directions. LLMs have demonstrated significant potential in parsing and analyzing extensive datasets to identify patterns, predict future events, and detect anomalous behavior across various domains. However, this review identifies several critical challenges that impede their broader adoption and effectiveness, including the reliance on vast historical datasets, issues with generalizability across different contexts, the phenomenon of model hallucinations, limitations within the models' knowledge boundaries, and the substantial computational resources required. Through detailed analysis, this review discusses potential solutions and strategies to overcome these obstacles, such as integrating multimodal data, advancements in learning methodologies, and emphasizing model explainability and computational efficiency. Moreover, this review outlines critical trends that are likely to shape the evolution of LLMs in these fields, including the push toward real-time processing, the importance of sustainable modeling practices, and the value of interdisciplinary collaboration. In conclusion, this review underscores the transformative impact LLMs could have on forecasting and anomaly detection while emphasizing the need for continuous innovation, ethical considerations, and practical solutions to realize their full potential.
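    As a hedged illustration of the usage pattern this review surveys, the sketch below frames anomaly detection as a natural-language task for an LLM. `query_llm` is a placeholder for whatever model endpoint is available, not a real library call, and the JSON-parsing fallback reflects the hallucination risk the review discusses.

```python
# Hypothetical prompt-based anomaly detection wrapper. `query_llm` must be wired
# to a concrete model provider; nothing here refers to a real API.

import json

def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM endpoint and return its text reply."""
    raise NotImplementedError("wire this to your model provider")

def detect_anomalies_with_llm(series: list[float], context: str) -> dict:
    """Ask the model to flag anomalous indices and explain them, as JSON."""
    prompt = (
        f"You are monitoring {context}.\n"
        f"Time series: {series}\n"
        "Return JSON with keys 'anomalous_indices' (list of ints) and 'reason' (string). "
        "Only output JSON."
    )
    reply = query_llm(prompt)
    try:
        return json.loads(reply)          # hallucinated output may break this parse
    except json.JSONDecodeError:
        return {"anomalous_indices": [], "reason": "unparseable model output"}

# Example call (requires a concrete query_llm implementation):
# detect_anomalies_with_llm([5.1, 5.0, 5.2, 19.7, 5.1], context="CPU load of a web server")
```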

    Design of Neuromemristive Systems for Visual Information Processing

    Neuromemristive systems (NMSs) are brain-inspired, adaptive computer architectures based on emerging resistive memory technology (memristors). NMSs adopt a mixed-signal design approach with closely-coupled memory and processing, resulting in high area and energy efficiencies. Previous work suggests that NMSs could even supplant conventional architectures in niche application domains such as visual information processing. However, given the infancy of the field, there are still several obstacles impeding the transition of these systems from theory to practice. This dissertation advances the state of NMS research by addressing open design problems spanning circuit, architecture, and system levels. Novel synapse, neuron, and plasticity circuits are designed to reduce NMSs’ area and power consumption by using current-mode design techniques and exploiting device variability. Circuits are designed in a 45 nm CMOS process with memristor models based on multilevel (W/Ag-chalcogenide/W) and bistable (Ag/GeS2/W) device data. Higher-level behavioral, power, area, and variability models are ported into MATLAB to accelerate the overall simulation time. The circuits designed in this work are integrated into neural network architectures for visual information processing tasks, including feature detection, clustering, and classification. Networks in the NMSs are trained with novel stochastic learning algorithms that achieve a 3.5× reduction in circuit area and reduced design complexity, while exhibiting convergence properties similar to those of the least-mean-squares algorithm. This work also examines the effects of device-level variations on NMS performance, which has received limited attention in previous work. The impact of device variations is reduced with a partial on-chip training methodology that enables NMSs to be configured with relatively sophisticated algorithms (e.g. resilient backpropagation), while maximizing their area-accuracy tradeoff.
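    The abstract compares novel stochastic learning rules against least-mean-squares (LMS). The sketch below contrasts, on a toy linear problem, a standard LMS update with a sign-based stochastic update of the kind that maps naturally onto pulse-programmed memristive weights; it is an illustrative comparison, not the dissertation's circuit-level algorithm.

```python
# Toy comparison of LMS against a sign-based update rule (illustrative only).

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.3, 0.5])
X = rng.normal(size=(2000, 3))
y = X @ true_w + 0.01 * rng.normal(size=2000)

def train(update_rule, lr=0.02):
    w = np.zeros(3)
    for x_i, y_i in zip(X, y):
        err = y_i - x_i @ w
        w += lr * update_rule(err, x_i)
    return w

# LMS: full-precision gradient step, proportional to error and input.
lms = train(lambda err, x: err * x)

# Sign-based stochastic rule: only signs are used, so each weight change is a
# fixed-size increment/decrement, which is simple to map onto programming pulses.
signed = train(lambda err, x: np.sign(err) * np.sign(x), lr=0.005)

print("LMS weights:   ", np.round(lms, 3))
print("Sign-sign rule:", np.round(signed, 3))
```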