
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures
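    As a concrete illustration of the reinforcement learning family the survey covers, below is a minimal tabular Q-learning sketch for dynamic channel selection. The environment (four channels with fixed success probabilities) and all parameters are illustrative assumptions, not an algorithm taken from the article.

```python
import random

# Minimal tabular Q-learning sketch for dynamic channel selection.
# The environment model (n_channels, reward probabilities) is hypothetical,
# chosen only to illustrate the RL loop the survey discusses.
n_channels = 4
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration
Q = [0.0] * n_channels                  # single state: one Q-value per channel

def channel_reward(ch):
    """Stand-in for sensing feedback: 1 if transmission succeeds, else 0."""
    success_prob = [0.2, 0.5, 0.8, 0.4][ch]
    return 1.0 if random.random() < success_prob else 0.0

for step in range(10_000):
    # epsilon-greedy action selection
    ch = random.randrange(n_channels) if random.random() < epsilon \
         else max(range(n_channels), key=Q.__getitem__)
    r = channel_reward(ch)
    # single-state Q-update (next state equals current state)
    Q[ch] += alpha * (r + gamma * max(Q) - Q[ch])

print("Learned channel preferences:", [round(q, 2) for q in Q])
```

    After training, the agent concentrates on the channel with the highest success probability, which is the essence of the "interactive decision making" the abstract attributes to RL.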

    Multilayer Cyberattacks Identification and Classification Using Machine Learning in Internet of Blockchain (IoBC)-Based Energy Networks

    The world's need for energy is rising due to factors like population growth, economic expansion, and technological breakthroughs. However, there are major consequences when gas and coal are burnt to meet this surge in energy needs. Although these fossil fuels are still essential for meeting energy demands, their combustion releases a large amount of carbon dioxide and other pollutants into the atmosphere. This significantly jeopardizes community health in addition to exacerbating climate change, making it essential to move swiftly to incorporate renewable energy sources by employing advanced information and communication technologies. However, this change brings up several security issues, emphasizing the need for innovative cyber-threat detection and prevention solutions. Consequently, this study presents big data sets obtained from solar- and wind-powered distributed energy systems through blockchain-based energy networks in the smart grid (SG). A hybrid machine learning (HML) model that combines the characteristics of Deep Learning (DL) and Long Short-Term Memory (LSTM) models is developed and applied to identify the unique patterns of Denial of Service (DoS) and Distributed Denial of Service (DDoS) cyberattacks in the power generation, transmission, and distribution processes. The presented big data sets are essential and significantly help in identifying and classifying cyberattacks, leading to accurate prediction of energy system behavior in the SG. © 2024 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
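    To make the hybrid DL/LSTM idea concrete, here is a minimal sketch of such a detector in PyTorch. The feature dimension, window length, and three-way labelling (normal/DoS/DDoS) are assumptions for illustration; this is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Hedged sketch of a hybrid deep model with an LSTM stage for classifying
# DoS/DDoS patterns in energy-network telemetry. All dimensions and labels
# are hypothetical stand-ins for the paper's data.
class HybridDetector(nn.Module):
    def __init__(self, n_features=12, hidden=64, n_classes=3):
        super().__init__()
        # dense "deep" front-end extracts per-timestep features
        self.frontend = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        # LSTM captures temporal attack signatures across the window
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # normal / DoS / DDoS

    def forward(self, x):                 # x: (batch, time, n_features)
        z = self.frontend(x)
        _, (h_n, _) = self.lstm(z)
        return self.head(h_n[-1])         # logits over the three classes

model = HybridDetector()
window = torch.randn(8, 30, 12)           # 8 windows of 30 timesteps each
logits = model(window)
print(logits.shape)                       # torch.Size([8, 3])
```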

    Artificial Intelligence for Resilience in Smart Grid Operations

    Today, the electric power grid is transforming into a highly interconnected network of advanced technologies, equipment, and controls to enable a smarter grid. The growing complexity of the smart grid requires resilient operation and control. Power system resilience is defined as the ability to harden the system against, and quickly recover from, high-impact, low-frequency events. The introduction of two-way flows of information and electricity in the smart grid raises concerns of cyber-physical attacks. The growing penetration of renewable energy sources such as solar photovoltaic (PV) and wind power introduces challenges due to the high variability and uncertainty in generation. Unintentional disruptions and power system component outages have become a threat to real-time power system operations. Recent extreme weather events and natural disasters such as hurricanes, storms, and wildfires demonstrate the importance of resilience in the power system. It is essential to find solutions to overcome these challenges and maintain resilience in the smart grid. In this dissertation, artificial intelligence (AI) based approaches have been developed to enhance resilience in the smart grid. Methods for optimal automatic generation control (AGC) have been developed for multi-area, multi-machine power systems. Reliable AI models have been developed for predicting solar irradiance, PV power generation, and power system frequencies. The proposed short-horizon AI prediction models, with horizons ranging from a few seconds to over a minute, outperform state-of-the-art persistence models. The AI prediction models have been applied to provide situational intelligence for power system operations. An enhanced tie-line bias control in a multi-area power system for variable and uncertain environments has been developed with predicted PV power and bus frequencies. A distributed and parallel security-constrained optimal power flow (SCOPF) algorithm has been developed to overcome the challenges in solving the SCOPF problem for large power networks. The methods have been developed and tested on an experimental laboratory platform consisting of real-time digital simulators, hardware/software phasor measurement units, and a real-time weather station.
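    The claim that short-horizon models can beat persistence is easy to illustrate with a toy comparison. The sketch below fits a simple linear autoregressive predictor on a synthetic PV-like signal and compares it with the persistence baseline (repeat the last observed value); the signal, window length, and horizon are assumptions, not the dissertation's models or data.

```python
import numpy as np

# Toy comparison of a persistence baseline against a short-horizon linear
# autoregressive predictor, in the spirit of the dissertation's claim.
rng = np.random.default_rng(0)
t = np.arange(3000)
pv = np.clip(np.sin(t / 150.0), 0, None) + 0.05 * rng.standard_normal(t.size)

lags, horizon = 30, 10                       # 30 past samples -> 10 steps ahead
n = len(pv) - lags - horizon
X = np.stack([pv[i:i + lags] for i in range(n)])
y = np.array([pv[i + lags + horizon - 1] for i in range(n)])

w, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares AR fit (in-sample)
ar_rmse = np.sqrt(np.mean((X @ w - y) ** 2))
persistence_rmse = np.sqrt(np.mean((X[:, -1] - y) ** 2))
print(f"AR: {ar_rmse:.4f}  persistence: {persistence_rmse:.4f}")
```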

    Dagstuhl News January - December 2011

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News gives a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented in a short abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic.

    Agoric computation: trust and cyber-physical systems

    In the past two decades, advances in miniaturisation and economies of scale have led to the emergence of billions of connected components that have provided both a spur and a blueprint for the development of smart products acting in specialised environments which are uniquely identifiable, localisable, and capable of autonomy. Adopting the computational perspective of multi-agent systems (MAS) as a technological abstraction, married with the engineering perspective of cyber-physical systems (CPS), has provided fertile ground for designing, developing, and deploying software applications in smart automated contexts such as manufacturing, power grids, avionics, healthcare, and logistics, capable of being decentralised, intelligent, reconfigurable, modular, flexible, robust, adaptive, and responsive. Current agent technologies are, however, ill-suited for information-based environments, making it difficult to formalise and implement multi-agent systems based on inherently dynamical functional concepts such as trust and reliability, which present special challenges when scaling from small to large systems of agents. To overcome such challenges, it is useful to adopt a unified approach which we term agoric computation, integrating logical, mathematical, and programming concepts towards the development of agent-based solutions based on recursive, compositional principles, where smaller systems feed via directed information flows into larger hierarchical systems that define their global environment. Considering information as an integral part of the environment naturally defines a web of operations where components of a system are wired together in some way and each set of inputs and outputs is allowed to carry some value. These operations are stateless abstractions and procedures that act on stateful cells that accumulate partial information, and it is possible to compose such abstractions into higher-level ones, using a publish-and-subscribe interaction model that keeps track of update messages between abstractions and values in the data. In this thesis we review the logical and mathematical basis of such abstractions and take steps towards the software implementation of agoric modelling as a framework for simulation and verification of the reliability of increasingly complex systems, and report on experimental results related to a few select applications, such as stigmergic interaction in mobile robotics, integrating raw data into agent perceptions, trust and trustworthiness in orchestrated open systems, computing the epistemic cost of trust when reasoning in networks of agents seeded with contradictory information, and trust models for distributed ledgers in the Internet of Things (IoT); and provide a roadmap for future developments of our research.
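    The publish-and-subscribe model of stateless operations acting on stateful cells described above can be sketched in a few lines of Python. All names (Cell, lift) and the trust formula are illustrative assumptions, not the thesis's actual API.

```python
# Minimal sketch of the publish-and-subscribe cell model: stateless
# operations act on stateful cells that accumulate partial information,
# and update messages propagate between them.
class Cell:
    def __init__(self, value=None):
        self.value = value
        self.subscribers = []             # downstream operations to notify

    def publish(self, new_value):
        if new_value != self.value:
            self.value = new_value
            for op in self.subscribers:
                op()                      # push the update downstream

def lift(fn, inputs, output):
    """Wire a stateless operation: recompute output when any input changes."""
    def op():
        if all(c.value is not None for c in inputs):
            output.publish(fn(*(c.value for c in inputs)))
    for c in inputs:
        c.subscribers.append(op)
    return op

# Compose two abstractions hierarchically: trust = f(reliability, evidence)
reliability, evidence, trust = Cell(), Cell(), Cell()
lift(lambda r, e: min(1.0, r * e), [reliability, evidence], trust)
reliability.publish(0.9)
evidence.publish(0.8)
print(trust.value)                        # 0.72
```

    Because lifted operations are themselves composable, larger hierarchical systems can be built by feeding the output cells of smaller ones into further operations, which mirrors the recursive, compositional principle the abstract describes.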

    Enhancing cloud security through the integration of deep learning and data mining techniques: A comprehensive review

    Cloud computing is crucial in all areas of data storage and online service delivery. It adds various benefits to the conventional storage and sharing system, such as simple access, on-demand storage, scalability, and cost savings. The employment of its rapidly expanding technologies may offer several benefits for protecting the Internet of Things (IoT) and cyber-physical systems (CPS) from various cyber threats, with IoT and CPS providing facilities for people in their everyday lives. Because malicious software (malware) is on the rise and there is no well-known strategy for malware detection, leveraging the cloud environment to identify malware might be a viable way forward. To avoid detection, new kinds of malware employ complex obfuscation and packing methods, which makes it very hard to identify sophisticated malware using typical detection methods. The article presents a detailed assessment of cloud-based malware detection technologies, as well as insight into the cloud's use in protecting the Internet of Things and critical infrastructure from intrusions. This study examines the benefits and drawbacks of cloud environments in malware detection, presents a methodology for detecting cloud-based malware using deep learning and data mining, and highlights recent research on the propagation of existing malware. Finally, similarities and differences across detection approaches are exposed, along with the flaws of each detection technique. The findings of this work may be used to highlight the open issues to be tackled in future malware research.
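    As a toy instance of the reviewed methodology (mine features from collected samples, then train a neural detector), the following sketch trains a small scikit-learn MLP on synthetic feature vectors. The feature set (e.g., API-call counts, section entropy, packer flags) and the labels are hypothetical stand-ins, not data from the review.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Illustrative sketch: a neural detector trained on features mined from
# cloud-collected samples. Features and labels are synthetic stand-ins.
rng = np.random.default_rng(1)
X = rng.random((500, 16))                  # 16 mined features per sample
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # synthetic benign/malware labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```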

    REACTIVE MOTION REPLANNING FOR HUMAN-ROBOT COLLABORATION

    In recent years, there has been a significant increase in robots sharing their workspace with human operators, combining the speed and precision inherent to robots with human adaptability and intelligence. However, this integration has introduced new challenges in terms of safety and collaborative efficiency. Robots need to adjust swiftly to dynamic changes in their environment, such as the movements of operators, altering their path in real time to avoid collisions, ideally without any disruptions. Moreover, in human-robot collaboration, replanned trajectories should adhere to safety protocols, preventing safety-induced slowdowns or stops caused by the robot's proximity to the operator. In this context, quickly providing high-quality solutions is crucial for ensuring the robot's responsiveness. Conventional replanning techniques often fall short in complex environments, especially for robots with numerous degrees of freedom contending with sizable obstacles. This thesis tackles these challenges by introducing a novel sampling-based path replanning algorithm tailored for robotic manipulators. The approach exploits pre-computed paths to generate new solutions within a few hundred milliseconds. Additionally, it integrates a cost function that steers the algorithm towards solutions that adhere to the ISO/TS 15066 safety standard, thereby minimizing the need for safety interventions and fostering efficient cooperation between humans and robots. Furthermore, an architecture for managing the replanning process during the execution of the robot's path is introduced. Finally, a software tool is presented that streamlines the implementation and testing of path replanning algorithms. Simulations and experiments conducted on real robots demonstrate the superior performance of the proposed method compared to other popular techniques.
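    The core idea, reusing pre-computed paths and scoring them with a safety-aware cost, can be sketched as follows. The 2-D workspace, candidate paths, and cost weights are illustrative assumptions standing in for the ISO/TS 15066 speed-and-separation reasoning; this is not the thesis's implementation.

```python
import numpy as np

# Toy sketch: score pre-computed candidate paths with a cost that penalizes
# both path length and violations of a minimum clearance from the operator.
def path_cost(path, human_pos, w_len=1.0, w_safe=5.0, d_min=0.5):
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    dists = np.linalg.norm(path - human_pos, axis=1)
    penalty = np.sum(np.maximum(0.0, d_min - dists))   # clearance violations
    return w_len * length + w_safe * penalty

human = np.array([1.0, 1.0])
precomputed = [
    np.array([[0, 0], [1, 1], [2, 2]], float),   # passes through the operator
    np.array([[0, 0], [2, 0], [2, 2]], float),   # detours around the operator
]
best = min(precomputed, key=lambda p: path_cost(p, human))
print("selected path:\n", best)
```

    With these weights the detour wins despite being longer, which is the trade-off the replanner's cost function is meant to encode: slightly longer paths that avoid safety-induced stops can be faster in practice.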

    Proceedings of Abstracts, School of Physics, Engineering and Computer Science Research Conference 2022

    © 2022 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/. The plenary by Prof. Timothy Foat, ‘Indoor dispersion at Dstl and its recent application to COVID-19 transmission’, is © Crown copyright (2022), Dstl. This material is licensed under the terms of the Open Government Licence except where otherwise stated. To view this licence, visit http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: [email protected]. The present proceedings record the abstracts submitted and accepted for presentation at SPECS 2022, the second edition of the School of Physics, Engineering and Computer Science Research Conference, which took place online on 12 April 2022.

    Enabling Model-Driven Live Analytics For Cyber-Physical Systems: The Case of Smart Grids

    Advances in software, embedded computing, sensors, and networking technologies will lead to a new generation of smart cyber-physical systems that will far exceed the capabilities of today's embedded systems. They will be entrusted with increasingly complex tasks like controlling electric grids or autonomously driving cars. These systems have the potential to lay the foundations for tomorrow's critical infrastructures, to form the basis of emerging and future smart services, and to improve the quality of our everyday lives in many areas. In order to solve their tasks, they have to continuously monitor and collect data from physical processes, analyse this data, and make decisions based on it. Making smart decisions requires a deep understanding of the environment, the internal state, and the impacts of actions. Such deep understanding relies on efficient data models to organise the sensed data and on advanced analytics. Considering that cyber-physical systems control physical processes, decisions need to be taken very fast. This makes it necessary to analyse data live, as opposed to conventional batch analytics. However, their complex nature combined with the massive amount of data generated by such systems imposes fundamental challenges. While data in the context of cyber-physical systems shares some characteristics with big data, it holds a particular complexity. This complexity results from the complicated physical phenomena described by this data, which makes it difficult to extract a model able to explain such data and its various multi-layered relationships. Existing solutions fail to provide sustainable mechanisms to analyse such data live. This dissertation presents a novel approach, named model-driven live analytics. The main contribution of this thesis is a multi-dimensional graph data model that brings raw data, domain knowledge, and machine learning together in a single model, which can drive live analytic processes. This model is continuously updated with the sensed data and can be leveraged by live analytic processes to support the decision-making of cyber-physical systems. The presented approach has been developed in collaboration with an industrial partner and, in the form of a prototype, applied to the domain of smart grids. The addressed challenges are derived from this collaboration as a response to shortcomings in the current state of the art. More specifically, this dissertation provides solutions for the following challenges:

    First, data handled by cyber-physical systems is usually dynamic (data in motion, as opposed to traditional data at rest) and changes frequently and at different paces. Analysing such data is challenging, since data models usually can only represent a snapshot of a system at one specific point in time. A common approach consists of discretisation, which regularly samples and stores such snapshots at specific timestamps to keep track of the history. Continuously changing data is then represented as a finite sequence of such snapshots. Such data representations would be very inefficient to analyse, since they would require mining the snapshots, extracting a relevant dataset, and finally analysing it. For this problem, this thesis presents a temporal graph data model and storage system, which treat time as a first-class property. A time-relative navigation concept enables very efficient analysis of frequently changing data.

    Secondly, making sustainable decisions requires anticipating the impacts that certain actions would have. In complex cyber-physical systems, situations can arise where hundreds or thousands of such hypothetical actions must be explored before a solid decision can be made. Every action leads to an independent alternative from which a set of other actions can be applied, and so forth. Finding the sequence of actions that leads to the desired alternative requires efficiently creating, representing, and analysing many different alternatives. Given that every alternative has its own history, this creates a very high combinatorial complexity of alternatives and histories, which is hard to analyse. To tackle this problem, this dissertation introduces a multi-dimensional graph data model (as an extension of the temporal graph data model) that enables many different alternatives to be represented, stored, and analysed efficiently and live.

    Thirdly, complex cyber-physical systems are often distributed, but to fulfil their tasks these systems typically need to share context information between computational entities. This requires analytic algorithms to reason over distributed data, which is a complex task since it relies on the aggregation and processing of various distributed and constantly changing data. To address this challenge, this dissertation proposes an approach to transparently distribute the presented multi-dimensional graph data model in a peer-to-peer manner and defines a stream processing concept to efficiently handle frequent changes.

    Fourthly, to meet future needs, cyber-physical systems need to become increasingly intelligent. To make smart decisions, these systems have to continuously refine behavioural models that are known at design time with what can only be learned from live data. Machine learning algorithms can help to capture this unknown behaviour by extracting commonalities over massive datasets. Nevertheless, a coarse-grained common behaviour model can be very inaccurate for cyber-physical systems, which are composed of completely different entities with very different behaviour. For these systems, fine-grained learning can be significantly more accurate. However, modelling, structuring, and synchronising many fine-grained learning units is challenging. To tackle this, this thesis presents an approach to define reusable, chainable, and independently computable fine-grained learning units, which can be modelled together with, and on the same level as, domain data. This allows machine learning to be woven directly into the presented multi-dimensional graph data model.

    In summary, this thesis provides an efficient multi-dimensional graph data model to enable live analytics of the complex, frequently changing, and distributed data of cyber-physical systems. This model can significantly improve data analytics for such systems and empower cyber-physical systems to make smart decisions live. The presented solutions combine and extend methods from model-driven engineering, models@run.time, data analytics, database systems, and machine learning.
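    The "time as a first-class property" idea can be sketched with a node whose attributes keep sorted timelines of values, and whose reads resolve the value valid at a requested timestamp (the time-relative navigation the abstract mentions). The class and method names below are illustrative assumptions, not the dissertation's API.

```python
import bisect

# Minimal sketch of a temporal graph node: each attribute keeps a sorted
# timeline of (timestamp, value) pairs, and reads resolve the value that
# was valid at the requested time.
class TemporalNode:
    def __init__(self):
        self._timeline = {}               # attr -> sorted [(t, value), ...]

    def set(self, attr, t, value):
        line = self._timeline.setdefault(attr, [])
        bisect.insort(line, (t, value))

    def get(self, attr, t):
        """Time-relative navigation: latest value with timestamp <= t."""
        line = self._timeline.get(attr, [])
        i = bisect.bisect_right(line, (t, float("inf"))) - 1
        return line[i][1] if i >= 0 else None

meter = TemporalNode()
meter.set("load_kw", 10, 3.2)
meter.set("load_kw", 20, 4.8)
print(meter.get("load_kw", 15))           # 3.2 (the value valid at t=15)
```

    Storing the full timeline per attribute avoids the snapshot discretisation the abstract criticises: no snapshot mining is needed, since any past state is reachable with one logarithmic-time lookup.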