
    Ternary and Hybrid Event-based Particle Filtering for Distributed State Estimation in Cyber-Physical Systems

    The thesis is motivated by recent advancements in large, distributed, autonomous, and self-aware Cyber-Physical Systems (CPSs), which are emerging engineering systems with integrated processing, control, and communication capabilities. Efficient usage of the available resources (communication, computation, bandwidth, and energy) is a prerequisite for productive operation of CPSs, where security, privacy, and/or power considerations limit the number of information transfers between neighbouring sensors. In this regard, the focus of the thesis is on information acquisition, state estimation, and learning in the context of CPSs by adopting an Event-based Estimation (EBE) strategy, where information transfer is performed only upon the occurrence of specific events identified via localized triggering mechanisms. In particular, the thesis aims to address the following drawbacks of existing EBE methodologies: (i) on the one hand, while EBE using Gaussian-based approximations of the event-triggered posterior has been fairly well investigated, the application of non-linear, non-Gaussian filtering using particle filters is still in its infancy; (ii) on the other hand, existing EBE strategies commonly assume a binary (idle and event) decision process where, during idle epochs, the sensor holds on to its local measurements, while during event epochs measurement communication happens. Although the binary event-based transfer of measurements potentially reduces the communication overhead, communicating raw measurements during every event instance can still be very costly. To address these shortcomings, first, an intuitively pleasing event-based particle filtering (EBPF) framework is proposed for centralized, hierarchical, and distributed state estimation architectures. Furthermore, a novel ternary event-triggering framework, referred to as the TEB-PF, is proposed by introducing the ternary event-triggering (TET) mechanism coupled with a non-Gaussian fusion strategy that jointly incorporates hybrid measurements within the particle filtering framework. Instead of using binary decision criteria, the proposed TET mechanism uses three local decision cases resulting in set-valued, quantized, and point-valued measurements. Due to the joint utilization of quantized and set-valued measurements in addition to point-valued ones, the proposed TEB-PF simultaneously reduces the communication overhead, in comparison to its binary triggering counterparts, while also improving the estimation accuracy, especially at low communication rates.
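    As an informal illustration of the triggering idea described above, the following sketch shows how a local sensor might classify each measurement into one of the three TET cases using two innovation thresholds. All names, thresholds, and the quantizer are hypothetical and not taken from the thesis; this is a minimal sketch of the concept, not the proposed TEB-PF.

```python
import numpy as np

def ternary_trigger(y, y_pred, delta_q, delta_s, n_bits=4, y_range=(-10.0, 10.0)):
    """Hypothetical ternary event-triggering rule for a local sensor.

    Compares the innovation |y - y_pred| against two thresholds:
      - above delta_s: transmit the raw (point-valued) measurement,
      - between delta_q and delta_s: transmit a coarsely quantized measurement,
      - below delta_q: transmit nothing; the fusion centre only knows the
        measurement lies in {y : |y - y_pred| <= delta_q} (set-valued).
    """
    innovation = abs(y - y_pred)
    if innovation > delta_s:
        return ("point", y)                      # costly but exact
    if innovation > delta_q:
        lo, hi = y_range
        levels = 2 ** n_bits
        q = lo + (hi - lo) * round((y - lo) / (hi - lo) * (levels - 1)) / (levels - 1)
        return ("quantized", q)                  # a few bits instead of a full float
    return ("set", (y_pred - delta_q, y_pred + delta_q))  # implicit information only

# Example: a particle-filter fusion centre would weight particles with the exact
# likelihood for "point" messages, a quantization-cell likelihood for "quantized"
# messages, and a set-membership indicator for "set" (no-transmission) epochs.
print(ternary_trigger(y=3.7, y_pred=3.5, delta_q=0.5, delta_s=2.0))
```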

    Computational Optimizations for Machine Learning

    The present book contains the 10 articles finally accepted for publication in the Special Issue “Computational Optimizations for Machine Learning” of the MDPI journal Mathematics, which cover a wide range of topics connected to the theory and applications of machine learning, neural networks and artificial intelligence. These topics include, among others, various classes of machine learning, such as supervised, unsupervised and reinforcement learning, as well as deep neural networks, convolutional neural networks, GANs, decision trees, linear regression, SVM, K-means clustering, Q-learning, temporal difference, deep adversarial networks and more. It is hoped that the book will be interesting and useful to those developing mathematical algorithms and applications in the domain of artificial intelligence and machine learning, as well as to those with the appropriate mathematical background who wish to become familiar with recent advances in the computational optimization mathematics of machine learning, which has nowadays permeated almost all sectors of human life and activity.

    Evolutionary computation for software testing

    A variety of products are undergoing a transformation from purely mechanical designs to designs with more and more software and electronic components. A prime example is the watch. Several decades ago, watches were purely mechanical; modern smart watches are almost completely electronic devices which rely heavily on software, and they offer far more features than just telling the current time. This change has had a crucial impact on how software is developed. A first attempt to control the rising complexity was the move to agile development practices such as extreme programming or Scrum. This rise in complexity affects not only the development process but also quality assurance and software testing. If a product contains more and more features, more tests are necessary to ensure quality standards. Furthermore, agile development practices work in an iterative manner, which leads to repetitive testing and increases the burden on the testing team. Within this thesis, we aimed to ease the pain of testing and examined a series of subproblems that arise. A key source of complexity is the number of test cases. We intended to reduce the number of test cases before they are executed manually or implemented as automated tests. To this end, we examined the test specification and, based on the requirements coverage of the individual tests, were able to identify redundant tests. We relied on a novel metaheuristic called GCAIS, which we improved iteratively. Another task is to control the remaining complexity. Testing is often time-critical, and an appropriate subset of the available tests must be chosen in order to get a quick insight into the status of the device under test. We examined this challenge in two different testing scenarios. The first scenario is located in semi-automated testing, where engineers execute a set of automated tests locally and closely observe the behaviour of the system under test. We extended GCAIS to compute test suites that satisfy different criteria if provided with sufficient search time. The second use case is located in fully automated testing in a continuous integration (CI) setting. CI focuses on frequent software build cycles which also include testing. These builds contain a testing stage which greatly emphasizes speed, so there, too, we have to compute the crucial tests. However, due to the nature of the process, we have to continuously recompute a test suite for each build, as the software and maybe even the test cases at hand have changed. Hence it is hard to compute the test suite ahead of time, and these tests have to be determined as part of the CI execution. We therefore switched to a computationally lightweight learning classifier system (LCS) to prioritize and select test cases. We integrated a series of our innovations, such as continuous priorities, experience replay and transfer learning, into an LCS known as XCSF. This enabled us to outperform a state-of-the-art artificial neural network which is used by companies such as Netflix. We further investigated how LCS can be made faster using parallelism and developed generic approaches which may run on any multicore computing device. This is of interest for our CI use case, as the build server's architecture is unknown. However, the methods are also independent of the concrete LCS and are not linked to our testing problem.
    We identified that many of the challenges faced in the CI use case have been tackled by Organic Computing (OC), for example the need to adapt to an ever-changing environment. Hence we relied on OC design principles to create a system architecture which wraps the developed LCS and integrates it into existing CI processes. The final system is robust and highly autonomous. A side effect of the high degree of autonomy is a high level of automation, which fits CI well. We also give insight into the usability and delivery of the full system to our industrial partner. Test engineers can easily integrate it with a few lines of code and need no knowledge of LCS or OC in order to use it. Another implication of the developed system is that OC's ideas and design principles can also be employed outside the field of embedded systems, showing that OC has a greater level of generality. The process of testing and correcting found errors is still only partially automated. We make a first step towards automating the entire process, drawing an analogy to OC's concept of self-healing. As a first proof of concept of this school of thought, we look at touch interfaces, where we can automatically manipulate the software to fulfill the specified behaviour, so that only a minimal amount of manual work is required.
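    As a rough illustration of the test-suite reduction problem mentioned above, the sketch below identifies redundant tests from their requirements coverage using a plain greedy set-cover heuristic. The thesis relies on the GCAIS metaheuristic rather than this heuristic; all names and data here are hypothetical.

```python
def reduce_test_suite(coverage):
    """Greedy set-cover baseline for requirements-based test suite reduction.

    coverage maps each test case to the set of requirements it covers.
    The thesis uses the GCAIS metaheuristic for this problem; the greedy
    heuristic below only illustrates the underlying optimization task.
    """
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        # pick the test covering the most still-uncovered requirements
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break
        selected.append(best)
        uncovered -= coverage[best]
    return selected

# Tests whose requirements are already covered by the selection are redundant.
tests = {
    "t1": {"R1", "R2"},
    "t2": {"R2"},          # redundant: subsumed by t1
    "t3": {"R3", "R4"},
    "t4": {"R4"},          # redundant: subsumed by t3
}
print(reduce_test_suite(tests))   # e.g. ['t1', 't3']
```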

    Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis

    Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
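    One concurrency pattern such surveys typically model is synchronous data parallelism, where replicas compute gradients on different shards of a mini-batch and average them before updating shared weights. The sketch below is a minimal, self-contained illustration of that pattern (with a toy least-squares model and an in-process stand-in for all-reduce), not code from the paper.

```python
import numpy as np

def allreduce_mean(grads):
    """Stand-in for an all-reduce: average gradients across replicas."""
    return sum(grads) / len(grads)

def data_parallel_step(w, batches, lr=0.1):
    """Synchronous data-parallel SGD step for a linear least-squares model.

    Each 'replica' computes the gradient on its own shard of the mini-batch;
    the averaged gradient is then applied everywhere, keeping replicas in sync.
    """
    local_grads = []
    for X, y in batches:                      # one (X, y) shard per replica
        residual = X @ w - y
        local_grads.append(2.0 * X.T @ residual / len(y))
    g = allreduce_mean(local_grads)           # communication step
    return w - lr * g

# Toy usage with two replicas
rng = np.random.default_rng(0)
w = np.zeros(3)
shards = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(2)]
for _ in range(50):
    w = data_parallel_step(w, shards)
print(w)
```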

    Advances in Evolutionary Algorithms

    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of this book is to present recent improvements, innovative ideas and concepts from a part of the huge field of evolutionary algorithms.

    Time Localization of Abrupt Changes in Cutting Process using Hilbert Huang Transform

    The cutting process is an extremely dynamic process influenced by different phenomena such as chip formation, dynamic responses, and the condition of the machining system elements. Different phenomena in the cutting zone leave signatures in different frequency bands of the signal acquired during process monitoring, so the time localization of the signal's frequency content is very important. An emerging technique for simultaneous analysis of a signal in the time and frequency domains, which can be used for time localization of frequency content, is the Hilbert-Huang Transform (HHT). It is based on empirical mode decomposition (EMD) of the signal into intrinsic mode functions (IMFs), i.e. simple oscillatory modes. The IMFs obtained using EMD can be processed with the Hilbert transform, and the instantaneous frequency of the signal can be computed. This paper gives a methodology for the time localization of a cutting process stop during intermittent turning. A cutting process stop leads to abrupt changes in the acquired signal, correlated with a certain frequency band. The frequency band related to the abrupt changes is localized in time using HHT. The potential and limitations of applying HHT in machining process monitoring are shown.
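    As a small illustration of the last step of the HHT pipeline, the sketch below computes the instantaneous frequency of a single IMF via the Hilbert transform (using scipy.signal.hilbert), assuming the IMF has already been extracted by EMD. The toy signal and parameters are hypothetical; an abrupt frequency change shows up as a jump in the frequency track, which is the effect exploited for time localization.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, fs):
    """Instantaneous frequency of a single IMF via the Hilbert transform.

    hilbert() returns the analytic signal; the derivative of its unwrapped
    phase gives the instantaneous frequency in Hz.  Abrupt changes (e.g. a
    cutting process stop) show up as jumps in this frequency track.
    """
    analytic = hilbert(imf)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) / (2.0 * np.pi) * fs

# Toy usage: a tone whose frequency drops abruptly halfway through the record
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
imf = np.sin(2 * np.pi * np.where(t < 0.5, 50.0, 20.0) * t)
f_inst = instantaneous_frequency(imf, fs)
print(f_inst[:5], f_inst[-5:])   # roughly 50 Hz at the start, 20 Hz at the end
```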

    Modelling, Monitoring, Control and Optimization for Complex Industrial Processes

    This reprint includes 22 research papers and an editorial, collected from the Special Issue "Modelling, Monitoring, Control and Optimization for Complex Industrial Processes", highlighting recent research advances and emerging research directions in complex industrial processes. This reprint aims to promote the research field and to benefit readers from both the academic community and industrial sectors.

    Universal Smart Grid Agent for Distributed Power Generation Management

    "Somewhere, there is always wind blowing or the sun shining." This maxim could lead the global shift from fossil to renewable energy sources, suggesting that there is enough energy available to be turned into electricity. But the already impressive numbers that are available today, along with the European Union's 20-20-20 goal – to power 20% of the EU energy consumption from renewables until 2020 –, might mislead us over the problem that the go-to renewables readily available rely on a primary energy source mankind cannot control: the weather. At the same time, the notion of the smart grid introduces a vast array of new data coming from sensors in the power grid, at wind farms, power plants, transformers, and consumers. The new wealth of information might seem overwhelming, but can help to manage the different actors in the power grid. This book proposes to view the problem of power generation and distribution in the face of increased volatility as a problem of information distribution and processing. It enhances the power grid by turning its nodes into agents that forecast their local power balance from historical data, using artificial neural networks and the multi-part evolutionary training algorithm described in this book. They pro-actively communicate power demand and supply, adhering to a set of behavioral rules this book defines, and finally solve the 0-1 knapsack problem of choosing offers in such a way that not only solves the disequilibrium, but also minimizes line loss, by elegant modeling in the Boolean domain. The book shows that the Divide-et-Impera approach of a distributed grid control can lead to an efficient, reliable integration of volatile renewable energy sources into the power grid

    IoT in smart communities, technologies and applications.

    The Internet of Things (IoT) is a system that integrates different devices and technologies, removing the need for human intervention. This enables the development of smart (or smarter) cities around the world. By hosting different technologies and allowing interactions between them, the Internet of Things has spearheaded the development of smart city systems for sustainable living, increased comfort and productivity for citizens. The IoT for smart cities has many different domains and draws upon various underlying systems for its operation. In this work, we provide a holistic coverage of the Internet of Things in smart cities by discussing the fundamental components that make up the IoT smart city landscape, the technologies that enable these domains to exist, the most prevalent practices and techniques used in these domains, as well as the challenges that the deployment of IoT systems for smart cities encounters and which need to be addressed for ubiquitous use of smart city applications. We also present a coverage of optimization methods and applications from a smart city perspective enabled by the Internet of Things. Towards this end, a mapping is provided of the most frequently encountered applications of computational optimization within IoT smart cities for five popular optimization methods: ant colony optimization, genetic algorithms, particle swarm optimization, artificial bee colony optimization and differential evolution. For each application identified, the algorithms used, the objectives considered, the nature of the formulation and the constraints taken into account are specified and discussed. Lastly, the data setup used by each covered work is also mentioned and directions for future work are identified. Within the smart health domain of IoT smart cities, human activity recognition has been a key study topic in the development of cyber-physical systems and assisted living applications. In particular, inertial sensor based systems have become increasingly popular because they do not restrict users’ movement and are also relatively simple to implement compared to other approaches. Fall detection is one of the most important tasks in human activity recognition. With an increasingly aging world population and an inclination by the elderly to live alone, the need to incorporate dependable fall detection schemes in smart devices such as phones and watches has gained momentum. Therefore, differentiating between falls and activities of daily living (ADLs) has been the focus of researchers in recent years, with very good results. However, one aspect of fall detection that has not been investigated much is direction- and severity-aware fall detection. Since a fall detection system aims to detect falls in people and notify medical personnel, it could be of added value to health professionals tending to a patient suffering from a fall to know the nature of the accident. In this regard, as a case study for smart health, four different experiments have been conducted for the task of fall detection with direction and severity consideration on two publicly available datasets. These four experiments not only tackle the problem at increasingly complicated levels (the first one considers a fall-only scenario and the other two a combined activity-of-daily-living and fall scenario) but also present methodologies which outperform state-of-the-art techniques, as discussed. Lastly, recommendations for future work are provided for researchers.
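    For intuition about the signal-level pattern that inertial-sensor fall detectors exploit, the sketch below implements a naive threshold rule on the acceleration magnitude: a near-free-fall dip followed shortly by an impact spike. The thresholds and the toy trace are hypothetical, and the case-study methodologies above are learned from the public datasets rather than hand-tuned rules like this one.

```python
import numpy as np

def detect_fall(acc_xyz, fs, free_fall_g=0.5, impact_g=2.5, window_s=1.0):
    """Naive threshold-based fall detector on a tri-axial accelerometer trace.

    Flags a fall when a near-free-fall dip in acceleration magnitude is
    followed, within `window_s` seconds, by a large impact spike.  The
    direction/severity-aware models in the case study are learned from data;
    this sketch only illustrates the raw signal pattern they exploit.
    """
    mag = np.linalg.norm(acc_xyz, axis=1)          # magnitude in g
    window = int(window_s * fs)
    dips = np.flatnonzero(mag < free_fall_g)
    for i in dips:
        if np.any(mag[i:i + window] > impact_g):
            return True, i / fs                     # fall detected at time i/fs
    return False, None

# Toy usage: 1 g at rest, a brief free-fall dip, then an impact spike
fs = 50
trace = np.ones((200, 3)) / np.sqrt(3)             # |a| = 1 g at rest
trace[100:110] *= 0.2                              # free fall
trace[112] *= 4.0                                  # impact
print(detect_fall(trace, fs))
```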

    A Methodology to Enhance Quantitative Technology Evaluation Through Exploration of Employment Concepts in Engagement Analysis

    The process of designing a new system has often been treated as a purely technological problem, where the infusion or synthesis of new technologies forms the basis of progress. However, recent trends in design and analysis methodologies have tried to shift away from the narrow scope of technology-centric approaches. One such trend is the increase in analysis scope from the level of an isolated system to that of multiple interacting systems. Analysis under this broader scope allows for the exploration of non-materiel solutions to existing or future problems. Solutions of this type can reduce the cost of closing capability gaps by mitigating the need to procure new systems to achieve desired levels of performance. In particular, innovations in employment concepts can enhance existing, evolutionary, or revolutionary materiel solutions. The task of experimenting with non-materiel solutions often falls to operators after the system has been designed and produced. This raises the question of whether the chosen design adequately accounted for the possibility of innovative employment concepts which operators might discover. Attempts can be made to bring the empirical knowledge possessed by skilled operators upstream into the design process. However, care must be taken to ensure such attempts do not introduce unwanted bias, and there can be significant difficulty in translating human intuition into an appropriate modeling paradigm for analysis. Furthermore, the capacity of human operators to capitalize on the potential benefits of a given technology may be limited or otherwise infeasible in design space explorations where the number of alternatives becomes very large. This is especially relevant to revolutionary concepts to which prior knowledge may not be applicable. Each of these complicating factors is exacerbated by interactions between systems, where changes in the decision-making processes of individual entities can greatly influence outcomes. This necessitates exploration and analysis of employment concepts for all relevant entities, not only those to which the technology applies. This research sought to address the issues of exploring employment concepts in the early phases of the system design process. A characterization of the problem identified several gaps in existing methodologies, particularly with respect to the representation, generation, and evaluation of alternative employment concepts. Relevant theories, including behavioral psychology, control theory, and game theory, were identified to facilitate closure of these gaps. However, these theories also introduced technical challenges which had to be overcome. These challenges stemmed from systematic problems such as the curse of dimensionality, temporal credit assignment, and the complexities of entity interactions. A candidate approach was identified through a thorough review of the available literature: multi-agent reinforcement learning. Experiments show the proposed approach can be used to generate highly effective models of behavior which could outperform existing models on a representative problem. It was further shown that models produced by this new method can achieve consistently high levels of performance in competitive scenarios. Additional experimentation demonstrated how incorporating design variables into the state space allowed models to learn policies which were effective across a continuous design space and outperformed their respective baselines.
    All of these results were obtained without reliance on prior knowledge, mitigating risks in, and enhancing the capabilities of, the analysis process. Lastly, the completed methodology was applied to the design of a fighter aircraft for one-on-one, gun-only air combat engagements to demonstrate its efficacy on, and applicability to, more complex problems.
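    To make the state-augmentation idea concrete, the sketch below shows a single-agent, linear Q-learning update in which the design variables are appended to the observation, so one learned policy can span a continuous design space. Every name, dimension, and reward here is hypothetical, and the actual methodology is multi-agent reinforcement learning with far richer models than this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_values(w, obs, design):
    """Linear Q-function over a state that appends design variables to the
    engagement observation, so one learned policy spans the design space."""
    state = np.concatenate([obs, design, [1.0]])   # bias term
    return w @ state                               # one value per discrete action

def q_learning_step(w, obs, design, action, reward, next_obs, alpha=0.1, gamma=0.99):
    """One temporal-difference update (illustrative; the thesis setting is
    multi-agent and far richer than this single-agent sketch)."""
    state = np.concatenate([obs, design, [1.0]])
    target = reward + gamma * np.max(q_values(w, next_obs, design))
    td_error = target - w[action] @ state
    w[action] += alpha * td_error * state
    return w

# Toy usage: 3 actions, 4-dimensional observation, 2 design variables
n_actions, obs_dim, design_dim = 3, 4, 2
w = np.zeros((n_actions, obs_dim + design_dim + 1))
design = np.array([0.7, 0.3])                      # e.g. normalized design parameters
obs = rng.normal(size=obs_dim)
action = int(np.argmax(q_values(w, obs, design)))  # greedy action (add exploration in practice)
w = q_learning_step(w, obs, design, action, reward=1.0, next_obs=rng.normal(size=obs_dim))
print(q_values(w, obs, design))
```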