388 research outputs found

    Memory-Enhanced Evolutionary Robotics: The Echo State Network Approach

    Interested in Evolutionary Robotics, this paper focuses on the acquisition and exploitation of memory skills. The targeted task is a well-studied benchmark problem, the Tolman maze, which in principle requires the robotic controller to feature some (limited) counting abilities. An elaborate experimental setting is used to enforce controller generality and to prevent opportunistic evolution from mimicking deliberative skills through smart reactive heuristics. The paper compares the prominent NEAT approach, which achieves non-parametric optimization of neural networks, with the evolutionary optimization of Echo State Networks, which belongs to the recent field of Reservoir Computing. While both search spaces offer sufficient expressivity and enable the modelling of complex dynamic systems, the latter is amenable to robust parametric, linear optimization with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
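The echo-state idea the abstract contrasts with NEAT can be sketched in a few lines: a fixed random recurrent reservoir provides the dynamics, and only a linear readout is optimized. The sketch below is illustrative only; the network sizes, scalings, and the delayed-recall memory task are assumptions, and it uses least squares where the paper applies CMA-ES to the same readout parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 50

# Fixed random reservoir: only the linear readout is trained/evolved.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1 (echo-state property)

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy memory task: reproduce the input delayed by 3 steps.
T, delay = 500, 3
u = rng.uniform(-1, 1, (T, 1))
y = np.roll(u[:, 0], delay)

X = run_reservoir(u)[50:]                      # drop washout transient
y = y[50:]
W_out = np.linalg.lstsq(X, y, rcond=None)[0]   # linear readout fit
pred = X @ W_out
print(round(float(np.corrcoef(pred[delay:], y[delay:])[0, 1]), 2))
```

Because the trainable part is just the linear vector `W_out`, any parametric black-box optimizer such as CMA-ES can be applied to it directly, which is the property the abstract exploits.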


    Training Passive Photonic Reservoirs with Integrated Optical Readout

    As Moore's law comes to an end, neuromorphic approaches to computing are on the rise. One of these, passive photonic reservoir computing, is a strong candidate for computing at high bitrates (> 10 Gbps) with low energy consumption. Currently, though, both benefits are limited by the necessity of performing training and readout operations in the electrical domain. Efforts are therefore underway in the photonic community to design an integrated optical readout, which allows all operations to be performed in the optical domain. In addition to the technological challenge of designing such a readout, new algorithms have to be designed to train it. Foremost, suitable algorithms need to be able to deal with the fact that the actual on-chip reservoir states are not directly observable. In this work, we investigate several options for such a training algorithm and propose a solution in which the complex states of the reservoir can be observed by appropriately setting the readout weights while iterating over a predefined input sequence. We perform numerical simulations to compare our method with an ideal baseline requiring full observability, as well as with an established black-box optimization approach (CMA-ES). Accepted for publication in IEEE Transactions on Neural Networks and Learning Systems (TNNLS-2017-P-8539.R1), copyright 2018 IEEE. This research was funded by the EU Horizon 2020 PHRESCO Grant (Grant No. 688579) and the BELSPO IAP P7-35 program Photonics@be. 11 pages, 9 figures.
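The observation trick described in the abstract, reading out hidden states by setting the readout weights appropriately while replaying a fixed input sequence, can be illustrated with a toy simulation. This is a sketch under assumed dimensions, and it idealizes the hardware by ignoring detector nonlinearity: selecting unit-vector readout weights makes the single observable output equal to one hidden state at a time.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, T = 8, 200

# Hidden complex reservoir states: in hardware these are NOT directly observable.
hidden_states = rng.normal(size=(T, n_states)) + 1j * rng.normal(size=(T, n_states))

def chip_output(weights):
    """Only the weighted sum of states reaches the single output port."""
    return hidden_states @ weights

# Observe state i by setting the readout to the i-th unit vector and
# replaying the same predefined input sequence for each i.
recovered = np.column_stack(
    [chip_output(np.eye(n_states)[i]) for i in range(n_states)]
)

# With the states recovered, the linear readout can be trained offline.
target = rng.normal(size=T)
w = np.linalg.lstsq(recovered, target, rcond=None)[0]
print(np.allclose(recovered, hidden_states))
```

The replay step is what makes this practical on-chip: the input sequence is fixed, so the T runs with different weight settings all traverse identical state trajectories.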

    Process analytical technology in food biotechnology

    Biotechnology is an area where precision and reproducibility are vital, since its products are often food, pharmaceutical, or cosmetic products and therefore very close to the human being. To avoid human error during production or quality evaluation and to increase the optimal utilization of raw materials, a very high degree of automation is desired. Tools in the food and chemical industry that aim to reach this higher degree of automation are summarized in an initiative called Process Analytical Technology (PAT). The scope of PAT is to provide new measurement technologies for the purpose of closed-loop control in biotechnological processes. These processes are among the most demanding with regard to control issues because a biological component is very often rate-determining. Most important for any automation attempt is deep process knowledge, which can only be achieved via appropriate measurements. These measurements can either be carried out directly, measuring a crucial physical value, or, if that value is not accessible due to a lack of technology or a complicated sample state, via a soft-sensor. Even after several years, the ideal aim of the PAT initiative is not fully implemented in industry and in many production processes. On the one hand, much effort still needs to be put into the development of more general algorithms that are easier to implement and, especially, more reliable. On the other hand, not all the available advances in this field are employed yet: potential users seem to stick to approved methods and show certain reservations towards new technologies.
    Biotechnology is a scientific field in which high accuracy and repeatability play an important role. This is because the manufactured products very often belong to the areas of food, pharmaceuticals, or cosmetics and therefore particularly affect people. To avoid human error during production, to assure product quality, and to guarantee optimal utilization of the raw materials, a particularly high degree of automation is pursued. The tools used for this purpose in the food and chemical industries are summarized in the Process Analytical Technology (PAT) initiative. The goal of PAT is the development of reliable new methods for describing processes and realizing automatic control strategies. Biotechnological processes are among the most demanding control tasks, since in most cases a biological component is the decisive factor. Decisive for a successful control strategy is a high degree of process understanding, which can be obtained either through direct measurement of the decisive physical, chemical, or biological quantities or through a soft-sensor. In summary, even after several years of promotion, the final goal of the PAT initiative has been fully realized neither in industry nor in many production processes. On the one hand, this is certainly because much work still has to be invested in the generalization of algorithms, which must be easier to implement and, above all, even more reliable in operation. On the other hand, algorithms, control strategies, and original approaches for a novel sensor and a soft-sensor that show great potential were presented. Not least, potential users must adopt new strategies and shed their reservations towards unfamiliar technologies.
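A soft-sensor, as described above, infers a quantity that cannot be measured online from cheap online signals. A minimal sketch follows; all signals, noise levels, and coefficients are synthetic assumptions for illustration, not values from the thesis. Biomass stands in for the hard-to-measure quantity, calibrated against a few offline reference samples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy bioprocess data: biomass is hard to measure online, but off-gas CO2
# and base consumption are cheap online signals correlated with it.
t = np.linspace(0, 10, 100)
biomass = 0.5 * np.exp(0.3 * t)                       # "true" value (offline assay)
co2 = 0.8 * biomass + rng.normal(0, 0.05, t.size)     # online sensor 1
base = 0.3 * biomass + rng.normal(0, 0.05, t.size)    # online sensor 2

# Soft-sensor: linear model mapping online signals to the hidden quantity,
# calibrated against every 10th sample (the sparse offline references).
X = np.column_stack([co2, base, np.ones_like(t)])
coef = np.linalg.lstsq(X[::10], biomass[::10], rcond=None)[0]
estimate = X @ coef
print(round(float(np.max(np.abs(estimate - biomass))), 2))
```

In practice such models range from this kind of linear regression up to mechanistic observers; the closed-loop controller then acts on `estimate` instead of the unavailable direct measurement.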

    Determination of intrinsic physical properties of porous media by applying Bayesian Optimization to inverse problems in Laplace NMR relaxometry

    Nuclear magnetic resonance (NMR) longitudinal (T1) and transverse (T2) relaxation time distributions are widely used for the characterization of porous media. Subject to simplifying assumptions, predictions about pore size, permeability, and fluid content can be made. Numerical forward models based on high-resolution images are employed to naturally incorporate structural heterogeneity and diffusive motion without limiting assumptions, offering alternative interpretation approaches. Extracting the required multiple intrinsic parameters of the system poses an ill-conditioned inverse problem in which multiple scales are covered by the underlying microstructure. Three general and robust inverse solution workflows (ISW), utilizing Bayesian optimization to estimate intrinsic physical quantities from the integration of pore-scale forward modeling and experimental measurements of macroscopic system responses, are developed. A single-task ISW identifies multiple intrinsic properties for a single core by minimizing the deviation between simulated and measured T2 distributions. A multi-task ISW efficiently identifies the same set of unknown quantities for different cores by leveraging information from completed tasks using transfer learning. Finally, a dual-task ISW inspired by the multi-task ISW incorporates transfer learning for the simultaneous statistical modeling of T1 and T2 distributions, providing robust estimates of T1 and T2 intrinsic properties. A multi-modal search strategy comprising the multi-start L-BFGS-B optimizer and the social-learning particle swarm optimizer, together with a multi-modal solution analysis procedure, is applied in these workflows to identify non-unique solution sets.
    The performance of the single-task ISW is demonstrated on T2 relaxation responses of a Bentheimer sandstone, extracting three physical parameters simultaneously, and the results enable the multi-task ISW to study the spatial variability of the three physical quantities across three Bentheimer sandstone cores taken from two different blocks. The performance of the dual-task ISW is demonstrated on the identification of five physical quantities, with two extra T1-related unknowns. The effect of the signal-to-noise ratio (SNR) on the identified parameter values is also demonstrated. These inverse solution workflows enable the use of classical interpretation techniques and local analysis of responses based on numerical simulation.
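The multi-modal search strategy mentioned above can be illustrated on a toy ill-posed inverse problem: multi-start L-BFGS-B local searches collect a set of distinct parameter vectors that all reproduce the measured response. The forward model, bounds, and tolerances below are invented for illustration; the actual workflows combine such local searches with Bayesian optimization and a social-learning particle swarm optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Toy ill-posed inverse problem: several parameter sets reproduce the same
# simulated response, so a single local optimization is not enough.
def forward(p):
    return np.sin(3 * p[0]) + p[1] ** 2

measured = forward(np.array([0.4, 0.5]))

def misfit(p):
    return (forward(p) - measured) ** 2

# Multi-start L-BFGS-B: launch local searches from random points and keep
# every distinct near-zero-misfit solution (the non-unique solution set).
rng = np.random.default_rng(3)
solutions = []
for start in rng.uniform(-2, 2, (30, 2)):
    res = minimize(misfit, start, method="L-BFGS-B", bounds=[(-2, 2)] * 2)
    if res.fun < 1e-6 and not any(
        np.allclose(res.x, s, atol=1e-2) for s in solutions
    ):
        solutions.append(res.x)

print(len(solutions))  # multiple distinct minimizers
```

A multi-modal analysis step would then cluster or rank these solutions, which is how non-uniqueness is surfaced rather than hidden by reporting a single optimum.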

    Review and Classification of Bio-inspired Algorithms and Their Applications

    Scientists have long looked to nature and biology to understand and model solutions for complex real-world problems. The study of bionics bridges the biological structures, functions, and organizational principles found in nature with our modern technologies, and numerous mathematical and metaheuristic algorithms have been developed through this process of transferring knowledge from lifeforms to human technologies. The output of bionics research includes not only physical products but also various optimization methods that can be applied in different areas. Related algorithms can broadly be divided into four groups: evolutionary-based bio-inspired algorithms, swarm-intelligence-based bio-inspired algorithms, ecology-based bio-inspired algorithms, and multi-objective bio-inspired algorithms. Bio-inspired algorithms such as neural networks, ant colony algorithms, and particle swarm optimization have been applied in almost every area of science, engineering, and business management, with a dramatic increase in the number of relevant publications. This paper provides a systematic, pragmatic, and comprehensive review of the latest developments in all four of these groups.
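As a concrete example of the swarm-intelligence group surveyed here, a minimal particle swarm optimization on a sphere objective can be written in a few lines. The hyperparameters are typical textbook values, not tied to any particular surveyed paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    """Simple convex test objective: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

n_particles, dim, iters = 20, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()                             # each particle's best position
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()     # swarm's best position

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Velocity update: inertia + pull toward personal best + pull toward swarm best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(round(sphere(gbest), 6))
```

The same skeleton underlies most swarm-intelligence variants: only the velocity-update rule and neighborhood topology change between algorithms.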

    Extraction and Detection of Fetal Electrocardiograms from Abdominal Recordings

    The non-invasive fetal ECG (NIFECG), derived from abdominal surface electrodes, offers novel diagnostic possibilities for prenatal medicine. Despite its straightforward applicability, NIFECG signals are usually corrupted by many interfering sources, most significantly by the maternal ECG (MECG), whose amplitude usually exceeds that of the fetal ECG (FECG) several times over. The presence of additional noise sources (e.g. muscular/uterine noise, electrode motion, etc.) further degrades the signal-to-noise ratio (SNR) of the FECG. These interfering sources, which typically show strongly non-stationary behavior, render FECG extraction and fetal QRS (FQRS) detection demanding signal processing tasks. In this thesis, several of the challenges of NIFECG signal analysis were addressed. To improve NIFECG extraction, the dynamic model of a Kalman filter approach was extended, providing a more adequate representation of the mixture of FECG, MECG, and noise. In addition, novel metrics for FECG signal quality assessment were proposed and evaluated. These quality metrics were then applied to improve FQRS detection and fetal heart rate estimation, based on an innovative evolutionary algorithm and on Kalman filtering signal fusion, respectively. The elaborated methods were characterized in depth using both simulated and clinical data produced throughout this thesis. To stress-test extraction algorithms under controlled circumstances, a comprehensive benchmark protocol was created and contributed to an extensively improved NIFECG simulation toolbox. The developed toolbox and a large simulated dataset were released under an open-source license, allowing researchers to compare results in a reproducible manner. Furthermore, to validate the developed approaches under more realistic and challenging situations, a clinical trial was performed in collaboration with the University Hospital of Leipzig.
    Aside from serving as a test set for the developed algorithms, the clinical trial enabled exploratory research into the pathophysiological variables and measurement-setup configurations that lead to changes in the abdominal signal's SNR. With such broad scope, this dissertation addresses many of the current aspects of NIFECG analysis and provides suggestions for establishing NIFECG in clinical settings.
    Outline: Abstract; Acknowledgment; Contents; List of Figures; List of Tables; List of Abbreviations; List of Symbols
    (1) Introduction: 1.1 Background and Motivation; 1.2 Aim of this Work; 1.3 Dissertation Outline; 1.4 Collaborators and Conflicts of Interest
    (2) Clinical Background: 2.1 Physiology (2.1.1 Changes in the maternal circulatory system; 2.1.2 Intrauterine structures and feto-maternal connection; 2.1.3 Fetal growth and presentation; 2.1.4 Fetal circulatory system; 2.1.5 Fetal autonomic nervous system; 2.1.6 Fetal heart activity and underlying factors); 2.2 Pathology (2.2.1 Premature rupture of membrane; 2.2.2 Intrauterine growth restriction; 2.2.3 Fetal anemia); 2.3 Interpretation of Fetal Heart Activity (2.3.1 Summary of clinical studies on FHR/FHRV; 2.3.2 Summary of studies on heart conduction); 2.4 Chapter Summary
    (3) Technical State of the Art: 3.1 Prenatal Diagnostic and Measuring Technique (3.1.1 Fetal heart monitoring; 3.1.2 Related metrics); 3.2 Non-Invasive Fetal ECG Acquisition (3.2.1 Overview; 3.2.2 Commercial equipment; 3.2.3 Electrode configurations; 3.2.4 Available NIFECG databases; 3.2.5 Validity and usability of the non-invasive fetal ECG); 3.3 Non-Invasive Fetal ECG Extraction Methods (3.3.1 Overview of the non-invasive fetal ECG extraction methods; 3.3.2 Kalman filtering basics; 3.3.3 Nonlinear Kalman filtering; 3.3.4 Extended Kalman filter for FECG estimation); 3.4 Fetal QRS Detection (3.4.1 Merging multichannel fetal QRS detections; 3.4.2 Detection performance); 3.5 Fetal Heart Rate Estimation (3.5.1 Preprocessing the fetal heart rate; 3.5.2 Fetal heart rate statistics); 3.6 Fetal ECG Morphological Analysis; 3.7 Problem Description; 3.8 Chapter Summary
    (4) Novel Approaches for Fetal ECG Analysis: 4.1 Preliminary Considerations; 4.2 Fetal ECG Extraction by means of Kalman Filtering (4.2.1 Optimized Gaussian approximation; 4.2.2 Time-varying covariance matrices; 4.2.3 Extended Kalman filter with unknown inputs; 4.2.4 Filter calibration); 4.3 Accurate Fetal QRS and Heart Rate Detection (4.3.1 Multichannel evolutionary QRS correction; 4.3.2 Multichannel fetal heart rate estimation using Kalman filters); 4.4 Chapter Summary
    (5) Data Material: 5.1 Simulated Data (5.1.1 The FECG Synthetic Generator (FECGSYN); 5.1.2 The FECG Synthetic Database (FECGSYNDB)); 5.2 Clinical Data (5.2.1 Clinical NIFECG recording; 5.2.2 Scope and limitations of this study; 5.2.3 Data annotation: signal quality and fetal amplitude; 5.2.4 Data annotation: fetal QRS annotation); 5.3 Chapter Summary
    (6) Results for Data Analysis: 6.1 Simulated Data (6.1.1 Fetal QRS detection; 6.1.2 Morphological analysis); 6.2 Own Clinical Data (6.2.1 FQRS correction using the evolutionary algorithm; 6.2.2 FHR correction by means of Kalman filtering)
    (7) Discussion and Prospective: 7.1 Data Availability (7.1.1 New measurement protocol); 7.2 Signal Quality; 7.3 Extraction Methods; 7.4 FQRS and FHR Correction Algorithms
    (8) Conclusion
    References
    (A) Appendix A - Signal Quality Annotation; (B) Appendix B - Fetal QRS Annotation; (C) Appendix C - Data Recording GU
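The Kalman filtering at the core of the extraction methods follows a simple predict/update loop; the thesis extends a nonlinear (extended) variant with a richer dynamic model for the FECG/MECG mixture. A minimal scalar sketch of that loop (synthetic random-walk data and assumed noise levels, not the thesis's ECG model):

```python
import numpy as np

rng = np.random.default_rng(5)

# Scalar Kalman filter: a random-walk state observed in heavy noise.
T = 300
truth = np.cumsum(rng.normal(0, 0.1, T))   # hidden state trajectory
obs = truth + rng.normal(0, 1.0, T)        # noisy measurements

q, r = 0.1 ** 2, 1.0 ** 2                  # process / measurement variances
x, p = 0.0, 1.0                            # state estimate and its variance
est = []
for z in obs:
    p = p + q                              # predict: variance grows with the model
    k = p / (p + r)                        # Kalman gain balances model vs. data
    x = x + k * (z - x)                    # update with the measurement residual
    p = (1 - k) * p
    est.append(x)
est = np.array(est)

print(np.mean((est - truth) ** 2) < np.mean((obs - truth) ** 2))
```

The extended Kalman filter used for FECG estimation replaces the scalar random walk with a nonlinear ECG waveform model, but the predict/update structure is identical.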

    Efficient Optimization and Robust Value Quantification of Enhanced Oil Recovery Strategies

    With increasing demand for hydrocarbon reservoir products such as oil, and with the difficulty of finding green oil fields, the importance of Enhanced Oil Recovery (EOR) methods such as polymer, Smart Water, and solvent flooding for the further development of existing fields cannot be overemphasized. For reservoir profitability and reduced environmental impact, it is crucial to choose appropriate well control settings of EOR methods for a given reservoir characterization. Finding appropriate well settings requires solving a constrained optimization problem with suitable numerical solution methods. Conventionally, the solution method requires many iterations involving several computationally demanding function evaluations before converging to an appropriate near-optimum. The major subject of this thesis is to develop an efficient and accurate solution method for the constrained optimization problems associated with EOR methods, for their value quantification and ranking in the face of reservoir uncertainties. The first contribution of the thesis develops a solution method based on inexact line search (with Ensemble-Based Optimization (EnOpt) for approximate gradient computation) for robust constrained optimization problems associated with polymer, Smart Water, and solvent flooding. Here, the objective function is the expectation of the Net Present Value (NPV) over given geological realizations. For a given set of well settings, the NPV function is defined based on the EOR simulation model, which follows from an appropriate extension of the black-oil model. The developed solution method is used to quantify the economic benefits and the ranking of EOR methods for different oil reservoirs developed to mimic North Sea reservoirs. Performing the entire optimization routine in a transformed domain, along with truncations, has been common practice for handling simple linear constraints in reservoir optimization.
    Aside from the fact that this method degrades the quality of the gradient computation, it is complicated to use for non-linear constraints. The second contribution of this thesis therefore proposes a technique based on the exterior penalty method for handling general linear and non-linear constraints in reservoir optimization problems, improving the quality of the EnOpt gradient computation and yielding a more efficient optimization algorithm. Because the NPV function is computationally expensive, owing to the costly reservoir simulation of EOR methods, the solution method for the underlying EOR optimization problem becomes inefficient, especially for large reservoir problems. To speed up the overall computation, this thesis introduces a novel full-order-model (FOM)-based certified adaptive machine learning optimization procedure to locally approximate the expensive NPV function. A supervised feedforward deep neural network (DNN) is employed to locally create the surrogate model. In the FOM-based optimization algorithm of this study, several FOM NPV function evaluations are required by the EnOpt method to approximate the gradient at each (outer) iteration until convergence. To limit the number of FOM-based evaluations, surrogate models are built locally to replace the FOM-based NPV function at each outer iteration, and an inner optimization routine then proceeds until convergence. The surrogate model is adapted, using an FOM-based criterion, where necessary until convergence. Demonstrating the methodology on a polymer optimization problem with a benchmark model yields an improved optimum and is found to be more efficient than the full-order-model optimization procedure.
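The exterior penalty idea of the second contribution can be sketched on a toy problem: constraint violations are added to the objective as a quadratic penalty whose weight grows across outer iterations, so the unconstrained optimizer is steadily pushed toward the feasible set. The toy NPV, constraint, and penalty schedule below are assumptions for illustration, not the thesis's simulator-based formulation.

```python
import numpy as np
from scipy.optimize import minimize

def npv(u):
    """Toy stand-in for the expensive simulator-based NPV (to be maximized)."""
    return -(u[0] - 1) ** 2 - (u[1] - 2) ** 2 + 10

def constraint(u):
    """Feasible when g(u) <= 0; here a simple nonlinear-style resource limit."""
    return u[0] + u[1] - 2.0

def penalized(u, mu):
    # Exterior penalty: minimize -NPV plus a quadratic charge for violation.
    return -npv(u) + mu * max(0.0, constraint(u)) ** 2

u = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0]:      # outer loop: increase penalty weight
    u = minimize(lambda v: penalized(v, mu), u, method="L-BFGS-B").x

print(round(constraint(u), 2))             # violation shrinks as mu grows
```

Unlike domain transformations with truncation, this keeps the objective smooth outside the feasible set, which is what preserves the quality of the (EnOpt-style) gradient estimates.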