Enhanced parallel Differential Evolution algorithm for problems in computational systems biology
[Abstract] Many key problems in computational systems biology and bioinformatics can be formulated and solved using a global optimization framework. The complexity of the underlying mathematical models requires the use of efficient solvers in order to obtain satisfactory results in reasonable computation times. Metaheuristics are gaining recognition in this context, with Differential Evolution (DE) as one of the most popular methods. However, for most realistic applications, like those considering parameter estimation in dynamic models, DE still requires excessive computation times.
Here we consider this latter class of problems and present several enhancements to DE based on the introduction of additional algorithmic steps and the exploitation of parallelism. In particular, we propose an asynchronous parallel implementation of DE, extended with improved heuristics to exploit the specific structure of parameter estimation problems in computational systems biology. The proposed method is evaluated on two types of benchmark problems: (i) black-box global optimization problems and (ii) calibration of non-linear dynamic models of biological systems, obtaining excellent results both in terms of solution quality and in terms of speedup and scalability.
Funding: Ministerio de Economía y Competitividad (DPI2011-28112-C04-03); Consejo Superior de Investigaciones Científicas (PIE-201170E018); Ministerio de Ciencia e Innovación (TIN2013-42148-P); Galicia, Consellería de Cultura, Educación e Ordenación Universitaria (GRC2013/05)
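The serial DE scheme that such enhancements build on can be sketched as follows. This is a minimal DE/rand/1/bin sketch, not the authors' asynchronous parallel implementation; the parameter names (`F`, `CR`, `pop_size`) follow standard DE conventions, and the sphere function is only an illustrative objective:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           max_gens=200, seed=0):
    """Minimal DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(x) for x in pop]
    for _ in range(max_gens):
        for i in range(pop_size):
            # pick three distinct members, all different from i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # ensure at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clip to the box
            ft = f(trial)
            if ft <= fitness[i]:  # greedy selection
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]

# Example: minimise the 3-D sphere function
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

An asynchronous parallel variant would evaluate trial vectors in worker processes and update the population as results arrive, rather than generation by generation.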
Control Theory: Mathematical Perspectives on Complex Networked Systems
Control theory is an interdisciplinary field located at the crossroads of pure and applied mathematics with systems engineering and the sciences. Its range of applicability and its techniques evolve rapidly with new developments in communication systems and electronic data processing. Thus, in recent years networked control systems emerged as a new fundamental topic, which combines complex communication structures with classical control methods and requires new mathematical methods. A substantial number of contributions to this workshop were devoted to the control of networks of systems. This was complemented by a series of lectures on other current topics like fundamentals of nonlinear control systems, model reduction and identification, algorithmic aspects in control, as well as open problems in control.
Causal decomposition of complex systems and prediction of chaos using machine learning
We live in a complex system. It is therefore essential to possess techniques to analyze and comprehend its intricate dynamics in order to improve decision-making. The objective of this dissertation is to contribute to research that enhances our ability to make these complex systems more transparent to us.
First, we illustrate the impact on practical applications when nonlinearity - an often disregarded factor in causal inference - is taken into account. To this end, we investigate the causal relationships within these systems, particularly shedding light on the distinction between linear and nonlinear drivers of causality. After developing the necessary methods, we apply them to a real-world use case and demonstrate that slight adjustments to certain financial market frameworks can yield considerable advantages by resolving the correlation-causation fallacy.
Subsequently, once the linear and nonlinear causal connections are understood, we can derive governing equations from the underlying causality structure to enhance the interpretability of models and predictions. By fine-tuning the parameters of these equations through the phenomenon of synchronization of chaos, we can ensure that they optimally represent the data.
Nevertheless, not all complex systems can be accurately described by governing equations. Therefore, the application of machine learning techniques such as reservoir computing to the prediction of chaotic systems offers significant data-driven advantages. While their architecture is relatively simple, full interpretability and hardware realization still depend on increased efficiency and reduced data requirements. This dissertation presents some of the necessary modifications to the traditional reservoir computing architecture to bring physical reservoir computing closer to realization.
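The reservoir computing idea mentioned above can be sketched minimally as follows, assuming a standard echo state network update (random sparse reservoir, tanh nonlinearity); this demonstrates the generic fading-memory (echo-state) property, not any of the dissertation's specific modifications. All sizes and constants are illustrative:

```python
import math
import random

def make_reservoir(n=20, density=0.3, rho=0.8, seed=3):
    """Random sparse reservoir, rescaled so its infinity norm equals rho.
    The infinity norm upper-bounds the spectral radius, and tanh is
    1-Lipschitz, so rho < 1 guarantees fading memory (echo-state property)."""
    rng = random.Random(seed)
    W = [[rng.uniform(-1.0, 1.0) if rng.random() < density else 0.0
          for _ in range(n)] for _ in range(n)]
    norm = max(sum(abs(w) for w in row) for row in W) or 1.0
    W = [[rho * w / norm for w in row] for row in W]
    w_in = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    return W, w_in

def step(W, w_in, x, u):
    """One reservoir update: x' = tanh(W x + w_in * u)."""
    n = len(x)
    return [math.tanh(sum(W[i][j] * x[j] for j in range(n)) + w_in[i] * u)
            for i in range(n)]

# Echo-state property: two different initial states, driven by the same
# input signal, converge to the same trajectory.
rng = random.Random(0)
W, w_in = make_reservoir()
xa = [rng.uniform(-1, 1) for _ in range(20)]
xb = [rng.uniform(-1, 1) for _ in range(20)]
for t in range(300):
    u = math.sin(0.2 * t)
    xa = step(W, w_in, xa, u)
    xb = step(W, w_in, xb, u)
gap = max(abs(a - b) for a, b in zip(xa, xb))  # shrinks like rho**t
```

In a full setup, a linear readout would then be trained on the reservoir states; only that readout is fitted, which is what keeps the architecture simple.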
Asynchronous Gossip for Averaging and Spectral Ranking
We consider two variants of the classical gossip algorithm. The first variant
is a version of asynchronous stochastic approximation. We highlight a
fundamental difficulty associated with the classical asynchronous gossip
scheme, viz., that it may not converge to a desired average, and suggest an
alternative scheme based on reinforcement learning that has guaranteed
convergence to the desired average. We then discuss a potential application to
a wireless network setting with simultaneous link activation constraints. The
second variant is a gossip algorithm for distributed computation of the
Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant
draws upon a reinforcement learning algorithm for an average cost controlled
Markov decision problem, the second variant draws upon a reinforcement learning
algorithm for risk-sensitive control. We then discuss potential applications of
the second variant to ranking schemes, reputation networks, and principal
component analysis.
Comment: 14 pages, 7 figures. Minor revision.
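The classical pairwise gossip scheme that the first variant builds on can be sketched as follows; this is a minimal simulation of randomized pairwise averaging, not the asynchronous stochastic-approximation or reinforcement-learning variants analyzed in the paper. The ring topology and node values are illustrative assumptions:

```python
import random

def gossip_average(values, edges, rounds=2000, seed=0):
    """Randomized pairwise gossip: at each step one edge (i, j) is
    activated and both endpoints replace their values by the midpoint.
    The sum is conserved, so values converge to the network average."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        m = 0.5 * (x[i] + x[j])
        x[i] = x[j] = m
    return x

# Ring of 4 nodes holding 0, 4, 8, 12 (network average = 6)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
x = gossip_average([0.0, 4.0, 8.0, 12.0], edges)
```

The difficulty highlighted in the abstract arises when updates are asynchronous and asymmetric (only one endpoint updates, with stale information): the sum is then no longer conserved, and the scheme can converge to something other than the true average.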
Principles of Neuromorphic Photonics
In an age overrun with information, the ability to process reams of data has
become crucial. The demand for data will continue to grow as smart gadgets
multiply and become increasingly integrated into our daily lives.
Next-generation industries in artificial intelligence services and
high-performance computing are so far supported by microelectronic platforms.
These data-intensive enterprises rely on continual improvements in hardware.
Their prospects are running up against a stark reality: conventional
one-size-fits-all solutions offered by digital electronics can no longer
satisfy this need, as Moore's law (exponential hardware scaling),
interconnection density, and the von Neumann architecture reach their limits.
With its superior speed and reconfigurability, analog photonics can provide
some relief to these problems; however, complex applications of analog
photonics have remained largely unexplored due to the absence of a robust
photonic integration industry. Recently, the landscape for
commercially-manufacturable photonic chips has been changing rapidly and now
promises to achieve economies of scale previously enjoyed solely by
microelectronics.
The scientific community has set out to build bridges between the domains of
photonic device physics and neural networks, giving rise to the field of
\emph{neuromorphic photonics}. This article reviews the recent progress in
integrated neuromorphic photonics. We provide an overview of neuromorphic
computing, discuss the associated technology (microelectronic and photonic)
platforms and compare their metric performance. We discuss photonic neural
network approaches and challenges for integrated neuromorphic photonic
processors while providing an in-depth description of photonic neurons and a
candidate interconnection architecture. We conclude with a future outlook of
neuro-inspired photonic processing.
Comment: 28 pages, 19 figures.
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper serves
simultaneously as a position paper and a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
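The de-facto standard formulation referred to above casts SLAM as nonlinear least squares over a factor graph of poses and measurements. A minimal 1-D pose-graph sketch, assuming plain gradient descent and made-up odometry and loop-closure measurements (real systems work in 2-D/3-D and use Gauss-Newton-style solvers):

```python
def optimize_pose_graph(constraints, n_poses, iters=5000, lr=0.05):
    """Least-squares pose-graph optimization in 1-D by gradient descent.
    Each constraint (i, j, d) encodes a measured displacement p[j] - p[i] = d.
    p[0] is held fixed to anchor the gauge freedom."""
    p = [0.0] * n_poses
    for _ in range(iters):
        grad = [0.0] * n_poses
        for i, j, d in constraints:
            r = (p[j] - p[i]) - d          # residual of this constraint
            grad[j] += 2.0 * r
            grad[i] -= 2.0 * r
        grad[0] = 0.0                      # anchor the first pose
        p = [pi - lr * g for pi, g in zip(p, grad)]
    return p

# Three odometry steps of 1.0 m and one loop closure claiming the robot
# ended 3.3 m from the start: the 0.3 m inconsistency is spread evenly,
# pulling each estimated step to 1.075 m.
constraints = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 3.3)]
poses = optimize_pose_graph(constraints, n_poses=4)
```

Loop closures are what distinguish SLAM from pure odometry: without the fourth constraint the solver would simply chain the measurements, and drift would accumulate unchecked.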