Hybrid performance modelling of opportunistic networks
We demonstrate the modelling of opportunistic networks using the process
algebra stochastic HYPE. Network traffic is modelled as continuous flows,
contact between nodes in the network is modelled stochastically, and
instantaneous decisions are modelled as discrete events. Our model describes a
network of stationary video sensors with a mobile ferry which collects data
from the sensors and delivers it to the base station. We consider different
mobility models and different buffer sizes for the ferries. This case study
illustrates the flexibility and expressive power of stochastic HYPE. We also
discuss the software that enables us to describe stochastic HYPE models and
simulate them. Comment: In Proceedings QAPL 2012, arXiv:1207.055
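The three ingredients the abstract names (continuous flows, stochastic contacts, discrete events) can be seen in a toy simulation. The sketch below is a loose illustration only, not the paper's stochastic HYPE model; all rates, names, and the single-sensor topology are invented for the example.

```python
import random

def simulate_ferry(t_end=100.0, dt=0.01, sense_rate=1.0,
                   transfer_rate=50.0, contact_rate=0.2,
                   contact_len=1.0, buffer_cap=30.0, seed=0):
    """Toy hybrid model: a sensor buffer fills as a continuous flow,
    ferry contacts arrive stochastically (exponential inter-arrivals)
    and drain the buffer, and a discrete event caps the buffer."""
    rng = random.Random(seed)
    sensor, ferry, t = 0.0, 0.0, 0.0
    next_contact = rng.expovariate(contact_rate)
    contact_until = -1.0
    while t < t_end:
        in_contact = t < contact_until
        # continuous dynamics: sensing inflow, transfer outflow
        sensor += sense_rate * dt
        if in_contact and sensor > 0:
            moved = min(transfer_rate * dt, sensor)
            sensor -= moved
            ferry += moved
        # discrete event: buffer overflow drops excess data
        sensor = min(sensor, buffer_cap)
        # stochastic event: a new ferry contact begins
        if t >= next_contact:
            contact_until = t + contact_len
            next_contact = t + rng.expovariate(contact_rate)
        t += dt
    return sensor, ferry
```

Varying `contact_rate` and `buffer_cap` mimics the abstract's experiments with different mobility models and ferry buffer sizes.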
Evolutionary Design of the Memory Subsystem
The memory hierarchy has a high impact on the performance and power
consumption of a system. Moreover, current embedded systems, including
those in mobile devices, are specifically designed to run multimedia
applications, which are memory intensive. This increases the pressure on
the memory subsystem and affects both performance and energy consumption.
The resulting thermal problems, performance degradation, and high energy
consumption can cause irreversible damage to the devices. We address the
optimization of the whole memory subsystem with three approaches
integrated into a single methodology. First, the thermal impact of the
register file is analyzed and optimized. Second, the cache memory is
addressed by optimizing the cache configuration according to the running
applications, improving both performance and power consumption. Finally,
we simplify the design and evaluation process of general-purpose and
customized dynamic memory managers in the main memory. To this aim, we
apply different evolutionary algorithms in combination with memory
simulators and profiling tools. This way, we are able to evaluate the
quality of each candidate solution and take advantage of the exploration
of solutions performed by the optimization algorithm. We also provide an
experimental evaluation in which our proposal is assessed using
well-known benchmark applications
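The coupling of an evolutionary search with a simulator-based fitness function can be sketched in a few lines. The cost model below is a made-up stand-in for a real memory simulator, and the configuration space and GA parameters are invented for illustration.

```python
import random

# Hypothetical stand-in for a memory simulator: scores a cache
# configuration (size KB, line bytes, associativity) with an invented
# energy-delay cost; a real flow would invoke a cycle-accurate simulator.
def simulate_cost(size_kb, line_b, assoc):
    miss = 1.0 / (size_kb * assoc) + 0.002 * line_b
    energy = 0.01 * size_kb + 0.05 * assoc
    return miss * 100 + energy

SIZES, LINES, ASSOCS = [8, 16, 32, 64, 128], [16, 32, 64], [1, 2, 4, 8]

def evolve(generations=40, pop_size=20, seed=1):
    rng = random.Random(seed)
    rand_cfg = lambda: (rng.choice(SIZES), rng.choice(LINES), rng.choice(ASSOCS))
    pop = [rand_cfg() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: simulate_cost(*c))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = tuple(rng.choice(pair) for pair in zip(a, b))  # crossover
            if rng.random() < 0.3:                                 # mutation
                child = rand_cfg()
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: simulate_cost(*c))
```

The same loop structure applies whether the fitness call is a closed-form model, a cache simulator, or a profiling run, which is what makes the methodology reusable across the three optimization targets.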
Parallelization of dynamic programming recurrences in computational biology
The rapid growth of biosequence databases over the last decade has led to a performance bottleneck in the applications analyzing them. In particular, over the last five years DNA sequencing capacity of next-generation sequencers has been doubling every six months as costs have plummeted. The data produced by these sequencers is overwhelming traditional compute systems. We believe that in the future compute performance, not sequencing, will become the bottleneck in advancing genome science. In this work, we investigate novel computing platforms to accelerate dynamic programming algorithms, which are popular in bioinformatics workloads. We study algorithm-specific hardware architectures that exploit fine-grained parallelism in dynamic programming kernels using field-programmable gate arrays (FPGAs). We advocate a high-level synthesis approach, using the recurrence equation abstraction to represent dynamic programming and polyhedral analysis to exploit parallelism. We suggest a novel technique within the polyhedral model to optimize for throughput by pipelining independent computations on an array. This design technique improves on the state of the art, which builds latency-optimal arrays. We also suggest a method to dynamically switch between a family of designs using FPGA reconfiguration to achieve a significant performance boost. We have used polyhedral methods to parallelize the Nussinov RNA folding algorithm to build a family of accelerators that can trade resources for parallelism and are between 15-130x faster than a modern dual core CPU implementation. A Zuker RNA folding accelerator we built on a single workstation with four Xilinx Virtex 4 FPGAs outperforms 198 3 GHz Intel Core 2 Duo processors. Furthermore, our design running on a single FPGA is an order of magnitude faster than competing implementations on similar-generation FPGAs and graphics processors.
Our work is a step toward the goal of automated synthesis of hardware accelerators for dynamic programming algorithms
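For reference, the Nussinov recurrence that the accelerators parallelize is simple to state in software. The plain O(n^3) CPU version below shows the dependency structure the polyhedral analysis exploits; it includes none of the FPGA pipelining, and the minimum-loop constant is a conventional choice rather than one taken from the thesis.

```python
def nussinov(seq, min_loop=3):
    """Maximum base-pairing count for an RNA sequence via the
    Nussinov dynamic program: dp[i][j] is the best score on seq[i..j],
    either leaving j unpaired or pairing j with some k in [i, j)."""
    pair = {('A', 'U'), ('U', 'A'), ('G', 'C'),
            ('C', 'G'), ('G', 'U'), ('U', 'G')}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):      # shorter spans first
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]              # j unpaired
            for k in range(i, j - min_loop):  # pair (k, j), hairpin >= min_loop
                if (seq[k], seq[j]) in pair:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

Each cell `dp[i][j]` depends only on cells of strictly shorter spans, which is exactly the wavefront of independent computations that the throughput-optimized arrays pipeline.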
Morphological Plant Modeling: Unleashing Geometric and Topological Potential within the Plant Sciences
The geometries and topologies of leaves, flowers, roots, shoots, and their arrangements have fascinated plant biologists and mathematicians alike. As such, plant morphology is inherently mathematical in that it describes plant form and architecture with geometrical and topological techniques. Gaining an understanding of how to modify plant morphology through molecular biology and breeding, aided by a mathematical perspective, is critical to improving agriculture, and the monitoring of ecosystems is vital to modeling a future with fewer natural resources. In this white paper, we begin with an overview of quantifying the form of plants and of mathematical models of patterning in plants. We then explore the fundamental challenges that remain open concerning plant morphology, from the barriers preventing the prediction of phenotype from genotype to modeling the movement of leaves in air streams. We end with a discussion of education in plant morphology that synthesizes biological and mathematical approaches, and of ways to facilitate research advances through outreach, cross-disciplinary training, and open science. Unleashing the potential of geometric and topological approaches in the plant sciences promises to transform our understanding of both plants and mathematics
Using Genetic Programming to Build Self-Adaptivity into Software-Defined Networks
Self-adaptation solutions need to periodically monitor, reason about, and
adapt a running system. The adaptation step involves generating an adaptation
strategy and applying it to the running system whenever an anomaly arises. In
this article, we argue that, rather than generating individual adaptation
strategies, the goal should be to adapt the control logic of the running system
in such a way that the system itself would learn how to steer clear of future
anomalies, without triggering self-adaptation too frequently. While the need
for adaptation is never eliminated, especially noting the uncertain and
evolving environment of complex systems, reducing the frequency of adaptation
interventions is advantageous for various reasons, e.g., to increase
performance and to make a running system more robust. We instantiate and
empirically examine the above idea for software-defined networking -- a key
enabling technology for modern data centres and Internet of Things
applications. Using genetic programming (GP), we propose a self-adaptation
solution that continuously learns and updates the control constructs in the
data-forwarding logic of a software-defined network. Our evaluation, performed
using open-source synthetic and industrial data, indicates that, compared to a
baseline adaptation technique that attempts to generate individual adaptations,
our GP-based approach is more effective in resolving network congestion, and
further, reduces the frequency of adaptation interventions over time. In
addition, we show that, for networks with the same topology, reusing over
larger networks the knowledge that is learned on smaller networks leads to
significant improvements in the performance of our GP-based adaptation
approach. Finally, we compare our approach against a standard data-forwarding
algorithm from the network literature, demonstrating that our approach
significantly reduces packet loss. Comment: arXiv admin note: text overlap with arXiv:2205.0435
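The flavour of evolving a control construct, rather than issuing one-off adaptations, can be conveyed with a generic GP toy. This is emphatically not the article's method: the congestion "oracle", the two link features (queue occupancy q, utilisation u), and all GP parameters are invented for illustration.

```python
import random

# Hypothetical ground-truth congestion rule standing in for observed
# network data; the GP loop tries to rediscover it as a boolean tree.
def oracle(q, u):
    return q > 0.7 or (u > 0.9 and q > 0.4)

def rand_tree(rng, depth=2):
    if depth == 0 or rng.random() < 0.3:
        return (rng.choice(['q', 'u']), round(rng.uniform(0, 1), 2))
    return (rng.choice(['and', 'or']),
            rand_tree(rng, depth - 1), rand_tree(rng, depth - 1))

def evaluate(tree, q, u):
    if tree[0] in ('q', 'u'):              # leaf: feature > threshold
        feat, thr = tree
        return (q if feat == 'q' else u) > thr
    op, left, right = tree                 # inner node: and / or
    l, r = evaluate(left, q, u), evaluate(right, q, u)
    return (l and r) if op == 'and' else (l or r)

def mutate(rng, tree):
    if tree[0] in ('q', 'u') or rng.random() < 0.4:
        return rand_tree(rng)              # regenerate a random subtree
    op, left, right = tree
    if rng.random() < 0.5:
        return (op, mutate(rng, left), right)
    return (op, left, mutate(rng, right))

def evolve(gens=60, pop=40, seed=2):
    rng = random.Random(seed)
    samples = [(rng.random(), rng.random()) for _ in range(200)]
    fit = lambda t: sum(evaluate(t, q, u) == oracle(q, u) for q, u in samples)
    trees = [rand_tree(rng) for _ in range(pop)]
    for _ in range(gens):
        trees.sort(key=fit, reverse=True)
        trees = trees[: pop // 2]          # elitist selection
        trees += [mutate(rng, rng.choice(trees)) for _ in range(pop - len(trees))]
    best = max(trees, key=fit)
    return best, fit(best) / len(samples)
```

Because the learned artefact is an expression tree rather than a single adaptation action, it keeps steering decisions after deployment, which is the intuition behind reduced intervention frequency.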
On microelectronic self-learning cognitive chip systems
After a brief review of machine learning techniques and applications, this Ph.D. thesis examines several approaches to implementing machine learning architectures and algorithms in hardware within our laboratory.
This interdisciplinary background motivates the novel approaches we intend to pursue: innovative hardware implementations of dynamically self-reconfigurable logic for enhanced self-adaptive, self-(re)organizing, and eventually self-assembling machine learning systems, developing this particular new area of research.
After reviewing relevant background on robotic control methods and the most recent advanced cognitive controllers, this thesis argues that, among the many well-known ways of designing operational technologies, the design methodologies for leading-edge devices such as cognitive chips, which may well lead to intelligent machines exhibiting
conscious phenomena, should crucially be restricted by extremely well-defined constraints.
Roboticists also need such constraints as specifications, to help decide upfront on otherwise infinitely free hardware/software design details.
In addition, and most importantly, we propose these specifications as methodological guidelines tightly related to ethics and to the now well-identified workings of the human body and its psyche
Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016)
Timisoara, Romania, February 8-11, 2016. The PhD Symposium was a very good opportunity for young researchers to share information and knowledge, to
present their current research, and to discuss topics with other students in order to look for synergies and common research
topics. The idea was very successful, and the assessment made by the PhD students was very good. It also helped to
achieve one of the major goals of the NESUS Action: to establish an open European research network targeting sustainable
solutions for ultrascale computing, aiming at cross-fertilization among HPC, large-scale distributed systems, and big
data management. The network provides training, helps bring together disparate researchers working across different areas, and offers a meeting
ground for researchers in these separate areas to exchange ideas, identify synergies, and pursue common activities in
research topics such as sustainable software solutions (applications and the system software stack), data management, energy
efficiency, and resilience. European Cooperation in Science and Technology (COST)
Fundamental Approaches to Software Engineering
This open access book constitutes the proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering, FASE 2022, which was held during April 4-5, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 17 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. The proceedings also contain 3 contributions from the Test-Comp Competition. The papers deal with the foundations on which software engineering is built, including topics like software engineering as an engineering discipline, requirements engineering, software architectures, software quality, model-driven development, software processes, software evolution, AI-based software engineering, and the specification, design, and implementation of particular classes of systems, such as (self-)adaptive, collaborative, AI, embedded, distributed, mobile, pervasive, cyber-physical, or service-oriented applications
Constraint-based specifications for system configuration
Declarative, object-oriented configuration management systems are widely used, and
there is a desire to extend such systems with automated analysis and decision-making.
This thesis introduces a new formulation for configuration management problems based
on the tools and techniques of constraint programming, which enables automated
decision-making.
We present ConfSolve, an object-oriented declarative configuration language, in
which logical constraints on a system can be specified. Verification, impact analysis,
and the generation of valid configurations can then be performed. This is achieved via
translation to the MiniZinc constraint programming language, which is in turn solved
via the Gecode constraint solver. We formally define the syntax, type system, and
semantics of ConfSolve, in order to provide it with a rigorous foundation. Additionally
we show that our implementation outperforms previous work, which utilised an SMT
solver, while adding new features such as optimisation.
We next develop an extension of the ConfSolve language, which facilitates not
only one-off configuration tasks, but also subsequent re-configurations in which the
previous state of the system is taken into account. In a practical setting one does not
wish for a re-configuration to deviate too far from the existing state, unless the benefits
are substantial. Re-configuration is of crucial importance if automated configuration
systems are to gain industry adoption. We present a novel approach to incorporating
state-change into ConfSolve while remaining declarative and providing acceptable
performance
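The re-configuration idea, stay within the constraints but penalise deviation from the previous state, can be illustrated with a tiny brute-force sketch. The placement problem, weights, and enumeration below are invented for the example; ConfSolve itself compiles models to MiniZinc and solves them with Gecode rather than enumerating.

```python
from itertools import product

# Toy reconfiguration in the spirit of the thesis (not ConfSolve):
# place services on machines subject to capacity constraints, with an
# objective that balances load but penalises moving services away from
# their previous placement.
def reconfigure(demands, capacities, previous, move_cost=2):
    machines = range(len(capacities))
    best, best_cost = None, float('inf')
    for placement in product(machines, repeat=len(demands)):
        load = [0] * len(capacities)
        for svc, m in enumerate(placement):
            load[m] += demands[svc]
        if any(l > c for l, c in zip(load, capacities)):
            continue  # violates a capacity constraint
        # objective: load imbalance plus a penalty per moved service
        imbalance = max(load) - min(load)
        moves = sum(p != q for p, q in zip(placement, previous))
        cost = imbalance + move_cost * moves
        if cost < best_cost:
            best, best_cost = placement, cost
    return best
```

With a feasible previous state, the solver keeps it; when the previous state becomes infeasible (say a demand grows past capacity), only then does it pay the move penalty, which captures the "do not deviate too far unless the benefits are substantial" requirement.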