
    Conflict handling in policy-based management by means of a priori modeling

    Policy-based management is gaining importance in both research and industry. Because policies are specified in a distributed fashion and pursue divergent goals, they can conflict with one another. Existing approaches to policy conflict handling are only of limited use, since they are often restricted to a dedicated policy language, are inherently unable to detect important conflict types, or offer no methodology for integrating novel conflict types. This thesis shows that, by taking management models into account, new conflict types can be identified that cannot be handled with the approaches found in the literature. To this end, management models are treated as a priori models. An a priori model describes the target state of a system and thus defines a set of conditions that must be maintained. Under this premise, novel conflicts are identified, namely conflicts between relationships of managed objects. The core of the solution is a methodology for deriving conflict definitions from model aspects: starting from model aspects, invariants are derived, linked with policy actions, and finally preconditions are defined whose observance prevents conflicts. The broad applicability of the methodology is demonstrated using a static relationship model for functional dependency and containment relations, and likewise for a representative of dynamic models, the finite state machine. For conflict handling, a new algorithm was developed that consists of the phases conflict localization, conflict detection, and conflict resolution. In the first phase, the number of policies to be considered is reduced quickly by forming subsets. In the last phase, strategies are developed for the individual conflict types that guarantee an optimal conflict resolution. The algorithm is applicable to both preventive and reactive conflict handling. To achieve a generic solution, the key design goals for the methodology and the algorithm are independence from a dedicated policy language, breadth of the conflict types that can be handled, and independence from a specific management information model. The practical applicability of the solution is demonstrated by an exemplary mapping of the conflict definitions onto the Common Information Model.
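    The derivation chain described above (model aspect, invariant, policy action, precondition) can be illustrated with a minimal sketch. The following Python snippet is a hypothetical example for a containment relationship; the class and action names are illustrative assumptions and are not taken from the thesis or from the Common Information Model.

```python
# Hypothetical sketch: an invariant derived from a containment relationship between
# managed objects, and a precondition that admits a policy action only if the
# invariant is preserved. All names are illustrative, not from the thesis or CIM.

class ManagedObject:
    def __init__(self, name, contained_in=None):
        self.name = name
        self.contained_in = contained_in   # containment relationship to another object
        self.running = True

def containment_invariant(obj):
    """Invariant derived from the containment relation:
    a contained object may only run while its container is running."""
    return obj.contained_in is None or not obj.running or obj.contained_in.running

def precondition_stop(container, all_objects):
    """Precondition linked to the policy action 'stop(container)': the action is
    conflict-free only if no running object is still contained in the container."""
    return all(not o.running for o in all_objects if o.contained_in is container)

host = ManagedObject("host")
service = ManagedObject("web-service", contained_in=host)
objects = [host, service]

# Preventive conflict handling: reject the action before it violates the invariant.
if precondition_stop(host, objects):
    host.running = False
else:
    print("conflict: policy action stop(host) would violate the containment invariant")

assert all(containment_invariant(o) for o in objects)
```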

    The scenario coevolution paradigm: adaptive quality assurance for adaptive systems

    Systems are becoming increasingly adaptive, using techniques like machine learning to enhance their behavior on their own rather than only through human developers programming them. We analyze the impact that the advent of these techniques has on the discipline of rigorous software engineering, especially on the issue of quality assurance. To this end, we provide a general description of the processes related to machine learning and embed them into a formal framework for the analysis of adaptivity, recognizing that testing an adaptive system requires a new approach to adaptive testing. We introduce scenario coevolution as a design pattern describing how system and test can work as antagonists in the process of software evolution. While the general pattern applies to large-scale processes (including human developers further augmenting the system), we demonstrate all techniques on a smaller-scale example of an agent navigating a simple smart factory. We point out new aspects of software engineering for adaptive systems that may be tackled naturally using scenario coevolution. This work is a substantially extended take on Gabor et al. (International Symposium on Leveraging Applications of Formal Methods, Springer, pp 137–154, 2018).
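    The antagonistic loop behind scenario coevolution can be sketched in a few lines. The following Python toy example only illustrates the pattern; the agent, the fitness functions, and the mutation operators are placeholder assumptions, not the smart-factory setup used in the paper.

```python
# Minimal sketch of the scenario coevolution pattern: the system (a trivially
# parameterized "agent") and its test scenarios evolve as antagonists.
import random

def agent_score(agent, scenario):
    # Placeholder fitness: how closely the agent parameter matches the scenario's demand.
    return -abs(agent - scenario)

def evolve(population, fitness, mutate):
    # Keep the better half, refill the population with mutated copies of the survivors.
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: len(ranked) // 2]
    return survivors + [mutate(s) for s in survivors]

agents = [random.uniform(0, 1) for _ in range(10)]
scenarios = [random.uniform(0, 1) for _ in range(10)]

for generation in range(20):
    # System evolution: agents are selected for passing the current test scenarios.
    agents = evolve(agents,
                    lambda a: sum(agent_score(a, s) for s in scenarios),
                    lambda a: a + random.gauss(0, 0.05))
    # Test evolution: scenarios are selected for being hard for the current agents.
    scenarios = evolve(scenarios,
                       lambda s: -max(agent_score(a, s) for a in agents),
                       lambda s: s + random.gauss(0, 0.05))

best = max(agents, key=lambda a: sum(agent_score(a, s) for s in scenarios))
print(f"best agent parameter after coevolution: {best:.3f}")
```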

    Large-Scale simulations of plastic neural networks on neuromorphic hardware

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
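    The event-based idea, reconstructing synaptic state analytically at spike events instead of updating it on every time-step, can be sketched as follows. This Python snippet is a generic illustration under simplified assumptions (a single exponentially decaying trace); it is not the actual BCPNN state-variable set or the fixed-point SpiNNaker implementation.

```python
# Event-driven trace update: instead of decaying the trace on every simulation
# time-step, its value is reconstructed in closed form only when a spike arrives.
import math

class EventDrivenTrace:
    def __init__(self, tau_ms):
        self.tau = tau_ms          # decay time constant in milliseconds
        self.value = 0.0
        self.last_update_ms = 0.0

    def on_spike(self, t_ms, increment=1.0):
        # Analytical exponential decay since the last event, then apply the spike increment.
        self.value *= math.exp(-(t_ms - self.last_update_ms) / self.tau)
        self.value += increment
        self.last_update_ms = t_ms

trace = EventDrivenTrace(tau_ms=20.0)
for spike_time in [5.0, 7.0, 60.0]:
    trace.on_spike(spike_time)
print(round(trace.value, 3))   # trace value after the last spike event
```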

    Democratic population decisions result in robust policy-gradient learning: A parametric study with GPU simulations

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether the effort invested in this direction is worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which this architecture and learning rule perform best. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that they provide a speedup of 5× to 42× over optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search for learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated. © 2011 Richmond et al.
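    The "democratic" population vector readout, in which every neuron contributes a vote weighted by its activity, can be illustrated with a short NumPy sketch. The preferred directions, spike counts, and decoding below are assumed placeholders and do not reproduce the paper's GPU implementation.

```python
# Population vector readout: each neuron votes for its preferred direction,
# weighted by its spike count; the decision is the angle of the summed vector.
import numpy as np

n_neurons = 100
preferred = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)  # preferred angles
spike_counts = np.random.poisson(lam=3.0, size=n_neurons)             # activity in one trial

pop_vector = np.array([np.sum(spike_counts * np.cos(preferred)),
                       np.sum(spike_counts * np.sin(preferred))])
decision_angle = np.arctan2(pop_vector[1], pop_vector[0])
print(f"decoded action angle: {decision_angle:.2f} rad")
```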

    Spatial Heterogeneity of Methane Ebullition in a Large Tropical Reservoir

    Tropical reservoirs have been identified as important methane (CH4) sources to the atmosphere, primarily through turbine and downstream degassing. However, the importance of ebullition (gas bubbling) remains unclear. We hypothesized that ebullition is a disproportionately large CH4 source from reservoirs with dendritic littoral zones because of ebullition hot spots occurring where rivers supply allochthonous organic material. We explored this hypothesis in Lake Kariba (Zambia/Zimbabwe; surface area >5000 km^2) by surveying ebullition in bays with and without river inputs using an echosounder and traditional surface chambers. The two techniques yielded similar results and revealed substantially higher fluxes in river deltas (~10^3 mg CH4 m^-2 d^-1) compared to non-river bays (<100 mg CH4 m^-2 d^-1). Hydroacoustic measurements resolved at 5 m intervals showed that flux events varied over several orders of magnitude (up to 10^5 mg CH4 m^-2 d^-1), and also identified strong differences in ebullition frequency. Both factors contributed to emission differences between all sites. A CH4 mass balance for the deepest basin of Lake Kariba indicated that hot spot ebullition was the largest atmospheric emission pathway, suggesting that future greenhouse gas budgets for tropical reservoirs should include a spatially well-resolved analysis of ebullition hot spots.
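    A simple upscaling calculation shows why spatially confined hot spots can matter for a whole-basin budget. The areas in the following sketch are purely hypothetical; only the per-area flux magnitudes follow the orders of magnitude quoted above.

```python
# Back-of-the-envelope illustration: a small hot-spot area with a much higher
# per-area flux contributes a disproportionate share of the basin-wide CH4 emission.
# Areas are hypothetical placeholders; fluxes follow the magnitudes in the abstract.
hotspot_flux = 1e3        # mg CH4 m^-2 d^-1, river-delta hot spots
background_flux = 50.0    # mg CH4 m^-2 d^-1, non-river bays
hotspot_area = 10e6       # m^2 (hypothetical: ~10 km^2 of river deltas)
background_area = 990e6   # m^2 (hypothetical: remaining ~990 km^2 of the basin)

hotspot_total = hotspot_flux * hotspot_area           # mg CH4 d^-1
background_total = background_flux * background_area  # mg CH4 d^-1
flux_share = hotspot_total / (hotspot_total + background_total)
area_share = hotspot_area / (hotspot_area + background_area)
print(f"hot spots cover {area_share:.0%} of the area "
      f"but contribute {flux_share:.0%} of the ebullitive CH4 flux")
```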