39 research outputs found

    A Middleware framework for self-adaptive large scale distributed services

    Modern service-oriented applications demand the ability to adapt to changing conditions and unexpected situations while maintaining a required QoS. Existing self-adaptation approaches seem inadequate to address this challenge because many of their assumptions are not met on the large-scale, highly dynamic infrastructures where these applications are generally deployed. The main motivation of our research is to devise principles that guide the construction of large-scale self-adaptive distributed services. We aim to provide sound modeling abstractions based on a clear conceptual background, and their realization as a middleware framework that supports the development of such services. Taking inspiration from the concept of decentralized markets in economics, we propose a solution based on three principles: emergent self-organization, utility-driven behavior and model-less adaptation. Based on these principles, we designed Collectives, a middleware framework which provides a comprehensive solution for the diverse adaptation concerns that arise in the development of distributed systems. We tested the soundness and comprehensiveness of the Collectives framework by implementing eUDON, a middleware for self-adaptive web services, which we then evaluated extensively by means of a simulation model to analyze its adaptation capabilities in diverse settings. We found that eUDON exhibits the intended properties: it adapts to diverse conditions such as workload peaks and massive failures, maintaining its QoS and using the available resources efficiently; it is highly scalable and robust; it can be implemented on existing services in a non-intrusive way; and it does not require any performance model of the services, their workload or the resources they use. We conclude that our work provides a solution for the requirements of self-adaptation in demanding usage scenarios without introducing additional complexity. In that sense, we believe we make a significant contribution towards the development of future-generation service-oriented applications.
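    As an illustration of the utility-driven, model-less adaptation principle described above, the sketch below shows a service replica that decides how many requests to admit purely from measured utility (observed response time against a target), with no performance model of the service or its workload. It is a minimal, hypothetical sketch: the names (AdaptiveReplica, target_latency, the EWMA weights) are illustrative assumptions and are not taken from Collectives or eUDON.

```python
import random

class AdaptiveReplica:
    """Hypothetical replica that adapts its admission rate using measured utility only."""

    def __init__(self, target_latency=0.2, step=0.05):
        self.target_latency = target_latency  # desired response time (seconds)
        self.step = step                      # how aggressively to adapt
        self.admit_prob = 1.0                 # fraction of requests admitted
        self.avg_latency = 0.0                # EWMA of observed latency

    def observe(self, latency):
        # Model-less: only runtime measurements feed the decision.
        self.avg_latency = 0.9 * self.avg_latency + 0.1 * latency
        utility = self.target_latency - self.avg_latency  # > 0 means QoS is met
        if utility < 0:
            self.admit_prob = max(0.1, self.admit_prob - self.step)  # shed load
        else:
            self.admit_prob = min(1.0, self.admit_prob + self.step)  # take more work

    def admit(self):
        return random.random() < self.admit_prob

# Usage: feed measured latencies; the replica self-regulates its admission rate.
replica = AdaptiveReplica()
for latency in [0.1, 0.15, 0.4, 0.5, 0.45, 0.2, 0.1]:
    replica.observe(latency)
    print(round(replica.admit_prob, 2))
```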

    A Distributed and Agent-Based Approach for Coupled Problems in Computational Engineering

    Challenging questions in science and engineering often require decoupling a complex problem and focusing on isolated sub-problems first. The individual solutions can later be combined to obtain the result for the full question. A similar technique is applied in numerical modeling. Here, software solvers for subsets of the coupled problem might already exist and can be used directly. This thesis describes a software environment capable of combining multiple software solvers, the result being a new, combined model. Two design decisions were crucial at the beginning: first, every sub-model keeps full control of its execution; second, the source code of a sub-model requires only minimal adaptation. The sub-models themselves choose when to issue communication calls, with no outer synchronisation mechanism required. The coupling of heterogeneous hardware is supported, as well as the use of homogeneous compute clusters. Furthermore, the coupling framework allows sub-solvers to be written in different programming languages, and each sub-model may operate on its own spatial and temporal scales. The next challenge was to allow the coupling of thousands of software agents in order to utilise today's petascale hardware. For this purpose, a specific coupling framework was designed and implemented, combining the experience from the previous work with the additions required to cope with the targeted number of coupled sub-models. The large number of interacting models required a much more dynamic approach, in which the agents automatically detect their communication partners at runtime. This eliminates the need to explicitly specify the coupling graph a priori. Agents are allowed to enter (and leave) the simulation at any time, with the coupling graph changing accordingly.
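    To make the runtime partner discovery concrete, here is a minimal, hypothetical sketch (not the thesis framework's actual API): each sub-model is wrapped in an agent that registers the fields it provides with a shared registry, looks up the providers of the fields it needs when it communicates, and advances its own time stepping under its own control. The Registry, SolverAgent and the toy dynamics are illustrative assumptions.

```python
class Registry:
    """Hypothetical in-process stand-in for runtime partner discovery."""
    def __init__(self):
        self.providers = {}   # field name -> agent offering it

    def announce(self, field, agent):
        self.providers[field] = agent

    def lookup(self, field):
        return self.providers.get(field)   # None if no partner has joined yet

class SolverAgent:
    """Wraps one sub-model; keeps full control of its own time stepping."""
    def __init__(self, name, provides, needs, registry):
        self.name, self.needs, self.registry = name, needs, registry
        self.state = 0.0
        for field in provides:
            registry.announce(field, self)   # the coupling graph emerges at runtime

    def boundary_value(self):
        return self.state

    def step(self, dt):
        # The agent decides itself when to communicate: pull partner data, then advance.
        inputs = [p.boundary_value() for f in self.needs
                  if (p := self.registry.lookup(f)) is not None]
        coupling = sum(inputs) / len(inputs) if inputs else 0.0
        self.state += dt * (1.0 - self.state + 0.1 * coupling)   # toy sub-model dynamics

# Usage: two sub-models coupled through the registry, each on its own time scale.
reg = Registry()
fluid = SolverAgent("fluid", provides=["pressure"], needs=["displacement"], registry=reg)
solid = SolverAgent("solid", provides=["displacement"], needs=["pressure"], registry=reg)
for _ in range(10):
    fluid.step(dt=0.01)
for _ in range(5):
    solid.step(dt=0.02)
print(round(fluid.state, 3), round(solid.state, 3))
```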

    Ontologies for the Interoperability of Heterogeneous Multi-Agent Systems in the scope of Energy and Power Systems

    Thesis by compendium of publications. The electricity sector, traditionally run by monopolies and powerful utility companies, has undergone significant changes in recent decades. The most notable developments are the increased penetration of renewable energy sources (RES) and distributed generation, which have led to the adoption of the smart grid (SG) paradigm and the introduction of competitive approaches in wholesale and some retail electricity markets (EMs). Smart grids quickly evolved from a widely accepted concept into reality. The intermittency of renewable energy sources and their large-scale integration pose new constraints and challenges that strongly affect the operation of EMs. The demanding environment of power and energy systems (PES) reinforces the need to study, experiment with, and validate competitive, dynamic and complex operations and interactions. In this context, simulation, decision support and intelligent management tools become essential for studying the different market mechanisms and the relationships between the actors involved. To this end, the new generation of tools must be able to cope with the rapid evolution of PES, providing participants with adequate means to adapt, addressing new models and constraints, and their complex relationship with technological and business developments. Multi-agent based platforms are particularly well suited to analyzing complex interactions in dynamic systems such as PES, due to their distributed and independent nature. The decomposition of complex tasks into simple assignments, and the easy inclusion of new data and business models, constraints, types of actors and operators, and their interactions, are some of the main advantages of agent-based approaches. In this domain, several modeling tools have emerged to simulate, study and solve problems in specific PES sub-domains. However, there is a widespread limitation: a significant lack of interoperability between heterogeneous systems, which prevents the problem from being addressed globally, considering all the relevant existing interrelationships. This is essential for players to take full advantage of evolving opportunities. Therefore, to achieve such a comprehensive framework by leveraging existing tools that allow the study of specific parts of the global problem, interoperability between these systems is required. Ontologies facilitate interoperability between heterogeneous systems by giving semantic meaning to the information exchanged between the different parties. Their advantage lies in the fact that everyone involved in a particular domain knows, understands and agrees with the conceptualization defined there. Several proposals exist in the literature for the use of ontologies within PES, encouraging their reuse and extension. However, most ontologies focus on a specific application scenario or on a high-level abstraction of a PES sub-domain. In addition, there is considerable heterogeneity among these models, which complicates their integration and adoption.
    It is essential to develop ontologies that represent different sources of knowledge in order to facilitate interactions between entities of different natures, promoting interoperability between heterogeneous agent-based systems that solve specific PES problems. These gaps motivate the research work of this PhD, which aims to provide a solution for the interoperability of heterogeneous systems within PES. The various contributions of this work result in a society of multi-agent systems (MAS) for the simulation, study, decision support, operation and intelligent management of PES. This society of MAS addresses PES from the wholesale EM down to the SG and consumer energy efficiency, taking advantage of existing simulation and decision support tools, complemented by newly developed ones, while ensuring interoperability between them. It uses ontologies to represent knowledge in a common vocabulary, which facilitates interoperability between the different systems. In addition, the use of ontologies and semantic web technologies enables the development of model-agnostic tools that adapt flexibly to new rules and constraints, promoting semantic reasoning for context-aware systems.
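    The role of a shared ontology in this kind of interoperability can be illustrated with a deliberately small, hypothetical sketch (the vocabulary and agent schemas below are invented, not the ontologies developed in this thesis): two agents with heterogeneous internal representations exchange a market bid by translating to and from a common set of concepts. In a real system the shared vocabulary would be an OWL/RDF ontology supporting semantic reasoning; a plain dictionary stands in for it here.

```python
# Hypothetical shared vocabulary: the concepts both agents agree on.
SHARED_BID_TERMS = {"trader", "energy_amount_mwh", "price_eur_per_mwh", "period"}

def to_shared(local_bid, mapping):
    """Translate an agent's local bid schema into the shared vocabulary."""
    shared = {mapping[k]: v for k, v in local_bid.items() if k in mapping}
    missing = SHARED_BID_TERMS - shared.keys()
    if missing:
        raise ValueError(f"bid does not cover shared concepts: {missing}")
    return shared

def from_shared(shared_bid, mapping):
    """Translate a shared-vocabulary bid into another agent's local schema."""
    reverse = {v: k for k, v in mapping.items()}
    return {reverse[k]: v for k, v in shared_bid.items()}

# Two heterogeneous agents: a market simulator and a home energy manager.
market_mapping = {"agent_id": "trader", "mwh": "energy_amount_mwh",
                  "price": "price_eur_per_mwh", "hour": "period"}
home_mapping = {"owner": "trader", "surplus_mwh": "energy_amount_mwh",
                "ask_price": "price_eur_per_mwh", "slot": "period"}

home_bid = {"owner": "prosumer-17", "surplus_mwh": 0.8, "ask_price": 42.0, "slot": 14}
shared = to_shared(home_bid, home_mapping)          # home agent publishes in shared terms
market_bid = from_shared(shared, market_mapping)    # market agent reads it in its own schema
print(market_bid)
```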

    Multi-Agent Systems

    A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Agent systems are open and extensible, allowing for the deployment of autonomous and proactive software components. Multi-agent systems have been developed and applied in several application domains.
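    As a minimal, hypothetical illustration of the idea (not drawn from the book), the sketch below shows two autonomous agents that cooperate through message passing, so that a task neither agent can handle alone still gets done.

```python
from collections import deque

class Agent:
    """A tiny autonomous agent with a mailbox and a single skill."""
    def __init__(self, name, skill):
        self.name, self.skill, self.inbox = name, skill, deque()

    def step(self, agents):
        # Process one message per step; delegate anything outside our skill.
        if not self.inbox:
            return
        task, payload = self.inbox.popleft()
        if task == self.skill:
            print(f"{self.name} handled {task}({payload})")
        else:
            helper = next(a for a in agents if a.skill == task)  # assumes a specialist exists
            helper.inbox.append((task, payload))                 # cooperation via messaging

# Usage: neither agent alone can both sense and plan; together they cover the task.
sensor = Agent("sensor-agent", "sense")
planner = Agent("planner-agent", "plan")
agents = [sensor, planner]
sensor.inbox.append(("plan", "route A->B"))   # request arrives at the wrong specialist
for _ in range(3):                            # simple round-robin scheduler
    for a in agents:
        a.step(agents)
```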

    Modeling the Evolution of Artifact Capabilities in Multi-Agent Based Simulations

    Cognitive scientists agree that the exploitation of objects as tools or artifacts has played a significant role in the evolution of human societies. In the realm of autonomous agents and multi-agent systems, a recent artifact theory proposes the artifact concept as an abstraction for representing functional system components that proactive agents may exploit towards realizing their goals. As a complement, the cognition of rational agents has been extended to accommodate the notion of artifact capabilities, denoting the reasoning and planning capacities of agents with respect to artifacts. Multi-Agent Based Simulation (MABS), a well-established discipline for modeling complex social systems, has been identified as an area that should benefit from these theories. In MABS, the evolution of artifact exploitation can play an important role in the overall performance of the system. The primary contribution of this dissertation is a computational model for integrating artifacts into MABS. The emphasis of the model is on an evolutionary approach that facilitates understanding the effects of artifacts and their exploitation in artificial social systems over time. The artifact theories are extended to support agents designed to evolve artifact exploitation through a variety of learning and adaptation strategies. The model accents strategies that benefit from the social dimensions of MABS. Realized with evolutionary computation methods, specifically genetic algorithms, cultural algorithms and multi-population cultural algorithms, artifact capability evolution is supported at the individual, population and multi-population levels. A generic MABS and case studies are provided to demonstrate the use of the model in new and existing MABS systems. The accommodation of artifact capability evolution in artificial social systems is applicable in many domains, particularly when the modeled system is one where artifact exploitation is relevant to the evolution of the society and its overall behavior. With artifacts acknowledged as major contributors to societal evolution, the impact of our model is significant, providing advanced tools that enable social scientists to analyze their findings. The model can inform archaeologists, economists, evolution theorists, sociologists and anthropologists, among others.
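    As a hedged sketch of the evolutionary machinery mentioned above (a generic genetic algorithm, not the dissertation's actual model), each agent's artifact capabilities can be encoded as a bitstring, with the population evolving toward exploiting the artifacts that yield higher utility. The artifact values, cost and fitness function are illustrative assumptions.

```python
import random

ARTIFACT_VALUE = [3, 1, 4, 1, 5, 2]          # assumed utility of exploiting each artifact
USE_COST = 2                                  # assumed cost of learning/using one artifact

def fitness(genome):
    # An agent's performance: value of exploited artifacts minus exploitation cost.
    return sum(v for g, v in zip(genome, ARTIFACT_VALUE) if g) - USE_COST * sum(genome)

def evolve(pop_size=30, generations=40, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in ARTIFACT_VALUE] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection among three random individuals.
            return max(random.sample(population, 3), key=fitness)
        children = []
        while len(children) < pop_size:
            a, b = select(), select()
            cut = random.randrange(1, len(a))                 # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            children.append(child)
        population = children
    return max(population, key=fitness)

best = evolve()
print("exploited artifacts:", best, "fitness:", fitness(best))
```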

    Smart Wireless Sensor Networks

    The recent development of communication and sensor technology has resulted in the growth of a new, attractive and challenging area: wireless sensor networks (WSNs). A wireless sensor network, which consists of a large number of sensor nodes, is deployed in the field to serve various applications. Equipped with wireless communication and intelligent computation capabilities, these nodes become smart sensors that not only perceive ambient physical parameters but are also able to process information, cooperate with each other and self-organize into a network. These features allow the sensor nodes, as well as the network as a whole, to operate more efficiently in terms of both data acquisition and energy consumption. The special purposes of these applications require the design and operation of WSNs to differ from conventional networks such as the Internet. The network design must take into account the objectives of specific applications, and the nature of the deployment environment must be considered. The limited resources of sensor nodes, such as memory, computational ability, communication bandwidth and energy, are the main challenges in network design. A smart wireless sensor network must be able to deal with these constraints as well as guarantee the connectivity, coverage, reliability and security of the network's operation while maximizing its lifetime. This book discusses various aspects of designing such smart wireless sensor networks. Main topics include: design methodologies, network protocols and algorithms, quality of service management, coverage optimization, time synchronization and security techniques for sensor networks.
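    One concrete example of the self-organization and energy awareness discussed here is probabilistic cluster-head rotation in the style of LEACH; the sketch below is a simplified, illustrative version (node counts, energy values and the election probability are assumptions, not taken from the book). Rotating the costly cluster-head role spreads energy consumption across the network and extends its lifetime.

```python
import random

class SensorNode:
    def __init__(self, node_id, energy=1.0):
        self.node_id, self.energy = node_id, energy

    def volunteer_as_cluster_head(self, p=0.1):
        # LEACH-style idea: nodes elect themselves with a probability scaled by
        # remaining energy, so depleted nodes avoid the expensive head role.
        return random.random() < p * self.energy

def form_clusters(nodes):
    heads = [n for n in nodes if n.volunteer_as_cluster_head()] or [max(nodes, key=lambda n: n.energy)]
    clusters = {h.node_id: [] for h in heads}
    for n in nodes:
        if n not in heads:
            head = random.choice(heads)   # simplification: a real WSN would pick the nearest head
            clusters[head.node_id].append(n.node_id)
            n.energy -= 0.01              # cost of sending data to the head
    for h in heads:
        h.energy -= 0.05                  # heads pay for aggregation and long-range radio
    return clusters

nodes = [SensorNode(i, energy=random.uniform(0.5, 1.0)) for i in range(20)]
for round_no in range(3):                 # re-elect heads every round to balance energy
    print(f"round {round_no}:", form_clusters(nodes))
```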

    On Experimentation in Software-Intensive Systems

    Context: Delivering software that has value to customers is a primary concern of every software company. Prevalent in web-facing companies, controlled experiments are used to validate and deliver value in incremental deployments. At the same time that web-facing companies are aiming to automate and reduce the cost of each experiment iteration, embedded systems companies are starting to adopt experimentation practices and to build on the automation developments made in the online domain. Objective: This thesis has two main objectives. The first objective is to analyze how software companies can run and optimize their systems through automated experiments. This objective is investigated from the perspectives of the software architecture, the algorithms for the experiment execution and the experimentation process. The second objective is to analyze how non-web-facing companies can adopt experimentation as part of their development process to validate and deliver value to their customers continuously. This objective is investigated from the perspective of the software development process and focuses on the experimentation aspects that are distinct from web-facing companies. Method: To achieve these objectives, we conducted research in close collaboration with industry and used a combination of different empirical research methods: case studies, literature reviews, simulations, and empirical evaluations. Results: This thesis provides six main results. First, it proposes an architecture framework for automated experimentation that can be used with different types of experimental designs in both embedded systems and web-facing systems. Second, it proposes a new experimentation process to capture the details of a trustworthy experimentation process that can be used as the basis for an automated experimentation process. Third, it identifies the restrictions and pitfalls of different multi-armed bandit algorithms for automating experiments in industry, and it proposes a set of guidelines to help practitioners select a technique that minimizes the occurrence of these pitfalls. Fourth, it proposes statistical models to analyze optimization algorithms that can be used in automated experimentation. Fifth, it identifies the key challenges faced by embedded systems companies when adopting controlled experimentation and proposes a set of strategies to address these challenges. Sixth, it identifies experimentation techniques and proposes a new continuous experimentation model for mission-critical and business-to-business systems. Conclusion: The results presented in this thesis indicate that the trustworthiness of the experimentation process and the selection of algorithms still need to be addressed before automated experimentation can be used at scale in industry. The embedded systems industry faces challenges in adopting experimentation as part of its development process. In part, this is due to the low number of users and devices that can be used in experiments and to the diversity of the experimental designs required for each new situation. This limitation increases both the complexity of the experimentation process and the number of techniques used to address this constraint.
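    Since the abstract discusses multi-armed bandit algorithms for automating experiments, a minimal epsilon-greedy bandit is sketched below as a generic illustration (it is not one of the specific algorithms analyzed in the thesis; variant names, epsilon and the simulated conversion rates are assumptions). It shows the core trade-off such algorithms manage: exploring variants versus exploiting the best one observed so far. A known pitfall of this family, relevant to the guidelines mentioned above, is that adaptive allocation gives apparently weaker variants fewer samples, so their estimates remain noisy.

```python
import random

class EpsilonGreedyExperiment:
    """Generic epsilon-greedy bandit over experiment variants (illustrative only)."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in variants}
        self.values = {v: 0.0 for v in variants}   # running mean reward per variant

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))    # explore
        return max(self.values, key=self.values.get)   # exploit current best

    def update(self, variant, reward):
        self.counts[variant] += 1
        n = self.counts[variant]
        self.values[variant] += (reward - self.values[variant]) / n

# Usage with a simulated user population (conversion rates are made up).
true_rates = {"A": 0.05, "B": 0.08}
exp = EpsilonGreedyExperiment(list(true_rates))
for _ in range(5000):
    v = exp.choose()
    exp.update(v, 1.0 if random.random() < true_rates[v] else 0.0)
print(exp.counts, {v: round(r, 3) for v, r in exp.values.items()})
```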

    Towards Automated Experiments in Software Intensive Systems

    Context: Delivering software that has value to customers is a primary concern of every software company. One of the techniques used to continuously validate and deliver value in online software systems is controlled experiments. The time cost of each experiment iteration, the growing demand within the development organization to run experiments, and the need for a more automated and systematic approach are leading companies to look for techniques to automate the experimentation process. Objective: The overall objective of this thesis is to analyze how to automate different types of experiments and how companies can support and optimize their systems through automated experiments. This thesis explores the topic of automated online experiments from the perspectives of the software architecture, the algorithms for the experiment execution and the experimentation process, and focuses on two main application domains: the online domain and the embedded systems domain. Method: To achieve this objective, we conducted the research in close collaboration with industry, using a combination of different empirical research methods: case studies, literature reviews, simulations and empirical evaluations. Results and conclusions: This thesis provides five main results. First, we propose an architecture framework for automated experimentation that can be used with different types of experimental designs in both embedded systems and web-facing systems. Second, we identify the key challenges faced by embedded systems companies when adopting controlled experimentation and propose a set of strategies to address these challenges. Third, we develop a new algorithm for online experiments. Fourth, we identify restrictions and pitfalls of different algorithms for automating experiments in industry and propose a set of guidelines to help practitioners select a technique that minimizes the occurrence of these pitfalls. Fifth, we propose a new experimentation process to capture the details of a trustworthy experimentation process that can be used as a basis for an automated experimentation process. Future work: In future work, we plan to investigate how embedded systems can incorporate experiments in their development process without compromising existing real-time and safety requirements. We also plan to analyze the impact and costs of automating the different parts of the experimentation process.
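    The controlled experiments referred to throughout this abstract ultimately rest on a simple statistical comparison between variants. As a hedged, generic illustration (it is neither the new algorithm nor the experimentation process proposed in the thesis, and the counts are made up), the sketch below computes a two-proportion z-test using only the standard library; an automated experimentation pipeline would typically run this kind of analysis step at the end of each iteration.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """z statistic and two-sided p-value for the difference of two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))   # two-sided normal tail
    return z, p_value

# Usage with illustrative counts: variant B converts at 5.6% versus 5.0% for A.
z, p = two_proportion_z_test(successes_a=500, n_a=10_000, successes_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")   # compare against a pre-registered significance level
```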