1,798 research outputs found

    Lightweight adaptive filtering for efficient learning and updating of probabilistic models

    Adaptive software systems are designed to cope with unpredictable and evolving usage behaviors and environmental conditions. Such systems need reasoning mechanisms to drive evolution, usually based on models that capture relevant aspects of the running software. Keeping these models up to date in evolving environments requires efficient learning procedures that have low overhead and are robust to changes. Most available approaches achieve one of these goals at the price of the other. In this paper we propose a lightweight adaptive filter to accurately learn time-varying transition probabilities of discrete-time Markov models; it provides robustness to noise and fast adaptation to changes with very low overhead. A formal assessment of the stability, unbiasedness, and consistency of the learning approach is provided, as well as an experimental comparison with state-of-the-art alternatives.
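
    As an illustration of the kind of estimator the abstract describes, the sketch below keeps a discrete-time Markov model's transition matrix up to date with a simple exponential-forgetting update. The class name, the fixed learning rate, and the uniform initialisation are assumptions made for this example; they are not the paper's filter.

```python
# Minimal sketch: adaptive estimation of time-varying DTMC transition
# probabilities via exponential forgetting (illustrative, not the paper's filter).
import numpy as np

class AdaptiveTransitionEstimator:
    def __init__(self, n_states, forgetting=0.05):
        self.n = n_states
        self.alpha = forgetting                       # fixed learning rate (assumed)
        self.P = np.full((n_states, n_states), 1.0 / n_states)  # uniform start

    def update(self, prev_state, next_state):
        """Blend the newly observed transition into row `prev_state`."""
        observed = np.zeros(self.n)
        observed[next_state] = 1.0
        row = (1.0 - self.alpha) * self.P[prev_state] + self.alpha * observed
        self.P[prev_state] = row / row.sum()          # keep the row stochastic

# Usage: feed a trace of visited states one transition at a time.
est = AdaptiveTransitionEstimator(n_states=3, forgetting=0.1)
trace = [0, 1, 1, 2, 0, 1, 2, 2, 0]
for s, t in zip(trace, trace[1:]):
    est.update(s, t)
print(est.P)
```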

    Evaluating Architectural Safeguards for Uncertain AI Black-Box Components

    Although tremendous progress has been made in Artificial Intelligence (AI), this progress entails new challenges. The growing complexity of learning tasks requires more complex AI components, which increasingly exhibit unreliable behaviour. In this book, we present a model-driven approach to model architectural safeguards for AI components and analyse their effect on the overall system reliability.
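
    As a rough, hypothetical illustration of why such safeguards matter for overall reliability, the snippet below compares a system that uses an unreliable AI black-box component directly with one that wraps it in a monitor plus a safe fallback. All probabilities and the simple composition formula are illustrative assumptions, not the book's analysis model.

```python
# Back-of-envelope comparison: system reliability with and without an
# architectural safeguard (monitor + fallback) around an AI component.
# All numbers below are invented for illustration.
p_ai_correct = 0.92       # probability the AI component behaves correctly
p_detect = 0.95           # probability the monitor catches an AI failure
p_fallback_ok = 0.99      # probability the fallback handles a caught failure

r_without = p_ai_correct
r_with = p_ai_correct + (1 - p_ai_correct) * p_detect * p_fallback_ok
print(f"without safeguard: {r_without:.3f}, with safeguard: {r_with:.3f}")
```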

    Extra Functional Properties Evaluation of Self-managed Software Systems with Formal Methods

    Many of today's software applications are bound to operate in dynamic contexts. These may manifest themselves as changes in the application's execution environment, changes in the application's requirements, changes in the workload received by the application, or changes in any element that the application can perceive and be affected by. Moreover, these dynamic contexts are not restricted to a particular application domain; they appear in many domains, such as embedded systems, service-oriented architectures, high-performance computing clusters, mobile devices, or networking software. These characteristics discourage engineers from developing software that is unable to change its execution in any way to accommodate the context in which it is running at any given moment. Therefore, for the software to satisfy its requirements at all times, it must include mechanisms to change its execution configuration. In addition, since context changes are frequent and affect multiple devices of the application, human intervention to manually change the software configuration is not a feasible solution. To face these challenges, the Software Engineering community has proposed new paradigms that enable the development of software that deals with changing contexts automatically, for example Autonomic Computing and Self-* Software. In such proposals it is the software itself that manages the mechanisms for changing its execution configuration, thus requiring no human intervention. An essential aspect of self-adaptive software (Self-adaptive Software is one of the most general terms for Self-* Software) is planning its changes, or adaptations. Adaptation plans determine both how the software will adapt and the right moments to execute those adaptations. There is a large set of situations for which self-adaptation is a solution. One of them is keeping the system satisfying its extra-functional requirements, such as quality of service (QoS) and energy consumption. This thesis has investigated that situation through the use of formal methods. One contribution of this thesis is a proposal for grounding, in a software architecture, systems that are self-adaptive with respect to their QoS and energy consumption. To this end, this part of the research is guided by a three-layer reference architecture for self-adaptive systems. The benefit of using a reference architecture is that it readily exposes the new challenges in the design of this kind of system. Naturally, adaptation planning is one of the activities considered in the architecture. Another contribution of the thesis is a set of methods for creating adaptation plans. Formal methods play an essential role in this activity, since they make it possible to study the extra-functional properties of systems under different configurations. The formal method used for these analyses is Markovian Petri nets.
Once the adaptation plan has been created, we investigated the use of formal methods for evaluating the QoS and energy consumption of self-adaptive systems. This work thus contributes to the QoS analysis community with the analysis of a new and particularly complex kind of software system. Carrying out this analysis requires modelling the dynamic changes of the execution context, for which a variety of formal methods have been used, such as Markov modulated Poisson processes to estimate the parameters of variations in the workload received by the application, or hidden Markov models to predict the state of the execution environment. These models have been used together with Petri nets to evaluate self-adaptive systems and obtain results on their QoS and energy consumption. The preceding research brought to light the fact that the adaptability of a system is not as easily quantifiable as QoS properties (for example, response time) or energy consumption. Consequently, that direction was investigated and, as a result, another contribution of this thesis is a set of metrics for quantifying the adaptability of service-based systems. Achieving the above contributions makes intensive use of models and model transformations, a task for which best practices from the Model-driven Engineering (MDE) research field have been followed. The research work of this thesis in the MDE field has contributed an increase in the modelling power of a previously proposed software modelling language, and transformation methods from two software modelling languages to stochastic Petri nets.
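
    To make the role of the hidden Markov models concrete, the sketch below runs a standard forward filter over noisy observations and produces a one-step-ahead prediction of the execution environment's state. The transition, observation, and initial matrices are made-up examples; in the thesis such predictions feed into stochastic Petri net models rather than being used in isolation.

```python
# Minimal sketch: HMM forward filtering plus one-step-ahead prediction of
# the execution environment's state. Matrices are illustrative assumptions.
import numpy as np

A = np.array([[0.9, 0.1],    # transition probabilities between env. states
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],    # observation likelihoods per hidden state
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])    # initial state distribution

def filter_and_predict(observations):
    """Forward filtering followed by a one-step-ahead state prediction."""
    belief = pi * B[:, observations[0]]
    belief /= belief.sum()
    for o in observations[1:]:
        belief = (belief @ A) * B[:, o]
        belief /= belief.sum()
    return belief @ A          # predicted distribution for the next step

print(filter_and_predict([0, 0, 1, 1, 1]))
```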

    CPS Data Streams Analytics based on Machine Learning for Cloud and Fog Computing: A Survey

    Cloud and Fog computing have emerged as a promising paradigm for the Internet of Things (IoT) and cyber-physical systems (CPS). One characteristic of CPS is the reciprocal feedback loops between physical processes and cyber elements (computation, software and networking), which implies that data stream analytics is one of the core components of CPS. The reasons for this are: (i) it extracts insights and knowledge from the data streams generated by the various sensors and other monitoring components embedded in the physical systems; (ii) it supports informed decision making; (iii) it enables feedback from the physical processes to the cyber counterparts; and (iv) it eventually facilitates the integration of cyber and physical systems. There have been many successful applications of data stream analytics, powered by machine learning techniques, to CPS. It is therefore necessary to survey the particularities of applying machine learning techniques to the CPS domain. In particular, we explore how machine learning methods should be deployed and integrated in cloud and fog architectures to better fulfil the requirements arising in CPS domains, e.g. mission criticality and time criticality. To the best of our knowledge, this paper is the first to systematically study machine learning techniques for CPS data stream analytics from various perspectives, especially from a perspective that leads to a discussion of, and guidance on, how CPS machine learning methods should be deployed in a cloud and fog architecture.
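
    As a concrete, if simplified, illustration of the deployment question raised above, the sketch below routes a stream-analytics task to a fog node when its deadline or mission criticality rules out a cloud round trip. The task fields, latency figure, and placement rule are invented for the example, not taken from the survey.

```python
# Hypothetical fog-vs-cloud placement rule for CPS stream-analytics tasks,
# driven by time criticality and mission criticality. Values are illustrative.
from dataclasses import dataclass

@dataclass
class StreamTask:
    name: str
    deadline_ms: float        # how quickly a decision is needed
    mission_critical: bool

CLOUD_ROUND_TRIP_MS = 150.0   # assumed latency to a remote cloud service

def place(task: StreamTask) -> str:
    """Prefer the fog tier when the cloud round trip would miss the deadline."""
    if task.mission_critical or task.deadline_ms < CLOUD_ROUND_TRIP_MS:
        return "fog"
    return "cloud"

for t in (StreamTask("anomaly-detection", 50, True),
          StreamTask("model-retraining", 60_000, False)):
    print(t.name, "->", place(t))
```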

    A Framework for Discovery and Diagnosis of Behavioral Transitions in Event-streams

    Data stream mining techniques can be used to track user behaviors as users attempt to achieve their goals. Quality metrics over stream-mined models identify potential changes in user goal attainment. When the quality of a data-mined model, as defined by these metrics, varies significantly from that of nearby models, the user's behavior is automatically flagged as a potentially significant behavioral change. Decision tree, sequence pattern, and hidden Markov modeling are used in this study. These three types of modeling expose different aspects of the user's behavior: with decision tree modeling, specific changes in user behavior can be automatically characterized by differencing the data-mined decision-tree models; sequence pattern modeling sheds light on how the user changes their sequence of actions; and hidden Markov modeling identifies learning transition points. This research describes how model-quality monitoring and these three types of modeling, combined in a generic framework, can aid the recognition and diagnosis of behavioral changes in a case study of cognitive rehabilitation via emailing. The data stream mining techniques mentioned are used to monitor patient goals as part of a clinical plan to aid cognitive rehabilitation. In this context, real-time data mining aids clinicians in tracking user behaviors as patients attempt to achieve their goals. The framework is generic and widely applicable to other real-time, data-intensive analysis problems. To illustrate this, the same hidden Markov modeling is applied to the transactional behavior of a telecommunication company for fraud detection, since fraud can likewise be regarded as a potentially significant change in transactional behavior.
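
    A minimal sketch of the flagging idea follows, under assumed window and threshold settings that are not the study's: a behavioral transition is suspected when the quality of the latest stream-mined model deviates strongly from the quality of nearby models.

```python
# Flag a potential behavioral transition when the latest model's quality
# deviates from its recent neighbours. Window and threshold are assumptions.
from statistics import mean, stdev

def flag_transitions(model_qualities, window=5, z_threshold=2.0):
    """Return indices whose model quality deviates strongly from the preceding window."""
    flags = []
    for i in range(window, len(model_qualities)):
        recent = model_qualities[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(model_qualities[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return flags

qualities = [0.90, 0.91, 0.89, 0.90, 0.92, 0.91, 0.62, 0.60, 0.61]
print(flag_transitions(qualities))   # the quality drop at index 6 is flagged
```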

    Self-adaptation via concurrent multi-action evaluation for unknown context

    Context-aware computing has been attracting growing attention in recent years. Generally, there are several ways for a context-aware system to select a course of action for a particular change of context. One way is for the system developers to encompass all possible context changes in the domain knowledge. Other methods include system inference and adaptive learning, whereby the system executes one action, evaluates the outcome, and self-adapts/self-learns based on that. However, in situations where a system encounters unknown contexts, this iterative approach becomes infeasible as the size of the action space increases. Providing efficient solutions to this problem has been the main goal of this research project. Based on the developed abstract model, the designed methodology replaces single-action implementation and evaluation with multiple actions implemented and evaluated concurrently. This parallel evaluation of actions significantly shortens the time taken to select the action best suited to an unknown context compared to the iterative approach. The designed and implemented framework efficiently carries out concurrent multi-action evaluation when an unknown context is encountered and finds the best course of action. Two concrete implementations of the framework were carried out, demonstrating its usability and adaptability across multiple domains. The first implementation was in the domain of database performance tuning; it demonstrated the ability of the concurrent multi-action evaluation technique to tune a database whose performance has regressed for an unknown reason. The second implementation demonstrated the ability of the framework to correctly determine the threshold price to be used in a name-your-own-price channel when an unknown context is encountered. In conclusion, the research introduces a new paradigm of self-adaptation for context-aware applications. Within the existing body of work, concurrent multi-action evaluation is classified under the abstract concept of experiment-based self-adaptation techniques.
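
    The sketch below illustrates the core idea under stated assumptions: candidate actions are evaluated concurrently, here with a thread pool and a stub utility function standing in for the framework's domain-specific evaluator (for example, measuring query latency in the database-tuning case study), and the best-scoring action is selected.

```python
# Concurrent multi-action evaluation: try candidate actions in parallel
# (e.g. in sandboxes) and keep the best-scoring one. The scoring function
# here is a placeholder, not the thesis framework's evaluator.
from concurrent.futures import ThreadPoolExecutor

def simulate_utility(action):
    # Placeholder utility values; a real evaluator would measure, e.g.,
    # query latency after applying the candidate configuration.
    return {"add_index": 0.8, "resize_buffer": 0.6, "rewrite_query": 0.9}[action]

def evaluate(action):
    """Apply `action` in a sandbox and return its utility score (stubbed)."""
    return simulate_utility(action)

def best_action(candidate_actions, max_workers=4):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        scores = list(pool.map(evaluate, candidate_actions))
    return max(zip(candidate_actions, scores), key=lambda pair: pair[1])

print(best_action(["add_index", "resize_buffer", "rewrite_query"]))
```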

    A framework for robust control of uncertainty in self-adaptive software connectors

    Context and motivations. The desired behavior of a system in ubiquitous environments considers not only its correct functionality, but also the satisfaction of its non-functional properties, i.e., its quality of service. Given the heterogeneity and dynamism characterizing ubiquitous environments and the need for continuous satisfaction of non-functional properties, self-adaptive solutions appear to be an appropriate approach to achieve interoperability. In this work, self-adaptation is adopted to enable software connectors to adapt the interaction protocols run by the connected components to let them communicate in a timely manner and with the required level of quality. However, this self-adaptation should be dependable, reliable and resilient to be adopted in dynamic, unpredictable environments with different sources of uncertainty. The majority of current approaches for the construction of self-adaptive software ignore the uncertainty underlying non-functional requirement verification and adaptation reasoning. Consequently, these approaches jeopardize system reliability and hinder the adoption of self-adaptive software in areas where dependability is of utmost importance. Objective. The main objective of this research is to properly handle the uncertainties in the non-functional requirement verification and the adaptation reasoning parts of the self-adaptive feedback control loop of software connectors. This will enable robust and runtime-efficient adaptation in software connectors and make them reliable for use in uncertain environments. Method. In the context of this thesis, a framework has been developed with the following functionalities: 1) Robust control of uncertainty in runtime requirement verification. The main activity in runtime verification is fine-tuning of the models that are adopted for runtime reasoning. The proposed stochastic approach is able to update the unknown parameters of the models at runtime even in the presence of incomplete and noisy observations. 2) Robust control of uncertainty in adaptation reasoning. A general methodology based on type-2 fuzzy logic has been introduced for the control of adaptation decision-making that adjusts the configuration of component connectors to the appropriate mode. The methodology enables a systematic development of fuzzy logic controllers that can derive the right mode for connectors even in the presence of measurement inaccuracy and adaptation policy conflicts. Results. The proposed model evolution mechanism is empirically evaluated, showing significant precision of parameter estimation with an acceptable overhead at runtime. In addition, the fuzzy-based controller generated by the methodology has been shown to be robust against uncertainties in the input data, efficient in terms of runtime overhead even with large-scale knowledge bases, and stable in terms of control-theoretic properties. We also demonstrate the applicability of the developed framework in a real-world domain. Thesis statement. We enable reliable and dependable self-adaptation of component connectors in unreliable environments with imperfect monitoring facilities and conflicting user opinions about adaptation policies by developing a framework which comprises: (a) mechanisms for robust model evolution, (b) a method for adaptation reasoning, and (c) tool support that allows an end-to-end application of the developed techniques in real-world domains.
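
    As a simplified illustration of the first functionality, runtime fine-tuning of model parameters under noisy and incomplete observations, the sketch below re-estimates an assumed connector service rate with a basic recursive update. The fixed gain and the handling of missing samples are assumptions for the example, not the thesis' stochastic approach.

```python
# Recursive re-estimation of an unknown model parameter (e.g. a connector's
# service rate) from noisy, occasionally missing observations, so the
# runtime verification model stays current. Gain and data are illustrative.
def update_estimate(estimate, observation, gain=0.1):
    """One recursive update; a missing observation (None) leaves the estimate unchanged."""
    if observation is None:
        return estimate
    return estimate + gain * (observation - estimate)

service_rate = 10.0                          # initial guess (requests/s)
observations = [12.1, None, 11.4, 12.8, None, 11.9]
for obs in observations:
    service_rate = update_estimate(service_rate, obs)
print(round(service_rate, 2))
```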