5 research outputs found

    On Designing Self-Adaptive Software Systems

    Self-adaptive systems modify themselves at run-time in order to control the satisfaction of their requirements under changing environmental conditions. Over the past century, feedback loops have served as important models for controlling the dynamic behavior of mechanical, electrical, fluid, and chemical systems in the corresponding fields of engineering. More recently, they have also been adopted for engineering self-adaptive software systems. However, obtaining sound and consistently explicit mappings between adaptive software architectures and feedback loop elements is still an open challenge.
    This paper, recalling a reference model proposed previously with that goal, discusses key aspects of the design of adaptive software in which feedback loop elements are explicitly defined as first-class components of the software architecture. It complements this discussion by illustrating the use of this reference model on a plausible adaptive software example. The paper aims to provide a reference starting point to support software engineers in the process of designing self-adaptive software systems.
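    The idea of making feedback loop elements first-class architectural components can be pictured with a minimal sketch. All names, the proportional-control rule, and the response-time scenario below are illustrative assumptions, not the paper's reference model:

```python
# Minimal sketch of a feedback loop whose elements (sensor, controller,
# managed system) are first-class components. Illustrative only.

class Sensor:
    """Measures a property of the managed system (the 'plant')."""
    def __init__(self, system):
        self.system = system

    def measure(self):
        return self.system.response_time

class Controller:
    """Compares the measured output with the reference input and
    decides a corrective action (simple proportional control)."""
    def __init__(self, reference, gain=0.5):
        self.reference = reference
        self.gain = gain

    def decide(self, measured):
        error = measured - self.reference
        return self.gain * error

class ManagedSystem:
    """The adaptive target: adding servers lowers response time."""
    def __init__(self):
        self.servers = 1
        self.response_time = 800.0  # milliseconds

    def actuate(self, delta_servers):
        self.servers = max(1, self.servers + round(delta_servers))
        self.response_time = 800.0 / self.servers

# Repeated passes of the loop: sense -> decide -> actuate.
system = ManagedSystem()
sensor = Sensor(system)
controller = Controller(reference=200.0)  # target: 200 ms
for _ in range(5):
    error_signal = controller.decide(sensor.measure())
    system.actuate(error_signal / 100.0)  # scale error to a server count
```

    Because each loop element is an explicit component, the mapping between the architecture and the feedback system (reference input, controller, actuator, sensed output) stays visible rather than being buried in application code.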

    Gestion de la qualité de contexte pour l'intelligence ambiante

    Context-aware computing aims to reduce the amount of explicit information required from a user for a system to perform a task. This is particularly true in the recent domain of ambient intelligence, where everyday-life objects are able to trigger an action or a spontaneous information exchange without any interaction with the user. Technical advances in wireless communication, personal mobile devices, sensors, and embedded software make context-aware services possible, but concrete applications are still very limited. The solutions proposed in the literature decompose context management into four functions: acquisition, interpretation, situation detection, and application adaptation. The differentiating element in these proposals is the quality of the high-level context information obtained by inference and characterising the situation of the user.
    The limits of these solutions are the difficulty of composing context information, scalability in terms of both the quantity of context information and the number of client applications, the absence of guarantees on the consistency of context information, and the lack of middleware solutions able to free the designer of context-aware applications from the management of context data. In this thesis, we are interested in the management of the quality of context information (QoC) in an ambient environment. There are several key issues in QoC management: choosing the adequate method for context management, extracting the quality associated with the context, and analysing and interpreting the quality of the context with regard to the requirements of context-aware applications. We propose to answer these questions by integrating QoC management into the COSMOS context management framework (http://picoforge.int-evry.fr/projects/svn/cosmos) developed by the MARGE team (http://www-inf.itsudparis.eu/MARGE) of Télécom SudParis. For this purpose, we have designed the components dedicated to QoC management and implemented mechanisms allowing fine-grained manipulation of the QoC while limiting the associated overhead. We also propose a design process based on model-driven engineering in order to automatically generate the elements responsible for QoC management. We validate our contributions through the development of two prototype applications running on mobile phones: a flash-sale offer application to be used in malls and a location-detection application proposed to the students of a campus. The performance tests we have conducted compare the results obtained with and without taking the QoC into account, and show the low overhead of QoC management with regard to the benefits brought to context-aware applications and services.
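    One common way to realise QoC management is to attach quality indicators (such as freshness and accuracy) to each context observation and let clients filter on them. The sketch below illustrates that general pattern; the class and function names are hypothetical, not COSMOS APIs:

```python
# Illustrative sketch of quality-of-context (QoC) metadata attached to
# context observations, with a client-side filter on freshness and
# accuracy. Names and thresholds are assumptions, not COSMOS APIs.

import time
from dataclasses import dataclass

@dataclass
class ContextObservation:
    name: str            # e.g. "location"
    value: object
    timestamp: float     # seconds since epoch
    accuracy_m: float    # estimated error in metres

    def freshness(self, now=None):
        """Age of the observation in seconds (lower is fresher)."""
        now = time.time() if now is None else now
        return now - self.timestamp

def usable(obs, max_age_s, max_error_m, now=None):
    """A QoC check: accept only sufficiently fresh and accurate data."""
    return obs.freshness(now) <= max_age_s and obs.accuracy_m <= max_error_m

now = 1_000.0
fresh = ContextObservation("location", (48.62, 2.44),
                           timestamp=now - 5, accuracy_m=12.0)
stale = ContextObservation("location", (48.62, 2.45),
                           timestamp=now - 120, accuracy_m=3.0)
print(usable(fresh, max_age_s=30, max_error_m=20, now=now))  # True
print(usable(stale, max_age_s=30, max_error_m=20, now=now))  # False
```

    Filtering at this level also bounds the overhead: only observations that pass the QoC check need to be propagated to inference and situation-detection stages.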

    The role of volunteered geographic information in land administration systems in developing countries

    PhD Thesis. Developing countries, especially in Africa, are faced with a lack of formally registered land. The limited records available are outdated, inaccurate, and unreliable, which makes it a challenge to properly administer and manage land and its resources. Moreover, the limited maintenance budgets prevalent in these countries make it difficult for organizations to conduct regular systematic updates of geographic information. Despite these challenges, geographic information still forms a major component of effective land administration. For a land administration system (LAS) to remain useful, it must reflect realities on the ground, and this can only be achieved if land information is reported regularly. However, if changes in land are not captured in properly administered land registers, LAS lose societal relevance and are eventually replaced by informal systems. Volunteered Geographic Information (VGI) can address these LAS challenges by providing timely, affordable, up-to-date, flexible, and fit-for-purpose (FFP) land information to support the limited current systems. Nonetheless, the involvement of volunteers, who in most cases are untrained or non-experts in handling geographic information, implies that VGI can be of varying quality. VGI is thus characterised by unstructured, heterogeneous, and unreliable data, which makes data integration for value-added purposes difficult to effect. These quality challenges can make land authorities reluctant to incorporate the contributed datasets into their official databases. This research has developed an innovative approach for establishing the quality and credibility of VGI such that it can be considered in LAS on an FFP basis. However, verifying volunteer efforts can be difficult without reference to ground truth, a situation prevalent in many developing countries. Therefore, a novel Trust and Reputation Modelling (TRM) methodology is proposed as a suitable technique to effect such VGI validation.
    TRM relies on the view that the public can police themselves in establishing ‘proxy’ measures of VGI quality and of the credibility of volunteers, thus facilitating the use of VGI on an FFP basis in LAS. The output of this research is a conceptual participatory framework for FFP land administration based on VGI. The framework outlines the major aspects (social, legal, technical, and institutional) necessary for establishing a participatory FFP LAS in developing countries. University of Botswana
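    The "public policing itself" idea behind reputation modelling can be illustrated with a toy score: a contributor's reputation tracks how well peers corroborate their contributions. This is a generic running-average sketch, not the thesis's TRM methodology:

```python
# Toy trust-and-reputation sketch for VGI: a contributor's reputation is
# the running average of peer agreement (0..1) with their contributions.
# Generic illustration only, not the thesis's TRM model.

def update_reputation(reputation, n_contributions, peer_agreement):
    """Fold one contribution's peer-agreement score (0..1) into the
    contributor's running-average reputation."""
    total = reputation * n_contributions + peer_agreement
    return total / (n_contributions + 1), n_contributions + 1

rep, n = 0.0, 0
for agreement in [0.9, 0.8, 1.0]:  # three well-corroborated edits
    rep, n = update_reputation(rep, n, agreement)
print(round(rep, 2))  # 0.9
```

    A score like this acts as the ‘proxy’ quality measure the abstract mentions: authorities can accept contributions from high-reputation volunteers on an FFP basis even when no ground truth is available for direct validation.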

    Querying and cleaning uncertain data

    No full text
    LNCS v. 5786 is the Proceedings of the 1st International Workshop, QuaCon 2009. Invited Paper. The management of uncertainty in large databases has recently attracted tremendous research interest. Data uncertainty is inherent in many emerging and important applications, including location-based services, wireless sensor networks, biometric and biological databases, and data stream applications. In these systems, it is important to manage data uncertainty carefully, in order to make correct decisions and provide high-quality services to users. To enable the development of these applications, uncertain database systems have been proposed. They consider data uncertainty as a "first-class citizen", use generic data models to capture uncertainty, and provide query operators that return answers with statistical confidences. We summarize our work on uncertain databases in recent years. We explain how data uncertainty can be modeled, and present a classification of probabilistic queries (e.g., range query and nearest-neighbor query). We further study how probabilistic queries can be efficiently evaluated and indexed. We also highlight the issue of removing uncertainty under a stringent cleaning budget, in an attempt to generate high-quality probabilistic answers. © 2009 Springer Berlin Heidelberg. The 1st International Workshop on Quality of Context (QuaCon 2009), Stuttgart, Germany, 25-26 June 2009. In Lecture Notes in Computer Science, 2009, v. 5786, p. 41-5
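    A probabilistic range query of the kind classified above can be illustrated with a simple model: each uncertain value is uniform over an interval, and the query returns each object with the probability that it falls inside the query range. This is an assumed 1-D uniform model for illustration, not the paper's exact formulation:

```python
# Sketch of a probabilistic range query over uncertain 1-D data. Each
# object's true value is uniform on [lo, hi]; the query [q_lo, q_hi]
# returns each object's qualification probability. Illustrative model.

def qualification_prob(lo, hi, q_lo, q_hi):
    """P(value in [q_lo, q_hi]) for a value uniform on [lo, hi]."""
    overlap = max(0.0, min(hi, q_hi) - max(lo, q_lo))
    width = hi - lo
    return overlap / width if width > 0 else float(q_lo <= lo <= q_hi)

# Three uncertain objects and a range query over [5, 11].
objects = {"a": (0.0, 10.0), "b": (8.0, 12.0), "c": (20.0, 30.0)}
answers = {k: qualification_prob(lo, hi, 5.0, 11.0)
           for k, (lo, hi) in objects.items()}
# Keep only answers whose statistical confidence meets a threshold.
confident = {k: p for k, p in answers.items() if p >= 0.5}
```

    Attaching a probability to every answer is what lets the system report results with statistical confidences instead of a hard (and possibly wrong) yes/no.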

    A probabilistic filter protocol for continuous queries

    No full text
    LNCS v. 5786 is the Proceedings of the 1st International Workshop, QuaCon 2009. Research Paper. Pervasive applications, such as location-based services and natural habitat monitoring, have attracted plenty of research interest. These applications make use of a large number of remote positioning devices, such as Global Positioning System (GPS) receivers, for collecting users' physical locations. Generally, these devices have limited battery power, and they cannot report very accurate position values. In this paper, we consider the evaluation of a long-standing (or continuous) query over inaccurate location data collected from positioning devices. Our goal is to develop an energy-efficient protocol that provides some degree of confidence on query answers evaluated over imperfect data. In particular, we propose the probabilistic filter, which governs GPS devices in deciding whether collected location values should be reported to the server. We further discuss how these filters can be developed. This scheme reduces the cost of transmitting location updates, and hence the energy spent by the GPS devices. It also allows some portion of query processing to be deployed to the devices, thereby alleviating the processing burden of the server. © 2009 Springer Berlin Heidelberg. The 1st International Workshop on Quality of Context (QuaCon 2009), Stuttgart, Germany, 25-26 June 2009. In Lecture Notes in Computer Science, 2009, v. 5786, p. 88-9
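    The core filtering idea can be sketched with a simple tolerance rule: the device reports a new position only when it has moved beyond a threshold from the last value the server knows, suppressing all other updates. The class, the distance rule, and the tolerance value are illustrative assumptions, not the paper's protocol:

```python
# Sketch of a device-side filter in the spirit of the protocol above:
# report a position only when it deviates enough from the last reported
# value, saving transmission energy. Illustrative assumptions only.

import math

class DeviceFilter:
    def __init__(self, tolerance_m):
        self.tolerance_m = tolerance_m
        self.last_reported = None

    def should_report(self, position):
        """Decide locally whether to send this reading to the server."""
        if self.last_reported is None:
            return True  # server knows nothing yet
        dx = position[0] - self.last_reported[0]
        dy = position[1] - self.last_reported[1]
        return math.hypot(dx, dy) > self.tolerance_m

    def observe(self, position):
        if self.should_report(position):
            self.last_reported = position
            return "report"
        return "suppress"

f = DeviceFilter(tolerance_m=50.0)
actions = [f.observe(p) for p in [(0, 0), (10, 10), (60, 0), (65, 5)]]
# → ['report', 'suppress', 'report', 'suppress']
```

    Because the decision runs on the device, part of the query processing moves off the server, and the tolerance bounds the server's uncertainty about each suppressed position, which is what makes confidence-qualified answers possible.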