4 research outputs found

    Dynamic verification of mashups of service-oriented things through a mediation platform

    The Internet is evolving towards the vision of the Internet of Things, where physical-world entities are integrated into virtual-world things. Things are expected to become active participants in business, information and social processes. The Internet of Things could thus benefit from the Web Service architecture just as today’s Web does, so future service-oriented Internet things will offer their functionality via service-enabled interfaces. As demonstrated in previous work, the behaviour of things needs to be considered in order to develop applications in a more rigorous way. We proposed a lightweight model for representing such behaviour, based on the service-oriented paradigm and extending the standard DPWS profile to specify the (partial) order in which things can receive messages. To check whether a mashup of things respects the behaviour of the composed things, as specified at design time, we proposed a static verification. However, at run time a thing may change its behaviour or receive requests from instances of different mashups, so invalid invocations provoked by such behavioural changes must be detected dynamically. In this work, we extend our static verification with an approach based on mediation techniques and complex event processing to detect and inhibit invalid invocations, checking that things only receive requests compatible with their behaviour. The solution automatically generates the elements required to perform run-time validation of invocations, and it may be extended to validate other concerns; here, we also deal with quality of service and temporal restrictions.
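
    As an illustration of the kind of check such a mediator performs, the sketch below validates each incoming invocation against a thing's declared behaviour, modelled here as a simple state machine over operation names. All identifiers (BehaviourSpec, Mediator, the lamp example) are hypothetical stand-ins for the DPWS-based model described above, not the authors' implementation.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical behaviour spec: a state machine over operation names. */
final class BehaviourSpec {
    private final Map<String, Map<String, String>> transitions = new HashMap<>();

    /** Declare that operation op is valid in state from and leads to state to. */
    void allow(String from, String op, String to) {
        transitions.computeIfAbsent(from, s -> new HashMap<>()).put(op, to);
    }

    /** Next state, or null if op is not allowed in the current state. */
    String next(String state, String op) {
        return transitions.getOrDefault(state, Map.of()).get(op);
    }
}

/** Mediator that inhibits invocations incompatible with a thing's behaviour. */
final class Mediator {
    private final BehaviourSpec spec;
    private String state;

    Mediator(BehaviourSpec spec, String initialState) {
        this.spec = spec;
        this.state = initialState;
    }

    /** Validate an incoming request and forward it only if it is compatible. */
    synchronized boolean invoke(String operation) {
        String next = spec.next(state, operation);
        if (next == null) {
            System.out.println("inhibited: " + operation + " is invalid in state " + state);
            return false; // the invalid invocation never reaches the thing
        }
        state = next;
        System.out.println("forwarded: " + operation + ", new state " + state);
        return true;
    }
}

public class MediationDemo {
    public static void main(String[] args) {
        // Example behaviour: a lamp must be switched on before it can be dimmed.
        BehaviourSpec lamp = new BehaviourSpec();
        lamp.allow("off", "switchOn", "on");
        lamp.allow("on", "dim", "on");
        lamp.allow("on", "switchOff", "off");

        Mediator m = new Mediator(lamp, "off");
        m.invoke("dim");      // inhibited: not allowed while off
        m.invoke("switchOn"); // forwarded
        m.invoke("dim");      // forwarded
    }
}
```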

    Temporal and Contextual Dependencies in Relational Data Modeling

    Although a solid theoretical foundation of relational data modeling has existed for decades, a critical reassessment from the perspective of temporal requirements reveals shortcomings in its integrity constraints. We motivate this work by discussing how existing relational databases fail to ensure the correctness of time-sensitive data. The analysis is particularly relevant today, when the inadequacy of relational databases to cater to all requirements has led to new forms of database systems such as temporal databases, active databases, real-time databases, and NoSQL (non-relational) databases. In relational databases, temporal requirements have been handled either at the application level using scripts or through manual assistance, but no attempts have been made to address them at the design level. These are requirements whose metadata must change as time progresses, which remains unsupported by Relational Database Management Systems (RDBMS) to date. Starting from shortcomings of data, entity, and referential integrity in relational data modeling, we propose a new form of integrity that works at a finer level of granularity. We also present several important concepts, including temporal dependency, contextual dependency, and cell-level integrity. We then introduce cellular constraints to implement the proposed integrity and dependencies, and show how they can be incorporated into the relational data model to enable RDBMS to handle temporal requirements in the future. Overall, we provide a formal description of the temporal requirements problem in the relational data model and design a framework for solving it. We supplement our proposition with examples, experiments and results.
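
    To make the idea of a cell-level, time-dependent constraint concrete, the sketch below shows a check whose verdict depends on the current date, so the very same stored value can become invalid as time progresses. The names (CellularConstraint, the licence example) are illustrative assumptions, not the constraint mechanism proposed in the paper.

```java
import java.time.LocalDate;
import java.util.function.BiPredicate;

/** Hypothetical cell-level constraint whose verdict can change as time passes. */
final class CellularConstraint<T> {
    private final String name;
    private final BiPredicate<T, LocalDate> valid; // (cell value, current date)

    CellularConstraint(String name, BiPredicate<T, LocalDate> valid) {
        this.name = name;
        this.valid = valid;
    }

    /** Re-evaluated on every access: the same stored value may become invalid later. */
    void check(T value, LocalDate today) {
        if (!valid.test(value, today)) {
            throw new IllegalStateException(name + " violated for " + value + " on " + today);
        }
    }
}

public class TemporalIntegrityDemo {
    public static void main(String[] args) {
        // A licence number is only meaningful while the licence is unexpired; a plain
        // CHECK constraint cannot express this, because nothing in the stored row
        // changes -- only time does.
        record Licence(String number, LocalDate expires) {}

        CellularConstraint<Licence> unexpired = new CellularConstraint<>(
                "licence-unexpired", (lic, today) -> !today.isAfter(lic.expires()));

        Licence lic = new Licence("DL-42", LocalDate.of(2024, 6, 30));
        unexpired.check(lic, LocalDate.of(2024, 1, 1)); // passes: still in date
        try {
            unexpired.check(lic, LocalDate.of(2025, 1, 1)); // same cell, later date
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // the value became invalid over time
        }
    }
}
```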

    Performance Evaluation and Benchmarking of Event Processing Systems

    Doctoral thesis in Information Sciences and Technologies presented to the Faculty of Sciences and Technology of the University of Coimbra. (The Portuguese abstract duplicates the English one below.)

    This thesis aims at studying, comparing, and improving the performance and scalability of event processing (EP) systems. In the last 15 years, event processing systems have gained increasing attention from academia and industry, having found application in a number of mission-critical scenarios and motivated the onset of several research projects and specialized startups. Nonetheless, there has been a general lack of information, evaluation methodologies and tools concerning the performance of EP platforms. Until recently, it was not clear which factors impact their performance most, whether the systems would scale well and adapt to changes in load conditions, or whether they had any serious limitations. Moreover, the lack of standardized benchmarks hindered any objective comparison among the diverse platforms. In this thesis, we tackle these problems on several fronts. First, we developed FINCoS, a set of benchmarking tools for load generation and performance measurement of event processing systems. The framework has been designed to be independent of any particular workload or product, so that it can be reused in multiple performance studies and benchmark kits. FINCoS has been made publicly available under the terms of the GNU General Public License and is currently hosted at the Standard Performance Evaluation Corporation (SPEC) repository of peer-reviewed tools for quantitative system evaluation and analysis. We then defined a set of microbenchmarks and used them to conduct an extensive performance study of three EP systems. This analysis helped identify critical factors affecting the performance of event processing platforms and exposed important limitations of the products, such as poor utilization of resources, thrashing or failures in the presence of memory shortages, and absent or incipient query-plan sharing capabilities. With these results in hand, we moved our focus to performance enhancement. To improve resource utilization, we proposed novel algorithms and evaluated alternative data organization schemes that not only substantially reduce memory consumption but are also significantly more efficient at the microarchitectural level. Our experimental evaluation corroborated the efficacy of the proposed optimizations: together they provided a 6-fold reduction in memory usage and an order-of-magnitude increase in query throughput. In addition, we addressed the problem of memory-constrained applications by introducing SlideM, an optimal buffer-management algorithm that selectively offloads sliding-window state to disk when main memory becomes insufficient. We also developed a strategy based on SlideM to share computational resources when processing multiple aggregation queries over overlapping sliding windows. Our experimental results demonstrate that, contrary to common sense, storing window data on disk can be appropriate even for applications with very high event arrival rates. We concluded this thesis by proposing the Pairs benchmark. Pairs was designed to assess the ability of EP platforms to process increasingly large numbers of simultaneous queries and event arrival rates while providing quick answers. The benchmark workload exercises several features that appear repeatedly in most event processing applications, including event filtering, aggregation, correlation and pattern detection. Furthermore, unlike previous proposals in related areas, Pairs allows evaluating important aspects of event processing systems such as adaptivity and query scalability. In general, we expect that the findings and proposals presented in this thesis will broaden the understanding of the performance of event processing platforms and open avenues for further improvements in the current generation of EP systems.

    FCT Nº 45121/200
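
    The sketch below illustrates the core property a SlideM-style algorithm can exploit: sliding-window events expire in exactly the order they arrived, so overflow state can be appended to disk sequentially and read back sequentially just before expiration. It is a simplified illustration under that assumption, not the thesis implementation; all class names are hypothetical.

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;

/** Illustrative spill-to-disk buffer for sliding-window state (not the thesis code). */
final class SpillingWindow implements Closeable {
    private final int budget;                                   // max in-memory events per segment
    private final ArrayDeque<String> head = new ArrayDeque<>(); // oldest events, next to expire
    private final ArrayDeque<String> tail = new ArrayDeque<>(); // newest events, just arrived
    private final Path spill = Files.createTempFile("window", ".spill");
    private final BufferedWriter out = Files.newBufferedWriter(spill);
    private final BufferedReader in = Files.newBufferedReader(spill);
    private int onDisk = 0;                                     // spilled events not yet re-read

    SpillingWindow(int budget) throws IOException { this.budget = budget; }

    /** New events enter at the tail; overflow is appended to disk in arrival order. */
    void insert(String event) throws IOException {
        tail.addLast(event);
        if (tail.size() > budget) {   // memory is short: spill the oldest tail event
            out.write(tail.pollFirst());
            out.newLine();
            onDisk++;
        }
    }

    /** Expire the oldest event; spilled events are re-read strictly sequentially. */
    String expireOldest() throws IOException {
        if (head.isEmpty()) refillHead();
        return head.pollFirst();      // null once the window is empty
    }

    private void refillHead() throws IOException {
        out.flush();
        for (int i = 0; i < budget && onDisk > 0; i++, onDisk--)
            head.addLast(in.readLine());   // sequential disk read, no random access
        if (head.isEmpty())                // nothing spilled: drain directly from tail
            while (!tail.isEmpty() && head.size() < budget)
                head.addLast(tail.pollFirst());
    }

    @Override public void close() throws IOException {
        out.close();
        in.close();
        Files.deleteIfExists(spill);
    }
}

public class SlideMDemo {
    public static void main(String[] args) throws IOException {
        try (SpillingWindow w = new SpillingWindow(2)) {
            for (int i = 1; i <= 6; i++) w.insert("e" + i);   // only 4 events stay in RAM
            for (int i = 1; i <= 6; i++)
                System.out.println(w.expireOldest());         // prints e1..e6 in order
        }
    }
}
```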

    BiCEP Benchmarking Complex Event Processing Systems

    Abstract. BiCEP is a new project being started at the University of Coimbra to benchmark Complex Event Processing (CEP) systems. Although BiCEP is still in its early stages, we list here some of the design considerations that will drive our future work and some of the metrics we plan to include in the benchmark.
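
    As a toy illustration of two metrics a CEP benchmark typically reports, the sketch below computes sustained throughput and nearest-rank latency percentiles over a batch of events. The numbers and names are hypothetical, not BiCEP's actual metric definitions.

```java
import java.util.Arrays;

/** Toy illustration of typical CEP benchmark metrics: throughput and latency percentiles. */
public class CepMetricsDemo {
    public static void main(String[] args) {
        // Hypothetical per-event end-to-end latencies for one measured batch.
        long[] latenciesMicros = {120, 95, 310, 87, 150, 4200, 101, 99, 240, 130};
        long elapsedMillis = 2; // wall-clock time taken to process the batch

        double throughput = latenciesMicros.length / (elapsedMillis / 1000.0);
        System.out.printf("throughput: %.0f events/s%n", throughput);

        long[] sorted = latenciesMicros.clone();
        Arrays.sort(sorted);
        // Percentiles expose tail behaviour that an average would hide.
        System.out.printf("p50: %d us, p90: %d us, max: %d us%n",
                percentile(sorted, 50), percentile(sorted, 90), sorted[sorted.length - 1]);
    }

    /** Nearest-rank percentile over a sorted array. */
    static long percentile(long[] sorted, int p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, idx)];
    }
}
```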