
    QoS-aware architectures, technologies, and middleware for the cloud continuum

    The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware support are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, potentially better suited to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) deployments, e.g., in the industrial manufacturing domain, are expected to serve a wide range of applications with heterogeneous QoS requirements and call for QoS management systems to guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT); ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard; iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments; and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middlewares: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts and ii) a QoS-aware middleware for Serverless platforms that coordinates various QoS mechanisms and a virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
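
    As a hedged illustration of the end-to-end QoS coordination described above (not the dissertation's actual middleware), the Python sketch below routes each FaaS invocation to the replica whose recent latency best fits a per-request budget; the QoSDispatcher class, the replica names, and the 50 ms default budget are assumptions made for the example.

        # A minimal sketch, assuming hypothetical edge/cloud replicas of one
        # function; it tracks observed latencies and picks the fastest replica.
        import time
        import statistics
        from collections import defaultdict, deque

        class QoSDispatcher:
            """Route each invocation to the replica with the best recent latency."""
            def __init__(self, replicas):
                self.replicas = replicas                       # name -> callable
                self.history = defaultdict(lambda: deque(maxlen=50))

            def invoke(self, payload, budget_ms=50.0):
                def median_latency(name):
                    h = self.history[name]
                    return statistics.median(h) if h else 0.0  # try unseen first
                target = min(self.replicas, key=median_latency)
                start = time.monotonic()
                result = self.replicas[target](payload)
                elapsed = (time.monotonic() - start) * 1000.0
                self.history[target].append(elapsed)
                if elapsed > budget_ms:                        # QoS feedback hook
                    print(f"{target} missed the {budget_ms} ms budget")
                return result

        dispatcher = QoSDispatcher({"edge": lambda p: p.upper(),
                                    "cloud": lambda p: p.upper()})
        print(dispatcher.invoke("sensor frame"))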

    The Role of a Microservice Architecture on cybersecurity and operational resilience in critical systems

    Critical systems are characterized by their high degree of intolerance to threats, in other words, by the high level of resilience they require: depending on the context in which the system operates, the slightest failure could imply significant damage, whether in economic terms or as a loss of reputation, information, infrastructure, the environment, or human life. The security of such systems is traditionally associated with legacy infrastructures and data centers of a monolithic nature, which translates into increasingly demanding evolution and protection challenges. In the current context of rapid transformation, where the variety of threats to systems has been consistently increasing, this dissertation carries out a compatibility study of the microservice architecture, which is noted for characteristics such as resilience, scalability, modifiability, and technological heterogeneity, and for its flexibility under structural adaptation and in rapidly evolving, highly complex settings, making it well suited to agile environments. It also explores the response that artificial intelligence, more specifically machine learning, can provide in a context of security and monitorability when combined with a simple banking system that adopts the microservice architecture.
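
    As a hedged illustration of the kind of machine-learning response the dissertation explores (not its actual system), the sketch below applies unsupervised anomaly detection to per-request metrics that banking microservices might emit; the feature set, the simulated data, and the contamination rate are assumptions made for the example.

        # A minimal sketch, assuming synthetic request metrics; -1 marks anomalies.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)
        # Columns: transaction amount, hour of day, requests/min from one client.
        normal_traffic = np.column_stack([rng.gamma(2.0, 50.0, 1000),
                                          rng.integers(8, 20, 1000),
                                          rng.poisson(3, 1000)])
        model = IsolationForest(contamination=0.01, random_state=0)
        model.fit(normal_traffic)

        suspicious = np.array([[9500.0, 3, 40]])  # huge amount, 3 a.m., burst
        print(model.predict(suspicious))          # [-1] flags an anomaly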

    Internet-of-Things Streaming over Real-time Transport Protocol: A reusability-oriented approach to enable IoT Streaming

    The Internet of Things (IoT), as a group of technologies, is gaining momentum to become a prominent factor in novel applications. High computing capability and a vast number of IoT devices can be observed in the market today; however, transport protocols are required to bridge these two advantages. This thesis discusses the delivery of IoT data through the lens of a few selected streaming protocols: the Real-time Transport Protocol (RTP) and its companion protocols, the RTP Control Protocol (RTCP) and the Session Initiation Protocol (SIP). These protocols support the transfer of multimedia content with heavy streaming requirements. The main contribution of this work is a multi-layer reusability schema for IoT streaming over RTP. IoT streaming is defined as a new concept, and its characteristics are introduced to clarify its requirements. The RTP stack and its commercial implementation, VoLTE (Voice over LTE), are then investigated to collect technical insights. Based on this distilled knowledge, the application areas for IoT usage and the adoption methods are described. In addition, prototypes were built as a proof of concept for streaming IoT data with RTP functionality between distant devices. These prototypes demonstrated that the two-plane architecture (signaling/data transfer) widely used in RTP implementations for multimedia services can be applied to IoT as well. Following an IETF standard, this implementation is a minimal example of adopting an existing standard for IoT streaming applications.
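
    As a hedged illustration of the data-transfer plane (not the thesis prototype), the sketch below packs an IoT sensor reading into a minimal RTP packet following the fixed header layout of RFC 3550 and sends it over UDP; the dynamic payload type 96, the JSON payload, and the destination address are assumptions made for the example.

        # A minimal sketch, assuming payload type 96 and a local test receiver.
        import json
        import socket
        import struct
        import time

        def rtp_packet(payload: bytes, seq: int, ts: int, ssrc: int,
                       pt: int = 96) -> bytes:
            byte0 = 2 << 6                    # version 2, no padding/extension/CSRC
            byte1 = pt & 0x7F                 # marker bit clear, payload type
            header = struct.pack("!BBHII", byte0, byte1,
                                 seq & 0xFFFF, ts & 0xFFFFFFFF, ssrc)
            return header + payload           # 12-byte fixed header per RFC 3550

        reading = json.dumps({"sensor": "temp-01", "celsius": 21.4}).encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(rtp_packet(reading, seq=1, ts=int(time.time()), ssrc=0x1234ABCD),
                    ("127.0.0.1", 5004))      # 5004: conventional RTP test port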

    Integration of a Contextual Observation System in a Multi-Process Architecture for Autonomous Vehicles

    We propose a layered software architecture for autonomous vehicles whose efficiency is driven by pull-based acquisition of sensor data. This multiprocess software architecture, to be embedded into the control loop of these vehicles, includes a Belief-Desire-Intention (BDI) agent that can consistently assist in the achievement of intentions. Since driving on roads involves highly dynamic conditions, we tackle both reactivity and context awareness in the execution loop of the vehicle. The proposed architecture gradually offers four levels of reactivity, from reflex arcs to deep modification of the previously built execution plan, while the observation module concurrently exploits noise filtering and introduces frequency control to allow symbolic feature extraction; both fuzzy and first-order logic are used to enforce consistency and certainty over the context-information properties. The presented use case, the daily delivery to a network of pharmacies by an autonomous vehicle taking contextual (spatio-temporal) traffic features into account, shows the efficiency and modularity of the architecture, as well as the scalability of the reaction levels.
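
    As a hedged illustration of the pull-based execution loop and the graduated reaction levels (not the paper's implementation), the sketch below polls a range sensor at a controlled frequency and escalates through four reactivity levels; the level names, distance thresholds, and 50 ms period are assumptions made for the example.

        # A minimal sketch, assuming a single range sensor and four levels.
        import time
        from enum import IntEnum

        class Reactivity(IntEnum):
            REFLEX = 1           # e.g. emergency braking, bypasses the planner
            LOCAL_AVOID = 2      # short-horizon trajectory adjustment
            REPLAN_SEGMENT = 3   # rebuild the current plan segment
            REPLAN_MISSION = 4   # deep modification of the execution plan

        def classify(obstacle_distance_m: float) -> Reactivity:
            if obstacle_distance_m < 5:
                return Reactivity.REFLEX
            if obstacle_distance_m < 20:
                return Reactivity.LOCAL_AVOID
            if obstacle_distance_m < 100:
                return Reactivity.REPLAN_SEGMENT
            return Reactivity.REPLAN_MISSION

        def control_loop(read_range_sensor, period_s=0.05, cycles=3):
            for _ in range(cycles):             # the loop pulls data; sensors
                distance = read_range_sensor()  # never push asynchronously
                print(classify(distance).name)
                time.sleep(period_s)

        control_loop(lambda: 12.0)              # prints LOCAL_AVOID three times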

    Integration of usability into the IoT paradigm in telehealth: Automation in the service of usability

    In recent years, developed countries have undergone a demographic shift driven by the growth of the elderly population and the decline of the birth rate. The prominence of these factors in today's societies has triggered societal, technical, and economic challenges in several areas. Among these areas, healthcare stands out for its sensitivity and its relevance to the daily life of users with special needs (elderly people, people with motor disabilities, among others). To mitigate the challenges imposed on healthcare systems, information and communication technologies have been adopted to design dedicated solutions that address specific needs: Ambient Assisted Living (AAL) ecosystems. Despite their current state of development, these ecosystems face multiple challenges related to autonomy, robustness, security, integration, human-computer interaction, data storage, and usability, which constrain their acceptance among the main stakeholders [1][2][3][4]. The focus of the development of this type of ecosystem on the technological paradigm has fostered specific applications centred on mitigating technical-scientific gaps [5][6][7][8], and is pointed to as one of the reasons for their current adoption levels. Maximising their introduction into the market requires that their design be centred on the end user, in terms of design and of functional and non-functional requirements, and that it take into account the context of integration and continuity of care within a complex system, accounting for the multidimensional diversity of users, the nature of the tasks, the context of use, and the technological platforms [5]. In this context, usability and perceived usefulness take on a prominent role, owing to their close relationship with the target audience. The growing business need to minimise the time required to bring products to market has pushed the usability of the designed product into the background [9][10][11]. This factor, combined with the slowness of the analysis process and its many dependencies (the availability of professionals in the field and of end users to test the designed prototypes, among others), makes an extensive study of product usability before, during, and after development unfeasible. To mitigate the gaps identified in this process in terms of execution time and explicit dependencies, the aim is to equip development teams with a tool that analyses the designed product in real time against the guidelines defined in the literature. Quantifying the specified guidelines requires their parametrisation based on the information available in the literature. Accordingly, this thesis compiles the parameters needed to quantify the guidelines defined in the literature by Jakob Nielsen, Gerhardt-Powals, Shneiderman, Weinschenk and Barker, and Tognazzini. This parametrisation lays the groundwork for translating broad guidelines into business rules to be used in building a tool for real-time usability analysis of interfaces, a tool that will give the direct participants in the development cycle, the programmers, an objective way to analyse the usability of the designed product without requiring the intervention of external entities in the initial stage.
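
    As a hedged illustration of translating a usability guideline into a quantifiable business rule, the sketch below checks a text element against a parametrised contrast and font-size rule; the contrast formula follows WCAG 2.x, and using it with these thresholds to operationalise readability heuristics is an assumption made for the example, not the thesis's parametrisation.

        # A minimal sketch, assuming WCAG-style thresholds stand in for the
        # parametrised heuristics compiled in the thesis.
        def relative_luminance(rgb):
            def channel(c):
                c /= 255.0
                return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
            r, g, b = (channel(v) for v in rgb)
            return 0.2126 * r + 0.7152 * g + 0.0722 * b

        def contrast_ratio(fg, bg):
            lighter, darker = sorted((relative_luminance(fg),
                                      relative_luminance(bg)), reverse=True)
            return (lighter + 0.05) / (darker + 0.05)

        def check_text_element(fg, bg, font_px, min_ratio=4.5, min_font_px=12):
            issues = []                        # violated rules, reported live
            if contrast_ratio(fg, bg) < min_ratio:
                issues.append("insufficient colour contrast")
            if font_px < min_font_px:
                issues.append("font too small for low-vision users")
            return issues

        print(check_text_element(fg=(119, 119, 119), bg=(255, 255, 255), font_px=11))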

    Serverless middlewares to integrate heterogeneous and distributed services in cloud continuum environments

    The application of modern ICT technologies is radically changing many fields, pushing toward more open and dynamic value chains and fostering the cooperation and integration of many connected partners, sensors, and devices. A valuable example is the emerging Smart Tourism field, derived from the application of ICT to Tourism to create richer and more integrated experiences and make them more accessible and sustainable. From a technological viewpoint, a recurring challenge in these decentralized environments is the integration of heterogeneous services and data spanning multiple administrative domains, each possibly applying different security/privacy policies, device and process control mechanisms, service access and provisioning schemes, etc. The distribution and heterogeneity of those sources exacerbate the complexity of developing integration solutions, with consequent high effort and costs for partners seeking them. Taking a step toward addressing these issues, we propose APERTO, a decentralized and distributed architecture that aims at facilitating the blending of data and services. At its core, APERTO relies on APERTO FaaS, a Serverless platform allowing fast prototyping of the business logic, lowering the barrier of entry and development costs for newcomers, fine-grained (down-to-zero) scaling of the resources servicing end users, and reduced management overhead. The APERTO FaaS infrastructure is based on asynchronous and transparent communications between the components of the architecture, allowing the development of optimized solutions that exploit the peculiarities of distributed and heterogeneous environments. In particular, APERTO addresses the provisioning of scalable and cost-efficient mechanisms targeting: i) function composition, allowing the definition of complex workloads from simple, ready-to-use functions, enabling smarter management of complex tasks and improved multiplexing capabilities; ii) the creation of end-to-end differentiated QoS slices minimizing interference among applications/services running on a shared infrastructure; iii) an abstraction providing uniform and optimized access to heterogeneous data sources; and iv) a decentralized approach for the verification of access rights to resources.
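
    As a hedged illustration of the function-composition mechanism (not APERTO's actual API), the sketch below chains simple, ready-to-use functions into one workflow; the compose helper and the Smart Tourism stage functions are assumptions made for the example.

        # A minimal sketch, assuming hypothetical Smart Tourism stage functions.
        from functools import reduce
        from typing import Callable

        def compose(*stages: Callable) -> Callable:
            """Chain single-argument functions into one deployable workflow."""
            return lambda payload: reduce(lambda acc, f: f(acc), stages, payload)

        # Simple, ready-to-use functions a tourism partner might register:
        fetch_pois = lambda city: [{"name": f"{city} cathedral", "lang": "it"}]
        translate = lambda pois: [{**p, "lang": "en"} for p in pois]
        render = lambda pois: "\n".join(p["name"] for p in pois)

        itinerary = compose(fetch_pois, translate, render)
        print(itinerary("Bologna"))           # runs the three stages in order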

    Improving data preparation for the application of process mining

    Immersed in what is already known as the fourth industrial revolution, automation and data exchange are taking on a particularly relevant role in complex environments, such as industrial manufacturing or logistics. This digitisation and transition to the Industry 4.0 paradigm are leading experts to analyse business processes from other perspectives. Consequently, where management and business intelligence used to dominate, process mining appears as a link, building a bridge between both disciplines to unite and improve them. This new perspective on process analysis helps to improve strategic decision-making and competitive capabilities. Process mining brings together the data and process perspectives in a single discipline that covers the entire spectrum of process management. Through process mining, and based on observations of their actual operations, organisations can understand the state of their operations, detect deviations, and improve their performance based on what they observe. In this way, process mining has become an ally, occupying a large part of current academic and industrial research. However, although this discipline is receiving more and more attention, it presents severe application problems when implemented in real environments. The variety of input data in terms of form, content, semantics, and levels of abstraction makes the execution of process mining tasks in industry an iterative, tedious, and manual process, requiring multidisciplinary experts with extensive knowledge of the domain, process management, and data processing. Currently, although there are numerous academic proposals, there are no industrial solutions capable of automating these tasks. For this reason, in this thesis by compendium we address the problem of improving business processes in complex environments through a study of the state of the art and a set of proposals that improve relevant aspects of the process life cycle, from the creation of logs, through log preparation and process quality assessment, to the improvement of business processes. First, a systematic study of the literature was carried out to gain in-depth knowledge of the state of the art in this field, as well as of the different challenges faced by the discipline. This in-depth analysis allowed us to detect a number of challenges that have not been addressed or have received insufficient attention, of which three were selected and presented as the objectives of this thesis. The first challenge is related to assessing the quality of the input data, known as event logs, since the decision to apply event-log improvement techniques must be based on the quality level of the initial data. This thesis therefore presents a methodology and a set of metrics that support the expert in selecting which technique to apply according to the quality estimate at each moment, itself another challenge identified in our analysis of the literature. Likewise, a set of metrics to evaluate the quality of the resulting process models is also proposed, with the aim of assessing whether an improvement in the quality of the input data has a direct impact on the final results. The second challenge identified is the need to improve the input data used in the analysis of business processes. As in any data-driven discipline, the quality of the results strongly depends on the quality of the input data, so the second challenge to be addressed is improving the preparation of event logs. The contribution in this area is the application of natural language processing techniques to relabel activities from textual descriptions of process activities, as well as the application of clustering techniques to help simplify the results, generating models that are more understandable from a human point of view. Finally, the third challenge detected is related to process optimisation, so we contribute an approach for optimising the resources associated with business processes which, by incorporating decision-making into the creation of flexible processes, enables significant cost reductions. Furthermore, all the proposals made in this thesis were designed and validated in collaboration with experts from different fields of industry and evaluated through real case studies in public and private projects in collaboration with the aeronautical industry and the logistics sector.
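
    As a hedged illustration of the log-preparation step (the thesis applies natural language processing; plain string similarity stands in for it here), the sketch below clusters free-text activity labels so that variants of the same activity map to one canonical name; the similarity threshold and sample labels are assumptions made for the example.

        # A minimal sketch, assuming plain string similarity; the thesis uses
        # natural language processing over textual activity descriptions.
        from difflib import SequenceMatcher

        def similar(a: str, b: str, threshold: float = 0.75) -> bool:
            return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

        def canonicalise(labels):
            clusters = []                     # first label names each cluster
            for label in labels:
                for cluster in clusters:
                    if similar(label, cluster[0]):
                        cluster.append(label)
                        break
                else:
                    clusters.append([label])
            return {variant: c[0] for c in clusters for variant in c}

        activities = ["Create invoice", "create_invoice", "Invoice created",
                      "Approve order", "order approval"]
        print(canonicalise(activities))       # 'create_invoice' -> 'Create invoice'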

    Designing and implementing a distributed earthquake early warning system for resilient communities: a PhD thesis

    The present work aims to contribute comprehensively to the process, design, and technologies of Earthquake Early Warning (EEW). EEW systems aim to detect an earthquake immediately at the epicenter and relay the information in real time to nearby areas, anticipating the arrival of the shaking. These systems exploit the difference between the speed of earthquake waves and the time needed to detect them and send alerts. This Ph.D. thesis aims to improve the adoption, robustness, security, and scalability of EEW systems through a decentralized approach to data processing and information exchange. Compared with centralized EEW architectures, the proposed architecture aims to provide more resilient detection, remove single points of failure, achieve higher efficiency, mitigate security vulnerabilities, and improve privacy. A prototype of the proposed architecture has been implemented using low-cost sensors and processing devices to quickly assess its ability to provide the expected information and guarantees. The capabilities of the proposed architecture are evaluated not only on the main EEW problem but also on the quick estimation of the epicentral area of an earthquake, and the results demonstrate that our proposal matches the performance of current centralized counterparts.
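
    As a hedged back-of-the-envelope illustration of the time budget EEW systems exploit, the sketch below estimates the warning time available at a given epicentral distance from typical crustal P- and S-wave speeds; the sensor distance and the 2 s processing delay are assumptions made for the example.

        # A minimal sketch; wave speeds are typical crustal averages and the
        # sensor distance and processing delay are illustrative assumptions.
        VP_KM_S = 6.5      # P-wave speed: fast, low-damage, used for detection
        VS_KM_S = 3.5      # S-wave speed: slower, carries the damaging shaking

        def warning_time(target_km: float, sensor_km: float = 10.0,
                         processing_s: float = 2.0) -> float:
            """Seconds of warning at target_km from the epicenter, if the
            nearest sensor is sensor_km away and alerting takes processing_s."""
            alert_issued_s = sensor_km / VP_KM_S + processing_s
            shaking_arrives_s = target_km / VS_KM_S
            return shaking_arrives_s - alert_issued_s

        for d in (20, 50, 100):
            print(f"{d:>3} km: {warning_time(d):5.1f} s of warning")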