
    Knowledge-infused and Consistent Complex Event Processing over Real-time and Persistent Streams

    Emerging applications in the Internet of Things (IoT) and Cyber-Physical Systems (CPS) present novel challenges to Big Data platforms for performing online analytics. Ubiquitous sensors from IoT deployments generate data streams at high velocity, include information from a variety of domains, and accumulate to large volumes on disk. Complex Event Processing (CEP) is recognized as an important real-time computing paradigm for analyzing continuous data streams. However, existing work on CEP is largely limited to relational query processing, exposing two distinctive gaps in query specification and execution: (1) infusing the relational query model with higher-level knowledge semantics, and (2) seamless query evaluation across temporal spaces that span past, present and future events. Addressing these gaps enables accessible analytics over data streams with properties from different disciplines, and helps span the velocity (real-time) and volume (persistent) dimensions. In this article, we introduce a Knowledge-infused CEP (X-CEP) framework that provides domain-aware knowledge query constructs along with temporal operators that allow end-to-end queries to span real-time and persistent streams. We translate this query model to efficient query execution over online and offline data streams, proposing several optimizations to mitigate the overheads introduced by evaluating semantic predicates and by accessing high-volume historic data streams. The proposed X-CEP query model and execution approaches are implemented in our prototype semantic CEP engine, SCEPter. We validate our query model using domain-aware CEP queries from a real-world Smart Power Grid application, and experimentally analyze the benefits of our optimizations for executing these queries, using event streams from a campus-microgrid IoT deployment. Comment: 34 pages, 16 figures, accepted in Future Generation Computer Systems, October 27, 201
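
    As a rough illustration of the query style described above, the following sketch (hypothetical names and data, not the SCEPter or X-CEP API) evaluates a semantic type predicate over a live stream and correlates the matches with a 24-hour window of a persistent, historic stream:

        # Hypothetical sketch of a knowledge-infused CEP match spanning a
        # real-time stream and a persistent (historic) stream. Names and data
        # are illustrative only; this is not the SCEPter/X-CEP API.
        from datetime import datetime, timedelta

        # Toy domain ontology: concept -> set of sub-concepts (event types).
        ONTOLOGY = {
            "PowerEvent": {"VoltageSag", "LoadSpike", "OutageAlarm"},
        }

        def is_a(event_type, concept):
            """Semantic predicate: does event_type fall under the domain concept?"""
            return event_type == concept or event_type in ONTOLOGY.get(concept, set())

        def query(live_events, historic_events, now):
            """Flag live PowerEvents whose building also had a PowerEvent
            within the past 24 hours of the persistent stream."""
            window_start = now - timedelta(hours=24)
            historic_hits = {
                e["building"]
                for e in historic_events
                if is_a(e["type"], "PowerEvent") and e["time"] >= window_start
            }
            return [
                e for e in live_events
                if is_a(e["type"], "PowerEvent") and e["building"] in historic_hits
            ]

        now = datetime(2016, 10, 27, 12, 0)
        historic = [{"type": "VoltageSag", "building": "B3", "time": now - timedelta(hours=5)}]
        live = [{"type": "LoadSpike", "building": "B3", "time": now}]
        print(query(live, historic, now))  # the B3 LoadSpike correlates with history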

    Semantic Programming for Device-Edge-Cloud Continuum

    This position paper presents ThothSP, a Semantic Programming framework that aims to lower the coding effort of building smart applications on the Device-Edge-Cloud continuum by leveraging semantic knowledge. It introduces a novel neural-symbolic stream fusion mechanism, which enables the specification of data fusion pipelines via declarative rules augmented with learnable probabilistic weights. Moreover, it includes an adaptive federator that allows the ThothSP runtime to be distributed across multiple compute nodes in a network and to coordinate their resources to collaboratively process tasks by delegating partial workloads to their peers. To demonstrate ThothSP's capability, we report a case study on a distributed camera network that compares ThothSP's behaviour against a traditional edge-cloud setup. Comment: arXiv admin note: text overlap with arXiv:2202.1395
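
    A minimal sketch of the declarative-rules-with-learnable-weights idea, assuming a toy camera/motion fusion task; the rule set, sigmoid scoring and gradient update below are illustrative and not ThothSP's actual mechanism:

        # Sketch of declarative fusion rules with learnable probabilistic
        # weights; hypothetical, not the ThothSP implementation.
        import math

        # Each rule is (name, predicate over a fused observation dict).
        RULES = [
            ("camera_sees_person", lambda o: o["camera_person_score"] > 0.5),
            ("motion_detected",    lambda o: o["motion"]),
        ]
        weights = [0.0, 0.0]  # one learnable weight per rule

        def fuse(obs):
            """Probability that the fused event (e.g. 'person present') holds."""
            z = sum(w for w, (_, rule) in zip(weights, RULES) if rule(obs))
            return 1.0 / (1.0 + math.exp(-z))

        def train_step(obs, label, lr=0.1):
            """One gradient step on the log-loss, adjusting only the fired rules."""
            p = fuse(obs)
            for i, (_, rule) in enumerate(RULES):
                if rule(obs):
                    weights[i] -= lr * (p - label)

        for _ in range(200):
            train_step({"camera_person_score": 0.9, "motion": True}, label=1.0)
            train_step({"camera_person_score": 0.2, "motion": True}, label=0.0)

        print(fuse({"camera_person_score": 0.9, "motion": True}))  # high probability
        print(fuse({"camera_person_score": 0.2, "motion": True}))  # low probability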

    Managing Event-Driven Applications in Heterogeneous Fog Infrastructures

    The steady increase in digitalization propelled by the Internet of Things (IoT) has led to a deluge of generated data at an unprecedented pace. The promise of data-driven decision-making is thereby a major innovation driver in a myriad of industries. Based on the widely used event processing paradigm, event-driven applications make it possible to analyze data in the form of event streams in order to extract relevant information in a timely manner. Most recently, graphical flow-based approaches in no-code event processing systems have been introduced to significantly lower technological entry barriers. This empowers non-technical citizen technologists to create event-driven applications comprised of multiple interconnected event-driven processing services. Still, today’s event-driven applications are focused on centralized cloud deployments that come with inevitable drawbacks, especially in the context of IoT scenarios that require fast results, are limited by the available bandwidth, or are bound by regulations in terms of privacy and security. Despite recent advances in the area of fog computing, which mitigate these shortcomings by extending the cloud and moving certain processing closer to the event source, these approaches are hardly established in existing systems. Inherent fog computing characteristics, especially the heterogeneity of resources, alongside novel application management demands, particularly the aspects of geo-distribution and dynamic adaptation, pose challenges that are currently insufficiently addressed and hinder the transition to a next generation of no-code event processing systems. The contributions of this thesis enable citizen technologists to manage event-driven applications in heterogeneous fog infrastructures along the application life cycle. To this end, an approach for holistic application management is proposed that abstracts citizen technologists from the underlying technicalities. This makes it possible to evolve present event processing systems and advances the democratization of event-driven application management in fog computing. The individual contributions of this thesis are summarized as follows: 1. A model, manifested in a geo-distributed system architecture, to semantically describe characteristics specific to node resources, event-driven applications and their management, blending application-centric and infrastructure-centric realms. 2. Concepts for geo-distributed deployment and operation of event-driven applications, alongside strategies for flexible event stream management. 3. A methodology to support the evolution of event-driven applications, including methods to dynamically reconfigure, migrate and offload individual event-driven processing services at run-time. The contributions are introduced, applied and evaluated along two scenarios from the manufacturing and logistics domain.
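
    As a rough, hypothetical illustration of one management decision discussed above, the sketch below greedily places the services of an event-driven pipeline onto heterogeneous fog and cloud nodes; the node model, resource figures and greedy strategy are illustrative assumptions rather than the thesis' method.

        # Hedged sketch: place event-driven processing services on
        # heterogeneous fog/cloud nodes by preferred location and free CPU.
        nodes = [
            {"id": "edge-gw-1", "location": "plant-a", "cpu_free": 2.0},
            {"id": "fog-srv-1", "location": "plant-a", "cpu_free": 8.0},
            {"id": "cloud-1",   "location": "cloud",   "cpu_free": 64.0},
        ]

        pipeline = [  # services in upstream-to-downstream order
            {"name": "sensor-adapter", "cpu": 0.5, "prefer": "plant-a"},
            {"name": "filter",         "cpu": 1.0, "prefer": "plant-a"},
            {"name": "aggregate",      "cpu": 4.0, "prefer": "plant-a"},
            {"name": "dashboard-sink", "cpu": 2.0, "prefer": "cloud"},
        ]

        def place(pipeline, nodes):
            """Greedy placement: prefer a node at the service's location,
            fall back to any node with enough free CPU."""
            placement = {}
            for svc in pipeline:
                candidates = sorted(
                    (n for n in nodes if n["cpu_free"] >= svc["cpu"]),
                    key=lambda n: (n["location"] != svc["prefer"], -n["cpu_free"]),
                )
                if not candidates:
                    raise RuntimeError(f"no node can host {svc['name']}")
                node = candidates[0]
                node["cpu_free"] -= svc["cpu"]
                placement[svc["name"]] = node["id"]
            return placement

        print(place(pipeline, nodes))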

    QoE on media delivery in 5G environments

    231 p. 5G will expand mobile networks with greater bandwidth, lower latency, and the capacity to provide massive and failure-free connectivity. Users of multimedia services expect a smooth playback experience that adapts dynamically to their interests and mobility context. The network, however, taking a neutral position, does not help strengthen the parameters that affect the quality of experience. Consequently, solutions designed to deliver multimedia traffic dynamically and efficiently are of particular interest. To improve the quality of experience of multimedia services in 5G environments, the research carried out in this thesis designs a multi-part system based on four contributions. The first mechanism, SaW, creates an elastic farm of computing resources that execute multimedia analysis tasks; the results confirm the competitiveness of this approach compared with server farms. The second mechanism, LAMB-DASH, selects the quality in the media player with a design that requires low processing complexity; the tests confirm its ability to improve the stability, consistency and uniformity of the quality of experience among clients sharing a network cell. The third mechanism, MEC4FAIR, exploits 5G capabilities for analyzing delivery metrics of the different flows; the results show how it enables the service to coordinate the different clients in the cell to improve the quality of service. The fourth mechanism, CogNet, provisions network resources and configures a topology able to handle an estimated demand and guarantee quality-of-service bounds; in this case, the results show greater accuracy when the demand for a service is higher.
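
    As a purely illustrative sketch of the kind of low-complexity, client-side quality selection that LAMB-DASH targets (the bitrate ladder, thresholds and buffer-scaled safety factor below are assumptions, not the thesis' algorithm):

        # Illustrative low-complexity DASH-style quality selection: pick the
        # highest representation sustainable by the throughput estimate,
        # scaled down when the playback buffer is short. Not LAMB-DASH itself.
        BITRATES_KBPS = [400, 750, 1500, 3000, 6000]  # available representations

        def select_quality(throughput_kbps, buffer_s, target_buffer_s=20.0):
            # Safety factor grows with buffer occupancy, capped at 0.9.
            safety = min(0.9, max(0.3, buffer_s / target_buffer_s))
            budget = throughput_kbps * safety
            feasible = [b for b in BITRATES_KBPS if b <= budget]
            return feasible[-1] if feasible else BITRATES_KBPS[0]

        print(select_quality(throughput_kbps=4000, buffer_s=25))  # -> 3000
        print(select_quality(throughput_kbps=4000, buffer_s=5))   # -> 750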

    Scalability Benchmarking of Cloud-Native Applications Applied to Event-Driven Microservices

    Cloud-native applications constitute a recent trend for designing large-scale software systems. This thesis introduces the Theodolite benchmarking method, allowing researchers and practitioners to conduct empirical scalability evaluations of cloud-native applications, their frameworks, configurations, and deployments. The benchmarking method is applied to event-driven microservices, a specific type of cloud-native application that employs distributed stream processing frameworks to scale with massive data volumes. Extensive experimental evaluations benchmark and compare the scalability of various stream processing frameworks under different configurations and deployments, including different public and private cloud environments. These experiments show that the presented benchmarking method provides statistically sound results in an adequate amount of time. In addition, three case studies demonstrate that the Theodolite benchmarking method can be applied to a wide range of applications beyond stream processing.
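
    A hedged sketch of the scalability-measurement idea such a benchmarking method builds on: for each tested load intensity, determine the minimal number of processing instances that still satisfies the service-level objectives. The meets_slo stand-in and capacity figures below are assumptions, not Theodolite's implementation.

        # Sketch: map each load intensity to the minimal instance count that
        # still meets the SLO; real experiments deploy the system and measure.
        LOADS = [10_000, 50_000, 100_000, 200_000]   # e.g. messages per second
        MAX_INSTANCES = 16

        def meets_slo(load, instances, capacity_per_instance=20_000):
            """Hypothetical stand-in: one instance sustains ~20k msg/s."""
            return instances * capacity_per_instance >= load

        def demand_curve(loads):
            """Minimal instances per load; None if not scalable in range."""
            curve = {}
            for load in loads:
                needed = next(
                    (i for i in range(1, MAX_INSTANCES + 1) if meets_slo(load, i)),
                    None,
                )
                curve[load] = needed
            return curve

        print(demand_curve(LOADS))  # {10000: 1, 50000: 3, 100000: 5, 200000: 10}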

    The Partner Ecosystem Evolution from On-premises Software to Cloud Services: a case study of SAP

    The application software enterprise market is facing a fundamental change from on-premises software products to cloud-installed services based on ‘pay-per-use’ subscriptions. We propose a novel conceptual framework to analyze this transformation from an ecosystem perspective. Through a case study of SAP, we demonstrate that the cloud platform ecosystem differs from the on-premises software product ecosystem, with changed roles, responsibilities, patterns, and key stakeholder relationships. The findings suggest that the traditional product platform ecosystem has evolved in three directions: 1) the structures of partner ecosystems are changing, with partners and platform leaders forming a new micro-ecosystem as a basic unit to interact with customers; 2) the role and function of the traditional distribution channel have been eroded and weakened; and 3) the growing importance of platforms has changed the value relationships amongst stakeholders. Based on these findings, we discuss the managerial implications for the stakeholders in the cloud platform.

    Network e-Volution

    Modern society is a network society permeated by information technology (IT). As a result of innovations in IT, enormous amounts of information can be communicated to a larger number of recipients faster than ever before. The evolution of networks is heavily influenced by the extensive use of IT, which has enabled co-evolving advanced quantitative and qualitative forms of networking. Although several networks have been formed with the aim to reduce or deal with uncertainty through faster and broader access to information, it is in fact IT that has created new kinds of uncertainty. For instance, although digital information integration in supply chains has made production planning more robust, it has at the same time intensified mutual dependencies, thereby actually increasing the level of uncertainty. The aim of this working paper is to investigate the aspects of evolving networks and uncertainty in networks at the cutting edges of different types of networks and from the perspective of different layers defining these networks

    Appearance of Dark Clouds? - An Empirical Analysis of Users' Shadow Sourcing of Cloud Services

    Encouraged by recent practical observations of employees' usage of public cloud services for work tasks instead of mandatory internal support systems, this study investigates end users' utilitarian and normative motivators based on the theory of reasoned action. Partial least squares analyses of survey data comprising 71 computer end users at work, employed across various companies and industries, show that perceived benefits for job performance, social influences of the entire work environment, and employees' lack of identification with the organizational norms and values drive insiders to threaten the security of organizational IT assets.