31 research outputs found

    From event streams to process models and back: Challenges and opportunities

    The domains of complex event processing (CEP) and business process management (BPM) have different origins but in many respects draw on similar concepts. While specific combinations of BPM and CEP have attracted research attention, resulting in solutions to specific problems, we attempt to take a broad view of the opportunities and challenges involved. We first illustrate these with a detailed example from the logistics domain. We then propose a mapping of this area into four quadrants: two quadrants drawing on CEP to create or extend process models, and two quadrants starting from a process model and addressing how it can guide CEP. Existing literature is reviewed and specific challenges and opportunities are indicated for each of these quadrants. Based on this mapping, we identify challenges and opportunities that recur across quadrants and can be considered the core issues of this combination. We suggest that addressing these issues in a generic manner would form a sound basis for future applications and advance this area significantly.
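
    As an illustrative sketch of the "from event streams to process models" direction (not the paper's own approach), the snippet below derives a directly-follows relation, a common starting point for discovering a process model, from an ordered stream of logistics-style events; the case identifiers and activity names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical event stream: (case_id, activity) pairs, already ordered by
# timestamp within each case, e.g. parcels moving through a logistics process.
events = [
    ("order-1", "pick"), ("order-1", "pack"), ("order-2", "pick"),
    ("order-1", "ship"), ("order-2", "pack"), ("order-2", "ship"),
]

def directly_follows(stream):
    """Count how often activity b directly follows activity a within a case."""
    last_seen = {}             # case_id -> last observed activity
    counts = defaultdict(int)  # (a, b) -> frequency
    for case_id, activity in stream:
        if case_id in last_seen:
            counts[(last_seen[case_id], activity)] += 1
        last_seen[case_id] = activity
    return counts

for (a, b), n in sorted(directly_follows(events).items()):
    print(f"{a} -> {b}: {n}")  # edges of a simple directly-follows graph
```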

    Towards Interoperable Research Infrastructures for Environmental and Earth Sciences

    This open access book summarises the latest developments on data management in the EU H2020 ENVRIplus project, which brought together more than 20 environmental and Earth science research infrastructures into a single community. It provides readers with a systematic overview of the common challenges faced by research infrastructures and of how a ‘reference model guided’ engineering approach can be used to achieve greater interoperability among such infrastructures in the environmental and Earth sciences. The 20 contributions in this book are structured in 5 parts covering the design, development, deployment, operation and use of research infrastructures. Part one provides an overview of the state of the art of research infrastructures and relevant e-Infrastructure technologies, part two discusses the reference model guided engineering approach, part three presents the software and tools developed for common data management challenges, part four demonstrates the software via several use cases, and the last part discusses sustainability and future directions.

    Serverless Strategies and Tools in the Cloud Computing Continuum

    Thesis by compendium. In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity has led to new services that solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model. FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract the management of servers away from users, allowing them to focus their efforts solely on the development of applications. The problem with FaaS is that it focuses on microservices and tends to have limitations regarding execution time and computing capabilities (e.g. lack of support for acceleration hardware such as GPUs). However, it has been demonstrated that the self-provisioning capability and high degree of parallelism of these services can be well suited to a broader range of applications. In addition, their inherent event-driven triggering makes functions perfectly suitable to be defined as steps in file-processing workflows (e.g. scientific computing workflows). Furthermore, the rise of smart and embedded devices (IoT), innovations in communication networks and the need to reduce latency in challenging use cases have led to the concept of Edge computing, which consists of conducting the processing on devices close to the data sources to improve response times. The coupling of this paradigm with Cloud computing, involving architectures with devices at different tiers depending on their proximity to the source and their compute capability, has been coined the Cloud Computing Continuum (or Computing Continuum). This PhD thesis therefore aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved. Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013
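
    As a minimal, provider-agnostic sketch of the kind of event-driven function step discussed above, the snippet below shows a small handler invoked for a file event inside a processing workflow; the handler name and the event fields ("bucket", "key") are assumptions for illustration, not the API of any specific FaaS platform or of the tools developed in the thesis.

```python
import json

def handle_event(event: dict) -> dict:
    """Hypothetical FaaS handler: one step in a file-processing workflow.

    Assumed event shape: {"bucket": ..., "key": ...}, describing the file
    whose upload triggered this invocation.
    """
    bucket, key = event["bucket"], event["key"]
    # Placeholder processing step; a real function would fetch the object,
    # run the (possibly accelerated) task and store the result elsewhere.
    result_key = key + ".processed"
    return {"status": "ok", "input": f"{bucket}/{key}", "output": result_key}

if __name__ == "__main__":
    print(json.dumps(handle_event({"bucket": "uploads", "key": "scan-001.tif"})))
```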

    Distributed collaborative context-aware content-centric workflow management for mobile devices

    Ubiquitous mobile devices have become a necessity in today’s society, opening new opportunities for interaction and collaboration between geographically distributed people. With the increased use of mobile phones, people can collaborate while on the move. Collaborators expect technologies that enhance their teamwork and respond to their individual needs. Workflow is a widely used technology that supports collaboration and can be adapted for a variety of collaborative scenarios. Although the originally computer-based workflow technology has also expanded to mobile devices, there are still research challenges in the development of user-focused, device-oriented collaborative workflows. As opposed to desktop computers, mobile devices provide a different, more personalised user experience and are carried by their owners everywhere. Mobile devices can capture user context and behave as digitalised user complements. By integrating context awareness into workflow technology, workflow decisions can be based on local context information and therefore be better adapted to individual collaborators’ circumstances and expectations. Knowing the current context of collaborators and their mobile devices is useful, especially in mobile peer-to-peer collaboration, where workflow process execution can be driven by devices according to the situation. In mobile collaboration, team workers share pictures, videos, or other content. Monitoring and exchanging information on the current state of the content processed on devices can enhance the overall workflow execution. As mobile devices in peer-to-peer collaboration are not aware of a global workflow state, the content state information can be used to communicate progress among collaborators. However, there is still a lack of support for integrating content lifecycles into process-oriented workflows. The aim of this research was therefore to investigate how workflow technology can be adapted for mobile peer-to-peer collaboration, in particular how the level of context awareness in mobile collaborative workflows can be increased and how extra content lifecycle management support can be integrated. The collaborative workflow technology has been adapted for mobile peer-to-peer collaboration by integrating context and content awareness. First, a workflow-specific context management approach has been developed that allows defining workflow-specific context models and supports the integration of context models with collaborative workflows. The workflow process has been adapted to make decisions based on context information. Second, extra content management support has been added to the workflow technology: a representation for content lifecycles has been designed, and content lifecycles have been integrated with the workflow process. In this thesis, the MobWEL workflow approach is introduced. The MobWEL workflow approach allows defining, managing and executing mobile context-aware, content-centric workflows. MobWEL is a workflow execution language that extends BPEL, using constructs from existing workflow approaches, Context4BPEL and BPELlight, and adopting elements from the BALSA workflow model. The MobWEL workflow management approach is a technology-based solution that has been designed to provide workflow management support to a specific class of mobile applications.
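
    As a hedged illustration of the idea that local context and content lifecycle state can drive workflow decisions on a device, the Python sketch below selects the next step from hypothetical context and content records; it is not MobWEL or BPEL notation, and every field and step name is invented.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical device context captured on a collaborator's phone."""
    battery_pct: int
    network: str   # e.g. "wifi" or "cellular"

@dataclass
class Content:
    """Hypothetical content item with a lifecycle state shared between peers."""
    name: str
    state: str     # e.g. "captured", "edited", "reviewed"

def next_step(ctx: Context, item: Content) -> str:
    """Pick the next workflow step from local context and content state."""
    if item.state == "captured" and ctx.network == "wifi":
        return "upload_full_resolution"
    if item.state == "captured":
        return "upload_preview_only"       # save bandwidth on cellular
    if item.state == "edited" and ctx.battery_pct > 20:
        return "notify_reviewers"
    return "defer_until_context_changes"

print(next_step(Context(battery_pct=55, network="cellular"),
                Content(name="site-photo.jpg", state="captured")))
```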

    Serverless middlewares to integrate heterogeneous and distributed services in cloud continuum environments

    The application of modern ICT technologies is radically changing many fields, pushing toward more open and dynamic value chains and fostering the cooperation and integration of many connected partners, sensors, and devices. A notable example is the emerging Smart Tourism field, which applies ICT to Tourism to create richer and more integrated experiences, making them more accessible and sustainable. From a technological viewpoint, a recurring challenge in these decentralized environments is the integration of heterogeneous services and data spanning multiple administrative domains, each possibly applying different security/privacy policies, device and process control mechanisms, service access and provisioning schemes, etc. The distribution and heterogeneity of these sources exacerbate the complexity of developing integration solutions, with consequently high effort and costs for the partners seeking them. Taking a step towards addressing these issues, we propose APERTO, a decentralized and distributed architecture that aims at facilitating the blending of data and services. At its core, APERTO relies on APERTO FaaS, a Serverless platform allowing fast prototyping of the business logic, lowering the barrier to entry and the development costs for newcomers, offering fine-grained (down-to-zero) scaling of the resources servicing end users, and reducing management overhead. The APERTO FaaS infrastructure is based on asynchronous and transparent communications between the components of the architecture, allowing the development of optimized solutions that exploit the peculiarities of distributed and heterogeneous environments. In particular, APERTO addresses the provisioning of scalable and cost-efficient mechanisms targeting: i) function composition, allowing the definition of complex workloads from simple, ready-to-use functions and enabling smarter management of complex tasks and improved multiplexing capabilities; ii) the creation of end-to-end differentiated QoS slices, minimizing interference among applications/services running on a shared infrastructure; iii) an abstraction providing uniform and optimized access to heterogeneous data sources; and iv) a decentralized approach for the verification of access rights to resources.
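
    To make the function-composition idea concrete, here is a small sketch in which simple, ready-to-use functions are chained into a single workload; it only illustrates the concept and is not the APERTO FaaS interface, and the Smart Tourism pipeline functions are hypothetical.

```python
from functools import reduce
from typing import Callable, Dict

Payload = Dict[str, object]
Fn = Callable[[Payload], Payload]

def compose(*steps: Fn) -> Fn:
    """Chain functions into one workload: each step's output feeds the next."""
    return lambda payload: reduce(lambda acc, step: step(acc), steps, payload)

# Hypothetical ready-to-use functions for a Smart Tourism data pipeline.
def fetch(p):   return {**p, "raw": f"data for {p['poi']}"}
def enrich(p):  return {**p, "enriched": str(p["raw"]) + " + weather + events"}
def publish(p): return {**p, "published": True}

pipeline = compose(fetch, enrich, publish)
print(pipeline({"poi": "city-museum"}))
```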

    Context-aware workflow management in eHealth applications

    Workflows are a technology to structure work into functional, non-overlapping steps. They not only define the order in which the steps are executed and whether steps run in parallel, they also specify who or which tool has to fulfill which step. Workflows make it possible to automate work and to increase the understandability of processes, and they ease the control of process execution. The tools to manage workflows, so-called workflow management systems (WfMSs), are traditionally rigid as they separate workflow definition, done at build time, from workflow execution, done at run time. This makes them ill-suited for managing flexible and unstructured workflows. In this thesis, we focus on the support of flexible processes in eHealth, which are affected by more foreseen than unforeseen events. To bridge the gap between rigid WfMSs and flexible workflows, we developed a concept for dynamic and context-aware workflow management called Flexwoman. Although our focus lies on flexible eHealth processes, Flexwoman is a generic approach that can be applied to several different application domains. Flexwoman supports the usage of context information to adapt processes automatically at run time to foreseen events. Processes can also be manually adapted to handle unforeseen events. To achieve this flexibility, context information from different sensors is unified and can thus be analyzed in the same way. The analysis and adaptation of workflows are performed by a rule engine. A rule engine can store, reason about and apply knowledge automatically and efficiently. Rules and application logic are separated; thus, rules can be changed at run time without affecting the application logic or the process description. Workflows are internally described by Hierarchical Colored Petri nets (HCPNs) and executed by an HCPN execution engine. HCPNs allow for a deterministic execution of workflows and can represent workflows on different levels of detail. In summary, in Flexwoman, significant context changes (events) trigger automated adaptations that replace parts of the workflow by sub-workflows, which can in turn be adapted. The adaptations and the rules for context-aware adaptation are saved in the organizational memory for later reuse. Flexwoman’s event-based behavior facilitates proactive adaptations instead of only allowing adaptations while entering or leaving a task. Replacements are not bound to special places defined at build time; any part of the workflow that has not yet been executed can be replaced at run time. We implemented and evaluated the concept. The evaluations show i) that all required functionality is available, ii) that the system scales with a growing number of rules, and iii) that the system correctly handles failure situations.
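
    As a rough sketch of the adaptation idea described above (a significant context change triggers the replacement of a not-yet-executed part of the workflow by a sub-workflow), the snippet below applies hypothetical rules to a toy eHealth workflow; it does not reflect Flexwoman's actual rule engine or its HCPN representation, and all rule, event and step names are invented.

```python
# Hypothetical rules: a context event maps to (step to replace, sub-workflow).
RULES = {
    "patient_temperature_high": ("standard_visit", ["notify_physician", "fever_protocol"]),
    "device_offline":           ("remote_check",   ["schedule_onsite_check"]),
}

def adapt(workflow, executed_upto, event):
    """Replace a pending step with a sub-workflow when a rule matches the event."""
    if event not in RULES:
        return workflow
    target, sub_workflow = RULES[event]
    adapted = workflow[:executed_upto + 1]        # already executed steps stay fixed
    for step in workflow[executed_upto + 1:]:
        adapted.extend(sub_workflow if step == target else [step])
    return adapted

wf = ["admit_patient", "standard_visit", "discharge"]
print(adapt(wf, executed_upto=0, event="patient_temperature_high"))
# -> ['admit_patient', 'notify_physician', 'fever_protocol', 'discharge']
```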