
    Implementation of context-aware workflows with Multi-agent Systems

    Systems in Ambient Intelligence (AmI) need to manage workflows that represent users' activities. These workflows can be quite complex, as they may involve multiple participants, both physical and computational, playing different roles. Their execution implies monitoring the development of activities in the environment and taking the actions necessary for those activities, and the workflow itself, to reach completion. The context-aware approach supports the development of these applications by helping to cope with event processing and information management issues. Modeling the actors in these context-aware workflows, where complex decisions and interactions must be considered, can be achieved with multi-agent systems. Agents are autonomous entities with sophisticated and flexible behaviors, able to adapt to complex and evolving environments and to collaborate to reach common goals. This work presents architectural patterns to integrate agents on top of an existing context-aware architecture. This adds an abstraction layer on top of context-aware systems, where knowledge management is performed by agents. This approach improves the flexibility of AmI systems and facilitates their design. A case study on guiding users in buildings to their meetings illustrates this approach.
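
    The abstract describes the pattern only at the architectural level; as a rough illustration of an agent layered over a context-aware event layer (in the spirit of the meeting-guidance case study), here is a minimal Python sketch. All names (ContextBroker, GuideAgent, the event schema) are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: an agent subscribed to a context-aware event bus.
# Names and structure are illustrative, not from the paper.
from collections import defaultdict

class ContextBroker:
    """Toy publish/subscribe hub standing in for the context-aware layer."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)

class GuideAgent:
    """Agent that reacts to location events and guides a user to a meeting."""
    def __init__(self, broker, user, meeting_room):
        self.user, self.meeting_room = user, meeting_room
        broker.subscribe("location", self.on_location)

    def on_location(self, payload):
        if payload["user"] == self.user and payload["room"] != self.meeting_room:
            print(f"Guiding {self.user} from {payload['room']} to {self.meeting_room}")

broker = ContextBroker()
GuideAgent(broker, user="alice", meeting_room="B12")
broker.publish("location", {"user": "alice", "room": "lobby"})
```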

    Trusted resource allocation in volunteer edge-cloud computing for scientific applications

    Data-intensive science applications in fields such as bioinformatics, health sciences, and materials discovery are becoming increasingly dynamic and demanding in their resource requirements. Researchers using these applications, which are based on advanced scientific workflows, frequently require a diverse set of resources that are often not available within private servers or a single Cloud Service Provider (CSP). For example, a user working with precision medicine applications may accept only those CSPs who follow HIPAA (Health Insurance Portability and Accountability Act) guidelines in implementing their data services, while wanting services from other CSPs for economic viability. As ever more data is generated, these workflows often require deployment and dynamic scaling of multi-cloud resources in an efficient and high-performance manner (e.g., quick setup, reduced computation time, and increased application throughput). At the same time, users seek to minimize the costs of configuring the related multi-cloud resources. While performance and cost are among the key factors in CSP resource selection, scientific workflows often process proprietary or confidential data, which introduces additional security constraints. Users therefore have to make an informed selection of the resources best suited to their applications while trading off between the key factors of resource selection: performance, agility, cost, and security (PACS). Furthermore, even with the most efficient resource allocation across multiple clouds, the cost to solution may not be economical for all users, which has led to new computing paradigms such as volunteer computing, where users meet their computing requirements with volunteered cyber resources. For resources to be economical and readily available, it is essential that such volunteered resources integrate well with cloud resources to provide the most efficient computing infrastructure for users.

    This dissertation tackles the individual stages in the lifecycle of resource brokering for users: collection of user requirements, capture of users' resource preferences, resource brokering, and task scheduling. For the collection of user requirements, a novel approach based on an iterative design interface is proposed. In addition, a fuzzy inference-based approach is proposed to capture users' biases and expertise and guide resource selection for their applications. The results showed improved performance, i.e., time to execute, in 98 percent of the studied applications. The data collected on users' requirements and preferences is later used by an optimizer engine and machine learning algorithms for resource brokering. For resource brokering, a new integer linear programming based solution (OnTimeURB) is proposed, which creates multi-cloud template solutions for resource allocation while optimizing performance, agility, cost, and security. The solution was further improved by the addition of a machine learning model based on a naive Bayes classifier, which captures the true QoS of cloud resources to guide template solution creation. The proposed solution improved the time to execute for as much as 96 percent of the largest applications.

    As discussed above, to fulfill the need for economical computing resources, a new computing paradigm, Volunteer Edge Computing (VEC), is proposed, which reduces cost and improves performance and security by creating edge clusters of volunteered computing resources close to users. Initial results show improved execution times for application workflows against state-of-the-art solutions while utilizing only the most secure VEC resources. Consequently, reinforcement learning based solutions are utilized to characterize volunteered resources by their availability and flexibility in implementing security policies. This characterization facilitates efficient resource allocation and scheduling of workflow tasks, which improves the performance and throughput of workflow executions. The VEC architecture is further validated with state-of-the-art bioinformatics and manufacturing workflows.
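
    The abstract describes OnTimeURB only at a high level; purely as a loose illustration of what an integer linear programming formulation for multi-cloud brokering under PACS-style trade-offs can look like, here is a minimal PuLP sketch that places each task on one CSP, minimizing cost subject to illustrative security and runtime constraints. All data, names, and thresholds are invented for the example and are not the dissertation's actual formulation.

```python
# Minimal ILP sketch of multi-cloud resource brokering (illustrative only;
# not the actual OnTimeURB model). Requires: pip install pulp
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

tasks = ["align", "annotate"]
clouds = ["csp_a", "csp_b", "csp_c"]
cost = {"csp_a": 4.0, "csp_b": 2.5, "csp_c": 1.0}     # $/task-hour (made up)
runtime = {"csp_a": 1.0, "csp_b": 2.0, "csp_c": 3.5}  # hours/task (made up)
secure = {"csp_a": 1, "csp_b": 1, "csp_c": 0}         # 1 = meets HIPAA-like policy

prob = LpProblem("brokering", LpMinimize)
x = {(t, c): LpVariable(f"x_{t}_{c}", cat=LpBinary) for t in tasks for c in clouds}

# Objective: minimize total monetary cost of the allocation.
prob += lpSum(cost[c] * runtime[c] * x[t, c] for t in tasks for c in clouds)

for t in tasks:
    # Each task is placed on exactly one CSP.
    prob += lpSum(x[t, c] for c in clouds) == 1
    # Security constraint: non-compliant CSPs may not host any task.
    prob += lpSum(x[t, c] for c in clouds if not secure[c]) == 0

# Performance constraint: total (sequential) runtime budget of 5 hours.
prob += lpSum(runtime[c] * x[t, c] for t in tasks for c in clouds) <= 5

prob.solve()
for (t, c), var in x.items():
    if value(var) == 1:
        print(f"{t} -> {c}")
```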

    FAUSTA: Scaling Dynamic Analysis with Traffic Generation at WhatsApp

    We introduce Fausta, an algorithmic traffic generation platform that enables analysis and testing at scale. Fausta has been deployed at Meta to analyze and test the WhatsApp platform infrastructure since September 2020, enabling WhatsApp developers to deploy reliable code changes to a code base of millions of lines of code, supporting over 2 billion users who rely on WhatsApp for their daily communications. Fausta covers expected and unexpected program behaviors in a privacy-safe controlled environment to support multiple use cases such as reliability testing, privacy analysis, and performance regression detection. It currently supports three different algorithmic input generation strategies, each of which constructs realistic backend server traffic that closely simulates production data without replaying any real user data. Fausta has been deployed and closely integrated into the WhatsApp continuous integration process, catching bugs in development before they hit production. We report on the development and deployment of Fausta's reliability use case between September 2020 and August 2021. During this period it found 1,876 unique reliability issues, with a fix rate of 74%, indicating a high degree of true positive fault revelation. We also report on the distribution of fault types revealed by Fausta and the correlation between coverage and faults found. Overall, we find evidence that higher coverage is correlated with fault revelation.
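
    The paper does not publish Fausta's internals; as a generic illustration of the idea of seeded algorithmic traffic generation (synthetic, reproducible requests that never replay real user data), consider the following sketch. The action names and payload fields are invented.

```python
# Illustrative seeded traffic generator: synthesizes backend-style requests
# without touching real user data. Names and fields are hypothetical.
import random
import string

ACTIONS = ["send_message", "create_group", "update_status"]

def synth_id(rng, length=8):
    """Random opaque identifier, never derived from real users."""
    return "".join(rng.choices(string.ascii_lowercase + string.digits, k=length))

def generate_traffic(seed, n_requests):
    """Yield a reproducible stream of synthetic requests for a test sandbox."""
    rng = random.Random(seed)  # fixed seed => failures can be replayed exactly
    for _ in range(n_requests):
        yield {
            "action": rng.choice(ACTIONS),
            "sender": synth_id(rng),
            "payload_bytes": rng.randint(1, 4096),
        }

for req in generate_traffic(seed=42, n_requests=3):
    print(req)
```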

    Deployment and Operation of Complex Software in Heterogeneous Execution Environments

    This open access book provides an overview of the work developed within the SODALITE project, which aims at facilitating the deployment and operation of distributed software on top of heterogeneous infrastructures, including cloud, HPC, and edge resources. The experts participating in the project describe how SODALITE works and how it can be exploited by end users. While multiple languages and tools are available in the literature to support DevOps teams in automating deployment and operation steps, these activities still require specific know-how and skills that cannot be found in average teams. The SODALITE framework tackles this problem by offering modelling and smart editing features that allow those we call Application Ops Experts to work without knowing low-level details of the adopted, potentially heterogeneous, infrastructures. The framework also offers mechanisms to verify the quality of the defined models, generate the corresponding executable infrastructural code, automatically wrap application components within proper execution containers, orchestrate all activities concerned with the deployment and operation of all system components, and support on-the-fly self-adaptation and refactoring.
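
    The book describes a pipeline of model verification followed by generation of executable infrastructure code; purely as a toy illustration of that "model in, deployment code out" idea, the sketch below validates a tiny abstract component model and renders it into a docker-compose-style document. The model schema and output format are invented for the example and are not SODALITE's.

```python
# Toy model-to-infrastructure-code generation; schema and output are
# invented, not SODALITE's. Requires: pip install pyyaml
import yaml

model = {
    "components": [
        {"name": "api", "image": "example/api:1.0", "replicas": 2, "port": 8080},
        {"name": "db", "image": "postgres:15", "replicas": 1, "port": 5432},
    ]
}

def validate(model):
    """Minimal model check, standing in for richer model verification."""
    for comp in model["components"]:
        assert comp["replicas"] >= 1, f"{comp['name']}: needs at least 1 replica"

def to_compose(model):
    """Render the abstract model into a docker-compose-style document."""
    services = {
        c["name"]: {
            "image": c["image"],
            "deploy": {"replicas": c["replicas"]},
            "ports": [f"{c['port']}:{c['port']}"],
        }
        for c in model["components"]
    }
    return yaml.safe_dump({"services": services}, sort_keys=False)

validate(model)
print(to_compose(model))
```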

    A formal architecture-centric and model driven approach for the engineering of science gateways

    From n-tier client/server applications, to more complex academic Grids, to the most recent and promising industrial Clouds, the last decade has witnessed significant developments in distributed computing. In spite of this conceptual heterogeneity, Service-Oriented Architecture (SOA) seems to have emerged as the common underlying abstraction paradigm, even though different standards and technologies are applied across application domains. Suitable access to data and algorithms resident in SOAs via so-called 'Science Gateways' has thus become a pressing need in order to realize the benefits of distributed computing infrastructures. In an attempt to inform service-oriented systems design and development in Grid-based biomedical research infrastructures, the applicant has consolidated work from three complementary experiences in European projects, which have developed and deployed large-scale production-quality infrastructures and, more recently, Science Gateways to support research in breast cancer, pediatric diseases, and neurodegenerative pathologies respectively. In analyzing the requirements of these biomedical applications, the applicant was able to elaborate on commonly faced issues in Grid development and deployment, while proposing an adapted and extensible engineering framework. Grids implement a number of protocols, applications, and standards, and attempt to virtualize and harmonize access to them. Most Grid implementations are therefore instantiated as superposed software layers, often resulting in a low quality of service and quality of applications, making design and development increasingly complex and rendering classical software engineering approaches unsuitable for Grid developments. The applicant proposes the application of a formal Model-Driven Engineering (MDE) approach to service-oriented developments, making it possible to define Grid-based architectures and Science Gateways that satisfy quality of service requirements, execution platform, and distribution criteria at design time. A novel investigation is thus presented on the applicability of the resulting grid MDE (gMDE) to specific examples, and conclusions are drawn on the benefits of this approach and its possible application to other areas, in particular Distributed Computing Infrastructure (DCI) interoperability, Science Gateways, and Cloud architecture developments.

    Tight lower bounds for the Workflow Satisfiability Problem based on the Strong Exponential Time Hypothesis

    The Workflow Satisfiability Problem (WSP) asks whether there exists an assignment of authorized users to the steps in a workflow specification, subject to certain constraints on the assignment. The problem is NP-hard even when restricted to just not-equals constraints. Since the number of steps $k$ is relatively small in practice, Wang and Li (2010) introduced a parametrisation of the WSP by $k$. Wang and Li (2010) showed that, in general, the WSP is W[1]-hard, i.e., it is unlikely that there exists a fixed-parameter tractable (FPT) algorithm for solving the WSP. Crampton et al. (2013) and Cohen et al. (2014) designed FPT algorithms of running time $O^*(2^{k})$ and $O^*(2^{k\log_2 k})$ for the WSP with so-called regular and user-independent constraints, respectively. In this note, we show that there are no algorithms of running time $O^*(2^{ck})$ and $O^*(2^{ck\log_2 k})$ for the two restrictions of WSP, respectively, with any $c<1$, unless the Strong Exponential Time Hypothesis fails.
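
    To make the problem statement concrete, here is a minimal brute-force WSP checker in Python; it is a naive $O(n^k)$ search, not the FPT algorithms the note discusses, and the instance data is invented for illustration.

```python
# Naive brute-force WSP check: try every assignment of users to the k steps.
# Exponential in k with base n, which is what FPT algorithms in k improve on.
from itertools import product

def wsp_satisfiable(k, users, authorized, constraints):
    """Return a satisfying plan (tuple of users, indexed by step) or None.

    authorized[s] -- set of users allowed to perform step s
    constraints   -- list of predicates over a complete assignment
    """
    for plan in product(users, repeat=k):
        if all(plan[s] in authorized[s] for s in range(k)) and \
           all(c(plan) for c in constraints):
            return plan
    return None

# Example: 3 steps with a not-equals (separation-of-duty) constraint on steps 0 and 1.
users = ["alice", "bob", "carol"]
authorized = {0: {"alice", "bob"}, 1: {"bob", "carol"}, 2: {"alice", "carol"}}
constraints = [lambda p: p[0] != p[1]]
print(wsp_satisfiable(3, users, authorized, constraints))
```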

    Unified System on Chip RESTAPI Service (USOCRS)

    Abstract. This thesis investigates the development of a Unified System on Chip RESTAPI Service (USOCRS) to enhance the efficiency and effectiveness of SoC verification reporting. The research aims to overcome the challenges associated with the transfer, utilization, and interpretation of SoC verification reports by creating a unified platform that integrates various tools and technologies. The research methodology used in this study follows a design science approach. A thorough literature review was conducted to explore existing approaches and technologies related to SoC verification reporting, automation, data visualization, and API development. The review revealed gaps in the current state of the field, providing a basis for further investigation. Using the insights gained from the literature review, a system design and implementation plan were developed. This plan makes use of cutting-edge technologies such as FastAPI, SQL and NoSQL databases, Azure Active Directory for authentication, and cloud services. The Verification Toolbox was employed to validate SoC reports against the organization's standards. The system underwent manual testing, and user satisfaction was evaluated to ensure its functionality and usability. The results of this study demonstrate the successful design and implementation of the USOCRS, offering SoC engineers a unified and secure platform for uploading, validating, storing, and retrieving verification reports. The USOCRS facilitates seamless communication between users and the API, granting easy access to vital information, including successes, failures, and test coverage, derived from submitted SoC verification reports. By automating and standardizing the SoC verification reporting process, the USOCRS eliminates manual and repetitive tasks usually done by developers, thereby enhancing productivity and establishing a robust and reliable framework for report storage and retrieval. Through the integration of diverse tools and technologies, the USOCRS presents a comprehensive solution that adheres to the required specifications of the SoC schema used within the organization. Furthermore, the USOCRS significantly improves the efficiency and effectiveness of SoC verification reporting: it streamlines the submission process, reduces latency through optimized data storage, and enables meaningful extraction and analysis of report data.
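
    The thesis names FastAPI as its framework but its endpoints are not reproduced here; as a rough sketch of the kind of upload/validate/retrieve service described, consider the following. The routes, report schema, in-memory store, and validation rule are all invented for illustration.

```python
# Minimal FastAPI sketch of a verification-report service in the spirit of
# USOCRS; endpoints and schema are hypothetical, not from the thesis.
# Run with: uvicorn soc_report_service:app
from uuid import uuid4
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="soc-report-service")
_reports: dict[str, dict] = {}  # toy in-memory store (real system: SQL/NoSQL)

class Report(BaseModel):
    design: str
    passed: int
    failed: int
    coverage_pct: float

@app.post("/reports")
def upload_report(report: Report) -> dict:
    # Stand-in for validation against the organization's SoC report schema.
    if not 0.0 <= report.coverage_pct <= 100.0:
        raise HTTPException(status_code=422, detail="coverage out of range")
    report_id = str(uuid4())
    _reports[report_id] = report.model_dump()
    return {"id": report_id}

@app.get("/reports/{report_id}")
def get_report(report_id: str) -> dict:
    if report_id not in _reports:
        raise HTTPException(status_code=404, detail="unknown report")
    return _reports[report_id]
```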

    RADON: Rational decomposition and orchestration for serverless computing

    Emerging serverless computing technologies, such as function as a service (FaaS), enable developers to virtualize the internal logic of an application, simplifying the management of cloud-native services and allowing cost savings through billing and scaling at the level of individual functions. Serverless computing is therefore rapidly shifting the attention of software vendors to the challenge of developing cloud applications deployable on FaaS platforms. In this vision paper, we present the research agenda of the RADON project (http://radon-h2020.eu), which aims to develop a model-driven DevOps framework for creating and managing applications based on serverless computing. RADON applications will consist of fine-grained and independent microservices that can efficiently and optimally exploit FaaS and container technologies. Our methodology strives to tackle the complexity of designing such applications, including optimal decomposition, the reuse of serverless functions, and the abstraction and actuation of event processing chains, while avoiding cloud vendor lock-in through models.
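
    As a generic illustration of the fine-grained, event-chained functions such applications decompose into (not RADON's tooling or models), here is a minimal sketch using the common (event, context) FaaS handler convention; the functions, event fields, and chain are invented.

```python
# Generic FaaS-style handlers chained by events; illustrative only, not
# RADON's framework. Signature follows the common (event, context) convention.
import json

def thumbnail_fn(event, context):
    """Fine-grained function: reacts to an 'object uploaded' event."""
    key = event["object_key"]
    return {"status": 200, "body": json.dumps({"thumbnail_of": key})}

def notify_fn(event, context):
    """Next function in the event processing chain."""
    body = json.loads(event["body"])
    return {"status": 200, "body": f"notified about {body['thumbnail_of']}"}

# Toy local actuation of the event chain a FaaS platform would normally drive.
upload_event = {"object_key": "photos/cat.png"}
step1 = thumbnail_fn(upload_event, context=None)
step2 = notify_fn(step1, context=None)
print(step2["body"])
```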