851 research outputs found

    Reputation-guided Evolutionary Scheduling Algorithm for Independent Tasks in inter-Clouds Environments

    Self-adaptation provides software with the flexibility to switch, in a (semi-)autonomous way, between the different behaviours (configurations) it incorporates in response to changes. To empower clouds with the ability to capture and respond to quality feedback provided by users at runtime, we propose a reputation-guided genetic scheduling algorithm for independent tasks. Current resource management services consider evolutionary strategies to improve the performance of resource allocation procedures or task scheduling algorithms, but they fail to consider the user as part of the scheduling process. Evolutionary computing offers different methods to find a near-optimal solution. In this paper we extend previous work with new optimisation heuristics for the scheduling problem. We show how reputation can be used as an optimisation metric, and analyse how our metrics can serve as upper bounds for others in the optimisation algorithm. By experimental comparison, we show that our techniques can lead to optimised results. Peer Reviewed
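
    To make the idea concrete, the following is a minimal sketch of a genetic scheduler whose fitness blends makespan with the reputation of the resources chosen for each task. The task costs, resource speeds, reputation scores, and weight ALPHA are hypothetical placeholders for illustration, not values or formulas taken from the paper.

```python
import random

# Hypothetical reputation-guided genetic scheduler for independent tasks.
# Fitness blends makespan with provider reputation; all numbers are illustrative.
TASK_COST = [4, 7, 2, 9, 5, 3]        # abstract cost units per task
RESOURCE_SPEED = [1.0, 1.5, 2.0]      # relative speed of each resource
REPUTATION = [0.9, 0.6, 0.8]          # user-feedback score per resource, in [0, 1]
ALPHA = 0.7                           # weight of makespan vs. reputation

def fitness(assignment):
    """Lower is better: makespan penalised when assigned resources have low reputation."""
    loads = [0.0] * len(RESOURCE_SPEED)
    for task, res in enumerate(assignment):
        loads[res] += TASK_COST[task] / RESOURCE_SPEED[res]
    makespan = max(loads)
    avg_rep = sum(REPUTATION[r] for r in assignment) / len(assignment)
    return ALPHA * makespan + (1 - ALPHA) * (1 - avg_rep) * makespan

def evolve(pop_size=30, generations=100):
    n_tasks, n_res = len(TASK_COST), len(RESOURCE_SPEED)
    pop = [[random.randrange(n_res) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]              # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < 0.2:             # occasional mutation
                child[random.randrange(n_tasks)] = random.randrange(n_res)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best assignment:", best, "fitness:", round(fitness(best), 3))
```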

    Trustworthy Edge Machine Learning: A Survey

    The convergence of Edge Computing (EC) and Machine Learning (ML), known as Edge Machine Learning (EML), has become a highly regarded research area that utilizes distributed network resources to perform joint training and inference in a cooperative manner. However, EML faces various challenges due to resource constraints, heterogeneous network environments, and the diverse service requirements of different applications, which together affect the trustworthiness of EML in the eyes of its stakeholders. This survey provides a comprehensive summary of definitions, attributes, frameworks, techniques, and solutions for trustworthy EML. Specifically, we first emphasize the importance of trustworthy EML within the context of Sixth-Generation (6G) networks. We then discuss the necessity of trustworthiness from the perspective of challenges encountered during deployment and real-world application scenarios. Subsequently, we provide a preliminary definition of trustworthy EML and explore its key attributes. Following this, we introduce fundamental frameworks and enabling technologies for trustworthy EML systems, and provide an in-depth literature review of the latest solutions for enhancing the trustworthiness of EML. Finally, we discuss corresponding research challenges and open issues. Comment: 27 pages, 7 figures, 10 tables

    Advances in Grid Computing

    This book approaches grid computing from the perspective of the latest achievements in the field, providing an insight into current research trends and advances, and presenting a large range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence, genetic algorithms, and quantum encryption are considered in order to address two main aspects of grid computing: resource management and data management. The book also addresses aspects of grid computing that concern architecture and development, and includes a diverse range of applications for grid computing, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning, and complex water systems.

    Autonomy and Intelligence in the Computing Continuum: Challenges, Enablers, and Future Directions for Orchestration

    Future AI applications require performance, reliability and privacy that the existing, cloud-dependent system architectures cannot provide. In this article, we study orchestration in the device-edge-cloud continuum and focus on AI for edge, that is, the AI methods used in resource orchestration. We claim that to support the constantly growing requirements of intelligent applications in the device-edge-cloud computing continuum, resource orchestration needs to embrace edge AI and emphasize local autonomy and intelligence. To justify the claim, we provide a general definition for continuum orchestration and look at how current and emerging orchestration paradigms are suited to the computing continuum. We describe certain major emerging research themes that may affect future orchestration, and provide an early vision of an orchestration paradigm that embraces those research themes. Finally, we survey current key edge AI methods and look at how they may contribute to fulfilling the vision of future continuum orchestration. Comment: 50 pages, 8 figures (revised content in all sections, added figures and new sections)

    An energy-aware scheduling approach for resource-intensive jobs using smart mobile devices as resource providers

    The ever-growing adoption of smart mobile devices is a worldwide phenomenon that positions smartphones and tablets as primary devices for communication and Internet access. In addition, the computing capabilities of such devices, often underutilized by their owners, are in continuous improvement. Today, smart mobile devices have multi-core CPUs, several gigabytes of RAM, and the ability to communicate through several wireless networking technologies. These facts caught the attention of researchers, who have proposed to leverage the aggregated computing capabilities of smart mobile devices for running resource-intensive software. However, such an idea is conditioned by key features, named singularities in the context of this thesis, that characterize resource provision with smart mobile devices. These are the ability of devices to change location (user mobility), the shared or non-dedicated nature of the resources provided (lack of ownership), and the limited operation time given by the finite energy source (exhaustible resources). Existing proposals materializing this idea differ in the combinations of singularities they target and the way they address each singularity, which makes them suitable for distinct goals and resource exploitation opportunities. The latter are represented by real-life situations where resources provided by groups of smart mobile devices can be exploited, which in turn are characterized by a social context and a networking support used to link and coordinate devices. The behaviour of people in a given social context configures a particular availability level of resources, while the underlying networking support imposes restrictions on how information flows, computational tasks are distributed, and results are collected. The latter constitutes one fundamental difference between proposals, mainly because each networking support (i.e., ad-hoc or infrastructure-based) has its own application scenarios. Aside from the singularities addressed and the networking support utilized, the weakest point of most proposals is their practical applicability: the performance achieved relies heavily on the accuracy with which task information, including execution time and/or energy required for execution, is provided to feed the resource allocator. The expanded usage of wireless communication infrastructure in public and private buildings, e.g., shopping malls, work offices, university campuses, and so on, constitutes a networking support that can be naturally re-utilized for leveraging the computational capabilities of smart mobile devices. In this context, this thesis proposal aims to contribute an easy-to-implement scheduling approach for running CPU-bound applications on a cluster of smart mobile devices. The approach is aware of the finite nature of smart mobile devices' energy, and it does not depend on task information to operate. Instead, it allocates computational resources to incoming tasks using a node ranking-based strategy. The ranking weights nodes by combining static and dynamic parameters, including benchmark results, battery level, and the number of queued tasks, among others. This node ranking-based task assignment, or first allocation phase, is complemented with a re-balancing phase using job stealing techniques. The second allocation phase counteracts the load imbalance provoked by the non-dedicated nature of smart mobile device CPU usage, i.e., the effect of owner interaction, task heterogeneity, and the lack of up-to-date and accurate estimations of remaining energy.
The scheduling approach is evaluated through in-vitro simulation. A novel simulator, which exploits energy consumption profiles of real smart mobile devices as well as fluctuating CPU usage built upon empirical models derived from real user interaction data, is another major contribution. Tests that validate the simulation tool are provided, and the approach is evaluated in scenarios varying the composition of nodes and tasks and the characteristics of nodes, including different task arrival rates, task requirements, and levels of node resource utilization. Fil: Hirsch Jofré, Matías Eberardo. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Tandil. Instituto Superior de Ingeniería del Software. Universidad Nacional del Centro de la Provincia de Buenos Aires. Instituto Superior de Ingeniería del Software; Argentina
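
    As an illustration of the first allocation phase described above, the sketch below ranks nodes by combining a static benchmark score with battery level and queue length, assigns each incoming task to the best-ranked node, and applies a simple job-stealing step for re-balancing. The field names, weights, and ranking formula are assumptions made for illustration, not the actual model from the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A smart mobile device offering its CPU; fields are illustrative only."""
    name: str
    benchmark: float        # static CPU benchmark score (higher is better)
    battery: float          # remaining battery fraction in [0, 1]
    queue: list = field(default_factory=list)

    def rank(self):
        # Penalise long queues so work spreads out before job stealing kicks in.
        return self.benchmark * self.battery / (1 + len(self.queue))

def assign(task, nodes):
    """First allocation phase: send the incoming task to the best-ranked node."""
    best = max(nodes, key=lambda n: n.rank())
    best.queue.append(task)
    return best

def steal(nodes):
    """Re-balancing phase: the least loaded node steals from the most loaded one."""
    idle = min(nodes, key=lambda n: len(n.queue))
    busy = max(nodes, key=lambda n: len(n.queue))
    if len(busy.queue) - len(idle.queue) > 1:
        idle.queue.append(busy.queue.pop())

if __name__ == "__main__":
    nodes = [Node("phone-a", 120, 0.8), Node("tablet-b", 200, 0.4), Node("phone-c", 90, 0.9)]
    for t in range(10):
        assign(f"task-{t}", nodes)
    steal(nodes)
    print({n.name: len(n.queue) for n in nodes})
```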

    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for Cloud Service Providers, offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key properties of the quality of work at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose in this thesis is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider prior to taking any selection decisions. It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers based on three trust levels: (1) presence of the most up-to-date cloud resource verified capabilities, (2) reputational evidence measured by neighboring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation or cloud service interruption. Our workflow orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures different dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection, workflow monitoring and adaptation schemes we have implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating automated workflow orchestration further show that our model is self-adapting and self-configuring, reacts efficiently to changes and adapts accordingly while enforcing the QoS of workflows.
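
    The following is a minimal, hypothetical sketch of how the three trust levels described above (verified capabilities, neighbouring users' reputation, and a personal history of experiences) could be combined into a single provider score. The weights and the linear combination are illustrative assumptions, not the model proposed in the thesis.

```python
def trust_score(capabilities_ok, neighbour_ratings, own_history, w=(0.4, 0.3, 0.3)):
    """Return an illustrative trust value in [0, 1] for one cloud provider.

    capabilities_ok   -- fraction of advertised capabilities verified as up to date
    neighbour_ratings -- ratings in [0, 1] reported by neighbouring users
    own_history       -- past interaction outcomes with this provider, in [0, 1]
    w                 -- assumed weights for the three trust levels
    """
    reputation = sum(neighbour_ratings) / len(neighbour_ratings) if neighbour_ratings else 0.5
    history = sum(own_history) / len(own_history) if own_history else 0.5
    return w[0] * capabilities_ok + w[1] * reputation + w[2] * history

if __name__ == "__main__":
    # Hypothetical providers; the one with the highest combined score would be selected.
    providers = {
        "cloud-x": trust_score(0.9, [0.8, 0.7, 0.9], [1.0, 0.8]),
        "cloud-y": trust_score(0.6, [0.9, 0.95], [0.4, 0.5, 0.6]),
    }
    print(max(providers, key=providers.get), providers)
```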

    Resource management in the cloud: An end-to-end Approach

    Philosophiae Doctor - PhD. Cloud Computing enables users to achieve ubiquitous, on-demand, and convenient access to a variety of shared computing resources, such as servers, networks, storage, applications, and more. As a business model, Cloud Computing has been openly welcomed by users and has become one of the research hotspots in the field of information and communication technology. This is because it provides users with on-demand customization and pay-per-use resource acquisition methods.

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction is raised to gain better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.