
    Data-Aware Scheduling Strategy for Scientific Workflow Applications in IaaS Cloud Computing

    Scientific workflows benefit from the cloud computing paradigm, which offers access to virtual resources provisioned on a pay-as-you-go and on-demand basis. Minimizing resource costs to meet the user's budget is very important in a cloud environment. Several optimization approaches have been proposed to improve the performance and cost of data-intensive scientific Workflow Scheduling (DiSWS) in cloud computing. However, the majority of DiSWS approaches in the literature use heuristics and metaheuristics as the optimization method, and the task hierarchy in data-intensive scientific workflows has not been extensively explored. In this paper, a data-intensive scientific workflow is represented as a hierarchy that specifies the hierarchical relations between workflow tasks, and an approach for scheduling data-intensive workflow applications is proposed. First, the datasets and workflow tasks are modeled as a conditional probability matrix (CPM). Second, several data transformations and hierarchical clustering are applied to the CPM structure to determine the minimum number of virtual machines needed for the workflow execution; the hierarchical clustering is performed with respect to the budget imposed by the user. After the data transformation and hierarchical clustering, the amount of data transmitted between clusters is reduced, which improves the cost and makespan of the workflow by optimizing the use of virtual resources and network bandwidth. The performance and cost are analyzed using an extension of the CloudSim simulation tool and compared with existing multi-objective approaches. The results demonstrate that our approach reduces resource costs with respect to the user's budget.
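The CPM-plus-clustering step described above can be sketched as follows. The matrix values, the cluster count, and the use of SciPy's average-linkage agglomerative clustering are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical conditional probability matrix: rows = tasks, columns = datasets.
# cpm[i, j] approximates P(task i accesses dataset j).
cpm = np.array([
    [0.9, 0.8, 0.1, 0.0],
    [0.8, 0.9, 0.2, 0.1],
    [0.1, 0.0, 0.9, 0.8],
    [0.0, 0.1, 0.8, 0.9],
])

# Agglomerative clustering on the task rows: tasks with similar
# dataset-access profiles land in the same cluster (same VM), which
# reduces the data transmitted between VMs.
Z = linkage(cpm, method="average", metric="euclidean")

# Cut the dendrogram so the number of clusters matches a budget-derived
# VM count (here 2, chosen only for illustration).
vm_budget = 2
labels = fcluster(Z, t=vm_budget, criterion="maxclust")
print(labels)  # tasks 0 and 1 share one VM; tasks 2 and 3 share the other
```

In this toy example the first two tasks mostly read datasets 0-1 and the last two mostly read datasets 2-3, so the cut at two clusters co-locates each pair and no large dataset needs to cross the cluster boundary.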

    A knowledge hub to enhance the learning processes of an industrial cluster

    Industrial clusters have been defined as "networks of production of strongly interdependent firms (including specialised suppliers), knowledge producing agents (universities, research institutes, engineering companies), institutions (brokers, consultants), linked to each other in a value adding production chain" (OECD Focus Group, 1999). The industrial clusters' distinctive mode of production is specialisation, based on a sophisticated division of labour, which leads to interlinked activities and a need for cooperation, with the consequent emergence of communities of practice (CoPs). CoPs are here conceived as groups of people and/or organisations bound together by shared expertise and a propensity towards joint work (Wenger and Suyden, 1999). Cooperation needs closeness for just-in-time delivery, for communication, and for the exchange of knowledge, especially in its tacit form. Indeed, the knowledge exchanges between the CoPs' specialised actors, in geographical proximity, lead to spillovers and synergies. In the digital economy landscape, collaborative technologies such as shared repositories, chat rooms and videoconferences can, when appropriately used, have a positive impact on a CoP's exchange of codified knowledge. On the other hand, systems for individual profile management, e-learning platforms and intelligent agents can also trigger some socialisation mechanisms of tacit knowledge. In this perspective, we have set up a model of a Knowledge Hub (KH), driven by Information and Communication Technologies (ICT-driven), that enables the knowledge exchanges of a CoP.
In order to present the model, the paper is organised in the following logical steps:
- an overview of the most seminal and consolidated approaches to CoPs;
- a description of the ICT-driven KH model, conceived as a booster of the knowledge exchanges of a CoP, which adds to the economic benefits of geographical proximity the advantages of ICT-based organisational proximity;
- a discussion of some preliminary results obtained during the implementation of the model.

    A study of the effect of process malleability on the energy efficiency of GPU-based clusters

    The adoption of graphics processing units (GPUs) in high-performance computing (HPC) infrastructures determines, in many cases, the energy consumption of those facilities. For this reason, efficient management and administration of GPU-enabled clusters is crucial for the optimal operation of the cluster. The main aim of this work is to study and design efficient job-scheduling mechanisms for GPU-enabled clusters by leveraging process malleability techniques, which can reconfigure running jobs depending on the cluster status. This paper presents a model that improves energy efficiency when processing a batch of jobs in an HPC cluster. The model is validated through the MPDATA algorithm, a representative example of the stencil computations used in numerical weather prediction. The proposed solution applies the obtained efficiency metrics in a new reconfiguration policy aimed at job arrays. This solution reduces the processing time of workloads by up to 4.8 times and the energy consumption of the cluster by up to 2.4 times compared with traditional job management, where jobs are not reconfigured during their execution.
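A malleability-style shrink decision of the kind the abstract describes can be sketched as follows. The job representation and the policy are hypothetical illustrations; the paper's actual reconfiguration policy for job arrays is not reproduced here:

```python
# Toy sketch of a malleability decision: shrink running malleable jobs
# just enough to free GPUs for a queued job. Field names are illustrative.

def shrink_for(jobs, needed):
    """Shrink running jobs towards their minimum allocation.

    jobs: list of dicts with 'gpus' (current allocation) and 'min_gpus'
    (smallest allocation the malleable job still accepts).
    Returns the number of GPUs actually freed (at most `needed`).
    """
    freed = 0
    for job in jobs:
        if freed >= needed:
            break
        surplus = job["gpus"] - job["min_gpus"]
        take = min(surplus, needed - freed)
        job["gpus"] -= take   # would trigger a shrink reconfiguration
        freed += take
    return freed

jobs = [{"gpus": 4, "min_gpus": 2}, {"gpus": 2, "min_gpus": 1}]
print(shrink_for(jobs, needed=3))  # → 3 GPUs freed for a queued job
```

The design point is that reconfiguration happens only on demand: jobs keep their full allocation while the cluster is idle and give back GPUs only when a queued job would otherwise wait.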

    Persistent disparities in regional unemployment: Application of a spatial filtering approach to local labour markets in Germany

    The geographical distribution and persistence of regional/local unemployment rates in heterogeneous economies (such as Germany) have been, in recent years, the subject of various theoretical and empirical studies. Several researchers have shown an interest in analysing the dynamic adjustment processes of unemployment and the average degree of dependence of current unemployment rates, or gross domestic product, on the values observed in the past. In this paper, we present a new econometric approach to the study of regional unemployment persistence that accounts for spatial heterogeneity and/or spatial autocorrelation in both the levels and the dynamics of unemployment. First, we propose an econometric procedure suggesting the use of spatial filtering techniques as a substitute for fixed effects in a panel estimation framework. The spatial filter computed here is a proxy for spatially distributed region-specific information (e.g., the endowment of natural resources, or the size of the ‘home market’) that is usually incorporated in the fixed-effects coefficients. The advantages of our proposed procedure are that the spatial filter, by incorporating the region-specific information that generates spatial autocorrelation, frees up degrees of freedom, simultaneously corrects for time-stable spatial autocorrelation in the residuals, and provides insights about the spatial patterns in regional adjustment processes. In the paper we present several experiments investigating the spatial pattern of the heterogeneous autoregressive coefficients estimated from unemployment data for German NUTS-3 regions.
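The spatial-filtering idea can be illustrated with a toy eigenvector-based filter in the style of Griffith's spatial filtering; the 4-region contiguity matrix and the number of retained eigenvectors below are assumptions for illustration, not the study's actual specification:

```python
import numpy as np

# Toy rook-contiguity weight matrix for 4 regions arranged in a line
# (region i borders region i+1); values are illustrative only.
W = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

n = W.shape[0]
M = np.eye(n) - np.ones((n, n)) / n      # centering projector
MWM = M @ W @ M                          # doubly centered weight matrix

# Eigenvectors of MWM are mutually orthogonal map patterns; those with
# large positive eigenvalues capture positive spatial autocorrelation
# and can stand in for region fixed effects in a panel regression.
eigvals, eigvecs = np.linalg.eigh(MWM)
order = np.argsort(eigvals)[::-1]
filter_basis = eigvecs[:, order[:2]]     # keep the top-2 patterns
print(filter_basis.shape)                # → (4, 2)
```

Including such eigenvectors as regressors absorbs the time-stable spatial structure directly, which is what frees the degrees of freedom otherwise spent on one fixed-effect coefficient per region.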