    A Taxonomy of Self-configuring Service Discovery Systems

    We analyze the fundamental concepts and issues in service discovery. This analysis places service discovery in the context of distributed systems by describing it as a third-generation naming system. We also describe the essential architectures and functionalities of service discovery. We then show how service discovery fits into a system by characterizing its operational aspects. Finally, we describe how the existing state of the art performs service discovery with respect to these operational aspects and functionalities, and identify areas for improvement.
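
    As a toy illustration only (none of this code is from the paper), the two functionalities at the heart of any service discovery system, registration and attribute-based lookup, can be sketched as an in-memory registry; every name below is hypothetical:

```python
# Hypothetical sketch: registration and attribute-based lookup, the two
# core service-discovery functions the survey's taxonomy revolves around.
class ServiceRegistry:
    def __init__(self):
        self._services = []  # (attributes, endpoint) pairs

    def register(self, attributes, endpoint):
        """A service announces itself with descriptive attributes."""
        self._services.append((dict(attributes), endpoint))

    def lookup(self, query):
        """Return endpoints whose attributes satisfy every query key."""
        return [ep for attrs, ep in self._services
                if all(attrs.get(k) == v for k, v in query.items())]

registry = ServiceRegistry()
registry.register({"type": "printer", "colour": True}, "tcp://10.0.0.7:631")
registry.register({"type": "printer", "colour": False}, "tcp://10.0.0.9:631")
print(registry.lookup({"type": "printer", "colour": True}))
```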

    A2THOS: Availability Analysis and Optimisation in SLAs

    IT service availability is at the core of customer satisfaction and business success for today's organisations. Many medium-to-large organisations outsource part of their IT services to external providers, with Service Level Agreements (SLAs) describing the agreed availability of outsourced service components. Availability management of partially outsourced IT services is a non-trivial task, since classic approaches for calculating availability are not applicable and IT managers can only rely on their expertise, which often leads to the adoption of non-optimal solutions. In this paper we present A2THOS, a framework to calculate the availability of partially outsourced IT services in the presence of SLAs and to achieve a cost-optimal choice of availability levels for outsourced IT components while guaranteeing a target availability level for the service.
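
    For context, the "classic approaches" the abstract refers to are the standard series/parallel availability formulas, which assume full knowledge of all components; a minimal sketch (illustrative only, not the A2THOS method, with made-up figures) is:

```python
# Minimal sketch of classic availability composition. A2THOS addresses the
# partially outsourced case, where these formulas alone are not applicable;
# this only illustrates the baseline calculation. All figures are made up.
def availability_series(components):
    """Service is up only if every component in the chain is up."""
    a = 1.0
    for ai in components:
        a *= ai
    return a

def availability_parallel(components):
    """Service is down only if every redundant replica is down."""
    u = 1.0
    for ai in components:
        u *= 1.0 - ai
    return 1.0 - u

# Example: two in-house components in series with a redundant outsourced pair.
in_house = availability_series([0.999, 0.995])
outsourced = availability_parallel([0.99, 0.99])
print(availability_series([in_house, outsourced]))  # overall availability
```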

    Modelling grid architecture

    This thesis evaluates software engineering methods, especially event modelling of distributed systems architecture, by applying them to specific data-grid projects. Other methods evaluated include requirements analysis, formal architectural definition and discrete event simulation. A novel technique for matching architectural styles to requirements is introduced. Data-grids are a new class of networked information systems arising from e-science, itself an emergent method for computer-based collaborative research in the physical sciences. The tools used in general grid systems, which federate distributed resources, are reviewed, showing that they do not clearly guide architecture. The data-grid projects, which specifically join heterogeneous data stores, put required qualities at risk. Such risk of failure is mitigated in the designs of the EGSO and AstroGrid solar physics data-grid projects by modelling. Design errors are trapped by rapidly encoding and evaluating informal concepts, architecture, component interaction and objects. The success of software engineering modelling techniques depends on the models' accuracy, their ability to demonstrate the required properties, and their clarity (so that project managers and developers can act on findings). The formal event modelling language chosen, FSP, meets these criteria at the diverse early lifecycle stages (unlike some of the techniques trialled). Models permit very early testing, uncovering hidden complexity, gaps in designed protocols and risks of unreliability. However, simulation is shown to be more suitable for evaluating qualities such as scalability, which emerge when there are many component instances. Design patterns (which may be reused in other data-grids to resolve commonly encountered challenges) are exposed in these models. A method for generating useful models rapidly, introducing the strength of iterative lifecycles to sequential projects, also arises. Despite reported resistance to innovation in industry, the software engineering techniques demonstrated may benefit commercial information systems too.
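
    The thesis's formal models are written in FSP and are not reproduced here; purely as an illustration of the kind of discrete event simulation the abstract finds better suited to emergent qualities such as scalability, a minimal single-server sketch (all names and parameters hypothetical) is:

```python
# Hypothetical sketch: a tiny discrete-event simulation. Mean waiting time
# grows as the number of client instances scales up, the sort of emergent
# behaviour the thesis evaluates by simulation rather than formal modelling.
import heapq
import random

def simulate(n_clients, service_time=1.0, horizon=100.0):
    """Clients share one server; returns the mean request waiting time."""
    events = [(random.expovariate(1.0), i) for i in range(n_clients)]
    heapq.heapify(events)  # (request time, client id), earliest first
    server_free_at, waits = 0.0, []
    while events:
        t, client = heapq.heappop(events)
        if t > horizon:
            break
        start = max(t, server_free_at)   # queue while the server is busy
        waits.append(start - t)
        server_free_at = start + service_time
        # The client thinks for a while, then issues its next request.
        next_t = server_free_at + random.expovariate(0.2)
        heapq.heappush(events, (next_t, client))
    return sum(waits) / len(waits) if waits else 0.0

for n in (5, 20, 80):
    print(n, round(simulate(n), 2))  # waits climb as instances multiply
```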

    Multi-source imagery fusion using deep learning in a cloud computing platform

    Given the high availability of data collected by different remote sensing instruments, the fusion of multispectral and hyperspectral images (HSI) is an important topic in remote sensing. In particular, super-resolution as a data fusion application across the spatial and spectral domains is heavily investigated, because the fused images improve classification and object-tracking accuracy. On the other hand, the huge amount of data obtained by remote sensing instruments represents a key concern in terms of data storage, management and pre-processing. This paper proposes a Big Data cloud platform using Hadoop and Spark to store, manage and process remote sensing data. We also study the chunk size parameter and suggest an appropriate value for downloading imagery data from Hadoop into a Spark application, given the format of our data. In addition, we developed an approach based on Long Short-Term Memory (LSTM) networks, trained with different patch sizes, for super-resolution; it fuses hyperspectral and multispectral images to obtain images with both high spatial and high spectral resolution. The experimental results show that, for a chunk size of 64k, an average of 3.5 s was required to download data from Hadoop into a Spark application. The proposed super-resolution model provides structural similarity index values of 0.98 and 0.907 on the datasets used.
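
    The structural similarity figures quoted above can be computed with standard tooling; a minimal sketch (arrays and values are stand-ins, not the paper's data) is:

```python
# Hypothetical sketch: scoring a fused image against a reference with the
# structural similarity index (SSIM), the metric the abstract reports.
import numpy as np
from skimage.metrics import structural_similarity

reference = np.random.rand(128, 128).astype(np.float32)  # stand-in ground truth
noise = 0.01 * np.random.randn(128, 128).astype(np.float32)
fused = reference + noise                                # stand-in fused output

score = structural_similarity(reference, fused, data_range=1.0)
print(f"SSIM: {score:.3f}")  # the paper reports 0.98 and 0.907 on its data
```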

    An Autonomic Cross-Platform Operating Environment for On-Demand Internet Computing

    The Internet has evolved into a global and ubiquitous communication medium interconnecting powerful application servers, diverse desktop computers and mobile notebooks. Along with recent developments in computer technology, such as the convergence of computing and communication devices, the way people use computers and the Internet has changed working habits and has led to new application scenarios. On the one hand, pervasive, ubiquitous and nomadic computing become more and more important, since different computing devices such as PDAs and notebooks may be used concurrently and alternately, e.g. while the user is on the move. On the other hand, the ubiquitous availability and pervasive interconnection of computing systems have fostered various trends towards the dynamic utilization and spontaneous collaboration of available remote computing resources, addressed by approaches such as utility computing, grid computing, cloud computing and public computing. From a general point of view, the common objective of this development is the use of Internet applications on demand, i.e. applications that are not installed in advance by a platform administrator but are dynamically deployed and run as they are requested by the application user. The heterogeneous and unmanaged nature of the Internet represents a major challenge for the on-demand use of custom Internet applications across heterogeneous hardware platforms, operating systems and network environments. Promising remedies are autonomic computing systems that maintain themselves without particular user or application intervention. In this thesis, an Autonomic Cross-Platform Operating Environment (ACOE) is presented that supports On Demand Internet Computing (ODIC), such as dynamic application composition and ad hoc execution migration. The approach is based on an integration middleware called crossware that does not replace existing middleware but operates as a self-managing mediator between diverse application requirements and heterogeneous platform configurations. A Java implementation of the Crossware Development Kit (XDK) is presented, followed by a description of the On Demand Internet Computing System (ODIX). The feasibility of the approach is shown by the implementation of an Internet Application Workbench, an Internet Application Factory and an Internet Peer Federation, which illustrate the use of ODIX to support local, remote and distributed ODIC, respectively. Finally, the suitability of the approach is discussed with respect to its support of ODIC.

    Decentralized Resource Availability Prediction in Peer-to-Peer Desktop Grids

    Grid computing is a form of distributed computing used by an organization to handle its long-running computational tasks. Volunteer computing (desktop grid) is a type of grid computing that runs tasks on idle CPU cycles donated voluntarily by users. In a desktop grid model, the resources are not dedicated: a job (computational task) is submitted for execution on a resource only when that resource is idle, and there is no guarantee that a job which has started to execute will complete without disruption from user activity (such as a keyboard click or mouse movement). This problem becomes more challenging in a peer-to-peer (P2P) desktop grid, where there is no central server that decides whether to allocate a job to a resource. In this thesis we propose and implement a P2P desktop grid framework that performs resource availability prediction. We improve the predictability of the system by submitting jobs to machines that have a higher probability of being available at a given time. We benchmark our framework and provide an analysis of our results.
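
    As a concrete illustration of what availability prediction can look like (a hypothetical sketch, not the framework's actual algorithm), a peer can estimate the probability of being idle at a given hour from its own activity history:

```python
# Hypothetical sketch: estimate the probability that a peer is idle at a
# given hour of day from past observations, so jobs can be dispatched to
# peers most likely to stay available. The data layout is illustrative.
from collections import defaultdict

def idle_probability(activity_log, hour):
    """activity_log: iterable of (hour_of_day, was_idle) observations."""
    counts = defaultdict(lambda: [0, 0])  # hour -> [idle count, total]
    for h, was_idle in activity_log:
        counts[h][1] += 1
        if was_idle:
            counts[h][0] += 1
    idle, total = counts[hour]
    return idle / total if total else 0.5  # uninformed prior when no data

log = [(9, False), (9, False), (13, True), (13, True), (13, False)]
print(idle_probability(log, 13))  # ~0.67: favour this peer for 13:00 jobs
```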

    Decentralized load balancing in heterogeneous computational grids

    With the rapid development of high-speed wide-area networks and powerful yet low-cost computational resources, grid computing has emerged as an attractive computing paradigm. It overcomes the space limitations of conventional distributed systems, allowing under-utilised computing resources in every region of the world to be fully exploited for distributed jobs. Workload and resource management are key grid services at the service level of the grid software infrastructure, where load balancing is a common concern for most grid infrastructure developers. Although these are established research areas in parallel and distributed computing, grid computing environments present a number of new challenges, including large-scale computing resources, heterogeneous computing power, the autonomy of the organisations hosting the resources, uneven job-arrival patterns among grid sites, considerable job transfer costs, and considerable communication overhead in capturing the load information of sites. This dissertation focuses on designing load-balancing solutions for computational grids that cater for these unique characteristics. To explore the solution space, we conducted a survey of load-balancing solutions, which enabled discussion and comparison of existing approaches and the delimitation and exploration of a portion of the solution space. A system model was developed to study load-balancing problems in computational grid environments. In particular, we developed three decentralised algorithms for job dispatching and load balancing that use only partial information: the desirability-aware load-balancing algorithm (DA), the performance-driven desirability-aware load-balancing algorithm (P-DA), and the performance-driven region-based load-balancing algorithm (P-RB). All three are scalable, dynamic, decentralised and sender-initiated. We conducted extensive simulation studies to analyse the performance of our load-balancing algorithms; simulation results showed that they significantly outperform pre-existing decentralised algorithms relevant to this research.
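
    In the same sender-initiated spirit (a hypothetical sketch under assumed field names, not the published DA/P-DA/P-RB algorithms), a site holding only partial, possibly stale load information about its neighbours might decide where to dispatch a job as follows:

```python
# Hypothetical sketch of a sender-initiated, decentralised dispatch step:
# keep the job locally unless overloaded, otherwise forward it to the
# neighbour with the best combined load and transfer-cost score.
from dataclasses import dataclass

@dataclass
class Neighbour:
    name: str
    load: float           # last reported load (possibly stale)
    transfer_cost: float  # estimated cost of shipping the job there

def pick_target(local_load, neighbours, threshold=0.8):
    """Return the chosen neighbour, or None to run the job locally."""
    if local_load < threshold or not neighbours:
        return None
    return min(neighbours, key=lambda n: n.load + n.transfer_cost)

sites = [Neighbour("siteA", 0.4, 0.1), Neighbour("siteB", 0.2, 0.2)]
target = pick_target(local_load=0.9, neighbours=sites)
print(target.name if target else "local")  # siteB: lowest combined score
```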