
    Real-time transaction processing for autonomic grid application

    Advances in computing and communication technologies and software have resulted in an explosive growth of computing systems and applications that affect all aspects of our lives. Computing systems are expected to be effective and serve a useful purpose when first introduced, and to remain useful as conditions change. As systems and applications grow in complexity, their development, configuration, and management challenges exceed the capabilities of existing tools and methodologies, and systems become unmanageable and insecure. The concept of autonomic computing evolved to make systems self-manageable and secure, and it offers a potential solution to these challenging research problems. It is inspired by nature and by biological systems (such as the autonomic nervous system) that have evolved to cope with the challenges of scale, complexity, heterogeneity and unpredictability by being decentralized, context-aware, adaptive and resilient. This new era of computing is driven by the convergence of biological and digital computing systems and is characterized by systems that are self-defining, self-configuring, self-optimizing, self-protecting, self-healing, context-aware and anticipatory. Autonomic computing is a computing model in which systems manage themselves with minimal human interference, providing an unprecedented level of self-regulation and hiding complexity from users. The initiative is inspired by the human body's autonomic nervous system, which monitors the heartbeat, checks blood sugar levels and maintains normal body temperature without any conscious effort from the human. There is, however, an important distinction between autonomic activity in the human body and autonomic responses in computer systems: many of the decisions made by autonomic elements in computer systems concern tasks that have been deliberately delegated to the technology. The analogy with the autonomic nervous system may suggest that the autonomic computing initiative is concerned only with low-level self-managing capabilities such as reflex reactions.
    A basic application area of autonomic computing is grid computing. Both autonomic computing and grid computing are proposed as IT innovations: autonomic computing aims to address the rapidly increasing complexity crisis in the IT industry, while grid computing seeks to share and integrate distributed computational and data resources. The aim here is to apply autonomic computing to grid-related problems such as autonomic task distribution and handling in grids, and autonomic resource allocation. In this thesis we present methods for calculating the deadlines of global transactions, local transactions, and sub-transactions using the Earliest Deadline First (EDF) algorithm, and we measure performance in terms of miss ratio under different workloads. The work is implemented in an existing grid. Self-management encompasses the properties of self-configuration, self-optimization, self-healing, and self-protection. Autonomic grid computing combines autonomic computing with grid technologies to help companies reduce the complexity associated with grid systems and hide it from their grid users. Autonomic real-time transaction services incorporate fault tolerance into autonomic grid technology by automatically recovering systems from various failures. Deadlines of global transactions, sub-transactions and local transactions are calculated from the parameters arrival time, execution time, relative deadline, and slack time. Transactions are generated at different nodes according to a Poisson process with arrival rate λ (transactions per second), which serves as the workload; miss ratio is the performance metric.
    As workload increased, the miss ratio first decreased and then rose. Each sub-transaction acts as a unit competing for resources, so the heavier the workload, the more system resources the sub-transactions consume; consequently more transactions miss their deadlines because they cannot obtain sufficient resources in time. EDF yields lower global and local miss ratios than the other scheduling algorithms. Compared with FCFS, SJF, or HPF, the algorithms perform almost identically while the number of transactions is low; beyond that, EDF misses fewer deadlines than the others. Real-time transactions can thus be handled by the grid in an autonomic environment while satisfying the properties of autonomic computing.
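The deadline and miss-ratio study above can be sketched as a small simulation. The sketch below is not the thesis's implementation: it models a single non-preemptive node, with illustrative execution-time and relative-deadline parameters, and compares EDF against FCFS under Poisson arrivals with rate λ.

```python
import heapq
import random

def simulate(lam, policy="EDF", n_transactions=2000, seed=1):
    """Serve transactions arriving as a Poisson process with rate `lam`
    (transactions/second) on one node and return the deadline miss ratio.
    Execution times and relative deadlines are illustrative choices."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_transactions):
        t += rng.expovariate(lam)          # Poisson inter-arrival times
        exec_time = rng.uniform(0.5, 1.5)  # required service time
        rel_deadline = exec_time * 4       # relative deadline (slack = 3x)
        arrivals.append((t, exec_time, t + rel_deadline))

    clock, missed, ready, i = 0.0, 0, [], 0
    while i < len(arrivals) or ready:
        # admit everything that has arrived by `clock`
        while i < len(arrivals) and arrivals[i][0] <= clock:
            a, e, d = arrivals[i]
            key = d if policy == "EDF" else a  # EDF: deadline; FCFS: arrival
            heapq.heappush(ready, (key, e, d))
            i += 1
        if not ready:
            clock = arrivals[i][0]   # idle until the next arrival
            continue
        _, e, d = heapq.heappop(ready)
        clock += e                   # run the transaction to completion
        if clock > d:                # finished after its absolute deadline
            missed += 1
    return missed / n_transactions

# Compare EDF and FCFS miss ratios across several workloads.
for lam in (0.5, 1.0, 2.0):
    print(lam, simulate(lam, "EDF"), simulate(lam, "FCFS"))
```

Because the mean execution time is about 1 second, λ above 1 drives the node into overload, which is where the abstract's observation (rising miss ratio, with EDF degrading more gracefully) would be visible.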

    Service-oriented computing: concepts, characteristics and directions

    Service-Oriented Computing (SOC) is the computing paradigm that utilizes services as fundamental elements for developing applications and solutions. To build the service model, SOC relies on the Service Oriented Architecture (SOA), which is a way of reorganizing software applications and infrastructure into a set of interacting services. However, the basic SOA does not address overarching concerns such as management, service orchestration, service transaction management and coordination, security, and other concerns that apply to all components in a services architecture. In this paper, we introduce an Extended Service Oriented Architecture that provides separate tiers for composing and coordinating services and for managing services in an open marketplace by employing grid services.
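The tiering the abstract describes can be sketched in miniature: a basic service tier, a composition tier that orchestrates services, and a management tier that publishes services and meters their use. The class names (`Service`, `CompositeService`, `ManagedRegistry`) are hypothetical and do not come from the paper.

```python
class Service:
    """Basic SOA tier: a named, invokable service."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def invoke(self, payload):
        return self.fn(payload)

class CompositeService(Service):
    """Composition tier: orchestrates several services as one."""
    def __init__(self, name, services):
        super().__init__(name, None)
        self.services = services
    def invoke(self, payload):
        for s in self.services:        # simple sequential orchestration
            payload = s.invoke(payload)
        return payload

class ManagedRegistry:
    """Management tier: publishes services and records usage."""
    def __init__(self):
        self._services, self.calls = {}, {}
    def publish(self, service):
        self._services[service.name] = service
        return service
    def call(self, name, payload):
        self.calls[name] = self.calls.get(name, 0) + 1   # metering hook
        return self._services[name].invoke(payload)

registry = ManagedRegistry()
validate = registry.publish(Service("validate", lambda o: {**o, "valid": True}))
price = registry.publish(Service("price", lambda o: {**o, "total": o["qty"] * 3}))
registry.publish(CompositeService("quote", [validate, price]))
print(registry.call("quote", {"qty": 4}))
```

The management tier here only counts invocations; the paper's marketplace management would additionally cover concerns like coordination, transactions, and security.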

    Providing Transaction Class-Based QoS in In-Memory Data Grids via Machine Learning

    Elastic architectures and the "pay-as-you-go" resource pricing model offered by many cloud infrastructure providers may seem the right choice for companies dealing with data-centric applications characterized by highly variable workloads. In such a context, in-memory transactional data grids have proven particularly well suited to exploiting the advantages of elastic computing platforms, mainly thanks to their ability to be dynamically (re-)sized and tuned. However, when specific QoS requirements must be met, this kind of architecture has proven too complex for humans to manage. In particular, management is very difficult without mechanisms supporting run-time automatic sizing and tuning of the data platform and the underlying (virtual) hardware resources provided by the cloud. In this paper, we present a neural network-based architecture in which the system is constantly and automatically reconfigured, particularly in terms of computing resources.
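The paper's controller is not specified in this abstract; as a minimal sketch of the idea, the toy network below (pure Python, with hypothetical features, targets, and training data) learns a mapping from an observed workload metric to a suggested resource level, the kind of function such a reconfiguration architecture would query at run time.

```python
import math
import random

def train_sizer(samples, hidden=4, lr=0.05, epochs=3000, seed=0):
    """Train a tiny one-hidden-layer sigmoid network mapping a workload
    metric in [0,1] to a resource level in [0,1]. Illustrative only."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, y in samples:
            h = [sig(w1[j] * x + b1[j]) for j in range(hidden)]
            out = sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)
            d_out = (out - y) * out * (1 - out)   # squared-error gradient
            for j in range(hidden):
                d_h = d_out * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * d_out * h[j]        # output-layer update
                w1[j] -= lr * d_h * x             # hidden-layer update
                b1[j] -= lr * d_h
            b2 -= lr * d_out
    def predict(x):
        h = [sig(w1[j] * x + b1[j]) for j in range(hidden)]
        return sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)
    return predict

# Hypothetical training pairs: (normalized load, normalized node count).
data = [(0.0, 0.1), (0.25, 0.2), (0.5, 0.5), (0.75, 0.8), (1.0, 0.9)]
predict = train_sizer(data)
print(predict(0.1), predict(0.9))   # light vs. heavy load
```

A production controller would use richer features (throughput, abort rate, contention) and feed the prediction into an actuator that acquires or releases cloud resources; this sketch only shows the learned load-to-size mapping.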

    Developing Resource Usage Service in WLCG

    According to the Memorandum of Understanding (MoU) of the World-wide LHC Computing Grid (WLCG) project, participating sites are required to provide resource usage or accounting data to the Grid Operational Centre (GOC) to enrich the understanding of how shared resources are used, and to provide information for improving the effectiveness of resource allocation. As a multi-grid environment, the accounting process of WLCG is currently enabled by four accounting systems, each of which was developed independently by a constituent grid project. These accounting systems were designed and implemented based on project-specific local understandings of requirements, and therefore lack interoperability. In order to automate the accounting process in WLCG, three transportation methods are being introduced for streaming accounting data metered by heterogeneous accounting systems into the GOC at Rutherford Appleton Laboratory (RAL) in the UK, where accounting data are aggregated and accumulated throughout the year. These transportation methods, however, were introduced on a per-accounting-system basis, i.e. each targets a particular accounting system, making them hard to reuse and customize to new requirements. This paper presents the design of the WLCG-RUS system, a standards-compatible solution providing a consistent process for streaming resource usage data across various accounting systems, while ensuring interoperability, portability, and customization.
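The core of such a consistent process is normalizing heterogeneous site records into one standard shape. The sketch below illustrates the idea with element names loosely modeled on the OGF Usage Record format; the input dictionary layout and field names are invented for illustration and are not the actual WLCG-RUS schema.

```python
import xml.etree.ElementTree as ET

def to_usage_record(site_record):
    """Normalize one site-specific accounting record (already mapped to a
    common dict shape by a hypothetical per-system adapter) into a
    simplified usage-record XML element."""
    ur = ET.Element("UsageRecord")
    ET.SubElement(ur, "RecordId").text = site_record["id"]
    ET.SubElement(ur, "SiteName").text = site_record["site"]
    # Durations rendered as ISO 8601 periods, as the OGF UR format uses.
    ET.SubElement(ur, "WallDuration").text = f'PT{site_record["wall_s"]}S'
    ET.SubElement(ur, "CpuDuration").text = f'PT{site_record["cpu_s"]}S'
    return ur

# Records metered by two different (hypothetical) accounting systems.
records = [
    {"id": "ral-001", "site": "RAL", "wall_s": 3600, "cpu_s": 3400},
    {"id": "cern-042", "site": "CERN", "wall_s": 7200, "cpu_s": 7100},
]
root = ET.Element("UsageRecords")
for r in records:
    root.append(to_usage_record(r))
print(ET.tostring(root, encoding="unicode"))
```

With one adapter per accounting system emitting the common dict shape, the aggregation side only ever consumes the standard records, which is what makes the pipeline reusable when a new accounting system appears.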