Process-based Software Tweaking with Mobile Agents
We describe an approach, based upon software process technology, to on-the-fly monitoring, redeployment, reconfiguration, and, in general, adaptation of distributed software applications: in short, 'software tweaking'. We choose the term tweaking to refer to modifications in structure and behavior that can be made to individual components, to sets thereof, or to the overall target system configuration, such as adding, removing or substituting components, while the system is running and without bringing it down. The goals of software tweaking are manifold: supporting run-time software composition, enforcing adherence to requirements, ensuring uptime and quality of service of mission-critical systems, recovering from and preventing faults, seamless system upgrading, etc. Our approach involves dispatching and coordinating software agents, named Worklets, via a process engine, since successful tweaking of a complex distributed software system often requires the concerted action of multiple agents on multiple components. The software tweaking process must incorporate, and decide upon, knowledge about the specifications and architecture of the target software, as well as the capabilities of the Worklets. Software tweaking is correlated with a variety of other software processes, such as configuration management, deployment, validation and evolution, and makes it possible to address at run time a number of related concerns that are normally dealt with only at development time.
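The coordination scheme described in this abstract, a process engine dispatching agents that each apply one modification to a live system, can be sketched as follows. This is a hypothetical illustration, not the Worklets implementation: all class names, the `substitute` and `set_param` tweaks, and the component names are invented.

```python
# Hypothetical sketch: a process engine dispatching "Worklet"-like agents
# that tweak components of a running system. All names are illustrative.

class Component:
    def __init__(self, name, params=None):
        self.name = name
        self.params = params or {}
        self.running = True

class Worklet:
    """A mobile agent carrying one tweak to apply to the target system."""
    def __init__(self, action, **kwargs):
        self.action = action
        self.kwargs = kwargs

    def run(self, system):
        self.action(system, **self.kwargs)

class ProcessEngine:
    """Coordinates multiple Worklets so a multi-part tweak is one process step."""
    def __init__(self, system):
        self.system = system  # name -> Component

    def dispatch(self, worklets):
        for w in worklets:  # concerted action of multiple agents
            w.run(self.system)

def substitute(system, old, new):
    """Coarse tweak: replace one component with another."""
    system.pop(old)
    system[new] = Component(new)

def set_param(system, target, key, value):
    """Fine tweak: retune a component's operational parameter in place."""
    system[target].params[key] = value

# Tweak a live system without bringing it down: swap a cache component
# and retune a worker, as one coordinated process step.
system = {"cache-v1": Component("cache-v1"), "worker": Component("worker")}
engine = ProcessEngine(system)
engine.dispatch([
    Worklet(substitute, old="cache-v1", new="cache-v2"),
    Worklet(set_param, target="worker", key="pool_size", value=8),
])
```

The point of the sketch is the separation the abstract emphasises: the agents carry the changes, while the engine owns the ordering, so multi-component tweaks stay coordinated.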
Can intelligent optimisation techniques improve computing job scheduling in a Grid environment? review, problem and proposal
In the existing Grid scheduling literature, the reported methods and strategies mostly concern high-level schedulers such as global schedulers, external schedulers, data schedulers, and cluster schedulers. Although a number of these have previously considered job scheduling, thus far only relatively simple queue-based policies such as First In First Out (FIFO) have been considered for local job scheduling within Grid contexts. Our initial research shows that it is worth investigating the potential impact on Grid performance when intelligent optimisation techniques are applied to local scheduling policies. The research problem is defined, and a basic research methodology with a detailed roadmap is presented. This paper forms a proposal with the intention of exchanging ideas and seeking potential collaborators.
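The gap the proposal points at can be made concrete with a toy comparison: the FIFO policy it mentions versus even a very simple optimising policy (shortest-job-first, used here purely as an illustration; the paper does not commit to any particular technique). Job tuples and the mean-finish-time metric are assumptions of this sketch.

```python
import heapq
from collections import deque

def fifo_schedule(jobs):
    """FIFO local policy: run jobs in arrival order.
    jobs: list of (job_id, runtime); returns (job_id, finish_time) pairs."""
    q, t, out = deque(jobs), 0, []
    while q:
        job_id, runtime = q.popleft()
        t += runtime
        out.append((job_id, t))
    return out

def sjf_schedule(jobs):
    """Shortest-job-first: a minimal 'intelligent' alternative policy."""
    heap = [(runtime, job_id) for job_id, runtime in jobs]
    heapq.heapify(heap)
    t, out = 0, []
    while heap:
        runtime, job_id = heapq.heappop(heap)
        t += runtime
        out.append((job_id, t))
    return out

def mean_finish(schedule):
    """Average completion time: one plausible performance measure."""
    return sum(f for _, f in schedule) / len(schedule)

# One long job arriving ahead of two short ones: the classic case where
# FIFO penalises the short jobs and reordering pays off.
jobs = [("a", 10), ("b", 1), ("c", 2)]
fifo_mean = mean_finish(fifo_schedule(jobs))  # finishes at 10, 11, 13
sjf_mean = mean_finish(sjf_schedule(jobs))    # finishes at 1, 3, 13
```

Even this trivial reordering lowers the mean finish time, which is the kind of local-policy improvement the proposal suggests investigating with genuinely intelligent optimisation techniques.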
Combining Mobile Agents and Process-based Coordination to Achieve Software Adaptation
We have developed a model and a platform for end-to-end run-time monitoring, behavior and performance analysis, and consequent dynamic adaptation of distributed applications. This paper concentrates on how we coordinate and actuate the potentially multi-part adaptation, operating externally to the target systems, that is, without requiring any a priori built-in adaptation facilities on the part of those target systems. The actual changes are performed on the fly on the target by communities of mobile software agents, coordinated by a decentralized process engine. These changes can be coarse-grained, such as replacing entire components or rearranging the connections among components, or fine-grained, such as changing the operational parameters, internal state and functioning logic of individual components. We discuss our successful experience using our approach for the dynamic adaptation of a large-scale commercial application, which requires both coarse- and fine-grained modifications.
Retrofitting Autonomic Capabilities onto Legacy Systems
Autonomic computing - self-configuring, self-healing, self-optimizing applications, systems and networks - is a promising solution to ever-increasing system complexity and the spiraling costs of human management as systems scale to global proportions. Most results to date, however, suggest ways to architect new software constructed from the ground up as autonomic systems, whereas in the real world organizations continue to use stovepipe legacy systems and/or build 'systems of systems' that draw from a gamut of disparate technologies from numerous vendors. Our goal is to retrofit autonomic computing onto such systems, externally, without any need to understand, modify or even recompile the target system's code. We present an autonomic infrastructure that operates similarly to active middleware, to explicitly add autonomic services to pre-existing systems via continual monitoring and a feedback loop that performs, as needed, reconfiguration and/or repair. Our lightweight design and separation of concerns enable easy adoption of individual components, independent of the rest of the full infrastructure, for use with a large variety of target systems. This work has been validated by several case studies spanning multiple application domains.
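The external feedback loop this abstract describes (continual monitoring, then reconfiguration or repair, with no changes to the target's code) can be sketched minimally as follows. The target, the probe, and the repair action are hypothetical stand-ins, not part of the infrastructure presented in the paper.

```python
# Minimal sketch of an external monitor-and-repair feedback loop that treats
# the legacy system as a black box. All names are illustrative.

class LegacyTarget:
    """Stand-in for a pre-existing system we cannot modify or recompile."""
    def __init__(self):
        self.healthy = True

    def probe(self):
        # Externally observable state only: no internal hooks required.
        return {"healthy": self.healthy}

class AutonomicLoop:
    """Continual monitoring plus feedback: sense -> analyse -> act."""
    def __init__(self, target, repair):
        self.target = target
        self.repair = repair
        self.log = []

    def tick(self):
        state = self.target.probe()      # monitor
        if not state["healthy"]:         # analyse
            self.repair(self.target)     # reconfigure / repair, as needed
            self.log.append("repaired")
        else:
            self.log.append("ok")

def restart(target):
    """Hypothetical repair action: restore the target to a healthy state."""
    target.healthy = True

target = LegacyTarget()
loop = AutonomicLoop(target, restart)
loop.tick()               # healthy: loop records "ok"
target.healthy = False    # inject a fault into the black-box target
loop.tick()               # fault detected and repaired externally
```

The separation of concerns mirrors the abstract's claim: the loop component can be adopted on its own, because it depends only on a probe and a repair action, not on the target's internals.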
Complex approach to service development
Modern companies, including telecommunication companies and mobile operators working in the global environment, must guarantee technological effectiveness and innovation, renewing their technologies and services. Telecommunication companies rely on Operation Support System/Business Support System platforms. In current state-of-the-art approaches, several iterations involving analysts and system architects are necessary; methodologies allow modeling of functional or non-functional requirements, but they take into account neither the interaction between functional and non-functional requirements nor the collaboration between services. Web Services Agreement is a convenient way to express QoS parameters, but state-of-the-art SLA-aware methods cannot support all classes of non-functional parameters while also providing run-time support and dynamic reconfiguration. The approach proposed in this paper fills this gap. It employs a well-defined workflow and analysis model for developing and adapting complex software systems, supporting all classes of non-functional parameters and providing run-time support and dynamic reconfiguration of the provided services.
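The run-time side of such an SLA-aware approach can be illustrated with a small sketch: an agreement holds non-functional (QoS) terms, measured values are checked against them, and a violation triggers a reconfiguration hook. The term names, bounds, and the scale-out reaction here are invented for illustration; they are not drawn from WS-Agreement or from the paper's model.

```python
# Illustrative run-time SLA check with a dynamic-reconfiguration hook.
# Term names and the reconfiguration policy are assumptions of this sketch.

sla = {
    "response_time_ms": {"max": 200},   # upper-bounded QoS term
    "availability": {"min": 0.99},      # lower-bounded QoS term
}

def violations(sla, measured):
    """Return the list of SLA terms whose measured value breaks its bound."""
    out = []
    for term, bounds in sla.items():
        value = measured[term]
        if "max" in bounds and value > bounds["max"]:
            out.append(term)
        if "min" in bounds and value < bounds["min"]:
            out.append(term)
    return out

def reconfigure(service, violated_terms):
    """Placeholder dynamic reconfiguration: scale out on a latency breach."""
    if "response_time_ms" in violated_terms:
        service["replicas"] += 1
    return service

service = {"replicas": 2}
measured = {"response_time_ms": 350, "availability": 0.995}
violated = violations(sla, measured)
if violated:
    service = reconfigure(service, violated)
```

The sketch shows only the monitoring-reaction cycle; the paper's contribution is broader, covering the development-time workflow and all classes of non-functional parameters as well.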
A Model for Scientific Workflows with Parallel and Distributed Computing
In the last decade we have witnessed an immense evolution of computing infrastructures
in terms of processing, storage and communication. On one hand, developments in hardware architectures have made it possible to run multiple virtual machines on a single physical machine. On the other hand, the increase of the available network communication bandwidth has enabled the widespread use of distributed computing infrastructures, for example based on clusters, grids and clouds. The above factors enabled different scientific communities to aim for the development and implementation of complex scientific applications possibly involving large amounts of data. However, due to their structural complexity, these applications require decomposition models to allow multiple tasks running in parallel and distributed environments.
The scientific workflow concept arises naturally as a way to model applications composed of multiple activities. In fact, in the past decades many initiatives have been
undertaken to model application development using the workflow paradigm, both in
the business and in scientific domains. However, despite such intensive efforts, current
scientific workflow systems and tools still have limitations, which pose difficulties to the
development of emerging large-scale, distributed and dynamic applications.
This dissertation proposes the AWARD model for scientific workflows with parallel
and distributed computing. AWARD is an acronym for Autonomic Workflow Activities
Reconfigurable and Dynamic.
The AWARD model has the following main characteristics.
It is based on a decentralized execution control model where multiple autonomic
workflow activities interact by exchanging tokens through input and output ports. The
activities can be executed separately in diverse computing environments, such as in a
single computer or on multiple virtual machines running on distributed infrastructures,
such as clusters and clouds.
It provides basic workflow patterns for parallel and distributed application decomposition, and other useful patterns supporting feedback loops and load balancing. The model is suitable for expressing applications based on a finite or infinite number of iterations, thus making it possible to model long-running workflows, which are typical in scientific experimentation.
A distinctive contribution of the AWARD model is its support for dynamic reconfiguration
of long-running workflows. A dynamic reconfiguration makes it possible to modify the
structure of the workflow, for example to introduce new activities or to modify the connections
between activity input and output ports. The activity behavior can also be modified,
for example, by dynamically replacing the activity algorithm.
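The two mechanisms just described, activities exchanging tokens through input and output ports and behaviour change by replacing an activity's algorithm at run time, can be sketched as follows. This is an illustrative Python model, not the AWARD implementation; all names are invented.

```python
from queue import Queue

# Illustrative sketch of an autonomic workflow activity with token-carrying
# ports, whose algorithm can be replaced while the workflow keeps running.

class Activity:
    def __init__(self, algorithm):
        self.algorithm = algorithm   # behaviour, swappable at run time
        self.inp = Queue()           # input port: receives tokens
        self.out = Queue()           # output port: emits tokens

    def step(self):
        """Consume one input token, apply the algorithm, emit the result."""
        token = self.inp.get()
        self.out.put(self.algorithm(token))

    def reconfigure(self, new_algorithm):
        """Dynamic reconfiguration of behaviour: no restart of the activity."""
        self.algorithm = new_algorithm

activity = Activity(lambda t: t * 2)
activity.inp.put(3)
activity.step()                          # token 3 -> 6

activity.reconfigure(lambda t: t + 100)  # swap the algorithm on the fly
activity.inp.put(3)
activity.step()                          # same activity, same token -> 103

results = [activity.out.get(), activity.out.get()]
```

Structural reconfiguration (adding activities, rewiring ports) would operate on the graph of such activities; the sketch shows only the behavioural case, which is the simplest to demonstrate in isolation.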
In addition to the proposal of a new workflow model, this dissertation presents the
implementation of a fully functional software architecture that supports the AWARD
model. The implemented prototype was used to validate and refine the model across
multiple workflow scenarios; the experimental results clearly demonstrate the advantages of the major characteristics and contributions of the AWARD model. The prototype was also used to develop application cases, such as a workflow supporting the implementation of the MapReduce model and a workflow supporting a text mining application developed by an external user.
The extensive experimental work confirmed the adequacy of the AWARD model and
its implementation for developing applications that exploit parallelism and distribution
using the scientific workflow paradigm.
A Mobile Agent Approach to Lightweight Process Workflow
The Programming Systems Lab at Columbia University has investigated software process modeling and enactment since its inception in the mid-1980s, initially in the Marvel project. In the early to mid-1990s, we extended to cross-organizational processes operating over the Internet, in Oz and OzWeb. The successive prototype frameworks we developed and demonstrated were used on a daily basis in-house to maintain, deploy and monitor their own components, APIs and user interfaces. The new process technology first presented here is broadly based on our decade of research on, and experimentation with, architecting and using such prototype services and software development processes targeted to Internet/Web middleware and applications, but reflects a major departure from our own (and others') previous directions. In particular, current process and workflow systems, including our own, are often too rigid for open-ended creative intellectual work, unable to rapidly adapt either the models or the enactment to situational context and/or user role. On the other hand, the process/workflow ideal implies a flexible mechanism for composition and coordination of information system components. We now present our in-progress development of rehostable lightweight mobile agents for on-the-fly process construction, adaptation and evolution, system reconfiguration, and knowledge propagation.
A language and toolkit for the specification, execution and monitoring of dependable distributed applications
PhD Thesis
This thesis addresses the problem of specifying the composition of distributed applications
out of existing applications, possibly legacy ones. With the automation of business processes
on the increase, more and more applications of this kind are being constructed. The resulting
applications can be quite complex, usually long-lived and are executed in a heterogeneous
environment. In a distributed environment, long-lived activities need support for fault tolerance
and dynamic reconfiguration. Indeed, it is likely that the environment where they are run will
change (nodes may fail, services may be moved elsewhere or withdrawn) during their
execution and the specification will have to be modified. There is also a need for modularity,
scalability and openness. However, most of the existing systems only consider part of these
requirements. A new area of research, called workflow management has been trying to address
these issues.
This work first looks at what needs to be addressed to support the specification and
execution of these new applications in a heterogeneous, distributed environment. A co-
ordination language (scripting language) is developed that fulfils the requirements of specifying
the composition and inter-dependencies of distributed applications with the properties of
dynamic reconfiguration, fault tolerance, modularity, scalability and openness. The architecture
of the overall workflow system and its implementation are then presented. The system has been
implemented as a set of CORBA services and the execution environment is built using a
transactional workflow management system. Next, the thesis describes the design of a toolkit
to specify, execute and monitor distributed applications. The design of the co-ordination
language and the toolkit represents the main contribution of the thesis.
Funding: UK Engineering and Physical Sciences Research Council; CaberNet; Northern Telecom (Nortel)
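What such a co-ordination (scripting) layer expresses, composition of existing applications, inter-dependencies between them, and fault tolerance, can be sketched in miniature. This is a hypothetical illustration, not the thesis's language or its CORBA-based implementation: the dependency map, the bounded-retry policy, and all task names are invented.

```python
# Illustrative co-ordination sketch: tasks wrapping existing applications,
# declared inter-dependencies, and simple fault tolerance via bounded retry.

def run_workflow(tasks, deps, retries=2):
    """tasks: name -> callable; deps: name -> list of prerequisite names.
    Returns the execution order and each task's result."""
    done, order = {}, []

    def run(name):
        if name in done:
            return
        for d in deps.get(name, []):        # composition: prerequisites first
            run(d)
        for attempt in range(retries + 1):  # fault tolerance: bounded retries
            try:
                done[name] = tasks[name]()
                order.append(name)
                return
            except Exception:
                if attempt == retries:
                    raise                   # give up after the retry budget

    for name in tasks:
        run(name)
    return order, done

# A component that fails once, then succeeds: the retry absorbs the fault.
flaky_calls = {"n": 0}
def flaky():
    flaky_calls["n"] += 1
    if flaky_calls["n"] == 1:
        raise RuntimeError("transient failure")
    return "ok"

tasks = {"extract": lambda: "data", "transform": flaky, "load": lambda: "stored"}
deps = {"transform": ["extract"], "load": ["transform"]}
order, results = run_workflow(tasks, deps)
```

The thesis's language goes much further (transactional execution, dynamic reconfiguration, monitoring); the sketch only shows why declaring dependencies and failure policies at the co-ordination level, outside the wrapped applications, is attractive.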
Developing and operating time critical applications in clouds: the state of the art and the SWITCH approach
Cloud environments can provide virtualized, elastic, controllable and high-quality on-demand services for supporting complex distributed applications. However, the engineering methods and software tools used for developing, deploying and executing classical time-critical applications do not, as yet, account for the programmability and controllability provided by clouds, and so time-critical applications cannot yet benefit from the full potential of cloud technology. This paper reviews the state of the art of technologies involved in developing time-critical cloud applications, and presents the approach of a recently funded EU H2020 project: the Software Workbench for Interactive, Time Critical and Highly self-adaptive cloud applications (SWITCH). SWITCH aims to improve the existing development and execution model of time-critical applications by introducing a novel conceptual model, the application-infrastructure co-programming and control model, in which application QoS and QoE, together with the programmability and controllability of cloud environments, are included in the complete application lifecycle.
- β¦