9 research outputs found

    Crowd Sourcing Rules in Agile Software Engineering to Improve Efficiency Using Ontological Framework

    Get PDF
    A Business Rule Management System (BRMS) provides the necessary seeds for planning, implementing, verifying and validating Agile requirements. The BRMS model needs to be modified so that organizational growth runs parallel with the intrinsic expansion in the number of user requirements in Agile development. This growth in requirements or rules in Agile software development is an obvious overhead that needs to be managed properly, considering its sprint nature. A semantic approach is followed through the design and maintenance of an ontology called RAgile. The ontology is developed in Protégé 5, which has an inherent capability for ontology merging in the case of disparate rule files. User requirements that are drawn into the rules or policies depend upon the features users expect of the Agile system.
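The merging of disparate rule files mentioned above can be sketched in miniature: for ground axioms, merging two ontologies amounts to taking the union of their triple sets. This is an illustrative sketch, not the RAgile implementation; the rule names and predicates are invented.

```python
# Model each rule/ontology file as a set of (subject, predicate, object)
# triples; merging then deduplicates axioms shared across sprints.

def merge_ontologies(*graphs):
    """Merge any number of triple sets, deduplicating shared axioms."""
    merged = set()
    for g in graphs:
        merged |= g
    return merged

# Two hypothetical rule files drawn from user requirements in different sprints.
sprint1_rules = {
    ("Rule1", "appliesTo", "UserStory"),
    ("Rule1", "hasPriority", "high"),
}
sprint2_rules = {
    ("Rule1", "appliesTo", "UserStory"),   # duplicate axiom, merged away
    ("Rule2", "appliesTo", "Backlog"),
}

merged = merge_ontologies(sprint1_rules, sprint2_rules)
print(len(merged))  # 3 distinct triples after deduplication
```

Real ontology merging (as in Protégé) must additionally reconcile blank nodes and conflicting definitions, but the set-union view captures why rule growth across sprints stays manageable.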

    Agents in Bioinformatics

    No full text
    The scope of the Technical Forum Group (TFG) on Agents in Bioinformatics (BIOAGENTS) was to inspire collaboration between the agent and bioinformatics communities with the aim of creating an opportunity to propose a different (agent-based) approach to the development of computational frameworks both for data analysis in bioinformatics and for system modelling in computational biology. During the day, the participants examined the future of research on agents in bioinformatics primarily through 12 invited talks selected to cover the most relevant topics. From the discussions, it became clear that there are many perspectives to the field, ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages for use by information agents, and to the use of Grid agents, each of which requires further exploration. The interactions between participants encouraged the development of applications that describe a way of creating agent-based simulation models of biological systems, starting from an hypothesis and inferring new knowledge (or relations) by mining and analysing the huge amount of public biological data. In this report we summarise and reflect on the presentations and discussions

    Rule responder HCLS eScience infrastructure

    Full text link
    The emerging field of integrative bioinformatics provides enabling methods and technologies for transparent information integration across distributed heterogeneous data sources, tools and services. The aim of this article is to evolve a flexible and expandable distributed Pragmatic Web eScience infrastructure in the domain of Health Care and Life Science (HCLS), called Rule Responder HCLS. Rule Responder HCLS is about providing information consumers with rule-based agents to transform existing information into relevant information of practical consequence, hence providing control to the end-users by enabling them to express, in a declarative rule-based way, how to turn existing information into personally relevant information and how to react or make automated decisions on top of it.
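The declarative idea described above, end-users stating rules that turn a stream of information into personally relevant notifications, can be sketched as condition/action pairs. The record fields and rule shown are invented for illustration; this is not the actual Rule Responder HCLS rule language.

```python
# Hedged sketch: each end-user supplies declarative rules as
# (condition, action) pairs; an agent applies them to incoming records
# to derive personally relevant information and automated reactions.

def run_rules(rules, records):
    """Apply each (condition, action) rule to every record; collect results."""
    derived = []
    for record in records:
        for condition, action in rules:
            if condition(record):
                derived.append(action(record))
    return derived

# A hypothetical personal rule: flag trial announcements matching a condition
# the user cares about.
my_rules = [
    (lambda r: r["topic"] == "clinical-trial" and "diabetes" in r["keywords"],
     lambda r: ("notify", r["title"])),
]
feed = [
    {"topic": "clinical-trial", "keywords": ["diabetes"], "title": "Trial A"},
    {"topic": "publication", "keywords": ["diabetes"], "title": "Paper B"},
]
print(run_rules(my_rules, feed))  # [('notify', 'Trial A')]
```

The point of the declarative style is that the user states *what* is relevant; the agent infrastructure decides *when and where* to evaluate it.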

    Agents in bioinformatics, computational and systems biology

    Get PDF
    The adoption of agent technologies and multi-agent systems constitutes an emerging area in bioinformatics. In this article, we report on the activity of the Working Group on Agents in Bioinformatics (BIOAGENTS), founded during the first AgentLink III Technical Forum meeting on the 2nd of July, 2004, in Rome. The meeting provided an opportunity for seeding collaborations between the agent and bioinformatics communities to develop a different (agent-based) approach to computational frameworks, both for data analysis and management in bioinformatics and for systems modelling and simulation in computational and systems biology. The collaborations gave rise to applications and integrated tools that we summarize and discuss in the context of the state of the art in this area. We investigate future challenges and argue that the field should still be explored from many perspectives, ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages to be used by information agents, and to the adoption of agents for computational grids.

    Dynamic deployment of web services on the internet or grid

    Get PDF
    PhD thesis. This thesis focuses on the area of dynamic Web Service deployment for grid and Internet applications. It presents a new Dynamic Service Oriented Architecture (DynaSOAr) that enables the deployment of Web Services at run-time in response to consumer requests. The service-oriented approach to grid and Internet computing is centred on two parties: the service provider and the service consumer. This thesis investigates the introduction of mobility into this service-oriented approach, allowing for better use of resources and improved quality of service. To this end, it examines the role of the service provider and makes the case for a clear separation of its concerns into two distinct roles: that of a Web Service Provider, whose responsibility is to receive and direct consumer requests and supply service implementations, and a Host Provider, whose role is to deploy services and process consumers' requests on available resources. This separation of concerns breaks the implicit bond between a published Web Service endpoint (network address) and the resource upon which the service is deployed. It also allows the architecture to respond dynamically to changes in service demand and quality of service requirements. Clearly defined interfaces for each role are presented, which form the infrastructure of DynaSOAr. The approach taken is wholly based on Web Services. The dynamic deployment of service code between separate roles, potentially running in different administrative domains, raises a number of security issues, which are addressed. A DynaSOAr service invocation involves three parties: the requesting Consumer, a Web Service Provider and a Host Provider; this tripartite relationship requires a security model that allows the concerns of each party to be enforced for a given invocation.
    This thesis, therefore, presents a Tripartite Security Model and an architecture that allows the representation, propagation and enforcement of three separate sets of constraints. A prototype implementation of DynaSOAr is used to evaluate the claims made, and the results show that a significant benefit in terms of round-trip execution time for data-intensive applications is achieved. Additional benefits in terms of parallel deployments to satisfy multiple concurrent requests are also shown.
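The role separation described in the abstract can be sketched as two small classes. The class and method names are illustrative assumptions, not the published DynaSOAr interfaces: the point is only that the published endpoint (the Web Service Provider) is decoupled from the resource that runs the code (a Host Provider), so services can be shipped to hosts on demand.

```python
# Minimal sketch of DynaSOAr-style role separation (names are invented):
# a WebServiceProvider receives consumer requests and holds implementations;
# HostProviders deploy and execute them, so an endpoint is no longer bound
# to one fixed resource.

class HostProvider:
    def __init__(self, name):
        self.name = name
        self.deployed = {}          # service name -> implementation

    def deploy(self, service, impl):
        self.deployed[service] = impl

    def invoke(self, service, *args):
        return self.deployed[service](*args)

class WebServiceProvider:
    def __init__(self, hosts):
        self.hosts = hosts          # available Host Providers
        self.repository = {}        # service name -> code to ship

    def publish(self, service, impl):
        self.repository[service] = impl

    def request(self, service, *args):
        # Pick a host; deploy on demand if the service is not yet there.
        host = self.hosts[0]
        if service not in host.deployed:
            host.deploy(service, self.repository[service])
        return host.invoke(service, *args)

wsp = WebServiceProvider([HostProvider("hostA")])
wsp.publish("blast", lambda seq: "aligned:" + seq)
print(wsp.request("blast", "ACGT"))  # aligned:ACGT
```

A real deployment would add the host-selection policy and the tripartite security checks the thesis describes; the sketch shows only why dynamic deployment removes the endpoint-to-resource bond.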

    Using ontology and semantic web services to support modeling in systems biology

    Get PDF
    This thesis addresses the problem of collaboration among experimental biologists and modelers in the study of systems biology by using ontology and Semantic Web Services techniques. Modeling in systems biology is concerned with using experimental information and mathematical methods to build quantitative models across different biological scales. This requires interoperation among various knowledge sources and services. Ontology and Semantic Web Services potentially provide an infrastructure to meet this requirement. In our study, we propose an ontology-centered framework within the Semantic Web infrastructure that aims at standardizing various areas of knowledge involved in the biological modeling processes. In this framework, first we specify an ontology-based meta-model for building biological models. This meta-model supports using shared biological ontologies to annotate biological entities in the models, allows semantic queries and automatic discoveries, enables easy model reuse and composition, and serves as a basis to embed external knowledge. We also develop means of transforming biological data sources and data analysis methods into Web Services. These Web Services can then be composed together to perform parameterization in biological modeling. The knowledge of decision-making and workflow of parameterization processes are then recorded by the semantic descriptions of these Web Services, and embedded in model instances built on our proposed meta-model. We use three cases of biological modeling to evaluate our framework. By examining our ontology-centered framework in practice, we conclude that by using ontology to represent biological models and using Semantic Web Services to standardize knowledge components in modeling processes, greater capabilities of knowledge sharing, reuse and collaboration can be achieved. 
    We also conclude that ontology-based biological models with formal semantics are essential to standardize knowledge in compliance with the Semantic Web vision.
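The annotation idea at the heart of the meta-model, model entities carrying references to shared biological ontology terms so that models become semantically queryable, can be sketched as follows. The class shape and query function are assumptions for illustration, not the thesis's actual schema; the GO URI prefix and term are real.

```python
# Sketch: annotate model entities with shared ontology term URIs, then run a
# "semantic query" that finds entities by term rather than by local name.

GO = "http://purl.obolibrary.org/obo/"

class ModelEntity:
    def __init__(self, name):
        self.name = name
        self.annotations = set()    # ontology term URIs

    def annotate(self, term_id):
        self.annotations.add(GO + term_id)

def find_by_term(entities, term_id):
    """Return names of entities annotated with a given ontology term."""
    uri = GO + term_id
    return [e.name for e in entities if uri in e.annotations]

kinase = ModelEntity("MAPK1_node")
kinase.annotate("GO_0004707")       # GO term for MAP kinase activity
pool = [kinase, ModelEntity("unannotated_node")]
print(find_by_term(pool, "GO_0004707"))  # ['MAPK1_node']
```

Because the annotation is a shared URI rather than a local label, two independently built models that annotate with the same term become discoverable and composable, which is the reuse benefit the abstract claims.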

    Social techniques for effective interactions in open cooperative systems

    Get PDF
    Distributed systems are becoming increasingly popular, both in academic and commercial communities, because of the functionality they offer for sharing resources among participants of these communities. As individual systems with different purposes and functionalities are developed, and as data of many different kinds are generated, the value to be gained from sharing services with others rather than just personal use, increases dramatically. This, however, is only achievable if participants of open systems cooperate with each other, to ensure the longevity of the system and the richness of available services, and to make decisions about the services they use to ensure that they are of sufficient levels of quality. Moreover, the properties of distributed systems such as openness, dynamism, heterogeneity and resource-bounded providers bring a number of challenges to designing computational entities that cooperate effectively and efficiently. In particular, computational entities must deal with the diversity of available services, the possible resource limitations for service provision, and with finding providers willing to cooperate even in the absence of economic gains. This requires a means not only to provide non-monetary incentives for service providers, but also to account for the level of quality of cooperations, in terms of the quality of provided and received services. In support of this, entities must be capable of selecting among alternative interaction partners, since each will offer distinct properties, which may change due to the dynamism of the environment. With this in mind, our goal is to develop mechanisms to allow effective cooperation between agents operating in systems that are open, dynamic, heterogeneous, and cooperative. Such mechanisms are needed in the context of cooperative applications with services that are free of charge, such as those in bioinformatics. 
    To achieve this, we propose a framework for non-monetary cooperative interactions, which provides non-monetary incentives for service provision and a means to analyse cooperations; an evaluation method, for evaluating dynamic services; a provider selection mechanism, for decision-making over service requests; and a requester selection mechanism, for decision-making over service provision.
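A provider-selection mechanism of the kind proposed above can be sketched as a ranking over observed service properties. The scoring rule and its weights are assumptions for illustration, not the thesis's actual mechanism; they merely show how a requester might trade off quality against reliability when choosing among alternative partners.

```python
# Illustrative provider selection: rank candidate providers by a weighted
# combination of observed service quality and reliability, then pick the best.
# The 0.7/0.3 weights are arbitrary assumptions.

def select_provider(candidates):
    """candidates: name -> (quality, reliability), both in [0, 1]."""
    return max(candidates,
               key=lambda n: 0.7 * candidates[n][0] + 0.3 * candidates[n][1])

# Hypothetical free-of-charge bioinformatics service mirrors.
providers = {
    "blast_mirror_A": (0.90, 0.50),   # high quality, often unavailable
    "blast_mirror_B": (0.80, 0.90),   # slightly lower quality, dependable
    "blast_mirror_C": (0.60, 0.95),
}
print(select_provider(providers))  # blast_mirror_B
```

In a dynamic open system the scores themselves would be updated after every interaction, which is where the evaluation method for dynamic services comes in.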

    Applying Agents to Bioinformatics in GeneWeaver

    Get PDF
    Recent years have seen dramatic and sustained growth in the amount of genomic data being generated, including in late 1999 the first complete sequence of a human chromosome. The challenge now faced by biological scientists is to make sense of this vast amount of accumulated and accumulating data. Fortunately, numerous databases are provided as resources containing relevant data, and there are similarly many available programs that analyse this data and attempt to understand it. However, the key problem in analysing this genomic data is how to integrate the software and primary databases in a flexible and robust way. The wide range of available programs conform to very different input, output and processing requirements, typically with little consideration given to issues of integration, and in many cases with only token efforts made in the direction of usability. In this paper, we introduce the problem domain and describe GeneWeaver, a multi-agent system for genome analysis.

    Applying agents to bioinformatics in GeneWeaver (The Future of Information Agents in Cyberspace)

    No full text
    Recent years have seen dramatic and sustained growth in the amount of genomic data being generated, including in late 1999 the first complete sequence of a human chromosome. The challenge now faced by biological scientists is to make sense of this vast amount of accumulated and accumulating data. Fortunately, numerous databases are provided as resources containing relevant data, and there are similarly many available programs that analyse this data and attempt to understand it. However, the key problem in analysing this genomic data is how to integrate the software and primary databases in a flexible and robust way. The wide range of available programs conform to very different input, output and processing requirements, typically with little consideration given to issues of integration, and in many cases with only token efforts made in the direction of usability. In this paper, we introduce the problem domain and describe GeneWeaver, a multi-agent system for genome analysis. We explain the suitability of the information agent paradigm to the problem domain, focus on the problem of incorporating different existing analysis tools, and describe progress to date.