1,803 research outputs found

    Semantic Web: An Integrated Approach for Web Service Discovery, Selection and Composition

    Get PDF
    Web services are remote methods that can be invoked through open standards such as the Simple Object Access Protocol (SOAP). The growing number of web services in repositories makes the selection process very complex, and the same complexity extends to composing web services. This research focuses on semantic web service selection and composition through the design and implementation of a framework. The proposed framework follows an ontology-based service selection approach, and the selected services then participate in the composition process. The approach relies on semantic search and uses Quality of Service (QoS) attributes for service selection and composition. The entire framework is implemented with semantic web technology, and the performance of the system is observed with domain-specific ontologies.
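
    The abstract describes ontology-based selection driven by Quality of Service attributes but does not include the framework itself. The sketch below is only a minimal, hypothetical illustration of QoS-weighted ranking of candidate services; the attribute names and weights are assumptions, not taken from the paper.

```typescript
// Hypothetical sketch of QoS-weighted service selection (not the paper's
// implementation). Each candidate advertises a few QoS attributes; we normalise
// them and rank candidates by a weighted score that prefers low response time
// and cost and high availability.

interface ServiceCandidate {
  name: string;
  responseTimeMs: number; // lower is better
  availability: number;   // 0..1, higher is better
  costPerCall: number;    // lower is better
}

function rankByQos(candidates: ServiceCandidate[]): ServiceCandidate[] {
  const maxRt = Math.max(...candidates.map(c => c.responseTimeMs));
  const maxCost = Math.max(...candidates.map(c => c.costPerCall));

  const score = (c: ServiceCandidate): number =>
    0.4 * (1 - c.responseTimeMs / maxRt) + // normalised and inverted
    0.4 * c.availability +
    0.2 * (1 - c.costPerCall / maxCost);

  return [...candidates].sort((a, b) => score(b) - score(a));
}

const ranked = rankByQos([
  { name: "WeatherA", responseTimeMs: 120, availability: 0.99, costPerCall: 0.02 },
  { name: "WeatherB", responseTimeMs: 80, availability: 0.95, costPerCall: 0.05 },
]);
console.log(ranked.map(c => c.name)); // best candidate first
```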

    An Integrated Semantic Web Service Discovery and Composition Framework

    Full text link
    In this paper we present a theoretical analysis of graph-based service composition in terms of its dependency on service discovery. Driven by this analysis, we define a composition framework by means of integration with fine-grained I/O service discovery, which enables the generation of a graph-based composition containing the set of services that are semantically relevant for an input-output request. The proposed framework also includes an optimal composition search algorithm to extract the best composition from the graph, minimising the length and the number of services, and different graph optimisations to improve the scalability of the system. A practical implementation used for the empirical analysis is also provided. This analysis proves the scalability and flexibility of our proposal and provides insights into how integrated composition systems can be designed in order to achieve good performance in real scenarios for the Web. Comment: Accepted to appear in IEEE Transactions on Services Computing 201
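
    As a rough illustration of I/O-driven composition (not the paper's algorithm, which additionally optimises the extracted composition), the sketch below expands a composition layer by layer from the request inputs until the requested outputs are reachable, assuming exact matching of semantic concepts.

```typescript
// Minimal sketch of forward I/O-driven composition (illustrative only): starting
// from the request inputs, repeatedly add every service whose inputs are already
// covered, layer by layer, until the requested outputs are reachable. The number
// of layers approximates the composition length.

interface Service {
  name: string;
  inputs: string[];   // required semantic concepts
  outputs: string[];  // provided semantic concepts
}

function composeForward(
  registry: Service[],
  requestInputs: string[],
  requestOutputs: string[]
): string[][] | null {
  const known = new Set(requestInputs);
  const used = new Set<string>();
  const layers: string[][] = [];

  while (!requestOutputs.every(o => known.has(o))) {
    const layer = registry.filter(
      s => !used.has(s.name) && s.inputs.every(i => known.has(i))
    );
    if (layer.length === 0) return null; // request is not satisfiable
    for (const s of layer) {
      used.add(s.name);
      s.outputs.forEach(o => known.add(o));
    }
    layers.push(layer.map(s => s.name));
  }
  return layers; // one entry per composition step
}

const composition = composeForward(
  [
    { name: "Geocode", inputs: ["Address"], outputs: ["Coordinates"] },
    { name: "Forecast", inputs: ["Coordinates"], outputs: ["Weather"] },
  ],
  ["Address"],
  ["Weather"]
);
console.log(composition); // [["Geocode"], ["Forecast"]]
```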

    Integration of Word Cloud and Tag Cloud with ILS OPAC for Enhancing the Folksonomy Based Services

    Get PDF
    Purpose: The main purpose of this research is to explore folksonomy-based services with the help of word clouds and tag clouds. Broader and narrower terms are searched against bibliographic records and linked to web repositories. The paper shows the mechanism for integrating a word cloud and tag cloud with the ILS OPAC using Koha and HTML scripts, and describes the design and development of a process for easy retrieval of library resources based on folksonomy-enabled services. These new services are very helpful for library users.

    Methodology: The folksonomy-based integrated framework was designed and developed on the basis of global repositories. Word clouds and tag clouds are produced with HTML scripts and the Koha ILS OPAC. A framework was developed for incorporating the HTML script into the Koha OPAC main user block via the tools and news options in the staff client interface. The HTML script was designed from the word cloud concept in the online repository. This integrated Web 2.0 framework is practical for library professionals because it relies on a LAMP architecture; the whole system and its services were developed on the Ubuntu operating system.

    Findings: Folksonomy-based services can be offered to users after proper configuration and the addition of these concepts. Web 2.0 services can be provided with the help of word clouds and tag clouds from the Koha ILS OPAC, and related terms and links can be accessed through the integrated framework. Folksonomy-based services are thus delivered using these techniques.

    Originality: The originality of this study lies in keyword visualization based on folksonomy services. The integrated framework builds on Web 2.0 concepts: word clouds and tag clouds can be generated in the ILS OPAC as visualizations, which is useful for library users and attractive to libraries adopting modern services and strategies. Overall, these services can be integrated with and generated from the OPAC to enhance modern information retrieval systems and services.
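
    As an illustration only (the article's actual HTML script is not reproduced here), the sketch below generates a simple tag-cloud HTML fragment from tag frequencies; the Koha OPAC search URL and styling are assumptions that would need to be adapted to a local installation.

```typescript
// Illustrative sketch: build a simple tag-cloud HTML fragment from tag
// frequencies. Font size grows with frequency, and each tag links to a Koha
// OPAC keyword search. The resulting snippet could be pasted into the OPAC
// main user block; the exact block and URL layout depend on the local Koha
// installation.

function tagCloudHtml(tagCounts: Record<string, number>): string {
  const counts = Object.values(tagCounts);
  const min = Math.min(...counts);
  const max = Math.max(...counts);

  const items = Object.entries(tagCounts).map(([tag, count]) => {
    // Scale font size between 12px and 32px according to relative frequency.
    const scale = max === min ? 1 : (count - min) / (max - min);
    const size = Math.round(12 + scale * 20);
    const href = `/cgi-bin/koha/opac-search.pl?q=${encodeURIComponent(tag)}`;
    return `<a href="${href}" style="font-size:${size}px">${tag}</a>`;
  });

  return `<div class="tag-cloud">${items.join(" ")}</div>`;
}

console.log(
  tagCloudHtml({ "semantic web": 14, folksonomy: 9, "open access": 3 })
);
```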

    YUMA – An AI Planning Agent for Composing IT Services from Infrastructure-as-Code Specifications

    Get PDF
    Infrastructure-as-code enables cloud architects to automate IT service delivery by specifying IT services through machine-readable definition files. To allow for reusability of infrastructure-as-code specifications, cloud architects specify IT services as compositions of sub-processes. As the AI planning agents for automated IT service composition proposed by prior research fall short in the infrastructure-as-code context, we design a search-based problem-solving agent named YUMA, following a design science research process, to fill this research gap. YUMA holds a search tree reflecting the state space and transition model; it includes an algorithm for building the search tree and two algorithms for determining the minimum composition plan. The underlying IT service composition problem is explicated for the infrastructure-as-code context and formulated as a search problem. The results of the demonstration and evaluation show that YUMA fulfills the requirements necessary to solve this problem and digitizes an important task of cloud architects.
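
    YUMA's own algorithms are not given in the abstract; the following sketch merely illustrates the idea of a minimum composition plan as a shortest path through a state space, using a plain breadth-first search over hypothetical sub-processes with preconditions and effects.

```typescript
// Hedged sketch of a minimum composition plan search (illustrative, not YUMA's
// actual algorithm): states are sets of provisioned capabilities, sub-processes
// have preconditions and effects, and a breadth-first search over the state
// space returns the shortest sequence of sub-processes reaching the goal.

interface SubProcess {
  name: string;
  preconditions: string[];
  effects: string[];
}

function minimumPlan(
  subProcesses: SubProcess[],
  initial: string[],
  goal: string[]
): string[] | null {
  const startKey = [...initial].sort().join(",");
  const queue: { state: Set<string>; plan: string[] }[] = [
    { state: new Set(initial), plan: [] },
  ];
  const visited = new Set<string>([startKey]);

  while (queue.length > 0) {
    const { state, plan } = queue.shift()!;
    if (goal.every(g => state.has(g))) return plan;

    for (const sp of subProcesses) {
      if (!sp.preconditions.every(p => state.has(p))) continue;
      const next = new Set(state);
      sp.effects.forEach(e => next.add(e));
      const key = [...next].sort().join(",");
      if (visited.has(key)) continue;
      visited.add(key);
      queue.push({ state: next, plan: [...plan, sp.name] });
    }
  }
  return null; // no composition satisfies the goal
}

const plan = minimumPlan(
  [
    { name: "provision-vm", preconditions: [], effects: ["vm"] },
    { name: "install-db", preconditions: ["vm"], effects: ["db"] },
  ],
  [],
  ["db"]
);
console.log(plan); // ["provision-vm", "install-db"]
```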

    Collecting and Visualizing Business Metrics in a Cloud-Based Development Environment

    Get PDF
    Monitoring cloud computing resources is a straightforward and common task for any cloud application developer. The problem with current monitoring solutions is that they focus only on infrastructure resources, while many companies also need data about the business side of their applications. This thesis extends current monitoring solutions to capture business metrics from within applications; the metrics are then visualized to allow better analysis of the data. The tool is composed of three main components. The metrics are captured with a Node.js library that is imported into the monitored application. The library sends the captured data to an InfluxDB time-series database. The data is visualized with Grafana, using tables, graphs, and gauges. A command-line tool creates a file that can be imported into Grafana to create a new dashboard with graphs. The requirements for the tool were derived from the needs of software developers and clients of the web and mobile development company Codemate. An architectural design was made based on the requirements and then implemented on the AWS cloud platform on top of Kubernetes. The implementation was evaluated by testing it on a real production server. The tool is functional and works as intended, and the results from the evaluation show that it can help companies gain better information about their products and their use. Future work includes adding metrics capture for other languages such as Go and Ruby, as well as integrating the tool more tightly with Codemate's new development environment. Further research is needed, especially on improving the performance of the solution in large systems.
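
    The thesis's Node.js library is not reproduced in the abstract. The sketch below only illustrates the general capture-and-write pattern, assuming an InfluxDB 1.x HTTP write endpoint and line protocol; the database name, measurement, and tags are placeholders.

```typescript
// Illustrative sketch of capturing a business metric from application code and
// writing it to InfluxDB over the 1.x HTTP API using line protocol. This is not
// the thesis's library; database name, measurement, and tags are placeholders.
// Uses the global fetch available in Node 18+.

const INFLUX_URL = "http://localhost:8086/write?db=business_metrics";

async function recordMetric(
  measurement: string,
  tags: Record<string, string>,
  value: number
): Promise<void> {
  const tagPart = Object.entries(tags)
    .map(([k, v]) => `${k}=${v}`)
    .join(",");
  // Line protocol: <measurement>,<tag_set> <field_set> (timestamp optional).
  const line = `${measurement},${tagPart} value=${value}`;

  const res = await fetch(INFLUX_URL, { method: "POST", body: line });
  if (!res.ok) {
    console.error(`InfluxDB write failed: ${res.status}`);
  }
}

// Example: record a completed signup so it can be graphed in Grafana.
recordMetric("signup_completed", { app: "web", plan: "trial" }, 1).catch(
  console.error
);
```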

    Harmonizing semantic annotations for computational models in biology

    Get PDF
    Life science researchers use computational models to articulate and test hypotheses about the behavior of biological systems. Semantic annotation is a critical component for enhancing the interoperability and reusability of such models as well as for the integration of the data needed for model parameterization and validation. Encoded as machine-readable links to knowledge resource terms, semantic annotations describe the computational or biological meaning of what models and data represent. These annotations help researchers find and repurpose models, accelerate model composition and enable knowledge integration across model repositories and experimental data stores. However, realizing the potential benefits of semantic annotation requires the development of model annotation standards that adhere to a community-based annotation protocol. Without such standards, tool developers must account for a variety of annotation formats and approaches, a situation that can become prohibitively cumbersome and which can defeat the purpose of linking model elements to controlled knowledge resource terms. Currently, no consensus protocol for semantic annotation exists among the larger biological modeling community. Here, we report on the landscape of current annotation practices among the COmputational Modeling in BIology NEtwork (COMBINE) community and provide a set of recommendations for building a consensus approach to semantic annotation.
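
    As a small, hypothetical illustration of what such machine-readable links look like, the sketch below models an annotation as a triple connecting a model element to a knowledge-resource term; the qualifier names and the identifiers.org URI follow common community conventions but are examples, not the paper's recommendations.

```typescript
// Illustrative sketch: a semantic annotation as a machine-readable link between
// a model element and a knowledge-resource term. The qualifier and URI follow
// common community conventions (e.g. identifiers.org URIs), but the concrete
// values here are just examples.

interface SemanticAnnotation {
  modelElement: string; // id of a variable/species in the model
  qualifier: "is" | "isVersionOf" | "isPartOf"; // relation to the term
  termUri: string;      // resolvable knowledge-resource term
}

const annotations: SemanticAnnotation[] = [
  {
    modelElement: "species_glucose",
    qualifier: "is",
    termUri: "http://identifiers.org/CHEBI:17234", // glucose in ChEBI
  },
];

// A harmonised annotation format lets tools answer questions like
// "which model elements represent the same biological entity?".
const glucoseElements = annotations
  .filter(a => a.termUri.endsWith("CHEBI:17234"))
  .map(a => a.modelElement);
console.log(glucoseElements); // ["species_glucose"]
```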

    Towards Run-time Flexibility for Process Families: Open Issues and Research Challenges

    Get PDF
    The increasing adoption of process-aware information systems and the high variability of business processes in practice have resulted in process model repositories with large collections of related process variants (i.e., process families). Existing approaches for variability management focus on the modeling and configuration of process variants. However, case studies have shown that run-time configuration and re-configuration, as well as the evolution of process variants, are essential too. Effectively handling process variants in these lifecycle phases requires deferring certain configuration decisions to run-time, dynamically re-configuring process variants in response to contextual changes, adapting process variants to emerging needs, and evolving process families over time. In this paper, we characterize these flexibility needs for process families, discuss fundamental challenges to be tackled, and provide an overview of existing proposals made in this context.
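
    As a hypothetical illustration of deferring a configuration decision to run-time, the sketch below leaves a variation point unresolved at design time and binds it to a concrete fragment only when the case context is known; the process fragments and selection rule are invented for the example.

```typescript
// Hedged sketch (not from the paper) of run-time configuration of a process
// variant: a variation point stays unresolved at design time and is bound to a
// concrete fragment only when the case context is known, which also allows
// re-configuration if the context changes mid-execution.

type Activity = string;

interface VariationPoint {
  name: string;
  // Candidate fragments; which one runs is decided at run-time.
  options: Record<string, Activity[]>;
  select: (context: Record<string, unknown>) => string;
}

const approvalPoint: VariationPoint = {
  name: "approval",
  options: {
    simple: ["auto-approve"],
    fourEyes: ["manager-review", "compliance-review"],
  },
  // Illustrative rule: high-value cases require the stricter variant.
  select: ctx => ((ctx.amount as number) > 10_000 ? "fourEyes" : "simple"),
};

function resolveAtRuntime(
  point: VariationPoint,
  context: Record<string, unknown>
): Activity[] {
  const chosen = point.select(context);
  console.log(`Variation point "${point.name}" bound to "${chosen}"`);
  return point.options[chosen];
}

// The same process family yields different variants for different cases.
console.log(resolveAtRuntime(approvalPoint, { amount: 500 }));    // ["auto-approve"]
console.log(resolveAtRuntime(approvalPoint, { amount: 25_000 })); // two review steps
```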

    Process Change Patterns: Recent Research, Use Cases, Research Directions

    Get PDF
    In previous work, we introduced change patterns to foster a systematic comparison of process-aware information systems with respect to change support. This paper revisits change patterns and shows how our research activities have evolved. Further, it presents characteristic use cases and gives insights into current research directions