
    A High-level Petri Net Based Approach for Modeling and Composition of Web Services

    Web services are modular, self-describing, self-contained and loosely coupled applications that communicate by exchanging messages. The evolution of the Internet and the emergence of new technologies such as e-business have driven their adoption, and Web services have become popular. The composition of Web services is a topic that attracts the interest of researchers: it makes complex problems tractable by letting simple existing Web services cooperate with each other. However, modeling tools and formal techniques are required to accomplish this task. In this paper, we show how simple existing Web services can be composed in order to create a composite service that offers new features. In this context, we propose an expressive object-oriented Petri net based algebra that supports the complex composition of Web services.
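
    The paper's algebra itself is not reproduced here; purely as an illustration of the underlying idea, the following Python sketch models a service as a tiny Petri net with one input and one output place and composes two services sequentially by fusing the first service's output place with the second service's input place. All class, method and service names are hypothetical.

        # Minimal illustration (hypothetical names): a Web service modelled as a Petri net
        # with a single input place and a single output place, plus a sequence operator
        # that fuses one service's output place with the next service's input place.

        class ServiceNet:
            def __init__(self, name, places, transitions, start, end):
                self.name = name
                self.places = set(places)             # place names
                self.transitions = dict(transitions)  # transition -> (input places, output places)
                self.start, self.end = start, end     # interface places of the service

            def enabled(self, marking, t):
                ins, _ = self.transitions[t]
                return all(marking.get(p, 0) > 0 for p in ins)

            def fire(self, marking, t):
                ins, outs = self.transitions[t]
                m = dict(marking)
                for p in ins:
                    m[p] -= 1
                for p in outs:
                    m[p] = m.get(p, 0) + 1
                return m

        def sequence(a, b):
            """Sequential composition: b starts where a finishes (a.end fused with b.start)."""
            rename = {b.start: a.end}
            places = a.places | {rename.get(p, p) for p in b.places}
            transitions = dict(a.transitions)
            for t, (ins, outs) in b.transitions.items():
                transitions[f"{b.name}.{t}"] = ([rename.get(p, p) for p in ins],
                                                [rename.get(p, p) for p in outs])
            return ServiceNet(f"{a.name};{b.name}", places, transitions, a.start, b.end)

        # Two toy services: "search flights" followed by "book flight".
        search = ServiceNet("search", ["i1", "o1"], {"doSearch": (["i1"], ["o1"])}, "i1", "o1")
        book = ServiceNet("book", ["i2", "o2"], {"doBook": (["i2"], ["o2"])}, "i2", "o2")
        composite = sequence(search, book)

        marking = {composite.start: 1}
        for t in ["doSearch", "book.doBook"]:
            assert composite.enabled(marking, t)
            marking = composite.fire(marking, t)
        print(marking)  # the token ends in the composite's output place: {'i1': 0, 'o1': 0, 'o2': 1}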

    Semantic Network Analysis of Ontologies

    A key argument for modeling knowledge in ontologies is the easy re-use and re-engineering of the knowledge. However, current ontology engineering tools provide only basic functionalities for analyzing ontologies. Since ontologies can be considered as graphs, graph analysis techniques are a suitable answer to this need. Graph analysis has been performed by sociologists for over 60 years and has given rise to the vibrant research area of Social Network Analysis (SNA). While social network structures currently receive high attention in the Semantic Web community, there are only very few SNA applications, and virtually none for analyzing the structure of ontologies. We illustrate the benefits of applying SNA to ontologies and the Semantic Web, and discuss which research topics arise on the edge between the two areas. In particular, we discuss how different notions of centrality describe the core content and structure of an ontology. From the rather simple notion of degree centrality, through betweenness centrality, to the more complex eigenvector centrality, we illustrate the insights these measures provide on two ontologies that differ in purpose, scope, and size.
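
    As a concrete illustration of the centrality measures mentioned above, the snippet below computes degree, betweenness and eigenvector centrality with networkx on a small invented concept graph; it is only a sketch of the kind of analysis described, and the graph is not one of the two ontologies studied in the paper.

        # Toy illustration of the three centrality notions discussed above, using networkx.
        # The "ontology" here is an invented concept graph, not one of the paper's case studies.
        import networkx as nx

        g = nx.Graph()
        g.add_edges_from([
            ("Thing", "Agent"), ("Thing", "Document"),
            ("Agent", "Person"), ("Agent", "Organization"),
            ("Person", "Author"), ("Document", "Publication"),
            ("Publication", "Author"),
        ])

        degree = nx.degree_centrality(g)            # fraction of concepts a node is directly linked to
        betweenness = nx.betweenness_centrality(g)  # how often a node lies on shortest paths
        eigenvector = nx.eigenvector_centrality(g)  # importance weighted by the neighbours' importance

        for name, scores in [("degree", degree), ("betweenness", betweenness),
                             ("eigenvector", eigenvector)]:
            top = max(scores, key=scores.get)
            print(f"{name:>11}: most central concept is {top!r}")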

    A Perl toolkit for LIMS development

    BACKGROUND: High throughput laboratory techniques generate huge quantities of scientific data. Laboratory Information Management Systems (LIMS) are a necessary requirement, dealing with sample tracking, data storage and data reporting. Commercial LIMS solutions are available, but these can be both costly and overly complex for the task. The development of bespoke LIMS solutions offers a number of advantages, including the flexibility to fulfil all of a laboratory's requirements at a fraction of the price of a commercial system. The programming language Perl is a perfect development solution for LIMS applications because of its powerful but simple-to-use database and web interaction; it is also well known for enabling rapid application development and deployment, and boasts a very active and helpful developer community. The development of an in-house LIMS from scratch, however, can take considerable time and resources, so programming tools that enable the rapid development of LIMS applications are essential, yet there are currently no LIMS development tools for Perl. RESULTS: We have developed ArrayPipeline, a Perl toolkit providing object oriented methods that facilitate the rapid development of bespoke LIMS applications. The toolkit includes Perl objects that encapsulate key components of a LIMS, providing methods for creating interactive web pages, interacting with databases, error tracking and reporting, and user and session management. The MT_Plate object provides methods for the manipulation and management of microtitre plates, while a given LIMS can be encapsulated by extension of the core modules, providing system-specific methods for database interaction and web page management. CONCLUSION: This important addition to the Perl developer's library will make the development of in-house LIMS applications quicker and easier, encouraging laboratories to create bespoke LIMS applications to meet their specific data management requirements.
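
    The toolkit and its MT_Plate object are written in Perl and their API is not reproduced here. Purely as a sketch of the kind of abstraction such a plate object provides, the following Python snippet models a 96-well microtitre plate with basic sample tracking; every name in it is hypothetical and unrelated to the toolkit's actual interface.

        # Hypothetical illustration of a microtitre-plate abstraction similar in spirit to
        # the MT_Plate component described above; this is not the Perl toolkit's API.
        from string import ascii_uppercase

        class MicrotitrePlate:
            def __init__(self, barcode, rows=8, cols=12):  # default 96-well layout (A1..H12)
                self.barcode = barcode
                self.wells = {f"{ascii_uppercase[r]}{c + 1}": None
                              for r in range(rows) for c in range(cols)}

            def load(self, well, sample_id):
                """Track a sample in a well, refusing unknown or already occupied wells."""
                if self.wells.get(well, "missing") is not None:
                    raise ValueError(f"well {well} is occupied or does not exist")
                self.wells[well] = sample_id

            def occupied(self):
                return {w: s for w, s in self.wells.items() if s is not None}

        plate = MicrotitrePlate("PLT-0001")
        plate.load("A1", "sample-42")
        plate.load("B7", "sample-43")
        print(plate.occupied())  # {'A1': 'sample-42', 'B7': 'sample-43'}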

    Scalable hosting of web applications

    Modern Web sites have evolved from simple monolithic systems to complex multi-tiered systems. In contrast to traditional Web sites, these sites do not simply deliver pre-written content but dynamically generate content using (one or more) multi-tiered Web applications. In this thesis, we address the question: how can multi-tiered Web applications be hosted in a scalable manner? Scaling up a Web application requires scaling its individual tiers. To this end, various research works have proposed techniques that employ replication or caching solutions at different tiers. However, most of these techniques aim to optimize the performance of individual tiers and not the entire application. A key observation made in our research is that no single technique performs best for all Web applications. Effective hosting of a Web application requires careful selection and deployment of several techniques at different tiers. To this end, we present several caching and replication strategies, such as GlobeCBC, GlobeDB and GlobeTP, to improve the scalability of different tiers of a Web application. While these techniques and systems improve the performance of the individual tiers (and eventually the application), an application's administrator is interested not only in the performance of its individual tiers but also in its end-to-end performance. To this end, we propose a resource provisioning approach that allows us to choose the best resource configuration for hosting a Web application such that its end-to-end response time is optimized with minimum usage of resources. The proposed approach is based on an analytical model for multi-tier systems, which allows us to derive expressions for estimating the mean end-to-end response time and its variance.
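
    The analytical model itself is developed in the thesis and is not reproduced here; to give a rough sense of how such end-to-end estimates work, the sketch below treats each tier as an independent M/M/1 queue and sums the per-tier mean response times. This is a standard textbook approximation used only for illustration, and the tier capacities in the example are invented.

        # Back-of-the-envelope illustration (not the thesis's model): each tier as an
        # M/M/1 queue, so the mean end-to-end response time is the sum of 1 / (mu_i - lambda).

        def end_to_end_response_time(arrival_rate, service_rates):
            """arrival_rate: requests/s reaching each tier; service_rates: per-tier capacities."""
            total = 0.0
            for mu in service_rates:
                if arrival_rate >= mu:
                    raise ValueError("tier saturated: arrival rate exceeds its service rate")
                total += 1.0 / (mu - arrival_rate)
            return total

        # Web server, application server and database tiers (requests/s, illustrative values).
        print(end_to_end_response_time(arrival_rate=50, service_rates=[200, 120, 80]))
        # 1/150 + 1/70 + 1/30 ≈ 0.054 s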

    Formal patterns for Web-based systems design

    The ubiquitous and simple interface of Web browsers has opened the door for the development of a new class of distributed applications that have become known as Web applications. As more and more systems become Web-enabled, we become increasingly dependent on Web applications. Therefore, the reliability of such systems is a crucial factor for the successful operation of many modern organisations and institutes. In the first part of this thesis we review how Web systems have evolved from simple static pages, in their early days, to their current situation as distributed applications with sophisticated functionalities. We also examine how design methods have evolved to align with the rapid changes both in the new emerging technologies and in growing functionalities. Although design approaches for Web applications have improved during the last decade, we conclude that dependability should be given more consideration. In Chapter 2 we explain how this could be achieved through the application of formal methods, and we provide an overview of dependability and formal methods. In the second part of this research we follow a practical approach to the formal modelling of Web applications. Accordingly, in Chapter 3 we develop a series of formal models for an integrated holiday booking system. Our main objectives are to gain some common knowledge of the domain and to identify some key areas and features with regard to our formal modelling approach. Formal modelling of large Web applications can be a very complex process. In Chapter 4 we introduce the idea of formal patterns for specification and refinement to accelerate the modelling process and to help alleviate the burden of formal modelling. In a further attempt to tackle the complexity of the formal modelling of Web applications, we introduce the idea of specification partitioning in Chapter 5. Specification partitioning is closely related to the notion of composition. In this chapter we extend some CSP-like composition techniques to build the system specification from subsystems or parts. The summary of our research, related findings and some suggestions for future work are presented in Chapter 6.

    A review of the state of the art in Machine Learning on the Semantic Web: Technical Report CSTR-05-003


    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool for performing data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather a large amount of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential of cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques originally designed to work in a given domain in other domains.

    A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures

    This deliverable is a survey of the IT techniques that are relevant to the three use cases of the project EMILI. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks, and complex event processing. Even though the deliverable's authors have tried to avoid overly technical language and have tried to explain every concept referred to, the deliverable might still seem rather technical to readers who are as yet little familiar with the techniques it describes.