
    ONTOLOGY-BASED INFORMATION EXTRACTION FOR ANALYZING IT SERVICES

    Service Level Agreements (SLAs) for multi-service Information Technology (IT) outsourcing contracts contain vast amounts of textual information. Each SLA provides details about a specific service, the Key Performance Indicators (KPIs) used to measure its performance, and process elements, such as activities, events, and resources, that are integral to achieving performance goals. These KPIs and process elements may be interrelated, and knowledge of such interrelationships is often only tacitly present in the SLAs. The aim of our research is to extract this hidden information from IT service contracts and analyze it to empower customers of IT services to make better performance-management and incentive decisions. We apply an Ontology-Based Information Extraction (OBIE) approach in developing a prototype decision support framework, named SLA-Miner. The results, obtained from analyzing a set of industry SLAs, demonstrate the utility of SLA-Miner in identifying KPI interrelationships and deficiencies, and the impacts of various process elements on individual KPIs.
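As an illustration of the extraction idea only (the abstract does not describe SLA-Miner's internals), a minimal sketch of ontology-driven extraction might pair a small concept lexicon with sentence-level co-occurrence of KPI mentions. Every concept name, surface form, and SLA sentence below is invented for illustration:

```python
import re
from itertools import combinations

# Hypothetical mini-ontology of KPI concepts and their surface forms;
# SLA-Miner's actual ontology is far richer -- this only sketches the
# OBIE idea of mapping text spans to ontology concepts.
KPI_ONTOLOGY = {
    "Availability": ["availability", "uptime"],
    "ResponseTime": ["response time", "latency"],
    "ResolutionTime": ["resolution time", "time to resolve"],
}

def extract_kpis(sentence):
    """Return the set of ontology concepts whose surface forms appear."""
    lowered = sentence.lower()
    return {concept for concept, forms in KPI_ONTOLOGY.items()
            if any(form in lowered for form in forms)}

def kpi_cooccurrences(text):
    """Pairs of KPIs mentioned in the same sentence -- a crude proxy
    for the tacit interrelationships the abstract describes."""
    pairs = set()
    for sentence in re.split(r"[.;]\s*", text):
        kpis = extract_kpis(sentence)
        pairs.update(combinations(sorted(kpis), 2))
    return pairs

sla = ("Service uptime and response time are reported monthly. "
       "Resolution time is measured per incident.")
print(kpi_cooccurrences(sla))  # {('Availability', 'ResponseTime')}
```

A real OBIE pipeline would add linguistic preprocessing and ontology reasoning on top of this lexical matching; the sketch only shows the concept-grounding step.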

    Model driven design and data integration in semantic web information systems

    The Web is quickly evolving in many ways. It has evolved from a Web of documents into a Web of applications in which a growing number of designers offer new and interactive Web applications to people all over the world. However, application design and implementation remain complex, error-prone and laborious. In parallel there is also an evolution from a Web of documents into a Web of `knowledge', as a growing number of data owners are sharing their data sources with a growing audience. This brings the potential for new applications for these data sources, including scenarios in which these datasets are reused and integrated with other existing and new data sources. However, the heterogeneity of these data sources in syntax, semantics and structure represents a great challenge for application designers. The Semantic Web is a collection of standards and technologies that offers solutions for at least the syntactic and some of the structural issues. It offers semantic freedom and flexibility, but this leaves the issue of semantic interoperability. In this thesis we present Hera-S, an evolution of the Model Driven Web Engineering (MDWE) method Hera. MDWEs allow designers to create data-centric applications using models instead of programming. Hera-S especially targets Semantic Web sources and provides a flexible method for designing personalized adaptive Web applications. Hera-S defines several models that together define the target Web application. Moreover, we implemented a framework called Hydragen, which is able to execute the Hera-S models to run the desired Web application. Hera-S' core is the Application Model (AM), in which the main logic of the application is defined, i.e. the groups of data elements that form logical units or subunits, the personalization conditions, and the relationships between the units. Hera-S also uses a so-called Domain Model (DM) that describes the content and its structure. 
However, this DM is not Hera-S specific; instead, any Semantic Web source representation can serve as the DM, as long as its content can be queried with the standardized Semantic Web query language SPARQL. The same holds for the User Model (UM). The UM can be used for personalization conditions, but also as a source of user-related content if necessary. In fact, the difference between DM and UM is conceptual, as their implementation within Hydragen is the same. Hera-S also defines a Presentation Model (PM), which defines presentation details of elements such as order and style. In order to help designers with building their Web applications we have introduced a toolset, Hera Studio, which allows designers to build the different models graphically. Hera Studio also provides some additional functionality, such as model checking and deployment of the models in Hydragen. Both Hera-S and its implementation Hydragen are designed to be flexible regarding the use of models. To achieve this, Hydragen is a stateless engine that queries the models for relevant information at every page request. This allows the models and data to be changed in the datastore at runtime. We show that one way to exploit this flexibility is by applying aspect-orientation to the AM. Aspect-orientation allows us to dynamically inject functionality that pervades the entire application. Another way to exploit Hera-S' flexibility is in reusing specialized components, e.g. for presentation generation. We present a configuration of Hydragen in which we replace our native presentation generation functionality with the AMACONT engine. AMACONT provides more extensive multi-level presentation generation and adaptation capabilities, as well as aspect-orientation and a form of semantics-based adaptation. Hera-S was designed to allow the (re-)use of any (Semantic) Web data source. It even opens up the possibility of data integration at the back end, by using an extensible storage layer in our database of choice, Sesame. 
However, even though data integration is theoretically possible in this way, much of the actual integration work remains. As this is a recurring issue in many domains, and a broader challenge than Hera-S design alone, we decided to examine it in isolation. We present a framework called Relco, which provides a language to express data transformation operations as well as a collection of techniques that can be used to (semi-)automatically find relationships between concepts in different ontologies. This is done with a combination of syntactic, semantic and collaboration techniques, which together provide strong clues as to which concepts are most likely related. In order to prove the applicability of Relco we explore five application scenarios in different domains for which data integration is a central aspect. This includes a cultural heritage portal, Explorer, for which data from several data sources was integrated and made available via a map view, a timeline and a graph view. Explorer also allows users to provide metadata for objects via a tagging mechanism. Another application is SenSee: an electronic TV-guide and recommender. TV-guide data was integrated and enriched with semantically structured data from several sources. Recommendations are computed by exploiting the underlying semantic structure. ViTa was a project in which several techniques for tagging and searching educational videos were evaluated. This includes scenarios in which user tags are related to an ontology, or to other tags, using the Relco framework. The MobiLife project targeted the facilitation of a new generation of mobile applications that would use context-based personalization. This can be done using a context-based user profiling platform that can also be used for user model data exchange between mobile applications using technologies like Relco. 
The final application scenario comes from the GRAPPLE project, which targeted the integration of adaptive technology into current learning management systems. A large part of this integration is achieved by using a user modeling component framework in which any application can store user model information, but which can also be used for the exchange of user model data.
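The stateless query-per-request design described above rests on one core operation: matching triple patterns against an RDF graph, which Hydragen does with SPARQL against Sesame. The stdlib-only sketch below mimics that operation with an in-memory list of triples and `None` playing the role of a SPARQL variable; all data and identifiers are invented:

```python
# A toy RDF-like graph; a real Hydragen deployment would query a Sesame
# store with SPARQL instead of this Python list.
triples = [
    ("ex:alice", "rdf:type", "ex:User"),
    ("ex:alice", "ex:prefersGenre", "ex:Jazz"),
    ("ex:track1", "ex:genre", "ex:Jazz"),
    ("ex:track2", "ex:genre", "ex:Rock"),
]

def match(pattern, graph):
    """Match a triple pattern against the graph; None acts as a variable."""
    return [t for t in graph
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Roughly equivalent to:
#   SELECT ?track WHERE { ex:alice ex:prefersGenre ?g . ?track ex:genre ?g . }
genres = {o for _, _, o in match(("ex:alice", "ex:prefersGenre", None), triples)}
tracks = [s for s, _, o in match((None, "ex:genre", None), triples)
          if o in genres]
print(tracks)  # ['ex:track1']
```

Because every page request re-runs such queries against the current store, edits to the DM, UM, or content take effect immediately; that is the flexibility the abstract attributes to Hydragen's statelessness.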

    Fifth Conference on Artificial Intelligence for Space Applications

    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/database integration.

    Design and Implementation Strategies for IMS Learning Design

    SIKS Dissertation Series No. 2008-27. The IMS Learning Design (LD) specification, which was released in February 2003, is a generic and flexible language for describing learning practice and underlying learning designs using a formal, computer-interpretable notation. It is based on a pedagogical meta-model (Koper & Manderveld, 2004) and supports the use of a wide range of pedagogies. It supports adaptation of individual learning routes and orchestrates interactions between users in various learning and support roles. A formalized learning design can be applied repeatedly in similar situations with different persons and contexts. Yet because IMS Learning Design is a fairly complex and elaborate specification, it can be difficult to grasp; furthermore, designing and implementing a runtime environment for the specification is far from straightforward. That IMS Learning Design makes use of other specifications and e-learning services adds further to this complexity for both its users and the software developers. For this new specification to succeed, therefore, a reference runtime implementation was needed. To this end, this thesis addresses two research and development issues. First, it investigates the research into and development of a reusable reference runtime environment for IMS Learning Design. The resulting runtime, called CopperCore, provides a reference both for users of the specification and for software developers. The latter can reuse the design principles presented in this thesis for their own implementations, or reuse the CopperCore product through the interfaces provided. Second, this thesis addresses the integration of other specifications and e-learning services at runtime. It presents an architecture and implementation (CopperCore Service Integration) which provides an extensible, lightweight solution to the problem. 
Both developments have been tested through real-world use in projects carried out by the IMS Learning Design community. The results have generally been positive, and have led us to conclude that we successfully addressed both the research and development issues. However, the results also indicate that the LD tooling lacks maturity, particularly in the authoring area. Through close integration of CopperCore with a product called the Personal Competence Manager, we demonstrate that a complementary approach to authoring in IMS Learning Design solves some of these issues.

    CodeTF: One-stop Transformer Library for State-of-the-art Code LLM

    Code intelligence plays a key role in transforming modern software engineering. Recently, deep learning-based models, especially Transformer-based large language models (LLMs), have demonstrated remarkable potential in tackling code intelligence tasks by leveraging massive open-source code data and programming language features. However, the development and deployment of such models often require expertise in both machine learning and software engineering, creating a barrier to model adoption. In this paper, we present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence. Following the principles of modular design and an extensible framework, we design CodeTF with a unified interface to enable rapid access and development across different types of models, datasets and tasks. Our library supports a collection of pretrained Code LLM models and popular code benchmarks, including a standardized interface to train and serve code LLMs efficiently, and data features such as language-specific parsers and utility functions for extracting code attributes. We describe the design principles, the architecture, and the key modules and components, and compare CodeTF with other related libraries. Finally, we hope CodeTF is able to bridge the gap between machine learning/generative AI and software engineering, providing a comprehensive open-source solution for developers, researchers, and practitioners. Comment: Ongoing work - Draft Preview
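The "language-specific parsers and utility functions for extracting code attributes" mentioned above can be illustrated with a small sketch. This is not CodeTF's actual API (CodeTF builds on tree-sitter parsers for many languages); it shows the same idea for Python source only, using the standard-library `ast` module:

```python
import ast

# Sample source to analyze; the function and class names are invented.
source = '''
def add(a, b):
    """Return the sum of a and b."""
    return a + b

class Greeter:
    def greet(self, name):
        return "hello " + name
'''

def extract_attributes(code):
    """Extract simple code attributes: function names, class names,
    and docstrings -- the kind of metadata a code-intelligence
    pipeline feeds to downstream models."""
    tree = ast.parse(code)
    attrs = {"functions": [], "classes": [], "docstrings": []}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            attrs["functions"].append(node.name)
            doc = ast.get_docstring(node)
            if doc:
                attrs["docstrings"].append(doc)
        elif isinstance(node, ast.ClassDef):
            attrs["classes"].append(node.name)
    return attrs

print(extract_attributes(source))
```

A multi-language library generalizes this by swapping in a parser per language behind one interface, which is the unified-interface principle the abstract describes.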

    Mathematical modeling of evolutionary changes of oligonucleotide frequency patterns of bacterial genomes for genome-scale phylogenetic inferences

    With the advancement of next-generation sequencing, modern phylogenetic studies can benefit from the analysis of complete genome sequences of various microorganisms. Evolutionary inferences based on genome-scale analysis are believed to be more accurate than gene-based ones. However, the computational complexity of current phylogenomic procedures, and the lack of reliable annotation- and alignment-free evolutionary models, keep microbiologists from wider use of these opportunities. For example, the super-matrix approach of phylogenomics requires identification of clusters of orthologous genes in the compared genomes, followed by alignment of numerous sequences, before proceeding with reconciliation of multiple trees inferred by traditional phylogenetic tools. In fact, the approach potentially multiplies the problems of gene annotation and sequence alignment, not to mention the computational difficulty and laboriousness of the methods. In this research, we found that an alignment- and annotation-free method based on comparison of oligonucleotide usage patterns (OUP) calculated for genome-scale DNA sequences allowed fast inference of phylogenetic trees. The resulting trees were also congruent with the corresponding whole-genome supermatrix trees in terms of tree topology and branch lengths. Validation and benchmarking tests for OUP phylogenomics were performed based on comparisons to the current literature and to artificially created sequences with known phylogeny. It was demonstrated that the OUP diversification between taxa was driven by global adjustments of codon usage to fit fluctuating tRNA concentrations, which were well aligned with species evolution. A web-based program to perform OUP-based phylogenomics was released at http://swphylo.bi.up.ac.za/. Applicability of the tool was proven for different taxa, from the species to the family level. Distinguishing between closely related taxonomic units can be reinforced by providing the program with alignments of marker protein sequences, e.g. gyrA.
Thesis (PhD)--University of Pretoria, 2018. Biochemistry. PhD. Unrestricted.
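The core OUP computation can be sketched in a few lines: an oligonucleotide usage pattern is a normalized k-mer frequency vector, and genomes are compared by a distance between their vectors. The thesis works on genome-scale sequences with a more elaborate evolutionary model; the value of k, the toy sequences, and the choice of Euclidean distance below are illustrative assumptions only:

```python
import math
from itertools import product

K = 3  # illustrative k-mer length; real analyses often use tetranucleotides
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def oup(seq):
    """Oligonucleotide usage pattern: normalized k-mer frequency vector."""
    counts = {kmer: 0 for kmer in KMERS}
    for i in range(len(seq) - K + 1):
        kmer = seq[i:i + K]
        if kmer in counts:  # skip windows with ambiguous bases
            counts[kmer] += 1
    total = max(1, sum(counts.values()))
    return [counts[k] / total for k in KMERS]

def oup_distance(a, b):
    """Euclidean distance between two usage patterns."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(oup(a), oup(b))))

# Toy "genomes"; real inputs are complete genome sequences.
genome_a = "ATGCGATACGATTACAGCATGCGT" * 4
genome_b = "ATGCGATACGGTTACAGCATGCGT" * 4
print(round(oup_distance(genome_a, genome_b), 4))
```

A pairwise distance matrix over such vectors can then feed a standard distance-based tree builder (e.g. neighbor joining), which is what makes the approach alignment- and annotation-free.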

    Next generation software environments : principles, problems, and research directions

    The past decade has seen a burgeoning of research and development in software environments. Conferences have been devoted to the topic of practical environments, journal papers produced, and commercial systems sold. Given all this activity, one might expect a great deal of consensus on issues, approaches, and techniques. This is not the case, however. Indeed, the term "environment" is still used in a variety of conflicting ways. Nevertheless, substantial progress has been made, and we are at least nearing consensus on many critical issues. The purpose of this paper is to characterize environments, describe several important principles that have emerged in the last decade or so, note current open problems, and describe some approaches to these problems, with particular emphasis on the activities of one large-scale research program, the Arcadia project. Consideration is also given to two related topics: empirical evaluation and technology transition. That is, how can environments and their constituents be evaluated, and how can new developments be moved effectively into the production sector?

    A Case for FRBR and Semantic Wikis in Enterprise Information Environments

    Wikis allow users to collaboratively create and maintain content. As a new platform for wiki sites, semantic wikis provide additional means to annotate the content to add structure. Semantic wiki sites are experiencing an enormous increase in popularity because structured data is more usable, and thus more valuable, than unstructured data. This study proposes the use of a semantic wiki to develop a web portal to collect and organize the information maintained in an enterprise's content management system. The ontology selected to support the conceptual infrastructure of the portal is based on a scheme inspired by an IFLA proposition known as Functional Requirements for Bibliographic Records (FRBR). The paper introduces a model for the specification of mappings between FRBR entities and the organization's information artifacts. The information system stakeholders (process owners, document creators, SMEs (Subject Matter Experts), support representatives, end users, etc.) are also defined and interrelated using simple standard ontologies and vocabularies such as Friend-of-a-Friend (FOAF), RDF, and OWL. The FRBR ontology and the web portal are developed using the platform provided by Semantic MediaWiki, an extension of MediaWiki, the platform behind Wikipedia. The main features for standard-vocabulary integration and semantic queries and searches are summarized. Finally, a number of metrics and indicators are presented as a reference for the portal's project managers and sponsors to evaluate the performance and effectiveness of the tool and to help them make decisions about future initiatives, enhancements, and the implementation of new information requirements.
Master of Science in Information Science
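The FRBR mapping idea can be sketched with plain data structures: a Work (the abstract procedure) is realized by Expressions (versions/languages), which are embodied in Manifestations (concrete files in the CMS), with a FOAF-style person linked as creator. The real portal expresses this with Semantic MediaWiki annotations and RDF/OWL vocabularies; every identifier, title, and path below is invented:

```python
# FRBR-inspired entities for one enterprise document (all values invented).
work = {"id": "W1", "type": "frbr:Work",
        "title": "Incident Response Procedure"}
expression = {"id": "E1", "type": "frbr:Expression",
              "realizes": "W1", "language": "en", "version": "2.1"}
manifestation = {"id": "M1", "type": "frbr:Manifestation",
                 "embodies": "E1", "format": "application/pdf",
                 "source": "cms://procedures/incident-response-v2.1.pdf"}
creator = {"id": "P1", "type": "foaf:Person",
           "name": "Jane Doe", "role": "document creator",
           "created": "W1"}

def artifacts_for_work(work_id, expressions, manifestations):
    """Follow realizes/embodies links from a Work down to its CMS files --
    the kind of traversal a semantic query on the portal would perform."""
    expr_ids = {e["id"] for e in expressions if e["realizes"] == work_id}
    return [m for m in manifestations if m["embodies"] in expr_ids]

print(artifacts_for_work("W1", [expression], [manifestation]))
```

Keeping the Work/Expression/Manifestation levels distinct is what lets the portal answer questions like "which files embody the current version of this procedure?" regardless of how the CMS names or stores them.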