15 research outputs found

    Competence Discovery and Composition

    The capture, structuring, and exploitation of the competences of an "object" (such as a business partner, an employee, a software component, or a Web service) are crucial problems in various applications, such as cooperative and distributed applications or e-business applications. The work described here concerns competence advertising, organization, discovery, and composition. One original aspect of the proposal is the nature of the answers the intended system can return when searching for individuals with given competences: answers may be composite, in the sense that when no single object meets the search criteria, the system attempts to find a set of objects that, when pooled together, satisfies the whole set of criteria. Conceptual Graphs (CGs) are used as the knowledge representation formalism, and operations on graphs are used as the search mechanism. A client/server prototype, organized as a federation of mediators, has been developed as a proof of concept.
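    Purely as an illustration of the composite-answer idea described above, the sketch below assembles a set of providers whose pooled competences cover a request when no single provider does. The greedy set-cover approach, provider names, and competence labels are assumptions for the example; they do not reproduce the paper's Conceptual Graph operations.

```python
# Minimal sketch of the composite-answer idea: when no single object covers all
# requested competences, greedily assemble a set of objects whose pooled
# competences do. Names and data are illustrative only; the paper itself encodes
# competences as Conceptual Graphs and matches them with graph operations, which
# this toy set-based version does not attempt to reproduce.

def find_providers(required, profiles):
    """Return one object, or a composite set of objects, covering `required`."""
    # Exact case: a single object already satisfies every required competence.
    for name, skills in profiles.items():
        if required <= skills:
            return {name}
    # Composite case: greedy set cover, picking the object that adds the most
    # still-missing competences at each step.
    missing, chosen = set(required), set()
    while missing:
        best = max(profiles, key=lambda n: len(profiles[n] & missing), default=None)
        if best is None or not profiles[best] & missing:
            return None  # the pool cannot satisfy the request
        chosen.add(best)
        missing -= profiles[best]
    return chosen

profiles = {
    "partner_A": {"java", "web_services"},
    "partner_B": {"conceptual_graphs", "knowledge_modeling"},
    "partner_C": {"java", "databases"},
}
print(find_providers({"java", "conceptual_graphs"}, profiles))
# -> a composite answer such as {'partner_A', 'partner_B'}
```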

    Personal Smart Assistant for Digital Media and Advertisement

    The expansion of cyberspace and the enormous progress in computing and software applications have enabled technology to cover every aspect of our lives; consequently, many of our goals are now technology driven, and the need for intelligent assistance in achieving them has increased. However, for this assistance to be beneficial, it should be targeted to users based on their needs and preferences. Intelligent software agents have been recognized as a promising approach for the development of user-centric, personalized applications. In this thesis, a generic personal smart assistant agent is proposed that provides relevant assistance to the user based on a model of his or her interests and behaviours. The main focus of this work is on developing a user behaviour model that captures the deliberative and reactive behaviours of the user in open environments. Furthermore, a prototype is built that uses the personal assistant for personalized advertisement applications, where the assistant attempts to recommend the right advertisement to the right person at the right time.
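    The sketch below is a toy illustration of the "right advertisement to the right person at the right time" idea; the profile fields, weights, and advertisements are invented for the example and stand in for the thesis's much richer deliberative/reactive behaviour model.

```python
# Illustrative sketch only: a toy scorer combining a user's interest weights with
# a simple time-of-day signal. All fields and values below are invented.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    interests: dict = field(default_factory=dict)   # topic -> weight in [0, 1]
    active_hours: range = range(18, 23)             # hours the user usually browses

@dataclass
class Advertisement:
    topic: str
    hour: int  # hour of day the ad would be shown

def score(ad: Advertisement, user: UserProfile) -> float:
    interest = user.interests.get(ad.topic, 0.0)
    timeliness = 1.0 if ad.hour in user.active_hours else 0.3
    return interest * timeliness

user = UserProfile(interests={"sports": 0.9, "travel": 0.4})
ads = [Advertisement("sports", 20), Advertisement("travel", 9), Advertisement("finance", 20)]
best = max(ads, key=lambda a: score(a, user))
print(best.topic)  # -> "sports"
```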

    A journey through Austronesian and Papuan linguistic and cultural space: papers in honour of Andrew K. Pawley


    Typing the Dancing Signifier: Jim Andrews' (Vis)Poetics

    This study focuses on the work of Jim Andrews, whose electronic poems take advantage of a variety of media, authoring programs, programming languages, and file formats to create poetic experiences worthy of study. Much can be learned about electronic textuality and poetry by following the trajectory of a poet and programmer whose fascination with language in programmable media leads him to distinctive poetic explorations and collaborations. This study offers a detailed exploration of Andrews' poetry, motivations, inspirations, and poetics, while telling a piece of the story of the rise of electronic poetry from the mid-1980s to the present. Electronic poetry can be defined as first-generation electronic objects that can only be read with a computer: they cannot be printed out or read aloud without negating that which makes them "native" to the digital environment in which they were created, exist, and are experienced. If translated to different media, they would lose the extra-textual elements that I describe in this study as behavior. These "behaviors" that electronic texts exhibit are programmed instructions that cause the text to be still, move, react to user input, change, act on a schedule, or include a sound component. The conversation between the growing capabilities of computers and networks and Andrews' poetry is the most extensive part of the study, examining three areas in which he develops his poetry: visual poetry (from static to kinetic), sound poetry (from static to responsive), and code poetry (from objects to applications). In addition to being a literary biography, the close readings of Andrews' poems are media-specific analyses that demonstrate how the software and programming languages used shape the creative and production performances in significant ways. This study makes available new materials for those interested in the textual materiality of Andrews' videogame poem, Arteroids, by publishing the Arteroids Development Folder, a collection of source files, drafts, and old versions of the poem. This collection is of great value to those who wish to inform readings of the work, study the source code and its programming architecture, and even produce a critical edition of the work.

    Semantic Service Description Framework for Efficient Service Discovery and Composition

    Web services have been widely adopted as a new distributed system technology by industry in areas such as enterprise application integration, business process management, and virtual organisations. However, the lack of semantics in current Web services standards has been a major barrier to further improvement of service discovery and composition. Over the last decade, Semantic Web Services have become an important research topic aimed at enriching the semantics of Web services, with the key objective of achieving automatic or semi-automatic Web service discovery, invocation, and composition. Several semantic Web service description frameworks exist, such as OWL-S, WSDL-S, and WSMF. However, these frameworks have several issues that make service discovery and composition less efficient than it should be: insufficient service usage context information, the need for precisely specified requirements to locate services, a lack of information about inter-service relationships, and insufficient or incomplete information handling. To address these problems, a context-based semantic service description framework is proposed in this thesis. This framework captures not only the capabilities of Web services but also their usage context information, which we consider an important factor in efficient service discovery and composition. Based on this framework, an enhanced service discovery mechanism is proposed. It gives service users more flexibility to search for services in more natural ways rather than only by the technical specifications of the required services, and it demonstrates how the features provided by the framework facilitate the service discovery and composition processes. Together with the framework, a transformation method is provided to transform existing service descriptions into descriptions based on the new framework. The framework is evaluated through a scenario-based analysis in comparison with OWL-S and a prototype-based performance evaluation in terms of query response time, precision and recall, and system scalability.
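    As a rough illustration of why usage context can help discovery, the following sketch ranks candidate services by how well a user's request overlaps both their capabilities and their usage-context terms. The service entries, term sets, and weighting are assumptions for the example, not the framework's actual description model.

```python
# Sketch of context-aware service discovery: services are described by technical
# capabilities plus usage-context terms, and a request expressed in the user's own
# words is matched against both. All entries and weights below are invented.

services = {
    "CurrencyConverter": {
        "capabilities": {"convert_currency"},
        "context": {"e-commerce", "checkout", "pricing"},
    },
    "WeatherForecast": {
        "capabilities": {"get_forecast"},
        "context": {"travel", "planning", "outdoor_events"},
    },
}

def discover(request_terms, services):
    """Rank services by overlap of the request with capabilities and usage context."""
    ranked = []
    for name, desc in services.items():
        cap_hits = len(request_terms & desc["capabilities"])
        ctx_hits = len(request_terms & desc["context"])
        score = 2 * cap_hits + ctx_hits   # capabilities weighted above context
        if score:
            ranked.append((score, name))
    return [name for _, name in sorted(ranked, reverse=True)]

# A user describing a need in their own terms, not by a technical interface:
print(discover({"checkout", "pricing"}, services))  # -> ['CurrencyConverter']
```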

    BNAIC 2008:Proceedings of BNAIC 2008, the twentieth Belgian-Dutch Artificial Intelligence Conference


    Recherche d'information sémantique et extraction automatique d'ontologie du domaine

    It can prove difficult, even for a small organization, to find information among hundreds or even thousands of electronic documents. Most often, the techniques employed by search engines on the Internet are used by companies wanting to improve information retrieval on their intranet. These techniques rest on statistical methods and do not make it possible to take into account the semantics contained in the user's query or in the documents. Some approaches have been developed to extract this semantics and thus better answer user queries. However, most of these techniques were designed to be applied to the Web as a whole and not to a particular field of knowledge, such as corporate data. It could be interesting to use a domain-specific ontology to link a query to related documents and thus better answer it. This thesis presents our approach, which uses the Text-To-Onto software to automatically create an ontology describing a particular domain. This ontology is then used by the Sesei software, a semantic filter for conventional search engines. This method makes it possible to improve the relevance of the documents returned to the user.
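    A minimal sketch of the general idea of a semantic filter layered on a conventional search engine follows, assuming a toy ontology and document list; it does not reproduce Text-To-Onto or Sesei.

```python
# Sketch of the "semantic filter on top of a keyword search engine" idea: expand
# the query with related concepts from a domain ontology, then re-rank the
# engine's results. The ontology, documents, and expansion step are invented.

# Toy domain ontology: concept -> directly related concepts.
ontology = {
    "invoice": {"billing", "payment"},
    "payment": {"invoice", "transaction"},
}

def expand(query_terms, ontology):
    """Add ontology neighbours of each query term to the query."""
    expanded = set(query_terms)
    for term in query_terms:
        expanded |= ontology.get(term, set())
    return expanded

def semantic_rerank(query_terms, results, ontology):
    """Re-rank documents returned by a keyword engine using the expanded query."""
    expanded = expand(query_terms, ontology)
    return sorted(results, key=lambda doc: len(expanded & set(doc.split())), reverse=True)

results = ["quarterly billing report", "holiday photos", "late payment reminder"]
print(semantic_rerank({"invoice"}, results, ontology))
# -> billing and payment documents ranked above the unrelated one
```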

    Ontology-based Approximate Query Processing for Searching the Semantic Web with Corese

    The Semantic Web relies on ontologies that represent domains through their main concepts and the relations between them. Such domain knowledge is the keystone for representing the semantic content of web resources and services in the metadata associated with them. These metadata then enable us to search for information based on the semantics of web resources rather than their syntactic form. However, in the context of the Semantic Web there are many ways of formulating queries that would not retrieve any resource: the viewpoints of the designers of the ontologies, of the designers of the annotations, and of the users performing a Web search may not completely match. The user may not completely share or understand the viewpoints of the designers, and this mismatch may lead to missed answers. Approximate query processing is therefore of prime importance for efficiently searching the Semantic Web. In this paper we present Corese, the ontology-based search engine we have developed to handle RDF(S) and OWL Lite metadata. We present its theoretical foundation and its query language, and we stress its ability to process approximate queries.
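    The following sketch illustrates one flavour of ontology-based approximate query processing: relaxing a query concept along a class hierarchy and ranking answers by their distance from it. The hierarchy, annotations, and distance measure are toy assumptions, not Corese's actual RDF(S)/OWL Lite machinery or query language.

```python
# Hedged sketch of approximate query answering: if an exact query over the
# annotations finds nothing, climb the concept hierarchy and accept semantically
# close answers. All data and the distance measure below are invented.

hierarchy = {                      # concept -> direct superclass
    "PhDStudent": "Student",
    "MasterStudent": "Student",
    "Student": "Person",
}

annotations = {                    # resource -> concept it is annotated with
    "http://example.org/alice": "MasterStudent",
    "http://example.org/bob": "Professor",
}

def ancestors(concept, hierarchy):
    chain = [concept]
    while concept in hierarchy:
        concept = hierarchy[concept]
        chain.append(concept)
    return chain

def approximate_search(target, annotations, hierarchy):
    """Return resources ranked by distance between their concept and the target."""
    wanted = ancestors(target, hierarchy)
    hits = []
    for resource, concept in annotations.items():
        shared = set(wanted) & set(ancestors(concept, hierarchy))
        if shared:
            # distance: how far the target is from the closest shared ancestor
            distance = min(wanted.index(c) for c in shared)
            hits.append((distance, resource))
    return [r for _, r in sorted(hits)]

# An exact query for PhDStudent matches nothing, but Alice (a MasterStudent,
# a sibling concept under Student) is returned as an approximate answer.
print(approximate_search("PhDStudent", annotations, hierarchy))
```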

    Querying the Semantic Web with Corese Search Engine

    This paper presents an ontology-based approach for web querying using semantic metadata. We propose a query language based on ontologies and emphasize its ability to express approximate queries, which are useful for efficient information retrieval on the Web. We present the Corese search engine, dedicated to RDF(S) metadata, and illustrate it through several real-world applications.
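    Corese itself is a Java engine with its own query language; purely as a stand-in, the sketch below uses the Python rdflib library to show the kind of query over RDF(S) metadata that this approach targets. The namespace, the annotated resources, and the class hierarchy are invented for the example.

```python
# Not Corese: a small rdflib example of RDF(S) metadata and a query over it,
# with an invented namespace and toy annotations.
from rdflib import Graph, Namespace, RDF, RDFS, URIRef

EX = Namespace("http://example.org/")
g = Graph()

# A tiny RDFS ontology plus metadata annotating two web resources.
g.add((EX.Report, RDFS.subClassOf, EX.Document))
g.add((URIRef("http://example.org/page1"), RDF.type, EX.Report))
g.add((URIRef("http://example.org/page2"), RDF.type, EX.Document))

# With RDFS inference a query for Document instances would also return page1;
# plain SPARQL over the raw triples returns only page2.
query = """
SELECT ?resource WHERE { ?resource a <http://example.org/Document> . }
"""
for row in g.query(query):
    print(row.resource)
```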

    Ontology Engineering: a Survey and a Return on Experience

    An ontology is a relatively new object of AI that has recently come to maturity and a powerful conceptual tool for knowledge modeling. It provides a coherent base to build on and a shared reference to align with, in the form of a consensual conceptual vocabulary on which one can build descriptions and communication acts. This report presents the object called "an ontology" and a state of the art of ontology engineering techniques. It then describes a project for which we developed an ontology and used it to improve knowledge management. Finally, it describes the design process and discusses the resulting ontology.