Continuous Improvement Through Knowledge-Guided Analysis in Experience Feedback
Continuous improvement of industrial processes is increasingly a key element of competitiveness for industrial systems. The management of experience feedback in this framework is designed to build, analyze and facilitate knowledge sharing among an organization's problem-solving practitioners in order to improve processes and products. During Problem Solving Processes, the intellectual investment of experts is often considerable and the opportunities for exploiting expert knowledge are numerous: decision making, problem solving under uncertainty, and expert configuration. In this paper, our contribution concerns the structuring of a cognitive experience feedback framework that allows a flexible exploitation of expert knowledge during Problem Solving Processes and the reuse of the experience thus collected. To that purpose, the proposed approach uses the general principles of root cause analysis for identifying the root causes of problems or events, the conceptual graphs formalism for the semantic conceptualization of the domain vocabulary, and the Transferable Belief Model for the fusion of information from different sources. The underlying formal reasoning mechanisms (logic-based semantics) in conceptual graphs enable intelligent information retrieval for the effective exploitation of lessons learned from past projects. An example illustrates the application of the proposed approach to formalizing experience feedback processes in the transport industry sector.
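The fusion step named above rests on the Transferable Belief Model. A minimal sketch of TBM's unnormalised conjunctive combination rule, assuming mass functions are represented as dicts keyed by frozensets of hypotheses (the example expert assessments are invented):

```python
from itertools import product

def tbm_combine(m1, m2):
    """Unnormalised (conjunctive) combination of two mass functions, as
    in the Transferable Belief Model. Masses are dicts mapping frozensets
    of hypotheses to belief mass; mass left on frozenset() is the
    conflict assigned to the empty set."""
    combined = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b  # agreement between the two sources
        combined[inter] = combined.get(inter, 0.0) + wa * wb
    return combined

# Two experts assessing root causes {wear, misuse} of a failure:
m1 = {frozenset({"wear"}): 0.6, frozenset({"wear", "misuse"}): 0.4}
m2 = {frozenset({"misuse"}): 0.3, frozenset({"wear", "misuse"}): 0.7}
print(tbm_combine(m1, m2))
# mass on frozenset() (0.18 here) quantifies conflict between the experts
```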
Multi modal multi-semantic image retrieval
The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users' ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image information retrieval techniques have been adopted in order to extract knowledge from these images and enhance retrieval performance. A KB framework is presented to promote semi-automatic annotation and semantic image retrieval using multimodal cues (visual features and text captions). In addition, a hierarchical structure for the KB allows metadata to be shared and supports multiple semantics (polysemy) for concepts. The framework builds up an effective knowledge base pertaining to a domain-specific image collection, e.g. sports, and is able to disambiguate and assign high-level semantics to 'unannotated' images.
Local feature analysis of visual content, namely using Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the 'Bag of Visual Words' (BVW) model as an effective method to represent visual content information and to enhance its classification and retrieval. Local features are more useful than global features, e.g. colour, shape or texture, as they are invariant to image scale, orientation and camera angle. An innovative approach is proposed for the representation, annotation and retrieval of visual content using a hybrid technique based upon the use of unstructured visual words and a (structured) hierarchical ontology KB model. The structural model facilitates the disambiguation of unstructured visual words and a more effective classification of visual content, compared to a vector space model, through exploiting local conceptual structures and their relationships. The key contributions of this framework in using local features for image representation include: first, a method to generate visual words using the semantic local adaptive clustering (SLAC) algorithm, which takes term weight and the spatial locations of keypoints into account, so that semantic information is preserved. Second, a technique is used to detect the domain-specific 'non-informative visual words' which are ineffective at representing the content of visual data and degrade its categorisation ability. Third, a method to combine an ontology model with a visual word model, to resolve synonym (visual heterogeneity) and polysemy problems, is proposed. The experimental results show that this approach can discover semantically meaningful visual content descriptions and recognise specific events, e.g. sports events, depicted in images efficiently.
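As a point of reference for the visual-word pipeline just described, here is a minimal bag-of-visual-words baseline using SIFT descriptors and plain k-means; the thesis's SLAC algorithm replaces this clustering with one that also weights terms and keypoint locations, which this sketch does not attempt:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(image_paths, k=100):
    """Baseline bag-of-visual-words: SIFT descriptors quantised with
    plain k-means, each image then encoded as a histogram over the
    k visual words."""
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:          # skip images with no keypoints
            per_image.append(desc)
    codebook = KMeans(n_clusters=k, n_init=10).fit(np.vstack(per_image))
    hists = []
    for desc in per_image:
        words = codebook.predict(desc)            # assign visual words
        h = np.bincount(words, minlength=k).astype(float)
        hists.append(h / h.sum())                 # normalised histogram
    return np.array(hists)
```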
Since discovering the semantics of an image is an extremely challenging problem, one promising approach to enhancing visual content interpretation is to use any associated textual information that accompanies an image as a cue to predict its meaning, by transforming this textual information into a structured annotation for the image, e.g. using XML, RDF, OWL or MPEG-7. Although text and image are distinct types of information representation and modality, there are some strong, invariant, implicit connections between images and any accompanying text. Semantic analysis of image captions can be used by image retrieval systems to retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is first exploited to extract concepts from image captions. Next, an ontology-based knowledge model is deployed to resolve natural language ambiguities. To deal with the accompanying textual information, two methods to extract knowledge from it have been proposed. First, metadata can be extracted automatically from text captions and restructured with respect to a semantic model. Second, the use of Latent Semantic Indexing (LSI) in relation to a domain-specific ontology-based knowledge model enables the combined framework to tolerate ambiguities and variations (incompleteness) in metadata. The ontology-based knowledge model allows the system to find indirectly relevant concepts in image captions and thus leverage these to represent the semantics of images at a higher level. Experimental results show that the proposed framework significantly enhances image retrieval and narrows the semantic gap between lower-level machine-derived and higher-level human-understandable conceptualisation.
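To illustrate the LSI step on captions, a minimal sketch using tf-idf plus truncated SVD (the captions and query are invented; the thesis couples this with a domain ontology, which is omitted here):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

captions = [
    "Federer wins the tennis final at Wimbledon",
    "A tennis player returns a serve on grass",
    "Cyclists climb the final ascent of the Tour",
]

# LSI = truncated SVD over a tf-idf term-document matrix; nearby
# directions in the reduced space capture latent topical similarity,
# which tolerates vocabulary variation across captions.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(captions)
lsi = TruncatedSVD(n_components=2).fit(X)
topics = lsi.transform(X)

query = tfidf.transform(["tennis match on a grass court"])
print(cosine_similarity(lsi.transform(query), topics))
# the two tennis captions should score higher than the cycling one
```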
Semantic metrics
In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse and evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a variety of research disciplines, and enrich them with semantics based on standard Description Logic constructs. We argue that concept-based metrics can be aggregated to produce numeric distances at the ontology level, and we speculate on the usability of our ideas in potential application areas.
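For concreteness, one classic concept-level metric of the kind such a survey covers is Wu-Palmer similarity over an is-a taxonomy; a small sketch (the toy taxonomy is invented):

```python
import networkx as nx

# Tiny is-a taxonomy; edges point from child concept to parent.
T = nx.DiGraph([
    ("cat", "mammal"), ("dog", "mammal"),
    ("mammal", "animal"), ("bird", "animal"), ("animal", "entity"),
])

def depth(c):
    return nx.shortest_path_length(T, c, "entity")

def wu_palmer(c1, c2):
    """Wu-Palmer similarity: 2*depth(lcs) / (depth(c1) + depth(c2)),
    a classic edge-counting metric of the kind reviewed before adding
    Description Logic semantics on top."""
    ancestors1 = set(nx.descendants(T, c1)) | {c1}
    ancestors2 = set(nx.descendants(T, c2)) | {c2}
    lcs = max(ancestors1 & ancestors2, key=depth)  # deepest shared ancestor
    return 2 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("cat", "dog"))   # 0.67: share 'mammal'
print(wu_palmer("cat", "bird"))  # 0.4: only share 'animal'
```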
Ontology-based specific and exhaustive user profiles for constraint information fusion for multi-agents
Intelligent agents are an advanced technology utilized in Web Intelligence. When searching for information in a distributed Web environment, information is retrieved by multiple agents on the client site and fused on the broker site. Current information fusion techniques rely on the cooperation of agents to provide statistics. Such techniques are computationally expensive and unrealistic in the real world. In this paper, we introduce a model that uses a world ontology constructed from the Dewey Decimal Classification to acquire user profiles. By searching with specific and exhaustive user profiles, information fusion techniques no longer rely on the statistics provided by agents. The model has been successfully evaluated using the large INEX data set simulating the distributed Web environment.
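A hedged sketch of profile acquisition in this spirit: documents a user has read vote for their subject classes, and the weight decays up a Dewey-style hierarchy so the profile is both specific and exhaustive (the hierarchy fragment and decay factor are invented, not the paper's parameters):

```python
from collections import Counter

# Hypothetical fragment of a Dewey-style subject hierarchy: each topic
# maps to its parent class, so profile mass can propagate upwards.
PARENT = {"796.342 Tennis": "796 Sports", "796 Sports": "790 Recreation"}

def build_profile(doc_subjects):
    """Acquire a user profile as weighted ontology subjects from the
    classes of documents the user has read; ancestor classes receive
    partial weight so the profile is exhaustive, not just specific."""
    profile = Counter()
    for subject in doc_subjects:
        weight = 1.0
        while subject is not None:
            profile[subject] += weight
            weight *= 0.5  # decay as we generalise up the hierarchy
            subject = PARENT.get(subject)
    return profile

print(build_profile(["796.342 Tennis", "796.342 Tennis", "796 Sports"]))
```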
Using TREC for cross-comparison between classic IR and ontology-based search models at a Web scale
The construction of standard datasets and benchmarks to evaluate ontology-based search approaches and to compare them against baseline IR models is a major open problem in the semantic technologies community. In this paper we propose a novel evaluation benchmark for ontology-based IR models based on an adaptation of the well-known Cranfield paradigm (Cleverdon, 1967) traditionally used by the IR community. The proposed benchmark comprises: 1) a text document collection, 2) a set of queries and their corresponding document relevance judgments, and 3) a set of ontologies and knowledge bases covering the query topics. The document collection and the set of queries and judgments are taken from one of the most widely used datasets in the IR community, the TREC Web track. As a use case example, we apply the proposed benchmark to compare a real ontology-based search model (Fernandez et al., 2008) against the best IR systems of the TREC 9 and TREC 2001 competitions. A deep analysis of the strengths and weaknesses of this benchmark and a discussion of how it can be used to evaluate other ontology-based search systems are also included at the end of the paper.
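The benchmark's relevance judgments plug into standard Cranfield-style metrics; for example, average precision per query (MAP is its mean over queries), sketched below with invented document ids:

```python
def average_precision(ranked_ids, relevant_ids):
    """AP for one query given TREC-style relevance judgments (qrels);
    the mean over queries gives MAP, a usual headline figure when
    comparing an ontology-based model with IR baselines."""
    hits, score = 0, 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            score += hits / rank  # precision at this relevant document
    return score / len(relevant_ids) if relevant_ids else 0.0

# One query: a system's ranking vs. the judged-relevant set
print(average_precision(["d3", "d1", "d7", "d2"], {"d1", "d2"}))
# = (1/2 + 2/4) / 2 = 0.5
```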
A specification and discovery environment for software component reuse in distributed software development
Our work aims to develop an effective solution for the discovery and reuse of software components in existing and commonly used development environments. We propose an ontology for describing and discovering atomic software components. The description covers both the functional and the non-functional properties of the components, the latter expressed as QoS parameters. Our search process is based on a function that calculates the semantic distance between the signature of a component interface and the signature of a given query, thus achieving an appropriate comparison. We also use the notion of "subsumption" to compare the inputs/outputs of the query and of the components. After selecting the appropriate components, the non-functional properties are used as a distinguishing factor to refine the search result. We propose an approach, based on the shared ontology, for discovering composite components when no atomic component is found. To integrate the resulting component into the project under development, we developed an integration ontology and two services, "input/output convertor" and "output Matching".
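A minimal sketch of input/output matching by subsumption, assuming a precomputed supertype table (the types and table are invented; the thesis works over an ontology rather than a flat table):

```python
# Hypothetical subsumption check over a small type taxonomy: a component
# matches a query if each query input is subsumed by (assignable to) a
# component input, and each component output by a query output.
SUPERTYPES = {"int": {"number"}, "float": {"number"}, "number": set()}

def subsumes(general, specific):
    return general == specific or general in SUPERTYPES.get(specific, set())

def signature_match(query_in, query_out, comp_in, comp_out):
    ins_ok = all(any(subsumes(ci, qi) for ci in comp_in) for qi in query_in)
    outs_ok = all(any(subsumes(qo, co) for qo in query_out) for co in comp_out)
    return ins_ok and outs_ok

# A component consuming 'number' can serve a query supplying 'int':
print(signature_match(["int"], ["number"], ["number"], ["float"]))  # True
```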
Managing corporate memory on the semantic web
Corporate memory (CM) is the total body of data, information and knowledge required to deliver the strategic aims and objectives of an organization. In the current market, the rapidly increasing volume of unstructured documents in enterprises has brought the challenge of building an autonomic framework to acquire, represent, learn and maintain CM, and to reason efficiently from it to aid knowledge discovery and reuse. The concept of the semantic web is being introduced in enterprises to structure information in a machine-readable way and enhance the understandability of disparate information. Given the continuing popularity of the semantic web, this paper develops a framework for CM management on the semantic web. The proposed approach gleans information from documents, converts it into semantic web resources using the Resource Description Framework (RDF) and RDF Schema, and then identifies relations among them using the latent semantic analysis technique. The efficacy of the proposed approach is demonstrated through empirical experiments conducted on two case studies. © 2014 Springer Science+Business Media New York
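A minimal sketch of the two-step pipeline, assuming rdflib for the RDF representation and tf-idf plus truncated SVD standing in for the latent semantic analysis (the namespace, documents and texts are invented):

```python
from rdflib import Graph, Literal, Namespace, RDF
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

EX = Namespace("http://example.org/cm/")  # hypothetical namespace

# Step 1: represent gleaned document facts as RDF triples.
g = Graph()
g.add((EX.doc1, RDF.type, EX.Report))
g.add((EX.doc1, EX.text, Literal("quarterly sales strategy review")))
g.add((EX.doc2, RDF.type, EX.Report))
g.add((EX.doc2, EX.text, Literal("sales targets and strategy memo")))

# Step 2: latent semantic analysis over the document texts to surface
# relations (here, topical relatedness) between the RDF resources.
docs = [(s, str(o)) for s, _, o in g.triples((None, EX.text, None))]
X = TfidfVectorizer().fit_transform(text for _, text in docs)
Z = TruncatedSVD(n_components=1).fit_transform(X)
for (subj, _), z in zip(docs, Z):
    print(subj, z)  # nearby coordinates suggest an implicit relation
```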
- …