7 research outputs found
Initiating organizational memories using ontology network analysis
One of the important problems in organizational memories is their initial set-up. It is difficult to choose the right information to include in an organizational memory, yet the right information is a prerequisite for maximizing the uptake and relevance of the memory content. To tackle this problem, most developers adopt heavy-weight solutions and rely on faithful, continuous interaction with users to create and improve the content. In this paper, we explore an automatic, light-weight solution drawn from one of the underlying ingredients of an organizational memory: ontologies. We have developed an ontology-based network analysis method, which we applied to the problem of identifying communities of practice in an organization. We use ontology-based network analysis as a means to provide content automatically for the initial set-up of an organizational memory.
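The idea above can be sketched with a toy example: link people who relate to the same ontology instances and treat the resulting groups as candidate communities of practice. The triples, names, and the use of connected components (standing in for a full modularity-based community detection) are purely illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict, deque

# Hypothetical ontology instances: (person, relation, entity) triples,
# e.g. extracted from an organizational ontology. Values are invented.
triples = [
    ("alice", "worksOn", "projectX"), ("bob", "worksOn", "projectX"),
    ("carol", "worksOn", "projectY"), ("dave", "worksOn", "projectY"),
    ("alice", "authorOf", "paper1"), ("bob", "authorOf", "paper1"),
]

# Link two people whenever they relate to the same entity.
entity_people = defaultdict(set)
for person, _, entity in triples:
    entity_people[entity].add(person)

adj = defaultdict(set)
for group in entity_people.values():
    for a in group:
        for b in group:
            if a != b:
                adj[a].add(b)

def communities(adj):
    """Connected components of the co-occurrence graph, as a stand-in
    for communities of practice."""
    seen, result = set(), []
    for node in sorted(adj):
        if node in seen:
            continue
        comp, queue = set(), deque([node])
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        result.append(sorted(comp))
    return result

print(communities(adj))
```

A real system would weight edges by the number and type of shared ontology relations rather than treating all co-occurrences equally.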
A semantic autonomous video surveillance system for dense camera networks in smart cities
This paper presents a proposal for an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is based on the parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services through the Smart City safety network.
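The kind of alarm logic described can be sketched with plain rules over trajectory parameters; the object fields, zone names, thresholds, and alarm templates below are invented for illustration (the actual system reasons over an ontology rather than hard-coded rules):

```python
from dataclasses import dataclass

# Hypothetical trajectory summary sent by a camera node; field names
# and values are assumptions for this sketch.
@dataclass
class TrackedObject:
    object_id: int
    zone: str        # semantic label of the scene region
    speed: float     # metres per second
    direction: str   # coarse heading

RULES = [
    # (condition, human-readable enriched-alarm template)
    (lambda o: o.zone == "pedestrian_area" and o.speed > 8.0,
     "Fast-moving object {id} in pedestrian area: possible vehicle intrusion"),
    (lambda o: o.zone == "roadway" and o.speed < 0.2,
     "Stationary object {id} on roadway: possible stopped vehicle or obstacle"),
]

def detect_alarms(obj):
    """Return enriched, human-readable alarms for one tracked object."""
    return [tpl.format(id=obj.object_id) for cond, tpl in RULES if cond(obj)]

print(detect_alarms(TrackedObject(7, "pedestrian_area", 12.5, "north")))
```

Because only object parameters (not video frames) are needed to evaluate such rules, the cameras can keep video local, which is what makes dense deployments feasible.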
Hybrid fuzzy multi-objective particle swarm optimization for taxonomy extraction
Ontology learning refers to the automatic extraction of an ontology to produce the ontology learning layer cake, which consists of five kinds of output: terms, concepts, taxonomy relations, non-taxonomy relations and axioms. Term extraction is a prerequisite for all aspects of ontology learning; it is the automatic mining of complete terms from the input document. Another important part of an ontology is its taxonomy, or hierarchy of concepts, which presents a tree view of the ontology and shows the inheritance between subconcepts and superconcepts. In this research, two methods were proposed for improving the performance of the extraction results. The first method uses particle swarm optimization to optimize the weights of features. The advantage of particle swarm optimization is that it can calculate and adjust the weight of each feature to an appropriate value; here it is used to improve the performance of term and taxonomy extraction. The second method is a hybrid technique combining multi-objective particle swarm optimization with fuzzy systems, ensuring that the membership functions and fuzzy rule sets are optimized. The advantage of using a fuzzy system is that imprecise and uncertain feature-weight values can be tolerated during the extraction process. This method is used to improve the performance of taxonomy extraction. In the term extraction experiment, five features were extracted for each term in the document, represented by feature vectors consisting of domain relevance, domain consensus, term cohesion, first occurrence and length of noun phrase. For taxonomy extraction, matches of Hearst lexico-syntactic patterns in documents and on the web, together with hypernym information from WordNet, were used as the features representing each pair of terms from the texts. The two proposed methods are evaluated using a dataset of documents about tourism.
For term extraction, the proposed method is compared with benchmark algorithms such as Term Frequency-Inverse Document Frequency, Weirdness, Glossary Extraction and Term Extractor, using precision as the evaluation measure. For taxonomy extraction, the proposed methods are compared with the benchmark feature-based method and weighting by Support Vector Machine, using f-measure, precision and recall as the evaluation measures. For the first method, the experiments showed that using particle swarm optimization to optimize the feature weights in term and taxonomy extraction improves the accuracy of the extraction results compared to the benchmark algorithms. For the second method, the results showed that the hybrid technique combining multi-objective particle swarm optimization and fuzzy systems improves the performance of taxonomy extraction compared to the benchmark methods, while adjusting the fuzzy membership functions and keeping the number of fuzzy rules to a minimum with a high degree of accuracy.
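The first method's core idea, using particle swarm optimization to tune feature weights for classifying candidate terms, can be sketched as follows. The toy data (five feature values per candidate, mirroring the five features named above), the score threshold, and all PSO parameters are illustrative assumptions, not the thesis's actual configuration:

```python
import random

random.seed(0)  # deterministic for reproducibility

# Toy candidates: five feature values (domain relevance, domain consensus,
# term cohesion, first occurrence, noun-phrase length) plus a label saying
# whether the candidate is a true domain term. Values are invented.
data = [
    ([0.9, 0.8, 0.7, 0.9, 0.5], 1),
    ([0.8, 0.9, 0.6, 0.8, 0.6], 1),
    ([0.2, 0.1, 0.3, 0.2, 0.4], 0),
    ([0.1, 0.2, 0.2, 0.1, 0.8], 0),
]

def fitness(w):
    """Fraction of candidates classified correctly by the weighted score."""
    correct = 0
    for feats, label in data:
        score = sum(wi * fi for wi, fi in zip(w, feats))
        correct += int((score > 1.0) == bool(label))
    return correct / len(data)

def pso(dim=5, particles=10, iters=30):
    """Plain single-objective PSO over the feature-weight vector."""
    swarm = [[random.random() for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    gbest = max(pbest, key=fitness)
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive + social terms (standard PSO update)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - p[d])
                             + 1.5 * r2 * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if fitness(p) > fitness(pbest[i]):
                pbest[i] = p[:]
        gbest = max(pbest, key=fitness)
    return gbest

best = pso()
print(fitness(best))
```

The second method would replace this scalar fitness with multiple objectives and let the swarm tune fuzzy membership functions instead of raw weights; that machinery is omitted here.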
HOLMES: A Hybrid Ontology-Learning Materials Engineering System
Designing and discovering novel materials is a challenging problem in many domains, such as fuel additives, composites, and pharmaceuticals. At the core of all this are models that capture how the different domain-specific data, information, and knowledge regarding the structures and properties of the materials are related to one another. This dissertation explores the difficult task of developing an artificial intelligence-based knowledge modeling environment, called the Hybrid Ontology-Learning Materials Engineering System (HOLMES), that can assist humans in populating a materials science and engineering ontology through automatic information extraction from journal article abstracts. While what we propose may be adapted to generic materials engineering applications, our focus in this thesis is on the needs of the pharmaceutical industry. We develop the Columbia Ontology for Pharmaceutical Engineering (COPE), a modification of the Purdue Ontology for Pharmaceutical Engineering. COPE serves as the basis for HOLMES.
The HOLMES framework starts with journal articles that are in the Portable Document Format (PDF) and ends with the assignment of the entries in the journal articles into ontologies. While this might seem to be a simple task of information extraction, to fully extract the information such that the ontology is filled as completely and correctly as possible is not easy when considering a fully developed ontology.
In the development of the information extraction tasks, we note new problems that have not arisen in previous information extraction work in the literature. The first is the necessity of extracting auxiliary information in the form of concepts such as actions, ideas, problem specifications, and properties. The second is the existence of multiple labels for a single token, due to the presence of the aforementioned concepts. These two problems are the focus of this dissertation.
In this work, the HOLMES framework is presented as a whole, describing our successful progress as well as unsolved problems that might help future research on this topic. The ontology is then presented to help identify the relevant information that needs to be retrieved. The annotations are next developed to create the data sets necessary for the machine learning algorithms. Then, the current level of information extraction for these concepts is explored and expanded through the introduction of entity feature sets based on previously extracted entities from the entity recognition task. Finally, the new task of handling multiple labels for tagging a single entity is explored using multiple-label algorithms employed primarily in image processing.
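The multiple-labels-per-token problem can be sketched as a one-vs-rest tagging scheme in which every label decision is made independently, so a token can carry several labels at once. The label names, keyword rules, and example tokens below are invented stand-ins for the trained classifiers in the dissertation:

```python
# Toy multi-label tagging: each token may receive several ontology labels.
# One binary decision per label (one-vs-rest); simple keyword rules stand
# in for trained classifiers. All names and rules are illustrative.
LABEL_RULES = {
    "Action": lambda tok: tok.endswith("ing"),
    "UnitOperation": lambda tok: tok in {"drying", "milling", "granulation"},
    "Material": lambda tok: tok in {"lactose", "ibuprofen"},
}

def tag(tokens):
    """Assign every applicable label to each token (possibly several)."""
    return {tok: sorted(lbl for lbl, rule in LABEL_RULES.items() if rule(tok))
            for tok in tokens}

print(tag(["drying", "lactose", "granulation"]))
```

Note that "drying" receives two labels at once, which a conventional single-label tagger cannot express; that is precisely the situation the dissertation addresses with multiple-label algorithms.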
Template-driven information extraction for populating ontologies
We address the integration of information extraction (IE) and ontologies: in particular, using an ontology to aid the IE process, and using the IE results to help populate the ontology. We perform IE by means of domain-specific templates and the lightweight use of Natural Language Processing (NLP) techniques. Our main goal is to learn information from texts through templates and in this way alleviate the main bottleneck in creating knowledge-based systems: the extraction of knowledge. Our domain of study is "KMi Planet", a Web-based news server for the communication of stories among members of our institute. The main goals of our system are to classify an incoming story, obtain the relevant objects within the story, deduce the relationships between them, and populate the ontology. Furthermore, we aim to do this with minimal help from the user.
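Template-driven extraction of this kind can be sketched with a single slot-filling template; the template pattern, the story text, and the ontology class names are invented for illustration and are not the actual KMi Planet templates:

```python
import re

# Hypothetical template for a "visit" story; a real system would combine
# many templates with NLP techniques for robustness.
VISIT_TEMPLATE = re.compile(
    r"(?P<visitor>[A-Z][\w. ]+?) from (?P<org>[A-Z][\w ]+?) "
    r"visited (?P<host>[A-Z][\w ]+?) on (?P<date>\d{4}-\d{2}-\d{2})"
)

def extract(story):
    """Match the template and return ontology instances, or None."""
    m = VISIT_TEMPLATE.search(story)
    if not m:
        return None
    slots = m.groupdict()
    # Populate a toy ontology: class name -> instances.
    return {
        "Person": [slots["visitor"]],
        "Organization": [slots["org"], slots["host"]],
        "VisitEvent": [{"visitor": slots["visitor"], "host": slots["host"],
                        "date": slots["date"]}],
    }

story = "Dr Smith from Open University visited KMi on 2023-05-04."
print(extract(story))
```

Classifying the incoming story first (visit, award, publication, ...) determines which template set to apply, which is what keeps the per-template patterns simple.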
Semantic characterization of spaces: an intelligent video surveillance system in Smart Cities
This doctoral thesis, carried out within the European project HuSIMS (Human Situation Monitoring System), presents an intelligent methodology for the characterization of scenarios capable of detecting and identifying anomalous situations by analyzing the movement of objects. The system is designed to minimize video processing and transmission, allowing the deployment of a large number of cameras and sensors, and is therefore suitable for Smart Cities. A three-stage approach is proposed. First, moving objects are detected on the cameras themselves, using simple algorithms and avoiding the transmission of video data. Second, a model of the scene zones is built from the previously identified motion parameters. Third, semantic reasoning is performed over the route model and the object parameters of the current scene to identify alarms by recognizing the nature of the events.
Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática
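The second and third stages can be sketched together: build a per-zone motion model from observed parameters, then flag objects that deviate from it. The grid cells, speed values, and deviation factor are illustrative assumptions; the thesis performs semantic reasoning over routes rather than a simple statistical check:

```python
from collections import defaultdict
from statistics import mean

# Stage 2 sketch: per-zone motion model from observed object parameters
# (here just speed per grid cell; all values are invented).
observations = [  # (grid_cell, speed in m/s)
    ((0, 0), 1.2), ((0, 0), 1.4), ((0, 0), 1.1),     # sidewalk-like cell
    ((1, 0), 13.0), ((1, 0), 12.5), ((1, 0), 14.0),  # road-like cell
]

zone_model = defaultdict(list)
for cell, speed in observations:
    zone_model[cell].append(speed)
typical = {cell: mean(speeds) for cell, speeds in zone_model.items()}

# Stage 3 sketch: flag an object whose speed deviates strongly from the
# zone's typical motion.
def anomalous(cell, speed, factor=3.0):
    base = typical.get(cell)
    return base is not None and (speed > factor * base or speed < base / factor)

print(anomalous((0, 0), 11.0), anomalous((1, 0), 13.5))
```

A vehicle-like speed in the sidewalk cell is flagged while the same speed in the road cell is not, mirroring how zone semantics, not raw speed alone, determine what counts as an alarm.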