
    Solving problems of data heterogeneity, semantic heterogeneity and data inequality: an approach using ontologies

    Knowledge is a person's own map and model of the world. Knowledge acquisition involves complex cognitive processes such as perception, communication, and reasoning. Because people's knowledge differs, they may perceive and understand the same environment or reality differently. This paper provides a case study in which groups of people in different communities manage data using different perceptions, concepts, terms (terminologies), and semantics to represent the same reality. Perceptions are converted into data and then saved on separate storage devices that are not connected to each other. Each user, belonging to a different community, uses different terminologies when collecting data and consequently obtains different results from that exercise. This is not a problem as long as each community uses only its own results; problems arise when people need to take data from other communities and share, collaborate on, and use it to build a larger solution. In this paper we present an approach to generate a common set of terms from the terms of the several different storage devices used by the communities, in order to make data retrieval independent of the different perceptions and terminologies those communities use. We use ontologies to represent the knowledge and discuss the use of mapping and integration techniques to find correspondences between the concepts used in those ontologies. We also discuss how to derive a common ontology to be used by all the communities. The literature contains several documents on the theoretical concepts and techniques that can be used to solve the described problem; in this paper, however, we present a real implementation of a system using those concepts
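The mapping step the abstract describes, finding correspondences between the terminologies of different communities and deriving a common term set, can be sketched as follows. This is an illustrative simplification, not the paper's actual system: the term lists, the string-similarity matcher, and the 0.7 threshold are all hypothetical choices for demonstration.

```python
# Hedged sketch: match concept labels from two communities' vocabularies by
# normalized string similarity, then derive a common term set. Real ontology
# mapping would also use structure and semantics, not just labels.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity between two concept labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_correspondences(terms_a, terms_b, threshold=0.7):
    """Map each term of community A to its best match in community B."""
    mappings = {}
    for ta in terms_a:
        best = max(terms_b, key=lambda tb: similarity(ta, tb))
        if similarity(ta, best) >= threshold:
            mappings[ta] = best
    return mappings

def common_terms(terms_a, terms_b, mappings):
    """Common vocabulary: one canonical label per matched pair (here A's
    label), plus the unmatched terms of both communities."""
    common = set(mappings)                            # canonical labels
    common |= set(terms_a) - set(mappings)            # unmatched in A
    common |= set(terms_b) - set(mappings.values())   # unmatched in B
    return common

# Hypothetical vocabularies of two communities describing the same domain:
community_a = ["Patient", "Diagnosis", "MedicalRecord"]
community_b = ["patient", "diagnoses", "Treatment"]
m = find_correspondences(community_a, community_b)
shared = common_terms(community_a, community_b, m)
# "Patient"/"patient" and "Diagnosis"/"diagnoses" are matched; the
# community-specific terms "MedicalRecord" and "Treatment" are kept as-is.
```

Queries phrased in the shared vocabulary can then be translated back to each community's local terms via the mapping, which is what makes retrieval independent of any one community's terminology.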

    Ontology learning for the semantic deep web

    Ontologies could play an important role in assisting users in their search for Web pages. This dissertation considers the problem of constructing natural ontologies that support users in their Web search efforts and increase the number of relevant Web pages that are returned. To achieve this goal, this thesis suggests combining Deep Web information, which consists of dynamically generated Web pages and cannot be indexed by existing automated Web crawlers, with ontologies, resulting in the Semantic Deep Web. The Deep Web information is exploited in three ways: automatically extracting attributes from Deep Web data sources, automatically generating domain ontologies from the Deep Web, and extracting instances from the Deep Web to enhance the domain ontologies. Several algorithms for the above-mentioned tasks are presented. Experimental results suggest that the proposed methods assist users with finding more relevant Web sites. Another contribution of this dissertation is a methodology for evaluating existing general-purpose ontologies using the Web as a corpus. The quality of ontologies (QoO) is quantified by analyzing existing ontologies to obtain numeric measures of how natural their concepts and relationships are. This methodology was first applied to several major, popular ontologies, such as WordNet, OpenCyc and the UMLS. Subsequently, the domain ontologies developed in this research were evaluated from the naturalness perspective
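The abstract does not give the dissertation's actual QoO formulas, but the general idea of scoring how "natural" a concept relationship is against the Web as a corpus can be sketched with a pointwise-mutual-information style measure over page hit counts. All counts below are hypothetical, and PMI is only one plausible choice of association measure.

```python
# Hedged sketch (assumed measure, not the dissertation's): naturalness of a
# child/parent concept pair approximated by PMI over hypothetical web hit
# counts for each term alone and for the pair together.
import math

TOTAL_PAGES = 1_000_000  # assumed corpus size

# Hypothetical hit counts for single terms and term pairs:
hits = {"dog": 50_000, "animal": 80_000, "carburetor": 4_000}
cohits = {("dog", "animal"): 9_000, ("carburetor", "animal"): 5}

def naturalness(child: str, parent: str) -> float:
    """PMI of the pair: log2( P(child, parent) / (P(child) * P(parent)) ).
    Positive means the terms co-occur more than chance predicts."""
    p_child = hits[child] / TOTAL_PAGES
    p_parent = hits[parent] / TOTAL_PAGES
    p_joint = cohits[(child, parent)] / TOTAL_PAGES
    return math.log2(p_joint / (p_child * p_parent))

# A natural is-a relationship scores higher than an unnatural one:
score_good = naturalness("dog", "animal")        # positive
score_bad = naturalness("carburetor", "animal")  # negative
```

Averaging such per-relationship scores over an ontology would yield one numeric quality indicator of the kind the dissertation aggregates into its QoO measures.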

    Methods for computing the inconsistency measure of OWL ontologies

    Relevance of the topic. The development of information and telecommunication technologies drives the growth in the amount of information needed by corporate systems, so efficient data processing is now a pressing problem. One way to address it is to process data in systems that use ontologies. An ontology is a formalized representation of knowledge about a particular subject area, suitable for automated processing: the data occupy less memory while more information can be derived from them. The size of ontologies grows constantly, so inconsistency, that is, internal contradiction within an ontology, has become a common occurrence. Processing and analyzing such ontologies requires methods for computing the degree of inconsistency, and these methods are the subject of this thesis. The object of the study is ontological systems and the inconsistency arising in the construction of ontologies. The subject of the study is methods for computing the degree of inconsistency of OWL ontologies. The research methods are methods of mathematical statistics for analyzing the computation of inconsistency measures of OWL ontologies. The purpose of the work: to increase the efficiency of processing inconsistent ontologies by applying inconsistency-measure computation; to adapt approaches for computing inconsistency measures of ontologies in description logic to OWL ontologies; and to optimize the methods for computing inconsistency measures of ontologies in order to reduce their running time
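One standard family of inconsistency measures from the description-logic literature that such a thesis would adapt counts minimal inconsistent subsets (MIS) of a knowledge base. The sketch below illustrates the idea on a tiny propositional knowledge base of literals, where a set is inconsistent iff it contains a literal and its negation; this is a deliberate simplification, since real OWL inconsistency checking requires a DL reasoner.

```python
# Hedged sketch of an MIS-count inconsistency measure on a toy propositional
# knowledge base (literals as strings, negation as the "not " prefix).
from itertools import combinations

def inconsistent(subset) -> bool:
    """A set of literals is inconsistent iff it contains p and 'not p'."""
    return any(("not " + lit) in subset for lit in subset)

def minimal_inconsistent_subsets(kb):
    """Enumerate subsets by increasing size; keep each inconsistent subset
    that has no already-found inconsistent proper subset."""
    mis = []
    for size in range(1, len(kb) + 1):
        for subset in combinations(sorted(kb), size):
            s = set(subset)
            if inconsistent(s) and not any(m < s for m in mis):
                mis.append(s)
    return mis

def mi_measure(kb) -> int:
    """I_MI: the number of minimal inconsistent subsets of the KB.
    0 means the KB is consistent; larger values mean more conflicts."""
    return len(minimal_inconsistent_subsets(kb))

kb = {"p", "not p", "q", "not q", "r"}
# The minimal inconsistent subsets are {p, not p} and {q, not q},
# so the measure is 2; "r" participates in no conflict.
degree = mi_measure(kb)
```

The exhaustive subset enumeration is exponential, which is exactly why the thesis's stated goal of optimizing such computations to reduce running time matters in practice.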

    Ontology-based model framework for conceptual design of treatment flow sheets

    The primary objective of wastewater treatment is the removal of pollutants to meet given legal effluent standards. To further reduce operating costs, additional recovery of resources and energy is desired in industrial and municipal wastewater treatment. Hence the objective in the early stage of planning treatment facilities lies in identifying and evaluating promising configurations of treatment units. This early planning stage is best supported by software tools able to handle a variety of treatment configurations. In chemical process engineering, various design tools are available that automatically identify feasible process configurations for obtaining desired products from given educts. In contrast, adapting these design tools to the automatic generation of treatment unit configurations (process chains) that achieve preset effluent standards is hampered for three reasons. First, pollutants in wastewater are usually defined not as chemical substances but by compound parameters that group constituents with equal properties (e.g. all particulate matter). Consequently, varying a single compound parameter changes related parameters (e.g. the relation between Chemical Oxygen Demand and Total Suspended Solids). Furthermore, mathematical process models of treatment processes are tailored to fractions of compound parameters. This hampers the generic representation of these process models, which in turn is essential for the automatic identification of treatment configurations. Second, treatment technologies for wastewater rely on a variety of chemical, biological, and physical phenomena. Approaches to describing these phenomena mathematically cover a wide range of modeling techniques, including stochastic, conceptual, and deterministic approaches; moreover, the temporal and spatial resolutions considered differ. This again hampers a generic representation of process models.
    Third, the automatic identification of treatment configurations may be achieved either by applying design rules or by permuting all possible combinations of units stored in a database of treatment units. The first approach depends on past experience translated into design rules, so no innovative treatment configurations can be identified. The second approach, identifying all possible configurations, collapses under the extremely high number of treatment configurations, a consequence of combinatorial explosion. It follows that an appropriate planning algorithm should work without additional design rules and should identify feasible configurations directly while discarding impractical ones. This work presents a planning tool for identifying and evaluating treatment configurations that tackles the problems addressed above. The planning tool comprises two major parts: an external declarative knowledge base and the actual planning tool, which includes a goal-oriented planning algorithm. The knowledge base describes parameters for wastewater characterization (i.e. the material model) and a set of treatment units represented by process models (i.e. the process model). The knowledge base is formalized in the Web Ontology Language (OWL). The developed data model, which is the organizational structure of the knowledge base, describes relations between wastewater parameters and process models to enable a generic representation of process models. Through this, parameters for wastewater characterization as well as treatment units can be altered or added to the knowledge base without having to synchronize already included parameter representations or process models. Furthermore, the knowledge base describes relations between parameters and properties of water constituents.
    This allows tracking the changes in all wastewater parameters that result from modeling the removal efficiency of the applied treatment units. So far, two generic treatment units have been represented within the knowledge base: separation and conversion units. These two raw types have been used to represent different types of clarifiers and biological treatment units. The developed planning algorithm is based on Means-Ends Analysis (MEA), a goal-oriented search algorithm that posts goals derived from the wastewater state and limit-value restrictions in order to select only those treatment units that are likely to solve the treatment problem. To this end, all treatment units are qualified by postconditions that describe the effect of each unit; in addition, units are characterized by preconditions that state the application range of each unit. The planning algorithm furthermore allows the identification of simple cycles, to account for moving-bed reactor systems (e.g. the functional unit of aeration tank and clarifier). Identified treatment configurations are evaluated by the total estimated cost of each configuration. The planning tool has been tested on five use cases, some of which contained multiple sources and sinks. This demonstrated the possibility of identifying water reuse opportunities as well as solutions that go beyond end-of-pipe treatment. Beyond its original area of application, the planning tool may be used for advanced investigations: the knowledge base and planning algorithm may be further developed toward identifying configurations for any type of material and energy recovery
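The goal-oriented selection loop described above, units qualified by preconditions (application range) and postconditions (per-parameter removal effect), driven by the gap between wastewater state and limit values, can be sketched as follows. This is a minimal illustration of the MEA idea only, not the dissertation's tool: the unit names, removal efficiencies, application ranges, and limit values are all hypothetical, and cost evaluation and cycle detection are omitted.

```python
# Hedged MEA sketch: while some parameter violates its limit, apply an
# applicable unit whose postcondition removes part of a violated parameter.
UNITS = {
    # name: (preconditions: param -> (lo, hi) application range,
    #        postconditions: param -> removal fraction)
    "primary_clarifier": ({"TSS": (50, 1000)}, {"TSS": 0.6, "COD": 0.3}),
    "activated_sludge":  ({"COD": (30, 800)},  {"COD": 0.85, "TSS": 0.2}),
}

def applicable(unit, state):
    pre, _ = UNITS[unit]
    return all(lo <= state.get(p, 0.0) <= hi for p, (lo, hi) in pre.items())

def apply_unit(unit, state):
    _, post = UNITS[unit]
    return {p: v * (1 - post.get(p, 0.0)) for p, v in state.items()}

def violated(state, limits):
    return [p for p, lim in limits.items() if state[p] > lim]

def plan(state, limits, max_steps=10):
    """Greedy MEA loop: post goals from the limit violations, keep only
    units that are applicable and address some open goal, pick the one
    with the best removal for the first goal. Returns (chain, effluent)
    or None if no feasible configuration is found within max_steps."""
    chain = []
    for _ in range(max_steps):
        goals = violated(state, limits)
        if not goals:
            return chain, state
        candidates = [u for u in UNITS
                      if applicable(u, state)
                      and any(UNITS[u][1].get(g, 0.0) > 0 for g in goals)]
        if not candidates:
            return None
        unit = max(candidates, key=lambda u: UNITS[u][1].get(goals[0], 0.0))
        chain.append(unit)
        state = apply_unit(unit, state)
    return None

influent = {"TSS": 300.0, "COD": 600.0}   # hypothetical influent (mg/L)
limits = {"TSS": 35.0, "COD": 90.0}       # hypothetical effluent limits
result = plan(influent, limits)
```

Note how the precondition ranges prune the search in the same spirit as the dissertation's approach: once TSS falls below the clarifier's application range, that unit drops out of the candidate set and the algorithm turns to the biological unit, without any hand-written design rules.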