3,538 research outputs found

    A rule-based ontological framework for the classification of molecules

    Get PDF
    BACKGROUND: Many key activities in life sciences research involve integrating and intelligently managing large amounts of biochemical information. Semantic technologies provide an intuitive way to organise and sift through these rapidly growing datasets via the design and maintenance of ontology-supported knowledge bases. To this end, OWL, a W3C standard declarative language, has been used extensively to deploy biochemical ontologies that can be conveniently organised using the classification facilities of OWL-based tools. One of the most established ontologies for the chemical domain is ChEBI, an open-access dictionary of molecular entities that supplies high-quality annotation and taxonomical information for biologically relevant compounds. However, ChEBI is expanded manually, which limits its growth given the scarce availability of human curators. RESULTS: In this work, we describe a prototype that performs automatic classification of chemical compounds. The software we present implements a sound and complete reasoning procedure for a formalism that extends datalog, and builds upon an off-the-shelf deductive database system. We capture a wide range of chemical classes, such as cyclic molecules, saturated molecules and alkanes, that are not expressible in OWL-based formalisms. Furthermore, we describe a surface, 'less-logician-like' syntax that allows application experts to create ontological descriptions of complex biochemical objects without prior knowledge of logic. In terms of performance, a noticeable improvement is observed in comparison with previous approaches. Our evaluation has discovered subsumptions that are missing from the manually curated ChEBI ontology, as well as discrepancies with respect to existing subclass relations. We thus illustrate the potential of an ontology language suitable for the life sciences domain that exhibits a favourable balance between expressive power and practical feasibility.
CONCLUSIONS: Our proposed methodology can form the basis of an ontology-mediated application to assist biocurators in the production of complete and error-free taxonomies. Moreover, such a tool could contribute to a more rapid development of the ChEBI ontology and to the efforts of the ChEBI team to make annotated chemical datasets available to the public. From a modelling point of view, our approach could stimulate the adoption of a different and expressive reasoning paradigm based on rules, for which state-of-the-art and highly optimised reasoners are available; it could thus pave the way for the representation of a broader spectrum of life sciences and biomedical knowledge.
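The classes the prototype targets, such as cyclic or saturated molecules, are structural properties of the molecular graph. As a hedged illustration only (not the paper's datalog formalism), a minimal Python sketch of a cyclicity check over a hypothetical bond-graph representation:

```python
# Minimal sketch of a structure-based class test, assuming molecules are
# given as bond graphs (atom -> set of bonded atoms). The representation is
# a hypothetical stand-in for a real chemical toolkit, not the paper's system.

def is_cyclic(bonds):
    """True if the molecular graph contains at least one ring, detected by a
    DFS that reaches an already-visited atom via a non-parent edge."""
    visited = set()
    for start in bonds:
        if start in visited:
            continue
        stack = [(start, None)]
        while stack:
            atom, parent = stack.pop()
            if atom in visited:
                return True  # second path to the same atom: a ring exists
            visited.add(atom)
            stack.extend((n, atom) for n in bonds[atom] if n != parent)
    return False

# Cyclohexane-like six-membered ring and a propane-like chain:
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
chain = {0: {1}, 1: {0, 2}, 2: {1}}
```

In the paper's formalism such a class would be captured by rules over the bond relation; this sketch only shows the underlying graph property being tested.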


    Structure-based classification and ontology in chemistry

    Get PDF
    BACKGROUND: Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised, machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features, as well as a role- or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. RESULTS: We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects.
Finally, we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. CONCLUSION: Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities, including algorithmic, statistical and logic-based tools. For the task of automatic structure-based classification of chemical entities, essential to managing the vast swathes of chemical data being brought online, systems capable of hybrid reasoning that combines several different approaches are crucial. We provide a thorough review of the available tools and methodologies, and identify areas of open research.
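A toy illustration of a structural class definition like 'pentacyclic compound': the sketch below counts rings via the cyclomatic number of a hypothetical bond graph. This is an assumption-laden stand-in for real cheminformatics tooling, not the paper's analysis:

```python
# Ring counting via the cyclomatic number: rings = edges - atoms + components.
# Molecules are assumed to be bond graphs (atom -> set of bonded atoms); this
# proxy for the number of smallest rings is illustrative, not authoritative.

def ring_count(bonds):
    atoms = set(bonds)
    edges = sum(len(nbrs) for nbrs in bonds.values()) // 2
    seen, components = set(), 0
    for a in atoms:                      # count connected components by DFS
        if a in seen:
            continue
        components += 1
        stack = [a]
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            stack.extend(bonds[x])
    return edges - len(atoms) + components

def is_pentacyclic(bonds):
    """Structural class test: exactly five rings."""
    return ring_count(bonds) == 5

hexagon = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}  # one ring
butane = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}              # no rings
```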

    A multiple criteria supplier segmentation using outranking and value function methods

    Full text link
    Suppliers play a key role in supply chain management, which involves the supplier selection problem as well as other complex issues that companies must take into account. The purpose of this research is to develop and test an integrated system that qualifies suppliers and segments them by monitoring their performance, based on a multiple criteria tool for systematic decision making. The proposal consists of a general procedure to assess suppliers, drawing mainly on the company's reliable databases. Firstly, for each group of products, evaluation criteria are defined collaboratively in order to determine critical and strategic performance; these are then integrated with other criteria that are specific to the suppliers and represent relevant aspects for the company, likewise classified along critical and strategic dimensions. Two multiple criteria methods, one compensatory and one non-compensatory, are used and compared so as to point out their strengths, weaknesses and flexibility for supplier evaluation in the different contexts that typically arise in supply chain management. A value function approach is the appropriate method to qualify suppliers for inclusion in the company's panel of approved suppliers, as this process depends only on the supplier's own features. On the other hand, outranking methods such as PROMETHEE have shown greater potential and robustness for building portfolios of suppliers that should become partners of the company, as well as for identifying other types of relationships, such as long-term contracts or market policies, and for highlighting suppliers to be removed from the portfolio. These results and conclusions are based on empirical research in a multinational food, pharmaceutical and chemical company.
This system has had a great impact, as it represents the first supplier segmentation proposal applied in industry in which decision making not only takes into account opinions and judgements but also integrates historical data and expert knowledge. This approach provides a robust support system to inform operative, tactical and strategic decisions, which is very relevant when applying advanced management in practice. This research was partially developed with the support of the Ministry of Economy and Competitiveness (Ref. ECO2011-27369) and the Ministry of Education (Marina Segura, scholarship of the Training Plan of University Teaching). Segura, M.; Maroto, C. (2017). A multiple criteria supplier segmentation using outranking and value function methods. Expert Systems with Applications, 69:87-100. doi:10.1016/j.eswa.2016.10.031
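To make the compensatory/non-compensatory contrast concrete, here is a minimal sketch with invented suppliers, weights and scores (none taken from the study): an additive value function versus a PROMETHEE II-style net flow using the 'usual' preference function.

```python
# Illustrative data only: three suppliers scored on three criteria (higher is
# better) with assumed weights. Not the paper's criteria or results.
suppliers = {"A": [0.9, 0.4, 0.7], "B": [0.6, 0.8, 0.5], "C": [0.3, 0.9, 0.8]}
weights = [0.5, 0.3, 0.2]

def value_score(name):
    """Compensatory: a simple additive (weighted-sum) value function."""
    return sum(w * s for w, s in zip(weights, suppliers[name]))

def net_flow(name):
    """PROMETHEE II net flow with the 'usual' preference function: full
    preference for any positive difference on a criterion."""
    others = [s for s in suppliers if s != name]
    flow = 0.0
    for other in others:
        for w, a, b in zip(weights, suppliers[name], suppliers[other]):
            flow += w * ((a > b) - (a < b))  # +w, -w or 0 per criterion
    return flow / len(others)

by_value = sorted(suppliers, key=value_score, reverse=True)  # ['A', 'B', 'C']
by_flow = sorted(suppliers, key=net_flow, reverse=True)      # ['A', 'C', 'B']
```

Note how the two paradigms disagree on B versus C: B's strong second criterion compensates under the value function, but not under outranking, which is the kind of behavioural difference the study exploits.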

    Using ICT to enhance student understanding of classification

    Get PDF
    It is common for 13-year-old students in Victoria, Australia to learn how to classify animals and plants using the Linnaean system and dichotomous keys. This is usually done with text-based research on the major groups of animals and plants and a few simple exercises with various objects to explain the underlying concept of classification. In this paper we describe our attempt to achieve similar goals using three computer software programs to build dichotomous keys and represent the data: Inspiration, MS PowerPoint, and MicroWorlds. Student work is included to illustrate what can be achieved by students of various abilities with these information and communication technologies.
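A dichotomous key of the kind the students build can be sketched as a small binary decision tree; the questions and taxa below are illustrative, not the classroom material:

```python
# A dichotomous key as nested tuples: (question, yes-branch, no-branch),
# with leaves holding class names. Questions and taxa are made up.
key = ("Does it have a backbone?",
       ("Does it have fur or hair?",
        "mammal",
        ("Does it have feathers?", "bird", "fish/reptile/amphibian")),
       ("Does it have six legs?", "insect", "other invertebrate"))

def classify(node, answer):
    """Walk the key using a function answer(question) -> bool."""
    while isinstance(node, tuple):
        question, yes, no = node
        node = yes if answer(question) else no
    return node

# A spider answers 'no' to both questions on its path:
spider = classify(key, lambda q: False)  # -> 'other invertebrate'
```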

    Reference Ontology and (ONTO)2 Agent: The Ontology Yellow pages

    Get PDF
    Knowledge reuse by means of ontologies faces three important problems at present: (1) there are no standardized identifying features that characterize ontologies from the user point of view; (2) there are no web sites using the same logical organization, presenting relevant information about ontologies; and (3) the search for appropriate ontologies is hard, time-consuming and usually fruitless. To solve the above problems, we present: (1) a living set of features that allow us to characterize ontologies from the user point of view and share the same logical organization; (2) a living domain ontology about ontologies (called the Reference Ontology) that gathers, describes and links to existing ontologies; and (3) (ONTO)2Agent, an ontology-based WWW broker about ontologies that uses the Reference Ontology as the source of its knowledge and retrieves descriptions of ontologies that satisfy a given set of constraints.
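The constraint-based retrieval such a broker performs can be sketched as filtering feature descriptions; the feature names and catalog entries below are invented, not the Reference Ontology's actual schema:

```python
# Invented catalog of ontology descriptions keyed by identifying features.
catalog = [
    {"name": "ChemOnt", "domain": "chemistry", "language": "OWL", "size": 5000},
    {"name": "BioTax",  "domain": "biology",   "language": "OWL", "size": 1200},
    {"name": "GeoBase", "domain": "geography", "language": "F-logic", "size": 300},
]

def retrieve(catalog, **constraints):
    """Return descriptions satisfying all constraints. A constraint value is
    either a plain value (tested for equality) or a predicate (callable)."""
    def ok(desc):
        for feature, want in constraints.items():
            have = desc.get(feature)
            if callable(want):
                if not want(have):
                    return False
            elif have != want:
                return False
        return True
    return [d for d in catalog if ok(d)]

# OWL ontologies with at least 1000 terms:
owl_big = retrieve(catalog, language="OWL", size=lambda n: n >= 1000)
```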

    Optimisation of data recording during the Isolated Limb Perfusion technique

    Get PDF
    Cancer is a disease in which cells of the organism, due to mutations in their DNA, divide without control, acquire malignant properties and, during this process of uncontrolled division, invade other tissues and do not die. Cancer cells can spread through the body via the circulatory and lymphatic systems, giving rise to metastases. With regard to cutaneous neoplasms, the therapy historically chosen for disseminated, unresectable metastases was amputation of the diseased limb; however, it was associated with many complications and with short intervals between treatment and the appearance of new lesions. In 1957, an innovative technique emerged that proved extremely effective while avoiding limb amputation: Isolated Limb Perfusion performed with Melphalan and TNF-α. The main objective of this procedure is to isolate the limb affected by the disease from the systemic circulation, so that very high doses of chemotherapy can be administered without collateral damage. It is therefore necessary to control blood leaks from the limb to the systemic circulation, to ensure that no other organ or tissue is compromised. The Portuguese Institute of Oncology (IPO) of Porto is one of the institutions worldwide that perform this type of surgical intervention, and it reports a growing number of procedures year after year. This progressive increase, coupled with the fact that manual leak control is impractical and time-consuming, led the institution to partner with Instituto Superior de Engenharia do Porto (ISEP) to develop an application that, in combination with an extracorporeal counting device called the Neoprobe Gamma Detector System, records the measured values automatically and allows possible leaks to be monitored.
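As a heavily hedged sketch of the bookkeeping such an application automates: systemic leakage is commonly estimated from tracer counts, comparing readings against a baseline and a calibration value representing full leakage. The formula, threshold and numbers below are illustrative assumptions, not the IPO/ISEP application's actual method:

```python
# Illustrative leak bookkeeping: counts are assumed to come from an external
# gamma counter; baseline and full_leak_count are assumed calibration values.

def leak_percentage(count, baseline, full_leak_count):
    """Estimated % of perfusate leaked into the systemic circulation."""
    return 100.0 * (count - baseline) / (full_leak_count - baseline)

def monitor(readings, baseline, full_leak_count, threshold=10.0):
    """Record each (time, count) reading and flag any reading whose estimated
    leak exceeds the safety threshold (an assumed value here)."""
    log = []
    for t, count in readings:
        pct = leak_percentage(count, baseline, full_leak_count)
        flagged = pct > threshold
        log.append((t, count, round(pct, 1), flagged))
    return log

log = monitor([(0, 100), (300, 150)], baseline=100, full_leak_count=600)
```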

    The Knowledge Graph Construction in the Educational Domain: Take an Australian School Science Course as an Example

    Get PDF
    The evolution of Internet technology and artificial intelligence has changed the ways we gain knowledge, which has expanded to every aspect of our lives. In recent years, Knowledge Graph technology, as one of the artificial intelligence techniques, has been widely used in the educational domain. However, there are few studies dedicated to the construction of knowledge graphs for K-10 education in Australia; most existing studies focus only on the theory level, and little research demonstrates practical pipeline steps that complete the complex flow of constructing an educational knowledge graph. Apart from that, most studies have focused on concept entities and their relations but ignored the features of concept entities and the relations between learning knowledge points and required learning outcomes. To overcome these shortcomings and provide the data foundation for downstream research and applications in this educational domain, the processes of building a knowledge graph for Australian K-10 education were analyzed at the theory level and implemented in a practical way in this research. We took the Year 9 science course as a typical data source fed to the proposed method, called K10EDU-RCF-KG, to construct this educational knowledge graph and to enrich the features of its entities. In the construction pipeline, a variety of techniques were employed. Firstly, the POI and OCR techniques were applied to convert Word and PDF files into text, followed by the development of an educational resources management platform in which the machine-readable text could be stored in a relational database management system. Secondly, we designed an architecture framework as the guide for the construction pipeline.
According to this architecture, the educational ontology was designed first, and a backend microservice was developed to process entity extraction and relation extraction using NLP-NER and probabilistic association rule mining algorithms, respectively. We also adopted the NLP-POS technique to identify the neighbouring adjectives related to entities, in order to enrich the features of these concept entities. In addition, a subject dictionary was introduced during the refinement of the knowledge graph, which reduced the noise rate of the knowledge graph entities. Furthermore, learning-outcome entities were linked directly to topic knowledge-point entities, providing a clear and efficient way to identify which learning objectives correspond to each learning unit. Finally, a set of REST APIs for querying this educational knowledge graph was developed.
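The final stages described above, direct topic-to-outcome links and adjective-based entity features, can be sketched with a toy triple store. Entity names, the outcome code and the adjective list below are illustrative, and the regex stands in for real NLP-POS tagging:

```python
import re

# Toy triple store standing in for the knowledge graph backend.
triples = set()

def add_triple(head, relation, tail):
    triples.add((head, relation, tail))

def query(head=None, relation=None, tail=None):
    """Pattern match over stored triples; None acts as a wildcard."""
    return [t for t in triples
            if (head is None or t[0] == head)
            and (relation is None or t[1] == relation)
            and (tail is None or t[2] == tail)]

# Link a knowledge point directly to a learning outcome, as the pipeline does
# (entity names and the outcome code are invented for illustration):
add_triple("chemical reactions", "addresses_outcome", "ACSSU179")
add_triple("chemical reactions", "part_of", "Year 9 Science")

# Crude neighbour-adjective feature extraction; a real system would use an
# NLP-POS tagger rather than a fixed adjective list.
ADJECTIVES = {"chemical", "exothermic", "reversible"}

def adjective_features(sentence):
    words = re.findall(r"[a-z]+", sentence.lower())
    return {w for w in words if w in ADJECTIVES}

feats = adjective_features("An exothermic reaction releases heat.")
```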