    From fuzzy to annotated semantic web languages

    The aim of this chapter is to present a detailed, self-contained and comprehensive account of the state of the art in representing and reasoning with fuzzy knowledge in Semantic Web Languages such as the triple languages RDF and RDFS, the conceptual languages of the OWL 2 family, and rule languages. We further show how one may generalise them to so-called annotation domains, which also cover, e.g., temporal and provenance extensions.
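    A minimal sketch, assuming a plain tuple representation, of the kind of fuzzy-annotated triples surveyed in such work; the `ex:` names, the degree values and the `query_at_least` helper are illustrative assumptions, not an interface defined in the chapter.

```python
# Sketch: RDF-style triples annotated with a degree from a fuzzy/annotation domain.
# The tuple layout and the helper below are illustrative, not a defined API.

from typing import List, Tuple

AnnotatedTriple = Tuple[str, str, str, float]  # (subject, predicate, object, degree)

store: List[AnnotatedTriple] = [
    ("ex:rome",   "ex:isA",       "ex:HotPlace", 0.8),  # fuzzy membership degree
    ("ex:london", "ex:isA",       "ex:HotPlace", 0.2),
    ("ex:rome",   "ex:locatedIn", "ex:Italy",    1.0),  # crisp fact, degree 1
]

def query_at_least(triples, predicate, obj, threshold):
    """Return (subject, degree) pairs matching predicate/object with degree >= threshold."""
    return [(s, d) for (s, p, o, d) in triples
            if p == predicate and o == obj and d >= threshold]

print(query_at_least(store, "ex:isA", "ex:HotPlace", 0.5))  # [('ex:rome', 0.8)]
```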

    Hybrid fuzzy multi-objective particle swarm optimization for taxonomy extraction

    Ontology learning refers to the automatic extraction of an ontology to produce the ontology learning layer cake, which consists of five kinds of output: terms, concepts, taxonomy relations, non-taxonomy relations and axioms. Term extraction is a prerequisite for all aspects of ontology learning: it is the automatic mining of complete terms from the input documents. Another important part of an ontology is the taxonomy, or hierarchy of concepts, which presents a tree view of the ontology and shows the inheritance between subconcepts and superconcepts. In this research, two methods were proposed for improving the performance of the extraction results. The first method uses particle swarm optimization to optimize the feature weights; its advantage is that it can calculate and adjust the weight of each feature to an appropriate value, and here it is used to improve the performance of both term and taxonomy extraction. The second method uses a hybrid technique combining multi-objective particle swarm optimization and fuzzy systems, which ensures that the membership functions and the fuzzy rule sets are optimized. The advantage of using a fuzzy system is that imprecise and uncertain feature-weight values can be tolerated during the extraction process; this method is used to improve the performance of taxonomy extraction. In the term extraction experiment, five features were extracted for each term from the document, represented as feature vectors consisting of domain relevance, domain consensus, term cohesion, first occurrence and length of noun phrase. For taxonomy extraction, matches of Hearst lexico-syntactic patterns in documents and on the web, together with hypernym information from WordNet, were used as the features representing each pair of terms from the texts. The two proposed methods were evaluated on a dataset of tourism documents. For term extraction, the proposed method was compared with benchmark algorithms such as Term Frequency Inverse Document Frequency, Weirdness, Glossary Extraction and Term Extractor, using precision as the evaluation measure. For taxonomy extraction, the proposed methods were compared with feature-based benchmarks and weighting by Support Vector Machine, using F-measure, precision and recall. For the first method, the experiments showed that using particle swarm optimization to optimize the feature weights in term and taxonomy extraction improves the accuracy of the extraction results compared to the benchmark algorithms. For the second method, the results showed that the hybrid of multi-objective particle swarm optimization and fuzzy systems improves taxonomy extraction compared to the benchmark methods, while adjusting the fuzzy membership functions and keeping the number of fuzzy rules to a minimum with a high degree of accuracy.
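    A compact sketch of the core of the first method as the abstract describes it: particle swarm optimization searching for feature weights that maximise term-extraction precision. The five feature names follow the abstract; the toy candidate terms, gold list and PSO parameters are illustrative placeholders, not the thesis's data or settings.

```python
# Sketch: PSO optimizing feature weights for term scoring, evaluated by precision.
import random

FEATURES = ["domain_relevance", "domain_consensus", "term_cohesion",
            "first_occurrence", "np_length"]

# Toy candidates: one feature vector per candidate term, plus a tiny gold-standard set.
candidates = {
    "particle swarm": [0.9, 0.8, 0.7, 0.9, 0.6],
    "taxonomy":       [0.8, 0.7, 0.6, 0.8, 0.3],
    "the document":   [0.2, 0.3, 0.1, 0.4, 0.4],
    "fuzzy system":   [0.7, 0.6, 0.8, 0.5, 0.6],
}
gold_terms = {"particle swarm", "taxonomy", "fuzzy system"}

def precision(weights, top_k=3):
    """Rank candidates by weighted feature sum and return precision of the top-k."""
    ranked = sorted(candidates,
                    key=lambda t: sum(w * f for w, f in zip(weights, candidates[t])),
                    reverse=True)
    return sum(t in gold_terms for t in ranked[:top_k]) / top_k

def pso(dim=len(FEATURES), n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [precision(p) for p in pos]
    gbest = pbest[max(range(n_particles), key=lambda i: pbest_fit[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fit = precision(pos[i])
            if fit > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], fit
                if fit > precision(gbest):
                    gbest = pos[i][:]
    return gbest, precision(gbest)

weights, best_precision = pso()
print(weights, best_precision)
```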

    Toward Sensor-Based Context Aware Systems

    This paper proposes a methodology for sensor data interpretation that can combine sensor outputs with contexts represented as sets of annotated business rules. Sensor readings are interpreted to generate events labeled with the appropriate type and level of uncertainty, and the appropriate context is then selected. Reconciliation of different uncertainty types is achieved by a simple technique that moves uncertainty from events to business rules by generating combs of standard Boolean predicates. Finally, context rules are evaluated together with the events to make a decision. The feasibility of our idea is demonstrated via a case study in which a context-reasoning engine was connected to simulated heartbeat sensors using prerecorded experimental data. We use sensor outputs to identify the proper context of operation of a system and to trigger decision-making based on context information.
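    A minimal sketch of the pipeline described above, assuming a simple heartbeat threshold: a reading becomes an event labelled with a type and an uncertainty level, and a context rule is then evaluated as a plain Boolean predicate over that event. The thresholds, event types and rule encoding are assumptions, not the paper's actual rule language.

```python
# Sketch: sensor reading -> typed event with uncertainty -> Boolean context rule.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str           # e.g. "tachycardia", "normal_rate"
    uncertainty: float  # 0.0 = certain, 1.0 = completely uncertain

def interpret_heartbeat(bpm: float, sensor_noise: float) -> Event:
    """Turn a raw heartbeat reading into a typed event with an uncertainty level."""
    kind = "tachycardia" if bpm > 100 else "normal_rate"
    # Readings close to the decision threshold inherit extra uncertainty.
    uncertainty = min(1.0, sensor_noise + (0.3 if 90 <= bpm <= 110 else 0.0))
    return Event(kind, uncertainty)

# Context rule: a plain Boolean predicate; the uncertainty tolerance has been moved
# from the event into the rule itself.
def rule_alert_medical_staff(event: Event) -> bool:
    return event.kind == "tachycardia" and event.uncertainty <= 0.4

reading = interpret_heartbeat(bpm=118.0, sensor_noise=0.1)
if rule_alert_medical_staff(reading):
    print("context 'patient monitoring': raise alert")
```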

    Interim research assessment 2003-2005 - Computer Science

    This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.

    A methodology to produce geographical information for land planning using very-high resolution images

    Currently, Portuguese municipalities are required to produce homologated cartography under the Territorial Management Instruments framework. The Municipal Master Plan (PDM) has to be revised every 10 years, as do the topographic and thematic maps that describe the municipal territory. However, this period is inadequate for counties where urban pressure is high and land-use change is very dynamic. Consequently, a more efficient mapping process is needed, one that allows recent geographic information to be obtained more often. Several countries, including Portugal, continue to use aerial photography for large-scale mapping. Although these data enable highly accurate maps, their acquisition and visual interpretation are very costly and time consuming. Very-High Resolution (VHR) satellite imagery can be an alternative data source, without replacing aerial images, for producing large-scale thematic cartography. The focus of the thesis is the demand for updated geographic information in the land planning process. To better understand the value and usefulness of this information, a survey of all Portuguese municipalities was carried out. This step was essential for assessing the relevance and usefulness of introducing VHR satellite imagery into the chain of procedures for updating land information. The proposed methodology is based on the use of VHR satellite imagery, and other digital data, in a Geographic Information Systems (GIS) environment. Different feature-extraction algorithms, which take into account the variation in texture, colour and shape of objects in the image, were tested. The trials aimed at the automatic extraction of features of municipal interest from high-resolution aerial and satellite imagery (orthophotos, QuickBird and IKONOS) as well as elevation data (altimetric information and LiDAR data). To evaluate the potential of the geographic information extracted from VHR images, two areas of application were identified: mapping and analytical purposes. Four case studies that reflect different uses of geographic data at the municipal level, with different accuracy requirements, were considered. The first case study presents a methodology for periodic updating of large-scale maps based on orthophotos, in the area of Alta de Lisboa; this is a situation where the positional and geometric accuracy of the extracted information is more demanding, since technical mapping standards must be complied with. In the second case study, an alarm system that indicates the location of potential changes in building areas, using a QuickBird image and LiDAR data, was developed for the area of Bairro da Madre de Deus; the goal of the system is to assist the updating of large-scale mapping by providing a layer that municipal technicians can use as the basis for manual editing. In the third case study, an analysis of the roof-tops most suitable for installing solar systems, using LiDAR data, was performed in the area of Avenidas Novas. In the fourth case study, a set of urban environment indicators obtained from VHR imagery is proposed; the concept is demonstrated for the entire city of Lisbon through IKONOS imagery processing. In this analytical application, the positional quality of the extraction is less relevant.
    GEOSAT – Methodologies to extract large scale GEOgraphical information from very high resolution SATellite images (PTDC/GEO/64826/2006); e-GEO – Centro de Estudos de Geografia e Planeamento Regional, Faculdade de Ciências Sociais e Humanas, within the Grupo de Investigação Modelação Geográfica, Cidades e Ordenamento do Território.
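    A minimal sketch of the change-alarm idea from the second case study, assuming gridded height data: cells where a LiDAR-derived surface height departs from a reference building-height model by more than a tolerance are flagged for manual editing. The toy arrays and the 2.5 m threshold are placeholders, not the thesis's actual processing chain.

```python
# Sketch: flag cells where new LiDAR heights disagree with reference cartography.
import numpy as np

reference_height = np.array([[0.0, 0.0, 9.0],
                             [0.0, 0.0, 9.0],
                             [3.0, 3.0, 9.0]])  # metres, from existing cartography

lidar_height = np.array([[0.0, 6.0, 9.0],
                         [0.0, 6.0, 9.0],
                         [3.0, 3.0, 0.5]])      # metres, from a new LiDAR survey

tolerance_m = 2.5
change_alarm = np.abs(lidar_height - reference_height) > tolerance_m

print(change_alarm)  # True where a new or demolished structure is suspected
print(int(change_alarm.sum()), "cells flagged for manual editing")
```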

    The Encyclopedia of Neutrosophic Researchers - vol. 3

    This is the third volume of the Encyclopedia of Neutrosophic Researchers, edited from materials offered by the authors who responded to the editor’s invitation. The authors are listed alphabetically. The introduction contains a short history of neutrosophics, together with links to the main papers and books. Neutrosophic set, neutrosophic logic, neutrosophic probability, neutrosophic statistics, neutrosophic measure, neutrosophic precalculus, neutrosophic calculus and so on are gaining significant attention in solving many real-life problems that involve uncertainty, impreciseness, vagueness, incompleteness, inconsistency, and indeterminacy. In recent years the fields of neutrosophics have been extended and applied in various areas, such as artificial intelligence, data mining, soft computing, decision making in incomplete / indeterminate / inconsistent information systems, image processing, computational modelling, robotics, medical diagnosis, biomedical engineering, investment problems, economic forecasting, social science, and humanistic and practical achievements.
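    As a minimal illustration of the neutrosophic notions listed above, the sketch below models a single-valued neutrosophic degree with independent truth, indeterminacy and falsity components, each in [0, 1]; the class name and example values are illustrative only.

```python
# Sketch: a single-valued neutrosophic degree (T, I, F), components independent,
# each in [0, 1]; unlike a probability, they need not sum to 1.
from dataclasses import dataclass

@dataclass
class NeutrosophicValue:
    t: float  # truth membership
    i: float  # indeterminacy membership
    f: float  # falsity membership

    def __post_init__(self):
        if not all(0.0 <= x <= 1.0 for x in (self.t, self.i, self.f)):
            raise ValueError("each of T, I, F must lie in [0, 1]")

# Evidence that is mostly supportive, partly indeterminate, and weakly contradicted.
symptom_evidence = NeutrosophicValue(t=0.6, i=0.3, f=0.2)
print(symptom_evidence)
```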

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997; the area was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.