
    A Survey on Region Extractors from Web Documents

    Extracting information from web documents has become a research area in which new proposals sprout year after year. This has motivated several researchers to work on surveys that attempt to provide an overall picture of the many existing proposals. Unfortunately, none of these surveys provides a complete picture, because they do not take region extractors into account. These tools are a kind of preprocessor: they help information extractors focus on the regions of a web document that contain relevant information. With the increasing complexity of web documents, region extractors are becoming a must for extracting information from many websites. Beyond information extraction, region extractors have also found their way into information retrieval, focused web crawling, topic distillation, adaptive content delivery, mashups, and metasearch engines. In this paper, we survey the existing proposals regarding region extractors and compare them side by side. Ministerio de Educación y Ciencia TIN2007-64119; Junta de Andalucía P07-TIC-2602; Junta de Andalucía P08-TIC-4100; Ministerio de Ciencia e Innovación TIN2008-04718-E; Ministerio de Ciencia e Innovación TIN2010-21744; Ministerio de Economía, Industria y Competitividad TIN2010-09809-E; Ministerio de Ciencia e Innovación TIN2010-10811-E; Ministerio de Ciencia e Innovación TIN2010-09988-
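
    To make the role of a region extractor concrete, the following is a minimal Python sketch of a density-based preprocessor; it is a hypothetical illustration, not any specific proposal covered by the survey. It keeps only the div blocks of a page whose text-to-tag ratio suggests they carry the main content, so that a downstream information extractor can ignore navigation bars, banners, and other noise.

    from bs4 import BeautifulSoup

    def candidate_regions(html, min_density=10.0):
        """Return <div> blocks whose text-to-tag ratio suggests main content."""
        soup = BeautifulSoup(html, "html.parser")
        scored = []
        for div in soup.find_all("div"):
            text_length = len(div.get_text(strip=True))
            tag_count = len(div.find_all(True)) + 1  # +1 for the div itself
            scored.append((text_length / tag_count, div))
        # Highest text density first; downstream extraction runs on these regions only.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [div for density, div in scored if density >= min_density]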

    Search improvement within the geospatial web in the context of spatial data infrastructures

    The work developed in this doctoral thesis demonstrates that it is possible to improve search in the context of Spatial Data Infrastructures by applying techniques and good practices from other scientific communities, especially the Web and Semantic Web communities (for example, Linked Data). The use of semantic descriptions and of content-based approaches built on what the geospatial community publishes can help in searching for information about geographic phenomena, and in searching for geospatial resources in general. The work begins with an analysis of an approach to improving the search for geospatial entities from the perspective of traditional geocoding. The composite geocoding architecture proposed in this work improves geocoding results by drawing on different geographic information providers. In this approach, the use of structural design patterns and ontologies yields an architecture that is advanced in terms of extensibility, flexibility and adaptability. In addition, an architecture based on geocoding service selection enables a methodology for georeferencing different types of geographic information (for example, addresses or points of interest). Next, two representative applications that require additional semantic characterisation of geospatial resources are presented. The approach proposed in this work uses content-based heuristics to sample a set of geospatial resources. The first part is devoted to the idea of abstracting a geographic phenomenon from its spatial definition. The research shows that Semantic Web good practices can be reused within a Spatial Data Infrastructure to describe the geospatial services standardised by the Open Geospatial Consortium by means of geo-identifiers (that is, by means of the entities of a geographic ontology). The second part of this chapter breaks down the architecture and components of a geoprocessing service for the automatic identification of orthoimagery offered through a standard map publication service (that is, services that follow the OGC Web Map Service specification). As a result of this work, a method has been proposed for identifying which maps offered by a Web Map Service are orthoimages. The work then turns to the analysis of issues related to creating metadata for Web resources in the geographic domain. This work proposes an architecture for the automatic generation of geographic knowledge about Web resources. It was necessary to develop a method for estimating the geographic coverage of Web pages. The proposed heuristics are based on the content published by geographic information providers. The prototype developed is capable of generating metadata. The generated model contains the minimum recommended set of elements required by a catalogue that follows the OGC Catalogue Service for the Web specification, the standard recommended by different Spatial Data Infrastructures (for example, the Infrastructure for Spatial Information in the European Community (INSPIRE)). In addition, this study determines some characteristics of the current Geospatial Web. First, it characterises the market of providers of geographic-information Web resources. The study reveals some practices of the geospatial community in producing metadata for Web pages, in particular the lack of geographic metadata. All of the above underpins the study of how to support non-expert users in searching for Geospatial Web resources. The search engine dedicated to the Geospatial Web proposed in this work can build on an existing search engine, and it supports exploratory search over the geospatial resources discovered on the Web. A precision-and-recall experiment showed that the prototype developed in this work is at least as good as the remote search engine. A usability study indicates that even non-experts can complete a search task with satisfactory results
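
    The composite geocoding idea described above can be outlined in a short Python sketch; the provider interface, field names and quality threshold are assumptions for illustration, not taken from the thesis. A selector queries several geographic-information providers through one shared interface and returns the first sufficiently good match, falling back to the best one seen otherwise.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class GeocodeResult:
        lat: float
        lon: float
        score: float  # provider-reported match quality in [0, 1]

    Provider = Callable[[str], Optional[GeocodeResult]]

    def composite_geocode(query: str, providers: list[Provider],
                          min_score: float = 0.8) -> Optional[GeocodeResult]:
        """Try providers in order; return the first sufficiently good match."""
        best = None
        for provider in providers:
            result = provider(query)
            if result is None:
                continue
            if result.score >= min_score:
                return result
            if best is None or result.score > best.score:
                best = result
        return best  # otherwise fall back to the best match seen, if any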

    Data analytics 2016: proceedings of the fifth international conference on data analytics


    Representing archaeological uncertainty in cultural informatics

    This thesis sets out to explore, describe, quantify, and visualise uncertainty in a cultural informatics context, with a focus on archaeological reconstructions. For quite some time, archaeologists and heritage experts have been criticising the often too-realistic appearance of three-dimensional reconstructions. They have been highlighting one of the unique features of archaeology: the information we have on our heritage will always be incomplete. This incompleteness should be reflected in digitised reconstructions of the past. This criticism is the driving force behind this thesis. The research examines archaeological theory and the inferential process and provides insight into computer visualisation. It describes how these two areas, archaeology and computer graphics, have formed a useful, but often tumultuous, relationship over the years. By examining how uncertainty is handled in disciplines such as GIS, medicine, and law, the thesis postulates that archaeological visualisation, in order to mature, must move towards archaeological knowledge visualisation. Three sequential areas are proposed in this thesis for the initial exploration of archaeological uncertainty: identification, quantification and modelling. The main contributions of the thesis lie in these three areas. Firstly, through the innovative design, distribution, and analysis of a questionnaire, the thesis identifies the importance of uncertainty in archaeological interpretation and discovers potential preferences among different evidence types. Secondly, the thesis uniquely analyses and evaluates, in relation to archaeological uncertainty, three different belief quantification models. The varying ways in which these mathematical models work are also evaluated through simulated experiments. Comparison of the results indicates significant convergence between the models. Thirdly, a novel approach to visualising archaeological uncertainty and evidence conflict is presented, influenced by information visualisation schemes. Lastly, suggestions for future semantic extensions to this research are presented through the design and development of new plugins for a search engine
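
    As an illustration of what a belief quantification model can look like in code, the Python sketch below implements Dempster's rule of combination, a classical evidence-combination scheme; it is offered only as an assumed example of the genre, since the abstract does not name the three models the thesis evaluates, and the toy evidence sources are invented.

    from itertools import product

    def combine(m1, m2):
        """Combine two mass functions whose keys are frozensets of hypotheses."""
        combined, conflict = {}, 0.0
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            overlap = a & b
            if overlap:
                combined[overlap] = combined.get(overlap, 0.0) + x * y
            else:
                conflict += x * y  # mass falling on contradictory evidence
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence")
        return {h: v / (1.0 - conflict) for h, v in combined.items()}

    # Invented example: two evidence sources about the dating of a wall.
    roman, medieval = frozenset({"roman"}), frozenset({"medieval"})
    either = roman | medieval
    m_stratigraphy = {roman: 0.6, either: 0.4}
    m_typology = {roman: 0.3, medieval: 0.3, either: 0.4}
    print(combine(m_stratigraphy, m_typology))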

    Automated retrieval and extraction of training course information from unstructured web pages

    Web Information Extraction (WIE) is the discipline dealing with the discovery, processing and extraction of specific pieces of information from semi-structured or unstructured web pages. The World Wide Web comprises billions of web pages, and there is much need for systems that will locate, extract and integrate the acquired knowledge into organisations' practices. There are some commercial, automated web extraction software packages; however, their success comes from heavily involving their users in the process of finding the relevant web pages, preparing the system to recognise items of interest on these pages, and manually dealing with the evaluation and storage of the extracted results. This research has explored WIE, specifically with regard to the automation of the extraction and validation of online training information. The work also includes research and development in the area of automated Web Information Retrieval (WIR), more specifically in Web Searching (or Crawling) and Web Classification. Different technologies were considered; after much consideration, Naïve Bayes networks were chosen as the most suitable for the development of the classification system. The extraction part of the system used Genetic Programming (GP) for the generation of web extraction solutions. Specifically, GP was used to evolve regular expressions, which were then used to extract specific training course information from the web, such as course names, prices, dates and locations. The experimental results indicate that all three aspects of this research perform very well, with the Web Crawler outperforming existing crawling systems, the Web Classifier performing with an accuracy of over 95% and a precision of over 98%, and the Web Extractor achieving an accuracy of over 94% for the extraction of course titles and an accuracy of just under 67% for the extraction of other course attributes such as dates, prices and locations. Furthermore, the overall work is of great significance to the sponsoring company, as it simplifies and improves the existing time-consuming, labour-intensive and error-prone manual techniques, as will be discussed in this thesis. The prototype developed in this research works in the background and requires very little, often no, human assistance
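
    As a rough illustration of the idea of evolving regular expressions with GP, the Python sketch below shows how a candidate regular expression might be scored against labelled training pages; the F1-style fitness function, the example pattern and the sample data are assumptions for illustration, not the thesis's actual setup.

    import re

    def regex_fitness(pattern, labelled_pages):
        """F1-style score of a candidate regex over (page_text, expected_values) pairs."""
        try:
            compiled = re.compile(pattern)
        except re.error:
            return 0.0  # syntactically invalid individuals score zero
        tp = fp = fn = 0
        for text, expected in labelled_pages:
            found = set(compiled.findall(text))
            tp += len(found & expected)
            fp += len(found - expected)
            fn += len(expected - found)
        if tp == 0:
            return 0.0
        precision, recall = tp / (tp + fp), tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    # Invented example: a candidate pattern for course prices on one training page.
    pages = [("Course fee: £450 (early bird £395)", {"£450", "£395"})]
    print(regex_fitness(r"£\d+", pages))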

    Economic Trends in Enterprise Search Solutions

    Enterprise search technology retrieves information within organizations. This data can be proprietary or public, and access to it may be restricted or unrestricted. Enterprise search solutions render business processes more efficient, particularly in data-intensive companies. This technology is key to increasing the competitiveness of the digital economy; thus it constitutes a strategic market for the European Union. The Enterprise Search Solution (ESS) market was worth close to one billion USD in 2008 and is expected to grow more quickly than the overall market for information and knowledge management systems. Optimistic market forecasts expect market size to exceed 1,200 million USD by the end of 2010. Other market analyses see the growth rate slowing down and stabilizing at around 10% a year in 2010. Even in the least favourable case, enterprise search remains an attractive market, particularly because of the opportunities expected to arise from the convergence of ESS and Information Systems. This report looks at the demand and supply sides of ESS and provides data about the market. It presents the evolution of market dynamics over the past decade and describes the current situation. Our main thesis is that ESS is currently placed at the point where two established markets, namely web search and the management of information systems, overlap. The report offers evidence that these two markets are converging and discusses the role of the different stakeholders (providers of web search engines, enterprise resource management tools, pure enterprise search tools, etc.) in this changing context. JRC.DDG.J.4-Information Society

    Linked Vocabulary Recommendation Tools for Internet of Things: A Survey

    The Semantic Web emerged with the vision of eased integration of heterogeneous, distributed data on the Web. The approach fundamentally relies on the linkage between and reuse of previously published vocabularies to facilitate semantic interoperability. In recent years, the Semantic Web has been perceived as a potential enabling technology for overcoming interoperability issues in the Internet of Things (IoT), especially for service discovery and composition. Despite the importance of making vocabulary terms discoverable and selecting the most suitable ones in forthcoming IoT applications, no state-of-the-art survey of tools achieving such recommendation tasks exists to date. This survey covers this gap by specifying an extensive evaluation framework and assessing linked vocabulary recommendation tools. Furthermore, we discuss challenges and opportunities of vocabulary recommendation and related tools in the context of emerging IoT ecosystems. Overall, 40 recommendation tools for linked vocabularies were evaluated, both empirically and experimentally. Some of the key findings include that (i) many tools neglect to thoroughly address both the curation of a vocabulary collection and effective selection mechanisms; (ii) modern information retrieval techniques are underrepresented; and (iii) the reviewed tools that emerged from Semantic Web use cases are not yet sufficiently extended to fit today's IoT projects
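
    Finding (ii) above, that modern information retrieval techniques are underrepresented, can be illustrated with a small Python sketch that ranks vocabulary terms against a textual IoT query using TF-IDF and cosine similarity; the term labels and descriptions are invented for illustration and are not drawn from the surveyed tools.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def recommend(query, term_descriptions, k=3):
        """Return the k vocabulary terms whose descriptions best match the query."""
        labels = list(term_descriptions)
        vectorizer = TfidfVectorizer(stop_words="english")
        matrix = vectorizer.fit_transform([term_descriptions[label] for label in labels])
        scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
        return sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)[:k]

    # Invented term descriptions, loosely inspired by sensor-network vocabularies.
    terms = {
        "ex:Sensor": "device that observes a property of a feature of interest",
        "ex:Observation": "act of estimating the value of an observed property",
        "ex:Temperature": "property representing temperature measurements",
    }
    print(recommend("temperature sensor readings", terms))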

    Enterprise Search in the European Union: A Techno-economic Analysis

    This Report contributes to the work being carried out by IPTS on the potential of Search, discussing, in particular, the prospects of Enterprise Search as well as the main challenges and opportunities. It is part of CHORUS+, an initiative supported by the Directorate General Information Society and Media. Information about CHORUS+ is available at http://avmediasearch.eu. JRC.J.3-Information Society