
    Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems

    With the increase in the number of distributed library information resources, users may have to interact with different user interfaces, learn to switch their mental models between these interfaces, and familiarise themselves with the controlled vocabularies used by different resources. For this reason, library professionals have developed library portals to integrate these distributed information resources and to assist end-users in cross-accessing them via a single access point in their own library. There are two important subject-based services that a library portal system might be able to provide. The first is a federated search service, a process in which a user can input a query to cross-search a number of information resources. The second is a subject cross-browsing service, which can offer a knowledge navigation tree linking the subject schemes used by distributed resources. However, the development of subject cross-searching and cross-browsing services has been impeded by the heterogeneity of the Knowledge Organisation Systems (KOS) used by different information resources. Without mappings between these KOS, it is impossible to offer a subject cross-browsing service in a library portal system. [Continues.]
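
    As a loose illustration of the kind of inter-scheme mapping the abstract refers to (not taken from the paper), the sketch below expresses a link between concepts from two hypothetical subject schemes using SKOS mapping properties with rdflib; the scheme URIs and concept labels are invented for the example.

```python
# Minimal sketch: expressing a KOS-to-KOS subject mapping with SKOS via rdflib.
# The two concept schemes, their URIs and labels are hypothetical examples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS, RDF

g = Graph()
g.bind("skos", SKOS)

scheme_a = Namespace("http://example.org/schemeA/")
scheme_b = Namespace("http://example.org/schemeB/")

# A concept from each scheme, with preferred labels.
g.add((scheme_a.marineBiology, RDF.type, SKOS.Concept))
g.add((scheme_a.marineBiology, SKOS.prefLabel, Literal("Marine biology", lang="en")))
g.add((scheme_b.oceanLife, RDF.type, SKOS.Concept))
g.add((scheme_b.oceanLife, SKOS.prefLabel, Literal("Ocean life", lang="en")))

# The mapping a cross-browsing service could follow between the two schemes.
g.add((scheme_a.marineBiology, SKOS.closeMatch, scheme_b.oceanLife))

print(g.serialize(format="turtle"))
```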

    HILT : High-Level Thesaurus Project. Phase IV and Embedding Project Extension : Final Report

    A major challenge facing the JISC domain (and, indeed, other domains beyond JISC) is ensuring that Higher Education (HE) and Further Education (FE) users of the JISC IE can find appropriate learning, research and information resources by subject search and browse, in an environment where most national and institutional service providers - usually for very good local reasons - use different subject schemes to describe their resources. Encouraging the use of standard terminologies in some services (institutional repositories, for example) is a related challenge. Under the auspices of the HILT project, JISC has been investigating mechanisms to assist the community with this problem through a JISC Shared Infrastructure Service that would help optimise the value obtained from expenditure on content and services by facilitating subject-search-based resource sharing to benefit users in the learning and research communities. The project has been through a number of phases, with work from earlier phases reported both in published work elsewhere and in project reports (see the project website: http://hilt.cdlr.strath.ac.uk/). HILT Phase IV had two elements: the core project, whose focus was 'to research, investigate and develop pilot solutions for problems pertaining to cross-searching multi-subject scheme information environments, as well as providing a variety of other terminological searching aids', and a short extension to encompass the pilot embedding of routines to interact with HILT M2M services in the user interfaces of various information services serving the JISC community. Both elements contributed to the developments summarised in this report.

    Doctor of Philosophy

    Over 40 years ago, the first computer simulation of a protein was reported: the atomic motions of a 58 amino acid protein were simulated for a few picoseconds. With today's supercomputers, simulations of large biomolecular systems with hundreds of thousands of atoms can reach biologically significant timescales. Through dynamics information, biomolecular simulations can provide new insights into molecular structure and function to support the development of new drugs or therapies. While recent advances in high-performance computing hardware and computational methods have enabled scientists to run longer simulations, they have also created new challenges for data management. Investigators need to use local and national resources to run these simulations and store their output, which can reach terabytes of data on disk. Because of the wide variety of computational methods and software packages available to the community, no standard data representation has been established to describe the computational protocol and the output of these simulations, preventing data sharing and collaboration. Data exchange is also limited by the lack of repositories and tools to summarize, index, and search biomolecular simulation datasets. In this dissertation, a common data model for biomolecular simulations is proposed to guide the design of future databases and APIs. The data model was then extended to a controlled vocabulary that can be used in the context of the semantic web. Two different approaches to data management are also proposed. The iBIOMES repository offers a distributed environment where input and output files are indexed via common data elements. The repository includes a dynamic web interface to summarize, visualize, search, and download published data. A simpler tool, iBIOMES Lite, was developed to generate summaries of datasets hosted at remote sites where user privileges and/or IT resources might be limited. These two informatics-based approaches to data management offer new means for the community to keep track of distributed, heterogeneous biomolecular simulation data and to create collaborative networks.
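
    As a rough illustration of what "common data elements" for a biomolecular simulation might look like, the sketch below defines a minimal record in Python; the field names are assumptions made for the example and do not reproduce the actual iBIOMES data model.

```python
# Minimal sketch of a simulation record described by common data elements.
# Field names are illustrative assumptions, not the actual iBIOMES data model.
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class SimulationRecord:
    title: str
    software: str            # e.g. the MD package used
    method: str              # e.g. "molecular dynamics"
    force_field: str
    timestep_fs: float
    simulated_time_ns: float
    atom_count: int
    files: List[str] = field(default_factory=list)  # input/output files to index

record = SimulationRecord(
    title="Example solvated protein run",
    software="ExampleMD",
    method="molecular dynamics",
    force_field="example-ff",
    timestep_fs=2.0,
    simulated_time_ns=100.0,
    atom_count=250000,
    files=["topology.top", "trajectory.dcd"],
)

# A repository could index or exchange such records as JSON documents.
print(json.dumps(asdict(record), indent=2))
```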

    HOME: Hybrid Ontology Mapping Evaluation Tool for Computer Science Curricula

    This paper presents a hybrid ontology mapping tool for evaluating the standard of computer science subjects against the Thailand Qualification Framework for Higher Education (TQF:HEd). The tool can improve the curriculum standards of universities in Thailand with higher accuracy while reducing processing time. Three ontologies have been designed: the course, TQF:HEd and the standard computer science curriculum. They were used to compare course contents by applying a combination of ontology mapping techniques (semantic-based, using an extended Wu & Palmer algorithm, and structure-based, using SKOS features). Tests with sample data show that the hybrid ontology mapping tool worked sufficiently well and can inform efforts towards curriculum improvement.
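
    For reference, the classic Wu & Palmer measure scores two concepts by the depth of their least common subsumer relative to their own depths, sim(c1, c2) = 2 * depth(lcs) / (depth(c1) + depth(c2)). The sketch below computes it over a tiny, invented taxonomy and does not reproduce the extended variant used by the tool.

```python
# Minimal sketch of the classic Wu & Palmer similarity on a toy taxonomy.
# The taxonomy and concept names are invented; the tool's extended variant is not reproduced.

# child -> parent edges of a small is-a hierarchy
parent = {
    "programming": "computing",
    "databases": "computing",
    "python": "programming",
    "sql": "databases",
    "computing": None,  # root
}

def path_to_root(concept):
    path = [concept]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path

def depth(concept):
    return len(path_to_root(concept))  # the root has depth 1

def least_common_subsumer(c1, c2):
    ancestors1 = set(path_to_root(c1))
    # walk up from c2 until we hit an ancestor of c1
    for node in path_to_root(c2):
        if node in ancestors1:
            return node
    return None

def wu_palmer(c1, c2):
    lcs = least_common_subsumer(c1, c2)
    return 2 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("python", "sql"))          # 2*1/(3+3) = 0.33...
print(wu_palmer("python", "programming"))  # 2*2/(3+2) = 0.8
```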

    User modeling for exploratory search on the Social Web. Exploiting social bookmarking systems for user model extraction, evaluation and integration

    Exploratory search is an information seeking strategy that extends beyond the query-and-response paradigm of traditional Information Retrieval models. Users browse through information to discover novel content and to learn more about the newly discovered things. Social bookmarking systems integrate well with exploratory search, because they allow one to search, browse, and filter social bookmarks. Our contribution is an exploratory tag search engine that merges social bookmarking with exploratory search. For this purpose, we have applied collaborative filtering to recommend tags to users. User models are an important prerequisite for recommender systems. We have produced a method to algorithmically extract user models from folksonomies, and an evaluation method to measure the viability of these user models for exploratory search. According to our evaluation, web-scale user modeling, which integrates user models from various services across the Social Web, can improve exploratory search. Within this thesis we also provide a method for user model integration. Our exploratory tag search engine implements the findings of our user model extraction, evaluation, and integration methods. It facilitates exploratory search on social bookmarks from Delicious and Connotea and publishes extracted user models as Linked Data.
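
    As a rough illustration of the collaborative-filtering idea mentioned above (not the thesis's actual algorithm), the sketch below builds user profiles from a toy folksonomy, finds the most similar users by cosine similarity over tag counts, and recommends tags the target user has not used yet; the data and parameters are invented.

```python
# Minimal sketch: user-based collaborative filtering over a toy folksonomy.
# Data and parameters are invented; this is not the thesis's actual algorithm.
from collections import Counter
from math import sqrt

# user -> bag of tags they have used on their bookmarks
folksonomy = {
    "alice": Counter({"python": 3, "semanticweb": 2, "rdf": 1}),
    "bob":   Counter({"python": 2, "linkeddata": 2, "rdf": 2}),
    "carol": Counter({"cooking": 4, "travel": 1}),
}

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend_tags(user, k=1, n=3):
    profile = folksonomy[user]
    # rank the other users by profile similarity
    neighbours = sorted(
        (u for u in folksonomy if u != user),
        key=lambda u: cosine(profile, folksonomy[u]),
        reverse=True,
    )[:k]
    # collect tags from the neighbours that the user has not used yet
    scores = Counter()
    for u in neighbours:
        for tag, count in folksonomy[u].items():
            if tag not in profile:
                scores[tag] += count
    return [tag for tag, _ in scores.most_common(n)]

print(recommend_tags("alice"))  # ['linkeddata'], taken from the most similar user, bob
```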

    Resource discovery in heterogeneous digital content environments

    The concept of 'resource discovery' is central to our understanding of how users explore, navigate, locate and retrieve information resources. This submission for a PhD by Published Works examines a series of 11 related works which explore topics pertaining to resource discovery, each demonstrating heterogeneity in their digital discovery context. The assembled works are prefaced by nine chapters which seek to review and critically analyse the contribution of each work, as well as provide contextualization within the wider body of research literature. A series of conceptual sub-themes is used to organize and structure the works and the accompanying critical commentary.
The thesis begins by examining issues in distributed discovery contexts through the study of collection level metadata (CLM), its application in 'information landscaping' techniques, and its relationship to the efficacy of federated item-level search tools. This research narrative continues but expands in the later works and commentary to consider the application of Knowledge Organization Systems (KOS), particularly within Semantic Web and machine interface contexts, with investigations of semantically aware terminology services in distributed discovery. The necessary modelling of data structures to support resource discovery - and its associated functionalities within digital libraries and repositories - is then considered within the novel context of technology-supported curriculum design repositories, where questions of human-computer interaction (HCI) are also examined. The final works studied as part of the thesis are those which investigate and evaluate the efficacy of open repositories in exposing knowledge commons to resource discovery via web search agents. Through the analysis of the collected works it is possible to identify a unifying theory of resource discovery, with the proposed concept of (meta)data alignment described and presented with a visual model. This analysis assists in the identification of a number of topics worthy of further research, but it also highlights an incremental transition by the present author, from using research to inform the development of technologies designed to support or facilitate resource discovery, particularly at a 'meta' level, to the application of specific technologies to address resource discovery issues in a local context. Despite this variation, the research narrative has remained focussed on topics surrounding resource discovery in heterogeneous digital content environments and is noted as having generated a coherent body of work. Separate chapters are used to consider the methodological approaches adopted in each work and the contribution made to research knowledge and professional practice.

    A Semantic Framework for Declarative and Procedural Knowledge

    In any scientific domain, the full set of data and programs has reached an '-ome' status, i.e. it has grown massively. The original article on the Semantic Web describes the evolution of a Web of actionable information, i.e. information derived from data through a semantic theory for interpreting the symbols. In a Semantic Web, methodologies are studied for describing, managing and analyzing both resources (domain knowledge) and applications (operational knowledge) - without any restriction on what and where they are respectively suitable and available in the Web - as well as for realizing automatic, semantic-driven workflows of Web applications elaborating Web resources. This thesis attempts to provide a synthesis among Semantic Web technologies, ontology research, and knowledge and workflow management. This synthesis is represented by Resourceome, a Web-based framework consisting of two components which strictly interact with each other: an ontology-based, domain-independent knowledge management system (Resourceome KMS), relying on a knowledge model in which resource and operational knowledge are contextualized in any domain, and a semantic-driven workflow editor, manager and agent-based execution system (Resourceome WMS). The Resourceome KMS and the Resourceome WMS are exploited to realize semantic-driven formulations of workflows, in which activities are semantically linked to every involved resource. On the whole, by combining domain ontologies and workflow techniques, Resourceome provides a flexible organization of domain and operational knowledge, a powerful engine for semantic-driven workflow composition, and a distributed, automatic and transparent environment for workflow execution.

    Linked Data based Health Information Representation, Visualization and Retrieval System on the Semantic Web

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. To better facilitate health information dissemination, flexible ways to represent, query and visualize health data are becoming increasingly important. Semantic Web technologies, which provide a common framework allowing data to be shared and reused between applications, can be applied to the management of health data. Linked open data - a semantic web approach to publishing and linking heterogeneous data - allows not only humans but also machines to browse data in an open-ended way. Through a use case of World Health Organization HIV data for sub-Saharan Africa, which is severely affected by the HIV epidemic, this thesis built a linked data based health information representation, querying and visualization system. All the data was represented in RDF and interlinked with other related datasets already published on the Linked Data cloud. Overall, the system holds more than 21,000 triples and provides a SPARQL endpoint, where users can download and reuse the data, and a SPARQL query interface, where users can pose different types of queries and retrieve the results. It also has a visualization interface where users can visualize the SPARQL results with a tool of their preference. Users who are not familiar with SPARQL queries can use the linked data search engine interface to search and browse the data. This system demonstrates that current linked open data technologies have great potential to represent heterogeneous health data in a flexible and reusable manner, and that they can serve intelligent queries that support decision-making. However, to get the best from these technologies, improvements are needed both in triple store performance and in domain-specific ontological vocabularies.
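
    To make the represent-then-query workflow concrete, the sketch below builds a few RDF triples about hypothetical HIV prevalence observations and runs a SPARQL query over them locally with rdflib; the vocabulary, property names and figures are invented and are not the thesis's actual dataset or endpoint.

```python
# Minimal sketch: representing health observations as RDF and querying them with SPARQL.
# The vocabulary, figures and URIs are invented; this is not the thesis's dataset or endpoint.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/health/")

g = Graph()
for country, year, prevalence in [("Kenya", 2010, 6.2), ("Uganda", 2010, 7.0)]:
    obs = EX[f"obs/{country}/{year}"]
    g.add((obs, RDF.type, EX.HIVPrevalenceObservation))
    g.add((obs, EX.country, Literal(country)))
    g.add((obs, EX.year, Literal(year, datatype=XSD.gYear)))
    g.add((obs, EX.prevalencePercent, Literal(prevalence, datatype=XSD.decimal)))

query = """
PREFIX ex: <http://example.org/health/>
SELECT ?country ?prevalence WHERE {
    ?obs a ex:HIVPrevalenceObservation ;
         ex:country ?country ;
         ex:prevalencePercent ?prevalence .
}
ORDER BY DESC(?prevalence)
"""

# Print countries ordered by the invented prevalence figures.
for row in g.query(query):
    print(row.country, row.prevalence)
```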

    An Interoperability Platform Enabling Reuse of Electronic Health Records for Signal Verification Studies


    Search improvement within the geospatial web in the context of spatial data infrastructures

    The work developed in this doctoral thesis demonstrates that search in the context of Spatial Data Infrastructures can be improved by applying techniques and good practices from other scientific communities, especially the Web and Semantic Web communities (for example, Linked Data). The use of semantic descriptions and of approaches based on the content published by the geospatial community can help in searching for information about geographic phenomena, and for geospatial resources in general. The work begins with the analysis of an approach to improving the search for geospatial entities from the perspective of traditional geocoding. The composite geocoding architecture proposed in this work ensures improved geocoding results thanks to the use of different providers of geographic information. In this approach, the use of structural design patterns and ontologies enables an architecture that is advanced in terms of extensibility, flexibility and adaptability. In addition, an architecture based on geocoding service selection enables the development of a methodology for georeferencing various types of geographic information (for example, addresses or points of interest). Next, two representative applications that require additional semantic characterisation of geospatial resources are presented. The approach proposed in this work uses content-based heuristics to sample a set of geospatial resources. The first part is devoted to the idea of abstracting a geographic phenomenon from its spatial definition. The research shows that Semantic Web good practices can be reused within a Spatial Data Infrastructure to describe the geospatial services standardised by the Open Geospatial Consortium by means of geo-identifiers (that is, by means of the entities of a geographic ontology). The second part of this chapter breaks down the architecture and components of a geoprocessing service for the automatic identification of orthoimagery offered through a standard map publication service (that is, services following the OGC Web Map Service specification). As a result of this work, a method has been proposed for identifying which of the maps offered by a Web Map Service are orthoimages. The work then turns to the analysis of issues related to the creation of metadata for Web resources in the context of the geographic domain. It proposes an architecture for the automatic generation of geographic knowledge about Web resources, for which it was necessary to develop a method for estimating the geographic coverage of Web pages. The proposed heuristics are based on the content published by providers of geographic information. The prototype developed is capable of generating metadata. The generated model contains the recommended minimum set of elements required by a catalogue following the OGC Catalogue Service for the Web specification, the standard recommended by different Spatial Data Infrastructures (for example, the Infrastructure for Spatial Information in the European Community (INSPIRE)). In addition, this study determines some characteristics of the current Geospatial Web.
First, it offers some characteristics of the market of providers of geographic information Web resources. This study reveals some practices of the geospatial community in the production of metadata for Web pages, in particular the lack of geographic metadata. All of the above forms the basis for studying the question of supporting non-expert users in searching for Geospatial Web resources. The search engine dedicated to the Geospatial Web proposed in this work is able to build on an existing search engine, and it supports exploratory search of the geospatial resources discovered on the Web. The precision and recall experiment has shown that the prototype developed in this work performs at least as well as the remote search engine. A study dedicated to the usability of the system indicates that even non-experts can complete a search task with satisfactory results.
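
    As a loose illustration of the kind of check such a geoprocessing service might perform (not the method developed in the thesis), the sketch below uses OWSLib to read a WMS GetCapabilities document and flags layers whose titles or abstracts suggest orthoimagery; the endpoint URL and the keyword heuristic are assumptions made for the example.

```python
# Minimal sketch: flagging candidate orthoimagery layers advertised by an OGC Web Map Service.
# The endpoint URL and the keyword heuristic are illustrative assumptions, not the thesis's method.
from owslib.wms import WebMapService

# Hypothetical endpoint; replace with a real WMS GetCapabilities URL.
WMS_URL = "http://example.org/geoserver/wms"
KEYWORDS = ("ortho", "orthophoto", "orthoimage", "aerial imagery")

def candidate_ortho_layers(url):
    wms = WebMapService(url, version="1.3.0")
    candidates = []
    for name, layer in wms.contents.items():
        # Concatenate the layer title and abstract (either may be missing).
        text = " ".join(filter(None, [layer.title, layer.abstract])).lower()
        if any(keyword in text for keyword in KEYWORDS):
            candidates.append((name, layer.title))
    return candidates

if __name__ == "__main__":
    for name, title in candidate_ortho_layers(WMS_URL):
        print(name, "->", title)
```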