13 research outputs found

    User-centered semantic dataset retrieval

    Finding relevant research data is an increasingly important but time-consuming task in daily research practice. Several studies report difficulties in dataset search: for example, scholars retrieve only partially pertinent data, and important information cannot be displayed in the user interface. Overcoming these problems has motivated a number of research efforts in computer science, such as text mining and semantic search. In particular, the emergence of the Semantic Web opens a variety of novel research perspectives. Motivated by these challenges, the overall aim of this work is to analyze the current obstacles in dataset search and to propose and develop a novel semantic dataset search. The studied domain is biodiversity research, a domain that explores the diversity of life, habitats and ecosystems. This thesis has three main contributions: (1) We evaluate the current situation in dataset search in a user study, and we compare a semantic search with a classical keyword search to explore the suitability of Semantic Web technologies for dataset search. (2) We generate a question corpus and develop an information model to determine which scientific topics scholars in biodiversity research are interested in. Moreover, we analyze the gap between current metadata and scholarly search interests, and we explore whether metadata and user interests match. (3) We propose and develop an improved dataset search based on three components: (A) a text mining pipeline that enriches metadata and queries with semantic categories and URIs, (B) a retrieval component with a semantic index over categories and URIs, and (C) a user interface that enables searching within categories as well as searching across further hierarchical relations. Following user-centered design principles, we ensure user involvement through various user studies during the development process.
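
    The component structure in (3) lends itself to a compact illustration. Below is a minimal Python sketch of the enrichment-and-retrieval idea: free-text metadata and queries are annotated with semantic categories and URIs, and retrieval matches on those annotations rather than on raw keywords. The vocabulary, category names and URIs are hypothetical placeholders, not the thesis's actual pipeline.

        # Hypothetical term vocabulary: term -> (category, URI).
        VOCAB = {
            "beech": ("Organism", "http://example.org/taxon/Fagus"),
            "soil ph": ("Environment", "http://example.org/env/soil_pH"),
        }

        def annotate(text):
            """Map recognised terms in free text to their URIs."""
            text = text.lower()
            return {uri for term, (_, uri) in VOCAB.items() if term in text}

        datasets = [
            {"id": "ds1", "title": "Soil pH under beech stands"},
            {"id": "ds2", "title": "River discharge measurements"},
        ]

        # Semantic index: URI -> ids of the datasets annotated with it.
        index = {}
        for ds in datasets:
            for uri in annotate(ds["title"]):
                index.setdefault(uri, set()).add(ds["id"])

        # A query is enriched the same way and matched against the index.
        query_uris = annotate("vegetation under beech")
        hits = set().union(*(index.get(u, set()) for u in query_uris))
        print(hits)  # {'ds1'}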

    Dataset Search: A lightweight, community-built tool to support research data discovery

    Objective: Promoting the discovery of research data helps archived data realize its potential to advance knowledge. Montana State University (MSU) Dataset Search aims to support discovery and reporting for research datasets created by researchers at the institution. Methods and Results: The Dataset Search application consists of five core features: a streamlined browse and search interface, a data model geared to dataset discovery, a harvesting process for finding and vetting datasets stored in external repositories, an administrative interface for managing the creation, ingest, and maintenance of dataset records, and a dataset visualization interface that demonstrates how data are produced and used by MSU researchers. Conclusion: The Dataset Search application is designed to be easily customized and implemented by other institutions. Indexes like Dataset Search can improve search and discovery for content archived in data repositories, thereby amplifying the impact and benefits of archived data.
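
    The harvesting feature can be pictured with a short sketch. The following Python is a minimal illustration of a harvest-and-vet step, assuming a hypothetical external repository API that returns JSON records; the endpoint, response shape, and vetting rule are invented for illustration and are not the application's actual interface.

        import requests

        REPO_URL = "https://repo.example.org/api/datasets"  # hypothetical endpoint

        def harvest(affiliation="Montana State University"):
            """Pull candidate records and stage plausible matches for review."""
            records = requests.get(REPO_URL, params={"q": affiliation}, timeout=30).json()
            staged = []
            for rec in records:
                # Vet: keep records with a matching affiliation and a DOI.
                if affiliation in rec.get("affiliations", []) and rec.get("doi"):
                    staged.append({"doi": rec["doi"], "title": rec["title"],
                                   "status": "pending_review"})
            return staged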

    Dataset search: a survey

    Generating value from data requires the ability to find, access and make sense of datasets. There are many efforts underway to encourage data sharing and reuse, from scientific publishers asking authors to submit data alongside manuscripts to data marketplaces, open data portals and data communities. Google recently beta-released a search service for datasets, which allows users to discover data stored in various online repositories via keyword queries. These developments foreshadow an emerging research field around dataset search or retrieval that broadly encompasses frameworks, methods and tools that help match a user's data need against a collection of datasets. Here, we survey the state of the art of research and commercial systems in dataset retrieval. We identify what makes dataset search a research field in its own right, with unique challenges and methods, and highlight open problems. We look at approaches and implementations from related areas dataset search is drawing upon, including information retrieval, databases, and entity-centric and tabular search, in order to identify possible paths to resolve these open problems as well as immediate next steps that will take the field forward.

    A Tight Coupling Context-Based Framework for Dataset Discovery

    Discovering datasets relevant to research goals is at the core of many analysis tasks that aim to prove proposed hypotheses and theories. In particular, researchers in the Artificial Intelligence (AI) and Machine Learning (ML) domains, where relevant datasets are essential for precise predictions, have identified how the absence of methods to discover quality datasets leads to delays and, in many cases, the failure of ML projects. Many research reports have pointed out the absence of dataset discovery methods that fill the gap between analysis requirements and available datasets, and have provided statistics showing how this hinders the analysis process, with completion rates of less than 2%. To the best of our knowledge, removing these inadequacies remains “an open problem of great importance”. It is in this context that this thesis contributes a context-based framework that tightly couples dataset providers and data analytics teams. Through this framework, dataset providers publish the metadata descriptions of their datasets, and analysts formulate and submit rich queries with goal specifications and quality requirements. The dataset search engine component couples the query specification with the metadata specifications of datasets through formal contextualized semantic matching and quality-based ranking, and discovers all datasets that are relevant to the analyst's requirements. The thesis gives a proof-of-concept prototype implementation and reports on its performance and efficiency through a case study.
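
    The matching-and-ranking idea can be made concrete with a small sketch. The Python below blends a crude term-overlap score (a stand-in for the thesis's formal contextualized semantic matching) with a provider-published quality indicator; all field names and the weighting are hypothetical.

        def match_score(goal_terms, metadata_terms):
            """Jaccard overlap as a simple stand-in for semantic matching."""
            q, m = set(goal_terms), set(metadata_terms)
            return len(q & m) / len(q | m) if q | m else 0.0

        def rank(query, datasets, quality_weight=0.4):
            """Blend relevance and quality; highest score first."""
            scored = []
            for ds in datasets:
                score = (1 - quality_weight) * match_score(query["goal_terms"],
                                                           ds["metadata_terms"])
                score += quality_weight * ds["quality"]  # quality in [0, 1]
                scored.append((round(score, 3), ds["id"]))
            return sorted(scored, reverse=True)

        query = {"goal_terms": ["image", "classification", "labelled"]}
        datasets = [
            {"id": "d1", "metadata_terms": ["image", "labelled", "cats"], "quality": 0.9},
            {"id": "d2", "metadata_terms": ["text", "sentiment"], "quality": 0.7},
        ]
        print(rank(query, datasets))  # d1 ranks above d2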

    Investigating the User Experience of Electronic Data Capture Systems: Perspectives of Clinical Research Coordinators

    The purpose of this study is to explore the user experiences (UX) of clinical research coordinators (CRCs) with electronic data capture (EDC) systems. Fifteen CRCs were recruited and participated in semi-structured interviews. Interview transcripts were coded and analyzed thematically. Themes were further contextualized within a theoretical framework comprising the Technology Acceptance Model (TAM), the Task-Technology Fit (TTF) Model, and concepts from usability theory. Themes emerging from the data included EDC usability and task-technology fit challenges, and the identification of organizational and technological barriers to EDC acceptance and performance. This study contributes to the literature by evaluating EDC systems for the first time from the perspective of a previously uninvestigated user group, CRCs. Future work will expand the results of this study by quantitatively evaluating EDC usability and informing the design of EDC systems. Master of Science in Information Science.

    Front-Line Physicians' Satisfaction with Information Systems in Hospitals

    Day-to-day operations management in hospital units is difficult due to continuously varying situations, the several actors involved and the vast number of information systems in use. The aim of this study was to describe front-line physicians' satisfaction with the existing information systems needed to support day-to-day operations management in hospitals. A cross-sectional survey was used, and data selected by stratified random sampling were collected in nine hospitals. Data were analyzed with descriptive and inferential statistical methods. The response rate was 65% (n = 111). The physicians reported that information systems support their decision making to some extent, but they do not improve access to information, nor are they tailored for physicians. The respondents also reported that they need to use several information systems to support decision making and that they would prefer a single information system to access important information. Improved information access would better support physicians' decision making and has the potential to improve the quality of decisions and speed up the decision-making process. Peer reviewed.

    Développement de méthodes d'intégration de données biologiques à l'aide d'Elasticsearch

    In biology, data appear at all stages of projects, from study preparation to the publication of results. However, many aspects limit their use. The volume, the speed of production and the variety of data produced have brought biology into an era dominated by the phenomenon of "Big Data". Since 1980, and in order to organize the generated data, the scientific community has produced numerous data repositories. These repositories can contain data on various biological elements such as genes, transcripts, proteins and metabolites, but also on other concepts such as toxins, biological vocabulary and scientific publications. Storing all of these data requires robust and durable hardware and software infrastructures. To date, due to the diversity of biology and of the computer architectures in place, there is no centralized repository containing all the public databases in biology. The many existing repositories are scattered and generally self-managed by the research teams that published them. With the rapid evolution of information technology, data sharing interfaces have also evolved, from file transfer protocols to data query interfaces. As a result, access to the datasets dispersed across the many repositories is disparate. This diversity of access requires the support of automation tools for data retrieval. When multiple data sources are required in a study, the data flow follows several steps, the first of which is data integration, combining multiple data sources under a unified access interface. It is followed by various exploitations such as exploration through scripts or visualizations, transformations and analyses. The literature has shown numerous initiatives of computerized systems for sharing and standardizing data. However, the complexity induced by these multiple systems continues to constrain the dissemination of biological data. Indeed, the ever-increasing production of data, their management and the many technical aspects involved hinder researchers who want to exploit these data and make them available. The hypothesis tested in this thesis is that the broad exploitation of data can be modernized with recent tools and methods, in particular a tool named Elasticsearch. This tool should fill the needs already identified in the literature, but should also allow more recent considerations to be addressed, such as easy data sharing. Building an architecture based on this data management tool makes it possible to share data according to interoperability standards. Data dissemination according to these standards can be applied to biological data mining operations as well as to data transformation and analysis. The results presented in my thesis are based on tools that can be used by all researchers, in biology but also in other fields. However, applying and testing them in various other fields remains to be done in order to identify their limits more precisely.
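
    As a concrete illustration of the architecture's core operation, the sketch below indexes one biological record and queries it with the official Elasticsearch Python client. It assumes a local Elasticsearch 8.x node; the index name, fields and example record are hypothetical and much simpler than the schemas discussed in the thesis.

        from elasticsearch import Elasticsearch

        es = Elasticsearch("http://localhost:9200")

        # Index a biological record under a unified, queryable schema.
        es.index(
            index="genes",
            id="ENSG00000139618",
            document={
                "symbol": "BRCA2",
                "species": "Homo sapiens",
                "description": "DNA repair associated",
                "source": "Ensembl",
            },
        )

        # Full-text query over the unified index.
        hits = es.search(index="genes", query={"match": {"description": "DNA repair"}})
        for hit in hits["hits"]["hits"]:
            print(hit["_source"]["symbol"], hit["_score"])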

    Enriching information extraction pipelines in clinical decision support systems

    Programa Oficial de Doutoramento en Tecnoloxías da Información e as Comunicacións. 5032V01
    Multicentre health studies are important to increase the impact of medical research findings due to the number of subjects that they are able to engage. To simplify the execution of these studies, the data-sharing process should be effortless, for instance, through the use of interoperable databases. However, achieving this interoperability is still an ongoing research topic, namely due to data governance and privacy issues. In the first stage of this work, we propose several methodologies to optimise the harmonisation pipelines of health databases. This work focused on harmonising heterogeneous data sources into a standard data schema, namely the OMOP CDM, which has been developed and promoted by the OHDSI community. We validated our proposal using datasets of Alzheimer’s disease patients from distinct institutions. In the following stage, aiming to enrich the information stored in OMOP CDM databases, we investigated solutions to extract clinical concepts from unstructured narratives, using information retrieval and natural language processing techniques. The validation was performed through datasets provided in scientific challenges, namely the National NLP Clinical Challenges (n2c2). In the final stage, we aimed to simplify the protocol execution of multicentre studies by proposing novel solutions for profiling, publishing and facilitating the discovery of databases. Some of the developed solutions are currently being used in three European projects aiming to create federated networks of health databases across Europe.
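
    The first-stage harmonisation can be illustrated with a toy mapping into OMOP CDM-style rows. The source record layout and the concept lookup tables below are invented for illustration; a real OMOP ETL resolves concept_ids through the curated OHDSI vocabularies rather than hard-coded dictionaries.

        # Hypothetical source record from one institution.
        source = {"pid": "A-102", "sex": "F", "dx": "alzheimer disease",
                  "dx_date": "2019-03-14"}

        GENDER_CONCEPTS = {"F": 8532, "M": 8507}            # standard OMOP gender concepts
        CONDITION_CONCEPTS = {"alzheimer disease": 378419}  # illustrative concept_id

        # OMOP CDM person row.
        person = {
            "person_id": 1,
            "gender_concept_id": GENDER_CONCEPTS[source["sex"]],
        }
        # OMOP CDM condition_occurrence row linked to the person.
        condition_occurrence = {
            "person_id": person["person_id"],
            "condition_concept_id": CONDITION_CONCEPTS[source["dx"]],
            "condition_start_date": source["dx_date"],
        }
        print(person, condition_occurrence)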

    Entwicklung eines Benutzermodells zur nutzeradaptiven Suche von Datensätzen

    Data portals make a large number of datasets available, and finding the dataset one is looking for poses a challenge. The aim of this work is to guide the user more directly to the desired dataset. The chosen method is to build a user model in order to make the search user-adaptive. As one possible adaptation, this work considers supporting the user in the use of search filters. By predicting future interactions, search filter suggestions can be generated. To this end, a model was trained on previous user sessions using partially ordered sequence rules. The model was evaluated on an available dataset, the logs of a geodata portal. The search filters of the next step could be predicted with an accuracy of 51%, a clear improvement over a random prediction under the same conditions, which achieved 11%. The results show that developing a user model is a suitable approach for predicting search filters, and it can be applied to improve search in data portals.
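
    The rule-based prediction can be sketched compactly. The Python below mines simple "filter A is later followed by filter B" counts from past sessions and suggests the most frequent successors; the session data and filter names are hypothetical, and this frequency count is a simplification of the partially ordered sequence rules used in the thesis.

        from collections import Counter, defaultdict

        # Hypothetical recorded sessions: ordered lists of applied filters.
        sessions = [
            ["format:geotiff", "theme:elevation", "license:open"],
            ["format:geotiff", "theme:elevation"],
            ["theme:elevation", "license:open"],
        ]

        # Count which filters tend to follow each filter within a session.
        follows = defaultdict(Counter)
        for session in sessions:
            for i, current in enumerate(session):
                for nxt in session[i + 1:]:
                    follows[current][nxt] += 1

        def suggest(current_filter, k=2):
            """Return the k most frequent successors of current_filter."""
            return [f for f, _ in follows[current_filter].most_common(k)]

        print(suggest("format:geotiff"))  # ['theme:elevation', 'license:open']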

    EJP-CONCERT. D3.7 Second joint roadmap for radiation protection research

    EJP-CONCERT Work Package 3, Deliverable 3.7