1,384 research outputs found

    A Simulator for Concept Detector Output

    Concept-based video retrieval is a promising search paradigm because it is fully automated and it investigates the fine-grained content of a video, which is normally not captured by human annotations. Concepts are captured by so-called concept detectors. However, since these detectors do not yet show sufficient performance, evaluating retrieval systems that are built on top of the detector output is difficult. In this report we describe a software package which generates simulated detector output for a specified performance level. Afterwards, this output can be used to execute a search run and ultimately to evaluate the performance of the proposed retrieval method, which is normally done through comparison to a baseline. The probabilistic model of a detector consists of two Gaussians, one for the positive and one for the negative class. The parameters for the simulation are thus the two means and standard deviations plus the prior probability of the concept in the dataset.
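
    The following Python sketch (NumPy assumed) illustrates how such a two-Gaussian simulator might look; the function and parameter names are hypothetical and not the interface of the described software package.

        import numpy as np

        def simulate_detector_output(n_shots, prior, mu_pos, sigma_pos, mu_neg, sigma_neg, seed=None):
            """Simulate concept-detector confidence scores for n_shots video shots.

            Each shot's ground-truth label is drawn from a Bernoulli(prior);
            positive shots get scores from N(mu_pos, sigma_pos^2) and negative
            shots from N(mu_neg, sigma_neg^2), mirroring the two-Gaussian model above.
            """
            rng = np.random.default_rng(seed)
            labels = rng.random(n_shots) < prior  # True = shot contains the concept
            scores = np.where(
                labels,
                rng.normal(mu_pos, sigma_pos, n_shots),
                rng.normal(mu_neg, sigma_neg, n_shots),
            )
            return labels, scores

        # Example: a rare concept (prior 0.05) detected with moderate quality.
        labels, scores = simulate_detector_output(
            n_shots=10_000, prior=0.05,
            mu_pos=0.7, sigma_pos=0.15,
            mu_neg=0.3, sigma_neg=0.15,
            seed=42,
        )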

    Bottom-Up Modeling of Permissions to Reuse Residual Clinical Biospecimens and Health Data

    Consent forms serve as evidence of permissions granted by patients for clinical procedures. As the recognized value of biospecimens and health data increases, many clinical consent forms also seek permission from patients or their legally authorized representative to reuse residual clinical biospecimens and health data for secondary purposes, such as research. Such permissions are also granted by the government, which regulates how residual clinical biospecimens may be reused with or without consent. There is a need for increasingly capable information systems to facilitate discovery, access, and responsible reuse of residual clinical biospecimens and health data in accordance with these permissions. Semantic web technologies, especially ontologies, hold great promise as infrastructure for scalable, semantically interoperable approaches in healthcare and research. While there are many published ontologies for the biomedical domain, there is not yet ontological representation of the permissions relevant for reuse of residual clinical biospecimens and health data. The Informed Consent Ontology (ICO), originally designed for representing consent in research procedures, may already contain core classes necessary for representing clinical consent processes. However, formal evaluation is needed to make this determination and to extend the ontology to cover the new domain. This dissertation focuses on identifying the necessary information required for facilitating responsible reuse of residual clinical biospecimens and health data, and evaluating its representation within ICO. The questions guiding these studies include: 1. What is the necessary information regarding permissions for facilitating responsible reuse of residual clinical biospecimens and health data? 2. How well does the Informed Consent Ontology represent the identified information regarding permissions and obligations for reuse of residual clinical biospecimens and health data? We performed three sequential studies to answer these questions. First, we conducted a scoping review to identify regulations and norms that bear authority or give guidance over reuse of residual clinical biospecimens and health data in the US, the permissions by which reuse of residual clinical biospecimens and health data may occur, and key issues that must be considered when interpreting these regulations and norms. Second, we developed and tested an annotation scheme to identify permissions within clinical consent forms. Lastly, we used these findings as source data for bottom-up modelling and evaluation of ICO for representation of this new domain. We found considerable overlap in classes already in ICO and those necessary for representing permissions to reuse residual clinical biospecimens and health data. However, we also identified more than fifty classes that should be added to or imported into ICO. These efforts provide a foundation for comprehensively representing permissions to reuse residual clinical biospecimens and health data. Such representation fills a critical gap for developing applications which safeguard biospecimen resources and enable querying based on their permissions for use. By modeling information about permissions in an ontology, the heterogeneity of these permissions at a range of levels (e.g., federal regulations, consent forms) can be richly represented using entity-relationship links and embedded rules of inference and inheritance. 
Furthermore, by developing this content in ICO, missing content will be added to the Open Biological and Biomedical Ontology (OBO) Foundry, enabling use alongside other widely adopted ontologies and providing a valuable resource for biospecimen and information management. These methods may also serve as a model for domain experts to interact with ontology development communities to improve ontologies and address gaps which hinder successful uptake.
    PhD dissertation, Nursing, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162937/1/eliewolf_1.pd
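
    As a rough illustration of how such a permission could be asserted in an ontology-backed system, the Python sketch below uses rdflib to declare a placeholder class and an instance; the IRIs, class names, and the parent ICO term are assumptions made for illustration, not actual ICO or OBO Foundry content.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import OWL, RDF, RDFS

        # Hypothetical namespaces; real ICO term IRIs may differ.
        ICO = Namespace("http://purl.obolibrary.org/obo/ICO_")
        EX = Namespace("http://example.org/consent/")

        g = Graph()
        g.bind("ico", ICO)
        g.bind("ex", EX)

        # Placeholder class for a permission to reuse residual clinical biospecimens,
        # declared as a subclass of an (assumed) ICO permission term.
        g.add((EX.PermissionToReuseResidualBiospecimen, RDF.type, OWL.Class))
        g.add((EX.PermissionToReuseResidualBiospecimen, RDFS.subClassOf, ICO["0000001"]))
        g.add((EX.PermissionToReuseResidualBiospecimen, RDFS.label,
               Literal("permission to reuse residual clinical biospecimens")))

        # An individual consent form granting that permission.
        g.add((EX.consentForm42, RDF.type, EX.PermissionToReuseResidualBiospecimen))

        print(g.serialize(format="turtle"))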

    Beyond the Matrix: Repository Services for Qualitative Data

    The Qualitative Data Repository (QDR) provides infrastructure and guidance for the sharing and reuse of digital data used in qualitative and multi-method social inquiry. In this paper we describe some of the repository's early experiences providing services developed specifically for the curation of qualitative research data. We focus on QDR's efforts to address two key challenges for qualitative data sharing. The first challenge concerns constraints on data sharing intended to protect human participants and their identities and to comply with copyright laws. The second set of challenges arises from the unique characteristics of qualitative data and their relationship to the published text. We describe a novel method of annotating scholarly publications, resulting in a "transparency appendix" that allows the sharing of such "granular data" (Moravcsik et al., 2013). We conclude by describing the future directions of QDR's services for qualitative data archiving, sharing, and reuse.
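
    As a loose illustration only, the Python sketch below shows the kind of record a "transparency appendix" entry might hold, linking an annotated passage in a publication to its underlying qualitative source; the field names are assumptions, not QDR's actual annotation schema.

        from dataclasses import dataclass, field

        @dataclass
        class TransparencyAnnotation:
            """Hypothetical entry linking a published passage to its granular source data."""
            publication_doi: str
            page: int
            quoted_passage: str
            source_citation: str                 # e.g. archival document or interview transcript
            analytic_note: str = ""              # how the source supports the claim
            access_restrictions: list[str] = field(default_factory=list)

        # Placeholder values for illustration.
        entry = TransparencyAnnotation(
            publication_doi="10.0000/example",
            page=12,
            quoted_passage="...",
            source_citation="Interview with official A, 2012 (restricted)",
            analytic_note="Supports the claim made in the annotated passage.",
            access_restrictions=["IRB approval required"],
        )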

    Semantic Model Alignment for Business Process Integration

    Business process models describe an enterprise's way of conducting business and in this form are the basis for shaping the organization and for engineering the appropriate supporting or even enabling IT. A major task in working with models is therefore their analysis and comparison for the purpose of aligning them. As models can differ semantically not only in the modeling languages used, but even more so in the way the natural language for labeling model elements has been applied, correctly identifying the intended meaning of a legacy model is a non-trivial task that thus far has only been solved by humans. In particular during reorganizations, the set-up of B2B collaborations, or mergers and acquisitions, the semantic analysis of models of different origin that need to be consolidated is a manual effort that is not only tedious and error-prone but also time-consuming, costly, and often repetitive. To facilitate automation of this task by means of IT, this thesis presents the new method of Semantic Model Alignment. Its application makes it possible to extract and formalize the semantics of models, relating them based on the modeling language used and determining similarities based on the natural language used in model element labels. The resulting alignment supports model-based semantic business process integration. The research follows a design-science-oriented approach, and the method has been developed together with all its enabling artifacts. These results were published as the research progressed and are presented in this thesis through a selection of peer-reviewed publications that comprehensively describe the various aspects.
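
    A minimal sketch of the label-based similarity idea, assuming a simple token-overlap (Jaccard) measure in place of the thesis's actual technique:

        import re

        def tokenize(label: str) -> set[str]:
            """Lower-case a model element label and split it into word tokens."""
            return set(re.findall(r"[a-z0-9]+", label.lower()))

        def label_similarity(label_a: str, label_b: str) -> float:
            """Jaccard similarity of the token sets of two element labels."""
            a, b = tokenize(label_a), tokenize(label_b)
            return len(a & b) / len(a | b) if (a | b) else 0.0

        def align(elements_a: list[str], elements_b: list[str], threshold: float = 0.3):
            """Propose alignment candidates between the element labels of two models."""
            return [
                (ea, eb, round(label_similarity(ea, eb), 2))
                for ea in elements_a
                for eb in elements_b
                if label_similarity(ea, eb) >= threshold
            ]

        # Toy example: elements from two process models of different origin.
        model_a = ["Check customer order", "Ship goods", "Send invoice"]
        model_b = ["Order check", "Invoice sending", "Goods shipment"]
        print(align(model_a, model_b))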

    The uncertain representation ranking framework for concept-based video retrieval

    Concept-based video retrieval often relies on imperfect and uncertain concept detectors. We propose a general ranking framework to define effective and robust ranking functions by explicitly addressing detector uncertainty. It can cope with multiple concept-based representations per video segment, and it allows the reuse of effective text retrieval functions that are defined on similar representations. The final ranking status value is a weighted combination of two components: the expected score over the possible scores, which represents the risk-neutral choice, and the scores' standard deviation, which represents the risk or opportunity that the score for the actual representation is higher. The framework consistently improves search performance in the shot retrieval task and the segment retrieval task over several baselines in five TRECVid collections and in two collections which use simulated detectors of varying performance.
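
    A minimal sketch of the described combination, assuming a ranking status value of the form E[score] + b * stddev(score), where b weights the risk/opportunity component; the names are illustrative:

        import statistics

        def ranking_status_value(scores, risk_weight=0.5):
            """Combine the scores of several possible concept-based representations
            of one video segment into a single ranking status value: the expected
            score (risk-neutral part) plus a weighted standard deviation
            (risk/opportunity part)."""
            expected = statistics.mean(scores)
            spread = statistics.stdev(scores) if len(scores) > 1 else 0.0
            return expected + risk_weight * spread

        # Scores of one segment under three alternative detector-based representations.
        print(ranking_status_value([0.42, 0.55, 0.61], risk_weight=0.25))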

    Visual exploration of semantic-web-based knowledge structures

    Humans have a curious nature and seek a better understanding of the world. Data, information, and knowledge became assets of our modern society through the information technology revolution in the form of the internet. However, with the growing size of accumulated data, new challenges emerge, such as searching and navigating these large collections of data, information, and knowledge. Current developments in academic and industrial contexts target these challenges using Semantic Web technologies. The Semantic Web is an extension of the Web and provides machine-readable representations of knowledge for various domains. These machine-readable representations allow intelligent machine agents to understand the meaning of the data and information, and they enable additional inference of new knowledge. Generally, the Semantic Web is designed for information exchange and processing and does not focus on presenting such semantically enriched data to humans. Visualizations support exploration, navigation, and understanding of data by exploiting humans' ability to comprehend complex data through visual representations. In the context of Semantic-Web-based knowledge structures, various visualization methods and tools are available, and new ones are being developed every year. However, suitable visualizations are highly dependent on individual use cases and targeted user groups. In this thesis, we investigate visual exploration techniques for Semantic-Web-based knowledge structures by addressing the following challenges: i) how to engage various user groups in modeling such semantic representations; ii) how to facilitate understanding using customizable visual representations; and iii) how to ease the creation of visualizations for various data sources and different use cases. The achieved results indicate that visual modeling techniques facilitate the engagement of various user groups in ontology modeling. Customizable visualizations enable users to adjust visualizations to their current needs and provide different views on the data. Additionally, customizable visualization pipelines enable rapid visualization generation for various use cases, data sources, and user groups.
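
    A minimal sketch of a customizable visualization pipeline in this spirit, assuming rdflib, networkx, and matplotlib as stand-ins for the tooling actually used in the thesis; the predicate filter is the illustrative customization hook:

        import matplotlib.pyplot as plt
        import networkx as nx
        from rdflib import Graph

        def build_view(ttl_path: str, predicate_filter=None) -> nx.DiGraph:
            """Load an RDF knowledge structure and keep only the selected relations."""
            rdf = Graph()
            rdf.parse(ttl_path, format="turtle")
            view = nx.DiGraph()
            for s, p, o in rdf:
                if predicate_filter and str(p) not in predicate_filter:
                    continue
                view.add_edge(str(s), str(o), label=str(p))
            return view

        def draw(view: nx.DiGraph, out_path: str = "view.png") -> None:
            """Render the filtered view as a node-link diagram with a force-directed layout."""
            pos = nx.spring_layout(view, seed=1)
            nx.draw(view, pos, with_labels=True, node_size=600, font_size=6)
            plt.savefig(out_path, dpi=150)

        # Hypothetical usage: show only subclass relations from an ontology file.
        # view = build_view("ontology.ttl",
        #                   predicate_filter={"http://www.w3.org/2000/01/rdf-schema#subClassOf"})
        # draw(view)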