Should We Presume State Protection?
Professors Hathaway and Macklin debate the legality of the "presumption of state protection" that the Supreme Court of Canada established as a matter of Canadian refugee law in the Ward decision. Professor Hathaway argues that this presumption should be rejected because it lacks a sound empirical basis and because it conflicts with the relatively low evidentiary threshold set by the Refugee Convention's "well-founded fear" standard. Professor Macklin contends that the Ward presumption does not in and of itself impose an unduly onerous burden on claimants, and that much of the damage wrought by the presumption comes instead from misinterpretation and misapplication of the Supreme Court's dictum by lower courts.
Natural History Specimen Digitization: Challenges and Concerns
A survey on the challenges and concerns involved in digitizing natural history specimens was circulated to curators, collections managers, and administrators in the natural history community in the spring of 2009, with over 200 responses received. The overwhelming barrier to digitizing collections was a lack of funding, drawn from a limited number of sources, leaving institutions mostly responsible for providing the necessary support. This uneven digitization landscape leads to a patchy accumulation of records of varying quality, produced under differing priorities, ultimately influencing the data's fitness for use. The survey also found that although the kinds of specimens held in collections, and how they are stored, can be quite variable, many digitization challenges are shared, including imaging, automated text scanning and parsing, and georeferencing. Better communication between domains could therefore foster knowledge of digitization, leading to efficiencies that could be disseminated through documentation of best practices and training.
Resolving Taxonomic Names using Evidence Extracted from Text
Biological taxonomy is built on organism relationships, with scientific names as the primary identifiers; however, resolving taxonomic names remains one of the greatest challenges in taxonomy and in systematic biology overall. We propose an evidence-based approach that extracts trait (character) evidence from published literature to facilitate the comparison of taxonomic concepts. In this poster, we report an initial set of results from our first case study, on the plant genus Rubus. The case study exercised the entire pipeline of the Explorer of Taxon Concepts toolkit we have developed and revealed challenging phenomena to be addressed in the near future.
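The abstract does not spell out how extracted trait evidence is compared, so the following is only a minimal, hypothetical sketch (not the Explorer of Taxon Concepts pipeline): two taxonomic concepts that share a name can be scored by the overlap of their (character, state) trait pairs.

```python
# Hypothetical sketch: comparing two taxonomic concepts by the overlap of
# trait (character) evidence extracted from their published descriptions.
# The trait sets below are invented examples, not data from the Rubus study.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two trait sets (1.0 = identical evidence)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Traits as (character, state) pairs from two descriptions that both
# use the same scientific name.
concept_a = {("leaf margin", "serrate"), ("stem", "prickly"), ("fruit", "red")}
concept_b = {("leaf margin", "serrate"), ("stem", "prickly"), ("fruit", "yellow")}

similarity = jaccard(concept_a, concept_b)
print(round(similarity, 2))  # 2 shared traits / 4 distinct traits = 0.5
```

A high score would suggest the two usages denote congruent concepts; a low score flags a name whose meaning has shifted between publications.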
Semantic Annotation of Mutable Data
Electronic annotation of scientific data is very similar to annotation of documents. Both types of annotation amplify the original object, add related knowledge to it, and dispute or support assertions in it. In each case, annotation is a framework for discourse about the original object, and, in each case, an annotation needs to clearly identify its scope and its own terminology. However, electronic annotation of data differs from annotation of documents: the content of the annotations, including expectations and supporting evidence, is more often shared among members of networks. Any consequent actions taken by the holders of the annotated data could be shared as well. But even those current annotation systems that admit data as their subject often make it difficult or impossible to annotate at fine-enough granularity to use the results in this way for data quality control. We address these kinds of issues by offering simple extensions to an existing annotation ontology and describe how the results support an interest-based distribution of annotations. We are using the result to design and deploy a platform that supports annotation services overlaid on networks of distributed data, with particular application to data quality control. Our initial instance supports a set of natural science collection metadata services. An important application is the support for data quality control and provision of missing data. A previous proof of concept demonstrated such use based on data annotations modeled with XML-Schema
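To make the idea of fine-grained annotation concrete, here is a hypothetical sketch of an annotation scoped to a single field of a single record, so that it can drive data quality control; the class and field names are invented for illustration and are not the ontology extensions the abstract proposes.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of a record- and field-scoped data annotation.
# All names here are invented for illustration.

@dataclass
class DataAnnotation:
    target_record: str                    # identifier of the annotated record
    target_field: str                     # the single field the annotation scopes to
    assertion: str                        # the claim made about that field's value
    proposed_value: Optional[str] = None  # a correction, if one is offered
    evidence: List[str] = field(default_factory=list)

# Example: flagging a suspect georeference on one specimen record
# (the record identifier is invented).
ann = DataAnnotation(
    target_record="urn:example:specimen:12345",
    target_field="georeference",
    assertion="coordinates fall in the ocean for a terrestrial taxon",
    evidence=["locality string mentions 'foothills'"],
)
print(ann.target_field)
```

Because the annotation names both the record and the field, a downstream holder of the data can act on it mechanically, e.g. by queueing only the flagged field for review.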
Training and hackathon on building biodiversity knowledge graphs
Knowledge graphs have the potential to unite disconnected digitized biodiversity data, and a number of efforts to build biodiversity knowledge graphs are underway. More generally, the recent popularity of knowledge graphs, driven in part by the advent and success of the Google Knowledge Graph, has breathed life into the ongoing development of semantic web infrastructure and prototypes in the biodiversity informatics community. We describe a one-week training event and hackathon that focused on applying three specific knowledge graph technologies (the Neptune graph database, Metaphactory, and Wikidata) to a diverse set of biodiversity use cases.
We give an overview of the training, the projects that were advanced throughout the week, and the critical discussions that emerged. We believe that the main barriers to the adoption of biodiversity knowledge graphs are a lack of understanding of knowledge graphs and a lack of adoption of shared unique identifiers. Furthermore, we believe an important advance for knowledge graph development is the emergence of Wikidata as an identifier broker and as a scoping tool. To lower the current barriers to biodiversity knowledge graph development, we recommend continued discussions at workshops and conferences, which we expect to increase awareness and adoption of knowledge graph technologies.
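The point about shared unique identifiers can be made concrete with a small, hypothetical sketch (not code from the hackathon): without a broker that maps local identifiers to a shared one, facts about the same taxon land on disconnected nodes and never merge.

```python
# Hypothetical sketch of the shared-identifier problem: two datasets describe
# the same taxon under different local identifiers, and a broker mapping
# (a plain dict standing in for a Wikidata lookup) lets their facts merge
# onto one node. All identifiers and facts below are invented.

dataset_a = [("local:tx17", "occursIn", "Michigan")]
dataset_b = [("names:rubus_idaeus", "commonName", "red raspberry")]

# Broker: local identifier -> shared identifier.
broker = {"local:tx17": "shared:RUBUS_IDAEUS",
          "names:rubus_idaeus": "shared:RUBUS_IDAEUS"}

def merge(*datasets):
    """Rewrite each subject to its shared identifier, then union the triples."""
    graph = set()
    for triples in datasets:
        for s, p, o in triples:
            graph.add((broker.get(s, s), p, o))
    return graph

graph = merge(dataset_a, dataset_b)
# Both facts now attach to a single node in the merged graph.
print(sorted(graph))
```

With the broker removed, the same merge yields two disconnected subjects, which is exactly the fragmentation the abstract describes.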
Incentivising Use of Structured Language in Biological Descriptions: Author-Driven Phenotype Data and Ontology Production
Phenotypes are used for a multitude of purposes, such as defining species, reconstructing phylogenies, diagnosing diseases, or improving crop and animal productivity, but most phenotypic data are published in free-text narratives that are not computable. This means that the complex relationship between the genome, the environment, and phenotypes is largely inaccessible to analysis, and important questions related to the evolution of organisms, their diseases, or their response to climate change cannot be fully addressed. It takes great effort to manually convert free-text narratives to a computable format before they can be used in large-scale analyses. We argue that this manual curation approach is not a sustainable way to produce computable phenotypic data, for three reasons: 1) it does not scale to all of biodiversity; 2) it does not stop the publication of free-text phenotypes that will continue to need manual curation in the future; and, most importantly, 3) it does not solve the problem of inter-curator variation (curators interpreting or converting a phenotype differently from each other). Our empirical studies have shown that inter-curator variation is as high as 40%, even within a single project. With this level of variation, it is difficult to imagine that data integrated from multiple curation projects can be of high quality. The key causes of this variation have been identified as semantic vagueness in the original phenotype descriptions and difficulties in using standardised vocabularies (ontologies). We argue that the authors describing phenotypes are the key to the solution. Given the right tools and appropriate attribution, the authors should be in charge of developing a project's semantics and ontology. This will speed up ontology development and improve the semantic clarity of phenotype descriptions from the moment of publication. A proof-of-concept project on this idea was funded by NSF ABI in July 2017.
We seek readers' input on, or critique of, the proposed approaches to help achieve community-based computable phenotype data production in the near future. Results from this project will be accessible through https://biosemantics.github.io/author-driven-production
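One simple way to read the 40% figure is as the fraction of statements that two curators encode differently. The following is a hypothetical sketch of such a metric; the encodings below are invented examples, not the study's data.

```python
# Hypothetical sketch of an inter-curator variation metric: the fraction of
# shared phenotype statements that two curators encoded differently.
# The statement IDs and encodings are invented for illustration.

def variation(curator_a: dict, curator_b: dict) -> float:
    """Fraction of shared statements the two curators encoded differently."""
    shared = curator_a.keys() & curator_b.keys()
    if not shared:
        return 0.0
    differing = sum(1 for k in shared if curator_a[k] != curator_b[k])
    return differing / len(shared)

curator_a = {"s1": "leaf:margin=serrate", "s2": "stem:armature=prickles",
             "s3": "fruit:color=red", "s4": "petal:count=5", "s5": "habit=shrub"}
curator_b = {"s1": "leaf:margin=serrate", "s2": "stem:prickly=true",
             "s3": "fruit:color=red", "s4": "petal:count=5", "s5": "growth_form=shrub"}

print(variation(curator_a, curator_b))  # 2 of 5 statements differ -> 0.4
```

Note that s2 and s5 express the same observation in semantically different ways, which is precisely the vagueness and vocabulary problem the abstract identifies.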
Developing community phenotype ontologies: Understanding usersā preferences
This poster reports preliminary user-testing results on four different methods for adding terms to a phenotype ontology. A total of 31 graduate students from the UA iSchool and three senior botanists participated in two different experiments. Results suggest that the Quick Form and WebProtege methods are preferred by biologists, while the Wikidata and Wizard methods are not, for different reasons.