
    Simple identification tools in FishBase

    Simple identification tools for fish species have been included in the FishBase information system since its inception. Early tools made use of the relational model and characters such as fin ray meristics. Pictures and drawings were soon added as a further aid, similar to a field guide. Later came the computerization of existing dichotomous keys, again in combination with pictures and other information, and the ability to restrict candidate species by country, area, or taxonomic group. Today, www.FishBase.org offers four different ways to identify species. This paper describes these tools with their advantages and disadvantages, suggests various options for further development, and explores the possibility of a holistic and integrated computer-aided strategy.
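The dichotomous keys mentioned above can be modeled very simply in software. The sketch below is illustrative only (it is not FishBase code, and the species and characters are made-up examples): a key is a binary tree whose internal nodes hold a yes/no question about a character and whose leaves hold species names.

```python
# Illustrative model of a computerized dichotomous key (hypothetical data).
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class KeyNode:
    question: Optional[str] = None    # None on leaves
    species: Optional[str] = None     # set only on leaves
    yes: Optional["KeyNode"] = None
    no: Optional["KeyNode"] = None

def identify(node: KeyNode, answer: Callable[[str], bool]) -> str:
    """Walk the key, answering yes/no questions until a leaf is reached."""
    while node.species is None:
        node = node.yes if answer(node.question) else node.no
    return node.species

# A tiny hypothetical key; real keys combine many meristic characters.
key = KeyNode(
    question="Adipose fin present?",
    yes=KeyNode(species="Salmo trutta"),
    no=KeyNode(
        question="More than 10 dorsal fin rays?",
        yes=KeyNode(species="Perca fluviatilis"),
        no=KeyNode(species="Esox lucius"),
    ),
)

print(identify(key, lambda q: q == "Adipose fin present?"))
```

Restricting candidates by country or taxon, as the paper describes, would amount to pruning leaves from such a tree before traversal.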

    Decision-Making with Multi-Step Expert Advice on the Web

    This thesis deals with solving multi-step tasks by using advice from experts, which are algorithms that solve individual steps of such tasks. We contribute methods for maximizing the number of correct task solutions by selecting and combining experts for individual task instances, and methods for automating the process of solving tasks on the Web, where experts are available as Web services. Multi-step tasks frequently occur in Natural Language Processing (NLP) and Computer Vision, and as research progresses an increasing number of exchangeable experts for the same steps become available on the Web. Service provider platforms such as Algorithmia monetize expert access by making expert services available via their platform and having customers pay for single executions. Such experts can be used to solve diverse tasks, which often consist of multiple steps and thus require pipelines of experts to generate hypotheses. We identify two distinct problems for solving multi-step tasks with expert services: (1) Given that the task is sufficiently complex, no single pipeline generates correct solutions for all possible task instances. One thus must learn how to construct individual expert pipelines for individual task instances in order to maximize the number of correct solutions, while also taking into account the costs incurred by executing an expert. (2) To automatically solve multi-step tasks with expert services, we need to discover, execute and compose expert pipelines. With mostly textual descriptions of complex functionalities and input parameters, Web automation entails integrating available expert services and data, interpreting user-specified task goals, and efficiently finding correct service configurations.
In this thesis, we present solutions to both problems: (1) We enable learning well-performing expert pipelines from available reference data sets (comprising a number of task instances and their solutions), where we distinguish between centralized and decentralized decision-making. We formalize the problem as a specialization of a Markov Decision Process (MDP), which we refer to as an Expert Process (EP), and integrate techniques from Statistical Relational Learning (SRL) and multiagent coordination. (2) We develop a framework for automatically discovering, executing and composing expert pipelines by exploiting methods developed for the Semantic Web. We lift the representations of experts with structured vocabularies modeled in the Resource Description Framework (RDF) and extend EPs to Semantic Expert Processes (SEPs) to enable the data-driven execution of experts in Web-based architectures. We evaluate our methods in different domains, namely Medical Assistance, with tasks in Image Processing and Surgical Phase Recognition, and NLP for textual data on the Web, where we deal with the task of Named Entity Recognition and Disambiguation (NERD).
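The pipeline-construction problem described above can be sketched as sequential decision-making. The code below is a minimal stand-in, not the thesis's EP implementation: it replaces the learned MDP policy with a greedy per-step rule, and the expert names, accuracies, and costs are invented for illustration.

```python
# Sketch: per step, several exchangeable experts are available, each with
# an estimated accuracy and an execution cost; a (here: greedy) policy
# picks the expert maximizing estimated utility for this task instance.
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    accuracy: float  # estimated P(step solved correctly)
    cost: float      # price per execution

def build_pipeline(steps, cost_weight=0.1):
    """Greedy one-step policy: utility = accuracy - cost_weight * cost."""
    pipeline = []
    for candidates in steps:
        best = max(candidates, key=lambda e: e.accuracy - cost_weight * e.cost)
        pipeline.append(best)
    return pipeline

# Hypothetical two-step NERD pipeline: recognition, then linking.
steps = [
    [Expert("ner-a", 0.92, 2.0), Expert("ner-b", 0.88, 0.5)],
    [Expert("nel-a", 0.81, 1.0), Expert("nel-b", 0.85, 3.0)],
]
print([e.name for e in build_pipeline(steps)])
```

In the thesis's formulation the choice additionally depends on the state (the task instance and intermediate outputs), which is what the MDP/EP machinery and the SRL and multiagent techniques address; the greedy rule here ignores that state dependence.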

    Tools for identifying biodiversity: progress and problems

    The correct identification of organisms is fundamental not only for the assessment and conservation of biodiversity, but also in agriculture, forestry, the food and pharmaceutical industries, forensic biology, and in the broad field of formal and informal education at all levels. In this book, the reader will find short presentations of current and upcoming projects (EDIT, KeyToNature, STERNA, Species 2000, FishBase, BHL, ViBRANT, etc.), plus a large panel of short articles on software, taxonomic applications, use of e-keys in the educational field, and practical applications. Single-access keys are now available on most recent electronic devices; the collaborative and semantic web opens new ways to develop and share applications; the automatic processing of molecular data and images is now based on validated systems; identification tools appear as an efficient support for environmental education and training; and the monitoring of invasive and protected species and the study of climate change require intensive identification of specimens, which opens new markets for identification research.

    Analyzing Multi-Dimensional Networks within MediaWikis

    The MediaWiki platform supports popular socio-technical systems such as Wikipedia as well as thousands of other wikis. This software encodes and records a variety of relationships about the content, history, and editors of its articles, such as hyperlinks between articles, discussions among editors, and editing histories. These relationships can be analyzed using standard techniques from social network analysis; however, extracting relational data from Wikipedia has traditionally required specialized knowledge of its API, information retrieval, network analysis, and data visualization that has inhibited scholarly analysis. We present a software library called the NodeXL MediaWiki Importer that extracts a variety of relationships from the MediaWiki API and integrates with the popular NodeXL network analysis and visualization software. This library allows users to query and extract a variety of multidimensional relationships from any MediaWiki installation with a publicly accessible API. We present a case study examining the similarities and differences between different relationships for the Wikipedia articles about “Pope Francis” and “Social media”. We conclude by discussing the implications this library has for both theoretical and methodological research as well as community management, and outline future work to expand the capabilities of the library.
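The kind of extraction the importer performs can be sketched against the standard MediaWiki `action=query` API. The request parameters below (`action=query`, `prop=links`, `titles`, `pllimit`, `format=json`) are the documented API; the sample response and the edge-building helper are illustrative assumptions, not NodeXL code.

```python
# Sketch: build a MediaWiki links query and turn one response page into
# directed (source, target) edges suitable for network analysis.
import json
from urllib.parse import urlencode

def links_query_url(api_endpoint: str, title: str) -> str:
    """URL for the article-to-article hyperlink relationship."""
    params = {
        "action": "query",
        "prop": "links",
        "titles": title,
        "pllimit": "max",
        "format": "json",
    }
    return f"{api_endpoint}?{urlencode(params)}"

def edges_from_response(source: str, response: dict):
    """Extract (source, target) edges from a prop=links query response."""
    edges = []
    for page in response["query"]["pages"].values():
        for link in page.get("links", []):
            edges.append((source, link["title"]))
    return edges

url = links_query_url("https://en.wikipedia.org/w/api.php", "Pope Francis")

# Truncated sample of the JSON shape such a query returns (made-up IDs).
sample = {"query": {"pages": {"28396127": {"links": [
    {"ns": 0, "title": "Argentina"}, {"ns": 0, "title": "Jesuits"}]}}}}
print(edges_from_response("Pope Francis", sample))
```

Other relationships the paper mentions (editing histories, editor discussions) map to other `prop`/`list` modules of the same API, with the results similarly flattened into edge lists for NodeXL.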