
    An Australian abroad : the secret life of the brushtail possum (Trichosurus vulpecula) : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Veterinary Science at Massey University, Manawatu, New Zealand

    The “superspreader” hypothesis relates disease transmission to social contacts and assumes that transmission is driven by the frequency, type, and distribution of contacts among infected and susceptible individuals. I investigated home range characteristics of brushtail possums (Trichosurus vulpecula) in six wild, free-living subpopulations (four grids were studied, all before possum depopulation and two of them again afterwards), constructing social networks relevant to bovine tuberculosis (TB) transmission before and after depopulation. I also experimentally infected possums with a novel strain of TB to monitor secondary case infections in relation to these contact measures and other factors, including population density and sex ratio. Before depopulation, home range estimates showed that adult males had larger home ranges than females and younger possums. The number and area of home range overlaps differed between subpopulations and with possum sex and age, with adult males having more and larger overlaps with other possums. Possums were fitted with proximity-logging collars, and contacts were registered between April and October 2012. Two network measures, node degree (the number of connections an individual has with others) and betweenness (how often an individual lies on the shortest paths between other pairs of individuals), were associated with sex, with males having higher values for both. Males also contacted more possums than females did. Post-depopulation results showed an influx of male possums, higher population density, and smaller home ranges than before depopulation. Possums also lacked an apparent ‘routine’ in their contact networks after depopulation, interacting with other possums haphazardly. The greater level of contact among adult males after depopulation, and its effects on recovering populations, was likely the cause of the higher TB infection observed in adults and in males.
This thesis provides empirical evidence that adult male possums have home range and contact network characteristics that are likely to enhance their involvement in the transmission and persistence of TB, relative to female and younger possums. Observations of experimentally infected individuals showed that infected males survived longer than females and that, as a consequence, those males potentially acted as a “supershedding” subgroup. I therefore provide evidence that adult male possums are the most important drivers of TB transmission and persistence of infection in populations, and could be targeted for control measures.
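The two network measures named above can be computed directly from a contact graph. The following is a minimal sketch on a toy contact network (the individuals and edges are invented for illustration, not data from the thesis), using brute-force breadth-first search rather than an optimized algorithm:

```python
from collections import deque
from itertools import combinations

# Hypothetical possum contact network: each undirected edge is a
# proximity-collar contact between two possums (M = male, F = female).
edges = [("M1", "M2"), ("M1", "F1"), ("M1", "F2"),
         ("M2", "F2"), ("F1", "F3"), ("M1", "F3")]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def degree(node):
    """Node degree: number of distinct individuals a possum contacts."""
    return len(graph[node])

def shortest_paths(src, dst):
    """All shortest paths from src to dst, by breadth-first search."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # queue is ordered by length; no shorter paths remain
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

def betweenness(node):
    """Sum over other pairs of the fraction of their shortest paths
    that pass through `node` (unnormalized betweenness centrality)."""
    score = 0.0
    for s, t in combinations(set(graph) - {node}, 2):
        paths = shortest_paths(s, t)
        if paths:
            score += sum(node in p for p in paths) / len(paths)
    return score

print(degree("M1"), betweenness("M1"))  # 4 4.0
```

In this toy network the adult male M1 has both the highest degree and the highest betweenness, which is the pattern the thesis reports for adult males.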

    Learning Semantic Correspondences in Technical Documentation

    We consider the problem of translating high-level textual descriptions to formal representations in technical documentation as part of an effort to model the meaning of such documentation. We focus specifically on the problem of learning translational correspondences between text descriptions and grounded representations in the target documentation, such as formal representations of functions or code templates. Our approach exploits the parallel nature of such documentation, or the tight coupling between high-level text and the low-level representations we aim to learn. Data is collected by mining technical documents for such parallel text-representation pairs, which we use to train a simple semantic parsing model. We report new baseline results on sixteen novel datasets, including the standard library documentation for nine popular programming languages across seven natural languages, and a small collection of Unix utility manuals. Comment: accepted to ACL-201

    Polyglot Semantic Parsing in APIs

    Traditional approaches to semantic parsing (SP) work by training individual models for each available parallel dataset of text-meaning pairs. In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages. In particular, we focus on translating text to code signature representations using the software component datasets of Richardson and Kuhn (2017a,b). The advantage of such models is that they can be used for parsing a wide variety of input natural languages and output programming languages, or mixed input languages, using a single unified model. To facilitate modeling of this type, we develop a novel graph-based decoding framework that achieves state-of-the-art performance on the above datasets, and apply this method to two other benchmark SP tasks. Comment: accepted for NAACL-2018 (camera-ready version)

    Collaborative Collective Algorithms to Coordinate UGVs

    Sentel/Brilliant Innovations has developed autonomous UGVs (unmanned ground vehicles) capable of generating a map of an unknown location through exploration, using local software and the power of Google Tango technology. This project was tasked with developing an efficient and capable map-stitching solution that allows multiple UGVs to coordinate their movements and share information, greatly improving the speed at which these drones can generate maps. The solution utilizes the processing power of a Raspberry Pi to pull maps from a Redis server and stitch them together. Once stitched, the maps are redistributed via the Redis server back through the network, giving every UGV the opportunity to obtain the global map. All of the stitching is performed on a single UGV, freeing the other drones to focus on generating and uploading their own unique maps to the server. The drones can use this new information to better inform their next move, preventing multiple drones from mapping the same location. In the future, Sentel/Brilliant Innovations hopes to attach more advanced sensors to the drones, allowing them to add greater detail of the environment to the map rather than simply drawing boundaries. These drones have many potential applications, such as search and rescue, seeking out potential hazards, and intelligence gathering for military and civil use.
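The publish/stitch/redistribute loop described above can be sketched in a few lines. This is an illustrative simplification under several assumptions: a plain dict stands in for the Redis server, maps are small occupancy grids already aligned to a shared frame (0 = unknown, 1 = free, 2 = obstacle), and all names are hypothetical rather than taken from the project's code:

```python
store = {}  # stand-in for the Redis key-value server

def publish_map(drone_id, grid):
    """A drone uploads its local occupancy grid under a per-drone key."""
    store[f"map:{drone_id}"] = grid

def stitch(grids):
    """Merge per-drone grids into one global map.

    Any observation overrides 'unknown' (0), and an obstacle reading (2)
    wins over 'free' (1), so a cell-wise max implements the merge.
    """
    rows, cols = len(grids[0]), len(grids[0][0])
    merged = [[0] * cols for _ in range(rows)]
    for g in grids:
        for r in range(rows):
            for c in range(cols):
                merged[r][c] = max(merged[r][c], g[r][c])
    return merged

# Two drones explore different halves of a 2x4 area.
publish_map("ugv0", [[1, 1, 0, 0], [1, 2, 0, 0]])
publish_map("ugv1", [[0, 0, 1, 1], [0, 0, 2, 1]])

# The stitching UGV pulls every per-drone map, merges, and republishes
# the result so any drone can fetch the global map.
global_map = stitch([store[k] for k in sorted(store) if k.startswith("map:")])
store["map:global"] = global_map
print(global_map)  # [[1, 1, 1, 1], [1, 2, 2, 1]]
```

In the real system the dict operations would be Redis reads and writes over the network, and the merge would additionally have to align maps whose coordinate frames differ.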

    DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models

    In this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: it inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis, and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank (Dalvi et al. 2021). Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model's uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence. Comment: A demo is available at https://huggingface.co/spaces/debatelab/deepa2-demo , the model can be downloaded from https://huggingface.co/debatelab/argument-analyst , and the datasets can be accessed at https://huggingface.co/datasets/debatelab/aaa

    Probabilistic coherence, logical consistency, and Bayesian learning: Neural language models as epistemic agents

    It is argued that suitably trained neural language models exhibit key properties of epistemic agency: they hold probabilistically coherent and logically consistent degrees of belief, which they can rationally revise in the face of novel evidence. To this purpose, we conduct computational experiments with rankers: T5 models [Raffel et al. 2020] that are pretrained on carefully designed synthetic corpora. Moreover, we introduce a procedure for eliciting a model’s degrees of belief, and define numerical metrics that measure the extent to which given degrees of belief violate (probabilistic, logical, and Bayesian) rationality constraints. While pretrained rankers are found to suffer from global inconsistency (in agreement with, e.g., [Jang et al. 2021]), we observe that subsequent self-training on auto-generated texts allows rankers to gradually obtain a probabilistically coherent belief system that is aligned with logical constraints. In addition, such self-training plays a pivotal role in rational evidential learning, too, for it seems to enable rankers to propagate a novel evidence item through their belief systems, successively re-adjusting individual degrees of belief. All this, we conclude, confirms the Rationality Hypothesis, i.e., the claim that suitably trained NLMs may exhibit advanced rational skills. We suggest that this hypothesis has empirical, yet also normative and conceptual ramifications far beyond the practical linguistic problems NLMs have originally been designed to solve.
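A metric of the kind described above can be made concrete for the simplest probabilistic constraint, that a coherent agent's credences in a sentence and its negation sum to one. The function below is an illustrative sketch in that spirit (the name, the example sentences, and the exact formula are assumptions, not the paper's definitions):

```python
def negation_coherence_violation(beliefs, pairs):
    """Mean absolute deviation of p(A) + p(not A) from 1 over sentence
    pairs; a probabilistically coherent agent scores exactly 0."""
    devs = [abs(beliefs[a] + beliefs[neg_a] - 1.0) for a, neg_a in pairs]
    return sum(devs) / len(devs)

# A belief system maps sentences to degrees of belief in [0, 1].
beliefs = {
    "it rains": 0.7, "it does not rain": 0.4,      # incoherent: sums to 1.1
    "grass is wet": 0.6, "grass is not wet": 0.4,  # coherent: sums to 1.0
}
pairs = [("it rains", "it does not rain"),
         ("grass is wet", "grass is not wet")]
print(negation_coherence_violation(beliefs, pairs))  # ~0.05
```

Tracking such a score across self-training iterations is one simple way to quantify whether a model's belief system is moving toward probabilistic coherence.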

    Judgment aggregation, discursive dilemma and reflective equilibrium: Neural language models as self-improving doxastic agents

    Neural language models (NLMs) are susceptible to producing inconsistent output. This paper proposes a new diagnosis as well as a novel remedy for NLMs' incoherence. We train NLMs on synthetic text corpora that are created by simulating text production in a society. For diagnostic purposes, we explicitly model the individual belief systems of artificial agents (authors) who produce corpus texts. NLMs, trained on those texts, can be shown to aggregate the judgments of individual authors during pre-training according to sentence-wise vote ratios (roughly, reporting frequencies), which inevitably leads to so-called discursive dilemmas: aggregate judgments are inconsistent even though all individual belief states are consistent. As a remedy for such inconsistencies, we develop a self-training procedure, inspired by the concept of reflective equilibrium, that effectively reduces the extent of logical incoherence in a model's belief system, corrects global mis-confidence, and eventually allows the model to settle on a new, epistemically superior belief state. Thus, social choice theory helps to understand why NLMs are prone to produce inconsistencies; epistemology suggests how to get rid of them.
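The discursive dilemma invoked above has a standard three-agent form that can be reproduced in a few lines. This toy reconstruction (agents and sentences are invented for illustration) shows sentence-wise majority aggregation producing an inconsistent collective judgment from individually consistent belief states:

```python
# Each author holds a consistent belief state over premises p, q and
# the conjunction "p and q" (each row respects the logic internally).
authors = [
    {"p": True,  "q": True,  "p and q": True},
    {"p": True,  "q": False, "p and q": False},
    {"p": False, "q": True,  "p and q": False},
]

def majority(sentence):
    """Sentence-wise aggregation by vote ratio: accept iff most authors do."""
    votes = sum(a[sentence] for a in authors)
    return votes > len(authors) / 2

collective = {s: majority(s) for s in ["p", "q", "p and q"]}

# The collective accepts p and q separately but rejects their conjunction,
# so the aggregate judgment violates the logic every individual respects.
consistent = collective["p and q"] == (collective["p"] and collective["q"])
print(collective, consistent)
```

An NLM trained on texts reporting these judgments would, by the paper's diagnosis, absorb the inconsistent sentence-wise vote ratios rather than any single author's coherent belief state.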