
    Evolutionary ecomorphology of the Falkland Islands wolf Dusicyon australis

    The Falkland Islands wolf Dusicyon australis is an extinct canid that was once the only endemic terrestrial mammal to inhabit the Falkland Islands. The morphological adaptations of this wolf, which evolved rapidly from its mainland fossil ancestor Dusicyon avus, remain poorly understood. We employ a geometric morphometric approach to identify patterns of skull shape variation in extant canids and Dusicyon spp. The Falkland Islands wolf and its fossil ancestor show a more carnivorous feeding morphology than other South American foxes, and they cluster morphologically with jackals. This supports convergence in skull shape between Dusicyon and Old World canids, although the convergence is not as strong as that exhibited by their hyper- and hypocarnivorous sister taxa.
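    The geometric morphometric approach the abstract describes can be illustrated with a minimal sketch: generalised Procrustes alignment of skull landmark configurations, followed by a PCA whose scores place specimens in a common morphospace. The landmark array, its dimensions, and the iteration count below are illustrative assumptions, not the paper's actual data or pipeline.

```python
import numpy as np

def procrustes_align(shapes, n_iter=5):
    """Generalised Procrustes analysis on an (n_specimens, n_landmarks, 2) array."""
    shapes = shapes - shapes.mean(axis=1, keepdims=True)                  # centre each configuration
    shapes = shapes / np.linalg.norm(shapes, axis=(1, 2), keepdims=True)  # unit centroid size
    ref = shapes[0]
    for _ in range(n_iter):
        for i, s in enumerate(shapes):
            u, _, vt = np.linalg.svd(s.T @ ref)   # optimal rotation onto the reference
            shapes[i] = s @ (u @ vt)
        ref = shapes.mean(axis=0)
        ref /= np.linalg.norm(ref)                # keep the mean shape at unit size
    return shapes

def shape_pca(aligned):
    """PCA of aligned shapes; rows of the scores locate specimens in morphospace."""
    flat = aligned.reshape(len(aligned), -1)
    flat = flat - flat.mean(axis=0)
    _, s, vt = np.linalg.svd(flat, full_matrices=False)
    return flat @ vt.T, s**2 / (len(flat) - 1)    # scores, per-component variances

# Hypothetical usage: 40 specimens, each with 30 two-dimensional skull landmarks.
rng = np.random.default_rng(0)
scores, variances = shape_pca(procrustes_align(rng.normal(size=(40, 30, 2))))
```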

    Why the Failure? How Adversarial Examples Can Provide Insights for Interpretable Machine Learning

    Recent advances in Machine Learning (ML) have profoundly changed many detection, classification, recognition and inference tasks. Given the complexity of the battlespace, ML has the potential to revolutionise how Coalition Situation Understanding is synthesised and revised. However, many issues must be overcome before its widespread adoption. In this paper we consider two of them: interpretability and adversarial attacks. Interpretability is needed because military decision-makers must be able to justify their decisions. Adversarial attacks arise because many ML algorithms are very sensitive to certain kinds of input perturbations. We argue that these two issues are conceptually linked, and that insights into one can provide insights into the other. We illustrate these ideas with relevant examples from the literature and from our own experiments.
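    As a concrete instance of the input-perturbation sensitivity the abstract refers to, here is a minimal sketch of the fast gradient sign method (FGSM), a standard attack from the literature. The model, loss, and epsilon are illustrative assumptions, not the attack used in the paper's experiments.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Perturb inputs x by one signed-gradient step that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that most increases the loss, then clip the
    # result back to the valid input range [0, 1].
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```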

    A geometric network model of intrinsic grey-matter connectivity of the human brain

    Network science provides a general framework for analysing the large-scale brain networks that naturally arise from modern neuroimaging studies, and a key goal in theoretical neuroscience is to understand the extent to which these neural architectures influence the dynamical processes they sustain. To date, brain network modelling has largely been conducted at the macroscale level (i.e. white-matter tracts), despite growing evidence of the role that local grey matter architecture plays in a variety of brain disorders. Here, we present a new model of intrinsic grey matter connectivity of the human connectome. Importantly, the new model incorporates detailed information on cortical geometry to construct ‘shortcuts’ through the thickness of the cortex, thus enabling spatially distant brain regions, as measured along the cortical surface, to communicate. Our study indicates that structures based on human brain surface information differ significantly, both in terms of their topological network characteristics and their activity propagation properties, when compared against a variety of alternative geometries and generative algorithms. In particular, our findings might help explain histological patterns of grey matter connectivity, suggesting that observed connection distances may have arisen to maximise information processing ability, and that such gains are consistent with (and enhanced by) the presence of shortcut connections.
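    The shortcut idea can be made concrete with a toy sketch: nodes along a folded one-dimensional 'cortex' are wired to their surface neighbours, and extra edges are added wherever two points are close in space but distant along the surface, mimicking connections through the thickness of a fold. The curve, thresholds, and sizes are illustrative assumptions, not the paper's construction.

```python
import numpy as np
import networkx as nx

n = 200
t = np.linspace(0, 8 * np.pi, n)
coords = np.column_stack([np.sin(t), 0.1 * t])   # a folded one-dimensional 'cortex'

g = nx.path_graph(n)                             # intrinsic edges along the surface
for i in range(n):
    for j in range(i + 20, n):                   # far apart along the surface...
        if np.linalg.norm(coords[i] - coords[j]) < 0.7:
            g.add_edge(i, j)                     # ...but close through the fold

# Shortcuts sharply reduce the characteristic path length relative to the
# surface-only lattice, which is the effect the model exploits.
print(nx.average_shortest_path_length(g))
```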

    Interpretability of deep learning models: A survey of results

    Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks, including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process, incorporating these networks into mission-critical processes such as medical diagnosis, planning and control, requires a level of trust to be associated with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the workings of the machine. In other words, the neural network should provide human-understandable justifications for its output, leading to insights about its inner workings. We call such models interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters or in terms of the input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.
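    One concrete point on the 'input features' dimension mentioned above is gradient-based saliency, which scores each input feature by how strongly it influences the score of a chosen class. A minimal sketch follows; the model and input shape are assumptions, not anything from the survey itself.

```python
import torch

def saliency_map(model, x, target_class):
    """Return |d class-score / d input| as a per-feature importance map."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]   # scalar logit for the chosen class
    score.backward()
    return x.grad.abs().squeeze(0)      # high values mark influential features
```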

    Characterization of the ethanol‐inducible alc gene‐expression system in Arabidopsis thaliana

    Controlled expression of transgenes in plants is key to the characterization of gene function and the regulated manipulation of growth and development. The alc gene-expression system, derived from the filamentous fungus Aspergillus nidulans, has previously been used successfully in both tobacco and potato, and has potential for use in agriculture. Its value to fundamental research is largely dependent on its utility in Arabidopsis thaliana. We have undertaken a detailed functional analysis of the alc regulon in A. thaliana. By linking the alcA promoter to β-glucuronidase (GUS), luciferase (LUC) and green fluorescent protein (GFP) genes, we demonstrate that alcR-mediated expression occurs throughout the plant in a highly responsive manner. Induction occurs within one hour and is dose-dependent, with negligible activity in the absence of the exogenous inducer for soil-grown plants. Direct application of ethanol and exposure of whole plants to ethanol vapour are equally effective means of induction. Maximal expression in soil-grown plants occurred after 5 days of induction. In the majority of transgenics, expression is tightly regulated and reversible. We describe optimal strategies for utilizing the alc system in A. thaliana.

    Uncertainty-aware situational understanding

    Situational understanding is impossible without causal reasoning and reasoning under and about uncertainty, i.e. probabilistic reasoning and reasoning about the confidence in the uncertainty assessment. We therefore consider the case of subjective (uncertain) Bayesian networks. In previous work we observed that when observations are out of the ordinary, confidence decreases because the relevant training data (the effective instantiations used to determine the probabilities of unobserved variables on the basis of the observed variables) is significantly smaller than the full training data (the total number of instantiations). For the ultimate goal of situational understanding, it is therefore of primary importance to be able to efficiently determine the reasoning paths that lead to low confidence wherever and whenever it occurs: this can guide specific data collection exercises to reduce that uncertainty. We propose three methods to this end, and we evaluate them on the basis of a case study developed in collaboration with professional intelligence analysts.
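    The confidence effect described above can be sketched with a simple Beta model: a conditional probability estimated from few effective instantiations has the same point estimate but a much wider posterior than one estimated from the full training data. The Beta(1, 1) prior and the counts below are illustrative assumptions, not the paper's subjective Bayesian network machinery.

```python
from scipy.stats import beta

def credible_interval(successes, effective_n, level=0.95):
    """Posterior credible interval for a probability under a Beta(1, 1) prior."""
    return beta(1 + successes, 1 + effective_n - successes).interval(level)

# Same point estimate (0.7), very different confidence: 700 of 1000 total
# instantiations vs. the 7 of 10 effective instantiations that match an
# out-of-the-ordinary observation.
print(credible_interval(700, 1000))   # tight interval around 0.7
print(credible_interval(7, 10))       # wide interval: low confidence
```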