65 research outputs found

    Ethnoontology: Ways of world‐building across cultures

    This article outlines a program of ethnoontology that brings together empirical research in the ethnosciences with ontological debates in philosophy. First, we survey empirical evidence from heterogeneous cultural contexts and disciplines. Second, we propose a model of cross‐cultural relations between ontologies beyond a simple divide between universalist and relativist models. Third, we argue for an integrative model of ontology building that synthesizes insights from different fields such as biological taxonomy, cognitive science, cultural anthropology, and political ecology. We conclude by arguing that a program of ethnoontology provides philosophers both with insights about traditional issues such as debates about natural kinds and with novel strategies for connecting philosophy with pressing global issues such as the conservation of local environments and the self‐determination of Indigenous communities.

    Visual interaction with dimensionality reduction: a structured literature analysis

    Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally interactively and on the fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control, such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR. Summarizing the results in a “human in the loop” process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities.
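One of the recurring interaction scenarios named in the abstract, selecting relevant features and re-running the reduction, can be illustrated with a minimal numpy-only sketch. This is illustrative only: PCA stands in for the DR technique, and the function names are invented rather than drawn from any of the surveyed systems.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project rows of X onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # low-dimensional embedding

def interactive_rerun(X, selected_features, n_components=2):
    """Re-run DR on an analyst-chosen feature subset (one interaction scenario)."""
    return pca_project(X[:, selected_features], n_components)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                    # 100 points, 10 features
emb_all = pca_project(X)                          # initial embedding, all features
emb_sub = interactive_rerun(X, [0, 1, 2, 3])      # analyst deselects features 4-9
```

In an actual visual analytics system the `selected_features` list would come from an interactive view (e.g. brushing a feature panel) rather than being hard-coded, and the new embedding would be redrawn on the fly.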

    What you see is what you can change: human-centred machine learning by interactive visualization

    Visual analytics (VA) systems help data analysts solve complex problems interactively, by integrating automated data analysis and mining, such as machine learning (ML) based methods, with interactive visualizations. We propose a conceptual framework that models human interactions with ML components in the VA process, and that puts the central relationship between automated algorithms and interactive visualizations into sharp focus. The framework is illustrated with several examples, and we further elaborate on the interactive ML process by identifying key scenarios where ML methods are combined with human feedback through interactive visualization. We derive five open research challenges at the intersection of ML and visualization research, whose solution should lead to more effective data analysis.

    Using high angular resolution diffusion imaging data to discriminate cortical regions

    Brodmann's 100-year-old summary map has been widely used for cortical localization in neuroscience. There is a pressing need to update this map using non-invasive, high-resolution and reproducible data, in a way that captures individual variability. We demonstrate here that standard HARDI data has sufficiently diverse directional variation among grey matter regions to inform parcellation into distinct functional regions, and that this variation is reproducible across scans. This characterization of the signal variation as non-random and reproducible is the critical condition for successful cortical parcellation using HARDI data. This paper is a first step towards an individual cortex-wide map of grey matter microstructure. The gray/white matter and pial boundaries were identified on the high-resolution structural MRI images. Two HARDI data sets were collected from each individual and aligned with the corresponding structural image. At each vertex point on the surface tessellation, the diffusion-weighted signal was extracted from each image in the HARDI data set at a point halfway between the gray/white matter and pial boundaries. We then derived several features of the HARDI profile with respect to the local cortical normal direction, as well as several fully orientationally invariant features. These features were taken as a fingerprint of the underlying grey matter tissue and used to distinguish separate cortical areas. A support-vector machine classifier, trained on three distinct areas in repeat 1, achieved 80-82% correct classification of the same three areas in the unseen data from repeat 2 in three volunteers. Though gray matter anisotropy has been mostly overlooked hitherto, this approach may eventually form the foundation of a new cortical parcellation method in living humans. Our approach allows for further studies on the consistency of HARDI-based parcellation across subjects and comparison with independent microstructural measures such as ex-vivo histology.
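The classification step described above, training on per-vertex fingerprints from one scan repeat and testing on the unseen repeat, can be sketched on synthetic data. This is a hedged illustration only: the feature values are fabricated, and a simple nearest-centroid classifier stands in for the paper's support-vector machine.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 8                       # features of the per-vertex HARDI profile
areas = (0, 1, 2)                    # three distinct cortical areas
means = {a: rng.normal(size=n_features) for a in areas}  # per-area mean profile

def fingerprints(area_labels, noise=0.5):
    """Synthetic per-vertex feature fingerprints: area mean plus scan noise."""
    return np.array([means[a] + noise * rng.normal(size=n_features)
                     for a in area_labels])

labels = np.repeat(areas, 50)        # 50 vertices per area
X_rep1 = fingerprints(labels)        # scan repeat 1 (training data)
X_rep2 = fingerprints(labels)        # scan repeat 2 (unseen test data)

# Nearest-centroid stand-in for the SVM: one centroid per area from repeat 1
centroids = np.array([X_rep1[labels == a].mean(axis=0) for a in areas])

# Classify each repeat-2 vertex by its closest centroid
dists = np.linalg.norm(X_rep2[:, None, :] - centroids[None, :, :], axis=2)
accuracy = (dists.argmin(axis=1) == labels).mean()
```

The key property the paper relies on is the same one the toy model encodes: the fingerprint varies systematically between areas but is reproducible across repeats, so a classifier trained on one scan generalizes to the other.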

    In Vivo Electroporation Enhances the Immunogenicity of an HIV-1 DNA Vaccine Candidate in Healthy Volunteers

    DNA-based vaccines have been safe but weakly immunogenic in humans to date. We sought to determine the safety, tolerability, and immunogenicity of ADVAX, a multigenic HIV-1 DNA vaccine candidate, injected intramuscularly by in vivo electroporation (EP) in a Phase-1, double-blind, randomized placebo-controlled trial in healthy volunteers. Eight volunteers each received 0.2 mg, 1 mg, or 4 mg ADVAX or saline placebo via EP, or 4 mg ADVAX via standard intramuscular (IM) injection at weeks 0 and 8. A third vaccination was administered to eleven volunteers at week 36. EP was safe, well-tolerated and considered acceptable for a prophylactic vaccine. EP delivery of ADVAX increased the magnitude of HIV-1-specific cell-mediated immunity by up to 70-fold over IM injection, as measured by gamma interferon ELISpot. The number of antigens to which a response was detected improved with EP and increasing dosage. Intracellular cytokine staining analysis of ELISpot responders revealed both CD4+ and CD8+ T cell responses, with co-secretion of multiple cytokines. This is the first demonstration in healthy volunteers that EP is safe, tolerable, and effective in improving the magnitude, breadth and durability of cellular immune responses to a DNA vaccine candidate. ClinicalTrials.gov: NCT00545987.

    Innovative Visualizations Shed Light on Avian Nocturnal Migration

    We acknowledge the support provided by COST–European Cooperation in Science and Technology through the Action ES1305 ‘European Network for the Radar Surveillance of Animal Movement’ (ENRAM) in facilitating this collaboration. We thank ENRAM members and researchers attending the EOU round table discussion ‘Radar aeroecology: unravelling population scale patterns of avian movement’ for feedback on the visualizations. We thank Arie Dekker for his feedback as jury member of the bird migration visualization challenge & hackathon hosted at the University of Amsterdam, 25–27 March 2015. We thank Willem Bouten and Kevin Winner for discussion of methodological design. We thank Kevin Webb and Jed Irvine for assistance with downloading, managing, and reviewing US radar data. We thank the Royal Meteorological Institute of Belgium for providing weather radar data.

    Globally, billions of flying animals undergo seasonal migrations, many of which occur at night. The temporal and spatial scales at which migrations occur and our inability to directly observe these nocturnal movements makes monitoring and characterizing this critical period in migratory animals’ life cycles difficult. Remote sensing, therefore, has played an important role in our understanding of large-scale nocturnal bird migrations. Weather surveillance radar networks in Europe and North America have great potential for long-term low-cost monitoring of bird migration at scales that have previously been impossible to achieve. Such long-term monitoring, however, poses a number of challenges for the ornithological and ecological communities: how does one take advantage of this vast data resource, integrate information across multiple sensors and large spatial and temporal scales, and visually represent the data for interpretation and dissemination, considering the dynamic nature of migration?
We assembled an interdisciplinary team of ecologists, meteorologists, computer scientists, and graphic designers to develop two different flow visualizations, which are interactive and open source, in order to create novel representations of broad-front nocturnal bird migration to address a primary impediment to long-term, large-scale nocturnal migration monitoring. We have applied these visualization techniques to mass bird migration events recorded by two different weather surveillance radar networks covering regions in Europe and North America. These applications show the flexibility and portability of such an approach. The visualizations provide an intuitive representation of the scale and dynamics of these complex systems, are easily accessible for a broad interest group, and are biologically insightful. Additionally, they facilitate fundamental ecological research, conservation, mitigation of human–wildlife conflicts, improvement of meteorological products, and public outreach, education, and engagement.

    The Goldilocks problem and extended cognition

    Abstract: According to the hypothesis of extended cognition (HEC), parts of the extrabodily world can constitute cognitive operations. I argue that the debate over HEC should be framed as a debate over the location and bounds of cognitive systems. The "Goldilocks problem" is how to demarcate these systems in a way that is neither too restrictive nor too permissive. I lay out a view of systems demarcation on which cognitive systems are sets of mechanisms for producing cognitive processes that are bounded by transducers and effectors: structures that turn physical stimuli into representations, and representations into physical effects. I show how the transducer-effector view can stop the problem of uncontrolled cognitive spreading that faces HEC, and illustrate its advantages relative to other views of system individuation. Finally, I argue that demarcating systems by transducers and effectors is not question-begging in the context of a debate over HEC.

"Roland had learned to see himself, theoretically, as a crossing-place for a number of systems, all loosely connected. He had been trained to see his idea of his 'self' as an illusion, to be replaced by a discontinuous machinery and electrical message-network of various desires, ideological beliefs and responses, language-forms and hormones and pheromones. Mostly he liked this. He had no desire for any strenuous Romantic self-assertion." (A. S. Byatt, Possession)

The embodied, extended, embedded, and enactive cognition movements promise revolutionary things both for cognitive science and our ordinary conception of ourselves. In cognitive science, they aim to loosen the Cartesian stranglehold on our theorizing and reorient us towards new models that recognize cognition as something not restricted to the brain, but as happening in the body and the world. Correlatively, this implies a vision of ourselves as beings whose cognitive nature is constituted in part by our bodily and worldly environments.
At the extreme, the picture emerging from these allied movements depicts us as "a vast parallel coalition of more or less influential forces, whose largely self-organizing unfolding makes us the thinking beings we are" (Clark, 2008, p. 131). This is the natural self-image to adopt if we free ourselves of the idea that in each of our brains there is a "Central Meaner" whose activities most significantly constitute our abilities to reason, plan, and carry out other acts characteristic of human intelligence.

Here I will focus on the arguments for the hypothesis of extended cognition (HEC): the claim that some aspects of everyday cognition actually take place in the extrabodily environment. This is distinct from the hypothesis of extended minds (HEM), which claims that some of our everyday mental functioning (as identified by folk psychology) takes place in the extrabodily environment. Both HEC and HEM face a pointed challenge: if some of the extrabodily environment is part of our cognition and mentation, what is to stop vast chunks of it from also being incorporated? In short, what is the principle of demarcation that determines that this aspect of the world, but not that one, should be counted as part of the mind or cognition? Call this the "Goldilocks problem" for psychological taxonomy. The problem is to find a way of drawing boundaries around mind and cognition that is neither too wide nor too narrow, but rather "just right".

There may be varying notions of what counts as "just right" in this debate, of course. The main criterion for an adequate solution is that it be explicit and principled. A possible further condition is that it conform with well-entrenched practices and taxonomies in cognitive science; in other words, that it be conservative. Conservatism is likely to be seen as prejudicial by HEC's proponents, however, who aim precisely to reform these ways of thinking.
But the deeper rationale for conservatism is that any criterion we advance must at least account for past successes; and ideally, if we are engaged in a revisionary project, we should also provide some sort of demonstrable explanatory advantage over the practices that underlie those successes.

I will argue that the proper locus of the debate over HEC should be how to draw the boundaries of cognitive systems, and lay out a principled demarcation criterion for such systems. The proposal I defend, the transducer-effector view, harks back to traditional notions about classical computational systems. The upshot of this view is that most of the examples of alleged extended cognition turn out not to be. Moreover, this view can accommodate the explanatory successes of traditional cognitive science, and has significant advantages over other systems-based views in the literature. Finally, it offers a solution to the problem of cognitive spread. These constitute significant arguments in its favor.

Here we should distinguish between states that are attributable to the whole person or whole organism and those that are attributable to mechanisms that comprise parts of the person's cognitive system. In either sense, states are never isolated. As I will understand them, cognitive states only come about in virtue of organized systems of processes and mechanisms, and they belong to the systems whose operations produce and sustain them.

The fundamentality of cognitive systems

Sometimes HEC is stated in terms of the spatial location of cognitive processes rather than vehicles. As Rowlands (2009, p. 1) puts it, it is the claim that "at least some token cognitive processes extend into the cognizing organism's environment". Cognitive processes are sequences of cognitive states that are produced in virtue of the operations made available by the underlying architecture of the system to which they belong. (Saying what a "formal" property is in this context is extraordinarily difficult; see Schneider (2009) for recent discussion. All that I will mean by "formal" properties here is non-semantic properties; they may be physical, functional, etc.) Different architectures, employing different kinds of representational vehicles, will have correspondingly different operations available to them. In classical symbolic systems, the operations include comparing and concatenating symbols, and transforming strings of symbols into new strings in accordance with some rule; e.g., for systems that embody propositional logic, the rules might include AND-elimination and double-negation deletion. In connectionist systems the rules are those that determine how activation is passed from one layer to another and how the values of weights change over time. In systems using perceptual symbols, the rules might involve performing rotation on mental images, scanning an image for a match to a symbol, or determining the overlap in volume between two represented bodies in space. This notion of a cognitive process is generic: the operations that determine the next stage in processing can be of any sort, so long as they turn one representation (or set of representations) into another in some systematic way.

Finally, HEC is sometimes claimed to be a thesis about the spatial distribution of cognitive systems: organized sets of subsystems, with links between the subsystems to pass information and control signals (e.g., activation or inhibition of another subsystem). Putting these together, we get a picture on which cognitive systems can extend into the body, a claim sometimes taken to be uncontroversial. However, the claim that cognitive processing takes place in one's big toe would be surprising. I agree with this latter claim, but deny that human cognitive systems include every part of the human body. The error here is in taking whole human beings to be cognitive systems. Human beings possess cognitive systems, but their boundaries are not those of the whole human.
So if there is no cognitive processing in one's big toe, this is plausibly because the cognitive system embedded in the whole human doesn't extend to the toe itself.

Rowlands' (2009) attempt to demarcate cognitive processes also shows the need to move to an approach that takes systems as fundamental. His proposal is: a process P is a cognitive process iff (1) P involves information processing; (2) this processing has the proper function of making new information available either to the subject or to later processing operations; (3) the information processing involves the production of a representational state; and (4) the process "belongs to the subject of that representational state" (p. 8, emphasis in original). The sticking point here is condition (4), the ownership condition. Rowlands rightly points out that understanding what it means for a process to be owned is an extremely difficult task, but he adds that spelling this out is a job for internalists as well as externalists. Without some such criterion we face the problem of "cognitive bloat" again. For instance, to borrow his example, the representations produced by my telescope as I use it to perceive Jupiter's moons would be at risk of being cognitive processes that belong to me, since they satisfy conditions (1)-(3). Ownership is intended to block bloat.

The admittedly tentative suggestion that Rowlands gives for spelling out the notion of ownership appeals to the integration of one process with others. Roughly, a process P is integrated with other processes Q and R "when it is fulfilling its proper function with respect to those processes" (p. 17). In the case of cognitive processes, this presumably means that, for example, P takes its inputs from Q and feeds its outputs to R. And a process is owned by a subject iff it is sufficiently well-integrated with other processes in the subject's life.
"Ownership is to be understood in terms of the appropriate sort of integration into the life - and in particular, the psychological life - of a subject" (p. 17). A metaphysical worry that arises with respect to this picture is that we do not yet know what a "subject" is here. But setting this aside, this criterion seems too weak to rule out the earlier counterexamples. The telescope that I peer through is fulfilling its proper function of representing distant moons to its user when I use it. The telescopic processing is integrated with my own visual processing, just as its designers intended. And similarly with any other extrabodily tool that I use, since tools are defined (in part) in terms of their proper functions. (Interestingly, Rowlands treats such cases as a problem for HEC, although there may well be advocates of the thesis for whom they are simply natural, indeed welcome, consequences of the view. But dialectically this is fair, since both opponents of HEC and some of its defenders will want a principled way to rule out at least some cases of cognitive bloat.) If integration (and hence ownership) only requires that a process be fulfilling its proper information-processing function with respect to other processes, then bloat remains unblocked.

Rowlands might try to tighten up the conditions on integration. Perhaps it's required that a process be integrated with many other processes for it to be genuinely owned. In section 5, we will consider Rupert's attempt to define cognitive systems in something like this way. Or perhaps the integration has to take a specific form. However, for the time being, my diagnosis is that the mistake at work here is starting with the notion of a cognitive process and then trying to spell out what it is for these processes to be integrated with a "subject".
We can make greater progress if we start with the notion of a cognitive system, and explain what it is for a process to be taking place in that system by appeal to the demarcation criteria for such systems in general.

The transducer-effector view of systems demarcation

The conception of a cognitive system that I will be working with is one that derives from Pylyshyn's discussion of cognitive (functional) architectures (1984, pp. 30-1). A cognitive system is a set of physical structures and mechanisms that collectively realize a specific functional architecture. Such an architecture makes available a representational vocabulary, a set of primitive operations defined over them, a set of resources that these operations may make use of, and a set of control structures that determine how the activation and inhibition of operations and resources is orchestrated. These collectively determine the internal dynamics of processes in the system: how one set of input representations triggers a cascade of processing throughout various parts of the system, resulting eventually in some sort of output.

Within this generic definition of a cognitive system, there are many more determinate ways to fill in the details of the architecture, and much of the debate among working cognitive psychologists and neuropsychologists centers on this problem. The specific sort of architecture that is at work in human cognition is not our main concern here, however. Neither is the rather difficult question of how we are to individuate types of architecture. Rather, what is relevant is that the conception of cognitive systems as sets of mechanisms that realize a functional architecture comes with a criterion for deciding what is internal to the system and what is external to it. The criterion is this: the boundaries of a cognitive system are given by the location of its transducers and its effectors.

A transducer, in Pylyshyn's terms (pp. 151-178), is a device that (1) maps inputs described in physical terms into outputs described in representational terms in a way that is (2) interrupt-driven and (3) primitive and nonsymbolic. Saying that transducers are interrupt-driven is just to say that their activation is mandatorily determined by the presence of their physical input conditions. Saying that they are primitive implies that they do not carry out their mapping function by any internal representational means; their operations do not involve cognitive processes, although they may obviously be physically complex. The most important condition on transducers, for our purposes, is that they have the function of turning physical stimuli into representational or computational states. The inputs to a transducer are not themselves representational; transducers respond only to physical properties and magnitudes. They take, for example, pressure, temperature, vibrations in the air, or ambient light in a region of space, and produce vehicles that represent something, most frequently some aspect of the environment that the stimulus typically carries information about. Transducers can thus be thought of as the place where things in the external environment become input for the cognitive system.

The same can be said of effectors. Corresponding to the above definition of a transducer, an effector is a device that (1) maps inputs described in representational terms into outputs described in physical terms in a way that is (2) interrupt-driven and (3) primitive and nonsymbolic. Again, to say that an effector is primitive is to say that its operations themselves are not computational or representational. This is what justifies our treating it as a primitive processor from the point of view of the architecture. It may be difficult to determine how to "chunk" a complex neural system into those parts that carry out the function of transducers and effectors.
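Pylyshyn's transducer definition can be rendered as a toy sketch, purely for illustration; the types, threshold, and contents below are invented, not drawn from the text:

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    kind: str         # e.g. "light", "pressure": a purely physical description
    magnitude: float  # a physical magnitude, not a representation

@dataclass
class Representation:
    content: str      # what the resulting state is about

class Transducer:
    """Maps physically described inputs to representational outputs.

    Interrupt-driven: it fires whenever its physical input condition is
    present. Primitive: the mapping itself involves no representations;
    it is a black box from the architecture's point of view."""

    def __init__(self, kind, threshold, content):
        self.kind, self.threshold, self.content = kind, threshold, content

    def respond(self, stimulus):
        if stimulus.kind == self.kind and stimulus.magnitude >= self.threshold:
            return Representation(content=self.content)
        return None  # input condition absent: no representation produced

# A hypothetical light transducer at the system's boundary
light_sensor = Transducer("light", threshold=0.2, content="bright light ahead")
rep = light_sensor.respond(Stimulus("light", 0.9))
```

The `None` branch marks the boundary the view cares about: only stimuli meeting the device's physical input condition yield a representational state, so the transducer fixes what counts as genuine input to the system. An effector would mirror this interface, mapping a Representation back to a physical magnitude.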
The point is not to minimize these complexities, but only to note that the notion of a peripheral sensorimotor cell and the notion of a transducer-effector need not always coincide.

What does it mean to be "within" the boundaries of transducer-effectors? Physical containment is neither necessary nor sufficient. What matters is that something take its input from them, or deliver its output to them. Normally, in the case of biological organisms, this will involve inbound or outbound spatial movement, but it need not. We can easily imagine strange creatures that have their transducers on their bodily surfaces, but keep their central nervous system elsewhere. Dennett's thought experiment in which a series of mishaps results in his ending up as a brain in a vat, connected by radio signals to a distant body, is a perfect example.

The motivation for adopting the transducer view can be seen in Pylyshyn's discussion. He remarks that aspects of the physical environment to which the computer may be called on to respond - say, in a machine vision or speech recognition system - generally are not the same as the aspects that define computational states. Indeed, rarely is a physical event outside a computer's mainframe considered a computational event (though, with distributed computation, the boundaries of the "computer" become less well-defined). Computational events invariably are highly specific equivalence classes of electrical events within the machine's structure. "If, however, we have a device that systematically maps interesting equivalence classes of physical events into classes of computationally relevant internal events, we encounter the possibility of coupling the machine - as computer - to a noncomputational environment" (Pylyshyn, 1984, pp. 151-2).

A virtue of this account, then, is not that it merely gives us a way of telling inside from outside.
It also does the much more important job of telling us what sorts of events count as input to the system and output of the system. It is possible to influence the course of processing in a system in any number of ways. A simple knock on the head may produce thoughts of being Napoleon or hallucinations of pink bears. The knock on the head is the cause, but it is not an input, since the system is not designed to produce those states in response to head-knocks. The transducer view has substantial initial plausibility. It provides a clear criterion for distinguishing cognitive systems from their environment, and in doing so helps us to make the important distinction between what is properly input to and output from these systems. Its further virtues will emerge as it is compared to its rivals.

Skepticism about transducers

Haugeland (1998) has argued that the notion of a transducer is fundamentally a confused one, and that focus on it distracts us from the important facts concerning how organisms interact fluidly with their environments. He offers several related arguments for the conclusion that a theory of behavior should dispense with the notion of transducers and effectors entirely. None of these, however, is persuasive.

Haugeland proposes that transducers are inherently "low-bandwidth" devices (p. 220). That is, they take a relatively information-rich stream of stimuli from the world and squash it down to a few bits encoded in a symbolic description. But this, he conjectures, results in a system that loses significant capacity to respond sensitively to the details of the perceptual situation. A system lacking this sort of "bottleneck" could engage more fluidly with its surroundings. So we should reject the transducer conception of how cognizers relate to the world, in favor of a non-transduction-based "high-bandwidth" interaction. But as Clark (2008, pp. 31-3) points out, it is a mistake to suppose that all transducers need to be low-bandwidth.
This seems to be an illusion generated by Haugeland's focus on symbolic descriptions as the output of transduction. Symbols, in something like the LOT sense, are one possible output, but it is equally possible that transducers output elements of fairly fine-grained perceptual models of the environment. These perceptual symbol