16 research outputs found

    Prediction and Topological Models in Neuroscience

    In the last two decades, philosophy of neuroscience has focused predominantly on explanation. Indeed, it has been argued that mechanistic models are the standard of explanatory success in neuroscience, favored over, among other things, topological models. However, explanatory power is only one virtue of a scientific model. Another is its predictive power. Unfortunately, the notion of prediction has received comparatively little attention in the philosophy of neuroscience, in part because predictions seem disconnected from interventions. In contrast, we argue that topological predictions can and do guide interventions in science, both inside and outside of neuroscience. Topological models allow researchers to predict many phenomena, including diseases, treatment outcomes, aging, and cognition. Moreover, we argue that these predictions also offer strategies for useful interventions. Topology-based predictions play this role regardless of whether they do or can receive a mechanistic interpretation. We conclude by making a case for philosophers to focus on prediction in neuroscience in addition to explanation.

    Identifying Goals of Agents by Learning from Observations

    The intention recognition problem is a difficult problem that consists in determining the intentions and goals of an agent. Solving this problem is useful when several agents interact with each other without knowing one another, since recognizing intentions can improve the effectiveness of their joint work. We present a method to infer the possible goals of an agent by observing it in a series of successful attempts to reach them. We model this problem as a case of concept learning and propose an algorithm that produces concise hypotheses. However, this first proposal does not take into account the sequential nature of our observations, and we discuss how to infer better hypotheses when we can make assumptions about the behavior of the agents and use background knowledge about the dynamics of the environment. We then provide a simple way to enrich our data by assuming the agent can compute the effects of its actions in the next step, and we study the properties of our proposal in two different settings. We show that our algorithm will always provide a possible goal if such a goal exists (meaning that there is indeed some set of states in which the agent always succeeds and stops in our observations).
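
    As a rough illustration of the concept-learning formulation described in this abstract, here is a minimal Python sketch. It assumes states are represented as sets of boolean fluents, treats the final state of each successful episode as a positive example of the goal concept, and treats intermediate states as negatives (the agent did not stop there). The representation, the function infer_goal, and the example fluents are illustrative assumptions, not taken from the paper.

        def infer_goal(episodes):
            """Return a concise conjunctive goal hypothesis (a set of fluents)
            that holds in every terminal state and in no intermediate state,
            or None if no such conjunction exists.

            episodes: non-empty list of episodes; each episode is a non-empty
            list of states, each state a frozenset of true fluents, ending in
            the state where the agent stopped (a successful attempt).
            """
            positives = [ep[-1] for ep in episodes]               # stopping states
            negatives = [s for ep in episodes for s in ep[:-1]]   # non-stop states
            # Most specific conjunctive hypothesis: fluents shared by all terminals.
            core = frozenset.intersection(*positives)
            if any(core <= neg for neg in negatives):
                return None  # no conjunction separates stopping from non-stopping states
            # Greedily drop fluents to get a concise hypothesis, keeping consistency.
            hypothesis = set(core)
            for f in sorted(core):
                trial = hypothesis - {f}
                if trial and not any(trial <= neg for neg in negatives):
                    hypothesis = trial
            return frozenset(hypothesis)

        # Toy observations: two successful attempts ending in the same stopping state.
        episodes = [
            [frozenset({"at_door"}), frozenset({"at_door", "door_open"}),
             frozenset({"inside", "door_open"})],
            [frozenset({"at_window"}), frozenset({"inside", "door_open"})],
        ]
        print(infer_goal(episodes))  # frozenset({'inside'})

    On these toy episodes the most specific consistent hypothesis {inside, door_open} is greedily shrunk to the concise hypothesis {inside}; when no conjunction of fluents separates the stopping states from the intermediate ones, the sketch returns None, mirroring the paper's guarantee that a possible goal is returned whenever one exists.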