Learning in a Landscape: Simulation-building as Reflexive Intervention
This article makes a dual contribution to scholarship in science and
technology studies (STS) on simulation-building. It both documents a specific
simulation-building project and demonstrates a concrete contribution of STS
insights to interdisciplinary work. The article analyses the struggles that
arise in the course of determining what counts as theory, as model and even as
a simulation. Such debates are especially decisive when working across
disciplinary boundaries, and their resolution is an important part of the work
involved in building simulations. In particular, we show how ontological
arguments about the value of simulations tend to determine the direction of
simulation-building. This dynamic makes it difficult to maintain an interest in
the heterogeneity of simulations and a view of simulations as unfolding
scientific objects. As an outcome of our analysis of the process and
reflections about interdisciplinary work around simulations, we propose a
chart, as a tool to facilitate discussions about simulations. This chart can be
a means to create common ground among actors in a simulation-building project,
and a support for discussions that address other features of simulations
besides their ontological status. Rather than foregrounding the chart's
classificatory potential, we stress its (past and potential) role in discussing
and reflecting on simulation-building as an interdisciplinary endeavor. This chart
is a concrete instance of the kinds of contributions that STS can make to
better, more reflexive practice of simulation-building.
Comment: 37 pages
A Bayesian spatial random effects model characterisation of tumour heterogeneity implemented using Markov chain Monte Carlo (MCMC) simulation
The focus of this study is the development of a statistical modelling procedure for characterising intra-tumour heterogeneity, motivated by recent clinical literature indicating that a variety of tumours exhibit a considerable degree of genetic spatial variability. A formal spatial statistical model has been developed and used to characterise the structural heterogeneity of a number of supratentorial primitive neuroectodermal tumours (PNETs), based on diffusion-weighted magnetic resonance imaging. Particular attention is paid to the spatial dependence of diffusion close to the tumour boundary, in order to determine whether the data provide statistical evidence to support the proposition that water diffusivity in the boundary region of some tumours exhibits a deterministic dependence on distance from the boundary, in excess of an underlying random 2D spatial heterogeneity in diffusion. Tumour spatial heterogeneity measures were derived from the diffusion parameter estimates obtained using a Bayesian spatial random effects model. The analyses were implemented using Markov chain Monte Carlo (MCMC) simulation. Posterior predictive simulation was used to assess the adequacy of the statistical model. The main observations are that the previously reported relationship between diffusion and boundary proximity remains observable and achieves statistical significance after adjusting for an underlying random 2D spatial heterogeneity in the diffusion model parameters. A comparison of the magnitude of the boundary-distance effect with the underlying random 2D boundary heterogeneity suggests that both are important sources of variation in the vicinity of the boundary. No consistent pattern emerges from a comparison of the boundary and core spatial heterogeneity, with no indication of a consistently greater level of heterogeneity in one region compared with the other.
The results raise the possibility that DWI might provide a surrogate marker of intra-tumour genetic regional heterogeneity, which would provide a powerful tool with applications in both patient management and cancer research.
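As an illustration of the inference machinery the abstract describes, here is a minimal sketch of a Metropolis MCMC sampler for a boundary-distance effect on diffusivity. It is a deliberately simplified stand-in, not the paper's actual spatial random effects model: the data are synthetic, the voxel-level random effects are folded into a single Gaussian noise term, and all names (`dist`, `log_post`, `metropolis`) are assumptions for the sketch.

```python
import numpy as np

# Hypothetical illustration (not the paper's model): Metropolis sampling of
# intercept b0 and boundary-distance slope b1 for synthetic diffusivity data.
rng = np.random.default_rng(0)

n = 200
dist = rng.uniform(0, 5, n)                 # distance from tumour boundary
true_b0, true_b1 = 1.0, 0.3
y = true_b0 + true_b1 * dist + rng.normal(0, 0.5, n)   # synthetic measurements

def log_post(b0, b1, sigma=0.5):
    # flat priors on (b0, b1); Gaussian likelihood with known noise scale
    resid = y - (b0 + b1 * dist)
    return -0.5 * np.sum(resid**2) / sigma**2

def metropolis(n_iter=5000, step=0.05):
    b0, b1 = 0.0, 0.0
    lp = log_post(b0, b1)
    samples = []
    for _ in range(n_iter):
        p0 = b0 + rng.normal(0, step)       # random-walk proposal
        p1 = b1 + rng.normal(0, step)
        lp_new = log_post(p0, p1)
        if np.log(rng.uniform()) < lp_new - lp:   # accept/reject
            b0, b1, lp = p0, p1, lp_new
        samples.append((b0, b1))
    return np.array(samples)

samples = metropolis()
b1_mean = samples[2500:, 1].mean()          # posterior mean after burn-in
print(round(b1_mean, 2))
```

Posterior predictive checking, as used in the paper, would additionally simulate replicate data from the sampled parameters and compare them with the observations.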
Cortical Learning of Recognition Categories: A Resolution of the Exemplar Vs. Prototype Debate
Do humans and animals learn exemplars or prototypes when they categorize objects and events in the world? How are different degrees of abstraction realized through learning by neurons in inferotemporal and prefrontal cortex? How do top-down expectations influence the course of learning? Thirty related human cognitive experiments (the 5-4 category structure) have been used to test competing views in the prototype-exemplar debate. In these experiments, during the test phase, subjects unlearn in a characteristic way items that they had learned to categorize perfectly in the training phase. Many cognitive models do not describe how an individual learns or forgets such categories through time. Adaptive Resonance Theory (ART) neural models provide such a description, and also clarify both psychological and neurobiological data. Matching of bottom-up signals with learned top-down expectations plays a key role in ART model learning. Here, an ART model is used to learn incrementally in response to 5-4 category structure stimuli. Simulation results agree with experimental data, achieving perfect categorization in training and a good match to the pattern of errors exhibited by human subjects in the testing phase. These results show how the model learns both prototypes and certain exemplars in the training phase. ART prototypes are, however, unlike the ones posited in the traditional prototype-exemplar debate. Rather, they are critical patterns of features to which a subject learns to pay attention based on past predictive success and the order in which exemplars are experienced. Perturbations of old memories by newly arriving test items generate a performance curve that closely matches the performance pattern of human subjects. 
The model also clarifies exemplar-based accounts of data concerning amnesia.
Defense Advanced Research Projects Agency SyNaPSE program (Hewlett-Packard Company, DARPA HR0011-09-3-0001; HRL Laboratories LLC #801881-BS under HR0011-09-C-0011); Science of Learning Centers program of the National Science Foundation (NSF SBE-0354378)
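The matching dynamic at the heart of ART can be sketched in a few lines. The following toy ART-1-style learner is a simplification, not the full neural model used in the paper: bottom-up input is compared with learned top-down prototypes, a vigilance test decides between resonance and reset, and fast learning prunes each winning prototype to the critical features shared with the input. The function name and category structure here are illustrative assumptions.

```python
import numpy as np

# Toy ART-1-style categorization over binary feature vectors (simplified).
def art1(inputs, vigilance=0.6):
    prototypes = []                     # learned top-down expectations
    labels = []
    for x in inputs:
        chosen = None
        # rank existing categories by a choice function |x AND w| / (0.5 + |w|)
        order = sorted(range(len(prototypes)),
                       key=lambda j: -(x & prototypes[j]).sum()
                                      / (0.5 + prototypes[j].sum()))
        for j in order:
            match = (x & prototypes[j]).sum() / x.sum()
            if match >= vigilance:      # resonance: expectation matches input
                prototypes[j] = x & prototypes[j]   # fast learning (AND rule)
                chosen = j
                break
        if chosen is None:              # mismatch reset: recruit new category
            prototypes.append(x.copy())
            chosen = len(prototypes) - 1
        labels.append(chosen)
    return prototypes, labels

X = np.array([[1, 1, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1]], dtype=int)
protos, labels = art1(X)
print(labels)
```

Note how the learned prototypes shrink to the attended critical features as exemplars arrive in order, echoing the abstract's point that ART prototypes depend on past predictive success and presentation order.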
Integrating modes of policy analysis and strategic management practice : requisite elements and dilemmas
There is a need to bring methods to bear on public problems that are inclusive, analytic, and quick. This paper describes the efforts of three pairs of academics working from three different though complementary theoretical foundations and intervention backgrounds (i.e., ways of working) who set out together to meet this challenge. Each of the three pairs had conducted dozens of interventions that had been regarded as successful or very successful by the client groups in dealing with complex policy and strategic problems. One approach focused on leadership issues and stakeholders, another on negotiating competitive strategic intent with attention to stakeholder responses, and the third on analysis of feedback ramifications in developing policies. This paper describes the 10-year longitudinal research project designed to address the above challenge. The important outcomes are reported: the requisite elements of a general integrated approach and the enduring puzzles and tensions that arose from seeking to design a wide-ranging multi-method approach.
Being-in-the-world-with: Presence Meets Social And Cognitive Neuroscience
In this chapter we will discuss the concepts of “presence” (Inner Presence) and “social presence” (Co-presence) within a cognitive and ecological perspective. Specifically, we claim that the concepts of “presence” and “social presence” are the possible links between self, action, communication and culture. In the first section we will provide a capsule view of Heidegger’s work by examining the two main features of the Heideggerian concept of “being”: spatiality and “being with”. We argue that different visions from social and cognitive sciences – Situated Cognition, Embodied Cognition, Enactive Approach, Situated Simulation, Covert Imitation – and discoveries from neuroscience – Mirror and Canonical Neurons – have many contact points with this view. In particular, these data suggest that our conceptual system dynamically produces contextualized representations (simulations) that support grounded action in different situations. This is allowed by a common coding – the motor code – shared by perception, action and concepts. This common coding also allows the subject to natively recognize actions done by other selves within the phenomenological contents. In this picture we argue that the role of presence and social presence is to allow the process of self-identification through the separation between “self” and “other,” and between “internal” and “external”. Finally, implications of this position for communication and media studies are discussed by way of conclusion.
Verification-guided modelling of salience and cognitive load
Well-designed interfaces use procedural and sensory cues to increase the cognitive salience of appropriate actions. However, empirical studies suggest that cognitive load can influence the strength of those cues. We formalise the relationship between salience and cognitive load revealed by empirical data. We add these rules to our abstract cognitive architecture, based on higher-order logic and developed for the formal verification of usability properties. The interface of a fire engine dispatch task from the empirical studies is then formally modelled and verified. The outcomes of this verification and their comparison with the empirical data provide a way of assessing our salience and load rules. They also guide further iterative refinements of these rules. Furthermore, the juxtaposition of the outcomes of formal analysis and empirical studies suggests new experimental hypotheses, thus providing input to researchers in cognitive science.
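The flavour of such verification can be conveyed with a toy state-space check. This is a stand-in for the paper's higher-order-logic development, not the authors' actual rules: the attenuation rule, the threshold, and all names here are assumptions made for the sketch. It exhaustively enumerates cue strengths and load levels and reports every state in which an action's effective salience falls below the level needed for the user to take it.

```python
from itertools import product

# Assumed rule (illustrative only): cognitive load attenuates the
# procedural/sensory cues that make an action salient.
def effective_salience(cue_strength, load):
    return cue_strength - 0.3 * load

def verify(threshold=0.5):
    """Exhaustively check all modelled states; return the failing ones."""
    failures = []
    for cue, load in product((0.4, 0.7, 1.0), (0, 1, 2)):
        if effective_salience(cue, load) < threshold:
            failures.append((cue, load))
    return failures

print(verify())
```

Even this toy version shows the shape of the method: a counterexample list, like the one a model checker or theorem prover would surface, points the designer at cue/load combinations where the interface needs stronger cues.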
The Need to Support Data Flow Graph Visualization of Forensic Lucid Programs, Forensic Evidence, and Their Evaluation by GIPSY
Lucid programs are data-flow programs and can be visually represented as data
flow graphs (DFGs) and composed visually. Forensic Lucid, a Lucid dialect, is a
language to specify and reason about cyberforensic cases. It includes the
encoding of the evidence (representing the context of evaluation) and the crime
scene modeling in order to validate claims against the model and perform event
reconstruction, potentially within large swaths of digital evidence. To aid
investigators in modeling the scene and evaluating it, instead of typing a
Forensic Lucid program, we propose to extend the design and implementation of
Lucid DFG programming to Forensic Lucid case modeling and specification, to
enhance the usability of the language and the system. We briefly discuss the
related work on visual programming and DFG modeling in an attempt to
define and select one approach or a composition of approaches for Forensic
Lucid based on various criteria such as previous implementation, wide use, and
formal backing in terms of semantics and translation. In the end, we solicit
readers' constructive opinions, feedback, comments, and recommendations
within the context of this short discussion.Comment: 11 pages, 7 figures, index; extended abstract presented at VizSec'10
at http://www.vizsec2010.org/posters ; short paper accepted at PST'1
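The core idea of evaluating a visually composed data-flow graph can be sketched compactly. The following is a hypothetical minimal DFG interpreter, not GIPSY's actual machinery: nodes are operators (here, trivial predicates over evidential observations), edges carry values, and evaluation proceeds in topological order, the way a composed Lucid DFG would be interpreted. All class and node names are assumptions for the sketch.

```python
from collections import defaultdict, deque

# Minimal data-flow graph: nodes are (operator, input names); evaluation
# runs Kahn's topological-order algorithm, firing a node once all of its
# inputs have produced values.
class DFG:
    def __init__(self):
        self.nodes = {}                      # name -> (op, input names)
        self.edges = defaultdict(list)       # producer -> consumers

    def add(self, name, op, *inputs):
        self.nodes[name] = (op, inputs)
        for i in inputs:
            self.edges[i].append(name)

    def evaluate(self):
        indeg = {n: len(ins) for n, (_, ins) in self.nodes.items()}
        ready = deque(n for n, d in indeg.items() if d == 0)
        values = {}
        while ready:
            n = ready.popleft()
            op, ins = self.nodes[n]
            values[n] = op(*(values[i] for i in ins))
            for m in self.edges[n]:          # a fired node enables consumers
                indeg[m] -= 1
                if indeg[m] == 0:
                    ready.append(m)
        return values

g = DFG()
g.add("obs1", lambda: True)                  # evidential observation (stub)
g.add("obs2", lambda: False)
g.add("claim", lambda a, b: a and not b, "obs1", "obs2")  # claim over context
print(g.evaluate()["claim"])
```

A visual front end of the kind the abstract argues for would let investigators assemble such graphs by dragging nodes and wiring edges, with the textual Forensic Lucid program generated from the same structure.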