Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call ‘transformational abstraction’. Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to ‘nuisance variation’ in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance, because it is far better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain.
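The mechanism the abstract describes, composing a linear convolution with non-linear rectification and pooling so that representations become tolerant to nuisance variation, can be illustrated with a small NumPy sketch. This is not code from the paper; the images, kernel, and one-pixel shift are hypothetical choices for illustration:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation: the linear step of a convolutional layer."""
    kh, kw = k.shape
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    """Non-linear rectification."""
    return np.maximum(x, 0.0)

# Two copies of the same 2x2 blob, shifted by one pixel: nuisance variation.
img_a = np.zeros((12, 12)); img_a[4:6, 4:6] = 1.0
img_b = np.zeros((12, 12)); img_b[5:7, 5:7] = 1.0

kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])  # a toy vertical-edge detector

h_a = relu(conv2d(img_a, kernel))
h_b = relu(conv2d(img_b, kernel))

# In pixel space the two exemplars are plainly different...
assert not np.allclose(img_a, img_b)
# ...but a max-pooled summary of the feature maps is identical for both:
# the non-linear pooling step has absorbed the translation.
assert np.isclose(h_a.max(), h_b.max())
```

Here a single conv–ReLU step followed by global max pooling stands in for the pooling hierarchy; stacking such layers repeats the move, so that deeper representations tolerate progressively larger shifts and deformations.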
On the integration of digital technologies into mathematics classrooms
Trouche’s (2003) presentation at the Third Computer Algebra in Mathematics Education Symposium focused on the notions of instrumental genesis and of orchestration: the former concerning the mutual transformation of learner and artefact in the course of constructing knowledge with technology; the latter concerning the problem of integrating technology into classroom practice. At the Symposium, there was considerable discussion of the idea of situated abstraction, which the current authors have been developing over the last decade. In this paper, we summarise the theory of instrumental genesis and attempt to link it with situated abstraction. We then seek to broaden Trouche’s discussion of orchestration to elaborate the role of artefacts in the process, and describe how the notion of situated abstraction could be used to make sense of the evolving mathematical knowledge of a community as well as an individual. We conclude by elaborating the ways in which technological artefacts can provide shared means of mathematical expression, and discuss the need to recognise the diversity of students’ emergent meanings for mathematics, and the legitimacy of mathematical expression that may be initially divergent from institutionalised mathematics.
Reflections on the relationship between artificial intelligence and operations research
Historically, part of Artificial Intelligence's (AI's) roots lie in Operations Research (OR). How AI has extended the problem solving paradigm developed in OR is explored. In particular, by examining how scheduling problems are solved using OR and AI, it is demonstrated that AI extends OR's model of problem solving through the opportunistic use of knowledge, problem reformulation and learning
On relating functional modeling approaches: abstracting functional models from behavioral models
This paper presents a survey of functional modeling approaches and describes a strategy to establish functional knowledge exchange between them. This survey is focused on a comparison of function meanings and representations. It is argued that functions represented as input-output flow transformations correspond to behaviors in the approaches that characterize functions as intended behaviors. Based on this result a strategy is presented to relate the different meanings of function between the approaches, establishing functional knowledge exchange between them. It is shown that this strategy is able to preserve more functional information than the functional knowledge exchange methodology of Kitamura, Mizoguchi, and co-workers. The strategy proposed here consists of two steps. In step one, operation-on-flow functions are translated into behaviors. In step two, intended behavior functions are derived from behaviors. The two-step strategy and its benefits are demonstrated by relating functional models of a power screwdriver between methodologies
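The two-step strategy described above can be sketched as a pair of translations between representations. The sketch below is purely illustrative: the type names, fields, and the screwdriver fragment are hypothetical stand-ins, not the paper's formalism:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowFunction:
    """A function represented as an operation on an input-output flow."""
    operation: str
    flow: str

@dataclass(frozen=True)
class Behavior:
    """A state change or transformation exhibited by the device."""
    description: str

@dataclass(frozen=True)
class IntendedFunction:
    """A function characterized as an intended behavior."""
    behavior: Behavior
    goal: str

def step_one(f: FlowFunction) -> Behavior:
    """Step 1: translate an operation-on-flow function into a behavior."""
    return Behavior(f"{f.operation} of {f.flow}")

def step_two(b: Behavior, goal: str) -> IntendedFunction:
    """Step 2: derive an intended-behavior function from a behavior."""
    return IntendedFunction(b, goal)

# Illustrative fragment inspired by the power-screwdriver demonstration case.
transmit = FlowFunction("transmission", "torque")
intended = step_two(step_one(transmit), "fasten screw")
```

Keeping the intermediate `Behavior` explicit is what lets the strategy preserve functional information across the two meanings of "function" rather than mapping one directly onto the other.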
Ethics of Artificial Intelligence Demarcations
In this paper we present a set of key demarcations that are particularly important when discussing ethical and societal issues of current AI research and applications. Properly distinguishing between Artificial General Intelligence and weak AI, between symbolic and connectionist AI, and between AI methods, data and applications is a prerequisite for an informed debate. Such demarcations would not only facilitate much-needed discussion of the ethics of current AI technologies and research; sufficiently establishing them would also enhance knowledge-sharing and support rigor in interdisciplinary research between the technical and social sciences. Comment: Proceedings of the Norwegian AI Symposium 2019 (NAIS 2019), Trondheim, Norway
Asymmetrical Multi-User Co-operative Whole Body Interaction in Abstract Domains
"Going back to our roots": second generation biocomputing
Researchers in the field of biocomputing have, for many years, successfully "harvested and exploited" the natural world for inspiration in developing systems that are robust, adaptable and capable of generating novel and even "creative" solutions to human-defined problems. However, in this position paper we argue that the time has now come for a reassessment of how we exploit biology to generate new computational systems. Previous solutions (the "first generation" of biocomputing techniques), whilst reasonably effective, are crude analogues of actual biological systems. We believe that a new, inherently inter-disciplinary approach is needed for the development of the emerging "second generation" of bio-inspired methods. This new modus operandi will require much closer interaction between the engineering and life sciences communities, as well as a bidirectional flow of concepts, applications and expertise. We support our argument by examining, in this new light, three existing areas of biocomputing (genetic programming, artificial immune systems and evolvable hardware), as well as an emerging area (natural genetic engineering) which may provide useful pointers as to the way forward. Comment: Submitted to the International Journal of Unconventional Computing
Does Empirical Embeddedness Matter? Methodological Issues on Agent-Based Models for Analytical Social Science
The paper deals with the use of empirical data in agent-based models in the social sciences. Agent-based models are too often viewed as highly abstract thought experiments conducted in artificial worlds, whose purpose is to generate, rather than empirically test, theoretical hypotheses. On the contrary, they should be viewed as models that need to be embedded in empirical data, both to calibrate them and to validate their findings. As a consequence, the search for strategies to find and extract data from reality, and to integrate agent-based models with other, more traditional empirical methods in social science, such as qualitative, quantitative, experimental and participatory methods, becomes a fundamental step of the modelling process. The paper argues that the characteristics of the empirical target matter. According to the characteristics of the target, ABMs can be differentiated into case-based models, typifications and theoretical abstractions. These differences pose different challenges for empirical data gathering, and imply the use of different validation strategies. Keywords: Agent-Based Models, Empirical Calibration and Validation, Taxonomy of Models
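The calibration step the abstract insists on can be made concrete with a toy exercise: fit one free parameter of a minimal agent-based model to an observed aggregate. Everything here is hypothetical for illustration, the diffusion model, the social-influence term, the parameter grid and the "observed" adoption share are not from the paper:

```python
import random

def run_abm(p_adopt, n_agents=500, n_steps=20, seed=0):
    """Toy innovation-diffusion ABM: each step, a non-adopter adopts with
    probability p_adopt plus a small social-influence term."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    for _ in range(n_steps):
        frac = sum(adopted) / n_agents
        for i in range(n_agents):
            if not adopted[i] and rng.random() < p_adopt + 0.05 * frac:
                adopted[i] = True
    return sum(adopted) / n_agents

def calibrate(target_frac, candidates):
    """Empirical calibration by grid search: pick the parameter whose
    simulated adoption share best matches the observed one."""
    return min(candidates, key=lambda p: abs(run_abm(p) - target_frac))

observed = 0.62  # hypothetical empirical adoption share
best_p = calibrate(observed, [i / 100 for i in range(1, 11)])
```

Grid search stands in for the calibration strategies the paper surveys; validation would then compare the calibrated model's other outputs against independent data rather than the series used for fitting.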