Intellectualism and the argument from cognitive science
Intellectualism is the claim that practical knowledge, or ‘know-how’, is a kind of propositional knowledge. The debate over Intellectualism has appealed to two different kinds of evidence, semantic and scientific. This paper concerns the relationship between Intellectualist arguments based on the truth-conditional semantics of practical-knowledge ascriptions and anti-Intellectualist arguments based on cognitive science and propositional representation. The first half of the paper argues that the anti-Intellectualist argument from cognitive science rests on a naturalistic approach to metaphysics: its proponents assume that findings from cognitive science provide evidence about the nature of mental states. We demonstrate that this fact has been overlooked in the ensuing debate, resulting in inconsistency and confusion. Defenders of the semantic approach to Intellectualism engage with the argument from cognitive science in a way that implicitly endorses this naturalistic metaphysics, and they even rely on it to claim that cognitive science supports Intellectualism. In the course of their arguments, however, they also deny that scientific findings can have metaphysical import. We argue that this situation is preventing productive debate about Intellectualism, which would benefit from both sides being more transparent about their metaphilosophical assumptions.
The Future Evolution of Consciousness
ABSTRACT. What potential exists for improvements in the functioning of consciousness? The paper addresses this issue using global workspace theory. According to this model, the prime function of consciousness is to develop novel adaptive responses. Consciousness does this by putting together new combinations of knowledge, skills and other disparate resources that are recruited from throughout the brain. The paper’s search for potential improvements in the functioning of consciousness draws on studies of the shift during human development from the use of implicit knowledge to the use of explicit (declarative) knowledge. These studies show that the ability of consciousness to adapt a particular domain improves significantly as the transition to the use of declarative knowledge occurs in that domain. However, this potential for consciousness to enhance adaptability has not yet been realised to any extent in relation to consciousness itself. The paper assesses the potential for adaptability to be improved by the conscious adaptation of key processes that constitute consciousness. A number of sources (including the practices of religious and contemplative traditions) are drawn on to investigate how this potential might be realised.
Classes of Terminating Logic Programs
Termination of logic programs depends critically on the selection rule, i.e.
the rule that determines which atom is selected in each resolution step. In
this article, we classify programs (and queries) according to the selection
rules for which they terminate. This is a survey and unified view on different
approaches in the literature. For each class, we present a sufficient, for most
classes even necessary, criterion for determining that a program is in that
class. We study six classes: a program strongly terminates if it terminates for
all selection rules; a program input terminates if it terminates for selection
rules which only select atoms that are sufficiently instantiated in their input
positions, so that these arguments do not get instantiated any further by the
unification; a program local delay terminates if it terminates for local
selection rules which only select atoms that are bounded w.r.t. an appropriate
level mapping; a program left-terminates if it terminates for the usual
left-to-right selection rule; a program exists-terminates if there exists a
selection rule for which it terminates; finally, a program has bounded
nondeterminism if it only has finitely many refutations. We propose a
semantics-preserving transformation from programs with bounded nondeterminism
into strongly terminating programs. Moreover, by unifying different formalisms
and making appropriate assumptions, we are able to establish a formal hierarchy
between the different classes.
Comment: 50 pages. The following mistake was corrected: in Figure 5, the first clause for insert was insert([],X,[X]).
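The survey's notion of bounded nondeterminism can be illustrated outside Prolog. The Python sketch below is an analogue, not the paper's formalism: it enumerates the refutations of the classic append/3 program. The query append(Xs, Ys, [1,2,3]) has exactly four refutations, so it has bounded nondeterminism, while leaving all three arguments free yields infinitely many refutations, and only a depth-cut search, mirroring the paper's transformation into strongly terminating programs, terminates.

```python
from itertools import count, islice

def append_splits(zs):
    """Refutations of the query append(Xs, Ys, zs) for a ground list zs:
    every way to split zs into a prefix Xs and a suffix Ys."""
    for i in range(len(zs) + 1):
        yield zs[:i], zs[i:]

def append_free():
    """With all three arguments free, append/3 has one refutation per
    length of Xs -- infinitely many, so its nondeterminism is unbounded.
    Each answer is represented here just by the length of Xs."""
    return count(0)

# Bounded nondeterminism: finitely many refutations, so collecting
# them all terminates under any fair selection rule.
finite = list(append_splits([1, 2, 3]))

# The unbounded query terminates only once the search is cut off,
# mirroring the transformation into a strongly terminating program.
bounded = list(islice(append_free(), 5))
```

The four elements of `finite` correspond one-to-one to the answer substitutions Prolog would produce for append(Xs, Ys, [1,2,3]).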
What Can Be Learned from Computer Modeling? Comparing Expository and Modeling Approaches to Teaching Dynamic Systems Behavior
Computer modeling has been widely promoted as a means to attain higher order learning outcomes. Substantiating these benefits, however, has been problematic due to a lack of proper assessment tools. In this study, we compared computer modeling with expository instruction, using a tailored assessment designed to reveal the benefits of either mode of instruction. The assessment addresses proficiency in declarative knowledge, application, construction, and evaluation. The subscales differentiate between simple and complex structure. The learning task concerns the dynamics of global warming. We found that, for complex tasks, the modeling group outperformed the expository group on declarative knowledge and on evaluating complex models and data. No differences were found with regard to the application of knowledge or the creation of models. These results confirm that modeling and direct instruction lead to qualitatively different learning outcomes, and that these two modes of instruction cannot be compared on a single “effectiveness measure”.
Know-how, intellectualism, and memory systems
ABSTRACT. A longstanding tradition in philosophy distinguishes between know-that and know-how. This traditional “anti-intellectualist” view is so entrenched in folk psychology that it is often invoked in support of an allegedly equivalent distinction between explicit and implicit memory, derived from the so-called “standard model of memory.” In the last two decades, the received philosophical view has been challenged by an “intellectualist” view of know-how. Surprisingly, defenders of the anti-intellectualist view have turned to the cognitive science of memory, and to the standard model in particular, to defend their view. Here, I argue that this strategy is a mistake. As it turns out, upon closer scrutiny, the evidence from the cognitive psychology and neuroscience of memory does not support the anti-intellectualist approach, mainly because the standard model of memory is likely wrong. However, this need not be interpreted as good news for the intellectualist, for it is not clear that the empirical evidence necessarily supp…
Bridging the Gap between Programming Languages and Hardware Weak Memory Models
We develop a new intermediate weak memory model, IMM, as a way of
modularizing the proofs of correctness of compilation from concurrent
programming languages with weak memory consistency semantics to mainstream
multi-core architectures, such as POWER and ARM. We use IMM to prove the
correctness of compilation from the promising semantics of Kang et al. to POWER
(thereby correcting and improving their result) and ARMv7, as well as to the
recently revised ARMv8 model. Our results are mechanized in Coq, and to the
best of our knowledge, these are the first machine-verified compilation
correctness results for models that are weaker than x86-TSO.
ArrayBridge: Interweaving declarative array processing with high-performance computing
Scientists are increasingly turning to datacenter-scale computers to produce
and analyze massive arrays. Despite decades of database research that extols
the virtues of declarative query processing, scientists still write, debug and
parallelize imperative HPC kernels even for the most mundane queries. This
impedance mismatch has been partly attributed to the cumbersome data loading
process; in response, the database community has proposed in situ mechanisms to
access data in scientific file formats. Scientists, however, desire more than a
passive access method that reads arrays from files.
This paper describes ArrayBridge, a bi-directional array view mechanism for
scientific file formats, that aims to make declarative array manipulations
interoperable with imperative file-centric analyses. Our prototype
implementation of ArrayBridge uses HDF5 as the underlying array storage library
and seamlessly integrates into the SciDB open-source array database system. In
addition to fast querying over external array objects, ArrayBridge produces
arrays in the HDF5 file format just as easily as it can read from it.
ArrayBridge also supports time travel queries from imperative kernels through
the unmodified HDF5 API, and automatically deduplicates between array versions
for space efficiency. Our extensive performance evaluation in NERSC, a
large-scale scientific computing facility, shows that ArrayBridge exhibits
statistically indistinguishable performance and I/O scalability to the native
SciDB storage engine.
Comment: 12 pages, 13 figures.
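The versioned, deduplicated storage behind ArrayBridge's time-travel queries can be sketched in miniature. The Python class below is a hypothetical toy, not ArrayBridge's implementation (the real system deduplicates between versions of HDF5 datasets accessed through the unmodified HDF5 API, not Python lists): each committed version is recorded as a list of content-hashed chunks, chunks shared between versions are stored once, and any past version can be reconstructed from its chunk hashes.

```python
import hashlib

class VersionedArrayStore:
    """Toy chunk-level deduplicating store for successive array versions."""

    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}    # content hash -> chunk data (stored once)
        self.versions = []  # one entry per version: list of chunk hashes

    def _split(self, array):
        return [tuple(array[i:i + self.chunk_size])
                for i in range(0, len(array), self.chunk_size)]

    def commit(self, array):
        """Store a new version; chunks identical to earlier ones are reused."""
        hashes = []
        for chunk in self._split(array):
            h = hashlib.sha256(repr(chunk).encode()).hexdigest()
            self.chunks.setdefault(h, chunk)  # dedup: keep the first copy only
            hashes.append(h)
        self.versions.append(hashes)
        return len(self.versions) - 1         # version id

    def read(self, version):
        """Time-travel read: reconstruct any past version from its hashes."""
        out = []
        for h in self.versions[version]:
            out.extend(self.chunks[h])
        return out

store = VersionedArrayStore(chunk_size=4)
v0 = store.commit(list(range(8)))               # chunks (0..3) and (4..7)
v1 = store.commit([0, 1, 2, 3, 4, 5, 6, 99])    # first chunk is deduplicated
```

Committing two eight-element versions that differ only in their last chunk stores three chunks rather than four; that gap is exactly the space saving that chunk-level deduplication between array versions buys.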