Computational tasks in robotics and factory automation
The design of Manufacturing Planning and Control Systems (MPCSs), systems that negotiate with Customers and Suppliers to exchange products in return for money in order to generate profit, is discussed.

The computational tasks of MPCS components are systematically specified as a starting point for the development of computational engines, as computer systems and programs, that execute the specified computation. Key issues are the overwhelming complexity and the frequently changing application of MPCSs.
Trusting Computations: a Mechanized Proof from Partial Differential Equations to Actual Program
Computer programs may go wrong due to exceptional behaviors, out-of-bound
array accesses, or simply coding errors. Thus, they cannot be blindly trusted.
Scientific computing programs are no exception in that respect, and even bring
specific accuracy issues due to their massive use of floating-point
computations. Yet it is uncommon to guarantee their correctness. To verify an
existing numerical analysis program, we had to extend existing methods and
tools for proving the correct behavior of programs. This C program
implements the second-order centered finite difference explicit scheme for
solving the 1D wave equation. In fact, we have gone much further as we have
mechanically verified the convergence of the numerical scheme in order to get a
complete formal proof covering all aspects from partial differential equations
to actual numerical results. To the best of our knowledge, this is the first
time such a comprehensive proof has been achieved.
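For readers unfamiliar with the scheme, the following is a minimal Python sketch of the second-order centered explicit update; the paper's verified program is in C and carries formal proof annotations, so the names, boundary conditions and example below are purely illustrative.

```python
import numpy as np

def wave_1d(c, dx, dt, n_steps, u_prev, u_curr):
    """Advance the 1D wave equation u_tt = c^2 u_xx with the second-order
    centered finite difference explicit scheme and homogeneous Dirichlet
    boundaries.

    u_prev, u_curr: solution at the two most recent time steps (1D arrays).
    Returns the solution after n_steps further steps.
    """
    a = (c * dt / dx) ** 2                 # squared CFL number; stability requires c*dt/dx <= 1
    prev, curr = u_prev.astype(float).copy(), u_curr.astype(float).copy()
    for _ in range(n_steps):
        nxt = np.zeros_like(curr)          # endpoints stay at 0 (Dirichlet boundaries)
        nxt[1:-1] = (2 * curr[1:-1] - prev[1:-1]
                     + a * (curr[2:] - 2 * curr[1:-1] + curr[:-2]))
        prev, curr = curr, nxt
    return curr

# Example: a plucked string released from rest on [0, 1] (first-order start).
x = np.linspace(0.0, 1.0, 101)
u0 = np.sin(np.pi * x)                     # initial displacement
u1 = u0.copy()                             # zero initial velocity
print(wave_1d(c=1.0, dx=x[1] - x[0], dt=0.005, n_steps=200, u_prev=u0, u_curr=u1)[:5])
```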
Probabilistic Numerics and Uncertainty in Computations
We deliver a call to arms for probabilistic numerical methods: algorithms for
numerical tasks, including linear algebra, integration, optimization and
solving differential equations, that return uncertainties in their
calculations. Such uncertainties, arising from the loss of precision induced by
numerical calculation with limited time or hardware, are important for much
contemporary science and industry. Within applications such as climate science
and astrophysics, the need to make decisions on the basis of computations with
large and complex data has led to a renewed focus on the management of
numerical uncertainty. We describe how several seminal classic numerical
methods can be interpreted naturally as probabilistic inference. We then show
that the probabilistic view suggests new algorithms that can flexibly be
adapted to suit application specifics, while delivering improved empirical
performance. We provide concrete illustrations of the benefits of probabilistic
numeric algorithms on real scientific problems from astrometry and astronomical
imaging, while highlighting open problems with these new algorithms. Finally,
we describe how probabilistic numerical methods provide a coherent framework
for identifying the uncertainty in calculations performed with a combination of
numerical algorithms (e.g. both numerical optimisers and differential equation
solvers), potentially allowing the diagnosis (and control) of error sources in
computations.
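As a concrete illustration of the idea (this is not code from the paper; the kernel, nodes and integrand below are arbitrary choices), Bayesian quadrature casts integration as inference: a Gaussian-process prior on the integrand yields both an estimate of the integral and a posterior variance that quantifies the remaining numerical uncertainty.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import dblquad

# Minimal Bayesian quadrature sketch for Z = int_0^1 f(x) dx with an RBF-kernel GP prior.
ell = 0.3                                        # kernel length-scale (assumed)
k = lambda a, b: np.exp(-(a - b) ** 2 / (2 * ell ** 2))

f = lambda x: np.sin(3 * x) + x ** 2             # test integrand (illustrative)
X = np.linspace(0.0, 1.0, 7)                     # quadrature nodes
y = f(X)

K = k(X[:, None], X[None, :]) + 1e-10 * np.eye(len(X))
# Kernel mean z_i = int_0^1 k(x, X_i) dx, available in closed form via erf.
z = ell * np.sqrt(np.pi / 2) * (erf((1 - X) / (ell * np.sqrt(2)))
                                - erf((0 - X) / (ell * np.sqrt(2))))
# Prior variance of the integral, int_0^1 int_0^1 k(x, x') dx dx' (computed numerically here).
zz, _ = dblquad(k, 0, 1, lambda x: 0, lambda x: 1)

w = np.linalg.solve(K, z)                        # quadrature weights
mean = w @ y                                     # posterior mean of the integral
var = zz - z @ w                                 # posterior variance (numerical uncertainty)
print(f"integral estimate = {mean:.6f} +/- {np.sqrt(var):.2e} (1 sd)")
```

The reported standard deviation shrinks as evaluation nodes are added, which is exactly the kind of "uncertainty in calculations" the abstract refers to.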
Information Processing, Computation and Cognition
Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both, although others disagree vehemently. Yet different cognitive scientists use "computation" and "information processing" to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism and connectionism/computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates' empirical aspects.
Learning, Social Intelligence and the Turing Test - why an "out-of-the-box" Turing Machine will not pass the Turing Test
The Turing Test (TT) checks for human intelligence, rather than any putative
general intelligence. It involves repeated interaction requiring learning in
the form of adaption to the human conversation partner. It is a macro-level
post-hoc test in contrast to the definition of a Turing Machine (TM), which is
a prior micro-level definition. This raises the question of whether learning is
just another computational process, i.e. can be implemented as a TM. Here we
argue that learning or adaption is fundamentally different from computation,
though it does involve processes that can be seen as computations. To
illustrate this difference we compare (a) designing a TM and (b) learning a TM,
defining them for the purpose of the argument. We show that there is a
well-defined sequence of problems which are not effectively designable but are
learnable, in the form of the bounded halting problem. Some characteristics of
human intelligence are reviewed, including its interactive nature, learning
abilities, imitative tendencies, linguistic ability and context-dependency. A
story that explains some of these is the Social Intelligence Hypothesis. If
this is broadly correct, this points to the necessity of a considerable period
of acculturation (social learning in context) if an artificial intelligence is
to pass the TT. Whilst it is always possible to 'compile' the results of
learning into a TM, this would not be a designed TM and would not be able to
continually adapt (pass future TTs). We conclude three things, namely that: a
purely "designed" TM will never pass the TT; that there is no such thing as a
general intelligence, since it necessarily involves learning; and that
learning/adaption and computation should be clearly distinguished.
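For concreteness, the bounded halting problem invoked above (does machine M halt on input x within n steps?) is decidable by direct step-limited simulation; the sketch below uses a toy transition-table encoding chosen purely for illustration, not the paper's formalization.

```python
def bounded_halts(delta, start, accept, tape, n):
    """Simulate a one-tape Turing machine for at most n steps.

    delta maps (state, symbol) -> (new_state, written_symbol, head_move in {-1, +1});
    a missing key means there is no applicable transition, i.e. the machine halts.
    Returns True iff the machine halts (or reaches `accept`) within n steps.
    """
    state, head, cells = start, 0, dict(enumerate(tape))
    if state == accept:
        return True
    for _ in range(n):
        key = (state, cells.get(head, '_'))        # '_' is the blank symbol
        if key not in delta:
            return True                            # no applicable transition: the machine halts
        state, cells[head], move = delta[key]
        head += move
        if state == accept:
            return True                            # reached the accepting state within the bound
    return False                                   # did not halt within n steps

# Example: a machine that scans right over 1s and accepts at the first blank.
delta = {('scan', '1'): ('scan', '1', +1),
         ('scan', '_'): ('done', '_', +1)}
print(bounded_halts(delta, 'scan', 'done', '111', n=10))   # True
print(bounded_halts(delta, 'scan', 'done', '111', n=3))    # False
```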
Progressive Analytics: A Computation Paradigm for Exploratory Data Analysis
Exploring data requires a fast feedback loop from the analyst to the system,
with a latency below about 10 seconds because of human cognitive limitations.
When data becomes large or analysis becomes complex, sequential computations
can no longer be completed in a few seconds and data exploration is severely
hampered. This article describes a novel computation paradigm, called
Progressive Computation for Data Analysis or, more concisely, Progressive
Analytics, which provides a low-latency guarantee at the programming-language
level by performing computations in a progressive fashion. Moving this
progressive computation to the language level relieves the programmer of
exploratory data analysis systems from implementing the whole analytics
pipeline in a progressive way from scratch, streamlining the implementation of
scalable exploratory data analysis systems. This article describes the new
paradigm through a prototype implementation called ProgressiVis, and explains
the requirements it implies through examples.
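To make the paradigm concrete (this is a generic sketch of progressive computation, not ProgressiVis's actual API), a long-running aggregate can be split into time-budgeted slices, each yielding a partial result that the exploration interface can display immediately.

```python
import time
import numpy as np

def progressive_mean(data, quantum=0.1, chunk=100_000):
    """Yield (fraction processed, running mean) once per time quantum.

    quantum: time budget per slice in seconds; chunk: rows processed per inner step.
    """
    total, count, i = 0.0, 0, 0
    while i < len(data):
        deadline = time.monotonic() + quantum      # time budget for this slice
        while i < len(data) and time.monotonic() < deadline:
            block = data[i:i + chunk]
            total += block.sum()
            count += block.size
            i += block.size
        yield count / len(data), total / count     # partial, increasingly accurate result

# Example: the estimate is refined every ~0.1 s instead of blocking until the end.
data = np.random.default_rng(0).normal(loc=0.25, scale=1.0, size=20_000_000)
for done, estimate in progressive_mean(data):
    print(f"{done:6.1%} processed, running mean = {estimate:+.5f}")
```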