Logic Programming and Machine Ethics
Transparency is a key requirement for ethical machines. Verified ethical
behavior is not enough to establish justified trust in autonomous intelligent
agents: it needs to be supported by the ability to explain decisions. Logic
Programming (LP) has great potential for developing such transparent ethical
systems, since logic rules are easily comprehensible to humans. Furthermore,
LP is able to model causality, which is crucial for ethical decision making.
Comment: In Proceedings ICLP 2020, arXiv:2009.09158. Invited paper for the
ICLP 2020 Panel on "Machine Ethics". arXiv admin note: text overlap with
arXiv:1909.0825
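The abstract's claim — that logic rules are human-readable and therefore make decisions explainable — can be illustrated with a toy sketch. The rules, facts, and engine below are invented for illustration and are not from the paper; they only show how a forward-chaining rule base yields a decision together with a trace that doubles as an explanation.

```python
# Minimal sketch (hypothetical rules, not from the paper): a tiny
# forward-chaining engine where each rule is (conclusion, premises).
# Every fired rule is recorded, so a derived decision carries its
# own human-readable explanation.

RULES = [
    ("brake", ("obstacle_ahead", "obstacle_is_person")),  # hypothetical ethical rule
    ("warn_passengers", ("brake",)),
]

def derive(facts, rules):
    """Forward-chain to a fixpoint, keeping a readable trace of fired rules."""
    facts = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in facts and all(p in facts for p in body):
                facts.add(head)
                trace.append(f"{head} :- {', '.join(body)}")
                changed = True
    return facts, trace

facts, trace = derive({"obstacle_ahead", "obstacle_is_person"}, RULES)
print("brake" in facts)   # the decision
for step in trace:        # the explanation, one rule per step
    print(step)
```

Each trace line reads like a Prolog clause, which is the comprehensibility property the abstract appeals to.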
Heterogeneous Proxytypes Extended: Integrating Theory-like Representations and Mechanisms with Prototypes and Exemplars
The paper introduces an extension of the proposal according to which
conceptual representations in cognitive agents should be regarded as heterogeneous
proxytypes. The main contribution of the paper is that it details how
to reconcile, under a heterogeneous representational perspective, different theories
of typicality about conceptual representation and reasoning. In particular, it
provides a novel theoretical hypothesis - as well as a novel categorization algorithm
called DELTA - showing how to integrate the representational and reasoning
assumptions of the theory-theory of concepts with those ascribed to the
prototype- and exemplar-based theories.
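A small sketch can make the prototype/exemplar distinction concrete. This is not the DELTA algorithm; it is an invented toy hybrid categorizer, with made-up feature vectors, that merely shows how two distinct typicality representations (stored exemplars and averaged prototypes) can cooperate in one categorization decision.

```python
import math

# Illustrative toy only (NOT the paper's DELTA algorithm): categorize an
# item by the exemplar route when it is close to a stored instance, and
# fall back to the prototype (category mean) route otherwise.

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prototype(instances):
    """Prototype as the mean vector of a category's known instances."""
    n = len(instances)
    return tuple(sum(col) / n for col in zip(*instances))

def categorize(item, categories, exemplar_radius=0.5):
    # 1) exemplar route: inherit the label of any sufficiently close instance
    for label, instances in categories.items():
        if any(dist(item, ex) <= exemplar_radius for ex in instances):
            return label, "exemplar"
    # 2) prototype route: otherwise pick the nearest category prototype
    label = min(categories, key=lambda l: dist(item, prototype(categories[l])))
    return label, "prototype"

categories = {
    "bird": [(1.0, 1.0), (1.2, 0.8)],  # hypothetical feature vectors
    "fish": [(5.0, 5.0), (5.5, 4.5)],
}
print(categorize((1.1, 0.9), categories))  # near a stored bird exemplar
print(categorize((3.5, 3.5), categories))  # no close exemplar: prototype route
```

The interesting design question, which the paper addresses at the level of theory-like representations as well, is how to arbitrate between routes when they disagree.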
Motivations, Values and Emotions: 3 sides of the same coin
This position paper speaks to the interrelationships between the three concepts of motivations, values, and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that they differ primarily in the point of view taken. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.
Verification of Uncertain POMDPs Using Barrier Certificates
We consider a class of partially observable Markov decision processes
(POMDPs) with uncertain transition and/or observation probabilities. The
uncertainty takes the form of probability intervals. Such uncertain POMDPs can
be used, for example, to model autonomous agents with sensors with limited
accuracy, or agents undergoing a sudden component failure, or structural damage
[1]. Given an uncertain POMDP representation of the autonomous agent, our goal
is to propose a method for checking whether the system satisfies an optimal
performance criterion while not violating a safety requirement (e.g., on fuel
level, velocity, etc.). To this end, we cast the POMDP problem into a switched
system scenario. We then take advantage of this switched system
characterization and propose a method based on barrier certificates for
optimality and/or safety verification. We then show that the verification task
can be carried out computationally by sum-of-squares programming. We illustrate
the efficacy of our method by applying it to a Mars rover exploration example.
Comment: 8 pages, 4 figures
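The paper certifies safety with sum-of-squares programming; as a much simpler stand-in, the sketch below only illustrates the barrier-certificate *condition* itself, checked by sampling on an invented 1D system x' = a·x whose rate a lies in an uncertainty interval (a toy analogue of one mode of the switched system). All numbers and the barrier candidate are assumptions for the demo.

```python
# Hedged sketch: numerically check a barrier-certificate condition for a
# toy system x' = a*x with uncertain a in [a_lo, a_hi]. The real paper
# verifies such conditions symbolically via sum-of-squares programming.

def barrier(x):
    # Candidate barrier B(x) = x^2 - 1: B <= 0 exactly on the safe set |x| <= 1.
    return x * x - 1.0

def check_certificate(a_lo, a_hi, n=1000):
    """Require dB/dt = 2*x*(a*x) = 2*a*x^2 <= 0 for all a in the interval.
    The expression is linear in a, so checking the endpoints suffices."""
    for i in range(n + 1):
        x = -1.0 + 2.0 * i / n           # sample the safe set [-1, 1]
        for a in (a_lo, a_hi):
            if 2.0 * a * x * x > 1e-12:  # barrier increases: certificate fails
                return False
    return True

print(check_certificate(-0.5, -0.1))  # every a is stable: certificate holds
print(check_certificate(-0.5, 0.3))   # interval contains an unstable a: fails
```

Sampling only demonstrates the condition; it is not a proof, which is precisely why the paper resorts to sum-of-squares relaxations.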
Exploration of Parameter Spaces in a Virtual Observatory
Like every other field of intellectual endeavor, astronomy is being
revolutionised by the advances in information technology. There is an ongoing
exponential growth in the volume, quality, and complexity of astronomical data
sets, mainly through large digital sky surveys and archives. The Virtual
Observatory (VO) concept represents a scientific and technological framework
needed to cope with this data flood. Systematic exploration of the observable
parameter spaces, covered by large digital sky surveys spanning a range of
wavelengths, will be one of the primary modes of research with a VO. This is
where the truly new discoveries will be made, and new insights be gained about
the already known astronomical objects and phenomena. We review some of the
methodological challenges posed by the analysis of large and complex data sets
expected in the VO-based research. The challenges are driven both by the size
and the complexity of the data sets (billions of data vectors in parameter
spaces of tens or hundreds of dimensions), by the heterogeneity of the data and
measurement errors, including differences in basic survey parameters for the
federated data sets (e.g., in the positional accuracy and resolution,
wavelength coverage, time baseline, etc.), various selection effects, as well
as the intrinsic clustering properties (functional form, topology) of the data
distributions in the parameter spaces of observed attributes. Answering these
challenges will require substantial collaborative efforts and partnerships
between astronomers, computer scientists, and statisticians.
Comment: Invited review, 10 pages, LaTeX file with 4 eps figures, style files
included. To appear in Proc. SPIE, v. 4477 (2001)
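One of the methodological challenges the review names — parameter spaces of tens or hundreds of dimensions — can be illustrated numerically: pairwise distances between random points concentrate as dimensionality grows, degrading naive nearest-neighbour exploration. The dimensions and sample sizes below are arbitrary demo choices, not values from the review.

```python
import math
import random

# Demo of distance concentration in high dimensions: the relative spread
# (max - min) / min of pairwise distances between uniform random points
# shrinks sharply as the dimension grows.

def spread(dim, n=200, seed=0):
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n)]
    dists = [
        math.dist(pts[i], pts[j])
        for i in range(n) for j in range(i + 1, n)
    ]
    return (max(dists) - min(dists)) / min(dists)

low_dim, high_dim = spread(2), spread(200)
print(low_dim, high_dim)  # the 200-dimensional spread is far smaller
```

This is one reason the review argues that clustering and outlier detection in VO-scale parameter spaces need methods beyond straightforward distance-based techniques.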
Reports of the AAAI 2019 Spring Symposium Series
Applications of machine learning combined with AI algorithms have driven unprecedented economic disruptions across diverse fields in industry, the military, medicine, finance, and others. With even larger impacts forecast, the present economic impact of machine learning is estimated in the trillions of dollars. But as autonomous machines become ubiquitous, problems have surfaced. Early on, and again in 2018, Judea Pearl warned AI scientists that they must "build machines that make sense of what goes on in their environment," a warning still unheeded that may impede future development. For example, self-driving vehicles often rely on sparse data; self-driving cars have already been involved in fatalities, including that of a pedestrian; and yet machine learning is unable to explain the contexts within which it operates.
A Note on Zipf's Law, Natural Languages, and Noncoding DNA regions
In Phys. Rev. Letters (73:2, 5 Dec. 94), Mantegna et al. conclude on the
basis of Zipf rank frequency data that noncoding DNA sequence regions are more
like natural languages than coding regions. We argue on the contrary that an
empirical fit to Zipf's "law" cannot be used as a criterion for similarity to
natural languages. Although DNA is presumably an "organized system of
signs" in Mandelbrot's (1961) sense, an observation of statistical features of
the sort presented in the Mantegna et al. paper does not shed light on the
similarity between DNA's "grammar" and natural language grammars, just as the
observation of exact Zipf-like behavior cannot distinguish between the
underlying processes of tossing an n-sided die or a finite-state branching
process.
Comment: compressed uuencoded postscript file: 14 pages
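The note's central point is easy to reproduce: purely random "monkey typing" — tossing an n-sided die where one face acts as a word separator — already yields a steeply decaying, Zipf-like rank-frequency curve, so a Zipf fit alone cannot certify language-like structure. Alphabet size and text length below are arbitrary demo choices.

```python
import random
from collections import Counter

# Random "monkey typing": 4 letters plus a space, chosen uniformly.
# The resulting word rank-frequency distribution falls off steeply,
# mimicking Zipf behavior despite having no linguistic structure.

def random_text_ranks(n_letters=4, length=200_000, seed=1):
    rng = random.Random(seed)
    symbols = [chr(ord("a") + i) for i in range(n_letters)] + [" "]
    text = "".join(rng.choice(symbols) for _ in range(length))
    counts = Counter(w for w in text.split() if w)
    return [c for _, c in counts.most_common()]  # frequencies by rank

freqs = random_text_ranks()
print(freqs[0], freqs[-1])  # top-ranked words vastly outnumber rare ones
```

Plotting log-frequency against log-rank for `freqs` gives the familiar near-linear Zipf-style decay, which is exactly the ambiguity the note exploits against the Mantegna et al. argument.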