2 research outputs found
Data and knowledge-driven intelligent investment cognitive reasoning model
The modeling and analysis of information flow from various sources (e.g., analyst reports, news, and social media), and their impact on assets and investment decision- making, have drawn lots of attention. In this paper, we propose a new knowledge inference design framework that provides concrete prescriptions for developing systems capable of supporting knowledge-based investment decision-making. Our framework design incorporates the advantages of both knowledge graphs and symbolic reasoning engines through the concept of a dual system. On the other hand, it overcomes the weaknesses of traditional expert systems, saving time in the knowledge input process, reducing the introduction of errors, and achieving more comprehensive knowledge coverage to obtain better predictive performance. Moreover, our proposed design artifacts are of significant importance in addressing the issues of causality and interpretability in the literature
Towards Safe Artificial General Intelligence
The field of artificial intelligence has recently experienced a
number of breakthroughs thanks to progress in deep learning and
reinforcement learning. Computer algorithms now outperform humans
at Go, Jeopardy, image classification, and lip reading, and are
becoming very competent at driving cars and interpreting natural
language. The rapid development has led many to conjecture that
artificial intelligence with greater-than-human ability on a wide
range of tasks may not be far. This in turn raises concerns
whether we know how to control such systems, in case we were to
successfully build them.
Indeed, if humanity would find itself in conflict with a system
of much greater intelligence than itself, then human society
would likely lose. One way to make sure we avoid such a conflict
is to ensure that any future AI system with potentially
greater-than-human-intelligence has goals that are aligned with
the goals of the rest of humanity. For example, it should not
wish to kill humans or steal their resources.
The main focus of this thesis will therefore be goal alignment,
i.e. how to design artificially intelligent agents with goals
coinciding with the goals of their designers. Focus will mainly
be directed towards variants of reinforcement learning, as
reinforcement learning currently seems to be the most promising
path towards powerful artificial intelligence. We identify and
categorize goal misalignment problems in reinforcement learning
agents as designed today, and give examples of how these agents
may cause catastrophes in the future. We also suggest a number of
reasonably modest modifications that can be used to avoid or
mitigate each identified misalignment problem. Finally, we also
study various choices of decision algorithms, and conditions for
when a powerful reinforcement learning system will permit us to
shut it down.
The central conclusion is that while reinforcement learning
systems as designed today are inherently unsafe to scale to human
levels of intelligence, there are ways to potentially address
many of these issues without straying too far from the currently
so successful reinforcement learning paradigm. Much work remains
in turning the high-level proposals suggested in this thesis into
practical algorithms, however