2,589,710 research outputs found
Interest and Predictability: Deciding What to Learn, When to Learn
Inductive learning, which involves largely structural comparisons of examples, and explanation-based learning, a knowledge-intensive method for analyzing examples to build generalized schemas, are two major learning techniques used in AI. In this paper, we show how a combination of the two methods - applying generalization-based techniques during the course of inductive learning - can achieve the power of explanation-based learning without some of the computational problems that arise in domains lacking detailed explanatory rules. We show how the ideas of predictability and interest can be particularly valuable in this context.
Learning Disentangled Representations with Reference-Based Variational Autoencoders
Learning disentangled representations from visual data, where different high-level generative factors are independently encoded, is of importance for many computer vision tasks. Solving this problem, however, typically requires explicitly labeling all the factors of interest in training images. To alleviate the annotation cost, we introduce a learning setting which we refer to as "reference-based disentangling". Given a pool of unlabeled images, the goal is to learn a representation in which a set of target factors is disentangled from the others. The only supervision comes from an auxiliary "reference set" containing images in which the factors of interest are constant. To address this problem, we propose reference-based variational autoencoders, a novel deep generative model designed to exploit the weak supervision provided by the reference set. By addressing tasks such as feature learning, conditional image generation, and attribute transfer, we validate the ability of the proposed model to learn disentangled representations from this minimal form of supervision.
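The supervision signal in this setting can be illustrated with a toy sketch (not the paper's model: the variable names and the variance-ratio heuristic here are illustrative assumptions). Because the target factors are held constant across the reference set, any feature dimension whose variation collapses in the reference set relative to the unlabeled pool must encode a target factor:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy features: column 0 plays the role of the "target factor",
# columns 1-2 are nuisance factors that vary everywhere.
pool = rng.standard_normal((500, 3))       # unlabeled pool: everything varies
reference = rng.standard_normal((500, 3))
reference[:, 0] = 0.0                      # in the reference set the target factor is constant

# Dimensions whose variance collapses in the reference set relative
# to the pool are the ones tied to the target factors.
ratio = reference.var(axis=0) / pool.var(axis=0)
target_dims = np.where(ratio < 0.1)[0]
print(target_dims)  # -> [0]
```

The actual model in the abstract learns this separation jointly with the representation inside a variational autoencoder, but the same contrast between the pool and the reference set is what makes the target factors identifiable.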
Design and Development of an Intelligent Tutoring System for C# Language
Learning programming is widely considered difficult. One possible reason students do not do well in programming is that the traditional lecture-hall model places more stress on understanding the material than on applying it to a real application. For some students, this teaching model may not catch their interest, and as a result they may not give their best effort to grasp the material presented. Seeing how the knowledge is applied to real problems increases student interest in learning and, as a consequence, the effort they put into learning.
In the current paper, we try to help students learn the C# programming language using an Intelligent Tutoring System (ITS). The ITS was developed with the ITSB authoring tool to help students learn programming efficiently and to make the learning process enjoyable. A knowledge base built in the ITSB authoring tool's style is used to represent the student's work and to give customized feedback and support to students.
Beef Cattle Instance Segmentation Using Fully Convolutional Neural Network
In this paper we present a novel instance segmentation algorithm that extends a fully convolutional network to learn to label objects separately, without predicting regions of interest. We trained the new algorithm on a challenging CCTV recording of beef cattle, as well as on the benchmark MS COCO and Pascal VOC datasets. Extensive experimentation showed that our approach outperforms state-of-the-art solutions by up to 8% on our data.
Foreword to John's Gospel in New Perspective
Over the last half century or more of Johannine scholarship, three issues have been of primary critical concern. One subject of interest has been the literary origin and composition of the Fourth Gospel. A second has been the application of new-literary analyses to the Johannine narrative, wherein the literary artistry and rhetorical design of the text are studied in order to discern how John's message is conveyed, in the interest of better understanding what is being said. A third has been a sustained interest in the Johannine situation, seeking to learn more about the history of Johannine Christianity. This field of inquiry provides a means of coming to grips with the issues being faced by the Johannine hearers and readers, helping interpreters better understand how John's story of Jesus was crafted to address issues contemporary with the evangelist and his audience. It is within this third field of inquiry that Richard Cassidy's book, John's Gospel in New Perspective, makes an important contribution that is especially relevant to studies of empire and early Christianity.
Heterogeneous Information about the Term Structure of Interest Rates, Least-Squares Learning and Optimal Interest Rate Rules for Inflation Forecast Targeting
In this paper we incorporate the term structure of interest rates in a standard inflation forecast targeting framework. Learning about the transmission process of monetary policy is introduced by having heterogeneous agents - i.e. the central bank and private agents - who have different information sets about the future sequence of short-term interest rates. We analyse inflation forecast targeting in two environments: one in which the central bank has perfect knowledge, in the sense that it understands and observes the process by which private sector interest rate expectations are generated, and one in which the central bank has imperfect knowledge and has to learn the private sector forecasting rule for short-term interest rates. In the case of imperfect knowledge, the central bank has to learn about private sector interest rate expectations, as the latter affect the impact of monetary policy through the expectations theory of the term structure of interest rates. Here, following Evans and Honkapohja (2001), the learning scheme we investigate is least-squares learning (recursive OLS) using the Kalman filter. We find that optimal monetary policy under learning is a policy that separates estimation and control. Therefore, this model suggests that the practical relevance of the breakdown of the separation principle and the need for experimentation in policy may be limited.
Keywords: information; term structure of interest rates; least squares; optimization; inflation; forecasting; learning; rational expectations; Kalman filter
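The least-squares learning scheme named in the abstract (recursive OLS, in the style of Evans and Honkapohja, 2001) can be sketched as follows. This is a generic recursive least-squares update, not the paper's calibrated model; the forecasting rule and variable names are illustrative assumptions standing in for the private sector's interest-rate rule:

```python
import numpy as np

def rls_update(theta, P, x, y):
    """One recursive least-squares (recursive OLS) step.

    theta : current estimate of the forecasting-rule coefficients
    P     : current approximation of the inverse moment matrix (X'X)^-1
    x, y  : newly observed regressor vector and interest-rate outcome
    """
    Px = P @ x
    k = Px / (1.0 + x @ Px)          # Kalman-style gain vector
    theta = theta + k * (y - x @ theta)  # correct by the forecast error
    P = P - np.outer(k, Px)          # update the inverse moment matrix
    return theta, P

# Illustrative run: the agent recovers a rule y = 0.5 + 0.9 * z
# from noisy observations, one period at a time.
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = 1e3 * np.eye(2)                  # diffuse prior over coefficients
for _ in range(2000):
    z = rng.standard_normal()        # observed state variable
    x = np.array([1.0, z])
    y = 0.5 + 0.9 * z + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, x, y)

print(theta)  # approaches [0.5, 0.9]
```

Each step revises the coefficient estimate in proportion to the latest forecast error, which is what lets the central bank in the paper track the private sector's forecasting rule period by period instead of re-running a full regression.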
Bayesian Nonparametric Feature and Policy Learning for Decision-Making
Learning from demonstrations has gained increasing interest in the recent past, enabling an agent to learn how to make decisions by observing an experienced teacher. While many approaches have been proposed to solve this problem, little work focuses on reasoning about the observed behavior. We assume that, in many practical problems, an agent makes its decisions based on latent features, each indicating a certain action. We therefore propose a generative model for the states and actions. Inference reveals the number of features, the features themselves, and the policies, allowing us to learn and analyze the underlying structure of the observed behavior. Further, our approach enables prediction of actions for new states. Simulations are used to assess the performance of the algorithm based upon this model. Moreover, the problem of learning a driver's behavior is investigated, demonstrating the performance of the proposed model in a real-world scenario.
Cloud Watch
The purpose of this activity is to explore the connections between cloud type, cloud cover, and weather, and to stimulate student interest in taking cloud type observations. Students observe cloud type, cloud coverage, and weather conditions over a five-day period and correlate these observations. Students then make and test predictions using these observations. The intended outcome is that students learn to draw inferences from observations and use them to make and test predictions. Educational levels: Primary elementary, Intermediate elementary, Middle school, High school.
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.
Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016): https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
