Learning Comprehensible Theories from Structured Data
This thesis is concerned with the problem of learning comprehensible theories from structured data and covers primarily classification and regression learning. The basic knowledge representation language is set around a polymorphically-typed, higher-order logic. The general setup is closely related to the learning from propositionalized knowledge and learning from interpretations settings in Inductive Logic Programming. Individuals (also called instances) are represented as terms in the logic. A grammar-like construct called a predicate rewrite system is used to define features in the form of predicates that individuals may or may not satisfy. For learning, decision-tree algorithms of various kinds are adopted.
The scope of the thesis spans both theory and practice. …
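The feature-construction step described above can be pictured with a minimal sketch. Everything below (the predicate names, the toy individuals, and the one-level "stump" standing in for a full decision-tree learner) is invented for illustration and is not taken from the thesis: individuals are structured terms, each feature is a boolean predicate over a term, and the resulting boolean vectors feed an ordinary attribute-value tree learner.

```python
# Illustrative sketch: predicates over structured individuals become
# boolean features for a (trivial) decision-stump learner.
# All names and data here are hypothetical, not from the thesis.

# Individuals: structured terms, here simply (shape, size) pairs.
individuals = [("circle", 3), ("square", 7), ("circle", 9), ("square", 2)]
labels      = [0, 1, 1, 0]

# Features defined as predicates an individual may or may not satisfy.
predicates = {
    "is_circle": lambda t: t[0] == "circle",
    "is_large":  lambda t: t[1] >= 5,
}

def featurize(term):
    """Map a structured individual to a boolean feature vector."""
    return {name: p(term) for name, p in predicates.items()}

def best_stump(data, labels):
    """Pick the single predicate that best separates the labels:
    a one-level stand-in for a full decision-tree learner."""
    def accuracy(name):
        preds = [featurize(t)[name] for t in data]
        # A stump may test the predicate or its negation; take the better.
        return max(
            sum(int(p) == y for p, y in zip(preds, labels)),
            sum(int(not p) == y for p, y in zip(preds, labels)),
        )
    return max(predicates, key=accuracy)

root = best_stump(individuals, labels)  # "is_large" separates the toy data
```

A real system would grow a full tree by recursing on the two partitions induced by the chosen predicate, with the predicate rewrite system enumerating the candidate predicates rather than listing them by hand.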
On Cognitive Preferences and the Plausibility of Rule-based Models
It is conventional wisdom in machine learning and data mining that logical
models such as rule sets are more interpretable than other models, and that
among such rule-based models, simpler models are more interpretable than more
complex ones. In this position paper, we question this latter assumption by
focusing on one particular aspect of interpretability, namely the plausibility
of models. Roughly speaking, we equate the plausibility of a model with the
likeliness that a user accepts it as an explanation for a prediction. In
particular, we argue that, all other things being equal, longer explanations
may be more convincing than shorter ones, and that the predominant bias for
shorter models, which is typically necessary for learning powerful
discriminative models, may not be suitable when it comes to user acceptance of
the learned models. To that end, we first recapitulate evidence for and against
this postulate, and then report the results of an evaluation in a
crowd-sourcing study based on about 3,000 judgments. The results do not reveal
a strong preference for simple rules, whereas we can observe a weak preference
for longer rules in some domains. We then relate these results to well-known
cognitive biases such as the conjunction fallacy, the representativeness heuristic,
or the recognition heuristic, and investigate their relation to rule length and
plausibility.
Fewer epistemological challenges for connectionism
Seventeen years ago, John McCarthy wrote the note 'Epistemological challenges for connectionism' as a response to Paul Smolensky's paper 'On the proper treatment of connectionism'. I will discuss the extent to which the four key challenges put forward by McCarthy have been solved, and what new challenges lie ahead. I argue that there are fewer epistemological challenges for connectionism, but that progress has been slow. Nevertheless, owing to recent developments in the field, there is now strong indication that neural-symbolic integration can provide effective systems of expressive reasoning and robust learning.
Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME
We present a heuristic-based algorithm to induce non-monotonic logic programs that explain the behavior of XGBoost-trained classifiers. We use a technique based on the LIME approach to locally select the most important features contributing to the classification decision. Then, to explain the model's global behavior, we propose the LIME-FOLD algorithm, a heuristic-based inductive logic programming (ILP) algorithm capable of learning non-monotonic logic programs, which we apply to a transformed dataset produced by LIME. Our proposed approach is agnostic to the choice of ILP algorithm. Our experiments on standard UCI benchmarks suggest a significant improvement in classification evaluation metrics. Meanwhile, the number of induced rules decreases dramatically compared to ALEPH, a state-of-the-art ILP system.
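The pipeline this abstract describes can be sketched in a deliberately crude form. The black-box model, the flip-one-feature scoring rule, and the selection step below are invented simplifications standing in for XGBoost, LIME's weighted local surrogate, and the FOLD rule learner respectively; only the overall shape (local attribution, then feature selection, then rule induction over the reduced data) follows the text.

```python
# Illustrative stand-in for the LIME -> ILP pipeline described above.
# The black box, the scoring rule, and the selection step are all
# invented simplifications, not the paper's actual algorithms.

def black_box(x):
    """A stand-in trained classifier over binary features
    (fires when features 0 and 2 are both set)."""
    return int(x[0] and x[2])

def local_importance(instance, n_features):
    """Score each feature by how much flipping it alone changes the
    black-box output at this instance (a crude LIME-like step; real
    LIME fits a weighted linear surrogate over many perturbations)."""
    base = black_box(instance)
    scores = []
    for i in range(n_features):
        flipped = list(instance)
        flipped[i] = 1 - flipped[i]
        scores.append(abs(black_box(flipped) - base))
    return scores

def top_features(instance, n_features, k=2):
    """Keep the k locally most important features; the reduced dataset
    would then be handed to a rule learner (LIME-FOLD in the paper)
    to induce a non-monotonic logic program."""
    scores = local_importance(instance, n_features)
    return sorted(range(n_features), key=lambda i: -scores[i])[:k]

instance = [1, 0, 1, 1]
selected = top_features(instance, 4)  # features 0 and 2 drive this prediction
```

Because the ILP step only sees the transformed, feature-reduced dataset, any rule learner that consumes attribute-value examples could be substituted, which is what makes the approach agnostic to the choice of ILP algorithm.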
The educational effectiveness of bilingual education
Bilingual education is the use of the native tongue to instruct limited English-speaking children. The authors read studies of bilingual education from the earliest period of this literature to the most recent. Of the 300 program evaluations read, only 72 (25%) were methodologically acceptable, that is, they had a treatment and a control group and a statistical control for pre-treatment differences where groups were not randomly assigned. Virtually all of the studies in the United States were of elementary or junior high school students and Spanish speakers; the few studies conducted outside the United States were almost all in Canada. The research evidence indicates that, on standardized achievement tests, transitional bilingual education (TBE) is better than regular classroom instruction in only 22% of the methodologically acceptable studies when the outcome is reading, 7% when the outcome is language, and 9% when the outcome is math. TBE is never better than structured immersion, a special program for limited English proficient children in which the children are in a self-contained classroom composed solely of English learners, but the instruction is in English at a pace they can understand. Thus, the research evidence does not support transitional bilingual education as a superior form of instruction for limited English proficient children.
Information Science and Philosophy
From the standpoint of Information Science (IS), it is a risky undertaking to compare this relatively new science directly with Philosophy. What follows is a first, tentative investigation relating the traditionally named "queen of the sciences", Philosophy, two thousand years old, with Information Science, only half a century old. It is not yet clear to me how to do this in a rigorous scientific manner. I worked in applied informatics for 30 years and have worked in Information Science for about 15 years; here I dare to publish the results for the first time. Socrates (469-399 BC), Plato (428/27-348/47 BC), and Aristotle (384-322 BC), the founders of our traditional occidental Philosophy, established the search for the meaning of human life, thought, and action as a science in its own right. They placed the joy of life at the top of their way of thinking. Plato separated this special new thinking from that of the "Sophists", who also enjoyed a very good public image in his time but were concerned mainly with everyday business matters and practical knowledge. Today we would call them manufacturers, qualified skilled workers, or perhaps bachelors of particular sciences.
For over twenty centuries, Philosophy has had, first of all, the high duty of serving religion and ethics as a mental, spiritual, and language-grounded scientific foundation. In the other direction, it has been used to think through our whole surrounding nature theoretically and completely with the best of the human mind. It is our traditional science at the highest mental level. All sciences can be related through Philosophy, thanks to the human ability to learn, think, understand, and finally know any new fact of interest.
Where and how, then, should we integrate the new science of Information Science? We search consciously in a term-oriented way and make an abstract, science-theoretical comparison to find answers and definitions.
Using inductive types for ensuring correctness of neuro-symbolic computations
Explaining Explanations in AI
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it is important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although this is a valuable ability, offering these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
Development of an Assessment of Student Conception of the Nature of Science
This article describes a study in which a series of general education and introductory science courses were assessed using a Likert-scale instrument. As universities across the country have begun to make changes in their science curricula, especially with regard to non-science majors, assessment of courses and curricula has lagged behind implementation. The Likert-scale instrument, Attitudes and Conceptions in Science (ACS), provides a means by which faculty can determine the partial effectiveness of introductory and general education science courses. The established validity and reliability of this test suggest that its use in a variety of courses could allow identification of specific teaching methods, content, or other course characteristics that promote scientific literacy. Educational levels: Graduate or professional