
    Rule-based Machine Learning Methods for Functional Prediction

    We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance.
    Comment: See http://www.jair.org/ for any accompanying file
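
    As a rough illustration of how an ordered rule representation can predict a real-valued output, the sketch below implements a minimal decision list in Python: each rule is a conjunction of feature tests, the first matching rule supplies the prediction, and a default value covers uncovered inputs. The features, thresholds, and values are hypothetical placeholders, not the induction algorithm or rules from the paper.

        # Minimal sketch of an ordered rule list for real-valued prediction.
        # The features, thresholds, and values below are illustrative only.
        from dataclasses import dataclass
        from typing import Callable, Dict, List

        @dataclass
        class Rule:
            conditions: List[Callable[[Dict[str, float]], bool]]  # conjunction of tests
            value: float  # predicted function value when the rule fires

            def matches(self, x: Dict[str, float]) -> bool:
                return all(cond(x) for cond in self.conditions)

        def predict(rules: List[Rule], x: Dict[str, float], default: float) -> float:
            # Rules are tried in order; together they act like an ordered DNF
            # over the input space, with a default for inputs no rule covers.
            for rule in rules:
                if rule.matches(x):
                    return rule.value
            return default

        rules = [
            Rule([lambda x: x["temp"] > 30.0, lambda x: x["humidity"] < 0.4], value=7.5),
            Rule([lambda x: x["temp"] > 30.0], value=5.0),
        ]
        print(predict(rules, {"temp": 32.0, "humidity": 0.3}, default=2.0))  # -> 7.5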

    Thinking like a child: restoring primacy of experience in stimulating creativity


    Is morality the last frontier for machines?

    This paper examines some ethical and cognitive aspects of machines making moral decisions in difficult situations. We compare the situations in which humans have to make tough moral choices with those in which machines make such decisions. We argue that in situations where machines make tough moral choices, it is important to produce justifications for those decisions that are psychologically compelling and acceptable to people.

    A cognitive perspective on norms

    Norms are ideals that serve as guiding beacons in many human activities. They are considered to transcend accepted social and cultural practices, and to reflect some universal, moral principles. In this chapter, we will show that norms are cognitive constructs by considering several examples from the domains of language, art and aesthetics, law, science and mathematics. We will argue that, yes, norms are ideals that we posit, so in this respect they go beyond current social and cultural values. However, norms are posited using cognitive mechanisms and are based on our existing knowledge and wisdom. In this sense, norms are what we, as individuals or as a society, strive for, but they show a horizon effect in that they recede and transform as we progress towards them, and sometimes this transformation can be radical.

    Is a humorous robot more trustworthy?

    As more and more social robots are being used for collaborative activities with humans, it is crucial to investigate mechanisms that facilitate trust in human-robot interaction. One such mechanism is humour: it has been shown to increase creativity and productivity in human-human interaction, which has an indirect influence on trust. In this study, we investigate whether humour can increase trust in human-robot interaction. We conducted a between-subjects experiment with 40 participants to see whether participants were more likely to accept the robot's suggestion in the Three-card Monte game, used as a trust-check task. Though we were unable to find a significant effect of humour, we discuss the effect of possible confounding variables and also report some interesting qualitative observations from our study: for instance, the participants interacted effectively with the robot as a team member, regardless of the humour or no-humour condition.
    Comment: ICSR 202
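
    The abstract does not describe the analysis itself; as a hedged sketch of how acceptance in a two-condition between-subjects design could be compared, the Python snippet below runs a pooled two-proportion z-test on hypothetical acceptance counts. The counts, group sizes, and choice of test are assumptions for illustration, not the study's data or method.

        import math

        def two_proportion_z(accept_a: int, n_a: int, accept_b: int, n_b: int):
            # Pooled two-proportion z-test for a difference in acceptance rates.
            p_a, p_b = accept_a / n_a, accept_b / n_b
            p_pool = (accept_a + accept_b) / (n_a + n_b)
            se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
            z = (p_a - p_b) / se
            # Two-sided p-value from the standard normal CDF (via erf).
            p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
            return z, p_value

        # Hypothetical counts: 12/20 accepted the robot's suggestion in the
        # humour condition, 10/20 in the no-humour condition.
        z, p = two_proportion_z(12, 20, 10, 20)
        print(f"z = {z:.2f}, p = {p:.3f}")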