    Determining what people feel and think when interacting with humans and machines

    Any interactive software program must interpret the users’ actions and come up with an appropriate response that is intelligible and meaningful to the user. In most situations, the options of the user are determined by the software and hardware, and the actions that can be carried out are unambiguous. The machine knows what it should do when the user carries out an action. In most cases, the user knows what he has to do by relying on conventions which he may have learned by having had a look at the instruction manual, by having seen them performed by somebody else, or by modifying a previously learned convention. Some, or most, of the time he simply finds out by trial and error. In a user-friendly interface, the user knows, without having to read extensive manuals, what is expected from him and how he can get the machine to do what he wants. An intelligent interface is so called because it does not assume the same kind of programming of the user by the machine: the machine itself can figure out what the user wants and how he wants it, without the user having to go to the trouble of telling the machine in the way the machine dictates, being able instead to do it in his own words. Or perhaps without using any words at all, as the machine is able to read off the intentions of the user by observing his actions and expressions. Ideally, the machine should be able to determine what the user wants, what he expects, what he hopes will happen, and how he feels.

    Minds, Brains and Turing

    Turing set the agenda for (what would eventually be called) the cognitive sciences. He said, essentially, that cognition is as cognition does (or, more accurately, as cognition is capable of doing): explain the causal basis of cognitive capacity and you’ve explained cognition. Test your explanation by designing a machine that can do everything a normal human cognizer can do – and do it so veridically that human cognizers cannot tell its performance apart from a real human cognizer’s – and you really cannot ask for anything more. Or can you? Neither Turing modelling nor any other kind of computational or dynamical modelling will explain how or why cognizers feel.

    What Can Artificial Intelligence Do for Scientific Realism?

    The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting the epistemic warrant of realist theories against various anti-realist attempts. Towards this end, the paper fleshes out unconceived alternatives not as a critique of scientific realism but rather as a reinforcement of it, since it rejects the retrospective interpretations of scientific progress which brought about the problem of alternatives in the first place. By utilising adversarial machine learning, the synthesis explores possibility spaces of available evidence for unconceived alternatives, providing modal knowledge of what is possible therein. As a result, the epistemic warrant of synthesised realist theories should emerge bolstered as the underdetermination by available evidence is reduced. While shifting the realist commitment away from theoretical artefacts towards modalities of the possibility spaces, the synthesis comes out as a kind of perspectival modelling.

    On the independence between phenomenal consciousness and computational intelligence

    Consciousness and intelligence are properties commonly understood as dependent by folk psychology and society in general. The term artificial intelligence, and the kind of problems it has managed to solve in recent years, has been put forward as an argument to establish that machines experience some sort of consciousness. Following the analogy of Russell, if a machine is able to do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are given to entities that can solve the kind of problems that a neurotypical person can, does the machine potentially have more rights than a person who has a disability? For example, autism spectrum disorder can make a person unable to solve the kind of problems that a machine solves. We believe that the obvious answer is no, as problem solving does not imply consciousness. Consequently, we argue in this paper that phenomenal consciousness and, at least, computational intelligence are independent, and explain why machines do not possess phenomenal consciousness, although they can potentially develop a higher computational intelligence than human beings. In order to do so, we try to formulate an objective measure of computational intelligence and study how it presents in human beings, animals and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed across humans, animals and machines. As phenomenal consciousness and computational intelligence are independent, this fact has critical implications for society, which we also analyze in this work.

    AI-assisted Software Development Effort Estimation

    Effort estimation is a critical aspect of software project management. Without accurate estimates of the developer effort a particular project will require, the project's timeline and resourcing cannot be efficiently planned, which greatly increases the likelihood of the project failing to meet at least some of its goals. The goal of this thesis is to apply machine learning methods to analyze the work hour data logged by individual employees in order to provide project management with useful estimates of how much more effort it will take to finish a given project, and how long that will take. The work is conducted for ATR Soft Oy, using data from their internal work hour logging tool. First, a literature review is conducted to determine what kind of estimation methods and tools are currently used in the software industry, and what kind of objectives and requirements organizations commonly set for their estimation processes. The basics of machine learning are explained, and a brief look is taken at how machine learning is currently used to support software engineering and project management. The literature review revealed that while machine learning methods have been applied to software project estimation for decades at this point, such data-driven methods generally suffer from a lack of relevant historical project data and thus aren't commonly used in the industry. Initial insights were gathered from the work hour data and the analysis goals were refined accordingly. The data was pre-processed into a form suitable for training machine learning models. Two different modeling scenarios were tested: creating a single general model from all available data, and creating multiple project-specific models of a more limited scope. The modeling performance data indicates that machine learning models based on work hour data are capable of achieving better results in some situations than traditional expert estimation. The models developed here are not reliable enough to be used as the sole estimation method, but they can provide useful additional information to support decision making.
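    As a rough illustration of the two modeling scenarios described above, the following Python sketch trains a single general model on all projects and, separately, one model per project. The column names (project, hours_logged_so_far, active_developers, open_tasks, remaining_effort_h), the CSV export, and the choice of a gradient-boosted regressor are illustrative assumptions; the abstract does not specify the thesis's actual features or model family.

        # Minimal sketch of the two modeling scenarios (assumed data schema).
        import pandas as pd
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        df = pd.read_csv("work_hours.csv")  # hypothetical export of the logging tool
        features = ["hours_logged_so_far", "active_developers", "open_tasks"]
        target = "remaining_effort_h"       # remaining effort in hours (assumed label)

        # Scenario 1: one general model trained on data from all projects.
        X_train, X_test, y_train, y_test = train_test_split(
            df[features], df[target], test_size=0.2, random_state=0)
        general_model = GradientBoostingRegressor().fit(X_train, y_train)
        print("general model R^2:", general_model.score(X_test, y_test))

        # Scenario 2: separate models of more limited scope, one per project,
        # each trained only on that project's own logged history.
        project_models = {
            name: GradientBoostingRegressor().fit(grp[features], grp[target])
            for name, grp in df.groupby("project")
            if len(grp) >= 30  # skip projects with too little data to model
        }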

    Thinking and Learning Demands in Contemporary Childhood

    Is today’s childhood the same as the past’s? Frankly speaking, we cannot answer this question with a clear yes. It is obvious that children today are more into tablet computers, social networks and online games than traditional children’s games. Besides, our communication styles have changed significantly over the past years. We no longer need to meet others face to face to ask for help or to chat. Artificial intelligence, machine learning and robots are another story of the contemporary world. Robots capable of perceiving their surroundings and making decisions have started to deprive many people of their jobs. But what kind of jobs will human beings perform? The increasing emphasis on innovation, cooperation, critical thinking, creativity, problem solving, communication skills and project management is an indicator of what kind of business world today’s children will meet in the future. This ongoing trend also includes clues about how children should be educated. This study focuses on the thinking and learning demands that contemporary children are expected to meet. Throughout the chapter, the changing world is depicted briefly, and then the demands of the contemporary age regarding critical thinking, creative thinking, problem solving and learning are explored respectively.

    Satan stultified: a rejoinder to Paul Benacerraf

    Benacerraf criticizes Lucas’ argument against Mechanism because, in his opinion, it depends too much on how the system we are talking about is presented, and because the argument, put in the form of a challenge, reduces itself to a contest of wits between Lucas and the mechanists. In Benacerraf’s opinion, Lucas should clarify the sense of the notions he utilises, and the argument would have to be reconstructed as formally as possible in order to determine the philosophical premises involved. Moreover, Benacerraf maintains that, instead of abandoning the idea that the human mind is a machine, we could assume that minds are machines whose consistency cannot be proved, or that they are inadequate for arithmetic; minds could also be machines whose characteristics we are not able to specify. However, Lucas answers that the requirement of reconstructing his argument in a formal way misunderstands his project: his argument is not a direct proof but a dialectical argument, a schema of disproof for any particular version of the mechanist argument, and so the attempt to reconstruct it as a rigorous proof is a distortion of the original argument, which is essentially dialectical. As for the hypotheses suggested by Benacerraf, Lucas replies that we are able to manage arithmetic and that we do not seem as inconsistent as an inconsistent system is, because we are selective while an inconsistent system is not; on the other hand, the idea that we are machines but do not know anything about what kind of machine we are evacuates Mechanism of all content.

    Plastic Machines: Behavioural Diversity and the Turing Test

    After proposing the Turing Test, Alan Turing himself considered a number of objections to the idea that a machine might eventually pass it. One of the objections discussed by Turing was that no machine will ever pass the Turing Test because no machine will ever “have as much diversity of behaviour as a man”. He responded as follows: the “criticism that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity”. I shall argue that the objection cannot be dismissed so easily. The diversity exhibited by human behaviour is characterized by a kind of context-sensitive adaptive plasticity. Most of the time, human beings flexibly and fluently respond to what is relevant in a given situation. Moreover, ordinary human life involves an open-ended flow of shifting contexts to which our behaviour typically adapts in real time. For a machine to “have as much diversity of behaviour as a man” would be for that machine to keep its responses and behaviour relevant within such a flow. Merely giving a machine the capacity to store a huge amount of information and an enormous number of behaviour-generating rules will not achieve this goal. By drawing on arguments presented originally by Descartes, and by making contact with the frame problem in artificial intelligence, I shall argue that the distinctive context-sensitive adaptive plasticity of human behaviour explains why the Turing Test is such a stringent test for the presence of thought, and why it is much harder to pass than Turing himself may have realized.

    Hierarchy-in-flux: Co-evolving a distributed user interface for orbiting robots

    The role of context has been an important focus for Human-Computer Interaction research since the beginning of the Second Wave of HCI. While different theoretical frameworks within the HCI community take different approaches to analysing context, they always do so with the object of understanding its effects on human-machine interaction, often with the larger goal of generating insights into future designs. The forces that shape context itself are typically ignored in these analyses because they are not considered relevant to the interaction itself, which is the focus of HCI. Yet if these forces were to create different contexts for interaction, those changes would be relevant to HCI research. This suggests that HCI might benefit from techniques that analyse and design for the creation of the institutional structures that constrain human-machine interactions. We present the notions of multi-scale analysis and multi-scale design as terms which describe approaches that seek to engage the different disciplinary proficiencies that create the context for interaction. In doing so we make the case for a new kind of design education that strives to create multidisciplinary designers capable of harnessing the dynamics of systems at different levels of abstraction to achieve outcomes that exceed what we might expect from HCI alone.