
    Artificial consciousness and the consciousness-attention dissociation

    Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems; these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.

    The Computability-Theoretic Content of Emergence

    In dealing with emergent phenomena, a common task is to identify useful descriptions of them in terms of the underlying atomic processes, and to extract enough computational content from these descriptions to enable predictions to be made. Generally, the underlying atomic processes are quite well understood, and (with important exceptions) captured by mathematics from which it is relatively easy to extract algorithmic content. A widespread view is that the difficulty in describing transitions from algorithmic activity to the emergence associated with chaotic situations is a simple case of complexity outstripping computational resources and human ingenuity. Or, on the other hand, that phenomena transcending the standard Turing model of computation, if they exist, must necessarily lie outside the domain of classical computability theory. In this article we suggest that much of the current confusion arises from conceptual gaps and the lack of a suitably fundamental model within which to situate emergence. We examine the potential for placing emergent relations in a familiar context based on Turing's 1939 model for interactive computation over structures described in terms of reals. The explanatory power of this model is explored, formalising informal descriptions in terms of mathematical definability and invariance, and relating a range of basic scientific puzzles to results and intractable problems in computability theory.
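
    A hedged gloss, for concreteness only (these definitions are an illustrative reading, not the authors' own formalism): Turing's 1939 model equips a machine with an oracle A, giving relative computability B \le_T A and the jump operator A'. One way to cast an emergent relation R over a process coded by A is as something definable from A yet not computable from it, for example:

        R \not\le_T A \quad\text{while}\quad R \le_T A', \quad\text{i.e. } R \text{ is limit-computable from } A \text{ but not decidable relative to } A.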

    Spatio-Temporal Sentiment Hotspot Detection Using Geotagged Photos

    We perform spatio-temporal analysis of public sentiment using geotagged photo collections. We develop a deep learning-based classifier that predicts the emotion conveyed by an image. This allows us to associate sentiment with place. We perform spatial hotspot detection and show that different emotions have distinct spatial distributions that match expectations. We also perform temporal analysis using the capture time of the photos. Our spatio-temporal hotspot detection correctly identifies emerging concentrations of specific emotions, and year-by-year analyses of select locations show there are strong temporal correlations between the predicted emotions and known events.
    Comment: To appear in ACM SIGSPATIAL 201
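
    The abstract describes the pipeline only at a high level, so the following is a minimal sketch of just the spatial hotspot step under stated assumptions: photos are presumed to already carry an emotion label from a separately trained image classifier, and the grid resolution, 3x3 neighborhood, and z-score formulation are illustrative choices rather than the paper's exact method.

        import numpy as np

        def emotion_hotspots(lats, lons, labels, emotion, bins=50):
            """Bin photos of one predicted emotion onto a lat/lon grid and
            return a Getis-Ord-style z-score per cell (illustrative only)."""
            lats, lons, labels = map(np.asarray, (lats, lons, labels))
            sel = labels == emotion
            # 2-D histogram of photo counts for the target emotion.
            counts, _, _ = np.histogram2d(lats[sel], lons[sel], bins=bins)
            # Local 3x3 neighborhood sums, with zero padding at the borders.
            padded = np.pad(counts, 1, mode="constant")
            local = sum(np.roll(np.roll(padded, dr, axis=0), dc, axis=1)
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1))[1:-1, 1:-1]
            # Standardize the local sums; large positive values flag candidate hotspots.
            return (local - local.mean()) / (local.std() + 1e-9)

        # Synthetic usage: cells with z > 2 would be reported as emerging hotspots.
        rng = np.random.default_rng(0)
        lats = rng.uniform(40.0, 41.0, 2000)
        lons = rng.uniform(-74.5, -73.5, 2000)
        labels = rng.choice(["joy", "sadness", "anger"], 2000)
        print("hotspot cells:", int((emotion_hotspots(lats, lons, labels, "joy") > 2).sum()))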

    A computational cognition model of perception, memory, and judgment

    The mechanism of human cognition and its computability provide an important theoretical foundation for the intelligent computation of visual media. This paper focuses on the intelligent processing of massive volumes of visual media data and the corresponding processes of perception, memory, and judgment in cognition. In particular, both the human cognitive mechanism and the cognitive computability of visual media are investigated at three levels: neurophysiology, cognitive psychology, and computational modeling. A computational cognition model of Perception, Memory, and Judgment (the PMJ model for short) is proposed, which consists of three stages and three pathways, integrating the cognitive mechanism and computability aspects in a unified framework. Finally, this paper illustrates applications of the proposed PMJ model in five visual media research areas. As demonstrated by these applications, the PMJ model sheds some light on the intelligent processing of visual media and suggests how researchers might apply human cognitive mechanisms in computer science.
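
    The abstract names the three stages (perception, memory, judgment) and says there are three pathways, but does not spell out how the pathways are wired; the sketch below is therefore a hypothetical reading used only to make the staged architecture concrete, not the paper's PMJ model.

        from dataclasses import dataclass, field
        from typing import Any, Callable, List

        @dataclass
        class PMJPipeline:
            """Toy three-stage pipeline; the three routes below are assumptions."""
            perceive: Callable[[Any], Any]          # stage 1: features from raw input
            judge: Callable[[Any, List[Any]], Any]  # stage 3: decision from features (+ memory)
            memory: List[Any] = field(default_factory=list)  # stage 2: stored percepts

            def fast_path(self, stimulus):
                # Hypothetical pathway A: perception feeds judgment directly.
                return self.judge(self.perceive(stimulus), [])

            def deliberate_path(self, stimulus):
                # Hypothetical pathway B: judgment consults stored memory as context.
                return self.judge(self.perceive(stimulus), self.memory)

            def learn(self, stimulus):
                # Hypothetical pathway C: perception writes into memory for later use.
                self.memory.append(self.perceive(stimulus))

        # Usage with trivial stand-ins for the perception and judgment stages.
        pmj = PMJPipeline(perceive=len,
                          judge=lambda f, mem: f > (sum(mem) / len(mem) if mem else 0))
        pmj.learn("short caption")
        print(pmj.deliberate_path("a much longer caption describing a busy scene"))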

    Bridging the gap between the micro- and the macro-world of tumors

    At present it is still quite difficult to match the vast knowledge on the behavior of individual tumor cells with macroscopic measurements on clinical tumors. On the modeling side, we already know how to deal with many molecular pathways and cellular events, using systems of differential equations and other modeling tools, and ideally, we should be able to extend such a mathematical description up to the level of large tumor masses. An extended model should thus help us forecast the behavior of large tumors from our basic knowledge of microscopic processes. Unfortunately, the complexity of these processes makes it very difficult, probably impossible, to develop comprehensive analytical models. We try to bridge the gap with a simulation program which is based on basic biochemical and biophysical processes, thereby building an effective computational model, and in this paper we describe its structure, endeavoring to make the description sufficiently detailed and yet understandable.
    Comment: 24 pages, 10 figures. Accepted for publication in AIP Advances, in the special issue on the physics of cancer.
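
    The abstract cites systems of differential equations as one of the standard modeling tools it builds on; as a minimal, hedged illustration of that approach (not the simulation program described in the paper, whose structure and parameters are not given here), the sketch below Euler-integrates a Gompertz growth law for total tumor cell number.

        import numpy as np

        def gompertz_growth(n0=1e3, capacity=1e11, rate=0.006, dt=1.0, days=365):
            """Euler-integrate dN/dt = rate * N * ln(capacity / N).
            All parameter values are illustrative placeholders."""
            n = np.empty(days + 1)
            n[0] = n0
            for t in range(days):
                n[t + 1] = n[t] + dt * rate * n[t] * np.log(capacity / n[t])
            return n

        # Usage: a one-year trajectory starting from ~1e3 cells.
        trajectory = gompertz_growth()
        print(f"cells after one year: {trajectory[-1]:.3e}")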

    The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism

    Computer vision and other biometric data science applications have commenced a new project of profiling people. Rather than using 'transaction generated information', these systems measure the 'real world' and produce an assessment of the 'world state' - in this case an assessment of some individual trait. Instead of using proxies or scores to evaluate people, they increasingly deploy a logic of revealing the truth about reality and the people within it. While these profiling knowledge claims are sometimes tentative, they increasingly suggest that only through computation can these excesses of reality be captured and understood. This article explores the bases of those claims in the systems of measurement, representation, and classification deployed in computer vision. It asks if there is something new in this type of knowledge claim, sketches an account of a new form of computational empiricism being operationalised, and questions what kind of human subject is being constructed by these technological systems and practices. Finally, the article explores legal mechanisms for contesting the emergence of computational empiricism as the dominant knowledge platform for understanding the world and the people within it.