
    Psychopower and Ordinary Madness: Reticulated Dividuals in Cognitive Capitalism

    Despite the seemingly neutral vantage of using nature for widely-distributed computational purposes, neither post-biological nor post-humanist teleology simply concludes with the real "end of nature" as entailed in the loss of the specific ontological status embedded in the identifier "natural." As evinced by the ecological crises of the Anthropocene—of which the 2019 Brazil Amazon rainforest fires are only the most recent—our epoch has transfixed the "natural order" and imposed entropic artificial integration, producing living species that become "anoetic," made to serve as automated exosomatic residues, or digital flecks. I further develop Gilles Deleuze's description of control societies to upturn Foucauldian biopower, replacing its spatio-temporal bounds with the exographic excesses of psycho-power; culling and further detailing Bernard Stiegler's framework of transindividuation and hyper-control, I examine how becoming-subject is predictively facilitated within cognitive capitalism and what Alexander Galloway terms "deep digitality." Despite the loss of material vestiges qua virtualization—which I seek to trace in a historical review of industrialization to postindustrialization—the drive-based and reticulated "internet of things" facilitates a closed loop from within the brain to the outside environment, such that the aperture of thought is mediated and compressed. The human brain, understood through its material constitution, is susceptible to total datafication's laminated process of "becoming-mnemotechnical," and, as neuroplasticity is now a valid description for deep learning and neural nets, we are privy to the rebirth of the once-discounted metaphor of the "cybernetic brain." Probing algorithmic governmentality while posing noetic dreaming as both technical and pharmacological, I seek to analyze how spirit is blithely confounded with machine-thinking's gelatinous cognition, as prosthetic organ-adaptation becomes probabilistically molded, networked, and agentially inflected (rather than simply externalized).

    Towards Autopoietic Computing

    A key challenge in modern computing is to develop systems that address complex, dynamic problems in a scalable and efficient way: the growing complexity of software makes efficient, flexible systems increasingly difficult to design and maintain. Biological systems are thought to possess robust, scalable processing paradigms that can automatically manage complex, dynamic problem spaces, and several of their properties may be useful in computer systems. The biological properties of self-organisation, self-replication, self-management, and scalability are addressed in an interesting way by autopoiesis, a descriptive theory of the cell founded on the concept of a system's circular organisation, through which it defines its boundary with its environment. In this paper, therefore, we review the main concepts of autopoiesis and then discuss how they could be related to fundamental concepts and theories of computation. The paper is conceptual in nature, and the emphasis is on reviewing other people's work in this area as part of a longer-term strategy to develop a formal theory of autopoietic computing. Comment: 10 pages, 3 figures.
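    As a toy illustration only (an assumption-laden sketch, not a model from the paper), the short Python simulation below couples a production process to the boundary it produces: boundary components decay at random, an intact "cell" regenerates them, and once the boundary falls below a threshold, production stops and the organisation dissolves. All names and parameters (DECAY_P, PRODUCTION, MIN_BOUNDARY) are hypothetical.

    ```python
    import random

    # Toy sketch of circular organisation (hypothetical, not the paper's model):
    # an internal process produces boundary components, and the boundary, in turn,
    # is what allows that process to keep running.

    DECAY_P = 0.3      # per-step probability that any boundary unit decays
    PRODUCTION = 4     # boundary units produced per step while the cell is intact
    MIN_BOUNDARY = 5   # below this the boundary is breached and production stops


    def step(boundary_units, intact):
        """Advance the toy cell by one time step; return (units, still_intact)."""
        # Decay: each unit independently survives with probability 1 - DECAY_P.
        survivors = sum(1 for _ in range(boundary_units) if random.random() > DECAY_P)
        # Production: only an intact cell regenerates its own boundary.
        units = survivors + (PRODUCTION if intact else 0)
        return units, units >= MIN_BOUNDARY


    def run(steps=50, start_units=20):
        units, intact = start_units, True
        for t in range(steps):
            units, intact = step(units, intact)
            if not intact:
                print(f"t={t}: boundary breached, organisation lost ({units} units left)")
                return
        print(f"t={steps}: organisation maintained ({units} boundary units)")


    random.seed(0)
    run()
    ```

    The point of the toy is only the circular dependency: the process persists because the boundary it maintains keeps it intact, and the boundary persists because the process keeps producing it.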

    Crime applications and social machines: crowdsourcing sensitive data

    The authors explore some issues with the United Kingdom (U.K.) crime reporting and recording systems which currently produce Open Crime Data. The availability of Open Crime Data seems to create a potential data ecosystem which would encourage crowdsourcing, or the creation of social machines, in order to counter some of these issues. While such solutions are enticing, we suggest that the theoretical solution in fact brings to light fairly compelling problems, which highlight some limitations of crowdsourcing as a means of addressing Berners-Lee's "social constraint." The authors present a thought experiment – a Gedankenexperiment – in order to explore the implications, both good and bad, of a social machine in such a sensitive space, and suggest a Web Science perspective to pick apart the ramifications of this thought experiment as a theoretical approach to the characterisation of social machines.

    Correlation of Automorphism Group Size and Topological Properties with Program-size Complexity Evaluations of Graphs and Complex Networks

    We show that numerical approximations of Kolmogorov complexity (K) applied to graph adjacency matrices capture some group-theoretic and topological properties of graphs and empirical networks ranging from metabolic to social networks. That K and the size of the group of automorphisms of a graph are correlated opens up interesting connections to problems in computational geometry, and thus connects several measures and concepts from complexity science. We show that approximations of K characterise synthetic and natural networks by their generating mechanisms, assigning lower algorithmic randomness to complex network models (Watts-Strogatz and Barabasi-Albert networks) and high Kolmogorov complexity to (random) Erdos-Renyi graphs. We derive these results via two different Kolmogorov complexity approximation methods applied to the adjacency matrices of the graphs and networks. The methods used are the traditional lossless-compression approach to Kolmogorov complexity, and a normalised version of a Block Decomposition Method (BDM) measure, based on algorithmic probability theory. Comment: 15 two-column pages, 20 figures. Forthcoming in Physica A: Statistical Mechanics and its Applications.
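    A minimal sketch of the lossless-compression approach, assuming Python with networkx and zlib (tools not named in the abstract, and not the paper's Block Decomposition Method): the zlib-compressed size of a graph's flattened adjacency matrix serves as a crude upper-bound proxy for K, and comparing Erdos-Renyi, Watts-Strogatz, and Barabasi-Albert graphs of roughly comparable density gives a feel for the reported ordering.

    ```python
    import zlib

    import networkx as nx


    def compressed_size(g: nx.Graph) -> int:
        """Crude Kolmogorov-complexity proxy: zlib-compressed size (in bytes) of
        the graph's adjacency matrix flattened into a string of 0s and 1s."""
        nodes = sorted(g.nodes())
        bits = "".join("1" if g.has_edge(u, v) else "0" for u in nodes for v in nodes)
        return len(zlib.compress(bits.encode(), level=9))


    n = 200
    graphs = {
        "Erdos-Renyi (random)": nx.gnp_random_graph(n, p=0.05, seed=1),
        "Watts-Strogatz (small-world)": nx.watts_strogatz_graph(n, k=10, p=0.05, seed=1),
        "Barabasi-Albert (scale-free)": nx.barabasi_albert_graph(n, m=5, seed=1),
    }

    for name, g in graphs.items():
        print(f"{name:30s} edges={g.number_of_edges():5d} "
              f"compressed adjacency = {compressed_size(g)} bytes")
    ```

    On a typical run the Erdos-Renyi graph compresses least well, in line with the abstract's claim that random graphs receive the highest complexity estimates; exact byte counts vary with the generator seeds and zlib's heuristics, and this compression proxy is distinct from the normalised BDM measure the paper also uses.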

    Learning, Social Intelligence and the Turing Test - why an "out-of-the-box" Turing Machine will not pass the Turing Test

    The Turing Test (TT) checks for human intelligence, rather than any putative general intelligence. It involves repeated interaction requiring learning in the form of adaption to the human conversation partner. It is a macro-level post-hoc test, in contrast to the definition of a Turing Machine (TM), which is a prior micro-level definition. This raises the question of whether learning is just another computational process, i.e. one that can be implemented as a TM. Here we argue that learning or adaption is fundamentally different from computation, though it does involve processes that can be seen as computations. To illustrate this difference we compare (a) designing a TM and (b) learning a TM, defining them for the purpose of the argument. We show that there is a well-defined sequence of problems which are not effectively designable but are learnable, in the form of the bounded halting problem. Some characteristics of human intelligence are reviewed, including its interactive nature, learning abilities, imitative tendencies, linguistic ability, and context-dependency. A story that explains some of these is the Social Intelligence Hypothesis. If this is broadly correct, it points to the necessity of a considerable period of acculturation (social learning in context) if an artificial intelligence is to pass the TT. Whilst it is always possible to 'compile' the results of learning into a TM, this would not be a designed TM and would not be able to continually adapt (and so pass future TTs). We conclude three things: that a purely "designed" TM will never pass the TT; that there is no such thing as a general intelligence, since intelligence necessarily involves learning; and that learning/adaption and computation should be clearly distinguished. Comment: 10 pages, invited talk at the Turing Centenary Conference CiE 2012, special session on "The Turing Test and Thinking Machines".
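    The bounded halting problem mentioned above asks whether a given machine halts on a given input within n steps; unlike the unbounded halting problem, it is decidable by direct simulation. The Python sketch below illustrates that decision procedure for a hypothetical dictionary encoding of a single-tape Turing machine (the encoding and the example machine are not taken from the paper).

    ```python
    def halts_within(delta, start, accept, tape, bound):
        """Simulate a single-tape Turing machine for at most `bound` steps.

        delta : dict mapping (state, symbol) -> (write_symbol, move, next_state),
                with move in {-1, 0, +1}; a missing rule means the machine halts.
        Returns True iff the machine reaches `accept` within `bound` steps."""
        cells = dict(enumerate(tape))   # sparse tape, blank symbol is "_"
        state, head = start, 0
        for _ in range(bound):
            if state == accept:
                return True
            symbol = cells.get(head, "_")
            if (state, symbol) not in delta:
                return False            # halted without accepting
            write, move, state = delta[(state, symbol)]
            cells[head] = write
            head += move
        return state == accept


    # Example machine: walk right over 1s until a blank is found, then accept.
    delta = {
        ("scan", "1"): ("1", +1, "scan"),
        ("scan", "_"): ("_", 0, "done"),
    }
    print(halts_within(delta, "scan", "done", "1111", bound=10))  # True
    print(halts_within(delta, "scan", "done", "1111", bound=3))   # False
    ```

    Because the simulation is cut off after `bound` steps, the procedure terminates on every input, which is what makes the bounded variant, as the abstract argues, a problem family that can be learned even where it cannot be effectively designed outright.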