58 research outputs found

    Biohacking and code convergence: a transductive ethnography

    This dissertation examines creative practices and discourses intersecting computer and biotech cultures. 
It queries influential metaphors and analogies on both sides of the intersection, and their positioning of biotech and information technologies as expression media. It follows mediations across their incarnations as codes, both computational and biological, and situates their analogical expressivity and programmability as a process of code convergence. Converging visions of technological freedom facilitated the entrance of computers into 1960s Western hobbyist hacker circles, as well as into consumer markets. Almost fifty years later, the analogy drives claims to freedom of information, and freedom of innovation, from biohacker hobbyist groups to new biotech consumer markets. Such biohacking practices are understood as individuations: as ongoing attempts to resolve frictions, tensions working through the claims to freedom and openness animating software and biotech cultures. Tensions get modulated in many ways. One of them, otherwise known as “forking,” refers here to a critical bifurcation allowing for differing iterations of biotechnical and computational configurations. Forking informs, that is, simultaneously affords and constrains, differing collective visions of openness. Forking also operates on the materiality and agency invested in biotechnical and computational practices. Taken as a significant process of co-constitution and differentiation in collective action, bifurcation invites the following three questions: 1) How does forking solve tensions working through claims to biotech freedom? 2) In this solving process, how can claims bifurcate and transform to the point of radically altering biotech practices? 3) What new problems do these solutions call into existence? 
This research found these questions, and both scales of material action and agency, incarnated in three extensive ethnographic journeys spanning three years (2012-2015): the first in a Brooklyn-based biotech community laboratory, the second in the early days of a biotech community group in Montreal, and the third in the world’s first synthetic biology startup accelerator in Cork, Ireland. The inquiry’s guiding empirical logic is neither solely deductive nor inductive, but transductive. It borrows from Gilbert Simondon’s philosophy of communication and information to experience epistemology as an act of analogical creation involving the radical, irreversible transformation of knower and known. Transductive heuristics offer unconventional encounters with practices, metaphors and analogies of code. In the end, transductive methods acknowledge code convergence as metastable writing games, and ethnographic research itself as a transductive process.

    East-West Paths to Unconventional Computing

    Unconventional computing is about breaking boundaries in thinking, acting and computing. Typical topics of this non-typical field include, but are not limited to, the physics of computation, non-classical logics, new complexity measures, novel hardware, and mechanical, chemical and quantum computing. Unconventional computing encourages a new style of thinking, while practical applications are obtained from uncovering and exploiting principles and mechanisms of information processing in, and functional properties of, physical, chemical and living systems; in particular, efficient algorithms are developed, (almost) optimal architectures are designed, and working prototypes of future computing devices are manufactured. This article includes idiosyncratic accounts of ‘unconventional computing’ scientists reflecting on their personal experiences, what attracted them to the field, their inspirations and discoveries.

    Not all computational methods are effective methods

    An effective method is a computational method that might, in principle, be executed by a human. In this paper, I argue that there are methods for computing that are not effective methods. The examples I consider are taken primarily from quantum computing, but these are only meant to be illustrative of a much wider class. Quantum interference and quantum parallelism involve steps that might be implemented in multiple physical systems, but cannot be implemented, or at least not at will, by an idealised human. Recognising that not all computational methods are effective methods is important for at least two reasons. First, it is needed to correctly state the results of Turing and other founders of computation theory. Turing is sometimes said to have offered a replacement for the informal notion of an effective method with the formal notion of a Turing machine. I argue that such a view only holds under limited circumstances. Second, not distinguishing between computational methods and effective methods can lead to mistakes when quantifying over the class of all possible computational methods. Such quantification is common in philosophy of mind in the context of thought experiments that explore the limits of computational functionalism. I argue that these ‘homuncular’ thought experiments should not be treated as valid.
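The interference step mentioned above can be sketched numerically. This is a minimal illustration, not an example from the paper: it applies the standard single-qubit Hadamard transform twice, so that the amplitudes of the two computational paths cancel on the |1⟩ branch, the kind of step a physical system performs "all at once" rather than as a humanly executable rule.

```python
import math

# Single-qubit amplitudes as plain floats; the Hadamard transform acts as
# H|0> = (|0> + |1>)/sqrt(2), H|1> = (|0> - |1>)/sqrt(2).
def hadamard(a0, a1):
    s = 1 / math.sqrt(2)
    return s * (a0 + a1), s * (a0 - a1)

a0, a1 = 1.0, 0.0          # start in |0>
a0, a1 = hadamard(a0, a1)  # equal superposition: (0.707..., 0.707...)
a0, a1 = hadamard(a0, a1)  # paths interfere; the |1> amplitude cancels
print(round(a0, 6), round(a1, 6))  # 1.0 0.0
```

The cancellation in the second step happens across both branches of the superposition simultaneously, which is why no idealised human following instructions step by step can reproduce it at will.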

    Dreaming Data. Aspekte der Ästhetik, OriginalitĂ€t und Autorschaft in der kĂŒnstlichen KreativitĂ€t

    The machines humans once dreamed up now seem to dream themselves. Thanks to ever more sophisticated algorithms, computers today no longer merely compute: they write screenplays, paint pictures, and compose music. The question of whether computers can be creative has become obsolete; the more interesting question is how they are creative. As far as logical operations are concerned, no one doubts any longer that computers have long surpassed humans. The reservations are greater when it comes to art: a computer, so the objection goes, can never be truly creative, or at least never as creative as a human. What gets short shrift in this normatively tinged debate about the possibilities of simulated creativity is an unprejudiced engagement with the concrete artwork. To counter this, the author examines selected examples of artificial creativity, from film to painting to music, with regard to their aesthetics, originality, and conception of authorship. He relates his findings to the dominant interpretive framework of a cultural criticism that, resting on an outdated aesthetics of genius, oscillates between hostility to technology on the one hand and euphoric affirmation of a supposedly autonomous Artificial Creativity on the other, while the author himself takes a more innovative path.

    Brave New Worlds: How computer simulation changes model-based science

    A large part of science involves building and investigating models. One key feature of model-based science is that one thing is studied as a means of learning about some rather different thing. How scientists make inferences from a model to the world, then, is a topic of great interest to philosophers of science. An increasing number of models are specified with very complex computer programs. In this thesis, I examine the epistemological issues that arise when scientists use these computer simulation models to learn about the world or to think through their ideas. I argue that the explosion of computational power over the last several decades has revolutionised model-based science, but that restraint and caution must be exercised in the face of this power. To make my arguments, I focus on two kinds of computer simulation modelling: climate modelling and, in particular, high-fidelity climate models; and agent-based models, which are used to represent populations of interacting agents often in an ecological or social context. Both kinds involve complex model structures and are representative of the beneficial capacities of computer simulation. However, both face epistemic costs that follow from using highly complex model structures. As models increase in size and complexity, it becomes far harder for modellers to understand their models and why they behave the way they do. The value of models is further obscured by their proliferation, and a proliferation of programming languages in which they can be described. If modellers struggle to grasp their models, they can struggle to make good inferences with them. While the climate modelling community has developed much of the infrastructure required to mitigate these epistemic costs, the less mature field of agent-based modelling is still struggling to implement such community standards and infrastructure. 
I conclude that modellers cannot take full advantage of the representational capacities of computer simulations unless resources are invested into their study that scale proportionately with the models' complexity.
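The kind of agent-based model discussed above can be made concrete with a toy sketch (this is an illustrative example, not a model from the thesis): a one-dimensional voter model in which each agent repeatedly adopts a random neighbour's opinion. Even this minimal interaction rule produces aggregate drift toward consensus that is easy to run but already non-trivial to explain analytically, which is the epistemic tension the thesis describes.

```python
import random

# Toy voter model on a ring of agents: at each step a random agent
# copies the opinion of one of its two neighbours. Local copying
# alone drives the population toward global consensus.
def voter_model(n_agents=20, steps=500, seed=42):
    rng = random.Random(seed)  # seeded for reproducibility
    opinions = [rng.choice([0, 1]) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        j = (i + rng.choice([-1, 1])) % n_agents  # a ring neighbour
        opinions[i] = opinions[j]
    return opinions

final = voter_model()
print(sum(final), "of", len(final), "agents hold opinion 1")
```

Note that explaining *why* a given run ends where it does already requires reasoning about the full random trajectory, not just the rule; scaling the agents, state space, and interaction topology up quickly produces the opacity the thesis attributes to complex agent-based models.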

    A Neutrosophic Approach Based on TOPSIS Method to Image Segmentation

    Neutrosophic set (NS) theory is a recently proposed formal framework. NS can not only describe the incomplete information in a decision-making system but also depict its uncertainty and inconsistency, so it has been applied successfully in several fields such as risk assessment, fuzzy decision-making and image segmentation. In this paper, a new neutrosophic approach based on the TOPSIS method, which can make full use of NS information, is proposed to segment images. Firstly, the image is transformed into the NS domain. Then, two operations, a modified alpha-mean and a beta-enhancement operation, are used to enhance image edges and to reduce uncertainty. At last, the segmentation is achieved by the TOPSIS method and modified fuzzy c-means (FCM) clustering. Experiments on simulated and real images illustrate that the proposed method is more effective and accurate in image segmentation.
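The first step described above, transforming an image into the NS domain, can be sketched as follows. This follows the common formulation in the NS-segmentation literature: truth T from the normalised local-mean intensity, indeterminacy I from the normalised deviation between a pixel and its local mean, and F = 1 - T. The 3x3 local-mean window and the exact normalisation are assumptions here, not details taken from the paper.

```python
# Map a grayscale image (list of lists of intensities) into the
# neutrosophic domain as three membership maps (T, I, F).
def to_neutrosophic(img, w=1):
    h, wd = len(img), len(img[0])

    def local_mean(r, c):  # mean over a (2w+1)x(2w+1) window, clipped at borders
        vals = [img[r2][c2]
                for r2 in range(max(0, r - w), min(h, r + w + 1))
                for c2 in range(max(0, c - w), min(wd, c + w + 1))]
        return sum(vals) / len(vals)

    means = [[local_mean(r, c) for c in range(wd)] for r in range(h)]
    deltas = [[abs(img[r][c] - means[r][c]) for c in range(wd)] for r in range(h)]
    m_min = min(min(row) for row in means); m_max = max(max(row) for row in means)
    d_min = min(min(row) for row in deltas); d_max = max(max(row) for row in deltas)

    # T: normalised local mean; I: normalised deviation; F: complement of T.
    T = [[(means[r][c] - m_min) / ((m_max - m_min) or 1) for c in range(wd)] for r in range(h)]
    I = [[(deltas[r][c] - d_min) / ((d_max - d_min) or 1) for c in range(wd)] for r in range(h)]
    F = [[1 - T[r][c] for c in range(wd)] for r in range(h)]
    return T, I, F

# Tiny example: a dark region next to a bright column.
img = [[10, 10, 200], [10, 10, 200], [10, 10, 200]]
T, I, F = to_neutrosophic(img)
```

On this example the bright column gets T close to 1 and the dark region T close to 0, while I peaks along the edge between them, which is exactly where the paper's alpha-mean and beta-enhancement operations then concentrate their smoothing and sharpening.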
    • 
