
    Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding

    In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project, but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).

    Team Learning: A Theoretical Integration and Review

    With the increasing emphasis on work teams as the primary architecture of organizational structure, scholars have begun to focus attention on team learning, the processes that support it, and the important outcomes that depend on it. Although the literature addressing learning in teams is broad, it is also messy and fraught with conceptual confusion. This chapter presents a theoretical integration and review. The goal is to organize theory and research on team learning, identify actionable frameworks and findings, and emphasize promising targets for future research. We emphasize three theoretical foci in our examination of team learning, treating it as multilevel (individual and team, not individual or team), dynamic (iterative and progressive; a process not an outcome), and emergent (outcomes of team learning can manifest in different ways over time). The integrative theoretical heuristic distinguishes team learning process theories, supporting emergent states, team knowledge representations, and respective influences on team performance and effectiveness. Promising directions for theory development and research are discussed.

    THE KNOWLEDGE-GAP REDUCTION IN SOFTWARE ENGINEERING

    Many papers in the software engineering and information systems literature are dedicated to the analysis of software projects that miss their schedules, exceed their budgets, and deliver software products with poor quality or, in some cases, even the wrong functionality. The expression “software crisis” was coined in the late 1960s to describe this phenomenon. Various solutions have been proposed by academics and practitioners to deal with the software crisis, counter these trends, and improve productivity and software quality. Such solutions recommend software process improvement as the best way to build the software products needed by modern organizations. Among the well-known solutions, many are based either on software development tools or on software development approaches, methods, processes, and notations. Nevertheless, the scope of these solutions seems to be limited and the improvements they provide are often not significant. We think that, since software artifacts are accumulations of knowledge owned by organizational stakeholders, the software crisis is due to a knowledge gap resulting from the discrepancy between the knowledge integrated in software systems and the knowledge owned by organizational actors. In particular, integrating knowledge management into the software development process makes it possible to reduce the knowledge gap by building software products which reflect, at least partly, the organization’s know-how. In this paper, we propose a framework which provides a definition of knowledge based on information systems architecture and describes how a knowledge-oriented software development process deals with the knowledge gap, which may help organizations reduce the impacts of the software crisis.

    On UG and materialization

    This essay discusses Universal Grammar (UG) and the materialization of internal and external language (commonly misconceived as “lexicalization”). It develops a few simple but central ideas. First, the Universal Lexicon (the “lexical” part of UG) contains two elements: an initial root, Root Zero, and an initial functional feature, Feature Zero, identified as the Edge Feature (zero as they are void of content). Second, UG = a Minimal Language Generator, containing a) Merge, b) Root Zero, and c) Feature Zero. Third, both External and Internal Merge are preconditioned by Feature Zero or the Edge Feature (the Generalized Edge Approach). Fourth, the growth of internal language in the individual involves reiterated (formal) Copy & Merge of Root Zero and Feature Zero (the Copy Theory of Language Growth). The essay focuses on the materialization of internal language, but it also contains a brief discussion of externalization and language variation.
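    The abstract's core claim is a very small generative inventory: Merge plus two contentless primitives, Root Zero and Feature Zero (the Edge Feature), with language growth as reiterated Copy & Merge. The following is a minimal sketch of that inventory as a data structure, assuming an illustrative Python representation; the names RootZero, FeatureZero, merge and copy_and_merge are assumptions made for this example, not the essay's own formalism.

```python
# Illustrative sketch only: a toy rendering of the "Minimal Language Generator"
# described in the abstract (Merge + Root Zero + Feature Zero).

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class RootZero:
    """Initial root of the Universal Lexicon, void of content."""
    label: str = "√0"

@dataclass(frozen=True)
class FeatureZero:
    """Initial functional feature, identified as the Edge Feature."""
    label: str = "F0"

@dataclass(frozen=True)
class SyntacticObject:
    """Result of one application of Merge."""
    left: "Node"
    right: "Node"

Node = Union[RootZero, FeatureZero, SyntacticObject]

def merge(a: Node, b: Node) -> SyntacticObject:
    """Set-forming Merge; External and Internal Merge are collapsed here,
    although the essay treats both as preconditioned by the Edge Feature."""
    return SyntacticObject(a, b)

def copy_and_merge(structure: Node, times: int) -> Node:
    """Caricature of the Copy Theory of Language Growth: reiterated (formal)
    Copy & Merge of Root Zero and Feature Zero onto an existing structure."""
    for _ in range(times):
        structure = merge(merge(RootZero(), FeatureZero()), structure)
    return structure

if __name__ == "__main__":
    seed = merge(RootZero(), FeatureZero())   # the initial {√0, F0} pair
    grown = copy_and_merge(seed, times=3)
    print(grown)
```

    A fuller treatment would keep External and Internal Merge distinct and make both conditional on the Edge Feature, as the Generalized Edge Approach requires; they are collapsed above only for brevity.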

    Service Quality Assessment for Cloud-based Distributed Data Services

    The issue of less-than-100% reliability and trustworthiness of third-party controlled cloud components (e.g., IaaS and SaaS components from different vendors) may lead to laxity in the QoS guarantees offered by a service-support system S to various applications. An example of S is a replicated data service to handle customer queries with fault-tolerance and performance goals. QoS laxity (i.e., SLA violations) may be inadvertent: say, due to the inability of system designers to model the impact of sub-system behaviors onto a deliverable QoS. Sometimes, QoS laxity may even be intentional: say, to reap revenue-oriented benefits by cheating on resource allocations and/or excessive statistical sharing of system resources (e.g., VM cycles, number of servers). Our goal is to assess how well the internal mechanisms of S are geared to offer a required level of service to the applications. We use computational models of S to determine the optimal feasible resource schedules and verify how close the actual system behavior is to a model-computed 'gold standard'. Our QoS assessment methods allow comparing different service vendors (possibly with different business policies) in terms of canonical properties such as elasticity, linearity, isolation, and fairness (analogous to a comparative rating of restaurants). Case studies of cloud-based distributed applications are described to illustrate our QoS assessment methods. Specific systems studied in the thesis are: i) replicated data services where the servers may be hosted on multiple data-centers for fault-tolerance and performance reasons; and ii) content delivery networks to geographically distributed clients where the content data caches may reside on different data-centers. The methods studied in the thesis are useful in various contexts of QoS management and self-configuration in large-scale cloud-based distributed systems that are inherently complex due to size, diversity, and environment dynamicity.
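    The assessment idea above (measure how far actual system behavior falls from a model-computed gold standard, then rate vendors on canonical properties) can be caricatured with a toy score. Below is a minimal sketch assuming simple latency- and throughput-based metrics; the functions laxity_score and elasticity_score and their definitions are illustrative assumptions, not the computational models used in the thesis.

```python
# Toy sketch: compare observed service behavior against a model-computed
# "gold standard" and score the gap. Metric definitions are assumptions
# made for illustration only.

from statistics import mean

def laxity_score(observed_latency_ms, optimal_latency_ms):
    """Average per-request QoS laxity: how far observed latency exceeds the
    model-computed optimum (0.0 means the system matches the gold standard)."""
    gaps = [max(0.0, obs - opt) / opt
            for obs, opt in zip(observed_latency_ms, optimal_latency_ms)]
    return mean(gaps)

def elasticity_score(throughput_by_load):
    """Crude elasticity check: how proportionally achieved throughput tracks
    offered load. 1.0 = perfectly linear scaling; lower values suggest
    saturation or resource under-provisioning."""
    ratios = [tput / load for load, tput in throughput_by_load.items()]
    return min(ratios) / max(ratios)

if __name__ == "__main__":
    observed = [120.0, 95.0, 210.0, 130.0]   # measured request latencies (ms)
    optimal  = [100.0, 90.0, 150.0, 125.0]   # model-computed feasible optimum
    print("laxity:", round(laxity_score(observed, optimal), 3))

    # offered load (req/s) -> achieved throughput (req/s)
    loads = {100: 98, 200: 190, 400: 310}
    print("elasticity:", round(elasticity_score(loads), 3))
```

    In the thesis, the gold standard would come from the optimal feasible resource schedules computed from models of S, rather than from fixed latency targets as assumed here; the same comparison pattern extends to the other canonical properties (linearity, isolation, fairness).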

    Strategic perspectives on modularity

    In this paper we argue that the debate on modularity has come to a point where a consensus is slowly emerging. However, we also contend that this consensus is clearly technology driven. In particular, no room is left for firm strategies. Typically, technology is considered as an exogenous variable to which firms have no choice but to adapt. Taking a slightly different perspective, our main objective is to offer a conceptual framework that sheds light on the role of corporate strategies in the process of modularization. From interviews with academic design engineers, we show that firms often consider product architecture as a critical variable to fit their strategic requirements. Based on design sciences, we build an original approach to product modularity. This approach, which leaves an important space for firms' strategic choices, also captures a large part of the industrial reality of modularity. Our framework, which is a first step towards the consideration of strategies within the framework of modularity, accounts for the diversity of industrial logics related to product modularization.
    Keywords: product modularity; corporate strategy; technological determinism