
    Institutional paraconsciousness and its pathologies

    This analysis extends a recent mathematical treatment of the Baars consciousness model to analogous, but far more complicated, phenomena of institutional cognition. Individual consciousness is limited to a single, tunable, giant component of interacting cognitive modules, instantiating a Global Workspace. Human institutions, by contrast, support several, sometimes many, such giant components simultaneously, although their behavior remains constrained to a topology generated by cultural context and by the path-dependence inherent to organizational history. Such highly parallel multitasking - institutional paraconsciousness - while clearly limiting inattentional blindness and the consequences of failures within individual workspaces, does not eliminate them, and introduces new characteristic dysfunctions involving the distortion of information sent between global workspaces. Consequently, organizations (or machines designed along these principles), while highly efficient at certain kinds of tasks, remain subject to canonical and idiosyncratic failure patterns similar to, but more complicated than, those afflicting individuals. Remediation is complicated by the manner in which pathogenic externalities can write images of themselves on both institutional function and therapeutic intervention, in the context of relentless market selection pressures. The approach is broadly consonant with recent work on collective efficacy, collective consciousness, and distributed cognition.

    Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (as opposed to general-purpose) software design. General issues that cut across particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.

    Raising awareness about measurement error in research on unconscious mental processes

    Experimental psychologists often neglect the poor psychometric properties of the dependent measures collected in their studies. In particular, a low reliability of measures can have dramatic consequences for the interpretation of key findings in some of the most popular experimental paradigms, especially when strong inferences are drawn from the absence of statistically significant correlations. In research on unconscious cognition, for instance, it is commonly argued that the lack of a correlation between task performance and measures of awareness or explicit recollection of the target stimuli provides strong support for the conclusion that the cognitive processes underlying performance must be unconscious. Using contextual cuing of visual search as a case study, we show that given the low reliability of the dependent measures collected in these studies, it is usually impossible to draw any firm conclusion about the unconscious character of this effect from correlational analyses. Furthermore, both a psychometric meta-analysis of the available evidence and a cognitive-modeling approach suggest that, in fact, we should expect to see very low correlations between performance and awareness at the empirical level, even if both constructs are perfectly related at the latent level. Convincing evidence for the unconscious character of contextual cuing and other effects will most likely demand richer and larger data sets, coupled with more powerful analytic approaches.
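
    The expected attenuation the abstract describes follows directly from classical test theory. As a minimal illustration (not the authors' analysis; the reliability values below are hypothetical), Spearman's correction shows how even a perfect latent correlation shrinks at the observed level:

    ```python
    import math

    def attenuated_r(r_latent, rel_x, rel_y):
        # Spearman's attenuation formula: the observed correlation equals the
        # latent correlation scaled by the square root of the product of the
        # two measures' reliabilities.
        return r_latent * math.sqrt(rel_x * rel_y)

    # Hypothetical values: even a perfect latent relation (r = 1.0) between
    # performance and awareness looks weak empirically when the awareness
    # measure is unreliable, as argued for contextual-cuing paradigms.
    print(attenuated_r(1.0, rel_x=0.80, rel_y=0.10))  # ~0.28
    ```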

    A MERGE Model with Endogenous Technological Change and the Cost of Carbon Stabilization

    Two stylized backstop systems with endogenous technological learning (ETL) formulations are introduced in MERGE: one for the electric market and the other for the non-electric market. The model is then applied to analyze the impacts of ETL on carbon-mitigation policy, contrasting the resulting impacts with the situation without learning. As the model considers endogenous technological change in the energy sector only, some key exogenous parameters defining the production function are varied, together with the assumed learning rates, to check the robustness of our results. Based on the model estimations and the sensitivity analyses, we conclude that increased commitments to developing new technologies so that they advance along their learning curves have the potential to substantially reduce the cost of climate mitigation, helping to reach safe concentrations of carbon in the atmosphere. Keywords: climate change stabilization policies; non-linear optimization; induced technological change; energy and macroeconomy.
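
    The ETL formulations rest on standard experience curves. As a minimal sketch (not MERGE itself; the initial cost and learning rate below are illustrative assumptions), the one-factor learning curve ties unit cost to cumulative capacity:

    ```python
    import math

    def unit_cost(c0, cum_capacity, k0, learning_rate):
        # One-factor experience curve: each doubling of cumulative installed
        # capacity reduces unit cost by the learning rate. The learning-curve
        # exponent b satisfies learning_rate = 1 - 2**(-b).
        b = -math.log2(1.0 - learning_rate)
        return c0 * (cum_capacity / k0) ** (-b)

    # Illustrative backstop technology: $2000/kW initial cost, 15% learning rate.
    for doublings in range(4):
        print(doublings, round(unit_cost(2000.0, 2.0 ** doublings, 1.0, 0.15), 1))
    # -> 2000.0, 1700.0, 1445.0, ~1228.2
    ```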

    Block-level test scheduling under power dissipation constraints

    As device technologies such as VLSI and Multichip Module (MCM) become mature, and larger and denser memory ICs are implemented in high-performance digital systems, power dissipation becomes a critical factor and can no longer be ignored either in normal operation of the system or under test conditions. One of the major considerations in test scheduling is the fact that heat dissipated during test application is significantly higher than during normal operation (sometimes 100-200% higher). Test scheduling is strongly related to test concurrency, a design property which strongly impacts testability and power dissipation. To satisfy high fault coverage goals with reduced test application time under given power dissipation constraints, the testing of all components in the system should be performed in parallel to the greatest extent possible. Some theoretical analysis of this problem has been carried out, but only at the IC level. The problem was basically described as compatible test clustering, where the compatibility among tests is determined by test resource conflicts and power dissipation conflicts at the same time. From an implementation point of view, this problem was identified as NP-complete. In this thesis, an efficient scheme for overlaying the block-tests, called the extended tree growing technique, is proposed together with classical scheduling algorithms to search for power-constrained block-test scheduling (PTS) profiles in polynomial time. Classical algorithms like list-based scheduling and distribution-graph based scheduling are employed to tackle the PTS problem at a high level. This approach exploits test parallelism under power constraints: the block-test intervals of compatible subcircuits are overlaid so that as many of them as possible are tested concurrently, while the maximum accumulated power dissipation is balanced and does not exceed the given limit. The test scheduling discipline assumed here is partitioned testing with run to completion. A constant additive model is employed for power dissipation analysis and estimation throughout the algorithm.
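
    As a simplified sketch of the idea (a greedy session-based variant under assumed test data, not the thesis's extended tree growing technique), compatible block-tests can be packed into sessions so that accumulated power never exceeds the limit:

    ```python
    def power_constrained_schedule(tests, compatible, p_max):
        # Greedy list-based sketch of power-constrained block-test scheduling:
        # repeatedly open a test session and pack it with mutually compatible
        # tests whose summed power dissipation stays within p_max.
        # tests: {name: (test_length, power)}; compatible(a, b) -> bool.
        remaining = dict(tests)
        sessions = []
        while remaining:
            session, power = [], 0.0
            # Longest-test-first keeps sessions balanced (list-scheduling heuristic).
            for name in sorted(remaining, key=lambda n: -remaining[n][0]):
                _, p = remaining[name]
                if power + p <= p_max and all(compatible(name, s) for s in session):
                    session.append(name)
                    power += p
            for name in session:
                del remaining[name]
            sessions.append(session)
        return sessions

    # Hypothetical block-tests: (length in cycles, power); t1 and t2 share a resource.
    tests = {"t1": (100, 3.0), "t2": (80, 2.5), "t3": (60, 2.0), "t4": (40, 1.5)}
    conflicts = {frozenset(("t1", "t2"))}
    compatible = lambda a, b: frozenset((a, b)) not in conflicts
    print(power_constrained_schedule(tests, compatible, p_max=5.5))
    # [['t1', 't3'], ['t2', 't4']] under these assumptions
    ```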

    Automated Validation of State-Based Client-Centric Isolation with TLA+

    Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples, and shows how to model check runtime traces and algorithms with this formalization. The formalized TLA+ model enables semi-automatic model checking for different implementation alternatives for transactional operations and allows checking of conformance to isolation levels. We reproduce examples from the original paper and confirm the isolation guarantees of the combination of the well-known 2-phase locking and 2-phase commit algorithms. Using model checking, this formalization can also help find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations, and it provides an environment for experimenting with new designs.
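
    The state-based, client-centric intuition can be sketched outside TLA+ as well. The snippet below (a minimal Python illustration with an assumed transaction encoding, not the paper's TLA+ specification) checks serializability of a committed history by asking whether some total order exists in which every transaction reads exactly the state left by its predecessors:

    ```python
    from itertools import permutations

    def serializable(txns, initial_state):
        # Client-centric check in the spirit of Crooks et al.: the history is
        # serializable iff some total order of the committed transactions exists
        # in which each transaction's reads match the state produced by the
        # writes of all transactions ordered before it.
        for order in permutations(txns):
            state = dict(initial_state)
            ok = True
            for t in order:
                if any(state.get(k) != v for k, v in t["reads"].items()):
                    ok = False
                    break
                state.update(t["writes"])
            if ok:
                return True
        return False

    # Hypothetical lost-update history: both transactions read x = 0 and write
    # x = 1, which no serial order can explain.
    t1 = {"reads": {"x": 0}, "writes": {"x": 1}}
    t2 = {"reads": {"x": 0}, "writes": {"x": 1}}
    print(serializable([t1, t2], {"x": 0}))  # False
    ```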