
    Complexity, BioComplexity, the Connectionist Conjecture and Ontology of Complexity

    Get PDF
    This paper develops and integrates major ideas and concepts on complexity and biocomplexity: the connectionist conjecture, a universal ontology of complexity, the irreducible complexity of totality and inherent randomness, the perpetual evolution of information, the emergence of criticality, and the equivalence of symmetry and complexity. The paper introduces the Connectionist Conjecture, which states that the one and only representation of Totality is the connectionist one, i.e. in terms of nodes and edges. It also introduces the idea of a Universal Ontology of Complexity and develops concepts in that direction, along with ideas on the perpetual evolution of information and the irreducibility and computability of totality, all in the context of the Connectionist Conjecture. The paper indicates that control and communication are the prime functionals responsible for the symmetry and complexity of complex phenomena. It takes the stand that the phenomenon of life (including its evolution) is probably the nearest to what we can describe with the term “complexity”, and assumes that signaling and communication within the living world, and of the living world with its environment, create the connectionist structure of biocomplexity. With life and its evolution as the substrate, the paper develops ideas towards an ontology of complexity, introduces new complexity-theoretic interpretations of fundamental biomolecular parameters, and develops ideas on a methodology for determining the complexity of “true” complex phenomena.
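    Purely as an illustration of the node-and-edge (connectionist) representation the abstract refers to, the following is a minimal Python sketch; the class, the toy degree-entropy measure, and the example labels are assumptions for illustration, not anything from the paper.

```python
# Minimal sketch of a connectionist (node-and-edge) representation.
# The toy "complexity" proxy and all names are illustrative assumptions.
from collections import defaultdict
from math import log2

class ConnectionistModel:
    """A directed graph of nodes and edges; edges stand for signaling links."""
    def __init__(self):
        self.edges = defaultdict(set)          # node -> nodes it signals to

    def connect(self, src, dst):
        self.edges[src].add(dst)
        self.edges.setdefault(dst, set())      # make isolated targets known

    def degree_entropy(self):
        """Shannon entropy of the out-degree distribution: one crude,
        illustrative proxy for structural complexity."""
        degrees = [len(targets) for targets in self.edges.values()]
        total = sum(degrees) or 1
        probs = [d / total for d in degrees if d > 0]
        return -sum(p * log2(p) for p in probs)

# Toy usage: a cell signaling to its environment and to another cell.
m = ConnectionistModel()
m.connect("cell_A", "environment")
m.connect("cell_A", "cell_B")
m.connect("cell_B", "environment")
print(m.degree_entropy())
```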

    One-bit Distributed Sensing and Coding for Field Estimation in Sensor Networks

    Full text link
    This paper formulates and studies a general distributed field reconstruction problem using a dense network of noisy one-bit randomized scalar quantizers in the presence of additive observation noise of unknown distribution. A constructive quantization, coding, and field reconstruction scheme is developed, and an upper bound on the associated mean squared error (MSE) at any point and any snapshot is derived in terms of the local spatio-temporal smoothness properties of the underlying field. It is shown that when the noise, sensor placement pattern, and sensor schedule satisfy certain weak technical requirements, it is possible to drive the MSE to zero with increasing sensor density at points of field continuity, while ensuring that the per-sensor bitrate and the sensing-related network overhead rate simultaneously go to zero. The proposed scheme achieves the order-optimal MSE versus sensor density scaling behavior for the class of spatially constant spatio-temporal fields.
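    To make the idea of one-bit randomized (dithered) sensing concrete, here is a toy simulation in the same spirit: each sensor transmits a single sign bit of its dithered noisy observation, and the field is estimated by locally averaging the scaled bits. This is a generic sketch under stated assumptions (bounded noise, known dither range A, neighbourhood radius r), not the paper's constructive scheme.

```python
# Toy sketch of dithered one-bit sensing with local-average reconstruction.
# Parameters (A, r, densities) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def field(x):
    return 0.6 * np.sin(2 * np.pi * x)        # smooth 1-D field on [0, 1]

A = 1.0          # dither range; assume |field + noise| <= A everywhere
n = 5000         # number of sensors (density)
xs = rng.uniform(0.0, 1.0, n)                 # random sensor placement
noise = rng.uniform(-0.3, 0.3, n)             # bounded observation noise
dither = rng.uniform(-A, A, n)                # known-distribution dither

# Each sensor transmits exactly one bit.
bits = np.sign(field(xs) + noise + dither)

def estimate(x, r=0.05):
    """Estimate the field at x by averaging scaled bits of nearby sensors.
    With uniform dither on [-A, A], E[A * bit] equals the (zero-mean-noise)
    field value, so the local average improves as sensor density grows."""
    near = np.abs(xs - x) < r
    return A * bits[near].mean() if near.any() else 0.0

grid = np.linspace(0.05, 0.95, 10)
mse = np.mean([(estimate(x) - field(x)) ** 2 for x in grid])
print(f"empirical MSE on grid: {mse:.4f}")
```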

    Agoric computation: trust and cyber-physical systems

    Get PDF
    In the past two decades, advances in miniaturisation and economies of scale have led to the emergence of billions of connected components that have provided both a spur and a blueprint for the development of smart products acting in specialised environments, products that are uniquely identifiable, localisable, and capable of autonomy. Adopting the computational perspective of multi-agent systems (MAS) as a technological abstraction, married with the engineering perspective of cyber-physical systems (CPS), has provided fertile ground for designing, developing and deploying software applications in smart automated contexts such as manufacturing, power grids, avionics, healthcare and logistics, applications capable of being decentralised, intelligent, reconfigurable, modular, flexible, robust, adaptive and responsive. Current agent technologies are, however, ill suited for information-based environments, making it difficult to formalise and implement multi-agent systems based on inherently dynamical functional concepts such as trust and reliability, which present special challenges when scaling from small to large systems of agents. To overcome such challenges, it is useful to adopt a unified approach, which we term agoric computation, integrating logical, mathematical and programming concepts towards the development of agent-based solutions based on recursive, compositional principles, where smaller systems feed via directed information flows into larger hierarchical systems that define their global environment. Considering information as an integral part of the environment naturally defines a web of operations in which the components of a system are wired together and each set of inputs and outputs is allowed to carry some value. These operations are stateless abstractions and procedures that act on stateful cells that accumulate partial information, and such abstractions can be composed into higher-level ones using a publish-and-subscribe interaction model that keeps track of update messages between abstractions and values in the data. In this thesis we review the logical and mathematical basis of such abstractions and take steps towards the software implementation of agoric modelling as a framework for simulation and verification of the reliability of increasingly complex systems. We report on experimental results related to a few select applications, such as stigmergic interaction in mobile robotics, integrating raw data into agent perceptions, trust and trustworthiness in orchestrated open systems, computing the epistemic cost of trust when reasoning in networks of agents seeded with contradictory information, and trust models for distributed ledgers in the Internet of Things (IoT), and provide a roadmap for future developments of our research.
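    As an illustration of the cell-and-abstraction wiring described above (stateless operations composed over stateful cells via publish-and-subscribe), here is a minimal Python sketch; the class and function names, and the trust-aggregation example, are assumptions for illustration and not the thesis's actual framework or API.

```python
# Minimal publish-and-subscribe sketch: stateless functions lifted over
# stateful cells that accumulate partial information. Names are illustrative.
class Cell:
    """A stateful cell holding partial information; publishes updates."""
    def __init__(self, value=None):
        self.value = value
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def update(self, new_value):
        if new_value != self.value:
            self.value = new_value
            for cb in self._subscribers:       # push update messages downstream
                cb(new_value)

def lift(fn, *inputs):
    """Turn a stateless function into an abstraction wired over input cells;
    its output cell is recomputed whenever any input cell changes."""
    out = Cell()
    def recompute(_=None):
        if all(c.value is not None for c in inputs):
            out.update(fn(*(c.value for c in inputs)))
    for c in inputs:
        c.subscribe(recompute)
    recompute()
    return out

# Toy composition: trust in a composite system derived from two sub-reports.
sensor_trust = Cell(0.9)
peer_trust = Cell(0.6)
combined = lift(min, sensor_trust, peer_trust)   # pessimistic aggregation
combined.subscribe(lambda v: print("combined trust ->", v))
peer_trust.update(0.8)                           # prints: combined trust -> 0.8
```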

    The visual uncertainty paradigm for controlling screen-space information in visualization

    Get PDF
    The information visualization pipeline serves as a lossy communication channel for presenting data on a screen-space of limited resolution. The loss is not just a machine-side phenomenon caused by the translation of data; it also reflects the degree to which the human user can comprehend the visual information. The common entity in both aspects is the uncertainty associated with the visual representation. In the current linear model of the visualization pipeline, however, the visual representation is mostly treated as the end rather than the means of facilitating the analysis process. While the perceptual side of visualization is being studied, little attention is paid to the way the visualization appears on the display. We therefore believe there is a need to study the appearance of a visualization on a limited-resolution screen in order to understand its own properties and how they influence the way it represents the data. I argue that a visual uncertainty paradigm for controlling screen-space information enables user-centric optimization of a visualization in different application scenarios. Conceptualizing visual uncertainty lets us integrate the encoding and decoding aspects of visual representation into a holistic framework, facilitating the definition of metrics that serve as a bridge between the last stages of the visualization pipeline and the user's perceptual system. The goal of this dissertation is three-fold: i) to conceptualize a visual uncertainty taxonomy in the context of pixel-based, multi-dimensional visualization techniques that supports the systematic definition of screen-space metrics, ii) to apply the taxonomy to identify sources of useful visual uncertainty that help protect the privacy of sensitive data, and to identify the types of uncertainty that can be reduced through interaction techniques, and iii) to apply the metrics to the design of information-assisted models that help in the visualization of high-dimensional, temporal data.
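    To show what a screen-space metric of this kind might look like, here is a small Python sketch that measures overplotting, i.e. how many data items collapse onto the same pixel of a limited-resolution display, as a simple proxy for visual uncertainty. The metric, function name, and resolution are assumptions for illustration, not the dissertation's taxonomy.

```python
# Illustrative screen-space metric: fraction of data items hidden by
# overplotting on a low-resolution pixel grid. Names/values are assumptions.
import numpy as np

def overplot_uncertainty(points, width=64, height=64):
    """Map points in [0,1]^2 to pixels and report the fraction of items
    that land behind another item in the same pixel."""
    px = np.clip((points[:, 0] * width).astype(int), 0, width - 1)
    py = np.clip((points[:, 1] * height).astype(int), 0, height - 1)
    counts = np.zeros((height, width), dtype=int)
    np.add.at(counts, (py, px), 1)                 # items per pixel
    hidden = counts[counts > 1].sum() - np.count_nonzero(counts > 1)
    return hidden / len(points)    # 0 = every item visible, near 1 = heavy loss

rng = np.random.default_rng(1)
data = np.clip(rng.normal(0.5, 0.15, size=(20000, 2)), 0, 1)
print(f"hidden fraction at 64x64: {overplot_uncertainty(data):.2f}")
```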

    A Methodological Framework for Parametric Combat Analysis

    Get PDF
    This work presents a taxonomic structure for understanding the tension among stability criteria for game-theoretic outcomes such as Nash optimality, Pareto optimality, and balance optimality, and then applies these game-theoretic concepts to advance strategic thought on spacepower. It adapts and applies combat modeling theory to the evaluation of cislunar space conflict, and provides evidence that the reliability characteristics of small spacecraft are similar to those of large spacecraft. Building on these foundational concepts, the dissertation develops and presents a parametric methodological framework capable of analyzing the efficacy of heterogeneous force compositions in the context of space warfare. The framework is shown to be capable of predicting a stochastic distribution of numerical outcomes associated with various modes of conflict and parameter values. Furthermore, this work demonstrates a general alignment between the game-theoretic results of the framework and Media Interaction Warfare Theory in terms of evaluating force efficacy, providing strong evidence for the validity of the methodological framework presented in this dissertation.
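    The following is only a generic, Lanchester-style stochastic attrition sketch in Python, included to illustrate how a parametric model can map force-composition parameters to a distribution of engagement outcomes; it is not the dissertation's framework, and all parameter names and values are invented for the example.

```python
# Generic stochastic attrition sketch (not the dissertation's framework).
# Every parameter value here is an illustrative assumption.
import random

def engagement(blue, red, p_blue_kill=0.08, p_red_kill=0.06, max_rounds=200):
    """Simulate attrition round by round; each surviving unit independently
    scores a kill with its side's per-round probability."""
    for _ in range(max_rounds):
        if blue <= 0 or red <= 0:
            break
        red_losses = sum(random.random() < p_blue_kill for _ in range(blue))
        blue_losses = sum(random.random() < p_red_kill for _ in range(red))
        blue, red = max(blue - blue_losses, 0), max(red - red_losses, 0)
    return blue, red

def outcome_distribution(trials=2000, **params):
    """Estimate the probability of each outcome class over many trials."""
    results = {"blue_wins": 0, "red_wins": 0, "other": 0}
    for _ in range(trials):
        b, r = engagement(**params)
        if b > 0 and r == 0:
            results["blue_wins"] += 1
        elif r > 0 and b == 0:
            results["red_wins"] += 1
        else:
            results["other"] += 1          # mutual attrition or no decision
    return {k: v / trials for k, v in results.items()}

print(outcome_distribution(blue=12, red=15))
```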

    Data models, representation and adequacy-for-purpose

    Get PDF
    We critically engage two traditional views of scientific data and outline a novel philosophical view that we call the pragmatic-representational (PR) view of data. On the PR view, data are representations that are the product of a process of inquiry, and they should be evaluated in terms of their adequacy or fitness for particular purposes. Some important implications of the PR view for data assessment, related to misrepresentation, context-sensitivity, and complementary use, are highlighted. The PR view provides insight into the common but little-discussed practices of iteratively reusing and repurposing data, which result in many datasets having a phylogeny (an origin and complex evolutionary history) that is relevant to their evaluation and future use. We relate these insights to the open-data and data-rescue movements, and highlight several future avenues of research that build on the PR view of data.

    Composable M&S web services for net-centric applications

    Get PDF
    Service-oriented architectures promise easier integration of functionality, in the form of web services, into operational systems than interface-driven, system-oriented approaches allow. Although the Extensible Markup Language (XML) enables a new level of interoperability among heterogeneous systems, XML alone does not solve all of the interoperability problems users contend with when integrating services into operational systems. To manage the basic challenges of service interoperation, we developed the Levels of Conceptual Interoperability Model (LCIM) to enable a layered approach and gradual solution improvements. Furthermore, we developed methods of model-based data engineering (MBDE) for semantically consistent service integration as a first step. These methods have been applied in the U.S. in collaboration with industry, resulting in proofs of concept. The results are directly applicable in a net-centric and net-enabled environment.

    Cyberspace As/And Space

    Get PDF
    The appropriate role of place- and space-based metaphors for the Internet and its constituent nodes and networks is hotly contested. This essay seeks to provoke critical reflection on the implications of place- and space-based theories of cyberspace for the ongoing production of networked space more generally. It argues, first, that adherents of the cyberspace metaphor have been insufficiently sensitive to the ways in which theories of cyberspace as space themselves function as acts of social construction. Specifically, the leading theories have all deployed the metaphoric construct of cyberspace to situate cyberspace, explicitly or implicitly, as separate space. This denies all of the ways in which cyberspace operates as both an extension and an evolution of everyday spatial practice. Next, it argues that critics of the cyberspace metaphor have confused two senses of space and two senses of metaphor. The cyberspace metaphor does not refer to abstract, Cartesian space, but instead expresses an experienced spatiality mediated by embodied human cognition. Cyberspace in this sense is relative, mutable, and constituted via the interactions among practice, conceptualization, and representation. The insights drawn from this exercise suggest a very different way of understanding both the spatiality of cyberspace and its architectural and regulatory challenges. In particular, they suggest closer attention to three ongoing shifts: the emergence of a new sense of social space, which the author calls networked space; the interpenetration of embodied, formerly bounded space by networked space; and the ways in which these developments alter, instantiate, and disrupt geographies of power.