
    Complexity, BioComplexity, the Connectionist Conjecture and Ontology of Complexity

    This paper develops and integrates major ideas and concepts on complexity and biocomplexity: the Connectionist Conjecture, a universal ontology of complexity, the irreducible complexity of totality and inherent randomness, the perpetual evolution of information, the emergence of criticality, and the equivalence of symmetry and complexity. The paper introduces the Connectionist Conjecture, which states that the one and only representation of Totality is the connectionist one, i.e. in terms of nodes and edges. It also introduces the idea of a Universal Ontology of Complexity and develops concepts in that direction, together with ideas on the perpetual evolution of information and the irreducibility and computability of totality, all in the context of the Connectionist Conjecture. The paper indicates that control and communication are the prime functionals responsible for the symmetry and complexity of complex phenomena. It takes the stand that the phenomenon of life (including its evolution) is probably the nearest to what we can describe with the term "complexity", and assumes that signaling and communication within the living world, and of the living world with the environment, create the connectionist structure of biocomplexity. With life and its evolution as the substrate, the paper develops ideas towards the ontology of complexity, introduces new complexity-theoretic interpretations of fundamental biomolecular parameters, and develops ideas on a methodology for determining the complexity of "true" complex phenomena.
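    To make the connectionist representation concrete, the sketch below encodes a small (invented) signalling network as nodes and edges and computes the Shannon entropy of its degree distribution as one crude complexity proxy; this is an illustration only, not the methodology proposed in the paper.

```python
# A minimal sketch of a "connectionist" representation: a system as nodes
# and edges, with degree-distribution entropy as one crude complexity proxy.
# Illustration only; not the complexity measure proposed in the paper.
import math
from collections import Counter

# Hypothetical signalling network: entities as nodes, interactions as edges.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]

# Count node degrees.
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Shannon entropy of the degree distribution (in bits).
counts = Counter(degree.values())
total = sum(counts.values())
entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
print(f"degree-distribution entropy: {entropy:.3f} bits")
```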

    Distinguishing cause from effect using observational data: methods and benchmarks

    The discovery of causal relationships from purely observational data is a fundamental problem in science. The most elementary form of such a causal discovery problem is to decide whether X causes Y or, alternatively, Y causes X, given joint observations of the two variables X and Y. An example is to decide whether altitude causes temperature, or vice versa, given only joint measurements of both variables. Even under the simplifying assumptions of no confounding, no feedback loops, and no selection bias, such bivariate causal discovery problems are challenging. Nevertheless, several approaches for addressing these problems have been proposed in recent years. We review two families of such methods: Additive Noise Methods (ANM) and Information Geometric Causal Inference (IGCI). We present the benchmark CauseEffectPairs, which consists of data for 100 different cause-effect pairs selected from 37 datasets from various domains (e.g., meteorology, biology, medicine, engineering, and economy), and motivate our decisions regarding the "ground truth" causal directions of all pairs. We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and, in addition, on artificially simulated data. Our empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions. One of the best performing methods overall is the additive-noise method originally proposed by Hoyer et al. (2009), which obtains an accuracy of 63 ± 10% and an AUC of 0.74 ± 0.05 on the real-world benchmark. As the main theoretical contribution of this work, we prove the consistency of that method. (Comment: 101 pages; second revision submitted to the Journal of Machine Learning Research.)
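    The ANM decision rule lends itself to a compact sketch: regress each variable on the other nonparametrically and prefer the direction whose residuals look more independent of the input. The sketch below uses a k-nearest-neighbour regressor and a biased HSIC estimator with Gaussian kernels as the independence measure; the method evaluated in the paper differs in details such as the regression model, kernel bandwidth selection, and significance testing.

```python
# Minimal sketch of the Additive Noise Method (ANM): assume Y = f(X) + N
# with N independent of X. Fit both directions and prefer the one whose
# residuals are "more independent" of the input, as measured by HSIC.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def hsic(a, b, sigma=1.0):
    """Biased HSIC estimator with Gaussian kernels (lower = more independent)."""
    n = len(a)
    def gram(v):
        d = (v[:, None] - v[None, :]) ** 2
        return np.exp(-d / (2 * sigma**2))
    K, L = gram(a), gram(b)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def anm_score(x, y):
    """Dependence between x and the residuals of a nonparametric fit y ~ x."""
    fit = KNeighborsRegressor(n_neighbors=10).fit(x.reshape(-1, 1), y)
    r = y - fit.predict(x.reshape(-1, 1))
    z = lambda v: (v - v.mean()) / v.std()   # standardize before HSIC
    return hsic(z(x), z(r))

# Toy data where X causes Y.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 300)
y = x**3 + 0.1 * rng.normal(size=300)
print("X->Y" if anm_score(x, y) < anm_score(y, x) else "Y->X")
```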

    Designing and evaluating the usability of a machine learning API for rapid prototyping music technology

    To better support the needs of creative software developers and music technologists, and to empower them as machine learning users and innovators, the usability of and developer experience with machine learning tools must be considered and better understood. We review background research on the design and evaluation of application programming interfaces (APIs), with a focus on the domain of machine learning for music technology software development. We present the design rationale for the RAPID-MIX API, an easy-to-use API for rapid prototyping with interactive machine learning, and a usability evaluation study with software developers of music technology. A cognitive dimensions (CDs) questionnaire was designed and delivered to a group of 12 participants who used the RAPID-MIX API in their software projects, including people who developed systems for personal use and professionals developing software products for music and creative technology companies. The results from the questionnaire indicate that participants found the RAPID-MIX API easy to learn and use, fun, and good for rapid prototyping with interactive machine learning. Based on these findings, we present an analysis and characterisation of the RAPID-MIX API based on the cognitive dimensions framework, and discuss its design trade-offs and usability issues. We use these insights and our design experience to provide design recommendations for ML APIs for rapid prototyping of music technology. We conclude with a summary of the main insights, a discussion of the merits and challenges of applying the CDs framework to the evaluation of machine learning APIs, and directions for future work which our research deems valuable.
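    The actual RAPID-MIX API is not reproduced here; the sketch below is a purely hypothetical illustration of the train-by-example workflow that such interactive ML APIs afford, with every class and method name (Regression, add_example, train, run) invented for the purpose.

```python
# Hypothetical sketch of an interactive-ML, train-by-example workflow of the
# kind the RAPID-MIX API supports; all names are invented for illustration
# and are NOT the actual RAPID-MIX API.
from sklearn.neighbors import KNeighborsRegressor  # stand-in model

class Regression:
    """Map controller inputs to synth parameters from a few user examples."""
    def __init__(self):
        self.inputs, self.outputs = [], []
        self.model = None

    def add_example(self, controller_input, synth_params):
        self.inputs.append(controller_input)
        self.outputs.append(synth_params)

    def train(self):
        k = min(3, len(self.inputs))
        self.model = KNeighborsRegressor(n_neighbors=k).fit(self.inputs, self.outputs)

    def run(self, controller_input):
        return self.model.predict([controller_input])[0]

# A musician records a few gesture -> sound mappings, trains, then plays.
r = Regression()
r.add_example([0.0, 0.1], [220.0, 0.2])   # [x, y] position -> [pitch Hz, gain]
r.add_example([0.9, 0.8], [880.0, 0.9])
r.train()
print(r.run([0.5, 0.5]))                  # interpolated pitch/gain
```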

    A Survey of Recent Developments in Testability, Safety and Security of RISC-V Processors

    With the continued success of the open RISC-V architecture, practical deployment of RISC-V processors necessitates an in-depth consideration of their testability, safety and security aspects. This survey provides an overview of recent developments in this quickly evolving field. We start by discussing the application of state-of-the-art functional and system-level test solutions to RISC-V processors. Then, we discuss the use of RISC-V processors for safety-related applications; to this end, we outline the essential techniques necessary to obtain safety in both the functional and the timing domain, and review recent processor designs with safety features. Finally, we survey the different aspects of security with respect to RISC-V implementations and discuss the relationship between cryptographic protocols and primitives on the one hand and the RISC-V processor architecture and hardware implementation on the other. We also comment on the role of a RISC-V processor for system security and its resilience against side-channel attacks.

    State-of-the-Art Report on Systems Analysis Methods for Resolution of Conflicts in Water Resources Management

    Water is an important factor in conflicts among stakeholders at the local, regional, and even international level. Water conflicts have taken many forms, but they almost always arise from the fact that the freshwater resources of the world are not partitioned to match political borders, nor are they evenly distributed in space and time. Two or more countries share the watersheds of 261 major rivers, and nearly half of the land area of the world lies in international river basins. Water has been a military and political goal, it has been used as a weapon of war, and water systems have been targets during wartime. This report investigates the role of the systems approach in the resolution of conflicts over water. A review of the systems approach provides some basic knowledge of tools and techniques as they apply to water management and conflict resolution. The report provides a classification and description of water conflicts by addressing issues of scale, integrated water management and the role of stakeholders. Four large-scale examples are selected to illustrate the application of the systems approach to water conflicts: (a) hydropower development in Canada; (b) multipurpose use of the Danube River in Europe; (c) an international water conflict between the USA and Canada; and (d) the Aral Sea in Asia. The water conflict resolution process involves various sources of uncertainty. One section of the report provides some examples of systems tools that can be used to address objective and subjective uncertainties, with special emphasis on the utility of fuzzy set theory. Systems analysis is known to be driven by the development of computer technology. The last section of the report provides one view of the future and of the systems tools that will be used for water resources management. The role of virtual databases and of computer and communication networks is investigated in the context of water conflicts and their resolution.
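    As a small illustration of how fuzzy set theory can represent the subjective uncertainties mentioned above, the sketch below grades a subjective notion such as "adequate water supply" with a triangular membership function; the function shape and the numbers are invented for illustration and are not taken from the report.

```python
# Illustrative sketch of fuzzy membership for a subjective notion such as
# "adequate water supply"; the triangular membership function and all
# numbers are invented for illustration, not taken from the report.
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, peak 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which a river flow (in m^3/s) counts as "adequate" for a stakeholder.
adequate = lambda flow: triangular(flow, a=50.0, b=120.0, c=200.0)

for flow in (40.0, 80.0, 120.0, 180.0):
    print(f"{flow:6.1f} m^3/s -> adequacy {adequate(flow):.2f}")
```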

    Heuristics in Entrepreneurial Opportunity Evaluation: A Comparative Case Study of the Middle East and Germany

    Heuristics are mental shortcuts applied, consciously, subconsciously or both, to save time and effort at the expense of risking the accuracy of the outcome. One might therefore argue that they are just an accuracy-effort trade-off. Nonetheless, we ought to recognize the distinction between circumstances of risk, where all choices, outcomes, and probabilities are generally known, and circumstances of uncertainty, where at least some of them are not. Traditional models like Subjective Expected Utility (SEU) work best for decisions under risk but not under uncertainty, which characterizes most situations people need to tackle. Uncertainty calls for simple heuristics that are sufficient rather than perfect. In this dissertation, the notion of heuristics is researched through a comprehensive historical review that traces the heuristics-related ideas of significant scholars. An explicit distinction between deliberate and automatic heuristics is drawn, with chronological categories before and after the introduction of SEU theory, providing a new perspective and opening a discussion for future research. Additionally, qualitative and quantitative studies were carried out that produced an unsophisticated set of heuristics used by entrepreneurs in the Middle East and Germany. Perhaps entrepreneurs, and people in general, do not always know or acknowledge their use of heuristics; still, they use them extensively and may exchange heuristics with others. That may lead us to think that in a world where uncertainty prevails, Homo heuristicus might become a real threat to Homo economicus.

    Durham Zoo: powering a search-&-innovation engine with collective intelligence

    Purpose – Durham Zoo (hereinafter – DZ) is a project to design and operate a concept search engine for science and technology. In DZ, a concept includes a solution to a problem in a particular context. Design – Concept searching is rendered complex by the fuzzy nature of a concept, the many possible implementations of the same concept, and the many more ways that the many implementations can be expressed in natural language. An additional complexity is the diversity of languages and formats in which the concepts can be disclosed. Humans understand language, inference, implication and abstraction and, hence, concepts much better than computers, which in turn are much better at storing and processing vast amounts of data. We are 7 billion on the planet and we have the Internet as the backbone for Collective Intelligence. So, our concept search engine uses humans to store concepts via a shorthand that can be stored, processed and searched by computers: humans IN and computers OUT. The shorthand is classification: metadata in a structure that can define the content of a disclosure. The classification is designed to be powerful in terms of defining and searching concepts, whilst suited to a crowdsourcing effort. It is simple and intuitive to use. Most importantly, it is adapted to restrict ambiguity, which is the poison of classification, without imposing a restrictive centralised management. In the classification scheme, each entity is shown in a graphical representation together with related entities. The entities are arranged on a sliding scale of similarity. This sliding scale is effectively fuzzy classification. Findings – The authors have been developing a first classification scheme for the technology of traffic cones, in preparation for a trial of a working system. The process has enabled the authors to further explore the practicalities of concept classification. The CmapTools knowledge-modelling kit has been used to develop the graphical representations. Practical implications – Concept searching is seen as having two categories: prior art searching, which is searching for what already exists, and solution searching, a search for a novel solution to an existing problem. Prior art searching is not as efficient a process, as all-encompassing in scope, or as accurate in result as it could and probably should be. The prior art includes library collections, journals, conference proceedings and everything else that has been written, drawn, spoken or made public in any way. Much technical information is only published in patents. There is a good reason to improve prior art searching: research, industry, and indeed humanity face the spectre of patent thickets: an impenetrable legal space that effectively hinders innovation rather than promoting it. Improved prior-art searching would help with the gardening of these thickets and result in fewer, higher-quality patents. Poor-quality patents can reward patenting activity per se, which is not what the system was designed for. Improved prior-art searching could also result in less duplication in research, and/or lead to improved collaboration. As regards solution searching, the authors believe that much better use could be made of the existing literature to find solutions from non-obvious areas of science and technology. So-called cross-industry innovation could be joined by biomimetics, the drawing of inspiration for solutions from nature. 
    Crowdsourcing the concept shorthand could produce a system 'by the people, for the people', to quote Abraham Lincoln out of context. A Citizen Science and Technology initiative that developed a working search engine could generate revenue for academia. Any monies accruing could be invested in research for the common good, such as the development of climate-change mitigation technologies or the discovery of new antibiotics.
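    The sliding scale of similarity described above can be made concrete as a small data sketch in which each entity carries graded links to related entities; the entities and similarity values below are invented for illustration and are not part of the Durham Zoo scheme.

```python
# Illustrative sketch of the "sliding scale of similarity" classification:
# each entity stores related entities with a graded (fuzzy) similarity in
# [0, 1]. Entities and numbers are invented, not from the Durham Zoo scheme.
concepts = {
    "traffic cone": {
        "delineator post": 0.8,   # very similar road-marking function
        "safety barrier": 0.5,    # related, but fixed rather than portable
        "warning sign": 0.3,      # related mainly by purpose
    },
}

def related(entity, min_similarity):
    """Entities related to `entity` at or above a similarity threshold."""
    return [(other, s) for other, s in concepts.get(entity, {}).items()
            if s >= min_similarity]

print(related("traffic cone", 0.5))   # widen/narrow the search by threshold
```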

    Investigations in robotic-assisted design: Strategies for symbiotic agencies in material-directed generative design processes

    The research described in this article utilises a phase-changing material, three-dimensional scanning technologies and a six-axis industrial robotic arm as vehicles to enable a novel framework in which robotic technology is utilised as an 'amplifier' of the design process, realising geometries that derive from both constructive and architectural visions through iterative feedback loops between them. The robot in this scenario is not a fabrication tool but the enabler of an environment in which material, robotic and human agencies interact. This article describes the exploratory research towards the development of a dialogic design process, sets the framework for its implementation, carries out an evaluation based on designer use and concludes with a set of observations. One of the main findings is that a deeper collaboration that acknowledges the potential of these tools, in a learning-by-design method, can lead to new choreographies for architectural design and fabrication.
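    The iterative feedback loop between scanning, evaluation and robotic action can be sketched as a simple control loop; the toy one-dimensional "material" below, and every function in it, are hypothetical placeholders rather than the authors' implementation.

```python
# Hypothetical sketch of the scan -> evaluate -> act feedback loop described
# above, reduced to a toy 1-D "material" for illustration; none of this is
# the authors' implementation.
import random

material_height = 10.0          # stand-in for the scanned material state
target_height = 4.0             # stand-in for the architectural intent
tolerance = 0.2

def scan():
    """Toy 3D scan: observe the material with some sensor noise."""
    return material_height + random.gauss(0.0, 0.05)

def robot_remove(depth):
    """Toy robotic action: the material responds imperfectly (its 'agency')."""
    global material_height
    material_height -= depth * random.uniform(0.7, 1.0)

random.seed(1)
for step in range(20):
    observed = scan()                       # scan the current state
    error = observed - target_height        # compare with design intent
    if abs(error) < tolerance:              # intent met: stop the loop
        break
    robot_remove(0.5 * error)               # act, then re-scan next iteration
print(f"stopped after {step + 1} steps at height {material_height:.2f}")
```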