12,099 research outputs found

    An Atypical Survey of Typical-Case Heuristic Algorithms

    Full text link
    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice. Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News.
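    To make concrete the kind of heuristic the survey has in mind, here is a minimal sketch (my illustration, not taken from the article): the classic matching-based heuristic for Vertex Cover, an NP-hard problem on which a simple greedy rule always answers within a factor of two of optimal.

```python
# Illustrative sketch (not from the surveyed article): the matching-based
# 2-approximation heuristic for Vertex Cover. It never returns a cover more
# than twice the size of an optimal one.

def heuristic_vertex_cover(edges):
    """Return a vertex cover at most twice the size of an optimal cover."""
    cover = set()
    for u, v in edges:
        # Skip edges already covered by an earlier choice.
        if u in cover or v in cover:
            continue
        # Take both endpoints: any optimal cover must contain at least one
        # of them, which is what yields the factor-2 guarantee.
        cover.update((u, v))
    return cover

if __name__ == "__main__":
    graph = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
    print(heuristic_vertex_cover(graph))  # e.g. {1, 2, 3, 4}
```

    Note the contrast with the article's theme: this heuristic trades guaranteed optimality for speed, whereas the surveyed results bound how often any heuristic can be exactly right on NP-hard problems.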

    What is an Analogue for the Semantic Web and Why is Having One Important?

    No full text
    This paper postulates that for the Semantic Web to grow and gain input from fields that will surely benefit it, it needs an analogue that helps people understand not only what it is, but also what opportunities these new protocols enable. The model proposed in the paper takes the way that Web interaction has been framed as a baseline to inform a similar analogue for the Semantic Web. While the Web has been represented as a Page + Links, the paper argues that the Semantic Web can be conceptualized as a Notebook + Memex. The argument considers how this model also presents new challenges for fundamental human interaction with computing, and how hypertext models have much to contribute to this new understanding of distributed information systems.

    Data Loam: The Future of Knowledge Systems

    Get PDF
    This was a two-year collaborative research project funded by the Austrian Science Fund [FWF-PEEK]. Data Loam was designed as a multi-faceted, arts-based approach to one of the more intractable and urgent problems facing our contemporary digital environment: the massive proliferation of data, and with it, a particularly nuanced set of complexities confronting our national libraries, universities, and research labs, as well as non-academic cultural institutions and industry-oriented environments. The urgency of the problem centred on three areas: archiving (what to archive and how), accessibility (how to ensure that knowledge systems would remain intrinsically ‘open’ in the face of ever-increasing data) and experimentation (enabling creativity, intelligence, curiosity, diversity and risk to remain fundamental to our way of life). In so doing, Data Loam rejected the entrenched paradigm of indexicality as the only method capable of articulating the ‘how’, ‘what’, ‘where’ and ‘when’ of our contemporary world as the ‘internet of things’. This also meant rejecting the entrenched Cold War ‘binaric systematizing’ that tended to promote apocalyptic narratives of technology pitting ‘man’ against ‘machine’ and, in so doing, taking as given the end of freedom, rule of law, governance and indeed humanity itself. Instead, Data Loam took as its starting point precisely the unruly materiality of information, with its massive proliferation, messy logics, oddly cathected derivatives of circulation and exchange, navigational gaming, multi-dimensional visualities, crypto-economies, blockchain equivalences, and complexly sutured arenas of cultural difference. Rather than trying to compartmentalise, frame, cut down, or force information into silos or pockets, Data Loam foregrounded this exponential explosion of Big Data. It did so, first and foremost, by putting arts-based research and practice at its core, emphasising the logics of sense, planes of immanence, feedback loops, multi-dimensionality, entanglement, and diffraction. Data Loam reached its main goal: the articulation of how data becomes self-organised and can produce a kind of open self-governance that relies on the mass proliferation of information. On a practical level, this included developing an algorithm that could enable a new lexicographical search and tag organising system (see the illustrative sketch below). Perhaps most significantly, Data Loam answered the question of ‘how’ correlations ‘matter’; that is to say, how correlations generate matter, and in so doing enable heterogeneous and local dimensionalities that ‘in-form’ aesthetic-ethical-political ecosystems. The project was linked with teaching/studio work with the MA students at the University of Applied Arts and the PhD students at the RCA (Entanglement Research Group). It was connected with RIAT (Vienna) and was rolled out in various exhibitions in Berlin, Vienna, Singapore, New Zealand and London. Partner institutions included: the British Library, the Austrian National Library, the German Federal Archive, the Humboldt University (Institute for Library and Information Science), and the Tisch School of the Arts (NYU).
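    The abstract does not describe the project's actual algorithm, so what follows is a purely illustrative sketch of what a minimal lexicographical search and tag organising system could look like: an inverted index from tags to documents, queried by tag intersection and returned in lexicographic order.

```python
# Purely illustrative sketch (the project's real algorithm is not described
# in the abstract): an inverted index supporting lexicographical search over
# tagged documents.

from collections import defaultdict

index = defaultdict(set)  # tag -> ids of documents carrying that tag

def tag(doc_id, *tags):
    """Register a document under each (case-insensitive) tag."""
    for t in tags:
        index[t.lower()].add(doc_id)

def search(*tags):
    """Ids of documents carrying all the given tags, lexicographically sorted."""
    sets = [index[t.lower()] for t in tags]
    return sorted(set.intersection(*sets)) if sets else []

tag("doc-a", "archive", "open-access")
tag("doc-b", "archive", "blockchain")
print(search("archive"))                # ['doc-a', 'doc-b']
print(search("archive", "blockchain"))  # ['doc-b']
```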

    Smart Contracts Contracts

    Get PDF
    This paper explores the connection between software contracts and smart contracts. Despite the assonance, these two terms denote quite different concepts: software contracts are logical properties of software components, while smart contracts are programs executed on blockchains. What is the relation between them? We answer this question by discussing how to integrate software contracts into the design of programming languages for smart contracts.
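    As a hedged illustration of the paper's question (mine, not the paper's formalism): a software contract is a logical property such as a pre- or postcondition, and such properties can be attached to a smart-contract-style operation. The sketch below models a token transfer in plain Python, with assertions standing in for language-level contract support.

```python
# Illustrative sketch (not the paper's formalism): pre- and postconditions
# -- a software contract -- attached to a smart-contract-style token
# transfer, modelled in plain Python rather than a blockchain language.

class Token:
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, sender, receiver, amount):
        # Preconditions: positive amount, sufficient sender balance.
        assert amount > 0, "amount must be positive"
        assert self.balances.get(sender, 0) >= amount, "insufficient balance"
        total_before = sum(self.balances.values())

        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

        # Postcondition: a transfer conserves the total token supply.
        assert sum(self.balances.values()) == total_before

t = Token({"alice": 10, "bob": 0})
t.transfer("alice", "bob", 4)
print(t.balances)  # {'alice': 6, 'bob': 4}
```

    A language designed along the paper's lines would check such properties natively, statically or at run time, instead of relying on ad hoc assertions.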

    What Do Paraconsistent, Undecidable, Random, Computable and Incomplete Mean? A Review of Gödel's Way: Exploits into an Undecidable World by Gregory Chaitin, Francisco A. Doria, Newton C. A. da Costa, 160p (2012) (review revised 2019)

    Get PDF
    In ‘Gödel’s Way’ three eminent scientists discuss issues such as undecidability, incompleteness, randomness, computability and paraconsistency. I approach these issues from the Wittgensteinian viewpoint that there are two basic kinds of issue, which have completely different solutions. There are the scientific or empirical issues, which are facts about the world that need to be investigated observationally, and the philosophical issues of how language can be used intelligibly (which include certain questions in mathematics and logic), which need to be decided by looking at how we actually use words in particular contexts. When we get clear about which language game we are playing, these topics are seen to be ordinary scientific and mathematical questions like any others. Wittgenstein’s insights have seldom been equaled and never surpassed, and they are as pertinent today as they were 80 years ago when he dictated the Blue and Brown Books. In spite of its failings (really a series of notes rather than a finished book), this is a unique source of the work of these three famous scholars, who have been working at the bleeding edges of physics, math and philosophy for over half a century. Da Costa and Doria are cited by Wolpert (see below or my articles on Wolpert and my review of Yanofsky’s ‘The Outer Limits of Reason’) since they wrote on universal computation, and among his many accomplishments Da Costa is a pioneer in paraconsistency. Those wishing a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’ 2nd ed (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’ 3rd ed (2019), ‘The Logical Structure of Human Behavior’ (2019), and ‘Suicidal Utopian Delusions in the 21st Century’ 4th ed (2019).

    Quantum Computing: Resolving Myths, From Physics to Metaphysics

    Get PDF
    As the field of quantum computing becomes popularized, myths and misconceptions will inevitably come along with it. From the sci-fi genre to the casual usage of the term quantum, idealism begins to take over our projections of the technological future. But what are quantum computers? And what does quantum mean? How are they any different from the computers we use on an everyday basis? Will there be quantum computing smartphones? Are quantum computers just a faster version of conventional computing, or a wholly new way of computing altogether? The objective of this paper is to resolve common myths and misconceptions about quantum computers, as well as the outlook and potential of this technology. In the attempt to construct a sound narrative spanning a wide range of disciplines, we draw concepts from classical computing, quantum physics, computational complexity, and philosophy to decipher the mystery within this unique field.
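    One way to see why quantum computers are not merely faster classical machines (a sketch of my own, not taken from the paper): a qubit is described by amplitudes rather than bits. The snippet below simulates a Hadamard gate applied to |0>, producing a superposition whose measurement statistics follow the Born rule.

```python
# Illustrative sketch (not from the paper): one simulated qubit. A Hadamard
# gate turns |0> into an equal superposition; repeated measurement then
# yields 0 or 1 with probability 1/2 each (the Born rule).

import random

state = [1.0, 0.0]  # amplitudes over the basis {|0>, |1>}, starting in |0>

# Hadamard gate: H|0> = (|0> + |1>) / sqrt(2).
s = 2 ** -0.5
state = [s * state[0] + s * state[1],
         s * state[0] - s * state[1]]

# Measurement probabilities are the squared amplitudes.
counts = {0: 0, 1: 0}
for _ in range(10_000):
    outcome = 0 if random.random() < state[0] ** 2 else 1
    counts[outcome] += 1
print(counts)  # roughly {0: 5000, 1: 5000}
```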