478 research outputs found

    Dagstuhl Reports : Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061) : Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062) : Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071) : Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081) : Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091) : Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young

    Logic Programming: Context, Character and Development

    Logic programming has been attracting increasing interest in recent years. Its first realisation in the form of PROLOG demonstrated concretely that Kowalski's view of computation as controlled deduction could be implemented with tolerable efficiency, even on existing computer architectures. Since that time logic programming research has intensified. The majority of computing professionals have remained unaware of these developments, however, and for some the announcement that PROLOG had been selected as the core language for the Japanese 'Fifth Generation' project came as a total surprise. This thesis aims to describe the context, character and development of logic programming. It explains why a radical departure from existing software practices needs to be seriously discussed; it identifies the characteristic features of logic programming, and the practical realisation of these features in current logic programming systems; and it outlines the programming methodology which is proposed for logic programming. The problems and limitations of existing logic programming systems are described and some proposals for development are discussed.

    The thesis is in three parts. Part One traces the development of programming since the early days of computing. It shows how the problems of software complexity which were addressed by the 'structured programming' school have not been overcome: the software crisis remains severe and seems to require fundamental changes in software practice for its solution.

    Part Two describes the foundations of logic programming in the procedural interpretation of Horn clauses. Fundamental to logic programming is shown to be the separation of the logic of an algorithm from its control. At present, however, both the logic and the control aspects of logic programming present problems; the first in terms of the extent of the language which is used, and the second in terms of the control strategy which should be applied in order to produce solutions. These problems are described and various proposals, including some which have been incorporated into implemented systems, are discussed.

    Part Three discusses the software development methodology which is proposed for logic programming. Some of the experience of practical applications is related. Logic programming is considered in the aspects of its potential for parallel execution and in its relationship to functional programming, and some possible criticisms of the problem-solving potential of logic are described. The conclusion is that although logic programming inevitably has some problems which are yet to be solved, it seems to offer answers to several issues which are at the heart of the software crisis. The potential contribution of logic programming towards the development of software should be substantial.
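    The separation of logic from control that the abstract highlights can be made concrete with a small illustration. The following sketch (a toy example of my own, not material from the thesis) gives the procedural reading of propositional Horn clauses: to prove a goal, choose a clause whose head matches it and prove the body goals in turn. The clause order and the left-to-right goal selection are exactly the control component that Prolog fixes on top of the logic.

        # Toy procedural interpretation of propositional Horn clauses (illustrative only).
        # Each clause is (head, [body goals]); facts have an empty body.
        program = [
            ("light_on", ["switch_on", "power_ok"]),  # light_on :- switch_on, power_ok.
            ("switch_on", []),                        # switch_on.
            ("power_ok", []),                         # power_ok.
        ]

        def solve(goals):
            """True if every goal is derivable from `program` by SLD-style resolution."""
            if not goals:
                return True                    # empty goal list: success
            first, rest = goals[0], goals[1:]  # left-to-right goal selection (control)
            for head, body in program:         # clause order = search order (control)
                if head == first and solve(body + rest):
                    return True
            return False                       # no clause applies: fail and backtrack

        print(solve(["light_on"]))   # True: derived from the rule and the two facts
        print(solve(["door_open"]))  # False: no clause has this head

    A full Prolog adds variables and unification on top of this scheme, and its fixed depth-first control strategy is one source of the control problems the thesis discusses.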

    Parallel execution of Horn clause programs

    Imperial Users only

    Approximating Spectral Clustering via Sampling: a Review

    Spectral clustering refers to a family of unsupervised learning algorithms that compute a spectral embedding of the original data based on the eigenvectors of a similarity graph. This non-linear transformation of the data is both the key to these algorithms' success and their Achilles heel: forming a graph and computing its dominant eigenvectors can indeed be computationally prohibitive when dealing with more than a few tens of thousands of points. In this paper, we review the principal research efforts aiming to reduce this computational cost. We focus on methods that come with a theoretical control on the clustering performance and incorporate some form of sampling in their operation. Such methods abound in the machine learning, numerical linear algebra, and graph signal processing literature and, amongst others, include Nyström approximation, landmarks, coarsening, coresets, and compressive spectral clustering. We present the approximation guarantees available for each and discuss practical merits and limitations. Surprisingly, despite the breadth of the literature explored, we conclude that there is still a gap between theory and practice: the most scalable methods are only intuitively motivated or loosely controlled, whereas those that come with end-to-end guarantees rely on strong assumptions or offer only a limited gain in computation time.
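    For orientation, the pipeline that the review takes as its starting point can be sketched directly. The snippet below (my own baseline sketch, not code from the paper) builds a dense similarity graph, embeds the points with the leading eigenvectors of its normalized Laplacian, and runs plain k-means on the embedding; the two commented steps are precisely the bottlenecks that Nyström approximation, landmarks, coarsening, coresets and compressive spectral clustering try to cheapen.

        # Plain (unaccelerated) spectral clustering with NumPy, for reference only.
        import numpy as np

        def spectral_clustering(X, k, sigma=1.0, iters=50, seed=0):
            n = len(X)
            # 1. Similarity graph: dense RBF affinities (the O(n^2) time/memory step).
            sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            W = np.exp(-sq / (2 * sigma ** 2))
            np.fill_diagonal(W, 0.0)
            # 2. Normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
            d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
            L = np.eye(n) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
            # 3. Spectral embedding: eigenvectors of the k smallest eigenvalues
            #    (the dominant-eigenvector computation the surveyed methods approximate).
            _, vecs = np.linalg.eigh(L)
            U = vecs[:, :k]
            U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
            # 4. k-means (simple Lloyd iterations) on the embedded rows.
            rng = np.random.default_rng(seed)
            centers = U[rng.choice(n, size=k, replace=False)]
            for _ in range(iters):
                labels = ((U[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
                for j in range(k):
                    if (labels == j).any():
                        centers[j] = U[labels == j].mean(axis=0)
            return labels

        # Example: labels = spectral_clustering(np.random.rand(500, 2), k=3)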

    Acceleration Methods for MRI

    Acceleration methods are a critical area of research for MRI. Two of the most important acceleration techniques involve parallel imaging and compressed sensing. These advanced signal processing techniques have the potential to drastically reduce scan times and provide radiologists with new information for diagnosing disease. However, many of these new techniques require solving difficult optimization problems, which motivates the development of more advanced algorithms to solve them. In addition, acceleration methods have not reached maturity in some applications, which motivates the development of new models tailored to these applications. This dissertation makes advances in three different areas of acceleration. The first is the development of a new algorithm (called the B1-based Adaptive Restart Iterative Soft Thresholding Algorithm, or BARISTA) that solves a parallel MRI optimization problem with compressed sensing assumptions. BARISTA is shown to be 2-3 times faster and more robust to parameter selection than current state-of-the-art variable splitting methods. The second contribution is the extension of BARISTA ideas to non-Cartesian trajectories, which also leads to a 2-3 times acceleration over previous methods. The third contribution is the development of a new model for functional MRI that enables a factor of 3-4 acceleration in effective temporal resolution in functional MRI scans. Several variations of the new model are proposed, with an ROC curve analysis showing that a combined low-rank/sparsity model gives the best performance in identifying the resting-state motor network.
    PhD thesis, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120841/1/mmuckley_1.pd
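    At its core, the compressed-sensing reconstruction problem mentioned above is a sparsity-regularized least-squares problem, and BARISTA belongs to the family of iterative soft-thresholding methods. Purely as a point of reference, here is a generic ISTA sketch (my illustration of the textbook baseline; it is not the dissertation's algorithm, which adds B1-based preconditioning and adaptive restart):

        # Generic ISTA for  min_x  0.5*||A x - y||^2 + lam*||x||_1  (reference baseline).
        import numpy as np

        def soft_threshold(z, t):
            # Proximal operator of t*||.||_1, applied element-wise.
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def ista(A, y, lam, n_iters=200):
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
            for _ in range(n_iters):
                grad = A.T @ (A @ x - y)             # gradient of the smooth data-fit term
                x = soft_threshold(x - step * grad, step * lam)
            return x

        # Tiny sparse-recovery demo with a random sensing matrix.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((80, 200))
        x_true = np.zeros(200); x_true[:5] = 3.0
        y = A @ x_true + 0.01 * rng.standard_normal(80)
        x_hat = ista(A, y, lam=0.5)

    Variable splitting methods such as ADMM attack the same kind of problem by introducing auxiliary variables; that is the family of methods the dissertation compares against.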

    Topics in Programming Languages, a Philosophical Analysis through the case of Prolog

    Programming languages seldom find proper anchorage in the philosophy of logic, language and science. What is more, philosophy of language seems to be restricted to natural languages and linguistics, and even philosophy of logic is rarely framed in terms of programming languages. The logic programming paradigm and Prolog are thus the most adequate paradigm and programming language with which to work on this subject, combining natural language processing and linguistics, logic programming and construction methodology for both algorithms and procedures, on an overall declarative, philosophical footing. Not only this, but the dimension of the Fifth Generation Computer Systems project related to strong AI, in which Prolog took a major role, and its historical frame in the crucial dialectic between procedural and declarative paradigms and between structuralist and empiricist biases, serve, in exemplary form, to treat philosophy of logic, language and science in the contemporary age head-on as well. In recounting Prolog's philosophical, mechanical and algorithmic harbingers, the opportunity is open to various routes, and we exemplify some here. The mechanical-computational background explored by Pascal, Leibniz, Boole, Jacquard, Babbage and Konrad Zuse, up to the ACE (Alan Turing) and the EDVAC (von Neumann), offering the backbone of computer architecture, together with the work of Turing, Church, Gödel, Kleene, von Neumann, Shannon and others on computability, studied thoroughly in detail along parallel lines, permits us to interpret the evolving realm of programming languages. The line from the lambda calculus to the Algol family, the declarative and procedural split between the C language and Prolog, and the ensuing branching, explosion and further delimitation of programming languages are thereupon inspected so as to relate them to the syntax, semantics and philosophical élan of logic programming and Prolog.

    Security-Policy Analysis with eXtended Unix Tools

    During our fieldwork with real-world organizations---including those in Public Key Infrastructure (PKI), network configuration management, and the electrical power grid---we repeatedly noticed that security policies and related security artifacts are hard to manage. We observed three core limitations of security policy analysis that contribute to this difficulty. First, there is a gap between policy languages and the tools available to practitioners. Traditional Unix text-processing tools are useful, but practitioners cannot use these tools to operate on the high-level languages in which security policies are expressed and implemented. Second, practitioners cannot process policy at multiple levels of abstraction but they need this capability because many high-level languages encode hierarchical object models. Finally, practitioners need feedback to be able to measure how security policies and policy artifacts that implement those policies change over time. We designed and built our eXtended Unix tools (XUTools) to address these limitations of security policy analysis. First, our XUTools operate upon context-free languages so that they can operate upon the hierarchical object models of high-level policy languages. Second, our XUTools operate on parse trees so that practitioners can process and analyze texts at multiple levels of abstraction. Finally, our XUTools enable new computational experiments on multi-versioned structured texts and our tools allow practitioners to measure security policies and how they change over time. Just as programmers use high-level languages to program more efficiently, so can practitioners use these tools to analyze texts relative to a high-level language. Throughout the historical transmission of text, people have identified meaningful substrings of text and categorized them into groups such as sentences, pages, lines, function blocks, and books to name a few. Our research interprets these useful structures as different context-free languages by which we can analyze text. XUTools are already in demand by practitioners in a variety of domains and articles on our research have been featured in various news outlets that include ComputerWorld, CIO Magazine, Communications of the ACM, and Slashdot
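    To make the "multiple levels of abstraction" point tangible, here is a small toy sketch (my own illustration; it is not the XUTools code, which is grammar-driven and works on genuine parse trees of context-free languages). A search over a router-style configuration returns whole interface blocks, i.e. results at a structural level, rather than bare matching lines:

        # Toy structure-aware grep over an IOS-like config (illustrative only).
        import re

        CONFIG = "\n".join([
            "interface Ethernet0",
            " ip address 10.0.0.1 255.255.255.0",
            " ip access-group OUTSIDE in",
            "interface Ethernet1",
            " ip address 10.0.1.1 255.255.255.0",
        ])

        def parse_blocks(text):
            """Group the config into (block header, [body lines]) pairs,
            using indentation as the hierarchy cue."""
            blocks = []
            for line in text.splitlines():
                if not line.strip():
                    continue
                if not line.startswith(" "):   # unindented line opens a new block
                    blocks.append((line.strip(), []))
                elif blocks:                   # indented line belongs to the open block
                    blocks[-1][1].append(line.strip())
            return blocks

        def grep_blocks(pattern, blocks):
            """Return whole blocks whose body matches: hits are reported at the
            'interface' level of abstraction instead of the line level."""
            rx = re.compile(pattern)
            return [b for b in blocks if any(rx.search(l) for l in b[1])]

        for header, body in grep_blocks(r"access-group", parse_blocks(CONFIG)):
            print(header, "->", body)
        # Prints only the Ethernet0 block, i.e. the match in its structural context.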

    Bijective Parameterization with Free Boundaries

    When displaying 3D surfaces on computer screens, additional information is often mapped onto the surface to enhance the quality of the rendering. Surface parameterization generates a correspondence, or mapping, between the 3D surface and a 2D parameterization space. This mapping has many applications in computer graphics, but in most cases cannot be performed without introducing large distortions in the 2D parameterization. Along with the problem of distortion, the mapping from 2D space to 3D can be invalidated for many applications if the property of bijectivity is violated. While previous research guarantees bijectivity, those methods must constrain or modify the boundary of the 2D parameterization. This dissertation describes a fully automatic method for generating guaranteed bijective surface parameterizations from triangulated 3D surfaces. In particular, a new isometric distortion energy is introduced that prevents local folds of triangles in the parameterization, together with a barrier function that prevents intersection of the 2D boundaries. By using a computationally efficient isometric energy, the dissertation achieves fast optimization times comparable to previous methods. The boundary of the parameterization is free to change shape during the optimization to minimize distortion. A new optimization approach called singularity-aware optimization is introduced; in conjunction with an interior-point approach and barrier energy functions, it guarantees bijectivity. This optimization framework is then modified to allow an importance weighting, enabling customizable and more efficient texel usage.
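    For context, the kind of quantity an "isometric distortion energy" measures can be sketched concretely. The snippet below uses the widely used symmetric Dirichlet energy as a stand-in (an illustration, not the energy introduced in the dissertation): per-triangle distortion is read off the singular values of the 3D-to-2D Jacobian, an isometry attains the minimum value of 4, and the energy diverges as a triangle degenerates, which is how such energies discourage folds during continuous optimization.

        # Per-triangle isometric distortion via the symmetric Dirichlet energy (illustrative).
        import numpy as np

        def triangle_jacobian(p3d, p2d):
            """2x2 Jacobian of the map from a 3D triangle (expressed in a local 2D
            frame of its plane) to its triangle in the 2D parameterization."""
            e1, e2 = p3d[1] - p3d[0], p3d[2] - p3d[0]
            u = e1 / np.linalg.norm(e1)                 # orthonormal frame (u, v) in the plane
            n = np.cross(e1, e2); n = n / np.linalg.norm(n)
            v = np.cross(n, u)
            S = np.column_stack(([e1 @ u, e1 @ v], [e2 @ u, e2 @ v]))  # 3D edges, local coords
            T = np.column_stack((p2d[1] - p2d[0], p2d[2] - p2d[0]))    # corresponding uv edges
            return T @ np.linalg.inv(S)                 # J maps local surface coords to uv coords

        def symmetric_dirichlet(J):
            """sigma1^2 + sigma2^2 + 1/sigma1^2 + 1/sigma2^2, minimized (= 4) by an isometry."""
            s = np.linalg.svd(J, compute_uv=False)
            return float((s ** 2).sum() + (s ** -2).sum())

        # Example: a unit right triangle parameterized with a mild horizontal stretch.
        p3d = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
        p2d = np.array([[0., 0.], [1.2, 0.], [0., 1.]])
        print(symmetric_dirichlet(triangle_jacobian(p3d, p2d)))  # ~4.13, slightly above the optimum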

    Search and planning under incomplete information : a study using Bridge card play

    This thesis investigates problem-solving in domains featuring incomplete information and multiple agents with opposing goals. In particular, we describe Finesse --- a system that forms plans for the problem of declarer play in the game of Bridge. We begin by examining the problem of search. We formalise a best defence model of incomplete information games in which equilibrium point strategies can be identified, and identify two specific problems that can affect algorithms in such domains. In Bridge, we show that the best defence model corresponds to the typical model analysed in expert texts, and examine search algorithms which overcome the problems we have identified. Next, we look at how planning algorithms can be made to cope with the difficulties of such domains. This calls for the development of new techniques for representing uncertainty and actions with disjunctive effects, for coping with an opposition, and for reasoning about compound actions. We tackle these problems with a..
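    To give a flavour of the best defence model mentioned above, here is a small toy sketch (my own illustration, not Finesse): a fixed declarer line of play is scored by its minimum payoff over every distribution of the unseen cards consistent with what has been observed, which corresponds to assuming the defence plays with full knowledge of the actual layout.

        # Toy "best defence" evaluation: worst case over consistent hidden layouts.
        from itertools import combinations

        def consistent_worlds(hidden_cards, n_west):
            """Enumerate the ways the unseen cards can be split between the defenders."""
            hidden = set(hidden_cards)
            for west in combinations(sorted(hidden), n_west):
                yield set(west), hidden - set(west)

        def best_defence_value(line, hidden_cards, n_west, payoff):
            """Worst-case payoff of a fixed declarer line over all consistent worlds."""
            return min(payoff(line, west, east)
                       for west, east in consistent_worlds(hidden_cards, n_west))

        # Hypothetical toy payoff: a finesse gains a trick only if the queen sits with West.
        def payoff(line, west, east):
            return 1 if line == "finesse" and "QS" in west else 0

        print(best_defence_value("finesse", ["QS", "2S", "3S", "4S"], 2, payoff))  # 0
        # Under best defence the finesse is not a guaranteed trick: some consistent
        # layouts place the queen with East.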