
    Improved bounds for testing Dyck languages

    In this paper we consider the problem of deciding membership in Dyck languages, a fundamental family of context-free languages comprising well-balanced strings of parentheses. In this problem we are given a string of length $n$ over an alphabet of parentheses of $m$ types and must decide whether it is well-balanced. We consider this problem in the property testing setting, where one would like to make the decision while querying as few characters of the input as possible. Property testing of strings for Dyck language membership for $m=1$, with a number of queries independent of the input size $n$, was provided in [Alon, Krivelevich, Newman and Szegedy, SICOMP 2001]. Property testing of strings for Dyck language membership for $m \ge 2$ was first investigated in [Parnas, Ron and Rubinfeld, RSA 2003]. They showed an upper bound and a lower bound for distinguishing strings belonging to the language from strings that are far (in terms of the Hamming distance) from the language, which are respectively (up to polylogarithmic factors) the $2/3$ power and the $1/11$ power of the input size $n$. Here we improve the power of $n$ in both bounds. For the upper bound, we introduce a recursion technique that, together with a refinement of the methods in the original work, provides a test for any power of $n$ larger than $2/5$. For the lower bound, we introduce a new problem called Truestring Equivalence, which is easily reducible to the $2$-type Dyck language property testing problem. For this new problem, we show a lower bound of $n$ to the power of $1/5$.
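
    As a point of reference, here is a minimal sketch (not taken from the paper) of exact membership checking for the m-type Dyck language. It reads all n characters in a single stack-based pass, whereas the property testers discussed above query only a small fraction of the input; the function name and the pairs encoding are illustrative choices.

    def is_dyck(word, pairs):
        # pairs maps each closing parenthesis to its opening counterpart,
        # e.g. {')': '(', ']': '['} for the 2-type Dyck language
        openers = set(pairs.values())
        stack = []
        for ch in word:
            if ch in openers:
                stack.append(ch)
            elif ch in pairs:
                if not stack or stack.pop() != pairs[ch]:
                    return False
        return not stack

    # "([])[]" is well-balanced over 2 types of parentheses; "([)]" is not.
    assert is_dyck("([])[]", {')': '(', ']': '['})
    assert not is_dyck("([)]", {')': '(', ']': '['})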

    Space-Time Trade-offs for Stack-Based Algorithms

    In memory-constrained algorithms we have read-only access to the input, and the number of additional variables is limited. In this paper we introduce the compressed stack technique, a method that allows algorithms whose space bottleneck is a stack to be transformed into memory-constrained algorithms. Given an algorithm that runs in $O(n)$ time using $\Theta(n)$ variables, we can modify it so that it runs in $O(n^2/s)$ time using a workspace of $O(s)$ variables (for any $s \in o(\log n)$), or in $O(n\log n/\log p)$ time using $O(p\log n/\log p)$ variables (for any $2 \leq p \leq n$). We also show how the technique can be applied to solve various geometric problems, namely computing the convex hull of a simple polygon, a triangulation of a monotone polygon, the shortest path between two points inside a monotone polygon, the 1-dimensional pyramid approximation of a 1-dimensional vector, and the visibility profile of a point inside a simple polygon. Our approach exceeds or matches the best-known results for these problems in constant-workspace models (when they exist), and gives the first trade-off between the size of the workspace and the running time. To the best of our knowledge, this is the first general framework for obtaining memory-constrained algorithms.
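
    The following is an illustrative sketch (not the paper's code, and a simpler problem than the ones it treats): a typical one-pass, stack-based computation of the kind the compressed stack technique targets, namely the upper convex hull of points sorted by x-coordinate. Its only Theta(n)-sized structure is the stack; the compressed stack replaces such a stack by O(s) block summaries and recomputes popped portions from the read-only input when they are needed again.

    def cross(o, a, b):
        # > 0 for a left turn o -> a -> b, < 0 for a right turn, 0 if collinear
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def upper_hull(points):
        # points must be sorted by x-coordinate
        hull = []                      # the space bottleneck: a plain stack
        for p in points:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
                hull.pop()             # pop while the turn is not a right turn
            hull.append(p)
        return hull

    print(upper_hull([(0, 0), (1, 2), (2, 1), (3, 3), (4, 0)]))
    # -> [(0, 0), (1, 2), (3, 3), (4, 0)]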

    Approximate Computing Survey, Part II: Application-Specific & Architectural Approximation Techniques and Applications

    The challenging deployment of compute-intensive applications from domains such as Artificial Intelligence (AI) and Digital Signal Processing (DSP) forces the computing systems community to explore new design approaches. Approximate Computing appears as an emerging solution, allowing the quality of results to be tuned in the design of a system in order to improve energy efficiency and/or performance. This radical paradigm shift has attracted interest from both academia and industry, resulting in significant research on approximation techniques and methodologies at different design layers (from system down to integrated circuits). Motivated by the wide appeal of Approximate Computing over the last 10 years, we conduct a two-part survey to cover key aspects (e.g., terminology and applications) and review the state-of-the-art approximation techniques from all layers of the traditional computing stack. In Part II of our survey, we classify and present the technical details of application-specific and architectural approximation techniques, which both target the design of resource-efficient processors/accelerators and systems. Moreover, we present a detailed analysis of the application spectrum of Approximate Computing and discuss open challenges and future directions. Comment: Under review at ACM Computing Surveys.
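
    As a concrete illustration (not drawn from the survey itself), loop perforation is a classic software-level approximation technique in this spirit: skipping a fraction of loop iterations trades result quality for time and energy, with the sampling step acting as the quality knob.

    def mean_brightness(pixels, step=1):
        # step = 1 gives the exact mean; step > 1 perforates the loop,
        # visiting only every step-th element
        sampled = pixels[::step]
        return sum(sampled) / len(sampled)

    pixels = list(range(1_000_000))
    exact = mean_brightness(pixels)              # full cost
    approx = mean_brightness(pixels, step=8)     # roughly 8x fewer iterations
    print(abs(exact - approx) / exact)           # small relative error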

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common-mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using the VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
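
    As an illustration of the kind of closed-form estimate such reliability models produce (a textbook triple-modular-redundancy formula, not the AFTA report's own model), assume each replica fails independently under an exponential failure law and the voter is perfect:

    import math

    def replica_reliability(failure_rate, t):
        # exponential failure law: R(t) = exp(-lambda * t)
        return math.exp(-failure_rate * t)

    def tmr_reliability(failure_rate, t):
        # a TMR channel survives if at least 2 of its 3 replicas survive
        r = replica_reliability(failure_rate, t)
        return 3 * r**2 - 2 * r**3

    # Example: lambda = 1e-4 failures/hour over a 10-hour mission
    print(replica_reliability(1e-4, 10), tmr_reliability(1e-4, 10))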

    Parallelization of dynamic programming recurrences in computational biology

    The rapid growth of biosequence databases over the last decade has led to a performance bottleneck in the applications analyzing them. In particular, over the last five years the DNA sequencing capacity of next-generation sequencers has been doubling every six months as costs have plummeted. The data produced by these sequencers is overwhelming traditional compute systems. We believe that in the future compute performance, not sequencing, will become the bottleneck in advancing genome science. In this work, we investigate novel computing platforms to accelerate dynamic programming algorithms, which are popular in bioinformatics workloads. We study algorithm-specific hardware architectures that exploit fine-grained parallelism in dynamic programming kernels using field-programmable gate arrays (FPGAs). We advocate a high-level synthesis approach, using the recurrence equation abstraction to represent dynamic programming and polyhedral analysis to exploit parallelism. We suggest a novel technique within the polyhedral model to optimize for throughput by pipelining independent computations on an array. This design technique improves on the state of the art, which builds latency-optimal arrays. We also suggest a method to dynamically switch between a family of designs using FPGA reconfiguration to achieve a significant performance boost. We have used polyhedral methods to parallelize the Nussinov RNA folding algorithm to build a family of accelerators that can trade resources for parallelism and are between 15x and 130x faster than a modern dual-core CPU implementation. A Zuker RNA folding accelerator we built on a single workstation with four Xilinx Virtex 4 FPGAs outperforms 198 3-GHz Intel Core 2 Duo processors. Furthermore, our design running on a single FPGA is an order of magnitude faster than competing implementations on similar-generation FPGAs and graphics processors. Our work is a step toward the goal of automated synthesis of hardware accelerators for dynamic programming algorithms.
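
    For reference, a minimal sketch of the sequential Nussinov recurrence whose dependence structure the polyhedral analysis above parallelizes (the accelerators implement pipelined array versions; this code is only an illustration): N[i][j] is the maximum number of non-crossing base pairs in seq[i..j], and the maximum over split points k is the inner reduction that the hardware arrays pipeline.

    def can_pair(a, b):
        return {a, b} in ({'A', 'U'}, {'C', 'G'}, {'G', 'U'})

    def nussinov(seq):
        n = len(seq)
        N = [[0] * n for _ in range(n)]
        for length in range(1, n):                      # fill diagonal by diagonal
            for i in range(n - length):
                j = i + length
                best = max(N[i + 1][j], N[i][j - 1])    # i or j left unpaired
                if can_pair(seq[i], seq[j]):            # i pairs with j
                    inner = N[i + 1][j - 1] if i + 1 <= j - 1 else 0
                    best = max(best, inner + 1)
                for k in range(i + 1, j):               # bifurcation at k
                    best = max(best, N[i][k] + N[k + 1][j])
                N[i][j] = best
        return N[0][n - 1]

    print(nussinov("GGGAAAUCC"))   # maximum number of non-crossing base pairs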

    Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed for this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, the modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of Steps 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried out reliably with such existing capabilities, and (3) the currently unavailable modeling capabilities that should receive high priority for near-term research and development. It should be emphasized that the study is concerned only with the class of 'fast-time' analytical and simulation models. 'Real-time' models, which typically involve humans in the loop, comprise another extensive class that is not addressed in this report. However, the relationship between some of the fast-time models reviewed and a few well-known real-time models is identified in several parts of this report, and the potential benefits from the combined use of these two classes of models, a very important subject, are discussed in chapters 4 and 7.