
    Inferring an Indeterminate String from a Prefix Graph

    An indeterminate string (or, more simply, just a string) $x = x[1..n]$ on an alphabet $\Sigma$ is a sequence of nonempty subsets of $\Sigma$. We say that $x[i_1]$ and $x[i_2]$ match (written $x[i_1] \approx x[i_2]$) if and only if $x[i_1] \cap x[i_2] \ne \emptyset$. A feasible array is an array $y = y[1..n]$ of integers such that $y[1] = n$ and, for every $i \in 2..n$, $y[i] \in 0..n-i+1$. A prefix table of a string $x$ is an array $\pi = \pi[1..n]$ of integers such that, for every $i \in 1..n$, $\pi[i] = j$ if and only if $x[i..i+j-1]$ is the longest substring at position $i$ of $x$ that matches a prefix of $x$. It is known from \cite{CRSW13} that every feasible array is the prefix table of some indeterminate string. A prefix graph $\mathcal{P} = \mathcal{P}_y$ is a labelled simple graph whose structure is determined by a feasible array $y$. In this paper we show, given a feasible array $y$, how to use $\mathcal{P}_y$ to construct a lexicographically least indeterminate string on a minimum alphabet whose prefix table $\pi = y$.
    Comment: 13 pages, 1 figure
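
    As a concrete reading of these definitions, here is a brute-force sketch that computes the prefix table of an indeterminate string (0-based indexing; an illustration of the definitions only, not the paper's graph-based construction):

```python
def matches(a: set, b: set) -> bool:
    # Two positions match when their letter sets intersect.
    return bool(a & b)

def prefix_table(x: list) -> list:
    """Brute-force prefix table of an indeterminate string x,
    given as a list of nonempty sets (0-based positions)."""
    n = len(x)
    pi = [0] * n
    pi[0] = n  # by definition, the whole string matches its own prefix
    for i in range(1, n):
        j = 0
        while i + j < n and matches(x[i + j], x[j]):
            j += 1
        pi[i] = j
    return pi

# Example on the alphabet {a, b}:
x = [{"a"}, {"a", "b"}, {"b"}]
print(prefix_table(x))  # [3, 2, 0]
```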

    A High Performance and Low Power Hardware Architecture for H.264 CAVLC Algorithm

    In this paper, we present a high performance and low power hardware architecture for real-time implementation of the Context Adaptive Variable Length Coding (CAVLC) algorithm used in the H.264 / MPEG4 Part 10 video coding standard. This hardware is designed to be used as part of a complete low power H.264 video coding system for portable applications. The proposed architecture is implemented in Verilog HDL. The Verilog RTL code is verified to work at 76 MHz in a Xilinx Virtex II FPGA, and it is verified to work at 233 MHz in a 0.18 µm ASIC implementation. The FPGA and ASIC implementations can code 22 and 67 VGA frames (640x480) per second, respectively.

    End-Site Routing Support for IPv6 Multihoming

    Multihoming is currently widely used to provide fault tolerance and traffic engineering capabilities. It is expected that, as telecommunication costs decrease, its adoption will become more and more prevalent. Current multihoming support is not designed to scale up to the expected number of multihomed sites, so alternative solutions are required, especially for IPv6. In order to preserve interdomain routing scalability, the new multihoming solution has to be compatible with Provider Aggregatable addressing. However, such an addressing scheme imposes the configuration of multiple prefixes in multihomed sites, which in turn causes several operational difficulties within those sites, some of which may result in communication failures even when all the ISPs are working properly. In this paper we propose the adoption of Source Address Dependent routing within the multihomed site to overcome the identified difficulties.
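
    As a minimal sketch of Source Address Dependent routing (the prefixes and router names below are hypothetical, for illustration only): the site picks its exit according to the provider prefix that covers the packet's source address, so packets leave through the ISP that delegated that prefix and are not discarded by ingress filtering.

```python
import ipaddress

# Hypothetical PA prefixes delegated by two ISPs to the same site.
SOURCE_ROUTES = {
    ipaddress.ip_network("2001:db8:a::/48"): "exit-to-ISP-A",
    ipaddress.ip_network("2001:db8:b::/48"): "exit-to-ISP-B",
}

def next_hop_for(packet_src: str) -> str:
    """Choose the site exit based on the packet's source address."""
    src = ipaddress.ip_address(packet_src)
    for prefix, exit_router in SOURCE_ROUTES.items():
        if src in prefix:
            return exit_router
    raise ValueError("source address not covered by any delegated prefix")

print(next_hop_for("2001:db8:a::10"))  # exit-to-ISP-A
```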

    Content-Centric Networking at Internet Scale through The Integration of Name Resolution and Routing

    We introduce CCN-RAMP (Routing to Anchors Matching Prefixes), a new approach to content-centric networking. CCN-RAMP offers all the advantages of Named Data Networking (NDN) and Content-Centric Networking (CCNx), but eliminates the need to either use Pending Interest Tables (PIT) or look up large Forwarding Information Bases (FIB) listing name prefixes in order to forward Interests. CCN-RAMP uses small forwarding tables listing anonymous sources of Interests and the locations of name prefixes. Such tables are immune to Interest-flooding attacks and are smaller than the FIBs used to list IP address ranges in the Internet. We show that no forwarding loops can occur with CCN-RAMP, and that Interests flow over the same routes that NDN and CCNx would maintain using large FIBs. The results of simulation experiments comparing NDN with CCN-RAMP based on ndnSIM show that CCN-RAMP requires forwarding state that is orders of magnitude smaller than what NDN requires, and attains even better performance.
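
    The following sketch conveys the flavor of forwarding by anchors rather than by name-prefix FIBs; the tables and names are assumptions for illustration, not the paper's exact design:

```python
# A router keeps (i) a small table mapping resolved name prefixes to
# their anchor routers and (ii) routes toward router identifiers,
# instead of a FIB that lists every name prefix in the network.
anchor_of = {"/cnn/video": "R7"}          # filled in by name resolution
next_hop_to_router = {"R7": "iface-B"}    # routes to routers, not names

def forward_interest(name: str):
    # Longest resolved prefix of the requested name, if any.
    prefix = max((p for p in anchor_of if name.startswith(p)),
                 key=len, default=None)
    if prefix is None:
        return "resolve name first"       # no anchor known yet
    return next_hop_to_router[anchor_of[prefix]]

print(forward_interest("/cnn/video/news.mp4"))  # iface-B
```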

    Pipelining Saturated Accumulation

    Aggressive pipelining and spatial parallelism allow integrated circuits (e.g., custom VLSI, ASICs, and FPGAs) to achieve high throughput on many Digital Signal Processing applications. However, cyclic data dependencies in the computation can limit parallelism and reduce the efficiency and speed of an implementation. Saturated accumulation is an important example where such a cycle limits the throughput of signal processing applications. We show how to reformulate saturated addition as an associative operation so that we can use a parallel-prefix calculation to perform saturated accumulation at any data rate supported by the device. This allows us, for example, to design a 16-bit saturated accumulator which can operate at 280 MHz on a Xilinx Spartan-3 (XC3S-5000-4) FPGA, the maximum frequency supported by the component's DCM.
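
    The reformulation can be sketched in software terms as follows (the representation is an assumption made for illustration; the paper targets a hardware datapath). One saturated-add step is the clamp function f(v) = min(max(v + a, LO), HI); such functions are closed under composition, and composition is associative, so the running sums can be produced by a parallel-prefix scan instead of a serial loop:

```python
LO, HI = 0, 10  # saturation bounds for this example

def step(a):
    """Represent 'saturated add of a' as a (shift, lo, hi) triple."""
    return (a, LO, HI)

def compose(f, g):
    """The function 'apply f, then g' is again a clamp function."""
    a, l1, h1 = f
    b, l2, h2 = g
    return (a + b, max(l1 + b, l2), min(max(h1 + b, l2), h2))

def apply_fn(f, v):
    s, l, h = f
    return min(max(v + s, l), h)

data = [7, 8, -20, 3]

# Serial reference: clamp after every addition.
v, serial = 0, []
for a in data:
    v = apply_fn(step(a), v)
    serial.append(v)

# Associative version: prefix compositions of the step functions.
# (Shown serially here; a scan would evaluate them in parallel.)
acc, scanned = None, []
for a in data:
    acc = step(a) if acc is None else compose(acc, step(a))
    scanned.append(apply_fn(acc, 0))

assert scanned == serial
print(serial)  # [7, 10, 0, 3]
```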

    A Light-Weight Forwarding Plane for Content-Centric Networks

    We present CCN-DART, a forwarding approach for content-centric networking (CCN) that is more efficient than Named Data Networking (NDN): it substitutes Pending Interest Tables (PIT) with Data Answer Routing Tables (DART) and uses a novel approach to eliminate forwarding loops. The forwarding state required at each router using CCN-DART consists of segments of the routes between consumers and content providers that traverse a content router, rather than the Interests that the router forwards towards content providers. Accordingly, the size of a DART is proportional to the number of routes used by Interests traversing a router, rather than the number of Interests traversing a router. We show that CCN-DART avoids forwarding loops by comparing distances to name prefixes reported by neighbors, even when routing loops exist. Results of simulation experiments comparing CCN-DART with NDN using the ndnSIM simulation tool show that CCN-DART incurs 10 to 20 times less storage overhead.
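
    The loop-avoidance rule reduces to a simple local check; the sketch below illustrates the distance comparison with hypothetical names (it is not the full protocol):

```python
def next_hops(prefix, my_distance, neighbor_distances):
    """Forward an Interest only toward neighbors that reported a
    strictly smaller distance to the name prefix: every hop then
    decreases the distance, so no forwarding loop can form even if
    the routing tables themselves contain loops."""
    return [n for n, d in neighbor_distances.items() if d < my_distance]

# Router with distance 3 to /cnn: only strictly closer neighbors qualify.
print(next_hops("/cnn", 3, {"A": 2, "B": 3, "C": 4}))  # ['A']
```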

    Compressed Text Indexes: From Theory to Practice!

    A compressed full-text self-index represents a text in a compressed form and still answers queries efficiently. This technology represents a breakthrough over the text indexing techniques of the previous decade, whose indexes required several times the size of the text. Although it is relatively new, this technology has matured to the point where theoretical research is giving way to practical developments. Nonetheless, digging into the research results requires significant programming skills, a deep engineering effort, and a strong algorithmic background. To date, only isolated implementations and focused comparisons of compressed indexes have been reported, and they lacked a common API, which prevented their re-use or deployment within other applications. The goal of this paper is to fill this gap. First, we present the existing implementations of compressed indexes from a practitioner's point of view. Second, we introduce the Pizza&Chili site, which offers tuned implementations and a standardized API for the most successful compressed full-text self-indexes, together with effective testbeds and scripts for their automatic validation and test. Third, we show the results of our extensive experiments on these codes, with the aim of demonstrating the practical relevance of this novel and exciting technology.
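
    To give a feel for what a self-index does, the toy sketch below implements backward search over a Burrows-Wheeler transform, the mechanism at the heart of FM-index-style compressed self-indexes (a real index replaces the naive rank counting with compressed rank structures; this is an illustration, not the Pizza&Chili API):

```python
def bwt(text: str) -> str:
    """Burrows-Wheeler transform via sorted rotations (naive)."""
    text += "\x00"  # unique terminator, smallest character
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def count_occurrences(b: str, pattern: str) -> int:
    """Count pattern occurrences using only the BWT (backward search)."""
    C = {c: sum(1 for x in b if x < c) for c in set(b)}
    rank = lambda c, i: b[:i].count(c)  # occurrences of c before i
    lo, hi = 0, len(b)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo, hi = C[c] + rank(c, lo), C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

b = bwt("mississippi")
print(count_occurrences(b, "ssi"))  # 2
print(count_occurrences(b, "is"))   # 2
```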

    Enhancing Reuse of Constraint Solutions to Improve Symbolic Execution

    Constraint solution reuse is an effective approach to save the time of constraint solving in symbolic execution. Most of the existing reuse approaches are based on syntactic or semantic equivalence of constraints; e.g., the Green framework is able to reuse constraints which have different representations but are semantically equivalent, by canonizing constraints into syntactically equivalent normal forms. However, syntactic/semantic equivalence is not a necessary condition for reuse: some constraints are not syntactically or semantically equivalent, but their solutions still have potential for reuse. Existing approaches are unable to recognize and reuse such constraints. In this paper, we present GreenTrie, an extension to the Green framework, which supports constraint reuse based on the logical implication relations among constraints. GreenTrie provides a component, called L-Trie, which stores constraints and solutions into tries, indexed by an implication partial order graph of constraints. L-Trie is able to carry out logical reduction and logical subset and superset querying for given constraints, to check for reuse of previously solved constraints. We report the results of an experimental assessment of GreenTrie against the original Green framework, which shows that our extension achieves better reuse of constraint solving results and saves significant symbolic execution time.
    Comment: this paper has been submitted to conference ISSTA 2015
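
    The subset/superset reuse principle can be sketched over sets of atomic constraints (the representation below is a simplification for illustration, not GreenTrie's L-Trie): an UNSAT store entry that is a subset of the query proves the query UNSAT, while a SAT store entry that is a superset of the query supplies a model for it.

```python
# Solved conjunctions of atomic constraints, with their outcomes.
store = {
    frozenset({"x>5", "x<3"}): ("UNSAT", None),
    frozenset({"x>5", "y>0", "x<9"}): ("SAT", {"x": 6, "y": 1}),
}

def try_reuse(query: frozenset):
    for solved, (status, model) in store.items():
        # The query implies 'solved' whenever solved's conjuncts are a subset.
        if status == "UNSAT" and solved <= query:
            return ("UNSAT", None)   # query contains an UNSAT core
        # A stronger solved set's model also satisfies the weaker query.
        if status == "SAT" and query <= solved:
            return ("SAT", model)
    return None  # no reuse possible: fall back to the solver

print(try_reuse(frozenset({"x>5", "x<3", "y>0"})))  # ('UNSAT', None)
print(try_reuse(frozenset({"x>5", "y>0"})))         # ('SAT', {'x': 6, 'y': 1})
```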

    Towards Loop-Free Forwarding of Anonymous Internet Datagrams that Enforce Provenance

    The way in which addressing and forwarding are implemented in the Internet constitutes one of its biggest privacy and security challenges. The fact that source addresses in Internet datagrams cannot be trusted makes the IP Internet inherently vulnerable to DoS and DDoS attacks. The Internet forwarding plane is open to attacks on the privacy of datagram sources, because source addresses in Internet datagrams have global scope. The fact that Internet datagrams are forwarded based solely on the destination addresses stated in datagram headers and the next hops stored in the forwarding information bases (FIB) of relaying routers allows Internet datagrams to traverse loops, which wastes resources and leaves the Internet open to further attacks. We introduce PEAR (Provenance Enforcement through Addressing and Routing), a new approach for addressing and forwarding of Internet datagrams that enables anonymous forwarding of Internet datagrams, eliminates many of the existing DDoS attacks on the IP Internet, and prevents Internet datagrams from looping, even in the presence of routing-table loops.
    Comment: Proceedings of IEEE Globecom 2016, 4-8 December 2016, Washington, D.C., USA

    Reordering Rows for Better Compression: Beyond the Lexicographic Order

    Sorting database tables before compressing them improves the compression rate. Can we do better than the lexicographic order? For minimizing the number of runs in a run-length encoding compression scheme, the best approaches to row-ordering are derived from traveling salesman heuristics, although there is a significant trade-off between running time and compression. A new heuristic, Multiple Lists, which is a variant of Nearest Neighbor that trades off compression for a major running-time speedup, is a good option for very large tables. However, for some compression schemes, it is more important to generate long runs rather than few runs. For this case, another novel heuristic, Vortex, is promising. We find that we can improve run-length encoding by up to a factor of 3, whereas we can improve prefix coding by up to 80%; these gains are on top of the gains due to lexicographically sorting the table. In a few cases, we prove that the new row reordering is within 10% of optimal at minimizing the runs of identical values within columns.
    Comment: to appear in ACM TODS
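
    As a small illustration of run minimization by row reordering (a generic Nearest Neighbor heuristic, not the paper's Multiple Lists or Vortex), the sketch below compares column-run counts under lexicographic order and under a greedy order that always appends the remaining row with the fewest column mismatches:

```python
from itertools import groupby

def runs(table):
    """Total number of runs of identical values, summed over columns."""
    return sum(len(list(groupby(col))) for col in zip(*table))

def nearest_neighbor(rows):
    """Greedy TSP-style ordering under Hamming distance between rows."""
    remaining = list(rows)
    order = [remaining.pop(0)]
    while remaining:
        last = order[-1]
        nxt = min(remaining,
                  key=lambda r: sum(a != b for a, b in zip(last, r)))
        remaining.remove(nxt)
        order.append(nxt)
    return order

rows = [(1, "a"), (2, "a"), (1, "b"), (2, "b"), (1, "c")]
print(runs(sorted(rows)))            # 7 runs: lexicographic order
print(runs(nearest_neighbor(rows)))  # 6 runs: fewer runs, better RLE
```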