14 research outputs found
On-Line Processing In Large-Scale Transaction Systems
In this thesis, we provide techniques to adapt current database technology to account for the following trends in database management system (DBMS) usage: (1) DBMSs are increasingly used in applications, such as computerized stock trading, that have very high transaction rates; (2) database sizes are growing rapidly, and future databases are expected to be several orders of magnitude larger than the largest databases in operation today; (3) next-generation DBMSs are expected to gravitate more and more toward 24 (hour) × 7 (day) operation. To handle high transaction rates, future DBMSs must use highly concurrent algorithms for managing frequently used auxiliary data structures such as indices. To better understand the performance of concurrency control algorithms for index access, we first compare the performance of B-tree concurrency control algorithms using a simulation model of a centralized DBMS. In our performance study, we look a..
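As context for the B-tree concurrency control algorithms compared in this study, the sketch below shows one classic technique, lock coupling ("latch crabbing"), where a descent never holds more than two node latches at once. This is an illustrative assumption about the class of algorithms studied, not code from the thesis; node layout and names are made up.

```python
import threading
from bisect import bisect_right

class Node:
    def __init__(self, keys, children=None):
        self.lock = threading.Lock()   # per-node latch
        self.keys = keys               # sorted keys / separators
        self.children = children       # None for leaf nodes

def search(root, key):
    """Descend from the root, holding at most the current node's latch
    and (briefly) its parent's: latch the child, then release the parent."""
    node = root
    node.lock.acquire()
    while node.children is not None:
        child = node.children[bisect_right(node.keys, key)]
        child.lock.acquire()           # grab the child latch first...
        node.lock.release()            # ...then release the parent (crabbing)
        node = child
    found = key in node.keys
    node.lock.release()
    return found

# Usage: a tiny two-level tree, safe to search from several threads.
leaves = [Node([1, 3]), Node([7, 9])]
root = Node([5], children=leaves)
print(search(root, 7))   # True
```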
Faster IP Lookups using Controlled Prefix Expansion
Internet (IP) address lookup is a major bottleneck in high-performance routers. IP address lookup is challenging because it requires a longest matching prefix lookup, and the problem is compounded by increasing routing table sizes, increased traffic, higher-speed links, and the migration to 128-bit IPv6 addresses. We describe how IP lookups can be made faster using a new technique called controlled prefix expansion. Controlled prefix expansion, together with optimization techniques based on dynamic programming, can be used to improve the speed of the best known IP lookup algorithms by at least a factor of two. When applied to trie search, our techniques provide a range of algorithms whose performance can be tuned. For example, with 1 MB of L2 cache, trie search of the MaeEast database with 38,000 prefixes can be done with a worst-case search time of 181 nsec, a worst-case insert/delete time of 2.5 msec, and an average insert/delete time of 4 usec. Our actual experiments used 512 KB L2 cache to obtain..
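A minimal sketch of the core idea of controlled prefix expansion, under assumptions not taken from the paper itself: every prefix is rewritten to the next allowed ("stride") length so that a multibit trie only has to deal with a few prefix lengths, and collisions are resolved in favor of the longer original prefix to preserve longest-match semantics. The 4-bit example table and hop names are hypothetical.

```python
def expand(prefixes, allowed_lengths):
    """Expand each (bitstring, next_hop) prefix to the next allowed length.
    On collisions, the entry derived from the longer original prefix wins."""
    allowed = sorted(allowed_lengths)
    expanded = {}   # expanded bitstring -> (original_length, next_hop)
    for bits, hop in prefixes:
        target = next(l for l in allowed if l >= len(bits))
        pad = target - len(bits)
        for i in range(2 ** pad):
            key = bits + (format(i, f'0{pad}b') if pad else '')
            old = expanded.get(key)
            if old is None or old[0] < len(bits):   # keep the more specific source
                expanded[key] = (len(bits), hop)
    return {k: v[1] for k, v in expanded.items()}

# Hypothetical 4-bit table: expand lengths {1, 2, 3} to the allowed set {2, 4}.
table = [('1', 'A'), ('10', 'B'), ('101', 'C')]
print(expand(table, [2, 4]))
# {'10': 'B', '11': 'A', '1010': 'C', '1011': 'C'}
```

Note how '1' expands to '10' and '11', but the expansion of the more specific prefix '10' (hop B) overrides the copy inherited from '1', so lookups still return the longest-match answer.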
Algorithms for advanced packet classification with ternary CAMs
Ternary content-addressable memories (TCAMs) have gained wide acceptance in the industry for storing and searching Access Control Lists (ACLs). In this paper, we propose algorithms for addressing two important problems that are encountered while using TCAMs: reducing range expansion and multi-match classification. Our first algorithm addresses the problem of the expansion of rules with range fields: to represent a range rule in a TCAM, the single rule must be mapped to multiple TCAM entries, which reduces TCAM utilization. We propose a new scheme called Database Independent Range PreEncoding (DIRPE) that, in comparison to earlier approaches, reduces the worst-case number of TCAM entries a single rule maps onto. DIRPE works without prior knowledge of the database, scales when a large number of ranges is present, and has good incremental update properties. Our second algorithm addresses the problem of finding multiple matches in a TCAM. When searched, TCAMs return only the first matching entry; however, new applications require either the first few or all matching entries. We describe a novel algorithm, called Multi-match Using Discriminators (MUD), that finds multiple matches without storing any per-search state information in the TCAM, thus making it suitable for multi-threaded environments. MUD does not increase the number of TCAM entries needed, and hence scales to large databases. Our algorithms do not require any modifications to existing TCAMs and are hence relatively easy to deploy. We evaluate the algorithms using real-life and random databases.
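To make the range-expansion problem concrete, the sketch below shows the baseline prefix expansion that DIRPE improves on (it is not DIRPE itself): a single range field must be split into ternary prefix entries before it can be stored in a TCAM, and an awkwardly aligned range needs many entries. The 4-bit field width and the example range are illustrative assumptions.

```python
def range_to_prefixes(lo, hi, width):
    """Greedily split the integer range [lo, hi] into the minimal set of
    ternary prefixes over `width` bits ('*' marks wildcard bit positions)."""
    prefixes = []
    while lo <= hi:
        # Largest aligned power-of-two block starting at lo that fits in [lo, hi].
        size = 1
        while lo % (size * 2) == 0 and lo + size * 2 - 1 <= hi:
            size *= 2
        bits = size.bit_length() - 1            # number of wildcard bits
        stem = format(lo >> bits, f'0{width - bits}b') if bits < width else ''
        prefixes.append(stem + '*' * bits)
        lo += size
    return prefixes

# A single 4-bit range rule [1, 14] already expands to 6 TCAM entries:
print(range_to_prefixes(1, 14, 4))
# ['0001', '001*', '01**', '10**', '110*', '1110']
```

With two such range fields in one rule, the expansions multiply, which is exactly the TCAM utilization problem the paper's DIRPE scheme is designed to reduce.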