    The Ubiquitous B-tree: Volume II

    Major developments relating to the B-tree from early 1979 through the fall of 1986 are presented. This updates the well-known article, The Ubiquitous B-Tree by Douglas Comer (Computing Surveys, June 1979). After a basic overview of B and B+ trees, recent research is cited as well as descriptions of nine B-tree variants developed since Comer's article. The advantages and disadvantages of each variant over the basic B-tree are emphasized. Also included are a discussion of concurrency control issues in B-trees and a speculation on the future of B-trees.
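
    The abstract mentions a basic overview of B and B+ trees. For orientation, a B+-tree keeps all records in linked leaf nodes, while internal nodes hold only separator keys and child pointers; the sketch below illustrates that node layout and is hypothetical, not code from the survey.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class LeafNode:
        # In a B+-tree every record lives in a leaf; leaves are chained so
        # range scans can follow next_leaf pointers.
        keys: List[int] = field(default_factory=list)
        values: List[object] = field(default_factory=list)
        next_leaf: Optional["LeafNode"] = None

    @dataclass
    class InternalNode:
        # Internal nodes hold only separator keys and child pointers, giving
        # a higher fan-out than a basic B-tree node that also stores records.
        keys: List[int] = field(default_factory=list)
        children: List[object] = field(default_factory=list)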

    6 Access Methods and Query Processing Techniques

    The performance of a database management system (DBMS) is fundamentally dependent on the access methods and query processing techniques available to the system. Traditionally, relational DBMSs have relied on well-known access methods, such as the ubiquitous B+-tree, hashing with chaining, and, in som…
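
    The (truncated) abstract names the ubiquitous B+-tree and hashing with chaining as traditional access methods. As an illustration of the latter, a minimal sketch of a chained hash table (not code from the source) resolves collisions by appending to a per-bucket list:

    class ChainedHashTable:
        def __init__(self, num_buckets: int = 16):
            # Each bucket is a list of (key, value) pairs; colliding keys
            # share a bucket and are found by scanning its chain.
            self.buckets = [[] for _ in range(num_buckets)]

        def put(self, key, value):
            bucket = self.buckets[hash(key) % len(self.buckets)]
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)  # overwrite an existing key
                    return
            bucket.append((key, value))

        def get(self, key, default=None):
            bucket = self.buckets[hash(key) % len(self.buckets)]
            for k, v in bucket:
                if k == key:
                    return v
            return default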

    Context Trees: Augmenting Geospatial Trajectories with Context

    Exposing latent knowledge in geospatial trajectories has the potential to provide a better understanding of the movements of individuals and groups. Motivated by such a desire, this work presents the context tree, a new hierarchical data structure that summarises the context behind user actions in a single model. We propose a method for context tree construction that augments geospatial trajectories with land usage data to identify such contexts. Through evaluation of the construction method and analysis of the properties of generated context trees, we demonstrate the foundation afforded for understanding and modelling behaviour. Summarising user contexts into a single data structure gives easy access to information that would otherwise remain latent, providing the basis for better understanding and predicting the actions and behaviours of individuals and groups. Finally, we also present a method for pruning context trees, for use in applications where it is desirable to reduce the size of the tree while retaining useful information.
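
    The abstract does not spell out the construction, so the following is only a hypothetical sketch of the general idea: a hierarchical node type whose labels might be land-usage categories, together with a weight-threshold pruning pass. The field names and the pruning criterion are assumptions, not the authors' method.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ContextNode:
        label: str            # e.g. "retail", "park", "residential"
        weight: float = 0.0   # e.g. time spent or visits under this context
        children: List["ContextNode"] = field(default_factory=list)

    def prune(node: ContextNode, min_weight: float) -> ContextNode:
        # Return a copy of the tree with low-weight subtrees dropped,
        # shrinking the model while keeping heavily used contexts.
        kept = [prune(c, min_weight)
                for c in node.children if c.weight >= min_weight]
        return ContextNode(node.label, node.weight, kept)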

    Non-Wellfounded Trees in Homotopy Type Theory

    Coinductive data types are used in functional programming to represent infinite data structures. Examples include the ubiquitous data type of streams over a given base type, but also more sophisticated types. From a categorical perspective, coinductive types are characterized by a universal property, which specifies the object with that property uniquely in a suitable sense. More precisely, a coinductive type is specified as the terminal coalgebra of a suitable endofunctor. In this category-theoretic viewpoint, coinductive types are dual to inductive types, which are defined as initial algebras. Inductive, resp. coinductive, types are usually considered in the principled form of the family of W-types, resp. M-types, parametrized by a type A and a dependent type family B over A, that is, a family of types (B(a))_{a:A}. Intuitively, the elements of the coinductive type M(A,B) are trees with nodes labeled by elements of A such that a node labeled by a: A has B(a)-many subtrees, given by a map B(a) → M(A,B); see Figure 1 for an example. The inductive type W(A,B) contains only trees where any path within that tree eventually leads to a leaf, that is, to a node a: A such that B(a) is empty.
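
    Streams over a base type are the standard first example of an M-type: each node carries one label and has exactly one subtree. Outside type theory the same shape can be imitated with laziness; the Python sketch below is only an analogy, not the homotopy-type-theoretic construction studied in the paper.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Stream:
        head: int
        tail_thunk: Callable[[], "Stream"]  # evaluated only on demand

        def tail(self) -> "Stream":
            return self.tail_thunk()

    def naturals(n: int = 0) -> Stream:
        # Unfolding a stream from a seed mirrors the coalgebra map
        # seed -> (label, next seed) that specifies a coinductive type.
        return Stream(n, lambda: naturals(n + 1))

    s = naturals()
    print(s.head, s.tail().head, s.tail().tail().head)  # 0 1 2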

    FastDeepIoT: Towards Understanding and Optimizing Neural Network Execution Time on Mobile and Embedded Devices

    Deep neural networks show great potential as solutions to many sensing application problems, but their excessive resource demand slows down execution time, posing a serious impediment to deployment on low-end devices. To address this challenge, recent literature focused on compressing neural network size to improve performance. We show that changing neural network size does not proportionally affect performance attributes of interest, such as execution time. Rather, extreme run-time nonlinearities exist over the network configuration space. Hence, we propose a novel framework, called FastDeepIoT, that uncovers the non-linear relation between neural network structure and execution time, then exploits that understanding to find network configurations that significantly improve the trade-off between execution time and accuracy on mobile and embedded devices. FastDeepIoT makes two key contributions. First, FastDeepIoT automatically learns an accurate and highly interpretable execution time model for deep neural networks on the target device. This is done without prior knowledge of either the hardware specifications or the detailed implementation of the used deep learning library. Second, FastDeepIoT informs a compression algorithm how to minimize execution time on the profiled device without impacting accuracy. We evaluate FastDeepIoT using three different sensing-related tasks on two mobile devices: Nexus 5 and Galaxy Nexus. FastDeepIoT further reduces the neural network execution time by 48% to 78% and energy consumption by 37% to 69% compared with the state-of-the-art compression algorithms. Comment: Accepted by SenSys '1
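
    The abstract describes learning an interpretable execution-time model from on-device profiling. The sketch below is a deliberate simplification of that idea: a linear per-layer time model fitted by least squares, with the features and measurements invented for illustration; the actual FastDeepIoT model also captures the run-time non-linearities discussed above.

    import numpy as np

    # Hypothetical profiled layers: [output_channels, kernel_area * input_channels]
    features = np.array([
        [32.0,  288.0],
        [64.0,  576.0],
        [128.0, 1152.0],
        [256.0, 2304.0],
    ])
    measured_ms = np.array([1.2, 2.1, 4.5, 9.8])  # invented measurements

    # Least-squares fit with an intercept term.
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, measured_ms, rcond=None)

    print("coefficients:", coef)
    print("predicted ms:", (X @ coef).round(2))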