
    Active data structures on GPGPUs

    Active data structures support operations that may affect a large number of elements of an aggregate data structure. They are well suited to extremely fine-grain parallel systems, including circuit parallelism. General-purpose GPUs were designed to support regular graphics algorithms, but their intermediate level of granularity also makes them potentially viable for active data structures. We consider the characteristics of active data structures and discuss the feasibility of implementing them on GPGPUs. We describe the GPU implementations of two such data structures (ESF arrays and index intervals), assess their performance, and discuss the potential of active data structures as an unconventional programming model that can exploit the capabilities of emerging fine-grain architectures such as GPUs.
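    As a rough illustration of the idea (not the paper's API), the sketch below emulates an "active" bulk operation in Python/NumPy: a single data-parallel step that touches many elements of the aggregate at once, which on a GPU would correspond to one kernel launch.

```python
# Illustrative sketch only: a bulk "active" operation applied to every affected
# element in one data-parallel step (emulated with NumPy on the CPU).
# The class and method names are hypothetical, not taken from the paper.
import numpy as np

class ActiveArray:
    def __init__(self, values):
        self.values = np.asarray(values)

    def add_to_range(self, lo, hi, delta):
        """Add delta to every element with index in [lo, hi) in one vectorized step.
        On a GPGPU this would be a single kernel launch over all elements."""
        idx = np.arange(self.values.size)
        self.values = np.where((idx >= lo) & (idx < hi), self.values + delta, self.values)
        return self

a = ActiveArray([1, 2, 3, 4, 5])
a.add_to_range(1, 4, 10)
print(a.values)  # [ 1 12 13 14  5]
```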

    Extensible sparse functional arrays with circuit parallelism

    A longstanding open question in algorithms and data structures is the time and space complexity of pure functional arrays. Imperative arrays provide update and lookup operations that require constant time in the RAM theoretical model, but it is conjectured that there does not exist a RAM algorithm that achieves the same complexity for functional arrays, unless restrictions are placed on the operations. The main result of this paper is an algorithm that does achieve optimal unit time and space complexity for update and lookup on functional arrays. This algorithm does not run on a RAM, but instead it exploits the massive parallelism inherent in digital circuits. The algorithm also provides unit time operations that support storage management, as well as sparse and extensible arrays. The main idea behind the algorithm is to replace a RAM memory with a tree circuit that is more powerful than the RAM yet has the same asymptotic complexity in time (gate delays) and size (number of components). The algorithm uses an array representation that allows elements to be shared between many arrays with only a small constant factor penalty in space and time. This system exemplifies circuit parallelism, which exploits very large numbers of transistors per chip in order to speed up key algorithms. Extensible Sparse Functional Arrays (ESFA) can be used with both functional and imperative programming languages. The system comprises a set of algorithms and a circuit specification, and it has been implemented on a GPGPU with good performance.
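    The interface the abstract describes can be sketched as follows (hypothetical names): update returns a new array while every earlier version remains readable, and versions share elements. This plain-Python emulation conveys the semantics only; the unit-time bounds claimed in the paper come from the tree circuit, not from a RAM implementation like this one.

```python
# Sketch of an ESFA-style persistent array interface. Versions share structure via
# a parent chain; lookup here costs O(number of updates) on a RAM, whereas the
# paper's circuit achieves O(1). Names are illustrative, not the paper's API.
class FunctionalArray:
    def __init__(self, parent=None, index=None, value=None):
        self._parent, self._index, self._value = parent, index, value

    def update(self, index, value):
        # O(1): allocate one node; everything else is shared with older versions.
        return FunctionalArray(self, index, value)

    def lookup(self, index):
        node = self
        while node is not None:
            if node._index == index:
                return node._value
            node = node._parent
        raise KeyError(index)  # sparse: indices never written are simply absent

empty = FunctionalArray()
a = empty.update(3, "x")
b = a.update(3, "y")              # a is unchanged; both versions coexist
print(a.lookup(3), b.lookup(3))   # x y
```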

    Optimal dimensional synthesis of force feedback lower arm exoskeletons

    This paper presents multi-criteria design optimization of parallel-mechanism-based force feedback exoskeletons for the human forearm and wrist. The optimized devices are intended to be employed as high-fidelity haptic interfaces. Multiple design objectives are discussed and classified for the devices, and the optimization problem for studying the trade-offs between these criteria is formulated. Dimensional syntheses are performed for optimal global kinematic and dynamic performance, utilizing a Pareto-front-based framework, for two spherical parallel mechanisms that satisfy the ergonomic requirements of the human forearm and wrist. The two optimized mechanisms are compared and discussed in light of the multiple design criteria. Finally, the kinematic structure and dimensions of an optimal exoskeleton are decided.
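    The Pareto-front idea behind the dimensional synthesis can be sketched as below: among candidate dimension sets scored on several competing objectives, keep only the non-dominated designs and choose among those afterwards. The objective values here are placeholders, not the paper's kinematic or dynamic performance indices.

```python
# Minimal Pareto-front filter for multi-criteria dimensional synthesis.
# Each candidate is a tuple of objective values to be maximized; the numbers
# below are made up for illustration.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

designs = [(0.62, 0.40), (0.55, 0.70), (0.50, 0.45), (0.70, 0.30)]
print(pareto_front(designs))   # [(0.62, 0.4), (0.55, 0.7), (0.7, 0.3)]
```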

    bdbms -- A Database Management System for Biological Data

    Biologists are increasingly using databases for storing and managing their data. Biological databases typically consist of a mixture of raw data, metadata, sequences, annotations, and related data obtained from various sources. Current database technology lacks several functionalities that are needed by biological databases. In this paper, we introduce bdbms, an extensible prototype database management system for supporting biological data. bdbms extends the functionalities of current DBMSs to include: (1) annotation and provenance management, including storage, indexing, manipulation, and querying of annotations and provenance as first-class objects in bdbms; (2) local dependency tracking, to track the dependencies and derivations among data items; (3) update authorization, to support data curation via content-based rather than identity-based authorization; and (4) new access methods, and their supporting operators, that support pattern matching on various types of compressed biological data. This paper presents the design of bdbms along with the techniques proposed to support these functionalities, including an extension to SQL. We also outline some open issues in building bdbms. Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works, and make commercial use of the work, but you must attribute the work to the author and CIDR 2007. 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA.
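    One of the listed functionalities, local dependency tracking, can be sketched roughly as follows: each derived item records the items it was computed from, so a change to a source item can flag its dependents for re-curation. The class and method names are hypothetical and are not bdbms's actual interface or its SQL extension.

```python
# Hypothetical sketch of local dependency tracking among data items.
class DependencyStore:
    def __init__(self):
        self.values = {}        # item id -> stored value
        self.derived_from = {}  # item id -> ids of the items it was derived from

    def put(self, item, value, sources=()):
        self.values[item] = value
        self.derived_from[item] = set(sources)

    def dependents_of(self, item):
        """Items whose value was derived directly from `item`."""
        return {d for d, srcs in self.derived_from.items() if item in srcs}

db = DependencyStore()
db.put("seq42", "ATGCCG")
db.put("annotation7", "coding region", sources=["seq42"])
db.put("report1", "summary", sources=["annotation7"])
print(db.dependents_of("seq42"))   # {'annotation7'}: flag for re-curation if seq42 changes
```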

    Noncommutative Geometry of Finite Groups

    A finite set can be supplied with a group structure which can then be used to select (classes of) differential calculi on it via the notions of left-, right- and bicovariance. A corresponding framework has been developed by Woronowicz, more generally for Hopf algebras including quantum groups. A differential calculus is regarded as the most basic structure needed for the introduction of further geometric notions like linear connections and, moreover, for the formulation of field theories and dynamics on finite sets. Associated with each bicovariant first-order differential calculus on a finite group is a braid operator which plays an important role for the construction of distinguished geometric structures. For a covariant calculus, there are notions of invariance for linear connections and tensors. All these concepts are explored for finite groups and illustrated with examples. Some results are formulated more generally for arbitrary associative (Hopf) algebras. In particular, the problem of extension of a connection on a bimodule (over an associative algebra) to tensor products is investigated, leading to the class of `extensible connections'. It is shown that invariance properties of an extensible connection on a bimodule over a Hopf algebra are carried over to the extension. Furthermore, an invariance property of a connection is also shared by a `dual connection' which exists on the dual bimodule (as defined in this work). Comment: 34 pages, LaTeX.
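    For orientation, the standard Woronowicz-style construction for a finite group G can be written out as follows (generic notation, not necessarily the paper's): a first-order calculus is fixed by a subset S of G \ {e}, with left-invariant 1-forms theta^g for g in S.

```latex
% A = C(G), the algebra of functions on a finite group G; S a subset of G \ {e}.
% Differential and bimodule structure of the associated first-order calculus:
\[
  \mathrm{d}f \;=\; \sum_{g \in S} \bigl( R_g f - f \bigr)\,\theta^{g},
  \qquad (R_g f)(h) \;=\; f(hg),
  \qquad \theta^{g} f \;=\; (R_g f)\,\theta^{g} .
\]
% Left covariance holds for any choice of S; bicovariance requires S to be
% closed under conjugation, g -> h g h^{-1} for all h in G.
```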

    Extensible Detection and Indexing of Highlight Events in Broadcasted Sports Video

    Content-based indexing is fundamental to support and sustain the ongoing growth of broadcasted sports video. The main challenge is to design extensible frameworks to detect and index highlight events. This paper presents: 1) a statistically driven event detection approach that utilizes a minimum amount of manual knowledge and is based on a universal scope-of-detection and audio-visual features; 2) a semi-schema-based indexing that combines the benefits of schema-based modeling, to ensure that the video indexes are valid at all times without manual checking, with schema-less modeling, to allow several passes of instantiation in which additional elements can be declared. To demonstrate the performance of the event detection, a large dataset of sports videos totalling around 15 hours, including soccer, basketball, and Australian football, is used.
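    A minimal version of the statistically driven detection step might look like the sketch below: standardize a per-second audio-visual feature against the whole clip's statistics and flag seconds that deviate strongly as candidate highlight events. The feature, values, and threshold are placeholders rather than the paper's actual detector.

```python
# Toy statistics-driven highlight detector over a per-second audio-visual feature
# (e.g. crowd-noise energy). Values and threshold are illustrative only.
import numpy as np

def candidate_highlights(feature_per_second, z_threshold=2.5):
    x = np.asarray(feature_per_second, dtype=float)
    z = (x - x.mean()) / (x.std() + 1e-9)   # standardize against the whole clip
    return np.flatnonzero(z > z_threshold)  # seconds flagged as candidate events

audio_energy = [1, 1, 1, 1, 1, 9, 1, 1]     # toy per-second values
print(candidate_highlights(audio_energy))   # [5]
```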

    Design and Implementation of an Extensible Variable Resolution Bathymetric Estimator

    For grid-based bathymetric estimation techniques, determining the right resolution at which to work is essential. Appropriate grid resolution can be related, roughly, to data density and thence to sonar characteristics, survey methodology, and depth. It is therefore variable in almost all survey scenarios, and methods of addressing this problem can have enormous impact on the correctness and efficiency of computational schemes of this kind. This paper describes the design and implementation of a bathymetric depth estimation algorithm that attempts to address this problem by combining the computational efficiency of locally regular grids with piecewise-variable estimation resolution, to provide a single logical data structure and associated algorithms that can adjust to local data conditions, change resolution where required to best support the data, and operate over essentially arbitrarily large areas as a single unit. The algorithm, which is in part a development of CUBE, is modular and extensible, and is structured as a client-server application to support different implementation modalities. The algorithm is called “CUBE with Hierarchical Resolution Techniques”, or CHRT.
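    The core resolution decision can be sketched as follows: for each coarse tile, estimate the sounding density and pick the finest candidate cell size whose cells would still receive roughly some minimum number of soundings. The tile size, candidate resolutions, and threshold below are placeholders, not CHRT's actual rules.

```python
# Illustrative piecewise-variable resolution choice driven by data density.
def choose_resolution(n_soundings, tile_size_m,
                      candidate_res_m=(0.5, 1, 2, 4, 8, 16), min_soundings=5):
    density = n_soundings / (tile_size_m ** 2)        # soundings per square metre
    for res in candidate_res_m:                       # finest candidate first
        if density * res * res >= min_soundings:      # expected soundings per cell
            return res
    return candidate_res_m[-1]                        # fall back to the coarsest

# Dense shallow-water tile vs. sparse deeper tile (64 m x 64 m tiles):
print(choose_resolution(200_000, 64))   # 0.5 -- high density supports fine cells
print(choose_resolution(800, 64))       # 8   -- low density forces coarser cells
```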