14 research outputs found

    Symmetric Binary B-Trees: Data Structure and Algorithms for Random and Sequential Information Processing

    Robust and Efficient Sorting with Offset-Value Coding

    Sorting and searching are large parts of database query processing, e.g., in the forms of index creation, index maintenance, and index lookup; and comparing pairs of keys is a substantial part of the effort in sorting and searching. We have worked on simple, efficient implementations of decades-old, neglected, effective techniques for fast comparisons and fast sorting, in particular offset-value coding. In the process, we happened upon its mutually beneficial relationship with prefix truncation in run files as well as the duality of compression techniques in row- and column-format storage structures, namely prefix truncation and run-length encoding of leading key columns. We also found a beneficial relationship with consumers of sorted streams, e.g., merging parallel streams, in-stream aggregation, and merge join. We report on our implementation in the context of Google's Napa and F1 Query systems as well as an experimental evaluation of performance and scalability.
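    The central idea of offset-value coding is that, within a sorted run, each key can be summarized by an integer code computed against a base key, and two keys encoded against the same base can often be ordered by comparing their codes alone, without touching the key columns. The following is a minimal illustrative sketch, not the Napa/F1 implementation described above; it assumes keys are equal-length tuples of non-negative integers smaller than `domain`, and that all keys being compared are greater than or equal to the shared base key.

```python
def offset_value_code(key, base, domain=1 << 16):
    """Offset-value code of `key` relative to `base`, assuming key >= base.
    An earlier point of difference (smaller offset) and a larger differing
    value both yield a larger code, so codes order keys sharing one base."""
    arity = len(key)
    for offset, (k, b) in enumerate(zip(key, base)):
        if k != b:
            return (arity - offset) * domain + k
    return 0  # key equals base


def compare_with_ovc(k1, k2, base, domain=1 << 16):
    """Return -1/0/+1 ordering of k1 vs. k2, deciding by the codes first and
    falling back to a partial column comparison only when the codes tie."""
    c1 = offset_value_code(k1, base, domain)
    c2 = offset_value_code(k2, base, domain)
    if c1 != c2:
        return -1 if c1 < c2 else 1
    # Equal codes: both keys agree with each other up to and including the
    # offset encoded in the code, so only the remaining columns matter.
    offset = len(k1) - c1 // domain
    return (k1[offset:] > k2[offset:]) - (k1[offset:] < k2[offset:])


# Example: both keys were encoded against the base (1, 2, 3).
base = (1, 2, 3)
print(compare_with_ovc((1, 5, 0), (2, 0, 0), base))  # -1: codes alone decide
print(compare_with_ovc((1, 5, 0), (1, 5, 9), base))  # -1: codes tie, suffix decides
```

    In practice (e.g., in merge sort with a tree-of-losers priority queue) such codes are maintained incrementally as comparisons happen rather than recomputed; the sketch recomputes them from scratch for clarity.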

    Scalable Parallel Packed Memory Arrays

    Computer and data security: a comprehensive annotated bibliography.

    Massachusetts Institute of Technology, Alfred P. Sloan School of Management. Thesis (M.S.), 1973. Microfiche copy also available in Dewey Library.

    Parallel database operations in heterogeneous environments

    In contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus, heterogeneous computing environments rely on "complete" computer nodes (CPU, storage, network interface, etc.) connected to a private or public network by a conventional network interface. Computer networking has evolved over the past three decades and, like many technologies, has grown exponentially in terms of performance, functionality, and reliability. At the beginning of the twenty-first century, high-speed, highly reliable Internet connectivity has become as commonplace as electricity, and computing resources have become as standard in terms of availability and universal use as electrical power. To use heterogeneous Grids for applications requiring high processing power, researchers have proposed the notion of computational Grids, in which rules define the offered computing services while hiding the complexity of the Grid organization from the users; ideally, users find a Grid as easy to use as electrical power. There is no widely accepted definition of Grids: some researchers define them as high-performance distributed environments, some emphasize their geographically distributed, multi-domain nature, and others define Grids by the number of resources they unify.

    Parallel database systems have gained an important role in database research over the past two decades due to the necessity of handling large distributed datasets in scientific computing such as bioinformatics, fluid dynamics, and high-energy physics (HEP). This was connected with the shift from the (ultimately failed) development of highly specialized database machines to the use of conventional parallel hardware architectures. Generally, concurrent execution is achieved either through database-operator parallelism or through data parallelism: the former executes the parts of a partitioned query execution plan in parallel on different database operators, while the latter executes the same operation in parallel on partitioned data across multiple processors. Parallel database operation algorithms have been well analyzed for sequential processors, and a number of publications have proposed and analyzed such algorithms for parallel database machines. To the best of the author's knowledge, however, no specific analysis of parallel algorithms has so far focused on the particular characteristics of a Grid infrastructure. The specific difference lies in the heterogeneous nature of Grid resources: in a "shared nothing" architecture, as found in classical supercomputers and cluster systems, all resources such as processing nodes, disks, and network interconnects typically have homogeneous characteristics with regard to performance, access time, and bandwidth, whereas a Grid architecture contains heterogeneous resources with different performance characteristics. The challenge of this research is to discover how to cope with, or even exploit, this situation in order to maximize performance, and to define algorithms that lead to an optimized workflow orchestration.

    To address this challenge, we developed a mathematical model, based on a generalized multiprocessor architecture, to investigate the performance behavior of parallel database operations in heterogeneous environments such as a Grid. We studied the parameters and their influence on performance as well as the behavior of the algorithms in heterogeneous environments, and found that only small adjustments to the algorithms are needed to improve performance significantly in heterogeneous environments. We developed a graphical representation of the node configuration and an optimized algorithm for finding the optimal node configuration for executing a parallel binary merge sort. Finally, we validated these findings by implementing the new algorithm on a service-oriented infrastructure (SODA); the implementation confirmed both the model and the newly developed, modified algorithms. We also give an outlook on useful extensions to our model, e.g., performance indices, node reliability, and approaches for dynamic workflow optimization.
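    The thesis's actual model and node-selection algorithm are not reproduced in the abstract; as a rough, hypothetical illustration of the underlying idea of data parallelism on heterogeneous nodes, the sketch below partitions the input in proportion to assumed relative node speeds (the `speeds` parameter and helper names are assumptions of this sketch), lets each partition be sorted independently, and merges the sorted runs.

```python
import heapq

def partition_by_speed(data, speeds):
    """Split `data` into contiguous chunks whose sizes are roughly proportional
    to the relative speeds of the heterogeneous nodes, so that faster nodes
    receive more work and all local sorts finish at about the same time."""
    total = sum(speeds)
    chunks, start = [], 0
    for i, speed in enumerate(speeds):
        end = len(data) if i == len(speeds) - 1 else start + round(len(data) * speed / total)
        chunks.append(data[start:end])
        start = end
    return chunks

def heterogeneous_parallel_sort(data, speeds):
    """Each (simulated) node sorts its chunk locally; the sorted runs are merged."""
    runs = [sorted(chunk) for chunk in partition_by_speed(data, speeds)]
    return list(heapq.merge(*runs))

# Example: node 0 is twice as fast as nodes 1 and 2, so it gets half of the data.
print(heterogeneous_parallel_sort([5, 3, 8, 1, 9, 2, 7, 4], speeds=[2, 1, 1]))
```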

    A schema-based peer-to-peer infrastructure for digital library networks

    [no abstract]

    Accelerating Event Stream Processing in On- and Offline Systems

    Due to a growing number of data producers and their ever-increasing data volume, the ability to ingest, analyze, and store potentially never-ending streams of data is a mission-critical task in today's data processing landscape. A widespread form of data stream is the event stream, which consists of continuously arriving notifications about some real-world phenomenon. For example, a temperature sensor naturally generates an event stream by periodically measuring the temperature and reporting it, together with the measurement time, whenever it changes substantially from the previous measurement. In this thesis, we consider two kinds of event stream processing: online and offline. Online refers to processing events solely in main memory as soon as they arrive, while offline means processing event data previously persisted to non-volatile storage. Both modes are supported by widely used scale-out general-purpose stream processing engines (SPEs) like Apache Flink or Spark Streaming. However, such engines suffer from two significant deficiencies that severely limit their processing performance. First, for offline processing, they load the entire stream from non-volatile secondary storage and replay all data items into the associated online engine in order of their original arrival. While this naturally ensures unified query semantics for on- and offline processing, the cost of reading the entire stream from non-volatile storage quickly dominates the overall processing cost. Second, modern SPEs focus on scaling out computations across the nodes of a cluster, but use only a fraction of the available resources of individual nodes. This thesis tackles those problems with three different approaches. First, we present novel techniques for the offline processing of two important query types (windowed aggregation and sequential pattern matching). Our methods utilize well-understood indexing techniques to reduce the total amount of data to read from non-volatile storage. We show that this improves the overall query runtime significantly. In particular, this thesis develops the first index-based algorithms for pattern queries expressed with the Match_Recognize clause, a new and powerful language feature of SQL that has received little attention so far. Second, we show how to maximize the resource utilization of single nodes by exploiting the capabilities of modern hardware. To this end, we develop a prototypical shared-memory CPU-GPU-enabled event processing system. The system provides implementations of all major event processing operators (filtering, windowed aggregation, windowed join, and sequential pattern matching). Our experiments reveal that, in terms of resource utilization and processing throughput, such a hardware-enabled system is superior to hardware-agnostic general-purpose engines. Finally, we present TPStream, a new operator for pattern matching over temporal intervals. TPStream achieves low processing latency and, in contrast to sequential pattern matching, is easily parallelizable even for unpartitioned input streams. This results in maximized resource utilization, especially on modern CPUs with multiple cores.
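    As a simple, hardware-agnostic illustration of one operator family mentioned above (windowed aggregation), the sketch below averages temperature events per fixed-size tumbling window; it is a toy under stated assumptions, not the index-based or CPU-GPU-enabled implementation developed in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: int   # e.g., seconds since the stream started
    value: float     # e.g., a temperature reading

def tumbling_window_avg(events, width):
    """Group a timestamp-ordered event stream into fixed-size (tumbling) windows
    and emit (window_start, average value) for every non-empty window."""
    results = []
    window_start, acc, count = None, 0.0, 0
    for event in events:
        start = (event.timestamp // width) * width
        if window_start is not None and start != window_start:
            results.append((window_start, acc / count))
            acc, count = 0.0, 0
        window_start = start
        acc += event.value
        count += 1
    if count:
        results.append((window_start, acc / count))
    return results

# Example: two 10-second windows over four temperature readings.
stream = [Event(1, 20.1), Event(4, 20.7), Event(11, 21.3), Event(14, 22.0)]
print(tumbling_window_avg(stream, width=10))  # [(0, 20.4), (10, 21.65)]
```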