    Transactional concurrency control for resource constrained applications

    PhD Thesis. Transactions have long been used as a mechanism for ensuring the consistency of databases. Databases, and associated transactional approaches, have always been an active area of research as different application domains and computing architectures have placed ever more elaborate requirements on shared data access. As transactions typically provide consistency at the expense of timeliness (abort/retry) and resources (duplicated shared data and locking), there have been substantial efforts to limit these two aspects of transactions while still satisfying application requirements. In environments where clients are geographically distant from a database, the consistency/performance trade-off becomes acute, as any retrieval of data over a network is not only expensive but relatively slow compared to co-located client/database systems. Furthermore, for battery-powered clients the increased overhead of transactions can also be viewed as a significant power overhead. However, for all their drawbacks, transactions do provide the data consistency that is a requirement for many application types. In this thesis we explore the solution space related to timely transactional systems for remote clients and centralised databases, with a focus on providing a solution that, compared to others' work in this domain: (a) maintains consistency; (b) lowers latency; (c) improves throughput. To achieve this we revisit a technique first developed to decrease disk access times via local caching of state (for aborted transactions) to tackle the problems prevalent in real-time databases. We demonstrate that such a technique (rerun) allows a significant change in the typical structure of a transaction (one never before considered, even in rerun systems). Such a change brings significant performance gains not only in the traditional rerun local-database solution space, but also in the distributed solution space. A byproduct of our improvements, one can argue, is a "greener" solution: reduced execution time coupled with improved throughput affords longer battery life for mobile devices.
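    To make the rerun idea concrete, the following is a minimal Python sketch of an optimistic client-side transaction loop in which state read by an aborted attempt stays in a local cache, so a rerun revalidates cached versions rather than re-fetching everything over the network; the interface names (fetch, validate_and_commit) are hypothetical illustrations, not the thesis's actual design.

        # Hedged sketch: 'db' is any backend exposing the hypothetical calls
        # fetch(key) -> CachedItem and validate_and_commit(read_set, writes)
        # -> (ok, stale_keys). Neither name comes from the thesis.

        class CachedItem:
            def __init__(self, value, version):
                self.value = value
                self.version = version

        def run_transaction(db, keys, compute):
            cache = {}                       # survives across aborted attempts
            while True:
                read_set = {}
                for key in keys:
                    if key not in cache:     # first attempt: remote fetch
                        cache[key] = db.fetch(key)
                    read_set[key] = cache[key]   # rerun: reuse cached state
                writes = compute({k: item.value for k, item in read_set.items()})
                # Commit succeeds only if every cached version is still current.
                ok, stale_keys = db.validate_and_commit(read_set, writes)
                if ok:
                    return writes
                for key in stale_keys:       # refresh only what actually changed
                    cache[key] = db.fetch(key)

    Because a rerun re-reads only the items found stale at validation, each retry touches the network far less than a full restart would, which is consistent with the latency and throughput goals stated above.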

    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up. As knowledge is added to these systems, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer. In this scheme the memory becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD butterfly-switch machine).

    Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1

    The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set with an emphasis on those missions that drive information systems technology, the existing NASA space-science operations infrastructure, and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.

    Proceedings of the Second Pilot Climate Data System Workshop

    The proceedings of the workshop held on January 29 and 30, 1986 are discussed. Data management, satellite radiance data, clouds, ultraviolet flux variations in the upper atmosphere, rainfall during El Nino events, and the use of optical disks are among the topics covered.

    High definition systems in Japan

    The successful implementation of a strategy to produce high-definition systems within the Japanese economy will favorably affect the fundamental competitiveness of Japan relative to the rest of the world. The development of an infrastructure necessary to support high-definition products and systems in that country involves major commitments of engineering resources, plants and equipment, educational programs and funding. The results of these efforts appear to affect virtually every aspect of the Japanese industrial complex. The results of assessments of the current progress of Japan toward the development of high-definition products and systems are presented. The assessments are based on the findings of a panel of U.S. experts made up of individuals from U.S. academia and industry, and derived from a study of the Japanese literature combined with visits to the primary relevant industrial laboratories and development agencies in Japan. Specific coverage includes an evaluation of progress in R&D for high-definition television (HDTV) displays that are evolving in Japan; high-definition standards and equipment development; Japanese intentions for the use of HDTV; economic evaluation of Japan's public policy initiatives in support of high-definition systems; and management analysis of Japan's strategy of leverage with respect to high-definition products and systems.

    Transcriber: Development and use of a tool for assisting speech corpora production

    We present "Transcriber", a tool for assisting in the creation of speech corpora, and describe some aspects of its development and use. Transcriber was designed for the manual segmentation and transcription of long-duration broadcast news recordings, including annotation of speech turns, topics and acoustic conditions. It is highly portable, relying on the scripting language Tcl/Tk with extensions such as Snack for advanced audio functions and tcLex for lexical analysis, and has been tested on various Unix systems and Windows. The data format follows the XML standard with Unicode support for multilingual transcriptions. Distributed as free software in order to encourage the production of corpora, ease their sharing, increase user feedback and motivate software contributions, Transcriber has been in use for over a year in several countries. As a result of this collective experience, new requirements arose to support additional data formats, video control, and a better management of conversational speech. Using the recently formalized annotation graphs framework, adaptation of the tool towards new tasks and support of different data formats will become easier.
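    As a rough illustration of the kind of XML-based, turn-annotated data format described above, here is a small Python sketch; the element and attribute names in the sample are hypothetical and do not reproduce Transcriber's actual DTD.

        import xml.etree.ElementTree as ET

        # A hypothetical turn-annotated transcription in the spirit of the
        # format described in the abstract (speech turns with speaker, timing
        # and acoustic condition); not Transcriber's real schema.
        SAMPLE = """
        <transcript audio="news_recording.wav">
          <turn speaker="spk1" start="0.00" end="4.52" condition="studio">
            Good evening, here is the news.
          </turn>
          <turn speaker="spk2" start="4.52" end="9.10" condition="telephone">
            Thank you. And now to our first story.
          </turn>
        </transcript>
        """

        root = ET.fromstring(SAMPLE)
        for turn in root.iter("turn"):
            duration = float(turn.get("end")) - float(turn.get("start"))
            print(f'{turn.get("speaker")} ({duration:.2f}s): {turn.text.strip()}')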

    An architecture for an ATM network continuous media server exploiting temporal locality of access

    With the continuing drop in the price of memory, Video-on-Demand (VoD) solutions that have so far focused on maximising the throughput of disk units with a minimal use of physical memory may now employ significant amounts of cache memory. The subject of this thesis is the study of a technique to best utilise a memory buffer within such a VoD solution. In particular, knowledge of the streams active on the server is used to allocate cache memory. Stream-optimised caching exploits reuse of data among streams that are temporally close to each other within the same clip; the data fetched on behalf of the leading stream may be cached and reused by the following streams. Therefore, only the leading stream requires access to the physical disk, and the potential level of service provision allowed by the server may be increased. The use of stream-optimised caching may consequently be limited to environments where reuse of data is significant. As such, the technique examined within this thesis focuses on a classroom environment where user progress is generally linear and all users progress at approximately the same rate; for such an environment, reuse of data is guaranteed. The analysis of stream-optimised caching begins with a detailed theoretical discussion of the technique and suggests possible implementations. Later chapters describe both the design and construction of a prototype server that employs the caching technique, and experiments that use the prototype to assess the effectiveness of the technique for the chosen environment using "emulated" users. The conclusions of these experiments indicate that stream-optimised caching may be applicable to VoD systems of larger scale than small teaching environments. Future development of stream-optimised caching is considered.
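    A minimal sketch of the stream-optimised caching policy just described, under the simplifying assumption that a block may be evicted once no active stream still trails behind it; the interface (disk_read, per-stream block positions) is illustrative, not the thesis prototype's actual design.

        # Hedged sketch: followers of the leading stream are served from the
        # cache, so only the leader touches the physical disk.

        class StreamCache:
            def __init__(self, disk_read):
                self.disk_read = disk_read   # callback for a real disk fetch
                self.cache = {}              # block number -> block data
                self.positions = {}          # stream id -> current block

            def read(self, stream_id, block):
                self.positions[stream_id] = block
                if block not in self.cache:  # leading stream: disk access
                    self.cache[block] = self.disk_read(block)
                data = self.cache[block]     # following streams: memory hit
                self._evict()
                return data

            def _evict(self):
                # Drop blocks that every active stream has already passed.
                trailing = min(self.positions.values())
                for b in [b for b in self.cache if b < trailing]:
                    del self.cache[b]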

    Efficient caching algorithms for memory management in computer systems

    As disk performance continues to lag behind that of memory systems and processors, fully utilizing memory to reduce disk accesses is a highly effective way to improve overall system performance. Furthermore, to serve the applications running on a computer in a distributed system, not only the local memory but also the memory on remote servers must be effectively managed to minimize I/O operations. The critical challenges in effective memory cache management include: (1) insightfully understanding and quantifying the locality inherent in memory access requests; (2) effectively utilizing the locality information in replacement algorithms; (3) intelligently placing and replacing data in the multi-level caches of a distributed system; (4) ensuring that the overheads of the proposed schemes are acceptable. This dissertation provides solutions and makes unique and novel contributions in application locality quantification, general replacement algorithms, low-cost replacement policies, thrashing protection, and multi-level cache management in a distributed system. First, the dissertation proposes a new method to quantify locality strength and to accurately identify data with strong locality. It also provides a new replacement algorithm which significantly outperforms existing algorithms. Second, considering the extremely low cost required of replacement policies in virtual memory management, the dissertation proposes a policy that meets those requirements while considerably exceeding the performance of existing policies. Third, the dissertation provides an effective scheme to protect the system from thrashing when running memory-intensive applications. Finally, the dissertation provides a multi-level block placement and replacement protocol for a distributed client-server environment, exploiting non-uniform locality strengths in I/O access requests. The methodology used in this study includes careful application behavior characterization, system requirement analysis, algorithm design, trace-driven simulation, and system implementation. A main conclusion of the work is that there is still much room for innovation and significant performance improvement for the seemingly mature and stable policies that have been broadly used in current operating system design.
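    The abstract does not name the dissertation's specific algorithms, but its first challenge, quantifying locality, can be illustrated with the standard reuse-distance metric: the number of distinct blocks referenced between two consecutive accesses to the same block. The sketch below shows that generic metric only; it is not the dissertation's own method.

        # Reuse distance over a block reference trace: small distances mean
        # strong locality that a replacement policy can exploit.

        def reuse_distances(trace):
            stack = []                       # most recently used block last
            dists = []
            for block in trace:
                if block in stack:
                    i = stack.index(block)
                    dists.append(len(stack) - 1 - i)  # distinct blocks between
                    stack.pop(i)
                else:
                    dists.append(float("inf"))        # first touch: cold miss
                stack.append(block)
            return dists

        print(reuse_distances(["a", "b", "a", "c", "b", "a"]))
        # -> [inf, inf, 1, inf, 2, 2]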

    Algorithms for Topology Awareness in Sensor Networks

    This work deals with algorithmic and geometric challenges in wireless sensor networks (WSNs). Classical algorithm theory, with a single processor executing one sequential program while having access to the complete data of the problem at hand, does not suit the needs of WSNs. Instead, we need distributed protocols where nodes collaboratively solve problems that are too complex for a single node. First, we analyze a location problem, in which the nodes obtain a sense of the network topology and their position within it. Computing coordinates in a global coordinate system is NP-hard in almost all relevant variants, so we present a completely new approach instead: the network builds clusters and constructs an abstract graph that closely reflects the topology of the network region. The resulting topology awareness suits the needs of some applications much better than the coordinate-based approach. In the second part, we present a novel flow problem which adds battery constraints to dynamic network flows. Given a time horizon, we seek a flow from source to sink that maximizes the total amount of delivered data. As there is no prior work on this problem, we also analyze it in a centralized setting; we prove complexity results for several variants and present approximation schemes. The third part introduces the WSN simulator Shawn. By letting the user choose among different geometric communication models and data structures for the resulting graph, Shawn can adapt to many different setups, including mobile ones. Due to its design, Shawn is much faster than comparable simulation environments.
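    As a rough illustration of the battery-constrained flow problem described above, the following time-expanded linear program is one plausible formulation; the symbols (edge capacities u_e, per-unit forwarding cost c_v, battery budget B_v) and the omission of transit times are simplifying assumptions for illustration, not the thesis's exact model.

        % One hypothetical formulation, not the thesis's model: maximize data
        % delivered to the sink over T steps, subject to capacities, per-step
        % conservation at relay nodes, and per-node energy budgets.
        \begin{align*}
          \max\;       & \sum_{t=1}^{T} \sum_{e \in \delta^-(\mathrm{sink})} f_e(t) \\
          \text{s.t.}\;& 0 \le f_e(t) \le u_e
                         && \text{edge capacities} \\
                       & \sum_{e \in \delta^-(v)} f_e(t) = \sum_{e \in \delta^+(v)} f_e(t)
                         && \text{conservation at each relay node } v\text{, step } t \\
                       & \sum_{t=1}^{T} \sum_{e \in \delta^+(v)} c_v\, f_e(t) \le B_v
                         && \text{battery budget at each node } v
        \end{align*}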

    Federal Preemption of Shrinkwrap and On-line Licenses

    Symposium: Copyright Owners' Rights and Users' Privileges on the Internet.