
    Analysis of memory use for improved design and compile-time allocation of local memory

    Trace analysis techniques are used to study memory referencing behavior for the purpose of designing local memories and determining how to allocate them for data and instructions. In an attempt to assess the inherent behavior of the source code, the trace analysis system described here reduced the effects of the compiler and host architecture on the trace by using a technique called flattening. The variables in the trace, their associated single-assignment values, and references are histogrammed on the basis of various parameters describing memory referencing behavior. Bounds are developed specifying the amount of memory space required to store all live values in a particular histogram class. The reduction achieved in main memory traffic by allocating local memory is specified for each class.
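
    The histogramming and the per-class space bound can be illustrated with a short sketch. The trace format, live-range classes and reference times below are hypothetical, not those of the paper's trace analysis system; the bound is simply the peak number of simultaneously live values in each class:

        from collections import defaultdict

        # Hypothetical flattened trace: (single-assignment value, reference time) events.
        trace = [("a0", 0), ("b0", 1), ("a0", 2), ("c0", 3), ("b0", 5), ("c0", 9)]

        # Live range of each value: first to last reference in the trace.
        first, last = {}, {}
        for value, t in trace:
            first.setdefault(value, t)
            last[value] = t

        # Histogram the values by live-range length; class boundaries are assumptions.
        classes = [(0, 2), (3, 8), (9, float("inf"))]
        histogram = defaultdict(list)
        for value in first:
            length = last[value] - first[value]
            for lo, hi in classes:
                if lo <= length <= hi:
                    histogram[(lo, hi)].append(value)
                    break

        # Bound per class: the peak number of simultaneously live values, i.e. the
        # local-memory words needed to keep every live value of that class resident.
        for cls, values in sorted(histogram.items()):
            events = [(first[v], 1) for v in values] + [(last[v] + 1, -1) for v in values]
            live = peak = 0
            for _, delta in sorted(events):
                live += delta
                peak = max(peak, live)
            print(f"live-range class {cls}: {len(values)} values, space bound = {peak} words")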

    A Compression Technique Exploiting References for Data Synchronization Services

    In a variety of network applications, there exists a significant amount of shared data between two end hosts. Examples include data synchronization services that replicate data from one node to another. Given that shared data may have high correlation with new data to transmit, we question how such shared data can be best utilized to improve the efficiency of data transmission. To answer this, we develop an encoding technique, SyncCoding, that effectively replaces bit sequences of the data to be transmitted with pointers to their matching bit sequences in the shared data, so-called references. By doing so, SyncCoding can reduce data traffic, speed up data transmission, and save transmission energy. Our evaluations of SyncCoding implemented in Linux show that it outperforms existing popular encoding techniques, Brotli, LZMA, Deflate, and Deduplication. The gains of SyncCoding over those techniques in terms of compressed data size are about 12.4%, 20.1%, 29.9%, and 61.2% in a cloud storage scenario, and about 78.3%, 79.6%, 86.1%, and 92.9% in a web browsing scenario, respectively.
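
    A rough sketch of the general idea of reference-based encoding: spans of the new data that also occur in the shared data are replaced by (offset, length) pointers into the shared copy, while unmatched bytes are emitted as literals. This is not SyncCoding's actual format or matching strategy, just a naive greedy illustration:

        def encode(new: bytes, shared: bytes, min_match: int = 4):
            """Greedy reference encoder: emit ('ref', offset, length) for spans of
            `new` found in `shared`, and ('lit', bytes) for everything else."""
            out, i, lit = [], 0, bytearray()
            while i < len(new):
                best_off, best_len = -1, 0
                # Naive search for the longest prefix of new[i:] inside shared.
                for off in range(len(shared)):
                    l = 0
                    while (i + l < len(new) and off + l < len(shared)
                           and new[i + l] == shared[off + l]):
                        l += 1
                    if l > best_len:
                        best_off, best_len = off, l
                if best_len >= min_match:
                    if lit:
                        out.append(("lit", bytes(lit)))
                        lit.clear()
                    out.append(("ref", best_off, best_len))
                    i += best_len
                else:
                    lit.append(new[i])
                    i += 1
            if lit:
                out.append(("lit", bytes(lit)))
            return out

        def decode(tokens, shared: bytes) -> bytes:
            return b"".join(t[1] if t[0] == "lit" else shared[t[1]:t[1] + t[2]] for t in tokens)

        shared = b"the quick brown fox jumps over the lazy dog"
        new = b"a quick brown fox met the lazy dog today"
        tokens = encode(new, shared)
        assert decode(tokens, shared) == new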

    New Bounds for Randomized List Update in the Paid Exchange Model

    We study the fundamental list update problem in the paid exchange model P^d. This cost model was introduced by Manasse, McGeoch and Sleator [M.S. Manasse et al., 1988] and Reingold, Westbrook and Sleator [N. Reingold et al., 1994]. Here, the given list of items may only be rearranged using paid exchanges; each swap of two adjacent items in the list incurs a cost of d. Free exchanges of items are not allowed. The model is motivated by the fact that, when executing search operations on a data structure, key comparisons are less expensive than item swaps. We develop a new randomized online algorithm that achieves an improved competitive ratio against oblivious adversaries. For large d, the competitiveness tends to 2.2442. Technically, the analysis of the algorithm relies on a new approach of partitioning request sequences and charging expected cost. Furthermore, we devise lower bounds on the competitiveness of randomized algorithms against oblivious adversaries. No such lower bounds were known before. Specifically, we prove that no randomized online algorithm can achieve a competitive ratio smaller than 2 in the partial cost model, where an access to the i-th item in the current list incurs a cost of i-1 rather than i. All algorithms proposed in the literature attain their competitiveness in the partial cost model. Moreover, we show that no randomized online algorithm can achieve a competitive ratio smaller than 1.8654 in the standard full cost model. Again, the lower bounds hold for large d.
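
    The cost accounting of the paid exchange model P^d can be made concrete with a small sketch. The policy below, which moves each requested item to the front by paid adjacent swaps, is only an illustration of how costs accrue in the full and partial cost models; it is not the paper's randomized algorithm:

        def serve(requests, items, d, full_cost=True):
            """Serve `requests` on list `items` in the P^d model.

            Access cost is the item's position (1-based in the full cost model,
            0-based in the partial cost model); each adjacent paid swap costs d.
            The policy simply moves the requested item to the front by paid
            exchanges, as an illustration only.
            """
            items = list(items)
            total = 0
            for x in requests:
                pos = items.index(x)                # 0-based position of the item
                total += pos + 1 if full_cost else pos
                total += d * pos                    # pos adjacent swaps to reach the front
                items.insert(0, items.pop(pos))     # perform the move-to-front
            return total

        # Example: three items, swap cost d = 5.
        print(serve("cbcacb", "abc", d=5))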

    Investigation on the automatic geo-referencing of archaeological UAV photographs by correlation with pre-existing ortho-photos

    We present a method for the automatic geo-referencing of archaeological photographs captured aboard unmanned aerial vehicles (UAVs), termed UPs. We do so with the help of pre-existing ortho-photo maps (OPMs) and digital surface models (DSMs). Typically, these pre-existing data sets are based on data that were captured at a widely different point in time. This renders the detection (and hence the matching) of homologous feature points in the UPs and OPMs infeasible, mainly due to temporal variations in vegetation and illumination. Facing this difficulty, we opt for the normalized cross correlation coefficient of perspectively transformed image patches as the measure of image similarity. Applying a threshold to this measure, we detect candidates for homologous image points, resulting in a distinctive but computationally intensive method. In order to lower computation times, we reduce the dimensionality and extents of the search space by making use of a priori knowledge of the data sets. By assigning terrain heights interpolated in the DSM to the image points found in the OPM, we generate control points. We introduce the respective observations into a bundle block, from which gross errors, i.e. false matches, are eliminated during its robust adjustment. A test of our approach on a UAV image data set demonstrates its potential and raises hope to successfully process large image archives.
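
    A minimal sketch of the similarity measure described above: the normalized cross correlation coefficient of two equally sized patches, thresholded to accept a candidate match. The patch size, threshold value and synthetic patch contents are assumptions for illustration only:

        import numpy as np

        def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
            """Normalized cross correlation coefficient of two equally sized patches,
            in [-1, 1]; invariant to linear brightness and contrast changes."""
            a = patch_a.astype(float) - patch_a.mean()
            b = patch_b.astype(float) - patch_b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        # Hypothetical 21x21 patches: one from a perspectively transformed UAV photo,
        # one from the ortho-photo map, with similar content but different lighting.
        rng = np.random.default_rng(0)
        up_patch = rng.random((21, 21))
        opm_patch = 0.8 * up_patch + 0.1 + 0.05 * rng.random((21, 21))

        threshold = 0.7   # assumed acceptance threshold, not the paper's value
        if ncc(up_patch, opm_patch) > threshold:
            print("candidate homologous point accepted")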

    Astrometry with the Keck-Interferometer: the ASTRA project and its science

    The sensitivity and astrometry upgrade ASTRA of the Keck Interferometer is introduced. After a brief overview of the underlying interferometric principles, the technology and concepts of the upgrade are presented. The interferometric dual-field technology of ASTRA will provide the KI with the means to observe two objects simultaneously and to measure the distance between them with a precision eventually better than 100 uas. This astrometric functionality of ASTRA will add a unique observing tool to fields of astrophysical research as diverse as exo-planetary kinematics, binary astrometry, and the investigation of stars accelerated by the massive black hole in the center of the Milky Way, as discussed in this contribution.
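
    A back-of-the-envelope sketch of the dual-field astrometric principle mentioned above: the differential delay between the two fields is roughly ΔOPD ≈ B·Δθ, so reaching 100 uas over a baseline of about 85 m (the approximate Keck Interferometer baseline, taken here as an assumption) requires controlling the differential optical path to a few tens of nanometres:

        import math

        MAS_PER_RAD = 180 / math.pi * 3600 * 1e3       # milliarcseconds per radian
        baseline_m = 85.0                              # approximate KI baseline (assumption)
        precision_uas = 100.0                          # targeted astrometric precision

        delta_theta_rad = precision_uas * 1e-3 / MAS_PER_RAD   # 100 uas in radians
        delta_opd_m = baseline_m * delta_theta_rad              # required delay accuracy
        print(f"required differential OPD accuracy: {delta_opd_m * 1e9:.0f} nm")
        # -> roughly 40 nm of differential optical path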

    GIS and Network Analysis

    Both geographic information systems (GIS) and network analysis are burgeoning fields, characterised by rapid methodological and scientific advances in recent years. A geographic information system (GIS) is a digital computer application designed for the capture, storage, manipulation, analysis and display of geographic information. Geographic location is the element that distinguishes geographic information from all other types of information. Without location, data are termed non-spatial and would have little value within a GIS. Location is thus the basis for many of the benefits of GIS: the ability to map, the ability to measure distances and the ability to tie different kinds of information together because they refer to the same place (Longley et al., 2001). GIS-T, the application of geographic information science and systems to transportation problems, represents one of the most important application areas of GIS technology today. While the strengths of traditional GIS formulations lie in mapping, display and geodata processing, GIS-T requires new data structures to represent the complexities of transportation networks and to perform different network algorithms in order to fulfil its potential in the field of logistics and distribution logistics. This paper addresses these issues as follows. The section that follows discusses data models and design issues specifically oriented to GIS-T, and identifies several improvements of the traditional network data model that are needed to support advanced network analysis in a ground transportation context. These improvements include turn-tables, dynamic segmentation, linear referencing, traffic lines and non-planar networks. Most commercial GIS software vendors have extended their basic GIS data model during the past two decades to incorporate these innovations (Goodchild, 1998). The third section shifts attention to network routing problems that have become prominent in GIS-T: the travelling salesman problem, the vehicle routing problem and the shortest path problem with time windows, a problem that occurs as a subproblem in many time-constrained routing and scheduling issues of practical importance. Such problems are conceptually simple but mathematically complex and challenging. The focus is on theory and algorithms for solving these problems. The paper concludes with some final remarks.
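
    As an illustration of one routing building block named above, the following sketch runs Dijkstra's shortest path algorithm on a toy road network extended with a turn-penalty table of the kind a GIS-T turn table encodes; all node names, costs and penalties are made up:

        import heapq

        # Hypothetical road network: node -> list of (neighbour, travel cost).
        graph = {
            "A": [("B", 4), ("C", 2)],
            "B": [("C", 1), ("D", 5)],
            "C": [("B", 3), ("D", 8)],
            "D": [],
        }
        # Hypothetical turn table: (from_node, via_node, to_node) -> extra penalty.
        turn_penalty = {("A", "C", "B"): 6}

        def shortest_path(source, target):
            """Dijkstra over states (arrived-from node, node) so turn penalties apply."""
            pq = [(0, source, None)]          # (cost so far, node, node we arrived from)
            best = {}
            while pq:
                cost, node, came_from = heapq.heappop(pq)
                if node == target:
                    return cost
                if best.get((came_from, node), float("inf")) <= cost:
                    continue
                best[(came_from, node)] = cost
                for nxt, w in graph[node]:
                    extra = turn_penalty.get((came_from, node, nxt), 0)
                    heapq.heappush(pq, (cost + w + extra, nxt, node))
            return float("inf")

        print(shortest_path("A", "D"))   # 9 via A-B-D once the A-C-B turn penalty counts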