
    An Algorithm for Data Reorganization in a Multi-dimensional Index

    In spatial databases, data are associated with spatial coordinates and are retrieved based on spatial proximity. A spatial database uses spatial indexes to optimize spatial queries. An essential ingredient for efficient spatial query processing is the spatial clustering and reorganization of data. Traditional clustering algorithms and reorganization utilities perform poorly on spatial data. To address this, we have developed an algorithm that converts a two-dimensional spatial index value into a single-dimensional value, after which the spatial data are reorganized. This report describes the algorithm as well as various experiments that validate its effectiveness.
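
    The abstract does not spell out which two-dimensional-to-one-dimensional mapping the algorithm uses. A common choice for collapsing two spatial coordinates into a single orderable value is a Z-order (Morton) encoding, sketched below in Python purely as an illustration of the general technique; the function names, record layout, and 16-bit grid resolution are assumptions, not details from the report.

        def interleave_bits(x: int, y: int, bits: int = 16) -> int:
            """Interleave the low `bits` bits of x and y into a Z-order (Morton) key."""
            key = 0
            for i in range(bits):
                key |= ((x >> i) & 1) << (2 * i)      # even bit positions take x
                key |= ((y >> i) & 1) << (2 * i + 1)  # odd bit positions take y
            return key

        def reorganize(records, bits: int = 16):
            """Sort records by the Z-order key of their (x, y) grid coordinates so that
            records that are close in 2-D space end up close together in 1-D order."""
            return sorted(records, key=lambda r: interleave_bits(r["x"], r["y"], bits))

        # Hypothetical usage: nearby points sort next to each other.
        points = [{"x": 5, "y": 9}, {"x": 5, "y": 8}, {"x": 200, "y": 3}]
        print([(p["x"], p["y"]) for p in reorganize(points)])

    Sorting by such a key is what makes a bulk reorganization pass effective: a single sequential rewrite of the data file places spatially adjacent records on nearby pages.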

    U.S. Multinational Services Companies: Effects of Foreign Affiliate Activity on U.S. Employment

    This working paper examines the effect that U.S. services firms’ establishment abroad has on domestic employment. Whereas many papers have explored the employment effects of foreign direct investment in manufacturing, few have explored the effects of services investment. We find that services multinationals’ activities abroad increase U.S. employment by promoting intrafirm exports from parent firms to their foreign affiliates. These exports support jobs at the parents’ headquarters and throughout their U.S. supply chains. Our findings are principally based on economic research and econometric analysis performed by Commission staff, services trade and investment data published by the Bureau of Economic Analysis, and employment data collected by the Bureau of Labor Statistics. In the aggregate, we find that services activities abroad support nearly 700,000 U.S. jobs. Case studies of U.S. multinationals in the banking, computer, logistics, and retail industries illustrate the global dimensions of U.S. MNC operations and identify the domestic employment effects associated with foreign affiliate activity in each industry.

    Impliance: A Next Generation Information Management Appliance

    While database technology has been remarkably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. This raises the question: "If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems?" In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to store, manage, and uniformly query all data, not just structured records; (2) to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data - unstructured as well as structured - in a uniform way; (c) achieving scale-out by exploiting simple, massively parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises. Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works, and make commercial use of the work, but you must attribute the work to the author and CIDR 2007, 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA.
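
    The abstract gives no concrete interfaces for Impliance, so the following Python sketch is only a loose illustration of ideas (b) and (c): every item, structured record or free text, is held in one uniform representation, and a query is answered by fanning the same scan out over data partitions. All names, fields, and the partitioning scheme here are hypothetical.

        from concurrent.futures import ProcessPoolExecutor

        # Uniform item format (assumed): structured fields are optional,
        # free text always lives under "text".
        PARTITIONS = [
            [{"type": "order", "customer": "ACME", "total": 120.0, "text": "rush delivery"}],
            [{"type": "email", "text": "ACME asked about the delayed shipment"}],
        ]

        def scan_partition(args):
            """Scan one partition for items whose text mentions the keyword."""
            partition, keyword = args
            return [item for item in partition if keyword.lower() in item["text"].lower()]

        def query(keyword):
            """Run the same scan over every partition in parallel; scaling out means
            adding partitions (and workers), not changing the query logic."""
            with ProcessPoolExecutor() as pool:
                parts = pool.map(scan_partition, [(p, keyword) for p in PARTITIONS])
            return [item for part in parts for item in part]

        if __name__ == "__main__":
            print(query("acme"))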

    Sam2bam: High-Performance Framework for NGS Data Preprocessing Tools

    This paper introduces a high-throughput software framework called sam2bam that enables users to significantly speed up pre-processing of next-generation sequencing data. sam2bam is especially efficient on single-node, multi-core, large-memory systems. It can reduce the runtime of data pre-processing for marking duplicate reads on a single-node system by 156-186x compared with de facto standard tools. sam2bam consists of parallel software components that can fully utilize multiple processors, available memory, high-bandwidth storage, and hardware compression accelerators when available. As a basic feature, sam2bam converts between well-known genome file formats, from SAM to BAM. Additional features such as analyzing, filtering, and converting the input data are provided by plug-in tools (e.g., duplicate marking) that can be attached to sam2bam at runtime. We demonstrated that sam2bam could reduce the runtime of NGS data pre-processing from about two hours to about one minute for a whole-exome data set on a 16-core single-node system using up to 130 GB of memory, and from about 20 hours to about nine minutes for whole-genome sequencing data on the same system using up to 711 GB of memory.
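
    The abstract mentions duplicate marking as an example plug-in but does not describe sam2bam's actual criteria, so the short Python sketch below only illustrates the basic idea of flagging reads that share an alignment signature. The (reference, position, strand) key and the field names are simplifications assumed for illustration; production tools use richer keys (clipping-adjusted 5' ends, mate coordinates, library).

        def mark_duplicates(reads):
            """Flag every read after the first that shares (reference, position, strand)."""
            seen = set()
            for read in reads:
                key = (read["rname"], read["pos"], read["strand"])
                read["duplicate"] = key in seen
                seen.add(key)
            return reads

        # Hypothetical reads: r2 maps to the same place as r1 and gets flagged.
        reads = [
            {"name": "r1", "rname": "chr1", "pos": 1000, "strand": "+"},
            {"name": "r2", "rname": "chr1", "pos": 1000, "strand": "+"},
            {"name": "r3", "rname": "chr1", "pos": 2000, "strand": "-"},
        ]
        for read in mark_duplicates(reads):
            print(read["name"], read["duplicate"])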