
    An Analysis on the Impact of Export Rebate to Export Structure in China

    Practical experience has shown that export rebate policy plays an important role in promoting a country's foreign trade, and export trade in particular. Under the tax-neutrality principle of "refunding all that should be refunded", an export rebate policy that applies differentiated rebate rates as circumstances require has a positive effect on optimizing the structure of export commodities and of industry. Since China's current export rebate policy was formally introduced in 1985, the state has adjusted the export rebate rates many times in response to changing economic conditions, and the degree of adjustment has varied across industries, so the policy's impact has likewise differed markedly from one industry to another. In this thesis, the author first reviews the development of export rebate policies abroad and their evolution in China, in particular the adjustments to rebate rates after the 1994 tax reform, combined with a qualitative analysis of changes in the structure of China's export commodities; then, based on... Degree: Master of Economics. Department and major: Department of Public Finance, School of Economics; Public Finance (including Taxation). Student ID: 1552007115009

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude larger -- and in some cases more -- than what is available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will greatly aid the interpretation of results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs: a) an established long-term plan for access to ASCR computational and data resources; b) the ability to map workflows onto HPC resources; c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members; d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities; and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision.

    Debugging Data Transfers in CMS

    The CMS experiment at CERN is preparing for LHC data taking through several computing preparation activities. In early 2007 a traffic-load-generator infrastructure for distributed data transfer tests was designed and deployed to equip the WLCG tiers that support the CMS virtual organization with a means for debugging, load-testing and commissioning data transfer routes among CMS computing centres. The LoadTest is based upon PhEDEx as a reliable, scalable data set replication system. The Debugging Data Transfers (DDT) task force was created to coordinate the debugging of the data transfer links. The task force aimed to commission the most crucial transfer routes among CMS tiers by designing and enforcing a clear procedure to debug problematic links. This procedure was designed to move a link from a debugging phase, in a separate and independent environment, to the production environment once a set of agreed conditions had been met for that link. The goal was to deliver working transfer routes, one by one, to the CMS data operations team. The preparation, activities and experience of the DDT task force within the CMS experiment are discussed. Common technical problems and challenges encountered during the lifetime of the task force in debugging data transfer links in CMS are explained and summarized.

    Performance of the CMS Cathode Strip Chambers with Cosmic Rays

    The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device in the CMS endcaps. Their performance has been evaluated using data taken during a cosmic ray run in fall 2008. Measured noise levels are low, with the number of noisy channels well below 1%. The coordinate resolution was measured for all types of chambers and falls in the range of 47 to 243 microns. The efficiencies of the local charged-track triggers and of hit and segment reconstruction were measured and are above 99%. The timing resolution per layer is approximately 5 ns.

    Early experience on using glideinWMS in the cloud

    Abstract. Cloud computing is steadily gaining traction in both the commercial and research worlds, and there seems to be significant potential for the HEP community as well. However, most of the tools used in the HEP community are tailored to the current computing model, which is based on grid computing. One such tool is glideinWMS, a pilot-based workload management system. In this paper we present both the code changes that were needed to make it work in the cloud world and the architectural problems we encountered and how we solved them. Benchmarks comparing grid, Magellan, and Amazon EC2 resources are also included.

    Performance and Operation of the CMS Electromagnetic Calorimeter

    The operation and general performance of the CMS electromagnetic calorimeter using cosmic-ray muons are described. These muons were recorded after the closure of the CMS detector in late 2008. The calorimeter is made of lead tungstate crystals, and the overall status of the 75848 channels corresponding to the barrel and endcap detectors is reported. The stability of crucial operational parameters, such as high voltage, temperature and electronic noise, is summarised, and the performance of the light monitoring system is presented.

    The USDA Barley Core Collection: Genetic Diversity, Population Structure, and Potential for Genome-Wide Association Studies

    New sources of genetic diversity must be incorporated into plant breeding programs if they are to continue increasing grain yield and quality, and tolerance to abiotic and biotic stresses. Germplasm collections provide a source of genetic and phenotypic diversity, but characterization of these resources is required to increase their utility for breeding programs. We used a barley SNP iSelect platform with 7,842 SNPs to genotype 2,417 barley accessions sampled from the USDA National Small Grains Collection of 33,176 accessions. Most of the accessions in this core collection are categorized as landraces or cultivars/breeding lines and were obtained from more than 100 countries. Both STRUCTURE and principal component analysis identified five major subpopulations within the core collection, mainly differentiated by geographical origin and spike row number (an inflorescence architecture trait). Different patterns of linkage disequilibrium (LD) were found across the barley genome, and many regions of high LD contained traits involved in domestication and breeding selection. The genotype data were used to define 'mini-core' sets of accessions capturing the majority of the allelic diversity present in the core collection. These 'mini-core' sets can be used for evaluating traits that are difficult or expensive to score. Genome-wide association studies (GWAS) of 'hull cover', 'spike row number', and 'heading date' demonstrate the utility of the core collection for locating genetic factors determining important phenotypes. The GWAS results were referenced to a new barley consensus map containing 5,665 SNPs. Our results demonstrate that GWAS and high-density SNP genotyping are effective tools for plant breeders interested in accessing genetic diversity in large germplasm collections.

    Metagenomic analysis of viruses associated with maize lethal necrosis in Kenya

    Background: Maize lethal necrosis is caused by a synergistic co-infection of Maize chlorotic mottle virus (MCMV) and a specific member of the Potyviridae, such as Sugarcane mosaic virus (SCMV), Wheat streak mosaic virus (WSMV) or Johnson grass mosaic virus (JGMV). Typical maize lethal necrosis symptoms include severe yellowing and leaf drying from the edges. In Kenya, we detected plants showing typical and atypical symptoms. Both groups of plants often tested negative for SCMV by ELISA. Methods: We used next-generation sequencing to identify viruses associated with maize lethal necrosis in Kenya through a metagenomic analysis. Symptomatic and asymptomatic leaf samples were collected from maize and sorghum in sixteen counties. Results: Complete and partial genomes were assembled for MCMV, SCMV, Maize streak virus (MSV) and Maize yellow dwarf virus-RMV (MYDV-RMV). These four viruses (MCMV, SCMV, MSV and MYDV-RMV) were found together in 30 of 68 samples. A geographic analysis showed that these viruses are widely distributed in Kenya. Phylogenetic analyses of nucleotide sequences showed that MCMV, MYDV-RMV and MSV are similar to isolates from East Africa and other parts of the world. Single nucleotide polymorphism, nucleotide and polyprotein sequence alignments identified three genetically distinct groups of SCMV in Kenya. Variation mapped to sequences at the border of NIb and the coat protein. Partial genome sequences were obtained for four other potyviruses and one polerovirus. Conclusion: Our results uncover the complexity of the maize lethal necrosis epidemic in Kenya. MCMV, SCMV, MSV and MYDV-RMV are widely distributed and infect both maize and sorghum. The SCMV population in Kenya is diverse and consists of numerous strains that are genetically different from isolates in other parts of the world. Several potyviruses, and possibly poleroviruses, are also involved.