    Effective Tax Rates in Transition

    The paper addresses the question of effective tax rates for Russian economic sectors in transition. It presents a detailed account of the fiscal environment in 1995 and compares statutory obligations with reported tax liabilities. The paper finds that taxation did not contribute to the recession, as some observers believed at the time. It extends this research by questioning the role that inflation played in distorting the revenue structure. When the costs of intermediate inputs are adjusted for inflation, many sectors show negative residual revenue, which is indicative of recession. Yet modeling tax changes to correct the situation does not produce positive results, for the tax share in the cost structure of many sectors is small and cannot compensate for inflation.
    Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/39762/3/wp378.pd
    Keywords: taxation in transition, Russian fiscal system
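
    The paper's inflation adjustment can be illustrated with a toy calculation. The sketch below does not use the paper's data: the sector figures and the single deflation factor are hypothetical. It only shows how residual revenue turns negative once intermediate inputs are restated at inflation-adjusted prices, and why removing a comparatively small tax bill fails to close the gap.

```python
# Toy illustration with hypothetical numbers (not the paper's data):
# restate intermediate-input costs for inflation, then recompute each
# sector's residual revenue with and without its tax bill.

sectors = {
    # name: (revenue, input costs at historical prices, taxes paid)
    "light industry": (100.0, 75.0, 5.0),
    "machinery":      (100.0, 80.0, 6.0),
    "fuel":           (100.0, 55.0, 12.0),
}
inflation_factor = 1.35  # assumed price growth between input purchase and sale

for name, (revenue, inputs, taxes) in sectors.items():
    real_inputs = inputs * inflation_factor   # replacement-cost inputs
    residual = revenue - real_inputs - taxes  # inflation-adjusted margin
    no_tax = residual + taxes                 # margin even if taxes vanish
    print(f"{name:15s} residual: {residual:7.2f}  without taxes: {no_tax:7.2f}")
```

    For two of the three hypothetical sectors the margin stays negative even with the entire tax bill refunded, which is the paper's point about the small tax share in the cost structure.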

    Current and future graphics requirements for LaRC and proposed future graphics system

    The findings of an investigation to assess the current and future graphics requirements of LaRC researchers, with respect to both hardware and software, are presented. A graphics system designed to meet these requirements is proposed.

    Graphical workstation capability for reliability modeling

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the Hybrid Automated Reliability Predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes, either through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. To account for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
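
    The fault-tree-to-Markov-chain conversion is the computational core here. As a minimal sketch of that final solution step (this is not HARP code), the example below models a hypothetical two-component parallel system as a continuous-time Markov chain and evaluates its reliability at a mission time with a matrix exponential; the failure rates and mission time are assumed values.

```python
# Minimal sketch of solving a Markov reliability model (not HARP code).
# Hypothetical two-component parallel system: it fails only when both
# components have failed.
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 1e-4, 2e-4  # assumed component failure rates (per hour)

# States: 0 = both up, 1 = component 1 down, 2 = component 2 down,
# 3 = system failed (absorbing). Q[i, j] is the rate from state i to j.
Q = np.array([
    [-(lam1 + lam2), lam1,  lam2,  0.0],
    [0.0,           -lam2,  0.0,   lam2],
    [0.0,            0.0,  -lam1,  lam1],
    [0.0,            0.0,   0.0,   0.0],
])

t = 1000.0                            # mission time in hours
p0 = np.array([1.0, 0.0, 0.0, 0.0])   # start with both components up
pt = p0 @ expm(Q * t)                 # state probabilities at time t

reliability = 1.0 - pt[3]             # state 3 is absorbing, so this is
print(f"R({t:.0f} h) = {reliability:.6f}")  # the probability of no system failure
```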

    The role of graphics super-workstations in a supercomputing environment

    A new class of very powerful workstations has recently become available, integrating near-supercomputer computational performance with very powerful, high-quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding and communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).

    The Visvalingam algorithm: metrics, measures and heuristics

    This paper provides the background necessary for a clear understanding of forthcoming papers relating to the Visvalingam algorithm for line generalisation, for example on the testing and usage of its implementations. It distinguishes the algorithm from implementation-specific issues to explain why it is possible to get inconsistent but equally valid output from different implementations. By tracing relevant developments within the now-disbanded Cartographic Information Systems Research Group (CISRG) of the University of Hull, it explains: a) why a partial metric-driven implementation was, and still is, sufficient for many projects but not for others; b) why the Effective Area (EA) is a measure derived from a metric; c) why this measure (EA) may serve as a heuristic indicator for in-line feature segmentation and model-based generalisation; and d) how metrics may be combined to change the order of point elimination. The issues discussed in this paper also apply to the use of other metrics. It is hoped that the background and guidance provided here will enable others to participate in further research based on the algorithm.
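
    To make the distinction between the metric and the derived measure concrete, the sketch below (not from the paper or any CISRG implementation) shows the basic area-driven elimination at the heart of the algorithm: each interior point is ranked by the area of the triangle it forms with its current neighbours, and the least significant point is removed first. Production implementations use a priority queue and record each point's Effective Area at elimination time; this quadratic version only illustrates the elimination order.

```python
# Basic Visvalingam-style point elimination (illustrative O(n^2) sketch,
# not a CISRG implementation): repeatedly drop the interior point whose
# triangle with its current neighbours has the smallest area.

def triangle_area(a, b, c):
    """Area of the triangle spanned by points a, b, c."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def simplify(points, keep):
    """Reduce a polyline to `keep` points; endpoints are never removed."""
    pts = list(points)
    while len(pts) > max(keep, 2):
        # Area contributed by every interior point under the current shape.
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        # Eliminate the point contributing least to the line's shape.
        del pts[areas.index(min(areas)) + 1]
    return pts

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(simplify(line, keep=4))  # drops the two near-collinear points
```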

    Qubit-Qutrit Separability-Probability Ratios

    Paralleling our recent computationally-intensive (quasi-Monte Carlo) work for the case N=4 (quant-ph/0308037), we undertake for N=6 the task of computing, to high numerical accuracy, the formulas of Sommers and Zyczkowski (quant-ph/0304041) for the (N^2-1)-dimensional volume and (N^2-2)-dimensional hyperarea of the (separable and nonseparable) N x N density matrices, based on the Bures (minimal monotone) metric -- and also their analogous formulas (quant-ph/0302197) for the (non-monotone) Hilbert-Schmidt metric. With the same seven billion well-distributed ("low-discrepancy") sample points, we estimate the unknown volumes and hyperareas based on five additional (monotone) metrics of interest, including the Kubo-Mori and Wigner-Yanase. Further, we estimate all of these seven volume and seven hyperarea (unknown) quantities when restricted to the separable density matrices. The ratios of separable volumes (hyperareas) to separable plus nonseparable volumes (hyperareas) yield estimates of the separability probabilities of generically rank-six (rank-five) density matrices. The (rank-six) separability probabilities obtained from the 35-dimensional volumes appear to be -- independently of the metric employed (each of the seven inducing Haar measure) -- twice as large as those (rank-five ones) based on the 34-dimensional hyperareas. Accepting such a relationship, we fit exact formulas to the estimates of the Bures and Hilbert-Schmidt separable volumes and hyperareas. (An additional estimate -- 33.9982 -- of the ratio of the rank-6 Hilbert-Schmidt separability probability to the rank-4 one is also quite clearly close to integral.) The doubling relationship also appears to hold for the N=4 case for the Hilbert-Schmidt metric, but not for the others.
    Comment: 36 pages, 15 figures, 11 tables; final PRA version, with a new last paragraph presenting qubit-qutrit probability ratios disaggregated by the two distinct forms of partial transposition.
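
    The sampling pipeline behind such estimates can be illustrated with ordinary Monte Carlo. The sketch below is not the paper's quasi-Monte Carlo procedure: it draws 6 x 6 density matrices from the Hilbert-Schmidt measure only (via the Ginibre ensemble) and applies the Peres-Horodecki positive-partial-transpose test, which is necessary and sufficient for separability in the qubit-qutrit (2 x 3) case. The sample count and numerical tolerance are arbitrary choices.

```python
# Monte Carlo estimate of the Hilbert-Schmidt qubit-qutrit separability
# probability (illustrative; the paper uses quasi-Monte Carlo sampling
# and several further metrics).
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(n=6):
    """Hilbert-Schmidt-distributed rho = G G^dagger / tr(G G^dagger)."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def partial_transpose_qubit(rho):
    """Transpose the qubit factor of a 2 x 3 composite system."""
    r = rho.reshape(2, 3, 2, 3)                   # indices (i, a, j, b)
    return r.transpose(2, 1, 0, 3).reshape(6, 6)  # swap i <-> j

trials, separable = 100_000, 0
for _ in range(trials):
    rho = random_density_matrix()
    # PPT test: the partial transpose must have no negative eigenvalues.
    if np.linalg.eigvalsh(partial_transpose_qubit(rho)).min() >= -1e-12:
        separable += 1

print(f"estimated separability probability: {separable / trials:.4f}")
```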

    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks that support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.
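
    The "high statistics" requirement translates directly into data volume, which a back-of-envelope calculation makes plain. The numbers below are assumptions for illustration, not figures from the text: a counting measurement of N events carries a relative statistical error of roughly 1/sqrt(N).

```python
# Back-of-envelope illustration (assumed numbers, not from the text) of
# why quantum mechanical processes demand high statistics: a counted
# rate has relative statistical error ~ 1 / sqrt(N).

target_precision = 0.01                        # want a 1% measurement
signal_needed = (1 / target_precision) ** 2    # => 10,000 signal events
print(f"signal events needed: {signal_needed:.0f}")

# If, say, only one collision in a million yields the process of
# interest, the experiment must record and sift ~1e10 collisions.
selectivity = 1e-6
print(f"collisions to process: {signal_needed / selectivity:.0e}")
```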