
    A subtraction scheme for computing QCD jet cross sections at NNLO: regularization of real-virtual emission

    We present a subtraction scheme for computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy in perturbative QCD. In this second part we deal with the regularization of the real-virtual contribution to the NNLO correction.
    Comment: 32 pages, LaTeX file, uses pstrick

    A subtraction scheme for computing QCD jet cross sections at NNLO: integrating the doubly unresolved subtraction terms

    We finish the definition of a subtraction scheme for computing NNLO corrections to QCD jet cross sections. In particular, we perform the integration of the soft-type contributions to the doubly unresolved counterterms via the method of Mellin-Barnes representations. With these final ingredients in place, the definition of the scheme is complete and the computation of fully differential rates for electron-positron annihilation into two and three jets at NNLO accuracy becomes feasible.
    Comment: 33 pages, references added, exposition expanded, minor typos corrected. Version published in JHE

    Idempotent I/O for safe time travel

    Debuggers for logic programming languages have traditionally had a capability most other debuggers did not: the ability to jump back to a previous state of the program, effectively travelling back in time in the history of the computation. This "retry" capability is very useful, allowing programmers to examine in detail a part of the computation that they previously stepped over. Unfortunately, it also creates a problem: while the debugger may be able to restore the previous values of variables, it cannot restore the part of the program's state that is affected by I/O operations. If the part of the computation being jumped back over performs I/O, then the program will perform these I/O operations twice, which will result in unwanted effects ranging from the benign (e.g. output appearing twice) to the fatal (e.g. trying to close an already closed file). We present a simple mechanism for ensuring that every I/O action called for by the program is executed at most once, even if the programmer asks the debugger to travel back in time from after the action to before the action. The overhead of this mechanism is low enough, and can be controlled well enough, to make it practical for debugging computations that do significant amounts of I/O.
    Comment: In M. Ronsse, K. De Bosschere (eds), proceedings of the Fifth International Workshop on Automated Debugging (AADEBUG 2003), September 2003, Ghent. cs.SE/030902
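The mechanism this abstract describes can be pictured as a log of completed I/O actions indexed by their position in the I/O sequence: the first execution of an action records its result, and any re-execution after a "retry" returns the recorded result instead of redoing the action. The following is a minimal sketch of that idea; the class and method names are illustrative, not the paper's actual implementation.

```python
class IdempotentIO:
    """Record the result of each I/O action the first time it runs,
    keyed by its position in the I/O sequence; on re-execution after
    a debugger "retry", replay the recorded result instead."""

    def __init__(self):
        self.log = []      # results of actions already performed
        self.counter = 0   # current position in the I/O sequence

    def perform(self, action):
        if self.counter < len(self.log):
            result = self.log[self.counter]  # replay: action already done
        else:
            result = action()                # first execution: really do it
            self.log.append(result)
        self.counter += 1
        return result

    def travel_back(self, position):
        # Time travel rewinds only the position, never the log,
        # so the real side effect cannot happen a second time.
        self.counter = position


io = IdempotentIO()
effects = []

def write_line():
    effects.append("hello")  # stands in for a real side effect
    return "ok"

first = io.perform(write_line)
io.travel_back(0)            # debugger "retry": jump to before the write
second = io.perform(write_line)
```

After the retry, `second` equals `first` and `effects` still holds a single entry: the action was executed exactly once, as the mechanism requires.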

    Subtraction method of computing QCD jet cross sections at NNLO accuracy

    We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement such that they can be defined at any order in perturbation theory. We give a status report on the application of the method to computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.
    Comment: 6 pages, talk given at the conference "Loops and Legs in Quantum Field Theory", Sondershausen, April 200

    A subtraction scheme for computing QCD jet cross sections at NNLO: integrating the subtraction terms I

    In previous articles we outlined a subtraction scheme for regularizing doubly-real emission and real-virtual emission in next-to-next-to-leading order (NNLO) calculations of jet cross sections in electron-positron annihilation. In order to find the NNLO correction, these subtraction terms have to be integrated over the factorized unresolved phase space and combined with the two-loop corrections. In this paper we perform the integration of all one-parton unresolved subtraction terms.

    Public services in Hungary


    Price Rigidity and Strategic Uncertainty: An Agent-based Approach

    The phenomenon of infrequent price changes has troubled economists for decades. Intuitively one feels that for most price-setters there exists a range of inaction, i.e. a substantial measure of the states of the world, within which they do not wish to modify prevailing prices. However, basic economics tells us that when marginal costs change it is rational to change prices, too. Economists wishing to maintain rationality of price-setters resorted to fixed price adjustment costs as an explanation for price rigidity. In this paper we propose an alternative explanation, without recourse to any sort of physical adjustment cost, by putting strategic interaction at the center stage of our analysis. Price-making is treated as a repeated oligopoly game. The traditional analysis of these games cannot pinpoint any equilibrium as a reasonable "solution" of the strategic situation. Thus there is genuine strategic uncertainty, a situation where decision-makers are uncertain of the strategies of other decision-makers. Hesitation may lead to inaction. To model this situation we follow the style of agent-based models, by modelling firms that change their pricing strategies following an evolutionary algorithm. Our results are promising. In addition to reproducing the known negative relationship between price rigidity and the level of general inflation, our model exhibits several features observed in real data. Moreover, most prices fall into the theoretical "range" without explicitly building this property into strategies.
    Keywords: Agent-based modeling, Evolutionary algorithm, Price rigidity, Social learning, Strategic Uncertainty
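The modelling style the abstract describes, firms repeatedly setting prices and revising their strategies by an evolutionary update rule, can be illustrated with a toy duopoly. The payoff function and update rule below are deliberately simple illustrations, not the paper's actual model: the cheaper firm captures the market, and each round the worse-performing firm imitates the better one with a small random mutation.

```python
import random

def profit(own_price, rival_price, cost=1.0):
    # Toy duopoly payoff (hypothetical form): the cheaper firm
    # captures the whole market; a tie splits it evenly.
    if own_price < rival_price:
        return own_price - cost
    if own_price > rival_price:
        return 0.0
    return (own_price - cost) * 0.5

def evolve(prices, rounds=200, mutation=0.05):
    """Repeated pricing game with an evolutionary update: each round
    the losing firm imitates the winner's price, plus a small mutation."""
    history = []
    for _ in range(rounds):
        payoffs = [profit(prices[0], prices[1]),
                   profit(prices[1], prices[0])]
        loser = payoffs.index(min(payoffs))
        winner = 1 - loser
        # Imitate-and-mutate; prices never fall below marginal cost.
        prices[loser] = max(1.0,
                            prices[winner] + random.uniform(-mutation, mutation))
        history.append(tuple(prices))
    return history

random.seed(0)
hist = evolve([2.0, 1.5])
```

Because the winner's price is left untouched each round, prices change only through the loser's imitation step, which is one crude way such a model can produce stretches of unchanged prices.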

    Region-based memory management for Mercury programs

    Region-based memory management (RBMM) is a form of compile-time memory management, well known from the functional programming world. In this paper we describe our work on implementing RBMM for the logic programming language Mercury. One interesting point about Mercury is that it is designed with strong type, mode, and determinism systems. These systems not only provide Mercury programmers with several direct software engineering benefits, such as self-documenting code and clear program logic, but also give language implementors a large amount of information that is useful for program analyses. In this work, we make use of this information to develop program analyses that determine the distribution of data into regions and transform Mercury programs by inserting into them the necessary region operations. We prove the correctness of our program analyses and transformation. To execute the annotated programs, we have implemented runtime support that tackles the two main challenges posed by backtracking. First, backtracking can require regions removed during forward execution to be "resurrected"; and second, any memory allocated during a computation that has been backtracked over must be recovered promptly, without waiting for the regions involved to come to the end of their life. We describe in detail our solution to both of these problems. We study how our RBMM system performs on a selection of benchmark programs, including some well-known difficult cases for RBMM. Even with these difficult cases, our RBMM-enabled Mercury system obtains clearly faster runtimes for 15 out of 18 benchmarks compared to the base Mercury system with its Boehm runtime garbage collector, with an average runtime speedup of 24% and an average reduction in memory requirements of 95%. In fact, our system achieves optimal memory consumption in some programs.
    Comment: 74 pages, 23 figures, 11 tables. A shorter version of this paper, without proofs, is to appear in the journal Theory and Practice of Logic Programming (TPLP
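The "region operations" the abstract refers to are the basic primitives any RBMM runtime provides during forward execution: create a region, allocate objects into it, and remove the whole region at once. The sketch below illustrates just these three primitives; the names are illustrative and do not reflect Mercury's actual runtime API, and it deliberately omits the backtracking support (region resurrection and prompt recovery) that the paper addresses.

```python
class RegionRuntime:
    """Sketch of the basic forward-execution RBMM primitives:
    create a region, allocate into it, remove it wholesale."""

    def __init__(self):
        self.regions = {}
        self.next_id = 0

    def create_region(self):
        rid = self.next_id
        self.next_id += 1
        self.regions[rid] = []   # a region: a growable block of objects
        return rid

    def alloc(self, rid, obj):
        # Allocation into a region is a cheap append;
        # there is no per-object free.
        self.regions[rid].append(obj)
        return obj

    def remove_region(self, rid):
        # All objects in the region are reclaimed at once,
        # which is what makes RBMM cheap when region lifetimes
        # are inferred accurately by the compile-time analyses.
        del self.regions[rid]


rt = RegionRuntime()
r = rt.create_region()
rt.alloc(r, ("cons", 1))
rt.alloc(r, ("cons", 2))
rt.remove_region(r)          # both allocations freed in one operation
```

The compile-time analyses the paper describes decide where calls like these are inserted into the program, so that each region is removed as soon as its contents are provably dead.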