92,591 research outputs found

    Julia: A Fresh Approach to Numerical Computing

    Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast. Julia questions notions generally held as "laws of nature" by practitioners of numerical computing: 1. high-level dynamic programs have to be slow; 2. one must prototype in one language and then rewrite in another language for speed or deployment; and 3. there are parts of a system for the programmer, and other parts best left untouched because they were built by the experts. We introduce the Julia programming language and its design: a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can have machine performance without sacrificing human convenience. Comment: 37 pages
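    The central mechanism mentioned above is multiple dispatch: the implementation that runs is chosen from the run-time types of all arguments, so specialized methods coexist behind one generic operation. As a rough, language-agnostic sketch of that idea (a Python emulation, not Julia code and not taken from the paper; the names dispatch, call, and combine are hypothetical), one could imagine:

        # Minimal sketch of multiple dispatch: the implementation chosen for a call
        # depends on the run-time types of *all* arguments, not just the first one.
        # Julia provides this natively via method tables; here it is emulated with a
        # registry keyed by (operation name, argument types).
        _registry = {}

        def dispatch(*types):
            """Register an implementation for a specific tuple of argument types."""
            def register(func):
                _registry[(func.__name__, types)] = func
                return func
            return register

        def call(name, *args):
            """Look up the implementation matching the argument types and invoke it."""
            impl = _registry.get((name, tuple(type(a) for a in args)))
            if impl is None:
                raise TypeError(f"no method of {name} for these argument types")
            return impl(*args)

        @dispatch(int, int)
        def combine(a, b):      # specialized for two integers
            return a + b

        @dispatch(str, str)
        def combine(a, b):      # specialized for two strings
            return a + " " + b

        print(call("combine", 2, 3))                    # -> 5
        print(call("combine", "multiple", "dispatch"))  # -> "multiple dispatch"

    Each registered method can be specialized (and, in Julia, compiled) for its concrete argument types, while the generic operation stays abstract; that is the specialization/abstraction interplay the abstract describes.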

    Speculative Staging for Interpreter Optimization

    Interpreters have a reputation for lower performance than just-in-time compilers. We present a new way of building high-performance interpreters that is particularly effective for executing dynamically typed programming languages. The key idea is to combine speculative staging of optimized interpreter instructions with a novel technique of incrementally and iteratively concerting them at run-time. This paper introduces the concepts behind deriving optimized instructions from existing interpreter instructions by incrementally peeling off layers of complexity. When the interpreter is compiled, these optimized derivatives are compiled along with the original interpreter instructions; the technique is therefore portable by construction, since it leverages the existing compiler's backend. At run-time, instruction substitution replaces the interpreter's original, expensive instructions with their optimized derivatives to speed up execution. Our technique unites high performance with the simplicity and portability of interpreters: we report that our optimization makes the CPython interpreter more than four times faster in the best case, closing the gap with, and sometimes even outperforming, PyPy's just-in-time compiler. Comment: 16 pages, 4 figures, 3 tables. Uses CPython 3.2.3 and PyPy 1.
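    The run-time instruction substitution described above can be sketched in a toy bytecode interpreter: a generic ADD instruction rewrites itself into a derivative specialized for ints once it has observed int operands, and a guard falls back to the generic version if the speculation later fails. This is only an illustrative sketch; the instruction names and the function-per-instruction representation are hypothetical and do not reflect the paper's actual CPython changes.

        # Toy interpreter: each instruction is a function that manipulates an operand
        # stack and returns the next program counter. Instructions may overwrite
        # their own slot in the code array (quickening / instruction substitution).
        def op_add_generic(frame, pc, code):
            b, a = frame.pop(), frame.pop()
            frame.append(a + b)                      # full generic semantics
            if isinstance(a, int) and isinstance(b, int):
                code[pc] = op_add_int                # speculate: install optimized derivative
            return pc + 1

        def op_add_int(frame, pc, code):
            b, a = frame.pop(), frame.pop()
            if type(a) is int and type(b) is int:    # guard the speculation
                frame.append(a + b)
                return pc + 1
            frame.append(a)                          # speculation failed: restore operands,
            frame.append(b)                          # deoptimize, and re-execute generically
            code[pc] = op_add_generic
            return op_add_generic(frame, pc, code)

        def op_push(value):
            def op(frame, pc, code):
                frame.append(value)
                return pc + 1
            return op

        def run(code):
            frame, pc = [], 0
            while pc < len(code):
                pc = code[pc](frame, pc, code)
            return frame.pop()

        program = [op_push(1), op_push(2), op_add_generic]
        print(run(program))   # 3; the generic ADD observes ints and quickens itself
        print(run(program))   # 3 again, now executed via the specialized int ADD

    The first run pays the cost of the generic instruction and rewrites the program in place; later executions hit the cheaper derivative directly, which is the essence of substituting expensive instructions with optimized ones at run-time.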

    The Truth About Voter Fraud

    Allegations of election-related fraud make for enticing press. Many Americans remember vivid stories of voting improprieties in Chicagoland, or the suspiciously sudden appearance of LBJ's alphabetized ballot box in Texas, or Governor Earl Long's quip: "When I die, I want to be buried in Louisiana, so I can stay active in politics." Voter fraud, in particular, has the feel of a bank heist caper: roundly condemned but technically fascinating, and sufficiently lurid to grab and hold headlines. Perhaps because these stories are dramatic, voter fraud makes a popular scapegoat. In the aftermath of a close election, losing candidates are often quick to blame voter fraud for the results. Legislators cite voter fraud as justification for various new restrictions on the exercise of the franchise. And pundits trot out the same few anecdotes time and again as proof that a wave of fraud is imminent.

    Allegations of widespread voter fraud, however, often prove greatly exaggerated. It is easy to grab headlines with a lurid claim ("Tens of thousands may be voting illegally!"); the follow-up -- when any exists -- is not usually deemed newsworthy. Yet on closer examination, many of the claims of voter fraud amount to a great deal of smoke without much fire. The allegations simply do not pan out.

    Distributed Contingency Analysis over Wide Area Network among Dispatch Centers

    Traditionally, a regional dispatch center uses equivalent models to represent external grids, which fails to reflect the interactions among regions. This paper proposes a distributed N-1 contingency analysis (DCA) solution in which dispatch centers join a coordinated computation using their private data and computing resources. A distributed screening method is presented to determine the Critical Contingency Set (DCCS) in DCA. The distributed power flow is then formulated as a set of boundary equations, which is solved by a Jacobian-Free Newton-GMRES (JFNG) method; while solving the distributed power flow, only boundary conditions are exchanged. Acceleration techniques are also introduced, including preconditioner reuse and optimal resource scheduling during parallel processing of multiple contingencies. The proposed method is implemented on a real EMS platform, where tests on the Southwest Regional Grid of China are carried out to validate its feasibility. Comment: 5 pages, 6 figures, 2017 IEEE PES General Meeting
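    To make the boundary-equation formulation concrete, here is a minimal sketch assuming a toy two-region model with two shared boundary variables: each region evaluates its own mismatch privately, the coordinator only sees boundary values, and the coupled system is solved with SciPy's Jacobian-free Newton-Krylov solver using a GMRES inner iteration. The regional residual functions and numbers are hypothetical and are not the paper's EMS implementation.

        # Toy boundary-equation solve with a Jacobian-free Newton-GMRES method.
        # The Jacobian is never formed explicitly; GMRES only needs Jacobian-vector
        # products, which the solver approximates by finite differences.
        import numpy as np
        from scipy.optimize import newton_krylov

        def region_a_residual(x_boundary):
            # Region A's privately computed mismatch at the boundary (toy nonlinear model).
            v1, v2 = x_boundary
            return v1**2 + 0.5 * v2 - 1.2

        def region_b_residual(x_boundary):
            # Region B's privately computed mismatch at the boundary (toy nonlinear model).
            v1, v2 = x_boundary
            return 0.3 * v1 + v2**2 - 1.0

        def boundary_equations(x_boundary):
            # The coordinator stacks the regional mismatches; it never sees internal
            # grid data, only each region's residual at the exchanged boundary state.
            return np.array([region_a_residual(x_boundary),
                             region_b_residual(x_boundary)])

        x0 = np.array([1.0, 1.0])
        solution = newton_krylov(boundary_equations, x0, method='gmres', f_tol=1e-10)
        print("boundary state:", solution)
        print("residual:", boundary_equations(solution))

    In this setup only the boundary state and the corresponding mismatches cross region borders on each iteration, which mirrors the privacy-preserving data exchange the abstract describes.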