
    How Fast Can We Multiply Large Integers on an Actual Computer?

    We provide two complexity measures for the running time of algorithms that multiply long integers. The random access machine with unit or logarithmic cost is not adequate for measuring the complexity of a task like multiplication of long integers. The Turing machine is more useful here, but fails to take into account the multiplication instruction for short integers, which is available on physical computing devices. An interesting outcome is that the proposed refined complexity measures do not rank the well-known multiplication algorithms the same way as the Turing machine model.
    Comment: To appear in the proceedings of LATIN 2014, Springer LNCS 839
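    As a minimal illustration (not taken from the paper), the sketch below multiplies long integers stored as base-2^64 limbs in the schoolbook way; each inner step is a single short-integer multiplication, the instruction that physical machines provide but the Turing machine model does not account for.

```python
# Illustrative sketch only: schoolbook multiplication of long integers held as
# little-endian lists of 64-bit limbs. The limb size and representation are
# assumptions made for this example, not the paper's model.

BASE = 1 << 64  # one machine word per limb

def long_mul(a, b):
    """Multiply two long integers given as little-endian limb lists."""
    result = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            # One short-integer (word-level) multiply plus carry handling.
            t = result[i + j] + ai * bj + carry
            result[i + j] = t % BASE
            carry = t // BASE
        result[i + len(b)] = carry
    return result

# Example: (1 + 2^64) * (2 + 2^64) expressed in limbs.
print(long_mul([1, 1], [2, 1]))  # [2, 3, 1, 0], i.e. 2 + 3*2^64 + 2^128
```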

    The RAM equivalent of P vs. RP

    One of the fundamental open questions in computational complexity is whether the class of problems solvable by use of stochasticity under the Random Polynomial time (RP) model is larger than the class of those solvable in deterministic polynomial time (P). However, this question is only open for Turing Machines, not for Random Access Machines (RAMs). Simon (1981) was able to show that for a sufficiently equipped Random Access Machine, the ability to switch states nondeterministically does not entail any computational advantage. However, in the same paper, Simon describes a different (and arguably more natural) scenario for stochasticity under the RAM model. According to Simon's proposal, instead of receiving a new random bit at each execution step, the RAM program is able to execute the pseudofunction RAND(y), which returns a uniformly distributed random integer in the range [0, y). Whether the ability to draw a random integer in this fashion is more powerful than the ability to draw a random bit remained an open question for the last 30 years. In this paper, we close Simon's open problem by fully characterising the class of languages recognisable in polynomial time by each of the RAMs regarding which the question was posed. We show that for some of these, stochasticity entails no advantage, but, more interestingly, we show that for others it does.
    Comment: 23 pages
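    Purely as an illustration of the primitive (an assumption on our part, not the paper's construction), the sketch below emulates RAND(y) using single random bits and rejection sampling; the question the paper settles is, roughly, whether a RAM restricted to random bits can always afford such an emulation for the operands it can build in polynomial time.

```python
import random

def rand_bit():
    """Stand-in for the model that receives one random bit per step."""
    return random.getrandbits(1)

def rand_y(y):
    """Emulate Simon's RAND(y), a uniform integer in [0, y), from single
    random bits by rejection sampling (illustrative, not the paper's
    simulation)."""
    k = max(1, (y - 1).bit_length())  # bits needed to cover [0, y)
    while True:
        r = 0
        for _ in range(k):
            r = (r << 1) | rand_bit()
        if r < y:  # accept only values inside the range
            return r

print(rand_y(10))  # some integer in 0..9
```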

    Design of an FPGA-based parallel SIMD machine for power flow analysis

    Power flow analysis consists of computationally intensive calculations on large matrices, consumes several hours of computational time, and has shown the need for the implementation of application-specific parallel machines. The potential of Single-Instruction stream Multiple-Data stream (SIMD) parallel architectures for efficient operations on large matrices has been demonstrated by many existing supercomputers. The unsuitability of existing parallel machines for low-cost power system applications, their long design cycles, and the difficulty of using them show the need for application-specific SIMD machines. Advances in VLSI technology and Field-Programmable Gate Arrays (FPGAs) enable the implementation of Custom Computing Machines (CCMs) which can yield better performance for specific applications. The advent of Soft-Core processors made it possible to integrate reconfigurable logic as a slave to a peripheral bus and has demonstrated the ability to rapidly prototype complete systems on programmable chips. This thesis aims at designing and implementing an FPGA-based SIMD machine for power flow analysis. It presents the architecture of a SIMD machine that consists of an array of processing elements with mesh interconnection and a Soft-Core processor; the latter is used as the host. The FPGA-based SIMD machine is implemented on the Annapolis Microsystems Wildstar-II board that contains multiple Virtex-II FPGAs. The Soft-Core processor used is the Xilinx MicroBlaze, and the application targeted is matrix multiplication.
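    The following is a software-level sketch only (hypothetical Python, not FPGA logic) of the data-parallel pattern behind the targeted matrix multiplication: at every step, each position of the result array performs the same multiply-accumulate, mirroring how a mesh of processing elements would execute one broadcast instruction per step.

```python
# Hypothetical software model of the SIMD pattern: each output cell C[i][j]
# stands in for one processing element of the mesh; all cells perform the
# same multiply-accumulate in lock step.

def simd_style_matmul(A, B):
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]
    for step in range(k):
        # One broadcast "instruction": every PE applies the same operation.
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][step] * B[step][j]
    return C

print(simd_style_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```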