1,626 research outputs found

    Breeding unicorns: Developing trustworthy and scalable randomness beacons

    Get PDF
    Randomness beacons are services that periodically emit a random number, allowing users to base decisions on the same random value without trusting anyone: ideally, a randomness beacon not only produces unpredictable values but is also of low computational complexity for users, bias-resistant, and publicly verifiable. Such randomness beacons can serve as an important primitive for smart contracts in a variety of contexts. This paper first presents a structured security analysis, based on which we then design, implement, and evaluate a trustworthy and efficient randomness beacon. Our approach does not require users to register or run any computationally intensive operations. We then compare different implementation and deployment options on distributed ledgers, and report on an Ethereum smart contract-based lottery using our beacon.
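    The abstract does not spell out the construction, but delay functions in the style of sloth are one common way to make such a beacon bias-resistant: evaluation is deliberately slow, while anyone can verify the output quickly. A minimal Python sketch under that assumption, with a toy prime and iteration count chosen purely for illustration (a real deployment would use a large prime, far more iterations, and per-step permutations):

```python
import hashlib

P = 1000003            # toy prime with P % 4 == 3, so modular square roots are easy
T = 5000               # number of delay iterations (tiny, for illustration only)

def seed_to_int(seed: bytes) -> int:
    return int.from_bytes(hashlib.sha256(seed).digest(), "big") % P

def evaluate(seed: bytes) -> int:
    """Slow direction: iterated modular square roots, ~log P work per step."""
    x = seed_to_int(seed)
    for _ in range(T):
        x = pow(x, (P + 1) // 4, P)
    return x

def verify(seed: bytes, witness: int) -> bool:
    """Fast direction: undo the roots with plain squarings, one per step."""
    x = witness
    for _ in range(T):
        x = pow(x, 2, P)
    return x in (seed_to_int(seed), P - seed_to_int(seed))  # roots defined up to sign

out = evaluate(b"block-42")
assert verify(b"block-42", out)
```

    The asymmetry between the two directions (each root costs about log P squarings, each verification step costs one) is what lets users check the beacon cheaply while making the output expensive to bias.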

    Brownian forgery of statistical dependences

    Full text link
    The balance held by Brownian motion between temporal regularity and randomness is embodied in a remarkable way by Lévy's forgery of continuous functions. Here we describe how this property can be extended to forge arbitrary dependences between two statistical systems, and then establish a new Brownian independence test based on fluctuating random paths. We also argue that this result allows revisiting the theory of Brownian covariance from a physical perspective and opens the possibility of engineering nonlinear correlation measures from more general functional integrals. Comment: 13 pages, 2 figures, formatting based on revtex4; v2: revised proof of extended forgery and minor changes; v3: additional discussion on practical implementation and minor edits, published version
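    The Brownian covariance the abstract alludes to is known (Székely and Rizzo) to coincide with distance covariance, whose sample version is easy to compute; a NumPy sketch of that statistic with a naive permutation test standing in for the paper's path-based construction (the data-generating example below is invented for illustration):

```python
import numpy as np

def dcov(x: np.ndarray, y: np.ndarray) -> float:
    """Sample distance covariance (equal to sample Brownian covariance)."""
    a = np.abs(x[:, None] - x[None, :])                 # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])                 # pairwise distances in y
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double-centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return float(np.sqrt(np.maximum((A * B).mean(), 0.0)))

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = x ** 2 + 0.1 * rng.standard_normal(200)   # nonlinear dependence, ~zero correlation

stat = dcov(x, y)
null = [dcov(x, rng.permutation(y)) for _ in range(500)]  # permutation null
p_value = np.mean([s >= stat for s in null])
print(stat, p_value)
```

    Unlike Pearson correlation, the statistic is zero (in population) exactly under independence, which is why it detects the quadratic dependence above.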

    Cyclical succession in semi-arid savannas revealed with a spatial simulation model

    Get PDF
    Patch dynamics is a new, scale-explicit mechanism explaining the coexistence of woody species and grasses in savannas through asynchronous cyclical successions at the patch scale. In this dissertation, I developed, implemented, validated (against field data collected for this purpose in South Africa), and analysed the spatially explicit, individual-based simulation model SATCHMO for a semi-arid savanna patch, to investigate whether cyclical successions emerge from a realistic parameterization and thereby support the applicability of patch dynamics to savannas. Model analyses revealed significant shrub cycles with a period of 33 years that were driven by precipitation and not by fire. I suggest that shrub cycles occur in three phases, a pattern well supported both by field data from the study site in South Africa and by the model results. In the patch-dynamic context, the ecological-economic problem of shrub encroachment is a natural, transient phase of the cycle, so that large-scale rotational grazing schemes are an appropriate option for livestock management.
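    SATCHMO itself is far richer, but the qualitative mechanism (episodic, rain-driven shrub recruitment followed by slow density-dependent thinning) can be caricatured in a few lines; everything below, from the rainfall distribution to the parameter values, is invented purely for illustration and is not the dissertation's model:

```python
import numpy as np

rng = np.random.default_rng(2)

shrubs, history = 5.0, []
for year in range(200):
    rain = rng.gamma(shape=2.0, scale=250.0)   # stochastic annual rainfall (mm)
    recruits = 2.0 if rain > 600 else 0.0      # recruitment only in wet years
    mortality = 0.02 + 0.0005 * shrubs         # density-dependent thinning
    shrubs = max(shrubs + recruits - mortality * shrubs, 0.0)
    history.append(shrubs)                     # boom-and-bust shrub trajectory
```

    Even this toy version shows the key point of the patch-dynamic view: encroached (shrub-dense) states arise and decay on their own, so dense patches are transient phases of a cycle rather than a permanent regime shift.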

    Accelerated Financial Applications through Specialized Hardware, FPGA

    Get PDF
    This project will investigate Field Programmable Gate Array (FPGA) technology in financial applications. FPGA implementation in high performance computing is still in its infancy. Certain companies, such as XtremeData Inc., have advertised speed improvements of 50 to 1000 times for DNA sequencing using FPGAs, while using an FPGA as a coprocessor to handle specific tasks provides two to three times more processing power. FPGA technology increases performance by parallelizing calculations. This project will specifically address speed and accuracy improvements of both fundamental and transcendental functions when implemented using FPGA technology. The results of this project will lead to a series of recommendations for effective utilization of FPGA technology in financial applications.
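    The summary does not name an algorithm, but CORDIC is the classic FPGA-friendly way to evaluate transcendental functions using only shifts, adds, and a small lookup table; a Python sketch of rotation-mode CORDIC for sine and cosine (floating point here for readability, where an FPGA would use fixed point):

```python
import math

N = 16                                             # number of micro-rotations
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]  # precomputed table (a small ROM)
K = 1.0
for ang in ANGLES:
    K *= math.cos(ang)                             # total CORDIC gain, ~0.60725

def cordic_sincos(theta: float) -> tuple[float, float]:
    """Rotation-mode CORDIC; converges for |theta| < ~1.74 rad."""
    x, y, z = K, 0.0, theta                        # pre-scale so the gain cancels
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i  # shift-and-add rotation
        z -= d * ANGLES[i]
    return y, x                                    # (sin, cos)

s, c = cordic_sincos(0.7)
print(s - math.sin(0.7), c - math.cos(0.7))        # errors on the order of 2**-16
```

    Because each iteration is a fixed shift-and-add, the loop unrolls into a pipeline of N identical stages on an FPGA, producing one result per clock cycle; accuracy is tuned simply by choosing N.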

    Conditional Lot Splitting to Avoid Setups While Reducing Flow Time

    Get PDF
    Previous research has clearly and consistently shown that flow time advantages accrue from splitting production lots into smaller transfer batches or sub-lots. Less extensively discussed, and certainly undesired, is the fact that lot splitting may dramatically increase the number of setups required, making it impractical in some settings. This paper describes and demonstrates a primary cause of these “extra” setups. It then proposes and evaluates decision rules which selectively invoke lot splitting in an attempt to avoid extra setups. For the closed job shop environment tested, our results indicate that conditional logic can achieve a substantial portion of lot splitting’s flow time improvement while avoiding the vast majority of the additional setups which would be caused by previously tested lot splitting schemes.
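    The paper's actual decision rules are not given in the abstract; one plausible form of such conditional logic is to split a lot only when enough downstream machines could take a sub-lot without being re-tooled, so no extra setup is triggered. A hypothetical sketch (the rule, class, and threshold below are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Machine:
    current_setup: str | None   # part family the machine is currently tooled for
    busy: bool

def split_avoids_setup(part: str, downstream: list[Machine]) -> bool:
    """Hypothetical rule: split only if at least two downstream machines
    could accept a sub-lot without requiring a new setup."""
    ready = [m for m in downstream
             if m.current_setup == part or (not m.busy and m.current_setup is None)]
    return len(ready) >= 2

machines = [Machine("gear", False), Machine("gear", True), Machine("shaft", False)]
lot_size, sublots = 120, 2
if split_avoids_setup("gear", machines):
    transfer_batches = [lot_size // sublots] * sublots  # split into transfer batches
else:
    transfer_batches = [lot_size]                       # keep the lot whole
print(transfer_batches)
```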

    Inverse Problems and Data Assimilation

    Full text link
    These notes are designed with the aim of providing a clear and concise introduction to the subjects of Inverse Problems and Data Assimilation, and their inter-relations, together with citations to some relevant literature in this area. The first half of the notes is dedicated to studying the Bayesian framework for inverse problems. Techniques such as importance sampling and Markov Chain Monte Carlo (MCMC) methods are introduced; these methods have the desirable property that in the limit of an infinite number of samples they reproduce the full posterior distribution. Since it is often computationally intensive to implement these methods, especially in high dimensional problems, approximate techniques such as approximating the posterior by a Dirac or a Gaussian distribution are discussed. The second half of the notes covers data assimilation. This refers to a particular class of inverse problems in which the unknown parameter is the initial condition of a dynamical system (and, in the stochastic dynamics case, the subsequent states of the system), and the data comprises partial and noisy observations of that (possibly stochastic) dynamical system. We will also demonstrate that methods developed in data assimilation may be employed to study generic inverse problems, by introducing an artificial time to generate a sequence of probability measures interpolating from the prior to the posterior.
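    As a concrete toy instance of the Bayesian framework the notes describe, here is a random-walk Metropolis sampler, a standard MCMC method; the forward map, prior, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian inverse problem: recover u from y = G(u) + noise,
# with nonlinear forward map G, Gaussian prior, and Gaussian noise.
def G(u):
    return u ** 3

y_obs = 2.0
sigma, tau = 0.5, 1.0  # observation noise std, prior std

def log_post(u):
    return -0.5 * ((y_obs - G(u)) / sigma) ** 2 - 0.5 * (u / tau) ** 2

# Random-walk Metropolis: in the limit of infinitely many samples the
# chain reproduces the full posterior, as the notes point out.
u, chain = 0.0, []
for _ in range(50_000):
    v = u + 0.5 * rng.standard_normal()          # symmetric proposal
    if np.log(rng.random()) < log_post(v) - log_post(u):
        u = v                                    # accept, else keep current state
    chain.append(u)

print("posterior mean ~", np.mean(chain[5_000:]))  # discard burn-in
```

    The Gaussian and Dirac approximations mentioned in the abstract would replace this sampling loop with an optimization: a Dirac at the posterior mode (MAP estimation), or a Gaussian fitted at that mode (the Laplace approximation).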

    Hardware Considerations for Signal Processing Systems: A Step Toward the Unconventional.

    Full text link
    As we progress into the future, signal processing algorithms are becoming more computationally intensive and power hungry, while the desire for mobile products and low-power devices is also increasing. An integrated ASIC solution is one of the primary ways chip developers can improve performance and add functionality while keeping the power budget low. This work discusses ASIC hardware for both conventional and unconventional signal processing systems, and how integration, error resilience, emerging devices, and new algorithms can be leveraged by signal processing systems to further improve performance and enable new applications. Specifically, this work presents three case studies: 1) a conventional and highly parallel mixed-signal cross-correlator ASIC for a weather satellite performing real-time synthetic aperture imaging, 2) an unconventional native stochastic computing architecture enabled by memristors, and 3) two unconventional sparse neural network ASICs for feature extraction and object classification. As improvements from technology scaling alone slow down, and the demand for energy-efficient mobile electronics increases, such optimization techniques at the device, circuit, and system level will become more critical to advance signal processing capabilities in the future. PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116685/1/knagphil_1.pd
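    The second case study rests on stochastic computing, where a value in [0, 1] is encoded as the density of 1s in a random bitstream and multiplication reduces to a single AND gate; a minimal software simulation of that encoding (stream length chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)

def to_stream(p: float, n: int) -> np.ndarray:
    """Encode p in [0, 1] as a Bernoulli bitstream of length n."""
    return rng.random(n) < p

a, b, n = 0.8, 0.5, 100_000
sa, sb = to_stream(a, n), to_stream(b, n)

# ANDing two independent streams multiplies the encoded probabilities:
# P(bit_a = 1 and bit_b = 1) = a * b.
product = np.mean(sa & sb)
print(product)   # ~0.4, with accuracy growing with stream length n
```

    The appeal for hardware is that an arithmetic multiplier collapses to one logic gate, trading silicon area and power for stream length (latency) and a controlled amount of random error.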