
    State of Alaska Election Security Project Phase 2 Report

    Alaska’s election system is among the most secure in the country, and it has a number of safeguards other states are now adopting. But the technology Alaska uses to record and count votes could be improved, and the state’s huge size, limited road system, and scattered communities also create special challenges for ensuring the integrity of the vote. In this second phase of an ongoing study of Alaska’s election security, we recommend ways of strengthening the system, covering not only the technology but also the election procedures. The lieutenant governor and the Division of Elections asked the University of Alaska Anchorage to do this evaluation, which began in September 2007.
    Lieutenant Governor Sean Parnell. State of Alaska Division of Elections.
    Contents: List of Appendices / Glossary / Study Team / Acknowledgments / Introduction / Summary of Recommendations / Part 1 Defense in Depth / Part 2 Fortification of Systems / Part 3 Confidence in Outcomes / Conclusions / Proposed Statement of Work for Phase 3: Implementation / Reference

    Learning Equations for Extrapolation and Control

    We present an approach that identifies concise equations from data using a shallow neural network. In contrast to ordinary black-box regression, this approach allows understanding functional relations and generalizing them from observed data to unseen parts of the parameter space. We show how to extend the class of learnable equations for a recently proposed equation learning network to include divisions, and we improve the learning and model selection strategy to be useful for challenging real-world data. For systems governed by analytical expressions, our method can in many cases identify the true underlying equation and extrapolate to unseen domains. We demonstrate its effectiveness in experiments on a cart-pendulum system, where only two random rollouts are required to learn the forward dynamics and successfully achieve the swing-up task.
    Comment: 9 pages, 9 figures, ICML 201
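
    A minimal sketch of the general idea, assuming PyTorch; the primitive units, layer sizes, and the small-constant guard in the division unit below are illustrative assumptions, not the authors' architecture:

        import torch
        import torch.nn as nn

        class EqlLayer(nn.Module):
            """Hidden layer whose units are interpretable primitives."""
            def __init__(self, in_dim):
                super().__init__()
                self.lin = nn.Linear(in_dim, 4)  # projections feeding the units

            def forward(self, x):
                z = self.lin(x)
                # Unary primitives (identity, sin, cos) plus one product unit.
                unary = torch.stack([z[:, 0], torch.sin(z[:, 1]), torch.cos(z[:, 2])], dim=1)
                prod = (z[:, 2] * z[:, 3]).unsqueeze(1)
                return torch.cat([unary, prod], dim=1)

        class EqlNet(nn.Module):
            """EQL-style network with a regularized division output unit."""
            def __init__(self, in_dim):
                super().__init__()
                self.hidden = EqlLayer(in_dim)
                self.num = nn.Linear(4, 1)  # numerator
                self.den = nn.Linear(4, 1)  # denominator (the division extension)

            def forward(self, x):
                h = self.hidden(x)
                # Keep the denominator away from zero so training stays stable.
                return self.num(h) / (self.den(h).abs() + 1e-2)

        # Fit y = x1 / (1 + x2^2); an L1 penalty pushes unused weights to zero
        # so the surviving terms can be read off as a candidate equation.
        torch.manual_seed(0)
        X = torch.rand(512, 2) * 2 - 1
        y = X[:, :1] / (1 + X[:, 1:] ** 2)
        model = EqlNet(2)
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for step in range(2000):
            opt.zero_grad()
            loss = ((model(X) - y) ** 2).mean()
            loss = loss + 1e-3 * sum(p.abs().sum() for p in model.parameters())
            loss.backward()
            opt.step()
        print(f"final MSE: {((model(X) - y) ** 2).mean().item():.4f}")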

    Housing Market Crash Prediction Using Machine Learning and Historical Data

    The 2008 housing crisis was caused by faulty banking policies and the use of credit derivatives of mortgages for investment purposes. In this project, we examine datasets that are markers of a typical housing crisis. Using those datasets, we build three machine learning models: linear regression, a Hidden Markov Model (HMM), and a Long Short-Term Memory (LSTM) network. After building the models, we conducted a comparative study of the predictions made by each. The linear regression model did not predict a housing crisis; instead, it showed house prices rising steadily, with an R-squared score of 0.76. The HMM predicted a fall in house prices, with an R-squared score of 0.706. Lastly, the LSTM showed that house prices would fall briefly but stabilize after that, with a fall less sharp than the one predicted by the HMM; its R-squared score of 0.9 is the highest among the three models. Although the R-squared score does not say how accurate a model is, it does indicate how closely a model fits the data. By this measure, the LSTM fits the data best, and since all three models use the same dataset, it is reasonable to say the LSTM's prediction is the most reliable of the three.
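
    For context on the comparison above, a minimal sketch of how an R-squared score is computed for a price-trend model, assuming scikit-learn and synthetic data (the project's datasets and the HMM/LSTM models are not reproduced here):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)
        months = np.arange(120).reshape(-1, 1)                       # 10 years, monthly
        price = 100 + 0.8 * months.ravel() + rng.normal(0, 5, 120)   # toy trending index

        model = LinearRegression().fit(months, price)
        r2 = r2_score(price, model.predict(months))
        print(f"R-squared: {r2:.3f}")  # ~0.97 on this toy trend: fit quality, not forecast accuracy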

    An ultra-fast method for gain and noise prediction of Raman amplifiers

    A machine learning method for predicting Raman gain and noise spectra is presented: it guarantees high accuracy (RMSE < 0.4 dB) with low computational complexity, making it suitable for real-time implementation in future optical network controllers.
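
    A sketch of the general recipe such surrogates follow: learn a cheap map from pump configuration to gain spectrum, then query it in real time. The toy spectra, frequency grid, and network below are assumptions for illustration, not the paper's model:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        pumps = rng.uniform(0.1, 0.5, size=(2000, 2))  # pump powers [W]
        freqs = np.linspace(186, 196, 40)              # signal grid [THz]
        # Toy "gain" spectra: each pump contributes a Gaussian-shaped bump.
        gain = (pumps[:, :1] * np.exp(-(freqs - 189) ** 2 / 4)
                + pumps[:, 1:] * np.exp(-(freqs - 193) ** 2 / 4)) * 40  # dB-ish

        # Multi-output regressor: one forward pass predicts the whole spectrum,
        # which is what makes the approach fast enough for a network controller.
        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                                 random_state=0).fit(pumps, gain)
        pred = surrogate.predict(pumps[:5])
        print(f"toy RMSE: {np.sqrt(np.mean((pred - gain[:5]) ** 2)):.3f} dB")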

    Distributed Robust Learning

    We propose a framework for distributed robust statistical learning on big contaminated data. The Distributed Robust Learning (DRL) framework can reduce the computational time of traditional robust learning methods by several orders of magnitude. We analyze the robustness property of DRL, showing that DRL not only preserves the robustness of the base robust learning method, but also tolerates contamination of a constant fraction of results from computing nodes (node failures). More precisely, even in the presence of the most adversarial outlier distribution over computing nodes, DRL still achieves a breakdown point of at least $\lambda^*/2$, where $\lambda^*$ is the breakdown point of the corresponding centralized algorithm. This is in stark contrast with a naive division-and-averaging implementation, which may reduce the breakdown point by a factor of $k$ when $k$ computing nodes are used. We then specialize the DRL framework for two concrete cases: distributed robust principal component analysis and distributed robust regression. We demonstrate the efficiency and robustness advantages of DRL through comprehensive simulations and by predicting image tags on a large-scale image set.
    Comment: 18 pages, 2 figures
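
    The breakdown-point contrast is easy to see in miniature. The sketch below uses median aggregation of per-node estimates; this is not the paper's DRL algorithm, but it illustrates why a robust combination step tolerates a constant fraction of failed nodes while naive averaging does not:

        import numpy as np

        rng = np.random.default_rng(0)
        k = 10
        data = rng.normal(5.0, 1.0, size=(k, 1000))  # each node's local sample
        node_est = data.mean(axis=1)                 # per-node estimates of the mean
        node_est[:3] = 1e6                           # 3 of 10 nodes return corrupted results

        print("naive average :", node_est.mean())    # pulled far from the truth
        print("median combine:", np.median(node_est))  # still close to 5.0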