
    A Unified View of Piecewise Linear Neural Network Verification

    The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models for behaving as black boxes and the theoretical hardness of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure and taking insights from formal methods such as Satisfiability Modulo Theories. These methods are, however, still far from scaling to realistic neural networks. To facilitate progress in this crucial area, we make two key contributions. First, we present a unified framework that encompasses previous methods. This analysis leads to the identification of new methods that combine the strengths of multiple existing approaches, achieving a speedup of two orders of magnitude over the previous state of the art. Second, we propose a new benchmark set that includes a collection of previously released test cases. We use this benchmark to provide the first experimental comparison of existing algorithms and identify the factors affecting the hardness of verification problems. Comment: Updated version of "Piecewise Linear Neural Network verification: A comparative study"
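    The bounding step that piecewise-linear verifiers rely on can be illustrated with simple interval arithmetic over a ReLU network. The sketch below is a generic interval bound propagation routine in NumPy, not the paper's unified framework; the two-layer network and its random weights are placeholders for illustration only.

```python
import numpy as np

def relu_interval_bounds(weights, biases, x_lo, x_hi):
    # Propagate an input box through the linear layers, applying ReLU
    # between them; returns elementwise bounds on the network output.
    lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    for k, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if k < len(weights) - 1:            # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Tiny usage example with a random 2-layer network (hypothetical weights)
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
bs = [rng.normal(size=4), rng.normal(size=2)]
print(relu_interval_bounds(Ws, bs, x_lo=[-0.1] * 3, x_hi=[0.1] * 3))
```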

    Adaptive Neural Compilation

    This paper proposes an adaptive neural-compilation framework to address the problem of efficient program learning. Traditional code optimisation strategies used in compilers are based on applying a pre-specified set of transformations that make the code faster to execute without changing its semantics. In contrast, our work involves adapting programs to make them more efficient while considering correctness only on a target input distribution. Our approach is inspired by recent work on differentiable representations of programs. We show that it is possible to compile programs written in a low-level language to a differentiable representation. We also show how programs in this representation can be optimised to make them efficient on a target distribution of inputs. Experimental results demonstrate that our approach enables learning specifically-tuned algorithms for given data distributions with a high success rate. Comment: Submitted to NIPS 2016, code and supplementary materials will be available on the author's page
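    As a rough illustration of optimising a differentiable program representation over a target input distribution, the sketch below soft-selects one of three toy instructions at each step and learns the selection logits by gradient descent (PyTorch is used here for brevity). The instruction set, step count, and target function are hypothetical and far simpler than the paper's register-machine model.

```python
import torch

# Toy instruction set acting on a scalar register r given the input x.
OPS = [lambda r, x: r + x,     # ADD input
       lambda r, x: r + 1.0,   # INC
       lambda r, x: r]         # NOOP

def run(logits, x, steps=4):
    # Execute the "program": at each step, mix the instructions with
    # softmax weights so the whole computation stays differentiable.
    r = torch.zeros_like(x)
    for t in range(steps):
        w = torch.softmax(logits[t], dim=0)
        r = sum(w[i] * op(r, x) for i, op in enumerate(OPS))
    return r

logits = torch.zeros(4, len(OPS), requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(300):
    x = torch.rand(64) * 10.0                          # target input distribution
    loss = ((run(logits, x) - 2.0 * x) ** 2).mean()    # want the program to compute 2*x
    opt.zero_grad(); loss.backward(); opt.step()
```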

    Comment on “The Thermosolutal Instability of Compressible Hall Plasma I”

    Bitcoin Price Prediction Using Machine Learning Techniques

    This paper tries to accurately assess the price of Bitcoin by examining the different parameters that affect its value. In our work, we focus on understanding the evolution of the daily Bitcoin market and gaining intuition into the most relevant aspects surrounding the Bitcoin price. Meanwhile, the market capitalization of publicly traded cryptocurrencies exceeds $230 billion. The most important cryptocurrency, Bitcoin, is used primarily as a digital store of value, and its pricing has been studied extensively; these features are described in more detail later in the paper. Bitcoin is the most expensive digital currency in the market. However, Bitcoin prices have been highly volatile, making them difficult to forecast. As a result, the goal of this research is to find the most efficient and accurate model for predicting Bitcoin prices using various machine learning algorithms. Several regression models built with the scikit-learn and Keras libraries were tested using 1-minute interval trading data from the Bitstamp exchange from January 1, 2012 to January 8, 2018. The best results showed a Mean Squared Error (MSE) as low as 0.00002 and an R-squared (R²) as high as 99.2 percent.
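    A minimal version of the modelling setup described above, using scikit-learn for the regression and the error metrics. The windowed features, the synthetic stand-in price series, and the plain linear model are illustrative assumptions, not the paper's exact pipeline or data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# Predict the next price from the previous `window` prices. A random-walk
# series stands in for the Bitstamp 1-minute data so the sketch is self-contained.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=5000)) + 1000.0
window = 30

X = np.stack([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

split = int(0.8 * len(X))                      # chronological train/test split
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])

print("MSE:", mean_squared_error(y[split:], pred))
print("R2 :", r2_score(y[split:], pred))
```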

    Efficient Linear Programming for Dense CRFs

    The fully connected conditional random field (CRF) with Gaussian pairwise potentials has proven popular and effective for multi-class semantic segmentation. While the energy of a dense CRF can be minimized accurately using a linear programming (LP) relaxation, the state-of-the-art algorithm is too slow to be useful in practice. To alleviate this deficiency, we introduce an efficient LP minimization algorithm for dense CRFs. To this end, we develop a proximal minimization framework, where the dual of each proximal problem is optimized via block coordinate descent. We show that each block of variables can be efficiently optimized. Specifically, for one block, the problem decomposes into significantly smaller subproblems, each of which is defined over a single pixel. For the other block, the problem is optimized via conditional gradient descent. This has two advantages: 1) the conditional gradient can be computed in a time linear in the number of pixels and labels; and 2) the optimal step size can be computed analytically. Our experiments on standard datasets provide compelling evidence that our approach outperforms all existing baselines, including the previous LP-based approach for dense CRFs. Comment: 24 pages, 10 figures and 4 tables
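    The conditional-gradient idea mentioned above can be illustrated on a generic quadratic program over the probability simplex, where the linear minimization step reduces to picking a single vertex and the optimal step size along the chosen direction has a closed form. This is only a schematic of a Frank-Wolfe iteration under those assumptions, not the dense-CRF solver itself.

```python
import numpy as np

def frank_wolfe_simplex(Q, c, iters=100):
    """Minimize 0.5 x^T Q x + c^T x over the probability simplex using
    conditional gradient with an analytic (exact) step size."""
    n = len(c)
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        grad = Q @ x + c
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0           # LP over the simplex: best vertex
        d = s - x
        denom = d @ Q @ d
        # Exact minimizer of the quadratic along direction d, clipped to [0, 1]
        gamma = 1.0 if denom <= 0 else np.clip(-(grad @ d) / denom, 0.0, 1.0)
        x += gamma * d
    return x

# Usage example on a small random convex quadratic (hypothetical data)
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
print(frank_wolfe_simplex(A @ A.T, rng.normal(size=5)))
```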

    Effective Feature Selection Methods for User Sentiment Analysis using Machine Learning

    Text classification is the method of allocating a particular piece of text to one or more of a number of predetermined categories or labels. This is done by training a machine learning model on a labeled dataset, where the texts and their corresponding labels are provided. The model then learns to predict the labels of new, unseen texts. Feature selection is a significant step in text classification, as it helps to identify the most relevant features or words in the text that are useful for predicting the label. These can include specific keywords or phrases, or even the frequency or placement of certain words in the text. The performance of the model can be improved by focusing on the features that carry the most useful information for classification. Additionally, feature selection can also help to reduce the dimensionality of the dataset, making the model more efficient and easier to interpret. This paper presents a method for extracting aspect terms from product reviews that uses the Gini index and information gain for feature selection in conjunction with machine learning classifiers. In the proposed method, referred to as wRMR, the Gini index and information gain are utilized for feature selection; machine learning classifiers are then used to extract aspect terms from product reviews. A set of customer reviews is used to assess how well the proposed method works, and the findings indicate that, in terms of aspect term extraction, the proposed method outperforms the traditional one. In addition, the proposed approach is compared with current state-of-the-art methods, and the comparison shows that it achieves superior performance. Overall, the presented method provides a promising solution for aspect term extraction and can also be applied to other natural language processing tasks.
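    A minimal sketch of the filter-style feature selection described above, using scikit-learn's mutual information score as a stand-in for information gain (the Gini-index term and the aspect-term extraction stage are omitted). The tiny corpus, labels, and classifier choice are placeholders, not the paper's setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny stand-in corpus; the paper works on real product reviews.
texts = ["battery life is great", "screen cracked after a week",
         "love the camera quality", "terrible battery and slow screen",
         "fast shipping and great price", "awful build quality"]
labels = [1, 0, 1, 0, 1, 0]   # 1 = positive review, 0 = negative

# Information-gain-style scoring via mutual information keeps the top-k terms,
# which are then fed to a standard classifier.
clf = make_pipeline(CountVectorizer(),
                    SelectKBest(mutual_info_classif, k=10),
                    MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["great battery"]))
```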