
    Physicist's Journeys Through the AI World - A Topical Review. There is no royal road to unsupervised learning

    Artificial Intelligence (AI), defined in its simplest form, is a technological tool that makes machines intelligent. Since learning is at the core of intelligence, machine learning is a core sub-field of AI, and deep learning emerged as a subclass of machine learning to address the limitations of its predecessors. AI has gained prominence over the past few years due to its considerable progress in various fields and has spread widely into research. This has led physicists to direct their research towards implementing AI tools, with the central aim of gaining better understanding and enriching their intuition. This review article is meant to supplement previous efforts to bridge the gap between AI and physics, and to take a serious step towards resolving the "Babelian" clashes brought about by such gaps. This first requires a fundamental knowledge of common AI tools. To this end, the review's primary focus is on deep learning models called artificial neural networks, which train themselves through different learning processes; it also discusses the concept of Markov decision processes. Finally, as a shortcut to the main goal, the review thoroughly examines how these neural networks can construct a physical theory describing some observations without applying any prior physical knowledge. Comment: 26 pages, 10 figures, 2 appendices, 5 algorithms

    DC-NAS: Divide-and-Conquer Neural Architecture Search

    Most applications demand high-performance deep neural architectures that cost limited resources. Neural architecture search is a way of automatically exploring optimal deep neural networks in a given huge search space. However, all sub-networks are usually evaluated using the same criterion, that is, early stopping on a small proportion of the training dataset, which is an inaccurate and highly complex approach. In contrast to conventional methods, here we present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures. Given an arbitrary search space, we first extract feature representations of all sub-networks according to changes in the parameters or output features of each layer, and then calculate the similarity between two sampled networks based on these representations. K-means clustering is then conducted to aggregate similar architectures into the same cluster, and sub-network evaluation is executed separately within each cluster. The best architectures from the clusters are then merged to obtain the optimal neural architecture. Experimental results on several benchmarks illustrate that DC-NAS can overcome the inaccurate evaluation problem, achieving a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space
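    A minimal sketch of the clustering-then-local-evaluation idea described above, assuming each sub-network has already been summarized as a fixed-length feature vector; the evaluate callback, the cluster count, and the function name are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of a divide-and-conquer evaluation step: cluster
# sub-network representations with k-means, evaluate candidates within each
# cluster, then compare the cluster winners.
import numpy as np
from sklearn.cluster import KMeans

def dc_nas_select(representations, candidates, evaluate, n_clusters=8):
    """representations: (N, d) array, one feature vector per sub-network.
    candidates: list of N architecture descriptions.
    evaluate: callable(architecture) -> validation score (placeholder)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(representations)
    best = []
    for c in range(n_clusters):
        members = [i for i in range(len(candidates)) if labels[i] == c]
        if members:
            # Similar architectures compete only against each other.
            scores = {i: evaluate(candidates[i]) for i in members}
            i_best = max(scores, key=scores.get)
            best.append((scores[i_best], i_best))
    # Final comparison among the cluster winners yields the selected architecture.
    return candidates[max(best)[1]]
```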

    Toward an AI Physicist for Unsupervised Learning

    We investigate opportunities and challenges for improving unsupervised machine learning using four common strategies with a long history in physics: divide-and-conquer, Occam's razor, unification and lifelong learning. Instead of using one model to learn everything, we propose a novel paradigm centered around the learning and manipulation of *theories*, which parsimoniously predict both aspects of the future (from past observations) and the domain in which these predictions are accurate. Specifically, we propose a novel generalized-mean loss to encourage each theory to specialize in its comparatively advantageous domain, and a differentiable description-length objective to downweight bad data and "snap" learned theories into simple symbolic formulas. Theories are stored in a "theory hub", which continuously unifies learned theories and can propose theories when encountering new environments. We test our implementation, the toy "AI Physicist" learning agent, on a suite of increasingly complex physics environments. From unsupervised observation of trajectories through worlds involving random combinations of gravity, electromagnetism, harmonic motion and elastic bounces, our agent typically learns faster and produces mean-squared prediction errors about a billion times smaller than a standard feedforward neural net of comparable complexity, typically recovering integer and rational theory parameters exactly. Our agent also successfully identifies domains with different laws of motion for a nonlinear chaotic double pendulum in a piecewise constant force field. Comment: Replaced to match accepted PRE version. Added references, improved discussion. 22 pages, 7 figures
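    To make the generalized-mean loss concrete, here is a toy sketch under the assumption that per-theory losses are aggregated with a power-mean exponent gamma < 0, so the best-fitting theory dominates at each data point and theories are pushed to specialize; the exponent, the squared-error losses and the example numbers are illustrative, not the paper's exact settings.

```python
# Toy generalized-mean loss over several competing "theories" (predictors).
# Assumption: exponent gamma < 0, so the mean at each data point is dominated
# by whichever theory fits that point best (illustrative, not the paper's
# exact formulation).
import numpy as np

def generalized_mean_loss(per_theory_losses, gamma=-1.0, eps=1e-12):
    """per_theory_losses: (n_theories, n_points) array of non-negative losses."""
    powered = np.power(per_theory_losses + eps, gamma)  # eps avoids 0 ** gamma
    mean_over_theories = powered.mean(axis=0)           # combine theories per point
    per_point = np.power(mean_over_theories, 1.0 / gamma)
    return per_point.mean()                             # average over data points

# Example: two theories, one accurate on the first half of the data,
# the other accurate on the second half.
losses = np.array([[0.01, 0.02, 5.0, 4.0],
                   [3.0, 6.0, 0.03, 0.01]])
print(generalized_mean_loss(losses))  # dominated by the better theory per point
```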

    DataGrinder: Fast, Accurate, Fully non-Parametric Classification Approach Using 2D Convex Hulls

    It has been a long time since data mining technologies made their way into the field of data management. Classification is one of the most important data mining tasks, used for label prediction, categorization of objects into groups, advertisement, and data management. In this paper, we focus on the standard classification problem of predicting unknown labels in Euclidean space. Most efforts in the Machine Learning community are devoted to probabilistic methods that are heavy on calculus and linear algebra. Most of these techniques have scalability issues for big data and are hardly parallelizable if they are to maintain their high accuracy in their standard form. Sampling, using many small parallel classifiers, is a new direction for improving scalability. In this paper, rather than conventional sampling methods, we focus on a discrete classification algorithm with O(n) expected running time. Our approach performs a task similar to sampling methods, but uses column-wise sampling of data rather than the row-wise sampling used in the literature, and it is completely deterministic. Our algorithm proposes a way of combining 2D convex hulls in order to achieve high classification accuracy and scalability at the same time. First, we thoroughly describe and prove our O(n) algorithm for finding the convex hull of a point set in 2D. Then, we show experimentally that the classifier built on this idea is very competitive with existing sophisticated classification algorithms included in commercial statistical applications such as MATLAB
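    For reference, a standard monotone-chain 2D convex hull is sketched below. It runs in O(n log n) because of the sort, so it is only a generic stand-in for the hull-building step, not the paper's O(n) expected-time construction.

```python
# Andrew's monotone-chain convex hull in 2D (O(n log n) due to sorting).
# Generic stand-in for the hull-building step; the paper's own construction
# claims O(n) expected time.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull_2d(points):
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # concatenate, dropping duplicate endpoints

print(convex_hull_2d([(0, 0), (1, 1), (2, 0), (1, 0.2), (0, 2), (2, 2)]))
```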

    DCSVM: Fast Multi-class Classification using Support Vector Machines

    We present DCSVM, an efficient algorithm for multi-class classification using Support Vector Machines. DCSVM is a divide-and-conquer algorithm which relies on data sparsity in high-dimensional space and performs a smart partitioning of the whole training data set into disjoint subsets that are easily separable. A single prediction performed between two partitions eliminates at once one or more classes in one partition, leaving only a reduced number of candidate classes for subsequent steps. The algorithm continues recursively, reducing the number of classes at each step, until a final binary decision is made between the last two classes left in the competition. In the best-case scenario, our algorithm makes a final decision between k classes in O(log k) decision steps, and in the worst-case scenario it makes a final decision in k-1 steps, which is no worse than existing techniques
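    The recursive elimination idea can be pictured with a simplified sketch that halves the set of candidate classes at each step using a binary SVM; the halving heuristic is an assumption for illustration, not DCSVM's data-driven partitioning and class-elimination rule.

```python
# Simplified divide-and-conquer multi-class prediction: recursively split the
# remaining candidate classes into two groups, let a binary SVM decide which
# group the query point belongs to, and recurse until one class is left.
# Illustrative scheme in the spirit of the abstract, not the exact DCSVM rule.
import numpy as np
from sklearn.svm import SVC

def dc_predict(x, classes, X_train, y_train):
    if len(classes) == 1:
        return classes[0]
    left, right = classes[: len(classes) // 2], classes[len(classes) // 2:]
    mask = np.isin(y_train, classes)
    X_sub = X_train[mask]
    y_sub = np.isin(y_train[mask], left).astype(int)   # 1 = left group, 0 = right
    clf = SVC(kernel="rbf").fit(X_sub, y_sub)          # separates the two groups
    side = clf.predict(x.reshape(1, -1))[0]
    return dc_predict(x, left if side == 1 else right, X_train, y_train)
```

    In practice the binary classifiers would be trained once and cached rather than refit for every query; the sketch refits them only to keep the recursion self-contained.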

    Prediction-Based Task Assignment in Spatial Crowdsourcing (Technical Report)

    Spatial crowdsourcing refers to a system that periodically assigns nearby spatial tasks (e.g., taking photos or videos at some spatial locations) to location-based workers. Previous works on spatial crowdsourcing usually designed task assignment strategies that maximize some assignment score, based only on the workers/tasks available in the system at the time of assignment. These strategies may achieve only local optimality, because they neglect future workers/tasks that may join the system. In contrast, in this paper, we aim to achieve "globally" optimal task assignments by considering not only present but also future (predicted) workers/tasks. Specifically, we formalize an important problem, namely prediction-based spatial crowdsourcing (PB-SC), which seeks a "globally" optimal strategy for worker-and-task assignment over both present and predicted task/worker locations, such that the total assignment quality score is maximized under a traveling-budget constraint. We design an effective grid-based prediction method to estimate future spatial distributions of workers/tasks, and then utilize the predictions in our task assignment procedure. We prove that the PB-SC problem is NP-hard and thus intractable. Therefore, we propose efficient approximate algorithms to tackle the PB-SC problem, including greedy and divide-and-conquer (D&C) approaches, which can efficiently assign workers to spatial tasks with high quality scores and low budget consumption by considering both current and future task/worker distributions. Through extensive experiments, we demonstrate the efficiency and effectiveness of our PB-SC processing approaches on real/synthetic data. Comment: 15 pages
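    The greedy approach mentioned above can be pictured with a toy sketch that repeatedly commits the best-scoring feasible worker-task pair until the traveling budget is exhausted; the score and travel_cost callables are hypothetical placeholders for the paper's quality score and cost model, and the prediction component is omitted.

```python
# Toy greedy worker-task assignment under a total traveling budget.
# score(w, t) and travel_cost(w, t) are hypothetical placeholders for the
# paper's quality score and cost model.
def greedy_assign(workers, tasks, score, travel_cost, budget):
    pairs = sorted(((score(w, t), w, t) for w in workers for t in tasks),
                   key=lambda p: p[0], reverse=True)   # best-scoring pairs first
    assigned, used_workers, used_tasks, spent = [], set(), set(), 0.0
    for s, w, t in pairs:
        c = travel_cost(w, t)
        if w in used_workers or t in used_tasks or spent + c > budget:
            continue                                   # infeasible pair, skip it
        assigned.append((w, t))
        used_workers.add(w)
        used_tasks.add(t)
        spent += c
    return assigned, spent
```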

    A Divide-and-Conquer Bayesian Approach to Large-Scale Kriging

    We propose a three-step divide-and-conquer strategy within the Bayesian paradigm that delivers massive scalability for any spatial process model. We partition the data into a large number of subsets, apply a readily available Bayesian spatial process model to every subset in parallel, and optimally combine the posterior distributions estimated across all the subsets into a pseudo-posterior distribution that conditions on the entire data. The combined pseudo-posterior distribution replaces the full-data posterior distribution for predicting the responses at arbitrary locations and for inference on the model parameters and spatial surface. Based on distributed Bayesian inference, our approach is called "Distributed Kriging" (DISK) and offers significant advantages in massive data applications where the full data are stored across multiple machines. We show theoretically that the Bayes L2-risk of the DISK posterior distribution achieves the near-optimal convergence rate in estimating the true spatial surface for various types of covariance functions, and we provide upper bounds on the number of subsets as a function of the full sample size. The model-free feature of DISK is demonstrated by scaling posterior computations in spatial process models with a stationary full-rank and a nonstationary low-rank Gaussian process (GP) prior. A variety of simulations and a geostatistical analysis of Pacific Ocean sea surface temperature data validate our theoretical results. Comment: 29 pages, including 4 figures and 5 tables
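    As a toy illustration of the combine step, the sketch below merges subset posterior draws of a single parameter with a simple precision-weighted, consensus-style average; this is an assumed stand-in for didactic purposes, not the paper's actual DISK combination rule.

```python
# Toy consensus-style combination of subset posterior draws for one scalar
# parameter (in the spirit of divide-and-conquer Bayes, NOT the exact DISK
# combination described in the paper).
import numpy as np

def combine_subset_draws(subset_draws):
    """subset_draws: list of 1-D arrays of equal length, each holding posterior
    draws of the same parameter obtained from one data subset."""
    draws = np.vstack(subset_draws)               # shape (n_subsets, n_draws)
    weights = 1.0 / draws.var(axis=1, ddof=1)     # precision weight per subset
    weights /= weights.sum()
    # Weighted average of the k-th draw across subsets approximates a draw
    # from a combined pseudo-posterior.
    return (weights[:, None] * draws).sum(axis=0)

# Example: three subsets whose posteriors roughly agree.
rng = np.random.default_rng(0)
subsets = [rng.normal(loc=2.0 + 0.1 * i, scale=0.5, size=1000) for i in range(3)]
print(combine_subset_draws(subsets).mean())
```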

    SparseDTW: A Novel Approach to Speed up Dynamic Time Warping

    We present a new space-efficient approach, SparseDTW, to compute the Dynamic Time Warping (DTW) distance between two time series that always yields the optimal result. This is in contrast to other known approaches, which typically sacrifice optimality to attain space efficiency. The main idea behind our approach is to dynamically exploit the existence of similarity and/or correlation between the time series: the more similar the time series are, the less space is required to compute the DTW between them. To the best of our knowledge, all other techniques to speed up DTW impose a priori constraints and do not exploit similarity characteristics that may be present in the data. We conduct experiments and demonstrate that SparseDTW outperforms previous approaches. Comment: 17 pages
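    For context, the classic quadratic-time-and-space DTW recursion is sketched below; SparseDTW's contribution is to fill only a sparse, similarity-guided subset of this cost matrix while still returning the optimal distance, which the sketch does not attempt to reproduce.

```python
# Classic dynamic-programming DTW (O(n*m) time and space), shown for context.
# SparseDTW instead fills only a sparse subset of this cost matrix, guided by
# similarity between the two series, while still returning the optimal value.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2          # local squared distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw(np.array([0.0, 1.0, 2.0, 1.0]), np.array([0.0, 1.0, 1.0, 2.0, 1.0])))
```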

    Max-Diversity Distributed Learning: Theory and Algorithms

    We study the risk performance of distributed learning for regularized empirical risk minimization with fast convergence rates, substantially improving the error analysis of existing divide-and-conquer based distributed learning. An interesting theoretical finding is that the larger the diversity of each local estimate, the tighter the risk bound. This theoretical analysis motivates us to devise an effective max-diversity distributed learning algorithm (MDD). Experimental results show that MDD can outperform existing divide-and-conquer methods at the cost of slightly more time. Theoretical analysis and empirical results demonstrate that our proposed MDD is sound and effective
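    A minimal sketch of the plain divide-and-conquer baseline that such analyses start from: fit a regularized estimator on each disjoint data partition and average the local predictions. The ridge models, the random split, and the parameter names are illustrative choices; MDD's max-diversity construction of the local estimates is not reproduced here.

```python
# Baseline divide-and-conquer regularized regression: fit ridge models on
# disjoint data partitions and average their predictions. This is the plain
# averaging baseline, not MDD's max-diversity variant.
import numpy as np
from sklearn.linear_model import Ridge

def dac_fit(X, y, n_parts=4, alpha=1.0):
    idx = np.array_split(np.random.permutation(len(X)), n_parts)  # disjoint splits
    return [Ridge(alpha=alpha).fit(X[i], y[i]) for i in idx]      # one local model each

def dac_predict(models, X):
    return np.mean([m.predict(X) for m in models], axis=0)        # average local estimates
```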

    Two-stage Best-scored Random Forest for Large-scale Regression

    We propose a novel method designed for large-scale regression problems, namely the two-stage best-scored random forest (TBRF). "Best-scored" means selecting the one regression tree with the best empirical performance out of a certain number of purely random regression tree candidates, and "two-stage" means dividing the original random tree splitting procedure into two stages: in stage one, the feature space is partitioned into non-overlapping cells; in stage two, child trees grow separately on these cells. The strengths of this algorithm can be summarized as follows. First, the pure randomness in TBRF leads to almost optimal learning rates and also makes ensemble learning possible, which resolves the boundary discontinuities that have long plagued existing algorithms. Second, the two-stage procedure paves the way for parallel computing, leading to computational efficiency. Last but not least, TBRF can serve as an inclusive framework in which different mainstream regression strategies, such as linear predictors and least squares support vector machines (LS-SVMs), can be incorporated as value-assignment approaches on the leaves of the child trees, depending on the characteristics of the underlying data sets. Numerical comparisons with other state-of-the-art methods on several large-scale real data sets validate the promising prediction accuracy and high computational efficiency of our algorithm
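    The two-stage structure can be pictured with a small sketch: stage one partitions the feature space into cells (here crudely via a shallow tree's leaves, as an assumed stand-in for the purely random partition), and stage two fits an independent model inside each cell. The best-scored selection among random candidate trees and the alternative leaf value-assignment methods are omitted.

```python
# Toy two-stage regressor: stage one partitions the feature space with a
# shallow tree (a crude stand-in for the paper's random partition); stage two
# fits a separate model on the data falling in each cell.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

class TwoStageRegressor:
    def fit(self, X, y, max_depth=3):
        self.partition = DecisionTreeRegressor(max_depth=max_depth).fit(X, y)
        cells = self.partition.apply(X)               # cell id per training point
        self.models = {c: LinearRegression().fit(X[cells == c], y[cells == c])
                       for c in np.unique(cells)}     # one local model per cell
        return self

    def predict(self, X):
        cells = self.partition.apply(X)               # route each point to its cell
        return np.array([self.models[c].predict(x.reshape(1, -1))[0]
                         for c, x in zip(cells, X)])
```

    Because each cell's model sees only the data routed to it, the per-cell fits can run in parallel, which mirrors the computational argument made in the abstract.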