
    Computing convexity properties of images on a pyramid computer

    We present efficient parallel algorithms for using a pyramid computer to determine convexity properties of digitized black/white pictures and labeled figures. Algorithms are presented for deciding convexity, identifying extreme points of convex hulls, and using extreme points in a variety of fashions. For a pyramid computer with a base of n simple processing elements arranged in an n^{1/2} × n^{1/2} square, the running times of the algorithms range from Θ(log n) to find the extreme points of a convex figure in a digitized picture, through Θ(n^{1/6}) to find the diameter of a labeled figure and Θ(n^{1/4} log n) to find the extreme points of every figure in a digitized picture, to Θ(n^{1/2}) to find the extreme points of every labeled set of processing elements. Our results show that the pyramid computer can be used to obtain efficient solutions to nontrivial problems in image analysis. We also show the sensitivity of efficient pyramid-computer algorithms to the rate at which essential data can be compressed. Finally, we show that a wide variety of techniques are needed to make full and efficient use of the pyramid architecture.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/41351/1/453_2005_Article_BF01759066.pd
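    The abstract does not reproduce the algorithms, and the pyramid routines depend on the machine's interconnection for their Θ(log n) behavior. Purely as a point of reference for the object they compute, here is a plain serial sketch (Andrew's monotone chain) that extracts the extreme points of the black figure in a small digitized picture; the picture and the O(k log k) serial method are illustrative only, not the paper's parallel algorithm.

```python
# Serial sketch of the task the pyramid algorithms parallelize: given a
# digitized black/white picture, report the extreme points (convex hull
# vertices) of the black figure. Illustrative only; the pyramid computer
# does this in Theta(log n) for a convex figure.

def extreme_points(image):
    """Return the convex hull vertices of the black (1) pixels,
    via Andrew's monotone chain; collinear points are dropped."""
    pts = sorted((x, y) for y, row in enumerate(image)
                 for x, v in enumerate(row) if v == 1)
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

picture = [[0, 0, 0, 0],
           [0, 1, 1, 0],
           [0, 1, 1, 1],
           [0, 0, 0, 0]]
print(extreme_points(picture))  # [(1, 1), (2, 1), (3, 2), (1, 2)]
```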

    Detection of Parthenium Weed (Parthenium hysterophorus L.) and Its Growth Stages Using Artificial Intelligence

    Parthenium weed (Parthenium hysterophorus L. (Asteraceae)), native to the Americas, is in the top 100 most invasive plant species in the world. In Australia, it is an annual weed (herb/shrub) of national significance, especially in the state of Queensland, where it has infested both agricultural and conservation lands, including riparian corridors. Effective control strategies for this weed (pasture management, biological control, and herbicide usage) require populations to be detected and mapped. However, the mapping is made difficult by the varying nature of the infested landscapes (e.g., uneven terrain). This paper proposes a novel method to detect and map parthenium populations in simulated pastoral environments using Red-Green-Blue (RGB) and/or hyperspectral imagery aided by artificial intelligence. Two datasets were collected in a controlled environment using a series of parthenium and naturally co-occurring, non-parthenium (monocot) plants. RGB images were processed with a YOLOv4 Convolutional Neural Network (CNN) implementation, achieving an overall accuracy of 95% for detection and 86% for classification of flowering and non-flowering stages of the weed. An XGBoost classifier was used for pixel classification of the hyperspectral dataset, achieving a classification accuracy of 99% for each parthenium weed growth-stage class; all materials received a discernible colour mask. When parthenium and non-parthenium plants were artificially combined in various permutations, the pixel classification accuracy was 99% for each parthenium and non-parthenium class, again with all materials receiving an accurate and discernible colour mask. Performance metrics indicate that our proposed processing pipeline can be used in the preliminary design of parthenium weed detection strategies, and can be extended to automated processing of RGB and hyperspectral data collected by airborne unmanned aerial vehicles (UAVs). The findings also demonstrate the potential for images collected in a controlled glasshouse environment to be used in the preliminary design of invasive weed detection strategies in the field.
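    The hyperspectral stage is described only at a high level. The sketch below shows per-pixel classification of a hyperspectral cube with XGBoost in that spirit; the band count, class names, palette, and synthetic data are invented for illustration and do not reproduce the authors' pipeline.

```python
# Hypothetical sketch: per-pixel classification of a hyperspectral cube
# with XGBoost, followed by a colour mask. All numbers and labels are
# illustrative assumptions, not the paper's configuration.
import numpy as np
import xgboost as xgb

BANDS = 128                                 # assumed spectral band count
CLASSES = ["background", "parthenium_flowering",
           "parthenium_vegetative", "monocot"]

# Fake training data: one spectrum (row) per labelled pixel.
rng = np.random.default_rng(0)
X_train = rng.random((2000, BANDS))
y_train = rng.integers(0, len(CLASSES), size=2000)

model = xgb.XGBClassifier(n_estimators=100, max_depth=4,
                          eval_metric="mlogloss")
model.fit(X_train, y_train)

# Classify every pixel of an (H, W, BANDS) cube and build a colour mask.
cube = rng.random((64, 64, BANDS))
labels = model.predict(cube.reshape(-1, BANDS)).reshape(64, 64)
palette = np.array([[0, 0, 0], [228, 26, 28],
                    [255, 255, 51], [55, 126, 184]], dtype=np.uint8)
colour_mask = palette[labels]               # (64, 64, 3): one colour per class
print(colour_mask.shape)
```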

    Vision algorithms for hypercube machines

    Several commercial hypercube parallel processors with the potential to deliver massive parallelism cost-effectively have been announced recently. They open the door to a wide variety of application areas that could benefit from parallelism. Computer vision is one of these application areas. This paper develops a general model for hypercube machines, and uses it to show how vision algorithms can be executed on hypercubes. In particular, the steps in the problem of thick-film inspection are used as a concrete example. The time needed to complete a typical inspection is used to demonstrate the performance of hypercube machines. Experimental results from a hypercube machine illustrate the potential use of such machines.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/26820/1/0000379.pd
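    The paper's model is not reproduced in the abstract; what follows is a minimal sketch of the interconnection any such model builds on. In a d-dimensional hypercube, node i is linked to the d nodes whose ids differ from i in exactly one bit, so a global reduction (e.g., summing feature counts during inspection) takes d = log n pairwise-exchange steps. The simulation below is illustrative, not a vision algorithm from the paper.

```python
# Hypercube connectivity and a dimension-ordered all-reduce, simulated
# serially. Assumes the number of nodes is a power of two.

def neighbors(node: int, d: int) -> list[int]:
    """All ids reachable from `node` over one hypercube link."""
    return [node ^ (1 << k) for k in range(d)]

def hypercube_sum(values: list[int]) -> list[int]:
    """After step k every node holds the sum over the sub-cube spanned
    by bits 0..k, so all nodes hold the total after log n steps."""
    d = (len(values) - 1).bit_length()
    vals = list(values)
    for k in range(d):
        vals = [vals[i] + vals[i ^ (1 << k)] for i in range(len(vals))]
    return vals

print(neighbors(5, 3))                           # [4, 7, 1]
print(hypercube_sum([1, 2, 3, 4, 5, 6, 7, 8]))   # every node holds 36
```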

    RELEAF: An Algorithm for Learning and Exploiting Relevance

    Recommender systems, medical diagnosis, network security, etc., require on-going learning and decision-making in real time. These -- and many others -- represent perfect examples of the opportunities and difficulties presented by Big Data: the available information often arrives from a variety of sources and has diverse features, so that learning from all the sources may be valuable but integrating what is learned is subject to the curse of dimensionality. This paper develops and analyzes algorithms that allow efficient learning and decision-making while avoiding the curse of dimensionality. We formalize the information available to the learner/decision-maker at a particular time as a context vector which the learner should consider when taking actions. In general the context vector is very high dimensional, but in many settings the most relevant information is embedded in only a few relevant dimensions. If these relevant dimensions were known in advance, the problem would be simple -- but they are not. Moreover, the relevant dimensions may be different for different actions. Our algorithm learns the relevant dimensions for each action and makes decisions based on what it has learned. Formally, we build on the structure of a contextual multi-armed bandit by adding and exploiting a relevance relation. We prove a general regret bound for our algorithm whose time order depends only on the maximum number of relevant dimensions among all the actions; in the special case where the relevance relation is single-valued (a function), this reduces to \tilde{O}(T^{2(\sqrt{2}-1)}). In the absence of a relevance relation, the best known contextual bandit algorithms achieve regret \tilde{O}(T^{(D+1)/(D+2)}), where D is the full dimension of the context vector.
    Comment: to appear in IEEE Journal of Selected Topics in Signal Processing, 201
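    To make the relevance-relation idea concrete, here is a toy bandit in which each action's reward secretly depends on a single context dimension, and the learner scores each dimension by how much its per-bin reward estimates vary. This is an illustrative simplification with an invented scoring rule, not the paper's RELEAF algorithm or its regret analysis.

```python
# Toy single-valued-relevance bandit: each action's reward depends on one
# hidden context dimension; the learner tracks per-(action, dim, bin)
# reward means and treats the highest-variance dimension as relevant.
import numpy as np

rng = np.random.default_rng(1)
D, ACTIONS, BINS, T = 10, 3, 4, 3000
RELEVANT = [0, 3, 7]                    # hidden relevant dimension per action

counts = np.ones((ACTIONS, D, BINS))    # pulls per cell (start at 1)
sums = np.zeros((ACTIONS, D, BINS))     # reward totals per cell

def reward(a, x):
    return float(x[RELEVANT[a]] > 0.5) + 0.1 * rng.standard_normal()

for t in range(1, T + 1):
    x = rng.random(D)
    b = np.minimum((x * BINS).astype(int), BINS - 1)  # bin of x per dim
    values = np.empty(ACTIONS)
    for a in range(ACTIONS):
        means = sums[a] / counts[a]                   # (D, BINS) estimates
        d_star = int(means.var(axis=1).argmax())      # apparently relevant dim
        bonus = np.sqrt(2 * np.log(t + 1) / counts[a, d_star, b[d_star]])
        values[a] = means[d_star, b[d_star]] + bonus  # optimistic value
    a = int(values.argmax())
    r = reward(a, x)
    counts[a, np.arange(D), b] += 1                   # update all dims' bins
    sums[a, np.arange(D), b] += r

for a in range(ACTIONS):
    learned = int((sums[a] / counts[a]).var(axis=1).argmax())
    print(f"action {a}: learned dim {learned}, true dim {RELEVANT[a]}")
```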

    Designing labeled graph classifiers by exploiting the Rényi entropy of the dissimilarity representation

    Representing patterns as labeled graphs is becoming increasingly common in the broad field of computational intelligence. Accordingly, a wide repertoire of pattern recognition tools, such as classifiers and knowledge discovery procedures, is nowadays available and tested on various datasets of labeled graphs. However, the design of effective learning procedures operating in the space of labeled graphs is still a challenging problem, especially from the computational complexity viewpoint. In this paper, we present a major improvement of a general-purpose classifier for graphs, which is built on an interplay between dissimilarity representation, clustering, information-theoretic techniques, and evolutionary optimization algorithms. The improvement focuses on a key subroutine devised to compress the input data. We prove several theorems which are fundamental to the setting of the parameters controlling this compression operation. We demonstrate the effectiveness of the resulting classifier by benchmarking the developed variants on well-known datasets of labeled graphs, considering as distinct performance indicators the classification accuracy, computing time, and parsimony in terms of structural complexity of the synthesized classification models. The results show state-of-the-art test-set accuracy and a considerable speed-up in computing time.
    Comment: Revised version
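    The abstract does not specify the entropy estimator. As a hedged sketch of the kind of information-theoretic quantity such a compression subroutine can tune, here is the classic Parzen-window estimate of the quadratic (alpha = 2) Rényi entropy over rows of a dissimilarity representation; the estimator, the kernel width, and the omitted normalization constants are illustrative assumptions, not the paper's exact formulation.

```python
# Quadratic Renyi entropy of a dissimilarity representation: each row
# describes one pattern by its distances to a set of prototypes. Lower
# entropy means more redundancy, i.e. the representation compresses well.
import numpy as np

def renyi2_entropy(X: np.ndarray, sigma: float = 1.0) -> float:
    """H_2(X) ~= -log( mean_ij G_sigma(x_i - x_j) ) with a Gaussian
    kernel (normalization constants omitted)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return float(-np.log(np.exp(-sq / (2 * sigma ** 2)).mean()))

rng = np.random.default_rng(0)
tight = rng.normal(1.0, 0.05, size=(60, 8))    # nearly redundant rows
spread = rng.normal(1.0, 1.00, size=(60, 8))   # informative rows
print(renyi2_entropy(tight), renyi2_entropy(spread))  # tight is much lower
```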

    The history of degenerate (bipartite) extremal graph problems

    This paper is a survey of Extremal Graph Theory, primarily focusing on the case when one of the excluded graphs is bipartite. We give an introduction to this field and describe many important results, methods, problems, and constructions.
    Comment: 97 pages, 11 figures, many problems. This is the preliminary version of our survey presented at Erdos 100. In this version 2 only a citation was completed.

    COMPOSE: Compacted object sample extraction, a framework for semi-supervised learning in nonstationary environments

    An increasing number of real-world applications are associated with streaming data drawn from drifting and nonstationary distributions. These applications demand new algorithms that can learn and adapt to such changes, also known as concept drift. Proper characterization of such data with existing approaches typically requires a substantial amount of labeled instances, which may be difficult, expensive, or even impractical to obtain. In this thesis, compacted object sample extraction (COMPOSE) is introduced: a computational geometry-based framework for learning from nonstationary streaming data where labels are unavailable (or presented very sporadically) after initialization. The feasibility and performance of the algorithm are evaluated on several synthetic and real-world data sets, which present various scenarios of initially labeled streaming environments. On carefully designed synthetic data sets, we also compare the performance of COMPOSE against the optimal Bayes classifier, as well as against the arbitrary subpopulation tracker algorithm, which addresses a similar environment referred to as extreme verification latency. Furthermore, using the real-world National Oceanic and Atmospheric Administration weather data set, we demonstrate that COMPOSE is competitive even with a well-established and fully supervised nonstationary learning algorithm that receives labeled data in every batch.
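    To make the control flow of such a framework concrete, here is a heavily simplified sketch of a COMPOSE-style loop: label each incoming batch with the current core samples, then compact each class back to a core. The published framework uses alpha-shapes for compaction; the stand-in below simply keeps the points closest to the class mean, and every detail (classifier, shrink factor, drift model) is an illustrative assumption, not the authors' implementation.

```python
# COMPOSE-style loop on a drifting two-class stream (simplified sketch).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def compose_step(X_lab, y_lab, X_new, shrink=0.5):
    """Pseudo-label the new batch with the current core samples, then
    compact each class to the fraction of points nearest its mean."""
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_lab, y_lab)
    y_new = clf.predict(X_new)
    X_core, y_core = [], []
    for c in np.unique(y_new):
        pts = X_new[y_new == c]
        d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
        keep = d.argsort()[: max(1, int(shrink * len(pts)))]
        X_core.append(pts[keep])
        y_core.append(np.full(len(keep), c))
    return np.vstack(X_core), np.concatenate(y_core)

rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [3.0, 0.0]])
X = np.vstack([rng.normal(m, 0.5, (50, 2)) for m in means])
y = np.repeat([0, 1], 50)
for batch in range(10):
    means += 0.2                      # concept drift: class means move
    X_new = np.vstack([rng.normal(m, 0.5, (50, 2)) for m in means])
    X, y = compose_step(X, y, X_new)  # no true labels after initialization
print([X[y == c].mean(axis=0).round(2) for c in (0, 1)])  # tracks the drift
```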

    S2: An Efficient Graph Based Active Learning Algorithm with Application to Nonparametric Classification

    This paper investigates the problem of active learning for binary label prediction on a graph. We introduce a simple and label-efficient algorithm called S2 for this task. At each step, S2 selects the vertex to be labeled based on the structure of the graph and all previously gathered labels. Specifically, S2 queries for the label of the vertex that bisects the *shortest shortest* path between any pair of oppositely labeled vertices. We present a theoretical estimate of the number of queries S2 needs in terms of a novel parametrization of the complexity of binary functions on graphs. We also present experimental results demonstrating the performance of S2 on both real and synthetic data. While other graph-based active learning algorithms have shown promise in practice, our algorithm is the first with both good performance and theoretical guarantees. Finally, we demonstrate the implications of the S2 algorithm for the theory of nonparametric active learning. In particular, we show that S2 achieves near minimax optimal excess risk for an important class of nonparametric classification problems.
    Comment: A version of this paper appears in the Conference on Learning Theory (COLT) 201
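    The query rule stated above is concrete enough to sketch directly: among all shortest paths joining oppositely labeled pairs, take the shortest one overall and query the vertex that bisects it. The random-query fallback and stopping rule of the full algorithm are omitted here, and the oracle and toy graph are stand-ins.

```python
# Sketch of the S2 bisection rule on a path graph with one label change.
import networkx as nx

def s2_query(G, labels):
    """Next vertex to label, or None once the shortest shortest path
    between oppositely labeled vertices is a single edge (cut found)."""
    pos = [v for v, l in labels.items() if l > 0]
    neg = [v for v, l in labels.items() if l < 0]
    best = None
    for u in pos:
        for w in neg:
            try:
                path = nx.shortest_path(G, u, w)
            except nx.NetworkXNoPath:
                continue
            if best is None or len(path) < len(best):
                best = path
    if best is None or len(best) <= 2:
        return None
    return best[len(best) // 2]       # midpoint of shortest shortest path

G = nx.path_graph(10)
oracle = lambda v: 1 if v <= 4 else -1   # hidden labels; cut between 4 and 5
labels = {0: oracle(0), 9: oracle(9)}
while (v := s2_query(G, labels)) is not None:
    labels[v] = oracle(v)                # binary-search-like bisection
print(sorted(labels))                    # queries cluster around the cut
```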

    Vertex sparsification and universal rounding algorithms

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 125-129).
    Suppose we are given a gigantic communication network, but are only interested in a small number of nodes (clients). There are many routing problems we could be asked to solve for our clients. Is there a much smaller network - one we could write down on a sheet of paper and put in our pocket - that approximately preserves all the relevant communication properties of the original network? As we will demonstrate, the answer to this question is YES, and we call this smaller network a vertex sparsifier. In fact, if we are asked to solve a sequence of optimization problems characterized by cuts or flows, we can compute a good vertex sparsifier ONCE and discard the original network. We can run our algorithms (or approximation algorithms) on the vertex sparsifier as a proxy - and still recover approximately optimal solutions in the original network. This novel pattern saves both space (because the network we store is much smaller) and time (because our algorithms run on a much smaller graph). Additionally, we apply these ideas to obtain a master theorem for graph partitioning problems: as long as the integrality gap of a standard linear programming relaxation is bounded on trees, the integrality gap is at most a logarithmic factor larger for general networks. This result implies optimal bounds for many well-studied graph partitioning problems as a special case, and even yields optimal bounds for more challenging problems that had not been studied before. Morally, these results are all based on the idea that even though the structure of optimal solutions can be quite complicated, these solution values can be approximated by crude (even linear) functions.
    by Ankur Moitra.
    Ph.D.
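    As a hedged illustration of the object a cut-type vertex sparsifier must preserve: for a terminal set K in a capacitated network G, the terminal cut function h(S) = mincut_G(S, K \ S) over subsets S of K. A sparsifier H on K alone is good if its own cut function approximates h. The brute-force computation below is exponential in |K| (and enumerates each cut twice, once per side), so it is purely illustrative; the grid network and terminals are invented for the example.

```python
# Terminal cut function of a small network, via super-source/sink max-flow.
from itertools import combinations
import networkx as nx

def terminal_cut_function(G, K):
    """h(S) = min cut separating S from K \\ S, for every nonempty
    proper subset S of the terminal set K."""
    h = {}
    for r in range(1, len(K)):
        for S in combinations(K, r):
            H = G.copy()
            for v in S:
                H.add_edge("s", v, capacity=float("inf"))
            for v in set(K) - set(S):
                H.add_edge(v, "t", capacity=float("inf"))
            h[S] = nx.maximum_flow_value(H, "s", "t")
    return h

# Toy network: a 4x4 unit-capacity grid with the four corners as terminals.
G = nx.grid_2d_graph(4, 4)
nx.set_edge_attributes(G, 1, "capacity")
K = [(0, 0), (0, 3), (3, 0), (3, 3)]
for S, cut in terminal_cut_function(G, K).items():
    print(S, "vs rest:", cut)
```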