146 research outputs found

    Multi-sensor fusion on mobile platforms

    An important goal for many mobile platforms---terrestrial, aquatic, or airborne---is reliable, accurate, and timely sensing of the world around them. The PRIME Lab has been investigating multi-sensor fusion for many applications, including explosive hazard detection and infrastructure inspection, from terrestrial vehicles and unmanned aerial vehicles (UAVs). This talk discusses new developments in multi-sensor fusion using radar, imaging sensors, and LIDAR, encompassing advances that range from novel signal processing approaches for mobile ground-penetrating radar to more theoretical approaches for optimal fusion of measurements from multi-modal sensors. The area of sensor fusion is explored both from a practical, application-focused standpoint and from a theoretical, learning-theory approach to information fusion.

    Clustering in relational data and ontologies

    Ph.D. dissertation, University of Missouri--Columbia, 2010. Advisor: Dr. James M. Keller.
    This dissertation studies the problem of clustering objects represented by relational data. This is a pertinent problem, as many real-world data sets can only be represented by relational data, for which object-based clustering algorithms are not designed. Relational data are encountered in many fields, including biology, management, industrial engineering, and the social sciences. Unlike numerical object data, which are represented by a set of feature values (e.g., height, weight, shoe size) of an object, relational object data are the numerical values of (dis)similarity between objects. For this reason, conventional cluster analysis methods such as k-means and fuzzy c-means cannot be used directly with relational data. I focus on three main problems of cluster analysis of relational data: (i) tendency prior to clustering -- how many clusters are there?; (ii) partitioning of objects -- which objects belong to which cluster?; and (iii) validity of the resultant clusters -- are the partitions "good"? Analyses are included in this dissertation that prove that the Visual Assessment of cluster Tendency (VAT) algorithm has a direct relation to single-linkage hierarchical clustering and Dunn's cluster validity index. These analyses are important to the development of two novel clustering algorithms: CLODD (CLustering in Ordered Dissimilarity Data) and ReSL (Rectangular Single-Linkage clustering). Last, this dissertation addresses clustering in ontologies; examples include the Gene Ontology, the MeSH ontology, patient medical records, and web documents. I apply an extension to the Self-Organizing Map (SOM) to produce a new algorithm, the OSOM (Ontological Self-Organizing Map). OSOM provides visualization and linguistic summarization of ontology-based data.
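    The VAT algorithm mentioned above reorders a dissimilarity matrix with a Prim-like single-linkage ordering so that clusters appear as dark diagonal blocks. A minimal sketch of that reordering step, assuming a symmetric dissimilarity matrix given as nested lists (the example matrix is illustrative, not from the dissertation):

```python
def vat_order(D):
    """Return a permutation of object indices via VAT's Prim-like ordering."""
    n = len(D)
    # Start from one endpoint of the largest dissimilarity in D.
    i, _ = max(((a, b) for a in range(n) for b in range(n)),
               key=lambda ab: D[ab[0]][ab[1]])
    order = [i]
    remaining = set(range(n)) - {i}
    while remaining:
        # Next object: the one closest to any already-selected object
        # (the single-linkage connection VAT is known to be related to).
        nxt = min(remaining, key=lambda r: min(D[s][r] for s in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

def reorder(D, order):
    """Apply the permutation to rows and columns of D for display."""
    return [[D[a][b] for b in order] for a in order]

# Two clusters, {0, 2} and {1, 3}, deliberately interleaved.
D = [[0.0, 0.9, 0.1, 0.8],
     [0.9, 0.0, 0.7, 0.2],
     [0.1, 0.7, 0.0, 0.9],
     [0.8, 0.2, 0.9, 0.0]]
print(vat_order(D))  # → [0, 2, 1, 3]: cluster members become adjacent
```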

    Identification of Dialect for Eastern and Southwestern Ojibwe Words Using a Small Corpus

    The Ojibwe language has several dialects that vary to some degree in both spoken and written form. We present a method of using support vector machines to classify two different dialects (Eastern and Southwestern Ojibwe) using a very small corpus of text. Classification accuracy at the sentence level is 90% across a five-fold cross validation and 72% when the sentence-trained model is applied to a data set of individual words. Our code and the word level data set are released openly at https://github.com/evanperson/OjibweDialect
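    The paper trains support vector machines; as a self-contained stand-in (plainly not the paper's SVM), the sketch below classifies text by cosine similarity of character-trigram profiles, a common baseline for dialect and language identification. The tiny "training" strings are invented placeholders, not actual Ojibwe data from the paper's corpus:

```python
from collections import Counter
from math import sqrt

def trigrams(text):
    """Character-trigram counts, with padding so word edges contribute."""
    text = f"  {text.lower()}  "
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[g] * b[g] for g in shared)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def classify(text, profiles):
    """Assign the label whose trigram profile is most similar to the input."""
    return max(profiles, key=lambda label: cosine(trigrams(text), profiles[label]))

profiles = {
    "eastern": trigrams("nishin gdoo zhaa naad"),       # placeholder text
    "southwestern": trigrams("indizhaa niin wiisini"),  # placeholder text
}
print(classify("gdoo zhaa", profiles))  # → eastern
```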

    The arithmetic recursive average as an instance of the recursive weighted power mean

    The aggregation of multiple information sources has a long history and ranges from sensor fusion to the aggregation of individual algorithm outputs and human knowledge. A popular approach to achieve such aggregation is the fuzzy integral (FI), which is defined with respect to a fuzzy measure (FM; i.e., a normal, monotone capacity). In practice, the discrete FI aggregates information contributed by a discrete number of sources through a weighted aggregation (post-sorting), where the weights are captured by an FM that models the typically subjective ‘worth’ of subsets of the overall set of sources. While the combination of FI and FM has been very successful, challenges remain both in regard to the behavior of the resulting aggregation operators—which, for example, do not produce symmetrically mirrored outputs for symmetrically mirrored inputs—and in a manifest difference between the intuitive interpretation of a stand-alone FM and its actual role and impact when used as part of information fusion with an FI. This paper elucidates these challenges and introduces a novel family of recursive average (RAV) operators as an alternative to the FI in aggregation with respect to an FM, focusing specifically on the arithmetic recursive average. The RAV is designed to address the above challenges, while also facilitating fine-grained analysis of the resulting aggregation of different combinations of sources. We provide the mathematical foundations of the RAV and include initial experiments and comparisons to the FI for both numeric and interval-valued data.
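    For reference, the FI baseline the abstract contrasts against can be made concrete with a standard discrete Choquet integral with respect to an FM: sort the inputs, then weight each by the marginal worth its source adds to the already-included subset. The measure values below are illustrative, not taken from the paper:

```python
def choquet(values, g):
    """Discrete Choquet integral.

    values: dict source -> input value h(x).
    g: dict frozenset-of-sources -> measure value, with g(empty set) = 0
       and g(all sources) = 1 (a normal, monotone capacity).
    """
    # Post-sorting step: visit sources so h(x_(1)) >= h(x_(2)) >= ...
    order = sorted(values, key=values.get, reverse=True)
    total, prev = 0.0, 0.0
    subset = frozenset()
    for src in order:
        subset = subset | {src}
        weight = g[subset] - prev  # marginal worth of adding src
        total += values[src] * weight
        prev = g[subset]
    return total

g = {
    frozenset(): 0.0,
    frozenset({"a"}): 0.4, frozenset({"b"}): 0.3, frozenset({"c"}): 0.2,
    frozenset({"a", "b"}): 0.8, frozenset({"a", "c"}): 0.6,
    frozenset({"b", "c"}): 0.5, frozenset({"a", "b", "c"}): 1.0,
}
# 0.9*0.4 + 0.5*(0.8-0.4) + 0.1*(1.0-0.8) = 0.58
print(choquet({"a": 0.9, "b": 0.5, "c": 0.1}, g))
```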

    Data-informed fuzzy measures for fuzzy integration of intervals and fuzzy numbers

    The fuzzy integral (FI) with respect to a fuzzy measure (FM) is a powerful means of aggregating information. The most popular FIs are the Choquet and Sugeno, and most research focuses on these two variants. The arena of the FM is much more populated, including numerically derived FMs such as the Sugeno λ-measure and decomposable measure, expert-defined FMs, and data-informed FMs. The drawback of numerically derived and expert-defined FMs is that one must know something about the relative values of the input sources. However, there are many problems where this information is unavailable, such as crowdsourcing. This paper focuses on data-informed FMs, or those FMs that are computed by an algorithm that analyzes some property of the input data itself, gleaning the importance of each input source from the data they provide. The original instantiation of a data-informed FM is the agreement FM, which assigns high confidence to combinations of sources that numerically agree with one another. This paper extends upon our previous work in data-informed FMs by proposing the uniqueness measure and additive measure of agreement for interval-valued evidence. We then extend data-informed FMs to fuzzy number (FN)-valued inputs. We demonstrate the proposed FMs by aggregating interval and FN evidence with the Choquet and Sugeno FIs for both synthetic and real-world data.
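    The Sugeno λ-measure named above is the classic numerically derived FM: given a density g_i per source, λ is the unique root of 1 + λ = Π(1 + λ·g_i) with λ > −1, λ ≠ 0 (when the densities do not sum to 1). A sketch that finds λ by bisection, with illustrative densities:

```python
def _f(lam, densities):
    """Residual of the λ-measure defining equation at lam."""
    prod = 1.0
    for g in densities:
        prod *= 1.0 + lam * g
    return prod - (1.0 + lam)

def sugeno_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the nonzero root by bisection."""
    s = sum(densities)
    if abs(s - 1.0) < 1e-9:
        return 0.0  # additive case: the measure is already normal
    # If the densities sum to < 1 the root is positive; if > 1 it lies in (-1, 0).
    lo, hi = (1e-9, 1.0) if s < 1.0 else (-1.0 + 1e-9, -1e-9)
    if s < 1.0:
        while _f(hi, densities) < 0.0:  # expand until the root is bracketed
            hi *= 2.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if _f(lo, densities) * _f(mid, densities) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

lam = sugeno_lambda([0.2, 0.3, 0.1])  # densities sum to 0.6, so lam > 0
print(lam)
```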

    Light Field Compression by Residual CNN Assisted JPEG

    Light field (LF) imaging has gained significant attention due to its recent success in 3-dimensional (3D) displaying and rendering, as well as augmented and virtual reality applications. Nonetheless, because of the two extra dimensions, LFs are much larger than conventional images. We develop a JPEG-assisted learning-based technique to reconstruct an LF from a JPEG bitstream with an average bit rate of 0.0047 bits per pixel. For compression, we keep only the LF's center view and compress it with JPEG at 50% quality. Our reconstruction pipeline consists of a small JPEG enhancement network (JPEG-Hance) and a depth estimation network (Depth-Net), followed by view synthesis that warps the enhanced center view. Our pipeline is significantly faster than applying video compression to pseudo-sequences extracted from an LF, in both compression and decompression, while maintaining effective performance. We show that, with a 1% compression time cost and an 18x decompression speedup, our method's reconstructed LFs have a better structural similarity index metric (SSIM) and comparable peak signal-to-noise ratio (PSNR) compared to state-of-the-art video compression techniques used to compress LFs.
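    The last stage of the pipeline warps the enhanced center view into the other LF views using estimated depth. As a toy illustration only, the sketch below warps a tiny grayscale "center view" (nested lists) to a horizontal neighbor with a single constant disparity and nearest-neighbor shifting; the real method uses per-pixel disparities from the learned Depth-Net:

```python
def warp_horizontal(center, disparity):
    """Shift each row by `disparity` pixels; out-of-range samples clamp to the edge."""
    w = len(center[0])
    out = []
    for row in center:
        shifted = [row[min(max(x + disparity, 0), w - 1)] for x in range(w)]
        out.append(shifted)
    return out

view = [[0, 10, 20, 30],
        [5, 15, 25, 35]]
print(warp_horizontal(view, 1))  # → [[10, 20, 30, 30], [15, 25, 35, 35]]
```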

    Roach infestation optimization

    There are many function optimization algorithms based on the collective behavior of natural systems; Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are two of the most popular. This poster presents a new adaptation of the PSO algorithm, entitled Roach Infestation Optimization (RIO), which is inspired by recent discoveries in the social behavior of cockroaches. We present the development of the simple behaviors of the individual agents, which emulate some of the discovered cockroach social behaviors. We also describe a "hungry" version of PSO and RIO, which we aptly call Hungry PSO and Hungry RIO. Comparisons with standard PSO show that Hungry PSO, RIO, and Hungry RIO are all more effective at finding the global optima of a suite of test functions.
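    For context, the standard PSO baseline that RIO adapts can be sketched in a few lines: each particle keeps a velocity blending inertia, a pull toward its personal best, and a pull toward the swarm's global best. The constants and the sphere objective below are conventional illustrative choices, not taken from the poster:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over R^dim with a basic global-best particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere, dim=3)
print(best_val)  # near zero: the sphere's global optimum is the origin
```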

    Efficient modeling and representation of agreement in interval-valued data

    Recently, there has been much research into effective representation and analysis of uncertainty in human responses, with applications in cyber-security, forest and wildlife management, and product development, to name a few. Most of this research has focused on representing the response uncertainty as intervals, e.g., “I give the movie between 2 and 4 stars.” In this paper, we extend the model-based interval agreement approach (IAA) for combining interval data into fuzzy sets and propose the efficient IAA (eIAA) algorithm, which enables efficient representation of and operation on the fuzzy sets produced by IAA (and other interval-based approaches, for that matter). We develop methods for efficiently modeling, representing, and aggregating both crisp and uncertain interval data (where the interval endpoints are intervals themselves). These intervals are assumed to be collected from individual or multiple survey respondents over single or repeated surveys, although, without loss of generality, the approaches put forth in this paper could be used for any interval-based data where representation and analysis is desired. The proposed method is designed to minimize loss of information when transferring the interval-based data into fuzzy set models and then when projecting onto a compressed set of basis functions. We provide full details of eIAA and demonstrate it on real-world and synthetic data.
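    The core idea behind agreement-based interval models is that membership rises where more respondents' intervals overlap. A minimal sketch of that basic agreement membership, mu(x) = (number of intervals containing x) / N, evaluated on a grid; it illustrates the idea only and omits the paper's eIAA efficiency machinery and uncertain-endpoint handling:

```python
def agreement_membership(intervals, xs):
    """Fraction of crisp intervals (lo, hi) covering each grid point x."""
    n = len(intervals)
    return [sum(lo <= x <= hi for lo, hi in intervals) / n for x in xs]

# Three survey respondents rate a movie as an interval of stars.
intervals = [(2.0, 4.0), (3.0, 5.0), (3.5, 4.5)]
xs = [1.0, 2.5, 3.5, 4.0, 5.0]
# → [0.0, 0.333..., 1.0, 1.0, 0.333...]: full agreement around 3.5-4 stars
print(agreement_membership(intervals, xs))
```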