31 research outputs found

    Region of Interest Growing Neural Gas for Real-Time Point Cloud Processing

    Get PDF
    This paper proposes a real-time topological structure learning method based on concentrated/distributed sensing for a 2D/3D point cloud. First, we explain a modified Growing Neural Gas with Utility (GNG-U2) that can simultaneously learn the topological structure of a 3D environment and its color information by using a weight vector. Next, we propose a Region Of Interest Growing Neural Gas (ROI-GNG) for realizing concentrated/distributed sensing in real time. In ROI-GNG, the discount rates of the accumulated error and the utility value vary according to the situation. We show experimental results and discuss the effectiveness of the proposed method.
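
    The abstract above describes GNG-U2-style learning in which accumulated error and utility are decayed with situation-dependent discount rates. As a point of reference, the following is a minimal sketch of a single Growing-Neural-Gas-with-Utility adaptation step; the class, parameter names (eps_winner, eps_neighbor, discount) and the constant discount rate are illustrative assumptions, not the paper's ROI-GNG implementation, and the edge-management and node insertion/removal steps are omitted.

    ```python
    import numpy as np

    class GNGUNode:
        def __init__(self, w):
            self.w = np.asarray(w, dtype=float)  # weight vector (e.g. position + color)
            self.error = 0.0                     # accumulated quantization error
            self.utility = 0.0                   # utility value, as in GNG-U

    def gng_u_step(nodes, x, eps_winner=0.05, eps_neighbor=0.006,
                   discount=0.995, topological_neighbors=()):
        """One adaptation step for a single input sample x (illustrative sketch)."""
        x = np.asarray(x, dtype=float)

        # Find the two best-matching nodes for x
        dists = np.array([np.linalg.norm(n.w - x) for n in nodes])
        s1, s2 = np.argsort(dists)[:2]
        winner = nodes[s1]

        # Accumulate error and utility on the winner (GNG-U bookkeeping)
        winner.error += dists[s1] ** 2
        winner.utility += dists[s2] ** 2 - dists[s1] ** 2

        # Move the winner and its topological neighbours toward the sample
        winner.w += eps_winner * (x - winner.w)
        for n in topological_neighbors:
            n.w += eps_neighbor * (x - n.w)

        # Decay accumulated error and utility; in ROI-GNG this discount rate
        # would vary with the region of interest instead of staying constant
        for n in nodes:
            n.error *= discount
            n.utility *= discount
    ```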

    PCA Beyond The Concept of Manifolds: Principal Trees, Metro Maps, and Elastic Cubic Complexes

    Full text link
    Multidimensional data distributions can have complex topologies and variable local dimensions. To approximate complex data, we propose a new type of low-dimensional "principal object": a principal cubic complex. This complex generalizes linear and non-linear principal manifolds and includes them as a particular case. To construct such an object, we combine a method of topological grammars with the minimization of an elastic energy defined for its embedding into multidimensional data space. The whole complex is presented as a system of nodes and springs and as a product of one-dimensional continua (represented by graphs), and the grammars describe how these continua transform during the process of optimal complex construction. The simplest case of a topological grammar ("add a node", "bisect an edge") is equivalent to the construction of "principal trees", an object useful in many practical applications. We demonstrate how it can be applied to the analysis of bacterial genomes and to the visualization of cDNA microarray data using the "metro map" representation. The preprint is supplemented by an animation, "How the topological grammar constructs branching principal components" (AnimatedBranchingPCA.gif). Comment: 19 pages, 8 figures.
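
    The construction above rests on minimizing an elastic energy over a graph embedded in data space while a topological grammar edits the graph. The sketch below gives an assumed energy of the usual elastic-graph form, a data-approximation term plus edge-stretching and star-bending penalties weighted by lambda_ and mu_, together with the "bisect an edge" grammar operation; the coefficients, data structures and the grammar-selection loop are illustrative, not taken verbatim from the paper.

    ```python
    import numpy as np

    def elastic_energy(X, node_pos, edges, stars, lambda_=0.01, mu_=0.1):
        """Data-approximation + edge-stretching + star-bending terms (illustrative).

        X:        (n_points, dim) data matrix
        node_pos: (n_nodes, dim) positions of graph nodes
        edges:    list of (i, j) node-index pairs
        stars:    list of (center_index, [leaf_indices]) pairs
        """
        # Approximation term: each data point is attached to its nearest node
        d = np.linalg.norm(X[:, None, :] - node_pos[None, :, :], axis=2)
        approx = np.mean(np.min(d, axis=1) ** 2)

        # Edge stretching: springs along the graph edges
        stretch = lambda_ * sum(np.sum((node_pos[i] - node_pos[j]) ** 2)
                                for i, j in edges)

        # Star bending: each star centre is pulled toward the mean of its leaves
        bend = mu_ * sum(np.sum((node_pos[c] - node_pos[leaves].mean(axis=0)) ** 2)
                         for c, leaves in stars)
        return approx + stretch + bend

    def bisect_edge(node_pos, edges, edge_index):
        """Grammar operation "bisect an edge": insert a new node at the midpoint."""
        i, j = edges[edge_index]
        k = len(node_pos)                              # index of the new node
        node_pos = np.vstack([node_pos, (node_pos[i] + node_pos[j]) / 2.0])
        edges = edges[:edge_index] + edges[edge_index + 1:] + [(i, k), (k, j)]
        return node_pos, edges
    ```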

    Optimizing the procedure of grain nutrient predictions in barley via hyperspectral imaging

    Get PDF
    Hyperspectral imaging enables researchers and plant breeders to analyze various traits of interest, such as nutritional value, at high throughput. To achieve this, an optimally designed, reliable calibration model linking the measured spectra with the investigated traits is necessary. In the present study we investigated the impact of different regression models, calibration set sizes and calibration set compositions on prediction performance. For this purpose, we analyzed concentrations of six globally relevant grain nutrients of the wild barley population HEB-YIELD as a case study. The data comprised 1,593 plots, grown in 2015 and 2016 at the locations Dundee and Halle, all of which were analyzed both with traditional laboratory methods and with hyperspectral imaging. The results indicated that a linear regression model based on partial least squares outperformed neural networks in this particular data modelling task. There was a positive relationship between the number of samples in a calibration model and prediction performance, with a local optimum at a calibration set size of ~40% of the total data. The inclusion of samples from several years and locations clearly improved the predictions of the investigated nutrient traits at small calibration set sizes. However, expanding calibration models with additional samples is only useful as long as the new samples increase trait variability. Models obtained in one environment were transferable to other environments only to a limited extent; they should therefore be successively updated with new calibration data to enable reliable prediction of the desired traits. The presented results will assist the design and conceptualization of future hyperspectral imaging projects aiming at reliable predictions, and will in general help to establish practical applications of hyperspectral imaging systems, for instance in plant breeding.
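
    The calibration workflow described above, a partial least squares model fitted on roughly 40% of the plots and validated on the rest, can be sketched with scikit-learn as follows. The spectra, trait values and the number of latent variables below are placeholders, not the study's data or settings.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # Placeholder data: X = reflectance spectra (plots x wavelengths),
    # y = laboratory-measured grain nutrient concentration per plot
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1593, 200))
    y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=1593)

    # Calibration set of ~40% of the data, matching the reported local optimum
    X_cal, X_val, y_cal, y_val = train_test_split(X, y, train_size=0.4,
                                                  random_state=42)

    # The number of latent variables (n_components) is an assumption for the sketch
    pls = PLSRegression(n_components=15)
    pls.fit(X_cal, y_cal)
    print("validation R^2:", r2_score(y_val, pls.predict(X_val).ravel()))
    ```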

    Time-Series Prediction with Neural Networks: Combinatorial versus Sequential Approach

    No full text

    A fast algorithm to find Best Matching Units in Self-Organizing Maps

    No full text
    Self-Organizing Maps (SOM) are well-known unsupervised neural networks able to perform vector quantization while mapping an underlying regular neighbourhood structure onto the codebook. They are used in a wide range of applications. As with most properly trained neural network models, increasing the number of neurons in a SOM leads to better results or new emerging properties. Highly efficient algorithms for learning and evaluation are therefore key to improving the performance of such models. In this paper, we propose a faster alternative to compute the winner-takes-all component of a SOM that scales better with a large number of neurons. We present our algorithm to find the so-called best matching unit (BMU) in a SOM and theoretically analyze its computational complexity. Statistical results on various synthetic and real-world datasets confirm this analysis and show an even more significant improvement in computing time with minimal degradation of performance. With our method, we explore a new approach for optimizing SOMs that can be combined with other optimization methods commonly used in these models for even faster computation in both the learning and recall phases.
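
    For context, the baseline that such a method accelerates is the exhaustive winner-takes-all search over the whole codebook, sketched below; the paper's fast BMU algorithm itself is not reproduced here, and the map size and input dimension are illustrative.

    ```python
    import numpy as np

    def find_bmu(codebook, x):
        """Return the index of the codebook vector closest to sample x.

        codebook: (n_neurons, dim) SOM weight matrix
        x:        (dim,) input vector
        Cost is O(n_neurons * dim) per query, which is what a faster BMU
        search aims to reduce on maps with many neurons.
        """
        diff = codebook - x
        # Row-wise squared Euclidean distances, then the arg-minimum
        return int(np.argmin(np.einsum('nd,nd->n', diff, diff)))

    # Example: a 20 x 20 map (400 neurons) over 3-dimensional inputs
    rng = np.random.default_rng(0)
    codebook = rng.random((400, 3))
    bmu_index = find_bmu(codebook, rng.random(3))
    ```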

    COVID-19 classification using chest X-ray images: A framework of CNN-LSTM and improved max value moth flame optimization

    No full text
    Coronavirus disease 2019 (COVID-19) is a highly contagious disease that has claimed the lives of millions of people worldwide in the last two years. Because of the disease's rapid spread, it is critical to diagnose it at an early stage in order to reduce the rate of transmission. Images of the lungs are used to diagnose this infection, and in the last two years many studies have been introduced to help diagnose COVID-19 from chest X-ray images. Because researchers are looking for a quick way to diagnose this virus, deep learning-based computer-controlled techniques are well suited as a second opinion for radiologists. In this article, we address the issues of multisource fusion and redundant features. We propose a CNN-LSTM and improved max-value feature optimization framework for COVID-19 classification to address these issues. In the proposed architecture, the original images are acquired and their contrast is increased using a combination of filtering algorithms. The dataset is then augmented to increase its size and used to train two deep learning networks, a modified EfficientNet B0 and a CNN-LSTM. Both networks are built from scratch and extract information from the deep layers. After feature extraction, a serial maximum-value fusion technique is proposed to combine the best information from both deep models. However, some redundant information remains; therefore, an improved max-value-based moth flame optimization algorithm is proposed. This algorithm selects the best features, which are finally classified with machine learning classifiers. The experimental process was conducted on three publicly available datasets and achieved better accuracy than existing techniques. A classifier-based comparison was also conducted, and the cubic support vector machine gave the best accuracy.
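
    The fusion and selection steps can be illustrated only in outline: the sketch below concatenates two placeholder deep-feature matrices (serial fusion) and keeps features via a naive max-value threshold. It is a stand-in for illustration; the modified EfficientNet B0, the CNN-LSTM and the improved moth flame optimization selection are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_images = 256
    feats_effnet = rng.random((n_images, 1280))   # placeholder deep features
    feats_cnn_lstm = rng.random((n_images, 512))  # placeholder deep features

    # Serial (concatenation-based) fusion of the two deep feature sets
    fused = np.concatenate([feats_effnet, feats_cnn_lstm], axis=1)

    # Naive stand-in for a max-value criterion: keep features whose maximum
    # activation exceeds the mean of the per-feature maxima
    per_feature_max = fused.max(axis=0)
    selected = fused[:, per_feature_max >= per_feature_max.mean()]
    print(fused.shape, "->", selected.shape)

    # The selected features would then go to a classifier, e.g. a cubic-kernel
    # SVM (sklearn.svm.SVC with a degree-3 polynomial kernel)
    ```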