Principal Component Analysis
This book aims to raise awareness among researchers, scientists, and engineers of the benefits of Principal Component Analysis (PCA) in data analysis. In this book, the reader will find applications of PCA in fields such as image processing, biometrics, face recognition, and speech processing. It also covers the core concepts and state-of-the-art methods in data analysis and feature extraction.
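As a minimal illustration of the core concept, PCA projects data onto orthogonal directions of maximal variance. The sketch below (pure Python; the function name and example data are hypothetical) extracts the first principal component of 2-D points from the closed-form eigendecomposition of the 2x2 covariance matrix:

```python
import math

def pca_2d(points):
    """Return the first principal component (unit vector) of 2-D points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Population covariance matrix entries.
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Leading eigenvalue of the 2x2 symmetric covariance matrix.
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    lam = tr / 2 + math.sqrt(max(tr ** 2 / 4 - det, 0.0))
    # Corresponding eigenvector satisfies (C - lam*I) v = 0.
    vx, vy = (sxy, lam - sxx) if abs(sxy) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Points scattered along the line y = 2x: the first PC is ~(1, 2)/sqrt(5).
pts = [(0, 0), (1, 2), (2, 4), (3, 6.1), (4, 7.9)]
print(pca_2d(pts))
```

For higher-dimensional data the same idea applies, with the eigendecomposition (or SVD) of the full covariance matrix replacing the closed-form 2x2 case.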
5G Enabled Moving Robot Captured Image Encryption with Principal Component Analysis Method
Estimating the content of images captured by moving robots is very difficult. These images are vital for analyzing objects on the Earth's surface in many applications, such as studying environmental conditions, land-use and land-cover changes, and worldwide change detection. Multispectral robot-captured images contain a massive amount of low-resolution data, degraded by limited capture efficiency and by artificial and atmospheric factors. For effective transmission over a 5G network, the images must be transformed to reduce the noise, inconsistent lighting, and low resolution that degrade image quality. In this paper, the authors propose a machine learning dimensionality reduction technique, Principal Component Analysis (PCA), to transform the 5G-enabled moving-robot-captured image and enrich its visual perception so that the exact global or local information can be analyzed. The encryption algorithm implemented for data reduction and transmission over the 5G network gives sophisticated results compared with other standard methods. The proposed algorithm performs better in data reduction and network convergence speed, reduces the training time for object classification, and improves accuracy for multispectral moving-robot-captured images with the support of a 5G network.
Incremental and Adaptive L1-Norm Principal Component Analysis: Novel Algorithms and Applications
L1-norm Principal-Component Analysis (L1-PCA) is known to attain remarkable resistance against faulty/corrupted points among the processed data. However, computing the L1-PCA of "big data" with a large number of measurements and/or dimensions may be computationally impractical. This work proposes new algorithmic solutions for incremental and adaptive L1-PCA. The first algorithm computes L1-PCA incrementally, processing one measurement at a time, with very low computational and memory requirements; thus, it is appropriate for big-data and big-streaming-data applications. The second algorithm combines the merits of the first with the additional ability to track changes in the nominal signal subspace by revising the computed L1-PCA as new measurements arrive, demonstrating both robustness against outliers and adaptivity to signal-subspace changes. The proposed algorithms are evaluated in an array of experimental studies on subspace estimation, video surveillance (foreground/background separation), image conditioning, and direction-of-arrival (DoA) estimation.
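The abstract's L1-PCA algorithms are not reproduced here, but the general idea of incremental subspace estimation, one measurement at a time with constant memory, can be illustrated with the classical Oja's rule for the (L2) leading principal component. This is a minimal sketch assuming zero-mean streaming data, and unlike L1-PCA it is not robust to outliers; all names and the synthetic data are hypothetical:

```python
import math, random

def oja_update(w, x, lr):
    """One Oja's-rule step: w <- w + lr*y*(x - y*w) with y = w.x, then renormalise."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    norm = math.sqrt(sum(wi * wi for wi in w))
    return [wi / norm for wi in w]

random.seed(0)
w = [1.0, 0.0]                               # initial guess for the component
for _ in range(5000):
    t = random.gauss(0, 1)                   # latent signal
    x = [t, 2 * t + random.gauss(0, 0.1)]    # data concentrated along (1, 2)
    w = oja_update(w, x, lr=0.01)
print(w)  # converges to approximately +/-(1, 2)/sqrt(5)
```

Each update touches only the current sample, so memory use is independent of the number of measurements, which is the property the incremental algorithm in the abstract also exploits.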
Improving the multi-objective evolutionary optimization algorithm for hydropower reservoir operations in the California Oroville-Thermalito complex
This study demonstrates the application of an improved Evolutionary optimization Algorithm (EA), titled Multi-Objective Complex Evolution Global Optimization Method with Principal Component Analysis and Crowding Distance Operator (MOSPD), for the hydropower reservoir operation of the Oroville-Thermalito Complex (OTC) - a crucial head-water resource for the California State Water Project (SWP). In the OTC's water-hydropower joint management study, the nonlinearity of hydropower generation and of the reservoir's water elevation-storage relationship is explicitly formulated by polynomial functions in order to closely match realistic situations and reduce linearization approximation errors. A comparison among different curve-fitting methods is conducted to understand the impact of simplifying the reservoir topography. In the optimization algorithm development, crowding-distance and principal component analysis techniques are implemented to improve the diversity and convergence of the optimal solutions towards and along the Pareto optimal set in the objective space. A comparative evaluation among the new algorithm MOSPD, the original Multi-Objective Complex Evolution Global Optimization Method (MOCOM), the Multi-Objective Differential Evolution method (MODE), the Multi-Objective Genetic Algorithm (MOGA), the Multi-Objective Simulated Annealing approach (MOSA), and the Multi-Objective Particle Swarm Optimization scheme (MOPSO) is conducted using benchmark functions. The results show that the MOSPD algorithm demonstrated the best and most consistent performance among the compared algorithms on the test problems.
The newly developed algorithm (MOSPD) is further applied to the OTC reservoir release problem during the snow-melting season in 1998 (wet year), 2000 (normal year), and 2001 (dry year), in which the more widely spread and better-converged non-dominated solutions of MOSPD provide decision makers with better operational alternatives for effectively and efficiently managing the OTC reservoirs in response to different climates, especially drought, which has become increasingly severe and frequent in California.
Fuzzy model predictive control. Complexity reduction by functional principal component analysis
In Model-based Predictive Control, the controller runs a real-time optimisation to obtain the best control action: an optimisation problem is solved to identify the control action that minimises a cost function related to the process predictions. Due to the computational load of these algorithms, predictive control subject to constraints is not suitable to run on every hardware platform. Predictive control techniques have been well known in the process industry for decades. Applying advanced model-based control techniques is becoming increasingly attractive in many other fields, such as building automation, smartphones, and wireless sensor networks, where the hardware platforms have never been known for high computing power. The main purpose of this thesis is to establish a methodology for reducing the computational complexity of applying constrained nonlinear model-based predictive control on hardware platforms with low computational power, allowing a realistic implementation based on industry standards. The methodology is based on functional principal component analysis, which provides a mathematically elegant approach to reducing the complexity of rule-based systems, such as fuzzy and piecewise affine systems, and thereby the computational load of model-based predictive control, whether or not it is subject to constraints. 
The idea of using fuzzy inference systems, besides allowing the modelling of nonlinear or complex systems, provides a formal structure that enables implementation of the aforementioned complexity-reduction technique. In addition to its theoretical contributions, this thesis describes work done with real plants on which fuzzy modelling and control tasks were carried out. One of the objectives covered during the research and development of the thesis was experimentation with fuzzy systems and their simplification and application to industrial systems. The thesis provides a practical knowledge framework based on experience.
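A rough illustration of the discretized functional-PCA idea, represent each rule's consequent function by its samples on a common grid, then keep only the dominant mode(s) so each function reduces to a few scalar coefficients, can be sketched in pure Python using power iteration. The function names and the example family are hypothetical, and this is not the thesis's methodology in full:

```python
import math

def leading_mode(samples, iters=200):
    """Power iteration for the leading principal mode of sampled functions.

    samples: list of functions, each given as a list of values on a common grid.
    Returns (mean, mode) so that each function f is approximated as
    f ~ mean + score * mode for a single per-function scalar score.
    """
    n, g = len(samples), len(samples[0])
    mean = [sum(f[j] for f in samples) / n for j in range(g)]
    centered = [[f[j] - mean[j] for j in range(g)] for f in samples]
    v = [1.0] * g
    for _ in range(iters):
        # v <- C v with C = (1/n) * sum_f f f^T, without forming C explicitly.
        scores = [sum(f[j] * v[j] for j in range(g)) for f in centered]
        v = [sum(s * f[j] for s, f in zip(scores, centered)) / n for j in range(g)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    return mean, v

grid = [i / 10 for i in range(11)]
# A hypothetical one-parameter family of rule consequents: scaled sine bumps.
family = [[a * math.sin(math.pi * t) for t in grid] for a in (0.5, 1.0, 1.5, 2.0)]
mean, mode = leading_mode(family)
```

Storing one mode plus one score per rule, instead of the full sampled function, is the kind of complexity reduction the thesis pursues with functional PCA on rule-based models.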
Plant identification using deep convolutional networks based on principal component analysis
Plants have substantial effects on human life through their varied uses in agriculture, the food industry, pharmacology, and climate control. The large number of herbs and plant species and the shortage of skilled botanists have increased the need for automated plant identification systems in recent years. As one of the challenging problems in object recognition, automatic plant identification aims to assign the plant in an image to a known taxon or species using machine learning and computer vision algorithms. The problem is challenging due to inter-class similarities within a plant family and large intra-class variations in background, occlusion, pose, color, and illumination. In this thesis, we propose an automatic plant identification system based on deep convolutional networks. The system uses a simple baseline and applies principal component analysis (PCA) to patches of images to learn the network weights in an unsupervised manner. After multi-stage PCA filter banks are learned, simple binary hashing is applied to the output maps, and the resulting maps are subsampled through max-pooling. Finally, spatial pyramid pooling is applied to the downsampled data to extract features from block histograms. A multi-class linear support vector machine is then trained to classify the different species. System performance is evaluated on the plant identification datasets of LifeCLEF 2014 in terms of classification accuracy, inverse rank score, and robustness against pose (translation, scaling, and rotation) and illumination variations. A comparison of our results with those of the top systems submitted to the LifeCLEF 2014 campaign reveals that our proposed system would have achieved second place in the Entire, Branch, Fruit, Leaf, Scanned Leaf, and Stem categories, and third place in the Flower category, while having a simpler architecture and lower computational complexity than the winning system(s). 
We achieved our best accuracy on scanned leaves, obtaining an inverse rank score of 0.6157 and a classification accuracy of 68.25%.
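The binary-hashing and block-histogram stage described above can be sketched as follows: threshold each filter-response map to a binary map, pack the L binary maps into integer codes in [0, 2^L), and histogram the codes over a block. This is a simplified, hypothetical illustration of the PCANet-style step, not the thesis code:

```python
def hash_and_histogram(maps, n_bins=None):
    """Binarise each filter-response map (>0 -> 1), pack the L binary maps
    into integer codes in [0, 2**L), and return the code histogram."""
    L = len(maps)          # number of filter-response maps
    n = len(maps[0])       # pixels per (flattened) map
    codes = []
    for i in range(n):
        code = 0
        for l, m in enumerate(maps):
            code |= (1 if m[i] > 0 else 0) << l   # map l contributes bit l
        codes.append(code)
    n_bins = n_bins or 2 ** L
    hist = [0] * n_bins
    for c in codes:
        hist[c] += 1
    return hist

# Two hypothetical 2x2 response maps, flattened row-major.
maps = [[0.3, -0.1, 0.5, -0.2],   # filter 0 -> bit 0
        [-0.4, 0.2, 0.6, -0.1]]   # filter 1 -> bit 1
print(hash_and_histogram(maps))   # codes 1, 2, 3, 0 -> histogram [1, 1, 1, 1]
```

Concatenating such histograms over a spatial pyramid of blocks yields the fixed-length feature vector that the linear SVM in the abstract is trained on.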