
    Fast Compressed Automatic Target Recognition for a Compressive Infrared Imager

    Many military systems use infrared sensors that allow an operator to see targets at night. Several of these are mid-wave or long-wave high-resolution infrared sensors, which are expensive to manufacture. Compressive sensing, which has primarily been demonstrated in medical applications, can be used to minimize the number of measurements needed to represent a high-resolution image. Using these techniques, a relatively low-cost mid-wave infrared sensor with a high effective resolution can be realized. In traditional military infrared sensing applications, such as targeting systems, automatic target recognition algorithms are employed to locate and identify targets of interest and reduce the burden on the operator. The resolution of the sensor can increase the accuracy and operational range of a targeting system. When using a compressive sensing infrared sensor, traditional decompression techniques can be applied to form a spatial-domain infrared image, but most are iterative and not well suited to real-time environments. A more efficient approach is to adapt the target recognition algorithms to operate directly on the compressed samples. In this work, we present a target recognition algorithm that uses a compressed target detection method to identify potential target areas, followed by a specialized target recognition technique that operates directly on the same compressed samples. We demonstrate our method on the U.S. Army Night Vision and Electronic Sensors Directorate ATR Algorithm Development Image Database, which has been made available by the Sensing Information Analysis Center.
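
    The idea the abstract leans on is that random compressive measurements approximately preserve inner products, so a matched-filter detection score can be evaluated in the measurement domain without iterative reconstruction. Below is a minimal sketch of that principle on synthetic data with a hypothetical target template; it illustrates the concept, not the paper's actual algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 4096, 512                                  # patch pixels, compressed measurements
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix

    template = rng.standard_normal(n)                 # hypothetical target template
    target_patch = template + 0.1 * rng.standard_normal(n)
    clutter_patch = rng.standard_normal(n)

    def compressed_score(patch):
        # Correlate directly in the measurement domain: <Phi a, Phi b> ~ <a, b>,
        # so no spatial-domain image ever needs to be reconstructed.
        return float((Phi @ patch) @ (Phi @ template))

    print("target score :", compressed_score(target_patch))   # large
    print("clutter score:", compressed_score(clutter_patch))  # near zero
    ```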

    Personal Biometric Identification Based on the Photoplethysmography Signal of the Heartbeat

    Biometric systems are useful for distinguishing the unique characteristics of individuals. The most widely used identification systems are based on fingerprints, face detection, iris, or hand geometry. This study attempts to improve biometric identification using the photoplethysmography (PPG) signal derived from the heartbeat. The proposed algorithm evaluates the contribution of all extracted features to biometric recognition. Its efficiency is demonstrated by experimental results obtained with Multilayer Perceptron, Naïve Bayes, and Random Forest classifiers applied to the features extracted during signal processing. Fifty-one subjects participated in the experiments; the PPG signal of each person was recorded over two different time spans. Thirty characteristic features were extracted for each period and used for classification. The results were evaluated with the Multilayer Perceptron, Naïve Bayes, and Random Forest classifier models; the true positive rates were 94.6078%, 92.1569%, and 90.3922%, respectively. These results show that both the proposed algorithm and the biometric identification model based on the developed PPG signal are very promising for contactless human recognition systems.
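
    A hedged sketch of the classification stage described above: 30 numeric features per recording feed the three classifiers named in the abstract. The data here is a synthetic stand-in, not the study's PPG dataset:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(42)
    n_subjects, recs_per_subject, n_features = 51, 10, 30
    # Synthetic per-subject feature clusters standing in for PPG features.
    X = np.vstack([rng.normal(loc=s, scale=2.0, size=(recs_per_subject, n_features))
                   for s in range(n_subjects)])
    y = np.repeat(np.arange(n_subjects), recs_per_subject)

    for name, clf in [("MLP", MLPClassifier(max_iter=2000)),
                      ("Naive Bayes", GaussianNB()),
                      ("Random Forest", RandomForestClassifier())]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f}")
    ```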

    “Dust in the wind...”, deep learning application to wind energy time series forecasting

    Balancing electricity production and demand requires the extensive use of prediction techniques. Renewable energy, due to its intermittency, increases the complexity and uncertainty of forecasting, and the resulting accuracy affects all the players acting around electricity systems worldwide: generators, distributors, retailers, and consumers. Wind forecasting can be approached in two major ways, using numerical weather prediction models or using pure time series input. Deep learning is emerging as a new method for wind energy prediction. This work develops several deep learning architectures and shows their performance when applied to wind time series. The models have been tested with the most extensive wind dataset available, the National Renewable Energy Laboratory (NREL) Wind Toolkit, a dataset with 126,692 wind points in North America. The architectures designed are based on different approaches: Multi-Layer Perceptron networks (MLP), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN). These deep learning architectures have been tested to obtain predictions over a 12-hour-ahead horizon, and the accuracy is measured with the coefficient of determination (R²). Applying the models to wind sites evenly distributed across North America allows us to infer several conclusions about the relationships between methods, terrain, and forecasting complexity. The results show differences between the models and confirm the superior capability of deep learning techniques for wind speed forecasting from wind time series data.
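
    As a concrete illustration of the pure time-series approach, the sketch below trains a small MLP (a lightweight stand-in for the paper's deep architectures) on a synthetic wind-speed series, using a sliding input window and a 12-step-ahead target, and scores it with R²:

    ```python
    import numpy as np
    from sklearn.metrics import r2_score
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.arange(5000)
    # Synthetic hourly wind speed: daily-ish cycle plus noise.
    series = 8 + 3 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.5, t.size)

    window, horizon = 48, 12
    # Each input is `window` past values; target is `horizon` steps ahead.
    X = np.stack([series[i:i + window]
                  for i in range(len(series) - window - horizon + 1)])
    y = series[window + horizon - 1:]

    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    model.fit(X[:split], y[:split])
    print("R^2:", r2_score(y[split:], model.predict(X[split:])))
    ```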

    Comparative study of state-of-the-art machine learning models for analytics-driven embedded systems

    Analytics-driven embedded systems are gaining a foothold faster than ever in the current digital era. The Internet of Things (IoT) has generated an entire ecosystem of devices communicating and exchanging data automatically in an interconnected global network. The ability to efficiently process and utilize the enormous amount of data generated by an ensemble of embedded devices such as RFID tags and sensors enables engineers to build smart real-world systems. An analytics-driven embedded system explores and processes the data in situ or remotely to identify patterns in the behavior of the system, which in turn can be used to automate actions and embed decision-making capability in a device. Designing an intelligent data processing model is paramount for reaping the benefits of data analytics, because a poorly designed analytics infrastructure would degrade the system's performance and effectiveness. Many aspects of this data make it a complex and challenging analytics task, and hence a suitable candidate for big data techniques. Big data is mainly characterized by its high volume, hugely varied data types, and high speed of data receipt; all these properties mandate choosing the correct data mining techniques when designing the analytics model. Image datasets such as face recognition or satellite imagery perform better with deep learning algorithms; time-series datasets such as sensor data from wearable devices give better results with clustering and supervised learning models; and a regression model suits a multivariate dataset such as appliances energy prediction data or forest fire data. Each machine learning task has a varied range of algorithms which can be used in combination to create an intelligent data analysis model. In this study, a comprehensive comparative analysis was conducted using different datasets freely available in online machine learning repositories to analyze the performance of state-of-the-art machine learning algorithms. The WEKA data mining toolkit was used to evaluate C4.5, Naïve Bayes, Random Forest, kNN, SVM, and Multilayer Perceptron classification models. Linear regression, Gradient Boosting Machine (GBM), Multilayer Perceptron, kNN, Random Forest, and Support Vector Machines (SVM) were applied to datasets suited to regression. Datasets were trained and analyzed in different experimental setups, and a qualitative comparative analysis was performed with k-fold Cross-Validation (CV) and paired t-tests in the Weka experimenter.
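
    The evaluation protocol named at the end (k-fold CV plus a paired t-test over matched folds) is easy to reproduce outside WEKA. A minimal scikit-learn sketch comparing two of the listed classifiers on a standard toy dataset:

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    # Per-fold accuracies from the same 10-fold split for each model.
    rf = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10)
    nb = cross_val_score(GaussianNB(), X, y, cv=10)

    t, p = stats.ttest_rel(rf, nb)   # paired t-test over matched folds
    print(f"RF {rf.mean():.3f}  NB {nb.mean():.3f}  t={t:.2f}  p={p:.3f}")
    ```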

    Online and Non-Parametric Drift Detection Methods Based on Hoeffding’s Bounds

    I. Frías-Blanco, J. d. Campo-Ávila, G. Ramos-Jiménez, R. Morales-Bueno, A. Ortiz-Díaz and Y. Caballero-Mota, "Online and Non-Parametric Drift Detection Methods Based on Hoeffding's Bounds," IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 3, pp. 810-823, March 2015, doi: 10.1109/TKDE.2014.2345382. Incremental and online learning algorithms are increasingly relevant in data mining because of the growing need to process data streams. In this context, the target function may change over time, an inherent problem of online learning known as concept drift. To handle concept drift regardless of the learning model, we propose new methods that monitor the performance metrics measured during the learning process and trigger drift signals when a significant variation is detected. To monitor this performance, we apply probability inequalities that assume only independent, univariate, and bounded random variables, obtaining theoretical guarantees for the detection of such distributional changes. Common restrictions on online change detection, as well as the relevant types of change (abrupt and gradual), are considered. Two main approaches are proposed: the first involves moving averages and is more suitable for detecting abrupt changes; the second follows a widespread intuitive idea for dealing with gradual changes using weighted moving averages. The simplicity of the proposed methods, together with their computational efficiency, makes them very advantageous. We use a Naïve Bayes classifier and a Perceptron to evaluate the performance of the methods on synthetic and real data. Supported in part by the SESAAME project (TIN2008-06582-C03-03) of the MICINN, Spain, and in part by the AUIP (Asociación Universitaria Iberoamericana de Postgrado).
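
    The detection principle can be illustrated with a simplified, union-bound use of Hoeffding's inequality (a sketch, not the authors' exact tests): for n samples of a [0, 1]-bounded variable, the window mean stays within eps = sqrt(ln(2/δ) / (2n)) of its expectation with probability at least 1 − δ, so a gap larger than 2·eps between a reference mean and a recent moving average signals drift:

    ```python
    import math
    import numpy as np

    def hoeffding_eps(n, delta=0.05):
        # Deviation bound for the mean of n independent [0, 1]-bounded samples.
        return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

    rng = np.random.default_rng(1)
    stream = np.concatenate([rng.binomial(1, 0.2, 1000),   # stable error rate
                             rng.binomial(1, 0.5, 1000)])  # abrupt concept drift

    window = 200
    baseline = stream[:window].mean()   # reference mean from the stable period
    for start in range(window, len(stream) - window + 1, window):
        recent = stream[start:start + window].mean()
        if abs(recent - baseline) > 2 * hoeffding_eps(window):
            print(f"drift signalled near sample {start}")
            break
    ```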

    Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks

    Information fusion is an essential part of numerous engineering systems and biological functions, e.g., human cognition. Fusion occurs at many levels, ranging from the low-level combination of signals to the high-level aggregation of heterogeneous decision-making processes. While the last decade has witnessed an explosion of research in deep learning, fusion in neural networks has not seen the same revolution. Specifically, most neural fusion approaches are ad hoc, are not well understood, are distributed rather than localized, and/or have low explainability (if any at all). Herein, we prove that the fuzzy Choquet integral (ChI), a powerful nonlinear aggregation function, can be represented as a multi-layer network, referred to hereafter as ChIMP. We also put forth an improved ChIMP (iChIMP) that leads to stochastic gradient descent-based optimization in light of the exponential number of ChI inequality constraints. An additional benefit of ChIMP/iChIMP is that it enables eXplainable AI (XAI). Synthetic validation experiments are provided, and iChIMP is applied to the fusion of a set of deep models with heterogeneous architectures in remote sensing. We show an improvement in model accuracy, and our previously established XAI indices shed light on the quality of our data, model, and its decisions. Published in IEEE Transactions on Fuzzy Systems.
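
    For readers unfamiliar with the aggregation at the heart of ChIMP: the discrete Choquet integral sorts the inputs and weights each one by an increment of a fuzzy measure over the sources. A minimal sketch with a hand-set, illustrative measure on two sources (in the paper this measure is learned, not fixed):

    ```python
    import numpy as np

    def choquet(x, g):
        """Discrete Choquet integral of scores x w.r.t. fuzzy measure g,
        where g maps frozensets of source indices to values in [0, 1]."""
        order = np.argsort(x)[::-1]          # sort inputs descending
        total, prev, chain = 0.0, 0.0, set()
        for i in order:
            chain.add(int(i))
            gi = g[frozenset(chain)]
            total += x[i] * (gi - prev)      # weight = measure increment
            prev = gi
        return total

    # Monotone fuzzy measure on two sources; g(full set) must be 1.
    g = {frozenset(): 0.0, frozenset({0}): 0.4,
         frozenset({1}): 0.7, frozenset({0, 1}): 1.0}
    print(choquet(np.array([0.9, 0.3]), g))  # nonlinear fusion of two scores
    ```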

    Performance Analysis of Fixed-Random Weights in Artificial Neural Networks

    Deep neural networks train millions of parameters to achieve state-of-the-art performance on a wide array of applications. However, finding a global minimum with gradient descent approaches leads to lengthy training times coupled with high computational resource requirements. To alleviate these concerns, the idea of fixed-random weights in deep neural networks is explored, with the goal of maintaining performance akin to fully trained models. Metrics such as floating point operations per second and memory size are compared and contrasted for fixed-random and fully trained models. Additional analysis of downsized models that mimic the number of trained parameters of the fixed-random models shows that fixed-random weights enable slightly higher performance. In a fixed-random convolutional model, ResNet achieves ∼57% image classification accuracy on CIFAR-10. In contrast, a DenseNet architecture, with only fixed-random filters in the convolutional layers, achieves ∼88% accuracy on the same task. DenseNet's fully trained model achieves ∼96% accuracy, which highlights the importance of architectural choice for a high-performing model. To further understand the role of architecture, random projection networks trained using a least squares approximation learning rule are studied. In these networks, deep random projection layers and skip connections are exploited, as they are shown to boost overall network performance. In several of the image classification experiments conducted, additional layers and skip connectivity consistently outperform a baseline random projection network by 1% to 3%. To reduce the complexity of the models in general, a tensor decomposition technique known as the Tensor-Train decomposition is leveraged. Compressing the fully-connected hidden layer yields at least a ∼40x decrease in memory size at a slight cost in resource utilization. This research helps to build a better understanding of how random filters and weights can be utilized to obtain lighter models.
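
    A minimal sketch of the least-squares learning rule applied to a fixed random projection layer (an ELM-style stand-in under simplified assumptions, not the thesis' exact networks): the hidden weights stay random and frozen, and only the linear readout is solved in closed form:

    ```python
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = load_digits(return_X_y=True)
    X = X / 16.0                                   # scale pixels to [0, 1]
    Y = np.eye(10)[y]                              # one-hot targets
    Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.3, random_state=0)

    # Fixed-random hidden weights: never updated after initialization.
    W = 0.1 * rng.standard_normal((X.shape[1], 512))
    hidden = lambda A: np.tanh(A @ W)

    # "Training" is a single least-squares solve for the readout beta.
    beta, *_ = np.linalg.lstsq(hidden(Xtr), Ytr, rcond=None)

    acc = ((hidden(Xte) @ beta).argmax(1) == Yte.argmax(1)).mean()
    print("test accuracy:", round(float(acc), 3))
    ```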

    Personalized bank campaign using artificial neural networks

    Internship Report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. Nowadays, high market competition requires banks to focus more on individual customers' behavior. Specifically, customers prefer a personal relationship with their financial institution and want to receive exclusive offers. Thus, a successful personalized cross-sell and up-sell campaign requires knowing each client's interest in the offer. The aim of this project is to create a model that is able to estimate the probability of a customer buying a product of the bank. The strategic plan is to run a long-term personalized campaign, and the challenge is to create a model that remains accurate over this time. The source datasets consist of 12 data marts, which represent monthly snapshots of the bank's data warehouse between April 2016 and March 2017. They contain 191 original variables, covering personal and transactional information, and around 1,400,000 clients each. The selected modeling technique is Artificial Neural Networks, specifically a Multilayer Perceptron trained with back-propagation. The results show that the model performs well and the business can use it to optimize profitability. Despite the good results, the business must monitor the model's outputs to check its performance over time.
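
    A hedged sketch of such a propensity model: an MLP outputs each client's probability of buying the product, and the campaign targets the highest-propensity clients. Features, labels, and the purchase drivers are synthetic stand-ins, not the report's data-mart variables:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(7)
    n, d = 20000, 20                        # clients, behavioural features
    X = rng.standard_normal((n, d))
    logits = X[:, 0] - 0.5 * X[:, 1]        # hypothetical purchase drivers
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=7)
    mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300).fit(Xtr, ytr)

    proba = mlp.predict_proba(Xte)[:, 1]    # per-client purchase probability
    print("AUC:", round(roc_auc_score(yte, proba), 3))
    ```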