38 research outputs found
Blind Wavelet-Based Image Watermarking
In this chapter, the watermarking technique is blind: a blind scheme requires neither the original image nor any information about it to recover the watermark. The watermark is inserted into the high frequencies. A three-level wavelet transform is applied to the image, and the size of the watermark is equal to the size of the detail sub-band. Significant coefficients are used to embed the watermark, and the embedding itself is quantization-based. The proposed technique generates watermarked images with less degradation.
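The quantization-based blind embedding described above can be sketched with quantization index modulation (QIM), a standard basis for such schemes: each selected coefficient is snapped to an even or odd multiple of a step size depending on the watermark bit, so the detector needs only the step size, never the original image. The step size and coefficient values below are illustrative assumptions, not parameters from the chapter.

```python
STEP = 8.0  # quantization step (assumed; larger = more robust, more distortion)

def embed_bit(coeff, bit):
    """Quantize a coefficient so its quantizer-index parity encodes the bit."""
    q = round(coeff / STEP)
    if q % 2 != bit:                       # force parity to match the bit
        q += 1 if coeff / STEP > q else -1
    return q * STEP

def extract_bit(coeff):
    """Blind extraction: recover the bit from the coefficient's parity alone."""
    return int(round(coeff / STEP)) % 2

coeffs = [37.2, -12.9, 5.4, 88.1]   # stand-ins for detail-subband coefficients
bits = [1, 0, 1, 1]
marked = [embed_bit(c, b) for c, b in zip(coeffs, bits)]
recovered = [extract_bit(c) for c in marked]
```

Because extraction reads only parity, the bits survive perturbations smaller than half the quantization step, which is the robustness/imperceptibility trade-off the step size controls.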
A Novel Optimization for GPU Mining Using Overclocking and Undervolting
Cryptography and associated technologies have existed for a long time, and the field is advancing at a remarkable speed. Since its initial application, blockchain has come a long way. Bitcoin is a cryptocurrency based on blockchain, also known as distributed ledger technology (DLT). The most well-known cryptocurrency for everyday use is Bitcoin, which debuted in 2008. Its success ushered in a digital revolution, and it currently provides security, decentralization, and a reliable data transport and storage mechanism to various industries and companies. Governments and developing enterprises seeking a competitive edge have expressed interest in Bitcoin and other cryptocurrencies due to the rapid growth of this technology. For computer experts and individuals looking to supplement their income, cryptocurrency mining has become a major preoccupation. Mining is a way of solving mathematical problems, using the processing capacity and speed of the computers employed, in return for digital currency rewards. Herein, we illustrate the benefits of utilizing GPUs (graphics processing units) for cryptocurrency mining and compare two methods, overclocking and undervolting, which are the superior techniques for GPU optimization. The techniques used in this paper not only help miners profit while mining cryptocurrency but also address a major flaw: to mitigate the energy and resources consumed by the mining hardware, we configure it to run longer while consuming much less electricity. We also compare our techniques with other popular existing GPU mining techniques.
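The trade-off behind undervolting can be made concrete with a hashes-per-watt calculation: mining profitability tracks energy efficiency, so an undervolted card can beat a stock card even at a slightly lower hash rate. The figures below are made-up illustrations, not benchmark results from the paper.

```python
def efficiency(hashrate_mhs, power_w):
    """Mining efficiency in MH/s per watt: higher means more profit per joule."""
    return hashrate_mhs / power_w

stock       = efficiency(60.0, 170.0)   # stock clocks and voltage (illustrative)
undervolted = efficiency(58.0, 110.0)   # small hashrate loss, large power cut
```

On these assumed numbers the undervolted configuration is roughly 50% more energy-efficient despite hashing slightly slower, which is why the paper pairs undervolting with longer runtimes.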
Malware Detection in Internet of Things (IoT) Devices Using Deep Learning
Internet of Things (IoT) device usage is increasing exponentially with the spread of the internet. With the increasing volume of data on IoT devices, these devices are becoming vulnerable to malware attacks; therefore, malware detection is an important issue for IoT devices. An effective, reliable, and time-efficient mechanism is required for the identification of sophisticated malware. Researchers have proposed multiple methods for malware detection in recent years; however, accurate detection remains a challenge. We propose a deep learning-based ensemble classification method for the detection of malware in IoT devices. It uses a three-step approach: in the first step, data is preprocessed using scaling, normalization, and de-noising; in the second step, features are selected and one-hot encoding is applied, followed by an ensemble classifier based on CNN and LSTM outputs for the detection of malware. We have compared results with state-of-the-art methods, and our proposed method outperforms them on standard datasets with an average accuracy of 99.5%.
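The final ensemble step can be sketched as soft voting over the two branches' class probabilities: the CNN and LSTM outputs are averaged and the highest combined score wins. The probability values below are illustrative stand-ins, not outputs from the paper's trained models.

```python
def soft_vote(prob_a, prob_b):
    """Average two class-probability vectors and pick the arg-max class."""
    combined = [(a + b) / 2 for a, b in zip(prob_a, prob_b)]
    return combined.index(max(combined)), combined

cnn_probs  = [0.30, 0.70]   # P(benign), P(malware) from the CNN branch
lstm_probs = [0.45, 0.55]   # same, from the LSTM branch
label, scores = soft_vote(cnn_probs, lstm_probs)   # label 1 = malware
```

Soft voting lets a confident branch outweigh an uncertain one, which is typically why such CNN+LSTM ensembles beat either model alone.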
Effectively Predicting the Presence of Coronary Heart Disease Using Machine Learning Classifiers
Coronary heart disease is one of the major causes of death around the globe. Predicting heart disease is one of the most challenging tasks in the field of clinical data analysis. Machine learning (ML) is useful for diagnostic assistance, in terms of decision making and prediction, on the basis of the data produced by the healthcare sector globally. We also survey ML techniques employed in the medical field for disease prediction. In this regard, numerous research studies have been conducted on heart disease prediction using ML classifiers. In this paper, we used eleven ML classifiers to identify key features that improve the predictability of heart disease. To build the prediction model, various feature combinations and well-known classification algorithms were used. We achieved 95% accuracy with gradient boosted trees and the multilayer perceptron in the heart disease prediction model. The random forest gives a better performance level in heart disease prediction, with an accuracy of 96%.
Urban Crowd Detection Using SOM, DBSCAN and LBSN Data Entropy: A Twitter Experiment in New York and Madrid
The user and the physical location are two closely associated concepts in location-based social network services. This work studies urban behavior based on location-based social network (LBSN) data, focusing especially on the detection of abnormal events. The proposed crowd detection system uses geolocated data provided by the Twitter application programming interface (API) to automatically detect abnormal events. Our methodology uses an unsupervised competitive learning algorithm (the self-organizing map (SOM)) and a density-based clustering method (density-based spatial clustering of applications with noise (DBSCAN)) to identify and detect crowds. The second stage builds an entropy model to determine whether the detected crowds fit the daily pattern with reference to a spatio-temporal entropy model, or whether their number, size, location, and time of day should be considered evidence that something unusual is occurring in the city. To detect an abnormal event in the city, it is sufficient to determine the actual entropy model and compare it with the reference model; for a normal day, the reference model is constructed offline for each time interval. The obtained results confirm the effectiveness of the first stage (the SOM and DBSCAN stage) in detecting and identifying clusters dynamically, mirroring human activity. These findings also clearly confirm the detection of special days in New York City (NYC), which demonstrates the performance of our proposed model.
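The entropy-comparison stage can be sketched as follows: for each time interval, the spatial distribution of detected crowds is summarized by Shannon entropy and compared against a reference entropy built offline from a normal day; a large deviation flags an abnormal event. The cluster counts and threshold below are illustrative assumptions, not values from the study.

```python
import math

def spatial_entropy(cluster_sizes):
    """Shannon entropy (bits) of the crowd-size distribution over city zones."""
    total = sum(cluster_sizes)
    probs = [s / total for s in cluster_sizes if s > 0]
    return -sum(p * math.log2(p) for p in probs)

reference = spatial_entropy([40, 35, 30, 45])   # typical spread over 4 zones
observed  = spatial_entropy([5, 3, 140, 4])     # one zone dominates: a crowd

THRESHOLD = 0.5                                  # assumed deviation threshold
is_abnormal = abs(observed - reference) > THRESHOLD
```

When activity concentrates in one location, entropy drops well below the fairly uniform reference profile, which is the signal the system treats as an unusual event.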
Deep-Learning-Based Feature Extraction Approach for Significant Wave Height Prediction in SAR Mode Altimeter Data
Predicting sea wave parameters such as significant wave height (SWH) has recently been identified as a critical requirement for maritime security and economy. Earth observation satellite missions have resulted in a massive rise in marine data volume and dimensionality. Deep learning technologies have proven their capabilities to process large amounts of data, draw useful insights, and assist in environmental decision making. In this study, a new deep-learning-based hybrid feature selection approach is proposed for SWH prediction using satellite Synthetic Aperture Radar (SAR) mode altimeter data. The introduced approach integrates the power of autoencoder deep neural networks in mapping input features into representative latent-space features with the feature selection power of the principal component analysis (PCA) algorithm to create significant features from altimeter observations. Several hybrid feature sets were generated using the proposed approach and utilized for modeling SWH using Gaussian Process Regression (GPR) and Neural Network Regression (NNR). SAR mode altimeter data from the Sentinel-3A mission calibrated by in situ buoy data was used for training and evaluating the SWH models. The significance of the autoencoder-based feature sets in improving the prediction performance of SWH models is investigated against original, traditionally selected, and hybrid features. The autoencoder-PCA hybrid feature set generated by the proposed approach recorded the lowest average RMSE value, 0.11069, for GPR models, outperforming state-of-the-art results. The findings of this study reveal the superiority of the autoencoder deep learning network in generating latent features that aid in improving the prediction performance of SWH models over traditional feature extraction methods.
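The PCA half of the hybrid pipeline can be sketched via SVD: center the feature matrix, project onto the top principal components, and keep the scores as compact features. In the paper the PCA input would be (or be combined with) the autoencoder's latent features; here randomly generated data stands in for altimeter observations, and all shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))            # 100 observations, 6 raw features

Xc = X - X.mean(axis=0)                  # center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T                        # top-k principal-component scores

# fraction of total variance retained by the k components
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

The scores `Z` would then feed the GPR or NNR regressors; choosing `k` by the explained-variance ratio is the usual way to size such a feature set.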
An Improved Bald Eagle Search Algorithm with Deep Learning Model for Forest Fire Detection Using Hyperspectral Remote Sensing Images
This paper presents an improved bald eagle search algorithm with a deep learning model for forest fire detection (IBESDL-FFD) using hyperspectral remote sensing (HSRS) images. The major intention of the IBESDL-FFD technique is to identify the presence of forest fire in HSRS images. To achieve this, the IBESDL-FFD technique involves data pre-processing in two stages, namely data augmentation and noise removal. In addition, the IBES algorithm with the NASNetLarge model is utilized as a feature extractor to determine feature vectors. Finally, the firefly algorithm (FFA) with a denoising autoencoder (DAE) is applied for the classification of forest fire. The design of the IBES and FFA techniques helps to optimally tune the parameters of the NASNetLarge and DAE models, respectively. To demonstrate the improved outcomes of the IBESDL-FFD approach, a wide-ranging simulation was implemented and the outcomes examined. The results show that the IBESDL-FFD technique outperforms existing techniques, with a maximum average accuracy of 93.75%.
A Multi Parameter Forecasting for Stock Time Series Data Using LSTM and Deep Learning Model
Financial data are a type of historical time series data that provide a large amount of information frequently employed in data analysis tasks. How to forecast stock prices continues to be a topic of interest for both investors and financial professionals. Stock price forecasting is quite challenging because of the significant noise, non-linearity, and volatility of stock price time series. Previous studies focus on a single stock parameter, such as close price. A hybrid deep-learning forecasting model is proposed. The model takes the input stock data and forecasts two stock parameters, close price and high price, for the next day. The experiments are conducted on the Shanghai Composite Index (000001), and comparisons have been performed with existing methods: CNN, RNN, LSTM, CNN-RNN, and CNN-LSTM. The results show that CNN performs worst, LSTM outperforms CNN-LSTM, CNN-RNN outperforms CNN-LSTM, CNN-RNN outperforms LSTM, and the suggested single-layer RNN model beats all other models, improving on them by 2.2%, 0.4%, 0.3%, 0.2%, and 0.1%, respectively. The experimental results validate the effectiveness of the proposed model, which will assist investors in increasing their profits by making good decisions.
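The supervised framing implied above can be sketched as a sliding-window transform: windows of past (close, high) pairs map to the next day's close and high, which is how a recurrent forecaster would consume the series. The prices and lookback length below are illustrative, not Shanghai Composite data.

```python
def make_windows(series, lookback):
    """series: list of (close, high) tuples -> (inputs, two-parameter targets)."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])   # past `lookback` days as input
        y.append(series[i + lookback])     # next day's (close, high) as target
    return X, y

prices = [(10.0, 10.5), (10.2, 10.8), (10.1, 10.6), (10.4, 11.0), (10.3, 10.9)]
X, y = make_windows(prices, lookback=3)
```

Each `X[i]` would be fed through the recurrent layers and both target values regressed jointly, which is what distinguishes this multi-parameter setup from close-price-only baselines.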
Lightweight Real-Time Recurrent Models for Speech Enhancement and Automatic Speech Recognition
Traditional recurrent neural networks (RNNs) have difficulty capturing long-term temporal dependencies. Lightweight recurrent models for speech enhancement are nevertheless important for improving noisy speech while remaining computationally efficient and capturing long-term temporal dependencies. This study proposes a lightweight hourglass-shaped model for speech enhancement (SE) and automatic speech recognition (ASR). Simple recurrent units (SRUs) with skip connections are implemented, where attention gates are added to the skip connections to highlight important features and spectral regions. The model operates without relying on future information, making it well suited for real-time processing. Combined acoustic features are used, and two training objectives are estimated. Experimental evaluations using short-time objective intelligibility (STOI), perceptual evaluation of speech quality (PESQ), and word error rates (WERs) indicate better intelligibility, perceptual quality, and word recognition rates. The composite measures further confirm reduced residual noise and speech distortion. With the TIMIT database, the proposed model improves STOI and PESQ by 16.21% and 0.69 (31.1%), whereas with the LibriSpeech database, it improves STOI by 16.41% and PESQ by 0.71 (32.9%) over noisy speech. Further, our model outperforms other deep neural networks (DNNs) in seen and unseen conditions. ASR performance, measured using the Kaldi toolkit, achieves a 15.13% WER in noisy backgrounds.
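A single SRU step can be sketched as below, following the standard simple-recurrent-unit formulation: both gates depend only on the current input, so the per-step recurrence is purely elementwise, which is what makes SRUs lightweight compared with LSTMs. This is a scalar toy with made-up weights, not the paper's attention-gated model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sru_step(x, c_prev, w=1.0, wf=0.9, bf=0.0, wr=0.8, br=0.0):
    f = sigmoid(wf * x + bf)               # forget gate: no h_{t-1} matmul
    c = f * c_prev + (1.0 - f) * (w * x)   # elementwise cell-state update
    r = sigmoid(wr * x + br)               # reset gate
    h = r * math.tanh(c) + (1.0 - r) * x   # highway/skip connection to the input
    return h, c

c = 0.0
outs = []
for x in [0.5, -0.2, 0.8]:                 # a toy frame-feature sequence
    h, c = sru_step(x, c)
    outs.append(h)
```

Because nothing in the step multiplies the previous hidden state by a weight matrix, the heavy matrix products can be batched across time, leaving only this cheap recurrence sequential; the paper's attention gates would additionally reweight the skip path.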
Performance Evaluation of Hydroponic Wastewater Treatment Plant Integrated with Ensemble Learning Techniques: A Feature Selection Approach
Wastewater treatment and reuse are regarded as the most effective strategy for combating water scarcity threats. This study examined and reported the application of the Internet of Things (IoT) and artificial intelligence to the phytoremediation of wastewater using Salvinia molesta plants. Water quality (WQ) indicators (total dissolved solids (TDS), temperature, oxidation-reduction potential (ORP), and turbidity) of the S. molesta treatment system at a retention time of 24 h were measured using an Arduino IoT device. Four machine learning (ML) tools were then employed to model and evaluate the predicted concentration of total dissolved solids after treatment (TDSt) of the water samples. Additionally, three nonlinear error ensemble methods were used to enhance the prediction accuracy of the TDSt models. The best results were obtained with SVM-M1, with 0.9999, 0.0139, 1.0000, and 0.1177 for R2, MSE, R, and RMSE, respectively, at the training stage; at the validation stage, the R2, MSE, R, and RMSE were 0.9986, 0.0356, 0.993, and 0.1887, respectively. Furthermore, the error ensemble techniques significantly outperformed the single models in terms of mean square error (MSE) and root mean square error (RMSE) for both training and validation, with 0.0014 and 0.0379, respectively.
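The ensemble-evaluation idea can be sketched with a simple average of two models' predictions scored by RMSE, a plain stand-in for the paper's nonlinear error ensembles. All numbers below are illustrative, not measured TDS values.

```python
import math

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

obs      = [120.0, 135.0, 128.0, 142.0]   # "observed" TDSt after treatment
model_a  = [118.0, 138.0, 126.0, 145.0]   # single model A predictions
model_b  = [123.0, 133.0, 131.0, 140.0]   # single model B predictions
ensemble = [(a + b) / 2 for a, b in zip(model_a, model_b)]

err_a, err_b, err_e = rmse(model_a, obs), rmse(model_b, obs), rmse(ensemble, obs)
```

Because the two models err in opposite directions here, averaging cancels much of the error, which is the basic mechanism that lets error ensembles beat their single members on MSE and RMSE.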