
    Counting Human Flow with Deep Neural Network

    Human flow counting has many applications in space management. This study applied channel state information (CSI) available in IEEE 802.11n networks to characterize the flow count. Raw inputs, including the mean, standard deviation and five-number summary, were extracted from windowed CSI data. Because of the large number of raw inputs, stacked denoising autoencoders were used to extract hierarchical features from them, and a final softmax regression layer was used to model the flow counting problem. This deep neural network structure is found to outperform other popular classification algorithms, including random forest, logistic regression, support vector machine and multilayer perceptron, in predicting the flow count, with attractive speed performance.
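    As a rough illustration of the pipeline described above, the sketch below extracts the windowed statistics named in the abstract (mean, standard deviation and five-number summary) and feeds them to a generic neural-network classifier. The window length, subcarrier count and the use of scikit-learn's MLPClassifier as a stand-in for the stacked denoising autoencoders with a softmax layer are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: windowed CSI feature extraction (mean, std, five-number summary)
# followed by a generic neural-network classifier as a stand-in for the paper's
# stacked denoising autoencoders + softmax layer. Shapes and window length are
# illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(csi_window):
    """csi_window: (samples, subcarriers) array of CSI amplitudes for one window."""
    q0, q1, q2, q3, q4 = np.percentile(csi_window, [0, 25, 50, 75, 100], axis=0)
    return np.concatenate([
        csi_window.mean(axis=0),
        csi_window.std(axis=0),
        q0, q1, q2, q3, q4,          # five-number summary per subcarrier
    ])

# Toy data: 200 windows, 100 time samples, 30 subcarriers, flow counts 0-3.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 100, 30))
labels = rng.integers(0, 4, size=200)

X = np.stack([window_features(w) for w in windows])
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300)  # stand-in classifier
clf.fit(X, labels)
print(clf.score(X, labels))
```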

    Deep Learning in Cardiology

    The medical field is generating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method consisting of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables.

    A novel framework using deep auto-encoders based linear model for data classification

    This paper proposes a novel data classification framework combining sparse auto-encoders (SAEs) with a post-processing system consisting of a linear model whose parameters are estimated by the Particle Swarm Optimization (PSO) algorithm. Sensitive, high-level features are extracted by the first auto-encoder, which is wired to the second auto-encoder, followed by a softmax layer that classifies the features obtained from the second layer. The two auto-encoders and the softmax classifier are stacked and trained in a supervised manner using the well-known backpropagation algorithm to enhance the performance of the neural network. Afterwards, the linear model transforms the calculated output of the deep stacked sparse auto-encoder to a value close to the anticipated output. This simple transformation increases the overall data classification performance of the stacked sparse auto-encoder architecture. The PSO algorithm estimates the parameters of the linear model in a metaheuristic fashion. The proposed framework is validated on three public datasets and presents promising results when compared with the current literature. Furthermore, the framework can be applied to other data classification problems with minor updates, such as adjusting the input features, the number of hidden neurons and the output classes. Keywords: deep sparse auto-encoders, medical diagnosis, linear model, data classification, PSO algorithm
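    The distinctive step in this framework is the PSO-estimated linear model applied to the stacked auto-encoder's output. The sketch below illustrates that idea only: it fits a linear transform of stand-in softmax outputs to one-hot targets with a minimal global-best particle swarm optimiser. The objective, swarm settings and toy data are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch of the post-processing idea: a linear model y ~ p @ W + b applied
# to the stacked auto-encoder's softmax outputs, with W and b estimated by a
# minimal particle swarm optimiser. Hyper-parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: softmax outputs p (n_samples x n_classes) and one-hot targets y.
n, c = 300, 3
p = rng.dirichlet(np.ones(c), size=n)
y = np.eye(c)[rng.integers(0, c, size=n)]

def objective(theta):
    """Mean squared error of the linear transform parameterised by theta."""
    W = theta[: c * c].reshape(c, c)
    b = theta[c * c :]
    return np.mean((p @ W + b - y) ** 2)

# Minimal global-best PSO over the linear-model parameters.
dim, swarm, iters = c * c + c, 30, 200
x = rng.normal(size=(swarm, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(xi) for xi in x])
gbest = pbest[pbest_f.argmin()]
for _ in range(iters):
    r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([objective(xi) for xi in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]
print("PSO-fitted linear model MSE:", objective(gbest))
```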

    Optimized Deeplearning Algorithm for Software Defects Prediction

    Accurate software defect prediction (SDP) helps to enhance the quality of software by identifying potential flaws early in the development process. However, existing approaches face challenges in achieving reliable predictions. To address this, a novel approach based on a two-tier deep learning framework is proposed. The proposed work includes four major phases: (a) pre-processing, (b) dimensionality reduction, (c) feature extraction and (d) two-fold deep learning-based SDP. The collected raw data is initially pre-processed using a data cleaning approach (handling null values and missing data) and decimal scaling normalisation. The dimensions of the pre-processed data are reduced using the newly developed Incremental Covariance Principal Component Analysis (ICPCA), which aids in mitigating the “curse of dimensionality”. Feature extraction is then performed on the dimensionally reduced data using statistical features (standard deviation, skewness, variance and kurtosis), mutual information (MI) and conditional entropy (CE). From the extracted features, the relevant ones are selected using the new Euclidean Distance with Mean Absolute Deviation (ED-MAD). Finally, the SDP (decision making) is carried out using the optimized Two-Fold Deep Learning Framework (O-TFDLF), which encapsulates an RBFN and an optimized MLP. The MLP weights are fine-tuned using the new Levy Flight Cat Mouse Optimisation (LCMO) method to improve the model's prediction accuracy. The final outcome (forecasting the presence or absence of a defect) is obtained from the optimized MLP. The implementation was carried out in MATLAB. Using performance metrics such as sensitivity, accuracy, precision, specificity and MSE, the proposed model's performance is compared to that of existing models. The accuracy achieved by the proposed model is 93.37%.
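    The sketch below illustrates two of the early phases mentioned above, decimal scaling normalisation and the statistical features (standard deviation, skewness, variance and kurtosis), on a toy software-metric matrix. The data and function names are illustrative assumptions; ICPCA, ED-MAD selection and the O-TFDLF classifier are not reproduced.

```python
# Hedged sketch of two early pipeline steps: decimal scaling normalisation and
# the per-sample statistical features (std, skewness, variance, kurtosis).
# The toy metric matrix is an illustrative assumption.
import numpy as np
from scipy.stats import skew, kurtosis

def decimal_scaling(X):
    """Scale each column by a power of ten so its largest absolute value falls below 1."""
    j = (np.floor(np.log10(np.abs(X).max(axis=0) + 1e-12)) + 1).clip(min=0)
    return X / (10.0 ** j)

def statistical_features(X):
    """Per-sample std, skewness, variance and kurtosis across the metric columns."""
    return np.column_stack([
        X.std(axis=1),
        skew(X, axis=1),
        X.var(axis=1),
        kurtosis(X, axis=1),
    ])

rng = np.random.default_rng(2)
raw = rng.gamma(shape=2.0, scale=10.0, size=(100, 20))   # toy software metrics
X = decimal_scaling(raw)
feats = statistical_features(X)
print(feats.shape)   # (100, 4)
```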

    Online Non-linear Prediction of Financial Time Series Patterns

    We consider a mechanistic non-linear machine learning approach to learning signals in financial time series data. A modularised and decoupled algorithm framework is established and demonstrated on daily sampled closing-price time-series data for JSE equity markets. The input patterns are data windows preprocessed into sequences of daily, weekly and monthly or quarterly sampled feature changes (log feature fluctuations). The data processing is split into a batch step, in which features are learnt with a Stacked AutoEncoder (SAE) via unsupervised learning, and a subsequent stage in which both batch and online supervised learning are carried out on Feedforward Neural Networks (FNNs) using these features. The FNN output is a point prediction of measured time-series feature fluctuations (log-differenced data) in the future, evaluated ex post. Weight initialization for these networks uses restricted Boltzmann machine pre-training and variance-based initialization. The validity of the FNN backtest results is shown under a rigorous assessment of backtest overfitting using both Combinatorially Symmetric Cross-Validation and Probabilistic and Deflated Sharpe Ratios. Results are further used to develop a view on the phenomenology of financial markets and the value of complex historical data under unstable dynamics.
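    The input construction described above (daily, weekly and monthly sampled log feature fluctuations) can be sketched as follows. The horizon lengths of 1, 5 and 21 trading days and the simulated closing-price series are illustrative assumptions; the SAE and FNN stages are omitted.

```python
# Hedged sketch of the input construction: log fluctuations of a closing-price
# series at daily, weekly and monthly horizons, stacked into one vector per date.
# Horizon lengths and the toy price series are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
close = 100 * np.exp(np.cumsum(0.001 * rng.normal(size=1000)))  # toy closing prices

def log_fluctuations(prices, lag):
    """Log differences at the given lag, e.g. lag=1 daily, 5 weekly, 21 monthly."""
    return np.log(prices[lag:]) - np.log(prices[:-lag])

daily, weekly, monthly = (log_fluctuations(close, l) for l in (1, 5, 21))

# Align the three horizons on the same end dates and stack into input vectors.
n = min(len(daily), len(weekly), len(monthly))
X = np.column_stack([daily[-n:], weekly[-n:], monthly[-n:]])
print(X.shape)   # (n, 3) feature fluctuations per date
```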

    Looking Over the Research Literature on Software Engineering from 2016 to 2018

    This paper carries out a bibliometric analysis to identify (i) the most influential current research on software engineering, (ii) where that research is being published, (iii) the most commonly researched topics, and (iv) where that research is being undertaken (i.e., in which countries and institutions). To that end, 6,365 software engineering articles published from 2016 to 2018 in a variety of conferences and journals are examined. This work has been funded by the Spanish Ministry of Science, Innovation, and Universities under Project DPI2016-77677-P, the Community of Madrid under Grant RoboCity2030-DIH-CM P2018/NMT-4331, and grant TIN2016-75850-R from the FEDER funds.