
    State-of-the-art Survey of Data Hiding in ECG Signal

    With the development of new communication technologies, the amount of biomedical data being transmitted is constantly increasing. Because this data is sensitive, preserving privacy during transmission is essential, and techniques for hiding data in biomedical signals are used for this purpose. This comprehensive survey of research papers covers the latest techniques for data hiding in ECG signals, as well as older techniques not covered by recent surveys. We give an overview of the methodology, robustness, and imperceptibility of the techniques.
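    As a minimal illustration of one family of techniques such surveys cover, the sketch below shows least-significant-bit (LSB) embedding in quantized ECG samples. The 12-bit trace, payload, and function names are illustrative assumptions, not taken from any surveyed paper.

```python
import numpy as np

def lsb_embed(samples: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one message bit in the least-significant bit of each sample."""
    stego = samples.copy()
    stego[: len(bits)] = (stego[: len(bits)] & ~1) | bits
    return stego

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the hidden bits from the first n_bits samples."""
    return stego[:n_bits] & 1

# Hypothetical 12-bit ECG trace and a small patient-ID payload.
ecg = np.random.randint(0, 4096, size=1000, dtype=np.int32)
bits = np.unpackbits(np.frombuffer(b"ID:12345", dtype=np.uint8)).astype(np.int32)

stego = lsb_embed(ecg, bits)
assert np.array_equal(lsb_extract(stego, len(bits)), bits)  # payload survives
```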

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas by application: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.
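    As a small concrete instance of the linear, stationary models the classical toolbox is built on, the following sketch applies a moving-average FIR filter to a noisy sinusoid; the signal and tap count are arbitrary illustrative choices.

```python
import numpy as np

def moving_average(x: np.ndarray, taps: int = 5) -> np.ndarray:
    """Moving-average FIR filter: a linear, time-invariant smoother."""
    h = np.ones(taps) / taps           # impulse response
    return np.convolve(x, h, mode="same")

# Noisy sinusoid: a stationary signal corrupted by Gaussian noise.
t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
y = moving_average(x, taps=11)         # smoothed estimate
```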

    Computational Intelligence and Complexity Measures for Chaotic Information Processing

    This dissertation investigates the application of computational intelligence methods to the analysis of nonlinear chaotic systems in the framework of many known and newly designed complex systems. Parallel comparisons are made between these methods. This provides insight into the difficult challenges facing nonlinear systems characterization and aids in developing a generalized algorithm for computing algorithmic complexity measures, Lyapunov exponents, information dimension, and topological entropy. These metrics are implemented to characterize the dynamic patterns of discrete and continuous systems, and they make it possible to distinguish order from disorder in these systems. Steps required for computing Lyapunov exponents with a reorthonormalization method and a group theory approach are formalized. Procedures for implementing computational algorithms are designed, and numerical results for each system are presented. The advance-time sampling technique is designed to overcome the scarcity of phase space samples and the buffer overflow problem in algorithmic complexity measure estimation in slow-dynamics feedback-controlled systems. It is proved analytically and tested numerically that for a quasiperiodic system like a Fibonacci map, complexity grows logarithmically with the evolutionary length of the data block. It is concluded that a normalized algorithmic complexity measure can be used as a system classifier. This quantity turns out to be one for random sequences and a non-zero value less than one for chaotic sequences. For periodic and quasi-periodic responses, the normalized complexity approaches zero as the data strings grow, with a faster decreasing rate observed for periodic responses. Algorithmic complexity analysis is performed on a class of rate-1/n convolutional encoders, and the degree of diffusion in random-like patterns is measured. Simulation evidence indicates that the algorithmic complexity associated with a particular class of rate-1/n code increases with the encoder constraint length. This occurs in parallel with the increase in the error-correcting capacity of the decoder. Comparing groups of rate-1/n convolutional encoders, it is observed that as the encoder rate decreases from 1/2 to 1/7, the encoded data sequence manifests smaller algorithmic complexity with a larger free distance value.
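    The normalized algorithmic complexity measure described above can be sketched as follows. This is an illustrative stand-in, not the dissertation's estimator: it uses LZ78 incremental parsing for the phrase count and a logistic map (rather than the systems studied in the thesis) as the chaotic source.

```python
import math
import random

def lz78_phrase_count(s: str) -> int:
    """Count the phrases in the LZ78 incremental parsing of a string."""
    phrases, cur = set(), ""
    for ch in s:
        cur += ch
        if cur not in phrases:  # shortest prefix not seen before ends a phrase
            phrases.add(cur)
            cur = ""
    return len(phrases) + (1 if cur else 0)

def normalized_complexity(s: str) -> float:
    """c(n) * log2(n) / n: near 1 for random strings, toward 0 for periodic."""
    n = len(s)
    return lz78_phrase_count(s) * math.log2(n) / n

n = 4096
random_bits = "".join(random.choice("01") for _ in range(n))
periodic_bits = "01" * (n // 2)
# Logistic map in a chaotic regime (r = 3.8), thresholded to a bit string.
x, out = 0.4, []
for _ in range(n):
    x = 3.8 * x * (1.0 - x)
    out.append("1" if x > 0.5 else "0")
chaotic_bits = "".join(out)

print(normalized_complexity(random_bits))    # close to 1
print(normalized_complexity(periodic_bits))  # approaches 0 as n grows
print(normalized_complexity(chaotic_bits))   # nonzero, below the random value
```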

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of their methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
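    As a minimal example of the entropy measures that unify the Special Issue, the sketch below computes the Shannon entropy of an 8-bit grayscale image's histogram; the array sizes are illustrative.

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy, in bits per pixel, of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty histogram bins
    return float(-(p * np.log2(p)).sum())

noise = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # ~8 bits/pixel
flat = np.zeros((64, 64), dtype=np.uint8)                    # 0 bits/pixel
print(image_entropy(noise), image_entropy(flat))
```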

    A new hybrid deep neural networks (DNN) algorithm for Lorenz chaotic system parameter estimation in image encryption

    One of the great discoveries of the 20th century was the chaotic phenomenon, which remains a popular area of study. The Lorenz attractor is a mathematical model that describes a chaotic system; it arises as a solution to a set of differential equations known as the Lorenz equations, originally introduced by Edward N. Lorenz. Hybridizing a Deep Neural Network (DNN) with the K-Means clustering algorithm can increase accuracy and reduce the data complexity of the Lorenz dataset. The hyperparameters of the DNN must then be tuned to find the best setting for a given problem, and evaluating them is crucial to verify whether the model can accurately categorize the data. Furthermore, conventional encryption methods such as the Data Encryption Standard (DES) are not suited to image data because of its high redundancy and large volume. The first research objective is to develop a new deep learning algorithm, a hybrid of DNN and K-Means clustering, for estimating the Lorenz chaotic system. This study then aims to optimize the hyperparameters of the developed DNN model using the Arithmetic Optimization Algorithm (AOA) and, lastly, to evaluate the performance of the newly proposed deep learning model with the Simulated Kalman Filter (SKF) algorithm in an image encryption application. This work uses a Lorenz dataset from Professor Roberto Barrio of the University of Zaragoza in Spain and focuses on multi-class classification. The dataset was split into training, testing, and validation sets comprising 70%, 15%, and 15% of the total, respectively. The research starts by developing the hybrid deep learning model consisting of a DNN and the K-Means clustering algorithm. The developed algorithm is then applied to estimate the parameters of the Lorenz system. In addition, the hyperparameter tuning problem is addressed to improve the developed hybrid model using the AOA. Lastly, a new hybrid technique is proposed to tackle the image encryption problem by combining the estimated parameters of the chaotic system with an optimization algorithm, the SKF; the fitness function in the SKF is the correlation function, used to optimize the cipher image produced by the Lorenz system. The findings of this study are as follows. The developed model obtained an accuracy of 72.27%, compared to 66.47% for the baseline model. The baseline model's loss value is 0.3661, while the developed model's is 0.1712, lower than that of the standalone model. Hence, the clustering algorithm performed well in enhancing the accuracy of the model, as stated in the first objective. The combination of the first two objectives yielded an R² value of 0.8054 and a ρ value of 0.9912, both higher than those of the standalone DNN model. For the hybrid model, the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) values are 0.1964 and 0.0045, respectively; both are lower than the baseline model's values of 0.2913 and 0.1976. The findings show that the approach improved the model's effectiveness and could predict outcomes accurately. This study also presents a detailed analysis of the developed image encryption scheme, including the statistical, security, and robustness analyses related to the third objective, and concludes with a comparison of seven image encryption schemes.
    Based on the cropping attack findings, the proposed technique obtained higher Peak Signal-to-Noise Ratio (PSNR) values for two conditions, the 1/16 and 1/4 cropping ratios, while the scheme of Zhou et al. achieved a higher PSNR only for the 1/2 cropping ratio. In conclusion, the hybrid DNN with the K-Means clustering algorithm is shown to resolve parameter estimation for the chaotic system by producing an accurate prediction model.
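    For reference, the Lorenz equations at the heart of the thesis can be simulated in a few lines. The sketch below uses the classic parameter values and a crude forward-Euler integrator, whereas the thesis estimates the parameters from data rather than fixing them.

```python
import numpy as np

def lorenz_step(state: np.ndarray, sigma: float, rho: float,
                beta: float, dt: float) -> np.ndarray:
    """One forward-Euler step of the Lorenz equations:
    dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y, dz/dt = x*y - beta*z.
    (Forward Euler is crude; a Runge-Kutta scheme would be more accurate.)"""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

# Classic chaotic parameter values; the thesis estimates such parameters
# from data with its DNN + K-Means hybrid instead of assuming them known.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
state = np.array([1.0, 1.0, 1.0])
trajectory = [state]
for _ in range(10_000):
    state = lorenz_step(state, sigma, rho, beta, dt=0.01)
    trajectory.append(state)
trajectory = np.asarray(trajectory)  # samples tracing out the attractor
```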

    Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey

    Digital twin (DT) refers to a promising technique for digitally and accurately representing actual physical entities. One typical advantage of DT is that it can be used not only to virtually replicate a system's detailed operations but also to analyze its current condition, predict future behaviour, and refine control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. For human-centric systems, a novel concept called the human digital twin (HDT) has thus been proposed. In particular, HDT allows in silico representation of an individual human body with the ability to dynamically reflect molecular status, physiological status, emotional and psychological status, as well as lifestyle evolution. This motivates the expected application of HDT in personalized healthcare (PH), where it can facilitate remote monitoring, diagnosis, prescription, surgery, and rehabilitation. Despite this large potential, however, HDT faces substantial research challenges in different aspects and has recently become an increasingly popular topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, including the data acquisition layer, data communication layer, computation layer, data management layer, and data analysis and decision-making layer. Besides reviewing the key technologies for implementing this networking architecture in detail, we conclude the survey by presenting future research directions for HDT.
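    The five-layer networking architecture enumerated above can be caricatured as a simple pipeline; the class and method names below are illustrative assumptions, not an API from the survey.

```python
from dataclasses import dataclass, field

@dataclass
class HumanDigitalTwinPipeline:
    """Toy walk through the survey's five layers; all names are illustrative."""
    history: list = field(default_factory=list)

    def acquire(self) -> dict:
        """Data acquisition layer: wearable/sensor readings (stubbed)."""
        return {"heart_rate": 72, "spo2": 0.98}

    def communicate(self, sample: dict) -> dict:
        """Data communication layer: transport to the twin (identity stub)."""
        return sample

    def compute(self, sample: dict) -> dict:
        """Computation layer: derive higher-level physiological state."""
        return {**sample, "tachycardia": sample["heart_rate"] > 100}

    def manage(self, sample: dict) -> None:
        """Data management layer: persist the twin's state history."""
        self.history.append(sample)

    def decide(self, sample: dict) -> str:
        """Analysis and decision-making layer: personalized feedback."""
        return "alert clinician" if sample["tachycardia"] else "ok"

twin = HumanDigitalTwinPipeline()
s = twin.compute(twin.communicate(twin.acquire()))
twin.manage(s)
print(twin.decide(s))
```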

    Simulation and implementation of novel deep learning hardware architectures for resource constrained devices

    Corey Lammie designed mixed-signal memristive–complementary metal–oxide–semiconductor (CMOS) and field-programmable gate array (FPGA) hardware architectures, which were used to reduce the power and resource requirements of Deep Learning (DL) systems during both inference and training. Disruptive design methodologies, such as those explored in this thesis, can be used to facilitate the design of next-generation DL systems.

    Survey of FPGA applications in the period 2000 – 2015 (Technical Report)

    Romoth J, Porrmann M, Rückert U. Survey of FPGA applications in the period 2000–2015 (Technical Report); 2017.
    Since their introduction, FPGAs have appeared in more and more fields of application. Their key advantage is the combination of software-like flexibility with performance otherwise typical of dedicated hardware. Nevertheless, every application field imposes special requirements on the computational architecture used. This paper provides an overview of the different topics FPGAs have been used for in the last 15 years of research, and of why they have been chosen over other processing units such as CPUs.

    Feature binding of MPEG-7 Visual Descriptors Using Chaotic Series

    Due to advanced segmentation and tracking algorithms, a video can be divided into numerous objects. Segmentation and tracking algorithms output different low-level object features, resulting in a high-dimensional feature vector per object. The challenge is to generate feature vectors that can be mapped to human-understandable descriptions, such as object labels, e.g., person or car. MPEG-7 provides visual descriptors to describe video content. However, the MPEG-7 visual descriptors are generally highly redundant, and the feature coefficients in these descriptors need to be pre-processed for domain-specific applications. Ideally, an MPEG-7 visual descriptor based feature vector could be processed in a way similar to functional simulations of human brain activity. A link has been established between the analysis of temporal human brain oscillatory signals and chaotic dynamics in the electroencephalography (EEG) of brain neurons. Neural signals in limited brain activities have been found to be behaviorally relevant (where they previously appeared to be noise) and can be simulated using chaotic series. A chaotic series refers to either a finite-difference or an ordinary differential equation that exhibits non-random, irregular fluctuations of parameter values over time in a dynamical system. The dynamics of a chaotic series can be high- or low-dimensional, and the dimensionality can be deduced from the topological dimension of the attractor of the series. An attractor is manifested by the tendency of a nonlinear finite-difference equation or an ordinary differential equation, under various but delimited conditions, to go to a reproducible active state and stay there. We propose a feature binding method, using chaotic series, to generate a new feature vector, C-MP7, to describe video objects. The proposed method considers MPEG-7 visual descriptor coefficients as dynamical systems. These systems are excited (similarly to neuronal excitation) with either high- or low-dimensional chaotic series, and histogram-based clustering is then applied to the simulated chaotic series coefficients to generate C-MP7. The proposed feature binding yields a better feature vector with high-dimensional chaotic series simulation than with low-dimensional simulation, relative to the MPEG-7 visual descriptor based feature vector. Diverse video objects are grouped into four generic classes (has_person, has_group_of_persons, has_vehicle, and has_unknown) to observe how well C-MP7 describes different video objects compared to the MPEG-7 feature vector. In C-MP7, with high-dimensional chaotic series simulation, 1) descriptor coefficients are reduced dynamically by up to 37.05% compared to 10% in MPEG-7, 2) higher variance is achieved than with MPEG-7, 3) multi-class discriminant analysis of C-MP7 with the Fisher criterion shows greater binary class separation for clustered video objects than MPEG-7, and 4) C-MP7 provides particularly good clustering of video objects for the has_vehicle class against other classes. To test C-MP7 in an application, we deploy a combination of multiple binary classifiers for video object classification. Related work on video object classification uses non-MPEG-7 features. We specifically examine the classification of challenging surveillance video objects, e.g., incomplete objects, partial occlusion, background overlap, scale- and resolution-variant objects, and indoor/outdoor lighting variations.
    C-MP7 is used to train different classes of video objects. Object classification accuracy is verified with both low-dimensional and high-dimensional chaotic series based feature binding for C-MP7. Testing of diverse video objects with high-dimensional chaotic series simulation shows that 1) classification accuracy improves significantly, to 83% on average compared to 62% with MPEG-7, 2) excellent clustering of vehicle objects leads to above 99% accuracy for vehicles against all other objects, and 3) with diverse video objects, including objects from poor segmentation, C-MP7 is more robust as a feature vector in classification than MPEG-7. Initial results on sub-group classification for male and female video objects in the has_person class are also presented as subjective observations. Chaotic series properties have previously been used in video processing applications for compression and digital watermarking. To the best of our knowledge, this work is the first to use chaotic series for video object description and to apply it to object classification.
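    The abstract does not specify the exact excitation or clustering procedure, but the idea of exciting descriptor coefficients with a chaotic series and summarizing the responses by histograms can be sketched as follows; the logistic map, seeding rule, and bin count are assumptions for illustration.

```python
import numpy as np

def logistic_series(x0: float, r: float, n: int) -> np.ndarray:
    """Chaotic series from the logistic map x <- r * x * (1 - x)."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def bind_features(coeffs: np.ndarray, n_steps: int = 200,
                  n_bins: int = 16) -> np.ndarray:
    """Excite each descriptor coefficient with a chaotic series and
    summarize each response by a normalized histogram."""
    bound = []
    for c in coeffs:
        x0 = 0.1 + 0.8 * float(c)          # seed the map with the coefficient
        series = logistic_series(x0, r=3.99, n=n_steps)
        hist, _ = np.histogram(series, bins=n_bins, range=(0.0, 1.0))
        bound.append(hist / n_steps)
    return np.concatenate(bound)

coeffs = np.random.rand(8)                 # hypothetical MPEG-7 coefficients
c_mp7_like = bind_features(coeffs)         # compact bound feature vector
```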