Statistical Review of Health Monitoring Models for Real-Time Hospital Scenarios
Health Monitoring System Models (HMSMs) must be fast, efficient, and secure to be useful. In these models, cascading components handle data collection, storage, communication, retrieval, and privacy. Researchers have proposed many methods to design such models, which vary in scalability, multidomain efficiency, flexibility, usage and deployment, computational complexity, cost of deployment, security level, feature usability, and other performance metrics. HMSM designers therefore struggle to find the best models for their application-specific deployments: they must test and validate different models, which increases design time and cost and affects deployment feasibility. To reduce this model-selection ambiguity, this article discusses the application-specific advantages, feature-specific limitations, context-specific nuances, and deployment-specific future research scopes of secure HMSMs. Models based on the Internet of Things (IoT), Machine Learning Models (MLMs), blockchain, hashing methods, encryption methods, distributed computing configurations, and bioinspired models offer better Quality of Service (QoS) and security than their counterparts, so researchers can identify application-specific models among them. This article compares these models in terms of deployment cost, attack-mitigation performance, scalability, computational complexity, and monitoring applicability; this comparative analysis helps readers choose HMSMs for context-specific application deployments. The article also devises a set of performance-measurement metrics, called Health Monitoring Model Metrics (HM3), to compare the performance of various models based on accuracy, precision, delay, scalability, computational complexity, energy consumption, and security.
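The abstract lists the criteria behind HM3 but not how scores would be combined. The sketch below shows one generic way such a multi-criteria comparison could be computed (min-max normalization plus a weighted sum); the weights, the normalization rule, and the example numbers are illustrative assumptions, not the article's actual HM3 definition.

```python
# Hedged sketch: a generic weighted multi-criteria score for comparing health
# monitoring models on the HM3 criteria named in the abstract. The scoring rule,
# weights, and sample values are assumptions for illustration only.
import numpy as np

CRITERIA = ["accuracy", "precision", "delay", "scalability",
            "complexity", "energy", "security"]
# True -> higher is better; False -> lower is better (delay, complexity, energy)
HIGHER_IS_BETTER = np.array([True, True, False, True, False, False, True])

def hm3_scores(models, weights=None):
    """Min-max normalize each criterion across models and return a weighted score per model."""
    names = list(models)
    X = np.array([models[n] for n in names], dtype=float)      # rows = models, cols = criteria
    w = np.full(len(CRITERIA), 1.0 / len(CRITERIA)) if weights is None else np.asarray(weights, float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    norm = (X - lo) / np.where(hi > lo, hi - lo, 1.0)          # each criterion scaled to 0..1
    norm[:, ~HIGHER_IS_BETTER] = 1.0 - norm[:, ~HIGHER_IS_BETTER]  # flip "lower is better" criteria
    return {n: round(float(s), 3) for n, s in zip(names, (norm * w).sum(axis=1))}

# Hypothetical numbers (accuracy, precision, delay in ms, scalability, complexity, energy, security)
print(hm3_scores({
    "IoT + ML model":   [0.94, 0.92, 120, 0.80, 0.60, 0.50, 0.70],
    "Blockchain model": [0.90, 0.91, 300, 0.60, 0.90, 0.80, 0.90],
}))
```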
Medical Images Encryption Based on Adaptive-Robust Multi-Mode Synchronization of Chen Hyper-Chaotic Systems
In this paper, a novel medical image encryption method based on multi-mode synchronization of hyper-chaotic systems is presented. The synchronization of hyper-chaotic systems is of great significance in secure communication tasks such as image encryption. Multi-mode synchronization is a novel and highly complex problem, especially in the presence of uncertainty and disturbance. In this work, an adaptive-robust controller is designed for multi-mode synchronization of chaotic systems with variable and unknown parameters, despite bounded disturbance and uncertainty with a known bounding function, in two modes: in the first, a main system is synchronized with several response systems; in the second, the synchronization is circular. It is proved, via theorems, that the two synchronization methods are equivalent. Using Lyapunov's method, we show that the synchronization error and the parameter-estimation error converge to zero. New laws for updating the time-varying parameters and for estimating the disturbance and uncertainty bounds are proposed such that the stability of the system is guaranteed. To assess the performance of the proposed synchronization method, various statistical analyses were carried out on encrypted medical images and standard benchmark images. The results show the effective performance of the proposed synchronization technique in medical image encryption for telemedicine applications.
Funding: MINECO/FEDER under the RTI2018-098913-B100, CV20-45250, and A-TIC-080-UGR18 projects.
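To make the general idea of chaos-based image encryption concrete, the following is a minimal sketch of a keystream cipher driven by the classic (3-D) Chen chaotic system. The paper uses a hyper-chaotic Chen system with adaptive-robust multi-mode synchronization for key agreement, which this toy example omits; the parameter values, quantization rule, and XOR cipher here are common textbook choices, not the authors' scheme.

```python
# Illustrative sketch only: XOR image encryption with a keystream generated from
# the classic Chen chaotic system. Synchronization-based key agreement and the
# hyper-chaotic (4-D) dynamics used in the paper are not reproduced here.
import numpy as np

def chen_keystream(n_bytes, x0=(0.1, 0.2, 0.3), a=35.0, b=3.0, c=28.0,
                   dt=0.001, burn_in=5000):
    """Generate n_bytes pseudo-random bytes from a Chen-system trajectory."""
    x, y, z = x0
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(burn_in + n_bytes):
        # Euler integration of the Chen system
        dx = a * (y - x)
        dy = (c - a) * x - x * z + c * y
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn_in:
            # Quantize the state to one byte (a simple, illustrative rule)
            out[i - burn_in] = int(abs(x + y + z) * 1e6) % 256
    return out

def xor_encrypt(image, key_seed):
    """Encrypt/decrypt a uint8 image by XOR with the chaotic keystream."""
    flat = image.reshape(-1)
    ks = chen_keystream(flat.size, x0=key_seed)
    return (flat ^ ks).reshape(image.shape)

# Round trip on a random stand-in "image": decryption recovers the original
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
enc = xor_encrypt(img, key_seed=(0.1, 0.2, 0.3))
dec = xor_encrypt(enc, key_seed=(0.1, 0.2, 0.3))
assert np.array_equal(img, dec)
```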
Entropy in Image Analysis II
Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
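For readers unfamiliar with the central quantity of the Special Issue, the snippet below shows the standard Shannon entropy of an image's grey-level histogram. This is a generic textbook computation, not a method taken from any specific contribution in the volume.

```python
# Minimal sketch: Shannon entropy of a grayscale image's histogram (bits/pixel).
import numpy as np

def image_entropy(image, levels=256):
    """Shannon entropy of a uint8 grayscale image, in bits per pixel."""
    hist = np.bincount(image.reshape(-1), minlength=levels).astype(np.float64)
    p = hist / hist.sum()          # empirical grey-level probabilities
    p = p[p > 0]                   # skip empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

# Example: a uniform-noise image approaches the 8-bit maximum of 8 bits/pixel
noise = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
print(round(image_entropy(noise), 3))
```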
Review on biomedical sensors, technologies, and algorithms for diagnosis of sleep-disordered breathing: Comprehensive survey
This paper provides a comprehensive review of available technologies for measuring the vital physiological parameters related to sleep-disordered breathing (SDB). SDB is a chronic condition that may lead to several health problems and increases the risk of high blood pressure and even heart attack. Therefore, diagnosing SDB at an early stage is very important, and the essential first step before diagnosis is measurement. Vital health parameters related to SDB can be measured through invasive or non-invasive methods. Nowadays, with the growth of the aging population, improvement in home health management systems is needed more than ever. Moreover, traditional measurement techniques such as polysomnography are uncomfortable and introduce additional costs to consumers. Therefore, modern self-health-management devices combine electronics and communication science to provide appliances that can be used for SDB diagnosis by monitoring a patient's physiological parameters with more comfort and accuracy. Additionally, developments in machine learning algorithms provide accurate methods for analysing the measured signals. This paper provides a comprehensive review of measurement approaches, data transmission and communication networks, and machine learning algorithms for sleep stage classification, to diagnose SDB.
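As a hedged illustration of the kind of machine-learning pipeline such reviews survey, the sketch below classifies 30-second epochs of a physiological signal into sleep stages from simple hand-crafted features. The feature set, labels, sampling rate, and classifier are assumptions for illustration, not a method proposed in the paper; the data here are synthetic stand-ins.

```python
# Sketch: feature-based sleep stage classification of 30 s epochs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 100  # assumed sampling rate (Hz)

def epoch_features(epoch):
    """A few generic time-domain features for one 30 s epoch."""
    return [epoch.mean(), epoch.std(),
            np.abs(np.diff(epoch)).mean(),                      # mean absolute first difference
            np.percentile(epoch, 95) - np.percentile(epoch, 5)] # amplitude spread

# Synthetic stand-in data: 200 epochs; labels 0=wake, 1=light, 2=deep, 3=REM
rng = np.random.default_rng(0)
epochs = rng.normal(size=(200, 30 * FS))
labels = rng.integers(0, 4, size=200)

X = np.array([epoch_features(e) for e in epochs])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```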
Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey
Digital twin (DT) refers to a promising technique for digitally and accurately representing actual physical entities. One typical advantage of DT is that it can be used not only to virtually replicate a system's detailed operations but also to analyze its current condition, predict future behaviour, and refine the control optimization. Although DT has been widely implemented in various fields, such as smart manufacturing and transportation, its conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. For its adoption in human-centric systems, a novel concept, called human digital twin (HDT), has thus been proposed. In particular, HDT allows in silico representation of an individual human body with the ability to dynamically reflect molecular status, physiological status, emotional and psychological status, as well as lifestyle evolution. These capabilities prompt the expected application of HDT in personalized healthcare (PH), where it can facilitate remote monitoring, diagnosis, prescription, surgery, and rehabilitation. Despite this large potential, however, HDT faces substantial research challenges in different aspects and has recently become an increasingly popular research topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, including the data acquisition layer, data communication layer, computation layer, data management layer, and data analysis and decision-making layer. Besides reviewing the key technologies for implementing this networking architecture in detail, we conclude the survey by presenting future research directions for HDT.
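The layered networking architecture named in the abstract lends itself to a simple structural sketch. Below is a minimal, assumption-laden Python rendering of one HDT update cycle flowing through those layers; all class and method names are invented for illustration, since the survey defines the layers conceptually, not as code.

```python
# Sketch of the HDT layered data flow: acquisition -> communication -> management
# -> analysis/decision (the computation layer is only hinted at in a comment).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DataAcquisitionLayer:
    """Collects raw readings from body-area sensors (values are simulated here)."""
    def sample(self) -> Dict[str, float]:
        return {"heart_rate": 72.0, "spo2": 0.97, "skin_temp": 36.6}

@dataclass
class DataCommunicationLayer:
    """Stands in for the network link between the physical and digital twins."""
    def transmit(self, reading: Dict[str, float]) -> Dict[str, float]:
        return dict(reading)  # compression/encryption/transport would happen here

@dataclass
class DataManagementLayer:
    """Persists the twin's time series (kept in memory for this sketch)."""
    history: List[Dict[str, float]] = field(default_factory=list)
    def store(self, reading: Dict[str, float]) -> None:
        self.history.append(reading)

@dataclass
class AnalysisDecisionLayer:
    """Trivial rule standing in for models run by the computation layer."""
    def decide(self, reading: Dict[str, float]) -> str:
        return "alert: low SpO2" if reading["spo2"] < 0.90 else "status: normal"

# One update cycle of the human digital twin
acq, comm, mgmt, dec = (DataAcquisitionLayer(), DataCommunicationLayer(),
                        DataManagementLayer(), AnalysisDecisionLayer())
reading = comm.transmit(acq.sample())
mgmt.store(reading)
print(dec.decide(reading))
```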
Recent Advances in Signal Processing
Signal processing is a critical task in the majority of new technological inventions and challenges across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and they have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand; these five categories are ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.
Towards a low complexity scheme for medical images in scalable video coding
Medical imaging has become of vital importance for diagnosing diseases and conducting noninvasive procedures. Advances in eHealth applications are challenged by the fact that Digital Imaging and Communications in Medicine (DICOM) requires high-resolution images, thereby increasing their size and the associated computational complexity, particularly when these images are communicated over IP and wireless networks. Therefore, medical research requires an efficient coding technique to achieve high-quality and low-complexity images with error-resilient features. In this study, we propose an improved coding scheme that exploits the content features of encoded videos with low complexity combined with flexible macroblock ordering for error resilience. We identify the homogeneous region in which the search for optimal macroblock modes is early terminated. For non-homogeneous regions, the integration of smaller blocks is employed only if the vector difference is less than the threshold. Results confirm that the proposed technique achieves a considerable performance improvement compared with existing schemes in terms of reducing the computational complexity without compromising the bit-rate and peak signal-to-noise ratio. © 2013 IEEE
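The mode-decision idea in this abstract (early termination in homogeneous regions, and descending to smaller partitions only when the motion-vector difference is large) can be illustrated with a short sketch. The homogeneity measure (luma variance), the thresholds, and the mode names below are assumptions for illustration; they are not the exact criteria or values used in the paper.

```python
# Sketch: low-complexity macroblock mode decision with early termination.
import numpy as np

VAR_THRESHOLD = 50.0   # assumed homogeneity threshold on luma variance
MVD_THRESHOLD = 2      # assumed motion-vector-difference threshold (pixels)

def is_homogeneous(macroblock: np.ndarray) -> bool:
    """Treat a 16x16 luma block as homogeneous if its variance is small."""
    return float(macroblock.var()) < VAR_THRESHOLD

def select_mode(macroblock: np.ndarray, mv_difference: int) -> str:
    """Pick a coding mode with early termination, mimicking the described scheme."""
    if is_homogeneous(macroblock):
        # Early termination: skip the exhaustive search over partition modes
        return "SKIP/16x16"
    if mv_difference < MVD_THRESHOLD:
        # Motion is coherent: do not descend into smaller partitions
        return "16x16"
    # Otherwise evaluate finer partitions (full rate-distortion search not shown)
    return "8x8-or-smaller"

flat = np.full((16, 16), 128, dtype=np.uint8)
textured = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
print(select_mode(flat, mv_difference=5), select_mode(textured, mv_difference=5))
```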