
    Smart Home Privacy Protection Methods against a Passive Wireless Snooping Side-Channel Attack

    Smart home technologies have attracted a growing number of users in recent years due to significant advancements in their underlying enabling components, such as sensors, actuators, and processors, which are spreading in various domains and have become more affordable. However, these IoT-based solutions are prone to data leakage; this privacy issue has motivated researchers to seek secure solutions to overcome the challenge. In this regard, wireless signal eavesdropping is one of the most severe threats, enabling attackers to obtain residents’ sensitive information. Even if the system encrypts all communications, some cyber attacks can still steal information by interpreting the contextual data related to the transmitted signals. A side-channel attack (SCA) is a class of cyber attack that extracts valuable information from smart systems without accessing the content of data packets. For example, the “fingerprint and timing-based snooping” (FATS) attack is an SCA developed to infer in-home activities passively from a remote location near the targeted house. This paper reviews the SCAs associated with cyber–physical systems, focusing in detail on the proposed solutions to protect the privacy of smart homes against FATS attacks. Moreover, this work clarifies shortcomings and future opportunities by analyzing the existing gaps in the reviewed methods.
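
    To make the side-channel idea above concrete, the following is a minimal, hypothetical sketch of the kind of inference a FATS-style eavesdropper performs: it observes only (timestamp, RF fingerprint) pairs for encrypted packets, yet can cluster temporally close transmissions into events and guess the in-home activity from which devices fired together. The device names, the 60-second gap threshold, and the activity rules are illustrative assumptions, not the published attack.

    observed = [            # (seconds since midnight, RF fingerprint of the sender) -- assumed data
        (25200, "bathroom_motion"), (25215, "water_flow"),
        (27000, "kitchen_motion"), (27030, "fridge_door"), (27090, "stove_sensor"),
    ]

    def cluster_events(packets, gap=60):
        """Split the packet stream into events separated by quiet gaps longer than `gap` seconds."""
        events, current = [], [packets[0]]
        for prev, cur in zip(packets, packets[1:]):
            if cur[0] - prev[0] > gap:
                events.append(current)
                current = []
            current.append(cur)
        events.append(current)
        return events

    def guess_activity(event):
        """Toy rule base mapping co-occurring device fingerprints to an activity label (assumed rules)."""
        devices = {fingerprint for _, fingerprint in event}
        if {"bathroom_motion", "water_flow"} <= devices:
            return "showering / bathroom use"
        if {"kitchen_motion", "stove_sensor"} <= devices:
            return "cooking"
        return "unknown"

    for event in cluster_events(observed):
        print(event[0][0], guess_activity(event))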

    Lightning mapping: Techniques, challenges, and opportunities

    Despite significant progress in understanding the phenomenon of lightning and the physics behind it, locating and mapping its occurrence remain a challenge. Localization and mapping of very high frequency (VHF) lightning radiation sources provide a foundation for subsequent research on predicting lightning, saving lives, and protecting valuable assets. A major technical challenge in attempting to map the sources of lightning is mapping accuracy. The three common electromagnetic radio-frequency-based lightning locating techniques are the magnetic direction finder, time of arrival, and the interferometer (ITF). Understanding these approaches requires critically reviewing previous attempts. The performance and reliability of each method are evaluated on the basis of the mapping accuracy obtained from lightning data from different sources. In this work, we review various methods for lightning mapping: we study the approaches, describe their techniques, analyze their merits and demerits, classify them, and identify a few opportunities for further research. We find that the ITF system is the most effective method and that its performance may be improved further. One approach is to improve how lightning signals are preprocessed and how noise is filtered. Signal processing can also be utilized to improve mapping accuracy by introducing methods such as the wavelet transform in place of conventional cross-correlation approaches.
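
    As an illustration of the cross-correlation step mentioned above, the following sketch estimates the arrival-time difference of the same VHF impulse recorded at two stations, which, combined with the station geometry, constrains the source location in time-of-arrival mapping. The sampling rate, synthetic pulse, and true delay are assumptions for illustration only.

    import numpy as np

    fs = 10_000_000                       # 10 MS/s sampling rate (assumed)
    t = np.arange(2000) / fs
    pulse = np.exp(-((t - 50e-6) ** 2) / (2 * (2e-6) ** 2))   # synthetic VHF impulse envelope

    true_delay = 37                       # delay of station B relative to station A, in samples
    rng = np.random.default_rng(0)
    station_a = pulse + 0.05 * rng.standard_normal(t.size)
    station_b = np.roll(pulse, true_delay) + 0.05 * rng.standard_normal(t.size)

    # Full cross-correlation; the lag of the peak is the estimated arrival-time difference.
    xcorr = np.correlate(station_b, station_a, mode="full")
    lags = np.arange(-t.size + 1, t.size)
    estimated_delay = lags[np.argmax(xcorr)]

    print(f"true delay: {true_delay} samples, estimated: {estimated_delay} samples "
          f"({estimated_delay / fs * 1e6:.2f} microseconds)")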

    Identifying individuals using EEG-based brain connectivity patterns

    Considering the recent rapid advancements in digital technology, the electroencephalogram (EEG) signal is a potential candidate for a robust human biometric authentication system. In this paper, the focus of investigation is the use of brain activity as a new modality for identification. Univariate-model biometrics such as speech, heart sound, and the electrocardiogram (ECG) require high-resolution computer systems with special devices: the heart sound is obtained by placing a digital stethoscope on the chest, the ECG signals are recorded at the hands or chest of the client, and the client speaks into a microphone for speaker recognition. Adapting these technologies to people is a challenging task. This paper proposes a series of tasks in a single paradigm rather than having users perform several tasks one by one. The advantage of using brain electrical activity, as suggested in this work, is its uniqueness: the recorded brain response cannot be duplicated, so a person’s identity is unlikely to be forged or stolen. The disadvantage of univariate models is that they capture only the temporal correlation within a single signal, while the correlation between brain regions is ignored; inter-regional relationships cannot be assessed directly from univariate models. The alternative is to generalize the univariate model to multivariate modeling, under the hypothesis that inter-regional correlations provide additional information to discriminate between brain conditions, since such models can measure the synchronization between coupled regions and the coherency among them for brain biometrics. The key issue is to handle the single-task paradigm proposed in this paper with multivariate EEG signal classification using a Multivariate Autoregressive (MVAR) model rather than a univariate model. The brain biometric system obtained a significant result of 95.33% for dynamic Vector Autoregressive (VAR) time-series features and 94.59% for Partial Directed Coherence (PDC) and Coherence (COH) frequency-domain features.
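
    As a concrete illustration of the multivariate modeling described above, the sketch below fits a VAR(p) model to multichannel data by ordinary least squares and flattens the coefficient matrices into a feature vector; PDC and COH features of the kind used in the paper would be derived from the same coefficients. The channel count, model order, and random data are assumptions for illustration, not the study's recording setup.

    import numpy as np

    def fit_var(x, order):
        """x: (n_samples, n_channels). Returns coefficient matrices A of shape
        (order, n_channels, n_channels) such that x[t] ~ sum_k A[k] @ x[t-k-1]."""
        n, c = x.shape
        # Each regressor row stacks the previous `order` samples of all channels.
        rows = [np.concatenate([x[t - k - 1] for k in range(order)]) for t in range(order, n)]
        X = np.asarray(rows)                           # (n - order, order * c)
        Y = x[order:]                                  # (n - order, c)
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (order * c, c)
        return coef.T.reshape(c, order, c).transpose(1, 0, 2)

    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((1000, 8))               # stand-in for an 8-channel EEG epoch
    A = fit_var(eeg, order=5)
    features = A.ravel()                               # feature vector for the identification stage
    print(A.shape, features.shape)                     # (5, 8, 8) (320,)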

    Enhanced signal processing using modified cyclic shift tree denoising

    The cortical pyramidal neurons in the cerebral cortex, which are positioned perpendicularly to the brain’s surface, are assumed to be the primary source of the electroencephalogram (EEG) reading. The EEG activity generated by the brainstem in response to auditory stimuli is known as the Auditory Brainstem Response (ABR). Identification of wave V in the ABR is currently regarded as the most efficient method for audiology testing. The ABR signal is small in amplitude and is buried in background noise. The traditional approach to retrieving the underlying wave V, which employs an averaging methodology, requires many stimulus repetitions. This results in protracted screening time, which causes the subject discomfort. For the detection of wave V, this paper uses Kalman filtering and Cyclic Shift Tree Denoising (CSTD). We model the ABR dynamics as a Markov process in state-space form. The Kalman filter, which is optimal in the mean-square sense, is used to estimate the clean ABRs. To save time and effort, discrete wavelet transform (DWT) coefficients are employed as features instead of filtering the raw ABR signal. The results show that even with a smaller number of epochs, the wave is still visible and the morphology of the ABR signal is preserved.
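
    The following sketch illustrates the Kalman-filtering idea described above: the clean ABR, represented by its DWT coefficients, is treated as a slowly varying state observed once per stimulus epoch in heavy noise, and the estimate is updated recursively rather than by plain averaging. The wavelet choice, noise variances, and synthetic epochs are assumptions for illustration, not the paper's exact setup.

    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10e-3, 256)                               # 10 ms post-stimulus window
    clean = np.exp(-((t - 6e-3) ** 2) / (2 * (0.4e-3) ** 2))     # stand-in for the wave V bump
    epochs = clean + 2.0 * rng.standard_normal((200, t.size))    # 200 noisy stimulus sweeps

    def dwt_coeffs(signal):
        """Flatten the multi-level DWT of one epoch into a single coefficient vector."""
        return np.concatenate(pywt.wavedec(signal, "db4", level=4))

    # Scalar Kalman filter applied independently to each DWT coefficient.
    q, r = 1e-6, 4.0                 # assumed process and measurement noise variances
    x = dwt_coeffs(epochs[0])        # state estimate (coefficient vector)
    p = np.ones_like(x)              # estimate variance per coefficient
    for epoch in epochs[1:]:
        z = dwt_coeffs(epoch)        # new noisy observation of the coefficients
        p = p + q                    # predict (random-walk state model)
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update
        p = (1 - k) * p

    # Reconstruct the denoised ABR from the filtered coefficients.
    lengths = [len(c) for c in pywt.wavedec(clean, "db4", level=4)]
    parts = np.split(x, np.cumsum(lengths)[:-1])
    denoised = pywt.waverec(parts, "db4")[: t.size]
    print(np.corrcoef(denoised, clean)[0, 1])        # close to 1 if the morphology is preserved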

    A brief review of computation techniques for ECG signal analysis

    Automatic detection of life-threatening cardiac arrhythmias has been a subject of interest for many decades. Automatic ECG signal analysis methods mainly aim at the interpretation of long-term ECG recordings. In practice, experienced cardiologists perform ECG analysis on a strip of ECG graph paper in an event-by-event manner. This manual interpretation becomes more difficult, time-consuming, and tedious when dealing with long-term ECG recordings. In contrast, an automatic computerized ECG analysis system can provide valuable assistance to cardiologists in delivering fast or remote medical advice and diagnosis to the patient. However, achieving accurate automated arrhythmia diagnosis is a challenging task that has to account for all the ECG characteristics and processing steps. Detecting the P wave, QRS complex, and T wave is crucial to perform automatic analysis of ECG signals. Most of the research in this area uses the QRS complex, as it is the easiest waveform to detect in the first stage. The QRS complex represents ventricular depolarization and consists of three consecutive waves. However, the main challenge in any algorithm design is the large variation of the QRS, P, and T waveforms, which can cause any single method to fail. Depending on the ECG lead, the QRS complex may appear as only an R wave, or as QS (no R), QR (no S), RS (no Q), or RSR′ configurations. Variations from the normal electrical patterns can indicate damage to the heart, and such variations may manifest as a heart attack or heart disease. This paper discusses the most recent and relevant methods related to each sub-stage, keeping the reviewed literature within the scope of ECG research.
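
    As an illustration of the first-stage QRS detection discussed above, the sketch below loosely follows the classic Pan–Tompkins recipe (band-pass filtering, differentiation, squaring, moving-window integration, peak-picking). The cut-off frequencies, window width, and refractory period are common textbook choices assumed here, not values taken from the reviewed papers.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def detect_qrs(ecg, fs):
        """Return sample indices of detected QRS complexes in a single-lead ECG."""
        # 1) Band-pass 5-15 Hz to emphasize the QRS and suppress P/T waves and baseline drift.
        b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, ecg)
        # 2) Differentiate and square to accentuate steep, high-energy slopes.
        energy = np.diff(filtered) ** 2
        # 3) Moving-window integration (~150 ms) to merge each QRS into a single lobe.
        win = int(0.150 * fs)
        integrated = np.convolve(energy, np.ones(win) / win, mode="same")
        # 4) Peak-picking with a simple threshold and a 300 ms refractory period.
        threshold = 0.5 * np.mean(integrated) + 0.1 * np.max(integrated)
        peaks, _ = find_peaks(integrated, height=threshold, distance=int(0.300 * fs))
        return peaks

    # Usage on a synthetic ECG-like trace (a real recording would replace this).
    fs = 360
    t = np.arange(10 * fs) / fs
    ecg = sum(np.exp(-((t - r) ** 2) / (2 * 0.01 ** 2)) for r in np.arange(0.5, 10, 0.8))
    print(detect_qrs(ecg, fs))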

    Classification of ECG ventricular beats assisted by Gaussian parameters’ dictionary

    Automatic processing and diagnosis of electrocardiogram (ECG) signals remain a very challenging problem, especially with the growth of advanced monitoring technologies. A particular task in ECG processing that has received tremendous attention is detecting and identifying pathological heartbeats, e.g., those caused by premature ventricular contraction (PVC). This paper builds on existing methods of heartbeat classification and introduces a new approach to detect ventricular beats using a dictionary of Gaussian-based parameters that model ECG signals. The proposed approach relies on new techniques to segment the stream of ECG signals and automatically cluster the beats for each patient. Two benchmark datasets, the QTDB and MIT-BIH Arrhythmia databases, were used to evaluate the classification performance on single-lead short ECG segments. Using the QTDB database, the method achieved average accuracies of 99.3% ± 0.7% and 99.4% ± 0.6% for lead 1 and lead 2, respectively. On the MIT-BIH Arrhythmia dataset, identifying ventricular beats resulted in a sensitivity of 82.8%, a positive predictivity of 62.0%, and an F1 score of 70.9%. For non-ventricular beats, the method achieved a sensitivity of 96.0%, a positive predictivity of 98.6%, and an F1 score of 97.3%. The proposed technique represents an improvement in the field of ventricular beat classification compared with conventional methods.
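
    To illustrate the Gaussian-parameter idea behind the dictionary, the sketch below models a single heartbeat as a sum of Gaussians (one per P/QRS/T deflection) and keeps the fitted amplitude, center, and width of each component as the beat's parameters. The three-Gaussian model, initial guesses, and synthetic beat are assumptions for illustration, not the paper's exact parameterization.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_sum(t, *params):
        """Sum of Gaussians; params = (a1, mu1, sigma1, a2, mu2, sigma2, ...)."""
        y = np.zeros_like(t)
        for a, mu, sigma in zip(params[0::3], params[1::3], params[2::3]):
            y = y + a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))
        return y

    # Synthetic one-beat segment (P, R, and T deflections) as a stand-in for a real beat.
    t = np.linspace(0, 0.8, 400)
    beat = gaussian_sum(t, 0.15, 0.15, 0.020, 1.0, 0.35, 0.015, 0.30, 0.60, 0.050)
    beat = beat + 0.02 * np.random.default_rng(2).standard_normal(t.size)

    # Initial guesses: one Gaussian each for the P, R, and T deflections.
    p0 = [0.1, 0.15, 0.03, 0.8, 0.35, 0.02, 0.2, 0.60, 0.06]
    params, _ = curve_fit(gaussian_sum, t, beat, p0=p0)
    print(np.round(params, 3))   # fitted (amplitude, center, width) triplets per deflection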