
    Correlation between the Severity of Obstructive Sleep Apnea and Heart Rate Variability Indices

    The risk of cardiovascular disease is known to be increased in obstructive sleep apnea syndrome (OSAS). The mechanism can be explained by increased sympathetic tone caused by repetitive apneas accompanied by hypoxia and arousals during sleep. Heart rate variability (HRV), which reflects cardiac autonomic function, is mediated by respiratory sinus arrhythmia, baroreflex-related fluctuation, and thermoregulation-related fluctuation. We evaluated the overnight heart rate variability of OSAS patients to assess its relationship with symptom severity. We studied overnight polysomnographies of 59 untreated male OSAS patients with moderate to severe symptoms (mean age 45.4±11.7 yr, apnea-hypopnea index [AHI]=43.2±23.4 events per hour, AHI >15). Moderate (mean age 47.1±9.4 yr, AHI=15-30, n=22) and severe (mean age 44.5±12.9 yr, AHI >30, n=37) OSAS patients were compared on indices derived from time- and frequency-domain analysis of HRV, AHI, oxygen desaturation event index (ODI), arousal index (ArI), and sleep parameters. The severe OSAS group showed higher mean powers of total frequency (TF) (p=0.012), very low frequency (VLF) (p=0.038), and low frequency (LF) (p=0.002) than the moderate OSAS group. The LF/HF ratio (p=0.005) was also higher in the severe group than in the moderate group. In the time-domain analysis, the HRV triangular index (p=0.026) of the severe OSAS group was significantly higher. Of all the HRV indices, AHI correlated best with the LF/HF ratio (rp=0.610, p<0.001). The frequency-domain indices tended to reveal the difference between the groups better than the time-domain indices did, and the LF/HF ratio in particular was thought to be the most useful parameter for estimating the degree of AHI in OSAS patients.
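
    As a rough, self-contained sketch of how the frequency-domain indices discussed above (VLF, LF, HF, and the LF/HF ratio) can be computed from an RR-interval series, the Python snippet below resamples the RR series and applies Welch's method; the band limits, resampling rate, and function names are standard assumptions rather than details taken from the study.

    import numpy as np
    from scipy.interpolate import interp1d
    from scipy.signal import welch

    def frequency_domain_hrv(rr_ms, fs=4.0):
        """Estimate VLF/LF/HF band powers and the LF/HF ratio from RR intervals (ms).
        fs is the resampling rate in Hz (an assumed, commonly used value)."""
        t = np.cumsum(rr_ms) / 1000.0            # beat times in seconds
        t = t - t[0]
        grid = np.arange(0.0, t[-1], 1.0 / fs)   # uniform time grid for resampling
        rr_even = interp1d(t, rr_ms, kind="cubic")(grid)
        rr_even = rr_even - rr_even.mean()       # remove the DC component
        f, pxx = welch(rr_even, fs=fs, nperseg=min(256, len(rr_even)))

        def band_power(lo, hi):                  # integrate the PSD over one band
            mask = (f >= lo) & (f < hi)
            return pxx[mask].sum() * (f[1] - f[0])

        vlf = band_power(0.003, 0.04)            # very low frequency
        lf = band_power(0.04, 0.15)              # low frequency
        hf = band_power(0.15, 0.40)              # high frequency
        return {"VLF": vlf, "LF": lf, "HF": hf, "LF/HF": lf / hf,
                "TF": vlf + lf + hf}             # total power across the bands

    # Example with a synthetic RR series (milliseconds)
    rng = np.random.default_rng(0)
    rr = 800 + 50 * np.sin(np.linspace(0, 60, 600)) + 20 * rng.standard_normal(600)
    print(frequency_domain_hrv(rr))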

    I/O Strength-Aware Credit Scheduler for Virtualized Environments

    With the evolution of cloud technology, the number of user applications is increasing, and computational workloads are becoming increasingly diverse and unpredictable. However, cloud data centers still exhibit low I/O performance because their scheduling policies are based on the degree of physical CPU (pCPU) occupancy. Notably, existing scheduling policies cannot guarantee good I/O performance because of the uncertainty of when I/O occurs and the lack of fine-grained workload classification. To overcome these limitations, we propose ISACS, an I/O strength-aware credit scheduler for virtualized environments. Built on the Credit2 scheduler, ISACS provides a fine-grained, workload-aware scheduling technique to mitigate I/O performance degradation in virtualized environments. ISACS uses the event-channel mechanism of the virtualization architecture to widen the scope of available scheduling information and measures the I/O strength of each virtual CPU (vCPU) in the run queue. It then allocates two types of virtual credits to all vCPUs in the run queue to increase I/O performance while preventing CPU performance degradation. Finally, through I/O load balancing, ISACS prevents I/O-intensive vCPUs from becoming concentrated on specific cores. Our experiments show that, compared with existing virtualization environments, ISACS provides higher I/O performance with a negligible impact on CPU performance.
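
    The abstract does not expose the scheduler's internals, so the following is only a minimal illustrative sketch, in Python rather than hypervisor C, of the general idea of weighting each vCPU's credit by an I/O-strength estimate derived from event-channel activity; the class, field, and constant names (VCpu, io_events, BASE_CREDIT, IO_BONUS_POOL) are hypothetical and are not taken from ISACS or the Xen Credit2 code.

    from dataclasses import dataclass

    BASE_CREDIT = 1000     # credit every vCPU receives per accounting period (assumed)
    IO_BONUS_POOL = 400    # extra virtual credit shared among I/O-intensive vCPUs (assumed)

    @dataclass
    class VCpu:
        name: str
        io_events: int     # event-channel notifications observed in this period
        credit: int = 0

    def allocate_credits(run_queue):
        """Give every vCPU a base credit, then split a bonus pool in proportion
        to each vCPU's measured I/O strength."""
        total_io = sum(v.io_events for v in run_queue) or 1
        for v in run_queue:
            io_strength = v.io_events / total_io
            v.credit = BASE_CREDIT + round(IO_BONUS_POOL * io_strength)
        # vCPUs with more credit are picked earlier, shortening I/O response latency
        run_queue.sort(key=lambda v: v.credit, reverse=True)
        return run_queue

    # Illustrative run queue: one I/O-heavy vCPU and one CPU-bound vCPU
    for v in allocate_credits([VCpu("web-vm", io_events=820), VCpu("batch-vm", io_events=15)]):
        print(v.name, v.credit)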

    Fine-Grained I/O Traffic Control Middleware for I/O Fairness in Virtualized System

    The development of IT technology in the 21st century has created a new paradigm of real-time, data-intensive user services, such as connected cars, smart factories, and remote health care. The considerable computational resources required by these services are making the cloud increasingly important. On a cloud server, user services are forced to share physical resources and thus compete with one another, introducing various types of unpredictable workloads. The core technology of the cloud is the virtualized system, which isolates and shares the powerful physical resources of the server in the form of virtual machines (VMs) to increase resource efficiency. However, the scheduling policy for a virtual CPU (vCPU), the logical CPU of a VM, generally schedules the vCPU based on the degree of physical CPU (pCPU) occupancy without regard to I/O strength, which leads to unfair I/O performance among VMs. User services running on VMs are unaware of the contention architecture under which I/O devices are shared, and current virtualized systems simply adopt a Linux-based I/O processing path optimized for contention-free architectures. As a result, the unfair use of I/O devices among user services is largely unaddressed in current virtualized systems. To overcome this problem, this study presents I-Balancer, which provides fair I/O performance among I/O-intensive user services by applying an asynchronous inter-communication control technique to virtualized systems with high VM density. The main design goal of I-Balancer is to raise the hypervisor's awareness of the contention architecture. I-Balancer derives a fine-grained workload classification and the I/O strength of each vCPU from the scheduler and event-channel areas. To strengthen I/O fairness, an I/O traffic control mechanism then regulates inter-domain communication traffic according to the I/O strength of each VM. Experiments on the fairness of I/O (disk and network) performance were performed on virtualized systems running the Xen 4.12 hypervisor, using various performance metrics. The results showed that a virtualized system with I-Balancer reduces the standard deviation of network and disk I/O performance among VMs by up to 71% and 61%, respectively, compared to the existing virtualized system, while performance interference and overhead remain negligible.
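
    I-Balancer's actual traffic control mechanism is not described in the abstract, so the sketch below stands in a generic max-min fair-share computation of per-VM I/O bandwidth as an illustration of fairness-oriented I/O control, and prints the standard deviation of the allocations, echoing (but not reproducing) the fairness metric reported above, which is computed over measured throughput; all names and numbers are illustrative.

    import statistics

    def fair_quotas(demands, total_bandwidth):
        """Max-min fair split of total_bandwidth (e.g. MB/s) among VMs, capping each
        VM at its own demand and redistributing any unused share."""
        quotas = {vm: 0.0 for vm in demands}
        active = dict(demands)
        remaining = total_bandwidth
        while active and remaining > 1e-9:
            share = remaining / len(active)
            satisfied = [vm for vm, demand in active.items() if demand <= share]
            if not satisfied:                      # nobody is capped: split equally
                for vm in active:
                    quotas[vm] += share
                remaining = 0.0
            else:                                  # cap satisfied VMs at their demand
                for vm in satisfied:
                    quotas[vm] = active.pop(vm)
                    remaining -= quotas[vm]
        return quotas

    demands = {"vm1": 900.0, "vm2": 300.0, "vm3": 120.0}   # per-VM I/O demand, illustrative
    quotas = fair_quotas(demands, total_bandwidth=600.0)
    print(quotas)
    print("std dev of allocations:", statistics.pstdev(quotas.values()))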

    Streamflow and Sediment Yield Prediction for Watershed Prioritization in the Upper Blue Nile River Basin, Ethiopia

    Inappropriate use of land and poor ecosystem management have accelerated land degradation and reduced the storage capacity of reservoirs. To mitigate the effect of the increased sediment yield, it is important to identify erosion-prone areas in a 287 km² catchment in Ethiopia. The objectives of this study were to: (1) assess the spatial variability of sediment yield; (2) quantify the amount of sediment delivered into the reservoir; and (3) prioritize sub-catchments for watershed management using the Soil and Water Assessment Tool (SWAT). The SWAT model was calibrated and validated using the SUFI-2, GLUE, ParaSol, and PSO optimization algorithms in SWAT-CUP. For most of the SWAT-CUP simulations, the observed and simulated river discharge were not significantly different at the 95% confidence level (95PPU), and sources of uncertainty were captured by bracketing more than 70% of the observed data. This catchment prioritization study indicated that more than 85% of the sediment was sourced from lowland areas (slope range: 0–8%), and the variation in sediment yield was more sensitive to the land use and soil type prevailing in the area, regardless of the terrain slope. Contrary to the perception of the upland as an important source of sediment, the lowland was in fact the most important source and should be the focus of improved land management practices to reduce sediment delivery into storage reservoirs. The research also showed that lowland erosion-prone areas are typified by extensive agriculture, which causes significant modification of the landscape. Tillage changes the infiltration and runoff characteristics of the land surface and the interaction of the shallow groundwater table with saturation-excess runoff, which in turn affects the delivery of water and sediment to the reservoir and catchment evapotranspiration.
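
    To make the uncertainty statistics concrete, the sketch below computes the SUFI-2-style p-factor (the fraction of observations bracketed by the 95PPU band, reported above as more than 70%) together with the commonly paired r-factor from an ensemble of behavioural simulations; the synthetic data and variable names are illustrative only.

    import numpy as np

    def ppu_statistics(observed, ensemble):
        """observed: 1-D array of observed discharge;
        ensemble: 2-D array (n_simulations x n_timesteps) of behavioural simulations."""
        lower = np.percentile(ensemble, 2.5, axis=0)        # lower edge of the 95PPU band
        upper = np.percentile(ensemble, 97.5, axis=0)       # upper edge of the 95PPU band
        inside = (observed >= lower) & (observed <= upper)
        p_factor = inside.mean()                            # share of observations bracketed
        r_factor = (upper - lower).mean() / observed.std()  # relative width of the band
        return p_factor, r_factor

    # Synthetic example: one year of daily discharge and 200 behavioural runs
    rng = np.random.default_rng(0)
    obs = 10 + 3 * rng.standard_normal(365)
    sims = obs + 2 * rng.standard_normal((200, 365))
    p, r = ppu_statistics(obs, sims)
    print(f"p-factor = {p:.2f} (the study reports > 0.70), r-factor = {r:.2f}")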

    Machine-Learning-Based Elderly Stroke Monitoring System Using Electroencephalography Vital Signals

    Stroke is the third leading cause of death worldwide after cancer and heart disease, and the number of strokes due to aging is expected to at least triple by 2030. As the top three causes of death worldwide are all related to chronic disease, the importance of healthcare continues to grow, and models that can predict real-time health conditions and diseases through various healthcare services are attracting increasing attention. Most methods for diagnosing and predicting stroke in the elderly rely on imaging techniques such as magnetic resonance imaging (MRI), whose long testing times and high costs make rapid and accurate diagnosis and prediction difficult. In this paper, we therefore design and implement a health monitoring system that can predict the precursors of stroke in the elderly in real time during daily walking. First, raw electroencephalography (EEG) data from six channels were preprocessed via the Fast Fourier Transform (FFT). Band power values were then extracted from the spectra: alpha (α), beta (β), gamma (γ), delta (δ), and theta (θ), as well as low β, high β, and the θ-to-β ratio. The experiments in this paper confirm that important features of the EEG signal recorded during walking can, on their own, determine stroke precursors and occurrence in the elderly with more than 90% accuracy. Furthermore, the Random Forest algorithm with quartile and Z-score normalization validates the clinical significance and performance of the proposed system with a stroke prediction accuracy of 92.51%. The proposed system can be implemented at low cost and applied to early disease detection and prediction based on the precursor symptoms of real-time stroke. It is also expected to be able to detect other diseases, such as cancer and heart disease, in the future.
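
    As a hedged sketch of the pipeline described above (FFT-based band powers from six-channel EEG, Z-score normalization, and a Random Forest classifier), the Python snippet below uses synthetic data; the band limits, window length, sampling rate, and the exact definition of the θ-to-β ratio are assumptions, and the quartile-based step mentioned above is omitted.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.preprocessing import StandardScaler

    # Assumed band limits in Hz; the study's exact limits are not given in the abstract
    BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
             "low_beta": (13, 20), "high_beta": (20, 30), "gamma": (30, 45)}

    def band_powers(window, fs=250):
        """window: (n_channels, n_samples) EEG segment; returns one feature vector."""
        freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / fs)
        psd = np.abs(np.fft.rfft(window, axis=1)) ** 2
        feats = []
        for ch in psd:                                        # one channel at a time
            p = {name: ch[(freqs >= lo) & (freqs < hi)].sum()
                 for name, (lo, hi) in BANDS.items()}
            feats.extend(p.values())
            # Assumed definition of the theta-to-beta ratio (theta over combined beta)
            feats.append(p["theta"] / (p["low_beta"] + p["high_beta"]))
        return np.array(feats)

    # Illustrative training on synthetic 2-second windows (6 channels at 250 Hz)
    rng = np.random.default_rng(1)
    X = np.array([band_powers(rng.standard_normal((6, 500))) for _ in range(200)])
    y = rng.integers(0, 2, size=200)                 # 0 = normal, 1 = stroke precursor
    X = StandardScaler().fit_transform(X)            # Z-score normalization
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))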