Identification of risk factors for hypertension in overweight and obese people and analysis of risk factor interactions: an R-based analysis
Objective: This study identified the independent risk factors for hypertension in overweight and obese people and analyzed the interactions between those risk factors.
Methods: A total of 5,098 overweight and obese people were enrolled in this study. First, the clinical metabolic characteristics of the hypertension and control groups were compared. Logistic regression (LR) and a classification and regression trees (CRT)-based decision tree (DT) model were used to screen the independent risk factors for hypertension in overweight and obese people. Multiplicative- and additive-scale analyses were used to examine pairs of interacting risk factors from both the statistical and the biological-interaction perspectives. Finally, receiver operating characteristic (ROC) and calibration curves were used to evaluate the accuracy and discrimination ability of the LR and DT models.
Results: Age, UA, FPG, SBP, Cr, AST, and TG were higher in the hypertension group than in the control group (P < 0.05). The LR results revealed that NAFLD, FPG, age, TG, LDL-c, UA, and Cr were positively correlated with hypertension in overweight and obese people, while GFR was negatively correlated (P < 0.05). The DT model suggested that age, FPG, and UA interacted with each other. Single- and multiple-factor multiplicative analyses for FPG + UA, age + UA, and age + FPG revealed a positive multiplicative interaction (P < 0.05, B ≠ 0, OR > 1). Single- and multiple-factor additive analyses for age + UA indicated a positive additive interaction. ROC and calibration curve analyses indicated that the CRT decision tree and the FPG + UA, age + UA, and age + FPG models have a certain degree of accuracy and discrimination ability.
Conclusion: The independent risk factors for hypertension in overweight and obese people included NAFLD, FPG, age, TG, LDL-c, UA, and Cr. Among these, age + UA exhibited a synergistic interaction, thereby providing a reference for the prevention and control of hypertension in overweight and obese people.
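The two interaction scales used above follow standard epidemiological definitions: on the multiplicative scale, synergy means the joint odds ratio exceeds the product of the single-exposure odds ratios, while on the additive scale the relative excess risk due to interaction (RERI = OR11 − OR10 − OR01 + 1) is positive. The following sketch computes both measures from hypothetical odds ratios (the numeric values are illustrative, not the study's estimates):

```python
import math

def interaction_measures(or11, or10, or01):
    """Given the odds ratio for joint exposure (or11) and for each single
    exposure (or10, or01), all relative to the doubly-unexposed group:
    - multiplicative interaction = or11 / (or10 * or01)  (> 1 => synergy)
    - RERI (additive interaction) = or11 - or10 - or01 + 1  (> 0 => synergy)
    """
    mult = or11 / (or10 * or01)
    reri = or11 - or10 - or01 + 1
    return mult, reri

# Hypothetical odds ratios for an exposure pair such as age + UA
mult, reri = interaction_measures(or11=4.2, or10=1.8, or01=1.6)
```

With these illustrative inputs both measures indicate synergy, mirroring the kind of positive interaction reported for age + UA.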
DCQA: Document-Level Chart Question Answering towards Complex Reasoning and Common-Sense Understanding
Visually-situated languages such as charts and plots are omnipresent in
real-world documents. These graphical depictions are human-readable and are
often analyzed in visually-rich documents to address a variety of questions
that necessitate complex reasoning and common-sense responses. Despite the
growing number of datasets that aim to answer questions over charts, most only
address this task in isolation, without considering the broader context of
document-level question answering. Moreover, such datasets lack adequate
common-sense reasoning information in their questions. In this work, we
introduce a novel task named document-level chart question answering (DCQA).
The goal of this task is to conduct document-level question answering,
extracting charts or plots in the document via document layout analysis (DLA)
first and subsequently performing chart question answering (CQA). The newly
developed benchmark dataset comprises 50,010 synthetic documents integrating
charts in a wide range of styles (6 styles in contrast to 3 for PlotQA and
ChartQA) and includes 699,051 questions that demand a high degree of reasoning
ability and common-sense understanding. In addition, we present a powerful
question-answer generation engine that employs table data, a rich color set,
and basic question templates to automatically produce a vast array of
reasoning question-answer pairs. Based on DCQA, we devise an OCR-free
transformer for document-level chart-oriented understanding, capable of DLA and
answering complex reasoning and common-sense questions over charts in an
OCR-free manner. Our DCQA dataset is expected to foster research on
understanding visualizations in documents, especially for scenarios that
require complex reasoning for charts in the visually-rich document. We
implement and evaluate a set of baselines, and our proposed method achieves
comparable results.
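The template-driven engine described above pairs table data with question templates to mass-produce QA pairs. A minimal sketch of that idea, with an entirely hypothetical table and illustrative templates (not the paper's actual engine or templates), looks like this:

```python
# Hypothetical chart table: series name -> {category: value}
table = {"2019": {"apples": 30, "oranges": 45},
         "2020": {"apples": 50, "oranges": 40}}

def generate_qa(table):
    """Fill basic question templates from table data, producing both
    lookup questions and simple reasoning (comparison) questions."""
    qa_pairs = []
    for series, row in table.items():
        for cat, val in row.items():
            qa_pairs.append((f"What is the value of {cat} in {series}?", val))
        # A template requiring a max over the series
        largest = max(row, key=row.get)
        qa_pairs.append((f"Which category is largest in {series}?", largest))
    # A reasoning template comparing the first two series per category
    (s1, r1), (s2, r2) = list(table.items())[:2]
    for cat in r1:
        qa_pairs.append((f"By how much did {cat} change from {s1} to {s2}?",
                         r2[cat] - r1[cat]))
    return qa_pairs

pairs = generate_qa(table)
```

Scaling this pattern over many tables, styles, and richer templates is what makes generating hundreds of thousands of questions tractable.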
vBalance: Using Interrupt Load Balance to Improve I/O Performance for SMP Virtual Machines
A Symmetric MultiProcessing (SMP) virtual machine (VM) enables users to take advantage of a multiprocessor infrastructure to support scalable job throughput and request responsiveness. It is known that hypervisor scheduling activities can heavily degrade a VM’s I/O performance, as the scheduling latencies of the virtual CPUs (vCPUs) eventually translate into processing delays for the VM’s I/O events. A UniProcessor (UP) VM, whose interrupts are all bound to its only vCPU, relies entirely on the hypervisor’s help to shorten I/O processing delays, making the hypervisor increasingly complicated. For SMP-VMs, most research efforts ignore the fact that the problem can be greatly mitigated at the level of the guest OS, rather than by imposing all the scheduling pressure on the hypervisor. In this paper, we present vBalance, a cross-layer software solution that substantially improves I/O performance for SMP-VMs. To keep the hypervisor scheduler simple and efficient, vBalance requires only very limited help from the hypervisor layer. In the guest OS, vBalance dynamically and adaptively migrates interrupts from a preempted vCPU to a running one, thereby avoiding interrupt processing delays. The prototype of vBalance is implemented in the Xen 4.1.2 hypervisor, with Linux 3.2.2 as the guest. Evaluation results from both micro-level and application-level benchmarks demonstrate the effectiveness and light weight of our solution.
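The core guest-side idea, migrating an interrupt away from a preempted vCPU to one the hypervisor is currently running, can be sketched at a high level as follows. This is an illustrative simulation of the policy only (the `VCPU` class and `rebalance_irq` helper are invented for exposition), not vBalance's in-kernel implementation:

```python
from dataclasses import dataclass, field

@dataclass
class VCPU:
    vid: int
    running: bool                      # is the hypervisor scheduling it now?
    pending_irqs: list = field(default_factory=list)

def rebalance_irq(irq, target, vcpus):
    """Deliver `irq` to `target` if it is running; otherwise migrate it to
    any currently running vCPU, so interrupt processing is not stalled
    until the preempted vCPU is rescheduled."""
    if not target.running:
        runnable = [v for v in vcpus if v.running]
        if runnable:
            target = runnable[0]
    target.pending_irqs.append(irq)
    return target.vid

# vCPU 0 is preempted, vCPU 1 is running: the IRQ is rerouted to vCPU 1
vcpus = [VCPU(0, running=False), VCPU(1, running=True)]
chosen = rebalance_irq("net-rx", vcpus[0], vcpus)
```

In a real guest this corresponds to rewriting interrupt affinity on the fly; the sketch captures only the routing decision.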
Delay-aware network I/O virtualization for cloud computing
Cloud datacenters are largely adopting virtual machines (VMs) to provide elastic computing services. As many cloud applications are communication-intensive, such as distributed data processing and web services, satisfactory network performance is critical to guaranteeing service quality. In virtualized environments, a major problem is that the end hosts are separated by a virtualization layer and subject to the intervention of the hypervisor’s scheduling. Since the scheduling delays due to CPU sharing commonly range from tens of milliseconds to over one hundred milliseconds, network performance can be seriously degraded when these delays are added to packet processing. To tackle this problem, prior works focus dominantly on modifying the hypervisor scheduler to hide the virtualization reality by reducing the delays as much as possible. However, this type of approach brings many other problems, such as increased VM context switches and more complicated CPU resource allocation. This thesis looks at the problem from a different but simpler angle: we let the guest operating system (OS) accept the reality that each virtual CPU (vCPU) can be suspended and resumed at any time, and then consider how to refactor the network I/O subsystem to automatically tolerate such scheduling delays.
In general, network I/O processing in the kernel involves two layers: the protocol layer and the interrupt layer. Our study shows that both layers are very sensitive to VM scheduling delays and therefore must be redesigned accordingly. First, in the protocol layer, we propose a paravirtualized approach to help TCP counteract the distortion of congestion control caused by the hypervisor scheduler. Second, in the interrupt layer, for SMP-VMs that have multiple vCPUs, we propose a method to dynamically migrate interrupts from a preempted vCPU to a running one whenever possible, so that the delays are not propagated to the protocol layer. Experiments with our prototypes in Xen/Linux show that our approaches can significantly improve network throughput and responsiveness.
Doctoral thesis (Doctor of Philosophy), Computer Science; published or final version.
Network performance isolation for virtual machines
Cloud computing is a new computing paradigm that aims to transform computing
services into a utility, much like electricity provided in a “pay-as-you-go”
manner. Data centers are increasingly adopting virtualization technology for the
purpose of server consolidation, flexible resource management and better fault
tolerance. Virtualization-based cloud services host networked applications in virtual
machines (VMs), with each VM provided the desired amount of resources
using resource isolation mechanisms.
Effective network performance isolation is fundamental to data centers, as it
offers applications the significant benefit of performance predictability. This
research is application-driven: we study how network performance isolation can
be achieved for latency-sensitive cloud applications. For media streaming
applications, network performance isolation means both predictable network
bandwidth and low-jitter network latency. Current resource sharing methods for
VMs focus mainly on proportional resource shares, while ignoring the fact that
I/O latency in VM-hosted platforms is mostly related to the resource
provisioning rate. Resource isolation with only a quantitative promise does not
sufficiently guarantee performance isolation. Even if a VM is allocated
adequate resources such as CPU time and network bandwidth, problems such as
network jitter (variation in packet delays) can still occur if the resources
are provisioned at inappropriate moments. So, to achieve performance isolation,
the question is not only how much of each resource a VM gets but, more
importantly, whether those resources are provisioned in a timely manner.
Guaranteeing both requirements in resource allocation is challenging.
This thesis systematically analyzes the causes of unpredictable network latency
in VM-hosted platforms, with both technical discussion and experimental
illustration. We identify that the varied network latency is jointly caused by
VMM CPU scheduler and network traffic shaper, and then address the problem
in these two parts. In our solutions, we consider the design goals of resource
provisioning rate and resource proportionality as two orthogonal dimensions. In
the hypervisor, a proportional share CPU scheduler with soft real-time support
is proposed to guarantee predictable scheduling delays; in the network traffic
shaper, we introduce the concept of a smooth window to smooth packet delays and
apply closed-loop feedback control to maintain network bandwidth consumption.
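The closed-loop feedback control mentioned above can be illustrated with a simple proportional controller: each step, the shaper nudges its sending rate toward the target based on the measured bandwidth consumption. This is a generic control-loop sketch under idealized assumptions (the function name and gain are invented for illustration), not the thesis's actual shaper:

```python
def feedback_shaper(target_bps, measured_bps, rate_bps, gain=0.5):
    """One step of proportional feedback: adjust the shaper's configured
    rate toward the target bandwidth based on measured consumption."""
    error = target_bps - measured_bps
    return max(0.0, rate_bps + gain * error)

# With an idealized plant (consumption tracks the configured rate),
# the loop converges geometrically toward the 10 Mb/s target.
rate = 5e6
for _ in range(50):
    measured = rate
    rate = feedback_shaper(10e6, measured, rate)
```

In practice the measurement is noisy and delayed, which is why the smooth window is paired with the controller to bound packet-delay jitter.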
The solutions are implemented in Xen 4.1.0 and Linux 2.6.32.13, which were
both the latest versions when this research was conducted. Extensive
experiments have been carried out using both real-life applications and
low-level benchmarks. Testing results show that the proposed solutions can
effectively guarantee network performance isolation by achieving both the
predefined network bandwidth and low-jitter network latency.
Master's thesis (Master of Philosophy), Computer Science; published or final version.
Social-optimized Win-win Resource Allocation for Self-organizing Cloud
Abstract—With the increasing scale of applications and number of users, we design a Self-organizing Cloud (SoC) that aims to make use of distributed volunteer computers or dedicated machines to provide powerful computing ability. These resources are provisioned elastically according to each user’s specific demand by leveraging virtual machine (VM) resource isolation technology. Based on this framework, we propose a socially optimized, auction-based resource allocation scheme that mainly tackles two issues: (1) how to make full use of the widely dispersed, multi-attribute idle resources to construct a win-win situation, such that each task schedule leaves both sides (resource providers and consumers) satisfied with their final payoffs; and (2) how to optimize the total resource utility welfare to guarantee the overall performance of the global system. The key challenge in achieving the win-win effect with social optimization is its provable NP-completeness. Finally, through simulation, we validate that our approach can effectively improve resource contributors’ payoffs by up to about five times the level achieved without our method, while also maintaining a high processing rate for task scheduling.
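The "win-win" condition in an auction-based allocation means every matched trade gives both the provider and the consumer a non-negative payoff. A greedy double-auction sketch makes the idea concrete: pair the highest consumer bid with the lowest provider ask while the bid covers the ask, and settle at the midpoint so both sides gain. This is a textbook-style illustration, not the paper's NP-complete socially optimized formulation:

```python
def match_win_win(bids, asks):
    """Greedy double auction: match highest bids with lowest asks while
    bid >= ask; the midpoint price gives both parties a surplus."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    trades = []
    for bid, ask in zip(bids, asks):
        if bid >= ask:
            price = (bid + ask) / 2
            trades.append((bid, ask, price))
    return trades

# Three consumers and three providers; only two pairs can trade win-win
trades = match_win_win(bids=[10, 7, 3], asks=[4, 6, 9])
```

Maximizing total welfare over multi-attribute resources, rather than matching greedily on a single price, is what makes the full problem hard.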
PVTCP: Towards Practical and Effective Congestion Control in Virtualized Datacenters
Abstract—While modern datacenters are increasingly adopting virtual machines (VMs) to provide elastic cloud services, they still rely on traditional TCP for congestion control. In virtualized datacenters, TCP endpoints are separated by a virtualization layer and subject to the intervention of the hypervisor’s scheduling. Most previous attempts have focused on tuning the hypervisor layer to improve the VMs’ I/O performance, and there is very little work on how a VM’s guest OS may help the transport layer adapt to the virtualized environment. In this paper, we find that VM scheduling delays can heavily contaminate RTTs as sensed by VM senders, preventing TCP from correctly learning the physical network condition. After giving an account of the source of the problem, we propose PVTCP, a ParaVirtualized TCP that counters the distorted congestion information caused by VM scheduling on the sender side. PVTCP is self-contained, requiring no modification to the hypervisor. Experiments show that PVTCP is much more effective than standard TCP in addressing incast congestion in virtualized datacenters.
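The sender-side problem described above is that RTT samples inflated by multi-millisecond vCPU scheduling gaps mislead congestion control. A minimal sketch of the filtering idea is to discard samples far above the physical-network scale before they feed the RTT estimator. The fixed threshold here is an illustrative assumption for a datacenter-scale network; it is not PVTCP's actual detection mechanism:

```python
def filter_rtt(samples_ms, sched_delay_threshold_ms=10.0):
    """Drop RTT samples that were plausibly contaminated by a VM
    scheduling gap (tens of ms), keeping only network-scale RTTs
    for the congestion controller to learn from."""
    return [s for s in samples_ms if s < sched_delay_threshold_ms]

# Sub-millisecond datacenter RTTs interleaved with two contaminated samples
clean = filter_rtt([0.4, 0.6, 30.2, 0.5, 110.7])
```

Keeping the contaminated samples would inflate the RTT estimate and retransmission timeout, which is precisely what worsens incast behavior.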