The influence of breast cancer-related lymphedema on women's return-to-work
The majority of women who develop breast cancer are under retirement age, so occupational functioning and employment are issues of significant concern. Breast cancer-related lymphedema (BCRL) is one of the major treatment complications for breast cancer patients, and it has been shown to be associated with adverse work outcomes. This study is among the first to ask how and why lymphedema may interact with breast cancer survivors' return to work. The International Classification of Functioning, Disability, and Health (ICF) was adopted to guide the research design and the analysis of health-outcome data, and served as a platform for thinking about the phenomenon of return to work. Case study methodology, following Yin's (2014) definition, was employed in this dissertation study. Thirteen women with BCRL were enrolled; each completed a survey and then participated in a sixty-minute semi-structured individual interview. The results suggested that the return-to-work experience was shaped by interactions among the disease processes, the work activities required, the individual, and an array of environmental factors. Four main themes emerged: 1) BCRL affects physical and emotional functioning associated with work; 2) ongoing treatment for BCRL creates challenges for work; 3) environmental factors affect the work experience; and 4) personal factors play a key role in adjusting to the return to work. The findings showed considerable agreement with the ICF model and suggested new perspectives for understanding it. The study has implications for BCRL education, clinical practice, health policy, and research.
A Causal And-Or Graph Model for Visibility Fluent Reasoning in Tracking Interacting Objects
Tracking humans who are interacting with other subjects or the environment remains an unsolved problem in visual tracking, because the visibility of the humans of interest in videos is unknown and may vary over time. In particular, it is still difficult for state-of-the-art human trackers to recover complete human trajectories in crowded scenes with frequent human interactions. In this work, we consider the visibility status of a subject as a fluent variable, whose changes are mostly attributable to the subject's interactions with the surroundings, e.g., crossing behind another object, entering a building, or getting into a vehicle. We introduce a Causal And-Or Graph (C-AOG) to represent the cause-effect relations between an object's visibility fluent and its activities, and develop a probabilistic graph model to jointly reason about visibility fluent changes (e.g., from visible to invisible) and track humans in videos. We formulate this joint task as an iterative search for a feasible causal graph structure, which enables fast search algorithms such as dynamic programming. We apply the proposed method to challenging video sequences to evaluate its capability to estimate visibility fluent changes and to track subjects of interest over time. Comparative results demonstrate that our method outperforms alternative trackers and can recover complete trajectories of humans in complicated scenarios with frequent human interactions.
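To give a much-simplified illustration of why dynamic programming applies to fluent reasoning (this is not the paper's C-AOG inference; all states, scores and names below are hypothetical), the following sketch recovers a binary visible/occluded fluent sequence from per-frame evidence with a Viterbi-style recursion:

```python
# Toy Viterbi-style dynamic program over a binary visibility fluent.
# Per-frame evidence costs and the switch penalty are made up; the
# actual model jointly reasons over a causal graph structure.
import math

STATES = ("visible", "occluded")

def best_fluent_sequence(evidence, switch_cost=1.0):
    """evidence: list of dicts mapping state -> negative log-likelihood."""
    n = len(evidence)
    cost = [{s: math.inf for s in STATES} for _ in range(n)]
    back = [{s: None for s in STATES} for _ in range(n)]
    for s in STATES:
        cost[0][s] = evidence[0][s]
    for t in range(1, n):
        for s in STATES:
            for prev in STATES:
                c = cost[t - 1][prev] + evidence[t][s] \
                    + (switch_cost if prev != s else 0.0)
                if c < cost[t][s]:
                    cost[t][s], back[t][s] = c, prev
    # Backtrack from the cheapest final state.
    s = min(STATES, key=lambda x: cost[n - 1][x])
    seq = [s]
    for t in range(n - 1, 0, -1):
        s = back[t][s]
        seq.append(s)
    return list(reversed(seq))

# Strong evidence of occlusion in the middle frames.
obs = [{"visible": 0.1, "occluded": 2.0},
       {"visible": 1.8, "occluded": 0.2},
       {"visible": 1.9, "occluded": 0.3},
       {"visible": 0.2, "occluded": 2.1}]
print(best_fluent_sequence(obs))  # ['visible', 'occluded', 'occluded', 'visible']
```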
Effects of Quercetin on Uric Acid Metabolism
Background and Objective: High blood uric acid (hyperuricemia) is common in populations with hypertension, hyperglycemia, obesity and/or dyslipidemia. This study investigated the effects of quercetin supplementation on blood uric acid levels and the biochemical mechanism behind them.
Methods: A pilot trial confirmed the delivery of quercetin from a supplement tablet in healthy males (n = 6). A randomised, double-blind, cross-over, placebo-controlled 4-week dietary intervention trial with the same supplement tablet, delivering 500 mg quercetin per day, was then conducted in selected healthy males (n = 22, with blood uric acid in the higher part of the normal range). Changes in uric acid and glucose were analysed in fasting blood plasma at 0, 2 and 4 weeks. Plasma metabolomics were profiled by 1H-NMR. Where quercetin and its metabolites may act in the uric acid metabolism pathway was investigated in vitro and ex vivo.
Results: At the end of the 4-week trial, plasma uric acid levels were significantly reduced (mean change -26.5 µM, 95% CI -45.5 to -7.6, P = 0.008, n = 22), as were diastolic blood pressures in normotensive subjects (-3.1 mm Hg, 95% CI -5.8 to -0.5, P = 0.048, n = 10). Paired plasma 1H-NMR spectra showed lowered glutamine (P = 0.008), acetoacetate (P = 0.005) and lactate (P = 0.03) after quercetin treatment. Quercetin, quercetin-3'-O-sulfate and 3,4-dihydroxyphenylacetic acid inhibited xanthine oxidase dose-dependently in vitro, and quercetin mildly inhibited plasma adenosine deaminase.
Conclusions: Quercetin supplementation can maintain blood uric acid levels and blood pressure within a low-risk range, probably as a result of purine metabolism being regulated by quercetin, its microbial derivatives and their metabolites.
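The reported paired mean change and its 95% confidence interval come from a standard paired analysis; a minimal sketch of that computation with synthetic numbers (the trial data are not reproduced here) looks like this:

```python
# Illustrative paired analysis of the kind behind the reported -26.5 uM
# change in uric acid. The data below are synthetic, not the trial's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(330, 30, size=22)            # uric acid at week 0 (uM)
week4 = baseline + rng.normal(-26.5, 40, size=22)  # uric acid at week 4 (uM)

diff = week4 - baseline
mean_change = diff.mean()
sem = stats.sem(diff)
ci = stats.t.interval(0.95, df=len(diff) - 1, loc=mean_change, scale=sem)
t, p = stats.ttest_rel(week4, baseline)

print(f"mean change: {mean_change:.1f} uM, "
      f"95% CI ({ci[0]:.1f}, {ci[1]:.1f}), p = {p:.3f}")
```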
Rare-Event Estimation and Calibration for Large-Scale Stochastic Simulation Models
Stochastic simulation has been widely applied in many domains. Recently, however, a surge of sophisticated problems, such as the safety evaluation of intelligent systems, has posed challenges that conventional statistical methods cannot readily handle. Motivated by these challenges, this thesis develops novel methodologies, with theoretical guarantees and numerical applications, that tackle them from different perspectives.
In particular, our work falls into two areas: (1) rare-event estimation (Chapters 2 to 5), where we develop approaches to estimating the probabilities of rare events via simulation; and (2) model calibration (Chapters 6 and 7), where we aim to calibrate the simulation model so that it is close to reality.
In Chapter 2, we study rare-event simulation for a class of problems where the target hitting sets of interest are defined via modern machine learning tools such as neural networks and random forests. We investigate an importance sampling scheme that integrates the dominating point machinery from large deviations theory with sequential mixed integer programming to locate the underlying dominating points. We provide efficiency guarantees and numerical demonstrations of our approach.
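As a toy illustration of the dominating-point idea (not the mixed-integer-programming machinery above; the set here is a linear half-space so the dominating point is available in closed form), the following sketch shifts the Gaussian sampling mean to the dominating point and reweights:

```python
# Dominating-point importance sampling for the Gaussian rare event
# {x : a.x >= b}. Threshold and dimension are made up for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
d, n = 10, 100_000
a = np.ones(d) / np.sqrt(d)   # unit normal of the half-space
b = 4.0                       # rare threshold: P(a.x >= b) ~ 3e-5

# Dominating point: closest point of {a.x >= b} to the origin.
mu_star = b * a               # since ||a|| = 1

# Sample from N(mu_star, I) and reweight by the likelihood ratio
# phi(x) / phi(x - mu_star) = exp(-mu_star.x + ||mu_star||^2 / 2).
x = rng.standard_normal((n, d)) + mu_star
hit = (x @ a >= b)
w = np.exp(-(x @ mu_star) + 0.5 * mu_star @ mu_star)
est = np.mean(hit * w)

print(f"IS estimate: {est:.3e}, exact: {norm.sf(b):.3e}")
```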
In Chapter 3, we propose a new efficiency criterion for importance sampling, which we call probabilistic efficiency. Conventionally, an estimator is regarded as efficient if its relative error is sufficiently controlled. It is widely known that when a rare-event set contains multiple "important regions" encoded by the dominating points, importance sampling needs to mix over all of them to achieve efficiency. We argue that the traditional analysis recipe can suffer from intrinsic looseness when relative error is used as the efficiency criterion, and we propose the new efficiency notion to tighten this gap. In particular, we show that under the standard Gartner-Ellis large deviations regime, an importance sampler that uses only the most significant dominating points suffices to attain this notion.
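For reference, the conventional criteria this chapter starts from can be stated as follows (textbook definitions, not necessarily the thesis's exact formulation):

```latex
% Let Z_n be an unbiased estimator of a rare-event probability p_n -> 0.
\[
  \mathrm{RE}(Z_n) = \frac{\sqrt{\operatorname{Var}(Z_n)}}{p_n}
  \qquad \text{(relative error)}
\]
% Bounded relative error requires \sup_n \mathrm{RE}(Z_n) < \infty,
% while the weaker logarithmic efficiency requires
\[
  \lim_{n \to \infty} \frac{\log \mathbb{E}[Z_n^2]}{2 \log p_n} = 1 .
\]
```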
In Chapter 4, we consider the estimation of rare-event probabilities using sample proportions output by crude Monte Carlo. Due to the recent surge of sophisticated rare-event problems, efficiency-guaranteed variance reduction may face implementation challenges, which motivates one to look at naive estimators. In this chapter we construct confidence intervals for the target probability from this naive estimator using various techniques, and then analyze their validity and tightness, quantified respectively by the coverage probability and the relative half-width.
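As a small illustration of the kind of interval construction involved (the counts below are hypothetical, not the thesis's examples), the following compares the normal (Wald) interval with the exact Clopper-Pearson interval for a tiny sample proportion:

```python
# Confidence intervals for a rare-event probability estimated by crude
# Monte Carlo, with made-up counts.
import numpy as np
from scipy import stats

n, k = 10_000_000, 32          # n Monte Carlo samples, k hits
p_hat = k / n

# Wald interval: p_hat +/- z * sqrt(p_hat (1 - p_hat) / n)
z = stats.norm.ppf(0.975)
half = z * np.sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - half, p_hat + half)

# Clopper-Pearson exact interval from beta quantiles.
lo = stats.beta.ppf(0.025, k, n - k + 1) if k > 0 else 0.0
hi = stats.beta.ppf(0.975, k + 1, n - k)

print(f"p_hat = {p_hat:.2e}")
print(f"Wald:            ({wald[0]:.2e}, {wald[1]:.2e})")
print(f"Clopper-Pearson: ({lo:.2e}, {hi:.2e})")
```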
In Chapter 5, we propose the use of extreme value analysis, in particular the peaks-over-threshold method popularly employed for extremal estimation on real datasets, in the simulation setting. More specifically, we view crude Monte Carlo samples as data to which a generalized Pareto distribution is fitted. We test this idea on several numerical examples. The results show that, in the absence of efficient variance reduction schemes, it appears to offer potential benefits over crude Monte Carlo estimates alone.
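A minimal sketch of the peaks-over-threshold recipe on simulated output, using synthetic Gaussian samples so the extrapolated tail can be checked against the exact one (the threshold choice and data are illustrative, not the thesis's experiments):

```python
# Fit a generalized Pareto distribution to exceedances over a high
# threshold u, then extrapolate P(X > x) for x beyond most of the data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
samples = rng.standard_normal(100_000)   # stand-in for simulation output

u = np.quantile(samples, 0.99)           # high threshold
exceed = samples[samples > u] - u
xi, _, sigma = stats.genpareto.fit(exceed, floc=0.0)

def tail_prob(x):
    """POT estimate of P(X > x) for x > u."""
    p_u = (samples > u).mean()
    return p_u * stats.genpareto.sf(x - u, xi, loc=0.0, scale=sigma)

x = 4.5
print(f"POT estimate of P(X > {x}): {tail_prob(x):.2e}")
print(f"exact Gaussian tail:        {stats.norm.sf(x):.2e}")
```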
In Chapter 6, we investigate a framework for developing calibration schemes in parametric settings that satisfy rigorous frequentist statistical guarantees, via a basic notion that we call the eligibility set, designed to bypass non-identifiability through set-based estimation. We investigate a feature-extraction-then-aggregation approach to construct these sets for multivariate outputs. We demonstrate the methodology on several numerical examples, including the calibration of a limit order book market simulator.
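A much-simplified sketch of set-based calibration, with a hypothetical one-parameter simulator and an ad hoc compatibility test standing in for the chapter's feature-based construction and its guarantees:

```python
# Keep every parameter value whose simulated output is statistically
# compatible with the observed data, instead of a single point estimate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
observed = rng.normal(1.5, 1.0, size=500)   # stand-in for real data

def simulate(theta, n=500):
    """Hypothetical parametric simulator: N(theta, 1) output."""
    return rng.normal(theta, 1.0, size=n)

# A theta is "eligible" if a two-sample KS test does not reject
# equality of distributions at the 5% level.
grid = np.linspace(0.0, 3.0, 61)
eligible = [th for th in grid
            if stats.ks_2samp(simulate(th), observed).pvalue > 0.05]

print(f"eligibility set ~ [{min(eligible):.2f}, {max(eligible):.2f}]")
```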
In Chapter 7, we study a methodology to tackle the NASA Langley Uncertainty Quantification Challenge, a model calibration problem under both aleatory and epistemic uncertainties. Our methodology is based on an integration of distributionally robust optimization and importance sampling. The main computational machinery in this integrated methodology amounts to solving sampled linear programs. We present theoretical statistical guarantees of our approach via connections to nonparametric hypothesis testing, and numerical performance on parameter calibration and downstream decision and risk evaluation tasks.
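As a toy illustration of the sampled-LP flavor of this computation (the box constraints below are a hypothetical stand-in for the challenge's actual uncertainty model), one can maximize an expected loss over probability weights on sampled support points:

```python
# Worst-case expectation over probability weights p on sampled support
# points, subject to linear constraints around a nominal pmf q.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
x = rng.standard_normal(200)          # sampled support points
f = (x > 1.0).astype(float)           # loss: indicator of a bad event
q = np.full(len(x), 1.0 / len(x))     # nominal (empirical) weights
eps = 0.5 / len(x)

# max f.p  s.t.  sum p = 1,  q - eps <= p <= q + eps,  p >= 0
res = linprog(c=-f,
              A_eq=np.ones((1, len(x))), b_eq=[1.0],
              bounds=[(max(0.0, qi - eps), qi + eps) for qi in q])
print(f"nominal P(bad) = {f @ q:.3f}, worst case = {-res.fun:.3f}")
```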