
    Evaluating health risks from occupational exposure to pesticides and the regulatory response.

    In this study, we used measurements of occupational exposure to pesticides in agriculture to evaluate health risks and analyzed how the federal regulatory program is addressing these risks. Dose estimates developed by the State of California from measured occupational exposures to 41 pesticides were compared with standard indices of acute toxicity (LD50) and chronic effects (reference dose). Lifetime cancer risks were estimated using cancer potencies. Estimated absorbed daily doses for mixers, loaders, and applicators of pesticides ranged from less than 0.0001% to 48% of the estimated human LD50 values, and doses for 10 of 40 pesticides exceeded 1% of those values. Estimated lifetime absorbed daily doses ranged from 0.1% to 114,000% of the reference doses developed by the U.S. Environmental Protection Agency, and doses for 13 of 25 pesticides exceeded their reference doses. Lifetime cancer risks ranged from 1 per million to 1700 per million, and estimates for 12 of 13 pesticides were above 1 per million. Similar results were obtained for field workers and flaggers. For the pesticides examined, exposures pose greater risks of chronic effects than of acute effects. Exposure reduction measures, including the use of closed mixing systems and personal protective equipment, significantly reduced exposures. Proposed regulations rely primarily on requirements for personal protective equipment and use restrictions to protect workers; chronic health risks are not considered in setting these requirements. Reviews of pesticides by the federal pesticide regulatory program have had little effect on occupational risks. Policy strategies that offer immediate protection for workers and that do not depend on extensive review of individual pesticides should be pursued.
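    To make the comparisons above concrete, here is a minimal sketch of the three risk indices the study uses (percent of LD50, percent of reference dose, and dose times cancer potency). All numeric values and function names are illustrative placeholders, not figures from the study.

```python
# Sketch of the three risk indices; all inputs are hypothetical placeholders.

def percent_of_ld50(absorbed_daily_dose, human_ld50):
    """Acute index: absorbed daily dose as a percentage of the estimated human LD50."""
    return 100.0 * absorbed_daily_dose / human_ld50

def percent_of_reference_dose(lifetime_daily_dose, reference_dose):
    """Chronic index: lifetime absorbed daily dose as a percentage of the EPA reference dose."""
    return 100.0 * lifetime_daily_dose / reference_dose

def lifetime_cancer_risk(lifetime_daily_dose, cancer_potency):
    """Lifetime cancer risk: dose (mg/kg/day) times cancer potency ((mg/kg/day)^-1)."""
    return lifetime_daily_dose * cancer_potency

# Hypothetical pesticide: 0.02 mg/kg/day absorbed, estimated human LD50 of 50 mg/kg,
# reference dose of 0.005 mg/kg/day, cancer potency of 0.1 (mg/kg/day)^-1.
print(percent_of_ld50(0.02, 50.0))             # 0.04  -> well below 1% of the LD50
print(percent_of_reference_dose(0.02, 0.005))  # 400.0 -> 4x the reference dose
print(lifetime_cancer_risk(0.02, 0.1) * 1e6)   # 2000.0 -> 2000 per million
```

    Consistent with the abstract's finding, a dose can sit far below the acute threshold while greatly exceeding the chronic one.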

    Hot new directions for quasi-Monte Carlo research in step with applications

    This article provides an overview of some interfaces between the theory of quasi-Monte Carlo (QMC) methods and applications. We summarize three QMC theoretical settings: first order QMC methods in the unit cube $[0,1]^s$ and in $\mathbb{R}^s$, and higher order QMC methods in the unit cube. One important feature is that their error bounds can be independent of the dimension $s$ under appropriate conditions on the function spaces. Another important feature is that good parameters for these QMC methods can be obtained by fast, efficient algorithms even when $s$ is large. We outline three different applications and explain how they can tap into the different QMC theory. We also discuss three cost saving strategies that can be combined with QMC in these applications. Much of this recent QMC theory and many of these methods have been developed not in isolation, but in close connection with applications.
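    As a minimal illustration of the first order QMC setting in the unit cube, the sketch below approximates an integral over $[0,1]^s$ with a rank-1 lattice rule. The generating vector is an arbitrary stand-in, not one produced by the fast construction algorithms the article surveys.

```python
import numpy as np

def rank1_lattice_rule(f, z, n):
    """Approximate int_{[0,1]^s} f(x) dx by the rank-1 lattice rule
    (1/n) * sum_{k=0}^{n-1} f(frac(k * z / n))."""
    k = np.arange(n).reshape(-1, 1)                   # quadrature index, n x 1
    x = np.mod(k * np.asarray(z).reshape(1, -1) / n, 1.0)  # n x s lattice points
    return np.mean(f(x))

# Smooth test integrand on [0,1]^4 with exact integral 1.
z = [1, 433, 229, 101]  # hypothetical generating vector for n = 1024
f = lambda x: np.prod(1 + (x - 0.5), axis=1)
print(rank1_lattice_rule(f, z, 1024))  # close to 1
```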

    Language, learning and context: developing students' critical thinking in Hong Kong secondary school English writing classes

    The 42nd Annual Meeting of the British Association for Applied Linguistics (BAAL), Newcastle University, U.K., 3-5 September 2009. In Proceedings of the BAAL Annual Conference, 2009, p. 105-10

    Bronchoabsorption: a novel bronchoscopic technique to improve biomarker sampling of the airway

    BACKGROUND: Current techniques used to obtain lung samples have significant limitations and do not provide reproducible biomarkers of inflammation. We have developed a novel technique that allows multiple sampling methods from the same area (or multiple areas) of the lung under direct bronchoscopic vision. It allows collection of mucosal lining fluid and bronchial brushings from the same site; biopsy samples may also be taken. The novel technique takes the same time as standard procedures and can be conducted safely. METHODS: Eight healthy smokers aged 40–65 years were included in this study. An absorptive filter paper was applied to the bronchial mucosa under direct vision using standard bronchoscopic techniques. Further samples were obtained from the same site using bronchial brushings. Bronchoalveolar lavage (BAL) was obtained using standard techniques. Chemokine (C-C Motif) Ligand 20 (CCL20), CCL4, CCL5, Chemokine (C-X-C Motif) Ligand 1 (CXCL1), CXCL8, CXCL9, CXCL10, CXCL11, Interleukin 1 beta (IL-1β), IL-6, Vascular endothelial growth factor (VEGF), Matrix metalloproteinase 8 (MMP-8) and MMP-9 were measured in exudate and BAL. mRNA was collected from the bronchial brushings for gene expression analysis. RESULTS: Concentrations of all biomarkers were more than 10-fold higher in lung exudate than in BAL. High yields of good-quality RNA, with RNA integrity numbers (RIN) between 7.6 and 9.3, were extracted from the bronchial brushings. The subset of genes measured was reproducible across the samples and corresponded to the inflammatory markers measured in exudate and BAL. CONCLUSIONS: The bronchoabsorption technique as described offers the ability to sample lung fluid directly from the site of interest without the dilution effects caused by BAL. Using this method we were able to measure the concentrations of biomarkers present in the lungs as well as collect high-yield mRNA samples for gene expression analysis from the same site. This technique demonstrates superior sensitivity to standard BAL for the measurement of biomarkers of inflammation and could replace BAL as the method of choice for these measurements. It provides a systems biology approach to studying the inflammatory markers of respiratory disease progression. TRIAL REGISTRATION: NHS Health Research Authority (13/LO/0256).

    Compressing Inertial Motion Data in Wireless Sensing Systems – An Initial Experiment

    The use of wireless inertial motion sensors, such as accelerometers, for supporting medical care and sports training has been under investigation in recent years. As the number of sensors (or their sampling rates) increases, compressing data at the source (i.e. at the sensors), thereby reducing the quantity of data that must be transmitted between the on-body sensors and the remote repository, becomes essential, especially in a bandwidth-limited wireless environment. This paper presents a set of compression experiment results on inertial motion data collected during running exercises. As a starting point, we selected a set of common compression algorithms to experiment with. Our results show that conventional lossy compression algorithms can achieve a desirable compression ratio with an acceptable time delay. The results also show that the quality of the decompressed data is within an acceptable range.
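    For a flavour of the trade-off measured in such experiments, the sketch below applies one simple lossy scheme (uniform 8-bit quantization followed by zlib entropy coding) to a synthetic accelerometer trace and reports the compression ratio and reconstruction error. The signal and the quantization step are assumptions for the demo, not the paper's actual algorithms or data.

```python
import zlib
import numpy as np

# Synthetic accelerometer trace: a 3 Hz oscillation plus sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 5000)
accel = np.sin(2 * np.pi * 3 * t) + 0.05 * rng.standard_normal(t.size)
raw = accel.astype(np.float32).tobytes()

# Lossy step: uniform quantization of the float signal onto 8-bit integers.
lo, hi = accel.min(), accel.max()
q = np.round((accel - lo) / (hi - lo) * 255).astype(np.uint8)
compressed = zlib.compress(q.tobytes(), 9)

# Reconstruct and measure the distortion introduced by quantization.
deq = q.astype(np.float32) / 255 * (hi - lo) + lo
ratio = len(raw) / len(compressed)
rmse = float(np.sqrt(np.mean((accel - deq) ** 2)))
print(f"compression ratio: {ratio:.1f}x, RMSE: {rmse:.4f}")
```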

    Successive Coordinate Search and Component-by-Component Construction of Rank-1 Lattice Rules

    The (fast) component-by-component (CBC) algorithm is an efficient tool for constructing generating vectors for quasi-Monte Carlo rank-1 lattice rules in weighted reproducing kernel Hilbert spaces. We consider product weights, which assign a weight to each dimension. These weights encode the influence of each variable (and, through products of the individual weights, of groups of variables); smaller weights indicate less importance. Kuo (2003) proved that the CBC algorithm achieves the optimal rate of convergence in the respective function spaces, but this does not imply that the algorithm finds the generating vector with the smallest worst-case error; in fact it does not. We investigate a generalization of the component-by-component construction that allows for a general successive coordinate search (SCS), based on an initial generating vector, with the aim of getting closer to the smallest worst-case error. The proposed method admits the same type of worst-case error bounds as the CBC algorithm, independent of the choice of the initial vector. Under the same summability conditions on the weights as in Kuo (2003), the error bound of the algorithm can be made independent of the dimension $d$, and we achieve the same optimal order of convergence for the function spaces from Kuo (2003). Moreover, a fast version of our method, based on the fast CBC algorithm by Nuyens and Cools, is available, reducing the computational cost of the algorithm to $O(d\,n\log n)$ operations, where $n$ denotes the number of function evaluations. Numerical experiments seeded by a Korobov-type generating vector show that the new SCS algorithm finds better choices than the CBC algorithm, and the improvement is greater when the weights decay more slowly.
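    For readers unfamiliar with the baseline that SCS generalizes, here is a naive O(d n^2) sketch of the CBC construction for the weighted Korobov space with smoothness alpha = 2 and product weights. The weights and the value of n are illustrative choices, not the paper's settings.

```python
import numpy as np

def cbc(n, d, gamma):
    """Greedy component-by-component search: with z_1, ..., z_{j-1} fixed,
    choose z_j in {1, ..., n-1} minimizing the squared worst-case error
        e^2(z) = -1 + (1/n) * sum_k prod_j (1 + gamma_j * omega({k z_j / n})),
    the standard formula for the weighted Korobov space with alpha = 2,
    where omega(x) = 2*pi^2*(x^2 - x + 1/6)."""
    k = np.arange(n)
    omega = lambda x: 2 * np.pi**2 * (x**2 - x + 1.0 / 6.0)
    prod = np.ones(n)  # running product over the components fixed so far
    z = []
    for j in range(d):
        best_err, best_zj, best_col = np.inf, None, None
        for zj in range(1, n):  # with n prime, every residue is admissible
            col = 1.0 + gamma[j] * omega((k * zj % n) / n)
            err = -1.0 + np.mean(prod * col)
            if err < best_err:
                best_err, best_zj, best_col = err, zj, col
        z.append(best_zj)
        prod *= best_col
    return np.array(z), best_err

# Product weights gamma_j = 0.9^j; n = 127 is prime.
z, err2 = cbc(n=127, d=5, gamma=[0.9**j for j in range(1, 6)])
print("generating vector:", z, " squared worst-case error:", err2)
```

    An SCS variant would re-run the per-coordinate minimization starting from a given initial vector rather than building the vector from scratch.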

    Application of quasi-Monte Carlo methods to PDEs with random coefficients -- an overview and tutorial

    This article provides a high-level overview of some recent works on the application of quasi-Monte Carlo (QMC) methods to PDEs with random coefficients. It is based on an in-depth survey of a similar title by the same authors, with an accompanying software package which is also briefly discussed here. Embedded in this article is a step-by-step tutorial of the required analysis for the setting known as the uniform case with first order QMC rules. The aim of this article is to provide an easy entry point for QMC experts wanting to start research in this direction and for PDE analysts and practitioners wanting to tap into contemporary QMC theory and methods.
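    In this setting, expectations over the random coefficients are typically computed with randomly shifted lattice rules, which also yield a practical error estimate from the spread over shifts. A minimal sketch follows; the generating vector and the integrand standing in for a PDE functional are assumptions, not part of the accompanying software package.

```python
import numpy as np

def shifted_lattice_estimate(f, z, n, num_shifts=16, seed=0):
    """Randomly shifted rank-1 lattice rule: average the lattice rule over
    independent uniform shifts; the spread across shifts gives an error bar."""
    rng = np.random.default_rng(seed)
    k = np.arange(n).reshape(-1, 1)
    base = k * np.asarray(z, dtype=float).reshape(1, -1) / n
    estimates = []
    for _ in range(num_shifts):
        x = np.mod(base + rng.random(base.shape[1]), 1.0)  # shifted lattice points
        estimates.append(np.mean(f(x)))
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(num_shifts)

# Smooth test integrand standing in for a linear functional of a PDE solution.
f = lambda x: np.exp(np.mean(x, axis=1))
mean, stderr = shifted_lattice_estimate(f, z=[1, 182, 147, 301], n=401)
print(f"estimate: {mean:.6f} +/- {stderr:.2e}")
```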

    Improvement of musculoskeletal model inputs: adjustment of acceleration by dynamic optimisation

    Knowledge of intrinsic dynamics, in terms of joint torques and muscle tensions, is important for clinical investigations. The common approach is to solve a multibody inverse dynamics problem based on a set of iterative equations, using noisy experimental data as input. Body segment accelerations are usually obtained by double differentiation, a method well known to amplify kinematic measurement noise. As a result, the iterative equations propagate uncertainties, leading to inconsistencies between the measured external force and the rate of change of linear momentum. Recent studies have addressed this residual force problem either by adjusting the mass distribution when calculating muscle tensions or by improving the computation of accelerations. However, these approaches are based on a least-squares problem and still yield approximate intrinsic dynamics. The aim of this communication is to compute joint accelerations by solving a dynamic optimization problem. We examine the effect of the optimal adjustment on joint torques and muscle tensions.
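    The noise-amplification problem motivating this work can be reproduced in a few lines: double-differentiating noisy positions inflates the error roughly by a factor of 1/dt^2, where dt is the sampling period. The signal, sampling rate and noise level below are synthetic stand-ins, not the study's data.

```python
import numpy as np

dt = 0.01                                    # 100 Hz motion capture, assumed
t = np.arange(0.0, 2.0, dt)
pos_true = np.sin(2 * np.pi * t)             # true segment position
acc_true = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * t)

rng = np.random.default_rng(1)
pos_noisy = pos_true + 0.001 * rng.standard_normal(t.size)  # mm-scale noise

# Central second difference: the noise term scales like noise / dt^2.
acc_fd = (pos_noisy[2:] - 2 * pos_noisy[1:-1] + pos_noisy[:-2]) / dt**2

rms = lambda e: float(np.sqrt(np.mean(e**2)))
print("RMS acceleration error:", rms(acc_fd - acc_true[1:-1]))
print("noise / dt^2 scale:    ", 0.001 / dt**2)
```

    With these numbers the acceleration error is of the same order as the true acceleration itself, which is why the abstract proposes adjusting accelerations by dynamic optimization instead.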

    Evaluation of the importance of various operating and sludge property parameters to the fouling of membrane bioreactors

    A single-fibre microfiltration system was employed to investigate the importance of various operating and sludge property parameters to membrane fouling during sludge filtration. The sludge was obtained from a submerged membrane bioreactor (SMBR). A series of comparative and correlative filtration and fouling tests were conducted on the influence of the operating variables, sludge properties and liquid-phase organic substances on membrane fouling development. The test results were analysed statistically with Pearson's correlation coefficients and stepwise multivariable linear regression. According to the statistical evaluation, the membrane fouling rate has a positive correlation with the biopolymer cluster (BPC) concentration, sludge concentration (mixed liquor suspended solids, MLSS), filtration flux and viscosity, a negative correlation with the cross-flow velocity, and a weak correlation with the extracellular polymeric substances and soluble microbial products. BPC appear to be the most important factor in membrane fouling development during sludge filtration, followed by the filtration flux and MLSS concentration. The cross-flow rate is also important for fouling control. It is argued that, during membrane filtration of SMBR sludge, BPC interact with sludge flocs at the membrane surface to facilitate the deposition of the sludge cake layer, leading to serious membrane fouling.
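    The statistical treatment described above can be sketched as follows: Pearson correlations of the fouling rate with each candidate predictor, then a forward stepwise linear regression ranked by residual sum of squares. The data below are synthetic and only the variable names mirror the abstract; this is not the study's dataset or exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
X = {
    "BPC": rng.normal(10, 2, n),
    "flux": rng.normal(20, 4, n),
    "MLSS": rng.normal(8, 1.5, n),
    "crossflow": rng.normal(0.5, 0.1, n),
}
# Synthetic fouling rate consistent with the reported signs of correlation.
y = 0.8 * X["BPC"] + 0.4 * X["flux"] + 0.3 * X["MLSS"] - 0.5 * X["crossflow"] \
    + rng.normal(0, 1, n)

for name, x in X.items():
    r = np.corrcoef(x, y)[0, 1]
    print(f"Pearson r({name}, fouling rate) = {r:+.2f}")

# Forward stepwise selection: at each step add the predictor that most
# reduces the residual sum of squares of the least-squares fit.
def rss(cols):
    A = np.column_stack([np.ones(n)] + cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ beta) ** 2)

chosen, remaining = [], dict(X)
while remaining:
    name = min(remaining, key=lambda k: rss([X[c] for c in chosen] + [remaining[k]]))
    chosen.append(name)
    remaining.pop(name)
print("stepwise entry order:", chosen)
```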