    Black pine (Pinus nigra) barks: A critical evaluation of some sampling and analysis parameters for mercury biomonitoring purposes

    Abstract: Tree barks are increasingly used as biomonitors of airborne pollutants. However, many authors stress the poor comparability of results across studies, a drawback caused mainly by a poor understanding of the critical sampling parameters involved. To minimize biases that could be introduced during sampling, this study investigated the barks of Pinus nigra J.F. Arnold from thirteen sites in the abandoned Mt. Amiata mercury (Hg) mining district (Southern Tuscany, Italy) and its surroundings, and critically assessed the influence of several sampling and analysis parameters on Hg content. At each site, eight bark samples were taken from a single tree at two heights (70 cm and 150 cm above the soil) and on four sides of the trunk, corresponding to the four cardinal directions; a composite soil sample was also collected. Mercury contents in barks range from 0.1 to 28.8 mg/kg and are correlated with soil Hg contents (1–480 mg/kg), indicating that barks record both gaseous Hg concentrations in air and wind-transported Hg-bearing particulate matter. For each tree, samples at 70 cm and 150 cm show Hg contents of the same order of magnitude, although values at 150 cm are slightly less dispersed, possibly because barks at 70 cm are more influenced by randomly deposited soil particles. There is no statistically significant dependence of Hg content on direction or tree age. Simulated rain events cause a negligible loss of Hg from barks. The results suggest that a convenient sampling practice for Pinus nigra is to collect a bark slice (typically 1–2 mm thick) within the outermost 1.5 cm layer.
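
    As a rough illustration of the kind of analysis sketched above, the following Python snippet (illustrative placeholder values and thresholds, not the study's data) computes the bark-soil Hg rank correlation and compares the dispersion of bark Hg at the two sampling heights:

    import numpy as np
    from scipy import stats

    # Illustrative placeholder values (mg/kg); the real data are the
    # measurements from the thirteen Mt. Amiata sites described above.
    soil_hg = np.array([1.0, 15.0, 480.0, 33.0, 8.5, 120.0, 2.4,
                        60.0, 210.0, 5.1, 90.0, 17.0, 300.0])
    bark_hg_150 = np.array([0.1, 0.9, 28.8, 2.1, 0.5, 7.4, 0.2,
                            3.6, 12.0, 0.3, 5.2, 1.0, 18.0])
    bark_hg_70 = bark_hg_150 * (1 + 0.3 * np.random.default_rng(0).random(13))

    # Rank correlation between soil and bark Hg (robust to the skewed range)
    rho, p = stats.spearmanr(soil_hg, bark_hg_150)
    print(f"soil vs. bark Hg (150 cm): rho = {rho:.2f}, p = {p:.3g}")

    # Dispersion at the two heights via the coefficient of variation
    for label, x in (("70 cm", bark_hg_70), ("150 cm", bark_hg_150)):
        print(f"{label}: CV = {x.std() / x.mean():.2f}")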

    Extracting Rules for Diagnosis of Diabetes Using Genetic Programming

    Background: Diabetes is a global health challenge that causes a high incidence of major social and economic consequences. Early prevention, or identification of those at risk, is therefore crucial for reducing the problems it causes. The aim of this study was to extract rules for diagnosing diabetes using genetic programming. Methods: This study used the PIMA dataset of the University of California, Irvine, which contains records of 768 women of Pima heritage, 500 healthy and 268 with diabetes. To handle missing values and outliers in this dataset, the k-nearest neighbor and k-means methods were applied, respectively. A genetic programming (GP) model was then built to diagnose diabetes and to determine the most important factors affecting it. The accuracy, sensitivity, and specificity of the proposed model on the PIMA dataset were 79.32%, 58.96%, and 90.74%, respectively. Results: The experimental results on PIMA revealed that age, plasma glucose (PG) concentration, BMI, triceps skinfold thickness, and serum insulin were influential in diabetes mellitus and increased diabetes risk. The results also show the good performance of the model together with the simplicity and comprehensibility of the extracted rules. Conclusions: GP can effectively extract rules for diagnosing diabetes. BMI and PG concentration are the most important factors increasing the risk of diabetes. Keywords: Diabetes, PIMA, Genetic programming, KNNi, K-means, Missing value, Outlier detection, Rule extraction
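
    A minimal sketch of the preprocessing and evaluation steps described above, assuming scikit-learn is available (the feature matrix, n_neighbors, and the outlier cutoff are illustrative choices, not the authors' exact settings):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.impute import KNNImputer

    # Placeholder for the (768, 8) PIMA feature matrix, with implausible
    # zeros recoded as missing (NaN) beforehand.
    rng = np.random.default_rng(0)
    X = rng.random((768, 8))
    X[rng.random(X.shape) < 0.05] = np.nan

    # 1) KNN imputation of missing values (the "KNNi" of the keywords)
    X_imp = KNNImputer(n_neighbors=5).fit_transform(X)

    # 2) k-means-based outlier flagging: points far from their cluster center
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_imp)
    dist = np.linalg.norm(X_imp - km.cluster_centers_[km.labels_], axis=1)
    outliers = dist > np.percentile(dist, 95)  # illustrative cutoff

    # 3) The reported metrics, computed from confusion-matrix counts
    def metrics(tp, fn, fp, tn):
        accuracy = (tp + tn) / (tp + fn + fp + tn)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return accuracy, sensitivity, specificity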

    IST Austria Technical Report

    We consider the problem of developing automated techniques to aid the average-case complexity analysis of programs. Several classical textbook algorithms have quite efficient average-case complexity, whereas the corresponding worst-case bounds are either inefficient (e.g., QUICK-SORT) or completely ineffective (e.g., COUPON-COLLECTOR). Since the main focus of average-case analysis is to obtain efficient bounds, we consider bounds that are logarithmic, linear, or almost-linear (O(log n), O(n), and O(n · log n), respectively, where n is the size of the input). Our main contribution is a sound approach for deriving such average-case bounds for randomized recursive programs. Our approach is efficient (a simple linear-time algorithm) and is based on (a) the analysis of recurrence relations induced by randomized algorithms and (b) a guess-and-check technique. Our approach can infer the asymptotically optimal average-case bounds for classical randomized algorithms, including RANDOMIZED-SEARCH, QUICK-SORT, QUICK-SELECT, and COUPON-COLLECTOR, whose worst-case bounds are either inefficient (such as linear versus the logarithmic average case, or quadratic versus the linear or almost-linear average case) or ineffective. We have implemented our approach, and the experimental results show that we obtain the bounds efficiently for various classical algorithms.
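
    To illustrate the guess-and-check idea on one of the named examples (a numeric Python sketch under our reading of the recurrence; the paper's procedure is symbolic): the expected cost of QUICK-SELECT satisfies roughly T(n) <= n + (1/n) * sum over pivot positions i of T(max(i, n-1-i)). Guessing a linear bound T(n) <= C*n, one checks it by substitution:

    # Guess-and-check for QUICK-SELECT's average-case recurrence (sketch):
    #   T(n) <= n + (1/n) * sum_{i=0}^{n-1} T(max(i, n-1-i)),  T(1) = 1
    # Guess: T(n) <= C*n with C = 4; check by computing T exactly.
    C = 4.0
    T = [0.0, 1.0]  # T[0] is never used; T[1] = 1

    for n in range(2, 2000):
        avg = sum(T[max(i, n - 1 - i)] for i in range(n)) / n
        T.append(n + avg)  # exact value from the recurrence
        assert T[n] <= C * n, f"guess fails at n = {n}"

    print("T(n) <= 4n holds for all n < 2000, e.g. T(100) =", round(T[100], 1))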

    Dense subsets of products of finite trees

    We prove a "uniform" version of the finite density Halpern-Läuchli theorem. Specifically, we say that a tree $T$ is homogeneous if it is uniquely rooted and there is an integer $b \geq 2$, called the branching number of $T$, such that every $t \in T$ has exactly $b$ immediate successors. We show the following. For every integer $d \geq 1$, every $b_1, \dots, b_d \in \mathbb{N}$ with $b_i \geq 2$ for all $i \in \{1, \dots, d\}$, every integer $k \geq 1$ and every real $0 < \epsilon \leq 1$ there exists an integer $N$ with the following property. If $(T_1, \dots, T_d)$ are homogeneous trees such that the branching number of $T_i$ is $b_i$ for all $i \in \{1, \dots, d\}$, $L$ is a finite subset of $\mathbb{N}$ of cardinality at least $N$, and $D$ is a subset of the level product of $(T_1, \dots, T_d)$ satisfying $|D \cap (T_1(n) \times \dots \times T_d(n))| \geq \epsilon\, |T_1(n) \times \dots \times T_d(n)|$ for every $n \in L$, then there exist strong subtrees $(S_1, \dots, S_d)$ of $(T_1, \dots, T_d)$ of height $k$ and with common level set such that the level product of $(S_1, \dots, S_d)$ is contained in $D$. The least integer $N$ with this property is denoted by $UDHL(b_1, \dots, b_d \mid k, \epsilon)$. The main point is that the result is independent of the position of the finite set $L$. The proof is based on a density increment strategy and gives explicit upper bounds for the numbers $UDHL(b_1, \dots, b_d \mid k, \epsilon)$. Comment: 36 pages, no figures; International Mathematics Research Notices, to appear.
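
    For orientation (our specialization, not part of the abstract), the case $d = 1$ already conveys the content of the theorem; in LaTeX:

    % The d = 1 case of the uniform density Halpern-Lauchli theorem,
    % assuming the definitions in the abstract above.
    For every $b \geq 2$, $k \geq 1$ and $0 < \epsilon \leq 1$ there exists
    an integer $N$ such that: if $T$ is a homogeneous tree with branching
    number $b$, $L \subseteq \mathbb{N}$ satisfies $|L| \geq N$, and
    $D \subseteq T$ satisfies $|D \cap T(n)| \geq \epsilon\,|T(n)|$ for every
    $n \in L$, then $D$ contains a strong subtree of $T$ of height $k$.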

    Endpoint bounds for the quartile operator

    It is a result of Lacey and Thiele that the bilinear Hilbert transform maps $L^{p_1}(\mathbb{R}) \times L^{p_2}(\mathbb{R})$ into $L^{p_3}(\mathbb{R})$ whenever $(p_1, p_2, p_3)$ is a Hölder tuple with $p_1, p_2 > 1$ and $p_3 > 2/3$. We study the behavior of the quartile operator, which is the Walsh model for the bilinear Hilbert transform, when $p_3 = 2/3$. We show that the quartile operator maps $L^{p_1}(\mathbb{R}) \times L^{p_2}(\mathbb{R})$ into $L^{2/3,\infty}(\mathbb{R})$ when $p_1, p_2 > 1$ and one component is restricted to subindicator functions. As a corollary, we derive that the quartile operator maps $L^{p_1}(\mathbb{R}) \times L^{p_2,2/3}(\mathbb{R})$ into $L^{2/3,\infty}(\mathbb{R})$. We also provide restricted weak-type estimates and boundedness on Orlicz-Lorentz spaces near $p_1 = 1$, $p_2 = 2$ which improve, in the Walsh case, on results of Bilyk and Grafakos, and of Carro, Grafakos, Martell, and Soria. Our main tool is the multi-frequency Calderón-Zygmund decomposition first used by Nazarov, Oberlin, and Thiele. Comment: 17 pages; update includes referee's suggestions and two improved results near $L^1 \times L^2$. To appear in the Journal of Fourier Analysis and Applications.
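
    For reference (a standard fact, not restated in the abstract), $(p_1, p_2, p_3)$ is a Hölder tuple when
    \[
      \frac{1}{p_1} + \frac{1}{p_2} = \frac{1}{p_3},
    \]
    so the endpoint $p_3 = 2/3$ studied here arises, for instance, from $p_1 = p_2 = 4/3$.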