
    Wetland-based passive treatment systems for gold ore processing effluents containing residual cyanide, metals and nitrogen species

    Gold extraction operations generate a variety of wastes requiring responsible disposal in compliance with current environmental regulations. During recent decades, increased emphasis has been placed on effluent control and treatment in order to avoid the threat to the environment posed by toxic constituents. In many modern gold mining and ore processing operations, cyanide species are of most immediate concern. Given that natural degradation processes are known to reduce the toxicity of cyanide over time, trials have been conducted at laboratory and field scales to assess the feasibility of using wetland-based passive systems as low-cost and environmentally friendly methods for long-term treatment of leachates from closed gold mine tailings disposal facilities. Laboratory experiments on discrete aerobic and anaerobic treatment units supported the development of design parameters for the construction of a field-scale passive system at a gold mine site in northern Spain. An in situ pilot-scale wetland treatment system was designed, constructed and monitored over a nine-month period. Overall, the results suggest that compost-based constructed wetlands are capable of detoxifying cyanidation effluents, removing about 21.6% of dissolved cyanide and 98% of Cu, as well as nitrite and nitrate. Wetland-based passive systems can therefore be considered a viable technology for removal of residual concentrations of cyanide from leachates emanating from closed gold mine tailings disposal facilities.

    A family of higher-order single layer plate models meeting $C^0_z$-requirements for arbitrary laminates

    In the framework of displacement-based equivalent single layer (ESL) plate theories for laminates, this paper presents a generic and automatic method to extend a basis higher-order shear deformation theory (polynomial, trigonometric, hyperbolic, ...) to a multilayer $C^0_z$ higher-order shear deformation theory. The key idea is to enhance the description of the cross-sectional warping: the odd high-order $C^1_z$ function of the basis model is replaced by one odd and one even high-order function, and the characteristic zig-zag behaviour is included by means of piecewise linear functions. In order to account for arbitrary lamination schemes, four such piecewise continuous functions are considered. The coefficients of these four warping functions are determined in such a manner that the interlaminar continuity as well as the homogeneity conditions at the plate's top and bottom surfaces are a priori exactly verified by the transverse shear stress field. These $C^0_z$ ESL models all have the same number of DOF as the original basis HSDT. Numerical assessments are presented by referring to a strong-form Navier-type solution for laminates with arbitrary stacking sequences as well as for a sandwich plate. In all practically relevant configurations for which laminated plate models are usually applied, the results obtained in terms of deflection, fundamental frequency and local stress response show that the proposed zig-zag models give better results than the basis models from which they are derived.
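    The construction can be summarised in a short worked sketch. The symbols below (warping functions $F_i$, layerwise coefficients $c^{(k)}_{i\alpha}$ and the shear variable $\gamma^0_\alpha$) are illustrative notation, not the authors' own:

```latex
% Hedged sketch of the warping-function enhancement; notation is assumed.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
A basis HSDT describes the in-plane displacement with one smooth, odd
warping function $f(z)$:
\begin{equation}
  u_\alpha(x_\beta, z) = u^0_\alpha - z\, w_{,\alpha} + f(z)\, \gamma^0_\alpha .
\end{equation}
The multilayer $C^0_z$ extension replaces $f(z)$ by a layerwise combination
of four piecewise continuous functions (one odd and one even smooth
function, plus zig-zag terms that are piecewise linear in $z$):
\begin{equation}
  u_\alpha(x_\beta, z) = u^0_\alpha - z\, w_{,\alpha}
    + \underbrace{\sum_{i=1}^{4} c^{(k)}_{i\alpha}\, F_i(z)}_{\bar f^{(k)}_\alpha(z)}
      \, \gamma^0_\alpha ,
\end{equation}
where the constants $c^{(k)}_{i\alpha}$ of layer $k$ are fixed a priori by
interlaminar continuity of the transverse shear stresses and by their
vanishing at the top and bottom surfaces, so no degrees of freedom are
added relative to the basis theory.
\end{document}
```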

    Bayesian Methods for Analysis and Adaptive Scheduling of Exoplanet Observations

    We describe work in progress by a collaboration of astronomers and statisticians developing a suite of Bayesian data analysis tools for extrasolar planet (exoplanet) detection, planetary orbit estimation, and adaptive scheduling of observations. Our work addresses analysis of stellar reflex motion data, where a planet is detected by observing the "wobble" of its host star as it responds to the gravitational tug of the orbiting planet. Newtonian mechanics specifies an analytical model for the resulting time series, but it is strongly nonlinear, yielding complex, multimodal likelihood functions; it is even more complex when multiple planets are present. The parameter spaces range in size from a few dimensions to dozens of dimensions, depending on the number of planets in the system and the type of motion measured (line-of-sight velocity, or position on the sky). Since orbits are periodic, Bayesian generalizations of periodogram methods facilitate the analysis. These rely on the model being linearly separable, enabling partial analytical marginalization that reduces the dimension of the parameter space. Subsequent analysis uses adaptive Markov chain Monte Carlo methods and adaptive importance sampling to perform the integrals required both for inference (planet detection and orbit measurement) and for information-maximizing sequential design (for adaptive scheduling of observations). We present an overview of our current techniques and highlight directions being explored by ongoing research.
    Comment: 29 pages, 11 figures. An abridged version is accepted for publication in Statistical Methodology for a special issue on astrostatistics, with selected (refereed) papers presented at the Astronomical Data Analysis Conference (ADA VI) held in Monastir, Tunisia, in May 2010. Update corrects equation (3).
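    As a concrete sketch of the nonlinear time-series model and of the linear separability that enables partial analytic marginalization, the following minimal one-planet radial-velocity model may help; the function names and parameter values are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of a Keplerian radial-velocity time-series model; names and
# numbers are illustrative, not the collaboration's actual code.
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M.copy()
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def radial_velocity(t, P, e, omega, t_peri, K, gamma):
    """Line-of-sight reflex velocity of the host star for one planet."""
    M = 2.0 * np.pi * (t - t_peri) / P            # mean anomaly
    E = solve_kepler(np.mod(M, 2.0 * np.pi), e)   # eccentric anomaly
    # true anomaly from the eccentric anomaly
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    return K * (np.cos(nu + omega) + e * np.cos(omega)) + gamma

# With the nonlinear parameters (P, e, t_peri) held fixed, the model is
# linear in K*cos(omega), K*sin(omega) and gamma, which is what permits the
# partial analytic marginalization behind Bayesian periodogram methods.
t = np.linspace(0.0, 1000.0, 200)
v = radial_velocity(t, P=1079.0, e=0.03, omega=1.2, t_peri=0.0,
                    K=50.0, gamma=0.0)
```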

    Bayesian astrostatistics: a backward look to the future

    This perspective chapter briefly surveys: (1) past growth in the use of Bayesian methods in astrophysics; (2) current misconceptions about both frequentist and Bayesian statistical inference that hinder wider adoption of Bayesian methods by astronomers; and (3) multilevel (hierarchical) Bayesian modeling as a major future direction for research in Bayesian astrostatistics, exemplified in part by presentations at the first ISI invited session on astrostatistics, commemorated in this volume. It closes with an intentionally provocative recommendation for astronomical survey data reporting, motivated by the multilevel Bayesian perspective on modeling cosmic populations: that astronomers cease producing catalogs of estimated fluxes and other source properties from surveys. Instead, summaries of likelihood functions (or marginal likelihood functions) for source properties should be reported (not posterior probability density functions), including nontrivial summaries (not simply upper limits) for candidate objects that do not pass traditional detection thresholds.
    Comment: 27 pp, 4 figures. A lightly revised version of a chapter in "Astrostatistical Challenges for the New Astronomy" (Joseph M. Hilbe, ed., Springer, New York, forthcoming in 2012), the inaugural volume for the Springer Series in Astrostatistics. Version 2 has minor clarifications and an additional reference.

    Observation of Entanglement-Dependent Two-Particle Holonomic Phase

    Holonomic phases, geometric and topological, have long been an intriguing aspect of physics. They are ubiquitous, ranging from observations in particle physics to applications in fault-tolerant quantum computing. However, observations of these phases in particles sharing genuine quantum correlations have been lacking. Here we experimentally demonstrate the holonomic phase of two entangled photons evolving locally, which nevertheless gives rise to an entanglement-dependent phase. We observe its transition from geometric to topological as the entanglement between the particles is tuned from zero to maximal, and find this phase to become more resilient to changes in the evolution as the entanglement increases. Furthermore, we theoretically show that holonomic phases can directly quantify the amount of quantum correlations between the two particles. Our results open up a new avenue for observations of holonomic phenomena in multi-particle entangled quantum systems.
    Comment: 8 pages, 6 figures.

    Pencil-Beam Surveys for Faint Trans-Neptunian Objects

    We have conducted pencil-beam searches for outer solar system objects to a limiting magnitude of R ~ 26. Five new trans-Neptunian objects were detected in these searches. Our combined data set provides an estimate of ~90 trans-Neptunian objects per square degree brighter than R ~ 25.9. This estimate is a factor of 3 above the expected number of objects based on an extrapolation of previous surveys with brighter limits, and appears consistent with the hypothesis of a single power-law luminosity function for the entire trans-Neptunian region. Maximum likelihood fits to all self-consistent surveys with published efficiency functions predict a cumulative sky density Sigma(<R) obeying log10(Sigma) = 0.76 (R - 23.4) objects per square degree brighter than a given magnitude R.
    Comment: Accepted by AJ, 18 pages, including 6 figures.
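    The quoted fit is easy to evaluate directly; a minimal sketch follows (the function name and sample magnitudes are illustrative):

```python
# Evaluate the best-fit cumulative luminosity function quoted above:
# log10(Sigma) = 0.76 * (R - 23.4) objects per square degree.
def cumulative_sky_density(R, slope=0.76, R0=23.4):
    """Cumulative surface density (objects/deg^2) brighter than magnitude R."""
    return 10.0 ** (slope * (R - R0))

for R in (23.4, 24.9, 25.9, 26.0):
    print(f"R < {R:4.1f}: {cumulative_sky_density(R):7.1f} objects/deg^2")
# At the survey limit R ~ 25.9 this gives ~80 objects/deg^2, of the same
# order as the ~90/deg^2 estimate quoted in the abstract.
```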

    A Bayesian Periodogram Finds Evidence for Three Planets in 47 Ursae Majoris

    A Bayesian analysis of 47 Ursae Majoris (47 UMa) radial velocity data confirms and refines the properties of two previously reported planets with periods of 1079 and 2325 days and finds evidence for an additional long-period planet with a period of approximately 10000 days. The three-planet model is found to be 10^5 times more probable than the next most probable model, which is a two-planet model. The nonlinear model fitting is accomplished with a new hybrid Markov chain Monte Carlo (HMCMC) algorithm which incorporates parallel tempering, simulated annealing and genetic crossover operations. Each of these features facilitates the detection of a global minimum in chi-squared, and by combining all three, the HMCMC greatly increases the probability of realizing this goal. When applied to the Kepler problem it acts as a powerful multi-planet Kepler periodogram. The measured periods are $1078 \pm 2$, $2391^{+100}_{-87}$, and $14002^{+4018}_{-5095}$ d, and the corresponding eccentricities are $0.032 \pm 0.014$, $0.098^{+0.047}_{-0.096}$, and $0.16^{+0.09}_{-0.16}$. The results favor low-eccentricity orbits for all three. Assuming the three signals (each one consistent with a Keplerian orbit) are caused by planets, the corresponding limits on planetary mass ($M \sin i$) and semi-major axis are ($2.53^{+0.07}_{-0.06} M_J$, $2.10 \pm 0.02$ au), ($0.54 \pm 0.07 M_J$, $3.6 \pm 0.1$ au), and ($1.6^{+0.3}_{-0.5} M_J$, $11.6^{+2.1}_{-2.9}$ au), respectively. We have also characterized a noise-induced eccentricity bias and designed a correction filter that can be used as an alternate prior for eccentricity, to enhance the detection of planetary orbits of low or moderate eccentricity.
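    Of the three ingredients, parallel tempering is the one that most directly helps with multimodal, multi-planet posteriors. The toy sketch below shows only that ingredient, with a placeholder bimodal target standing in for the actual 47 UMa likelihood; all names and tuning constants are assumptions:

```python
# Toy parallel-tempering sampler; the target density is a placeholder, not
# the paper's multi-planet Kepler likelihood.
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Placeholder bimodal posterior standing in for a multimodal likelihood.
    return np.logaddexp(-0.5 * np.sum((theta - 2.0) ** 2),
                        -0.5 * np.sum((theta + 2.0) ** 2))

betas = np.array([1.0, 0.5, 0.25, 0.1])   # tempering ladder; beta=1 is target
chains = [rng.normal(size=2) for _ in betas]
logp = [log_post(c) for c in chains]
samples = []

for step in range(5000):
    # Within-chain Metropolis updates, each targeting post(theta)^beta.
    for i, beta in enumerate(betas):
        prop = chains[i] + 0.5 * rng.normal(size=2)
        lp = log_post(prop)
        if np.log(rng.uniform()) < beta * (lp - logp[i]):
            chains[i], logp[i] = prop, lp
    # Propose a swap between a random adjacent pair of temperatures; swaps
    # let hot chains feed distant chi-squared minima down to the cold chain.
    i = rng.integers(len(betas) - 1)
    if np.log(rng.uniform()) < (betas[i] - betas[i + 1]) * (logp[i + 1] - logp[i]):
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
        logp[i], logp[i + 1] = logp[i + 1], logp[i]
    samples.append(chains[0].copy())   # keep only the beta = 1 (target) chain
```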

    Validity of BMI-based Equations for Estimating Body Fat Percentage in Collegiate Male Soccer Players: A Three-Compartment Model Comparison

    The ease of calculating body mass index (BMI)-based body fat percentage (BF%) is appealing in collegiate male soccer players, who have limited time availability and strict training regimens. However, research has yet to evaluate whether BMI-based BF% equations are valid when compared to a criterion multi-compartment model. PURPOSE: The purpose of this study was to compare BMI-based BF% equations with a three-compartment (3C) model in collegiate male soccer players. METHODS: Sixteen NCAA Division II male soccer players (age = 21 ± 2 years; ht = 179.0 ± 8.2 cm; wt = 78.0 ± 8.5 kg) participated in this study. BMI was calculated as weight (kg) divided by height squared (m^2). BF% was predicted with the BMI-based equations of Jackson et al. (BMIJA), Deurenberg et al. (BMIDE), Gallagher et al. (BMIGA), and Womersley and Durnin (BMIWO). The criterion 3C model BF% was determined using air displacement plethysmography (BOD POD®) for body volume and bioimpedance spectroscopy for total body water. RESULTS: The BMI-based BF% equations significantly overestimated mean group BF% for all equations when compared to the 3C model (2.78 to 5.18%; all p < 0.05). The standard error of estimate (SEE) ranged from 4.18% (BMIDE) to 4.29% (BMIWO). Furthermore, the 95% limits of agreement were similar for all comparisons and ranged from ±7.96% (BMIGA) to ±8.18% (BMIJA). CONCLUSIONS: The results of this study demonstrate that the selected BMI-based BF% equations produce fairly small SEEs and 95% limits of agreement. However, the equations also revealed systematic error and a tendency to overestimate mean group BF% when compared to the 3C model. BMI-based equations can be used as an alternative for the individual estimation of BF% in collegiate male soccer players when a more advanced 3C model is not available, but practitioners should consider adjusting for the systematic error (e.g., decrease BMIDE by 2.78%).
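    A worked example makes the suggested adjustment concrete. The form of the Deurenberg adult equation below is taken from the general literature rather than from this abstract, so treat it as an assumption:

```python
# Worked example of the BMI-based workflow described above. The Deurenberg
# adult equation (BF% = 1.20*BMI + 0.23*age - 10.8*sex - 5.4, sex = 1 for
# males) is quoted from the general literature, not from the abstract.
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

def bf_deurenberg(bmi_value: float, age: int, male: bool = True) -> float:
    return 1.20 * bmi_value + 0.23 * age - 10.8 * (1 if male else 0) - 5.4

# Group means from the abstract: 78.0 kg, 179.0 cm, 21 years.
b = bmi(78.0, 179.0)
raw = bf_deurenberg(b, age=21)
adjusted = raw - 2.78   # correct for the systematic overestimate reported
print(f"BMI = {b:.1f}, BF% (Deurenberg) = {raw:.1f}, adjusted = {adjusted:.1f}")
```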

    Getting the Measure of the Flatness Problem

    The problem of estimating cosmological parameters such as $\Omega$ from noisy or incomplete data is an example of an inverse problem and, as such, generally requires a probabilistic approach. We adopt the Bayesian interpretation of probability for such problems and stress the connection between probability and information which this approach makes explicit. This connection is important even when information is "minimal" or, in other words, when we need to argue from a state of maximum ignorance. We use the transformation group method of Jaynes to assign a minimally informative prior probability measure for cosmological parameters in the simple example of a dust Friedman model, showing that the usual statements of the cosmological flatness problem are based on an inappropriate choice of prior. We further demonstrate that, in the framework of a classical cosmological model, there is no flatness problem.
    Comment: 11 pages, submitted to Classical and Quantum Gravity, TeX source file, no figures.
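    For context, the conventional quantitative statement of the problem the paper re-examines follows from the standard dust-model Friedmann equation; this is textbook background, not material from the paper itself:

```latex
% Conventional statement of the flatness problem, for reference.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a matter-dominated (dust) model the Friedmann equation gives
$\Omega - 1 = k/(a^2 H^2)$, and since $\rho \propto a^{-3}$,
\begin{equation}
  \Omega(a)^{-1} - 1 = \left(\Omega_0^{-1} - 1\right)\frac{a}{a_0},
\end{equation}
so any departure from $\Omega = 1$ grows with the scale factor $a$. The
usual ``problem'' is that $\Omega_0$ of order unity today requires
$\Omega$ extremely close to 1 at early epochs; the paper argues that
calling this fine-tuning improbable presupposes a particular prior
measure on $\Omega$, one that the transformation-group analysis does not
support.
\end{document}
```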

    Avoiding selection bias in gravitational wave astronomy

    When searching for gravitational waves in the data from ground-based gravitational wave detectors, it is common to use a detection threshold to reduce the number of background events which are unlikely to be the signals of interest. However, imposing such a threshold will also discard some real signals with low amplitude, which can potentially bias any inferences drawn from the population of detected signals. We show how this selection bias is naturally avoided by using the full information from the search, considering both the selected data and our ignorance of the data that are thrown away, and considering all relevant signal and noise models. This approach produces unbiased estimates of parameters even in the presence of false alarms and incomplete data. It can be seen as an extension of previous methods into the high false alarm rate regime, where we are able to show that the quality of parameter inference can be optimised by lowering thresholds and increasing the false alarm rate.
    Comment: 13 pages, 2 figures.
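    The idea can be illustrated with a censored-data toy model: a naive average over detections is biased high, while a likelihood that also accounts for our ignorance of the discarded data recovers the truth. Everything below (the Gaussian model, threshold, and numbers) is an illustrative assumption, not the paper's method:

```python
# Toy illustration of thresholding bias: estimating a population mean
# amplitude from data that pass a detection threshold.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
mu_true, sigma, threshold = 3.0, 1.0, 4.0
data = mu_true + sigma * rng.normal(size=100_000)
detected = data[data > threshold]

# Naive estimate from detected events only: biased high.
print("naive mean of detections:", detected.mean())

# Censored-data likelihood: keep the detected values and account for the
# discarded data through the non-detection probability.
n_missed = data.size - detected.size

def neg_loglike(mu):
    ll_det = norm.logpdf(detected, mu, sigma).sum()
    ll_miss = n_missed * norm.logcdf(threshold, mu, sigma)
    return -(ll_det + ll_miss)

res = minimize_scalar(neg_loglike, bounds=(0.0, 6.0), method="bounded")
print("censoring-aware MLE:", res.x)   # recovers mu_true ~ 3.0
```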