
    Fecal Sludge Management: a comparative assessment of 12 cities

    This paper outlines the findings of an initial scoping study of fecal sludge management (FSM) in twelve cities. This short, desk-based study used innovative tools to assess the institutional context and the outcome in terms of the amount of fecal sludge safely managed. A range of cities was included in the review, all in low- and middle-income countries. None of the cities studied managed fecal sludge effectively, although performance varied. Where cities are seeking to address fecal sludge challenges, the solutions are, at best, only partial, with a focus on sewerage, which in most cases serves only a small minority. FSM requires strong city-level oversight and an enabling environment that drives coordinated actions along the sanitation service chain; this was largely absent in the cities studied. Based on the findings of the review, a typology of cities was developed to aid the identification of key interventions to improve FSM service delivery. Additional work is recommended to further improve the tools used in this study, to enable a better understanding of the FSM challenges and to identify appropriate operational solutions.

    Fecal Sludge Management: analytical tools for assessing FSM in cities

    This paper describes the results of a research study which aimed in part to develop a method for rapidly assessing fecal sludge management (FSM) in low- and middle-income cities. The method uses innovative tools to assess both the institutional context and the outcome in terms of the amount of fecal sludge safely managed. To assess FSM outcomes, a fecal sludge matrix and an accompanying flow diagram were developed to illustrate the different pathways fecal sludge takes from containment in water closets, pits and tanks, through to treatment and reuse/disposal. This was supplemented by an FSM service delivery assessment (SDA) tool which measures the quality of the enabling environment, the level of service development and the level of commitment to service sustainability. The tools were developed through an iterative process of literature review, consultation and case studies. This paper considers previous work done on FSM, suggests reasons why it is often neglected in favour of sewerage, and highlights the importance of supporting the increasing focus on solving the FSM challenge. The tools are presented here as useful initial scoping instruments for use in advocacy around the need for a change in policy, funding or indeed a city’s overall approach to urban sanitation.

    Minimising the impact of scale-dependent galaxy bias on the joint cosmological analysis of large scale structures

    We present a mitigation strategy to reduce the impact of non-linear galaxy bias on the joint ‘3 × 2pt’ cosmological analysis of weak lensing and galaxy surveys. The Ψ-statistics that we adopt are based on Complete Orthogonal Sets of E/B Integrals (COSEBIs). As such, they are designed to minimise the contributions to the observable from the smallest physical scales, where models are highly uncertain. We demonstrate that Ψ-statistics carry the same constraining power as the standard two-point galaxy clustering and galaxy-galaxy lensing statistics, but are significantly less sensitive to scale-dependent galaxy bias. Using two galaxy bias models, motivated by halo-model fits to data and simulations, we quantify the error in a standard 3 × 2pt analysis where constant galaxy bias is assumed. Even when adopting conservative angular scale cuts that degrade the overall cosmological parameter constraints, we find biases of order 1σ for Stage III surveys on the cosmological parameter S8 = σ8(Ωm/0.3)^α. This arises from a leakage of the smallest physical scales to all angular scales in the standard two-point correlation functions. In contrast, when analysing Ψ-statistics under the same approximation of constant galaxy bias, we show that the bias on the recovered value for S8 can be decreased by a factor of ∼2, with less conservative scale cuts. Given the challenges in determining accurate galaxy bias models in the highly non-linear regime, we argue that 3 × 2pt analyses should move towards new statistics that are less sensitive to the smallest physical scales.
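
As a rough illustration of the S8 parameterisation quoted above, the sketch below simply evaluates S8 = σ8(Ωm/0.3)^α for assumed input values and shows how a shift in the recovered (σ8, Ωm) propagates into S8. The choice α = 0.5 and all numerical values are illustrative assumptions, not numbers taken from the paper.

```python
# Illustrative sketch only: S8 = sigma8 * (Omega_m / 0.3)**alpha.
# alpha = 0.5 and the input values below are assumptions, not results from the paper.

def s8(sigma8: float, omega_m: float, alpha: float = 0.5) -> float:
    """Return S8 = sigma8 * (Omega_m / 0.3)**alpha."""
    return sigma8 * (omega_m / 0.3) ** alpha

fiducial = s8(sigma8=0.81, omega_m=0.30)   # assumed fiducial point
shifted = s8(sigma8=0.79, omega_m=0.32)    # hypothetical shift in the recovered parameters

print(f"fiducial S8 = {fiducial:.3f}")
print(f"shifted  S8 = {shifted:.3f} (delta = {shifted - fiducial:+.3f})")
```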

    Very weak lensing in the CFHTLS Wide: Cosmology from cosmic shear in the linear regime

    We present an exploration of weak lensing by large-scale structure in the linear regime, using the third-year (T0003) CFHTLS Wide data release. Our results place tight constraints on the scaling of the amplitude of the matter power spectrum sigma_8 with the matter density Omega_m. Spanning 57 square degrees to i'_AB = 24.5 over three independent fields, the unprecedented contiguous area of this survey permits high signal-to-noise measurements of two-point shear statistics from 1 arcmin to 4 degrees. Understanding systematic errors in our analysis is vital in interpreting the results. We therefore demonstrate the percent-level accuracy of our method using STEP simulations, an E/B-mode decomposition of the data, and the star-galaxy cross-correlation function. We also present a thorough analysis of the galaxy redshift distribution using redshift data from the CFHTLS T0003 Deep fields that probe the same spatial regions as the Wide fields. We find sigma_8(Omega_m/0.25)^0.64 = 0.785 +- 0.043 using the aperture-mass statistic over the full range of angular scales for an assumed flat cosmology, in excellent agreement with WMAP3 constraints. The largest physical scale probed by our analysis is 85 Mpc, assuming a mean lens redshift of 0.5 and a LCDM cosmology. This allows, for the first time, cosmology to be constrained using cosmic shear measurements in the linear regime alone. Using only angular scales theta > 85 arcmin, we find sigma_8(Omega_m/0.25)_lin^0.53 = 0.837 +- 0.084, which agrees with the results from our full analysis. Combining our results with data from WMAP3, we find Omega_m = 0.248 +- 0.019 and sigma_8 = 0.771 +- 0.029.
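
To make the quoted degeneracy constraint easier to read, here is a short worked evaluation of the central value only (ignoring the quoted uncertainty and the full likelihood analysis):

```latex
% Reading sigma_8 off the degeneracy line sigma_8 (Omega_m/0.25)^{0.64} = 0.785
\sigma_8(\Omega_m) = 0.785\,\left(\frac{\Omega_m}{0.25}\right)^{-0.64},
\qquad
\sigma_8(0.25) = 0.785,
\qquad
\sigma_8(0.30) \approx 0.785 \times 1.2^{-0.64} \approx 0.70 .
```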

    Prognosis and course of work-participation in patients with chronic non-specific low back pain: a 12-month follow-up cohort study

    OBJECTIVE: To investigate the clinical course of, and prognostic factors for, work-participation in patients with chronic non-specific low back pain. METHODS: A total of 1,608 patients with chronic non-specific low back pain received multidisciplinary therapy and were evaluated at baseline and at 2-, 5- and 12-month follow-ups. Recovery was defined as absolute recovery if the patient worked ≥90% of their contract hours at follow-up. Potential prognostic factors were identified using multivariable logistic regression analysis. RESULTS: Patients reported a mean increase in work-participation from 38% at baseline to 82% after 12 months. Prognostic factors for ≥90% work-participation at 5 months were being married (odds ratio (OR) 1.72 (95% confidence interval (95% CI) 1.12-2.65)), male sex (OR 1.99 (95% CI 1.24-3.20)), a higher score on disability (OR 1.00 (95% CI 0.997-1.02)) and on the physical component scale of the Short-Form 36 (SF-36) (OR 1.05 (95% CI 1.02-1.07)), previous rehabilitation (OR 1.85 (95% CI 1.14-2.98)), not receiving sickness benefits (OR 0.52 (95% CI 0.24-1.10)) and more work-participation (OR 4.86 (95% CI 2.35-10.04)). More work-participation (OR 5.22 (95% CI 3.47-7.85)) and male sex (OR 1.79 (95% CI 1.25-2.55)) were also prognostic factors at the 12-month follow-up. CONCLUSION: At 12 months, 52% of patients reported ≥90% work-participation. The strongest prognostic factor for recovery from chronic non-specific low back pain was more work-participation at baseline.
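
The odds ratios above come from a multivariable logistic regression; the sketch below shows how such ORs and 95% CIs are typically obtained. The file name, column names and model specification are hypothetical, not the authors' code.

```python
# Minimal sketch: odds ratios with 95% CIs from a multivariable logistic regression.
# The data file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("followup.csv")              # one row per patient (hypothetical)
y = df["work_participation_ge_90"]            # 1 if >=90% of contract hours worked at follow-up
X = sm.add_constant(df[["married", "male", "disability_score", "sf36_physical",
                        "previous_rehab", "sickness_benefits",
                        "baseline_work_participation"]])

fit = sm.Logit(y, X).fit(disp=False)
summary = pd.DataFrame({
    "OR": np.exp(fit.params),                 # exponentiated coefficients = odds ratios
    "CI_low": np.exp(fit.conf_int()[0]),      # lower 95% bound
    "CI_high": np.exp(fit.conf_int()[1]),     # upper 95% bound
})
print(summary.round(2))
```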

    Small portion sizes in worksite cafeterias: do they help consumers to reduce their food intake?

    Background: Environmental interventions directed at portion size might help consumers to reduce their food intake. Objective: To assess whether offering a smaller hot meal, in addition to the existing size, stimulates people to replace their large meal with a smaller meal. Design: Longitudinal randomized controlled trial assessing the impact of introducing small portion sizes and pricing strategies on consumer choices. Setting/participants: In all, 25 worksite cafeterias and a panel consisting of 308 consumers (mean age 39.18 years, 50% women). Intervention: A small portion size of hot meals was offered in addition to the existing size. The meals were either proportionally priced (that is, the price per gram was comparable regardless of the size) or value size pricing was employed. Main outcome measures: Daily sales of small meals and the total number of meals, consumers' self-reported compensation behavior and frequency of purchasing small meals. Results: The ratio of small meal sales to large meal sales was 10.2%. No effect of proportional pricing was found (B = -0.11 (0.33), P = 0.74, confidence interval (CI): -0.76 to 0.54). The consumer data indicated that 19.5% of the participants who had selected a small meal often-to-always purchased more products than usual in the worksite cafeteria. Small meal purchases were negatively related to being male (B = -0.85 (0.20), P = 0.00, CI: -1.24 to -0.46, n = 178). Conclusion: When a small meal was offered in addition to the existing size, a reasonable percentage of consumers were inclined to replace the large meal with the small meal. Proportional prices did not have an additional effect. The possible occurrence of compensation behavior is an issue that merits further attention.
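
To make the two pricing strategies mentioned above concrete, the toy calculation below contrasts proportional pricing (constant price per gram) with value size pricing (the larger portion is the better deal per gram). All portion weights and prices are made-up numbers for illustration only.

```python
# Toy comparison of the two pricing strategies described in the abstract.
# All weights and prices are hypothetical.
large_grams, large_price = 450, 4.50          # existing large meal (assumed)
small_grams = 300                             # newly offered small portion (assumed)

proportional_price = small_grams * large_price / large_grams   # same price per gram as large
value_size_price = 3.60                       # assumed: small meal priced higher per gram

for label, price in [("large meal", large_price),
                     ("small, proportional pricing", proportional_price),
                     ("small, value size pricing", value_size_price)]:
    grams = large_grams if label == "large meal" else small_grams
    print(f"{label:28s} {price:.2f} ({100 * price / grams:.2f} per 100 g)")
```

In this toy example, value size pricing makes the small meal relatively more expensive per gram, a financial nudge towards the large portion; proportional pricing removes that nudge.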

    Fractional Dirac Bracket and Quantization for Constrained Systems

    So far, it is not well known how to deal with dissipative systems. There are many paths of investigation in the literature and none of them presents a systematic and general procedure to tackle the problem. On the other hand, it is well known that the fractional formalism is a powerful alternative when treating dissipative problems. In this paper we propose a detailed way of attacking the issue, using fractional calculus to construct an extension of the Dirac brackets in order to carry out the quantization of nonconservative theories through the standard canonical way. We believe that, using the extended Dirac bracket definition, it will be possible to analyze gauge theories more deeply, starting with second-class systems.
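
For orientation, the standard (integer-order) Dirac bracket that the paper sets out to generalise is the textbook definition below, where the φ_a are the second-class constraints; this is background, not the fractional extension constructed in the paper.

```latex
% Standard Dirac bracket for second-class constraints \varphi_a,
% with constraint matrix C_{ab} = \{\varphi_a, \varphi_b\}:
\{A, B\}_{D} = \{A, B\}_{PB}
  - \{A, \varphi_a\}\,(C^{-1})^{ab}\,\{\varphi_b, B\},
\qquad
C_{ab} = \{\varphi_a, \varphi_b\}.
```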

    The WiggleZ Dark Energy Survey: Direct constraints on blue galaxy intrinsic alignments at intermediate redshifts

    Correlations between the intrinsic shapes of galaxy pairs, and between the intrinsic shapes of galaxies and the large-scale density field, may be induced by tidal fields. These correlations, which have been detected at low redshifts (z<0.35) for bright red galaxies in the Sloan Digital Sky Survey (SDSS), and for which upper limits exist for blue galaxies at z~0.1, provide a window into galaxy formation and evolution, and are also an important contaminant for current and future weak lensing surveys. Measurements of these alignments at intermediate redshifts (z~0.6) that are more relevant for cosmic shear observations are very important for understanding the origin and redshift evolution of these alignments, and for minimising their impact on weak lensing measurements. We present the first such intermediate-redshift measurement for blue galaxies, using galaxy shape measurements from SDSS and spectroscopic redshifts from the WiggleZ Dark Energy Survey. Our null detection allows us to place upper limits on the contamination of weak lensing measurements by blue galaxy intrinsic alignments that, for the first time, do not require significant model-dependent extrapolation from the z~0.1 SDSS observations. Also, combining the SDSS and WiggleZ constraints gives us a long redshift baseline with which to constrain intrinsic alignment models and contamination of the cosmic shear power spectrum. Assuming that the alignments can be explained by linear alignment with the smoothed local density field, we find that a measurement of \sigma_8 in a blue-galaxy dominated, CFHTLS-like survey would be contaminated by at most +/-0.02 (95% confidence level, SDSS and WiggleZ) or +/-0.03 (WiggleZ alone) due to intrinsic alignments. [Abridged]
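
For reference, the "linear alignment with the smoothed local density field" model mentioned above is usually summarised by intrinsic-intrinsic (II) and shear-intrinsic (GI) power spectra of the schematic form below; this is the standard linear-alignment scaling from the literature, quoted here for orientation rather than taken from this paper, with C_1 a normalisation constant, \bar{\rho} the mean matter density and D(z) the linear growth factor.

```latex
% Schematic linear-alignment (LA) model spectra (standard literature form):
P_{\mathrm{II}}(k,z) = \left(\frac{C_1\,\bar{\rho}(z)}{D(z)}\right)^{2} P_{\delta}(k,z),
\qquad
P_{\mathrm{GI}}(k,z) = -\,\frac{C_1\,\bar{\rho}(z)}{D(z)}\, P_{\delta}(k,z).
```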

    Predicting long-term sickness absence among retail workers after four days of sick-listing

    Objective: This study tested and validated an existing tool for its ability to predict the risk of long-term (ie, ≥6 weeks) sickness absence (LTSA) after four days of sick-listing. Methods: A 9-item tool is completed online on the fourth day of sick-listing. The tool was tested in a sample (N=13 597) of food retail workers who reported sick between March and May 2017. It was validated in a new sample (N=104 698) of workers (83% retail) who reported sick between January 2020 and April 2021. LTSA risk predictions were calibrated with the Hosmer-Lemeshow (H-L) test; non-significant H-L P-values indicated adequate calibration. Discrimination between workers with and without LTSA was investigated with the area under the receiver operating characteristic (ROC) curve (AUC). Results: The data of 2203 (16%) workers in the test sample and 14 226 (13%) workers in the validation sample were available for analysis. In the test sample, the tool together with age and sex predicted LTSA (H-L test P=0.59) and discriminated between workers with and without LTSA [AUC 0.85, 95% confidence interval (CI) 0.83–0.87]. In the validation sample, LTSA risk predictions were adequate (H-L test P=0.13) and discrimination was excellent (AUC 0.91, 95% CI 0.90–0.92). The ROC curve had an optimal cut-off at a predicted 36% LTSA risk, with sensitivity 0.85 and specificity 0.83. Conclusion: The existing 9-item tool can be used to invite sick-listed retail workers with a ≥36% LTSA risk for expedited consultations. Further studies are needed to determine LTSA cut-off risks for other economic sectors.
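
A minimal sketch of how the discrimination (AUC) and the optimal cut-off reported above are typically computed, using an ROC curve and the Youden index; the arrays are toy data and the 9-item tool itself is not reproduced here.

```python
# Minimal sketch: AUC and an optimal ROC cut-off from predicted LTSA risks.
# y_true and y_risk are toy data, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])                           # observed LTSA (toy)
y_risk = np.array([0.10, 0.50, 0.70, 0.30, 0.90, 0.40, 0.20, 0.80])   # predicted risk (toy)

auc = roc_auc_score(y_true, y_risk)                 # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_risk)

youden = tpr - fpr                                  # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(youden))
print(f"AUC = {auc:.2f}")
print(f"optimal cut-off = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```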

    Cosmic Shear Analysis with CFHTLS Deep data

    We present the first cosmic shear measurements obtained from the T0001 release of the Canada-France-Hawaii Telescope Legacy Survey. The data set covers three uncorrelated patches (D1, D3 and D4) of one square degree each, observed in the u*, g', r', i' and z' bands, out to i'=25.5. The depth and the multicolored observations done in the deep fields enable several data quality controls. The lensing signal is detected in both the r' and i' bands and shows similar amplitude and slope in both filters. B-modes are found to be statistically zero at all scales. Using multi-color information, we derived a photometric redshift for each galaxy and separated the sample into medium- and high-z galaxies. A stronger shear signal is detected from the high-z subsample than from the low-z subsample, as expected from weak lensing tomography. While further work is needed to model the effects of errors in the photometric redshifts, this result suggests that it will be possible to obtain constraints on the growth of dark matter fluctuations with wide-field lensing surveys. The various quality tests and analyses discussed in this work demonstrate that the MegaPrime/Megacam instrument produces excellent quality data. The combined Deep and Wide surveys give sigma_8 = 0.89 +- 0.06 assuming the Peacock & Dodds non-linear scheme and sigma_8 = 0.86 +- 0.05 for the halo fitting model, with Omega_m = 0.3. We assumed a Cold Dark Matter model with flat geometry. Systematics, Hubble constant and redshift uncertainties have been marginalized over. Using only data from the Deep survey, the 1 sigma upper bound for w_0, the constant equation of state parameter, is w_0 < -0.8.