
    Human African Trypanosomiasis: Real Obstacles to Elimination

    Abstract presented at: 5ème Congrès International de Pathologie Infectieuse et Parasitaire, in the presence of the Minister of Health, Kinshasa, DRC, November 2009. Significant progress has been made in controlling human African trypanosomiasis (HAT) caused by T.b. gambiense, as evidenced by the clear decline in the number of reported cases in recent years. The prevailing discourse now concerns the possible elimination of HAT and the need to integrate its treatment into existing health structures. However, "hot spots" still exist, one of which is the northeastern region of Orientale Province in the Democratic Republic of Congo (DRC). This region has neither a monitoring system nor working health centres capable of diagnosing and treating patients.

    An assessment carried out in 2004 by the DRC's national HAT control program and Doctors Without Borders/Médecins Sans Frontières (MSF) found an alarming prevalence of 2.1% in the region. Between June 2007 and March 2009, MSF ran a HAT monitoring program in the Doruma, Ango, and Bili health zones. The overall prevalence was found to be 3.4%: of the 46,601 people tested (18,559 through passive screening and 28,042 through active screening), 1,570 were infected with T.b. gambiense. Of that group, 947 (60%) were in the first phase of HAT, indicating intense transmission of the disease.

    Due to the acute insecurity in this region of the DRC, MSF had to suspend its projects in March 2009, even though the limits of the disease foci had not yet been reached. Moreover, the disease could spread further through the displacement of entire populations fleeing the insecurity toward areas previously "cleaned" of HAT.

    The intervention, which took place during a crisis, leads us to question the feasibility of eliminating HAT and integrating its treatment into health services in crisis areas where those services are minimal
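The quoted rates follow directly from the screening counts; a quick arithmetic check (a sketch, not part of the original abstract):

```python
# Check of the screening figures quoted above (counts from the abstract).
tested_passive = 18_559
tested_active = 28_042
infected = 1_570
first_phase = 947

total_tested = tested_passive + tested_active   # 46,601 people tested
prevalence = infected / total_tested            # overall prevalence
share_first_phase = first_phase / infected      # share in first phase of HAT

print(f"prevalence = {prevalence:.1%}, first phase = {share_first_phase:.0%}")
```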

    Critically Appraised Paper for “The Effect of Modified Constraint-Induced Movement Therapy on Spasticity and Motor Function of the Affected Arm in Patients with Chronic Stroke.”

    This study explored the effect of modified constraint-induced movement therapy (CIMT) on the spasticity and functional use of the affected arm and hand among persons of working age who presented with spastic hemiplegia resulting from a stroke that had occurred more than 6 months earlier. The researchers developed a modified CIMT program for use in an outpatient rehabilitation clinic with intensive and varied exercise training aimed at targeting the negative symptoms of spastic hemiplegia. Previous research on CIMT has taken place in laboratory settings and has not specifically focused on CIMT’s effects on spasticity. The researchers used a battery of assessments to evaluate the effects of the modified CIMT program on spasticity, active range of motion (AROM), grip strength, daily hand use, functional change in dexterity, and gross manual dexterity of the affected limb. Participants took part in a 2-week modified CIMT intervention in which they were instructed to wear a restraint on their unaffected arm for 90% of each day and were encouraged to actively use the affected arm in daily activities at home. From Monday through Friday, participants completed an individualized training program for 6 hr/day at the outpatient clinic. On the weekends, participants were instructed to continue wearing the restraint; they were asked not to perform any exercises but to continue with their daily activities. The training program was implemented at an outpatient rehabilitation clinic by an occupational therapist and a physiotherapist. Participants were initially assessed for baseline data. They were then retested for changes in spasticity and functional use of the affected limb after the 2-week modified CIMT training period and again at the 6-month follow-up.
At the end of the 2-week training period and the 6-month postintervention follow-up, results showed that application of the modified CIMT program was successful in reducing spasticity in the affected elbow and wrist flexors, increasing AROM in the affected elbow and wrist, increasing grip strength of the affected hand, and increasing functional use in the affected arm and hand. This study suggests that a 2-week modified CIMT program, using intensive and varied exercise training aimed at the negative symptoms of spastic hemiplegia, can be used in outpatient rehabilitation clinics to reduce spasticity and increase functional use among persons with poststroke upper extremity spastic hemiplegia. This study further suggests that these changes may persist 6 months after completion of the program. This study lacks generalizability to populations outside the intervention group, because of its small sample size and noninclusion of patients older than 67 years. This study also lacks a control group, which diminishes its validity. In summary, a modified CIMT intervention shows promising results for reducing spasticity among persons ages 22–67 years with poststroke upper extremity spastic hemiplegia; however, research on this topic would benefit from further validation through studies that include a larger sample size, a control group, and a greater age range of participants

    Corporate Hedging, Executive Compensation and Commodity Price Prediction

    This thesis examines the agency problem surrounding the corporate hedging decision. It gives insight into how managerial incentives impact corporate hedging decisions, how executive compensation can be used to minimize the agency problem, and which factors determine the optimal compensation. The model predictions are then tested against empirical data. One of the factors affecting optimal executive compensation is the volatility of commodity prices. To explore this, the last chapter develops an empirical model to forecast commodity prices. Past theoretical and empirical studies found that risk-averse managers tend to overhedge, without analyzing how to align shareholders' and managers' hedging strategies. In this dissertation I develop a model that aligns hedging strategies using executive compensation, incorporating a risk-averse manager's utility into the hedging decision. Consistent with standard theories, the model shows that managers hedge more of the expected production than shareholders would. The model shows a decrease in corporate hedging in the presence of managerial equity-based incentive pay. It also shows that managerial incentives can be used to influence corporate hedging so as to minimize the agency problem. To align and optimize managerial hedging decisions, the optimal managerial incentive should comprise a larger equity-based portion when there is a low coefficient of absolute risk aversion, low price volatility, or low variable cost. In contrast, when there is a high coefficient of absolute risk aversion, high price volatility, or high variable cost, it is best to compensate the manager with a smaller equity-based portion in order to optimally align hedging decisions. In other words, by determining and examining the primary factors affecting the compensation scheme (risk aversion, price volatility, and profit margin), we can determine the optimal compensation scheme.
    When there is a low (high) coefficient of absolute risk aversion, low (high) price volatility, or low (high) variable cost, then optimal compensation should comprise more (less) equity-based incentives. Next, using empirical data I test the model predictions from the theoretical framework: (i) when incentive pay increases, the optimal hedge ratio decreases; (ii) when price volatility increases, the optimal hedge ratio decreases, while price volatility has a negative relation with equity-based incentives; (iii) when risk aversion increases, the optimal hedge ratio decreases, while risk aversion has a negative relation with equity-based incentives; and (iv) when variable cost increases, the optimal hedge ratio decreases, while variable cost has a negative relation with equity-based incentives. The predictions are tested against data obtained from oil and gas firms using a standard regression approach. I find that the model predictions are further supported by empirical evidence from the oil and gas industry, showing (i) a negative relationship between incentive pay and the hedge ratio, (ii) a negative relationship between price volatility and the hedge ratio/incentive pay, (iii) a negative relationship between risk aversion and the hedge ratio/incentive pay, and (iv) a negative relationship between variable cost and the hedge ratio/incentive pay. Overall, the first two chapters clarify the optimal compensation scheme under varying economic environments in order to mitigate the agency problem associated with hedging decisions. Last, a new model for the West Texas Intermediate (WTI) crude oil price series is introduced, which accommodates spikes and local trends in its trajectory, as well as the multimodality of its sample distribution. The model relies on the convolution of two stationary processes, a causal and a noncausal process, which allows for the estimation of the monthly WTI crude oil price series.
    As an alternative specification, mixed causal-noncausal autoregressive (MAR) models are estimated and used for oil price prediction. Two forecasting methods developed in the literature on MAR processes are applied to the data and compared. In addition, this chapter examines the long-term relationships between the WTI crude oil price, the Ontario Energy Price Index (OEP), and the Ontario Consumer Price Index (OCPI). These relationships are established using cointegration analysis. The vector error correction (VEC) model allows us to predict the Ontario price indexes and the WTI crude oil price. This chapter presents an alternative, simple method of forecasting the Ontario price indexes from stationary combinations of WTI crude oil price forecasts obtained from the MAR models, and shows that both prediction methods yield forecasts that closely approximate the out-of-sample values
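A vector error correction forecast works by letting each series adjust toward a long-run cointegrating relation. The toy sketch below (hypothetical adjustment speeds and cointegrating coefficient, not the thesis's estimates) iterates a two-variable VEC forward, with an oil price y and a price index x tied by the relation z = y − beta·x:

```python
# Minimal VEC forecasting mechanics with assumed coefficients (illustration
# only; the thesis estimates these from WTI, OEP, and OCPI data).
alpha_y, alpha_x = -0.2, 0.1   # adjustment speeds (assumed)
beta = 2.0                     # cointegrating coefficient (assumed)

def vec_forecast(y, x, steps):
    """Iterate the error-correction equations forward `steps` periods."""
    path = []
    for _ in range(steps):
        z = y - beta * x       # deviation from the long-run relation
        y = y + alpha_y * z    # y adjusts down when it is above the relation
        x = x + alpha_x * z    # x adjusts up when y is above the relation
        path.append((round(y, 4), round(x, 4)))
    return path

print(vec_forecast(110.0, 50.0, 3))
```

Because the adjustment terms push z toward zero, the forecast path converges back to the cointegrating relation, which is what makes stationary combinations of the series forecastable.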

    Instrumento para medir el empoderamiento de la mujer (IMEM): psychometric evidence in young women from Metropolitan Lima

    This study aimed to analyze the psychometric properties of the Instrumento para Medir el Empoderamiento en Mujeres (IMEM; Instrument for Measuring Women's Empowerment) in young women from Metropolitan Lima, using a sample of 300 women aged 18 to 29 years (M = 23.15, SD = 3.282). Item analysis revealed inadequate values for several items, so those lacking metric quality were removed. Likewise, the confirmatory factor analysis (CFA) yielded unfavorable fit indices, so an exploratory factor analysis (EFA) was carried out, which retained 22 items across 4 factors explaining 39.7% of the total variance. However, because the correlation between the third and fourth factors was small (r = .057), a second-order model was analyzed, which showed adequate fit indices: X2/df = 2.23, NFI = .952, CFI = .934, GFI = .972, AGFI = .959, RMSEA = .061, SRMR = .080, and TLI = .925. Furthermore, the correlation with the CD-RISC10 was direct and significant, confirming convergent validity. Internal consistency was ω > .70 for the total scale and its factors. Finally, normative data were developed with three levels (low empowerment ≤ 63; moderate empowerment = 64 to 75; high empowerment ≥ 76)
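As a small illustration, the normative cutoffs reported above can be applied to a total score with a helper like this (the function name and English labels are ours, not part of the IMEM):

```python
# Hypothetical helper applying the IMEM normative cutoffs from the abstract:
# low <= 63, moderate 64-75, high >= 76.
def empowerment_level(score):
    if score <= 63:
        return "low"
    if score <= 75:
        return "moderate"
    return "high"  # score >= 76

print([empowerment_level(s) for s in (60, 64, 75, 76)])
```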

    Would firm generators facilitate or deter variable renewable energy in a carbon-free electricity system?

    To reduce atmospheric carbon dioxide emissions and mitigate impacts of climate change, countries across the world have mandated quotas for renewable electricity. But a question has remained largely unexplored: would low-cost, firm, zero-carbon electricity generation technologies enhance—or would they displace—deployment of variable renewable electricity generation technologies, i.e., wind and solar photovoltaics, in a least-cost, fully reliable, and deeply decarbonized electricity system? To address this question, we modeled idealized electricity systems based on historical weather data and considered only technoeconomic factors. We did not apply a predetermined use model. We found that cost reductions in firm generation technologies (starting at current costs, ramping down to nearly zero) uniformly resulted in increased penetration of the firm technologies and decreased penetration of variable renewable electricity generation, in electricity systems where technology deployment is primarily driven by relative costs, and across a wide array of future technology cost assumptions. Similarly, reduced costs of variable renewable electricity (starting at current costs, ramping down to nearly zero) drove out firm generation technologies. Yet relative to deployment of “must-run” firm generation technologies, and when the studied firm technologies have high fixed costs relative to variable costs, the addition of flexibility to firm generation technologies had only limited impacts on the system cost, less than a 9% system cost reduction in our idealized model. These results reveal that policies and funding that support particular technologies for low- or zero-carbon electricity generation can inhibit the development of other low- or zero-carbon alternatives
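The cost-driven substitution the abstract describes can be illustrated with a toy capacity-planning search (all demand, availability, and cost numbers below are invented, and the model is far simpler than the paper's): choose firm and variable-renewable capacity so that demand is met in every hour at least total cost.

```python
# Toy least-cost capacity mix (assumed numbers, not the paper's model).
# Firm output is dispatchable up to capacity; renewable output is capacity
# times an hourly availability factor.
demand = [80, 100, 120, 90]          # MW per hour (toy profile)
availability = [0.9, 0.5, 0.4, 0.7]  # renewable capacity factor per hour

def system_cost(firm_cap, vre_cap, firm_cost, vre_cost):
    for d, a in zip(demand, availability):
        if vre_cap * a + firm_cap < d:   # firm fills the residual load
            return None                  # infeasible: unmet demand
    return firm_cost * firm_cap + vre_cost * vre_cap

def least_cost_mix(firm_cost, vre_cost):
    """Brute-force search over a capacity grid for the cheapest feasible mix."""
    best = None
    for firm_cap in range(0, 201, 5):
        for vre_cap in range(0, 401, 5):
            c = system_cost(firm_cap, vre_cap, firm_cost, vre_cost)
            if c is not None and (best is None or c < best[0]):
                best = (c, firm_cap, vre_cap)
    return best  # (total cost, firm capacity, renewable capacity)

print(least_cost_mix(firm_cost=10, vre_cost=8))  # expensive renewables
print(least_cost_mix(firm_cost=10, vre_cost=1))  # cheap renewables
```

In this toy setting, expensive renewables lead the search to an all-firm system, while cheap renewables drive firm capacity out entirely, mirroring the mutual crowding-out effect the abstract reports.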

    Turbulence in Focus: Benchmarking Scaling Behavior of 3D Volumetric Super-Resolution with BLASTNet 2.0 Data

    Analysis of compressible turbulent flows is essential for applications related to propulsion, energy generation, and the environment. Here, we present BLASTNet 2.0, a 2.2 TB network-of-datasets containing 744 full-domain samples from 34 high-fidelity direct numerical simulations, which addresses the current limited availability of 3D high-fidelity reacting and non-reacting compressible turbulent flow simulation data. With this data, we benchmark a total of 49 variations of five deep learning approaches for 3D super-resolution - which can be applied for improving scientific imaging, simulations, turbulence models, as well as in computer vision applications. We perform neural scaling analysis on these models to examine the performance of different machine learning (ML) approaches, including two scientific ML techniques. We demonstrate that (i) predictive performance can scale with model size and cost, (ii) architecture matters significantly, especially for smaller models, and (iii) the benefits of physics-based losses can persist with increasing model size. The outcomes of this benchmark study are anticipated to offer insights that can aid the design of 3D super-resolution models, especially for turbulence models, while this data is expected to foster ML methods for a broad range of flow physics applications. This data is publicly available with download links and browsing tools consolidated at https://blastnet.github.io.

    Comment: Accepted in Advances in Neural Information Processing Systems 36 (NeurIPS 2023). 55 pages, 21 figures. v2: Corrected co-author name. Keywords: Super-resolution, 3D, Neural Scaling, Physics-informed Loss, Computational Fluid Dynamics, Partial Differential Equations, Turbulent Reacting Flows, Direct Numerical Simulation, Fluid Mechanics, Combustion
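Neural scaling analysis of the kind described above typically fits a power law, loss ≈ a·N^(−b), to (model size, loss) pairs via least squares in log-log space. A minimal sketch with synthetic numbers (not BLASTNet results):

```python
# Fit loss = a * N**(-b) by ordinary least squares on log-log data.
# The model sizes and losses below are synthetic, for illustration only.
import math

params = [1e5, 1e6, 1e7, 1e8]       # model sizes N (assumed)
loss = [0.50, 0.33, 0.22, 0.15]     # validation losses (synthetic)

xs = [math.log(n) for n in params]
ys = [math.log(v) for v in loss]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
b = -slope                          # scaling exponent
a = math.exp(my - slope * mx)       # prefactor

print(f"loss ~ {a:.2f} * N^(-{b:.3f})")
```

A larger fitted exponent b means the approach benefits more from added parameters, which is how scaling behavior of different architectures can be compared.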