86 research outputs found

    Fine-grained analysis of the interactions between velocity and temperature fluctuations in a mixing layer

    We study the velocity-temperature coupling within an anisothermal mixing layer between two air streams of different velocities and temperatures, such as those found in air-curtain devices used to separate ambient zones. The measurements, based on a new hot-wire anemometry technique using a single hot wire with variable overheat, are synchronized and allow a fine-grained analysis of the interactions between velocity and temperature in the turbulent mixing. A quadrant analysis confirms that, within the turbulent heat flux, ejections at the saddle points of the inter-vortex regions dominate
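The quadrant analysis mentioned in the abstract splits the instantaneous heat-flux contributions u'T' by the signs of the two fluctuations. A minimal sketch on synthetic correlated signals (the data here are assumed, not from the PCTA measurements):

```python
# Quadrant analysis of the turbulent heat flux u'T' -- a minimal sketch
# of the method named in the abstract. The signals below are synthetic
# (an assumption); real data would come from the hot-wire probe.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = rng.normal(size=n)                  # velocity fluctuation u'
t = 0.6 * u + 0.8 * rng.normal(size=n)  # correlated temperature fluctuation T'

flux = u * t                            # instantaneous contribution to u'T'
quadrants = {
    "Q1 (u'>0, T'>0)": (u > 0) & (t > 0),
    "Q2 (u'<0, T'>0)": (u < 0) & (t > 0),
    "Q3 (u'<0, T'<0)": (u < 0) & (t < 0),
    "Q4 (u'>0, T'<0)": (u > 0) & (t < 0),
}
# Each quadrant's share of the total mean heat flux
contributions = {q: flux[m].sum() / n for q, m in quadrants.items()}
total = sum(contributions.values())
print(contributions)
print("mean u'T' =", flux.mean())
```

The quadrant contributions partition the mean flux exactly, so comparing their magnitudes shows which kinds of events (e.g. ejections) dominate the transport.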

    Experimental study of an anisothermal mixing layer

    A plane anisothermal mixing layer is studied under several configurations of forced velocity and temperature gradients. The flow is produced in a wind tunnel specially designed to generate low-speed flows from two separately generated streams with independently controlled velocities and temperatures. The study uses a new hot-wire anemometry technique with programmable overheat, called PCTA. The probe measures velocity and temperature simultaneously, at high frequency, at a single point. Transverse profiles of velocity and temperature measured along the main flow direction give access to the expansion parameters of the mixing layer. The growth of the vorticity thickness and of the thermal mixing thickness is compared as a function of the dynamic shear parameter and of the Richardson number. The PCTA anemometer opens prospects for fine-grained analysis of velocity-temperature interactions in the turbulent mixing
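The vorticity thickness referred to in the abstract is commonly defined as δ_ω = ΔU / max|dU/dy| across the layer. A minimal sketch on an assumed tanh mean-velocity profile (illustrative only, not data from the wind tunnel):

```python
# Vorticity thickness of a mixing layer: delta_omega = DeltaU / max|dU/dy|.
# The tanh mean-velocity profile and its parameters are illustrative
# assumptions; for U = (DeltaU/2) * tanh(2y/delta), the maximum slope is
# DeltaU/delta, so delta_omega should recover delta.
import numpy as np

delta_u = 3.0    # velocity difference between the two streams (m/s)
delta = 0.05     # imposed profile width (m)
y = np.linspace(-0.5, 0.5, 20001)
U = 0.5 * delta_u * np.tanh(2 * y / delta)   # mean streamwise velocity

dUdy = np.gradient(U, y)                     # numerical transverse gradient
delta_omega = delta_u / np.abs(dUdy).max()
print(f"vorticity thickness: {delta_omega:.4f} m")
```

Applied to measured transverse profiles at successive streamwise stations, the same calculation yields the expansion rate of the layer discussed in the abstract.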

    The response of North Sea ecosystem functional groups to warming and changes in fishing

    Achieving Good Environmental Status (GES) requires managing ecosystems subject to a variety of pressures such as climate change, eutrophication, and fishing. However, ecosystem models are generally much better at representing top-down impacts from fishing than bottom-up impacts due to warming or changes in nutrient loading. Bottom-up processes often have to be parameterised with little data or worse still taken as a system input rather than being represented explicitly. In this study we use an end-to-end ecosystem model (StrathE2E2) for the North Sea with 18 broad functional groups, five resource pools, and representations of feeding, metabolism, reproduction, active migrations, advection, and mixing. Environmental driving data include temperature, irradiance, hydrodynamics, and nutrient inputs from rivers, atmosphere, and ocean boundaries, so the model is designed to evaluate rigorously top-down and bottom-up impacts and is ideal for looking at possible changes in energy flows and “big picture” ecosystem function. In this study we considered the impacts of warming (2 and 4°C) and various levels of fishing, by demersal and pelagic fleets, on the structure and function of the foodweb. A key aim is to demonstrate whether monitoring of broad ecosystem groups could assist in deciding whether GES was being achieved. We found that warming raised primary productivity and increased the size (total biomass) of the ecosystem. Warming raised metabolic demands on omnivorous zooplankton and reduced their abundance, thus favouring benthivorous and piscivorous demersal fish at the expense of planktivorous pelagic fish but otherwise had modest effects on energy pathways and top predators, whereas changes in fishing patterns could materially alter foodweb function and the relative outcomes for top predators. 
We suggest that GES should be defined in terms of an unfished state and that the abundances of broad groupings, and the balance between them, can help to assess whether indicator outcomes are consistent with GES. Our findings underscore the need for an ecosystem approach to the management of human activities, supported by relevant monitoring. We also highlight the need to improve our basic understanding of bottom-up processes, improve their representation within models, and ensure that our ecosystem models can capture growth limitation by nitrogen and other elements, not just food/energy uptake

    High-Throughput High-Resolution Class I HLA Genotyping in East Africa

    The HLA loci, the most genetically diverse in the human genome, play a crucial role in host-pathogen interaction by mediating innate and adaptive cellular immune responses. A vast number of infectious diseases affect East Africa, including HIV/AIDS, malaria, and tuberculosis, but the HLA genetic diversity of this region remains incompletely described. This is a major obstacle to the design and evaluation of preventive vaccines. Available HLA typing techniques that provide the 4-digit resolution needed to interpret immune responses lack sufficient throughput for large immunoepidemiological studies. Here we present a novel HLA typing assay bridging the gap between high resolution and high throughput. The assay is based on real-time PCR using sequence-specific primers (SSP) and can genotype carriers of the 49 most common East African class I HLA-A, -B, and -C alleles at the 4-digit level. Using a validation panel of 175 samples from Kampala, Uganda, previously defined by sequence-based typing, the new assay performed with 100% sensitivity and specificity. The assay was also used to define the HLA genetic complexity of a previously uncharacterized Tanzanian population, demonstrating its inclusion in the major East African genetic cluster. The availability of genotyping tools with this capacity will be extremely useful in the identification of correlates of immune protection and the evaluation of candidate vaccine efficacy

    Transcriptomic Analysis of Human Retinal Detachment Reveals Both Inflammatory Response and Photoreceptor Death

    Background: Retinal detachment often leads to a severe and permanent loss of vision, and its therapeutic management remains to this day exclusively surgical. We have used surgical specimens to perform a differential analysis of the transcriptome of human retinal tissues following detachment, in order to identify new potential pharmacological targets that could be used in combination with surgery to further improve the final outcome. Methodology/Principal Findings: Statistical analysis reveals major involvement of the immune response in the disease. Interestingly, using a novel approach relying on coordinated expression, the interindividual variation was monitored to unravel a second crucial aspect of the pathological process: the death of photoreceptor cells. Among the genes identified, the expression of the major histocompatibility complex I gene HLA-C enables diagnosis of the disease, while PKD2L1 and SLCO4A1, which are both down-regulated, act synergistically to provide an estimate of the duration of the retinal detachment process. Our analysis thus reveals the two complementary cellular and molecular aspects linked to retinal detachment: an immune response and the degeneration of photoreceptor cells. We also reveal that the human specimens have a higher clinical value than artificial models, which point to IL6 and oxidative stress, neither of which was implicated in the surgical specimens studied here. Conclusions/Significance: This systematic analysis confirmed the occurrence of both neurodegeneration and inflammation during retinal detachment, and further identifies precisely the changes in expression of the different genes implicated in these two phenomena. Our data thus give new insight into the disease process and provide a rationale for therapeutic strategies aimed at limiting inflammation and photoreceptor damage associated with retinal detachment and, in turn, improving visual prognosis after retinal surgery

    Safety and Reactogenicity of Canarypox ALVAC-HIV (vCP1521) and HIV-1 gp120 AIDSVAX B/E Vaccination in an Efficacy Trial in Thailand

    A prime-boost vaccination regimen with ALVAC-HIV (vCP1521) administered intramuscularly at 0, 4, 12, and 24 weeks and gp120 AIDSVAX B/E at 12 and 24 weeks demonstrated a modest efficacy of 31.2% for prevention of HIV acquisition in HIV-uninfected adults participating in a community-based efficacy trial in Thailand. Reactogenicity was recorded for 3 days following vaccination. Adverse events were monitored every 6 months for 3.5 years, during which pregnancy outcomes were recorded. Of the 16,402 volunteers, 69% reported an adverse event at some time after the first dose; only 32.9% experienced an adverse event within 30 days of any vaccination. Overall adverse event rates and attribution of relatedness did not differ between groups. The frequency of serious adverse events was similar in vaccine (14.3%) and placebo (14.9%) recipients (p = 0.33). None of the 160 deaths (85 in vaccine and 75 in placebo recipients, p = 0.43) was assessed as related to vaccine. The most common cause of death was trauma or traffic accident. Approximately 30% of female participants reported a pregnancy during the study. Abnormal pregnancy outcomes occurred in 17.1% of vaccine and 14.6% of placebo recipients (p = 0.13). When conception occurred within an estimated 3 months of a vaccination, the majority of these abnormal outcomes were spontaneous or elective abortions, in 22.2% and 15.3% of pregnant vaccine and placebo recipients, respectively (p = 0.08). Local reactions occurred in 88.0% of vaccine and 61.0% of placebo recipients (p<0.001) and were more frequent after ALVAC-HIV than after AIDSVAX B/E vaccination. Systemic reactions were more frequent in vaccine than placebo recipients (77.2% vs. 59.8%, p<0.001). Local and systemic reactions were mostly mild to moderate, resolving within 3 days. The ALVAC-HIV and AIDSVAX B/E vaccine regimen was found to be safe, well tolerated, and suitable for potential large-scale use in Thailand. ClinicalTrials.gov NCT00223080

    Large-scale unit commitment under uncertainty: an updated literature survey

    The Unit Commitment problem in energy management aims at finding the optimal production schedule of a set of generation units, while meeting various system-wide constraints. It has always been a large-scale, non-convex, difficult problem, especially in view of the fact that, due to operational requirements, it has to be solved in an unreasonably small time for its size. Recently, growing renewable energy shares have strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex and uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focussing on those based on mathematical programming techniques that are more relevant for the uncertain versions of the problem. We then present and categorize the approaches to the latter, while providing entry points to the relevant literature on optimization under uncertainty. This is an updated version of the paper "Large-scale Unit Commitment under uncertainty: a literature survey" that appeared in 4OR 13(2), 115–171 (2015); this version has over 170 more citations, most of which appeared in the last three years, showing how quickly the literature on uncertain Unit Commitment evolves and how strong the interest in this subject remains
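The deterministic core of the problem described above can be illustrated on a toy instance: choose on/off commitments per period so that committed capacity covers demand, then dispatch committed units in merit order. A minimal brute-force sketch; the unit data, the demand profile, and the omission of ramping and minimum-up/down constraints are all simplifying assumptions made to keep the example tiny (real instances are large-scale MILPs):

```python
# Toy deterministic Unit Commitment solved by exhaustive enumeration.
# All data are illustrative assumptions, not from the survey.
from itertools import product

units = [  # (name, fixed on-cost per period, marginal cost, max output)
    ("coal", 50.0, 10.0, 100.0),
    ("gas",  20.0, 25.0,  60.0),
]
demand = [80.0, 130.0, 50.0]   # load per period

best_cost, best_plan = float("inf"), None
# Enumerate every on/off schedule (2 units x 3 periods -> 64 schedules)
for plan in product([0, 1], repeat=len(units) * len(demand)):
    cost, feasible = 0.0, True
    for t, load in enumerate(demand):
        on = [u for i, u in enumerate(units) if plan[t * len(units) + i]]
        if sum(u[3] for u in on) < load:   # committed capacity must cover load
            feasible = False
            break
        # Dispatch committed units in merit order (cheapest marginal first)
        remaining = load
        for name, fixed, marg, cap in sorted(on, key=lambda u: u[2]):
            out = min(cap, remaining)
            cost += fixed + marg * out
            remaining -= out
    if feasible and cost < best_cost:
        best_cost, best_plan = cost, plan
print("optimal cost:", best_cost)
```

Enumeration is exponential in units times periods, which is exactly why the literature surveyed here relies on mathematical programming techniques instead; adding uncertainty in demand or renewable output turns each schedule evaluation into a stochastic, robust, or chance-constrained subproblem.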

    New perspectives on rare connective tissue calcifying diseases

    Connective tissue calcifying diseases (CTCs) are characterized by abnormal calcium deposition in connective tissues. CTCs are caused by multiple factors including chronic diseases (Type II diabetes mellitus, chronic kidney disease), the use of pharmaceuticals (e.g. warfarin, glucocorticoids) and inherited rare genetic diseases such as pseudoxanthoma elasticum (PXE), generalized arterial calcification in infancy (GACI) and Keutel syndrome (KTLS). This review explores our current knowledge of these rare inherited CTCs, and highlights the most promising avenues for pharmaceutical intervention. Advancing our understanding of rare inherited forms of CTC is not only essential for the development of therapeutic strategies for patients suffering from these diseases, but also fundamental to delineating the mechanisms underpinning acquired chronic forms of CTC

    The seeds of divergence: the economy of French North America, 1688 to 1760

    Generally, Canada has been ignored in the literature on the colonial origins of divergence, with most of the attention going to the United States. Late nineteenth century estimates of income per capita show that Canada was relatively poorer than the United States and that within Canada, the French and Catholic population of Quebec was considerably poorer. Was this gap long-standing? Some evidence has been advanced for earlier periods, but it is quite limited and not well-suited for comparison with other societies. This thesis aims to contribute both to Canadian economic history and to comparative work on inequality across nations during the early modern period. With the use of novel prices and wages from Quebec—which was then the largest settlement in Canada and under French rule—a price index, a series of real wages and a measurement of Gross Domestic Product (GDP) are constructed. They are used to shed light both on the course of economic development until the French were defeated by the British in 1760 and on standards of living in that colony relative to the mother country, France, as well as the American colonies. The work is divided into three components. The first component relates to the construction of a price index. The absence of such an index has been a thorn in the side of Canadian historians, as it has limited their ability to obtain real values of wages, output and living standards. This index shows that prices did not follow any trend and remained at a stable level. However, there were episodes of wide swings—mostly due to wars and the monetary experiment of playing card money. The creation of this index lays the foundation for the next component. The second component constructs a standardized real wage series in the form of welfare ratios (the nominal wage rate multiplied by the length of the work year, divided by the cost of a consumption basket) to compare Canada with France, England and Colonial America. Two measures are derived. 
The first relies on a “bare bones” definition of consumption with a large share of land-intensive goods. This measure indicates that Canada was poorer than England and Colonial America and not appreciably richer than France. However, this measure overestimates the relative position of Canada to the Old World because of the strong presence of land-intensive goods. A second measure is created using a “respectable” definition of consumption in which the basket includes a larger share of manufactured goods and capital-intensive goods. This second basket better reflects differences in living standards since the abundance of land in Canada (and Colonial America) made it easy to achieve bare subsistence, but the scarcity of capital and skilled labor made the consumption of luxuries and manufactured goods (clothing, lighting, imported goods) highly expensive. With this measure, the advantage of New France over France evaporates and turns slightly negative. In comparison with Britain and Colonial America, the gap widens appreciably. This reversal is the element most important for future research: by showing that a shift to a different type of basket overturns the ranking, it demonstrates that Old World and New World comparisons are very sensitive to how the cost of living is measured. Furthermore, there are no sustained improvements in living standards over the period, regardless of the measure used. Gaps in living standards observed later in the nineteenth century existed as far back as the seventeenth century. In a wider American perspective that includes the Spanish colonies, Canada fares better. The third component computes a new series for Gross Domestic Product (GDP). This is to avoid problems associated with using real wages in the form of welfare ratios, which assume a constant labor supply. This assumption is hard to defend in the case of Colonial Canada, as there were many signs of increasing industriousness during the eighteenth and nineteenth centuries. 
The GDP series suggests no long-run trend in living standards (from 1688 to circa 1765). The long peace era of 1713 to 1740 was marked by modest economic growth which offset a steady decline that had started in 1688, but by 1760 (as a result of constant warfare) living standards had sunk below their 1688 levels. These developments are accompanied by observations that suggest that other indicators of living standards declined. The flat-lining of incomes is accompanied by substantial increases in the amount of time worked, rising mortality and rising infant mortality. In addition, comparisons of incomes with the American colonies confirm the results obtained with wages: Canada was considerably poorer. At the end, a long conclusion provides an exploratory discussion of why Canada would have diverged early on. In structural terms, it is argued that the French colony was plagued by the problem of a small population which prohibited the existence of scale effects. In combination with the fact that it was dispersed throughout the territory, the small population of New France limited the scope for specialization and economies of scale. However, this problem was in part created, and in part aggravated, by institutional factors like seigneurial tenure. The colonial origins of French America’s divergence from the rest of North America are thus partly institutional
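The welfare-ratio measure used in the comparisons above reduces to a simple calculation: annual earnings (daily wage times days worked) divided by the annual cost of a consumption basket. A minimal sketch, with all figures illustrative assumptions rather than values from the thesis:

```python
# Welfare ratio: annual earnings / annual cost of a consumption basket.
# Every number below is an illustrative assumption, not thesis data.
daily_wage = 1.2      # nominal wage, livres per day (assumed)
work_days = 250       # length of the work year, days (assumed)
basket_cost = 220.0   # annual cost of the chosen basket, livres (assumed)

welfare_ratio = (daily_wage * work_days) / basket_cost
print(f"welfare ratio: {welfare_ratio:.2f}")
```

A ratio above 1 means earnings exceed the cost of the basket; since the numerator is fixed, swapping a cheap land-intensive "bare bones" basket for a "respectable" basket heavy in manufactured goods raises the denominator and can reverse cross-colony rankings, which is exactly the sensitivity the thesis highlights.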