
    Causes of death and demographic characteristics of victims of meteorological disasters in Korea from 1990 to 2008

    Background: Meteorological disasters are an important component of climate change issues that affect morbidity and mortality. However, there are few epidemiological studies assessing the causes and characteristics of deaths from meteorological disasters. The present study aimed to analyze the causes of death associated with meteorological disasters in Korea, as well as demographic and geographic vulnerabilities and their changing trends, to establish effective measures for adaptation to meteorological disasters. Methods: Deaths associated with meteorological disasters were examined from 2,045 cases in Victim Survey Reports prepared by 16 local governments from 1990 to 2008. Specific causes of death were categorized as drowning, structural collapse, electrocution, lightning, fall, collision, landslide, avalanche, deterioration of disease by disaster, and others. Death rates were analyzed according to meteorological type, specific cause of death, and demographic and geographic characteristics. Results: Drowning (60.3%) caused the greatest number of deaths overall, followed by landslide (19.7%) and structural collapse (10.1%); however, the causes of death differed between disaster types. The meteorological disaster associated with the greatest number of deaths has changed from flood to typhoon. Factors that raised vulnerability included living in coastal provinces (11.3 times higher than in inland metropolitan areas), male sex (1.9 times higher than female), and older age. Conclusions: Epidemiological analyses of the causes of death and vulnerability associated with meteorological disasters can provide the information needed to establish future adaptation measures against climate change. A more comprehensive system for assessing disaster epidemiology needs to be established.
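
    The kind of tabulation described above (shares of deaths by specific cause, breakdowns by disaster type, and crude ratios between demographic groups) can be sketched in a few lines of pandas. This is an illustrative toy on made-up records, not the study's Victim Survey data; the column names and values are assumptions.

    import pandas as pd

    # Hypothetical victim-level records; the real study used 2,045 cases from
    # Victim Survey Reports, with columns and codings that may differ.
    records = pd.DataFrame({
        "disaster_type": ["typhoon", "flood", "typhoon", "flood", "typhoon", "flood"],
        "cause": ["drowning", "drowning", "landslide", "structural collapse",
                  "drowning", "landslide"],
        "sex": ["M", "M", "F", "M", "F", "M"],
        "region": ["coastal province", "inland metropolitan", "coastal province",
                   "coastal province", "coastal province", "inland metropolitan"],
    })

    # Share of deaths by specific cause (analogous to the 60.3% drowning figure).
    cause_share = records["cause"].value_counts(normalize=True).mul(100).round(1)
    print(cause_share)

    # Cause-of-death breakdown within each disaster type.
    print(pd.crosstab(records["disaster_type"], records["cause"], normalize="index"))

    # Crude male:female ratio of death counts; true vulnerability ratios would
    # also require population denominators for each group.
    by_sex = records["sex"].value_counts()
    print("male:female death ratio =", round(by_sex["M"] / by_sex["F"], 2))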

    Appropriate model use for predicting elevations and inundation extent for extreme flood events

    Flood risk assessment is generally studied using flood simulation models; however, flood risk managers often simplify the computational process; this is called a “simplification strategy”. This study investigates the appropriateness of the “simplification strategy” when used as a flood risk assessment tool for areas prone to flash flooding. The 2004 Boscastle, UK, flash flood was selected as a case study. Three different model structures were considered in this study: (1) a shock-capturing model, (2) a regular ADI-type flood model and (3) a diffusion wave model, i.e. a zero-inertia approach. The key findings from this paper strongly suggest that applying the “simplification strategy” is only appropriate for flood simulations over mild slopes and relatively smooth terrain, whereas in areas susceptible to flash flooding (i.e. steep catchments), following this strategy can lead to significantly erroneous predictions of the main parameters, particularly the peak water levels and the inundation extent. For flood risk assessment of urban areas where flash flooding may occur, it is shown to be necessary to incorporate shock-capturing algorithms in the solution procedure, since these algorithms prevent the formation of spurious oscillations and provide a more realistic simulation of the flood levels.
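
    To illustrate what a shock-capturing scheme buys in practice, the sketch below solves the one-dimensional shallow-water equations for an idealised dam break with a Rusanov (local Lax-Friedrichs) finite-volume flux, whose built-in dissipation keeps the advancing front free of spurious oscillations. It is a toy under simplifying assumptions (flat, frictionless bed; open boundaries) and is not a reimplementation of any of the three models compared in the study.

    import numpy as np

    g = 9.81                      # gravitational acceleration (m/s^2)
    nx, L, t_end, cfl = 200, 100.0, 2.0, 0.45
    dx = L / nx
    x = (np.arange(nx) + 0.5) * dx

    # Idealised dam break: deep water on the left, shallow water on the right.
    h = np.where(x < L / 2, 2.0, 0.5)    # water depth (m)
    hu = np.zeros(nx)                    # discharge per unit width (m^2/s)

    def flux(h, hu):
        """Physical flux of the 1-D shallow-water equations."""
        u = hu / h
        return np.array([hu, hu * u + 0.5 * g * h * h])

    t = 0.0
    while t < t_end:
        u = hu / h
        speed = np.abs(u) + np.sqrt(g * h)           # characteristic wave speeds
        dt = min(cfl * dx / speed.max(), t_end - t)  # CFL-limited time step

        U = np.array([h, hu])
        F = flux(h, hu)
        # Rusanov numerical flux at each interior cell interface i+1/2.
        a = np.maximum(speed[:-1], speed[1:])
        Fhat = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])

        # Conservative update of interior cells; end cells are held fixed, which
        # is adequate here because the waves never reach the boundaries.
        U[:, 1:-1] -= dt / dx * (Fhat[:, 1:] - Fhat[:, :-1])
        h, hu = U[0], U[1]
        t += dt

    # Depths stay bounded between the two initial states: no spurious overshoots.
    print("depth range after %.1f s: %.3f to %.3f m" % (t, h.min(), h.max()))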

    Social disparities in food preparation behaviours: a DEDIPAC study

    Get PDF
    BACKGROUND: The specific role of major socio-economic indicators in shaping food preparation behaviours could reveal distinct socio-economic patterns and thus help explain the mechanisms that contribute to social inequalities in health. This study investigated whether each socio-economic indicator (education, occupation, income) was independently associated with food preparation behaviours. METHODS: A total of 62,373 adults participating in the web-based NutriNet-Santé cohort study were included in our cross-sectional analyses. Cooking skills, preparation from scratch and kitchen equipment were assessed using a 0-10-point score; frequency of meal preparation, enjoyment of cooking and willingness to cook better/more frequently were categorical variables. Independent associations between socio-economic factors (education, income and occupation) and food preparation behaviours were assessed using analysis of covariance and logistic regression models stratified by sex. The models simultaneously included the three socio-economic indicators and were adjusted for age, household composition and whether or not the participant was the main cook in the household. RESULTS: Participants with the lowest education, those in the lowest income group and female manual and office workers spent more time preparing food daily than participants with the highest education, those with the highest income and managerial staff (P < 0.0001). The least educated individuals were more likely to be non-cooks than those with the highest education level (women: OR = 3.36 (1.69;6.69); men: OR = 1.83 (1.07;3.16)), while female manual and office workers and the never-employed were less likely to be non-cooks (OR = 0.52 (0.28;0.97); OR = 0.30 (0.11;0.77)). Female manual and office workers had lower scores for preparation from scratch and were less likely to want to cook more frequently than managerial staff (P < 0.001 for both). Women in the lowest income group had a lower kitchen-equipment score (P < 0.0001) and were less likely to enjoy cooking daily meals (OR = 0.68 (0.45;0.86)) than those with the highest income. CONCLUSION: The lowest socio-economic groups, particularly women, spend more time preparing food than higher socio-economic groups. However, female manual and office workers used fewer raw or fresh ingredients to prepare meals than managerial staff. In the unfavourable context in France, where time spent preparing meals has declined over recent decades, our findings show socio-economic disparities in food preparation behaviours among women, whereas few differences were observed among men.
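
    The sex-stratified, covariate-adjusted logistic models described above can be sketched with statsmodels. The data below are synthetic and the variable names (non_cook, education, income, occupation, household_size, main_cook) are assumptions rather than NutriNet-Santé variables; the point is only the model structure, with the three socio-economic indicators entered simultaneously and adjusted for the listed covariates.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the cohort data (the real study analysed 62,373 adults).
    rng = np.random.default_rng(0)
    n = 4000
    df = pd.DataFrame({
        "sex": rng.choice(["F", "M"], n),
        "education": rng.choice(["low", "middle", "high"], n),
        "income": rng.choice(["Q1", "Q2", "Q3", "Q4"], n),
        "occupation": rng.choice(["manual/office", "managerial", "never-employed"], n),
        "age": rng.integers(18, 80, n),
        "household_size": rng.integers(1, 6, n),
        "main_cook": rng.integers(0, 2, n),
    })
    # Synthetic outcome: being a "non-cook", made somewhat more likely at low education.
    linpred = -2.0 + 0.8 * (df["education"] == "low")
    df["non_cook"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

    # One adjusted logistic model per sex, mirroring the stratified analysis.
    for sex, sub in df.groupby("sex"):
        model = smf.logit(
            "non_cook ~ C(education) + C(income) + C(occupation)"
            " + age + household_size + main_cook",
            data=sub,
        ).fit(disp=0)
        print(sex)
        print(np.exp(model.params).round(2))      # odds ratios
        print(np.exp(model.conf_int()).round(2))  # 95% confidence intervals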

    Sequential morphological characteristics of murine fetal liver hematopoietic microenvironment in Swiss Webster mice

    Get PDF
    Embryonic hematopoiesis occurs via dynamic development with cells migrating into various organs. The fetal liver is the main hematopoietic organ responsible for hematopoietic cell expansion during embryologic development. We describe the sequential morphological characteristics of murine fetal liver niches that favor the settlement and migration of hematopoietic cells from 12 days post-coitum (dpc) to 0 days post-partum. Liver sections were stained with hematoxylin and eosin, Lennert’s Giemsa, Sirius Red pH 10.2, Gomori’s Reticulin, and Periodic Acid Schiff/Alcian Blue pH 1.0 and pH 2.5 and were analyzed by bright-field microscopy. Indirect immunohistochemistry for fibronectin, matrix metalloproteinase-1 (MMP-1), and MMP-9 and histochemistry for naphthol AS-D chloroacetate esterase (NCAE) were analyzed by confocal microscopy. The results showed that fibronectin was related to the promotion of hepatocyte and trabecular differentiation; reticular fibers did not appear to participate in fetal hematopoiesis but contributed to the physical support of the liver after 18 dpc. During the immature phase, hepatocytes acted as the fundamental stroma for the erythroid lineage. The appearance of myeloid cells in the liver was related to perivascular and subcapsular collagen, and NCAE preceded MMP-1 expression in neutrophils, an occurrence that appeared to contribute to their egress from the liver. Thus, the murine fetal liver during ontogenesis shows two different phases: one immature and mainly endodermic (<14 dpc) and the other more developed (endodermic-mesenchymal; >15 dpc), with maturation of hepatocytes, better definition of the trabecular pattern, and an increase in connective tissue in the capsule, portal spaces, and liver parenchyma. The decrease in hepatic hematopoiesis (migration) coincides with hepatic maturation.

    Activation of Protein Kinase A and Exchange Protein Directly Activated by cAMP Promotes Adipocyte Differentiation of Human Mesenchymal Stem Cells

    Get PDF
    Human mesenchymal stem cells are primary multipotent cells capable of differentiating into several cell types, including adipocytes, when cultured under defined in vitro conditions. In the present study we investigated the role of cAMP signaling and its downstream effectors, protein kinase A (PKA) and exchange protein directly activated by cAMP (Epac), in the adipocyte conversion of human mesenchymal stem cells derived from adipose tissue (hMADS). We show that cAMP signaling involving the simultaneous activation of both PKA- and Epac-dependent signaling is critical for this process, even in the presence of the strong adipogenic inducers insulin, dexamethasone, and rosiglitazone, thereby clearly distinguishing hMADS cells from murine preadipocyte cell lines, in which rosiglitazone together with dexamethasone and insulin strongly promotes adipocyte differentiation. We further show that prostaglandin I2 (PGI2) may fully substitute for the cAMP-elevating agent isobutylmethylxanthine (IBMX). Moreover, selective activation of Epac-dependent signaling promoted adipocyte differentiation when Rho-associated kinase (ROCK) was inhibited. Unlike the case for murine preadipocyte cell lines, long-chain fatty acids such as arachidonic acid did not promote adipocyte differentiation of hMADS cells in the absence of a PPARγ agonist. However, prolonged treatment with the synthetic PPARδ agonist L165041 promoted adipocyte differentiation of hMADS cells in the presence of IBMX. Taken together, our results emphasize the need for cAMP signaling in concert with a PPARγ or PPARδ agonist to secure efficient adipocyte differentiation of hMADS cells.

    Pharmacology and therapeutic implications of current drugs for type 2 diabetes mellitus

    Get PDF
    Type 2 diabetes mellitus (T2DM) is a global epidemic that poses a major challenge to health-care systems. Improving metabolic control to approach normal glycaemia (where practical) greatly benefits long-term prognoses and justifies early, effective, sustained and safety-conscious intervention. Improvements in the understanding of the complex pathogenesis of T2DM have underpinned the development of glucose-lowering therapies with complementary mechanisms of action, which have expanded treatment options and facilitated individualized management strategies. Over the past decade, several new classes of glucose-lowering agents have been licensed, including glucagon-like peptide 1 receptor (GLP-1R) agonists, dipeptidyl peptidase 4 (DPP-4) inhibitors and sodium/glucose cotransporter 2 (SGLT2) inhibitors. These agents can be used individually or in combination with well-established treatments such as biguanides, sulfonylureas and thiazolidinediones. Although novel agents have potential advantages, including a low risk of hypoglycaemia and help with weight control, their long-term safety has yet to be established. In this Review, we assess the pharmacokinetics, pharmacodynamics and safety profiles, including cardiovascular safety, of currently available therapies for management of hyperglycaemia in patients with T2DM within the context of disease pathogenesis and natural history. In addition, we briefly describe treatment algorithms for patients with T2DM and lessons from present therapies to inform the development of future therapies.

    Maternal smoking during pregnancy and birth defects in children: a systematic review with meta-analysis
