Mapping the unit risk of flood damage (CRUE) for single-family residences in Québec
At present, no existing method described as flood risk mapping, when the constituent elements of risk, namely hazard and vulnerability, are considered together, can establish flood risk precisely and quantifiably at every point of the territory. The mapping method presented here fills this need by meeting the following criteria: ease of use, consultation and application; spatially distributed results; simplicity of updating; and applicability to various types of residences. The method uses a unit formulation of risk based on distributed damage rates associated with various return periods of open-water floods. These are first computed from the submersion depths deduced from the topography, from the water levels for representative return periods, and from the residential settlement mode (presence of a basement, mean first-floor height). The unit risk is then obtained by integrating the product of the increasing damage rate and its increment of exceedance probability. The result is a map representing the risk as a mean annual direct damage percentage. A pilot study on a reach of the Montmorency River (Québec, Canada) showed that the maps are expressive and flexible and lend themselves to all the additional processing a GIS allows, such as the MODELEUR/HYDROSIM software developed at INRS-ETE, the tool used for this research. Finally, interpreting the flood maps currently in force in Canada (the 20/100-year flood limits) for the Montmorency raises questions about the level of risk currently accepted in the regulations, especially when compared with municipal taxation rates.

Public managers of flood risks need simple and precise tools to deal with this problem and to minimize its consequences, especially for land planning and management. Several methods exist that produce flood risk maps and help restrict the building of residences in flood plains. For example, the current method in Canada is based on delineating two regions of the flood plain corresponding to floods of 20- and 100-year return periods (CONVENTION CANADA/QUÉBEC, 1994), mostly applied to ice-free flooding conditions. The method applied by the Federal Emergency Management Agency (FEMA, 2004) is also based on the statistical structure of floods in different contexts, with a goal mostly oriented towards determining insurance rates. In France, the INONDABILITÉ method (GILARD and GENDREAU, 1998) seeks to match the present probability of flooding to a reduced one that the stakeholders would be willing to accept.

However, considering that the commonly accepted definition of risk includes both the probability of flooding and its consequences (costs of damages), very few, if any, of the present methods can strictly be considered risk-mapping methods. The method presented hereafter addresses this gap by representing the mean annual rate of direct damage (a unit value) for different residential building modes. It takes into account the flood probability structure and the spatial distribution of the submersion depth, which in turn depends on the topography of the flood plain, the water stage distribution, the residential settlement mode (basement or not) and the first-floor elevation of the building.
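For illustration, the submersion-depth step might look like the following minimal sketch; the grids, the two reference events and the 0.3 m first-floor offset are invented for the example, not values from the study:

```python
import numpy as np

# Hypothetical gridded inputs (metres): ground elevation, and simulated
# water stages for two reference return periods (20 and 100 years).
ground = np.array([[4.2, 4.5],
                   [4.8, 5.1]])
stage = {20:  np.full((2, 2), 5.0),
         100: np.full((2, 2), 5.6)}

FIRST_FLOOR_OFFSET = 0.3  # assumed mean first-floor height above grade (m)

def submersion_depth(stage_2d, ground_2d, offset):
    """Water depth above the first floor; dry points are clipped to zero."""
    return np.clip(stage_2d - (ground_2d + offset), 0.0, None)

# One submersion field per reference flood; damage curves apply to these.
depth = {T: submersion_depth(s, ground, FIRST_FLOOR_OFFSET)
         for T, s in stage.items()}
```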
The method seeks to meet important criteria for efficient land planning and management: ease of use, consultation and application for managers; spatially distributed results usable in current geographical information systems (GIS maps); availability anywhere in the area under study; ease of updating; and adaptability to a wide range of residence types.

The proposed method is based on a unit treatment of the risk variable, which corresponds to a rate of damage instead of an absolute value expressed in monetary units. Direct damages to the building are considered, excluding damages to furniture and other personal belongings. Damage rates are first computed as a function of the main explanatory variable, the field of submersion depths. This variable, obtained by the 2D subtraction of the terrain topography from the water stage for each reference flood event, is defined by its probability of occurrence. The mean annual rate of damage (unit risk) is obtained by integrating the field of damage rates with respect to the annual probability structure of the available flood events. The result is a series of maps corresponding to representative modes of residential settlement.

The damage rates were computed with a set of empirical functional relationships developed for the Saguenay region (Québec, Canada) after the flood of 1996. These curves were presented in LECLERC et al. (2003); the set comprises four curves representing residences with or without a basement and with a value below or above CAD $50,000, which is roughly correlated with the type of occupation (i.e., secondary or main residence). While these curves cannot be assumed to be generic with respect to the general situation in Canada or, more specifically, in the province of Québec, the method itself can still be applied using alternative sets of submersion damage-rate curves developed for other specific scenarios. Moreover, as four different functional relationships were used to represent the different residential settlement modes, four different maps have to be drawn to represent the vulnerability of the residential sector depending on the type of settlement. Consequently, as the maps are designed to represent a homogeneous mode of settlement, they represent potential future development in a given region better than the current situation. They can also be used to evaluate public policies regarding urban development and building restrictions in flood plains.

A pilot study was conducted on a reach of the Montmorency River (Québec, Canada; BLIN, 2002). It made it possible to verify the compliance of the method with the proposed utilisation criteria. The method proved simple to use, adaptable and compatible with GIS modelling environments such as MODELEUR (SECRETAN et al., 1999), a 2D finite element modelling system designed for fluvial environments. Water stages were computed with a 2D hydrodynamic simulator (HYDROSIM; HENICHE et al., 1999a) to deal with the complexity of the river reach (a braided reach with backwaters). Since 2D results were available, a 2D graphic representation of the information layers could be configured, taking into account the specific needs of the interveners.
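The integration step described above reduces, at each map point, to integrating the damage rate over annual exceedance probability. A minimal numerical sketch follows; the three damage-rate values are invented, whereas a real application would read them off the LECLERC et al. (2003) curves:

```python
import numpy as np

# Annual exceedance probabilities of the reference floods (T = 2, 20, 100 yr).
p = 1.0 / np.array([2.0, 20.0, 100.0])   # 0.5, 0.05, 0.01

# Assumed damage rates (% of building value) at one map point for the same
# three events, as given by a submersion-damage curve (illustrative only).
tau = np.array([0.0, 12.0, 35.0])

# Unit risk = integral of the damage rate over exceedance probability,
# i.e. the mean annual rate of direct damage (% per year) at this point.
order = np.argsort(p)                    # integrate over ascending probability
unit_risk = np.trapz(tau[order], p[order])
print(f"mean annual damage rate: {unit_risk:.2f} %/yr")
```

Repeating this at every node of the 2D grid yields the unit-risk map.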
In contexts where one-dimensional water stage profiles are computed (e.g., HEC-RAS by USACE, 1990; DAMBRK by FREAD, 1984), an extended 2D representation of these data needs to be developed in the lateral flood plains in order to obtain a 2D distributed submersion field.

Among the interesting results, it was possible to compare the risk level for given modes of settlement (defined by the presence or absence of a basement and the elevation of the first floor with respect to the land topography) with current practices based only on delineating the limits of the flood zones corresponding to 20/100-year return periods. We conclude that, at least in the particular case under study, the distributed annual rate of damage seems relatively large with respect to other financial indicators for residences, such as urban taxation rates.
A numerical terrain modelling methodology for two-dimensional hydrodynamic simulation
This article addresses the problem of constructing a Numerical Terrain Model (NTM) in the particular context of two-dimensional (2D) hydraulic studies, here related to floods. The main difficulty stems from the heterogeneity of the data sets, which differ in precision, spatial coverage, distribution and density, and georeference, among other things. Within the framework of hydrodynamic modelling, the entire region under study must be documented and the information carried on a homogeneous support. We propose an efficient strategy, fully supported by a software tool called MODELEUR, which makes it possible to import, gather and merge very heterogeneous data sets, whether scalar, like topography, or vectorial, like wind; to preserve their integrity; and to provide access to them in their original form at every step of the modelling exercise. Thus, whatever the environmental purpose of the modelling exercise (enhancement works, sedimentology, conservation of habitats, flood risk analysis), the method allows the data sets to be projected onto a homogeneous finite element grid while the original sets are conserved in full as the ultimate reference. The method is based on a partition of the domain under study for each data type: topography, substrates, surface roughness, etc. Each partition is composed of sub-domains, and each sub-domain associates a data set with a portion of the domain in a declarative way. This conceptual model formally constitutes the NTM.
The process of transferring data from the partitions to the final grid is considered a result of the NTM and not the NTM itself. It is performed by interpolation, with a technique such as the finite element method. Following the huge Saguenay flood of 1996, the efficiency of this method was tested and validated successfully, and this example serves here as an illustration.

An accurate description of the characteristics of both the river main channel and the flood plain is essential to any hydrodynamic simulation, especially if extreme discharges are considered and if a two-dimensional approach is used. The ground altitude and the various flow resistance factors are basic information that the modeller must pass on to the simulator. For too long this task remained the "poor relative" of the modelling process, because a priori it does not seem to raise any particular difficulty. In practice, however, it represents a very significant workload in the mobilisation of the models, besides hiding many pitfalls that can compromise the quality of the hydraulic results. Just as the velocity and water level fields are results of the hydrodynamic model, the variables describing the terrain and transferred onto the simulation mesh constitute the results of the Numerical Terrain Model (NTM). Because this is, strictly speaking, a modelling exercise, a validation of the results that assesses the quality of the model is necessary.

In this paper, we propose a methodology to integrate heterogeneous data sets in the construction of the NTM, with the aim of simulating the 2D hydrodynamics of natural streams with the finite element method. This methodology is fully supported by a software tool, MODELEUR, developed at INRS-Eau (Secretan and Leclerc, 1998; Secretan et al., 2000). This tool, which can be regarded as a Geographical Information System (GIS) dedicated to 2D flow simulation, makes it possible to carry out all the steps of integrating the raw data sets for the conception of a complete NTM. Furthermore, it facilitates the set-up and piloting of hydrodynamic simulations with the simulator HYDROSIM (Heniche et al., 1999).

Scenarios for flow analysis require frequent and substantial changes to the mesh carrying the data. A return to the basic data sets is then required, which makes it necessary to preserve them in their entirety, to access them easily and to transfer them efficiently onto the mesh. That is why the NTM should put the emphasis on the basic data rather than on their transformed, and inevitably degraded, form after transfer to a mesh.

Data integrity should be preserved as far as possible, in the sense that it is imperative to keep different data sets distinct and accessible separately. Two measuring campaigns should not be mixed; for example, topography resulting from digitised maps should be kept separate from that resulting from echo-sounding campaigns. This approach makes it possible at any time to return to the measurements, to control them, to validate them, to correct them and, possibly, to substitute one data set for another.

The homogeneity of the data support with respect to the location of the data points is essential to allow algebraic interaction between the different information layers. The operational objective that ultimately underlies the creation of the NTM in the present context is to be able to efficiently transfer the spatial basic data (measurements, geometry of civil works, etc.),
each carried by a different discretisation, onto a single carrying structure.

With these objectives of integrity, accessibility, efficiency and homogeneity, the proposed method consists of the following steps:

1. Import the data sets into the database, which may involve digitising maps and/or reformatting the raw files to a compatible file format;
2. Construct and assemble the NTM proper, which consists of creating, for each variable (topography, roughness, etc.), a partition of the domain under study, that is, subdividing it into juxtaposed sub-domains and associating with each sub-domain the data set that describes the variable on it. More precisely, this declaratory procedure uses irregular polygons to specify, in the corresponding sub-domains, the data source to be used in the construction of the NTM. As it is also possible to transform regions of the domain with algebraic functions, for example to represent in-river civil works (dikes, levees, etc.), the NTM integrates all the validated data sets together with the instructions to transform them locally. From this stage on, the NTM exists as an entity-model and has a conceptual character;
3. Construct a finite element mesh;
4. Transfer, by interpolation, and assemble the data of the different components of the NTM onto the finite element mesh according to the instructions contained in the various partitions (see the sketch at the end of this entry). The result is an instance of the NTM, and its quality depends on the density of the mesh and the variability of the data; it therefore requires validation against the original data;
5. Carry out the analysis tasks and/or hydrodynamic simulations. If the mesh needs to be modified for a project variant or an analysis scenario, only steps 3 and 4 have to be redone, and step 4 is completely automated in MODELEUR.

The heterogeneity of the data sources, which constitutes one of the main difficulties of the exercise, can be classified into three groups: according to the measuring technique used; according to the format or representation model used; and according to the geographic datum and projection system.

For topography, the measuring techniques include satellite techniques (conventional or radar), airborne techniques (photogrammetry or laser scanning), ground techniques (total station or GPS), as well as boat-borne techniques such as echo-sounding. The data come in the form of paper maps that have to be digitised, regular or random data points, isolines of altitude, or even transects. They can be expressed in different datums and projections, and sometimes are not even georeferenced and must first be positioned.

As for the bed roughness that determines the resistance to flow, here too the data sets differ from one another in many respects. Data may again have been collected as regular or random points, as homogeneous zones or as transects. The data can represent the average grain size of the materials present, the dimension of the passing fraction (D85, D50 or median), the percentage of the surface corresponding to each fraction of the grain assemblage, etc. In the absence of such basic data, the NTM can only represent the value of the friction parameter, typically Manning's n, which has to be obtained by calibrating the hydrodynamic model. For the vegetation present in the flood plain, or for aquatic plants, the source data can be as variable as for the bed roughness.
Except where such data exist, the vegetation model often consists of the roughness parameter obtained during the calibration exercise.

The method has been applied successfully in numerous contexts, as demonstrated by the application carried out on the Chicoutimi River after the catastrophic flood in the Saguenay region in 1996. The huge heterogeneity of the data available in that case called for precisely the kind of method proposed here. Elevation data obtained by photogrammetry, by total station and by echo-sounding on transects could thus be coordinated and used simultaneously for hydrodynamic simulation and for sediment balance studies in the zones strongly affected by the flood.
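As a rough illustration of the partition concept and of the transfer step (step 4 above), here is a minimal sketch; the classes and the use of SciPy interpolation are assumptions made for the example and do not reflect MODELEUR's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, List

import numpy as np
from matplotlib.path import Path
from scipy.interpolate import griddata

@dataclass
class SubDomain:
    polygon: Path                  # declarative boundary of the sub-domain
    xy: np.ndarray                 # raw sample locations, shape (n, 2)
    values: np.ndarray             # raw measured values, shape (n,)
    transform: Callable = lambda v: v  # local algebraic change, e.g. raising a dike

@dataclass
class Partition:                   # one partition per variable (topography, ...)
    variable: str
    subdomains: List[SubDomain] = field(default_factory=list)

def transfer(partition: Partition, mesh_xy: np.ndarray) -> np.ndarray:
    """Project a partition onto mesh nodes; the raw data sets stay untouched."""
    out = np.full(len(mesh_xy), np.nan)
    for sd in partition.subdomains:
        inside = sd.polygon.contains_points(mesh_xy)
        if inside.any():
            vals = sd.transform(sd.values)
            out[inside] = griddata(sd.xy, vals, mesh_xy[inside], method="linear")
    return out
```

Because the raw sets are kept intact, remeshing only repeats `transfer`, which matches the requirement that only steps 3 and 4 be redone when the mesh changes.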
Imposed work pace and elbow pain: direct and indirect effects and the role of psychosocial and biomechanical factors
Objectives
To assess the direct and indirect effects of an imposed work pace on elbow pain.
Methods
Data were available for 3,710 employees who took part in a musculoskeletal disorder (MSD) surveillance programme in the Pays de la Loire region between 2002 and 2005 (the pilot MSD surveillance network). During standardised clinical examinations, 83 occupational physicians diagnosed any MSDs present, including pain at the epicondyle (elbow pain). Occupational exposures (imposed work pace, high force combined with repetitive elbow movements, repetitive tasks, low social support, low decision latitude) and personal factors (age, sex, body mass index) were assessed by self-administered questionnaire. Univariate associations between elbow pain and the risk factors were quantified by odds ratios (ORs) from logistic models. The direct and indirect parts of the association between imposed work pace and elbow pain were estimated with a structural equation model (SEM) and with causal computations based on the method proposed by VanderWeele et al. in 2014 (the VDW method).
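As a schematic of the "proportion mediated" reasoning, here is a minimal sketch on simulated data using the simple difference-of-coefficients approach; this is not the SEM or VDW estimator used in the study, and all variables and effect sizes are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3710
pace = rng.binomial(1, 0.4, n)                    # imposed work pace (exposure)
# Mediators partly driven by the exposure (biomechanical / psychosocial).
force_rep = rng.binomial(1, 0.2 + 0.2 * pace)
low_support = rng.binomial(1, 0.25 + 0.1 * pace)
logit = -2.0 + 0.25 * pace + 0.6 * force_rep + 0.25 * low_support
pain = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # elbow pain (outcome)

X_total = sm.add_constant(pd.DataFrame({"pace": pace}))
X_direct = sm.add_constant(pd.DataFrame(
    {"pace": pace, "force_rep": force_rep, "low_support": low_support}))

b_total = sm.Logit(pain, X_total).fit(disp=0).params["pace"]
b_direct = sm.Logit(pain, X_direct).fit(disp=0).params["pace"]
# Crude share of the exposure-outcome association passing through the
# mediators (log-odds scale; ignores non-collapsibility, unlike the VDW method).
print(f"proportion mediated: {100 * (b_total - b_direct) / b_total:.1f} %")
```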
Results
The OR between imposed work pace and elbow pain was 1.49 [1.22; 1.82] (by comparison, the OR for low social support was 1.30 [1.07; 1.58], and for force combined with elbow movements, 1.94 [1.59; 2.36]). The share of the association between imposed work pace and elbow pain mediated by the other occupational factors was estimated at 36.5% [14.1%; 59.0%] by the SEM method and 30.6% [15.7%; 57.0%] by the VDW method. In the SEM analysis, this indirect association was mainly explained by the pathway through the biomechanical factors (corresponding to 82.1% [44.4%; 119.9%] of the indirect effect).
Conclusions
This exploratory analysis, based on cross-sectional data, offers leads for evaluating the different causal mechanisms underlying the links between organisational factors and MSDs. The effect of work pace is admittedly weaker than that of the biomechanical constraints, but it is nonetheless significant. The results suggest that an intervention aimed at reducing the frequency of exposure to imposed work paces would have a direct effect on pain, and also an indirect one through reduced psychosocial and biomechanical occupational exposures.
Zonation-related function and ubiquitination regulation in human hepatocellular carcinoma cells in dynamic vs. static culture conditions
Background

Understanding hepatic zonation is important for both liver physiology and pathology. There is currently no effective systemic chemotherapy for human hepatocellular carcinoma (HCC), and its pathogenesis is of special interest. Genomic and proteomic data of HCC cells in different culture models, coupled with pathway-based analysis, can help identify HCC-related gene and pathway dysfunctions.

Results

We identified zonation-related expression profiles contributing to selective phenotypes of HCC by integrating relevant experimental observations through gene set enrichment analysis (GSEA). The analysis was based on gene and protein expression data measured on a human HCC cell line (HepG2/C3A) in two culture conditions: dynamic microfluidic biochips and static Petri dishes. Metabolic activity (HCC-related cytochromes P450) and genetic information processing were dominant in the dynamic cultures, in contrast to kinase signaling and cancer-specific profiles in the static cultures. This, together with analysis of the published literature, leads us to propose that biochip culture conditions induce a periportal-like hepatocyte phenotype, while standard plate cultures are more representative of a perivenous-like phenotype. Both the proteomic data and the GSEA results further reveal distinct ubiquitin-mediated protein regulation in the two culture conditions.

Conclusions

Pathway analysis, using gene and protein expression data from the two cell culture models, confirmed specific human HCC phenotypes with regard to CYPs and kinases, and revealed a zonation-related pattern of expression. A ubiquitin-mediated regulation mechanism offers a plausible explanation of these findings. Altogether, our results suggest that strategies aimed at inhibiting activated kinases and signaling pathways may lead to enhanced metabolism-mediated drug resistance of treated tumors. If that were the case, mitigating inhibition or targeting inactive forms of kinases would be an alternative.
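For readers unfamiliar with the workflow, a preranked GSEA run can be sketched in a few lines; the gseapy package, the ranking file and the gene-set library below are illustrative assumptions, not the tools or data of the study:

```python
import gseapy as gp

# Preranked GSEA: genes ranked by biochip-vs-Petri differential expression
# (e.g. log2 fold change). "ranked_genes.rnk" is a placeholder two-column
# file with one "gene<TAB>score" pair per line.
res = gp.prerank(
    rnk="ranked_genes.rnk",
    gene_sets="KEGG_2021_Human",   # pathway library incl. CYP metabolism terms
    permutation_num=1000,
    outdir=None,                   # keep results in memory only
    seed=7,
)
print(res.res2d[["Term", "NES", "FDR q-val"]].head())
```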
Study of oscillations during methane oxidation with species probing
Biogas is considered a renewable energy source relative to fossil fuels because of its sustainability, its security of supply and its environmental potential [1-4]. Methane makes up a large part of biogas, so reviewing methane oxidation is of great value for a first understanding of the features associated with biogas combustion. Dynamic behavior during methane oxidation has been found to occur under specific conditions. The first methane oxidation oscillation experiments were conducted in a jet-stirred reactor (JSR) [5] and were later extended to higher inlet temperatures [6]. A map of the dynamic behavior was drawn for various C/O ratios and temperatures ranging from 1025 to 1275 K at a fixed 90% nitrogen bath gas. Recently, Lubrano Lavadera et al. [7] investigated the influence of the main parameters, such as equivalence ratio (0.5-1.5), residence time (1.5-2 s) and bath gas (N2, CO2, He, H2O), on the oscillatory behavior of methane oxidation. However, to the best of our knowledge, studies of this dynamic phenomenology with species probing have never been reported.

Because of the heat released or absorbed by exothermic and endothermic reactions, temperature and species oscillations are strongly coupled during fuel oxidation. To put the emphasis on species dynamics, very dilute conditions are needed to decouple the temperature and species oscillations as much as possible.

The purpose of this work is to investigate the effects of various parameters, namely the inlet mole fraction of methane (0.1-0.5%), stoichiometric conditions (equivalence ratio = 1) and reactor temperature (950-1200 K), on the species oscillations during methane oxidation. A detailed kinetic mechanism (POLIMI) [8] is selected to interpret the experimental data.
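A minimal jet-stirred-reactor sketch of such a computation in Cantera (version 3.x API assumed): GRI-3.0 stands in for the POLIMI mechanism, which is not bundled with Cantera, and the operating point is one illustrative choice from the ranges above:

```python
import numpy as np
import cantera as ct

gas = ct.Solution("gri30.yaml")       # stand-in for the POLIMI mechanism [8]
T, p = 1100.0, ct.one_atm             # one point in the 950-1200 K range
# 0.3% CH4 at equivalence ratio 1 (CH4:O2 = 1:2), heavily diluted in N2.
gas.TPX = T, p, {"CH4": 0.003, "O2": 0.006, "N2": 0.991}

inlet, exhaust = ct.Reservoir(gas), ct.Reservoir(gas)
jsr = ct.IdealGasReactor(gas, energy="on", volume=8.5e-5)   # ~85 cm3 vessel

tau = 1.5                              # residence time, s
mfc = ct.MassFlowController(inlet, jsr, mdot=lambda t: jsr.mass / tau)
ct.PressureController(jsr, exhaust, primary=mfc, K=1e-5)

net = ct.ReactorNet([jsr])
times = np.arange(0.01, 60.0, 0.01)    # long enough to reach a limit cycle
x_co = []
for t in times:
    net.advance(t)
    x_co.append(jsr.thermo["CO"].X[0])  # probe one species, e.g. CO

# Sustained peak-to-trough amplitude over the last 20 s indicates oscillation.
tail = x_co[-2000:]
print("CO oscillation amplitude:", max(tail) - min(tail))
```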
- …