273 research outputs found

    Galactic star formation in parsec-scale resolution simulations

    The interstellar medium (ISM) in galaxies is multiphase and cloudy, with stars forming in the very dense, cold gas found in Giant Molecular Clouds (GMCs). Simulating the evolution of an entire galaxy, however, is a computational problem spanning many orders of magnitude, so many simulations cannot reach densities high enough or temperatures low enough to resolve this multiphase nature. The formation of GMCs is therefore not captured, and the resulting gas distribution is smooth, contrary to observations. We investigate how star formation (SF) proceeds in simulated galaxies when we obtain parsec-scale resolution and more successfully capture the multiphase ISM. Both major mergers and the accretion of cold gas via filaments are dominant contributors to a galaxy's total stellar budget, and we examine SF at high resolution in both of these contexts. Comment: 4 pages, 4 figures. To appear in the proceedings of IAU Symposium 270: Computational Star Formation (eds. Alves, Elmegreen, Girart, Trimble).

    An MDE-based framework to support the development of Mixed Interactive Systems

    In the domain of Human-Computer Interaction (HCI), recent advances in sensors, communication technologies, miniaturization and computing capabilities have led to new and advanced forms of interaction. Among them, Mixed Interactive Systems (MIS) form a class of interactive systems that comprises augmented reality, tangible interfaces and ambient computing; MIS aim to take advantage of the physical and digital worlds to promote a more transparent integration of interactive systems with the user's environment. Due to the constant change of technologies and the multiplicity of these interaction forms, specific development approaches have been devised. As a result, numerous taxonomies, frameworks, APIs and models have emerged, each one covering a specific and limited aspect of the development of MIS. To support a coherent use of these multiple development resources and contribute to the increasing popularity of MIS, we have developed a framework based on Model-Driven Engineering (MDE). The goal is to take advantage of MDE standards, methodology and tools to support the manipulation of complementary Domain-Specific Languages (DSLs), to organize and link the use of different design and implementation resources, and to ensure a rationalized implementation based on design choices. In this paper, we first summarize existing uses of MDE in HCI before focusing on five major benefits MDE can provide in an MIS development context. We then detail which MDE tools and resources support these benefits and thus form the pillars of the success of an MDE-based MIS development approach. Based on this analysis, we introduce our framework, called Guide-Me, and illustrate its use through a case study. This framework includes two design models, and model transformations link one model to another; as a result, the framework's coverage extends from the earliest design step to a software component-based prototyping platform. A toolset based on the Eclipse Modeling Framework (EMF) that supports the use of the framework is also presented. We finally assess our MDE-based development process for MIS against the five major MDE benefits.

    Why has the American Trade Balance Continued Deteriorating since 2002 despite the Depreciation of the U.S. Dollar?

    The American trade deficit has grown continually since 2002, reaching a record high in 2005, even though the U.S. dollar's four-year decline has had a positive impact, estimated at nearly 1% of GDP, as expected from the J-curve mechanism. Improved non-oil terms of trade and the cyclical divergence between the United States and the rest of the world have also been favourable to the American trade balance. However, other factors have offset these positive effects: the United States' energy bill and the initially low import coverage ratio. The remaining source of deficit growth is due to unexplained factors such as "non-price competitiveness". The American trade balance is expected to remain at the same level as in 2006. Simulations suggest that in the short term, only a slowdown of the American economy (-1% growth) could lead to a significant reduction in its trade deficit (by -0.4% in 2007). Keywords: trade deficit, J-curve, trade elasticities, exchange-rate pass-through, US dollar, global imbalances.

    Impacts of spatial and temporal resolutions on the near-optimal spaces of energy system optimisation models

    Over the past years, the rising penetration of renewable energy in power systems has led to the need for more detailed energy system models. Specifically, spatial and temporal resolutions have become increasingly important, and multiple studies have investigated their impact on the optimal solutions of energy system optimisation problems. However, these studies have yet to be conducted for near-optimal solutions, which can provide valuable insights to decision-makers. This paper aims to initiate this research by examining the effects of spatial and temporal resolutions on the values of necessary conditions for near-optimality. In particular, we investigate how spatiotemporal resolution changes affect minimal capacity investments in the European electricity grid. Our analysis leads to three key observations. First, minimal capacities for near-optimality exhibit trends similar to optimal capacities when each resolution varies. Second, the resolutions that result in higher optimal capacities are also the ones where minimal capacities deviate the least from the optimal capacities. Third, as a consequence of the second observation, spatial or temporal resolution changes have a greater impact on minimal capacities for near-optimality than on optimal capacities. We conclude by suggesting ways to expand this research track and gain a deeper understanding of the impact of spatiotemporal resolution on near-optimal spaces.

    Computing Necessary Conditions for Near-Optimality in Capacity Expansion Planning Problems

    In power systems, large-scale optimisation problems are extensively used to plan for capacity expansion at the supranational level. However, their cost-optimal solutions are often not directly exploitable by decision-makers, who are rather looking for features of solutions that can accommodate their different requirements. This paper proposes a generic framework to address this problem. It is based on the concept of the epsilon-optimal feasible space of a given optimisation problem and the identification of necessary conditions over this space. The framework is first developed in the generic case, and a solution approach is then described for the specific case where the conditions are constrained sums of variables. The approach is tested on a case study of capacity expansion planning for the European electricity network, determining necessary conditions on the minimal investments in transmission, storage and generation capacity.
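    The epsilon-optimal-space idea described in this abstract can be sketched on a toy linear program (this is an illustration of the concept, not the paper's code or case study): solve the problem once for the optimal cost f*, append the constraint "cost ≀ (1 + Δ) f*", and then minimise a chosen sum of variables over the resulting near-optimal space. The two-variable "capacity" LP below is entirely hypothetical.

    ```python
    # Sketch of a necessary condition over an epsilon-optimal feasible space.
    # Toy LP: min 2*x1 + x2  s.t.  x1 + x2 >= 4, x >= 0 (hypothetical capacities).
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([2.0, 1.0])         # cost coefficients of the two capacities
    A_ub = np.array([[-1.0, -1.0]])  # encodes x1 + x2 >= 4 as -x1 - x2 <= -4
    b_ub = np.array([-4.0])

    base = linprog(c, A_ub=A_ub, b_ub=b_ub)   # cost-optimal solution
    f_star = base.fun                         # optimal cost f*

    eps = 0.1                                 # allow 10% cost slack
    A_eps = np.vstack([A_ub, c])              # add the row  c^T x <= (1+eps) f*
    b_eps = np.append(b_ub, (1 + eps) * f_star)

    # Necessary condition on x2: the minimal value of x2 over ALL epsilon-optimal
    # solutions; every near-optimal plan must invest at least this much in x2.
    cond = linprog(np.array([0.0, 1.0]), A_ub=A_eps, b_ub=b_eps)
    min_x2 = cond.fun
    ```

    In this toy case the optimal plan has x2 = 4, but every plan within 10% of the optimal cost still needs x2 ≄ 3.6, which is the kind of minimal-investment statement the framework extracts.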

    DynEmo: A video database of natural facial expressions of emotions.

    DynEmo is a database available to the scientific community (https://DynEmo.liglab.fr/). It contains dynamic and natural emotional facial expressions (EFEs) displaying subjective affective states rated by both the expresser and observers. Methodological and contextual information is provided for each expression. This multimodal corpus meets psychological, ethical, and technical criteria. It is quite large, containing two sets of 233 and 125 recordings of EFEs of ordinary Caucasian people (ages 25 to 65, 182 females and 176 males) filmed in natural but standardized conditions. In Set 1, EFE recordings are associated with the affective state of the expresser (self-reported after the emotion-inducing task, using dimensional, action-readiness, and emotional-label items). In Set 2, EFE recordings are associated both with the affective state of the expresser and with the timeline (continuous annotations) of observers' ratings of the emotions displayed throughout the recording. The timeline allows any researcher interested in analysing non-verbal human behavior to segment the expressions into emotions.

    Swirling around filaments: are large-scale structure vortices spinning up dark halos?

    The kinematic analysis of dark matter and hydrodynamical simulations suggests that vorticity in the large-scale structure is mostly confined to, and predominantly aligned with, its filaments, with an excess probability of 20 per cent for the angle between the vorticity and the filament direction to be lower than 60 degrees relative to random orientations. The cross sections of these filaments are typically partitioned into four quadrants of opposite vorticity sign, arising from multiple flows originating from neighbouring walls. The spins of halos embedded within these filaments are consistently aligned with this vorticity for any halo mass, with a stronger alignment for the most massive structures, up to an excess probability of 165 per cent. On large scales, adiabatic/cooling hydrodynamical simulations display the same vorticity in the gas as in the dark matter. The global geometry of the flow within the cosmic web is therefore qualitatively consistent with spin acquisition for smaller halos induced by this large-scale coherence, as argued in Codis et al. (2012). In effect, secondary anisotropic infall (originating from the vortex-rich filament within which these lower-mass halos form) dominates the angular momentum budget of these halos. The transition mass from alignment to orthogonality is related to the size of a given multi-flow region with a given polarity. This transition may be reconciled with standard tidal torque theory if the latter is augmented so as to account for the larger-scale anisotropic environment of walls and filaments. Comment: 17 pages, 19 figures, 3 tables. Accepted for publication in MNRAS.

    Type 1 diabetes and polyols: what information and advice should be given to patients? : Bachelor's thesis

    Introduction: The goal of type 1 diabetes (T1D) management is to keep blood glucose as close as possible to physiological norms, in order to avoid and/or delay short- and long-term complications. On the dietary side, the method currently recommended by the American Diabetes Association (ADA) is carbohydrate counting, which allows the insulin dose to be adjusted to the quantity of carbohydrates consumed. To do so, people with T1D rely on carbohydrate equivalents and on the "carbohydrates" entry of the nutrition label, which covers complex sugars, simple sugars and polyols. Polyols are absorbed at lower rates than other carbohydrates, and these rates vary from one polyol to another because each has its own metabolism. Diabetes learned societies currently issue no recommendation concerning polyols and T1D. In Switzerland, people with T1D count polyols like any other carbohydrate. This Bachelor's thesis aims to take a position on the effect of polyol consumption on blood glucose and insulin requirements in T1D. The results will support a scientifically grounded position so that health professionals can give suitable information and advice to the patients concerned. Methodology: We carried out a systematic literature review to observe the influence of each polyol on blood glucose and insulin requirements. We then performed a non-exhaustive analysis of products containing polyols, to estimate whether the amount of polyols present could affect blood glucose and require an adjustment of insulin doses. To do so, we computed whether the difference between the carbohydrates currently counted and the carbohydrates actually metabolised was significant (≄ 10 g), in which case the insulin dose would need to be adapted. The product analysis was based on the absorption and metabolism of polyols and on the results of our systematic literature review. Results: Polyol consumption raises blood glucose and serum insulin less than glucose and/or sucrose, or not at all, with variability from one polyol to another depending on its absorption rate and metabolism. The product analysis showed that above a consumption threshold specific to each product, the difference between counted and metabolised carbohydrates becomes significant and could change the insulin requirements of people with T1D. Perspectives: Presenting our results to the health professionals concerned would raise their awareness of the potential risk of insulin overdose when a person with T1D consumes polyols. This work is a first step that could lead diabetes learned societies to consider recommendations on polyol consumption. In the longer term, changes to labelling could be envisaged to draw the attention of people with T1D to the presence of polyols in a product and/or to state the quantity of metabolised carbohydrates. Conclusion: Our work suggests that polyols should be counted taking into account the non-absorbed fraction and their specific metabolism, in order to avoid a risk of insulin overdose. People with T1D should count metabolised carbohydrates rather than total carbohydrates. To do so, we propose subtracting from the total polyol content a polyol-specific percentage corresponding to the non-metabolised fraction, derived from the absorption rate and metabolism of each polyol.
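    The counting rule proposed in this abstract (subtract the non-metabolised fraction of each polyol from the labelled carbohydrates, and flag a ≄ 10 g gap as significant) can be sketched as follows. The metabolised fractions used here are illustrative placeholders, not values established by the thesis; any real application would need the polyol-specific rates from the literature review.

    ```python
    # Sketch of the proposed carbohydrate-counting adjustment for polyols.
    # The fractions below are HYPOTHETICAL illustrative values, not thesis results.
    ASSUMED_METABOLISED_FRACTION = {
        "maltitol": 0.5,    # placeholder: half assumed metabolised
        "erythritol": 0.1,  # placeholder: almost none assumed metabolised
    }

    def metabolised_carbs(total_carbs_g, polyols_g):
        """total_carbs_g: 'carbohydrates' from the label (includes polyols);
        polyols_g: mapping of polyol name -> grams in the portion."""
        carbs = total_carbs_g
        for name, grams in polyols_g.items():
            fraction = ASSUMED_METABOLISED_FRACTION[name]
            carbs -= grams * (1.0 - fraction)  # remove the non-metabolised share
        return carbs

    def dose_adjustment_needed(total_carbs_g, polyols_g, threshold_g=10.0):
        """True when counted minus metabolised carbohydrates reaches the
        >= 10 g significance threshold used in the product analysis."""
        gap = total_carbs_g - metabolised_carbs(total_carbs_g, polyols_g)
        return gap >= threshold_g
    ```

    For a portion labelled with 30 g of carbohydrates of which 20 g is maltitol, this sketch counts 20 g of metabolised carbohydrates, and the 10 g gap would trigger a dose review under the thesis's threshold.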

    The impact of ISM turbulence, clustered star formation and feedback on galaxy mass assembly through cold flows and mergers

    Two of the dominant channels for galaxy mass assembly are cold flows (cold gas supplied via the filaments of the cosmic web) and mergers. How these processes combine in a cosmological setting, at both low and high redshift, to produce the whole zoo of galaxies we observe is largely unknown. Indeed, there is still much to understand about the detailed physics of each process in isolation. While these formation channels have been studied using hydrodynamical simulations, here we study their impact on gas properties and star formation (SF) with some of the first simulations that capture the multiphase, cloudy nature of the interstellar medium (ISM), by virtue of their high spatial resolution (and correspondingly low temperature threshold). In this regime, we examine the competition between cold flows and a supernova (SNe)-driven outflow in a very high-redshift galaxy (z ≈ 9) and study the evolution of equal-mass galaxy mergers at low and high redshift, focusing on the induced SF. We find that SNe-driven outflows cannot reduce the cold accretion at z ≈ 9 and that SF is actually enhanced due to the ensuing metal enrichment. We demonstrate how several recent observational results on galaxy populations (e.g. enhanced HCN/CO ratios in ULIRGs, a separate Kennicutt-Schmidt (KS) sequence for starbursts, and the population of compact early-type galaxies (ETGs) at high redshift) can be explained with mechanisms captured in galaxy merger simulations, provided that the multiphase nature of the ISM is resolved. Comment: To appear in the proceedings of IAUS 277, 'Tracing the ancestry of galaxies', eds Carignan, Freeman & Combes. 4 pages, 2 figures.
