
    Accretion disc outbursts: a new version of an old model

    We have developed 1D time-dependent numerical models of accretion discs, using an adaptive grid technique and an implicit numerical scheme, in which the disc size is allowed to vary with time. The code fully resolves the cooling and heating fronts propagating in the disc. We show that models in which the radius of the outer edge of the disc is fixed produce incorrect results, from which probably incorrect conclusions about the viscosity law have been inferred. In particular, we show that outside-in outbursts are possible when a standard bimodal behaviour of the Shakura-Sunyaev viscosity parameter alpha is used. We also discuss to what extent insufficient grid resolution has limited the predictive power of previous models. We find that the global properties (magnitudes, etc.) of transient discs can be addressed by codes using a high, but reasonable, number of fixed grid points. However, the study of the detailed physical properties of the transition fronts generally requires resolutions that are out of reach of fixed-grid codes. It appears that most time-dependent models of accretion discs published in the literature have been limited by resolution effects, improper outer boundary conditions, or both.
    Comment: 13 pages, 12 figures; accepted for publication in MNRAS
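    To make the numerical point concrete, the sketch below performs a single backward-Euler (implicit) step for a linear diffusion-type disc equation on a fixed grid with fixed-value boundaries. It is a minimal illustration, not the authors' adaptive-grid code: the constant diffusion coefficient D stands in for the full viscous term, and the fixed outer boundary is precisely the simplification the paper argues against.

```python
# Minimal sketch: one implicit (backward-Euler) step for a 1D
# diffusion-type equation, a stand-in for the disc evolution equation.
# NOT the authors' code; constant D replaces the full nu(Sigma, r) term.
import numpy as np
from scipy.linalg import solve_banded

def implicit_step(sigma, dr, dt, D):
    """One backward-Euler step: solve (I - dt*D*Laplacian) sigma_new = sigma_old."""
    n = sigma.size
    r = dt * D / dr**2
    ab = np.zeros((3, n))          # banded storage for solve_banded
    ab[0, 1:] = -r                 # superdiagonal
    ab[1, :] = 1.0 + 2.0 * r       # main diagonal
    ab[2, :-1] = -r                # subdiagonal
    # Fixed-value (Dirichlet) boundaries: the edges keep their old values.
    # A fixed outer edge like this is what the paper identifies as a
    # source of incorrect outburst behaviour.
    ab[1, 0] = ab[1, -1] = 1.0
    ab[0, 1] = ab[2, -2] = 0.0
    return solve_banded((1, 1), ab, sigma)

# An implicit solve stays stable for time steps far beyond the explicit
# CFL limit, which is why such schemes can follow the slow viscous
# evolution between outbursts.
sigma = np.exp(-((np.linspace(0, 1, 200) - 0.5) ** 2) / 0.01)
for _ in range(100):
    sigma = implicit_step(sigma, dr=1 / 199, dt=1e-3, D=1.0)
```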

    Calibration and improved prediction of computer models by universal Kriging

    This paper addresses the use of experimental data for calibrating a computer model and improving its predictions of the underlying physical system. A global statistical approach is proposed, in which the bias between the computer model and the physical system is modeled as a realization of a Gaussian process. Applying classical statistical inference to this statistical model yields a rigorous method for calibrating the computer model and for adding to its predictions a statistical correction based on experimental data. This correction can substantially improve the calibrated computer model's predictions of the physical system under new experimental conditions, and a quantification of the uncertainty of the prediction is provided. Physical expertise on the calibration parameters can also be taken into account in a Bayesian framework. Finally, the method is applied to the thermal-hydraulic code FLICA 4, in a single-phase friction model framework, where it significantly improves the code's predictions.
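    The sketch below illustrates the calibrate-then-correct recipe with off-the-shelf tools: least-squares calibration of a parameter, then a Gaussian process fitted to the residual bias. The toy `simulator`, its parameter, the data, and the kernel choice are hypothetical stand-ins (the paper works with FLICA 4 and universal Kriging); this is a sketch of the idea, not the authors' implementation.

```python
# Hedged sketch of calibration + GP bias correction.
# `simulator` is a made-up stand-in for a code such as FLICA 4.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def simulator(x, theta):
    # Hypothetical computer model: inputs x, calibration parameter theta.
    return np.sin(theta * x).ravel()

x_obs = np.linspace(0, 1, 20).reshape(-1, 1)
y_obs = np.sin(2.7 * x_obs).ravel() + 0.1 * x_obs.ravel()  # "physical" data

# Step 1: calibrate theta by least squares against the experiments.
res = minimize(lambda t: np.sum((y_obs - simulator(x_obs, t[0])) ** 2),
               x0=[2.0])
theta_hat = res.x[0]

# Step 2: model the residual bias as a Gaussian-process realization.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
gp.fit(x_obs, y_obs - simulator(x_obs, theta_hat))

# Corrected prediction, with uncertainty, at a new experimental condition.
x_new = np.array([[0.5]])
bias, sd = gp.predict(x_new, return_std=True)
y_pred = simulator(x_new, theta_hat) + bias
```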

    Trans-gram, Fast Cross-lingual Word-embeddings

    We introduce Trans-gram, a simple and computationally efficient method to simultaneously learn and align word embeddings for a variety of languages, using only monolingual data and a smaller set of sentence-aligned data. We use our new method to compute aligned word embeddings for twenty-one languages, using English as a pivot language. We show that some linguistic features are aligned across languages for which we have no aligned data, even though those properties do not exist in the pivot language. We also achieve state-of-the-art results on standard cross-lingual text classification and word translation tasks.
    Comment: EMNLP 2015
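    The core idea can be sketched in a few lines: as in skip-gram, each word predicts context words, but in Trans-gram a word in one sentence predicts every word of the aligned sentence in the other language, so no word-level alignment is needed. The toy code below evaluates such a cross-lingual objective with a plain softmax over a small vocabulary; the dimensions, vocabulary sizes, and exact loss are illustrative assumptions, not the paper's training setup (the original builds on skip-gram with its usual approximations).

```python
# Toy sketch of a Trans-gram-style objective: every word of an English
# sentence must predict all words of its aligned French sentence.
# Sizes and the plain softmax are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
d, V_en, V_fr = 16, 1000, 1000
W_en = rng.normal(scale=0.1, size=(V_en, d))   # English input vectors
C_fr = rng.normal(scale=0.1, size=(V_fr, d))   # French context vectors

def transgram_loss(en_ids, fr_ids):
    """Negative log-likelihood of the French sentence given each English word."""
    loss = 0.0
    for e in en_ids:                        # every English word...
        scores = C_fr @ W_en[e]             # ...scores the full French vocab
        log_p = scores - np.log(np.sum(np.exp(scores)))  # log-softmax
        loss -= np.sum(log_p[fr_ids])       # ...and predicts all French words
    return loss / (len(en_ids) * len(fr_ids))

print(transgram_loss([3, 17, 42], [5, 99]))
```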

    Trace visualization as a support for the analysis, debugging, and optimization of high-performance computing applications

    Analyzing the behavior of software applications is an increasingly difficult task because of the growing complexity of the systems on which they run. While the analysis of embedded systems must deal with a complex software stack, the analysis of parallel systems must cope with the scale of their hardware architecture and with their non-determinism. Visualizing traces collected while applications run on these platforms has become a widespread way for analysis tools to address these issues. A wide range of techniques now exists, distinguished by the amount of information, the scale of the systems, or the behaviors they are able to represent. We propose a survey of the state of the art, discussing statistical, behavioral, and structural visualizations of an application, as well as the techniques that allow the analysis to scale.

    Cognitive specialization: local competence systems

    The notions of industrial district (Becattini G., 1992), local industrial system (Raveyre M.-F., Saglio J., 1984), cluster (Jacobs D., De Jong M. W., 1992; Porter M., 2000), and competitiveness cluster, or "pôle de compétitivité" (Blanc C., 200?), designate sets of firms that are specialized in the same type of product or industry, concentrated in a restricted area, and whose relations combine competition and cooperation. These firms sometimes use or share specialized infrastructures set up by public or private institutions.

    Temporal aggregation for the analysis of large traces

    Analyzing an application's execution trace is difficult when the trace contains a large number of events. The main limitation is the available screen area, in particular with techniques that represent both resources and time. The Gantt chart, used by analysts to understand causality relations and the structure of the application, is one example. To provide the temporal overview of a trace that current techniques lack, we propose a new technique, implemented in the Ocelotl tool. It enables a macroscopic temporal analysis that is not hindered by the display of a large number of resources. It represents the behavior of the application over time by aggregating the parts of the trace where the resources behave homogeneously. This aggregation is tuned dynamically by the user, who chooses a trade-off between complexity and information loss.
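    As an illustration of the aggregation principle, the sketch below greedily merges adjacent time slices whose per-resource activity profiles stay close, with a threshold playing the role of the user-chosen trade-off. This is a simplified stand-in, not Ocelotl's actual algorithm, which relies on a principled measure of information loss to choose the aggregation.

```python
# Simplified sketch of time-slice aggregation: merge adjacent slices
# whose activity profiles stay near the running mean of the group.
# Illustration only; not Ocelotl's information-loss criterion.
import numpy as np

def aggregate_slices(activity, threshold):
    """activity: (n_slices, n_resources) array, e.g. time spent per state.
    Returns (start, end) index ranges of the merged slices."""
    ranges, start = [], 0
    current = activity[0].astype(float)
    count = 1
    for i in range(1, len(activity)):
        mean = current / count
        if np.linalg.norm(activity[i] - mean) <= threshold:
            current += activity[i]          # close enough: extend the group
            count += 1
        else:
            ranges.append((start, i))       # behaviour changed: cut here
            start, current, count = i, activity[i].astype(float), 1
    ranges.append((start, len(activity)))
    return ranges

# A low threshold keeps fine detail; a high one yields a coarse overview.
trace = np.array([[1, 0], [1, 0], [0.9, 0.1], [0, 1], [0, 1]])
print(aggregate_slices(trace, threshold=0.3))   # -> [(0, 3), (3, 5)]
```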

    Vero SF technology platform: Strategy for rapid and effective vaccine development; flavivirus vaccines case study

    As a world leader in vaccine development, Sanofi Pasteur has acquired strong expertise in the development and manufacturing of Vero SF-based vaccines, including vaccines against diseases of the Flavivirus family. To develop this innovative platform and provide a fast response to new viral epidemics, vaccine manufacturing process development strategies have evolved considerably over the past decade. Toolboxes dedicated to high-throughput development have been designed and optimized to provide rapid response and effective vaccine availability. For the sake of speed and efficiency, development strategies have been reorganized into platforms. These platforms, such as screening, modelling, process monitoring, and bioreactors, have themselves been completely redesigned to allow fast implementation of all development phases up to industrial scale. Major investments have been made in automated bioreactors and on-line analytics that enable high-throughput studies supporting process definition and product characterization, ultimately moving to scale-up and clinical manufacturing. Additional focus has been placed on defining chemically defined media suitable for both cell culture and viral amplification, allowing more generic development and simpler process optimization. Part of the platform consists of Ambr 15 and Ambr 250 automated bioreactors, implemented and combined with scale-up models up to 200 liters. These models have been characterized to reduce the time dedicated to scale-up studies and validation. Overall development timelines have been greatly reduced, and the optimization phases shortened by a few months.
    Funding: Sanofi Pasteur