Magnetic field evolution of the K2 dwarf V471 Tau
Observations of the eclipsing binary system V471 Tau show that the time of
the primary eclipses varies in an apparently periodic way. With growing evidence
that the magnetically active K2 dwarf component might be responsible for
driving the eclipse timing variations (ETVs), it is necessary to monitor the
star throughout the predicted ~ 35 yr activity cycle that putatively fuels the
observed ETVs. We contribute to this goal by analysing
spectropolarimetric data obtained with ESPaDOnS at the Canada-France-Hawaii
Telescope in December 2014 and January 2015. Using Zeeman-Doppler Imaging, we
reconstruct the distribution of brightness inhomogeneities and large-scale
magnetic field at the surface of the K2 dwarf. Compared to previous tomographic
reconstructions of the star carried out with the same code, we probe a new
phase of the ETVs cycle, offering new constraints for future works exploring
whether a magnetic mechanism operating in the K2 dwarf star is indeed able to
induce the observed ETVs of V471 Tau. Comment: 12 pages, 10 figures, submitted to MNRAS
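The kind of eclipse-timing model that such monitoring constrains can be sketched, assuming a single sinusoidal modulation of the observed-minus-calculated (O-C) eclipse times; the amplitude A, phase \varphi and modulation period P_mod ~ 35 yr are free parameters and are not quoted in the abstract:

\[
(O-C)(E) \;=\; \Delta T_0 \;+\; \Delta P\, E \;+\; A \sin\!\left( \frac{2\pi\, P_{\mathrm{orb}}\, E}{P_{\mathrm{mod}}} + \varphi \right),
\]

where E is the eclipse cycle number and P_orb the orbital period; each newly observed phase of the ~35 yr cycle tightens the constraints on A and P_mod.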
Generation of vortex lattices at the liquid-gas interface using rotating surface waves
In this paper, we demonstrate experimentally that by generating two orthogonal standing waves at the liquid surface, one can control the motion of floating microparticles. The mechanism of the vortex generation is somewhat similar to the classical Stokes drift in linear progressive waves. By adjusting the relative phase between the waves, it is possible to generate a vortex lattice, seen as a stationary horizontal flow consisting of counter-rotating vortices. Two orthogonal waves which are phase-shifted by π/2 create locally rotating waves. Such waves induce nested circular drift orbits of the surface fluid particles. This configuration allows particles to be trapped within cells whose size is about half the wavelength of the standing waves. By changing the relative phase, it is possible to either create or destroy the vortex crystal. This method creates an opportunity to confine surface particles within cells, or to greatly increase mixing of surface matter over the wave field.
This work was supported by the Australian Research Council's Discovery Projects funding scheme DP160100863 and Linkage Projects funding scheme LP160100477. H.X. acknowledges support from the Australian Research Council's Future Fellowship (FT140100067). N.F. acknowledges support from the Australian Research Council's DECRA award (DE160100742).
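A minimal sketch of the superposition described above, assuming two orthogonal standing waves of equal amplitude a, wavenumber k and angular frequency ω, phase-shifted by π/2:

\[
\eta(x,y,t) \;=\; a\cos(kx)\cos(\omega t) \;+\; a\cos(ky)\cos\!\left(\omega t - \tfrac{\pi}{2}\right)
\;=\; a\left[\cos(kx)\cos(\omega t) + \cos(ky)\sin(\omega t)\right],
\]

so within each cell of side λ/2 = π/k the crest pattern rotates in time rather than simply pulsating; this is the locally rotating wave that drives the nested circular drift orbits and hence the vortex lattice.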
Code Translation with Compiler Representations
In this paper, we leverage low-level compiler intermediate representations
(IR) to improve code translation. Traditional transpilers rely on syntactic
information and handcrafted rules, which limits their applicability and
produces unnatural-looking code. Applying neural machine translation (NMT)
approaches to code has successfully broadened the set of programs on which one
can get a natural-looking translation. However, these approaches treat code as
sequences of text tokens and still fail to differentiate between similar pieces
of code that have different semantics in different languages. The consequence is
low-quality translation, which reduces the practicality of NMT and stresses the
need for an approach that significantly increases its accuracy.
Here we propose to augment code translation with IRs, specifically LLVM IR,
with results on the C++, Java, Rust, and Go languages. Our method improves upon
the state of the art for unsupervised code translation, increasing the number
of correct translations by 11% on average, and up to 79% for the Java -> Rust
pair with greedy decoding. With beam search, it increases the number of correct
translations by 5.5% on average. We extend previous test sets for code
translation, by adding hundreds of Go and Rust functions. Additionally, we
train high-performing models for IR decompilation, i.e. generating source code
from IR, and study the use of IRs as an intermediary pivot for translation.
Comment: 9 pages
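As an illustration of the representation the method conditions on (a sketch only; llvmlite is not part of the paper's tooling and is assumed here purely to emit textual IR), the LLVM IR for a trivial add function already strips away most source-language surface syntax:

```python
# Hypothetical illustration: emit LLVM IR for a toy function with llvmlite
# (not the paper's pipeline; only meant to show what the IR looks like).
from llvmlite import ir

module = ir.Module(name="toy")
i32 = ir.IntType(32)
func = ir.Function(module, ir.FunctionType(i32, (i32, i32)), name="add")
a, b = func.args
a.name, b.name = "a", "b"

builder = ir.IRBuilder(func.append_basic_block(name="entry"))
builder.ret(builder.add(a, b, name="sum"))

# Prints textual IR such as:  %sum = add i32 %a, %b  /  ret i32 %sum
# This language-agnostic form is the extra signal used to augment NMT models.
print(module)
```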
Experiences of Medical Imaging Students and Clinical Learning in a Limited Resource Setting - A Qualitative Study in Rwanda
Purpose: This qualitative research aimed to explore the experiences of University of Rwanda medical imaging students during their clinical practice in the country.
Methods and Materials: Focus Group Discussions (FGDs) with open-ended questions were held with twenty-five medical imaging sciences students enrolled in the second and final years, respectively, of the bachelor with honors and national diploma programs.
Results: Qualitative exploratory descriptive research was conducted in March 2017 through FGDs. The recorded data were transcribed, anonymized, coded, categorized and conceptualized into four themes: theory-practice gap, teaching and learning support, occupational health and safety, and resources and infrastructure. Data were analyzed using content analysis. The findings indicate that several aspects negatively impacted the clinical experiences of medical imaging students. This information is valuable for raising awareness among medical imaging academics and practicing professionals of the challenges faced by medical imaging students in clinical practice.
Conclusion: Medical imaging students experienced a number of challenges during their clinical training in Rwanda. Based on these findings, specific recommendations are suggested with the aim of enhancing the clinical training of medical imaging students.
Optimal management of a multi-purpose hydraulic reservoir and climate change. Models, projections and uncertainties (Application to the Serre-Ponçon reservoir)
Assessing the impact of climate change on water resources, and on the management systems associated with them, is a major concern for our societies. Such an assessment requires setting up a simulation chain which, on the basis of future climate experiments, makes it possible i) to estimate the possible evolution of the resource and of its variability at the regional scale, ii) to simulate the behaviour of the systems used to manage the resource, and iii) to estimate the resulting changes in performance. This thesis tests the feasibility of setting up such a simulation chain for a real management system and identifies the components that need to be considered in this case. To do so, we seek in particular to answer the following questions: What representations of an operational management system can be used for an application under a modified climate? Which evaluation elements can be used to estimate the impact of climate change on this management system? What are the sources of uncertainty influencing this assessment, and what are the relative contributions of the different methods and models used to the total uncertainty?
We consider the management system of the Serre-Ponçon reservoir, fed by the upper Durance catchment. This dam, operated by EDF, is one of the largest artificial dams in Europe and is multi-purpose (irrigation, low-flow support, hydropower production, tourism). We first present the context of the current management system. We then set up a management model of the dam that reproduces its current management in a way that is realistic from the point of view of the current operator (EDF), yet simplified enough to be applied under future scenarios. To this end we develop i) models estimating the different water demands and ii) a constrained optimisation model of the management. This model simulates the management of the system at a daily time step over several decades of recent climate or of modified future climates. We then propose a set of indicators that estimate the performance of such a system from the outputs of the management model, obtained by simulation over different 30-year periods. We explore how the estimated performance depends on the model chosen to represent the current management system, and more specifically on how the strategy used to optimise the management is constructed. To this end, we propose three management models based on three types of strategies, obtained for different degrees of predictability of the future inflows to, and demands on, the reservoir. For these simulations, the impact models (e.g. hydrological model, water-use models, resource management model) require meteorological forcing scenarios at the catchment scale. Such scenarios can be obtained with statistical downscaling methods (SDMs) applied to large-scale simulations from global climate models (GCMs). Finally, we evaluate the uncertainties associated with these two types of models and estimate their relative contributions to the overall uncertainty, using scenarios from different GCM/SDM simulation chains produced over the period 1860-2100 within the RIWER2030 project. We show that these two sources of uncertainty are of the same order of magnitude in the estimation of the changes in performance.
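As a minimal sketch of the kind of daily mass-balance simulation and performance indicator described above (illustrative only, not EDF's operational model; the naive release rule, the variable names and the synthetic data are assumptions):

```python
import numpy as np

def simulate_reservoir(inflow, demand, capacity, s0):
    """Daily mass-balance simulation of a single multi-purpose reservoir.

    inflow, demand : arrays of daily volumes (same units as capacity).
    Returns the storage trajectory, the releases and a reliability indicator:
    the fraction of days on which the full demand could be met.
    """
    n = len(inflow)
    storage = np.empty(n + 1)
    release = np.empty(n)
    storage[0] = s0
    days_met = 0
    for t in range(n):
        available = storage[t] + inflow[t]
        release[t] = min(demand[t], available)                  # cannot release more water than is available
        storage[t + 1] = min(available - release[t], capacity)  # excess above capacity is spilled
        if release[t] >= demand[t]:
            days_met += 1
    return storage, release, days_met / n

# Example: a synthetic 30-year daily series, the horizon used for the indicators.
rng = np.random.default_rng(0)
inflow = rng.gamma(2.0, 5.0, size=30 * 365)
demand = np.full(30 * 365, 8.0)
_, _, reliability = simulate_reservoir(inflow, demand, capacity=2000.0, s0=1000.0)
print(f"demand reliability over the period: {reliability:.2f}")
```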
Anderson impurity solver integrating tensor network methods with quantum computing
Solving the Anderson impurity model typically involves a two-step process,
where one first calculates the ground state of the Hamiltonian, and then
computes its dynamical properties to obtain the Green's function. Here we
propose a hybrid classical/quantum algorithm where the first step is performed
using a classical computer to obtain the tensor network ground state as well as
its quantum circuit representation, and the second step is executed on the
quantum computer to obtain the Green's function. Our algorithm exploits the
efficiency of tensor networks for preparing ground states on classical
computers, and takes advantage of quantum processors for the evaluation of the
time evolution, which can become intractable on classical computers. We
demonstrate the algorithm using 20 qubits on a quantum computing emulator for
SrVO3 with a multi-orbital Anderson impurity model within dynamical mean-field
theory. The tensor-network-based ground-state quantum circuit preparation
algorithm can also be performed for up to 40 qubits with our available
computing resources, while the state vector emulation of the quantum algorithm
for time evolution is beyond what is accessible with such resources. We show
that, provided the tensor network calculation is able to accurately obtain the
ground state energy, this scheme does not require a perfect reproduction of the
ground state wave function on the quantum circuit to give an accurate Green's
function. This hybrid approach may lead to quantum advantage in materials
simulations where the ground state can be computed classically, but where the
dynamical properties cannot.
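A minimal sketch of the quantity split across the two steps, using the standard zero-temperature (greater) impurity Green's function; the notation is generic and not taken from the paper:

\[
G^{>}_{\alpha}(t) \;=\; -\,i\,\langle \psi_0 |\, c_{\alpha}(t)\, c^{\dagger}_{\alpha}(0) \,| \psi_0 \rangle
\;=\; -\,i\, e^{\,i E_0 t}\, \langle \psi_0 |\, c_{\alpha}\, e^{-i \hat{H} t}\, c^{\dagger}_{\alpha} \,| \psi_0 \rangle ,
\]

so the classical tensor-network step supplies |\psi_0\rangle (and its circuit representation) together with the ground-state energy E_0, while the quantum processor evaluates the time-evolved overlap involving e^{-i\hat{H}t}, the part that becomes intractable classically. The explicit phase factor e^{iE_0 t} is consistent with the abstract's remark that an accurate E_0 matters more than a perfect reproduction of the ground-state wave function on the circuit.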
Thermalizing a telescope in Antarctica: Analysis of ASTEP observations
The installation and operation of a telescope in Antarctica present specific
challenges, in particular the need to operate at extremely cold temperatures, to
cope with rapid temperature fluctuations and to prevent frosting. Heating of
electronic subsystems is a necessity, but solutions must be found to avoid the
turbulence induced by temperature fluctuations along the optical paths. ASTEP
400 is a 40 cm Newtonian telescope installed at the Concordia station, Dome C,
since 2010 for photometric observations of fields of stars and
their exoplanets. While the telescope is designed to spread star light on
several pixels to maximize photometric stability, we show that it is
nonetheless sensitive to the extreme variations of the seeing at the ground
level (between about 0.1 and 5 arcsec) and to temperature fluctuations between
-30 degrees C and -80 degrees C. We analyze both day-time and night-time
observations and obtain the magnitude of the seeing caused by the mirrors, dome
and camera. The most important effect arises from the heating of the primary
mirror, which gives rise to a mirror seeing of 0.23 arcsec K^-1. We propose
solutions to mitigate these effects. Comment: Appears in Astronomical Notes /
Astronomische Nachrichten, Wiley-VCH Verlag, 2015, pp. 1-2
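As a worked illustration of that coefficient (the temperature difference used here is an assumed value, not one quoted in the abstract):

\[
\epsilon_{\mathrm{mirror}} \approx 0.23\ \mathrm{arcsec\,K^{-1}} \times \Delta T,
\qquad \Delta T = 2\ \mathrm{K} \;\Rightarrow\; \epsilon_{\mathrm{mirror}} \approx 0.46\ \mathrm{arcsec},
\]

already several times the best ground-level seeing of about 0.1 arcsec quoted above, which is why the heating of the primary mirror dominates the instrumental seeing budget.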