
    Development of a general time-dependent absorbing potential for the constrained adiabatic trajectory method

    The Constrained Adiabatic Trajectory Method (CATM) allows us to compute solutions of the time-dependent Schr\"odinger equation using the Floquet formalism and Fourier decomposition, with matrix manipulations within a non-orthogonal basis set, provided that suitable constraints can be applied to the initial conditions for the Floquet eigenstate. A general form is derived for the inherent absorbing potential, which can reproduce any dispersed boundary conditions. This new artificial potential, acting over an additional time interval, transforms any wavefunction into a desired state, with an error involving exponentially decreasing factors. Thus a CATM propagation can be separated into several steps to limit the size of the required Fourier basis. This approach is illustrated by some calculations for the H$_2^+$ molecular ion illuminated by a laser pulse. Comment: 8 pages, 7 figures.

    Global integration of the Schr\"odinger equation within the wave operator formalism: The role of the effective Hamiltonian in multidimensional active spaces

    A global solution of the Schr\"odinger equation, obtained recently within the wave operator formalism for explicitly time-dependent Hamiltonians [J. Phys. A: Math. Theor. 48, 225205 (2015)], is generalized to take into account the case of multidimensional active spaces. An iterative algorithm is derived to obtain the Fourier series of the evolution operator issuing from a given multidimensional active subspace; the effective Hamiltonian corresponding to the model space is then computed and analysed as a measure of the cyclic character of the dynamics. Studies of the laser-controlled dynamics of diatomic models clearly show that a multidimensional active space is required if the wavefunction escapes too far from the initial subspace. A suitable choice of the multidimensional active space, including the initial and target states, increases the cyclic character and avoids the divergences occurring when one-dimensional active spaces are used. The method is also proven to be efficient in describing dissipative processes such as photodissociation. Comment: 33 pages, 11 figures.

    Constrained Adiabatic Trajectory Method (CATM): a global integrator for explicitly time-dependent Hamiltonians

    The Constrained Adiabatic Trajectory Method (CATM) is reexamined as an integrator for the Schr\"odinger equation. An initial discussion places the CATM in the context of the different integrators used in the literature for time-independent or explicitly time-dependent Hamiltonians. The emphasis is put on adiabatic processes, and within this adiabatic framework the interdependence between the CATM, the wave operator, the Floquet and the (t,t') theories is presented in detail. Two points are then analysed in more detail and illustrated by a numerical calculation describing the H$_2^+$ ion subjected to a laser pulse. The first point is the ability of the CATM to dilate the Hamiltonian spectrum and thus to make possible the perturbative treatment of the equations defining the wave function, possibly complemented by a Krylov subspace approach. The second point is the ability of the CATM to handle extremely complex time dependences, such as those which appear when interaction representations are used to integrate the system. Comment: 15 pages, 14 figures.
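As a point of reference for the integrators discussed above, the conventional step-by-step alternative to a global method can be sketched in a few lines: a time-ordered product of short-time propagators exp(-iH(t)dt) applied to a driven two-level model. This is a generic illustration, not the CATM itself; the Hamiltonian, the pulse shape and all parameter values are arbitrary choices made for the sketch.

```python
# Illustrative sketch (not the CATM): step-by-step integration of the
# time-dependent Schrodinger equation for a driven two-level system,
# the kind of reference propagation a global integrator is compared with.
# All parameters (omega0, E0, pulse shape) are invented illustration values.
import numpy as np
from scipy.linalg import expm

omega0 = 1.0   # level spacing (arbitrary units)
E0 = 0.2       # peak field amplitude
T = 40.0       # pulse duration
H0 = np.diag([0.0, omega0])              # field-free Hamiltonian
mu = np.array([[0.0, 1.0], [1.0, 0.0]])  # dipole coupling

def H(t):
    """Hamiltonian with a sin^2 laser pulse envelope."""
    envelope = np.sin(np.pi * t / T) ** 2 if 0.0 <= t <= T else 0.0
    return H0 - E0 * envelope * np.cos(omega0 * t) * mu

# Time-ordered product of short-time propagators exp(-i H(t) dt)
dt = 0.01
psi = np.array([1.0, 0.0], dtype=complex)  # start in the ground state
for n in range(int(T / dt)):
    psi = expm(-1j * H((n + 0.5) * dt) * dt) @ psi

print("norm:", abs(np.vdot(psi, psi)))  # unitary steps preserve the norm
print("excited-state population:", abs(psi[1]) ** 2)
```

Since each short-time propagator is exactly unitary for a Hermitian H, the norm stays at 1 to machine precision; the cost of resolving a rapidly varying H(t) with a small dt is precisely what motivates global approaches such as the CATM.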

    A methodology for detailed reach-by-reach contamination analysis of the St. Lawrence River by numerical modelling: the case of Lake Saint-Pierre

    Within the framework of the St. Lawrence Action Plan, a detailed reach-by-reach contamination analysis methodology based on numerical modelling was developed. A simulation method using the random walk of particles served to build the PANACHE software. Concentrations are obtained in post-processing by assigning a contaminant mass to the model particles. The velocity fields used to compute particle movements are produced with a two-dimensional finite element model. A new approach to contamination analysis is proposed, inspired by the micro-habitat modelling methodology popular in hydrobiology. The result takes the form of Weighted Unusable Areas (WUA), i.e., surfaces where certain water quality criteria are not met within the mixing zones. This computerized system was developed on an INTEL/386-486 - OS2/PM platform.

Context
The St. Lawrence Centre, part of Environment Canada, undertook a few years ago the very ambitious project of studying the toxic contamination of the St. Lawrence River. In collaboration with the Institut National de la Recherche Scientifique - Eau, a sub-project based on numerical modelling was defined in order to analyze contaminant propagation from industrial and municipal effluents into the river system.

Goals
The specific goals of the project were the following: 1) to provide a precise quantification of contaminant concentrations in the effluent plume at a convenient scale; 2) to analyze areas influenced by the main tributaries and the different water masses entering the river reach; 3) to map and quantify areas as compared to water quality criteria; 4) to provide a method to select relevant hydrological events as a significant part of the analysis framework.

Methodology
Some basic choices were made at the beginning of the project: 1) the analysis framework emphasizes the instream water quality instead of the effluent water quality; 2) numerical modelling was the main tool used to evaluate the water quality; 3) as far as possible, references to public regulations were incorporated; 4) a strong complementarity of different computer tools was favoured: geographical information systems, database management systems, simulation models; 5) the numerical solution method for the transport-diffusion model is typically Lagrangian: the Random Walk Method; 6) the contamination analysis uses the so-called « Weighted Unusable Area » method to quantify areas that do not respect certain water quality criteria.

A typical contamination analysis project based on numerical modelling includes the following steps (fig. 2): 1) a preliminary study to determine the main characteristics of the problem and to choose the best strategy to analyze it; 2) field measurements essential to the calibration and validation of the computer model; 3) hydrodynamic modelling, which provides the basic data on the flow field; this step includes the calibration and validation of the model, as well as the prediction of the flow fields corresponding to well-defined, contamination-relevant hydrological events; 4) hydrological analysis, which identifies the relevant flow events that will further be used in the model prediction; this approach allows standardization of this very important input data set and avoids arbitrary choices of flow field; 5) transport-diffusion modelling, the main step: it provides the chemical species concentrations downstream from the effluent discharge and affords an estimate of the overall water quality of the reach, as influenced by the main tributaries; this step includes the calibration and validation of the model, which precede the prediction exercise; 6) contamination analysis, which necessitates the choice of appropriate and relevant water quality criteria; to implement this step we propose a new approach, inspired by the Instream Flow Incremental Methodology often used to define the quality and availability of fish habitat in river reaches.

Numerical methods
As previously mentioned, the project included the development of a Lagrangian model to simulate the transport of solutes in a two-dimensional steady-state river flow; we will emphasize this point. The main objective of the software development was to provide an efficient and user-friendly management tool for the public agencies. Many analytical test cases helped in the choice of the best numerical algorithms and non-physical parameters, and in the validation of the computer code. Furthermore, the results of two dye tracing experiments performed in conjunction with airborne remote sensing techniques provided data to validate the model on the St. Lawrence River (figs. 5, 6, 7 and 8 illustrate simulation results corresponding to the different tasks mentioned previously). In the next paragraphs, we summarize the basic mathematical and numerical concepts implemented in the simulations.

To simulate solute transport in water media (porous or free surface), one usually uses Eulerian methods, which lead directly to concentration values. The solution algorithm presented here is instead based on a Lagrangian method, which offers explicit control over the additional numerical diffusion associated with every discretization method. This approach, also called the Random Walk Method (illustrated in fig. 3) or Particle Tracking Method, is more and more often used to solve hyperbolic equations. So far, the literature does not provide many applications of this method to solute transport in free surface flow; oil spill modelling is a domain where many applications have been reported.

The propagation of solute matter in free surface flow is mathematically described by momentum, mass and solute conservation equations. Since the Random Walk solution of the transport-diffusion equation (equ. 1) requires hydrodynamic data to calculate the mean transport along streamlines together with dispersion, independent simulations providing the necessary flow field data (velocities, diffusivities, depths) have to be performed before undertaking the transport-diffusion tasks. For this purpose, the shallow water equations, derived from the Navier-Stokes equations, have become a well-known tool to represent flow fields in shallow waters. However, one should be aware of some often neglected but important aspects of such models, such as moving boundaries and turbulence closure.

Solution techniques
Two main goals were kept in mind during the implementation of the various algorithms: precision of results and fast computation. The following choices were made to achieve these objectives: 1) a finite element discretization and solution method provides and carries the hydrodynamic information, but particles are tracked on a finite-difference grid (mixed discretization principle); 2) the convective component of the movement is realized by moving the grid instead of the particles (shifted grid principle); 3) the computation of concentrations optimizes smoothing while minimizing artificial diffusion (controlled diffusive smoothing principle); 4) when a section of the plume has reached a steady-state « regime », it is not necessary to continue the simulation on that section to proceed downstream; the simulation is divided into almost independent sections (convolution principle); 5) the particles have an a priori nondimensional weight, and a unit concentration is calculated from these (unit plume principle); 6) the real concentration is linearly dependent on the pollutant loads introduced into the milieu (linearity principle).

The Weighted Unusable Area method
The Weighted Unusable Area method provides a convenient means to compare effluent plume water quality to water quality criteria, as well as to quantify areas that do not comply with them. A comparable method is widely used to define the quality and availability of fish habitat downstream from regulation reservoirs, with the purpose of establishing minimum guaranteed flow discharges to protect target species (the Instream Flow Incremental Methodology: IFIM). The method consists essentially of computing areas within the analysis domain weighted by a factor that represents the exceedance of certain water quality criteria.

Among the different options considered to define the weighting factor, all incorporating the effective contaminant concentration, we defined the following: 1) the ratio of the concentration to the water quality criterion, without consideration of exceedance or compliance; 2) a weighting factor equal to 1 only if the concentration exceeds the criterion (non-compliance); 3) option #1, but using the concentration results corresponding only to the effluent plumes, excluding the ambient water quality of the reach; this emphasizes individual corporate responsibility (proposed for implementation); 4) option #1, but with the ratio raised to a power « n », a procedure that emphasizes the non-linear increase of toxicity related to the exceedance of the criterion (could be useful for academic purposes).

We also propose a Global Weighted Unusable Area concept to combine all the different chemical species present in an effluent plume. The combination is made possible using the specific criterion corresponding to each species. This procedure leads to a new state variable that represents Contamination Standard Units.
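The Random Walk transport step described above can be sketched under strong simplifications (uniform steady velocity field, constant isotropic diffusivity, instantaneous release at the outfall). This is an illustration of the general method, not the PANACHE implementation, and all numerical values are invented.

```python
# Minimal sketch of the Random Walk (particle tracking) solution of the
# transport-diffusion equation: advect each particle with the local velocity
# and add a Gaussian step whose variance matches the diffusivity.
# Uniform flow, isotropic diffusivity; all values are invented.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 20000
u = np.array([0.5, 0.0])  # steady, uniform velocity field (m/s)
D = 0.1                   # isotropic diffusivity (m^2/s)
dt = 1.0                  # time step (s)
n_steps = 50

# All particles released at the effluent outfall (origin)
pos = np.zeros((n_particles, 2))
for _ in range(n_steps):
    pos += u * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=pos.shape)

# Post-processing: concentrations on a grid, each particle carrying a mass
m_total = 1.0  # total released mass (kg)
hist, xedges, yedges = np.histogram2d(
    pos[:, 0], pos[:, 1], bins=(50, 20),
    range=[[0.0, 50.0], [-10.0, 10.0]])
cell_area = (xedges[1] - xedges[0]) * (yedges[1] - yedges[0])
conc = hist * (m_total / n_particles) / cell_area  # kg/m^2, depth-averaged

print("plume centroid x:", pos[:, 0].mean())  # advection: u_x * n_steps * dt
```

The Gaussian step standard deviation sqrt(2 D dt) is what makes the ensemble of particles reproduce Fickian dispersion; concentration is only a post-processed diagnostic, which is the explicit control over numerical diffusion mentioned in the summary.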
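The weighting options of the Weighted Unusable Area method can likewise be sketched on a toy concentration grid; the field values, the criterion and the cell area are invented for illustration, and the option numbering follows the summary above.

```python
# Sketch of the Weighted Unusable Area computation: given simulated
# concentrations on grid cells of known area, weight each cell by an
# exceedance factor and sum. All values are invented for illustration.
import numpy as np

conc = np.array([[0.2, 0.8, 1.6],
                 [0.1, 1.2, 2.4]])  # simulated concentrations (mg/L)
criterion = 1.0                     # water quality criterion (mg/L)
cell_area = 100.0                   # area of each grid cell (m^2)

ratio = conc / criterion

# Option 1: weight = concentration / criterion, regardless of compliance
wua_1 = (ratio * cell_area).sum()

# Option 2: weight = 1 only where the criterion is exceeded (non-compliance)
wua_2 = ((ratio > 1.0) * cell_area).sum()

# Option 4: option 1 with the ratio raised to a power n, emphasizing the
# non-linear increase of toxicity beyond the criterion
n = 2
wua_4 = (ratio ** n * cell_area).sum()

print(wua_1, wua_2, wua_4)
```

Option 3 would use the same formula as option 1 but with a concentration field restricted to the effluent plume alone, so it needs no separate code path, only a different input field.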

    Epistemic Logic Programs with World View Constraints

    An epistemic logic program is a set of rules written in the language of Epistemic Specifications, an extension of the language of answer set programming that provides more powerful introspective reasoning through the use of the modal operators K and M. We propose adding a new construct to Epistemic Specifications, called a world view constraint, that provides a universal device for expressing global constraints in the various versions of the language. We further propose the use of subjective literals (literals preceded by K or M) in rule heads as syntactic sugar for world view constraints. Additionally, we provide an algorithm for finding the world views of such programs.

    Dirac Particles in a Gravitational Field

    The semiclassical approximation for the Hamiltonian of Dirac particles interacting with an arbitrary gravitational field is investigated. The time dependence of the metric leads to new contributions to the in-band energy operator in comparison with previous works on the static case. In particular, we find a new coupling term between the linear momentum and the spin, as well as couplings which contribute to the breaking of the particle-antiparticle symmetry.

    Mapping the unit damage risk (CRUE) from flooding for single-family residences in Québec

    Currently, none of the existing so-called flood risk mapping methods makes it possible to establish flood risk precisely and quantifiably at every point of the territory while simultaneously considering the constituent elements of risk, namely the hazard and the vulnerability. The mapping method presented here fills this need by meeting the following criteria: ease of use, consultation and application; spatially distributed results; simplicity of updating; applicability to various types of residences. The method uses a unit formulation of risk based on distributed damage rates related to various open-water flood return periods. These are first computed from the submersion depths deduced from the topography, from the water levels for representative return periods, and from the residential settlement mode (presence of a basement, mean first-floor elevation). The unit risk is then obtained by integrating the product of the increasing damage rate by its increment of exceedance probability. The result is a map representing the risk as a mean annual direct damage percentage. A pilot study on a reach of the Montmorency River (Québec, Canada) showed that the maps are expressive and flexible, and can receive all the additional processing permitted by a GIS such as the MODELEUR/HYDROSIM software developed at INRS-ETE, the tool used for this research. Finally, the interpretation on the Montmorency of the flood maps currently in force in Canada (the 20/100-year flood limits) raises questions about the risk level currently accepted in the regulations, especially when compared with municipal taxation rates.

Public managers of flood risks need simple and precise tools to deal with this problem and to minimize its consequences, especially for land planning and management.
Several methods exist that produce flood risk maps and help to restrict the building of residences in flood plains. For example, the current method in Canada is based on the delineation in flood plains of two regions corresponding to floods of 20- and 100-year return periods (CONVENTION CANADA/QUÉBEC, 1994), mostly applied to ice-free flooding conditions. The method applied by the Federal Emergency Management Agency (FEMA, 2004) is also based on the statistical structure of the floods in different contexts, with a goal mostly oriented towards the determination of insurance rates. In France, the INONDABILITÉ method (GILARD and GENDREAU, 1998) seeks to match the present probability of flooding to a reduced one that the stakeholders would be willing to accept. However, considering that the commonly accepted definition of risk includes both the probability of flooding and its consequences (costs of damages), very few, if any, of the present methods can strictly be considered risk-mapping methods. The method presented hereafter addresses this gap by representing the mean annual rate of direct damage (a unit value) for different residential building modes, taking into account the flood probability structure and the spatial distribution of the submersion height, which itself reflects the topography of the flood plain, the water stage distribution, the residential settlement mode (basement or not) and the first-floor elevation of the building.
The method seeks to meet important criteria related to efficient land planning and management, including: ease of utilisation, consultation and application for managers; spatially distributed results usable in current geographical information systems (GIS maps); availability anywhere in the area under study; ease of updating; and adaptability to a wide range of residence types. The proposed method is based on a unit treatment of the risk variable that corresponds to a rate of damage, instead of an absolute value expressed in monetary units. Direct damages to the building are considered, excluding damages to furniture and other personal belongings. Damage rates are first computed as a function of the main explanatory variable, the field of submersion depths. This variable, obtained from the 2D subtraction of the terrain topography from the water stage for each reference flood event, is defined by its probability of occurrence. The mean annual rate of damage (unit risk) is obtained by integrating the field of damage rates with respect to the annual probability structure of the available flood events. The result is a series of maps corresponding to representative modes of residential settlement. The damage rate was computed with a set of empirical functional relationships developed for the Saguenay region (Québec, Canada) after the flood of 1996. These curves were presented in LECLERC et al. (2003); the set comprises four different curves representing residences with or without a basement, with a value below or above $CAD 50,000, which is roughly correlated with the type of occupation (i.e., secondary or main residence). While it cannot be assumed that these curves are generic with respect to the general situation in Canada or, more specifically, in the province of Québec, the method itself can still be applied by making use of alternate sets of submersion-damage curves developed for other specific scenarios.
Moreover, as four different functional relationships were used to represent the different residential settlement modes, four different maps have to be drawn to represent the vulnerability of the residential sector depending on the type of settlement. Consequently, as the maps are designed to represent a homogeneous mode of settlement, they represent potential future development in a given region better than the current situation. They can also be used to evaluate public policies regarding urban development and building restrictions in the flood plains. A pilot study was conducted on a reach of the Montmorency River (Québec, Canada; BLIN, 2002). It was possible to verify the compliance of the method with the proposed utilisation criteria. The method proved to be simple to use, adaptive and compatible with GIS modeling environments such as MODELEUR (SECRETAN et al., 1999), a 2D finite element modeling system designed for fluvial environments. Water stages were computed with a 2D hydrodynamic simulator (HYDROSIM; HENICHE et al., 1999a) to deal with the complexity of the river reach (a braided reach with backwaters). Given the availability of 2D results, a 2D graphic representation of the information layers can be configured, taking into account the specific needs of the stakeholders. In contexts where one-dimensional water stage profiles are computed (e.g., HEC-RAS by USACE, 1990; DAMBRK by FREAD, 1984), an extended 2D representation of these data needs to be developed in the lateral flood plains in order to achieve a 2D distributed submersion field. Among the interesting results, it was possible to compare the risk level for given modes of settlement (defined by the presence/absence of a basement and the elevation of the first floor with respect to the land topography) with current practices, based only on the delineation of the limits of the flood zones corresponding to 20/100-year return periods.
We conclude that, at least in the particular case under study, the distributed annual rate of damage seems relatively large with respect to other financial indicators for residences, such as urban taxation rates.
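The unit-risk integration described above (the damage rate integrated against its increment of annual exceedance probability) can be sketched for a single map location. The return periods and damage rates below are hypothetical and are not taken from the Montmorency study.

```python
# Sketch of the unit-risk integration at one map location: the mean annual
# rate of direct damage is the integral of the damage rate over the annual
# exceedance probability of the flood events. All values are hypothetical.
import numpy as np

return_periods = np.array([2.0, 20.0, 100.0, 1000.0])  # years
damage_rate = np.array([0.00, 0.05, 0.15, 0.40])       # fraction of building value

# Annual exceedance probability of each event
p_exceed = 1.0 / return_periods  # [0.5, 0.05, 0.01, 0.001]

# Order by decreasing exceedance probability, then integrate the damage
# rate against the probability increments (trapezoid rule)
order = np.argsort(p_exceed)[::-1]
p, d = p_exceed[order], damage_rate[order]
mean_annual_damage = float(np.sum((p[:-1] - p[1:]) * (d[:-1] + d[1:]) / 2.0))

print(f"mean annual damage rate: {100 * mean_annual_damage:.3f} % of value / year")
```

Repeating this integration on every node of the 2D submersion field, for each residential settlement mode, yields the risk maps described in the abstract.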

    Using tracer experiments to determine the transport characteristics of deep saline aquifer caprocks for carbon dioxide storage

    It is shown how a simple gas tracer technique can contribute to the determination of the transport characteristics of tight rock formations. The main parameters obtained are the intrinsic permeability and the Klinkenberg coefficient; permeabilities as low as 10^-21 m^2 are easily attainable. Some information is also gained on the diffusion characteristics and porosity. An example of application is given using caprocks from a deep saline aquifer in the Paris basin.
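The analysis behind such gas permeametry measurements can be sketched from the standard Klinkenberg relation k_app = k_inf (1 + b/P): plotting the apparent permeability against the reciprocal mean pressure recovers the intrinsic permeability and the Klinkenberg coefficient. The sketch below fits synthetic, noise-free data; it is not the paper's procedure, and the values are invented, merely chosen in the permeability range quoted in the abstract.

```python
# Sketch of a Klinkenberg analysis: the apparent gas permeability measured
# at mean pressure P follows k_app = k_inf * (1 + b / P), so a linear fit of
# k_app against 1/P gives the intrinsic permeability k_inf (intercept) and
# the Klinkenberg coefficient b (slope / intercept). Synthetic data only.
import numpy as np

k_inf_true = 1e-21  # intrinsic permeability (m^2), tight-caprock range
b_true = 4e5        # Klinkenberg coefficient (Pa), invented value

P = np.array([1e5, 2e5, 4e5, 8e5])       # mean gas pressures (Pa)
k_app = k_inf_true * (1.0 + b_true / P)  # synthetic "measurements"

# Linear fit: k_app = k_inf + (k_inf * b) * (1/P)
slope, intercept = np.polyfit(1.0 / P, k_app, 1)
k_inf = intercept
b = slope / intercept

print(f"k_inf = {k_inf:.3e} m^2, b = {b:.3e} Pa")
```

With noise-free data the fit recovers the inputs exactly; with real measurements the scatter of the points about the fitted line gives a first estimate of the uncertainty on both parameters.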

    Imaging Gold Nanoparticles in Living Cell Environments using Heterodyne Digital Holographic Microscopy

    This paper describes a microscopic imaging technique based on heterodyne digital holography in which subwavelength-sized gold colloids can be imaged in a cell environment. Surface cellular receptors of 3T3 mouse fibroblasts are labeled with 40 nm gold nanoparticles, and the biological specimen is imaged in a total internal reflection configuration with holographic microscopy. Owing to the higher scattering efficiency of the gold nanoparticles compared with that of cellular structures, accurate localization of a gold marker is obtained within a 3D mapping of the entire sample's scattered field, with a precision of 5 nm in the lateral (x, y) directions and 100 nm in the axial (z) direction, demonstrating the ability of holographic microscopy to locate nanoparticles in living cell environments.