1,941 research outputs found

    Alien Registration- Sylvain, Denis (Allagash, Aroostook County)

    Get PDF

    Attentional and Semantic Anticipations

    No full text
    Why are attentional processes important in driving anticipations? Anticipatory processes are fundamental cognitive abilities of living systems, allowing them to rapidly and accurately perceive new events in the environment and to trigger behaviors adapted to the newly perceived events. To produce anticipations adapted to sequences of various events in complex environments, the cognitive system must be able to run specific anticipations on the basis of selected relevant events. More attention must therefore be given to events potentially relevant for the living system than to less important events. What are useful attentional factors in anticipatory processes? The relevance of events in the environment depends on the effects they can have on the survival of the living system. The cognitive system must therefore be able to detect relevant events in order to drive anticipations and trigger adapted behaviors. The attention given to an event depends on (i) its external physical relevance in the environment, such as time duration and visual quality, and (ii) its internal semantic relevance in memory, such as knowledge about the event (semantic field in memory) and anticipatory power (associative strength to anticipated associates). How can we model interactions between attentional and semantic anticipations? Specific types of distributed recurrent neural networks can code temporal sequences of events as associated attractors in memory. A particular learning protocol and spike-rate transmission through synaptic associations allow the model presented here to attentionally vary the amount of activation of anticipations (by activation or inhibition processes) as a function of the external and internal relevance of the perceived events. This type of model offers a unique opportunity to account for both anticipations and attention in unified terms of neural dynamics in a recurrent network
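    The core idea of attention-modulated anticipation can be caricatured in a few lines. This is a toy sketch, not the authors' spiking attractor model: a hetero-associative table stores learned sequence transitions, and an attentional gain, the product of a hypothetical external (physical) relevance and internal (semantic) relevance factor, scales how strongly the anticipated next event is pre-activated.

```python
# Toy sketch (not the authors' recurrent spiking model): learned sequence
# transitions A->B and B->C, with attentional modulation of anticipation.

transitions = {"A": "B", "B": "C"}  # hypothetical learned associations

def anticipate(current, external_relevance, internal_relevance, strength=1.0):
    """Return (anticipated event, activation) for the current event.

    The activation of the anticipation is scaled by an attentional gain,
    the product of external and internal relevance (both assumptions here).
    """
    gain = external_relevance * internal_relevance
    nxt = transitions.get(current)
    if nxt is None:
        return None, 0.0
    return nxt, gain * strength

event, strong = anticipate("A", external_relevance=1.0, internal_relevance=0.9)
_, weak = anticipate("A", external_relevance=0.3, internal_relevance=0.9)
# Under high attention the anticipated event B is pre-activated more strongly.
```

A highly relevant perceived event thus boosts the activation of its anticipated associates, while a low-relevance event leaves them only weakly pre-activated, mirroring the activation/inhibition modulation described above.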

    Regular Temporal Cost Functions

    Get PDF
    Regular cost functions have recently been introduced as an extension of regular languages with counting capabilities. The specificity of cost functions is that exact values are not considered, only estimates. In this paper, we study the strict subclass of regular temporal cost functions, in which one may only count the number of occurrences of consecutive events. For this reason, this model is intended to measure the length of intervals, i.e., a discrete notion of time. We provide various equivalent representations for functions in this class, using automata and a 'clock-based' reduction to regular languages. We show that the conversions are much simpler to obtain, and much more efficient, than in the general case of regular cost functions. Our second aim in this paper is to use temporal cost functions as a test case for exploring the algebraic nature of regular cost functions. Following the seminal ideas of Schützenberger, this results in a decidable algebraic characterization of regular temporal cost functions inside the class of regular cost functions
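    As a hedged illustration of what "counting only consecutive events" means, a prototypical temporal cost function maps a word to the length of its longest block of consecutive occurrences of a letter, a discrete "interval length". (The letter name and the choice of function are illustrative, not taken from the paper.)

```python
# Illustrative temporal cost function: the counter may only grow on
# consecutive occurrences and is reset otherwise, so the value measured
# is the length of an interval of identical events.

def longest_consecutive_run(word, letter="a"):
    best = run = 0
    for c in word:
        run = run + 1 if c == letter else 0  # reset on any other event
        best = max(best, run)
    return best
```

A general regular cost function could, by contrast, count non-consecutive occurrences as well, which is exactly what the temporal subclass forbids.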

    Sparse + smooth decomposition models for multi-temporal SAR images

    No full text
    SAR images have distinctive characteristics compared to optical images: the speckle phenomenon produces strong fluctuations, and strong scatterers have radar signatures several orders of magnitude larger than others. We propose to use an image decomposition approach to account for these peculiarities. Several methods have been proposed in the field of image processing to decompose an image into components of different natures, such as a geometrical part and a textural part. They are generally stated as an energy minimization problem where specific penalty terms are applied to each component of the sought decomposition. We decompose temporal series of SAR images into three components: speckle, strong scatterers and background. Our decomposition method is based on a discrete optimization technique using graph cuts. We apply it to change detection tasks
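    The flavour of the three-component split can be shown on a toy 1-D signal. This is only a proxy: the paper solves a graph-cut energy minimization on 2-D image stacks, whereas the sketch below uses a local median for the smooth background, thresholds large residuals as sparse scatterers, and calls the remainder speckle (window size and threshold are made-up parameters).

```python
import statistics

# Toy 1-D proxy for the background / strong-scatterer / speckle split.
# NOT the paper's graph-cut method; it only illustrates the penalties' roles:
# smooth background, sparse scatterers, small residual fluctuations.

def decompose(signal, half_window=2, scatter_thresh=5.0):
    n = len(signal)
    background = [
        statistics.median(signal[max(0, i - half_window): i + half_window + 1])
        for i in range(n)
    ]
    residual = [s - b for s, b in zip(signal, background)]
    scatterers = [r if abs(r) > scatter_thresh else 0.0 for r in residual]
    speckle = [r - sc for r, sc in zip(residual, scatterers)]
    return background, scatterers, speckle
```

By construction the three components sum back to the input, which is the defining constraint of any such decomposition model.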

    Programmation unifiée multi-accélérateur OpenCL

    Get PDF
    The OpenCL standard provides a programming interface based on task parallelism and supported by different types of compute units (GPU, CPU, Cell, etc.). One characteristic of OpenCL is that the placement of tasks on the different compute units must be done manually. For a hybrid machine featuring, for example, a multicore processor and one or more accelerators, this constraint makes load balancing between the different units very difficult to achieve. This is particularly the case for applications in which the granularity and number of tasks vary during execution. It also follows that the scalability of an OpenCL application is limited in the context of a hybrid machine. In this paper, we propose to overcome this limitation by creating a virtual, parallel compute unit that aggregates the machine's different units. OpenCL's manual placement targets this virtual unit, and the responsibility for placement on the real units is left to a runtime system. This runtime system handles data transfers and task placement on the real units. We show that this solution greatly simplifies the programming of applications for hybrid architectures, and does so efficiently
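    The runtime's job behind the virtual unit can be sketched with a classic earliest-finish-time heuristic: the programmer submits tasks to one logical device, and a scheduler maps each task onto whichever real unit will finish it soonest. Unit names, speeds and task sizes below are invented for illustration; the actual runtime of the paper also handles data transfers, which this sketch ignores.

```python
import heapq

# Hypothetical sketch of dynamic placement over heterogeneous units.
# units: name -> relative speed; tasks: list of work sizes (made-up numbers).

def schedule(tasks, units):
    """Greedy earliest-finish-time placement; returns (placement, makespan)."""
    heap = [(0.0, name) for name in units]  # (time the unit becomes free, name)
    heapq.heapify(heap)
    placement = {}
    # Largest tasks first, so big jobs do not land late on a slow unit.
    for i, work in enumerate(sorted(tasks, reverse=True)):
        free_at, name = heapq.heappop(heap)
        finish = free_at + work / units[name]
        placement[i] = name            # i indexes the sorted task list
        heapq.heappush(heap, (finish, name))
    return placement, max(t for t, _ in heap)

units = {"gpu": 4.0, "cpu": 1.0}       # hypothetical relative speeds
placement, makespan = schedule([8, 8, 8, 8, 2], units)
```

With a 4x-faster GPU, most tasks end up on the GPU automatically, which is exactly the balancing burden the virtual unit removes from the programmer.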

    Scale Normalization for the Distance Maps AAM

    No full text
    Active Appearance Models (AAM) are often used in human-machine interaction for their ability to align faces. We propose a new normalization method for AAM, based on distance maps, in order to strengthen their robustness to differences in illumination. Our normalization does not use the photometric normalization protocol classically used in AAM and is much simpler to implement. Compared to the Distance Map AAM of {Leg06} and to other AAM implementations that use CLAHE {Zuiderveld94} normalization or gradient information, our proposition is at the same time more robust to illumination and to AAM initialization. The tests were carried out in a generalization setting: 10 persons with frontal illumination from the M2VTS database {m2vts} were used to build the AAM, and 17 persons under 21 different illuminations from the CMU database {Sim02} were used as the test base
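    The intuition behind a distance-map representation is that each pixel is described by its distance to the nearest edge pixel rather than by its grey level, which is largely invariant to global illumination changes. A minimal sketch, assuming a binary edge map as input (the connectivity, scaling and function name are illustrative, not the paper's exact normalization):

```python
from collections import deque

# Hedged sketch: multi-source BFS from all edge pixels gives a 4-connected
# distance-to-edge map, then the map is rescaled to [0, 1]. The paper's
# normalization may differ in metric and scaling; this shows the principle.

def distance_map(edges):
    """edges: non-empty 2-D list of 0/1 with at least one edge pixel."""
    h, w = len(edges), len(edges[0])
    dist = [[None] * w for _ in range(h)]
    queue = deque()
    for y in range(h):
        for x in range(w):
            if edges[y][x]:
                dist[y][x] = 0
                queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    m = max(max(row) for row in dist) or 1
    return [[d / m for d in row] for row in dist]
```

Because edge locations barely move when the lighting changes, the resulting map is a more illumination-stable input for the appearance model than raw intensities.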

    Contact resistances in trigate and FinFET devices in a Non-Equilibrium Green's Functions approach

    Full text link
    We compute the contact resistances R_c in trigate and FinFET devices with widths and heights in the 4 to 24 nm range using a Non-Equilibrium Green's Functions approach. Electron-phonon, surface-roughness and Coulomb scattering are taken into account. We show that R_c represents a significant part of the total resistance of devices with sub-30 nm gate lengths. The analysis of the quasi-Fermi level profile reveals that the spacers between the heavily doped source/drain and the gate are major contributors to the contact resistance. The conductance is indeed limited by the poor electrostatic control over the carrier density under the spacers. We then disentangle the ballistic and diffusive components of R_c, and analyze the impact of different design parameters (cross section and doping profile in the contacts) on the electrical performance of the devices. The contact resistance and variability increase rapidly when the cross-sectional area of the channel goes below ≃ 50 nm². We also highlight the role of the charges trapped at the interface between the silicon and the spacer material
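    The series decomposition discussed above is simple arithmetic: the total device resistance splits into a channel part plus the contact resistance, and R_c itself splits into a ballistic and a diffusive (scattering-limited) component. The numbers below are invented for illustration and are not values from the paper.

```python
# Illustrative arithmetic only (all values are hypothetical, not the
# paper's results): series decomposition of the device resistance.

R_ballistic = 120.0   # ohm.um, hypothetical ballistic contact component
R_diffusive = 180.0   # ohm.um, hypothetical scattering-limited component
R_channel = 250.0     # ohm.um, hypothetical channel resistance

R_c = R_ballistic + R_diffusive        # total contact resistance
R_total = R_channel + R_c              # total device resistance
contact_fraction = R_c / R_total       # share of R_total due to the contacts
```

In this toy case the contacts account for more than half of the total resistance, the kind of situation the abstract describes for short-gate devices.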

    Contemporary (1951–2001) Evolution of Lakes in the Old Crow Basin, Northern Yukon, Canada: Remote Sensing, Numerical Modeling, and Stable Isotope Analysis

    Get PDF
    This study reports on changes in the distribution, surface area, and modern water balance of lakes and ponds located in the Old Crow Basin, northern Yukon, over a 50-year period (1951–2001), using aerial photographs, satellite imagery, a numerical lake model, and stable O-H isotope analysis. Results from the analysis of historical air photos (1951 and 1972) and a Landsat-7 Enhanced Thematic Mapper (ETM+) image (2001) show an overall decrease (-3.5%) in lake surface area between 1951 and 2001. Large lakes typically decreased in extent over the study period, whereas ponds generally increased. Between 1951 and 1972, approximately 70% of the lakes increased in extent; however, between 1972 and 2001, 45% decreased in extent. These figures are corroborated by a numerical lake water balance simulation (P-E index) and stable O-H isotope analysis indicating that most lakes experienced a water deficit over the period 1988–2001. These observed trends towards a reduction in lake surface area are mainly attributable to a warmer and drier climate. The modern decrease in lake levels corresponds well to changes in regional atmospheric teleconnection patterns (Arctic and Pacific Decadal oscillations). In 1977, the climate in the region switched from a predominantly cool and moist regime, associated with the increase in lake surface area, to a hot and dry one, thus resulting in the observed decrease in lake surface area. 
Although some lakes may have drained catastrophically by stream erosion or bank overflow, it is not possible to determine with certainty which lakes experienced such catastrophic drainage, since an interval of two decades separates the two air photo mosaics, and the satellite image was obtained almost 30 years after the second mosaic of air photos
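    For reference, the surface-area change metric used in this kind of multi-date comparison is a plain percent change between two survey dates. The areas below are invented to reproduce a shrink of the same magnitude as the reported basin-wide figure.

```python
# Hypothetical illustration of the change metric (areas are made up,
# chosen only to yield a -3.5 % change like the basin-wide figure above).

def percent_change(area_start, area_end):
    return 100.0 * (area_end - area_start) / area_start

change = percent_change(200.0, 193.0)  # e.g. total lake area in km^2, 1951 vs 2001
```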

    Variational data assimilation of AirSWOT and SWOT data into the 2D shallow water model Dassflow, method and test case on the Garonne river (France)

    Get PDF
    For river hydraulic studies, water level measurements are fundamental information, yet they are currently provided mostly by gauging stations located on the main river channel. They are therefore sparsely distributed in space and can have gaps in their time series (because of flood damage to sensors or sensor failures). These issues can be compensated by remote sensing data, which have considerably contributed to improving the observation of physical processes in hydrology and hydraulics in general and in flood hydrodynamics in particular. Indeed, the new generation of satellites is equipped with sensors of metric resolution. Remotely sensed images from satellites such as SWOT (Surface Water and Ocean Topography) would give spatially distributed information on water elevations with high accuracy (able to observe rivers wider than 100 m with a vertical precision of ~dm) and periodically in time (revisit time of about a week at mid-latitudes). Gathering pre-mission data over specific and varied science targets is the purpose of the AirSWOT airborne campaign, in order to implement and test SWOT product retrieval algorithms. A reach of the Garonne River, downstream of Toulouse (France), is a proposed study area for AirSWOT flights. This choice is motivated by previous hydraulic and thermal studies (Larnier et al., 2010) already performed on this 100 km reach of the river. Moreover, many typical free-surface flow modelling issues have been encountered on this highly instrumented and studied portion of river, and this river reach represents the limit of SWOT observation capability. The 2DH (vertically integrated) free-surface flow model Dassflow (Honnorat et al., 2005; Honnorat et al., 2007; Honnorat et al., 2009; Hostache et al., 2010; Lai and Monnier, 2009), especially designed for variational data assimilation, will be used on this portion of the Garonne River. 
Mathematical methodologies such as twin experiments (Roux and Dartus, 2005; Roux and Dartus, 2006) will be performed on several modelling hypotheses in order to identify the main characteristics of the river. An identification strategy would make it possible to retrieve the spatial roughness along the main channel, the variation of the local topographic slope, or the temporal evolution of the streamflow
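    A twin experiment, as mentioned above, can be caricatured in a few lines: a "true" parameter generates synthetic observations through the model, and variational assimilation recovers it by minimizing the misfit J between modelled and observed water levels. The toy stage-discharge model, parameter names and optimizer settings below are all assumptions; Dassflow itself assimilates into the 2-D shallow water equations with an adjoint model.

```python
# Toy twin experiment (illustrative only, not the Dassflow setup):
# recover a roughness-like parameter from synthetic observations by
# gradient descent on J = 0.5 * sum((model - obs)**2).

def model(roughness, discharges):
    # Hypothetical stage-discharge relation: level grows with roughness.
    return [roughness * q ** 0.6 for q in discharges]

def assimilate(observations, discharges, theta=1.0, lr=0.001, steps=200):
    for _ in range(steps):
        residuals = [m - o for m, o in zip(model(theta, discharges), observations)]
        # dJ/dtheta, since d(model)/dtheta = q**0.6 for each discharge q
        grad = sum(r * q ** 0.6 for r, q in zip(residuals, discharges))
        theta -= lr * grad
    return theta

true_theta = 2.0
q = [10.0, 50.0, 100.0]
obs = model(true_theta, q)       # synthetic "observations" from the truth
recovered = assimilate(obs, q)   # starts from theta = 1.0
```

Because the observations were generated by the model itself, recovering the known truth validates the identification strategy before it is applied to real AirSWOT or SWOT data.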