Fluorescence from a few electrons
Systems containing a few fermions (e.g., electrons) are of great current
interest. Fluorescence occurs when electrons drop from one level to another
without changing spin. Only electron gases in a state of equilibrium are
considered. When the system may exchange electrons with a large reservoir, the
electron-gas fluorescence is easily obtained from the well-known Fermi-Dirac
distribution. But this is not so when the number of electrons in the system is
prevented from varying, as is the case for isolated systems and for systems
that are in thermal contact with electrical insulators such as diamond. Our
accurate expressions rest on the assumption that single-electron energy levels
are evenly spaced, and that energy coupling and spin coupling between electrons
are small. These assumptions are shown to be realistic for many systems.
Fluorescence from short, nearly isolated, quantum wires is predicted to drop
abruptly in the visible, a result not predicted by the Fermi-Dirac
distribution. Our exact formulas are based on restricted and unrestricted
partitions of integers. The method is considerably simpler than the ones
proposed earlier, which are based on second quantization and contour
integration.
Comment: 10 pages, 3 figures, RevTe
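The link to integer partitions can be made concrete: for N spinless fermions on an evenly spaced ladder of single-particle levels, the number of many-body states with total excitation energy n (in units of the level spacing) equals the number of partitions of n into parts no larger than N. The sketch below is illustrative only; `p` is a hypothetical helper built on the standard partition recurrence, not the paper's formula.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n: int, k: int) -> int:
    """Number of partitions of n into parts of size at most k.

    By conjugation this also counts partitions into at most k parts, which
    enumerates how k fermions can be promoted above the filled Fermi sea
    to carry total excitation energy n.
    """
    if n == 0:
        return 1
    if k == 0 or n < 0:
        return 0
    # either no part equals k, or remove one part of size k
    return p(n, k - 1) + p(n - k, k)
```

For example, `p(5, 5)` recovers the seven unrestricted partitions of 5, while `p(4, 2)` restricts to parts of size at most 2.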
Stochastic modelling of hourly rainfall and its transformation into discharge for flood frequency estimation
To study the frequency distributions of hydrologic variables (rainfall and discharge) at the hourly time step, a methodology coupling a stochastic generator of hourly rainfall series with a lumped conceptual rainfall-runoff model has been developed. Over a given simulation period, the method generates a large collection of plausible flood scenarios used to evaluate hydrologic risks. The frequency distributions of the hydrologic variables are built empirically from the generated rainfall and flood events, and their extrapolation to rare frequencies is performed by lengthening the simulation period, rather than by fitting theoretical distributions directly to the observed values. The principle of the method, called SHYPRE (Simulation d'HYdrogrammes pour la PREdétermination; in English, Simulated HYdrographs for flood PRobability Estimation), is thus to use the observations to describe the phenomenon, so as to reproduce it statistically and thereby overcome the shortage of observed data. The approach yields an original estimation of flood quantiles from common to rare frequencies, together with complete temporal information on the simulated floods. Moreover, its flood-quantile estimates are shown to be markedly more robust than statistical fits to the observed distributions, even for frequent events. This robustness stems from a fuller use of the rainfall information and from the stability of the rainfall-runoff model's parametrization.
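The core SHYPRE idea, reading rare quantiles off a long synthetic record instead of extrapolating a fitted law, can be illustrated with a toy generator. The exponential "flood" model and all numbers below are placeholders, not the actual rainfall generator or rainfall-runoff chain.

```python
import random

def simulate_annual_maxima(n_years, scale=50.0, seed=0):
    """Generate one annual-maximum flood peak per simulated year.

    A stand-in stochastic generator (exponential, illustrative units);
    SHYPRE instead simulates hourly rainfall and routes it through a
    conceptual rainfall-runoff model.
    """
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / scale) for _ in range(n_years)]

def empirical_quantile(sample, return_period_years):
    """T-year event read as the (1 - 1/T) empirical quantile of annual maxima."""
    s = sorted(sample)
    idx = min(len(s) - 1, int((1.0 - 1.0 / return_period_years) * len(s)))
    return s[idx]

# extrapolation to rare frequencies = simply simulate a longer period
maxima = simulate_annual_maxima(100_000)
q100 = empirical_quantile(maxima, 100.0)  # empirical 100-year flood estimate
```

The point of the design is that the rare-event estimate comes from counting simulated events, so no parametric tail assumption is imposed on the flood distribution itself.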
Comment on "Canonical and Microcanonical Calculations for Fermi Systems"
In the context of nuclear physics Pratt recently investigated noninteracting
Fermi systems described by the microcanonical and canonical ensembles. As will
be shown, his discussion of the model of equally spaced levels contains a flaw
and a statement which is at least confusing.
Comment: Comment on S. Pratt, Phys. Rev. Lett. 84, 4255 (2000) and
nucl-th/990505
The emission of Cygnus X-1: observations with INTEGRAL SPI from 20 keV to 2 MeV
We report on Cyg X-1 observations performed by the SPI telescope onboard the
INTEGRAL mission and distributed over more than 6 years. We investigate the
variability of the intensity and spectral shape of this peculiar source in the
hard X-ray domain, and more particularly up to the MeV region. We first study
the total averaged spectrum, which presents the best signal-to-noise ratio (4 Ms
of data). We then refine our results by building mean spectra per period and
grouping together those of similar hardness.
Several spectral shapes are observed, with significant changes in the curvature
between 20 and 200 keV, even at the same luminosity level. In all cases, the
emission decreases sharply above 700 keV, with flux values above 1 MeV (or
upper limits) well below the recently reported polarised flux (Laurent et al.
2011), while compatible with the MeV emission detected some years ago by
CGRO/COMPTEL (McConnell et al., 2002).
Finally, we take advantage of the spectroscopic capability of the instrument
to search for spectral features in the 500 keV region, finding no significant
annihilation emission on timescales of 2 ks to days, or in the total dataset.
Comment: 14 pages, 10 figures, accepted for publication in Ap
Evidence in Virgo for the Universal Dark Matter Halo
A model is constructed for the mass and dynamics of M87 and the Virgo
Cluster. Existing surface photometry of the galaxy, mass estimates from X-ray
observations of the hot intracluster gas, and the velocity dispersions of
early-type Virgo galaxies are all used to constrain the run of dark-matter
density out to radii of 2 Mpc in the cluster. The "universal" halo advocated by
Navarro, Frenk, & White provides an excellent description of the combined data,
as does a Hernquist profile. These models are favored over isothermal spheres,
and their central structure is preferred to density cusps either much stronger
or much weaker than r^{-1}. The galaxies and gas in the cluster trace its total
mass distribution, the galaxies' velocity ellipsoid is close to isotropic, and
the gas temperature follows the virial temperature profile of the dark halo.
The virial radius and mass and the intracluster gas fraction of Virgo are
evaluated.
Comment: ApJ Letters in pres
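For reference, the two cuspy profiles weighed against each other in this abstract can be written down directly: both diverge as r^-1 at small radius and differ in their outer fall-off. These are the standard textbook forms with illustrative normalizations, not the fitted Virgo parameters.

```python
import math

def nfw_density(r, rho_s=1.0, r_s=1.0):
    """Navarro-Frenk-White: rho_s / [(r/r_s)(1 + r/r_s)^2]; ~r^-1 inside, ~r^-3 outside."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def hernquist_density(r, m_tot=1.0, a=1.0):
    """Hernquist: M a / (2 pi r (r + a)^3); ~r^-1 inside, ~r^-4 outside."""
    return m_tot * a / (2.0 * math.pi * r * (r + a) ** 3)
```

Deep inside the scale radius the two are indistinguishable in slope, which is why the data constrain the central cusp to be close to r^-1 without strongly preferring one profile over the other.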
Group Analysis of Self-organizing Maps based on Functional MRI using Restricted Fréchet Means
Studies of functional MRI data are increasingly concerned with the estimation
of differences in spatio-temporal networks across groups of subjects or
experimental conditions. Unsupervised clustering and independent component
analysis (ICA) have been used to identify such spatio-temporal networks. While
these approaches have been useful for estimating these networks at the
subject-level, comparisons over groups or experimental conditions require
further methodological development. In this paper, we tackle this problem by
showing how self-organizing maps (SOMs) can be compared within a Frechean
inferential framework. Here, we summarize the mean SOM in each group as a
Fréchet mean with respect to a metric on the space of SOMs. We consider the use
of different metrics, and introduce two extensions of the classical sum of
minimum distances (SMD) between two SOMs, which take into account the
spatio-temporal pattern of the fMRI data. The validity of these methods is
illustrated on synthetic data. Through these simulations, we show that the
three metrics of interest behave as expected, in the sense that the ones
capturing temporal, spatial and spatio-temporal aspects of the SOMs are more
likely to reach significance under simulated scenarios characterized by
temporal, spatial and spatio-temporal differences, respectively. In addition, a
re-analysis of a classical experiment on visually-triggered emotions
demonstrates the usefulness of this methodology. In this study, the
multivariate functional patterns typical of the subjects exposed to pleasant
and unpleasant stimuli are found to be more similar than the ones of the
subjects exposed to emotionally neutral stimuli. Taken together, these results
indicate that our proposed methods can cast new light on existing data by
adopting a global analytical perspective on functional MRI paradigms.
Comment: 23 pages, 5 figures, 4 tables. Submitted to Neuroimag
Improving the performance of a stochastic model for generating hourly hyetographs: application to the French Mediterranean seaboard
For several years, a stochastic model for generating hourly hyetographs has been developed at the Cemagref in Aix-en-Provence, to be coupled with rainfall-runoff modelling. By simulating very long periods (1000 years, for example), a large number of hourly hyetographs and flood scenarios are obtained, studied statistically and used in flood predetermination. The rainfall model rests on the assumption that rainfall can be represented as a random, intermittent process whose evolution is described by stochastic laws, together with hypotheses of independence between the variables describing the hyetographs and of stationarity of the phenomenon. Generating a rainfall time series involves two steps: a descriptive study of the phenomenon (nine independent variables are chosen to describe it, each defined by a theoretical probability law fitted to the observations), followed by the construction of a rainfall series from descriptive variables drawn at random from their probability laws. Initially developed on data from the Réal Collobrier watershed, the model has since been applied to fifty raingauges located on the French Mediterranean seaboard. Extending the model's area of application beyond its design zone revealed heterogeneity in the results, and modifications were therefore made to improve its performance; three of them brought notable improvements. A sensitivity study showed that the parameters of the shape variables, and of some others, had only a slight influence on the generated rainfall depths, whereas the distribution of mean storm intensities clearly differentiates the stations.
First, a theoretical probability distribution less sensitive to sampling problems was sought for the storm-intensity variable. An exponential distribution is fitted to the values smaller than four times the variable's mean, and a slope breakage is then introduced to generate the values beyond this limit. Both the break point at four times the mean and the modelling of the breakage were based on a study of so-called "regional" distributions of storm intensity, built by pooling the homogenized values of the variable over all fifty stations. Second, a new model was developed for the observed dependence between two variables, storm duration and storm intensity. This dependence was handled directly through the cumulative frequencies of the two variables: an additional parameter, characterizing the cumulative frequency curve of the sum of their probabilities, models the dependence between them. This point, long neglected, proved very important in improving the model. Finally, the persistence of storms within a single rainfall episode was modelled in order to generate high 24-hour maximum rainfalls. Persistence modelling is justified by the fact that "ordinary storms" cluster around the "main storm" (the main storm being the largest storm of an episode, the others being the ordinary storms), and a positive dependence is observed between the occurrence probability of the main storm and that of the storms preceding or following it.
Two combined effects occur: within one rainy episode, the strongest ordinary storms cluster preferentially around the main storm; and, across all episodes, the strongest storms close to the main storm are preferentially associated with the strongest main storms, and vice versa. This modification improves performance at altitude raingauges, which are characterised by high daily rainfall accumulations. Together, the modifications to the initial model yield very marked improvements in the calibration of the fifty raingauges studied on the French Mediterranean seaboard. The model's ability to reproduce the highly variable rainfall of the Mediterranean climate supports its application over a much larger zone. Generating hyetographs makes maximal use of the temporal information in the rainfall record, providing a robust tool, validated over a large area, for simulating hyetographs and hourly flood scenarios at all frequencies, in place of a single design storm and design flood. The approach also suggests a new extrapolation of the rainfall frequency distributions, whose asymptotic behaviour sometimes appears stronger than a strictly exponential trend. Moreover, studying several events per year, each providing several realizations of the model's variables, increases the analysed sample size and seems to make the method reliable more quickly than a classical statistical approach based, for example, on fitting annual maximum values.
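The slope-breakage treatment of storm intensity can be sketched as follows: an exponential body up to four times the mean, then a second, flatter exponential slope for the exceedance beyond the break. The tail rate `tail_factor` is a hypothetical stand-in for the regionally calibrated breakage, not the paper's value.

```python
import random

def sample_intensity(mean, tail_factor=0.6, rng=random):
    """Draw a storm intensity with a slope breakage at 4x the mean.

    Below the break the variable is plain exponential; beyond it the
    exceedance is re-drawn with a heavier (flatter) exponential slope,
    mimicking the breakage introduced for extreme values.
    """
    break_point = 4.0 * mean
    x = rng.expovariate(1.0 / mean)
    if x <= break_point:
        return x
    # beyond the break: exceedance with rate tail_factor/mean (< 1/mean, so heavier tail)
    return break_point + rng.expovariate(tail_factor / mean)
```

Because only roughly e^-4 of draws cross the break, the body of the distribution is untouched while the rarest intensities are inflated, which is the behaviour the regional distributions suggested.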
Extracting galactic binary signals from the first round of Mock LISA Data Challenges
We report on the performance of an end-to-end Bayesian analysis pipeline for
detecting and characterizing galactic binary signals in simulated LISA data.
Our principal analysis tool is the Blocked-Annealed Metropolis-Hastings (BAM)
algorithm, which has been optimized to search for tens of thousands of
overlapping signals across the LISA band. The BAM algorithm employs Bayesian
model selection to determine the number of resolvable sources, and provides
posterior distribution functions for all the model parameters. The BAM
algorithm performed almost flawlessly on all the Round 1 Mock LISA Data
Challenge data sets, including those with many highly overlapping sources. The
only misses were later traced to a coding error that affected high frequency
sources. In addition to the BAM algorithm we also successfully tested a Genetic
Algorithm (GA), but only on data sets with isolated signals as the GA has yet
to be optimized to handle large numbers of overlapping signals.
Comment: 13 pages, 4 figures, submitted to Proceedings of GWDAW-11 (Berlin,
Dec. '06)
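The Metropolis-Hastings move at the heart of BAM, stripped of the blocking and annealing, reduces to the familiar accept/reject step. The sketch below targets a toy 1-D Gaussian posterior, not the LISA likelihood.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=0):
    """Single-chain Metropolis-Hastings with a Gaussian random-walk proposal.

    Accepts a proposal with probability min(1, posterior ratio); BAM wraps
    this move in blocked parameter updates and simulated annealing.
    """
    rng = random.Random(seed)
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# toy target: standard normal log-posterior
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

After discarding burn-in, the chain's histogram approximates the target, and the same chain output is what model selection and the quoted posterior distribution functions are computed from.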