
    What Did the United States Sentencing Commission Miss?


    The effect of internal gravity waves on cloud evolution in sub-stellar atmospheres

    Context. Sub-stellar objects exhibit photometric variability, which is believed to be caused by a number of processes, such as magnetically driven spots or inhomogeneous cloud coverage. Recent sub-stellar models have shown that turbulent flows and waves, including internal gravity waves, may play an important role in cloud evolution.
    Aims. The aim of this paper is to investigate the effect of internal gravity waves on dust cloud nucleation and dust growth, and whether observations of the resulting cloud structures could be used to recover atmospheric density information.
    Methods. For a simplified atmosphere in two dimensions, we numerically solve the governing fluid equations to simulate the effect of the passage of an internal gravity wave on dust nucleation and mantle growth. Furthermore, we derive an expression that relates the properties of the wave-induced cloud structures to observable parameters in order to deduce the atmospheric density.
    Results. Numerical simulations show that the density, pressure and temperature variations caused by gravity waves lead to an increase of dust nucleation by up to a factor of 20, and of the dust mantle growth rate by up to a factor of 1.6, compared to their equilibrium values. Through an exploration of the wider sub-stellar parameter space, we show that in absolute terms, the increase in dust nucleation due to internal gravity waves is stronger in cooler (T dwarf) and TiO2-rich sub-stellar atmospheres. The relative increase, however, is greater in warmer (L dwarf) and TiO2-poor atmospheres, due to conditions less suited to efficient nucleation at equilibrium. These variations produce bands in which dust formation is much more pronounced, leading to banded cloud structures similar to those observed on Earth.
    Conclusions. Using the proposed method, potential observations of banded clouds could be used to estimate the atmospheric density of sub-stellar objects.
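
    The factor-of-20 enhancement is plausible because nucleation rates depend roughly exponentially on the local thermodynamic state, so small wave-driven perturbations are strongly amplified. The Python sketch below illustrates this sensitivity with a classical-nucleation-theory rate J ~ exp(-B/ln^2 S); the constant B, the equilibrium supersaturation S0 and the 5% modulation are hypothetical illustration values, not the paper's actual model.

```python
import numpy as np

# Toy illustration (not the paper's model): a classical-nucleation-theory
# rate J ~ exp(-B / ln(S)^2) depends exponentially on supersaturation S,
# so a small wave-driven modulation of S changes J by a large factor.
B = 50.0  # hypothetical lumped constant (surface energy, monomer properties)

def nucleation_rate(S):
    """Relative nucleation rate as a function of supersaturation S > 1."""
    return np.exp(-B / np.log(S) ** 2)

S0 = 3.0                               # hypothetical equilibrium supersaturation
phase = np.linspace(0.0, 2.0 * np.pi, 200)
S = S0 * (1.0 + 0.05 * np.sin(phase))  # 5% modulation from a passing wave

enhancement = nucleation_rate(S) / nucleation_rate(S0)
print(f"peak nucleation enhancement: {enhancement.max():.0f}x")  # a few tens
```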

    A note on four nonradioactive labeling systems for dot hybridization detection of potato viruses

    Complementary DNA clones of genomic RNAs of potato (Solanum tuberosum) viruses S (PVS), X (PVX) and Y (PVY) were produced and tested for their capacity to hybridize with various plant virus RNAs. PVS clone S12 and PVX clone X6 were found to be very specific to PVS and PVX RNA, respectively, whereas PVY clone Y10 hybridized strongly with PVY RNA and weakly with PVS RNA. Four commercial, nonradioactive systems of nucleic acid labeling and detection were compared with the usual 32P-labeled probe using dot hybridization experiments. Colorimetric detection of digoxigenin-labeled DNA probes gave a level of sensitivity of 1 ng of virions (60 pg of RNA), similar to autoradiography of 32P-labeled probes. Sulfonated, biotinylated and peroxidase-labeled probes were slightly less sensitive, allowing detection of 600 pg of viral RNA.
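
    To compare the two detection limits on a common basis, the figures above imply an RNA-to-virion mass ratio of about 6% (60 pg of RNA per 1 ng of virions). The short conversion below, a sketch under that assumption, shows that the less sensitive probes require roughly ten times more material.

```python
# Back-of-envelope conversion between virion mass and viral RNA mass,
# assuming the ~6% RNA content implied above (60 pg RNA per 1 ng virions).
RNA_FRACTION = 60e-12 / 1e-9  # = 0.06

def virion_mass_pg(rna_pg: float) -> float:
    """Virion mass (pg) that carries the given mass of viral RNA (pg)."""
    return rna_pg / RNA_FRACTION

print(virion_mass_pg(60))   # digoxigenin probes: 1000 pg = 1 ng of virions
print(virion_mass_pg(600))  # other probes: 10000 pg = 10 ng, ~10x more material
```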

    Modelling a self-monitoring policy for a drinking water network

    Drinking water quality is monitored regularly by state officers (DDASS), and also by the water distributor at a level of his own choice. A model has been constructed to simulate decision making after observation of one or more bacteriologically positive samples from the drinking water distribution system of suburban Paris (four million inhabitants in 144 boroughs). In cases of non-conformity, a curative action is taken (rinsing, chlorinating, etc.) that tends to improve water quality over the ensuing weeks. The model compares the trade-offs between the global cost of the policy and the risk of quality failure, based on sampling plans of different intensity which the quality manager may design to obtain information from the distribution system. The more weekly analyses he makes, the more money he spends on control, but at the same time, the more valuable is the information he receives with which to assess the appropriateness of curative actions. The state of quality in the distribution system is assumed to be homogeneous, with each sampling station representative of the overall water quality. Three discrete classes of quality (acceptable, poor, unsatisfactory) have been defined, corresponding respectively to average frequencies of 5%, 10% and 15% of coliform-positive samples in the control design.
    The set of alternatives comprises eight curative actions presently in use in the distribution system when a defective sample is registered: (1) complementary checking of quality parameters such as chlorine and temperature; (2) additional bacteriological counts; (3) rinses (water is released for a few hours from certain pipes directly into the sewage system, to allow its replacement by fresh water of supposedly better quality); (4) purges (same as rinses, but for a longer period over larger zones); (5) disinfection (the defective zone is isolated and a specialized truck introduces a large amount of chlorine into the distribution pipes); (6) deep cleaning (a yearly cleaning of 3% of the distribution system); (7) chlorination (the level of free chlorine injection is increased at the water treatment plant); (8) a change in the treatment plant's mode of operation (the complete process is checked to prevent the possible transfer of bacteria from the river to the distribution system). It also includes the default decision of "doing nothing," i.e., letting the system evolve on its own. The cost of each decision has been evaluated from the economic data available from the distribution company, taking into account mainly controllers' work hours and travel expenses.
    Water quality dynamics in the distribution system are modeled as a Markov chain controlled by the possible decisions at each stage. For each curative action, a 3 × 3 transition matrix is empirically elicited, using both expertise from the team of quality managers and data collected from 1992 to 1996. The present control strategy of the distribution company is embedded in the model by respecting the observed constraints on the sequence of decisions: for instance, if a previous rinse has not been followed by a decrease in the number of coliform-positive samples the following week, a stronger action such as a purge or disinfection is enforced rather than a repeated rinse. The strategy also mimics the empirical rules of the quality manager's behavior when facing bacteriological incidents: for example, a single occurrence of coliforms with no specific curative action taken in the previous week will generally dictate a rinse (84% of the time), sometimes a purge (12%), and occasionally a disinfection (1%).
    The Markov model is run in simulation mode for the spring-summer period: for a given sample size, the average cost of the quality monitoring policy and a failure index (the average frequency of the two lowest quality states) can be evaluated by backward induction. Although the parameters were assessed empirically, the model exhibits realistic performance with regard to the side criteria used as discrepancy measures for model rejection checking: the relative use of each curative action and the average time needed to escape from a non-acceptable state (resiliency) are of the same order of magnitude as the corresponding real indices. A sensitivity analysis reveals that the results are fairly robust to small changes in the transition probabilities, but do depend on how the range of water qualities is divided into discrete classes. With the data chosen, the risk of a degraded state decreases with the number of analyses up to a threshold of about 140 analyses per week (self-monitoring plus regulatory control) and then remains nearly constant, while costs continue to rise; the model showed a satisfactory cost/risk balance at 110-140 analyses per week for the homogeneous subsystem under study. With more data available, this model could become a valuable decision tool. Provided that a criterion of joint global utility between risk and cost can be defined, it could be used to design a control policy with a weekly varying sample size.
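
    As a concrete illustration of the controlled-Markov-chain setup, the sketch below simulates the three quality states under an action-dependent transition model and reports the average weekly cost and the failure index. The transition matrices, costs and the stylized escalation policy are hypothetical placeholders, not the elicited values of the Banlieue de Paris study, and the sketch assumes the quality state is observed directly rather than inferred from samples.

```python
import numpy as np

# Three quality states, as in the abstract: 0 = acceptable, 1 = poor,
# 2 = unsatisfactory. One 3x3 transition matrix per curative action
# (rows: current state, columns: next state). All values are hypothetical.
P = {
    "do_nothing": np.array([[0.90, 0.08, 0.02],
                            [0.30, 0.55, 0.15],
                            [0.10, 0.40, 0.50]]),
    "rinse":      np.array([[0.95, 0.04, 0.01],
                            [0.60, 0.35, 0.05],
                            [0.30, 0.50, 0.20]]),
    "purge":      np.array([[0.97, 0.02, 0.01],
                            [0.80, 0.18, 0.02],
                            [0.60, 0.35, 0.05]]),
}
COST = {"do_nothing": 0.0, "rinse": 1.0, "purge": 5.0}  # arbitrary cost units

def policy(state: int) -> str:
    """Stylized escalation rule: act more strongly as quality degrades."""
    return ["do_nothing", "rinse", "purge"][state]

def simulate(weeks: int = 10_000, seed: int = 0) -> tuple[float, float]:
    """Return (average weekly cost, frequency of the two degraded states)."""
    rng = np.random.default_rng(seed)
    state, total_cost, degraded = 0, 0.0, 0
    for _ in range(weeks):
        action = policy(state)
        total_cost += COST[action]
        state = rng.choice(3, p=P[action][state])
        degraded += int(state > 0)
    return total_cost / weeks, degraded / weeks

print(simulate())  # cost/risk pair for this hypothetical policy
```

    In the full model, the weekly sample size would govern how quickly degraded states are detected, shifting this cost/risk pair along the trade-off curve discussed above.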

    Perceptual grouping based on iterative multi-scale tensor voting

    We propose a new approach for perceptual grouping of oriented segments in highly cluttered images based on tensor voting. Segments are represented as second-order tensors and communicate with each other through a voting scheme that incorporates the Gestalt principles of visual perception. An iterative scheme removes noise segments conservatively using multi-scale analysis and re-voting. We have tested our approach on data sets composed of real objects against real backgrounds. Our experimental results indicate that the method can successfully segment objects in images containing up to twenty times more noise segments than object segments.
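
    To make the voting step concrete, here is a minimal sketch of second-order "stick" tensor voting with a plain Gaussian distance decay. The full method also shapes votes by relative orientation and curvature, and the paper's contribution iterates the process over several scales with re-voting and pruning; the parameters and toy data below are illustrative only.

```python
import numpy as np

def saliency(points: np.ndarray, tangents: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Stick saliency for oriented segments.

    points:   (n, 2) segment positions.
    tangents: (n, 2) unit orientation vectors.
    Each segment accumulates its neighbours' second-order stick tensors
    t t^T, weighted by a Gaussian decay in distance; the eigengap
    lambda1 - lambda2 of the sum is high for well-aligned structure.
    """
    n = len(points)
    out = np.zeros(n)
    for i in range(n):
        T = np.zeros((2, 2))
        for j in range(n):
            if i == j:
                continue
            d2 = np.sum((points[i] - points[j]) ** 2)
            w = np.exp(-d2 / (2.0 * sigma**2))           # Gaussian vote decay
            T += w * np.outer(tangents[j], tangents[j])  # stick tensor vote
        lo, hi = np.linalg.eigvalsh(T)                   # ascending eigenvalues
        out[i] = hi - lo                                 # stick saliency
    return out

# Three collinear, aligned segments score high; a stray orthogonal one scores low.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
tan = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(saliency(pts, tan))
# A multi-scale variant would repeat this for several sigma values and prune
# low-saliency segments between passes, as in the iterative scheme above.
```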