
    An Integrated Design Approach for Improving Drinking Water Ozone Disinfection Treatment Based on Computational Fluid Dynamics

    Ozonation is currently considered one of the most effective microbial disinfection technologies due to its powerful disinfection capacity and the reduced levels of chlorinated disinfection by-products (DBPs) it produces. However, ozonation of waters containing bromide can produce bromate ion above regulated levels, leading to trade-offs between microbial and chemical risks. To meet increasingly stringent drinking water regulations cost-effectively, water suppliers are required to optimize ozone dosage. There is therefore a need for a robust and flexible tool that accurately describes ozone disinfection processes and contributes to their design and operation. Computational fluid dynamics (CFD) has recently come into use for evaluating disinfection systems; however, the focus of its application has largely been on modelling the hydraulic behaviour of contactors, which is only one component of system design. The significance of this dissertation is that a fully comprehensive three-dimensional (3D) multiphase CFD model has been developed to address all the major components of ozone disinfection processes: contactor hydraulics, ozone mass transfer, ozone decay, and microbial inactivation. The model was validated using full-scale experimental data, including tracer test results and ozone profiles from full-scale ozone contactors in two Canadian drinking water treatment plants (WTPs): the DesBaillets WTP in Montréal, Quebec and the Mannheim WTP in Kitchener, Ontario. Good agreement was observed between the numerical simulations and the experimental data. The CFD model was applied to investigate ozone contactor performance at the DesBaillets WTP. The CFD-predicted flow fields showed that recirculation zones and short-circuiting existed in the DesBaillets contactors, and the simulation results suggested that additional baffles could be installed to increase the residence time and improve disinfection efficiency.
The CFD model was also used to simulate ozone contactor performance at the Mannheim WTP before and after new liquid oxygen (LOX) ozone generators were installed and some diffusers were removed from the system. The modelling results indicated that these changes increased the effective residence time, and therefore an adjustment to operational parameters was required after the system modification. Another significant contribution is that, for the first time, the Eulerian and Lagrangian (or particle tracking) approaches, two commonly utilized methods for predicting microbial inactivation efficiency, have been compared for the study of ozone disinfection processes. The modelling results for two hypothetical ozone reactors and a full-scale contactor suggested that the effective CT values predicted by the Lagrangian approach were slightly lower than those obtained from the Eulerian approach, but the differences were within 10%; therefore, both approaches can be used to predict ozone disinfection efficiency. For the full-scale contactor investigated, the tracer residence time distribution predicted by the Eulerian approach provided a better fit to the experimental results, which indicated that the Eulerian approach might be more suitable for the simulation of chemical tracer performance. The results of this part of the work provided important insight into the complex performance of multiphase ozonation systems and played an important role in further improving CFD modelling approaches for full-scale ozone disinfection systems. The third significant contribution of this work is that a CFD model was applied to illustrate the importance of ozone residual monitoring locations and to suggest an improved strategy for ozone residual monitoring. For the DesBaillets ozone contactors, the CFD modelling results showed that ozone residuals across the outlet cross-sections of some contactor chambers differed by an order of magnitude.
The “optimal” area for monitoring locations, however, varied with operational conditions. It was therefore suggested that multiple ozone residual sampling points be installed, based on CFD analysis and experimental studies, to provide more accurate indicators to system operators. The CFD model was also used to study the factors affecting the residence time distribution (RTD). The results suggested that the selection of tracer injection locations, as well as tracer sampling locations, might affect the RTD prediction or measurement: the CFD-predicted T10 values at different outlet locations varied by more than 10%. It is therefore recommended that CFD modelling be used to determine the tracer test strategy before conducting a full-scale tracer test, and that multiple sampling points be employed during tracer tests where possible. In addition, a full-scale investigation was conducted to compare three different CT prediction approaches: CT10, the integrated disinfection design framework (IDDF), and CFD, to determine the most appropriate method for the design and operation of ozone systems. The CFD approach yielded more accurate predictions of inactivation efficacy than the other two approaches. The results also suggested that the differences among the three approaches' CT predictions became smaller at higher contactor T10/T ratios, as the contactors performed more closely to ideal plug flow reactors. This study has demonstrated that the CFD approach is an efficient tool for improving the ozone disinfection performance of existing water treatment plants and for designing new ozonation systems. The model developed in this study can be used for ozone contactor design, evaluation, and troubleshooting. It can also be used as a virtual experimental tool to optimize ozone contactor behaviour under varying water quality and operational conditions.
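The CT10 baseline named above can be made concrete with a short sketch. Assuming simple Chick-Watson (first-order) inactivation kinetics, the CT10 credit is the outlet ozone residual multiplied by T10, and the log inactivation scales with a lethality coefficient; the residual, T10, and lethality values below are hypothetical illustrations, not data from the contactors studied.

```python
def ct10_credit(residual_mg_per_L, t10_min):
    """CT10 disinfection credit: outlet ozone residual (mg/L) times T10 (min)."""
    return residual_mg_per_L * t10_min

def log_inactivation(ct, lethality):
    """Chick-Watson log10 inactivation: log10(N0/N) = lethality * CT."""
    return lethality * ct

# Hypothetical values, for illustration only:
ct = ct10_credit(residual_mg_per_L=0.3, t10_min=4.0)  # 1.2 mg*min/L
print(log_inactivation(ct, lethality=2.0))            # 2.4-log credit
```

Because CT10 uses only the outlet residual and the T10 time, it ignores the residual and residence-time distributions inside the contactor, which is exactly the gap the IDDF and CFD approaches address.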

    Développement d'une méthode de calcul de la performance des procédés de désinfection des installations de traitement d'eau potable

    The main goal of drinking water treatment facilities is to reduce the microbiological risk related to water ingestion below an acceptable threshold. To achieve this, disinfectants are usually added to the water to inactivate a defined percentage of the pathogenic microorganisms present. This percentage, set according to the raw water quality, defines the Water Treatment Plant's (WTP) treatment goals. The calculation of disinfection performance for each process is used to ensure compliance with these goals. WTPs must also meet standards on chemical contaminants present in the water. Since disinfection processes can also remove these contaminants, their performance in that regard is worth assessing. For widespread application, the performance calculation has to be both simple and reliable. The calculated performance of a disinfection process depends on three main factors: the reaction kinetics (with microbiological or chemical contaminants), the disinfectant decay kinetics, and the hydraulic phenomena acting inside the reactor. Each aspect can be described by several possible models, which differ in their degrees of complexity and effectiveness. The predicted inactivation or removal performance is entirely determined by the models employed and the assumptions made.
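The three factors above can be combined in a classical segregated-flow calculation: integrate the batch survival law over the reactor's residence time distribution. The sketch below assumes Chick-Watson inactivation kinetics, first-order disinfectant decay, and a tanks-in-series RTD; all parameter values are hypothetical, and this is a generic textbook construction rather than the specific method developed in the thesis.

```python
import math

def survival(t, c0, k_decay, lam):
    """Batch survival under Chick-Watson kinetics with first-order
    disinfectant decay: ln(N/N0) = -lam * c0 * (1 - exp(-k*t)) / k."""
    return math.exp(-lam * c0 * (1.0 - math.exp(-k_decay * t)) / k_decay)

def tis_rtd(t, tau, n):
    """Tanks-in-series residence time density E(t) for n tanks, mean tau."""
    return (n / tau) ** n * t ** (n - 1) * math.exp(-n * t / tau) / math.factorial(n - 1)

def segregated_log_reduction(c0, k_decay, lam, tau, n, dt=0.01, t_max=200.0):
    """Numerically integrate N/N0 = integral of survival(t) * E(t) dt."""
    total, t = 0.0, dt
    while t < t_max:
        total += survival(t, c0, k_decay, lam) * tis_rtd(t, tau, n) * dt
        t += dt
    return -math.log10(total)

# Hypothetical parameters: C0 = 0.5 mg/L, k = 0.1 /min, lam = 2.3 L/(mg*min)
print(segregated_log_reduction(c0=0.5, k_decay=0.1, lam=2.3, tau=10.0, n=5))
```

Changing any one of the three model choices (kinetic law, decay law, or RTD) changes the predicted performance, which is the dependence the abstract emphasizes.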

    Improved Predictions of Contaminant Degradation in Water Treatment Reactors

    The efficacy of fundamental water treatment processes depends on reactor hydraulics. Despite the importance of reactor hydraulics, oversimplified hydraulic models have been used for the design, operation, and regulation of water treatment reactors. The most commonly used model assumes plug flow reactor (PFR) behavior with residence time equal to the time for the first 10 percent of flow to leave the reactor (PFR t10). This simplification is overly conservative when targeting low log reductions of contaminants, and may also overestimate treatment efficacy when targeting higher log reductions, such as in water reuse applications. The overall goal of this dissertation was to improve the prediction of contaminant degradation in water treatment reactors by accurately modeling reactor hydraulics. This goal was met through the following three objectives: (i) development of accurate models for residence time distribution (RTD) (i.e., macromixing); (ii) assessment of flow segregation and earliness of mixing (i.e., micromixing) in full-scale water treatment reactors; and (iii) quantitative evaluation of the effect of RTD model selection on predictions of contaminant degradation. This work generated a number of major conclusions and contributions to the modeling of contaminant degradation. (i) Reactor network (RN) models accurately represented observed RTD using fewer fitting parameters than alternative models and (ii) an open-source tool was created to fit RN models to tracer data. (iii) Micromixing was observed to be prevalent in full-scale reactors, and (iv) the tanks-in-series (TIS) and certain RN models most accurately represented micromixing. (v) Micromixing had the greatest impact on predictions of pathogen disinfection when specific lethality coefficient and disinfectant decay rate were high. (vi) The PFR t10 model may cease to be conservative when predicting contaminant reductions >2-log. 
(vii) Designing reactors for a 1-log reduction using the PFR t10 model may increase capital costs by 10-80% relative to an accurate RTD model such as the RN model, while (viii) at a 6-log reduction, properly sizing oxidation processes using the RN model may increase costs by over 100% relative to the PFR t10 model. Overall, this work provides a fundamental basis for the rational design, operation, and regulation of water treatment processes using the TIS or RN models.
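The gap between the PFR t10 simplification and an RTD-based prediction can be illustrated with first-order contaminant decay, for which both models have closed forms. The rate constant below is hypothetical, and a 5-tanks-in-series reactor is assumed (for which t10 is roughly 0.49 of the mean residence time).

```python
import math

def pfr_t10_log_reduction(k, t10):
    """PFR assumption with residence time t10: log10 reduction = k*t10/ln(10)."""
    return k * t10 / math.log(10)

def tis_log_reduction(k, tau, n):
    """Tanks-in-series closed form for first-order decay:
    N/N0 = (1 + k*tau/n)^(-n), so log10 reduction = n*log10(1 + k*tau/n)."""
    return n * math.log10(1.0 + k * tau / n)

# Hypothetical reactor: tau = 10 min, n = 5 tanks (t10 ~ 0.49*tau for n = 5)
k, tau, n = 0.5, 10.0, 5
t10 = 0.49 * tau
print(f"PFR t10: {pfr_t10_log_reduction(k, t10):.2f}-log")
print(f"TIS:     {tis_log_reduction(k, tau, n):.2f}-log")
```

In this low-log regime the PFR t10 credit falls below the TIS prediction, i.e. it is conservative; the dissertation's point is that this ordering can reverse at higher target log reductions.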

    Integrated circuit outlier identification by multiple parameter correlation

    Semiconductor manufacturers must ensure that chips conform to their specifications before they are shipped to customers. This is achieved by testing various parameters of a chip to determine whether it is defective or not. Separating defective chips from fault-free ones is relatively straightforward for functional or other Boolean tests that produce a go/no-go type of result. However, making this distinction is extremely challenging for parametric tests: owing to continuous distributions of parameters, any pass/fail threshold results in yield loss and/or test escapes. Continuous advances in process technology, increased process variations, and inaccurate fault models all make this even worse. The pass/fail thresholds for such tests are usually set using prior experience or by a combination of visual inspection and engineering judgment. Many chips have parameters that exceed certain thresholds yet pass Boolean tests. Owing to the imperfect nature of tests, determining whether these chips (called "outliers") are indeed defective is nontrivial. To avoid wasted investment in packaging or further testing, it is important to screen defective chips early in a test flow. Moreover, if the seemingly strange behavior of outlier chips can be explained with the help of certain process parameters or by correlating additional test data, such chips can be retained in the test flow until they are proved to be fatally flawed. In this research, we investigate several methods to distinguish true outliers (defective chips, or chips that lead to functional failure) from apparent outliers (seemingly defective, but fault-free chips). The outlier identification methods in this research primarily rely on wafer-level spatial correlation, but also use additional test parameters. These methods are evaluated and validated using industrial test data, and their potential to reduce burn-in is discussed.
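One simple instance of the wafer-level spatial correlation idea described above is to compare each die's parametric value against its spatial neighborhood and flag large residuals. The 3x3 neighborhood, median statistic, and threshold below are arbitrary illustrations, not the specific methods evaluated in this research.

```python
def spatial_outliers(wafer, threshold):
    """Flag dies whose value deviates from the median of their 3x3
    neighborhood by more than `threshold` (absolute residual)."""
    rows, cols = len(wafer), len(wafer[0])
    flagged = []
    for r in range(rows):
        for c in range(cols):
            neigh = [wafer[i][j]
                     for i in range(max(0, r - 1), min(rows, r + 2))
                     for j in range(max(0, c - 1), min(cols, c + 2))
                     if (i, j) != (r, c)]
            neigh.sort()
            median = neigh[len(neigh) // 2]
            if abs(wafer[r][c] - median) > threshold:
                flagged.append((r, c))
    return flagged

# Toy 4x4 wafer map of a parametric test value; one die clearly deviates.
wafer = [[1.0, 1.1, 1.0, 0.9],
         [1.0, 1.0, 1.1, 1.0],
         [0.9, 1.0, 3.5, 1.0],   # die (2, 2) sits far above its neighbors
         [1.0, 1.1, 1.0, 0.9]]
print(spatial_outliers(wafer, threshold=0.5))  # -> [(2, 2)]
```

The median keeps the flagging robust: neighbors of the deviant die are not themselves flagged, because a single extreme value cannot move the neighborhood median.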

    The George-Anne


    Assessment of Power System Equipment Insulation Based on Distorted Excitation Voltage

    Electrical insulation plays a critical role in high voltage power system equipment. The electrical, thermal, and mechanical stresses imposed during long-term operation cause gradual degradation of the insulation. Regular condition monitoring and diagnostic testing of power system equipment are therefore of paramount importance for the reliable operation of electricity supply networks and systems. The dielectric dissipation factor (DDF) measurement is one of the most common techniques for insulation assessment. Traditionally, a pure sinusoidal voltage is used for excitation in the testing. In reality, however, today's grid voltage is often distorted, with a waveform containing multiple harmonic components; distorted voltages and currents are generated by non-linear equipment or components in the system. Testing under distorted voltage with harmonics therefore provides a more realistic diagnostic measurement than traditional AC sinusoidal high voltage testing. This dissertation investigates the impact of harmonically distorted excitation on the dielectric dissipation factor of high voltage power equipment. A practical measurement method based on distorted excitation is proposed and tested on a reference capacitor-resistor test object. A theoretical and mathematical model is developed to quantify the impact of distortion on the measured DDF in contrast to the case of non-distorted excitation. It is established that, for the same total RMS magnitude of the applied excitation, the DDF decreases with increasing harmonic proportion in the applied voltage waveform. For validation, laboratory experiments and computer simulations were carried out, and the data obtained were compared with the analytical results. The proposed technique was then tested on real high voltage components (33 kV dry-type current transformers).
The results confirm the monotonically decreasing trend, although the pattern is more complex. A mathematical and electrical-circuit model of the dielectric dissipation factor is implemented based on polarisation loss, and the theoretical formulation is implemented in a computer simulation using MATLAB Simulink to validate the results. In summary, the thesis provides useful diagnostic insights into the characteristics of dielectric dissipation factor measurement under distorted excitation.
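The reported decrease of DDF with harmonic content can be reproduced with a minimal parallel R-C dielectric model, taking the effective tan(delta) under multi-harmonic excitation as the ratio of total loss power to total reactive power. The R and C values are arbitrary, and this toy model is not the polarisation-loss formulation developed in the thesis.

```python
import math

def effective_ddf(harmonics, R, C, f0=50.0):
    """Effective dissipation factor tan(delta) = P_loss / Q_reactive for a
    parallel R-C dielectric model under a sum of harmonic voltages.
    `harmonics` maps harmonic order h -> RMS voltage of that component."""
    p_loss = sum(v ** 2 / R for v in harmonics.values())
    q_reac = sum(2 * math.pi * h * f0 * C * v ** 2 for h, v in harmonics.items())
    return p_loss / q_reac

R, C = 1e9, 100e-12  # 1 GOhm in parallel with 100 pF (arbitrary values)
# Same total RMS voltage (1000 V) in both cases, per the thesis's comparison:
pure = effective_ddf({1: 1000.0}, R, C)
distorted = effective_ddf({1: 950.0, 5: math.sqrt(1000.0 ** 2 - 950.0 ** 2)}, R, C)
print(pure, distorted)  # the distorted case yields the lower tan(delta)
```

The mechanism is visible in the formula: loss power depends only on the total RMS voltage, while reactive power grows with harmonic order, so shifting voltage into higher harmonics inflates the denominator and lowers the effective DDF.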

    Maternally Derived Anti-Dengue Antibodies and Risk of DHF in Infants: A Case-Control Study

    This study proposes to directly test the hypothesis that antibody-dependent enhancement (ADE) is the critical factor in the development of dengue hemorrhagic fever (DHF) in infants. DHF occurs in two distinct clinical settings: (a) in children and adults with secondary DENV infection, and (b) in infants with primary DENV infection born to mothers with prior DENV infection. The ADE hypothesis proposes that pre-existing serotype-cross-reactive, non-neutralizing anti-DENV antibodies bind the heterotypic DENV during secondary infection and enhance its uptake into immune cells, leading to increased viral load and DHF. This model suggests that DHF in DENV-infected infants is caused by the enhancing effect of waning maternal anti-DENV antibodies, which create a “physiologic secondary infection” during an infant's primary infection and thereby increase the infant's risk of DHF. The effect of maternal immunity on DHF in infants has been studied exclusively in Southeast Asia, where maternal DENV seroprevalence approaches 100%. As a consequence, the ADE model of infant DHF cannot truly be tested in Southeast Asia, because all infants possess anti-DENV antibody at birth. In the Western Hemisphere, by contrast, women may have experienced a single DENV infection, more than one DENV infection, or no DENV infection at all. The ability to include DENV-seronegative mothers as controls allows the ADE hypothesis to be directly tested in a clinical study; to our knowledge, no such study has been previously conducted. This thesis presents a case-control study designed to evaluate the influence of positive maternal dengue seroprevalence on the risk of DHF in infants. As the MSCI program provides instruction in study design, this thesis does not present findings. The clinical trial described herein began in May 2010 and enrollment is expected to continue through May 2012 (see Table 4).

    Protein structure recognition: from eigenvector analysis to structural threading method

    In this work, we try to understand the protein folding problem using pair-wise hydrophobic interaction as the dominant interaction in the protein folding process. We found a strong correlation between the amino acid sequence and the corresponding native structure of a protein. Applications of this correlation discussed in this dissertation include domain partition and a new structural threading method, as well as the performance of this method in the CASP5 competition.

In the first part, we give a brief introduction to the protein folding problem. Essential background and progress from other research groups is discussed, including interactions among amino acid residues, the lattice HP model, and the designability principle.

In the second part, we establish the correlation between the amino acid sequence and the corresponding native structure of a protein. This correlation was observed in our eigenvector study of the protein contact matrix. We believe the correlation is universal, so it can be used for automatic partition of protein structures into folding domains.

In the third part, we discuss a threading method based on the correlation between the amino acid sequence and the dominant eigenvector of the structure contact matrix. A mathematically straightforward iteration scheme provides a self-consistent optimum global sequence-structure alignment. The computational efficiency of this method makes it possible to search whole protein structure databases for structural homology without relying on sequence similarity. The sensitivity and specificity of the method are discussed, along with a case of blind-test prediction.

In the appendix, we list the overall performance of this threading method in the CASP5 blind test in comparison with other existing approaches.
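The dominant eigenvector of a contact matrix, central to the threading method above, can be computed by plain power iteration. The 4-residue contact matrix below is a toy example, not a real protein.

```python
def power_iteration(matrix, iters=200):
    """Dominant eigenvector of a symmetric non-negative matrix
    (e.g. a protein contact matrix) by repeated multiplication."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy symmetric contact matrix for a hypothetical 4-residue chain
# (1 = residues in contact, 0 = not).
contacts = [[0, 1, 1, 0],
            [1, 0, 1, 1],
            [1, 1, 0, 1],
            [0, 1, 1, 0]]
v = power_iteration(contacts)
print([round(x, 3) for x in v])  # residues 1 and 2 carry the largest weight
```

The components of the dominant eigenvector weight each residue by how strongly it participates in the contact network, which is the structural signal the threading method aligns against the sequence.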

    Passive optical network (PON) monitoring using optical coding technology

    Passive optical networks (PONs) appear to be the winning, and likely ultimate, technology for future high-capacity fiber-to-the-home. Monitoring of such systems is necessary to ensure a predetermined level of quality of service for each customer. Moreover, monitoring considerably reduces capital and operational expenditures (CAPEX and OPEX) for both the network provider and the customers. While PON capacity keeps growing, network managers still lack an efficient and suitable technology for monitoring networks of such high capacity. A variety of solutions has been proposed, but none is practical, owing to low capacity (number of customers), poor scalability, high complexity, and technological challenges. Most importantly, a desirable monitoring technology should be cost-effective, as the PON market is highly cost-sensitive. In this thesis, we consider the application of passive optical coding (OC) technology as a promising solution for the centralized monitoring of a branched optical network such as a PON. As a first step, we develop an expression for the detected monitoring signal and study its statistics. We find a new explicit expression for the signal-to-interference ratio (SIR) as a performance metric. We consider five different geographical PON distributions and study their effect on the SIR of OC monitoring. Next, we generalize our mathematical model and its expressions to monitoring signals detected by a square-law detector with realistic parameters.
We then evaluate the theoretical performance of the monitoring technology in terms of signal-to-noise ratio (SNR), signal-to-noise-plus-interference ratio (SNIR), and false-alarm probability. We elaborate on the effect of transmitted pulse power, network size, and light-source coherence on the performance of one-dimensional (1D) and two-dimensional (2D) codes for OC monitoring. An optimal design is also addressed. Finally, we apply Neyman-Pearson tests to the receiver of our monitoring system and investigate how coding and network size affect its operational expenditures (OPEX). Although 1D and 2D codes provide acceptable performance, they require encoders with a large number of optical components: they are bulky, lossy, and expensive. We therefore propose a new, simple coding scheme better suited to our monitoring application, which we call periodic coding. By simulation, we evaluate the monitoring performance in terms of SNR for a PON employing this technology. This coding scheme is used in our experimental verification of OC monitoring. We study, both experimentally and by simulation, the monitoring of a PON using periodic coding technology. We discuss design issues for periodic coding and optimal detection criteria. We also develop a reduced-complexity sequential maximum-likelihood algorithm, and we conduct experiments to validate this detection algorithm using four periodic encoders that we designed and fabricated.
We also run Monte Carlo simulations for realistic geographical PON distributions with randomly located customers, and study the effect of the coverage area and the network size (number of subscribers) on the computational efficiency of our algorithm. We provide a bound on the probability that a given network drives the algorithm to an exorbitant monitoring time, i.e., the timeout probability. Finally, we highlight the importance of averaging to overcome the power/loss budget restrictions of our monitoring system, so as to support larger network sizes and longer fiber reaches. We then upgrade our experimental setup to demonstrate a PON with 16 customers, using a directly modulated laser operating at 1 GHz to generate the probe pulses. The data measured with the experimental setup are processed by the MLSE algorithm to detect and localize the customers. Three different PON deployments are realized, and we demonstrate more rigorous monitoring for networks with a multi-level geographical distribution. We also study the loss budget of our setup for supporting higher network capacities. Finally, we study the total allowable loss budget for operating the monitoring system in the 1650 nm band as a function of the transceiver specifications. In particular, the total loss budget limit is plotted as a function of the transimpedance amplifier (TIA) gain and the analog-to-digital converter (ADC) resolution. Furthermore, we investigate the trade-off between reach and capacity (the split size at the remote node) in our monitoring system.
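A minimal version of the detection task described above can be sketched as matched filtering: each customer's encoder returns a short periodic pulse pattern, and the receiver correlates the measured trace against each candidate code over all delays. This toy correlator is illustrative only and is not the reduced-complexity sequential maximum-likelihood (MLSE) algorithm developed in the thesis.

```python
def correlate(trace, code, offset):
    """Inner product of the trace with a code placed at a given delay."""
    return sum(trace[offset + i] * c for i, c in enumerate(code))

def detect(trace, codes):
    """For each candidate periodic code, find the delay maximising the
    correlation; report (best_delay, score) per code."""
    results = {}
    for name, code in codes.items():
        results[name] = max(
            ((off, correlate(trace, code, off))
             for off in range(len(trace) - len(code) + 1)),
            key=lambda pair: pair[1],
        )
    return results

# Two hypothetical periodic codes (pulse patterns) and a noiseless trace
# containing customer A's reflection at delay 3 and customer B's at delay 10.
codes = {"A": [1, 0, 0, 1, 0, 1], "B": [1, 0, 1, 0, 0, 1]}
trace = [0.0] * 20
for off, code in ((3, codes["A"]), (10, codes["B"])):
    for i, c in enumerate(code):
        trace[off + i] += c
print(detect(trace, codes))
```

The recovered delay per code localizes each customer along the fiber; in the real system the trace is noisy and attenuated, which is why the thesis turns to averaging and a maximum-likelihood formulation.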

    Certain aspects of the epidemiology of leptospirosis in Jamaica
