12 research outputs found

    The limits of precision monomer placement in chain growth polymerization

    Get PDF
    Precise control over the location of monomers in a polymer chain has been described as the ‘Holy Grail’ of polymer synthesis. Controlled chain growth polymerization techniques have brought this goal closer, allowing the preparation of multiblock copolymers with ordered sequences of functional monomers. Such structures have promising applications ranging from medicine to materials engineering. Here we show, however, that the statistical nature of chain growth polymerization places strong limits on the control that can be obtained. We demonstrate that monomer locations are distributed according to surprisingly simple laws related to the Poisson or beta distributions. The degree of control is quantified in terms of the yield of the desired structure and the standard deviation of the appropriate distribution, allowing comparison between different synthetic techniques. This analysis establishes experimental requirements for the design of polymeric chains with controlled sequence of functionalities, which balance precise control of structure with simplicity of synthesis
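
    As a back-of-the-envelope illustration of the Poisson-type spread the authors describe, the sketch below (not from the paper; all parameters are illustrative) simulates an idealized living chain-growth polymerization in which each chain has added a Poisson-distributed number of monomers before a single functional monomer is fed, and reports the mean, the standard deviation, and the 'yield' of chains carrying that monomer at the exact target position.

```python
import numpy as np

rng = np.random.default_rng(0)

n_chains = 100_000   # number of growing chains (illustrative)
lam = 20             # mean number of monomers added before the functional unit

# In an idealized living polymerization the number of monomers each chain has
# added after a fixed feed is approximately Poisson-distributed, so the
# functional monomer fed next lands at position Poisson(lam) + 1.
positions = rng.poisson(lam, size=n_chains) + 1

target = lam + 1
yield_exact = np.mean(positions == target)   # fraction with the monomer exactly on target
print(f"mean position  : {positions.mean():.2f}")
print(f"std deviation  : {positions.std():.2f}  (~ sqrt(lam) = {np.sqrt(lam):.2f})")
print(f"yield at target: {yield_exact:.3f}")
```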

    Development and testing of dry chemicals in advanced extinguishing systems for jet engine nacelle fires

    Get PDF
    The effectiveness of dry chemicals in extinguishing, and delaying reignition of, fires caused by hydrocarbon fuel leaking onto heated surfaces, as can occur in jet engine nacelles, is studied. The commercial dry chemical fire extinguishants tried are sodium and potassium bicarbonate, carbonate, chloride, carbamate (Monnex), metal halogen, and metal hydroxycarbonate compounds. The work presents synthetic and preparative procedures for the new materials developed, a new concept of fire control by dry chemical agents, descriptions of the experimental assemblies used to test the efficiency of dry chemical extinguishants in controlling fuel fires initiated by hot surfaces, comparative test data for more than 25 chemical systems in a 'static' assembly with no air flow across the heated surface, and similar comparative data for more than ten compounds in a dynamic system with air flows up to 350 ft/sec.

    The visual uncertainty paradigm for controlling screen-space information in visualization

    Get PDF
    The information visualization pipeline serves as a lossy communication channel for presenting data on a screen space of limited resolution. The lossy communication is not just a machine-only phenomenon due to information loss caused by translation of the data, but also a reflection of the degree to which the human user can comprehend visual information. The common entity in both aspects is the uncertainty associated with the visual representation. However, in the current linear model of the visualization pipeline, the visual representation is mostly considered the end rather than the means for facilitating the analysis process. While the perceptual side of visualization is also being studied, little attention is paid to the way the visualization appears on the display. Thus, I believe there is a need to study the appearance of the visualization on a limited-resolution screen in order to understand its own properties and how they influence the way the data is represented. I argue that the visual uncertainty paradigm for controlling screen-space information enables user-centric optimization of a visualization in different application scenarios. Conceptualizing visual uncertainty allows the encoding and decoding aspects of visual representation to be integrated into a holistic framework, facilitating the definition of metrics that serve as a bridge between the last stages of the visualization pipeline and the user's perceptual system. The goal of this dissertation is three-fold: i) to conceptualize a visual uncertainty taxonomy, in the context of pixel-based, multi-dimensional visualization techniques, that helps in the systematic definition of screen-space metrics; ii) to apply the taxonomy to identify sources of useful visual uncertainty that help protect the privacy of sensitive data, and to identify the types of uncertainty that can be reduced through interaction techniques; and iii) to apply the metrics to the design of information-assisted models that help in visualizing high-dimensional, temporal data.
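
    As a purely illustrative example of the kind of screen-space metric the dissertation calls for, the sketch below measures how often data points collide on the same pixel when a scatter plot is rendered at a limited resolution; the collision rate is a crude proxy for information lost between the data and its on-screen appearance. The metric and all names are assumptions for illustration, not the taxonomy defined in the thesis.

```python
import numpy as np

def pixel_collision_rate(x, y, width, height):
    """Fraction of data points that share a pixel with at least one other point
    when (x, y) in [0, 1]^2 are rendered on a width x height pixel grid."""
    # Map normalized data coordinates to integer pixel coordinates.
    px = np.clip((x * width).astype(int), 0, width - 1)
    py = np.clip((y * height).astype(int), 0, height - 1)
    cell = px * height + py                        # one id per pixel
    _, inverse, counts = np.unique(cell, return_inverse=True, return_counts=True)
    return np.mean(counts[inverse] > 1)            # points whose pixel is shared

rng = np.random.default_rng(1)
x, y = rng.random(50_000), rng.random(50_000)
for res in (100, 400, 1600):
    print(f"{res}x{res} pixels: collision rate {pixel_collision_rate(x, y, res, res):.3f}")
```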

    Application of a sequential partial extraction procedure to investigate uranium, copper, zinc, iron and manganese partitioning in recent lake, stream and bog sediments, northern Saskatchewan, by Douglas Andrew Warren Lehto

    Get PDF
    Sequential partial extractions show that the partitioning of uranium, copper, zinc, iron and manganese into lake, stream and bog sediments is affected by the type and abundance of the component fractions present in the sediments and by the physico-chemical conditions of the superjacent waters. The water pH influences the concentration of uranium retained by organic matter, as well as the relative proportion partitioned into the amorphous iron hydroxide fraction and the humic and fulvic acid components of the organic matter fraction. Copper partitioning is controlled by the percent carbon content of the sediments, which influences the concentration of metal retained in the organic matter fraction; the amount of copper retained by other component fractions is determined by their relative abundance in the sediments. The Eh-pH conditions of the superjacent waters control the solubilities of iron, manganese and zinc, thereby affecting the availability and sorption of these metals into the organic matter and inorganic hydroxide fractions of the sediment. Metal partitioning characteristics and the physico-chemical factors that influence metal partitioning should be considered when using lake, stream and bog sediments in geochemical exploration.

    Antibody-based biosensor assays for the detection of zilpaterol and markers for prostate cancer

    Get PDF
    The research presented in this thesis describes the production and application of antibodies against the drug of abuse zilpaterol, and the application of antibodies against prostate-specific antigen (PSA), a cancer marker. Polyclonal antibodies were used in the development of immunoassays in a competitive ELISA format and on the Biacore (a surface plasmon resonance-based optical biosensor capable of monitoring biomolecular interactions in 'real-time'). A zilpaterol-HSA conjugate was used to generate and characterise single-chain antibody fragments. A combinatorial single-chain (scFv) antibody phage display library was generated against zilpaterol. Splenic mRNA from mice pre-immunised with a zilpaterol-HSA conjugate was used in the amplification of antibody genes, followed by cloning into vectors from a well-established phage display system. Four positive clones were isolated during panning. One clone (B1) was selected and re-cloned into a plasmid for soluble scFv antibody expression. The soluble scFv antibody was purified and used in the development of a competitive ELISA-based assay. Further analysis of the B1 clone was carried out during the development of an inhibition assay for zilpaterol on the Biacore. Affinity determinations of the scFv antibody for zilpaterol were carried out using 'real-time' biomolecular interaction analysis. A recombinant form of PSA was also produced and characterised, and commercial anti-PSA antibodies were used to develop a competitive ELISA.
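
    The quantitative step behind a competitive ELISA is typically the fit of a four-parameter logistic (4PL) calibration curve from which an IC50 is read off; the sketch below shows that step on made-up absorbance values. It is a generic illustration under assumed names and data, not the assay developed in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, slope):
    """Four-parameter logistic: signal falls from `top` to `bottom` as free
    analyte (e.g. zilpaterol) competes the antibody away from the coated conjugate."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Hypothetical standard curve: analyte concentration (ng/mL) vs. absorbance at 450 nm.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
a450 = np.array([1.82, 1.78, 1.60, 1.25, 0.80, 0.45, 0.28, 0.22])

params, _ = curve_fit(four_pl, conc, a450, p0=[1.8, 0.2, 1.0, 1.0], bounds=(0, np.inf))
top, bottom, ic50, slope = params
print(f"IC50 ~ {ic50:.2f} ng/mL, slope ~ {slope:.2f}")
```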

    A study of factors affecting precision in atomic absorption spectrometry

    Get PDF
    1. The effect of deviations from Beer's law on the precision of atomic absorption analysis has been examined from a theoretical point of view, and a function has been derived which makes it possible to evaluate quantitatively the effect of calibration curvature on the precision of analysis. The influence of incomplete sample volatilization on calibration curvature has been briefly investigated. 2. Possible error sources in atomic absorption spectrometry have been classified according to the "error function" (i.e., the dependence, upon transmittance T, of the uncertainty dT in a given transmittance measurement) with which they are associated. The magnitude of the contribution from each component function to the overall error function has been evaluated quantitatively, and it has been shown that the major component in nearly every case examined is that associated with the dynamic nature of the flame. Concentration ranges for optimum precision are suggested. 3. The effect of varying instrumental parameters on precision has been investigated, and generalized conditions for best precision have been ascertained. 4. The effect of an initial solvent extraction step on the precision of atomic absorption has been investigated for the elements copper and lead. It is shown that solvent extraction may be used to improve both the analytical sensitivity and the precision of analysis when very low concentrations of metal are determined. 5. The precision of analytical methods involving atomic absorption spectrometry has been studied, and the standard deviations compared with those obtained for the analysis of similar samples by means of a variety of other methods of analysis, both instrumental and classical
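
    Point 2 can be illustrated with the classical propagation of a fixed transmittance uncertainty dT through Beer's law (A = -log10 T, with concentration proportional to A), which gives |dc/c| = |dT / (T ln T)| and an optimum near T ≈ 0.37 (A ≈ 0.43). The short sketch below evaluates this; it is an illustration of the standard result, not the thesis's own derivation.

```python
import numpy as np

dT = 0.005  # assumed constant transmittance uncertainty (0.5% T)

T = np.linspace(0.01, 0.99, 981)
# Beer's law: A = -log10(T), c proportional to A, so |dc/c| = |dT / (T * ln T)|.
rel_err = np.abs(dT / (T * np.log(T)))

best = T[np.argmin(rel_err)]
print(f"minimum relative error at T = {best:.3f} (A = {-np.log10(best):.3f})")
```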

    Applications of fixed-point arithmetic to the representation of low-level graphics primitives

    Full text link
    Fixed-point arithmetic has the property of performing operations on decimal numbers at the computational cost of integer operations. Although it is not natively supported by programming languages or by general-purpose CPUs, it is the ideal arithmetic for applications in industrial control, simulation, computer graphics, multimedia, digital signal processing, and so on; its lack of standardization and support prevents its widespread use in many fields of computing. This thesis justifies the use of this arithmetic in the field of computer graphics. Starting from a study of the implementation and standardization of the arithmetic, the relative performance gains and the precisions obtained are studied, together with their application to discrete and flight simulation. The drawing algorithms for basic primitives such as lines, with and without antialiasing, their clipping, and the drawing of circles and ellipses are analysed. Several implementations of fixed-point-based algorithms are presented, and the improvements in computational cost and precision over brute-force and traditional algorithms are analysed. While traditional algorithms typically yield an error of between 0.32 and 0.45 pixels, depending on the primitive analysed, the fixed-point-based algorithms do not exceed 0.25 pixels on average, matching the theoretical error produced by the brute-force algorithms. Moreover, the fixed-point-based algorithms usually improve on the average speed of the traditional algorithms, and high speedups can sometimes be achieved with parallelization techniques. This is the case for the parallel version of the DDA algorithm, with and without antialiasing, which can draw a line in time logarithmic in its length in pixels. The resulting algorithms are so simple that some of them can be implemented very efficiently in hardware within a graphics processor.
    Mollá Vayá, RP. (2001). Aplicaciones de la aritmética en coma fija a la representación de primitivas gráficas de bajo nivel [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/15406
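
    To make the fixed-point idea concrete, the sketch below rasterizes a line with a DDA whose coordinates are carried in Q16.16 fixed point, so the inner loop uses only integer additions and shifts; it is an illustrative reconstruction under assumed conventions, not code from the thesis.

```python
FRAC_BITS = 16                  # Q16.16 fixed-point format
ONE = 1 << FRAC_BITS

def dda_line_fixed(x0, y0, x1, y1):
    """Rasterize a line between integer endpoints with a DDA whose fractional
    coordinates are carried as Q16.16 integers: the inner loop uses only
    integer additions and shifts, no floating point."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(x0, y0)]
    # Per-step increments converted once to fixed point (truncated; the
    # accumulated error stays far below one pixel for screen-sized lines).
    x_inc = (dx * ONE) // steps
    y_inc = (dy * ONE) // steps
    x_fp, y_fp = x0 * ONE, y0 * ONE
    pixels = []
    for _ in range(steps + 1):
        # Round to the nearest pixel by adding one half before shifting down.
        pixels.append(((x_fp + ONE // 2) >> FRAC_BITS, (y_fp + ONE // 2) >> FRAC_BITS))
        x_fp += x_inc
        y_fp += y_inc
    return pixels

print(dda_line_fixed(0, 0, 8, 3))   # [(0, 0), (1, 0), (2, 1), ..., (8, 3)]
```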

    Essays on Earnings Predictability

    Get PDF
    This dissertation addresses the prediction of corporate earnings. It examines whether the precision of earnings forecasts can be increased by basing them on historical financial ratios, whether accounting standards affect the accuracy of analysts' earnings forecasts, and how the stock market is affected by the accuracy of corporate earnings projections. The dissertation contributes to a deeper understanding of these issues. First, it is shown how earnings forecasts can be generated from historical time-series patterns of financial ratios. This is done by modeling the return on equity and the growth rate in equity as two separate but correlated time-series processes which converge to a long-term, constant level. Empirical results suggest that these earnings forecasts are not more accurate than simpler forecasts based on a historical time series of earnings. Second, the dissertation shows how accounting standards affect analysts' earnings predictions: accounting conservatism contributes to a more volatile earnings process, which lowers the accuracy of analysts' earnings forecasts. Finally, the dissertation shows how the stock market's reaction to the disclosure of information about corporate earnings depends on how well corporate earnings can be predicted. The results indicate that the stock market's reaction to the disclosure of earnings information is stronger for firms whose earnings can be predicted with higher accuracy than for firms whose earnings cannot be predicted with the same degree of accuracy.
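
    The first essay's forecasting approach, profitability converging to a long-run level, can be illustrated with a simple mean-reverting AR(1) for return on equity; the sketch below is a stylized stand-in with assumed parameters and a constant equity growth rate, not the dissertation's estimated model.

```python
def forecast_roe(roe_now, long_run=0.10, persistence=0.7, horizon=5):
    """Forecast return on equity as an AR(1) that mean-reverts to `long_run`:
    roe[t+1] = long_run + persistence * (roe[t] - long_run)."""
    path = []
    roe = roe_now
    for _ in range(horizon):
        roe = long_run + persistence * (roe - long_run)
        path.append(roe)
    return path

book_equity = 100.0          # assumed current book value of equity
for t, roe in enumerate(forecast_roe(0.20), start=1):
    earnings = roe * book_equity
    print(f"year {t}: expected ROE {roe:.3f}, earnings forecast {earnings:.1f}")
    book_equity *= 1.04      # assumed constant equity growth (a correlated process in the essay)
```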

    Efficient algorithms for computing the L_2 discrepancy

    Get PDF
    The L_2-discrepancy is a quantitative measure of precision for multivariate quadrature rules. It can be computed explicitly. Previously known algorithms needed O(m^2) operations, where m is the number of nodes. In this paper we present algorithms which require O(m (log m)^d) operations.
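
    For reference, the explicit O(m^2) computation mentioned above is commonly done with Warnock's formula for the L_2 star discrepancy; the sketch below implements that baseline directly (the paper's faster O(m (log m)^d) algorithms are not reproduced here).

```python
import numpy as np

def l2_star_discrepancy(points):
    """Warnock's explicit formula for the L2 star discrepancy of m points in [0,1]^d.
    Direct evaluation costs O(m^2 d); this is the baseline the paper improves on."""
    x = np.asarray(points)
    m, d = x.shape
    term1 = 3.0 ** (-d)
    term2 = (2.0 / m) * np.prod((1.0 - x ** 2) / 2.0, axis=1).sum()
    # Pairwise products of (1 - max(x_ik, x_jk)) over the d coordinates.
    pairwise_max = np.maximum(x[:, None, :], x[None, :, :])
    term3 = np.prod(1.0 - pairwise_max, axis=2).sum() / m ** 2
    return np.sqrt(term1 - term2 + term3)

rng = np.random.default_rng(42)
print(l2_star_discrepancy(rng.random((256, 3))))
```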
