3,442 research outputs found

    Characterizing Faint Submillimeter Galaxies with Cluster Lensing.

    Ph.D. Thesis, University of Hawaiʻi at Mānoa, 2017

    An integrated approach with new strategies for QSAR models and lead optimization

    Compound testing set for huAChE collected from Guo et al. (PDF 52 kb)

    Synthesis and Morphological Transformation of Conjugated Amphiphilic Diblock Copolymers in Mixed Solvents

    The synthesis, morphological transformation, and photophysical properties of a rod-coil block copolymer, poly[2,7-(9,9-dihexylfluorene)]-block-poly(2-vinylpyridine) (PF-b-P2VP), with P2VP coils of various lengths in a mixed methanol/tetrahydrofuran (MeOH/THF) solvent are reported. Various morphological structures of the PF-b-P2VP aggregates, including spheres, short worm-like structures, long cylinders, and large compound micelles (LCMs), were observed as the coil length of PF-b-P2VP and the selectivity of the mixed solvent were varied. These aggregated structures showed considerable variation in the optical absorption, fluorescence, and photoluminescence (PL) quantum yield of the rod-coil copolymers. The hypsochromic spectral shift became more pronounced as the length of the P2VP coils and the content of poor solvent increased. This study reveals the influence of coil length and solvent selectivity on the morphology and optical characteristics of rod-coil amphiphilic copolymers.

    High performance entanglement-assisted quantum LDPC codes need little entanglement

    Though the entanglement-assisted formalism provides a universal connection between a classical linear code and an entanglement-assisted quantum error-correcting code (EAQECC), the need to maintain a large number of pure maximally entangled states when constructing EAQECCs is a practical obstacle to its use. It has also been conjectured that the power of the entanglement-assisted formalism to convert good classical codes comes from the massive consumption of maximally entangled states. We show that this conjecture is wrong by providing families of EAQECCs with an entanglement consumption rate that diminishes linearly as a function of the code length. Notably, two families of EAQECCs constructed in the paper require only one copy of a maximally entangled state no matter how large the code length is. These families of EAQECCs, constructed from classical finite-geometry LDPC codes, perform very well in our numerical simulations. Our work indicates that EAQECCs are not only theoretically interesting but also physically implementable. Finally, these high-performance entanglement-assisted LDPC codes with low entanglement consumption rates allow one to construct high-performance standard QECCs with very similar parameters.
    Comment: 8 pages, 5 figures. Published version
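
    For concreteness, in the entanglement-assisted stabilizer formalism the entanglement cost of an EAQECC built from a single classical binary code is determined by its parity-check matrix H: the number of maximally entangled pairs (ebits) consumed is c = rank over GF(2) of H H^T. The sketch below (plain Python with helper names invented here, not the paper's construction) shows how one could check the ebit demand of a given classical LDPC parity-check matrix.

        import numpy as np

        def gf2_rank(mat):
            """Rank of a binary matrix over GF(2) via Gaussian elimination."""
            m = np.array(mat, dtype=np.uint8) % 2
            rows, cols = m.shape
            rank = 0
            for col in range(cols):
                pivot = next((r for r in range(rank, rows) if m[r, col]), None)
                if pivot is None:
                    continue
                m[[rank, pivot]] = m[[pivot, rank]]      # swap pivot row into place
                for r in range(rows):                    # clear the column elsewhere
                    if r != rank and m[r, col]:
                        m[r] ^= m[rank]
                rank += 1
            return rank

        def ebits_required(H):
            """Ebits consumed by the EAQECC obtained from a classical binary
            parity-check matrix H: c = rank over GF(2) of H @ H.T."""
            H = np.array(H, dtype=int)
            return gf2_rank((H @ H.T) % 2)

        # Example: the [7,4] Hamming code is dual-containing, so H H^T = 0
        # and the resulting code needs no entanglement at all (c = 0).
        H_hamming = [[1, 0, 1, 0, 1, 0, 1],
                     [0, 1, 1, 0, 0, 1, 1],
                     [0, 0, 0, 1, 1, 1, 1]]
        print(ebits_required(H_hamming))   # -> 0

    A generic dense parity-check matrix typically pushes c close to its full rank, which is exactly the heavy entanglement consumption the conjecture above assumes; the abstract's point is that its finite-geometry LDPC families sit at the opposite extreme, needing as little as one ebit regardless of length.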

    FP8-BERT: Post-Training Quantization for Transformer

    Transformer-based models such as BERT have been widely applied across natural language processing tasks. However, one inevitable side effect is that they require massive memory storage and incur high inference cost when deployed in production. Quantization is one of the most popular ways to alleviate this cost. However, previous 8-bit quantization strategies based on the INT8 data format either suffer from accuracy degradation in a Post-Training Quantization (PTQ) fashion or require an expensive Quantization-Aware Training (QAT) process. Recently, a new numeric format, FP8 (i.e., 8-bit floating point), has been proposed and is supported in commercial AI computing platforms such as the NVIDIA H100. In this paper, we empirically validate the effectiveness of FP8 as a way to perform Post-Training Quantization without significant loss of accuracy, using a simple calibration and format conversion process. We adopt the FP8 standard proposed by NVIDIA Corp. (2022) in extensive experiments on BERT variants over the GLUE and SQuAD v1.1 datasets, and show that PTQ with FP8 significantly improves accuracy over INT8 PTQ, approaching that of the full-precision model.
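
    To make the recipe concrete, the following is a minimal NumPy sketch of per-tensor fake quantization to the E4M3 FP8 format (1 sign bit, 4 exponent bits, 3 mantissa bits, maximum magnitude 448) with simple max-abs calibration. It is only an illustration under those assumptions; the function names are invented here, and it is not the paper's implementation or any vendor's FP8 library.

        import numpy as np

        E4M3_MAX = 448.0       # largest finite magnitude in E4M3
        MANTISSA_BITS = 3      # explicit mantissa bits in E4M3
        MIN_NORMAL_EXP = -6    # smallest normal binary exponent in E4M3

        def calibrate_scale(x):
            """Max-abs calibration: map the tensor's largest magnitude to E4M3_MAX."""
            return max(float(np.abs(x).max()) / E4M3_MAX, 1e-12)

        def fake_quantize_e4m3(x, scale):
            """Simulate per-tensor FP8 (E4M3) post-training quantization:
            scale, clip to the representable range, round the mantissa to
            3 bits (with subnormal handling), then scale back."""
            y = np.clip(x / scale, -E4M3_MAX, E4M3_MAX)
            # |y| = m * 2**e with 0.5 <= m < 1, so values in [2**(e-1), 2**e)
            # are spaced 2**(e - 1 - MANTISSA_BITS) apart.
            _, e = np.frexp(y)
            e = np.maximum(e, MIN_NORMAL_EXP + 1)   # below this, spacing is the subnormal step
            step = np.exp2(e - 1 - MANTISSA_BITS)
            return np.round(y / step) * step * scale

        # Usage: fake-quantize a weight matrix and inspect the rounding error.
        w = (np.random.randn(768, 768) * 0.02).astype(np.float32)
        s = calibrate_scale(w)
        w_q = fake_quantize_e4m3(w, s)
        print("max abs rounding error:", float(np.abs(w - w_q).max()))

    In the PTQ setting described above, the same kind of scheme would be applied to the weights (and, with activation statistics gathered from a small calibration set, to the activations) before evaluating on GLUE or SQuAD.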