707 research outputs found

    Global strong solutions to the planar compressible magnetohydrodynamic equations with large initial data and vacuum

    This paper considers the initial boundary value problem for the planar compressible magnetohydrodynamic equations with large initial data and vacuum. The global existence and uniqueness of large strong solutions are established when the heat conductivity coefficient $\kappa(\theta)$ satisfies \begin{equation*} C_{1}(1+\theta^{q}) \leq \kappa(\theta) \leq C_{2}(1+\theta^{q}) \end{equation*} for some constants $q > 0$ and $C_{1}, C_{2} > 0$.

    Inherent limitations of probabilistic models for protein-DNA binding specificity

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site, and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, it also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest-affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and by the fact that independent normalization at each position skews the site probabilities. Generally, probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible.
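    The contrast drawn above can be sketched in a few lines of code. This is an illustrative toy model, not the paper's actual data: the PFM and energy values below are invented, and the sigmoid occupancy function is the standard Boltzmann/Fermi form assumed for concreteness. The point it demonstrates is the non-linearity: a product of per-position probabilities scales multiplicatively, while occupancy saturates for the strongest sites at high protein concentration.

```python
import math

# Hypothetical 3-position probabilistic model (PFM): P(base | position),
# with site probability taken as the product over positions (independence).
pfm = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
]

def pfm_prob(site):
    p = 1.0
    for pos, base in enumerate(site):
        p *= pfm[pos][base]
    return p

# Biophysical view: each base contributes an energy (in kT units), and the
# binding probability is a sigmoid of the total energy, so it saturates for
# high-affinity sites -- a behaviour a product of probabilities cannot capture.
energy = [
    {"A": 0.0, "C": 2.0, "G": 2.0, "T": 2.0},
    {"A": 2.0, "C": 0.0, "G": 2.0, "T": 2.0},
    {"A": 2.0, "C": 2.0, "G": 0.0, "T": 2.0},
]

def occupancy(site, mu=3.0):
    # mu stands in for (log) protein concentration; higher mu -> saturation.
    e = sum(energy[pos][base] for pos, base in enumerate(site))
    return 1.0 / (1.0 + math.exp(e - mu))

best, worse = "ACG", "TCG"
# Under the PFM the two sites differ 7-fold in probability; under the
# occupancy model the best site is near saturation, so the ratio is far
# smaller -- the PFM overstates the difference in actual binding probability.
print(pfm_prob(best) / pfm_prob(worse))    # 7-fold
print(occupancy(best) / occupancy(worse))  # much less than 7-fold
```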

    Implementation of variational quantum algorithms on superconducting qudits

    Quantum computing is considered an emerging technology with promising applications in chemistry, materials, medicine, and cryptography. Superconducting circuits are a leading candidate hardware platform for the realisation of quantum computing, and superconducting devices have now been demonstrated at a scale of hundreds of qubits. Further scale-up faces challenges in wiring, frequency crowding, and the high cost of control electronics. Complementary to increasing the number of qubits, using qutrits (3-level systems) or qudits (d-level systems, d > 3) as the basic building block of a quantum processor can also increase its computational capability. A commonly used superconducting qubit design, the transmon, has more than two levels, making it a good candidate for a qutrit or qudit processor. Variational quantum algorithms are a class of quantum algorithms that can be implemented on near-term devices; they have been proposed to have a higher tolerance to the noise of such devices, making them promising for near-term applications of quantum computing. The difference between qubits and qudits makes it non-trivial to translate a variational algorithm designed for qubits onto a qudit quantum processor: the algorithm must either be rewritten in a qudit version, or an emulator must be developed that emulates a qubit processor on a qudit processor. This thesis describes research on the implementation of variational quantum algorithms, with a particular focus on utilising more than two computational levels of transmons. The work comprises building a two-qubit transmon device and a multi-level transmon device used as a qutrit or a qudit (d = 4). We fully benchmarked the two-qubit and single-qudit devices with randomised benchmarking and gate-set tomography, and found good agreement between the two approaches. The qutrit Hadamard gate is reported to have an infidelity of (3.22 ± 0.11) × 10⁻³, which is comparable to state-of-the-art results.
We use the qudit to implement a two-qubit emulator and report that the two-qubit Clifford-gate randomised-benchmarking infidelity on the emulator, (9.5 ± 0.7) × 10⁻², is worse than that of the physical two-qubit device, (4.0 ± 0.3) × 10⁻². We also implemented active reset for the qudit transmon to demonstrate preparing high-fidelity initial states with active feedback; gate-set tomography shows the initial-state fidelity improved from 0.900 ± 0.011 to 0.9932 ± 0.0013. We finally utilised the single-qudit device to implement quantum algorithms. First, a single-qutrit classifier for the iris dataset was implemented; across multiple trials, the qutrit classifier achieved a training accuracy of 0.96 ± 0.03 and a testing accuracy of 0.94 ± 0.04. Second, we implemented a two-qubit emulator with a 4-level qudit and used the emulator to demonstrate a variational quantum eigensolver for hydrogen molecules. The solved energy versus hydrogen bond distance is within 1.5 × 10⁻² Hartree, below the chemical accuracy threshold. From the characterisation and benchmarking results and the successful demonstration of two quantum algorithms, we conclude that the higher levels of a transmon can be used to increase the size of the Hilbert space available for quantum computation at minimal extra cost.
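    The emulation idea described in the abstract can be sketched numerically. The basis mapping below (|00⟩, |01⟩, |10⟩, |11⟩ identified with qudit levels |0⟩…|3⟩) is an assumption made for illustration, not necessarily the encoding used in the thesis; it shows why any two-qubit gate becomes just a 4 × 4 unitary on a single d = 4 qudit.

```python
import numpy as np

# Two-qubit CNOT written as a 4x4 unitary on the qudit levels, under the
# assumed mapping |00>,|01>,|10>,|11> -> levels |0>,|1>,|2>,|3>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Hadamard on the first emulated qubit = H (tensor) I, again a 4x4 unitary.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H1 = np.kron(H, np.eye(2))

# Prepare the qudit in level |0> (emulating |00>), apply H on qubit 1, then CNOT.
psi = np.zeros(4, dtype=complex)
psi[0] = 1.0
psi = CNOT @ (H1 @ psi)

# The emulator now holds the Bell state (|00> + |11>)/sqrt(2): all population
# sits in levels 0 and 3 with probability 0.5 each.
print(np.abs(psi) ** 2)
```

    In this picture, single-qubit gates on the emulated qubits become qudit unitaries acting on level pairs (0,2)/(1,3) or (0,1)/(2,3), which is where the extra calibration cost of the emulator comes from.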

    A Generalized Biophysical Model of Transcription Factor Binding Specificity and Its Application on High-Throughput SELEX Data

    The interaction between transcription factors (TFs) and DNA plays an important role in gene expression regulation. In the past, experiments on protein–DNA interactions could only identify a handful of sequences that a TF binds with high affinity. In recent years, several high-throughput experimental techniques, such as high-throughput SELEX (HT-SELEX), protein-binding microarrays (PBMs) and ChIP-seq, have been developed to estimate the relative binding affinities of large numbers of DNA sequences both in vitro and in vivo. The large volume of data generated by these techniques proved to be a challenge and prompted the development of novel motif discovery algorithms. These algorithms are based on a range of TF binding models, including the widely used probabilistic model that represents binding motifs as position frequency matrices (PFMs). However, the probabilistic model has limitations, and the PFMs extracted from some of the high-throughput experiments are known to be suboptimal. In this dissertation, we address these questions and develop a generalized biophysical model and an expectation maximization (EM) algorithm for estimating position weight matrices (PWMs) and other parameters from HT-SELEX data. First, we discuss the inherent limitations of the popular probabilistic model and compare it with a biophysical model that assumes the nucleotides in a binding site contribute independently to its binding energy instead of its binding probability. We use simulations to demonstrate that the biophysical model almost always provides better fits to the data and conclude that it should take the place of the probabilistic model in characterizing TF binding specificity.
Then we describe a generalized biophysical model, which removes the assumption of known binding locations and is particularly suitable for modeling protein–DNA interactions in HT-SELEX experiments, and BEESEM, an EM algorithm capable of estimating the binding model and binding locations simultaneously. BEESEM can also calculate confidence intervals for the estimated parameters of the binding model, a rare but useful feature among motif discovery algorithms. By comparing BEESEM with five other algorithms on HT-SELEX, PBM and ChIP-seq data, we demonstrate that BEESEM provides significantly better fits to in vitro data and is similar to the other methods (with one exception) on in vivo data under the criterion of the area under the receiver operating characteristic curve (AUROC). We also discuss the limitations of the AUROC criterion, which is purely rank-based and thus misses quantitative binding information. Finally, we investigate whether adding DNA shape features can significantly improve the accuracy of binding models. We evaluate the ability of the gradient boosting classifiers generated by DNAshapedTFBS, an algorithm that takes DNA shape features into account, to differentiate ChIP-seq peaks from random background sequences, and compare them with various matrix-based binding models. The results indicate that, compared with optimized PWMs, adding DNA shape features does not produce significantly better binding models and may increase the risk of overfitting on training datasets.

    Multi-agent blind quantum computation without universal cluster states

    Blind quantum computation (BQC) protocols enable quantum algorithms to be executed on third-party quantum agents while keeping the data and algorithm confidential. Previous proposals for measurement-based BQC require preparing a highly entangled cluster state. In this paper, we show that such a requirement is not necessary. Our protocol requires only pre-shared Bell pairs between the delegated quantum agents, and no classical or quantum information exchange between agents during the execution. Our proposal requires fewer quantum resources than previous proposals by eliminating the need for a universal cluster state.
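    A quick numerical check of why pre-shared Bell pairs make a useful resource for delegation (a standard property, not specific to this paper's protocol): tracing out either agent's half of a Bell pair leaves the other agent with a maximally mixed state, so neither agent alone holds any information about the shared state.

```python
import numpy as np

# Bell pair (|00> + |11>)/sqrt(2) shared between agents A and B.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Density matrix, reshaped so indices are (a, b, a', b') for qubits A and B.
rho = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)

# Partial trace over agent B's qubit (sum over b = b').
rho_A = np.trace(rho, axis1=1, axis2=3)

# Agent A's reduced state is I/2: maximally mixed, carrying no information.
print(rho_A.real)
```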

    Next generation multi-scale quantum simulations for strongly correlated materials

    This thesis represents our effort to develop next-generation multi-scale quantum simulation methods suitable for strongly correlated materials, whose complicated phase diagrams suggest complicated underlying physics. We first give a detailed description of the parquet formalism. With its help, different approximate methods can be unified, and a hierarchy of approximations with different accuracies and computational complexities can be designed. Next, we present a numerical solution of the parquet approximation. Results on the Hubbard model are compared to those obtained from determinant quantum Monte Carlo (DQMC), fluctuation exchange (FLEX), and self-consistent second-order approximation methods. The comparison shows satisfactory agreement with DQMC and a significant improvement over the FLEX and self-consistent second-order approximations. The parquet formalism can also be used to analyze the superconducting mechanism of the high-temperature superconductors. The dynamical cluster approximation (DCA) is used to understand the proximity of the superconducting dome to the quantum critical point in the 2D Hubbard model. At optimal doping, where the pairing interaction V_d is revealed to be featureless, we find a power-law behavior of the bare pairing susceptibility χ_{0d}(ω = 0), replacing the BCS logarithmic behavior, and a strongly enhanced T_c. We then propose another multi-scale approach combining the DCA with the recently introduced dual-fermion formalism. Within this approach, short and long length-scale physics is addressed by the DCA cluster calculation, while intermediate length-scale physics is addressed diagrammatically using dual fermions. The bare and dressed dual-fermion Green functions scale as O(1/L_c), so perturbation theory on the dual lattice converges very quickly. Lastly, we study the response to dynamical modulation of the optical-lattice potential by analyzing properties of the repulsive fermionic Hubbard model in an optical lattice.
We provide numerical evidence that modulations of the on-site local interaction cannot be ignored and can even contribute strongly to the dynamical behavior of the system in highly doped cases.

    A Low Power 5.8GHz Fully Integrated CMOS LNA for Wireless Applications

    A low-power 5.8 GHz fully integrated CMOS low noise amplifier (LNA) with on-chip spiral inductors for wireless applications is designed in TSMC 0.18 µm technology. The cascode structure and the power-constrained simultaneous noise and input matching technique are adopted to achieve low noise, low power, and high gain. The proposed LNA exhibits state-of-the-art performance while consuming only 6.4 mW from a 1.8 V supply. Simulation results show a noise figure (NF) of only 0.972 dB, very close to NF_min, while maintaining the other performance metrics. The proposed LNA also has an input 1-dB compression point (IP1dB) of −21.22 dBm, a power gain of 17.04 dB, and good input and output reflection coefficients, indicating that the proposed topology is well suited to narrowband LNAs for 802.11a wireless applications
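    For context, the reported figures convert to linear quantities with the standard dB definitions; the numbers below are taken directly from the abstract, and the arithmetic is the usual 10·log₁₀ convention for power ratios.

```python
# Convert the LNA's reported dB figures to linear quantities.
supply_v = 1.8    # V, supply voltage
power_mw = 6.4    # mW, power consumption
gain_db = 17.04   # dB, power gain
nf_db = 0.972     # dB, noise figure

current_ma = power_mw / supply_v       # supply current: ~3.56 mA
gain_linear = 10 ** (gain_db / 10)     # ~50.6x power gain
noise_factor = 10 ** (nf_db / 10)      # F ~ 1.25: ~25% added noise power

print(f"{current_ma:.2f} mA, gain {gain_linear:.1f}x, F = {noise_factor:.3f}")
```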