64 research outputs found

    From Ad-Hoc to Systematic: A Strategy for Imposing General Boundary Conditions in Discretized PDEs in variational quantum algorithm

    Full text link
    We proposed a general quantum-computing-based algorithm that harnesses the exponential power of noisy intermediate-scale quantum (NISQ) devices in solving partial differential equations (PDE). This variational quantum eigensolver (VQE)-inspired approach transcends previous idealized model demonstrations constrained by strict and simplistic boundary conditions. It enables the imposition of arbitrary boundary conditions, significantly expanding its potential and adaptability for real-world applications, achieving this "from ad-hoc to systematic" concept. We have implemented this method using the fourth-order PDE (the Euler-Bernoulli beam) as example and showcased its effectiveness with four different boundary conditions. This framework enables expectation evaluations independent of problem size, harnessing the exponentially growing state space inherent in quantum computing, resulting in exceptional scalability. This method paves the way for applying quantum computing to practical engineering applications.Comment: 16 pages, 8 figure

    Acoustic Holographic Rendering with Two-dimensional Metamaterial-based Passive Phased Array

    Get PDF
    Acoustic holographic rendering, in complete analogy with optical holography, is useful for applications ranging from multi-focal lensing and multiplexed sensing to synthesizing three-dimensional complex sound fields. Conventional approaches rely on a large number of active transducers and phase-shifting circuits. In this paper we show that, by using passive metamaterials as subwavelength pixels, holographic rendering can be achieved without cumbersome circuitry and with only a single transducer, significantly reducing system complexity. Such metamaterial-based holograms can serve as versatile platforms for advanced acoustic wave manipulation and signal modulation, opening new possibilities in acoustic sensing, energy deposition, and medical diagnostic imaging.
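A minimal design sketch of the passive-pixel idea, under assumed parameters (array size, pitch, frequency, and focal positions are all hypothetical, and this is not the paper's fabrication procedure): each subwavelength pixel must impart the phase of the back-propagated field from the desired focal spots, so a single plane-wave transducer behind the hologram reconstructs the target sound field.

```python
import numpy as np

c, f = 343.0, 40e3            # speed of sound in air [m/s], frequency [Hz] (assumed)
k = 2 * np.pi * f / c         # wavenumber
lam = c / f

# 16x16 pixel array in the z = 0 plane, half-wavelength pitch (assumed values).
xs = (np.arange(16) - 7.5) * lam / 2
X, Y = np.meshgrid(xs, xs)

# Two target focal spots for multi-focal lensing (hypothetical positions).
foci = [(-4 * lam, 0.0, 20 * lam), (4 * lam, 0.0, 20 * lam)]

# Superpose the conjugated point-source fields from each focus at the pixels;
# the argument of the sum is the phase each passive pixel must impose.
field = np.zeros_like(X, dtype=complex)
for fx, fy, fz in foci:
    r = np.sqrt((X - fx) ** 2 + (Y - fy) ** 2 + fz ** 2)
    field += np.exp(-1j * k * r) / r

phase = np.mod(np.angle(field), 2 * np.pi)  # required pixel phase in [0, 2π)
print(phase.shape)
```

In a metamaterial implementation, each entry of `phase` would be quantized to the nearest phase level realizable by the pixel geometry (e.g. a labyrinthine channel length), trading some fidelity for a fully passive device.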

    CMB: A Comprehensive Medical Benchmark in Chinese

    Full text link
    Large Language Models (LLMs) open the possibility of a major breakthrough in medicine, and a standardized medical benchmark is a fundamental cornerstone for measuring progress. However, medical environments in different regions have local characteristics, e.g., the ubiquity and significance of traditional Chinese medicine within China. Merely translating an English-based medical evaluation can therefore introduce contextual incongruities in a local region. To address this issue, we propose a localized medical benchmark called CMB, a Comprehensive Medical Benchmark in Chinese, designed and rooted entirely within the native Chinese linguistic and cultural framework. While traditional Chinese medicine is integral to this evaluation, it does not constitute its entirety. Using this benchmark, we evaluate several prominent large-scale LLMs, including ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical domain. Notably, our benchmark is devised not as a leaderboard competition but as an instrument for self-assessment of model advancements. We hope this benchmark will facilitate the widespread adoption and enhancement of medical LLMs within China. Details are available at https://cmedbenchmark.llmzoo.com/

    AceGPT, Localizing Large Language Models in Arabic

    Full text link
    This paper explores the imperative need for, and a methodology for developing, a localized Large Language Model (LLM) tailored to Arabic, a language with unique cultural characteristics that are not adequately addressed by current mainstream models such as ChatGPT. Key concerns also arise around cultural sensitivity and local values. To this end, the paper outlines a packaged solution: further pre-training on Arabic texts, supervised fine-tuning (SFT) using native Arabic instructions and GPT-4 responses in Arabic, and reinforcement learning with AI feedback (RLAIF) using a reward model sensitive to local culture and values. The objective is to train culturally aware and value-aligned Arabic LLMs that serve the diverse application-specific needs of Arabic-speaking communities. Extensive evaluations demonstrate that the resulting LLM, 'AceGPT', is the state-of-the-art open Arabic LLM on various benchmarks, including instruction-following benchmarks (Arabic Vicuna-80 and Arabic AlpacaEval), knowledge benchmarks (Arabic MMLU and EXAMs), and a newly proposed Arabic cultural & value alignment benchmark. Notably, AceGPT outperforms ChatGPT on the popular Vicuna-80 benchmark when evaluated with GPT-4, despite the benchmark's limited scale. Code, data, and models are available at https://github.com/FreedomIntelligence/AceGPT

    Crystal structure of rhodopsin bound to arrestin by femtosecond X-ray laser

    Get PDF
    G-protein-coupled receptors (GPCRs) signal primarily through G proteins or arrestins. Arrestin binding to GPCRs blocks G protein interaction and redirects signalling to numerous G-protein-independent pathways. Here we report the crystal structure of a constitutively active form of human rhodopsin bound to a pre-activated form of the mouse visual arrestin, determined by serial femtosecond X-ray laser crystallography. Together with extensive biochemical and mutagenesis data, the structure reveals an overall architecture of the rhodopsin-arrestin assembly in which rhodopsin uses distinct structural elements, including transmembrane helix 7 and helix 8, to recruit arrestin. Correspondingly, arrestin adopts the pre-activated conformation, with a ∼20° rotation between the amino and carboxy domains, which opens up a cleft in arrestin to accommodate a short helix formed by the second intracellular loop of rhodopsin. This structure provides a basis for understanding GPCR-mediated arrestin-biased signalling and demonstrates the power of X-ray lasers for advancing the frontiers of structural biology.

    Systematic assessment of long-read RNA-seq methods for transcript identification and quantification

    Get PDF
    The Long-read RNA-Seq Genome Annotation Assessment Project (LRGASP) Consortium was formed to evaluate the effectiveness of long-read approaches for transcriptome analysis. The consortium generated over 427 million long-read sequences from cDNA and direct RNA datasets spanning human, mouse, and manatee, using different protocols and sequencing platforms. These data were used by tool developers to address challenges in transcript isoform detection and quantification, as well as de novo transcript isoform identification. The study revealed that libraries with longer, more accurate sequences produce more accurate transcripts than those with greater read depth, whereas greater read depth improves quantification accuracy. In well-annotated genomes, tools based on reference sequences performed best. When aiming to detect rare and novel transcripts, or when using reference-free approaches, incorporating additional orthogonal data and replicate samples is advised. This collaborative study offers a benchmark for current practices and provides direction for future method development in transcriptome analysis.

    Using the "Yandex.Server" Software Product to Organize Search in an Electronic Library Catalog

    Get PDF
    The huge volumes of information accumulated by libraries in recent years confront developers with the problem of organizing fast, high-quality search, which can be solved with modern web-based search tools. The author examines one such tool, the software product "Yandex.Server", which enables optimized search in an electronic library catalog. "Yandex.Server" supports search that accounts for the morphology of the Russian and English languages, as well as various logical conditions, providing effective and flexible search in the electronic library catalog.

    Joint DOA and DOD Estimation Based on Tensor Subspace with Partially Calibrated Bistatic MIMO Radar

    No full text
    A joint direction-of-departure (DOD) and direction-of-arrival (DOA) estimation algorithm based on a tensor subspace approach is proposed for partially calibrated bistatic multiple-input multiple-output (MIMO) radar. By exploiting the multidimensional structure of the received data, a third-order measurement tensor is constructed, and the tensor-based signal subspace is obtained via the higher-order singular value decomposition (HOSVD). To achieve accurate DOA estimation with a partially calibrated array, a closed-form solution is provided for the gain-phase uncertainties of the transmit and receive arrays by modeling the arrays' imperfections. Simulation results demonstrate the effectiveness of the proposed calibration algorithm.
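The tensor-subspace step can be sketched in a few lines. This is an illustrative HOSVD on synthetic data, not the paper's full estimator: the received bistatic MIMO data are stacked into a third-order tensor (transmit × receive × pulse), and the leading left singular vectors of each mode unfolding span the mode-wise signal subspaces used for DOD/DOA estimation. Array sizes, target count, and noise level are assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(0)
M, N, L, K = 6, 8, 50, 2  # tx elements, rx elements, pulses, targets (assumed)

# Synthetic rank-K data: outer products of transmit/receive/temporal factors,
# standing in for steering vectors and reflected waveforms, plus small noise.
A = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
B = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
S = rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))
T = np.einsum('mk,nk,lk->mnl', A, B, S)
T += 0.01 * (rng.standard_normal(T.shape) + 1j * rng.standard_normal(T.shape))

# HOSVD: the first K left singular vectors of each unfolding span the
# corresponding mode's signal subspace.
U_tx, _, _ = np.linalg.svd(unfold(T, 0), full_matrices=False)
U_rx, _, _ = np.linalg.svd(unfold(T, 1), full_matrices=False)
Us_tx, Us_rx = U_tx[:, :K], U_rx[:, :K]

# Sanity check: the transmit signal subspace should (nearly) contain the
# columns of A, i.e. the projection residual should be small.
proj = Us_tx @ Us_tx.conj().T
err = np.linalg.norm(A - proj @ A) / np.linalg.norm(A)
print(err < 0.05)
```

In the partially calibrated setting, the unknown gain-phase terms multiply the rows of `A` and `B` elementwise, which is why the paper's closed-form calibration operates on these mode-wise subspaces before the angles are extracted.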