
    Zhang Neural Networks for Online Solution of Time-Varying Linear Inequalities

    In this chapter, a special type of recurrent neural network termed the Zhang neural network (ZNN) is presented and studied for the online solution of time-varying linear (matrix-vector and matrix) inequalities. Specifically, focusing on the time-varying linear matrix-vector inequality (LMVI), we develop and investigate two different ZNN models based on two different Zhang functions (ZFs). As an extension, by defining another two ZFs, two further ZNN models are developed and investigated to solve the time-varying linear matrix inequality (LMI). Theoretical results and analyses are presented for these ZNN models to characterize their computational performance. Simulation results with two illustrative examples further substantiate the efficacy of the presented ZNN models for time-varying LMVI and LMI solving.
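    A minimal sketch of the ZNN idea for a time-varying LMVI A(t)x(t) <= b(t) is given below. It uses one simple Zhang function with squared slack variables, e(t) = A(t)x(t) + lambda.^2 - b(t), and the standard ZNN design formula de/dt = -gamma*e, integrated with forward Euler; the concrete A(t), b(t), gamma and this particular ZF are illustrative assumptions, not the chapter's exact models.

```python
# ZNN-style solver sketch for a time-varying linear matrix-vector inequality
# A(t) x(t) <= b(t), using squared slack variables (an assumed Zhang function,
# not necessarily the one used in the chapter).
import numpy as np

gamma, dt, T = 10.0, 1e-3, 10.0

def A(t):
    return np.array([[np.sin(t) + 2.0, np.cos(t)],
                     [np.cos(t), np.sin(t) + 2.0]])

def b(t):
    return np.array([np.sin(t) + 1.0, np.cos(t) + 1.0])

def A_dot(t, h=1e-6):           # numerical time derivative of A(t)
    return (A(t + h) - A(t - h)) / (2 * h)

def b_dot(t, h=1e-6):           # numerical time derivative of b(t)
    return (b(t + h) - b(t - h)) / (2 * h)

x = np.zeros(2)                  # state to be steered into the feasible set
lam = np.ones(2)                 # slack variables (lam**2 >= 0)

for k in range(int(T / dt)):
    t = k * dt
    e = A(t) @ x + lam ** 2 - b(t)          # Zhang function e(t)
    # Differentiating e(t) and imposing de/dt = -gamma*e gives
    #   [A(t), 2*diag(lam)] [x_dot; lam_dot] = -gamma*e - A_dot(t) x + b_dot(t)
    J = np.hstack([A(t), 2.0 * np.diag(lam)])
    rhs = -gamma * e - A_dot(t) @ x + b_dot(t)
    v = np.linalg.pinv(J) @ rhs              # minimum-norm velocity
    x += dt * v[:2]
    lam += dt * v[2:]

# At the final time the inequality residual max(A x - b, 0) should be near zero.
print("residual:", np.maximum(A(T) @ x - b(T), 0.0))
```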

    Expression of fatty acid and lipid biosynthetic genes in developing endosperm of Jatropha curcas

    BACKGROUND: Temporal and spatial expression of fatty acid and lipid biosynthetic genes is associated with the accumulation of storage lipids in the seeds of oil plants. In jatropha (Jatropha curcas L.), a potential biofuel plant, the storage lipids are mainly synthesized and accumulated in the endosperm of seeds. Although the fatty acid and lipid biosynthetic genes in jatropha have been identified, the expression of these genes at different developing stages of the endosperm has not been systematically investigated. RESULTS: Transmission electron microscopy revealed that oil body formation in the developing endosperm of jatropha seeds initially appeared at 28 days after fertilization (DAF), proceeded actively at 42 DAF and reached the maximum number and size at 56 DAF. Sixty-eight genes that encode enzymes, proteins or their subunits involved in fatty acid and lipid biosynthesis were identified from a normalized cDNA library of jatropha developing endosperm. Quantitative reverse-transcription polymerase chain reaction analysis demonstrated that the 68 genes could be grouped into five categories based on their relative expression patterns during endosperm development, as sketched after this abstract. Category I has 47 genes, which displayed a bell-shaped expression pattern with peak expression at 28 or 42 DAF but low expression at 14 and 56 DAF. Category II contains 8 genes whose expression increased steadily from 14 to 56 DAF. Category III comprises 2 genes, both constitutively expressed throughout endosperm development. Category IV has 9 genes, which showed high expression at 14 and 28 DAF but decreased expression from 42 to 56 DAF. Category V consists of 2 genes, both showing medium expression at 14 DAF, the lowest expression at 28 or 42 DAF, and the highest expression at 56 DAF. In addition, genes encoding enzymes or proteins with similar functions were differentially expressed during endosperm development. CONCLUSION: The formation of oil bodies in jatropha endosperm is developmentally regulated. The expression of the majority of fatty acid and lipid biosynthetic genes is highly consistent with the development of oil bodies and endosperm in jatropha seeds, while genes encoding enzymes with similar functions may be differentially expressed during endosperm development. These results not only provide initial information on the spatial and temporal expression of fatty acid and lipid biosynthetic genes in jatropha developing endosperm, but are also valuable for identifying the rate-limiting genes for storage lipid biosynthesis and accumulation during seed development.
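    As a toy illustration of the five-category grouping described above, the following sketch classifies a relative-expression profile measured at 14, 28, 42 and 56 DAF. The decision rules and the tolerance threshold are hypothetical assumptions for illustration, not the criteria used in the study.

```python
# Toy classifier for a qRT-PCR expression profile (14, 28, 42, 56 DAF) into
# the five categories described in the abstract; rules/thresholds are assumed.
def classify_profile(e14, e28, e42, e56, tol=0.2):
    values = (e14, e28, e42, e56)
    peak = max(e28, e42)
    if peak > e14 and peak > e56:                       # bell-shaped, peak at 28/42 DAF
        return "I"
    if e14 < e28 < e42 < e56:                           # steadily increasing
        return "II"
    if max(values) - min(values) < tol:                 # roughly constitutive
        return "III"
    if min(e14, e28) > max(e42, e56):                   # high early, then decreasing
        return "IV"
    if min(e28, e42) < e14 < e56:                       # mid-development dip, max at 56 DAF
        return "V"
    return "unclassified"

# Example: a bell-shaped profile peaking at 42 DAF falls into category I.
print(classify_profile(0.2, 0.9, 1.0, 0.3))
```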

    Identification and Analysis of Intermediate Size Noncoding RNAs in the Human Fetal Brain

    The involvement of noncoding RNAs (ncRNAs) in the development of the human brain remains largely unknown. Applying a cloning strategy for the detection of intermediate-size (50–500 nt) ncRNAs (is-ncRNAs), we identified 82 novel transcripts in human fetal brain tissue. Most of the novel is-ncRNAs are not well conserved in vertebrates, and several transcripts were found only in primates. Northern blot and microarray analysis indicated considerable variation in expression across human fetal brain developmental stages and fetal tissues for both novel and known is-ncRNAs. Expression of several of the novel is-ncRNAs was conspicuously absent in one or two brain cancer cell lines, and transient overexpression of some transcripts in cancer cells significantly inhibited cell proliferation. Overall, our results suggest that is-ncRNAs play important roles in the development and tumorigenesis of the human brain.

    Iterative Layerwise Training for Quantum Approximate Optimization Algorithm

    The capability of the quantum approximate optimization algorithm (QAOA) to solve combinatorial optimization problems has been intensively studied in recent years due to its applicability in the quantum-classical hybrid regime. Despite difficulties that are innate to variational quantum algorithms (VQAs), such as barren plateaus and local minima, QAOA remains one of the applications well suited to current noisy intermediate-scale quantum (NISQ) devices. Recent works have shown that the performance of QAOA largely depends on the initial parameters, which motivates parameter initialization strategies to obtain good starting points for the optimization of QAOA. Optimization strategies, on the other hand, focus on the optimization part of QAOA rather than on parameter initialization. Rather than offering absolute advantages, these strategies usually impose trade-offs on the performance of the optimization. One such example is the layerwise optimization strategy, in which the QAOA parameters are optimized layer by layer instead of all at once. The layerwise strategy costs less in total than full optimization, in exchange for a lower approximation ratio. In this work, we propose the iterative layerwise optimization strategy and explore the possibility of reducing the optimization cost of solving problems with QAOA. Using numerical simulations, we find that by combining iterative layerwise optimization with proper initialization strategies, the optimization cost can be significantly reduced in exchange for a minor reduction in the approximation ratio. We also show that in some cases, the approximation ratio given by the iterative layerwise strategy is even higher than that given by full optimization.
    Comment: 9 pages, 3 figures
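    The sketch below illustrates plain layerwise optimization of QAOA parameters on a toy MaxCut instance (a 4-node ring) using a dense statevector simulation in NumPy: at each depth the previously optimized layers are frozen and only the newly appended (gamma, beta) pair is optimized. The graph, the COBYLA optimizer and the fixed initial guesses are illustrative assumptions; the paper's iterative layerwise variant and its initialization strategies are not reproduced here.

```python
# Layerwise QAOA for MaxCut on a 4-node ring, simulated exactly with NumPy.
import numpy as np
from scipy.optimize import minimize

n = 4                                        # qubits / graph nodes
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # ring graph
dim = 2 ** n

# Cut value of every computational-basis bitstring.
z = np.arange(dim)
bits = (z[:, None] >> np.arange(n)) & 1
cut = sum((bits[:, i] ^ bits[:, j]) for i, j in edges).astype(float)

def qaoa_state(gammas, betas):
    """Statevector after p alternating cost/mixer layers."""
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)
    for g, b in zip(gammas, betas):
        psi = psi * np.exp(-1j * g * cut)            # e^{-i gamma C} (diagonal)
        psi = psi.reshape((2,) * n)
        for q in range(n):                           # e^{-i beta X_q} on each qubit
            psi = np.cos(b) * psi - 1j * np.sin(b) * np.flip(psi, axis=q)
        psi = psi.reshape(dim)
    return psi

def expected_cut(gammas, betas):
    psi = qaoa_state(gammas, betas)
    return float(np.real(np.sum(np.abs(psi) ** 2 * cut)))

max_cut = cut.max()
gammas, betas = [], []
for p in range(1, 5):
    # Layerwise step: previous layers stay frozen, only the new pair is tuned.
    def cost(x):
        return -expected_cut(gammas + [x[0]], betas + [x[1]])
    res = minimize(cost, x0=[0.1, 0.1], method="COBYLA")
    gammas.append(float(res.x[0]))
    betas.append(float(res.x[1]))
    print(f"depth {p}: approximation ratio ~ {-res.fun / max_cut:.3f}")
```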

    Learning To Teach Large Language Models Logical Reasoning

    Large language models (LLMs) have gained enormous attention from both academia and industry due to their exceptional ability in language generation and their extremely powerful generalization. However, current LLMs still output unreliable content in practical reasoning tasks due to inherent issues such as hallucination. To better understand this problem, in this paper we conduct an in-depth investigation to systematically explore the capability of LLMs in logical reasoning. More specifically, we first investigate the deficiencies of LLMs in logical reasoning on different tasks, including event relation extraction and deductive reasoning. Our study demonstrates that LLMs are not good reasoners on tasks that require rigorous reasoning and produce counterfactual answers, which we must iteratively refine. We therefore comprehensively explore different strategies to endow LLMs with logical reasoning ability and thus enable them to generate more logically consistent answers across different scenarios. Based on our approach, we also contribute a synthesized dataset (LLM-LR) involving multi-hop reasoning for evaluation and pre-training. Extensive quantitative and qualitative analyses on different tasks also validate the effectiveness and necessity of teaching LLMs with logic and provide insights for solving practical tasks with LLMs in future work.
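    A minimal sketch of the kind of iterative refinement loop mentioned above: query the model, check the answer with a logical-consistency test, and feed detected violations back as critique. Here `call_llm` is a hypothetical stand-in for any chat-completion API and `find_contradictions` is a placeholder consistency checker; neither is the paper's actual method.

```python
# Generic refine-until-consistent loop; the LLM backend and the consistency
# checker are injected, hypothetical callables.
from typing import Callable, List

def iterative_refine(question: str,
                     call_llm: Callable[[str], str],
                     find_contradictions: Callable[[str], List[str]],
                     max_rounds: int = 3) -> str:
    answer = call_llm(question)
    for _ in range(max_rounds):
        issues = find_contradictions(answer)
        if not issues:                    # answer is logically consistent
            break
        # Feed the detected violations back to the model as an explicit critique.
        prompt = (f"{question}\n\nYour previous answer:\n{answer}\n\n"
                  "It contains these logical inconsistencies:\n- "
                  + "\n- ".join(issues)
                  + "\nPlease revise the answer so it is logically consistent.")
        answer = call_llm(prompt)
    return answer
```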