166 research outputs found

    Extension of Modified Polak-Ribière-Polyak Conjugate Gradient Method to Linear Equality Constraints Minimization Problems

    Combining the Rosen gradient projection method with the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method, we propose a two-term PRP conjugate gradient projection method for solving optimization problems with linear equality constraints. The proposed method possesses some attractive properties: (1) the search direction it generates is a feasible descent direction, so the iterates it produces remain feasible; (2) the sequence of function values is decreasing. Under some mild conditions, we show that the method is globally convergent with an Armijo-type line search. Preliminary numerical results show that the proposed method is promising.
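    The abstract names the algorithmic ingredients (a projection to keep iterates feasible, a two-term PRP direction, and an Armijo-type backtracking line search). The following minimal Python/NumPy sketch, assuming a problem of the form min f(x) subject to Ax = b with full-row-rank A, shows how these pieces typically fit together; the function name projected_prp_cg and the descent safeguard are illustrative additions, not the authors' exact algorithm.

        import numpy as np

        def projected_prp_cg(f, grad, A, b, x0, sigma=1e-4, rho=0.5, tol=1e-8, max_iter=500):
            # Null-space projector of A: A @ (P @ y) = 0 for any y, so steps
            # along projected directions preserve feasibility A x = b.
            P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)
            # Project the starting point onto the constraint set.
            x = x0 - A.T @ np.linalg.solve(A @ A.T, A @ x0 - b)
            g = grad(x)
            d = -P @ g  # start from projected steepest descent
            for _ in range(max_iter):
                if np.linalg.norm(P @ g) < tol:
                    break  # projected gradient ~ 0: first-order stationary point
                if g @ d >= 0:
                    d = -P @ g  # safeguard: fall back to a guaranteed descent direction
                # Armijo-type backtracking line search.
                alpha = 1.0
                while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
                    alpha *= rho
                x = x + alpha * d  # A @ d = 0 keeps the iterate feasible
                g_new = grad(x)
                # Polak-Ribiere-Polyak parameter.
                beta = g_new @ (g_new - g) / max(g @ g, 1e-16)
                d = P @ (-g_new + beta * d)  # projected two-term PRP direction
                g = g_new
            return x

    For example, with f(x) = ½‖x − c‖² the iterates converge to the Euclidean projection of c onto {x : Ax = b}, and every iterate satisfies Ax = b exactly, which is the feasibility property the abstract highlights.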

    Metal-coated carbon nanotube tips for Magnetic Force Microscopy

    We fabricated cantilevers for magnetic force microscopy with carbon nanotube tips coated with magnetic material. Images of a custom hard drive demonstrated 20 nm lateral resolution, with prospects for further improvement. (Accepted for publication in Applied Physics Letters.)

    Nogo-66 Promotes the Differentiation of Neural Progenitors into Astroglial Lineage Cells through mTOR-STAT3 Pathway

    Background: Neural stem/progenitor cells (NPCs) can differentiate into neurons, astrocytes and oligodendrocytes. NPCs are considered valuable for cell therapy of injuries in the central nervous system (CNS). However, when NPCs are transplanted into the adult mammalian spinal cord, they mostly differentiate into the glial lineage; the same has been observed for endogenous NPCs during spinal cord injury. Little is known, however, about the mechanism behind this fate decision. Methodology/Principal Findings: In the present study, we found that myelin protein and Nogo-66 promoted the differentiation of NPCs into the glial lineage. The Nogo-66 receptor (NgR) and the mTOR-STAT3 pathway were involved in this process. Releasing NgR from the cell membrane or blocking mTOR-STAT3 rescued the glial differentiation enhanced by Nogo-66. Conclusions/Significance: These results reveal a novel function of Nogo-66 in the fate decision of NPCs. This discovery may inform efforts to direct NPC differentiation in CNS injury therapy.

    Improvement of Sciatic Nerve Regeneration Using Laminin-Binding Human NGF-β

    Sciatic nerve injuries often cause partial or total loss of motor, sensory and autonomic functions due to axon discontinuity, degeneration, and eventual death, which finally result in substantial functional loss and decreased quality of life. Nerve growth factor (NGF) plays a critical role in peripheral nerve regeneration, but the lack of an efficient NGF delivery approach limits its clinical application. We report here that, fused with the N-terminal domain of agrin (NtA), NGF-β binds laminin specifically. Since laminin is the major component of the nerve extracellular matrix, laminin-binding NGF (LBD-NGF) can target nerve cells and improve the repair of peripheral nerve injuries. Using the rat sciatic nerve crush injury model, we tested nerve repair and functional restoration with LBD-NGF and found that it was retained and concentrated at the nerve injury site, promoting nerve repair and enhancing functional restoration following nerve damage.

    Brainformers: Trading Simplicity for Efficiency

    Transformers are central to recent successes in natural language processing and computer vision. Transformers have a mostly uniform backbone in which layers alternate between feed-forward and self-attention to build a deep network. Here we investigate this design choice and find that more complex blocks with different permutations of layer primitives can be more efficient. Using this insight, we develop a complex block, named Brainformer, that consists of a diverse set of layers such as sparsely gated feed-forward layers, dense feed-forward layers, attention layers, and various forms of layer normalization and activation functions. Brainformer consistently outperforms state-of-the-art dense and sparse Transformers in terms of both quality and efficiency. A Brainformer model with 8 billion activated parameters per token demonstrates 2x faster training convergence and 5x faster step time compared to its GLaM counterpart. In downstream task evaluation, Brainformer also demonstrates a 3% higher SuperGLUE score with fine-tuning compared to GLaM with a similar number of activated parameters. Finally, Brainformer largely outperforms a Primer dense model derived with NAS, with similar computation per token, on few-shot evaluations.
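    The abstract lists the block's ingredients (sparsely gated and dense feed-forward sublayers, attention, and normalization) but not its exact layer schedule. The PyTorch sketch below illustrates the general idea of such a non-uniform block; the class names TopKMoE and BrainformerLikeBlock, the sublayer ordering, and all dimensions are made up for this example and are not the architecture reported in the paper.

        import torch
        import torch.nn as nn

        class TopKMoE(nn.Module):
            """Toy sparsely gated feed-forward sublayer (dense top-k routing for clarity)."""
            def __init__(self, dim, num_experts=4, k=1, hidden=256):
                super().__init__()
                self.gate = nn.Linear(dim, num_experts)
                self.experts = nn.ModuleList([
                    nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
                    for _ in range(num_experts)
                ])
                self.k = k

            def forward(self, x):
                scores = self.gate(x).softmax(dim=-1)      # (batch, seq, num_experts)
                topv, topi = scores.topk(self.k, dim=-1)   # keep only k experts per token
                out = torch.zeros_like(x)
                for e, expert in enumerate(self.experts):
                    # Gate weight of expert e for each token (zero where not selected).
                    w = torch.where(topi == e, topv, torch.zeros_like(topv)).sum(-1, keepdim=True)
                    out = out + w * expert(x)              # real systems route tokens sparsely
                return out

        class BrainformerLikeBlock(nn.Module):
            """Non-uniform block mixing sparse FF, attention, and dense FF sublayers."""
            def __init__(self, dim, heads=8):
                super().__init__()
                self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(4)])
                self.moe1 = TopKMoE(dim)
                self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
                self.dense = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
                self.moe2 = TopKMoE(dim)

            def forward(self, x):
                h = self.norms[0](x); x = x + self.moe1(h)           # sparsely gated feed-forward
                h = self.norms[1](x); x = x + self.attn(h, h, h)[0]  # self-attention
                h = self.norms[2](x); x = x + self.dense(h)          # dense feed-forward
                h = self.norms[3](x); x = x + self.moe2(h)           # another sparse feed-forward
                return x

    The point of the sketch is only that a block need not alternate attention and feed-forward uniformly: any permutation of residual sublayers is a valid block, which is the design space the abstract explores.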