
    Synthetic Aβ peptides acquire prion-like properties in the brain

    In transmission studies with Alzheimer's disease (AD) animal models, the formation of Aβ plaques is proposed to be initiated by seeding of the inoculated amyloid β (Aβ) peptides in the brain. Like the misfolded scrapie prion protein (PrP(Sc)) in prion diseases, Aβ in AD shows a certain degree of resistance to protease digestion, although the biochemical basis of this protease resistance remains poorly understood. Using in vitro assays, histoblotting, and electron microscopy, we characterize the biochemical and morphological features of synthetic Aβ peptides and of Aβ isolated from AD brain tissues. Consistent with previous observations, monomeric and oligomeric Aβ species extracted from AD brains are insoluble in detergent buffers and resistant to digestion with proteinase K (PK). Histoblotting of AD brain tissue sections exhibits increased Aβ immunoreactivity after digestion with PK. In contrast, synthetic Aβ40 and Aβ42 are soluble in detergent buffers and fully digested by PK. Electron microscopy of Aβ40 and Aβ42 synthetic peptides shows that both species of Aβ form mature fibrils; those generated from Aβ40 are longer but less numerous than those made of Aβ42. When spiked into human brain homogenates, both Aβ40 and Aβ42 acquire insolubility in detergent and resistance to PK. Our study favors the hypothesis that the human brain may contain cofactor(s) that confer PrP(Sc)-like physicochemical properties on synthetic Aβ peptides.

    GLP-1 receptor signalling promotes β-cell glucose metabolism via mTOR-dependent HIF-1α activation

    Glucagon-like peptide-1 (GLP-1) promotes insulin secretion from pancreatic β-cells in a glucose-dependent manner. Several pathways mediate this action by rapid, kinase phosphorylation-dependent, but gene expression-independent mechanisms. Since GLP-1-induced insulin secretion requires glucose metabolism, we aimed to address the hypothesis that GLP-1 receptor (GLP-1R) signalling can modulate glucose uptake and utilization in β-cells. We assessed various metabolic parameters after short and long exposure of clonal BRIN-BD11 β-cells and rodent islets to the GLP-1R agonist Exendin-4 (50 nM). Here we report for the first time that prolonged stimulation of the GLP-1R for 18 hours promotes metabolic reprogramming of β-cells. This is evidenced by up-regulation of glycolytic enzyme expression and by increased rates of glucose uptake and consumption, as well as by augmented ATP content, insulin secretion, and glycolytic flux after removal of Exendin-4. In our model, depletion of Hypoxia-Inducible Factor 1 alpha (HIF-1α) impaired the effects of Exendin-4 on glucose metabolism, while pharmacological inhibition of Phosphoinositide 3-kinase (PI3K) or mTOR completely abolished these effects. Considering the central role of glucose catabolism in stimulus-secretion coupling in β-cells, our findings suggest that chronic GLP-1 actions on insulin secretion include elevated β-cell glucose metabolism. Moreover, our data reveal novel aspects of GLP-1-stimulated insulin secretion involving de novo gene expression.

    Nonmonotone Barzilai-Borwein Gradient Algorithm for $\ell_1$-Regularized Nonsmooth Minimization in Compressive Sensing

    This paper is devoted to minimizing the sum of a smooth function and a nonsmooth $\ell_1$-regularized term. This problem includes as a special case the $\ell_1$-regularized convex minimization problem arising in signal processing, compressive sensing, machine learning, data mining, etc. However, the non-differentiability of the $\ell_1$-norm makes the problem more challenging, especially for the large instances encountered in many practical applications. This paper proposes, analyzes, and tests a Barzilai-Borwein gradient algorithm. At each iteration, the generated search direction enjoys the descent property and can be easily derived by minimizing a local approximate quadratic model while simultaneously exploiting the favorable structure of the $\ell_1$-norm. Moreover, a nonmonotone line search technique is incorporated to find a suitable stepsize along this direction. The algorithm is easy to implement, requiring only the objective function value and the gradient of the smooth term at each iteration. Under some conditions, the proposed algorithm is shown to be globally convergent. Limited experiments using some nonconvex unconstrained problems from the CUTEr library with additive $\ell_1$-regularization illustrate that the proposed algorithm performs quite well. Extensive experiments on $\ell_1$-regularized least squares problems in compressive sensing verify that our algorithm compares favorably with several state-of-the-art algorithms designed in recent years. Comment: 20 pages
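    To make the iteration concrete, below is a minimal Python sketch of a proximal-gradient variant of this idea for the $\ell_1$-regularized least squares case $\min_x \frac{1}{2}\|Ax-b\|^2+\lambda\|x\|_1$, using the first Barzilai-Borwein step size. The function names are illustrative, and this is not the paper's exact method, which additionally safeguards the step with a nonmonotone line search.

        import numpy as np

        def soft_threshold(v, t):
            # Proximal operator of t*||.||_1: shrink each entry toward zero by t.
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def bb_prox_grad(A, b, lam, iters=200):
            # Sketch: min_x 0.5*||Ax - b||^2 + lam*||x||_1 via proximal gradient
            # steps with a Barzilai-Borwein step size (no line search here).
            x = np.zeros(A.shape[1])
            g = A.T @ (A @ x - b)          # gradient of the smooth term
            alpha = 1.0                    # initial step size
            for _ in range(iters):
                x_new = soft_threshold(x - alpha * g, alpha * lam)
                g_new = A.T @ (A @ x_new - b)
                s, y = x_new - x, g_new - g
                sy = float(s @ y)
                # BB1 step alpha = (s's)/(s'y), guarded against degeneracy.
                alpha = float(s @ s) / sy if sy > 1e-12 else 1.0
                x, g = x_new, g_new
            return x

    Only the gradient of the smooth term and the cheap soft-thresholding step are needed per iteration, which is what makes this family of methods attractive for large compressive sensing problems.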

    ε-Distance Weighted Support Vector Regression

    We gratefully thank Dr Teng Zhang and Prof Zhi-Hua Zhou for providing the source code of “LDM” and for their kind technical assistance. We also thank Prof Chih-Jen Lin's team for providing the LIBSVM and LIBLINEAR packages and for their support. This work is supported by the National Natural Science Foundation of China (Grant Nos. 61472159, 61572227) and the Development Project of Jilin Province of China (Grant Nos. 20140101180JC, 20160204022GX, 20180414012GH). This work is also partially supported by the 2015 Scottish Crucible Award funded by the Royal Society of Edinburgh and the 2016 PECE bursary provided by the Scottish Informatics & Computer Science Alliance (SICSA).

    Observation of a $p\bar{p}$ mass threshold enhancement in $\psi^\prime\to\pi^+\pi^- J/\psi\,(J/\psi\to\gamma p\bar{p})$ decay

    The decay channel $\psi^\prime\to\pi^+\pi^- J/\psi\,(J/\psi\to\gamma p\bar{p})$ is studied using a sample of $1.06\times 10^{8}$ $\psi^\prime$ events collected by the BESIII experiment at BEPCII. A strong enhancement at threshold is observed in the $p\bar{p}$ invariant mass spectrum. The enhancement can be fit with an $S$-wave Breit-Wigner resonance function, with a resulting peak mass of $M=1861^{+6}_{-13}\,\mathrm{(stat)}\,^{+7}_{-26}\,\mathrm{(syst)}~\mathrm{MeV}/c^{2}$ and a narrow width, $\Gamma<38~\mathrm{MeV}/c^{2}$ at the 90% confidence level. These results are consistent with published BESII results. The mass and width values do not match those of any known meson resonance. Comment: 5 pages, 3 figures, submitted to Chinese Physics
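    For orientation, a common nonrelativistic $S$-wave Breit-Wigner shape of the kind used in such threshold fits is sketched below, with $M$ the $p\bar{p}$ invariant mass, $M_{0}$ the peak mass, and $\Gamma$ the width; the abstract does not specify the exact fit function (e.g. phase-space or acceptance factors), so this parameterization is an assumption:

        \mathrm{BW}(M) \;\propto\; \frac{1}{(M - M_{0})^{2} + \Gamma^{2}/4}

    The quoted width limit $\Gamma<38~\mathrm{MeV}/c^{2}$ then corresponds to an upper bound on the full width at half maximum of this peak.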

    Deep Randomized Neural Networks

    Randomized Neural Networks explore the behavior of neural systems where the majority of connections are fixed, either in a stochastic or a deterministic fashion. Typical examples of such systems consist of multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization. Limiting the training algorithms to operate on a reduced set of weights characterizes the class of Randomized Neural Networks with a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is a striking advantage over fully trained architectures. Moreover, despite the involved simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and in theory, making it possible to analyze intrinsic properties of neural architectures (e.g., before training of the hidden layers' connections). In recent years, the study of Randomized Neural Networks has been extended to deep architectures, opening new research directions for the design of effective yet extremely efficient deep learning models, in vectorial as well as in more complex data domains. This chapter surveys the major aspects of the design and analysis of Randomized Neural Networks, together with key results on their approximation capabilities. In particular, we first introduce the fundamentals of randomized neural models in the context of feed-forward networks (i.e., Random Vector Functional Link and equivalent models) and convolutional filters, before moving to recurrent systems (i.e., Reservoir Computing networks). For both, we focus on recent results on deep randomized systems and, for recurrent models, their application to structured domains.
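    To make the feed-forward case concrete, here is a minimal Python sketch of a Random Vector Functional Link model, assuming a tanh hidden layer, a direct input-to-output link, and a ridge-regression readout; the function names and hyperparameters are illustrative, not taken from the chapter.

        import numpy as np

        def rvfl_fit(X, y, hidden=500, ridge=1e-3, seed=0):
            # Random Vector Functional Link: input-to-hidden weights are drawn
            # at random and left untrained; only the linear readout is learned.
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], hidden))    # fixed random weights
            b = rng.normal(size=hidden)                  # fixed random biases
            D = np.hstack([np.tanh(X @ W + b), X])       # hidden features + direct link
            # Closed-form ridge-regression readout: one linear solve, no backprop.
            beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ y)
            return W, b, beta

        def rvfl_predict(X, W, b, beta):
            return np.hstack([np.tanh(X @ W + b), X]) @ beta

    Since W and b stay at their random initialization, training reduces to a single linear solve for beta, which is the source of the extreme efficiency discussed above.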
