
    Structural Basis for Specific Binding of Human MPP8 Chromodomain to Histone H3 Methylated at Lysine 9

    MPP8 binding to methylated H3K9 is suggested to recruit the H3K9 methyltransferases GLP and ESET, as well as DNA methyltransferase 3A, to the promoter of the E-cadherin gene, mediating E-cadherin gene silencing and promoting tumor cell motility and invasion. MPP8 contains a chromodomain in its N-terminus, which it uses to bind methylated H3K9, much like HP1, another chromodomain-containing protein that binds methylated H3K9. The structure also reveals that the human MPP8 chromodomain forms a homodimer, mediated by an unexpected domain-swapping interaction through two β strands from the two protomer subunits. Our findings reveal the molecular mechanism of the selective binding of the human MPP8 chromodomain to methylated histone H3K9. The observation of the human MPP8 chromodomain in both solution and the crystal lattice may provide clues for further study of MPP8-mediated gene regulation.

    Training a Probabilistic Graphical Model with Resistive Switching Electronic Synapses

    Current large-scale implementations of deep learning and data mining require thousands of processors and massive amounts of off-chip memory, and consume gigajoules of energy. New memory technologies, such as nanoscale two-terminal resistive switching memory devices, offer a compact, scalable, and low-power alternative that permits on-chip colocated processing and memory in a fine-grained distributed parallel architecture. Here, we report the first use of resistive memory devices for implementing and training a restricted Boltzmann machine (RBM), a generative probabilistic graphical model that is a key component of unsupervised learning in deep networks. We experimentally demonstrate a 45-synapse RBM realized with 90 resistive phase change memory (PCM) elements, trained with a bioinspired variant of the contrastive divergence algorithm implementing Hebbian and anti-Hebbian weight updates. The resistive PCM devices show a twofold to tenfold reduction in error rate on a missing-pixel pattern completion task trained over 30 epochs, compared with the untrained case. Measured programming energy consumption is 6.1 nJ per epoch with the PCM devices, a factor of 150 lower than in conventional processor-memory systems. We analyze and discuss the dependence of learning performance on cycle-to-cycle variations and the number of gradual levels in the PCM analog memory devices.
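    The training procedure described in the abstract, contrastive divergence with a Hebbian (data-driven) and an anti-Hebbian (model-driven) weight update, can be sketched in plain software. The following is a minimal generic NumPy illustration of CD-1 for a binary RBM, not the paper's PCM hardware implementation; the class name, layer sizes, and learning rate are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        """Minimal binary RBM trained with one-step contrastive divergence (CD-1).

        Software-only sketch: PCM conductance dynamics, level granularity, and
        cycle-to-cycle variation from the paper are not modeled here.
        """

        def __init__(self, n_visible, n_hidden):
            self.W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
            self.b = np.zeros(n_visible)   # visible biases
            self.c = np.zeros(n_hidden)    # hidden biases

        def sample_h(self, v):
            p = sigmoid(v @ self.W + self.c)
            return p, (rng.random(p.shape) < p).astype(float)

        def sample_v(self, h):
            p = sigmoid(h @ self.W.T + self.b)
            return p, (rng.random(p.shape) < p).astype(float)

        def cd1_update(self, v0, lr=0.1):
            # Positive phase: Hebbian correlation between data and hidden units.
            ph0, h0 = self.sample_h(v0)
            # Negative phase: one Gibbs step gives the model's reconstruction;
            # its correlation enters with a minus sign (anti-Hebbian).
            pv1, _ = self.sample_v(h0)
            ph1, _ = self.sample_h(pv1)
            n = len(v0)
            self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
            self.b += lr * (v0 - pv1).mean(axis=0)
            self.c += lr * (ph0 - ph1).mean(axis=0)

        def reconstruct(self, v):
            # Deterministic up-down pass, e.g. for pattern completion.
            ph, _ = self.sample_h(v)
            return sigmoid(ph @ self.W.T + self.b)
    ```

    In a pattern-completion experiment like the one reported, the RBM would be trained on complete patterns and then queried with some pixels clamped and the rest inferred from the reconstruction; on PCM hardware, each weight increment above would instead be applied as a partial-SET or RESET pulse to a pair of devices.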

    Very large-scale neuromorphic systems for biological signal processing

    This chapter is a white paper describing a platform for scaling neuromorphic systems up to 'human brain size' complexity. Such a system will be necessary for massive search and analysis tasks involving biological data, and would consist of a similar number of neurons and synapses as an adult human brain. One of the largest bottlenecks is the huge synaptic complexity that results from connecting billions of neurons. The purpose of this chapter is to describe a feasible architecture that could handle the enormous communication bandwidth such a large-scale neuromorphic system requires. The proposed approach is grounded in the assumption that the utility of a neuromorphic system will only become apparent when it approaches the human brain in energy consumption and size. Inspired by recent advances in SoC architecture, a novel scalable intercluster communication network is proposed here. A particularly useful instantiation of this network handles global synaptic communication, interconnecting the local clusters of synapse arrays. The core of the proposed solution is a novel switching architecture in the CMOS back end of line (BEOL) that is expected to be extremely power efficient. In contrast to a fixed, predefined bus shared over all connected local clusters, the proposed solution allows a multitude of dedicated point-to-point connections that can be switched dynamically.