205 research outputs found

    Solvable Neural Network Model for Input-Output Associations: Optimal Recall at the Onset of Chaos

    In neural information processing, an input modulates neural dynamics to generate a desired output. To unravel the dynamics and the underlying neural connectivity that enable such input-output associations, we propose an exactly solvable neural-network model whose connectivity matrix is built explicitly from the inputs and their required outputs. An analytic form of the response to an input is derived, and three distinct types of response, including chaotic dynamics arising through a bifurcation as the input strength is varied, are obtained depending on the neural sensitivity and the number of inputs. Optimal recall performance is achieved at the onset of chaos, and the relevance of these results to cognitive dynamics is discussed.
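    A minimal numerical sketch of the kind of model this abstract describes is given below: the connectivity matrix is built as an outer-product (Hebbian-like) combination of input and required-output patterns, and simple rate dynamics are iterated while the input strength is varied. The network size N, the tanh rate update, the gain beta (standing in for the neural sensitivity), and the input strength gamma are all illustrative assumptions and not the paper's exact construction.

```python
import numpy as np

def build_connectivity(inputs, outputs):
    # Connectivity built explicitly from (input, required output) pattern pairs;
    # an outer-product construction is assumed here purely for illustration.
    n = inputs.shape[1]
    return sum(np.outer(out, inp) for inp, out in zip(inputs, outputs)) / n

def simulate(J, xi, beta=2.0, gamma=0.5, steps=2000, seed=0):
    # Rate dynamics x_{t+1} = tanh(beta * (J x_t + gamma * xi));
    # beta stands in for the neural sensitivity, gamma for the input strength.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, J.shape[0])
    traj = []
    for _ in range(steps):
        x = np.tanh(beta * (J @ x + gamma * xi))
        traj.append(x.copy())
    return np.array(traj)

# Recall quality (overlap of the late-time state with the required output)
# as the input strength gamma is varied.
N, P = 200, 5
rng = np.random.default_rng(1)
inputs = rng.choice([-1.0, 1.0], size=(P, N))
outputs = rng.choice([-1.0, 1.0], size=(P, N))
J = build_connectivity(inputs, outputs)
for gamma in (0.1, 0.5, 1.0, 2.0):
    traj = simulate(J, inputs[0], gamma=gamma)
    overlap = traj[-500:] @ outputs[0] / N
    print(f"gamma={gamma:.1f}  mean overlap with output 0: {overlap.mean():+.3f}")
```

    Sweeping gamma (and beta) in a sketch like this is the natural way to look for the qualitative regimes the abstract mentions, from fixed-point recall to irregular, chaos-like responses.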

    The Effect of the Shape and Size of the Frame on the Impression Given by a Painting


    Molecular Mechanisms of the Onset and Progression of Non-Alcoholic Steatohepatitis (NASH)

    University of Tsukuba (筑波大学)

    Perspective Chapter: RNA Therapeutics for Cancers

    RNA therapeutics represent a promising class of drugs, and several successful agents have recently been translated into clinical use for a range of disorders. A growing body of evidence has underlined the involvement of aberrant expression of cancer-associated genes, or of aberrant RNA splicing, in the pathogenesis of a variety of cancers. In addition, there have been more than 200 clinical trials of oligonucleotide therapeutics targeting a variety of molecules in cancers. Although no RNA therapeutic against cancer has been approved so far, promising outcomes have been obtained in phase 1/2 clinical trials. We review recent advances in the study of cancer pathogenesis relevant to RNA therapeutics and in the development of RNA therapeutics for cancers.

    Search for Selective Autophagy Substrates Using Knockout Mice

    Degree type: doctorate by coursework. Dissertation committee: (chief examiner) Professor 栗原 裕基 (The University of Tokyo); Professor 岩坪 威 (The University of Tokyo); Professor 村上 誠 (The University of Tokyo); Associate Professor 金井 克光 (The University of Tokyo); Professor 尾藤 晴彦 (The University of Tokyo). University of Tokyo (東京大学)

    One-Dimensional Organometallic V-Anthracene Wire and Its B-N Analogue: Efficient Half-Metallic Spin Filters

    Using density functional theory, we have investigated the structural, electronic, and magnetic properties of the infinitely periodic organometallic vanadium-anthracene wire [V_2Ant]_\infty and of [V_4(BNAnt)_2]_\infty (where BNAnt is the B-N analogue of anthracene) for their possible application in spintronics. From our calculations, we find that the one-dimensional [V_2Ant]_\infty and [V_4(BNAnt)_2]_\infty wires exhibit robust ferromagnetic half-metallic and metallic behavior, respectively. The finite-sized V_6Ant_2 and V_6(BNAnt)_2 clusters are also found to act as efficient spin filters when coupled to graphene electrodes on either side.
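    The abstract does not state its figure of merit, but spin-filter behavior of this kind is commonly quantified by the spin-filter efficiency, (T_up - T_down)/(T_up + T_down), evaluated from the spin-resolved transmission (or density of states) near the Fermi level. The sketch below simply evaluates that standard ratio; the transmission values are made up for illustration and are not from the paper.

```python
import numpy as np

def spin_filter_efficiency(t_up, t_down):
    # Standard spin-filter efficiency (T_up - T_down) / (T_up + T_down),
    # evaluated pointwise in energy; values near +/-1 indicate an almost
    # perfect filter.
    t_up, t_down = np.asarray(t_up, float), np.asarray(t_down, float)
    return (t_up - t_down) / (t_up + t_down)

# Illustrative numbers only (not taken from the paper): spin-resolved
# transmission of a half-metallic wire sampled near the Fermi level.
energies = np.linspace(-0.4, 0.4, 5)                 # eV relative to E_F
t_majority = np.array([0.90, 1.00, 1.00, 0.95, 0.90])
t_minority = np.array([0.05, 0.02, 0.01, 0.03, 0.10])
for e, sfe in zip(energies, spin_filter_efficiency(t_majority, t_minority)):
    print(f"E - E_F = {e:+.2f} eV  SFE = {sfe:.2f}")
```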

    Learning Shapes Spontaneous Activity Itinerating over Memorized States

    Learning is a process that shapes neural dynamical systems so that an appropriate output pattern is generated for a given input. Such a memory is often considered to reside in one of the attractors of a neural dynamical system, selected by the initial neural state that the input specifies. Neither the neural activity observed in the absence of inputs nor the change in activity caused when an input is applied had been studied extensively in the past. Recent experiments, however, have reported the existence of structured spontaneous neural activity and its changes when an input is provided. Against this background, we propose that memory recall occurs when spontaneous neural activity switches to an appropriate output activity upon application of an input, a phenomenon known as bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, input-output relations are successively memorized when the difference between the time scales is appropriate. After learning is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns increases, the spontaneous neural activity generated after learning itinerates over the previously learned output patterns. This theoretical finding agrees remarkably well with recent experimental reports that spontaneous activity in the visual cortex, in the absence of stimuli, itinerates over the patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity is a natural outcome of the successive learning of several patterns and that it facilitates bifurcation of the network when an input is provided.
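    The abstract describes its model only verbally. The sketch below illustrates, under explicit assumptions, what learning with two synaptic time scales can look like in code: a fast weight component follows each presented input-output pair, while a slow component consolidates it on a longer time scale. The single-layer form, the simple error-correcting fast update (used here in place of the paper's reinforcement-learning rule), and the parameters lr_fast, tau_slow, and decay_fast are all illustrative assumptions, not the authors' model.

```python
import numpy as np

def train_two_timescales(io_pairs, epochs=300, lr_fast=0.2,
                         tau_slow=100.0, decay_fast=0.9, seed=0):
    # Fast weights follow each presented input-output pair via a simple
    # error-correcting update (a stand-in for the reinforcement rule);
    # slow weights consolidate the fast trace on a longer time scale,
    # so successively presented patterns are memorized rather than overwritten.
    rng = np.random.default_rng(seed)
    n_in, n_out = io_pairs[0][0].size, io_pairs[0][1].size
    w_fast = np.zeros((n_out, n_in))
    w_slow = 0.1 * rng.standard_normal((n_out, n_in))
    for _ in range(epochs):
        for x, y_target in io_pairs:
            y = np.tanh((w_slow + w_fast) @ x)
            w_fast += lr_fast * np.outer(y_target - y, x)  # fast, per-presentation learning
            w_slow += w_fast / tau_slow                     # slow consolidation
            w_fast *= decay_fast                            # fast trace decays quickly
    return w_slow

# After training, check how well each learned output is reproduced; spontaneous
# activity could likewise be probed by iterating the network with weak or zero input.
rng = np.random.default_rng(2)
pairs = [(rng.choice([-1.0, 1.0], 50), rng.choice([-1.0, 1.0], 20)) for _ in range(4)]
w = train_two_timescales(pairs)
for i, (x, y_target) in enumerate(pairs):
    y = np.tanh(w @ x)
    print(f"pattern {i}: overlap {float(y @ y_target) / y_target.size:+.3f}")
```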