
    Symmetries of Electrostatic Interaction between DNA Molecules

    We study a model for the pair interaction U of DNA molecules generated by the discrete dipole moments of base pairs and the charges of phosphate groups, and find a noncommutative group S of order eight consisting of symmetries that leave U invariant. We classify the minima using the group S and employ numerical methods for finding them. The minima may correspond to several cholesteric phases, as well as phases formed by cross-like conformations of molecules at an angle close to 90°, a "snowflake phase". The results depend on the effective charge Q of the phosphate group, which can be modified by polycations or metal ions. The snowflake phase could exist for Q above a threshold Q_C. Below Q_C there could be several cholesteric phases. Close to Q_C the snowflake phase could change into a cholesteric one at constant distance between adjacent molecules. (Comment: 13 pages, 4 figures)

    A Stable Biologically Motivated Learning Mechanism for Visual Feature Extraction to Handle Facial Categorization

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply Adaptive Resonance Theory (ART) to extract informative intermediate-level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.

    On the Inability of Markov Models to Capture Criticality in Human Mobility

    We examine the non-Markovian nature of human mobility by exposing the inability of Markov models to capture criticality in human mobility. In particular, the assumed Markovian nature of mobility was used to establish a theoretical upper bound on the predictability of human mobility (expressed as a minimum error probability limit), based on temporally correlated entropy. Since its inception, this bound has been widely used and empirically validated using Markov chains. We show that recurrent neural architectures can achieve significantly higher predictability, surpassing this widely used upper bound. To explain this anomaly, we shed light on several underlying assumptions in previous research that have resulted in this bias. By evaluating mobility predictability on real-world datasets, we show that human mobility exhibits scale-invariant long-range correlations, bearing similarity to a power-law decay. This is in contrast to the initial assumption that human mobility follows an exponential decay. This assumption of exponential decay, coupled with Lempel-Ziv compression in computing Fano's inequality, has led to an inaccurate estimation of the predictability upper bound. We show that this approach inflates the entropy, consequently lowering the upper bound on human mobility predictability. We finally highlight that this approach tends to overlook long-range correlations in human mobility. This explains why recurrent neural architectures that are designed to handle long-range structural correlations surpass the previously computed upper bound on mobility predictability.
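    The predictability bound discussed in this abstract comes from Fano's inequality: given a trajectory's temporally correlated entropy S (in bits) over N distinct locations, the maximum predictability Π solves S = H(Π) + (1 − Π) log₂(N − 1), where H is the binary entropy. A minimal sketch of solving that equation numerically (function names are illustrative, not from the paper):

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H(p) in bits; defined as 0 at the endpoints."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def max_predictability(entropy: float, n_locations: int, tol: float = 1e-9) -> float:
    """Solve Fano's inequality S = H(pi) + (1 - pi) * log2(N - 1)
    for the predictability upper bound pi by bisection.
    The right-hand side decreases monotonically from log2(N) at pi = 1/N
    down to 0 at pi = 1, so bisection on [1/N, 1] finds the crossing."""
    if n_locations < 2:
        return 1.0
    lo, hi = 1.0 / n_locations, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        rhs = binary_entropy(mid) + (1 - mid) * math.log2(n_locations - 1)
        if rhs > entropy:
            lo = mid  # bound still exceeds the measured entropy: pi can grow
        else:
            hi = mid
    return (lo + hi) / 2
```

    For a maximally random trajectory (S = log₂ N) this yields the chance level 1/N, and for S = 0 it yields 1, matching the limits of the bound; the abstract's point is that the entropy estimate fed into this equation, not the equation itself, was inflated.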

    Routes to develop a [S]/([S]+[Se]) gradient in wide band-gap Cu2ZnGe(S,Se)4 thin-film solar cells

    Wide band-gap kesterite-based solar cells are very attractive for tandem devices as well as for semi-transparent photovoltaic cells. Here, Cu2ZnGe(S,Se)4 (CZGSSe) thin films have been grown by sulfurization of co-evaporated Cu2ZnGeSe4. The influence of a NaF precursor layer and of a Se capping film on CZGSSe absorbers and solar cells has been investigated. It has been found that the distribution of [S]/([S]+[Se]) through the CZGSSe absorber layer is strongly dependent on the Na content. Na promotes the diffusion of S towards the bulk of the absorber layer. NaF layers thicker than 6 nm lead to a higher S content in the bulk of the absorber layer, but to a decreased accumulation of sulphur at the surface, as detected by GIXRD, GD-OES, and Raman spectroscopy measurements. A relationship between Jsc, FF, and the supplied Na content was found; higher Na content resulted in improved solar cell efficiencies. It has also been possible to modify the [S]/([S]+[Se]) gradient throughout the CZGSSe film by omitting the Se capping layer, achieving devices with 2.7% efficiency and Eg = 2.0 eV. This work reveals two ways to control the [S]/([S]+[Se]) depth profile to produce wide band-gap CZGSSe absorber layers for efficient solar cells.

    Quantum effects in linguistic endeavors

    Classifying the information content of neural spike trains in a linguistic endeavor, an uncertainty relation emerges between the bit size of a word and its duration. This uncertainty is associated with the task of synchronizing the spike trains of different duration representing different words. The uncertainty involves peculiar quantum features, so that word comparison amounts to measurement-based quantum computation. Such quantum behavior explains the onset and decay of the memory window connecting successive pieces of a linguistic text. The behavior discussed here is applicable to other reported evidence of quantum effects in human linguistic processes, which has so far lacked a plausible framework: either no effort was made to assign an appropriate quantum constant, or speculation on microscopic processes dependent on Planck's constant resulted in unrealistic decoherence times.

    Coordinated optimization of visual cortical maps (II) Numerical studies

    It is an attractive hypothesis that the spatial structure of visual cortical architecture can be explained by the coordinated optimization of multiple visual cortical maps representing orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we defined a class of analytically tractable coordinated optimization models and solved representative examples in which a spatially complex organization of the orientation preference map is induced by inter-map interactions. We found that attractor solutions near the symmetry-breaking threshold predict a highly ordered map layout and require a substantial OD bias for OP pinwheel stabilization. Here we examine in numerical simulations whether such models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold and when transients towards attractor states are considered. We also examine whether model behavior qualitatively changes when the spatial periodicities of the two maps are detuned and when considering more than two feature dimensions. Our numerical results support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the spatially irregular architecture of the visual cortex. We discuss several alternative scenarios and additional factors that may improve the agreement between model solutions and biological observations. (Comment: 55 pages, 11 figures. arXiv admin note: substantial text overlap with arXiv:1102.335)

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of two stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? (This work was supported by NSF Grant Nos. NSF/EIA-0130708 and PHY 0414174; NIH Grant Nos. 1 R01 NS50945 and NS40110; MEC BFI2003-07276; and Fundación BBVA.)

    Psychophysics, Gestalts and Games

    Many psychophysical studies are dedicated to evaluating human gestalt detection on dot or Gabor patterns, and to modeling its dependence on the pattern and background parameters. Nevertheless, even for these constrained percepts, psychophysics has not yet reached the challenging prediction stage, where human detection would be quantitatively predicted by a (generic) model. On the other hand, Computer Vision has attempted to define automatic detection thresholds. This chapter sketches a procedure, inspired by gestaltism, to confront these two methodologies. Using a computational quantitative version of the non-accidentalness principle, we raise the possibility that the psychophysical and the (older) gestaltist setups, both applicable to dot or Gabor patterns, find a useful complement in a Turing test. In our perceptual Turing test, human performance is compared by the scientist to the detection result given by a computer. This confrontation makes it possible to revive the abandoned method of gestaltic games. We sketch the elaboration of such a game, in which the subjects of the experiment are confronted with an alignment detection algorithm and are invited to draw examples that will fool it. We show that in this way a more precise definition of the alignment gestalt and of its computational formulation seems to emerge. Detection algorithms might also be relevant to more classic psychophysical setups, where they can again play the role of a Turing test. To a visual experiment where subjects were invited to detect alignments in Gabor patterns, we associated a single function measuring the alignment detectability in the form of a number of false alarms (NFA). The first results indicate that the values of the NFA, as a function of all stimulation parameters, are highly correlated with human detection. This fact, which we intend to support with further experiments, might end up confirming that human alignment detection is the result of a single mechanism.
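    In the a contrario framework behind the NFA mentioned above, an alignment of k elements out of n, each aligned by chance with probability p, is commonly scored as the number of tests times the binomial tail, and declared meaningful when the NFA falls below a threshold. A minimal sketch under those standard definitions (the particular counts and threshold below are illustrative, not values from the chapter):

```python
import math

def binomial_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): probability that at least k of n
    independent elements are aligned by chance."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def nfa(n_tests: int, n: int, k: int, p: float) -> float:
    """Number of false alarms: the expected count, under the noise model,
    of tested configurations at least as aligned as the observed one."""
    return n_tests * binomial_tail(n, k, p)

# A configuration is epsilon-meaningful when NFA < epsilon (commonly epsilon = 1),
# e.g. 10 of 20 Gabor elements aligned when chance alignment has p = 0.1.
```

    Lower NFA means a less accidental, hence more detectable, alignment, which is why a single NFA value can be compared directly against human detection rates across stimulation parameters.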

    Coordinated optimization of visual cortical maps (I) Symmetry-based analysis

    In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of OP columns precisely follows species-invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explaining cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals, and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles affect the predicted map structure, and conversely, what can be learned about a hypothetical underlying optimization principle from observations on map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. (Comment: 90 pages, 16 figures)

    Gain control network conditions in early sensory coding

    Gain control is essential for the proper function of any sensory system. However, the precise mechanisms for achieving effective gain control in the brain are unknown. Based on our understanding of the existence and strength of connections in the insect olfactory system, we analyze the conditions that lead to controlled gain in a randomly connected network of excitatory and inhibitory neurons. We consider two scenarios for the variation of input into the system. In the first case, the intensity of the sensory input controls the input currents to a fixed proportion of neurons of the excitatory and inhibitory populations. In the second case, increasing intensity of the sensory stimulus will both recruit an increasing number of neurons that receive input and change the input current that they receive. Using a mean-field approximation for the network activity, we derive relationships between the parameters of the network that ensure that the overall level of activity of the excitatory population remains unchanged for increasing intensity of the external stimulation. We find that, first, the main parameters that regulate network gain are the probabilities of connections from the inhibitory population to the excitatory population and of the connections within the inhibitory population. Second, we show that strict gain control is not achievable in a random network in the second case, when the input recruits an increasing number of neurons. Finally, we confirm that the gain control conditions derived from the mean-field approximation are valid in simulations of firing-rate models and Hodgkin-Huxley conductance-based models.
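    The first input scenario above can be set up as a small simulation testbed: a randomly connected threshold-linear rate network in which a fixed fraction of the excitatory and inhibitory populations receives the external input, and the mean excitatory rate is read out as the stimulus intensity varies. This is only a sketch of that setup; the population sizes, connection probabilities, and weights below are illustrative placeholders, not the parameters or gain-control conditions derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative network: N_E excitatory and N_I inhibitory rate units with
# random connectivity. p_XY is the probability of a connection from
# population Y onto population X; weights are fixed per source population.
N_E, N_I = 200, 50
p_EE, p_EI, p_IE, p_II = 0.1, 0.3, 0.3, 0.3
w_E, w_I = 0.02, -0.08

W = np.zeros((N_E + N_I, N_E + N_I))
W[:N_E, :N_E] = w_E * (rng.random((N_E, N_E)) < p_EE)
W[:N_E, N_E:] = w_I * (rng.random((N_E, N_I)) < p_EI)
W[N_E:, :N_E] = w_E * (rng.random((N_I, N_E)) < p_IE)
W[N_E:, N_E:] = w_I * (rng.random((N_I, N_I)) < p_II)

def steady_state(I_ext: float, steps: int = 2000, dt: float = 0.05) -> np.ndarray:
    """Relax a threshold-linear rate model r' = -r + [W r + I]_+ to steady
    state. A fixed fraction (here half) of each population is driven by the
    external input, mimicking the first input scenario."""
    r = np.zeros(N_E + N_I)
    I = np.zeros(N_E + N_I)
    I[: N_E // 2] = I_ext
    I[N_E : N_E + N_I // 2] = I_ext
    for _ in range(steps):
        r += dt * (-r + np.maximum(W @ r + I, 0.0))
    return r

# Sweep input intensity and read out the mean excitatory activity;
# gain control would show up as this value staying roughly constant.
for I_ext in (1.0, 2.0, 4.0):
    r = steady_state(I_ext)
    print(f"input {I_ext:.1f} -> mean E rate {r[:N_E].mean():.3f}")
```

    With placeholder parameters like these, the mean excitatory rate will generally still grow with input; the paper's contribution is precisely the mean-field relationship among the connection probabilities (notably inhibitory-to-excitatory and inhibitory-to-inhibitory) that makes it invariant.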