6 research outputs found

    Dualistic geometry of the manifold of higher-order neurons

    Abstract: Recursive Fractal Genome Function in the geometric mind frame of Tensor Network Theory (TNT) leads through FractoGene to a mathematical unification of the physiological and pathological development of neural structure and function as governed by the genome. The cerebellum serves as the best platform for unification of neuroscience and genomics. The matrix of massively parallel neural nets of fractal Purkinje brain cells explains the sensorimotor, multidimensional non-Euclidean coordination performed by the cerebellum acting as a space-time metric tensor. In TNT, the recursion of covariant sensory vectors into contravariant motor executions converges into Eigenstates composing the cerebellar metric as a Moore-Penrose Pseudo-Inverse. The Principle of Recursion is generalized to genomic systems with the realization that the assembly of proteins from nucleic acids, as governed by the regulation of coding RNA (cRNA), is a contravariant multi-component functor, where in turn the quantum states of the resulting protein structures in both intergenic and intronic sequences are measured in a covariant manner by non-coding RNA (ncRNA), arising as a result of proteins binding with ncDNA modulated by transcription factors. Thus, cRNA and ncRNA vectors, by their interference, constitute a genomic metric. Recursion through massively parallel neural-network and genomic systems raises the question of whether it obeys the Weyl law of Fractal Quantum Eigenstates or, when derailed, results pathologically in aberrant methylation or chromatin modulation; the root cause of cancerous growth. The growth of fractal Purkinje neurons of the cerebellum is governed by the aperiodic discrete quantum system of sequences of DNA bases, codons and motifs. The full genome is fractal; the discrete quantum system of pyknon-like elements follows the Zipf-Mandelbrot Parabolic Fractal Distribution curve. The Fractal Approach to Recursive Iteration has been used to identify fractal defects causing a cerebellar disease, Friedreich Spinocerebellar Ataxia, in this case as runs disrupting a fractal regulatory sequence. Massive deployment starts with an open-domain collaborative definition of a standard for fractal genome dimension in the embedding spaces of the genome-epigenome-methylome, to optimally diagnose the cancerous hologenome in nucleotide, codon or motif hyperspaces. Recursion is parallelized both by open-domain algorithms and by proprietary FractoGene algorithms on high-performance computing platforms, for genome analytics on accelerated private hybrid clouds with PDA personal interfaces, becoming the mainstay of clinical genomic measures before and after cancer intervention in hospitals and serving consumers at large as Personal Genome Assistants.
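
    As a toy illustration of the tensor-geometric step the abstract invokes, the sketch below (not the authors' FractoGene code; the array sizes and values are invented) uses NumPy's pseudo-inverse to convert a covariant sensory vector, expressed in an overcomplete measurement frame, into a contravariant motor execution, the role TNT assigns to the cerebellar metric.

        # Minimal sketch, assuming an overcomplete sensory frame: the Moore-Penrose
        # pseudo-inverse maps a covariant sensory "intention" onto the least-squares
        # contravariant motor "execution", playing the role of the cerebellar metric.
        import numpy as np

        rng = np.random.default_rng(0)

        sensory_frame = rng.normal(size=(5, 3))   # 5 hypothetical sensory axes, 3 motor DOF
        covariant_intention = rng.normal(size=5)  # what the sensory axes report

        metric = np.linalg.pinv(sensory_frame)    # (3, 5) generalized inverse
        contravariant_execution = metric @ covariant_intention

        # Least-squares consistency check: the execution satisfies the normal equations.
        lhs = sensory_frame.T @ (sensory_frame @ contravariant_execution)
        rhs = sensory_frame.T @ covariant_intention
        print(contravariant_execution, np.allclose(lhs, rhs))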

    Higher-order Petri net models based on artificial neural networks

    Abstract: In this paper, the properties of higher-order neural networks are exploited in a new class of Petri nets, called higher-order Petri nets (HOPN). Using the similarities between neural networks and Petri nets, this paper demonstrates how the McCulloch-Pitts models and higher-order neural networks can be represented by Petri nets. A 5-tuple HOPN is defined, and a theorem on the relationship between the potential firability of the goal transition and the T-invariant of the HOPN is proved and discussed. The proposed HOPN can be applied to the polynomial clause subset of first-order predicate logic. A five-clause polynomial logic program example is also included to illustrate the theoretical results.
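
    To fix the terminology the abstract relies on, the sketch below implements an ordinary place/transition Petri net firing rule in Python; it is not the paper's 5-tuple HOPN construction, and the places, transitions and weights are invented for illustration.

        # Minimal place/transition Petri net: a transition is enabled when each input
        # place holds at least the required tokens; firing consumes and produces tokens.
        marking = {"p1": 1, "p2": 1, "p3": 0}          # hypothetical initial marking

        transitions = {
            "t1": ({"p1": 1, "p2": 1}, {"p3": 1}),     # (pre-conditions, post-conditions)
            "t2": ({"p3": 1}, {"p1": 1}),
        }

        def enabled(name):
            pre, _ = transitions[name]
            return all(marking[p] >= w for p, w in pre.items())

        def fire(name):
            if not enabled(name):
                raise ValueError(f"{name} is not enabled")
            pre, post = transitions[name]
            for p, w in pre.items():
                marking[p] -= w
            for p, w in post.items():
                marking[p] += w

        fire("t1")
        print(marking)                                  # {'p1': 0, 'p2': 0, 'p3': 1}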

    Support Vector Machine Implementations for Classification & Clustering

    BACKGROUND: We describe Support Vector Machine (SVM) applications to classification and clustering of channel current data. SVMs are variational-calculus-based methods that are constrained to have structural risk minimization (SRM), i.e., they provide noise-tolerant solutions for pattern recognition. The SVM approach encapsulates a significant amount of model-fitting information in the choice of its kernel. In work thus far, novel information-theoretic kernels have been successfully employed for notably better performance over standard kernels. Currently there are two approaches to implementing multiclass SVMs. One, called external multiclass, arranges several binary classifiers as a decision tree such that they perform a single-class decision-making function, with each leaf corresponding to a unique class. The second approach, internal multiclass, involves solving a single optimization problem corresponding to the entire data set (with multiple hyperplanes). RESULTS: Each SVM approach encapsulates a significant amount of model-fitting information in its choice of kernel. In work thus far, novel information-theoretic kernels were successfully employed for notably better performance over standard kernels. Two SVM approaches to multiclass discrimination are described: (1) internal multiclass (with a single optimization), and (2) external multiclass (using an optimized decision tree). We describe benefits of the internal-SVM approach, along with further refinements to the internal-multiclass SVM algorithms that offer significant improvement in training time without sacrificing accuracy. In situations where the data is not clearly separable, making for poor discrimination, signal clustering is used to provide robust and useful information; to this end, novel SVM-based clustering methods are also described. As with classification, there are internal and external SVM clustering algorithms, both of which are briefly described.
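
    The external-multiclass arrangement described above can be sketched with off-the-shelf tools; the example below builds a two-level tree of binary SVMs over three synthetic classes using scikit-learn, purely as an illustration of the decision-tree scheme (the kernel, data set and class split are not those of the paper).

        # Hedged sketch of an "external" multiclass SVM: binary classifiers arranged
        # as a decision tree, each leaf corresponding to a unique class.
        import numpy as np
        from sklearn.datasets import make_blobs
        from sklearn.svm import SVC

        X, y = make_blobs(n_samples=300, centers=3, random_state=0)

        root = SVC(kernel="rbf").fit(X, (y == 0).astype(int))              # class 0 vs {1, 2}
        mask = y != 0
        leaf = SVC(kernel="rbf").fit(X[mask], (y[mask] == 1).astype(int))  # class 1 vs 2

        def predict_tree(x):
            """Walk the two-level tree; each leaf names a unique class."""
            x = x.reshape(1, -1)
            if root.predict(x)[0] == 1:
                return 0
            return 1 if leaf.predict(x)[0] == 1 else 2

        preds = np.array([predict_tree(x) for x in X])
        print("training accuracy:", (preds == y).mean())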

    Generalized Alpha-Beta Divergences and Their Application to Robust Nonnegative Matrix Factorization

    We propose a class of multiplicative algorithms for Nonnegative Matrix Factorization (NMF) which are robust with respect to noise and outliers. To achieve this, we formulate a new family of generalized divergences, referred to as the Alpha-Beta-divergences (AB-divergences), which are parameterized by two tuning parameters, alpha and beta, and smoothly connect the fundamental Alpha-, Beta- and Gamma-divergences. By adjusting these tuning parameters, we show that a wide range of standard and new divergences can be obtained. The corresponding learning algorithms for NMF are shown to integrate and generalize many existing ones, including Lee-Seung, ISRA (Image Space Reconstruction Algorithm), EMML (Expectation Maximization Maximum Likelihood), Alpha-NMF, and Beta-NMF. Owing to the additional degrees of freedom in tuning the parameters, the proposed family of AB-multiplicative NMF algorithms is shown to improve robustness with respect to noise and outliers. The analysis illuminates the links between the AB-divergence and other divergences, especially the Gamma- and Itakura-Saito divergences.
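
    For orientation, the sketch below shows the classical Lee-Seung multiplicative updates for Euclidean NMF, one of the special cases the AB-divergence family is said to generalize; it is not the paper's robust AB-multiplicative algorithm, and the matrix sizes and iteration count are arbitrary.

        # Multiplicative NMF updates (Lee-Seung, squared Euclidean distance).
        # The updates keep W and H nonnegative by construction.
        import numpy as np

        rng = np.random.default_rng(0)
        V = np.abs(rng.normal(size=(20, 30)))     # nonnegative data matrix
        rank = 4
        W = np.abs(rng.normal(size=(20, rank)))
        H = np.abs(rng.normal(size=(rank, 30)))

        eps = 1e-12                               # guards against division by zero
        for _ in range(200):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)

        print("reconstruction error:", np.linalg.norm(V - W @ H))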

    A compositional neural architecture for language

    Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture, a multidimensional coordinate system based on neurophysiological models of sensory processing, proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and move toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.

    Robust Framework for System Architecture and Hand-offs in Wireless and Cellular Communication Systems

    The robustness of a system has been defined in various ways, and much work has been done to model it, but quantifying or measuring robustness has always been very difficult. In this research, we develop a framework for robust system architecture. We consider a system consisting of a linear estimator (a multiple-tap filter) and then attempt to model the system performance and robustness in a graphical manner, which admits an analysis using differential-geometric concepts. We compare two different perturbation models, namely the gradient of a surface with biased perturbations (sub-optimal model) and the gradient with unbiased perturbations (optimal model), and observe the values to see which of them can alternatively be used in the process of understanding or measuring robustness. In this process we have worked through different examples and conducted many simulations to determine whether there is any consistency between the two models. We propose the study of robustness measures for estimation/prediction in stationary and non-stationary environments using differential-geometric tools in conjunction with probability density analysis. Our approach shows that the gradient can be viewed as a random variable and therefore used to generate probability densities, allowing one to draw conclusions regarding robustness. As an example, one can apply the geometric methodology to the prediction of time-varying deterministic data with an imperfectly known non-stationary distribution. We also compare stationary with non-stationary distributions and prove that robustness is reduced by admitting residual non-stationarity. We then research and develop a robust iterative handoff algorithm, relating generally to methods, devices and systems for reselecting and then handing over a mobile communications device from a first cell to a second cell in a cellular wireless communications system (GPRS, W-CDMA or OFDMA). This algorithm results in a significant decrease in power consumption and/or a decrease in breaks in communications during an established voice call or other connection in the field, thereby outperforming prior art.
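
    The idea of treating the gradient as a random variable can be sketched on a toy quadratic error surface; the example below perturbs the tap vector of a two-tap linear estimator with unbiased and biased noise and compares the resulting empirical gradient statistics (the correlation matrices, noise level and bias are invented, and this is not the dissertation's actual model).

        # Gradient of a two-tap MSE surface under unbiased vs biased perturbations.
        import numpy as np

        rng = np.random.default_rng(0)
        R = np.array([[2.0, 0.5], [0.5, 1.0]])    # hypothetical input autocorrelation
        p = np.array([1.0, 0.3])                  # hypothetical cross-correlation
        w_opt = np.linalg.solve(R, p)             # Wiener solution (surface minimum)

        def gradient(W):
            # Row-wise gradient of J(w) = c - 2 p.w + w.R.w (R is symmetric).
            return 2.0 * (W @ R - p)

        sigma, bias = 0.1, np.array([0.05, 0.05])
        unbiased = w_opt + rng.normal(0.0, sigma, size=(5000, 2))
        biased = unbiased + bias                  # same spread, shifted mean

        # Gradient norms become samples of a random variable; their empirical
        # densities are what one inspects to compare the two perturbation models.
        print("mean |grad| (unbiased):", np.linalg.norm(gradient(unbiased), axis=1).mean())
        print("mean |grad| (biased):  ", np.linalg.norm(gradient(biased), axis=1).mean())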