
    The Hyperdimensional Transform: a Holographic Representation of Functions

    Full text link
    Integral transforms are invaluable mathematical tools for mapping functions into spaces where they are easier to characterize. We introduce the hyperdimensional transform, a new kind of integral transform. It converts square-integrable functions into noise-robust, holographic, high-dimensional representations called hyperdimensional vectors. The central idea is to approximate a function by a linear combination of random functions. We formally introduce a set of stochastic, orthogonal basis functions and define the hyperdimensional transform and its inverse. We discuss general transform-related properties such as uniqueness, approximation properties of the inverse transform, and the representation of integrals and derivatives. The hyperdimensional transform offers a powerful, flexible framework that connects closely with other integral transforms, such as the Fourier, Laplace, and fuzzy transforms. Moreover, it provides theoretical foundations and new insights for the field of hyperdimensional computing, a computing paradigm that is rapidly gaining attention for efficient and explainable machine learning algorithms, with potential applications in statistical modelling and machine learning. In addition, we provide straightforward, easily understandable code that can serve as a tutorial and allows the demonstrated examples to be reproduced, from computing the transform to solving differential equations.
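
    As a minimal illustration of the central idea above (approximating a function by a linear combination of random functions), the sketch below projects a square-integrable function onto a set of random cosine features via least squares and reconstructs it. The particular basis, dimensionality, and target function are illustrative assumptions, not the paper's stochastic orthogonal basis construction.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 500                                      # hyperdimensional dimensionality (assumed value)
x = np.linspace(0.0, 1.0, 200)

# Random basis: D random cosine features, a generic stand-in for the
# paper's stochastic orthogonal basis functions.
freqs = rng.uniform(0.0, 20.0, D)
phases = rng.uniform(0.0, 2.0 * np.pi, D)
Phi = np.cos(np.outer(x, freqs) + phases)    # shape (200, D)

f = np.sin(2.0 * np.pi * x) + 0.5 * x        # a square-integrable target function

# "Transform": least-squares coefficients of f in the random basis.
coeffs, *_ = np.linalg.lstsq(Phi, f, rcond=None)

# "Inverse transform": reconstruct f from its high-dimensional representation.
f_hat = Phi @ coeffs
print(np.max(np.abs(f - f_hat)) < 1e-2)      # reconstruction is near-exact on the grid
```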

    Using machine learning to model dose–response relationships

    Full text link
    Rationale, aims and objectives: Establishing the relationship between various doses of an exposure and a response variable is integral to many studies in health care. Linear parametric models, widely used for estimating dose–response relationships, have several limitations. This paper employs the optimal discriminant analysis (ODA) machine-learning algorithm to determine the degree to which exposure dose can be distinguished based on the distribution of the response variable. By framing the dose–response relationship as a classification problem, machine learning can provide the same functionality as conventional models, but can additionally make individual-level predictions, which may be helpful in practical applications like establishing responsiveness to prescribed drug regimens. Method: Using data from a study measuring the responses of blood flow in the forearm to the intra-arterial administration of isoproterenol (separately for 9 black and 13 white men, and pooled), we compare the results estimated from a generalized estimating equations (GEE) model with those estimated using ODA. Results: Generalized estimating equations and ODA both identified many statistically significant dose–response relationships, separately by race and for pooled data. Post hoc comparisons between doses indicated that ODA (based on exact P values) was consistently more conservative than GEE (based on estimated P values). Compared with ODA, GEE produced twice as many instances of paradoxical confounding (findings from analysis of pooled data that are inconsistent with findings from analyses stratified by race). Conclusions: Given its unique advantages and greater analytic flexibility, maximum-accuracy machine-learning methods like ODA should be considered as the primary analytic approach in dose–response applications.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/134965/1/jep12573_am.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/134965/2/jep12573.pd
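
    As a toy illustration of framing a dose–response relationship as a classification problem, the sketch below predicts the dose level from a continuous response with a simple nearest-centroid rule. Both the synthetic data and the classifier are stand-ins for illustration; neither is the ODA algorithm nor the forearm blood-flow study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data (assumed, not the study's measurements):
# three dose levels, each shifting the mean of a continuous response.
doses = np.repeat([0, 1, 2], 40)
response = 2.0 * doses + rng.normal(0.0, 0.5, doses.size)

# Dose-response as classification: predict the dose level from the
# response using a minimal nearest-centroid rule (a generic stand-in
# for ODA's maximum-accuracy classification).
centroids = np.array([response[doses == d].mean() for d in (0, 1, 2)])
pred = np.abs(response[:, None] - centroids[None, :]).argmin(axis=1)

# Individual-level predictions, scored by overall accuracy.
accuracy = (pred == doses).mean()
print(accuracy > 0.8)
```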

    Sensor-less maximum power extraction control of a hydrostatic tidal turbine based on adaptive extreme learning machine

    Get PDF
    In this paper, a hydrostatic tidal turbine (HTT) is designed and modelled; it uses a more reliable hydrostatic transmission to replace the existing fixed-ratio gearbox transmission. The HTT dynamic model is derived by integrating the governing equations of all components of the hydraulic machine. A nonlinear observer based on an extreme learning machine (ELM) is proposed to predict the turbine torque and tidal speeds in real time. A sensor-less double integral sliding mode controller is then designed for the HTT to achieve maximum power extraction in the presence of large parametric uncertainties and nonlinearities. Simscape design experiments are conducted to verify the proposed design, model and control system; they show that the proposed control system efficiently achieves maximum power extraction and performs much better than conventional control. Unlike existing works on ELM, the weights and biases of the ELM are updated online continuously. Furthermore, the overall stability of the controlled HTT system, including the ELM, is proved, and the selection criteria for the ELM learning rates are derived. The proposed sensor-less control system has prominent advantages in robustness and accuracy, and is also easy to implement in practice.
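
    The fixed-random-hidden-layer idea behind an extreme learning machine can be sketched as follows. The one-dimensional regression task, network size, and learning rate are illustrative assumptions, and only the output weights receive the online updates here, unlike the paper's scheme, which also adapts the hidden weights and biases.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy extreme learning machine (ELM) regressor: hidden-layer weights and
# biases are drawn at random and fixed; only output weights are learned.
n_hidden = 100
W = rng.normal(size=(1, n_hidden))       # random input -> hidden weights
b = rng.normal(size=n_hidden)            # random hidden biases

def hidden(x):
    # Hidden activations for a batch of scalar inputs, shape (n, n_hidden).
    return np.tanh(x[:, None] * W + b)

x = np.linspace(-1.0, 1.0, 100)
y = np.sin(3.0 * x)                      # unknown mapping to be identified

H = hidden(x)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # batch ELM solution

# Online variant: one gradient step on the output weights per sample,
# mimicking continuous updating as new measurements arrive.
lr = 0.05                                # learning rate (assumed value)
for xi, yi in zip(x, y):
    h = np.tanh(xi * W[0] + b)
    beta += lr * h * (yi - h @ beta)

err = np.max(np.abs(H @ beta - y))
print(err < 0.2)
```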

    Assessing Information Transmission in Data Transformations with the Channel Multivariate Entropy Triangle

    Get PDF
    Data transformation, e.g., feature transformation and selection, is an integral part of any machine learning procedure. In this paper, we introduce an information-theoretic model and tools to assess the quality of data transformations in machine learning tasks. In an unsupervised fashion, we analyze the transformation of a discrete, multivariate source of information X̄ into a discrete, multivariate sink of information Ȳ related by a distribution P_{X̄Ȳ}. The first contribution is a decomposition of the maximal potential entropy of (X̄, Ȳ), which we call a balance equation, into its (a) non-transferable, (b) transferable but not transferred, and (c) transferred parts. Such balance equations can be represented in (de Finetti) entropy diagrams, our second set of contributions. The most important of these, the aggregate channel multivariate entropy triangle, is a visual exploratory tool to assess the effectiveness of multivariate data transformations in transferring information from input to output variables. We also show how these decompositions and balance equations apply to the entropies of X̄ and Ȳ, respectively, and generate entropy triangles for them. As an example, we present the application of these tools to the assessment of information transfer efficiency for Principal Component Analysis and Independent Component Analysis as unsupervised feature transformation and selection procedures in supervised classification tasks. This research was funded by the Spanish Government (MINECO) projects TEC2014-53390-P and TEC2017-84395-P.
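
    A balance equation of this kind can be checked numerically for a small discrete channel: the non-transferable part, the transferable-but-not-transferred part (the variation of information), and the mutual information counted once per variable sum to the maximal potential entropy. This is a sketch of the identity under standard Shannon entropies; the joint distribution is an assumed toy example, and the variable names are illustrative.

```python
import numpy as np

# Assumed toy joint distribution of a discrete source X and sink Y.
P = np.array([[0.3, 0.1],
              [0.1, 0.5]])

def H(p):
    # Shannon entropy in bits, ignoring zero-probability cells.
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

Px, Py = P.sum(axis=1), P.sum(axis=0)    # marginals
Hx, Hy, Hxy = H(Px), H(Py), H(P)

Hmax = np.log2(P.shape[0]) + np.log2(P.shape[1])  # maximal potential entropy
dH = Hmax - Hx - Hy              # (a) non-transferable part
MI = Hx + Hy - Hxy               # mutual information
VI = 2 * Hxy - Hx - Hy           # (b) transferable but not transferred
# (c) the transferred part appears once per variable, hence 2 * MI.
print(abs(dH + 2 * MI + VI - Hmax) < 1e-12)
```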

    Differentiable Genetic Programming

    Full text link
    We introduce the use of high-order automatic differentiation, implemented via the algebra of truncated Taylor polynomials, in genetic programming. Using the Cartesian Genetic Programming encoding, we obtain a high-order Taylor representation of the program output that is then used to back-propagate errors during learning. The resulting machine learning framework is called differentiable Cartesian Genetic Programming (dCGP). In the context of symbolic regression, dCGP offers a new approach to the long-unsolved problem of constant representation in GP expressions. On several problems of increasing complexity, we find that dCGP is able to recover the exact form of the symbolic expression as well as the values of the constants. We also demonstrate the use of dCGP to solve a large class of differential equations and to find prime integrals of dynamical systems, presenting, in both cases, results that confirm the efficacy of our approach.
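
    The algebra of truncated Taylor polynomials that this approach builds on can be sketched with a minimal order-2 implementation: evaluating a program on such objects yields the value and derivatives in one pass. The hypothetical `Taylor` class below handles only addition and multiplication of a single variable; the actual framework supports high orders and many variables.

```python
from dataclasses import dataclass

@dataclass
class Taylor:
    # Truncated Taylor polynomial c0 + c1*dx + c2*dx^2 around an expansion point.
    c0: float  # value
    c1: float  # first-derivative coefficient
    c2: float  # second-order coefficient (f''/2)

    def __add__(self, other):
        return Taylor(self.c0 + other.c0, self.c1 + other.c1, self.c2 + other.c2)

    def __mul__(self, other):
        # Polynomial product with all terms above order 2 truncated away.
        return Taylor(self.c0 * other.c0,
                      self.c0 * other.c1 + self.c1 * other.c0,
                      self.c0 * other.c2 + self.c1 * other.c1 + self.c2 * other.c0)

x = Taylor(3.0, 1.0, 0.0)      # the variable x, expanded around 3
f = x * x + x                  # a tiny "program": f(x) = x^2 + x
print(f.c0, f.c1, 2 * f.c2)    # value, f'(3), f''(3) → 12.0 7.0 2.0
```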