
    Tracking a well diversified portfolio with maximum entropy in the mean

    In this work we address the following problem: having chosen a well diversified portfolio, we show how to improve its return while maintaining its diversification. To achieve this boost in return we construct a neighborhood of the well diversified portfolio and find the portfolio that maximizes the return within that neighborhood. For that we use the method of maximum entropy in the mean, which yields a portfolio achieving any possible return up to the maximum return within the neighborhood. The implicit bonus of the method is that if the benchmark portfolio has acceptable risk and diversification, the portfolio of maximum return in that neighborhood will also have acceptable risk and diversification. The work of the third author has been supported by the Madrid Government (Comunidad de Madrid, Spain) under the Multiannual Agreement with UC3M in the line of Excellence of University Professors (EPUC3M12), and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation). We acknowledge financial support from Ministerio de Ciencia e Innovación grant PID2020-115744RB-I00.
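
    The neighborhood search described above can be illustrated as a plain constrained optimization. A minimal sketch, assuming hypothetical expected returns, an equal-weight benchmark, and an L2 neighborhood radius; the paper's actual machinery is maximum entropy in the mean, not a generic solver:

```python
# Sketch: maximize expected return over a neighborhood of a benchmark
# portfolio. All numbers are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.05, 0.07, 0.06, 0.04])  # assumed expected returns
w0 = np.full(4, 0.25)                    # well diversified benchmark
eps = 0.10                               # neighborhood radius (L2)

res = minimize(
    lambda w: -mu @ w, w0, method="SLSQP",
    bounds=[(0.0, 1.0)] * 4,
    constraints=[
        # fully invested
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},
        # stay inside the (squared) L2 ball around the benchmark
        {"type": "ineq", "fun": lambda w: eps**2 - ((w - w0) ** 2).sum()},
    ],
)
w = res.x
print(mu @ w0, mu @ w)  # the neighborhood optimum beats the benchmark return
```

    Since every feasible portfolio lies within eps of the benchmark, the solution inherits much of its diversification, which is the "implicit bonus" noted above.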

    Maxentropic Solutions to a Convex Interpolation Problem Motivated by Utility Theory

    Here we consider the following inverse problem: determine an increasing continuous function U(x) on an interval [a, b] from knowledge of the integrals ∫ U(x) dF_{X_i}(x) = π_i, where the X_i are random variables taking values in [a, b] and the π_i are given numbers. This is a linear integral equation with discrete data, which can be transformed into a generalized moment problem when U(x) is supposed to have a positive derivative, and it becomes a classical interpolation problem if the X_i are deterministic. In some cases, e.g. in utility theory in economics, natural growth and convexity constraints are imposed on the function, which makes the inverse problem more interesting. Moreover, the data may be provided as intervals and/or measured up to an additive error. The purpose of this work is to show that the standard method of maximum entropy, as well as the method of maximum entropy in the mean, provides an efficient way to deal with these problems.
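
    A toy version of the moment problem above: the standard maximum entropy method with a single expectation constraint, solved by bisection on the dual variable. The support and target mean are made up for illustration, and the growth/convexity constraints of the utility setting are not imposed:

```python
# Standard maximum entropy on a finite support with one moment constraint.
# The solution has the exponential-family form p(x) ∝ exp(lam * x); lam is
# pinned down by matching the prescribed mean (bisection works because the
# mean is strictly increasing in lam).
import math

def maxent_given_mean(support, m, lo=-5.0, hi=5.0, tol=1e-12):
    def mean(lam):
        ws = [math.exp(lam * x) for x in support]
        z = sum(ws)
        return sum(x * w for x, w in zip(support, ws)) / z
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < m:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(lam * x) for x in support]
    z = sum(ws)
    return [w / z for w in ws]

support = list(range(11))            # hypothetical support
p = maxent_given_mean(support, 3.0)  # mean below the midpoint, so lam < 0
```

    With interval or noisy data, the equality constraint would be relaxed to an inequality band, which is where the maximum-entropy-in-the-mean machinery of the paper takes over.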

    Cross-Entropy Estimation of Linear Cointegrated Equations

    The cross-entropy approach is extended to the estimation of cointegrated equations. The entropy estimators for an appropriately constructed moment form are asymptotically equivalent to Fully Modified estimators, since they converge to these estimates sufficiently quickly. The performance of the entropy estimators is examined using Monte Carlo trials and in an applied example: the estimation of a production function for South African agriculture.

    CHANNEL CODING TECHNIQUES FOR A MULTIPLE TRACK DIGITAL MAGNETIC RECORDING SYSTEM

    In magnetic recording, greater areal bit-packing densities are achieved by increasing track density (reducing the space between, and the width of, the recording tracks) and/or by reducing the wavelength of the recorded information. This leads to the requirement of higher-precision tape transport mechanisms and dedicated coding circuitry. A TMS32010 digital signal processor is applied to a standard low-cost, low-precision, multiple-track compact cassette tape recording system. Advanced signal processing and coding techniques are employed to maximise recording density and to compensate for the mechanical deficiencies of this system. Parallel software encoding/decoding algorithms have been developed for several run-length-limited modulation codes. The results for a peak-detection system show that the Bi-Phase L code can be reliably employed up to a data rate of 5 kbit/s per track. Development of a second system, employing a TMS32025 and sampling detection, permitted the utilisation of adaptive equalisation to slim the readback pulse. Application of conventional read equalisation techniques that oppose inter-symbol interference resulted in a 30% increase in performance. Further investigation shows that greater linear recording densities can be achieved by employing partial-response signalling and maximum-likelihood detection. Partial-response signalling schemes use controlled inter-symbol interference to increase recording density at the expense of a multi-level readback waveform, which results in an increased noise penalty. Maximum-likelihood sequence detection employs soft decisions on the readback waveform to recover this loss. The associated modulation coding techniques required for optimised operation of such a system are discussed. Two-dimensional run-length-limited (d, ky) modulation codes provide a further means of increasing storage capacity in multi-track recording systems.
    For example, the code rate of a single-track run-length-limited code with constraints (1, 3), such as the Miller code, can be increased by over 25% when using a 4-track two-dimensional code with the same d constraint and with the k constraint satisfied across a number of parallel channels. The k constraint along an individual track, kx, can be increased without loss of clock synchronisation, since the clocking information derived from frequent signal transitions can be sub-divided across a number y of parallel tracks in terms of a ky constraint. This permits more code words to be generated for a given (d, k) constraint in two dimensions than is possible in one dimension. This coding technique is furthered by the development of a reverse enumeration scheme based on the trellis description of the (d, ky) constraints. The application of a two-dimensional code to a high-linear-density system employing extended class IV partial-response signalling and maximum-likelihood detection is proposed. Finally, additional coding constraints to improve spectral response and error performance are discussed. Hewlett Packard, Computer Peripherals Division (Bristol).
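
    The quoted rate gains can be put in context by computing the Shannon capacity of a one-dimensional (d, k) constraint, which upper-bounds any single-track code rate. A minimal sketch using the standard state-graph construction; since C(1, 3) ≈ 0.55, a rate more than 25% above the Miller code's 1/2 necessarily exploits the second dimension:

```python
# Capacity of a one-dimensional (d, k) run-length-limited constraint:
# C = log2(largest eigenvalue) of the adjacency matrix of the state graph
# whose state i counts the 0s emitted since the last 1.
import math

def rll_capacity(d, k, iters=500):
    n = k + 1
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        if i < k:
            A[i][i + 1] = 1   # emit a 0: the run of zeros grows
        if i >= d:
            A[i][0] = 1       # emit a 1: allowed only after d zeros
    # Power iteration for the dominant (Perron) eigenvalue.
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return math.log2(lam)

print(round(rll_capacity(1, 3), 4))  # ≈ 0.5515
```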

    Generalized Maximum Entropy, Convexity and Machine Learning

    This thesis identifies and extends techniques that can be linked to the principle of maximum entropy (maxent) and applied to parameter estimation in machine learning and statistics. Entropy functions based on deformed logarithms are used to construct Bregman divergences, and together these represent a generalization of relative entropy. The framework is analyzed using convex analysis to characterize generalized forms of exponential family distributions. Various connections to the existing machine learning literature are discussed, and the techniques are applied to the problem of non-negative matrix factorization (NMF).
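
    One concrete instance of the Bregman-divergence view is NMF under the generalized Kullback-Leibler divergence, the Bregman divergence generated by x log x. A minimal sketch with the classical multiplicative updates (the data matrix and rank are arbitrary placeholders, not from the thesis):

```python
# NMF minimizing the generalized KL divergence D(V || WH), a Bregman
# divergence; the multiplicative updates keep W and H nonnegative and
# decrease the objective monotonically.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((6, 8)) + 0.1      # hypothetical nonnegative data
r = 3                             # factorization rank (placeholder)
W = rng.random((6, r)) + 0.1
H = rng.random((r, 8)) + 0.1

def gkl(V, WH):
    return float(np.sum(V * np.log(V / WH) - V + WH))

d0 = gkl(V, W @ H)
for _ in range(200):
    H *= (W.T @ (V / (W @ H))) / W.sum(axis=0)[:, None]
    W *= ((V / (W @ H)) @ H.T) / H.sum(axis=1)[None, :]
d1 = gkl(V, W @ H)
print(d0, "->", d1)               # the divergence decreases
```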

    Modulation codes


    Model reconstruction for moment-based stochastic chemical kinetics

    Based on the theory of stochastic chemical kinetics, the inherent randomness and stochasticity of biochemical reaction networks can be accurately described by discrete-state continuous-time Markov chains, where each chemical reaction corresponds to a state transition of the process. However, the analysis of such processes is computationally expensive and requires sophisticated numerical methods. The main complication is the largeness of the state space, so analysis techniques based on an exploration of the state space are often not feasible, and the integration of the moments of the underlying probability distribution has become a very popular alternative. In this thesis we propose an analysis framework in which we integrate a number of moments of the process instead of the state probabilities. This results in a more time-efficient simulation of the time evolution of the process. In order to regain the state probabilities from the moment representation, we combine the moment-based simulation (MM) with a maximum entropy approach: the maximum entropy principle is applied to derive a distribution that best fits a given sequence of moments. We further extend this approach by incorporating the conditional moments (MCM), which allows us not only to reconstruct the distribution of the species present in high amounts in the system, but also to approximate the probabilities of species with low molecular counts. For the given distribution reconstruction framework, we investigate the numerical accuracy and stability using case studies from systems biology, compare two different moment approximation methods (MM and MCM), examine whether it can be used for the reaction rate estimation problem, and describe possible future applications.
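
    The reconstruction step described above can be sketched as a convex dual problem: minimize log Z(lam) - lam . m over the Lagrange multipliers lam. The example below is a standalone illustration on a truncated count space with two prescribed moments, chosen as those of a Poisson(5) variable; it is not the MM/MCM moment integration itself:

```python
# Maximum entropy reconstruction of a discrete distribution from a given
# sequence of moments: the solution p(x) ∝ exp(lam . phi(x)) is found by
# minimizing the convex dual g(lam) = log Z(lam) - lam . m.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

x = np.arange(30)            # truncated molecule-count space (assumed)
s = x / 10.0                 # rescaled features for better conditioning
phi = np.vstack([s, s**2])
m = np.array([0.5, 0.30])    # rescaled targets: E[X] = 5, E[X^2] = 30

def dual(lam):
    return logsumexp(lam @ phi) - lam @ m

def grad(lam):
    p = np.exp(lam @ phi - logsumexp(lam @ phi))
    return phi @ p - m       # model moments minus targets

res = minimize(dual, np.zeros(2), jac=grad, method="BFGS")
p = np.exp(res.x @ phi - logsumexp(res.x @ phi))
print(p @ x, p @ x**2)       # reconstructed moments match the targets
```

    Adding more moments only adds rows to phi; the conditional-moment variant replaces the global moments with moments conditioned on the low-copy species, but the dual step stays the same in spirit.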