
    Weighted entropy and optimal portfolios for risk-averse Kelly investments

    Following a series of works on capital growth investment, we analyse log-optimal portfolios where the return evaluation includes 'weights' of different outcomes. The results are twofold: (A) under certain conditions, the logarithmic growth rate leads to a supermartingale, and (B) the optimal (martingale) investment strategy is proportional betting. We focus on properties of the optimal portfolios and discuss a number of simple examples extending the well-known Kelly betting scheme. An important restriction is that the investment does not exceed the current capital value and allows the trader to cover the worst possible losses. The paper deals with a class of discrete-time models. A continuous-time extension is the topic of an ongoing study.
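    As a point of reference for the betting scheme discussed above, the classic (unweighted) Kelly rule chooses the fraction f of capital that maximizes the expected logarithmic growth rate. The sketch below is a minimal illustration of that baseline, not the paper's weighted-entropy model; the function name and the numeric example are ours.

```python
# Classic Kelly criterion: pick the betting fraction f maximizing the
# expected log growth rate  p*log(1 + f*b) + (1-p)*log(1 - f).
import numpy as np
from scipy.optimize import minimize_scalar

def kelly_fraction(p: float, b: float) -> float:
    """Optimal fraction of capital to bet, for win probability p and net
    odds b (gain f*b on a win, lose f on a loss).
    Closed form for comparison: (p*(b + 1) - 1) / b."""
    neg_growth = lambda f: -(p * np.log(1 + f * b) + (1 - p) * np.log(1 - f))
    res = minimize_scalar(neg_growth, bounds=(0.0, 0.999), method="bounded")
    return res.x

# Example: 60% win probability at even odds -> Kelly fraction ~0.2,
# matching the closed form (0.6 * 2 - 1) / 1 = 0.2.
print(kelly_fraction(0.6, 1.0))
```

    Note that the constraint f < 1 mirrors the restriction above that the trader must always be able to cover the worst possible loss.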

    A variational principle for cyclic polygons with prescribed edge lengths

    We provide a new proof of the elementary geometric theorem on the existence and uniqueness of cyclic polygons with prescribed side lengths. The proof is based on a variational principle involving the central angles of the polygon as variables. The uniqueness follows from the concavity of the target function. The existence proof relies on a fundamental inequality of information theory. We also provide proofs for the corresponding theorems of spherical and hyperbolic geometry (and, as a byproduct, in 1+1 spacetime). The spherical theorem is reduced to the Euclidean one. The proof of the hyperbolic theorem treats three cases separately: only the case of polygons inscribed in compact circles can be reduced to the Euclidean theorem. For the other two cases, polygons inscribed in horocycles and hypercycles, we provide separate arguments. The hypercycle case also proves the theorem for "cyclic" polygons in 1+1 spacetime. Comment: 18 pages, 6 figures. v2: typos corrected, final version.
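    To make the Euclidean statement concrete: a cyclic polygon with side lengths l_i inscribed in a circle of radius R has central angles 2 arcsin(l_i / 2R), and these must sum to 2π. The sketch below solves that equation numerically by bisection; it is our illustration of the setup (covering only the case where the circumcenter lies inside the polygon), not the paper's variational proof.

```python
# Find the circumradius R of a cyclic polygon with given side lengths by
# solving  sum_i 2*arcsin(l_i / (2R)) = 2*pi,  using that the left-hand
# side is strictly decreasing in R.
import math

def circumradius(sides, tol=1e-12):
    lo = max(sides) / 2.0          # smallest admissible radius
    angle_sum = lambda R: sum(2 * math.asin(s / (2 * R)) for s in sides)
    if angle_sum(lo) < 2 * math.pi:
        raise ValueError("circumcenter lies outside the polygon; "
                         "this simple sketch does not handle that case")
    hi = sum(sides)                # angle_sum(hi) is certainly below 2*pi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if angle_sum(mid) > 2 * math.pi:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(circumradius([3, 4, 5]))     # right triangle: R = 2.5
```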

    Maximum Entropy Linear Manifold for Learning Discriminative Low-dimensional Representation

    Representation learning is currently a very active topic in modern machine learning, mostly due to the great success of deep learning methods. In particular, a low-dimensional representation that discriminates between classes can not only enhance the classification procedure and make it faster but, contrary to high-dimensional embeddings, can also be used efficiently for visual exploratory data analysis. In this paper we propose Maximum Entropy Linear Manifold (MELM), a multidimensional generalization of the Multithreshold Entropy Linear Classifier model, which is able to find a low-dimensional linear data projection maximizing the discriminativeness of the projected classes. As a result we obtain a linear embedding which can be used for classification, class-aware dimensionality reduction, and data visualization. MELM provides highly discriminative 2D projections of the data which can be used as a method for constructing robust classifiers. We provide both an empirical evaluation and some interesting theoretical properties of our objective function, such as scale and affine transformation invariance, connections with PCA, and bounding of the expected balanced accuracy error. Comment: submitted to ECML PKDD 2015
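    As a rough illustration of the kind of objective involved (our assumption, not the authors' code): the sketch below finds a one-dimensional linear projection maximizing the Cauchy-Schwarz divergence between Gaussian kernel density estimates of the two projected classes, which, to our understanding, is the quantity underlying the Multithreshold Entropy Linear Classifier that MELM generalizes.

```python
# Maximize the Cauchy-Schwarz divergence between Gaussian-KDE densities of
# two classes projected onto a direction w. For Gaussian kernels the KDE
# inner products have a closed form: <f_u, f_v> = mean_{i,j} N(u_i - v_j; 0, 2h^2).
import numpy as np
from scipy.optimize import minimize

def kde_inner(u, v, h):
    """Closed-form integral of the product of two 1D Gaussian KDEs."""
    d = u[:, None] - v[None, :]
    return np.exp(-d**2 / (4 * h**2)).mean() / np.sqrt(4 * np.pi * h**2)

def neg_cs_divergence(w, XA, XB, h=0.5):
    w = w / np.linalg.norm(w)       # objective is scale invariant in w
    a, b = XA @ w, XB @ w
    # D_CS = -log( <fA,fB> / sqrt(<fA,fA><fB,fB>) ); return its negation.
    return (np.log(kde_inner(a, b, h))
            - 0.5 * np.log(kde_inner(a, a, h))
            - 0.5 * np.log(kde_inner(b, b, h)))

rng = np.random.default_rng(0)
XA = rng.normal([0.0, 0.0], 1.0, (200, 2))   # class A
XB = rng.normal([3.0, 0.0], 1.0, (200, 2))   # class B, separated along x
res = minimize(neg_cs_divergence, x0=np.array([1.0, 1.0]), args=(XA, XB))
print(res.x / np.linalg.norm(res.x))         # roughly +-[1, 0]
```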

    Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks

    Learning deeper convolutional neural networks has become a trend in recent years. However, much empirical evidence suggests that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information-theoretical perspective and propose a novel method, Relay Backpropagation, which encourages the propagation of effective information through the network during training. By virtue of this method, we achieved first place in the ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large-scale datasets demonstrate that the effectiveness of our method is not restricted to a specific dataset or network architecture. Our models will be made available to the research community later. Comment: Technical report for our submissions to the ILSVRC 2015 Scene Classification Challenge, where we won first place.
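    A minimal sketch of the relay idea follows (the segmentation and module sizes are our assumptions, not the paper's exact scheme): the gradient of the final loss is cut off at an intermediate point with detach, and an auxiliary classifier at that point supplies a short-path error signal to the lower layers.

```python
# Toy relay-style training step: the final loss trains only the upper
# segment (detach blocks the long gradient path), while an auxiliary
# classifier at the relay point trains the lower segment.
import torch
import torch.nn as nn

lower = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
upper = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
aux_head = nn.Linear(64, 10)        # auxiliary classifier at the relay point
criterion = nn.CrossEntropyLoss()
opt = torch.optim.SGD([*lower.parameters(), *upper.parameters(),
                       *aux_head.parameters()], lr=0.1)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
h = lower(x)
final_loss = criterion(upper(h.detach()), y)   # long path stops here
aux_loss = criterion(aux_head(h), y)           # short path trains `lower`
(final_loss + aux_loss).backward()
opt.step()
```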

    Information complexity of the AND function in the two-party and multi-party settings

    In a recent breakthrough paper [M. Braverman, A. Garg, D. Pankratov, and O. Weinstein, From information to exact communication, STOC'13], Braverman et al. developed a local characterization of the zero-error information complexity in the two-party model and used it to compute the exact internal and external information complexity of the 2-bit AND function, which was then applied to determine the exact asymptotics of the randomized communication complexity of the set disjointness problem. In this article, we extend their results on the AND function to the multi-party number-in-hand model by proving that the generalization of their protocol has optimal internal and external information cost for certain distributions. Our proof has new components, and in particular it fixes some minor gaps in the proof of Braverman et al.
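    For orientation, the two measures named above have standard definitions in this line of work: for inputs (X, Y) drawn from a distribution μ and a protocol π with transcript Π,

```latex
% Internal and external information cost of a protocol \pi with
% transcript \Pi on inputs (X, Y) \sim \mu:
\mathrm{IC}_{\mu}(\pi) = I(\Pi; X \mid Y) + I(\Pi; Y \mid X),
\qquad
\mathrm{IC}^{\mathrm{ext}}_{\mu}(\pi) = I(\Pi; X, Y).
```

    In the number-in-hand generalization, the natural analogue of the internal cost sums the conditional terms I(Π; X_i | X_{-i}) over all players.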

    Competitive portfolio selection using stochastic predictions

    We study a portfolio selection problem where a player attempts to maximise a utility function that represents the growth rate of wealth. We show that, given some stochastic predictions of the asset prices in the next time step, a sublinear expected regret is attainable against an optimal greedy algorithm, subject to a tradeoff against the "accuracy" of such predictions, which may learn (or improve) over time. We also study the effects of introducing transaction costs into the model.
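    The greedy step against which regret is measured can be pictured as follows (a sketch under our own assumptions, not the paper's algorithm): given Monte Carlo samples of the next-step price relatives, choose the portfolio on the simplex maximizing the sampled expected log growth.

```python
# Greedy log-optimal portfolio from sampled one-step predictions.
import numpy as np
from scipy.optimize import minimize

def greedy_portfolio(samples):
    """samples: (n_scenarios, n_assets) array of predicted price relatives
    (next price / current price). Returns weights on the simplex."""
    n = samples.shape[1]
    neg_growth = lambda b: -np.mean(np.log(samples @ b))
    cons = ({"type": "eq", "fun": lambda b: b.sum() - 1},)
    res = minimize(neg_growth, np.full(n, 1.0 / n),
                   bounds=[(1e-9, 1.0)] * n, constraints=cons)
    return res.x

preds = np.array([[1.05, 0.98], [0.97, 1.02], [1.10, 1.00]])
print(greedy_portfolio(preds))
```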

    Scanner Invariant Representations for Diffusion MRI Harmonization

    Purpose: In the present work we describe the correction of diffusion-weighted MRI for site and scanner biases using a novel method based on invariant representation. Theory and Methods: Pooled imaging data from multiple sources are subject to variation between the sources. Correcting for these biases has become very important as imaging studies increase in size and multi-site cases become more common. We propose learning an intermediate representation invariant to site/protocol variables, a technique adapted from information-theoretic algorithmic fairness; by leveraging the data processing inequality, such a representation can then be used to create an image reconstruction that is uninformative of its original source, yet still faithful to the underlying structures. To implement this, we use a deep learning method based on variational auto-encoders (VAEs) to construct scanner-invariant encodings of the imaging data. Results: To evaluate our method, we use training data from the 2018 MICCAI Computational Diffusion MRI (CDMRI) Challenge Harmonization dataset. Our proposed method shows improvements on independent test data relative to a recently published baseline method on each subtask, mapping data from three different scanning contexts to and from one separate target scanning context. Conclusion: As imaging studies continue to grow, the use of pooled multi-site imaging will similarly increase. Invariant representation presents a strong candidate for the harmonization of these data.
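    A minimal sketch of the conditional-VAE idea follows (the architecture, sizes, and loss weighting are our assumptions, not the paper's model): feeding the site label to the decoder, rather than rewarding the encoder for keeping it in the latent code, pushes site information out of the encoding; by the data processing inequality, anything reconstructed from that encoding carries no more source information than the encoding itself.

```python
# Toy conditional VAE: the decoder receives the site one-hot, so the latent
# code need not (and is pressured by the KL term not to) encode the site.
import torch
import torch.nn as nn

class SiteCVAE(nn.Module):
    def __init__(self, d_in=128, d_z=16, n_sites=3):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)         # -> (mu, logvar)
        self.dec = nn.Linear(d_z + n_sites, d_in)   # latent + site one-hot

    def forward(self, x, site_onehot):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = self.dec(torch.cat([z, site_onehot], dim=-1))
        kl = -0.5 * (1 + logvar - mu**2 - logvar.exp()).sum(-1).mean()
        return ((recon - x)**2).sum(-1).mean() + kl  # ELBO-style loss

model = SiteCVAE()
x = torch.randn(4, 128)
site = torch.eye(3)[torch.tensor([0, 1, 2, 0])]      # one-hot site labels
model(x, site).backward()
```

    Decoding the same latent code with a different site label then yields a reconstruction "mapped" to that target scanning context.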

    A matter of words: NLP for quality evaluation of Wikipedia medical articles

    Automatic quality evaluation of Web information is a task with many fields of application and of great relevance, especially in critical domains like the medical one. We start from the intuition that the quality of the content of medical Web documents is affected by features related to the specific domain: first, the usage of a specific vocabulary (Domain Informativeness); then, the adoption of specific codes (like those used in the infoboxes of Wikipedia articles) and the type of document (e.g., historical or technical ones). In this paper, we propose to leverage specific domain features to improve the results of the evaluation of Wikipedia medical articles. In particular, we evaluate the articles adopting an "actionable" model, whose features are related to the content of the articles, so that the model can also directly suggest strategies for improving a given article's quality. We rely on Natural Language Processing (NLP) and dictionary-based techniques to extract the bio-medical concepts in a text. We prove the effectiveness of our approach by classifying the medical articles of the Wikipedia Medicine Portal, which have been previously manually labeled by the Wiki Project team. The results of our experiments confirm that, by considering domain-oriented features, it is possible to obtain noticeable improvements with respect to existing solutions, mainly for those articles that other approaches have classified less correctly. Besides being interesting in their own right, the results call for further research in the area of domain-specific features suitable for Web data quality assessment.
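    A toy version of the dictionary-based feature extraction might look like the following (the feature set and the tiny dictionary are our illustrations, not the paper's actual features):

```python
# Dictionary-based domain features for article quality classification.
import re
from sklearn.ensemble import RandomForestClassifier

MEDICAL_TERMS = {"diagnosis", "symptom", "therapy", "dosage", "etiology"}

def domain_features(text: str):
    tokens = re.findall(r"[a-z]+", text.lower())
    medical = sum(t in MEDICAL_TERMS for t in tokens)
    return [
        medical / max(len(tokens), 1),   # domain informativeness
        float("{{Infobox" in text),      # presence of an infobox template
        len(tokens),                     # article length
    ]

articles = [
    "Aspirin therapy: typical dosage and symptom relief. {{Infobox drug}}",
    "The hospital building was erected in 1890 and restored in 1950.",
]
labels = ["high", "low"]                 # toy quality labels
X = [domain_features(t) for t in articles]
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict([domain_features("diagnosis and therapy {{Infobox}}")]))
```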

    Adaptive Path Planning for Depth Constrained Bathymetric Mapping with an Autonomous Surface Vessel

    This paper describes the design, implementation and testing of a suite of algorithms to enable depth-constrained autonomous bathymetric (underwater topography) mapping by an Autonomous Surface Vessel (ASV). Given a target depth and a bounding polygon, the ASV will find and follow the intersection of the bounding polygon and the depth contour as modeled online with a Gaussian Process (GP). This intersection, once mapped, is then used as a boundary within which a path is planned for coverage, to build a map of the bathymetry. Methods for sequential updates to GPs are described, allowing online fitting, prediction, and hyper-parameter optimisation on a small embedded PC. New algorithms are introduced for the partitioning of convex polygons to allow efficient path planning for coverage. These algorithms are tested both in simulation and in the field with a small twin-hull differential-thrust vessel built for the task. Comment: 21 pages, 9 Figures, 1 Table. Submitted to The Journal of Field Robotics.
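    The contour-finding step can be illustrated as follows (kernel choice, toy seabed, and the bisection routine are our assumptions, not the paper's implementation): fit a GP to scattered depth soundings, then locate the target-depth crossing along a transect by bisecting on the GP's predicted mean.

```python
# Locate a depth contour from a GP model of scattered soundings.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
XY = rng.uniform(0, 100, (200, 2))                  # sounding positions (m)
depth = 0.1 * XY[:, 0] + rng.normal(0, 0.05, 200)   # toy sloping seabed (m)
gp = GaussianProcessRegressor(RBF(20.0) + WhiteKernel(0.01)).fit(XY, depth)

def contour_crossing(p0, p1, target, tol=0.1):
    """Bisect along the segment p0 -> p1 for the point where the GP mean
    equals the target depth (assumes depth is monotone along the segment)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    while np.linalg.norm(p1 - p0) > tol:
        mid = (p0 + p1) / 2
        if gp.predict(mid[None, :])[0] < target:
            p0 = mid
        else:
            p1 = mid
    return (p0 + p1) / 2

print(contour_crossing([0, 50], [100, 50], target=5.0))  # ~[50, 50]
```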