
    Stable cohomology of spaces of non-singular hypersurfaces

    We prove that the rational cohomology of the space of non-singular complex homogeneous polynomials of degree d in a fixed number of variables stabilizes to the cohomology of the general linear group for d sufficiently large.
    Comment: 11 pages; v3: stabilization range made explicit, proof of Lemma 3 corrected and expanded
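
    Restated in symbols (a hedged paraphrase of the abstract only: the notation U_{d,n} for the space of non-singular degree-d forms, and the convention that the fixed number of variables is n+1, are introduced here for illustration; the explicit stabilization range is given in the paper itself):

\[
  H^k\bigl(U_{d,n};\,\mathbb{Q}\bigr) \;\cong\; H^k\bigl(\mathrm{GL}_{n+1}(\mathbb{C});\,\mathbb{Q}\bigr)
  \qquad \text{for each fixed degree } k \text{ and all } d \text{ sufficiently large.}
\]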

    Some remarks on varieties with degenerate Gauss image

    We consider projective varieties with degenerate Gauss image whose focal hypersurfaces are non-reduced schemes. Examples of this situation are provided by the secant varieties of Severi and Scorza varieties. The Severi varieties are moreover characterized by a uniqueness property.
    Comment: 9 pages, to be published in Pacific Journal of Mathematics

    Towards a quantitative measure of rareness

    Within the context of detection of incongruent events, an often overlooked aspect is how a system should react to the detection. The set of all possible actions is certainly conditioned by the task at hand and by the embodiment of the artificial cognitive system under consideration. Still, we argue that a desirable action that does not depend on these factors is to update the internal model and learn the newly detected event. This paper proposes a recent transfer learning algorithm as the way to address this issue. A notable feature of the proposed model is its capability to learn from small samples, even a single one. This is very desirable in this context, as we cannot expect to have many samples to learn from, given the very nature of incongruent events. We also show that one of the internal parameters of the algorithm makes it possible to quantitatively measure the incongruence of detected events. Experiments on two different datasets support our claims.

    Training Deep Networks without Learning Rates Through Coin Betting

    Deep learning methods achieve state-of-the-art performance in many application scenarios. Yet, these methods require a significant amount of hyperparameter tuning in order to achieve the best results. In particular, tuning the learning rates in the stochastic optimization process is still one of the main bottlenecks. In this paper, we propose a new stochastic gradient descent procedure for deep networks that does not require any learning rate setting. Contrary to previous methods, we do not adapt the learning rates, nor do we make use of the assumed curvature of the objective function. Instead, we reduce the optimization process to a game of betting on a coin and propose a learning-rate-free optimal algorithm for this scenario. Theoretical convergence is proven for convex and quasi-convex functions, and empirical evidence shows the advantage of our algorithm over popular stochastic gradient algorithms.
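
    As a rough illustration of the coin-betting reduction the abstract describes, here is a minimal one-dimensional sketch, assuming gradients bounded in [-1, 1]; the objective `quadratic_loss_grad`, the initial capital, and the betting rule (a running-average bet fraction) are stand-ins chosen for illustration, not the paper's actual deep-network algorithm.

```python
# Minimal sketch: learning-rate-free 1-D optimization via coin betting.
# Assumes |gradient| <= 1; quadratic_loss_grad is a hypothetical stand-in
# for a real model's gradient computation.

def quadratic_loss_grad(w, target=3.0):
    """Gradient of 0.5 * (w - target)^2, clipped to [-1, 1]."""
    g = w - target
    return max(-1.0, min(1.0, g))

def coin_betting_optimize(grad_fn, w_init=0.0, epsilon=1.0, steps=200):
    wealth = epsilon        # bettor's initial capital
    sum_outcomes = 0.0      # running sum of coin outcomes (negative gradients)
    w = w_init
    for t in range(1, steps + 1):
        beta = sum_outcomes / t      # bet a signed fraction of current wealth
        bet = beta * wealth
        w = w_init + bet             # iterate = initial point + current bet
        c = -grad_fn(w)              # coin outcome observed at the bet
        wealth += c * bet            # win or lose proportionally to the outcome
        sum_outcomes += c
    return w

print(coin_betting_optimize(quadratic_loss_grad))  # close to the minimizer 3.0
```

    No step size appears anywhere: the magnitude of each update is driven entirely by the accumulated wealth, which is the sense in which the procedure is learning-rate free.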

    On projective varieties of dimension n+k covered by k-spaces

    We study families of linear spaces in projective space whose union is a proper subvariety X of the expected dimension. We establish relations between configurations of focal points and the existence or non-existence of a fixed tangent space to X along a general element of the family. We apply our results to the classification of ruled 3-dimensional varieties.
    Comment: To be published in Illinois Journal of Mathematics

    Cohomology of the second Voronoi compactification of A_4

    In this paper we compute the cohomology groups of the second Voronoi compactification of the moduli space of abelian fourfolds in all degrees, with the exception of the middle degree 10. We also compute the cohomology groups of the perfect cone compactification in degree < 10. The main tool is the investigation of the strata of the compactification corresponding to semi-abelic varieties with constant torus rank.
    Comment: v2: 41 pages, mostly expository changes

    Adaptive Deep Learning through Visual Domain Localization

    A commercial robot, trained by its manufacturer to recognize a predefined number and type of objects, might be used in many settings that will in general differ in their illumination conditions, background, type and degree of clutter, and so on. Recent computer vision works tackle this generalization issue through domain adaptation methods, taking as source the visual domain where the system is trained and as target the domain of deployment. All these approaches assume access to images from all classes of the target domain during training, an unrealistic condition in robotics applications. We address this issue by proposing an algorithm that takes into account the specific needs of robot vision. Our intuition is that the domain shift experienced in robotics is mostly local in nature. We exploit this by learning maps that spatially ground the domain and quantify the degree of shift, embedded into an end-to-end deep domain adaptation architecture. By explicitly localizing the roots of the domain shift we significantly reduce the number of architecture parameters to tune, we gain the flexibility needed to deal with a subset of categories in the target domain at training time, and we provide clear feedback on the rationale behind any classification decision, which can be exploited in human-robot interactions. Experiments on two different settings of the iCub World database confirm the suitability of our method for robot vision.
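
    A toy sketch of the general idea of spatially localized domain weighting, written in PyTorch under stated assumptions: the module name, the 1x1-convolution "domainness" map, and the re-weighting scheme are hypothetical choices made for illustration and are not the architecture evaluated on iCub World in the paper.

```python
import torch
import torch.nn as nn

class DomainLocalizer(nn.Module):
    """Toy module: predicts a per-location domain-shift map from a CNN feature
    map and uses it to re-weight the features before classification."""

    def __init__(self, channels, num_classes):
        super().__init__()
        # 1x1 conv producing one "domainness" score per spatial location.
        self.domain_map = nn.Conv2d(channels, 1, kernel_size=1)
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, features):                          # features: (B, C, H, W)
        shift = torch.sigmoid(self.domain_map(features))  # (B, 1, H, W) in [0, 1]
        # Down-weight locations estimated to be strongly domain-specific.
        weighted = features * (1.0 - shift)
        pooled = weighted.mean(dim=(2, 3))                # global average pooling
        return self.classifier(pooled), shift

# Example usage with random features standing in for a backbone's output.
feats = torch.randn(2, 64, 7, 7)
model = DomainLocalizer(channels=64, num_classes=10)
logits, shift_map = model(feats)
print(logits.shape, shift_map.shape)  # (2, 10) and (2, 1, 7, 7)
```

    Returning the shift map alongside the logits is what makes the spatial grounding inspectable, which is the kind of per-decision feedback the abstract points to for human-robot interaction.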