71 research outputs found

    Linear Regression on Manifold Structured Data: the Impact of Extrinsic Geometry on Solutions

    In this paper, we study linear regression applied to data structured on a manifold. We assume that the data manifold is smooth and embedded in a Euclidean space, and our objective is to reveal the impact of the data manifold's extrinsic geometry on the regression. Specifically, we analyze the impact of the manifold's curvatures (or higher-order nonlinearity in the parameterization when the curvatures are locally zero) on the uniqueness of the regression solution. Our findings suggest that the corresponding linear regression does not have a unique solution when the embedded submanifold is flat in some dimensions. Otherwise, the manifold's curvature (or higher-order nonlinearity in the embedding) may contribute significantly, particularly to the solution associated with the normal directions of the manifold. Our findings thus reveal the role of data manifold geometry in ensuring the stability of regression models for out-of-distribution inference.
    Comment: 13 pages, 6 figures, accepted to the TAGML23 workshop of ICML 2023, to be published in PMLR
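The non-uniqueness claim can be illustrated with a minimal sketch (an assumption-laden toy, not the paper's construction): when the data lie on a flat 1-D submanifold of R^2, the Gram matrix is rank-deficient and infinitely many weight vectors fit perfectly.

```python
import numpy as np

# Toy illustration (not from the paper): data confined to the straight line
# y = 2x, a flat 1-D submanifold of R^2. Least squares then has no unique
# solution in the direction normal to the manifold.
rng = np.random.default_rng(0)
t = rng.normal(size=100)
X = np.column_stack([t, 2 * t])   # points on the flat submanifold
y = 3 * t                         # response depends only on position along the line

# X^T X is singular: every w with w[0] + 2*w[1] = 3 fits the data exactly.
rank = np.linalg.matrix_rank(X.T @ X)
w_min = np.linalg.lstsq(X, y, rcond=None)[0]  # lstsq picks the minimum-norm solution
```

Here `rank` is 1, and `lstsq` resolves the ambiguity only by convention, returning the minimum-norm solution (0.6, 1.2) out of the infinite solution family.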

    Discovering visual attributes from image and video data


    Isolation of AhDHNs from Arachis hypogaea L. and evaluation of AhDHNs expression under exogenous abscisic acid (ABA) and water stress

    The peanut (Arachis hypogaea L.) is an important oil and cash crop worldwide, mostly planted in arid and semi-arid regions. To determine the mechanism by which dehydrins (DHNs) are regulated by abscisic acid (ABA) in peanuts, three Arachis hypogaea L. dehydrins (AhDHNs) were isolated from peanut plants and sequenced. By BLASTing the protein sequences of these AhDHNs, AhDHN1 was found to belong to the YnSKn subfamily, while AhDHN2 and AhDHN3 were found to belong to the SKn and YnKn types, respectively. Treatment with 100 μM ABA enhanced AhDHN expression in peanut leaves. When peanut plants were treated with ABA and then, 12 h later, with the ABA-synthesis inhibitor sodium tungstate, AhDHN expression was suppressed; however, AhDHN2 was inhibited by sodium tungstate at 2 h, whereas the other AhDHNs were not. AhDHN expression increased greatly in peanut leaves treated with 30% polyethylene glycol (PEG), and sodium tungstate applied together with PEG inhibited this expression. This study found that exogenous and endogenous ABA can each independently affect AhDHN expression. The differential response of AhDHNs to exogenous ABA may be due to structural differences among the AhDHNs.
    Keywords: Arachis hypogaea L. dehydrins (AhDHNs), peanut, abscisic acid (ABA), expression, sodium tungstate, water stress

    Nearest Neighbor Sampling of Point Sets using Random Rays

    We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. A set of randomly selected rays is projected onto their closest points in the data set, forming the ray signature. From the signature, statistical information about the data set, as well as certain geometric information, can be extracted, independent of the ray set. We present promising results from "RayNN", a neural network for the classification of point clouds based on ray signatures.
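One plausible reading of the ray signature can be sketched as follows. The details (the point-to-ray distance used, and that the signature stores the nearest point per ray) are assumptions for illustration, not the paper's specification.

```python
import numpy as np

# Hypothetical sketch of a ray signature: sample random rays, and for each ray
# record the data point closest to it.
def point_to_ray_distance(points, origin, direction):
    """Distance from each point to the ray origin + t*direction, t >= 0."""
    v = points - origin
    t = np.clip(v @ direction, 0.0, None)      # projection parameter, clamped to the ray
    closest_on_ray = origin + np.outer(t, direction)
    return np.linalg.norm(points - closest_on_ray, axis=1)

def ray_signature(points, n_rays=8, seed=0):
    rng = np.random.default_rng(seed)
    signature = []
    for _ in range(n_rays):
        origin = rng.normal(size=points.shape[1])
        direction = rng.normal(size=points.shape[1])
        direction /= np.linalg.norm(direction)  # unit-length ray direction
        d = point_to_ray_distance(points, origin, direction)
        signature.append(points[np.argmin(d)])  # nearest data point to this ray
    return np.array(signature)

pts = np.random.default_rng(1).normal(size=(50, 3))
sig = ray_signature(pts, n_rays=8)
```

Because the signature is a fixed-size array regardless of the input set's cardinality, it can serve directly as a compressed descriptor, e.g. as input to a classifier.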

    Gradient constrained sharpness-aware prompt learning for vision-language models

    This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs): improving performance on unseen classes while maintaining performance on seen classes. Compared with existing generalizable methods that neglect degradation on seen classes, this problem setting is stricter and fits practical applications more closely. To solve it, we start from the optimization perspective and leverage the relationship between loss-landscape geometry and model generalization ability. By analyzing the loss landscapes of the state-of-the-art method and a vanilla Sharpness-Aware Minimization (SAM) based method, we conclude that trade-off performance correlates with both loss value and loss sharpness, and that each of them is indispensable. However, we find that the optimizing gradient of existing methods cannot maintain high relevance to both loss value and loss sharpness during optimization, which severely affects their trade-off performance. To this end, we propose a novel SAM-based method for prompt learning, denoted Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp), which dynamically constrains the optimizing gradient, thus achieving both optimization objectives simultaneously. Extensive experiments verify the effectiveness of GCSCoOp on the trade-off problem.
    Comment: 19 pages, 11 figures
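The vanilla SAM update the abstract refers to can be sketched on a toy problem (this shows only the generic SAM step, not the GCSCoOp gradient constraint; the quadratic loss and `rho` value are illustrative assumptions).

```python
import numpy as np

TARGET = np.array([1.0, -2.0])

def loss(w):            # toy quadratic loss with minimizer at TARGET
    return 0.5 * np.sum((w - TARGET) ** 2)

def grad(w):
    return w - TARGET

def sam_step(w, lr=0.1, rho=0.05):
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascend to the sharpest nearby point
    g_sharp = grad(w + eps)                      # gradient at the perturbed weights
    return w - lr * g_sharp                      # descend with the sharpness-aware gradient

w = np.zeros(2)
for _ in range(200):
    w = sam_step(w)
```

The point of the inner ascent step is that `g_sharp` reflects the loss in a neighborhood of `w`, biasing the descent toward flat minima; on this convex toy the iterate simply settles very close to the minimizer.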

    Fusion-Eval: Integrating Evaluators with LLMs

    Evaluating Large Language Models (LLMs) is a complex task, especially given the intricacies of natural language understanding and the expectations for high-level reasoning. Traditional evaluations typically lean on human-based, model-based, or automatic-metrics-based paradigms, each with its own advantages and shortcomings. We introduce "Fusion-Eval", a system that employs LLMs not solely for direct evaluations, but to skillfully integrate insights from diverse evaluators. This gives Fusion-Eval flexibility, enabling it to work effectively across diverse tasks and make optimal use of multiple references. In testing on the SummEval dataset, Fusion-Eval achieved a Spearman correlation of 0.96, outperforming other evaluators. The success of Fusion-Eval underscores the potential of LLMs to produce evaluations that closely align with human perspectives, setting a new standard in the field of LLM evaluation.
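The reported metric, Spearman correlation, is just the Pearson correlation of rank-transformed scores. A from-scratch sketch (the score vectors are made-up toy data, not SummEval values, and ties are not handled):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation (no tie handling in this sketch)."""
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks 0..n-1
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return (ra @ rb) / (np.linalg.norm(ra) * np.linalg.norm(rb))

human = np.array([4.0, 2.0, 5.0, 1.0, 3.0])  # hypothetical human quality ratings
model = np.array([3.9, 2.5, 4.8, 1.2, 3.1])  # hypothetical evaluator scores
```

Because `model` ranks the five items in the same order as `human`, `spearman(human, model)` is exactly 1.0; the metric rewards agreement in ordering, not in absolute score values, which is why it suits comparing heterogeneous evaluators.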

    What is the best way for extracting meaningful attributes from pictures?

    Automatic attribute discovery methods have gained popularity for extracting sets of visual attributes from images or videos for various tasks. Despite their good performance in some classification tasks, it is difficult to evaluate whether the attributes discovered by these methods are meaningful and which methods are the most appropriate for discovering attributes for visual descriptions. In its simplest form, such an evaluation can be performed by manually verifying whether there is any consistent identifiable visual concept distinguishing between positive and negative exemplars labelled by an attribute. This manual checking is tedious, expensive and labour intensive. In addition, comparisons between different methods can also be problematic, as it is not clear how one could quantitatively decide which attribute is more meaningful than the others. In this paper, we propose a novel attribute meaningfulness metric to address this challenging problem. With this metric, automatic quantitative evaluation can be performed on the attribute sets, thus reducing the enormous effort required for manual evaluation. The proposed metric is applied to some recent automatic attribute discovery and hashing methods on four attribute-labelled datasets. To further validate the efficacy of the proposed method, we conducted a user study. In addition, we also compared our metric with a semi-supervised attribute discovery method using the mixture of probabilistic PCA. In our evaluation, we gleaned several insights that could be beneficial in developing new automatic attribute discovery methods.

    Automatic image attribute selection for zero-shot learning of object categories

    Recently, the use of image attributes as image descriptors has drawn great attention. This is because the resulting descriptors extracted using these attributes are human understandable as well as machine readable. Although image attributes are generally semantically meaningful, they may not be discriminative. As such, prior works often consider a discriminative learning approach that can discover discriminative attributes. Nevertheless, the resulting learned attributes may lose their semantic meaning. To this end, in the present work, we study two properties of attributes: discriminative power and reliability. We then propose a novel greedy algorithm called Discriminative and Reliable Attribute Learning (DRAL), which selects a subset of attributes that maximises an objective function incorporating the two properties. We compare our proposed system to a recent state-of-the-art approach, Direct Attribute Prediction (DAP), on the zero-shot learning task on the Animals with Attributes (AwA) dataset. The results show that our proposed approach can achieve performance similar to this state-of-the-art approach while using a significantly smaller number of attributes.
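A greedy selection of this kind can be sketched schematically (the scoring terms, weights, and redundancy penalty below are illustrative assumptions, not DRAL's actual objective): at each step, add the attribute with the best combined discriminative-power and reliability score, penalized by similarity to attributes already chosen.

```python
import numpy as np

# Hypothetical greedy attribute selector in the spirit of DRAL.
# disc: per-attribute discriminative power; rel: per-attribute reliability;
# sim:  pairwise attribute similarity matrix (all toy inputs).
def greedy_select(disc, rel, sim, k, alpha=1.0, beta=1.0, gamma=0.5):
    chosen = []
    while len(chosen) < k:
        best, best_gain = None, -np.inf
        for i in range(len(disc)):
            if i in chosen:
                continue
            penalty = max((sim[i, j] for j in chosen), default=0.0)
            gain = alpha * disc[i] + beta * rel[i] - gamma * penalty
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
    return chosen

disc = np.array([0.9, 0.75, 0.7, 0.2])
rel  = np.array([0.5, 0.6, 0.9, 0.9])
sim  = np.eye(4)                 # toy case: attributes mutually non-redundant
selected = greedy_select(disc, rel, sim, 2)
```

With these toy scores the combined values are 1.4, 1.35, 1.6, and 1.1, so the selector picks attribute 2 first and then attribute 0; greedy selection trades optimality for a cost linear in `k` passes over the attribute pool.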