
    Measuring Re-identification Risk

    Compact user representations (such as embeddings) form the backbone of personalization services. In this work, we present a new theoretical framework to measure re-identification risk in such user representations. Our framework, based on hypothesis testing, formally bounds the probability that an attacker may be able to obtain the identity of a user from their representation. As an application, we show that our framework is general enough to model important real-world applications such as Chrome's Topics API for interest-based advertising. We complement our theoretical bounds with provably good attack algorithms for re-identification, which we use to estimate the re-identification risk in the Topics API. We believe this work provides a rigorous and interpretable notion of re-identification risk and a framework to measure it that can be used to inform real-world applications.
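
    As a rough illustration of the kind of matching attack the abstract alludes to (not the paper's own algorithm), the sketch below estimates re-identification risk empirically: the attacker holds reference embeddings already linked to user identities, later receives released representations, and matches each one to its most similar reference vector. The names reference and released are placeholders introduced here for illustration.

        import numpy as np

        def nearest_neighbor_reidentification(reference, released):
            # reference[i]: embedding the attacker has previously linked to user i.
            # released[i]:  representation later published for the same user i.
            # The attacker matches each released vector to its most similar
            # reference vector; the fraction of correct matches is an empirical
            # estimate of re-identification risk.
            ref = reference / np.linalg.norm(reference, axis=1, keepdims=True)
            rel = released / np.linalg.norm(released, axis=1, keepdims=True)
            guesses = np.argmax(rel @ ref.T, axis=1)
            return float(np.mean(guesses == np.arange(len(released))))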

    ANALYSIS OF MATHEMATICAL REPRESENTATION ABILITY IN TERMS OF METACOGNITIVE ABILITY

    Mathematical representation ability is influenced by metacognitive ability. This study aims to describe students' mathematical representation ability in terms of their metacognitive abilities. The research was conducted at a junior high school in Karawang Regency with 39 class VIII students as research subjects. The research method was a qualitative approach of the phenomenological type. Data were obtained from measurement instruments consisting of a mathematical representation ability test, a metacognitive ability questionnaire, and interview guidelines, and were analyzed using Miles and Huberman's method of data reduction, data presentation, and drawing conclusions. Based on the results of the data analysis, it can be concluded that students with high metacognitive ability fulfill the indicators of symbolic and pictorial representation very well and verbal representation fairly well; students with moderate metacognitive ability fulfill the indicator of pictorial representation well and the symbolic and verbal indicators fairly well; and students with low metacognitive ability fulfill the symbolic, pictorial, and verbal indicators fairly well.

    On numerical approximation of an optimal control problem in linear elasticity

    In this paper we apply optimal control theory to a linear elasticity problem. An iterative method based on the optimality system characterizing the minimization of a cost functional is proposed. Convergence of the approximate solutions is proved provided that a penalization parameter is not too small. Numerical solutions are presented to emphasize the role of this parameter. It is shown that the results are far from being good approximations of the expected ones, because the parameter cannot be taken small enough in the iterative method. On the other hand, numerical results from a spectral analysis, obtained by using eigenfunction representations, do not suffer from this limitation.
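
    For context (this is a generic formulation, not necessarily the exact functional used in the paper), a penalized optimal control problem in linear elasticity typically minimizes a tracking cost plus a penalization term whose weight plays the role of the parameter discussed above:

        \min_{f}\; J(u,f) \;=\; \tfrac{1}{2}\,\|u - u_d\|_{L^2(\Omega)}^{2}
            \;+\; \tfrac{\varepsilon}{2}\,\|f\|_{L^2(\Omega)}^{2},
        \qquad \text{subject to}\quad -\operatorname{div}\,\sigma(u) = f \ \text{in}\ \Omega,
        \quad u = 0 \ \text{on}\ \partial\Omega,

    where u_d is a prescribed target displacement, sigma(u) is the stress tensor, and epsilon is the penalization parameter; here u_d, sigma, and epsilon are illustrative notation rather than the paper's.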

    Generative or Contrastive? Phrase Reconstruction for Better Sentence Representation Learning

    Though offering impressive contextualized token-level representations, current pre-trained language models pay little attention to acquiring sentence-level representations during their self-supervised pre-training. If self-supervised learning is divided into two subcategories, generative and contrastive, then most existing studies show that sentence representation learning benefits more from contrastive methods than from generative ones. However, contrastive learning does not combine well with the common token-level generative self-supervised objectives, and it does not guarantee good performance on downstream semantic retrieval tasks. To alleviate these inconveniences, we instead propose a novel generative self-supervised learning objective based on phrase reconstruction. Empirical studies show that our generative objective can yield sufficiently powerful sentence representations, achieving performance on Semantic Textual Similarity (STS) tasks on par with contrastive learning. Furthermore, in the unsupervised setting, our generative method outperforms the previous state of the art, SimCSE, on the benchmark of downstream semantic retrieval tasks. Comment: Preprint
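
    Since the abstract reports results on Semantic Textual Similarity (STS) tasks, a minimal sketch of how such tasks are usually scored may be helpful: cosine similarities between paired sentence embeddings are correlated with human judgements via Spearman's rank correlation. The arrays emb_a, emb_b, and gold_scores are illustrative placeholders; this is the standard evaluation recipe, not code from the paper.

        import numpy as np
        from scipy.stats import spearmanr

        def sts_spearman(emb_a, emb_b, gold_scores):
            # Cosine similarity between each pair of sentence embeddings.
            a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
            b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
            cos = np.sum(a * b, axis=1)
            # STS benchmarks report Spearman correlation with human ratings.
            return spearmanr(cos, gold_scores).correlation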

    A Simple Baseline that Questions the Use of Pretrained-Models in Continual Learning

    With the success of pretraining techniques in representation learning, a number of continual learning methods based on pretrained models have been proposed. Some of these methods design continual learning mechanisms on top of the pre-trained representations and allow only minimal updates, or even no updates, of the backbone model during continual learning. In this paper, we question whether the complexity of these models is needed to achieve good performance by comparing them to a simple baseline of our own design. We argue that the pretrained feature extractor itself can be strong enough to achieve competitive or even better continual learning performance on the Split-CIFAR100 and CORe50 benchmarks. To validate this, we construct a very simple baseline that 1) uses the frozen pretrained model to extract image features for every class encountered during the continual learning stage and computes their corresponding mean features on the training data, and 2) predicts the class of an input from the nearest-neighbor distance between the test sample and the mean features of the classes, i.e., a Nearest Mean Classifier (NMC). This baseline is single-headed, exemplar-free, and can be task-free (by updating the means continually). It achieves 88.53% on 10-Split-CIFAR-100, surpassing most state-of-the-art continual learning methods that are initialized with the same pretrained transformer model. We hope our baseline encourages future progress in designing learning systems that can continually improve the learned representations even when starting from pretrained weights. Comment: 6 pages, under review, code available at https://github.com/Pauljanson002/pretrained-cl.gi
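
    The Nearest Mean Classifier baseline described above is simple enough to sketch directly. The following is a minimal, illustrative PyTorch version (backbone, train_loader, and test_loader are assumed placeholders; this is not the authors' released code, which lives at the repository linked in the abstract).

        import torch

        @torch.no_grad()
        def nmc_accuracy(backbone, train_loader, test_loader, device="cpu"):
            # 1) Extract features with the frozen pretrained backbone and keep a
            #    running mean feature vector for every class seen so far.
            backbone.eval().to(device)
            sums, counts = {}, {}
            for x, y in train_loader:
                feats = backbone(x.to(device))
                for f, label in zip(feats, y.tolist()):
                    sums[label] = sums.get(label, 0) + f
                    counts[label] = counts.get(label, 0) + 1
            labels = sorted(sums)
            means = torch.stack([sums[c] / counts[c] for c in labels])

            # 2) Classify each test sample by its nearest class mean.
            correct, total = 0, 0
            for x, y in test_loader:
                feats = backbone(x.to(device))
                pred = torch.cdist(feats, means).argmin(dim=1)
                correct += sum(labels[p] == t for p, t in zip(pred.tolist(), y.tolist()))
                total += y.numel()
            return correct / total

    Because only the class means are stored, the classifier needs no exemplars and no task identity at test time, which is what makes the baseline single-headed and task-free.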