
    On the ERM Principle with Networked Data

    Networked data, in which every training example involves two objects and may share some common objects with others, is used in many machine learning tasks such as learning to rank and link prediction. A challenge of learning from networked examples is that target values are not known for some pairs of objects. In this case, neither the classical i.i.d. assumption nor techniques based on complete U-statistics can be used. Most existing theoretical results for this problem only deal with the classical empirical risk minimization (ERM) principle, which always weights every example equally, but this strategy leads to unsatisfactory bounds. We consider general weighted ERM and show new universal risk bounds for this problem. These new bounds naturally define an optimization problem whose solution yields appropriate weights for networked examples. Though this optimization problem is not convex in general, we devise a new fully polynomial-time approximation scheme (FPTAS) to solve it. (Comment: accepted by AAAI. arXiv admin note: substantial text overlap with arXiv:math/0702683 by another author.)
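    As a minimal illustration of weighted ERM on networked examples, the sketch below gives each pair-example the heuristic weight 1/max(deg(u), deg(v)), so examples that reuse heavily shared objects count less. This is a hand-rolled stand-in for the optimized weights the abstract describes, not the paper's FPTAS; the function names are illustrative.

    ```python
    from collections import Counter

    import numpy as np

    def networked_weights(pairs):
        """Heuristic weights for networked examples: example (u, v) gets
        weight 1 / max(deg(u), deg(v)), where deg counts how many
        examples touch each object."""
        deg = Counter()
        for u, v in pairs:
            deg[u] += 1
            deg[v] += 1
        return np.array([1.0 / max(deg[u], deg[v]) for u, v in pairs])

    def weighted_risk(losses, weights):
        """Weighted empirical risk: normalized weights dotted with
        per-example losses. Uniform weights recover classical ERM."""
        w = np.asarray(weights, dtype=float)
        return float(np.dot(w / w.sum(), losses))
    ```

    With three examples where object "a" appears twice, the shared examples are downweighted relative to the independent one.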

    Research on the Problems and Countermeasures of Family Education for Left-Behind Children in Rural Areas

    The family education of left-behind children in rural areas is an important issue within family education and a social problem that cannot be ignored in the process of China's urbanization. This article used survey, interview, and comparative methods to collect data on left-behind children in rural areas. Roughly five types of guardianship arrangements shape the family education of these children, and several causes of the problem are identified: the absence of the main providers of family education, and an emphasis on upbringing over education in intergenerational care; outdated educational concepts and inadequate methods among caregivers; a weak family-education environment and abnormal parental behavior; and weak awareness of home-school cooperation with limited cooperative activities. Solutions are proposed accordingly: clarify family-education responsibilities, establish good home-school connections, build a high-quality educational environment, achieve home-school cooperation, and implement scientific education models to ensure the healthy growth and development of left-behind children in rural areas.

    Lifted Algorithms for Symmetric Weighted First-Order Model Sampling

    Weighted model counting (WMC) is the task of computing the weighted sum of all satisfying assignments (i.e., models) of a propositional formula. Similarly, weighted model sampling (WMS) aims to randomly generate models with probability proportional to their respective weights. Both WMC and WMS are hard to solve exactly, falling under the #P-hard complexity class. However, the counting problem may sometimes be tractable, if the propositional formula can be compactly represented and expressed in first-order logic. In such cases, model counting problems can be solved in time polynomial in the domain size, and are known as domain-liftable. The following question then arises: is this also the case for weighted model sampling? This paper addresses this question and answers it affirmatively. Specifically, we prove domain-liftability under sampling for the two-variable fragment of first-order logic with counting quantifiers, by devising an efficient sampling algorithm for this fragment that runs in time polynomial in the domain size. We then further show that this result continues to hold even in the presence of cardinality constraints. To empirically verify our approach, we conduct experiments over various first-order formulas designed for the uniform generation of combinatorial structures and for sampling in statistical-relational models. The results demonstrate that our algorithm outperforms a state-of-the-art WMS sampler by a substantial margin, confirming the theoretical results. (Comment: 47 pages, 6 figures. An expanded version of "On exact sampling in the two-variable fragment of first-order logic" from LICS 2023, submitted to AIJ. arXiv admin note: substantial text overlap with arXiv:2302.0273.)
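    To make the WMC task concrete, here is a naive exact counter by full enumeration over all 2^n assignments, which is exactly the exponential cost that lifted, domain-polynomial algorithms avoid. The DIMACS-style CNF encoding (signed integers per clause) and the per-variable weight pairs are illustrative assumptions, not the paper's representation.

    ```python
    from itertools import product

    def wmc(clauses, weights, n_vars):
        """Naive weighted model counting by full enumeration.
        clauses: CNF as a list of clauses, each a list of signed ints
                 (positive literal i means variable i is true).
        weights: dict mapping variable i to (weight_if_false, weight_if_true).
        Returns the weighted sum over all satisfying assignments."""
        total = 0.0
        for assignment in product([False, True], repeat=n_vars):
            # A clause is satisfied if some literal agrees with the assignment.
            if all(any(assignment[abs(l) - 1] == (l > 0) for l in c)
                   for c in clauses):
                w = 1.0
                for v in range(1, n_vars + 1):
                    w *= weights[v][assignment[v - 1]]
                total += w
        return total
    ```

    With unit weights this reduces to plain model counting, e.g. the clause (x1 OR x2) has three models.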

    Dimension Estimation Using Weighted Correlation Dimension Method

    Dimension reduction is an important tool for feature extraction and has been widely used in many fields, including image processing, discrete-time systems, and fault diagnosis. As a key parameter of dimension reduction, the intrinsic dimension is the smallest number of variables needed to describe a dataset completely. Among dimension estimation methods, the correlation dimension (CD) method is one of the most popular; it assumes that every point contributes identically to the intrinsic dimension estimate. However, this does not hold when the distribution of a dataset is nonuniform: estimates from high-density areas are more reliable than those from low-density or boundary areas. In this paper, a novel weighted correlation dimension (WCD) approach is proposed. The vertex degree of an undirected graph is used to measure the contribution of each point to the intrinsic dimension estimate. To improve the adaptability of WCD estimation, the k-means clustering algorithm is adopted to adaptively select the linear portion of the log-log sequence (log δ_k, log C(n, δ_k)). Various factors that affect the performance of WCD are studied. Experiments on synthetic and real datasets show the validity and advantages of the developed technique.
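    The classical (unweighted) CD estimator that WCD builds on can be sketched as follows: compute the correlation sum C(n, δ_k) over a range of scales δ_k and fit the slope of the resulting log-log curve. The fixed radius grid and plain least-squares fit here are illustrative choices, not the paper's weighted, k-means-based selection of the linear portion.

    ```python
    import numpy as np

    def correlation_dimension(X, radii):
        """Classical correlation-dimension estimate.
        X: (n, d) array of points; radii: array of scales delta_k.
        C(n, delta) is the fraction of point pairs closer than delta;
        the intrinsic dimension is the slope of log C vs log delta."""
        n = len(X)
        dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        pair_d = dists[np.triu_indices(n, 1)]          # distinct pairs only
        log_c = []
        for r in radii:
            c = 2.0 * np.sum(pair_d < r) / (n * (n - 1))
            log_c.append(np.log(max(c, 1e-12)))        # guard empty scales
        slope, _ = np.polyfit(np.log(radii), log_c, 1)
        return slope
    ```

    For points sampled along a line embedded in 2-D, the estimate comes out close to 1, as expected.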

    The control strength quantification analysis of outer pendulum rod for double inverted pendulum

    Because of the complexity of the dynamics of an inverted pendulum, and because linearized analysis cannot satisfy the control requirements, a nonlinear dynamics analysis method is proposed. By decoupling the dynamics model of a double inverted pendulum, the motion equation of the outer pendulum rod is derived. Then, based on the control strength function of the outer pendulum rod, the qualitative and quantitative relationship between the spatial position of the pendulum rod and the control strength of the outer rod, and the quantitative relationship between the dynamics parameters and the control strength of the outer rod, are analyzed separately. Simulation verifies the correctness of the analysis.

    Generative Noisy-Label Learning by Implicit Discriminative Approximation with Partial Label Prior

    Learning with noisy labels has been addressed with both discriminative and generative models. Although discriminative models have dominated the field due to their simpler modeling and more efficient computational training processes, generative models offer a more effective means of disentangling clean and noisy labels and improving the estimation of the label transition matrix. However, generative approaches maximize the joint likelihood of noisy labels and data using a complex formulation that only indirectly optimizes the model of interest associating data and clean labels. Additionally, these approaches rely on generative models that are challenging to train and tend to use uninformative clean-label priors. In this paper, we propose a new generative noisy-label learning approach that addresses these three issues. First, we propose a new model optimisation that directly associates data and clean labels. Second, the generative model is implicitly estimated using a discriminative model, eliminating the inefficient training of a generative model. Third, we propose a new informative label prior, inspired by partial label learning, as a supervision signal for noisy-label learning. Extensive experiments on several noisy-label benchmarks demonstrate that our generative model provides state-of-the-art results while maintaining a similar computational complexity to discriminative models.

    A Closer Look at Audio-Visual Semantic Segmentation

    Audio-visual segmentation (AVS) is a complex task that involves accurately segmenting the corresponding sounding object based on audio-visual queries. Successful audio-visual learning requires two essential components: 1) an unbiased dataset with high-quality pixel-level multi-class labels, and 2) a model capable of effectively linking audio information with its corresponding visual object. However, these two requirements are only partially addressed by current methods, with training sets containing biased audio-visual data, and models that generalise poorly beyond this biased training set. In this work, we propose a new strategy to build cost-effective and relatively unbiased audio-visual semantic segmentation benchmarks. Our strategy, called Visual Post-production (VPO), explores the observation that it is not necessary to have explicit audio-visual pairs extracted from single video sources to build such benchmarks. We also refine the previously proposed AVSBench to transform it into the audio-visual semantic segmentation benchmark AVSBench-Single+. Furthermore, this paper introduces a new pixel-wise audio-visual contrastive learning method to enable a better generalisation of the model beyond the training set. We verify the validity of the VPO strategy by showing that state-of-the-art (SOTA) models trained with datasets built by matching audio and visual data from different sources or with datasets containing audio and visual data from the same video source produce almost the same accuracy. Then, using the proposed VPO benchmarks and AVSBench-Single+, we show that our method produces more accurate audio-visual semantic segmentation than SOTA models. Code and dataset will be available

    Silica-supported quinolinium tribromide: a recoverable solid brominating reagent for regioselective monobromination of aromatic amines

    Silica-supported quinolinium tribromide was synthesized and found to be an efficient, stable, and recoverable solid brominating reagent for the regioselective monobromination of aromatic amines. This protocol offers high yields, mild conditions, and a simple work-up procedure.

    Improved Visual Fine-tuning with Natural Language Supervision

    Fine-tuning a visual pre-trained model can leverage the semantic information from large-scale pre-training data and mitigate the over-fitting problem on downstream vision tasks with limited training examples. While the problem of catastrophic forgetting in the pre-trained backbone has been extensively studied for fine-tuning, its potential bias from the corresponding pre-training task and data attracts less attention. In this work, we investigate this problem by demonstrating that the classifier obtained after fine-tuning will be close to that induced by the pre-trained model. To reduce this bias in the classifier effectively, we introduce a reference distribution obtained from a fixed text classifier, which can help regularize the learned vision classifier. The proposed method, Text Supervised fine-tuning (TeS), is evaluated with diverse pre-trained vision models including ResNet and ViT, and text encoders including BERT and CLIP, on 11 downstream tasks. The consistent improvement with a clear margin over distinct scenarios confirms the effectiveness of our proposal. Code is available at https://github.com/idstcv/TeS. (Comment: accepted by ICCV'2)
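    A hedged sketch of the kind of text-supervised regularization described above: standard cross-entropy plus a KL term pulling the vision classifier's predictions toward a fixed text-derived reference distribution. The KL form, the `alpha` weight, and the function names are illustrative assumptions, not the exact TeS objective.

    ```python
    import numpy as np

    def softmax(z):
        """Numerically stable softmax along the last axis."""
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def text_regularized_loss(vision_logits, labels, text_logits, alpha=0.1):
        """Cross-entropy on the vision classifier plus a KL penalty
        toward the (frozen) text classifier's distribution.
        vision_logits: (n, k) trainable logits; labels: (n,) int classes;
        text_logits: (n, k) fixed reference logits; alpha: penalty weight."""
        p = softmax(vision_logits)
        n = len(labels)
        ce = -np.log(p[np.arange(n), labels]).mean()
        ref = softmax(text_logits)                     # fixed reference
        kl = (ref * (np.log(ref) - np.log(p))).sum(axis=-1).mean()
        return float(ce + alpha * kl)
    ```

    When the vision logits already match the reference (e.g. both uniform), the KL term vanishes and only the cross-entropy remains.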