
    Comparing two groups of ranked objects by pairwise matching

    Get PDF
    Let Γ_X = (X′_(1), X′_(2), …, X′_(n)) and Γ_Y = (Y′_(1), Y′_(2), …, Y′_(n)) be two groups of stochastically increasing rv's, which can represent, say, the increasing strengths of the members of two chess teams or two tennis teams. Let π = (π_1, π_2, …, π_n) be a permutation of (1, 2, …, n). Then the statistic S(π) = Σ_{i=1}^n I(Y′_(i) > X′_(π_i)) measures the performance of Γ_Y over Γ_X under the permutation or matching π, where I(y > x) is an indicator function. We are interested in the relationship between π and ES(π) = Σ_{i=1}^n P(Y′_(i) > X′_(π_i)), especially in comparing ES(π) when π = (1, 2, …, n), corresponding to ordered matching, and when π is randomly chosen. The probabilities P(Y′_(i) > X′_(i)) are of interest in themselves. A class of special matchings called 'fair matchings' is emphasized: a matching π is fair if ES(π) = n/2 when Γ_X ~ Γ_Y. Simple matching and symmetric matching, which are fair under certain conditions, are also defined. The problems are investigated under two models, the order statistics model and the linear preference model.

    In the order statistics model, we assume that X′_(i) and Y′_(i) have the same marginal distributions as X_(i) and Y_(i), the i-th order statistics in two random samples of size n from F(x) and G(x), respectively. In this case ES(π) = Σ_{i=1}^n P(Y_(i) > X_(π_i)), and when π is randomly chosen ES(π) = Σ_{i=1}^n P(Y_i > X_i), where (X_1, X_2, …, X_n) and (Y_1, Y_2, …, Y_n) are random samples from F(x) and G(x), respectively. If G(x) = F(x − μ) with μ ≥ 0, then it is shown that Σ_{i=1}^n P(Y_(i) > X_(i)) ≥ Σ_{i=1}^n P(Y_i > X_i). Moreover, Σ_{i=1}^n P(Y_(i) > X_(i)) ≥ ES(π) for any simple matching π; if F(x) is the distribution function of a symmetric rv, this inequality also holds for any symmetric matching π.

    In the linear preference model, we assume that X′_(i) ~ F(x − λ_(i)) and Y′_(i) ~ F(x − μ_(i)) for i = 1, 2, …, n, where F(x) is a unimodal distribution function, λ_(1) ≤ λ_(2) ≤ … ≤ λ_(n), and μ_(1) ≤ μ_(2) ≤ … ≤ μ_(n). If U(x) is the cdf of Z_1 − Z_2, where Z_1 and Z_2 are iid with cdf F(x), the expectation of S(π) can be written as ES(π) = Σ_{i=1}^n U(μ_(i) − λ_(π_i)). Under certain conditions, we obtain results similar to those in the order statistics model. We also obtain some other miscellaneous results about ordered matching, as well as maximization and rearrangement properties of ES(π). In both models, the case with ties permitted is also considered.
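The stated inequality for the order statistics model, Σ P(Y_(i) > X_(i)) ≥ Σ P(Y_i > X_i), can be checked empirically. Below is a minimal Monte Carlo sketch (not from the thesis) that assumes F is standard normal and G(x) = F(x − μ), and estimates ES(π) for ordered matching versus a random matching:

```python
import random

def es_ordered_vs_random(n=5, mu=0.5, trials=20000, seed=0):
    """Monte Carlo estimates of ES(pi) under the order statistics model,
    with F = N(0, 1) and G(x) = F(x - mu), for:
      - ordered matching: i-th order statistic of Y vs i-th of X;
      - random matching: equivalent to pairing the unordered samples."""
    rng = random.Random(seed)
    ordered_wins = 0
    random_wins = 0
    for _ in range(trials):
        xs = sorted(rng.gauss(0, 1) for _ in range(n))
        ys = sorted(rng.gauss(mu, 1) for _ in range(n))
        # Ordered matching: compare matched order statistics.
        ordered_wins += sum(y > x for x, y in zip(xs, ys))
        # Random matching: shuffle one side, which averages over permutations.
        rng.shuffle(xs)
        random_wins += sum(y > x for x, y in zip(xs, ys))
    return ordered_wins / trials, random_wins / trials
```

With μ = 0.5 and n = 5, the ordered-matching estimate comes out clearly larger than the random-matching one, consistent with the theorem.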

    Designing intelligent language tutoring system for learning Chinese characters

    Get PDF
    The purposes of this research are to explore 1) the design and usability of the interface of an intelligent tutoring system for recognition of Chinese characters, and 2) the pedagogical effectiveness of different forms of information presentation and feedback. A prototype system (an iPad Chinese character tutor) was developed and evaluated for its effectiveness and usability. In the evaluation test, two groups were given 34 Chinese characters and phrases to learn using two different versions of the system: Version A contained a metaphor-based pedagogy, feedback, and extra instructions; Version B did not. Participants' learning performance and survey results were used to measure the effectiveness and usability of the system. Learning performance of the group who used Version A was statistically significantly better than that of the Version B group, and surveyed participants rated Version A significantly higher than Version B on several constructs, including usability, satisfaction, functionality, and usefulness. This study lays the foundation for the development of an Intelligent Tutoring System (ITS) for learning Chinese.

    A Comparative Study on the Herd Behavior of Chinese Equity and Partial Equity Hybrid Funds-Empirical Analysis Based on Market Fluctuations

    Get PDF
    This paper uses the LSV model and the VOL volatility index, together with the quarterly position data of equity funds and partial equity hybrid funds from the first quarter of 2007 to the fourth quarter of 2019, to conduct an empirical study of the herd behavior of both kinds of funds, and then relates the results to the volatility of the Shanghai Composite Index over the same period. The results show that the overall trends of herd behavior in equity funds and partial equity hybrid funds are almost completely opposite: equity funds show stronger herd behavior in buying, while partial equity hybrid funds show stronger herd behavior in selling. Meanwhile, when the volatility of the Shanghai Composite Index increases significantly, herd behavior in selling increases for both kinds of funds.
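For context, the standard LSV herding measure (which this line of research builds on) is HM = |B/(B+S) − p| − AF, where B and S are the numbers of funds buying and selling a stock in a quarter, p is the period-wide expected buy proportion, and AF is the expectation of the raw measure under a no-herding binomial null. A minimal sketch of that textbook formula (not the paper's full procedure, which uses fund position data not shown here):

```python
from math import comb

def lsv_herding(buys, sells, p):
    """LSV herding measure for one stock-quarter:
    HM = |B/(B+S) - p| - AF, with AF = E|k/(B+S) - p| for k ~ Binomial(B+S, p),
    so that HM is approximately zero when trades are independent."""
    n = buys + sells
    raw = abs(buys / n - p)
    # Adjustment factor: expected value of the raw measure under no herding.
    af = sum(comb(n, k) * p**k * (1 - p)**(n - k) * abs(k / n - p)
             for k in range(n + 1))
    return raw - af
```

A positive value indicates trading more one-sided than chance would produce (herding); a balanced buy/sell split yields a slightly negative value because AF > 0.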

    3D Printing of Scaffolds for Tissue Engineering

    Get PDF
    Three-dimensional (3D) printing has demonstrated great potential in producing functional scaffolds for biomedical applications. To facilitate tissue regeneration, scaffolds need to be designed to provide a suitable environment for cell growth, which generally depends on the selection of materials and on geometrical features such as internal structures and pore size distribution. Matching the mechanical properties of the original tissue to be repaired is also critical. In this chapter, the specific requirements on materials and structures for tissue engineering are briefly reviewed, followed by an overview of recent research in 3D printing technologies for tissue engineering and a discussion of possible future directions in this area.

    Dust-acoustic waves and stability in the permeating dusty plasma: II. Power-law distributions

    Full text link
    The dust-acoustic waves, and their stability driven by a flowing dusty plasma as it crosses through a static (target) dusty plasma (the so-called permeating dusty plasma), are investigated when the components of the dusty plasma obey the power-law q-distributions of nonextensive statistics. The frequency, the growth rate, and the stability condition of the dust-acoustic waves are derived under this physical situation, expressing the effects of the nonextensivity as well as of the flowing dusty plasma velocity on the dust-acoustic waves in this dusty plasma. The numerical results illustrate some new characteristics of the dust-acoustic waves, which differ from those in the permeating dusty plasma whose components follow the Maxwellian distribution. In addition, we show that the flowing dusty plasma velocity has a significant effect on the dust-acoustic waves in the permeating dusty plasma with the power-law q-distribution.
    Comment: 20 pages, 10 figures, 41 references
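The power-law q-distribution referred to here is the Tsallis q-Gaussian velocity distribution, which reduces to the Maxwellian as q → 1. A minimal sketch of its (unnormalized) generic form, as an illustration only (the paper's dusty-plasma dispersion relations are not reproduced):

```python
import math

def q_gaussian_unnormalized(v, q, vt=1.0):
    """Unnormalized power-law q-distribution of velocities:
        f_q(v) ∝ [1 - (1 - q) v^2 / (2 vt^2)]^(1/(1-q)),
    with vt the thermal speed. As q → 1 this tends to the
    Maxwellian exp(-v^2 / (2 vt^2))."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(-v * v / (2 * vt * vt))
    base = 1.0 - (1.0 - q) * v * v / (2 * vt * vt)
    if base <= 0.0:
        # Support cutoff: for q < 1 the distribution vanishes at large |v|.
        return 0.0
    return base ** (1.0 / (1.0 - q))
```

For q > 1 the bracket stays positive for all v, giving the characteristic power-law (suprathermal) tail; for q < 1 the distribution has a finite velocity cutoff.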

    Perfect Alignment May be Poisonous to Graph Contrastive Learning

    Full text link
    Graph Contrastive Learning (GCL) aims to learn node representations by aligning positive pairs and separating negative ones. However, limited research has been conducted on the inner law behind the specific augmentations used in graph-based learning: what kind of augmentation helps downstream performance, how does contrastive learning actually influence downstream tasks, and why does the magnitude of augmentation matter? This paper seeks to address these questions by establishing a connection between augmentation and downstream performance, and by investigating the generalization of contrastive learning. Our findings reveal that GCL contributes to downstream tasks mainly by separating different classes rather than by gathering nodes of the same class, so perfect alignment and augmentation overlap, which draw all intra-class samples to the same point, cannot explain the success of contrastive learning. To understand how augmentation aids the contrastive learning process, we further investigate its generalization, finding that perfect alignment, which draws each positive pair to the same representation, can help the contrastive loss but is poisonous to generalization; on the contrary, imperfect alignment enhances the model's generalization ability. We analyse these results through information theory and graph spectral theory respectively, and propose two simple but effective methods to verify the theories. The two methods can easily be applied to various GCL algorithms, and extensive experiments are conducted to prove their effectiveness.
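The "alignment" the abstract discusses is commonly quantified as the mean distance between embeddings of positive pairs; "perfect alignment" corresponds to this quantity being driven to zero. A minimal sketch of that standard metric (an illustrative assumption, not the paper's own code):

```python
def alignment(pos_pairs):
    """Mean squared Euclidean distance between embeddings of positive pairs.
    pos_pairs: list of (u, v) tuples, where u and v are equal-length
    embedding vectors of the two augmented views of the same node.
    Zero means every positive pair is mapped to the same point."""
    total = 0.0
    for u, v in pos_pairs:
        total += sum((a - b) ** 2 for a, b in zip(u, v))
    return total / len(pos_pairs)
```

Lower is "better aligned"; the abstract's claim is that pushing this all the way to zero helps the contrastive loss yet hurts generalization.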

    Efficient HDR Reconstruction from Real-World Raw Images

    Full text link
    High dynamic range (HDR) imaging is still a significant yet challenging problem due to the limited dynamic range of generic image sensors. Most existing learning-based HDR reconstruction methods take a set of bracketed-exposure sRGB images to extend the dynamic range, and thus are computational- and memory-inefficient by requiring the Image Signal Processor (ISP) to produce multiple sRGB images from the raw ones. In this paper, we propose to broaden the dynamic range from the raw inputs and perform only one ISP processing for the reconstructed HDR raw image. Our key insights are threefold: (1) we design a new computational raw HDR data formation pipeline and construct the first real-world raw HDR dataset, RealRaw-HDR; (2) we develop a lightweight-efficient HDR model, RepUNet, using the structural re-parameterization technique; (3) we propose a plug-and-play motion alignment loss to mitigate motion misalignment between short- and long-exposure images. Extensive experiments demonstrate that our approach achieves state-of-the-art performance in both visual quality and quantitative metrics

    Discriminating Color Faces For Recognition

    Get PDF

    Learning to Sample: an Active Learning Framework

    Full text link
    Meta-learning algorithms for active learning are emerging as a promising paradigm for learning the "best" active learning strategy. However, current learning-based active learning approaches still require sufficient training data to generalize meta-learning models for active learning. This is contrary to the nature of active learning, which typically starts with a small number of labeled samples. The unavailability of large amounts of labeled samples for training meta-learning models inevitably leads to poor performance (e.g., instabilities and overfitting). In this paper, we tackle these issues by proposing a novel learning-based active learning framework, called Learning To Sample (LTS). This framework has two key components: a sampling model and a boosting model, which mutually learn from each other in iterations to improve each other's performance. Within this framework, the sampling model incorporates uncertainty sampling and diversity sampling into a unified optimization process, enabling us to actively select the most representative and informative samples based on an optimized integration of uncertainty and diversity. To evaluate the effectiveness of the LTS framework, we have conducted extensive experiments on three different classification tasks: image classification, salary level prediction, and entity resolution. The experimental results show that our LTS framework significantly outperforms all the baselines when the label budget is limited, especially for datasets with highly imbalanced classes. In addition, our LTS framework can effectively tackle the cold start problem occurring in many existing active learning approaches.
    Comment: Accepted by ICDM'1
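To make the idea of combining uncertainty and diversity sampling concrete, here is a fixed-weight greedy sketch. This is not the LTS method itself, whose sampling model is learned; the helper names and the `alpha` trade-off knob are hypothetical, for illustration only:

```python
def select_batch(candidates, embed, uncertainty, k, alpha=0.5):
    """Greedily pick k samples scoring a weighted mix of
    uncertainty(x) and diversity (distance to already-chosen samples).
    embed(x) -> feature vector; uncertainty(x) -> float (higher = less certain)."""
    chosen = []
    pool = list(candidates)
    for _ in range(min(k, len(pool))):
        def score(x):
            # Diversity: distance to the nearest already-chosen sample.
            div = min((sum((a - b) ** 2 for a, b in zip(embed(x), embed(c))) ** 0.5
                       for c in chosen), default=1.0)
            return alpha * uncertainty(x) + (1 - alpha) * div
        best = max(pool, key=score)
        pool.remove(best)
        chosen.append(best)
    return chosen
```

Setting alpha to 1 recovers pure uncertainty sampling; alpha = 0 gives pure diversity (farthest-point) sampling. LTS instead learns how to weight and integrate the two signals.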