1,737 research outputs found

    Multi-bump solutions of $-\Delta u=K(x)u^{\frac{n+2}{n-2}}$ on lattices in $\mathbb{R}^n$

    We consider a critical-exponent semi-linear elliptic equation with coefficient K(x) periodic in its first k variables, with 2k smaller than n-2. Under some natural conditions on K near a critical point, we prove the existence of multi-bump solutions whose bump centers can be placed on certain lattices in R^k, including infinite lattices. We also show that for 2k greater than or equal to n-2, no such solutions exist. Comment: Final version. Some typos corrected. To appear in Journal für die reine und angewandte Mathematik (Crelle's Journal).
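    For reference, a display-form restatement of the problem and the dichotomy summarized in the abstract (nothing beyond the abstract is assumed):

```latex
\[
  -\Delta u \;=\; K(x)\,u^{\frac{n+2}{n-2}} \quad \text{in } \mathbb{R}^n,
  \qquad K \ \text{periodic in } x_1,\dots,x_k .
\]
% Multi-bump solutions with bump centers on (possibly infinite) lattices in R^k
% exist when 2k < n-2; no such solutions exist when 2k >= n-2.
```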

    Development of a sub-millimeter position sensitive gas detector

    A position-sensitive thin gap chamber has been developed, and its position resolution was measured using cosmic muons. This paper presents the structure of the detector, the position resolution measurement method, and the results.

    Test of the prototype of electron detector for LHAASO project using cosmic rays

    The LHAASO project is to be built in southwest China and will use an array of 5137 electron detectors to measure the incident electrons arriving at the detector plane. For quality control of this large quantity of electron detectors, a cosmic-ray hodoscope with two-dimensional spatial sensitivity and good time resolution has been developed. The first prototype electron detector was tested with the hodoscope, and its performance was validated to be consistent with the design. Comment: submitted to Chinese Physics C. arXiv admin note: substantial text overlap with arXiv:1308.575

    DeepRank: A New Deep Architecture for Relevance Ranking in Information Retrieval

    This paper concerns a deep learning approach to relevance ranking in information retrieval (IR). Existing deep IR models such as DSSM and CDSSM directly apply neural networks to generate ranking scores, without an explicit understanding of relevance. In the human judgment process, a relevance label is generated in three steps: 1) relevant locations are detected, 2) local relevances are determined, and 3) local relevances are aggregated to output the relevance label. In this paper we propose a new deep learning architecture, namely DeepRank, to simulate this human judgment process. First, a detection strategy is designed to extract the relevant contexts. Then, a measure network is applied to determine the local relevances, using a convolutional neural network (CNN) or two-dimensional gated recurrent units (2D-GRU). Finally, an aggregation network with sequential integration and a term gating mechanism is used to produce a global relevance score. DeepRank captures important IR characteristics, including exact/semantic matching signals, proximity heuristics, query term importance, and diverse relevance requirements. Experiments on both the benchmark LETOR dataset and large-scale clickthrough data show that DeepRank can significantly outperform learning-to-rank methods and existing deep learning methods. Comment: Published as a conference paper at CIKM 2017, CIKM'17, November 6--10, 2017, Singapore. TextNet (https://github.com/pl8787/textnet-release), PyTorch (https://github.com/pl8787/DeepRank_PyTorch)
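    A minimal, hypothetical sketch of the three stages described above (detection, local measurement, aggregation with term gating). Layer sizes, the window width, and the use of a small CNN as the measure network are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepRankSketch(nn.Module):
    def __init__(self, embed_dim=50, window=5, channels=8):
        super().__init__()
        self.window = window                      # half-width of each detected context
        # Measure network: a small CNN over the query x context interaction patch.
        self.conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.measure = nn.Linear(channels, 1)
        # Aggregation: gate each query term by a learned importance weight.
        self.gate = nn.Linear(embed_dim, 1)

    def forward(self, query_emb, doc_emb):
        # query_emb: (Q, D) query term embeddings; doc_emb: (L, D) document term embeddings
        sim = query_emb @ doc_emb.t()                          # (Q, L) interaction matrix
        # Detection: keep a window around each query term's best-matching position.
        centers = sim.argmax(dim=1)
        local_scores = []
        for c in centers:
            lo = max(int(c) - self.window, 0)
            hi = min(int(c) + self.window + 1, doc_emb.size(0))
            patch = sim[:, lo:hi].unsqueeze(0).unsqueeze(0)    # (1, 1, Q, w)
            feat = F.relu(self.conv(patch)).amax(dim=(2, 3))   # (1, channels)
            local_scores.append(self.measure(feat))            # (1, 1) local relevance
        local = torch.cat(local_scores, dim=0).squeeze(-1)     # (Q,)
        # Aggregation with term gating.
        gates = torch.softmax(self.gate(query_emb).squeeze(-1), dim=0)
        return (gates * local).sum()                           # global relevance score

# Usage with random placeholder embeddings (illustrative only).
score = DeepRankSketch()(torch.randn(3, 50), torch.randn(120, 50))
```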

    Selfishness need not be bad

    We investigate the price of anarchy (PoA) in non-atomic congestion games when the total demand T gets very large. First results in this direction have recently been obtained by \cite{Colini2016On, Colini2017WINE, Colini2017arxiv} for routing games and show that the PoA converges to 1 when the growth of the total demand T satisfies certain regularity conditions. We extend their results by developing a new framework for the limit analysis of the PoA that offers strong techniques, such as the limit of games, and applies to arbitrary growth patterns of T. We show that the PoA converges to 1 in the limit game regardless of the type of growth of T for a large class of cost functions that contains all polynomials and all regularly varying functions. For routing games with BPR cost functions, we show in addition that socially optimal strategy profiles converge to equilibria in the limit game, and that PoA $=1+o(T^{-\beta})$, where $\beta>0$ is the degree of the BPR functions. However, the precise convergence rate depends crucially on the growth of T, which shows that a conjecture proposed by \cite{O2016Mechanisms} need not hold.
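    A worked restatement of the two quantities in the abstract, assuming the standard polynomial BPR shape for edge costs (the exact normalization used in the paper is not spelled out in this abstract):

```latex
% Price of anarchy at demand T: worst equilibrium cost over optimal cost.
\[
  \mathrm{PoA}(T) \;=\; \frac{C\bigl(x^{\mathrm{NE}}(T)\bigr)}{C\bigl(x^{\mathrm{OPT}}(T)\bigr)},
  \qquad
  c_e(x) \;=\; a_e + b_e\,x^{\beta} \quad (a_e, b_e \ge 0,\ \beta > 0).
\]
% The convergence result quoted above:
\[
  \mathrm{PoA}(T) \;=\; 1 + o\!\left(T^{-\beta}\right) \quad \text{as } T \to \infty .
\]
```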

    Learning of Human-like Algebraic Reasoning Using Deep Feedforward Neural Networks

    There is a wide gap between symbolic reasoning and deep learning. In this research, we explore the possibility of using deep learning to improve symbolic reasoning. Briefly, in a reasoning system, a deep feedforward neural network is used to guide rewriting processes after learning from algebraic reasoning examples produced by humans. To enable the neural network to recognise patterns of algebraic expressions of non-deterministic sizes, reduced partial trees are used to represent the expressions. Also, to represent both top-down and bottom-up information about the expressions, a centralisation technique is used to improve the reduced partial trees. In addition, symbolic association vectors and rule application records are used to improve the rewriting processes. Experimental results reveal that the algebraic reasoning examples can be accurately learnt only if the feedforward neural network has enough hidden layers. Also, the centralisation technique, the symbolic association vectors and the rule application records can reduce the error rates of reasoning. In particular, the above approaches have led to a 4.6% error rate of reasoning on a dataset of linear equations, differentials and integrals. Comment: 8 pages, 7 figures
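    A hypothetical sketch of the core idea above: a deep feedforward network scores which rewrite rule to apply next, given a fixed-size encoding of the current expression. The toy rule set and the placeholder feature encoding stand in for the paper's reduced-partial-tree representation and are assumptions for illustration.

```python
import torch
import torch.nn as nn

RULES = ["x+0 -> x", "x*1 -> x", "x*0 -> 0"]        # toy rewrite rules

class RuleScorer(nn.Module):
    def __init__(self, feat_dim=32, hidden=64, n_rules=len(RULES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),    # multiple hidden layers, per the abstract
            nn.Linear(hidden, n_rules),
        )

    def forward(self, expr_features):
        return self.net(expr_features)               # one logit per candidate rewrite rule

scorer = RuleScorer()
features = torch.randn(1, 32)                        # placeholder expression encoding
next_rule = RULES[scorer(features).argmax(dim=1).item()]
```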

    Locally Smoothed Neural Networks

    Convolutional Neural Networks (CNNs) and the locally connected layer are limited in capturing the importance and relations of different local receptive fields, which are often crucial for tasks such as face verification, visual question answering, and word sequence prediction. To tackle this issue, we propose a novel locally smoothed neural network (LSNN) in this paper. The main idea is to represent the weight matrix of the locally connected layer as the product of a kernel and a smoother, where the kernel is shared over different local receptive fields and the smoother determines the importance and relations of the different local receptive fields. Specifically, a multi-variate Gaussian function is utilized to generate the smoother, modeling the location relations among different local receptive fields. Furthermore, content information can also be leveraged by setting the mean and precision of the Gaussian function according to the content. Experiments on several variants of MNIST clearly show our advantages over CNNs and the locally connected layer. Comment: In Proceedings of the 9th Asian Conference on Machine Learning (ACML 2017)
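    A minimal sketch of the "kernel times smoother" factorization described above, using a 1-D locally connected layer for clarity. The Gaussian parameters, shapes, and the 1-D setting are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def lsnn_weights(kernel, positions, mean, precision):
    # kernel: (k,) shared weights; positions: (P,) centers of the P receptive fields.
    # The smoother assigns one Gaussian-shaped importance per receptive-field location.
    smoother = torch.exp(-0.5 * precision * (positions - mean) ** 2)   # (P,)
    return smoother[:, None] * kernel[None, :]                         # (P, k) per-location weights

kernel = torch.randn(3)                                  # shared kernel of width 3
positions = torch.arange(10, dtype=torch.float32)        # 10 receptive-field centers
weights = lsnn_weights(kernel, positions,
                       mean=torch.tensor(4.5), precision=torch.tensor(0.2))

# Apply as a locally connected layer over a length-12 input (10 fields of width 3).
x = torch.randn(12)
patches = x.unfold(0, 3, 1)                              # (10, 3) local receptive fields
out = (patches * weights).sum(dim=1)                     # (10,) one output per field
```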

    Spherical Paragraph Model

    Representing texts as fixed-length vectors is central to many language processing tasks. Most traditional methods build text representations based on the simple Bag-of-Words (BoW) representation, which loses the rich semantic relations between words. Recent advances in natural language processing have shown that semantically meaningful representations of words can be efficiently acquired by distributed models, making it possible to build text representations on a better foundation, the Bag-of-Word-Embedding (BoWE) representation. However, existing text representation methods using BoWE often lack sound probabilistic foundations or cannot well capture the semantic relatedness encoded in word vectors. To address these problems, we introduce the Spherical Paragraph Model (SPM), a probabilistic generative model based on BoWE, for text representation. SPM has good probabilistic interpretability and can fully leverage the rich semantics of words, the word co-occurrence information, as well as corpus-wide information to help the representation learning of texts. Experimental results on topical classification and sentiment analysis demonstrate that SPM achieves new state-of-the-art performance on several benchmark datasets. Comment: 10 pages
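    One plausible reading of the BoWE input mentioned above: a document is the bag of its word embeddings, each projected to the unit sphere so that a "spherical" generative model can be fit to the directions. This is an illustrative sketch of the input representation only, not the authors' model; the embedding table and tokens are placeholders.

```python
import numpy as np

def bag_of_word_embeddings(tokens, embeddings):
    """Map a tokenized document to unit-normalized word vectors (one per token)."""
    vecs = np.stack([embeddings[t] for t in tokens if t in embeddings])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

# Toy embedding table and document (placeholders).
emb = {w: np.random.randn(300) for w in ["good", "movie", "plot"]}
doc_directions = bag_of_word_embeddings(["good", "movie", "plot"], emb)

# A simple per-document summary direction on the unit sphere.
mean_direction = doc_directions.mean(axis=0)
mean_direction /= np.linalg.norm(mean_direction)
```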

    A Study of MatchPyramid Models on Ad-hoc Retrieval

    Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with traditional retrieval models, such as BM25 and language models. Comment: Neu-IR '16 SIGIR Workshop on Neural Information Retrieval
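    A hypothetical sketch of a MatchPyramid-style scorer: a CNN over the query-document interaction matrix followed by pooling and a scoring layer. The cosine interaction function, kernel size, and pooled output size are exactly the knobs the experiments above vary; the values below are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchPyramidSketch(nn.Module):
    def __init__(self, channels=8, pooled=(3, 10)):
        super().__init__()
        self.conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.pooled = pooled                        # fixed output size of dynamic pooling
        self.score = nn.Linear(channels * pooled[0] * pooled[1], 1)

    def forward(self, query_emb, doc_emb):
        # Cosine interaction matrix between query terms (Q, D) and document terms (L, D).
        inter = F.normalize(query_emb, dim=1) @ F.normalize(doc_emb, dim=1).t()
        x = inter.unsqueeze(0).unsqueeze(0)         # (1, 1, Q, L)
        x = F.relu(self.conv(x))
        x = F.adaptive_max_pool2d(x, self.pooled)   # pool to a fixed size regardless of L
        return self.score(x.flatten(1))             # matching score

# Usage with random placeholder embeddings (illustrative only).
score = MatchPyramidSketch()(torch.randn(3, 50), torch.randn(80, 50))
```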

    Semantic Regularities in Document Representations

    Recent work has shown that distributed word representations are good at capturing linguistic regularities in language. This allows vector-oriented reasoning based on simple linear algebra between words. Since many different methods have been proposed for learning document representations, it is natural to ask whether there is also linear structure in these learned representations that allows similar reasoning at the document level. To answer this question, we design a new document analogy task for testing the semantic regularities in document representations, and conduct empirical evaluations over several state-of-the-art document representation models. The results reveal that neural embedding based document representations work better on this analogy task than conventional methods, and we provide some preliminary explanations for these observations. Comment: 6 pages
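    A small sketch of the kind of analogy test described above: given document vectors, check whether the document closest to d_a - d_b + d_c under cosine similarity is the expected one. The vectors and indices below are random placeholders, not the paper's evaluation data.

```python
import numpy as np

def solve_analogy(doc_vectors, a, b, c):
    """Return the index of the document closest to doc[a] - doc[b] + doc[c]."""
    target = doc_vectors[a] - doc_vectors[b] + doc_vectors[c]
    target = target / np.linalg.norm(target)
    normed = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    sims = normed @ target
    sims[[a, b, c]] = -np.inf                     # exclude the query documents themselves
    return int(np.argmax(sims))

docs = np.random.randn(100, 300)                  # 100 documents, 300-dim embeddings
predicted = solve_analogy(docs, a=0, b=1, c=2)
```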