
    Recommendation Subgraphs for Web Discovery

    Recommendations are central to the utility of many websites, including YouTube, Quora, and popular e-commerce stores. Such sites typically contain a set of recommendations on every product page that enables visitors to easily navigate the website. Choosing an appropriate set of recommendations at each page is one of the key features of backend engines that have been deployed at several e-commerce sites. Specifically at BloomReach, an engine consisting of several independent components analyzes and optimizes its clients' websites. This paper focuses on the structure optimizer component, which improves the website navigation experience by enabling the discovery of novel content. We begin by formalizing the concept of recommendations used for discovery. We formulate this as a natural graph optimization problem which, in its simplest case, reduces to a bipartite matching problem. In practice, solving these matching problems requires superlinear time and is not scalable. Also, implementing simple algorithms is critical in practice because they are significantly easier to maintain in production. This motivated us to analyze three methods for solving the problem in increasing order of sophistication: a sampling algorithm, a greedy algorithm, and a more involved partitioning-based algorithm. We first theoretically analyze the performance of these three methods on random graph models, characterizing when each method will yield a solution of sufficient quality and the parameter ranges in which more sophistication is needed. We complement this with an empirical analysis of these algorithms on simulated and real-world production data. Our results confirm that it is not always necessary to implement complicated algorithms in the real world and that very good practical results can be obtained by using heuristics that are backed by concrete theoretical guarantees.
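
    The abstract names its three heuristics but does not spell them out, so the following is a minimal Python sketch of a plausible greedy variant rather than the paper's actual algorithm: each source page fills c recommendation slots, and a target page counts as discovered once it has at least a incoming links. The toy graph, the (c, a) parameters, and all function names are illustrative assumptions.

        from collections import defaultdict

        def greedy_recommendation_subgraph(candidates, c, a):
            """Greedy heuristic for a (c, a)-style recommendation-subgraph problem.

            candidates: dict mapping each source page to the target pages it may
                        recommend (the input bipartite graph).
            c: number of recommendation slots on each source page.
            a: a target counts as discovered once it has >= a incoming links.
            """
            indegree = defaultdict(int)   # current incoming links per target
            chosen = {}
            for source, targets in candidates.items():
                # Prefer targets still below the threshold a, then the least-linked ones.
                ranked = sorted(targets, key=lambda t: (indegree[t] >= a, indegree[t]))
                picks = ranked[:c]
                for t in picks:
                    indegree[t] += 1
                chosen[source] = picks
            covered = sum(1 for d in indegree.values() if d >= a)
            return chosen, covered

        # Tiny illustrative run on hypothetical pages
        graph = {"p1": ["t1", "t2", "t3"], "p2": ["t2", "t3", "t4"], "p3": ["t1", "t4"]}
        print(greedy_recommendation_subgraph(graph, c=2, a=1))

    On this toy graph the greedy pass covers all four targets with two links per page; a real deployment would replace the toy dict with the candidate edges produced by the upstream relevance components.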

    Resolving on 100 pc scales the UV-continuum in Lyman-α emitters between redshift 2 to 3 with gravitational lensing

    We present a study of seventeen LAEs at redshift 2 < z < 3 gravitationally lensed by massive early-type galaxies (ETGs) at a mean redshift of approximately 0.5. Using a fully Bayesian grid-based technique, we model the gravitational lens mass distributions with elliptical power-law profiles and reconstruct the UV-continuum surface brightness distributions of the background sources using pixellated source models. We find that the deflectors are close to, but not consistent with, isothermal models in almost all cases at the 2σ level. We take advantage of the lensing magnification (typically μ ≃ 20) to characterise the physical and morphological properties of these LAE galaxies. From reconstructing the ultraviolet continuum emission, we find that the star-formation rates range from 0.3 to 8.5 M_⊙ yr^-1 and that the galaxies are typically composed of several compact and diffuse components, separated by 0.4 to 4 kpc. Moreover, they have peak star-formation-rate intensities that range from 2.1 to 54.1 M_⊙ yr^-1 kpc^-2. These galaxies tend to be extended, with major axes ranging from 0.2 to 1.8 kpc (median 561 pc), and with a median ellipticity of 0.49. This morphology is consistent with disk-like structures of star formation for more than half of the sample. However, for at least two sources, we also find off-axis components that may be associated with mergers. Resolved kinematical information will be needed to confirm the disk-like nature and possible merger scenario for the LAEs in the sample. Comment: 19 pages, 7 figures, accepted for publication in MNRAS
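
    For reference, a common way to write the elliptical power-law convergence profile used in this kind of lens modelling is sketched below in LaTeX; the symbols (Einstein radius b, axis ratio q, logarithmic slope γ) follow standard lensing conventions rather than anything quoted from this paper, and the isothermal case corresponds to γ = 2. The last relation is simply how a magnification of μ ≃ 20 converts observed star-formation rates into intrinsic ones.

        % Elliptical power-law convergence (isothermal when \gamma = 2)
        \kappa(x, y) = \frac{3 - \gamma}{2}
            \left( \frac{b}{\sqrt{q\,x^{2} + y^{2}/q}} \right)^{\gamma - 1},
        \qquad
        \mathrm{SFR}_{\mathrm{intrinsic}} \simeq \mathrm{SFR}_{\mathrm{observed}} / \mu .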

    A Three-Point Cosmic Ray Anisotropy Method

    The two-point angular correlation function is a traditional method used to search for deviations from expectations of isotropy. In this paper we develop and explore a statistically descriptive three-point method, with the intended application being the search for deviations from isotropy in the highest energy cosmic rays. We compare the sensitivity of a two-point method and a "shape-strength" method for a variety of Monte Carlo-simulated anisotropic signals. Studies are done with anisotropic source signals diluted by an isotropic background. Type I and II errors for rejecting the hypothesis of isotropic cosmic ray arrival directions are evaluated for four different event sample sizes: 27, 40, 60 and 80 events, consistent with near-term data expectations from the Pierre Auger Observatory. In all cases the ability to reject the isotropic hypothesis improves with sample size and with the fraction of anisotropic signal. While ~40-event data sets should be sufficient for reliable identification of anisotropy in cases of rather extreme (highly anisotropic) data, much larger data sets are suggested for reliable identification of more subtle anisotropies. The shape-strength method consistently performs better than the two-point method and can be easily adapted to an arbitrary experimental exposure on the celestial sphere. Comment: Fixed PDF error
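
    The shape-strength statistic itself is not given in the abstract, but the two-point ingredient it is compared against is standard: tally pairwise angular separations between arrival directions and compare their distribution with isotropic expectations. The Python sketch below illustrates only that counting step; the binning, sample size, and mock coordinates are placeholder assumptions.

        import numpy as np

        def pairwise_angular_separations(ra_deg, dec_deg):
            """Angular separations (deg) between all pairs of arrival directions."""
            ra, dec = np.radians(ra_deg), np.radians(dec_deg)
            # Unit vectors on the celestial sphere
            v = np.column_stack((np.cos(dec) * np.cos(ra),
                                 np.cos(dec) * np.sin(ra),
                                 np.sin(dec)))
            cosang = np.clip(v @ v.T, -1.0, 1.0)
            iu = np.triu_indices(len(ra_deg), k=1)      # count each pair once
            return np.degrees(np.arccos(cosang[iu]))

        def cumulative_two_point(seps_deg, max_angle=90.0, n_bins=90):
            """Cumulative number of pairs closer than each angular scale."""
            edges = np.linspace(0.0, max_angle, n_bins + 1)
            counts, _ = np.histogram(seps_deg, bins=edges)
            return edges[1:], np.cumsum(counts)

        # Mock sample of 40 isotropic events, matching one of the sample sizes above
        rng = np.random.default_rng(0)
        ra = rng.uniform(0.0, 360.0, 40)
        dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 40)))  # uniform on the sphere
        angles, n_pairs = cumulative_two_point(pairwise_angular_separations(ra, dec))

    Repeating such mock draws many times for isotropic and injected anisotropic skies, and comparing the resulting pair-count curves, is the kind of comparison behind the Type I and Type II error estimates described in the abstract.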

    Global Production Increased by Spatial Heterogeneity in a Population Dynamics Model

    Spatial and temporal heterogeneity are often described as important factors having a strong impact on biodiversity. The effect of heterogeneity is in most cases analyzed through the response of biotic interactions such as competition or predation. It may also modify intrinsic population properties such as growth rate. Most of the studies are theoretical, since it is often difficult to manipulate spatial heterogeneity in practice. Despite the large number of studies dealing with this topic, it is still difficult to understand how heterogeneity affects population dynamics. On the basis of a very simple model, this paper aims to explicitly provide a simple mechanism that can explain why spatial heterogeneity may be a favorable factor for production. We consider a two-patch model in which logistic growth is assumed on each patch. A general condition on the migration rates and the local subpopulation growth rates is provided under which the total carrying capacity is higher than the sum of the local carrying capacities, which is not intuitive. As we illustrate, this result is robust under stochastic perturbations.
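
    As a concrete illustration of the mechanism, the Python sketch below integrates a standard two-patch logistic model with symmetric migration; the parameter values are hypothetical and chosen only so that the asymmetry between the two growth rates makes the summed equilibrium exceed K1 + K2.

        import numpy as np
        from scipy.integrate import odeint

        def two_patch(x, t, r1, K1, r2, K2, beta):
            """Logistic growth on each patch plus symmetric migration at rate beta."""
            x1, x2 = x
            dx1 = r1 * x1 * (1.0 - x1 / K1) + beta * (x2 - x1)
            dx2 = r2 * x2 * (1.0 - x2 / K2) + beta * (x1 - x2)
            return [dx1, dx2]

        # Hypothetical parameters: strongly asymmetric growth rates, fast migration
        r1, K1 = 2.0, 2.0
        r2, K2 = 0.2, 1.0
        beta = 10.0

        t = np.linspace(0.0, 500.0, 2000)
        x1, x2 = odeint(two_patch, [1.0, 1.0], t, args=(r1, K1, r2, K2, beta)).T
        print(f"equilibrium total {x1[-1] + x2[-1]:.2f} vs K1 + K2 = {K1 + K2:.2f}")
        # With these values the coupled total settles near 3.65, above K1 + K2 = 3.0,
        # reproducing the non-intuitive increase in total carrying capacity described above.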

    Studying the nuclear mass composition of Ultra-High Energy Cosmic Rays with the Pierre Auger Observatory

    The Fluorescence Detector of the Pierre Auger Observatory measures the atmospheric depth, X_max, at which the longitudinal profile of high-energy air showers reaches its maximum. This quantity is sensitive to the nuclear mass composition of the cosmic rays. Due to its hybrid design, the Pierre Auger Observatory also provides independent experimental observables obtained from the Surface Detector for the study of the nuclear mass composition. We present X_max distributions and an update of the average and RMS values in different energy bins and compare them to the predictions for different nuclear masses of the primary particles and hadronic interaction models. We also present the results of the composition-sensitive parameters derived from the ground-level component. Comment: Proceedings of the 12th International Conference on Topics in Astroparticle and Underground Physics, TAUP 2011, Munich, Germany
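
    For background, the composition sensitivity of X_max can be summarised with the usual superposition argument, written out below in LaTeX; this is a textbook approximation, not a relation quoted from the proceedings contribution itself.

        % Superposition model: a nucleus of mass A and energy E develops roughly like
        % A proton showers of energy E/A, so the shower maximum is shallower (and its
        % shower-to-shower fluctuations smaller) for heavier primaries:
        \langle X_{\max}(E, A) \rangle \approx \langle X_{\max}^{p}(E/A) \rangle
            \simeq \langle X_{\max}^{p}(E) \rangle - D_{p} \ln A ,
        % where D_p = d<X_max>/d ln E is the proton elongation rate.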

    Educational studies of cosmic rays with telescope of Geiger-Muller counters

    A group of high school students (XII Liceum) working within the Roland Maze Project has built a compact telescope of three Geiger-Muller counters. The connection between the telescope and a PC was also designed and programmed by students involved in the Project. This has allowed the students to use their equipment to perform serious scientific measurements of the single cosmic ray muon flux at ground level and below. These measurements were then analyzed with programs based on up-to-date statistical methods. An overview of the apparatus, methods and results was presented at several student conferences and recently won first prize in a national competition for high school students' scientific work. The telescope itself, in spite of its 'scientific' purposes, is built in such a way that it hangs on a wall in a school physics lab and counts muons continuously. This can help raise interest in studying physics among other students. At present, three groups of young participants of the Roland Maze Project have already built their own telescopes for their schools, and others are working on theirs. This work is a perfect example of what can be done by young people when suitable opportunities are created by more experienced researchers and a little help and advice is given. Comment: 5 figures, 10 pages

    Have Cherenkov telescopes detected a new light boson?

    Recent observations by H.E.S.S. and MAGIC strongly suggest that the Universe is more transparent to very-high-energy gamma rays than previously thought. We show that this fact can be reconciled with standard blazar emission models provided that photon oscillations into a very light Axion-Like Particle occur in extragalactic magnetic fields. A quantitative estimate of this effect indeed explains the observed data and in particular the spectrum of the blazar 3C279. Comment: 3 pages, 1 figure, Proceedings of the "Eleventh International Workshop on Topics in Astroparticle and Underground Physics" (TAUP), Rome, Italy, 1-5 July 2009 (to be published in the Proceedings)
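
    The mechanism rests on two-level photon-ALP mixing; as rough orientation, the single-domain conversion probability is usually written as below. The notation (effective mixing angle θ set by the transverse magnetic field and the ALP coupling, oscillation wavenumber Δ_osc, domain size s) follows generic treatments in the literature and is not taken from this proceedings note.

        % Photon-to-ALP conversion probability across one magnetic-field domain of size s
        P_{\gamma \to a} = \sin^{2}(2\theta)\,
            \sin^{2}\!\left( \frac{\Delta_{\mathrm{osc}}\, s}{2} \right) .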

    Luminous Satellites II: Spatial Distribution, Luminosity Function and Cosmic Evolution

    We infer the normalization and the radial and angular distributions of the number density of satellites of massive galaxies (log10[M*_h/M_⊙] > 10.5) between redshifts 0.1 and 0.8 as a function of host stellar mass, redshift, morphology and satellite luminosity. Exploiting the depth and resolution of the COSMOS HST images, we detect satellites up to eight magnitudes fainter than the host galaxies and as close as 0.3 arcseconds (1.4 kpc). Describing the number density profile of satellite galaxies as a projected power law, P(R) ∝ R^γ, we find γ = -1.1 ± 0.3. We find no dependence of γ on host stellar mass, redshift, morphology or satellite luminosity. Satellites of early-type hosts have angular distributions that are more flattened than the host light profile and are aligned with its major axis. No significant average alignment is detected for satellites of late-type hosts. The number of satellites within a fixed magnitude contrast from a host galaxy depends on its stellar mass, with more massive galaxies hosting significantly more satellites. Furthermore, high-mass late-type hosts have significantly fewer satellites than early-type galaxies of the same stellar mass, likely a result of environmental differences. No significant evolution in the number of satellites per host is detected. The cumulative luminosity function of satellites is qualitatively in good agreement with that predicted using subhalo abundance matching techniques. However, there are significant residual discrepancies in the absolute normalization, suggesting that properties other than the host galaxy luminosity or stellar mass determine the number of satellites. Comment: 23 pages, 12 figures, accepted for publication in the Astrophysical Journal
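
    For illustration, a projected power-law slope like the one quoted above can be fit by maximum likelihood from the satellites' projected separations; the Python sketch below does this on mock data. The inner and outer radius cuts, sample size, and random seed are placeholder assumptions, not the selection used in the paper.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def fit_powerlaw_slope(R, R_min, R_max):
            """Maximum-likelihood slope gamma for P(R) proportional to R**gamma,
            for satellites detected between projected radii R_min and R_max."""
            R = np.asarray(R, dtype=float)

            def neg_loglike(gamma):
                if np.isclose(gamma, -1.0):
                    norm = np.log(R_max / R_min)
                    logp = -np.log(R) - np.log(norm)
                else:
                    norm = (R_max**(gamma + 1) - R_min**(gamma + 1)) / (gamma + 1)
                    logp = gamma * np.log(R) - np.log(norm)
                return -np.sum(logp)

            return minimize_scalar(neg_loglike, bounds=(-3.0, 1.0), method="bounded").x

        # Mock separations drawn from a gamma = -1.1 profile via the inverse CDF
        rng = np.random.default_rng(1)
        gamma_true, R_min, R_max = -1.1, 1.4, 100.0     # kpc, illustrative cuts
        g1 = gamma_true + 1.0
        u = rng.uniform(size=500)
        R = (R_min**g1 + u * (R_max**g1 - R_min**g1)) ** (1.0 / g1)
        print(f"recovered slope: {fit_powerlaw_slope(R, R_min, R_max):.2f}")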
