
    An in-depth study on diversity evaluation: The importance of intrinsic diversity

    Diversified document ranking has been recognized as an effective strategy for tackling ambiguous and/or underspecified queries. In this paper, we conduct an in-depth study on diversity evaluation that provides insights for assessing the performance of a diversified retrieval system. By casting the widely used diversity metrics (e.g., ERR-IA, α-nDCG and D#-nDCG) into a unified framework based on marginal utility, we analyze how these metrics capture extrinsic diversity and intrinsic diversity. Our analyses show that the prior metrics (ERR-IA, α-nDCG and D#-nDCG) cannot precisely measure intrinsic diversity if we merely feed a set of subtopics into them in the traditional manner (i.e., without fine-grained relevance knowledge per subtopic). Since the redundancy of relevant documents with respect to each specific information need (i.e., subtopic) cannot then be detected and resolved, the overall diversity evaluation may not be reliable. Furthermore, a series of experiments is conducted on a gold-standard collection (English and Chinese) and a set of submitted runs, where intent-square metrics, which extend the diversity metrics by incorporating hierarchical subtopics, are used as references. The experimental results show that the intent-square metrics disagree with the diversity metrics (ERR-IA and α-nDCG) used in the traditional way on top-ranked runs, and that the average precision correlation scores between the intent-square metrics and the prior diversity metrics (ERR-IA and α-nDCG) are fairly low. These results support our analyses and uncover the previously unknown importance of intrinsic diversity to overall diversity evaluation.
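For readers unfamiliar with these metrics, a minimal sketch of α-nDCG in its standard formulation (per-subtopic gain decaying geometrically with repeated coverage) may help; this is the textbook definition, not the paper's marginal-utility recasting, and the document and subtopic names below are purely illustrative:

```python
import math

def alpha_ndcg(ranking, subtopics, alpha=0.5, k=10):
    """alpha-nDCG@k: a document's gain for a subtopic decays by a factor
    (1 - alpha) each time that subtopic has already been covered higher up."""
    def dcg(ranked):
        seen, total = {}, 0.0
        for i, doc in enumerate(ranked[:k]):
            g = sum((1 - alpha) ** seen.get(s, 0) for s in subtopics.get(doc, ()))
            for s in subtopics.get(doc, ()):
                seen[s] = seen.get(s, 0) + 1
            total += g / math.log2(i + 2)
        return total

    # the ideal ordering is NP-hard in general; greedy is the standard proxy
    remaining, ideal, seen = list(subtopics), [], {}
    while remaining:
        best = max(remaining,
                   key=lambda d: sum((1 - alpha) ** seen.get(s, 0) for s in subtopics[d]))
        ideal.append(best)
        remaining.remove(best)
        for s in subtopics[best]:
            seen[s] = seen.get(s, 0) + 1
    norm = dcg(ideal)
    return dcg(ranking) / norm if norm else 0.0

# d1 and d2 are redundant (both cover only s1); d3 covers s2
subs = {"d1": {"s1"}, "d2": {"s1"}, "d3": {"s2"}}
diverse = alpha_ndcg(["d1", "d3", "d2"], subs)    # novel subtopic ranked second
redundant = alpha_ndcg(["d1", "d2", "d3"], subs)  # redundant doc ranked second
```

As the paper argues, this redundancy penalty only operates on the subtopic labels it is given; without fine-grained per-subtopic relevance knowledge, intrinsic (within-subtopic) redundancy is invisible to it.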

    An Axiomatic Analysis of Diversity Evaluation Metrics: Introducing the Rank-Biased Utility Metric

    Many evaluation metrics have been defined to assess the effectiveness of ad-hoc retrieval and search result diversification systems. However, it is often unclear which evaluation metric should be used to analyze the performance of retrieval systems for a specific task. Axiomatic analysis is an informative mechanism for understanding the fundamentals of metrics and their suitability for particular scenarios. In this paper, we define a constraint-based axiomatic framework to study the suitability of existing metrics in search result diversification scenarios. The analysis informed the definition of Rank-Biased Utility (RBU) -- an adaptation of the well-known Rank-Biased Precision metric -- that takes into account redundancy and the user effort associated with the inspection of documents in the ranking. Our experiments over standard diversity evaluation campaigns show that the proposed metric captures quality criteria reflected by different metrics, making it suitable in the absence of knowledge about particular features of the scenario under study. Comment: Original version: 10 pages. Preprint of full paper to appear at SIGIR'18: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, July 8-12, 2018, Ann Arbor, MI, USA. ACM, New York, NY, US
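RBU builds on Rank-Biased Precision, whose user model is simple enough to state in a few lines. The sketch below is plain RBP (Moffat and Zobel's formulation), not the RBU extension, which additionally accounts for redundancy and inspection effort:

```python
def rbp(relevances, p=0.8):
    """Rank-Biased Precision: the user inspects rank i (0-based) with
    probability p**i, so expected utility is (1 - p) * sum(r_i * p**i).
    p near 1 models a persistent user; p near 0 an impatient one."""
    return (1 - p) * sum(r * p ** i for i, r in enumerate(relevances))

# the same relevance mass is worth more near the top of the ranking
top_heavy = rbp([1, 1, 0, 0], p=0.5)     # 0.5 * (1 + 0.5)      = 0.75
bottom_heavy = rbp([0, 0, 1, 1], p=0.5)  # 0.5 * (0.25 + 0.125) = 0.1875
```

The persistence parameter p plays the role of the "particular features of the scenario" the abstract mentions: it encodes how deep the modeled user is willing to look.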

    Lightcurves of Type Ia Supernovae from Near the Time of Explosion

    We present a set of 11 Type Ia supernova (SN Ia) lightcurves with dense, pre-maximum sampling. These supernovae (SNe), in galaxies behind the Large Magellanic Cloud (LMC), were discovered by the SuperMACHO survey. The SNe span a redshift range of z = 0.11 - 0.35. Our lightcurves contain some of the earliest pre-maximum observations of SNe Ia to date. We also give a functional model that describes the SN Ia lightcurve shape (in our VR-band). Our function uses the "expanding fireball" model of Goldhaber et al. (1998) to describe the rising lightcurve immediately after explosion, but constrains it to join the remainder of the lightcurve smoothly. We fit this model to a composite observed VR-band lightcurve of three SNe between redshifts of 0.135 and 0.165. These SNe have not been K-corrected or adjusted to account for reddening. In this redshift range, the observed VR-band most closely matches the rest-frame V-band. Using the best fit to our functional description of the lightcurve, we find the time between explosion and observed VR-band maximum to be 17.6+-1.3(stat)+-0.07(sys) rest-frame days for a SN Ia with a VR-band Delta m_{-10} of 0.52 mag. For the redshifts sampled, the observed VR-band time of maximum brightness should be the same as the rest-frame V-band maximum to within 1.1 rest-frame days. Comment: 35 pages, 18 figures, 15 tables; higher-quality PDF available at http://ctiokw.ctio.noao.edu/~sm/sm/SNrise/index.html; AJ accepted
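The "expanding fireball" rise, f(t) proportional to (t - t_expl)^2, can be fit with ordinary linear least squares, since the square root of the flux is linear in time. The sketch below recovers the explosion epoch from synthetic noiseless data; the rise time and normalization are illustrative numbers, not the paper's fit:

```python
import numpy as np

def fit_fireball(t, flux):
    """Fit f(t) = a * (t - t0)**2 on the early rise: sqrt(f) = m*t + c is
    linear in t, so t0 = -c/m and a = m**2."""
    m, c = np.polyfit(t, np.sqrt(flux), 1)
    return -c / m, m ** 2

# synthetic early-time rise with an explosion epoch 17.6 d before maximum
t = np.linspace(-15.0, -5.0, 30)      # days relative to maximum light
flux = 0.03 * (t - (-17.6)) ** 2      # a = 0.03, arbitrary flux units
t0, a = fit_fireball(t, flux)
```

With real photometry one would weight by measurement errors and, as in the paper, constrain the quadratic rise to join the post-rise lightcurve smoothly; this sketch shows only the fireball piece.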

    On the Additivity and Weak Baselines for Search Result Diversification Research

    A recent study on the topic of additivity addresses the task of search result diversification and concludes that while weaker baselines are almost always significantly improved by the evaluated diversification methods, for stronger baselines just the opposite happens, i.e., no significant improvement can be observed. Given the importance of this issue in shaping future research directions and evaluation strategies in search result diversification, in this work we first aim to reproduce the findings reported in the previous study and then investigate its possible limitations. Our extensive experiments first reveal that, under the same experimental setting as the previous study, we reach similar results. Next, we hypothesize that for stronger baselines, the parameters of some methods (i.e., in this particular scenario, the trade-off parameter between the relevance and diversity of the results) should be tuned in a more fine-grained manner. With trade-off parameters that are specifically determined for each baseline run, we show that the percentage of significant improvements even over the strong baselines can be doubled. As a further issue, we discuss the possible impact of using the same strong baseline retrieval function for the diversity computations of the methods. Our takeaway message is that in the case of a strong baseline, it is more crucial to tune the parameters of the diversification methods to be evaluated; but once this is done, additivity is achievable.
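The trade-off parameter in question is typically the interpolation weight of an MMR-style reranker that balances relevance against novelty. A minimal sketch (document names, scores, and the similarity function are all hypothetical), with lam being the knob that would be tuned per baseline run:

```python
def mmr_rerank(docs, rel, sim, lam=0.5, k=10):
    """Greedy reranking: repeatedly pick the document maximizing
    (1 - lam) * relevance - lam * (max similarity to docs already picked)."""
    selected, pool = [], list(docs)
    while pool and len(selected) < k:
        best = max(pool, key=lambda d: (1 - lam) * rel[d]
                   - lam * max((sim(d, s) for s in selected), default=0.0))
        selected.append(best)
        pool.remove(best)
    return selected

rel = {"a": 1.0, "b": 0.9, "c": 0.5}                     # hypothetical scores
sim = lambda x, y: 1.0 if {x, y} == {"a", "b"} else 0.0  # a, b near-duplicates
plain = mmr_rerank(["a", "b", "c"], rel, sim, lam=0.0)   # relevance only
diverse = mmr_rerank(["a", "b", "c"], rel, sim, lam=0.6) # diversity-aware
```

Sweeping lam over a grid separately for each baseline run, rather than fixing one global value, is the fine-grained tuning the abstract describes.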

    Electron Transfer Reaction Through an Adsorbed Layer

    We consider electron transfer from a redox species to an electrode through an adsorbed intermediate. The formalism is developed to cover all regimes of the coverage factor, from a lone adsorbate to the monolayer regime. The randomness in the distribution of adsorbates is handled using the coherent potential approximation. We give the current-overpotential profile for all coverage regimes. We explicitly analyse the low- and high-coverage regimes by supplementing them with the DOS profile for the adsorbate in both the weakly coupled and strongly coupled sectors. The prominence of bonding and anti-bonding states in strongly coupled adsorbates at low coverage gives rise to saddle-point behaviour in the current-overpotential profile. We are able to recover the Marcus inverted region at low coverage and the traditional direct electron transfer behaviour at high coverage.
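The inverted region recovered at low coverage is the signature of classical Marcus kinetics. In the standard textbook expression (not the paper's coherent-potential-approximation formalism), the transfer rate depends on the driving force $\Delta G^{0}$ and reorganization energy $\lambda$ as

```latex
k_{\mathrm{ET}} \;\propto\; \exp\!\left[-\,\frac{(\lambda + \Delta G^{0})^{2}}{4\,\lambda\, k_{B} T}\right],
```

so the rate peaks at $-\Delta G^{0} = \lambda$ and falls again as the driving force grows past the reorganization energy, which is the inverted region.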

    A sequence based genetic algorithm with local search for the travelling salesman problem

    The standard genetic algorithm often suffers from slow convergence when solving combinatorial optimization problems. In this study, we present a sequence-based genetic algorithm (SBGA) for the symmetric travelling salesman problem (TSP). In our proposed method, a set of sequences is extracted from the best individuals and used to guide the search of SBGA. Additionally, some procedures are applied to maintain diversity by breaking the selected sequences into sub-tours if the best individual of the population does not improve. SBGA is compared with the inver-over operator, a state-of-the-art algorithm for the TSP, on a set of benchmark TSP instances. Experimental results show that the convergence speed of SBGA is very promising and much faster than that of the inver-over algorithm, and that SBGA achieves a similar solution quality on all test TSPs.
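The core idea, extracting sub-tours that good individuals agree on and reusing them to guide offspring, can be sketched in a few lines. This is a simplified illustration of the sequence-extraction step only, not the authors' exact procedure; for brevity it treats edges as undirected and ignores runs that wrap around the end of the tour:

```python
def common_sequences(tour_a, tour_b, min_len=2):
    """Return maximal runs of >= min_len consecutive cities in tour_a whose
    edges (undirected) also appear in tour_b."""
    edges_b = {frozenset(e) for e in zip(tour_b, tour_b[1:] + tour_b[:1])}
    seqs, run = [], [tour_a[0]]
    for city in tour_a[1:]:
        if frozenset((run[-1], city)) in edges_b:
            run.append(city)       # extend the shared run
        else:
            if len(run) >= min_len:
                seqs.append(run)   # flush a completed shared sub-tour
            run = [city]
    if len(run) >= min_len:
        seqs.append(run)
    return seqs

# the two tours agree on the runs 0-1-2 and 3-4-5, but join them differently
shared = common_sequences([0, 1, 2, 3, 4, 5], [0, 1, 2, 5, 4, 3])
```

Sub-tours like these would be stored and injected into offspring; breaking them up again when the best individual stagnates is the diversity-maintenance step the abstract describes.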

    SWATI: Synthesizing Wordlengths Automatically Using Testing and Induction

    In this paper, we present an automated technique, SWATI (Synthesizing Wordlengths Automatically Using Testing and Induction), which uses a combination of Nelder-Mead optimization-based testing and induction from examples to automatically synthesize optimal fixed-point implementations of numerical routines. The design of numerical software is commonly done using floating-point arithmetic in design environments such as Matlab. However, these designs are often implemented using fixed-point arithmetic for speed and efficiency reasons, especially in embedded systems. A fixed-point implementation reduces implementation cost, provides better performance, and reduces power consumption. The conversion from floating-point designs to fixed-point code is subject to two opposing constraints: (i) the word-width of the fixed-point types must be minimized, and (ii) the outputs of the fixed-point program must be accurate. In this paper, we propose a new solution to this problem. Our technique takes a floating-point program, a specified accuracy, and an implementation cost model, and produces a fixed-point program with the specified accuracy and optimal implementation cost. We demonstrate the effectiveness of our approach on a set of examples from the domains of automated control, robotics, and digital signal processing.
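The tension between the two constraints can be illustrated with a toy version of the testing half of such a loop: search for the smallest fractional word-length whose worst-case output error on a set of test inputs meets the accuracy bound. This is a naive enumeration over a hypothetical routine, not SWATI's Nelder-Mead testing or induction machinery:

```python
def to_fixed(x, frac_bits):
    """Round x to the nearest multiple of 2**-frac_bits (fixed-point grid)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def min_frac_bits(samples, fn, tol, max_bits=31):
    """Smallest fractional word-length for which quantizing fn's input keeps
    the output within tol of the floating-point result on every sample."""
    for bits in range(1, max_bits + 1):
        if all(abs(fn(to_fixed(x, bits)) - fn(x)) <= tol for x in samples):
            return bits
    return None

# hypothetical routine y = 2.5 * x on [-1, 1] with output accuracy 0.01
gain = lambda x: 2.5 * x
bits = min_frac_bits([i / 10 for i in range(-10, 11)], gain, 1e-2)
```

In SWATI proper, optimization-based testing hunts for worst-case inputs rather than enumerating a fixed sample set, and induction generalizes the observed bounds into a wordlength assignment that is optimal under the cost model.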

    A hybrid genetic algorithm and inver over approach for the travelling salesman problem

    This article is posted here with permission of the IEEE - Copyright @ 2010 IEEE.

    This paper proposes a two-phase hybrid approach for the travelling salesman problem (TSP). The first phase is based on a sequence-based genetic algorithm (SBGA) with an embedded local search scheme. Within the SBGA, a memory is introduced to store good sequences (sub-tours) extracted from previous good solutions, and the stored sequences are used to guide the generation of offspring via local search during the evolution of the population. Additionally, we apply some techniques to adapt the key parameters based on whether the best individual of the population improves or not, and to maintain diversity. After the SBGA finishes, the hybrid approach enters the second phase, where the inver-over (IO) operator, a state-of-the-art algorithm for the TSP, is used to further improve the solution quality of the population. Experiments are carried out to investigate the performance of the proposed hybrid approach in comparison with several relevant algorithms on a set of benchmark TSP instances. The experimental results show that the proposed hybrid approach is efficient in finding good-quality solutions for the test TSPs.

    This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/E060722/1.
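The inver-over operator used in the second phase is simple to state: repeatedly invert the tour segment between the current city and the city that follows it in a randomly chosen mate, stopping once the two cities are already adjacent. A sketch after Tao and Michalewicz's description, rotating the tour so inversions never wrap around:

```python
import random

def inver_over(tour, population, p_rand=0.02, rng=random):
    """One inver-over pass: guided segment inversions that gradually copy
    the successor structure of population members into the tour."""
    tour = list(tour)
    c = rng.choice(tour)
    while True:
        if rng.random() < p_rand:              # occasional blind inversion
            c2 = rng.choice([x for x in tour if x != c])
        else:                                  # borrow c's successor from a mate
            mate = rng.choice(population)
            c2 = mate[(mate.index(c) + 1) % len(mate)]
        tour = tour[tour.index(c):] + tour[:tour.index(c)]  # rotate: c first
        j = tour.index(c2)
        if j in (1, len(tour) - 1):            # c2 already next to c: done
            break
        tour[1:j + 1] = reversed(tour[1:j + 1])  # invert segment up to c2
        c = c2
    return tour

parent = [0, 1, 2, 3, 4, 5]
child = inver_over(parent, [[0, 2, 4, 1, 3, 5]], p_rand=0.0, rng=random.Random(7))
```

Each inversion installs one mate edge while leaving the previously installed ones at the rotation boundary intact, so a pass always terminates with a valid tour.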