A meta-evaluation of evaluation methods for diversified search
For the evaluation of diversified search results, a number of different methods have been proposed in the literature. Prior to making use of such evaluation methods, it is important to have a good understanding of how diversity and relevance contribute to the performance metric of each method. In this paper, we use the statistical technique ANOVA to analyse and compare three representative evaluation methods for diversified search, namely alpha-nDCG, MAP-IA, and ERR-IA, on the TREC-2009 Web track dataset. It is shown that the performance scores provided by these evaluation methods can indeed reflect two crucial aspects of diversity (richness and evenness) as well as relevance, though to different degrees.
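The novelty discount at the heart of alpha-nDCG can be sketched in a few lines. This is a minimal illustration of the unnormalized alpha-DCG component only (the normalization by an ideal ranking, and the paper's ANOVA analysis, are omitted); the function name and data layout are illustrative, not from the paper:

```python
from math import log2

def alpha_dcg(ranking, alpha=0.5, depth=10):
    """alpha-DCG of a ranked list, where each item is the set of subtopics
    (query aspects) the document is relevant to. Novelty is rewarded: each
    subtopic's gain is discounted by (1 - alpha) per earlier covering document."""
    seen = {}  # subtopic -> how many earlier documents already covered it
    score = 0.0
    for k, subtopics in enumerate(ranking[:depth], start=1):
        gain = sum((1 - alpha) ** seen.get(t, 0) for t in subtopics)
        score += gain / log2(k + 1)  # standard DCG position discount
        for t in subtopics:
            seen[t] = seen.get(t, 0) + 1
    return score

# A ranking covering several subtopics scores higher than a redundant one.
diverse = [{"a"}, {"b"}, {"a", "c"}]
redundant = [{"a"}, {"a"}, {"a"}]
```

Calling `alpha_dcg(diverse)` yields a higher score than `alpha_dcg(redundant)`, which is exactly the richness/evenness sensitivity the abstract refers to.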
An Axiomatic Analysis of Diversity Evaluation Metrics: Introducing the Rank-Biased Utility Metric
Many evaluation metrics have been defined to evaluate the effectiveness of
ad-hoc retrieval and search result diversification systems. However, it is
often unclear which evaluation metric should be used to analyze the performance
of retrieval systems given a specific task. Axiomatic analysis is an
informative mechanism to understand the fundamentals of metrics and their
suitability for particular scenarios. In this paper, we define a
constraint-based axiomatic framework to study the suitability of existing
metrics in search result diversification scenarios. The analysis informed the
definition of Rank-Biased Utility (RBU) -- an adaptation of the well-known
Rank-Biased Precision metric -- that takes into account redundancy and the user
effort associated with the inspection of documents in the ranking. Our
experiments over standard diversity evaluation campaigns show that the proposed
metric captures quality criteria reflected by different metrics, making it
suitable in the absence of knowledge about the particular features of the
scenario under study.
Comment: Original version: 10 pages. Preprint of full paper to appear at
SIGIR'18: The 41st International ACM SIGIR Conference on Research &
Development in Information Retrieval, July 8-12, 2018, Ann Arbor, MI, USA.
ACM, New York, NY, USA.
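Since RBU is introduced as an adaptation of Rank-Biased Precision, the user model it builds on is easy to state in code. The sketch below implements plain RBP (Moffat and Zobel's metric), not RBU itself, whose redundancy and effort terms follow the paper:

```python
def rbp(gains, p=0.8):
    """Rank-Biased Precision: expected utility of a ranking under a user who
    inspects rank k with probability p**(k-1), i.e. persists with probability p.
    `gains` holds per-rank relevance gains in ranking order."""
    return (1 - p) * sum(g * p ** (k - 1) for k, g in enumerate(gains, start=1))
```

For example, `rbp([1, 0, 0])` exceeds `rbp([0, 0, 1])`: under the geometric browsing model, relevance found earlier is worth more, which is the "rank-biased" behaviour RBU inherits.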
Basic Enhancement Strategies When Using Bayesian Optimization for Hyperparameter Tuning of Deep Neural Networks
Compared to traditional machine learning models, deep neural networks (DNNs) are known to be highly sensitive to the choice of hyperparameters. While the time and effort required for manual tuning have been rapidly decreasing for well-developed and commonly used DNN architectures, DNN hyperparameter optimization will undoubtedly continue to be a major burden whenever a new DNN architecture needs to be designed, a new task needs to be solved, a new dataset needs to be addressed, or an existing DNN needs to be improved further. For hyperparameter optimization of general machine learning problems, numerous automated solutions have been developed, some of the most popular of which are based on Bayesian Optimization (BO). In this work, we analyze four fundamental strategies for enhancing BO when it is used for DNN hyperparameter optimization. Specifically, diversification, early termination, parallelization, and cost function transformation are investigated. Based on the analysis, we provide a simple yet robust algorithm for DNN hyperparameter optimization - DEEP-BO (Diversified, Early-termination-Enabled, and Parallel Bayesian Optimization). When evaluated over six DNN benchmarks, DEEP-BO mostly outperformed well-known solutions including GP-Hedge, BOHB, and the speed-up variants that use the Median Stopping Rule or Learning Curve Extrapolation. In fact, DEEP-BO consistently provided the top, or at least close to the top, performance over all the benchmark types we tested, indicating that DEEP-BO is a robust solution compared to existing ones. The DEEP-BO code is publicly available at https://github.com/snu-adsl/DEEP-BO
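Of the four strategies, early termination is the easiest to illustrate in isolation. The sketch below implements the generic Median Stopping Rule that the abstract names as a speed-up baseline, not the DEEP-BO algorithm itself; the function name and learning-curve format are assumptions for illustration:

```python
import statistics

def median_stop(running_curve, completed_curves):
    """Median Stopping Rule: terminate a running trial if its latest metric
    (higher is better) falls below the median of the metrics that completed
    trials had reached at the same training step."""
    step = len(running_curve) - 1
    peers = [curve[step] for curve in completed_curves if len(curve) > step]
    if not peers:
        return False  # nothing to compare against yet; keep training
    return running_curve[-1] < statistics.median(peers)

# Three completed validation-accuracy curves from earlier trials.
completed = [[0.50, 0.60, 0.70], [0.40, 0.55, 0.65], [0.45, 0.50, 0.60]]
```

Here `median_stop([0.30, 0.35], completed)` is `True` (0.35 is below the step-1 median of 0.55), while `median_stop([0.50, 0.62], completed)` is `False`, so the promising trial keeps running.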
Big Data Privacy Context: Literature Effects On Secure Informational Assets
This article's objective is the identification of research opportunities in
the current big data privacy domain, evaluating literature effects on secure
informational assets. Until now, no study has analyzed such a relation. Its
results can foster science, technologies and businesses. To achieve these
objectives, a big data privacy Systematic Literature Review (SLR) is performed
on the main scientific peer-reviewed journals in the Scopus database. Bibliometrics
and text mining analysis complement the SLR. This study provides support to big
data privacy researchers on: the most and least researched themes, research
novelty, the most cited works and authors, the evolution of themes over time, and
more. In addition, TOPSIS and VIKOR rankings were developed to evaluate
literature effects versus informational assets indicators. Secure Internet
Servers (SIS) was chosen as the decision criterion. Results show that big data
privacy literature is strongly focused on computational aspects. However,
individuals, societies, organizations and governments face a technological
change that has only just started to be investigated, with growing concerns about
law and regulation. The TOPSIS and VIKOR rankings differed in several positions,
and the only country consistent between the literature and SIS adoption is the
United States. Countries in the lowest ranking positions represent future
research opportunities.
Comment: 21 pages, 9 figures
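TOPSIS itself is a short, well-defined procedure: normalize the decision matrix, locate the ideal and anti-ideal points, and rank alternatives by relative closeness to them. A minimal sketch follows; the weights, criteria, and data are illustrative toys, not those of the study:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS: `matrix` is alternatives x criteria, `weights` one
    weight per criterion, `benefit` a boolean per criterion (True = higher
    is better). Returns each alternative's closeness score (higher = better)."""
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)  # distance to ideal point
    d_neg = np.linalg.norm(v - anti, axis=1)   # distance to anti-ideal point
    return d_neg / (d_pos + d_neg)

# Toy example: three alternatives, two benefit criteria, equal weights.
scores = topsis([[7, 9], [8, 7], [9, 6]], [0.5, 0.5], [True, True])
```

VIKOR differs mainly in using a compromise (regret-based) aggregation instead of closeness to the ideal, which is why the two rankings can disagree in several positions, as the abstract reports.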
A Component Based Heuristic Search Method with Evolutionary Eliminations
Nurse rostering is a complex scheduling problem that affects hospital
personnel on a daily basis all over the world. This paper presents a new
component-based approach with evolutionary eliminations, for a nurse scheduling
problem arising at a major UK hospital. The main idea behind this technique is
to decompose a schedule into its components (i.e. the allocated shift pattern
of each nurse), and then to implement two evolutionary elimination strategies,
mimicking the natural selection and natural mutation processes, on these components
respectively to iteratively deliver better schedules. The worthiness of all
components in the schedule has to be continuously demonstrated in order for
them to remain there. This demonstration employs an evaluation function which
evaluates how well each component contributes towards the final objective. Two
elimination steps are then applied: the first elimination eliminates a number
of components that are deemed not worthy to stay in the current schedule; the
second elimination may also throw out, with a low level of probability, some
worthy components. The eliminated components are replenished with new ones
generated by a set of constructive heuristics based on local optimality criteria.
Computational results on 52 data instances demonstrate the applicability of
the proposed approach to solving real-world problems.
Comment: 27 pages, 4 figures
Multi-test Decision Tree and its Application to Microarray Data Classification
Objective:
A desirable property of tools used to investigate biological data is that they produce easy-to-understand models and interpretable predictive decisions.
Decision trees are particularly promising in this regard due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, existing algorithms for learning decision trees have a tendency to underfit gene expression data. The main aim of this work is to improve the performance and stability of decision trees with only a small increase in their complexity.
Methods:
We propose a multi-test decision tree (MTDT); our main contribution is the application of several univariate tests in each non-terminal node of the decision tree. We also search for alternative, lower-ranked features in order to obtain more stable and reliable predictions.
Results:
Experimental validation was performed on several real-life gene expression datasets. Comparison with eight classifiers shows that MTDT has statistically significantly higher accuracy than popular decision tree classifiers and is highly competitive with ensemble learning algorithms. The proposed solution outperformed its baseline algorithm on the tested datasets. A study performed on one of the datasets showed that the discovered genes used in the MTDT classification model
are supported by biological evidence in the literature.
Conclusion:
This paper introduces a new type of decision tree which is more suitable for solving biological problems.
MTDTs are relatively easy to analyze and much more powerful at modeling high-dimensional microarray data than their popular counterparts.
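The core multi-test idea, several univariate tests voting on the branch taken at a single node, can be sketched as follows. How MTDT actually selects and ranks the tests follows the paper; this only illustrates the majority-vote routing, and all names are illustrative:

```python
from collections import Counter

def multi_test_split(sample, tests):
    """Route a sample at a multi-test node. Each univariate test is a
    (feature_index, threshold) pair; the branch is chosen by majority vote
    over the tests, which makes the split more stable than relying on a
    single top-ranked feature (e.g. one gene's expression level)."""
    votes = Counter("left" if sample[f] <= t else "right" for f, t in tests)
    return votes.most_common(1)[0][0]

# Three univariate tests on three features, all with threshold 0.5;
# two of the three vote "right", so the sample is routed right.
branch = multi_test_split([0.9, 0.2, 0.7], [(0, 0.5), (1, 0.5), (2, 0.5)])
```

Using alternative, lower-ranked features as extra voters is what gives the tree its stability: a single noisy gene measurement can no longer flip the routing decision on its own.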