The merits of using citation‐based journal weighting schemes to measure research performance in economics: The case of New Zealand
In this study we test various citation-based journal weighting schemes, especially those based on the Liebowitz and Palmer methodology, for their suitability for use in a nationwide research funding model. Using data generated by New Zealand's academic economists, we compare the performance of departments and individuals under each of our selected schemes, and we then contrast these results with those generated by direct citation counts. Our findings suggest that if all citations are deemed to be of equal value, then schemes based on the Liebowitz and Palmer methodology yield problematic outcomes. We also demonstrate that even between weighting schemes based on a common methodology, major differences exist in departmental and individual outcomes.
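For illustration, the sketch below computes journal weights with an iterative, eigenvector-style scheme in the spirit of the Liebowitz and Palmer methodology, in which a citation from a highly weighted journal counts for more than one from a lowly weighted journal; the cross-citation matrix, normalisation, and convergence settings are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: an iterative, Liebowitz-Palmer-style journal weighting scheme.
# The cross-citation matrix and journal counts below are illustrative only.
import numpy as np

def lp_style_weights(citations, n_iter=100, tol=1e-9):
    """Iteratively weight journals so that citations from highly weighted
    journals count for more than citations from lowly weighted journals.

    citations[i, j] = number of citations from journal i to journal j.
    """
    C = citations / citations.sum(axis=1, keepdims=True)  # row-normalise by citing journal
    w = np.ones(C.shape[0]) / C.shape[0]                   # start from equal weights
    for _ in range(n_iter):
        w_new = C.T @ w            # weight received = weighted incoming citations
        w_new /= w_new.sum()       # renormalise so weights sum to one
        if np.abs(w_new - w).max() < tol:
            break
        w = w_new
    return w

# Toy example with three hypothetical journals
cites = np.array([[0, 10, 2],
                  [8,  0, 1],
                  [4,  3, 0]], dtype=float)
print(lp_style_weights(cites))
```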
The Feasibility of Dynamically Granted Permissions: Aligning Mobile Privacy with User Preferences
Current smartphone operating systems regulate application permissions by prompting users on an ask-on-first-use basis. Prior research has shown that this method is ineffective because it fails to account for context: the circumstances under which an application first requests access to data may be vastly different from the circumstances under which it subsequently requests access. We performed a longitudinal 131-person field study to analyze the contextuality behind user privacy decisions to regulate access to sensitive resources. We built a classifier to make privacy decisions on the user's behalf by detecting when context has changed and, when necessary, inferring privacy preferences based on the user's past decisions and behavior. Our goal is to automatically grant appropriate resource requests without further user intervention, deny inappropriate requests, and only prompt the user when the system is uncertain of the user's preferences. We show that our approach can accurately predict users' privacy decisions 96.8% of the time, which is a four-fold reduction in error rate compared to current systems.
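A minimal sketch of the kind of confidence-thresholded classifier described here, assuming a generic contextual feature vector and an off-the-shelf model; the features, threshold, and toy data are illustrative and not the study's actual setup.

```python
# Hedged sketch: a contextual allow/deny classifier with an "ask the user"
# fallback when the model is uncertain. Features and threshold are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [app_in_foreground, screen_on, same_context_as_first_use, hour_of_day]
X = np.array([[1, 1, 1,  9],
              [0, 0, 0,  3],
              [1, 1, 0, 14],
              [0, 1, 0, 23]])
y = np.array([1, 0, 1, 0])  # 1 = user allowed the request, 0 = denied

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def decide(features, prompt_threshold=0.75):
    """Grant or deny automatically when confident; otherwise prompt the user."""
    p_allow = clf.predict_proba([features])[0][1]
    if p_allow >= prompt_threshold:
        return "grant"
    if p_allow <= 1 - prompt_threshold:
        return "deny"
    return "prompt user"

print(decide([1, 1, 1, 10]))
```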
Semi-Counterfactual Risk Minimization Via Neural Networks
Counterfactual risk minimization is a framework for offline policy
optimization with logged data which consists of context, action, propensity
score, and reward for each sample point. In this work, we build on this
framework and propose a learning method for settings where the rewards for some
samples are not observed, and so the logged data consists of a subset of
samples with unknown rewards and a subset of samples with known rewards. This
setting arises in many application domains, including advertising and
healthcare. While reward feedback is missing for some samples, it is possible
to leverage the unknown-reward samples in order to minimize the risk, and we
refer to this setting as semi-counterfactual risk minimization. To approach
this kind of learning problem, we derive new upper bounds on the true risk
under the inverse propensity score estimator. We then build upon these bounds
to propose a regularized counterfactual risk minimization method, where the
regularization term is based on the logged unknown-rewards dataset only; hence
it is reward-independent. We also propose another algorithm based on generating
pseudo-rewards for the logged unknown-rewards dataset. Experimental results
with neural networks and benchmark datasets indicate that these algorithms can
leverage the logged unknown-rewards dataset besides the logged known-reward
dataset
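The following sketch illustrates the general shape of such an objective, assuming the standard inverse propensity score (IPS) estimator on the known-reward samples plus a reward-independent penalty computed on the unknown-reward samples; the specific penalty form and the toy tensors are illustrative, not the paper's bound-derived regularizer.

```python
# Hedged sketch: IPS risk on known-reward samples plus a reward-independent
# regulariser on unknown-reward samples. The squared-importance-weight penalty
# is an illustrative placeholder for the paper's bound-derived term.
import torch

def ips_risk(policy_probs, logged_propensities, rewards):
    """Standard IPS estimate of the risk (negative reward) of the target policy."""
    weights = policy_probs / logged_propensities
    return -(weights * rewards).mean()

def unknown_reward_regularizer(policy_probs_u, logged_propensities_u):
    """Penalty that depends only on importance weights of unknown-reward samples."""
    weights = policy_probs_u / logged_propensities_u
    return (weights ** 2).mean()

def semi_cfr_objective(pp_known, prop_known, rewards,
                       pp_unknown, prop_unknown, lam=0.1):
    return ips_risk(pp_known, prop_known, rewards) + \
           lam * unknown_reward_regularizer(pp_unknown, prop_unknown)

# Toy call with illustrative tensors
pp_k, prop_k = torch.tensor([0.8, 0.3]), torch.tensor([0.5, 0.6])
r = torch.tensor([1.0, 0.0])
pp_u, prop_u = torch.tensor([0.7, 0.2]), torch.tensor([0.4, 0.5])
print(semi_cfr_objective(pp_k, prop_k, r, pp_u, prop_u))
```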
Policy-Adaptive Estimator Selection for Off-Policy Evaluation
Off-policy evaluation (OPE) aims to accurately evaluate the performance of
counterfactual policies using only offline logged data. Although many
estimators have been developed, there is no single estimator that dominates the
others, because the estimators' accuracy can vary greatly depending on a given
OPE task such as the evaluation policy, number of actions, and noise level.
Thus, the data-driven estimator selection problem is becoming increasingly
important and can have a significant impact on the accuracy of OPE. However,
identifying the most accurate estimator using only the logged data is quite
challenging because the ground-truth estimation accuracy of estimators is
generally unavailable. This paper studies this challenging problem of estimator
selection for OPE for the first time. In particular, we enable an estimator
selection that is adaptive to a given OPE task, by appropriately subsampling
available logged data and constructing pseudo policies useful for the
underlying estimator selection task. Comprehensive experiments on both
synthetic and real-world company data demonstrate that the proposed procedure
substantially improves the estimator selection compared to a non-adaptive
heuristic.Comment: accepted at AAAI'2
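A rough sketch of the general idea, assuming two standard candidate estimators (IPS and self-normalized IPS) and a simplified bootstrap-plus-pseudo-ground-truth loop; the paper's actual subsampling and pseudo-policy construction differ, so everything below is illustrative.

```python
# Hedged sketch: score candidate OPE estimators on a surrogate task built from
# the logged data itself, then pick the one with the lowest estimated error.
import numpy as np

rng = np.random.default_rng(0)

def ips(policy_probs, propensities, rewards):
    return np.mean(policy_probs / propensities * rewards)

def snips(policy_probs, propensities, rewards):
    w = policy_probs / propensities
    return np.sum(w * rewards) / np.sum(w)

def select_estimator(estimators, logged, pseudo_policy_probs, pseudo_true_value,
                     n_subsamples=100):
    """Rank estimators by mean squared error against a pseudo ground truth."""
    n = len(logged["rewards"])
    errors = {name: [] for name in estimators}
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n, replace=True)          # bootstrap subsample
        for name, est in estimators.items():
            v = est(pseudo_policy_probs[idx],
                    logged["propensities"][idx],
                    logged["rewards"][idx])
            errors[name].append((v - pseudo_true_value) ** 2)
    return min(errors, key=lambda k: np.mean(errors[k]))

# Toy usage with synthetic logged data
logged = {"propensities": rng.uniform(0.2, 0.8, 500),
          "rewards": rng.binomial(1, 0.5, 500).astype(float)}
pseudo_probs = rng.uniform(0.2, 0.8, 500)
print(select_estimator({"IPS": ips, "SNIPS": snips},
                       logged, pseudo_probs, pseudo_true_value=0.5))
```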
The consumer empowerment index. A measure of skills, awareness and engagement of European consumers
The Consumer Empowerment Index is a pilot exercise aimed at obtaining a first snapshot of the state of consumer empowerment as measured by the Eurobarometer survey (Special Eurobarometer n. 342). It is neither a final answer on empowerment nor a comprehensive study of all the different facets of consumer empowerment; rather, it is meant to foster the debate on the determinants of empowerment and their importance for protecting consumers. This report describes the steps followed in the construction of the Consumer Empowerment Index: the definition of the theoretical framework, the quantification of categorical survey questions, the univariate and multivariate analysis of the dataset, and the set of weights used for calculating the scores and ranks of the Index. The report also discusses the robustness of the results and the relationship between the Index and the socio-economic characteristics of the respondents, in order to identify the features of the most vulnerable consumers. The Consumer Empowerment Index identifies Norway as the leading country, followed by Finland, the Netherlands, Germany and Denmark. The middle of the ranking is dominated by western countries such as Belgium, France, and the UK, with an average score 13% lower than the top five. At the bottom of the Index are some Eastern and Baltic countries, such as Bulgaria, Lithuania, Poland, and Romania, with a score 31% lower on average (the gap reaches 40% in Awareness of consumer legislation and 38% in Consumer skills). A group of southern countries, Italy, Portugal, and Spain, score poorly in the Index, especially in the Consumer skills pillar, where the gap with the top performers reaches 30%.
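As an illustration of how such a composite index turns pillar scores and weights into country scores and ranks, here is a minimal sketch; the weights, the numbers, and the third pillar name are assumptions for the example only.

```python
# Hedged sketch of a composite-index calculation: pillar scores are combined
# with a set of weights into a single country score and then ranked.
# "Consumer skills" and "Awareness of consumer legislation" come from the
# abstract; "Consumer engagement", the weights, and all scores are illustrative.
import numpy as np

pillars = ["Consumer skills", "Awareness of consumer legislation", "Consumer engagement"]
weights = np.array([0.4, 0.3, 0.3])          # illustrative weights, summing to 1

country_scores = {                           # illustrative pillar scores (0-100)
    "Norway":  np.array([78, 72, 70]),
    "Romania": np.array([50, 43, 48]),
}

index = {c: float(s @ weights) for c, s in country_scores.items()}
for country, score in sorted(index.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {score:.1f}")
```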
Experiments in terabyte searching, genomic retrieval and novelty detection for TREC 2004
In TREC 2004, Dublin City University took part in three tracks: Terabyte (in collaboration with University College Dublin), Genomic, and Novelty. In this paper we discuss each track separately and present separate conclusions from this work. In addition, we present a general description of a text retrieval engine that we have developed in the last year to support our experiments in large-scale, distributed information retrieval, and which underlies all of the track experiments described in this document.
Generating Explanations of Robot Policies in Continuous State Spaces
Transparency in HRI describes the method of making the current state of a robot or intelligent agent understandable to a human user. Applying transparency mechanisms to robots improves the quality of interaction as well as the user experience.
Explanations are an effective way to make a robot’s decision making transparent. We introduce a framework that uses natural language labels attached to a region in the continuous state space of the robot to automatically generate local explanations of a robot’s policy.
We conducted a pilot study investigating how the generated explanations helped users understand and reproduce a robot policy in a debugging scenario.
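A minimal sketch of the underlying idea, assuming axis-aligned labelled regions and a fixed policy action; the region bounds, labels, and explanation template are illustrative placeholders, not the framework's actual components.

```python
# Hedged sketch: attach natural-language labels to regions of a continuous
# state space and use them to phrase a local explanation of the policy's action.
import numpy as np

regions = [
    {"low": np.array([0.0, 0.0]), "high": np.array([0.5, 0.5]),
     "label": "near the charging station"},
    {"low": np.array([0.5, 0.0]), "high": np.array([1.0, 1.0]),
     "label": "close to the obstacle"},
]

def region_label(state):
    """Return the label of the first region containing the state."""
    for r in regions:
        if np.all(state >= r["low"]) and np.all(state <= r["high"]):
            return r["label"]
    return "in an unlabeled area"

def explain(state, action):
    return f"I chose to {action} because I am {region_label(state)}."

print(explain(np.array([0.7, 0.3]), "slow down"))
```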
Decision making study: methods and applications of evidential reasoning and judgment analysis
The study of decision making is a multi-disciplinary research field involving operations researchers, management scientists, statisticians, mathematical psychologists, economists and others. This study aims to investigate the theory and methodology of decision making research and to apply them to different contexts in real cases.
The study reviews the literature on Multiple Criteria Decision Making (MCDM), the Evidential Reasoning (ER) approach, the Naturalistic Decision Making (NDM) movement, Social Judgment Theory (SJT), and the Adaptive Toolbox (AT) program. On the basis of this literature, two methods, Evidence-based Trade-Off (EBTO) and Judgment Analysis with Heuristic Modelling (JA-HM), are proposed and developed to address decision making problems under different conditions.
In the EBTO method, we propose a novel framework to aid people's decision making under uncertainty and imprecise goals. Under the framework, the imprecise goal is objectively modelled through an analytical structure and is independent of the task requirement; the task requirement is specified by the trade-off strategy among the criteria of the analytical structure through an importance weighting process, and is subject to the changing requirements of a particular decision making task; and the available evidence that can contribute to evaluating the general performance of the decision alternatives is formulated with belief structures, which are capable of capturing the various forms of uncertainty that arise from the absence of data, incomplete information and subjective judgments.
The EBTO method was further applied in a case study of Soldier system decision making. The application demonstrated that EBTO, as a tool, is able to provide a holistic analysis of the requirements of Soldier missions, the physical conditions of Soldiers, and the capability of their equipment and weapon systems, which is critical in this domain.
Drawing on the cross-disciplinary literature from NDM and AT, the JA-HM method extends the traditional Judgment Analysis (JA) method, through a number of novel methodological procedures, to account for the unique features of decision making tasks under extreme time pressure and dynamically shifting situations. These procedures include: the notion of a decision point, to deconstruct dynamically shifting situations so that decision problems can be identified and formulated; the classification of routine and non-routine problems, and an associated data alignment process, to enable meaningful analysis of decision data across different decision makers (DMs); the notion of a composite cue, to account for the DMs' iterative process of information perception and comprehension in a dynamic task environment; the application of computational models of heuristics, to account for the time constraints and process dynamics of the DMs' decision making; and the application of a cross-validation process, to enable the methodological principle of competitive testing of decision models.
The JA-HM was further applied in a case study of fire emergency decision making. The application has been the first behavioural test of the validity of computational models of heuristics in predicting DMs' decision making during fire emergency response, and the first behavioural test of the validity of non-compensatory heuristics in predicting DMs' decisions on a ranking task. The findings extend the literature on AT and NDM and have implications for fire emergency decision making.
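To illustrate how belief structures can represent uncertain assessments across weighted criteria, here is a simplified sketch that combines them with a plain weighted average rather than the full ER recursive combination rule; all grades, weights, and belief degrees are illustrative.

```python
# Hedged sketch: a belief structure assigns degrees of belief to assessment
# grades for each criterion; the structures are combined here with criterion
# weights by a simple weighted average. The full ER algorithm uses a recursive
# evidence-combination rule; this simplification and all numbers are illustrative.
import numpy as np

grades = ["poor", "average", "good", "excellent"]
criterion_weights = np.array([0.5, 0.3, 0.2])    # trade-off strategy among criteria

# belief[i, j] = degree of belief that the alternative attains grade j on
# criterion i; rows may sum to less than 1 when information is incomplete.
belief = np.array([[0.0, 0.2, 0.6, 0.2],
                   [0.1, 0.4, 0.4, 0.0],         # 0.1 of belief unassigned (missing data)
                   [0.0, 0.0, 0.3, 0.7]])

aggregated = criterion_weights @ belief
for g, b in zip(grades, aggregated):
    print(f"{g}: {b:.2f}")
print(f"unassigned: {1 - aggregated.sum():.2f}")
```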