Active Inverse Reward Design
Designers of AI agents often iterate on the reward function in a
trial-and-error process until they get the desired behavior, but this only
guarantees good behavior in the training environment. We propose structuring
this process as a series of queries asking the user to compare between
different reward functions. Thus we can actively select queries for maximum
informativeness about the true reward. In contrast to approaches asking the
designer for optimal behavior, this allows us to gather additional information
by eliciting preferences between suboptimal behaviors. After each query, we
need to update the posterior over the true reward function from observing the
proxy reward function chosen by the designer. The recently proposed Inverse
Reward Design (IRD) enables this. Our approach substantially outperforms IRD in
test environments. In particular, it can query the designer about
interpretable, linear reward functions and still infer non-linear ones.
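A minimal sketch of the Bayesian update the abstract describes, over a small discrete hypothesis space (the weight vectors, trajectory features, and likelihood temperature below are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

# Hypothetical candidate "true" reward weight vectors (the hypothesis space).
true_rewards = np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [0.5, 0.5]])
# Feature counts of the trajectory that is optimal under each candidate proxy.
proxy_features = np.array([[3.0, 1.0],
                           [1.0, 3.0]])

def posterior_update(prior, chosen_proxy, beta=1.0):
    """IRD-style update: the designer is modelled as choosing the proxy whose
    optimal behavior scores highest under the true reward, softly (temperature
    beta)."""
    values = true_rewards @ proxy_features.T             # (n_hyp, n_proxy)
    likelihood = np.exp(beta * values)
    likelihood /= likelihood.sum(axis=1, keepdims=True)  # normalize over proxies
    post = prior * likelihood[:, chosen_proxy]
    return post / post.sum()

prior = np.ones(3) / 3
post = posterior_update(prior, chosen_proxy=0)
# Hypotheses that weight feature 0 heavily gain posterior mass when the
# designer picks proxy 0.
```

Active query selection would then score candidate proxy pairs by how much each expected answer shrinks this posterior.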
Should Optimal Designers Worry About Consideration?
Consideration set formation using non-compensatory screening rules is a vital
component of real purchasing decisions with decades of experimental validation.
Marketers have recently developed statistical methods that can estimate
quantitative choice models that include consideration set formation via
non-compensatory screening rules. But is capturing consideration within models
of choice important for design? This paper reports on a simulation study of
vehicle portfolio design, in which households screen over vehicle body style,
built to explore how important capturing consideration rules is for optimal
design. We generate synthetic market share data, fit a variety of discrete
choice models to the data, and then optimize design decisions using the
estimated models. Model predictive power, design "error", and profitability
relative to ideal profits are compared as the amount of market data available
increases. We find that even when estimated compensatory models provide
relatively good predictive accuracy, they can lead to sub-optimal design
decisions when the population uses consideration behavior; convergence of
compensatory models to non-compensatory behavior is likely to require
unrealistic amounts of data; and modeling heterogeneity in non-compensatory
screening is more valuable than heterogeneity in compensatory trade-offs. This
supports the claim that designers should carefully identify consideration
behaviors before optimizing product portfolios. We also find that higher model
predictive power does not necessarily imply better design decisions; that is,
different model forms can provide "descriptive" rather than "predictive"
information that is useful for design.
Comment: 5 figures, 26 pages. In press at ASME Journal of Mechanical Design (as of 3/17/15).
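The two-stage structure the abstract contrasts, non-compensatory screening followed by compensatory trade-offs, can be sketched as follows (the vehicle data and utility coefficients are hypothetical, not the study's estimates):

```python
import math

# Hypothetical alternatives: (body_style, price_utility, fuel_economy_utility).
vehicles = [
    ("sedan", -1.0, 0.8),
    ("suv",   -1.5, 0.2),
    ("truck", -2.0, 0.5),
]

def choice_probs(considered_styles, betas=(1.0, 1.0)):
    """Two-stage model: a non-compensatory screen on body style, then a
    compensatory multinomial logit over the surviving alternatives."""
    survivors = [v for v in vehicles if v[0] in considered_styles]
    utils = [betas[0] * p + betas[1] * f for _, p, f in survivors]
    denom = sum(math.exp(u) for u in utils)
    return {name: math.exp(u) / denom
            for (name, _, _), u in zip(survivors, utils)}

# A household that screens out trucks never chooses one, regardless of
# trade-offs -- behavior a purely compensatory model can only approximate.
probs = choice_probs({"sedan", "suv"})
```

A one-stage compensatory model would have to push the truck's utility far down to mimic this hard screen, which is the convergence problem the abstract describes.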
The design of aircraft using the decision support problem technique
The Decision Support Problem Technique for unified design, manufacturing and maintenance is being developed at the Systems Design Laboratory at the University of Houston. This involves the development of a domain-independent method (and the associated software) that can be used to process domain-dependent information and thereby provide support for human judgment. In a computer assisted environment, this support is provided in the form of optimal solutions to Decision Support Problems.
Investigating the effectiveness of an efficient label placement method using eye movement data
This paper focuses on improving the efficiency and effectiveness of dynamic and interactive maps in relation to the user. A label placement method with improved algorithmic efficiency is presented. Since this algorithm influences the actual placement of the name labels on the map, we test whether the more efficient algorithm also creates more effective maps: how well is the information processed by the user? We tested 30 participants while they were working on a dynamic and interactive map display. Their task was to locate geographical names on each of the presented maps. Their eye movements were registered together with the time at which a given label was found. The gathered data reveal no difference in the users' response times, nor in the number and duration of fixations, between the two map designs. The results of this study show that the efficiency of label placement algorithms can be improved without disturbing the user's cognitive map. Consequently, we created a more efficient map without affecting its effectiveness towards the user.
Designs for Stated Preference Experiments
We explore the use of different strategies for the construction of optimal choice experiments and their impact on the overall efficiency of the resulting design. We then evaluate how these choice designs meet the desired characteristics of optimal choice designs (orthogonality, level balance, utility balance and minimum level overlap). We further explore the feasibility of using entropy as a secondary measure of design optimality, and find that current algorithms afford little flexibility for using such a secondary measure. We also study the impact of misspecification of the assumed parameter values used in creating optimal choice designs, and find that this impact varies widely with the discrepancy between the true and assumed parameter values; entropy becomes a more feasible secondary measure of design optimality once the potential for such misspecification is considered. Current design and analysis strategies for stated preference experiments assume that compensatory decisions are made. We consider how different decision strategies may be represented by manipulating the assumed parameter values used in creating the choice designs, and evaluate the consequences of misspecifying the decision strategy in this context. Given the high prevalence of no-choice selections in stated preference experiments, we study how different measures of choice complexity affect selection of the no-choice alternative. We conclude by suggesting a comprehensive strategy to follow in the creation of choice designs.
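The entropy measure discussed above can be computed directly from the multinomial logit choice probabilities of a choice set; a minimal sketch (the utilities are illustrative assumptions):

```python
import math

def choice_set_entropy(utilities):
    """Shannon entropy of the MNL choice probabilities for one choice set.
    Maximum entropy corresponds to perfect utility balance among alternatives;
    near-zero entropy means one alternative dominates."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(p * math.log(p) for p in probs)

balanced = choice_set_entropy([0.5, 0.5, 0.5])   # equal utilities: max entropy
skewed   = choice_set_entropy([3.0, 0.0, -3.0])  # one dominant alternative
```

Because the entropy depends on the assumed parameter values through the utilities, misspecifying those values shifts the entropy ranking of candidate designs, which is the sensitivity the abstract examines.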
Constructive Preference Elicitation over Hybrid Combinatorial Spaces
Preference elicitation is the task of suggesting a highly preferred
configuration to a decision maker. The preferences are typically learned by
querying the user for choice feedback over pairs or sets of objects. In its
constructive variant, new objects are synthesized "from scratch" by maximizing
an estimate of the user utility over a combinatorial (possibly infinite) space
of candidates. In the constructive setting, most existing elicitation
techniques fail because they rely on exhaustive enumeration of the candidates.
A previous solution explicitly designed for constructive tasks comes with no
formal performance guarantees, and can be very expensive in (or unapplicable
to) problems with non-Boolean attributes. We propose the Choice Perceptron, a
Perceptron-like algorithm for learning user preferences from set-wise choice
feedback over constructive domains and hybrid Boolean-numeric feature spaces.
We provide a theoretical analysis on the attained regret that holds for a large
class of query selection strategies, and devise a heuristic strategy that aims
at optimizing the regret in practice. Finally, we demonstrate its effectiveness
by empirical evaluation against existing competitors on constructive scenarios
of increasing complexity.
Comment: AAAI 2018.
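A minimal sketch of a Perceptron-like update from set-wise choice feedback, in the spirit of the Choice Perceptron (the feature vectors and step size are hypothetical; this is not the paper's exact algorithm or regret-optimizing query strategy):

```python
import numpy as np

def choice_perceptron_step(w, candidates, chosen_idx, eta=0.1):
    """One update from set-wise choice feedback: move the weight vector toward
    the user's chosen candidate and away from the current utility maximizer
    (no update when they coincide)."""
    predicted_idx = int(np.argmax(candidates @ w))
    if predicted_idx != chosen_idx:
        w = w + eta * (candidates[chosen_idx] - candidates[predicted_idx])
    return w

# Hypothetical feature vectors for a query set of three configurations.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
w = np.zeros(2)
w = choice_perceptron_step(w, X, chosen_idx=1)
# After the update, the user's choice is the utility maximizer under w.
```

In the constructive setting, `candidates` would not be enumerated but synthesized by maximizing the current utility estimate over the combinatorial space.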
Neo-Schumpeterian Simulation Models
The use of simulation modelling techniques by neo-Schumpeterian economists dates back to Nelson and Winter’s 1982 book “An Evolutionary Theory of Economic Change”, and has rapidly expanded ever since. This paper considers the way in which successive generations of models have extended the boundaries of research (both with respect to the range of phenomena considered and the different dimensions of innovation addressed), while simultaneously introducing novel modelling techniques. At the same time, the paper highlights the distinct set of features that have emerged in these neo-Schumpeterian models, and which set them apart from the models developed by other schools. In particular, they share a distinct view about the type of world in which real economic agents operate, and invariably contain a generic set of algorithms. In addition to reviewing past models, the paper considers a number of pressing issues that remain unresolved and which modellers will need to address in future. Notable amongst these are the methodological relationship between empirical studies and simulation (e.g. ‘history friendly modelling’), the development of common standards for sensitivity analysis, and the need to further extend the boundaries of research in order to consider important aspects of innovation and technical change.