Body Knowledge and Uncertainty Modeling for Monocular 3D Human Body Reconstruction
While 3D body reconstruction methods have made remarkable progress recently,
it remains difficult to acquire the sufficiently accurate and numerous 3D
supervisions required for training. In this paper, we propose \textbf{KNOWN}, a
framework that effectively utilizes body \textbf{KNOW}ledge and
u\textbf{N}certainty modeling to compensate for insufficient 3D supervisions.
KNOWN exploits a comprehensive set of generic body constraints derived from
well-established body knowledge. These generic constraints precisely and
explicitly characterize the reconstruction plausibility and enable 3D
reconstruction models to be trained without any 3D data. Moreover, existing
methods typically use images from multiple datasets during training, which can
result in data noise (\textit{e.g.}, inconsistent joint annotation) and data
imbalance (\textit{e.g.}, minority images representing unusual poses or
captured from challenging camera views). KNOWN solves these problems through a
novel probabilistic framework that models both aleatoric and epistemic
uncertainty. Aleatoric uncertainty is encoded in a robust Negative
Log-Likelihood (NLL) training loss, while epistemic uncertainty is used to
guide model refinement. Experiments demonstrate that KNOWN's body
reconstruction outperforms prior weakly-supervised approaches, particularly on
the challenging minority images.
Comment: ICCV 202
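The abstract does not give the exact form of KNOWN's robust NLL loss, but the general idea of encoding aleatoric uncertainty in a negative log-likelihood can be illustrated with a minimal Gaussian sketch (the function name and Gaussian form below are illustrative assumptions, not the paper's formulation):

```python
import math

def gaussian_nll(y, mu, sigma):
    """Per-sample Gaussian negative log-likelihood.

    Predicting sigma alongside mu lets a model express aleatoric
    uncertainty: inflating sigma shrinks the squared-error term on a
    noisy target, at the cost of the log(sigma) penalty.
    """
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

# A confident prediction on a clean target is cheap...
clean = gaussian_nll(1.0, 1.0, 0.1)
# ...the same confidence on a noisy target is expensive...
noisy_confident = gaussian_nll(2.0, 1.0, 0.1)
# ...so a model trained on this loss learns to report higher sigma
# on noisy annotations, down-weighting their gradient contribution.
noisy_hedged = gaussian_nll(2.0, 1.0, 1.0)
```

Under such a loss, inconsistent joint annotations contribute less to training, which matches the motivation for aleatoric modeling stated in the abstract.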
Visualizations for an Explainable Planning Agent
In this paper, we report on the visualization capabilities of an Explainable
AI Planning (XAIP) agent that can support human-in-the-loop decision making.
Imposing transparency and explainability requirements on such agents is
especially important in order to establish trust and common ground with the
end-to-end automated planning system. Visualizing the agent's internal
decision-making processes is a crucial step towards achieving this. This may
include externalizing the "brain" of the agent -- starting from its sensory
inputs up to the progressively higher-order decisions it makes to drive its
planning components. We also show how the planner can bootstrap on the
latest techniques in explainable planning to cast plan visualization as a plan
explanation problem, and thus provide concise model-based visualization of its
plans. We demonstrate these functionalities in the context of the automated
planning components of a smart assistant in an instrumented meeting space.
Comment: Previously "Mr. Jones -- Towards a Proactive Smart Room Orchestrator"
(appeared in the AAAI 2017 Fall Symposium on Human-Agent Groups)
Agents and Service-Oriented Computing for Autonomic Computing: A Research Agenda
Autonomic computing is the solution proposed to cope with the complexity of today's computing environments. Self-management, an important element of autonomic computing, is also characteristic of single- and multi-agent systems, as well as systems based on service-oriented architectures. Combining these technologies can be profitable for all; in particular, for the development of autonomic computing systems.
Model Selection in an Information Economy: Choosing what to Learn
In an economy in which a producer must learn the preferences of a consumer population, it faces a classic decision problem: when to explore and when to exploit. If the producer has a limited number of chances to experiment, it must explicitly weigh the cost of learning (in terms of foregone profit) against the value of the information acquired. Information goods add an additional dimension to this problem: due to their flexibility, they can be bundled and priced according to a number of different price schedules. An optimizing producer should consider the profit each price schedule can extract, as well as the difficulty of learning that schedule.
In this paper, we demonstrate the tradeoff between complexity and profitability for a number of common price schedules. We begin with a one-shot decision as to which schedule to learn. Schedules with moderate complexity are preferred in the short and medium term, as they are learned quickly yet extract a significant fraction of the available profit. We then turn to the repeated version of this one-shot decision and show that moderate-complexity schedules, in particular the two-part tariff, perform well when the producer must adapt to nonstationarity in the consumer population. When a producer can dynamically change schedules as it learns, it can use an explicit decision-theoretic formulation to greedily select the schedule that appears to yield the greatest profit in the next period.
http://deepblue.lib.umich.edu/bitstream/2027.42/50438/1/comp-intel.pd
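The two-part tariff highlighted above (a flat access fee plus a per-unit price) can be sketched with a toy profit computation. The linear-demand consumers and function below are illustrative assumptions, not the paper's model:

```python
def two_part_tariff_profit(fee, price, consumers, cost=0.0):
    """Profit from a two-part tariff: each consumer pays a flat `fee`
    for access plus `price` per unit consumed.

    Each consumer has linear demand q(p) = max(a - b*p, 0) and buys
    only if consumer surplus at `price` covers the fee. Marginal cost
    is ~0 for information goods, hence the default cost=0.
    """
    profit = 0.0
    for a, b in consumers:
        q = max(a - b * price, 0.0)
        # Surplus under linear demand is the triangle q^2 / (2b).
        surplus = q * q / (2.0 * b) if q > 0 else 0.0
        if surplus >= fee:
            profit += fee + (price - cost) * q
    return profit

consumers = [(10.0, 1.0), (6.0, 1.0)]
# Pricing at marginal cost and extracting surplus through the fee
# captures the full surplus of every consumer who stays in the market.
p_efficient = two_part_tariff_profit(fee=18.0, price=0.0, consumers=consumers)
```

Only two parameters (fee, price) must be learned, which is why the abstract finds this schedule attractive when learning is costly.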
Information Bundling in a Dynamic Environment
Markets for digital information goods provide the possibility of exploring new and more complex pricing schemes, due to information goods' flexibility and negligible marginal cost. In this paper we compare the dynamic performance of price schedules of varying complexity under two different specifications of consumer demand shifts.
http://deepblue.lib.umich.edu/bitstream/2027.42/50442/1/sce_dynamic_bundling.pd
Interval Change-Point Detection for Runtime Probabilistic Model Checking
Recent probabilistic model checking techniques can verify reliability and performance properties of software systems affected by parametric uncertainty. This involves modelling the system behaviour using interval Markov chains, i.e., Markov models with transition probabilities or rates specified as intervals. These intervals can be updated continually using Bayesian estimators with imprecise priors, enabling the verification of the system properties of interest at runtime. However, Bayesian estimators are slow to react to sudden changes in the actual value of the estimated parameters, yielding inaccurate intervals and leading to poor verification results after such changes. To address this limitation, we introduce an efficient interval change-point detection method, and we integrate it with a state-of-the-art Bayesian estimator with imprecise priors. Our experimental results show that the resulting end-to-end Bayesian approach to change-point detection and estimation of interval Markov chain parameters effectively handles a wide range of sudden changes in parameter values, and supports runtime probabilistic model checking under parametric uncertainty.
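The core problem (a slow-reacting estimator after a sudden parameter change) can be illustrated with a deliberately naive sliding-window detector; the paper's interval method is more sophisticated, and everything below is an illustrative assumption:

```python
def detect_change(observations, window=50, threshold=0.2):
    """Naive change-point check for a Bernoulli-observed transition
    probability: compare the success rate in the most recent window
    against the rate over all earlier observations. A gap above
    `threshold` signals that the running estimate is likely stale
    and should be reset rather than updated incrementally.
    """
    if len(observations) < 2 * window:
        return False
    recent = observations[-window:]
    past = observations[:-window]
    rate_recent = sum(recent) / len(recent)
    rate_past = sum(past) / len(past)
    return abs(rate_recent - rate_past) > threshold

# Parameter jumps from ~0.2 to ~0.8 halfway through the stream; the
# all-history rate lags behind, so the gap to the recent window grows.
stream = [0, 0, 0, 0, 1] * 20 + [1, 1, 1, 1, 0] * 20
detected = detect_change(stream, window=50, threshold=0.2)
```

The comparison shows why plain Bayesian averaging over the full history reacts slowly: the "past" rate still reflects the pre-change regime long after the jump.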
Proceedings of the Thirteenth International Society of Sports Nutrition (ISSN) Conference and Expo
Meeting Abstracts: Proceedings of the Thirteenth International Society of Sports Nutrition (ISSN) Conference and Expo. Clearwater Beach, FL, USA, 9-11 June 201
Pricing Information Bundles in a Dynamic Environment
We explore a scenario in which a monopolist producer of information goods seeks to maximize its profits in a market where consumer demand shifts frequently and unpredictably. The producer may set an arbitrarily complex price schedule: a function that maps the set of purchased items to a price. However, lacking direct knowledge of consumer demand, it cannot compute the optimal schedule. Instead, it attempts to optimize profits via trial and error. By means of a simple model of consumer demand and a modified nonlinear optimization routine, we study a variety of parametrizations of the price schedule and quantify some of the relationships among learnability, complexity, and profitability. In particular, we show that fixed pricing or simple two-parameter dynamic pricing schedules are preferred when demand shifts frequently, but that dynamic pricing based on more complex schedules tends to be most profitable when demand shifts very infrequently.
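The trial-and-error loop described above can be sketched as greedy schedule selection over running profit estimates; the exponential-forgetting update and all names below are illustrative assumptions, not the paper's optimization routine:

```python
def update_estimate(old, observed, alpha=0.3):
    """Exponentially weighted profit estimate: discounting old
    observations lets the producer track nonstationary demand."""
    return (1 - alpha) * old + alpha * observed

# Running profit estimates for two candidate schedules.
estimates = {"fixed": 10.0, "two_part": 10.0}

# Observe one period of profit under each schedule...
estimates["fixed"] = update_estimate(estimates["fixed"], 8.0)
estimates["two_part"] = update_estimate(estimates["two_part"], 14.0)

# ...then greedily commit to the apparent best for the next period.
best = max(estimates, key=estimates.get)
```

The forgetting factor alpha trades adaptation speed against estimate noise, mirroring the abstract's finding that simpler schedules win when demand shifts frequently.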