Mitigation factors: Assessing the costs of reducing GHG emissions
The debate over the costs of GHG emission reduction has become more complex recently as disagreements over the existence of economic and environmental double dividends have been added to discussions over the existence of a negative cost potential. We argue that basic assumptions about economic efficiency, the (sub-)optimality of the baseline and the rate of technical change are more important than model structure, and we underline the importance of the timing of decisions for determining the costs. Moreover, the use of a single baseline ‘no policy’ scenario and several policy intervention scenarios may be fundamentally misleading in the longer term, simply because the very idea of a business-as-usual scenario is deeply problematic. Ultimately, the debate turns on political judgments about the desirability of alternative development paths. Copyright © 1996 Elsevier Science Ltd.
Keywords: Greenhouse gas emissions; Costs of GHG reduction; Mitigation options
Negative Results in Computer Vision: A Perspective
A negative result is when the outcome of an experiment or a model is not what
is expected or when a hypothesis does not hold. Despite being often overlooked
in the scientific community, negative results are results and they carry value.
While this topic has been extensively discussed in other fields such as social
sciences and biosciences, less attention has been paid to it in the computer
vision community. The unique characteristics of computer vision, particularly
its experimental aspect, call for a special treatment of this matter. In this
paper, I will address what makes negative results important, how they should be
disseminated and incentivized, and what lessons can be learned from cognitive
vision research in this regard. Further, I will discuss issues such as computer
vision and human vision interaction, experimental design and statistical
hypothesis testing, explanatory versus predictive modeling, performance
evaluation, model comparison, as well as computer vision research culture.
SkinnerDB: Regret-Bounded Query Evaluation via Reinforcement Learning
SkinnerDB is designed from the ground up for reliable join ordering. It
maintains no data statistics and uses no cost or cardinality models. Instead,
it uses reinforcement learning to learn optimal join orders on the fly, during
the execution of the current query. To that purpose, we divide the execution of
a query into many small time slices. Different join orders are tried in
different time slices. We merge result tuples generated according to different
join orders until a complete result is obtained. By measuring execution
progress per time slice, we identify promising join orders as execution
proceeds.
Along with SkinnerDB, we introduce a new quality criterion for query
execution strategies. We compare expected execution cost against execution cost
for an optimal join order. SkinnerDB features multiple execution strategies
that are optimized for that criterion. Some of them can be executed on top of
existing database systems. For maximal performance, we introduce a customized
execution engine, facilitating fast join order switching via specialized
multi-way join algorithms and tuple representations.
We experimentally compare SkinnerDB's performance against various baselines,
including MonetDB, Postgres, and adaptive processing methods. We consider
various benchmarks, including the join order benchmark and TPC-H variants with
user-defined functions. Overall, the overheads of reliable join ordering are
negligible compared to the performance impact of the occasional, catastrophic
join order choice.
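The time-slice scheme described above can be viewed as a multi-armed bandit problem over candidate join orders. The sketch below illustrates that idea with a standard UCB1 selection rule; it is a simplified illustration only, not SkinnerDB's actual regret-bounded strategy, and `execute_slice` is a hypothetical hook returning the execution progress achieved during one fixed-length slice.

```python
import math

def ucb_join_ordering(join_orders, execute_slice, total_slices=1000):
    """Pick a join order per time slice using the UCB1 bandit rule.

    join_orders: list of candidate join orders (opaque objects).
    execute_slice: callable(join_order) -> progress in [0, 1] made
                   during one time slice (hypothetical engine hook).
    Returns the join order with the best average measured progress.
    """
    counts = [0] * len(join_orders)
    rewards = [0.0] * len(join_orders)
    for t in range(1, total_slices + 1):
        if t <= len(join_orders):
            arm = t - 1  # try each join order once first
        else:
            # Exploit high average progress, but keep exploring
            # under-sampled join orders (UCB1 confidence bonus).
            arm = max(
                range(len(join_orders)),
                key=lambda i: rewards[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = execute_slice(join_orders[arm])
        counts[arm] += 1
        rewards[arm] += reward
    return max(
        range(len(join_orders)),
        key=lambda i: rewards[i] / max(counts[i], 1),
    ) and join_orders[
        max(range(len(join_orders)),
            key=lambda i: rewards[i] / max(counts[i], 1))
    ]
```

In the real system, result tuples produced under different join orders are merged into one complete result, and the per-slice progress measurements drive the regret bound; the sketch only captures the selection loop.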
Trial without Error: Towards Safe Reinforcement Learning via Human Intervention
AI systems are increasingly applied to complex tasks that involve interaction
with humans. During training, such systems are potentially dangerous, as they
haven't yet learned to avoid actions that could cause serious harm. How can an
AI system explore and learn without making a single mistake that harms humans
or otherwise causes serious damage? For model-free reinforcement learning,
having a human "in the loop" and ready to intervene is currently the only way
to prevent all catastrophes. We formalize human intervention for RL and show
how to reduce the human labor required by training a supervised learner to
imitate the human's intervention decisions. We evaluate this scheme on Atari
games, with a Deep RL agent being overseen by a human for four hours. When the
class of catastrophes is simple, we are able to prevent all catastrophes
without affecting the agent's learning (whereas an RL baseline fails due to
catastrophic forgetting). However, this scheme is less successful when
catastrophes are more complex: it reduces but does not eliminate catastrophes
and the supervised learner fails on adversarial examples found by the agent.
Extrapolating to more challenging environments, we show that our implementation
would not scale (due to the infeasible amount of human labor required). We
outline extensions of the scheme that are necessary if we are to train
model-free agents without a single catastrophe.
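The intervention scheme described above can be sketched as an environment wrapper: every proposed action passes through a "blocker" before execution, and blocked actions are replaced with a safe substitute plus a penalty. This is an illustrative sketch under assumed interfaces, not the paper's implementation; the `blocker` callable stands in for the supervised learner trained to imitate the human's intervention decisions, and `ToyEnv` is a made-up example environment.

```python
class HumanOversightWrapper:
    """Wrap an RL environment so a blocker vets every action."""

    def __init__(self, env, blocker, penalty=-1.0):
        self.env = env          # underlying env: reset() / step(action)
        self.blocker = blocker  # blocker(obs, action) -> (blocked, safe_action)
        self.penalty = penalty  # extra negative reward for blocked actions
        self.obs = None

    def reset(self):
        self.obs = self.env.reset()
        return self.obs

    def step(self, action):
        blocked, safe_action = self.blocker(self.obs, action)
        if blocked:
            # Execute a safe substitute and penalize the attempt, so the
            # agent learns to avoid catastrophes it never actually causes.
            obs, reward, done = self.env.step(safe_action)
            reward += self.penalty
        else:
            obs, reward, done = self.env.step(action)
        self.obs = obs
        return obs, reward, done, blocked


# Toy illustration: the blocker deems action 2 catastrophic.
class ToyEnv:
    def reset(self):
        return 0

    def step(self, action):
        return action, (1.0 if action == 1 else 0.0), False


def toy_blocker(obs, action):
    return (action == 2), 0  # block action 2, substitute action 0
```

The key property the sketch preserves is that the agent receives a negative reward signal for attempted catastrophes without the catastrophe ever reaching the environment, which is what lets a model-free learner train on them.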