Measuring time preferences
We review research that measures time preferences, i.e., preferences over intertemporal tradeoffs. We distinguish between studies using financial flows, which we call “money earlier or later” (MEL) decisions, and studies that use time-dated consumption/effort. Under different structural models, we show how to translate what MEL experiments directly measure (required rates of return for financial flows) into a discount function over utils. We summarize empirical regularities found in MEL studies and the predictive power of those studies. We explain why MEL choices are driven in part by some factors that are distinct from underlying time preferences.

Funding: National Institutes of Health (NIA R01AG021650 and P01AG005842) and the Pershing Square Fund for Research in the Foundations of Human Behavior.
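To make the translation concrete, here is a minimal sketch (my illustration, not the paper's method) of how a MEL indifference point maps to a per-period discount factor under the simplest assumptions of linear utility and exponential discounting. With curved utility or present bias (e.g., quasi-hyperbolic discounting), this mapping changes, which is precisely why the structural translation discussed in the abstract matters.

```python
# Sketch under assumed linear utility and exponential discounting:
# indifference between x now and y in t periods means x = delta**t * y,
# so the implied per-period discount factor is delta = (x/y)**(1/t).

def implied_discount_factor(x_now: float, y_later: float, t_periods: float) -> float:
    """Per-period discount factor implied by indifference between
    x_now today and y_later after t_periods (linear utility)."""
    if not (0 < x_now <= y_later) or t_periods <= 0:
        raise ValueError("expect 0 < x_now <= y_later and t_periods > 0")
    return (x_now / y_later) ** (1.0 / t_periods)

# Example: a subject indifferent between $90 now and $100 in 12 months.
delta = implied_discount_factor(90, 100, 12)   # monthly discount factor
annual_rate = (1 / delta) ** 12 - 1            # implied annual required return
print(f"monthly delta = {delta:.4f}, annual required return = {annual_rate:.1%}")
```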
Grand Challenges of Traceability: The Next Ten Years
In 2007, the software and systems traceability community met at the first
Natural Bridge symposium on the Grand Challenges of Traceability to establish
and address research goals for achieving effective, trustworthy, and ubiquitous
traceability. Ten years later, in 2017, the community came together to evaluate
a decade of progress towards achieving these goals. These proceedings document
some of that progress. They include a series of short position papers,
representing current work in the community organized across four process axes
of traceability practice. The sessions covered topics from Trace Strategizing,
Trace Link Creation and Evolution, Trace Link Usage, real-world applications of
Traceability, and Traceability Datasets and benchmarks. Two breakout groups
focused on the importance of creating and sharing traceability datasets within
the research community, and discussed challenges related to the adoption of
tracing techniques in industrial practice. Members of the research community
are engaged in many active, ongoing, and impactful research projects. Our hope
is that ten years from now we will be able to look back at a productive decade
of research and claim that we have achieved the overarching Grand Challenge of
Traceability, which seeks for traceability to be always present, built into the
engineering process, and for it to have "effectively disappeared without a
trace". We hope that others will see the potential that traceability has for
empowering software and systems engineers to develop higher-quality products at
increasing levels of complexity and scale, and that they will join the active
community of Software and Systems traceability researchers as we move forward
into the next decade of research.
Building air leakage databases in energy conservation policies: analysis of selected initiatives in 4 European countries and the USA
Full text is available at http://tightvent.eu/wp-content/uploads/2012/02/TightVentReport03.pdf

We collected information on existing envelope air leakage databases from countries that are
involved in the AIVC-TightVent project “Development and applications of building air
leakage databases”. This document summarizes the information from five countries: Czech
Republic, France, Germany, UK, and USA. Even though our summary is not exhaustive of all
existing data on whole-building envelope air leakage, it provides an overview of recent efforts
from a number of countries. There are many reasons why different countries are collecting
these data. We will summarize their motivations, which drive some of the differences in the
types of data being gathered and how the data are analysed. Detailed information from each
country is provided at the end of this document in the form of tables.
Description and Optimization of Abstract Machines in a Dialect of Prolog
In order to achieve competitive performance, abstract machines for Prolog and
related languages end up being large and intricate, and incorporate
sophisticated optimizations, both at the design and at the implementation
levels. At the same time, efficiency considerations make it necessary to use
low-level languages in their implementation. This makes them laborious to code,
optimize, and, especially, maintain and extend. Writing the abstract machine
(and ancillary code) in a higher-level language can help tame this inherent
complexity. We show how the semantics of most basic components of an efficient
virtual machine for Prolog can be described using (a variant of) Prolog. These
descriptions are then compiled to C and assembled to build a complete bytecode
emulator. Thanks to the high level of the language used and its closeness to
Prolog, the abstract machine description can be manipulated using standard
Prolog compilation and optimization techniques with relative ease. We also show
how, by applying program transformations selectively, we obtain abstract
machine implementations whose performance can match and even exceed that of
state-of-the-art, highly-tuned, hand-crafted emulators.

Comment: 56 pages, 46 figures, 5 tables. To appear in Theory and Practice of Logic Programming (TPLP).
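The core idea, that instruction semantics written at a high level can both drive and be specialized into an efficient emulator, can be sketched outside Prolog as well. The following toy dispatch loop in Python is only an illustration of that idea; the paper itself writes the semantics in a Prolog dialect and compiles them to C, and all names here are hypothetical.

```python
# Toy sketch: each bytecode instruction's meaning is a small high-level
# function, and the emulator is a generic fetch/decode/dispatch loop. A
# compiler can unfold and specialize these definitions to produce a fast,
# low-level emulator, which is the strategy the paper pursues (in Prolog).

from typing import Callable

def op_put_const(regs: dict, r: str, c) -> None:
    regs[r] = c                      # load a constant into a register

def op_add(regs: dict, dst: str, a: str, b: str) -> None:
    regs[dst] = regs[a] + regs[b]    # arithmetic on registers

SEMANTICS: dict[str, Callable] = {"put_const": op_put_const, "add": op_add}

def run(code: list[tuple]) -> dict:
    regs: dict = {}
    for opcode, *operands in code:   # generic dispatch on the opcode
        SEMANTICS[opcode](regs, *operands)
    return regs

print(run([("put_const", "x", 2), ("put_const", "y", 3), ("add", "z", "x", "y")]))
# -> {'x': 2, 'y': 3, 'z': 5}
```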
Quantum Lazy Sampling and Game-Playing Proofs for Quantum Indifferentiability
Game-playing proofs constitute a powerful framework for non-quantum
cryptographic security arguments, most notably applied in the context of
indifferentiability. An essential ingredient in such proofs is lazy sampling of
random primitives. We develop a quantum game-playing proof framework by
generalizing two recently developed proof techniques. First, we describe how
Zhandry's compressed quantum oracles~(Crypto'19) can be used to do quantum lazy
sampling of a class of non-uniform function distributions. Second, we observe
how Unruh's one-way-to-hiding lemma~(Eurocrypt'14) can also be applied to
compressed oracles, providing a quantum counterpart to the fundamental lemma of
game-playing. Subsequently, we use our game-playing framework to prove quantum
indifferentiability of the sponge construction, assuming a random internal
function.
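For readers unfamiliar with the classical ingredient being generalized: lazy sampling means a random primitive's truth table is filled in only at the points the adversary actually queries. A minimal classical sketch follows; the quantum case is what requires Zhandry's compressed oracles, so this snippet is only the non-quantum analogue.

```python
# Classical lazy sampling of a random function: sample each output fresh on
# first query, then answer consistently. Game-playing proofs rely on this
# being perfectly indistinguishable from sampling the whole function up front.

import secrets

class LazyRandomOracle:
    """Random function from bytes to n-byte strings, sampled on demand."""
    def __init__(self, out_bytes: int = 32):
        self.out_bytes = out_bytes
        self.table: dict[bytes, bytes] = {}      # only queried points exist

    def query(self, x: bytes) -> bytes:
        if x not in self.table:                  # first query: sample freshly
            self.table[x] = secrets.token_bytes(self.out_bytes)
        return self.table[x]                     # later queries: consistent

H = LazyRandomOracle()
assert H.query(b"hello") == H.query(b"hello")    # consistency across queries
print(len(H.table))                              # 1: only one point sampled
```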
Black-Box Separations for Non-Interactive Commitments in a Quantum World
Commitments are fundamental in cryptography. In the classical world, commitments are equivalent to the existence of one-way functions. It is also known that the most desired form of commitments in terms of their round complexity, i.e., non-interactive commitments, cannot be built from one-way functions in a black-box way [Mahmoody-Pass, Crypto'12]. However, if one allows the parties to use quantum computation and communication, it is known that non-interactive commitments (to classical bits) are in fact possible [Koshiba-Odaira, Arxiv'11 and Bitansky-Brakerski, TCC'21].

We revisit the assumptions behind non-interactive commitments in a quantum world and study whether they can be achieved using quantum computation and classical communication based on a black-box use of one-way functions. We prove that doing so is impossible unless the Polynomial Compatibility Conjecture [Austrin et al., Crypto'22] is false. We further extend our impossibility to protocols with quantum decommitments. This complements the positive result of Bitansky and Brakerski [TCC'21], as they only required a classical decommitment message. Because non-interactive commitments can be based on injective one-way functions, assuming the Polynomial Compatibility Conjecture, we also obtain a black-box separation between one-way functions and injective one-way functions (e.g., one-way permutations) even when the construction and the security reductions are allowed to be quantum. This improves the separation of Cao and Xue [Theoretical Computer Science'21], in which they only allowed the security reduction to be quantum.

At a technical level, we prove that sampling oracles at random from "sufficiently large" sets (of oracles) will make them one-way against polynomial quantum-query adversaries who also get arbitrary polynomial-size quantum advice about the oracle. This gives a natural generalization of the recent results of Hhan et al. [Asiacrypt'19] and Chung et al. [FOCS'20].
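For orientation, here is what the object under study looks like in its simplest random-oracle-style form: a non-interactive commitment where committing is a single message and decommitting means revealing the message and the randomness. This sketch is illustrative only and is not a construction from the paper or the works it cites.

```python
# Minimal non-interactive commitment sketch: commit(m) = H(r || m) for fresh
# randomness r. Hiding rests on r staying secret; binding rests on collision
# resistance of H. Illustrative only.

import hashlib, hmac, secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    r = secrets.token_bytes(32)                    # decommitment randomness
    c = hashlib.sha256(r + message).digest()       # the commitment string
    return c, r

def verify(c: bytes, message: bytes, r: bytes) -> bool:
    return hmac.compare_digest(c, hashlib.sha256(r + message).digest())

c, r = commit(b"bit = 1")
assert verify(c, b"bit = 1", r)        # honest opening accepted
assert not verify(c, b"bit = 0", r)    # opening to a different value rejected
```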
False Claims against Model Ownership Resolution
Deep neural network (DNN) models are valuable intellectual property of model
owners, constituting a competitive advantage. Therefore, it is crucial to
develop techniques to protect against model theft. Model ownership resolution
(MOR) is a class of techniques that can deter model theft. A MOR scheme enables
an accuser to assert an ownership claim for a suspect model by presenting
evidence, such as a watermark or fingerprint, to show that the suspect model
was stolen or derived from a source model owned by the accuser. Most of the
existing MOR schemes prioritize robustness against malicious suspects, ensuring
that the accuser will win if the suspect model is indeed a stolen model.
In this paper, we show that common MOR schemes in the literature are
vulnerable to a different, equally important but insufficiently explored,
robustness concern: a malicious accuser. We show how malicious accusers can
successfully make false claims against independent suspect models that were not
stolen. Our core idea is that a malicious accuser can deviate (without
detection) from the specified MOR process by finding (transferable) adversarial
examples that successfully serve as evidence against independent suspect
models. To this end, we first generalize the procedures of common MOR schemes
and show that, under this generalization, defending against false claims is as
challenging as preventing (transferable) adversarial examples. Via systematic
empirical evaluation we demonstrate that our false claim attacks always succeed
in all prominent MOR schemes with realistic configurations, including against a
real-world model: Amazon's Rekognition API.

Comment: 13 pages, 3 figures.
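A schematic of the verification step being attacked may help; this is my paraphrase of the generalized MOR setting, not the paper's code, and all names are hypothetical. The judge accepts a claim if the suspect model agrees with the accuser's claimed labels on the evidence set often enough, and transferable adversarial examples are precisely inputs whose labels carry over to independently trained models, letting a malicious accuser pass this check against a model that was never stolen.

```python
# Hypothetical sketch of a generalized MOR verification step: accept the
# ownership claim iff the suspect model matches the accuser's claimed labels
# on the evidence set above a threshold. Evidence built from transferable
# adversarial examples can satisfy this even against an independent model.

from typing import Callable, Sequence

def verify_claim(suspect: Callable[[object], int],
                 evidence: Sequence[tuple[object, int]],
                 threshold: float = 0.9) -> bool:
    matches = sum(1 for x, claimed in evidence if suspect(x) == claimed)
    return matches / len(evidence) >= threshold

def independent_model(x) -> int:       # stand-in for a model never stolen
    return 1

crafted_evidence = [(i, 1) for i in range(10)]   # labels chosen to "transfer"
print(verify_claim(independent_model, crafted_evidence))   # True: false claim accepted
```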
Extending Answer Set Programming using Generalized Possibilistic Logic
This international workshop is one of the joint ontology workshops JOWO 2015, affiliated with the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015).

Answer set programming (ASP) is a form of logic programming in which negation-as-failure is defined in a purely declarative way, based on the notion of a stable model. This short paper briefly explains how a recent generalization of possibilistic logic (GPL) can be used to characterize the semantics of answer set programming. This characterization has several advantages over existing characterizations of the stable model semantics. First, unlike reduct-based approaches, it does not rely on a syntactic procedure: we can directly characterize answer sets based on the minimally specific models of a GPL theory. Second, GPL enables us to study extensions of ASP in an intuitive way: unlike in existing generalizations of ASP such as equilibrium logic and autoepistemic logic, all formulas in GPL have a meaning which is intuitively clear. Finally, being based on possibilistic logic, GPL offers a natural way of dealing with uncertainty in answer set programs.
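To make the contrast with reduct-based approaches concrete, a brute-force checker for the classical Gelfond-Lifschitz (reduct-based) definition of stable models is sketched below; this is the syntactic procedure that the GPL characterization avoids. The sketch is mine, not from the paper.

```python
# Reduct-based stable-model check for ground normal programs. A rule is
# (head, positive_body, negative_body). M is stable iff M is the least model
# of its reduct: drop rules whose negative body meets M, strip 'not' literals.

from itertools import chain, combinations

def least_model(definite_rules):
    """Least fixpoint of a negation-free (definite) program."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= m and head not in m:
                m.add(head)
                changed = True
    return m

def stable_models(rules, atoms):
    candidates = chain.from_iterable(
        combinations(atoms, k) for k in range(len(atoms) + 1))
    result = []
    for bits in candidates:
        m = set(bits)
        reduct = [(h, p) for h, p, n in rules if not (n & m)]   # GL reduct
        if least_model(reduct) == m:
            result.append(m)
    return result

# Program: p :- not q.   q :- not p.   Two stable models: {p} and {q}.
prog = [("p", frozenset(), frozenset({"q"})),
        ("q", frozenset(), frozenset({"p"}))]
print(stable_models(prog, ["p", "q"]))   # [{'p'}, {'q'}]
```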