Ur-Priors, Conditionalization, and Ur-Prior Conditionalization
Conditionalization is a widely endorsed rule for updating one's beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I'll call "ur-priors". But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I'll call "loaded evidential standards", is especially promising.
Verifiable Network-Performance Measurements
In the current Internet, there is no clean way for affected parties to react to poor forwarding performance: when a domain violates its Service Level Agreement (SLA) with a contractual partner, the partner must resort to ad-hoc probing-based monitoring to determine the existence and extent of the violation. Instead, we propose a new, systematic approach to the problem of forwarding-performance verification. Our mechanism relies on voluntary reporting, allowing each domain to disclose its loss and delay performance to its neighbors; it does not disclose any information regarding the participating domains' topology or routing policies beyond what is already publicly available. Most importantly, it enables verifiable performance measurements, i.e., domains cannot abuse it to significantly exaggerate their performance. Finally, our mechanism is tunable, allowing each participating domain to determine independently (i.e., without any inter-domain coordination) how many resources to devote to it, exposing a controllable trade-off between performance-verification quality and resource consumption. Our mechanism comes at the cost of deploying modest functionality at the participating domains' border routers; we show that it requires processing and memory resources that are reasonable for modern networks.
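The abstract does not spell out the verification mechanism itself, but a common building block for verifiable loss measurement across domains is consistent hash-based packet sampling: neighboring domains apply the same public hash to each packet, so they independently record the same small sample and can cross-check a domain's reported loss against it. The sketch below is a hypothetical illustration of that idea, not the paper's actual protocol; the sampling rate and hash choice are assumptions.

```python
import hashlib

SAMPLE_RATE = 0.05  # assumed fraction of packets each border router records

def sampled(packet: bytes) -> bool:
    # Both neighbors apply the same hash to the (invariant) packet contents,
    # so they select the *same* sample without any coordination; a domain
    # cannot tell sampled packets apart and treat them preferentially.
    h = int.from_bytes(hashlib.sha256(packet).digest()[:8], "big")
    return h < SAMPLE_RATE * 2**64

def estimate_loss(sent_samples: set, received_samples: set) -> float:
    # Loss estimated over the sampled sub-population only: this is what
    # keeps per-packet state small and the resource cost tunable.
    if not sent_samples:
        return 0.0
    return 1 - len(sent_samples & received_samples) / len(sent_samples)
```

A neighbor can then compare a domain's claimed loss rate with `estimate_loss` over the shared sample; a significantly exaggerated report would be statistically inconsistent with the sample.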
Responsibility and non-repudiation in resource-constrained Internet of Things scenarios
The proliferation and popularity of smart autonomous systems necessitate the development of methods and models for ensuring the effective identification of their owners and controllers. The aim of this paper is to critically discuss the responsibility of Things and their impact on human affairs. This starts with an in-depth analysis of IoT characteristics such as autonomy, ubiquity, and pervasiveness. We argue that Things governed by a controller should have an identifiable relationship between the two parties, and that authentication and non-repudiation are essential characteristics in all IoT scenarios that require trustworthy communications. However, resources can be a constraint: many Things, for instance, must run on low-powered hardware. Hence, we also propose a protocol to demonstrate how the authenticity of participating Things can be achieved in a connectionless, resource-constrained environment.
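The abstract does not reproduce the proposed protocol, but a minimal sketch of authenticated, replay-resistant reporting in a connectionless setting can be given with a keyed MAC over (device ID, monotonic counter, payload). All names and the message layout below are hypothetical; note also that a symmetric MAC provides authenticity but not non-repudiation (both parties hold the key), so the paper's non-repudiation goal would require asymmetric signatures on top of this structure.

```python
import hmac
import hashlib
import struct

TAG_LEN = 32  # SHA-256 HMAC tag

def make_report(key: bytes, device_id: bytes, counter: int, payload: bytes) -> bytes:
    # Hypothetical datagram: 8-byte device ID | 8-byte counter | payload | tag.
    # The counter lets the receiver reject replays without a connection.
    assert len(device_id) == 8
    msg = device_id + struct.pack(">Q", counter) + payload
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    return msg + tag

def verify_report(key: bytes, report: bytes, last_counter: int):
    # Returns (counter, payload) on success, None on forgery or replay.
    msg, tag = report[:-TAG_LEN], report[-TAG_LEN:]
    if not hmac.compare_digest(hmac.new(key, msg, hashlib.sha256).digest(), tag):
        return None  # forged or corrupted
    counter = struct.unpack(">Q", msg[8:16])[0]
    if counter <= last_counter:
        return None  # replayed datagram
    return counter, msg[16:]
```

HMAC-SHA256 is cheap enough for low-powered hardware, which is why constructions of this shape are common in constrained IoT deployments.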