173 research outputs found
Re-feedback: freedom with accountability for causing congestion in a connectionless internetwork
This dissertation concerns adding resource accountability to a simplex internetwork such as the Internet,
with only necessary but sufficient constraints on freedom. That is, freedom for applications to evolve
new innovative behaviours while still responding responsibly to congestion, and freedom for network
providers to structure their pricing in any way, including flat pricing.
The big idea on which the research is built is a novel feedback arrangement termed "re-feedback".
A general form is defined, as well as a specific proposal (re-ECN) to alter the Internet protocol so that
self-contained datagrams carry a metric of expected downstream congestion.
Congestion is chosen because of its central economic role as the marginal cost of network usage.
The aim is to ensure Internet resource allocation can be controlled either by local policies or by market
selection (or indeed local lack of any control).
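The metric described above can be illustrated with a toy model (not the actual re-ECN wire encoding, which repurposes bits in the IP header): the sender re-inserts its estimate of whole-path congestion into each datagram, routers accumulate the congestion actually experienced so far, and the difference at any point along the path gives the expected downstream congestion. The field names and floating-point representation here are illustrative assumptions, not the protocol's.

```python
# Illustrative sketch of the re-feedback idea; field names and the use of
# floating-point fields are invented for demonstration, not re-ECN's encoding.

from dataclasses import dataclass

@dataclass
class Packet:
    declared_path_congestion: float  # re-inserted by the sender from prior feedback
    congestion_so_far: float = 0.0   # accumulated as the packet traverses routers

def mark(packet: Packet, local_marking_rate: float) -> None:
    """A router adds its local congestion level to the running total."""
    packet.congestion_so_far += local_marking_rate

def expected_downstream(packet: Packet) -> float:
    """Any node can read off the congestion expected on the remaining path."""
    return max(packet.declared_path_congestion - packet.congestion_so_far, 0.0)

# A packet crossing two routers with 2% and 1% marking rates:
p = Packet(declared_path_congestion=0.03)
mark(p, 0.02)
print(expected_downstream(p))  # 1% still expected downstream
mark(p, 0.01)
print(expected_downstream(p))  # path estimate fully consumed: 0.0
```

Because the remaining-path estimate is visible at every hop, networks (not just end-points) can police or price congestion, which is the accountability property the dissertation pursues.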
The current Internet architecture is designed to only reveal path congestion to end-points, not networks.
The collective actions of self-interested consumers and providers should drive Internet resource
allocations towards maximisation of total social welfare. But without visibility of a cost-metric, network
operators are violating the architecture to improve their customers' experience. The resulting fight
against the architecture is destroying the Internet's simplicity and ability to evolve.
Although accountability with freedom is the goal, the focus is the congestion metric, and whether
an incentive system is possible that assures its integrity as it is passed between parties around the system,
despite proposed attacks motivated by self-interest and malice.
This dissertation defines the protocol and canonical examples of accountability mechanisms. Designs
are all derived from carefully motivated principles. The resulting system is evaluated by analysis
and simulation against the constraints and principles originally set. The mechanisms are proven to be
agnostic to specific transport behaviours, but they could not be made flow-ID-oblivious.
Socio-Cognitive and Affective Computing
Social cognition focuses on how people process, store, and apply information about other people and social situations. It focuses on the role that cognitive processes play in social interactions. On the other hand, the term cognitive computing is generally used to refer to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. In this sense, it is a type of computing with the goal of discovering more accurate models of how the human brain/mind senses, reasons, and responds to stimuli. Socio-Cognitive Computing should be understood as a set of theoretical interdisciplinary frameworks, methodologies, methods and hardware/software tools to model how the human brain mediates social interactions.

In addition, Affective Computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects, a fundamental aspect of socio-cognitive neuroscience. It is an interdisciplinary field spanning computer science, electrical engineering, psychology, and cognitive science. Physiological Computing is a category of technology in which electrophysiological data recorded directly from human activity are used to interface with a computing device. This technology becomes even more relevant when computing can be integrated pervasively in everyday life environments. Thus, Socio-Cognitive and Affective Computing systems should be able to adapt their behavior according to the Physiological Computing paradigm.

This book integrates proposals from researchers who use signals from the brain and/or body to infer people's intentions and psychological state in smart computing systems. The design of such systems combines knowledge and methods of ubiquitous and pervasive computing, as well as physiological data measurement and processing, with those of socio-cognitive and affective computing.
Constructive expertise: a critical, ecological and micro-developmental perspective on developing talent
A multitude of performance domains pursue the goal of understanding how we develop talent
and expertise. Therefore, the main objective of the present work was to embrace this pursuit
whilst operating in a sporting context. The work initially adopted an exploratory, critical and
investigative approach to the problem with the remaining series of studies emerging from these
initial findings. Study 1 utilised ethnographic enquiry over an eighteen month period whilst
working in collaboration with the Rugby Football Union Elite Referee Unit. The study found
shifts in existing perspectives of expertise and talent development including a) the movement
from a descriptive and phase-staged approach to one which is dynamic and non-linear, b) non-normative as well as normative influences, c) recognition of an 'expert self' as intrapersonal,
interpersonal, group and social, d) expertise development existing at micro-, meso- and macro-developmental levels, e) an integrative, contextualised and multiplicative nature of expertise, f)
emergent as well as planned development, g) identification of a 'nested' and ecological outlook
of expertise acknowledging the necessity of a positive 'talent development environment'.
Additionally, mechanisms of expertise expanded on the existing theory of deliberate practice to
include 'deliberate experience' and 'transfer of skills'. In sum, study 1 encountered an approach
to expertise which embraced complexity and paradox, was as much psycho-socially dynamic as
intrapersonal, and highlighted the necessity of creating contexts from which elite performance
can morph. From these findings, and alternative studies and readings, a period of reflection
occurred where models of 'non-linear and dynamical systems', 'talent development
environments', 'adaptive expertise' and 'fractal models', together with the promotion of the
self-regulation and meta-cognitive skills required to negotiate the complex pathway associated
with eminent performance, were explored before a final sense-making notion of 'expertise as
constructivism' was embraced. The remainder of the work embraced this constructivist
approach of expertise and talent development which was then researched in collaboration with
the Scottish Small-Bore Shooting team over a two year period. The period of work embraced
'constructivism as action research'. Study 2 utilised an 'ecological task analysis' of the Scottish
Small Bore Shooting team and its members to identify constraints and affordances of excellence.
It also served as a benchmark of existing levels of expertise which were evaluated at the end of
the action research. Study 3 served as the primary research study and assessed the overall
efficacy of the constructivist developmental approach inclusive of major transition processes
over the two year period as served by the constructivist design. The program was deemed
successful in relation to performance outcomes at the 2006 Melbourne Commonwealth Games.
Study 4 focused on the importance of creating constructivist 'talent development environments'
in comparison to an existing work of literature. Findings suggest a constructivist talent
development environment which attends to both the planned and emergent nature of expertise
requires fostering. Finally, a theoretical model of constructivist expertise and talent development
is offered encompassing the overall findings of the work.
Achieving reliability and fairness in online task computing environments
Mención Internacional en el título de doctor.
We consider online task computing environments such as volunteer computing platforms running
on BOINC (e.g., SETI@home) and crowdsourcing platforms such as Amazon Mechanical
Turk. We model the computations as an Internet-based task computing system under the master-worker
paradigm. A master entity sends tasks across the Internet to worker entities willing to
perform a computational task. Workers execute the tasks, and report back the results, completing
the computational round. Unfortunately, workers are untrustworthy and might report an incorrect
result. Thus, the first research question we answer in this work is how to design a reliable master-worker
task computing system. We capture the workers' behavior through two realistic models:
(1) the "error probability model", which assumes the presence of altruistic workers willing to
provide correct results and the presence of troll workers aiming at providing random incorrect
results. Both types of workers suffer from an error probability altering their intended response.
(2) The "rationality model", which assumes the presence of altruistic workers, always reporting
a correct result, the presence of malicious workers always reporting an incorrect result, and the
presence of rational workers following a strategy that will maximize their utility (benefit). The
rational workers can choose among two strategies: either be honest and report a correct result,
or cheat and report an incorrect result. Our two modeling assumptions on the workers' behavior
are supported by an experimental evaluation we have performed on Amazon Mechanical Turk.
Given the error probability model, we evaluate two reliability techniques, (1) "voting" and (2)
"auditing", in terms of task assignments required and time invested for computing correctly a set
of tasks with high probability. Considering the rationality model, we take an evolutionary game
theoretic approach and we design mechanisms that eventually achieve a reliable computational
platform where the master receives the correct task result with probability one and with minimal
auditing cost. The designed mechanisms provide incentives to the rational workers, reinforcing
their strategy to a correct behavior, while they are complemented by four reputation schemes that
cope with malice. Finally, we also design a mechanism that deals with unresponsive workers by
keeping a reputation related to the workersâ response rate. The designed mechanism selects the
most reliable and active workers in each computational round. Simulations, among other things, depict
the trade-off between the master's cost and the time the system needs to reach a state where
the master always receives the correct task result. The second research question we answer in
this work concerns the fair and efficient distribution of workers among the masters over multiple computational rounds. Masters with similar tasks are competing for the same set of workers at
each computational round. Workers must be assigned to the masters in a fair manner; when the
master values a worker's contribution the most. We consider that a master might have a strategic
behavior, declaring a dishonest valuation on a worker in each round, in an attempt to increase its
benefit. This strategic behavior from the side of the masters might lead to unfair and inefficient assignments
of workers. Applying renowned auction mechanisms to solve the problem at hand can be
infeasible since monetary payments are required on the side of the masters. Hence, we present an
alternative mechanism for fair and efficient distribution of the workers in the presence of strategic
masters, without the use of monetary incentives. We show analytically that our designed mechanism
guarantees fairness, is socially efficient, and is truthful. Simulations favourably compare
our designed mechanism with two benchmark auction mechanisms.
This work has been supported by IMDEA Networks Institute and the Spanish Ministry of Education grant FPU2013-03792.
Programa Oficial de Doctorado en Ingeniería Matemática.
Presidente: Alberto Tarable. Secretario: José Antonio Cuesta Ruiz. Vocal: Juan Julián Merelo Guervó.
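The "voting" technique above can be sketched in a few lines (illustrative parameters, not the dissertation's actual scheme): the master replicates a task to an odd number of workers and accepts the majority answer; if each worker is independently correct with probability p > 1/2, the majority is correct with probability that grows rapidly with replication.

```python
# Hypothetical sketch of majority voting under an error-probability assumption.
# Worker count and p are illustrative; the dissertation's models also include
# troll/malicious and rational workers, which this sketch does not capture.

from math import comb

def majority_correct_prob(n: int, p: float) -> float:
    """P(majority of n independent workers report the correct result)."""
    assert n % 2 == 1, "use an odd number of workers to avoid ties"
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Reliability grows quickly with replication when p > 1/2:
print(round(majority_correct_prob(3, 0.8), 4))   # 0.896
print(round(majority_correct_prob(11, 0.8), 4))  # 0.9883
```

The trade-off the abstract mentions is visible here: higher reliability costs more task assignments per round, which is why the work also studies auditing and incentive mechanisms that reduce that replication cost.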
Applied Metaheuristic Computing
For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning. This is partly because the classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs the capability of traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
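As a toy illustration of the principle described above (the objective function and all parameters are invented for demonstration): simulated annealing, a classic metaheuristic, occasionally accepts worse moves, letting the search escape the local optima that trap a greedy low-level heuristic.

```python
# Minimal simulated annealing sketch; problem and tuning values are invented.
import math
import random

def anneal(cost, neighbour, x0, t0=1.0, cooling=0.995, steps=5000, seed=42):
    rng = random.Random(seed)
    x, t = x0, t0
    best = x0
    for _ in range(steps):
        y = neighbour(x, rng)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cost(y) < cost(x) or rng.random() < math.exp((cost(x) - cost(y)) / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling  # cooling gradually suppresses uphill moves
    return best

# Toy multimodal objective: many local minima, global minimum near x = -0.31.
f = lambda x: x * x + 10 * math.sin(5 * x) + 10
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
print(f(anneal(f, step, x0=4.0)))
```

A purely greedy descent from x0 = 4.0 would settle in the nearest basin; the temperature schedule is what gives the search a chance to cross the barriers between basins early on.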
Player attitudes to avatar development in digital games: an exploratory study of single-player role-playing games and other genres
Digital games incorporate systems that allow players to customise and develop their controllable in-game representative (avatar) over the course of a game. Avatar customisation systems represent a point at which the goals and values of players interface with the intentions of the game developer forming a dynamic and complex relationship between system and user. With the proliferation of customisable avatars through digital games and the ongoing monetisation of customisation options through digital content delivery platforms it is important to understand the relationship between player and avatar in order to provide a better user experience and to develop an understanding of the cultural impact of the avatar.
Previous research on avatar customisation has focused on the users of virtual worlds and massively multiplayer games, leaving single-player avatar experiences underexplored. These past studies have also typically focused on one particular aspect of avatar customisation, and those that have looked at all factors involved in avatar customisation have done so with a very small sample. This research has aimed to address this gap in the literature by focusing primarily on avatar customisation features in single-player games, and to investigate the relationship between player and customisation systems from the perspective of the players of digital games.
To fulfil the research aims and objectives, the qualitative approach of interpretative phenomenological analysis was adopted. Thirty participants were recruited using snowball and purposive sampling (the criteria being that participants had played games featuring customisable avatars) and accounts of their experiences were gathered through semi-structured interviews. Through this research, strategies of avatar customisation were explored in order to demonstrate how people use such systems. The shortcomings in game mechanics and user interfaces were highlighted so that future games can improve the avatar customisation experience.
Public Law and Economics
This comprehensive textbook applies economic analysis to public law. The economic analysis of law has revolutionized legal scholarship and teaching in the last half-century, but it has focused mostly on private law, business law, and criminal law. This book extends the analysis to fundamental topics in public law, such as the separation of government powers, regulation by agencies, constitutional rights, and elections. Every public law involves six fundamental processes of government: bargaining, voting, entrenching, delegating, adjudicating, and enforcing. The book devotes two chapters to each process, beginning with the economic theory and then applying the theory to a wide range of puzzles and problems in law. Each chapter concentrates on cases and legal doctrine, showing the relevance of economics to the work of lawyers and judges. Featuring lucid, accessible writing and engaging examples, the book addresses enduring topics in public law as well as modern controversies, including gerrymandering, voter identification laws, and qualified immunity for police
A trust framework for peer-to-peer interaction in ad hoc networks
PhD.
As a wider public is increasingly adopting mobile devices with diverse applications,
the idea of who to trust while on the move becomes a crucial one. The need to find
dependable partners to interact with is further exacerbated in situations where one finds
oneself out of the range of backbone structures such as wireless base stations or
cellular networks. One solution is to generate self-started networks, a variant of
which is the ad hoc network that promotes peer-to-peer networking. The work in
this thesis is aimed at defining a framework for such an ad hoc network that provides
ways for participants to distinguish and collaborate with their most trustworthy
neighbours.
In this framework, entities generate trust information by directly
observing the behaviour of their peers. Such trust information is also shared in order
to assist those entities in situations where prior interactions with their target peers
may not have existed.
The key novelty points of the framework focus on aggregating the trust evaluation
process around the most trustworthy nodes thereby creating a hierarchy of nodes that
are distinguished by the class, defined by cluster heads, to which they belong.
Furthermore, the impact of such a framework in generating additional overheads for
the network is minimised through the use of clusters. By design, the framework also
houses a rule-based mechanism to thwart misbehaviour and non-cooperation.
Key performance indicators are also defined within this work that allow a framework
to be quickly analysed through snapshot data, a concept analogous to those used
within financial circles when assessing companies. This is also a novel point that
may provide the basis for directly comparing models with different underlying
technologies.
The end result is a trust framework that fully meets the basic requirements for a
sustainable model of trust, can be deployed on an ad hoc network, and
provides enhancements in efficiency (through clustering) and trust performance.
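The direct-observation and sharing steps described above can be sketched as follows. This is a hypothetical illustration in the spirit of the framework (the beta-style update rule, the blending weight, and all names are invented here, not the thesis's actual formulae): each node counts cooperative and uncooperative interactions per neighbour, derives a trust estimate from its own evidence, and blends it with a recommendation received from a cluster head.

```python
# Hypothetical direct-trust sketch; the update rule and weights are assumptions.
from collections import defaultdict

class TrustTable:
    def __init__(self, direct_weight=0.7):
        self.good = defaultdict(int)
        self.bad = defaultdict(int)
        self.direct_weight = direct_weight  # own evidence outweighs hearsay

    def observe(self, peer: str, cooperative: bool) -> None:
        """Record a directly observed interaction with a neighbour."""
        if cooperative:
            self.good[peer] += 1
        else:
            self.bad[peer] += 1

    def direct_trust(self, peer: str) -> float:
        """Beta-style estimate: (good+1)/(good+bad+2); 0.5 with no evidence."""
        g, b = self.good[peer], self.bad[peer]
        return (g + 1) / (g + b + 2)

    def combined_trust(self, peer: str, recommended: float) -> float:
        """Blend direct evidence with a recommendation from a cluster head."""
        w = self.direct_weight
        return w * self.direct_trust(peer) + (1 - w) * recommended

t = TrustTable()
for _ in range(8):
    t.observe("nodeB", cooperative=True)
t.observe("nodeB", cooperative=False)
print(round(t.direct_trust("nodeB"), 3))         # (8+1)/(9+2) = 0.818
print(round(t.combined_trust("nodeB", 0.4), 3))  # 0.7*0.818 + 0.3*0.4 = 0.693
```

Weighting direct evidence above shared recommendations is one simple way to limit the damage a lying recommender can do, which is the kind of self-interested attack the framework's rule-based mechanism is designed to thwart.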
- âŠ