Agent-Based Computing: Promise and Perils
Agent-based computing represents an exciting new synthesis both for Artificial Intelligence (AI) and, more generally, Computer Science. It has the potential to significantly improve the theory and practice of modelling, designing and implementing complex systems. Yet, to date, there has been little systematic analysis of what makes an agent such an appealing and powerful conceptual model. Moreover, even less effort has been devoted to exploring the inherent disadvantages that stem from adopting an agent-oriented view. Here both sets of issues are explored. The standpoint of this analysis is the role of agent-based software in solving complex, real-world problems. In particular, it will be argued that the development of robust and scalable software systems requires autonomous agents that can complete their objectives while situated in a dynamic and uncertain environment, that can engage in rich, high-level social interactions, and that can operate within flexible organisational structures.
On Agent-Based Software Engineering
Agent-based computing represents an exciting new synthesis both for Artificial Intelligence (AI) and, more generally, Computer Science. It has the potential to significantly improve the theory and the practice of modeling, designing, and implementing computer systems. Yet, to date, there has been little systematic analysis of what makes the agent-based approach such an appealing and powerful computational model. Moreover, even less effort has been devoted to discussing the inherent disadvantages that stem from adopting an agent-oriented view. Here both sets of issues are explored. The standpoint of this analysis is the role of agent-based software in solving complex, real-world problems. In particular, it will be argued that the development of robust and scalable software systems requires autonomous agents that can complete their objectives while situated in a dynamic and uncertain environment, that can engage in rich, high-level social interactions, and that can operate within flexible organisational structures.
Transfer Scenarios: Grounding Innovation with Marginal Practices
Transfer scenarios is a method developed to support the design of innovative interactive technology. Such a method should help the designer come up with inventive ideas, and at the same time provide grounding in real human needs. In transfer scenarios, we use marginal practices to encourage a changed mindset throughout the design process. A marginal practice consists of individuals who share an activity that they find meaningful. We regard these individuals not as end-users, but as valuable input to the design process. We applied this method when designing novel applications for autonomous embodied agents, e.g. robots. Owners of unusual pets, such as snakes and spiders, were interviewed - not with the intention to design robot pets, but to determine the underlying needs and interests of their practice. The results were then used to design a set of applications for more general users, including a dynamic living-room wall and a set of communicating hobby robots.
Theory of deferred action: Agent-based simulation model for designing complex adaptive systems
Deferred action is the axiom that agents act in emergent organisation to achieve predetermined goals. Enabling deferred action in designed artificial complex adaptive systems, such as business organisations and information systems (IS), is problematic. Emergence is an intractable problem for designers because it cannot be predicted. We develop a proof-of-concept conceptual proto-agent model of emergent organisation and emergent IS to better understand design principles that enable deferred action as a mechanism for coping with emergence in artefacts. We focus on understanding the effect of emergence when designing artificial complex adaptive systems by developing an exploratory proto-agent model and evaluating its suitability for implementation as an agent-based simulation.
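The proto-agent idea above — predetermined goals, with the concrete action deferred until run time and shaped by an emergent environment — can be illustrated as a toy agent-based simulation. This is a minimal sketch under assumed dynamics; the class names, coupling term, and update rule are illustrative inventions, not the authors' model:

```python
import random

random.seed(42)

class ProtoAgent:
    """Agent with a fixed, predetermined goal; its concrete action is
    deferred and depends on the run-time environment signal."""
    def __init__(self, goal):
        self.goal = goal
        self.state = 0.0

    def act(self, environment_signal):
        # Move halfway toward the goal, perturbed by the environment.
        self.state += 0.5 * (self.goal - self.state) + environment_signal

def simulate(n_agents=10, n_steps=50):
    agents = [ProtoAgent(goal=1.0) for _ in range(n_agents)]
    for _ in range(n_steps):
        # Emergent coupling: each agent perceives the mean state of all
        # agents, so collective structure feeds back into individual action.
        mean_state = sum(a.state for a in agents) / len(agents)
        for a in agents:
            noise = random.uniform(-0.05, 0.05)
            a.act(0.1 * (mean_state - a.state) + noise)
    return sum(a.state for a in agents) / len(agents)

print(round(simulate(), 2))
```

Even in this toy version, the goal is fixed at design time while each step's action emerges from the interaction of all agents, which is the separation the theory of deferred action points at.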
Taking Turing by Surprise? Designing Digital Computers for morally-loaded contexts
There is much to learn from what Turing hastily dismissed as Lady Lovelace's objection. Digital computers can indeed surprise us. Just like a piece of art, algorithms can be designed in such a way as to lead us to question our understanding of the world, or our place within it. Some humans do lose the capacity to be surprised in that way. It might be fear, or it might be the comfort of ideological certainties. As lazy normative animals, we do need to be able to rely on authorities to simplify our reasoning: that is OK. Yet the growing sophistication of systems designed to free us from the constraints of normative engagement may take us past a point of no return. What if, through lack of normative exercise, our moral muscles became so atrophied as to leave us unable to question our social practices? This paper makes two distinct normative claims:
1. Decision-support systems should be designed with a view to regularly jolting us out of our moral torpor.
2. Without the depth of habit to somatically anchor model certainty, a computer's experience of something new is very different from that which in humans gives rise to non-trivial surprises. This asymmetry has key repercussions when it comes to the shape of ethical agency in artificial moral agents. The worry is not just that they would be likely to leap morally ahead of us, unencumbered by habits. The main reason to doubt that the moral trajectories of humans vs. autonomous systems might remain compatible stems from the asymmetry in the mechanisms underlying moral change. Whereas in humans surprises will continue to play an important role in waking us to the need for moral change, cognitive processes will rule when it comes to machines. This asymmetry will translate into increasingly different moral outlooks, to the point of likely unintelligibility. The latter prospect is enough to doubt the desirability of autonomous moral agents.
The theory of deferred action: Designing organisations and systems for complexity
Organizations and systems are real, complex entities, but the science of designing them should be simple. This book explores the process of organization and systems design by redefining and extending a formalism capable of representing both purposeful structure and operational needs. The author proposes the notion of deferred action to cohere rationally designed systems with actual action. Researchers will glean radically different epistemological and ontological perspectives, while designers will acquire entirely different intellectual tools, principles and mechanisms of design. Managers should learn to think of organization and systems differently and possibly change their management approach.
Affect and believability in game characters: a review of the use of affective computing in games
Virtual agents are important in many digital environments. Designing a character that highly engages users in terms of interaction is an intricate task constrained by many requirements. One aspect that has gained more attention recently is the affective dimension of the agent. Several studies have addressed the possibility of developing an affect-aware system for a better user experience. Particularly in games, including emotional and social features in NPCs adds depth to the characters, enriches interaction possibilities, and, combined with a basic level of competence, creates a more appealing game. Design requirements for emotionally intelligent NPCs differ from those for general autonomous agents, with the main goal being a stronger player-agent relationship as opposed to problem solving and goal assessment. Nevertheless, deploying an affective module in NPCs adds to the complexity of the architecture and its constraints. In addition, using such a composite NPC in games seems beyond current technology, despite some brave attempts. However, a MARPO-type modular architecture would seem a useful starting point for adding emotions.