Philosophical Agents
Abstraction is the technique we use to deal with complexity. What is the proper kind and level of abstraction for complex software agents? We think it would be reasonable to endow agents with a philosophy. Then, by understanding their philosophies, we can use them more effectively. To endow agents with ethical principles, developers need an architecture that supports explicit goals, principles and capabilities, as well as laws and ways to sanction or punish miscreants. All of the ethical approaches described in this article are single-agent in orientation and encode other agents implicitly
Philosophical Signposts for Artificial Moral Agent Frameworks
This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper argues that attempts to develop a full theory of the nature of Artificial Moral Agents should consider certain philosophical ideas, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, these philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents
Interactivist approach to representation in epigenetic agents
Interactivism is a vast and rather ambitious philosophical and theoretical system originally developed by Mark Bickhard, which covers a plethora of aspects related to mind and person. Within interactivism, an agent is regarded as an action system: an autonomous, self-organizing, self-maintaining entity that can exercise actions and sense their effects in the environment it inhabits. In this paper, we argue that interactivism is especially well suited to treating the problem of representation in epigenetic agents. More precisely, we elaborate on a process-based ontology for representations and sketch a way of discussing architectures for epigenetic agents in a general manner
Decision-Making: A Neuroeconomic Perspective
This article introduces and discusses from a philosophical point of view the nascent field of neuroeconomics, which is the study of neural mechanisms involved in decision-making and their economic significance. Following a survey of the ways in which decision-making is usually construed in philosophy, economics and psychology, I review many important findings in neuroeconomics to show that they suggest a revised picture of decision-making and ourselves as choosing agents. Finally, I outline a neuroeconomic account of irrationality
Complexity and Philosophy
The science of complexity is based on a new way of thinking that stands in sharp contrast to the philosophy underlying Newtonian science, which is based on reductionism, determinism, and objective knowledge. This paper reviews the historical development of this new world view, focusing on its philosophical foundations. Determinism was challenged by quantum mechanics and chaos theory. Systems theory replaced reductionism with a scientifically based holism. Cybernetics and postmodern social science showed that knowledge is intrinsically subjective. These developments are being integrated under the header of "complexity science". Its central paradigm is the multi-agent system. Agents are intrinsically subjective and uncertain about their environment and future, but out of their local interactions, a global organization emerges. Although different philosophers, and in particular the postmodernists, have voiced similar ideas, the paradigm of complexity still needs to be fully assimilated by philosophy. This will throw a new light on old philosophical issues such as relativism, ethics and the role of the subject
The Emotional Impact of Evil: Philosophical Reflections on Existential Problems
In The Brothers Karamazov, Dostoyevsky illustrates that encounters with evil do not solely impact agents’ beliefs about God (or God’s existence). Evil impacts people on an emotional level as well. Authors like Hasker and van Inwagen sometimes identify the emotional impact of evil with the “existential” problem of evil. For better or worse, the existential version of the problem is often set aside in contemporary philosophical discussions. In this essay, I rely on Robert Roberts’ account of emotions as “concern-based construals” to show that theistic philosophers can effectively address the existential problem (and so, the problem should not be set aside). In fact, addressing the emotional impact of evil is crucial, I argue, given that resolving just the impact of evil on agents’ beliefs about God constitutes an incomplete response to the problem of evil
Introduction: Virtue's Reasons
Over the past thirty years or so, virtues and reasons have emerged as two of the most fruitful and important concepts in contemporary moral philosophy. Virtue theory and moral psychology, for instance, are currently two burgeoning areas of philosophical investigation that involve different, but clearly related, focuses on individual agents’ responsiveness to reasons. The virtues themselves are major components of current ethical theories whose approaches to substantive or normative issues remain remarkably divergent in other respects. The virtues are also increasingly important in a variety of new approaches to epistemology
Karma, Moral Responsibility and Buddhist Ethics
The Buddha taught that there is no self. He also accepted a version of the doctrine of karmic rebirth, according to which good and bad actions accrue merit and demerit respectively and where this determines the nature of the agent’s next life and explains some of the beneficial or harmful occurrences in that life. But how is karmic rebirth possible if there are no selves? If there are no selves, it would seem there are no agents that could be held morally responsible for ‘their’ actions. If actions are those happenings in the world performed by agents, it would seem there are no actions. And if there are no agents and no actions, then morality and the notion of karmic retribution would seem to lose application. Historical opponents argued that the Buddha's teaching of no self was tantamount to moral nihilism. The Buddha, and later Buddhist philosophers, firmly reject this charge. The relevant philosophical issues span a vast intellectual terrain and inspired centuries of philosophical reflection and debate. This article will contextualise and survey some of the historical and contemporary debates relevant to moral psychology and Buddhist ethics. They include whether the Buddha's teaching of no-self is consistent with the possibility of moral responsibility; the role of retributivism in Buddhist thought; the possibility of a Buddhist account of free will; the scope and viability of recent attempts to naturalise karma to character virtues and vices, and whether and how right action is to be understood within a Buddhist framework
Artificial morality: Making of the artificial moral agents
Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. The demand for moral machines arises from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands from people. Creating such agents raises new challenges. There are philosophical questions about a machine’s potential to be an agent, or moral agent, in the first place. Then comes the problem of social acceptance of such machines, regardless of their theoretical agency status. Efforts to resolve this problem have suggested that cold moral machines need additional psychological (emotional and cognitive) competence. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as the top-down, bottom-up and hybrid approaches aim to find the best way of developing fully moral agents, but each encounters its own problems throughout this effort
A Case for Machine Ethics in Modeling Human-Level Intelligent Agents
This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To help ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, that is, machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks on how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. Given the body of work they have contributed to the study of moral agency, philosophers may add to the growing literature on artificial moral agency. While doing so, they could also consider how this concept affects other important philosophical concepts