tagE: Enabling an Embodied Agent to Understand Human Instructions
Natural language serves as the primary mode of communication when an
intelligent agent with a physical presence engages with human beings. While a
plethora of research focuses on natural language understanding (NLU),
encompassing endeavors such as sentiment analysis, intent prediction, question
answering, and summarization, the scope of NLU directed at situations
necessitating tangible actions by an embodied agent remains limited. The
ambiguity and incompleteness inherent in natural language present challenges
for intelligent agents striving to decipher human intention. To
tackle this predicament head-on, we introduce a novel system known as task and
argument grounding for Embodied agents (tagE). At its core, our system employs
an inventive neural network model designed to extract a series of tasks from
complex task instructions expressed in natural language. Our proposed model
adopts an encoder-decoder framework enriched with nested decoding to
effectively extract tasks and their corresponding arguments from these
intricate instructions. These extracted tasks are then mapped (or grounded) to
the robot's established collection of skills, while the arguments find
grounding in objects present within the environment. To facilitate the training
and evaluation of our system, we have curated a dataset featuring complex
instructions. The results of our experiments underscore the prowess of our
approach, as it outperforms robust baseline models.
Comment: Accepted in EMNLP Findings 202
ODYSSEY: Software development life cycle ontology
With the omnipresence of software in our society, from Information Technology (IT) services to autonomous agents, its systematic and efficient development is crucial for software developers. Hence, in this paper, we present an approach to assist intelligent agents (IA), whether human beings or artificial systems, in their
task of developing and configuring software. The proposed method is an ontological, developer-centred approach that aids a software developer in decision making and interoperable information sharing through the ODYSSEY ontology we developed for the software development life cycle (SDLC) domain. The ODYSSEY ontology has been designed following the Enterprise Ontology (EO) methodology and coded in Description Logic (DL). Its implementation in OWL has been evaluated on case studies, showing promising results.
Intelligent Fighting Units
The field training of army units involves high financial, material, and human resource investments. For this reason, emphasis has recently shifted to training these units in simulators. Simulator training, however, requires the simulated units to be as intelligent as human beings, so that field training with real human opponents can be successfully replaced by simulator training. This work presents a design for the intelligent behaviour of a fighting unit that is applicable in the simulator environment of E-COM s.r.o. It covers a general description of intelligent agents and of ways to achieve their rational and autonomous behaviour. The proposal and analysis of the intelligent fighting unit's implementation and of the unit's communication with its surrounding environment, a basic implementation of this proposal, and experiments with the created implementation are also described in this work.
A Case for Machine Ethics in Modeling Human-Level Intelligent Agents
This paper focuses on the research field of machine ethics and how it relates to a technological singularity: a hypothesized, futuristic event in which artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on whether human values and norms would survive such an event. To help ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, that is, machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks for how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. Given the body of work they have contributed to the study of moral agency, philosophers are well placed to contribute to the growing literature on artificial moral agency. While doing so, they could also consider how the said concept could affect other important philosophical concepts.
Intelligent Agents in Military, Defense and Warfare: Ethical Issues and Concerns
Due to tremendous progress in digital electronics, intelligent and autonomous agents are now gradually being adopted in the fields of the military, defense, and warfare. This paper explores some of the inherent ethical issues, threats, and remedial measures concerning the impact of such systems on human civilization and existence in general. It discusses human ethics in contrast to machine ethics and the problems caused by non-sentient agents. A systematic study is made of the paradoxes regarding the long-term advantages of such agents in military combat. The paper proposes an international standard that could be adopted by all nations to bypass the adverse effects and resolve the ethical issues of such intelligent agents.
Non-human Intention and Meaning-Making: An Ecological Theory
© Springer Nature Switzerland AG 2019. The final publication is available at Springer via https://doi.org/10.1007/978-3-319-97550-4_12
Social robots have the potential to problematize many attributes that have previously been considered, in philosophical discourse, to be unique to human beings. Thus, if one construes the explicit programming of robots as constituting specific objectives, and the overall design and structure of AI as having aims, in the sense of embedded directives, one might conclude that social robots are motivated to fulfil these objectives and therefore act intentionally towards fulfilling those goals. The purpose of this paper is to consider the impact of this description of social robotics on traditional notions of intention and meaning-making, and, in particular, to link meaning-making to a social ecology that is being impacted by the presence of social robots. To the extent that intelligent non-human agents occupy our world alongside us, this paper suggests that there is no benefit in differentiating them from human agents, because they are actively changing the context that we share with them and therefore influencing our meaning-making like any other agent. This is not suggested as some kind of Turing Test, in which we can no longer differentiate between humans and robots, but rather to observe that the argument in which human agency is defined in terms of free will, motivation, and intention can equally be used as a description of the agency of social robots. Furthermore, all of this occurs within a shared context in which the actions of the human impinge upon the non-human, and vice versa, thereby problematising Anscombe's classic account of intention.