Predicting consumers' intention to purchase fully autonomous driving systems: which factors drive acceptance?
This study aimed to identify which factors influence consumers' intention to purchase a fully
autonomous driving system in the future, and which perceived product characteristics
influence that purchase intention and how. To this end, an extension of the acceptance
model of Driver Assistant Systems by Arndt (2011) is presented. It integrates perceived
product characteristics specific to autonomous driving technology, to investigate which
factors determine the acceptance of fully autonomous driving systems. The proposed
model was empirically tested based on primary data collected in Germany. Exploratory
and confirmatory factor analyses were performed to assess the reliability and validity of
the measurement model. Further, structural equation modeling was used to evaluate the
causal relationships. The findings indicated that Attitude toward buying, Subjective Norm
and the perceived product characteristics Efficiency, Trust in Safety and Eco-Friendliness
significantly influenced individuals' behavioral intention to purchase driverless
technology. The variables perceived Comfort, Image and Driving Enjoyment were not
found to have a significant effect on behavioral intention. Attitude and Subjective Norm
had the most significant influence. A somewhat surprising finding was that Subjective
Norm not only had a direct effect on Behavioral Intention, as suggested by the theory of
reasoned action and the theory of planned behavior, but also on Attitude.
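The measurement-model step described above (exploratory factor analysis before the confirmatory stage) can be sketched on synthetic data; the item counts, factor labels and noise levels below are invented for illustration and are not taken from the study.

```python
# Minimal sketch of an exploratory factor analysis (EFA) step using
# scikit-learn on synthetic Likert-style survey data. Two latent factors
# drive six items; the factor labels are only loosely inspired by the
# study's constructs (e.g. "Trust in Safety", "Efficiency").
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

n_respondents, n_items, n_factors = 300, 6, 2
latent = rng.normal(size=(n_respondents, n_factors))
loadings_true = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                          [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
items = latent @ loadings_true.T + 0.3 * rng.normal(size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
scores = fa.fit_transform(items)   # per-respondent factor scores
print(fa.components_.shape)        # estimated loadings: (2, 6)
```

A confirmatory stage and the structural model would then test whether the hypothesized loadings and paths fit, which requires dedicated SEM tooling rather than this sketch.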
Artificial morality: Making of the artificial moral agents
Abstract:
Artificial Morality is a new, emerging interdisciplinary field that centres
around the idea of creating artificial moral agents, or AMAs, by implementing moral
competence in artificial systems. AMAs ought to be autonomous agents capable of
socially correct judgements and ethically functional behaviour. This demand for moral
machines arises from changes in everyday practice, where artificial systems are
frequently used in a variety of situations from home help and elderly care purposes to
banking and court algorithms. It is therefore important to create reliable and responsible
machines based on the same ethical principles that society demands from people. New
challenges in creating such agents appear. There are philosophical questions about a
machine's potential to be an agent, or moral agent, in the first place. Then comes the
problem of social acceptance of such machines, regardless of their theoretical agency
status. As a result of efforts to resolve this problem, it has been suggested that cold
moral machines need additional psychological (emotional and cognitive) competence.
What makes this endeavour of developing AMAs even harder is the complexity of the
technical, engineering aspect of their creation. Implementation approaches such as
top-down, bottom-up and hybrid aim to find the best way of developing fully moral
agents, but each encounters its own problems along the way.
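The top-down approach mentioned above can be illustrated with a toy rule-based filter over candidate actions; the rule set and action descriptors below are invented examples, not an actual AMA implementation.

```python
# Toy sketch of a "top-down" AMA: explicit ethical rules filter an
# agent's candidate actions. Rules and action attributes are invented.
RULES = [
    lambda a: not a.get("harms_human", False),   # do no harm
    lambda a: not a.get("deceives", False),      # be honest
]

def permitted(action: dict) -> bool:
    """An action is permitted only if every rule approves it."""
    return all(rule(action) for rule in RULES)

candidates = [
    {"name": "assist", "harms_human": False, "deceives": False},
    {"name": "mislead", "harms_human": False, "deceives": True},
]
allowed = [a["name"] for a in candidates if permitted(a)]
print(allowed)  # → ['assist']
```

A bottom-up approach would instead learn such constraints from experience, and a hybrid approach would combine learned dispositions with explicit rules like these.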
The Current State of Normative Agent-Based Systems
Recent years have seen an increase in the application of ideas from the social sciences to computational systems. Nowhere has this been more pronounced than in the domain of multiagent systems. Because multiagent systems are composed of multiple individual agents interacting with each other, many parallels can be drawn to human and animal societies. One of the main challenges currently faced in multiagent systems research is that of social control. In particular, how can open multiagent systems be configured and organized given their constantly changing structure? One leading solution is to employ social norms. In human societies, social norms are essential to regulation, coordination, and cooperation. The current trend of thinking is that these same principles can be applied to agent societies, of which multiagent systems are one type. In this article, we provide an introduction to and present a holistic viewpoint of the state of normative computing (computational solutions that employ ideas based on social norms). To accomplish this, we (1) introduce social norms and their application to agent-based systems; (2) identify and describe a normative process abstracted from the existing research; and (3) discuss future directions for research in normative multiagent computing. The intent of this paper is to introduce new researchers to the ideas that underlie normative computing and survey the existing state of the art, as well as provide direction for future research.
Keywords: Norms, Normative Agents, Agents, Agent-Based System, Agent-Based Simulation, Agent-Based Modeling
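As a rough illustration of norm-based social control in an agent society, here is a minimal sketch in which sanctions for norm violations gradually raise compliance; the population size, sanction strength and round count are arbitrary choices, not taken from the survey.

```python
# Minimal sketch of norm enforcement in an agent society: agents that
# violate a shared norm are sanctioned, and repeated sanctions raise
# their future compliance. All parameters are illustrative.
import random

random.seed(1)

class Agent:
    def __init__(self):
        self.comply_prob = 0.2          # initial tendency to follow the norm

    def act(self) -> bool:
        return random.random() < self.comply_prob

    def sanction(self):
        # A sanction nudges the agent toward compliance.
        self.comply_prob = min(1.0, self.comply_prob + 0.1)

agents = [Agent() for _ in range(50)]
for _ in range(100):
    for agent in agents:
        if not agent.act():             # norm violation observed
            agent.sanction()            # society applies a sanction

compliance = sum(a.comply_prob for a in agents) / len(agents)
print(round(compliance, 2))             # close to 1.0 after repeated sanctions
```

Real normative-agent architectures add the steps the article abstracts into a normative process: norm creation, spreading, recognition, and enforcement, rather than a fixed global rule.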
Theory of the Arbitration Process
A sensor fusion method for state estimation of a flexible industrial robot is developed. By measuring the acceleration at the end-effector, the accuracy of the arm angular position, as well as of the estimated position of the end-effector, is improved. The problem is formulated in a Bayesian estimation framework and two solutions are proposed: the extended Kalman filter and the particle filter. In a simulation study on a realistic flexible industrial robot, the angular position performance is shown to be close to the fundamental Cramér-Rao lower bound. The technique is also verified in experiments on an ABB robot, where the dynamic performance of the position for the end-effector is significantly improved.
Funding: Vinnova Excellence Center LINK-SIC; SSF project Collaborative Localization
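The Bayesian estimation idea can be sketched in one dimension with a linear Kalman filter fusing noisy position measurements with a constant-velocity motion model; the matrices and noise levels below are illustrative and are not the paper's flexible-robot model (which requires the extended Kalman filter or particle filter).

```python
# Sketch of Bayesian state estimation: a linear Kalman filter tracks
# (position, velocity) from noisy position measurements. All model
# matrices and noise levels are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance (std 0.5)

x = np.array([0.0, 1.0])                # true state: pos 0, vel 1
est, P = np.zeros(2), np.eye(2)
errors = []
for _ in range(200):
    x = F @ x                                   # true (noiseless) motion
    z = H @ x + rng.normal(0, 0.5, size=1)      # noisy position measurement
    # Predict
    est = F @ est
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    est = est + K @ (z - H @ est)
    P = (np.eye(2) - K @ H) @ P
    errors.append(abs(est[0] - x[0]))

print(np.mean(errors[50:]))   # filtered error well below the 0.5 sensor std
```

Fusing an extra accelerometer, as in the paper, amounts to extending `H` and `R` with a second measurement channel; the nonlinear flexible-arm dynamics are what force the EKF/particle-filter variants.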
Development of a Future Orientation Model in Emerging Adulthood in Hungary
Social and economic sustainability of countries globally depends largely on how well educational structures are capable of empowering future generations with the skills and competencies to become autonomous and active citizens. One such competency is future planning, which is vital in the identity formation of youth in their developmental phase of emerging adulthood. The article below attempts to elaborate a predictive model of future orientation based on current and future norms, future interest and concern. The model was tested on a sample of business school students (N=217) in their emerging adulthood. Norm acceptance ranking proved to be different for present and future times. Amongst a number of contextual variables shaping the formation of future plans, concern was found to hold the strongest predictive power.
Responsible Autonomy
As intelligent systems are increasingly making decisions that directly affect
society, perhaps the most important upcoming research direction in AI is to
rethink the ethical implications of their actions. Means are needed to
integrate moral, societal and legal values with technological developments in
AI, both during the design process as well as part of the deliberation
algorithms employed by these systems. In this paper, we describe leading ethics
theories and propose alternative ways to ensure ethical behavior by artificial
systems. Given that ethics are dependent on the socio-cultural context and are
often only implicit in deliberation processes, methodologies are needed to
elicit the values held by designers and stakeholders, and to make these
explicit, leading to better understanding of and trust in artificial autonomous
systems.
Comment: IJCAI 2017 (International Joint Conference on Artificial Intelligence)
Liapunov Exponents and the Reversibility of Molecular Dynamics Algorithms
We study the phenomenon of lack of reversibility in molecular dynamics
algorithms for the case of Wilson's lattice QCD. We demonstrate that the
classical equations of motion that are employed in these algorithms are chaotic
in nature. The leading Liapunov exponent is determined in a range of coupling
parameters. We give a quantitative estimate of the consequences of the
breakdown of reversibility due to round-off errors.
Comment: LaTeX2e file, 4 figures, 19 pages
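As a stand-in for the paper's lattice-QCD setting, the leading Liapunov exponent of any chaotic system can be estimated the same way: average the log of the local expansion rate along a trajectory. For the logistic map at r = 4 the exact value is ln 2, which makes a convenient check; this toy map is of course not the molecular dynamics system studied in the paper.

```python
# Estimate the leading Liapunov exponent of the chaotic logistic map
# x -> r x (1 - x) at r = 4 by averaging log|f'(x)| along an orbit.
# The exact value for this map is ln 2 ≈ 0.693.
import math

r, x = 4.0, 0.3
total, n = 0.0, 10_000
for _ in range(n):
    x = r * x * (1.0 - x)
    total += math.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)|

lyapunov = total / n
print(round(lyapunov, 3))   # close to ln 2 ≈ 0.693
```

A positive exponent is exactly what makes backward integration amplify round-off errors exponentially, which is the reversibility breakdown the abstract quantifies.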
- …