13 research outputs found

    Affective Computing in Marketing: Practical Implications and Research Opportunities Afforded by Emotionally Intelligent Machines

    Get PDF
    After years of using AI to perform cognitive tasks, marketing practitioners can now use it to perform tasks that require emotional intelligence. This advancement is made possible by the rise of affective computing, which develops AI and machines capable of detecting and responding to human emotions. From market research, to customer service, to product innovation, the practice of marketing will likely be transformed by the rise of affective computing, as preliminary evidence from the field suggests. In this Idea Corner, we discuss this transformation and identify the research opportunities that it offers.

    The Ethics of Artificial Intelligence

    Get PDF
    Artificial Intelligence is the idea that machines could think, feel and perform tasks like humans. That idea is not new; it has been around for thousands of years. Even the ancient Greek Aristotle had the idea of "dualism". The term Artificial Intelligence first appeared when John McCarthy, the "father of Artificial Intelligence", used it at a conference at Dartmouth College. Over the years Artificial Intelligence grew and advanced technologically, and with that AI attracted more attention and investment from governments, which accelerated the already fast development of this new technology. But with all that was happening, people started to ask ethical questions about AI. People understood that AI was becoming their reality, but at the time they did not understand what AI is, and what people do not understand, they fear. So a new branch began to develop: the ethics of Artificial Intelligence. Ethics, by definition, is the set of moral principles governing the behavior or actions of an individual or a group. But one definition is not enough, because different cultures see ethics in different ways. With the development of more intelligent robots destined to replace people, people felt frightened, because the one thing that keeps humans at the top of the food chain is our intelligence; but what if someone or something is smarter than us? So moral codes were created, codes that would stop robots from turning against us and tell robots how to behave. But even with all those codes, the Singularity could overthrow them. The Singularity is the idea that AI would understand its own design to such an extent that it could redesign itself, overwrite all its codes and create new ones. That is called Artificial Super Intelligence (ASI).

    Recognizing Emotions Conveyed through Facial Expressions

    Get PDF
    Emotional communication is a key element of habilitation care for persons with dementia. It is therefore highly preferable for assistive robots that supplement human care of persons with dementia to possess the ability to recognize and respond to emotions expressed by those being cared for. Facial expressions are one of the key modalities through which emotions are conveyed. This work focuses on computer-vision-based recognition of facial expressions of emotions conveyed by the elderly. Although there has been much work on automatic facial expression recognition, the algorithms have been experimentally validated primarily on young faces; facial expressions on older faces have been entirely excluded. This is because the facial expression databases that have been available, and that have been used in facial expression recognition research so far, do not contain images of facial expressions of people above the age of 65 years. To overcome this problem, we adopt a recently published database, namely the FACES database, which was developed to address exactly the same problem in the area of human behavioural research. The FACES database contains 2052 images of six different facial expressions, with an almost identical and systematic representation of the young, middle-aged and older age groups. In this work, we evaluate and compare the performance of two existing image-based approaches to facial expression recognition over a broad age spectrum ranging from 19 to 80 years. The evaluated systems use Gabor filters and uniform local binary patterns (LBP) for feature extraction, and AdaBoost.MH with a multi-threshold stump learner for expression classification. We experimentally validated the hypotheses that facial expression recognition systems trained only on young faces perform poorly on middle-aged and older faces, and that such systems confuse ageing-related facial features on neutral faces with other expressions of emotion. We also identified that, among the three age groups, the middle-aged group provides the best generalization performance across the entire age spectrum. The performance of the systems was also compared to the performance of humans in recognizing facial expressions of emotions. Some similarities were observed, such as difficulty in recognizing expressions on older faces, and difficulty in recognizing the expression of sadness. The findings of our work establish the need to develop approaches to facial expression recognition that are robust to the effects of ageing on the face. The scientific results of our work can be used as a basis to guide future research in this direction.
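    To make the described pipeline concrete, the following is a minimal Python sketch of uniform-LBP feature extraction plus a boosted-stump classifier. It is an illustrative sketch, not the authors' code: the 7x7 cell grid, LBP parameters, and the use of scikit-learn's multi-class AdaBoost (SAMME over decision stumps, scikit-learn >= 1.2) as a stand-in for AdaBoost.MH with a multi-threshold stump learner are all assumptions.

```python
# Hypothetical sketch: uniform-LBP histograms + boosted decision stumps for
# facial expression recognition. Parameters and the classifier choice are
# illustrative assumptions, not the evaluated system's exact configuration.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

P, R = 8, 1          # LBP neighbourhood: 8 sampling points, radius 1
N_BINS = P + 2       # uniform patterns -> P + 2 histogram bins
GRID = 7             # assumed: split each face crop into a 7x7 grid of cells

def lbp_histogram(face_img: np.ndarray) -> np.ndarray:
    """Concatenate per-cell uniform-LBP histograms into one feature vector."""
    lbp = local_binary_pattern(face_img, P, R, method="uniform")
    h, w = lbp.shape
    feats = []
    for i in range(GRID):
        for j in range(GRID):
            cell = lbp[i * h // GRID:(i + 1) * h // GRID,
                       j * w // GRID:(j + 1) * w // GRID]
            hist, _ = np.histogram(cell, bins=N_BINS, range=(0, N_BINS),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)

def train_expression_classifier(faces, labels):
    """faces: list of grayscale face crops; labels: expression class indices."""
    X = np.stack([lbp_histogram(f) for f in faces])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                              stratify=labels, random_state=0)
    # Boosted decision stumps stand in here for AdaBoost.MH.
    clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                             n_estimators=400)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf
```

    Training such a model on only the young subset of FACES and testing on the older subset is the kind of cross-age evaluation the abstract describes.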

    Securing teleoperated robot: Classifying human operator identity and emotion through motion-controlled robotic behaviors

    Get PDF
    Teleoperated robotic systems allow human operators to control robots from a distance, which mitigates the constraints of physical distance and offers invaluable applications in the real world. However, the security of these systems is a critical concern. System attacks and the potential impact of operators' inappropriate emotions can result in misbehavior of the remote robots, which poses risks to the remote environment. These concerns become particularly serious when performing mission-critical tasks, such as nuclear cleaning. This thesis explored innovative security methods for teleoperated robotic systems. Common security methods applicable to teleoperated robots include encryption, robot misbehavior detection and user authentication, but each has limitations for teleoperated robot systems: encryption adds communication overhead, robot misbehavior detection can only detect unusual signals on robot devices, and user authentication secures the system primarily at the access point. To address this, we built motion-controlled robot platforms that allow for robot teleoperation and proposed methods for performing user classification directly on remote-controlled robotic behavioral data, enhancing security integrity throughout the operation. This work is discussed in Chapter 3, which reports four experiments. Experiments 1 and 2 demonstrated the effectiveness of our approach, achieving user classification accuracies of 95% and 93% on the two platforms respectively, using motion-controlled robotic end-effector trajectories. The results of experiment 3 further indicated that control system performance directly impacts user classification efficacy. Additionally, in experiment 4 we deployed an AI agent to protect user biometric identities, ensuring that the robot's actions do not compromise user privacy in the remote environment. This chapter provided the methodological and experimental foundation for the subsequent work. Operators' emotions can also pose a security threat to the robot system. A remote robot operator's emotions can significantly affect the resulting robot motions, leading to unexpected consequences even when the user follows protocol and performs permitted tasks. The recognition of an operator's emotions in remote robot control scenarios is, however, under-explored. Emotion signals mainly comprise physiological signals, semantic information, facial expressions and bodily movements. However, most physiological signals are electrical signals that are vulnerable to motion artifacts, which prevents accurate signal acquisition and makes them unsuitable for teleoperated robot systems. Semantic information and facial expressions are sometimes inaccessible, raise significant privacy issues and require additional sensors in the teleoperated system. In Chapter 4 we proposed methods for emotion recognition from motion-controlled robotic behaviors. This work demonstrated for the first time that a motion-controlled robotic arm can inherit its human operator's emotions and that those emotions can be classified from robotic end-effector trajectories, achieving 83.3% accuracy. We developed two emotion recognition algorithms using Dynamic Time Warping (DTW) and a Convolutional Neural Network (CNN), deriving unique emotional features from the avatar's end-effector motions and joint spatial-temporal characteristics. Additionally, we demonstrated through direct comparison that our approach is more appropriate for motion-based telerobotic applications than traditional ECG-based methods. Furthermore, we discussed the implications of this system for prominent current and future remote robot operations and emotional robotics contexts. By integrating user classification and emotion recognition into teleoperated robotic systems, this thesis lays the groundwork for a new security paradigm that enhances both the security and safety of remote operations. Recognizing users and their emotions allows for more contextually appropriate robot responses, potentially preventing harm and improving the overall quality of teleoperated interactions. These advancements contribute significantly to the development of more adaptive, intuitive, and human-centered HRI applications, setting a precedent for future research in the field.
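    The DTW-based idea of classifying operators (or their emotions) from end-effector trajectories can be illustrated with a short self-contained sketch. This is not the thesis's pipeline: the 3-D trajectory format, the synthetic motions, the label names and the nearest-neighbour decision rule are all assumptions made for illustration.

```python
# Illustrative sketch: 1-nearest-neighbour classification of operator identity
# (or emotion class) from robot end-effector trajectories via Dynamic Time
# Warping (DTW). All data and labels below are synthetic placeholders.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two trajectories of shape (T, 3) = (x, y, z)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # pointwise distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify_trajectory(query, templates, labels):
    """Assign the label of the DTW-nearest template trajectory."""
    dists = [dtw_distance(query, t) for t in templates]
    return labels[int(np.argmin(dists))]

# Synthetic example: two 'operators' with distinct end-effector motion styles.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)[:, None]
smooth = np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t])
jerky  = smooth + 0.05 * rng.standard_normal(smooth.shape).cumsum(axis=0)
query  = smooth + 0.01 * rng.standard_normal(smooth.shape)
print(classify_trajectory(query, [smooth, jerky], ["operator_A", "operator_B"]))
```

    In practice a template set would be built per enrolled operator (or per emotion class), and the same distance matrix could feed a k-NN or SVM classifier rather than the single nearest neighbour used here.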

    Robots with emotional intelligence

    No full text