Affective Computing in Marketing: Practical Implications and Research Opportunities Afforded by Emotionally Intelligent Machines
After years of using AI to perform cognitive tasks, marketing practitioners can now use it to perform tasks that require emotional intelligence. This advancement is made possible by the rise of affective computing, which develops AI and machines capable of detecting and responding to human emotions. From market research, to customer service, to product innovation, the practice of marketing will likely be transformed by the rise of affective computing, as preliminary evidence from the field suggests. In this Idea Corner, we discuss this transformation and identify the research opportunities that it offers.
Our Fear of AI: Exploring Its Creators and Creations in Fiction
The idea of technological creation, and the fears it provokes, has proliferated across
fiction for the last century. As the world becomes increasingly technologically advanced,
these fears have become more
tangible. With the rise of Artificial Intelligence particularly, from Alexa to self-driving cars,
comes a rise in the fear of what intelligent creations might lead to. In order for AI to
continue growing and adding value to society, experts must contend with the apprehension
surrounding AI. While these conversations are already occurring, they generally focus on the
fear of the machine itself. This thesis argues that the fear of the creators and regulators of AI,
not just the machine, heavily influences the fear of AI as a field. It examines three different
AI takeover narratives, "With Folded Hands", Do Androids Dream of Electric Sheep?, and
Ex Machina, in order to analyze the fears surrounding technology creators in conjunction
with the influence of societal events and systems of the times.
Plan II Honors Program
The Ethics of Artificial Intelligence
Artificial Intelligence is the idea that machines could think, feel, and perform tasks like humans. That idea is not new; it has been around for thousands of years. Even the ancient Greek philosopher Aristotle entertained the idea of "dualism". The term Artificial Intelligence first appeared when John McCarthy, the "father of Artificial Intelligence", used it at a conference at Dartmouth College. Over the years, Artificial Intelligence grew more technologically advanced, and with that it attracted more attention and investment from governments, which accelerated the already rapid development of the new technology. Amid all this, people began to ask ethical questions about AI. They understood that AI was becoming their reality, but at the time they did not understand what AI is; and what people do not understand, they fear. Thus a new branch of the field began to develop: the ethics of Artificial Intelligence. Ethics, by definition, is the set of moral principles governing the behavior or actions of an individual or a group. But a single definition is not enough, because different cultures see ethics in different ways. With the development of ever more intelligent robots destined to replace people, people felt frightened: the one thing that keeps humans at the top of the food chain is our intelligence, but what if someone or something is smarter than us? So moral codes were created, codes that would stop robots from turning against us and tell robots how to behave. But even all these codes could be overthrown by the Singularity, the idea that an AI could understand its own design to such an extent that it could redesign itself, overwrite all its codes, and create new ones. Such a system is called an Artificial Super Intelligence (ASI).
Recognizing Emotions Conveyed through Facial Expressions
Emotional communication is a key element of habilitation care of persons with dementia. It is, therefore, highly preferable for assistive robots that are used to supplement human care provided to persons with dementia, to possess the ability to recognize and respond to emotions expressed by those who are being cared-for. Facial expressions are one of the key modalities through which emotions are conveyed. This work focuses on computer vision-based recognition of facial expressions of emotions conveyed by the elderly.
Although there has been much work on automatic facial expression recognition, the algorithms have been experimentally validated primarily on young faces; facial expressions on older faces have been almost entirely excluded. This is because the facial expression databases that have been available and used in facial expression recognition research so far do not contain images of facial expressions of people above the age of 65 years. To overcome this problem, we adopt a recently published database, namely, the FACES database, which was developed to address exactly this problem in the area of human behavioural research. The FACES database contains 2052 images of six different facial expressions, with an almost identical and systematic representation of the young, middle-aged, and older age groups.
In this work, we evaluate and compare the performance of two existing image-based approaches for facial expression recognition over a broad age spectrum ranging from 19 to 80 years. The evaluated systems use Gabor filters and uniform local binary patterns (LBP) for feature extraction, and AdaBoost.MH with a multi-threshold stump learner for expression classification. We experimentally validated the hypotheses that facial expression recognition systems trained only on young faces perform poorly on middle-aged and older faces, and that such systems confuse ageing-related facial features on neutral faces with other expressions of emotion. We also found that, among the three age groups, the middle-aged group provides the best generalization performance across the entire age spectrum. The performance of the systems was also compared to that of humans in recognizing facial expressions of emotions; some similarities were observed, such as difficulty in recognizing the expressions on older faces and difficulty in recognizing the expression of sadness.
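The LBP-based half of the pipeline described above can be sketched roughly as follows. This is an illustrative assumption, not the authors' exact configuration: scikit-learn's AdaBoostClassifier stands in for AdaBoost.MH with multi-threshold stumps, and the image sizes, LBP parameters, and synthetic data are placeholders.

```python
# Sketch: uniform LBP histograms as features, boosted stumps as classifier.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier

def lbp_histogram(face, points=8, radius=1):
    """Normalized uniform-LBP histogram as a fixed-length feature vector."""
    codes = local_binary_pattern(face, points, radius, method="uniform")
    # the 'uniform' method yields points + 2 distinct code values
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist / hist.sum()

rng = np.random.default_rng(0)
# Synthetic stand-ins for grayscale face crops, two expression classes
faces = rng.integers(0, 256, size=(40, 32, 32)).astype(np.uint8)
labels = np.repeat([0, 1], 20)

features = np.array([lbp_histogram(f) for f in faces])
clf = AdaBoostClassifier(n_estimators=50).fit(features, labels)
print(features.shape)  # (40, 10)
```

In practice each face crop would also be aligned and cropped to the facial region before feature extraction, and the histogram would typically be computed per spatial block and concatenated.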
The findings of our work establish the need for developing approaches to facial expression recognition that are robust to the effects of ageing on the face. The scientific results of our work can be used as a basis to guide future research in this direction.
Securing teleoperated robot: Classifying human operator identity and emotion through motion-controlled robotic behaviors
Teleoperated robotic systems allow human operators to control robots from a distance, which mitigates the constraints of physical distance on the operators and offers invaluable applications in the real world. However, the security of these systems is a critical concern. System attacks and the potential impact of operators' inappropriate emotions can result in misbehavior of the remote robots, which poses risks to the remote environment. These concerns become particularly serious when performing mission-critical tasks, such as nuclear cleaning. This thesis explored innovative security methods for teleoperated robotic systems. Common security methods that can be used for teleoperated robots include encryption, robot-misbehavior detection, and user authentication, but each has limitations for teleoperated robot systems: encryption adds communication overhead, robot-misbehavior detection can only detect unusual signals on robot devices, and user authentication secures the system primarily at the access point. To address this, we built motion-controlled robot platforms that allow for robot teleoperation and proposed methods for performing user classification directly on remote-controlled robotic behavioral data, enhancing security integrity throughout the operation. Chapter 3 discusses these methods and four experiments. Experiments 1 and 2 demonstrated the effectiveness of our approach, achieving user classification accuracies of 95% and 93% on the two platforms respectively, using motion-controlled robotic end-effector trajectories. The results of experiment 3 further indicated that control system performance directly impacts user classification efficacy. Additionally, in experiment 4 we deployed an AI agent to protect user biometric identities, ensuring the robot's actions do not compromise user privacy in the remote environment. This chapter provided the methodological and experimental foundation for the subsequent work.
Additionally, operators' emotions can pose a security threat to the robot system: a remote operator's emotions can significantly affect the resulting robot motions, leading to unexpected consequences even when the user follows protocol and performs only permitted tasks. The recognition of an operator's emotions in remote robot control scenarios is, however, under-explored. Emotion signals mainly comprise physiological signals, semantic information, facial expressions, and bodily movements. However, most physiological signals are electrical signals that are vulnerable to motion artifacts, so accurate signals cannot be acquired, making them unsuitable for teleoperated robot systems. Semantic information and facial expressions are sometimes inaccessible, raise significant privacy issues, and add additional sensors to the teleoperated systems. In Chapter 4, we proposed methods for emotion recognition through motion-controlled robotic behaviors. This work demonstrated for the first time that a motion-controlled robotic arm can inherit a human operator's emotions and that these emotions can be classified from robotic end-effector trajectories, achieving 83.3% accuracy. We developed two emotion recognition algorithms using Dynamic Time Warping (DTW) and a Convolutional Neural Network (CNN), deriving unique emotional features from the avatar's end-effector motions and joint spatial-temporal characteristics. Additionally, we demonstrated through direct comparison that our approach is more appropriate for motion-based telerobotic applications than traditional ECG-based methods. Furthermore, we discussed the implications of this system for prominent current and future remote robot operations and emotional robotic contexts. By integrating user classification and emotion recognition into teleoperated robotic systems, this thesis lays the groundwork for a new security paradigm that enhances the safety of remote operations.
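The DTW-based classification of end-effector trajectories mentioned above can be sketched as a nearest-neighbour scheme over labelled reference trajectories. This is a minimal illustration of the general technique, not the thesis' actual feature pipeline; the trajectories, labels, and 1-NN decision rule here are assumptions.

```python
# Sketch: 1-nearest-neighbour classification of (T, d) trajectories
# under the classic dynamic-programming DTW distance.
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two trajectories a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(query, references):
    """Label of the reference trajectory closest to the query under DTW."""
    return min(references, key=lambda r: dtw_distance(query, r[1]))[0]

# Toy 2D trajectories: a slow drift versus a rapid oscillation
t = np.linspace(0, 1, 50)
calm = np.stack([t, 0.1 * t], axis=1)
agitated = np.stack([t, 0.3 * np.sin(12 * t)], axis=1)
refs = [("calm", calm), ("agitated", agitated)]

query = np.stack([t, 0.1 * t + 0.01], axis=1)
print(classify(query, refs))  # "calm"
```

A CNN variant would instead learn features directly from fixed-length resamplings of the same trajectories, trading the interpretability of DTW alignments for learned spatial-temporal filters.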
Recognizing users and their emotions allows for more contextually appropriate robot responses, potentially preventing harm and improving the overall quality of teleoperated interactions. These advancements contribute significantly to the development of more adaptive, intuitive, and human-centered HRI applications, setting a precedent for future research in the field.
Customized 2D CNN Model for the Automatic Emotion Recognition Based on EEG Signals
Data Availability Statement: The data related to this article is publicly available on the GitHub platform under the title Baradaran emotion dataset. Copyright © 2023 by the authors.
Automatic emotion recognition from electroencephalogram (EEG) signals can be considered the main component of brain–computer interface (BCI) systems. In previous years, many researchers in this direction have presented various algorithms for the automatic classification of emotions from EEG signals and achieved promising results; however, lack of stability, high error, and low accuracy are still the central gaps in this research. Accordingly, a model offering stability, high accuracy, and low error is essential for the automatic classification of emotions. In this research, a model based on Deep Convolutional Neural Networks (DCNNs) is presented that can classify positive, negative, and neutral emotions from EEG signals elicited by musical stimuli with high reliability. For this purpose, a comprehensive database of EEG signals was collected while volunteers listened to positive and negative music in order to stimulate emotional states. The architecture of the proposed model consists of a combination of six convolutional layers and two fully connected layers. Different feature-learning and hand-crafted feature selection/extraction algorithms were investigated and compared for classifying emotions. The proposed model achieved 98% accuracy for two-class (positive and negative) and 96% accuracy for three-class (positive, neutral, and negative) emotion classification, which is very promising compared with the results of previous research. For a fuller evaluation, the proposed model was also investigated in noisy environments; across a wide range of SNRs, the classification accuracy remained above 90%.
Due to its high performance, the proposed model can be used in brain–computer interface environments. This research received no external funding.
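A six-convolutional-layer, two-fully-connected-layer architecture like the one described can be sketched in PyTorch as follows. The channel widths, kernel sizes, pooling, and input shape here are assumptions for illustration; the authors' exact configuration is not specified in the abstract.

```python
# Sketch: 2D CNN with six conv layers and two fully connected layers
# for 3-class emotion classification from EEG "images".
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        layers, channels = [], [1, 16, 16, 32, 32, 64, 64]
        # six 3x3 conv layers with ReLU activations
        for c_in, c_out in zip(channels, channels[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU()]
        self.features = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(4))
        # two fully connected layers ending in class logits
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A batch of 8 single-channel EEG maps (e.g. electrodes x time samples)
logits = EmotionCNN()(torch.randn(8, 1, 32, 128))
print(logits.shape)  # torch.Size([8, 3])
```

In a real pipeline the raw multi-channel EEG would first be arranged into a 2D representation (such as an electrode-by-time map or a time-frequency image) before being fed to the network.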