234,340 research outputs found

    Intelligent Playful Environments for Animals

    Full text link
    © Owner/Author 2015. This is the author's version of the work, posted here for personal use and not for redistribution. The definitive Version of Record was published in Interacción '15: Proceedings of the XVI International Conference on Human Computer Interaction, http://dx.doi.org/10.1145/2829875.2829879
    We are evolving towards an interconnected and ubiquitous world, where digital devices and interfaces progressively adapt themselves to fit our needs and ease our daily activities. Although we coexist with plenty of animal species, such as our pets, we are approaching the evolution of technology in a strictly human-centric manner. A new field in Computer Science, called Animal-Computer Interaction (ACI), aims to fill this technological gap by developing systems and interfaces specifically designed for animals. Supporting animals' natural behavior and habits with suitable technology could improve both humans' and animals' wellbeing. As a consequence, this doctoral research aims to explore, design and develop animal-centered intelligent systems that focus on enhancing one of the most natural animal behaviors: play. The main goal of this research is therefore to extend ACI with the ability to automatically manage and adapt animals' play activity in order to improve their wellbeing.
    Work supported by MINECO (TIN2010-20488 and TIN2014-60077-R), UPV (UPV-FE-2014-24), MECD (FPU13/03831) and GVA (APOSTD/2013/013).
    Pons Tomás, P.; Jaén Martínez, FJ.; Catalá Bolós, A. (2015). Intelligent Playful Environments for Animals. ACM. https://doi.org/10.1145/2829875.2829879

    Conceptual Primitive Decomposition for Knowledge Sharing via Natural Language

    Get PDF
    Natural language is an ideal mode of interaction and knowledge sharing between intelligent computer systems and their human users. But a major problem that natural language interaction poses is linguistic variation, or the paraphrase problem: there are a variety of ways of referring to the same idea. This is a special problem for intelligent systems in domains such as information retrieval, where a query presented in natural language is matched against an ontology or knowledge base, particularly when its representation uses a vocabulary based in natural language. This paper proposes solutions to these problems in primitive decomposition methods that represent concepts in terms of structures reflecting low-level, embodied human cognition. We argue that this type of representation system engenders richer relations between natural language expressions and knowledge structures, enabling more effective interactive knowledge sharing.
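The idea of primitive decomposition can be illustrated with a minimal sketch. The lexicon, verbs, and primitive names below (Schank-style conceptual primitives such as ATRANS for transfer of possession and MTRANS for transfer of information) are hypothetical illustrations, not the paper's actual representation system: paraphrases that differ in wording reduce to the same primitive structure, so they match.

```python
# Hypothetical lexicon mapping surface verbs to conceptual primitives
# (ATRANS = transfer of possession, MTRANS = transfer of information).
LEXICON = {
    "give": "ATRANS", "donate": "ATRANS", "lend": "ATRANS",
    "tell": "MTRANS", "inform": "MTRANS", "notify": "MTRANS",
}

def decompose(sentence: str) -> set:
    """Reduce a sentence to the set of primitives its known verbs express."""
    return {LEXICON[w] for w in sentence.lower().split() if w in LEXICON}

def match(query: str, entry: str) -> bool:
    """Two expressions match if they share at least one primitive."""
    return bool(decompose(query) & decompose(entry))
```

Under this toy lexicon, "inform Bob" and "tell Bob" both decompose to {"MTRANS"}, so a query phrased either way retrieves the same knowledge-base entry.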

    Facial emotion recognition using min-max similarity classifier

    Full text link
    Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, the automatic recognition of facial expressions using image template matching techniques suffers from the natural variability of facial features and recording conditions. In spite of the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition is still an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces the problem of inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest neighbor classifier capable of suppressing feature outliers. The results indicate an improvement in recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms existing template matching methods.
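A minimal sketch of the pipeline described above, assuming a simple [0, 1] rescaling for pixel normalization and the common ratio-of-minima-to-maxima form of the Min-Max similarity; the authors' exact normalization and metric may differ:

```python
import numpy as np

def normalize(img):
    # Rescale pixel intensities to [0, 1] to remove per-image intensity offsets.
    img = np.asarray(img, dtype=float)
    rng = img.max() - img.min()
    return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)

def min_max_similarity(a, b):
    # Ratio of summed elementwise minima to summed elementwise maxima.
    # A single outlier pixel only mildly perturbs the two sums, so the
    # metric is less sensitive to outliers than Euclidean distance.
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def classify(test_img, train_imgs, train_labels):
    # Nearest-neighbor decision: pick the training template with the
    # highest Min-Max similarity to the normalized test image.
    t = normalize(test_img)
    sims = [min_max_similarity(t, normalize(x)) for x in train_imgs]
    return train_labels[int(np.argmax(sims))]
```

Identical templates yield a similarity of 1.0, and the similarity decreases toward 0 as the templates diverge, so the nearest neighbor is the template with the highest score.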

    Integrated Framework Design for Intelligent Human Machine Interaction

    Get PDF
    Human-computer interaction, sometimes referred to as man-machine interaction, is a concept that emerged alongside computers, or more generally machines. The methods by which humans interact with computers have come a long way, and new designs and technologies appear every day. However, computer systems and complex machines are often only technically successful; most of the time users find them confusing to use, so such systems are never used efficiently. Therefore, building sophisticated machines and robots is not the only thing to address; more effort should be put into making these machines simpler for all kinds of users, and generic enough to accommodate different types of environments. This is where the design of intelligent human-computer interaction modules comes in. In this work, we aim to implement a generic framework (referred to as the CIMF framework) that allows the user to control the synchronized and coordinated cooperative work that a set of robots can perform. Three robots are involved so far: two manipulators and one mobile robot. The framework should be generic enough to be hardware independent and to allow the easy integration of new entities and modules. We also aim to implement the different building blocks for the intelligent manufacturing cell that communicates with the framework via the most intelligent and advanced human-computer interaction techniques. Three techniques are addressed: interface-, audio-, and visual-based interaction.

    Symbiotic deep learning for medical image analysis with applications in real-time diagnosis for fetal ultrasound screening

    Get PDF
    The last hundred years have seen a monumental rise in the power and capability of machines to perform intelligent tasks in the stead of previously human operators. This rise is not expected to slow down any time soon, and what this means for society and humanity as a whole remains to be seen. The overwhelming notion is that, with the right goals in mind, the growing influence of machines on our everyday tasks will enable humanity to give more attention to the truly groundbreaking challenges that we all face together. This will usher in a new age of human-machine collaboration in which humans and machines may work side by side to achieve greater heights for all of humanity. Intelligent systems are useful in isolation, but their true benefits come to the fore in complex systems where the interaction between humans and machines can be made seamless; it is this goal of symbiosis between human and machine, which may democratise complex knowledge, that motivates this thesis. In the recent past, data-driven methods have come to the fore and now represent the state of the art in many different fields. Alongside the shift from rule-based towards data-driven methods, we have also seen a shift in how humans interact with these technologies. Human-computer interaction is changing in response to data-driven methods, and new techniques must be developed to enable the same symbiosis between man and machine for data-driven methods as for previous formula-driven technology. We address five key challenges which need to be overcome for data-driven human-in-the-loop computing to reach maturity.
    These are (1) the 'Categorisation Challenge', where we examine existing work and form a taxonomy of the different methods being utilised for data-driven human-in-the-loop computing; (2) the 'Confidence Challenge', where data-driven methods must communicate interpretable beliefs about how confident their predictions are; (3) the 'Complexity Challenge', where reasoned communication becomes increasingly important as the complexity of tasks, and of the methods that solve them, increases; (4) the 'Classification Challenge', in which we look at how complex methods can be separated in order to provide greater reasoning in complex classification tasks; and finally (5) the 'Curation Challenge', where we challenge the assumptions around bottleneck creation for the development of supervised learning methods.

    An Intelligent Robot and Augmented Reality Instruction System

    Get PDF
    Human-Centered Robotics (HCR) is a research area that focuses on how robots can empower people to live safer, simpler, and more independent lives. In this dissertation, I present a combination of two technologies to deliver human-centric solutions to an important population. The first nascent area that I investigate is the creation of an Intelligent Robot Instructor (IRI) as a learning and instruction tool for human pupils. The second technology is the use of augmented reality (AR) to create an Augmented Reality Instruction (ARI) system to provide instruction via a wearable interface. To function in an intelligent and context-aware manner, both systems require the ability to reason about their perception of the environment and make appropriate decisions. In this work, I construct a novel formulation of several education methodologies, particularly those known as response prompting, as part of a cognitive framework to create a system for intelligent instruction, and compare these methodologies in the context of intelligent decision making using both technologies. The IRI system is demonstrated through experiments with a humanoid robot that uses object recognition and localization for perception and interacts with students through speech, gestures, and object interaction. The ARI system uses augmented reality, computer vision, and machine learning methods to create an intelligent, contextually aware instructional system. By using AR to teach prerequisite skills that lend themselves well to visual, augmented reality instruction prior to a robot instructor teaching skills that lend themselves to embodied interaction, I am able to demonstrate the potential of each system independently, as well as in combination, to facilitate students' learning.
    I identify people with intellectual and developmental disabilities (I/DD) as a particularly significant use case and show that IRI and ARI systems can help fulfill the compelling need to develop tools and strategies for people with I/DD. I present results demonstrating that both systems can be used independently by students with I/DD to quickly and easily acquire the skills required to perform relevant vocational tasks. This is the first successful real-world application of response prompting for decision making in a robotic and augmented reality intelligent instruction system.

    Envisioning Future Playful Interactive Environments for Animals

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-981-287-546-4_6
    Play stands as one of the most natural and inherent behaviors among the majority of living species, specifically humans and animals. Human play has evolved significantly over the years, and so have the artifacts that allow us to play: from children playing tag games without any tools other than their bodies, to modern video games using haptic and wearable devices to augment the playful experience. However, this ludic revolution has not been the same for humans' closest companions, our pets. Recently, a new discipline inside the human–computer interaction (HCI) community, called animal–computer interaction (ACI), has focused its attention on improving animals' welfare using technology. Several works in the ACI field rely on playful interfaces to mediate digital communication between animals and humans. Until now, the development of these interfaces has comprised only a single goal or activity, and their adaptation to the animals' needs requires the developers' intervention. This work analyzes the existing approaches and proposes a more generic and autonomous system aimed at addressing several aspects of animal welfare at a time: Intelligent Playful Environments for Animals. The great potential of these systems is discussed, explaining how incorporating intelligent capabilities within playful environments could allow learning from the animals' behavior and automatically adapting the game to the animals' needs and preferences. The engaging playful activities created with these systems could serve different purposes and eventually improve animals' quality of life.
    This work was partially funded by the Spanish Ministry of Science and Innovation under the National R&D&I Program within the projects Create Worlds (TIN2010-20488) and SUPEREMOS (TIN2014-60077-R), and from Universitat Politècnica de València under Project UPV-FE-2014-24. It also received support from a postdoctoral fellowship within the VALi+d Program of the Conselleria d'Educació, Cultura i Esport (Generalitat Valenciana) awarded to Alejandro Catalá (APOSTD/2013/013). The work of Patricia Pons has been supported by the Universitat Politècnica de València under the "Beca de Excelencia" program and currently by an FPU fellowship from the Spanish Ministry of Education, Culture, and Sports (FPU13/03831).
    Pons Tomás, P.; Jaén Martínez, FJ.; Catalá Bolós, A. (2015). Envisioning Future Playful Interactive Environments for Animals. In More Playful User Interfaces: Interfaces that Invite Social and Physical Interaction. Springer. 121-150. https://doi.org/10.1007/978-981-287-546-4_6

    TOWARDS BUILDING INTELLIGENT COLLABORATIVE PROBLEM SOLVING SYSTEMS

    Get PDF
    Historically, Collaborative Problem Solving (CPS) systems have focused on Human-Computer Interaction (HCI) issues, such as providing a good communication experience among participants, whereas Intelligent Tutoring Systems (ITS) address both HCI issues and the leveraging of Artificial Intelligence (AI) techniques in their intelligent agents. This dissertation seeks to narrow the gap between CPS systems and ITS by adopting methods used in ITS research. Towards this goal, we focus on analyzing interactions with textual inputs in online learning systems such as DeepTutor and Virtual Internships (VI) to understand their semantics and underlying intents. To address the problem of assessing student-generated short text, this research explores, first, data-driven machine learning models coupled with expert-generated as well as general text-analysis features; second, a method that utilizes knowledge graph embeddings for assessing student answers in an ITS; and finally, a method using only standard reference examples generated by a human teacher, which is useful when a new system has been deployed and no student data are yet available.
    To handle negation in tutorial dialogue, this research explored a Long Short-Term Memory (LSTM) based method. The advantage of this method is that it requires no human-engineered features and performs comparably with other models that use them.
    Another important analysis in this research is identifying the speech acts in the conversation utterances of multiple players in VI. Among various models, a neural network trained with noisy labels performed better at categorizing the speech acts of the utterances.
    The learners' professional skill development in VI is characterized by the distribution of SKIVE elements, the components of epistemic frames. Inferring the population distribution of these elements could help assess the learners' skill development. This research applied a Markov method to infer the population distribution of SKIVE elements, namely the stationary distribution of the elements.
    While studying these aspects of interaction in our targeted learning systems, our broader motivation is to replace the human mentor or tutor with an intelligent agent, which helps reduce cost as well as scale up the system.
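The stationary distribution mentioned above has a standard computation, sketched below under the assumption of an ergodic (irreducible, aperiodic) chain over coded states; the 2-state transition matrix is an invented example, not data from the dissertation. The stationary distribution π satisfies π = πP and is the left eigenvector of P for eigenvalue 1, normalized to sum to 1.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of an ergodic Markov chain: pi = pi @ P.

    Computed as the left eigenvector of P associated with eigenvalue 1
    (i.e. a right eigenvector of P.T), normalized to sum to 1.
    """
    vals, vecs = np.linalg.eig(P.T)
    idx = int(np.argmin(np.abs(vals - 1.0)))  # eigenvalue closest to 1
    pi = np.real(vecs[:, idx])
    return pi / pi.sum()

# Hypothetical 2-state transition matrix between coded discourse elements.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = stationary_distribution(P)  # long-run proportion of time in each state
```

For this matrix the long-run proportions are π = [5/6, 1/6]: state 0 is visited five times as often as state 1, which is the kind of population-level distribution the elements' stationary distribution summarizes.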

    Emotion Recognition from EEG Signal Focusing on Deep Learning and Shallow Learning Techniques

    Get PDF
    Recently, electroencephalogram-based emotion recognition has become crucial to making Human-Computer Interaction (HCI) systems more intelligent. Owing to its outstanding applications, e.g., person-based decision making, mind-machine interfacing, cognitive interaction, affect detection, and feeling detection, emotion recognition has attracted much of the recent wave of AI-empowered research. Numerous studies have therefore been conducted using a range of approaches, which demands a systematic review of the methodologies used for this task together with their feature sets and techniques; such a review can guide beginners in composing an effective emotion recognition system. In this article, we conduct a rigorous review of the state-of-the-art emotion recognition systems published in recent literature and summarize some of the common emotion recognition steps with relevant definitions, theories, and analyses to provide the key knowledge needed to develop a proper framework. Moreover, the studies included here are dichotomized into two categories: i) deep learning-based and ii) shallow machine learning-based emotion recognition systems. The reviewed systems are compared based on method, classifier, number of classified emotions, accuracy, and dataset used. An informative comparison, recent research trends, and some recommendations for future research directions are also provided.