
    Integrating Multiple Sketch Recognition Methods to Improve Accuracy and Speed

    Sketch recognition is the computer understanding of hand-drawn diagrams. Recognizing sketches instantaneously is necessary to build interfaces with real-time feedback. Various techniques exist to quickly recognize sketches into ten or twenty classes. However, for much larger datasets of sketches drawn from a large number of classes, these existing techniques can take an extended period of time to accurately classify an incoming sketch and require significant computational overhead. Thus, to make classification of large datasets feasible, we propose using multiple stages of recognition. In the initial stage, gesture-based feature values are calculated and a trained model is used to classify the incoming sketch. Sketches whose confidence falls below a threshold value go through a second stage of geometric recognition. In this second stage, the sketch is segmented and sent to shape-specific recognizers. The sketches are matched against predefined shape descriptions, and confidence values are calculated. The system outputs a list of classes that the sketch could be classified as, along with the accuracy and precision for each sketch. This process significantly reduces the time taken to classify such large datasets of sketches and increases both the accuracy and precision of the recognition.
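The staged pipeline described in the abstract can be sketched as follows. This is an illustrative outline only, not the authors' implementation: `fast_classifier`, the shape-specific recognizers, and the 0.8 threshold are all hypothetical stand-ins.

```python
# Illustrative sketch of a two-stage recognition pipeline: a cheap
# gesture-based classifier handles confident cases, and low-confidence
# sketches fall through to slower shape-specific geometric recognizers.

def classify_sketch(sketch, fast_classifier, geometric_recognizers, threshold=0.8):
    """Return (label, confidence), escalating to stage two when unsure."""
    label, confidence = fast_classifier(sketch)
    if confidence >= threshold:
        return label, confidence  # stage 1 was confident enough
    # Stage 2: run every shape-specific recognizer on the sketch and
    # keep the highest-confidence candidate.
    return max((recognizer(sketch) for recognizer in geometric_recognizers),
               key=lambda result: result[1])
```

Because the expensive recognizers run only on the uncertain minority of inputs, the average cost per sketch stays close to that of the fast stage.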


    Development of context-sensitive user interfaces

    A well-designed, intuitive, and appealing user interface is a key factor in the success of computer products and systems. To improve the development and usability of user interfaces, the characteristics of users must be taken into account. This requires an interdisciplinary approach and knowledge from different fields such as the computing, cognitive, and biological sciences. In addition, the characteristics of the physical environment and the medium in which human-computer interaction takes place must be considered. User interface development must also account for the hardware devices employed in interaction with the user, the available software resources, and the characteristics of the software systems that will use the interface. Accordingly, the concept of a context-sensitive user interface is introduced, defined as a user interface adaptable to the context of interaction with a concrete user. The context of interaction is decomposed into three classes of entities: the user of the computer system (the human); the hardware and software platforms through which users interact with the system; and the physical environment in which the interaction takes place.
Looking at the evolution of software development, we can observe that the level of abstraction at which software is described keeps increasing. The latest trend is to specify software using platform-independent models, which are then gradually and (semi-)automatically transformed into executable applications for different platforms and target devices. Model-driven architecture, used for the development of complex software solutions, hierarchically organizes concepts and models into multiple abstraction levels. This is especially important given that the development of context-sensitive user interfaces is a complex process involving the modeling of a large number of elements at different abstraction levels. In this thesis, we explored how to improve the development of context-sensitive user interfaces. We propose a solution that enables the automated development of user interfaces adapted to the context of human-computer interaction. The solution extends a modeling language, the standard software development process (the Unified Process), and development tools with elements specific to human-computer interaction. On this basis, a model of context-sensitive human-computer interaction has been developed, and user interface models at different abstraction levels have been proposed. For reasons of standardization, wide acceptance, and availability of development tools, we decided to extend the UML (Unified Modeling Language) modeling language and the ATL (Atlas Transformation Language) model transformation language. The application of the proposed approach is demonstrated with two case studies from different domains.
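As a purely illustrative example of the context-of-interaction idea (the thesis itself works with UML models and ATL transformations, not Python), a context composed of the three entity classes above might drive the choice of a concrete UI variant; every attribute and adaptation rule here is invented for the example.

```python
# Hypothetical sketch: a context (user, platform, physical environment)
# is mapped to concrete presentation choices for a UI variant.

from dataclasses import dataclass

@dataclass
class Context:
    user_visual_impairment: bool  # user entity
    platform: str                 # platform entity, e.g. "desktop" / "mobile"
    ambient_noise: str            # environment entity, e.g. "quiet" / "loud"

def select_ui_variant(ctx: Context) -> dict:
    """Derive presentation choices from the interaction context."""
    return {
        "font_scale": 1.5 if ctx.user_visual_impairment else 1.0,
        "layout": "single-column" if ctx.platform == "mobile" else "multi-pane",
        "audio_feedback": ctx.ambient_noise == "quiet",
    }
```

In a model-driven setting, rules of this kind would live in model-to-model transformations rather than application code, so the same abstract UI model can be refined for each target context.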

    Behavioral Task Modeling for Entity Recommendation

    Our everyday tasks involve interactions with a wide range of information. The information that we manage is often associated with a task context. However, current computer systems do not organize information in this way and do not help the user find information in task context; instead, they require explicit user actions such as searching and information seeking. We explore the use of task context to guide the delivery of information to the user proactively, that is, to have the right information easily available at the right time. In this thesis, we used two types of novel contextual information for task modeling: 24/7 behavioral recordings and spoken conversations. The task context is created by monitoring the user's information behavior from temporal, social, and topical aspects; it can be contextualized by several entities such as applications, documents, people, time, and various keywords determining the task. By tracking the associations among these entities, we can infer the user's task context, predict future information access, and proactively retrieve relevant information for the task at hand. The approach is validated with a series of field studies, in which altogether 47 participants voluntarily installed a screen monitoring system on their laptops 24/7 to collect available digital activities, and their spoken conversations were recorded. Different aspects of the data were considered to train the models. In the evaluation, we treated information sourced from several applications, spoken conversations, and various aspects of the data as different kinds of influence on the prediction performance. The combined influences of multiple data sources and aspects were also considered in the models. Our findings revealed that task information could be found in a variety of applications and spoken conversations.
In addition, we found that task context models that consider behavioral information captured from the computer screen and spoken conversations could yield a promising improvement in recommendation quality compared to the conventional modeling approach that considers only pre-determined interaction logs, such as query logs or Web browsing history. We also showed how a task context model could support users' work performance, reducing their search effort by ranking and suggesting relevant information. Our results and findings have direct implications for information personalization and recommendation systems that leverage contextual information to predict and proactively present personalized information to the user, improving the interaction experience with computer systems.
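The association-tracking idea can be sketched under strong simplifying assumptions: entities observed in the same activity window are linked by co-occurrence counts, and the entities most strongly associated with the current context are suggested proactively. Window segmentation and entity extraction are assumed to happen upstream; none of this is the thesis's actual model.

```python
# Toy sketch of entity association for proactive recommendation:
# count pairwise co-occurrence of entities (apps, documents, people,
# keywords) within activity windows, then rank unseen entities by
# their total association with the current context.

from collections import Counter
from itertools import combinations

def build_association(windows):
    """Count how often each pair of entities co-occurs in a window."""
    counts = Counter()
    for entities in windows:
        for a, b in combinations(sorted(set(entities)), 2):
            counts[(a, b)] += 1
    return counts

def recommend(counts, current, k=3):
    """Rank entities outside the current context by association strength."""
    scores = Counter()
    for (a, b), c in counts.items():
        if a in current and b not in current:
            scores[b] += c
        elif b in current and a not in current:
            scores[a] += c
    return [entity for entity, _ in scores.most_common(k)]
```

A real system would weight these links by recency and source (screen content vs. conversation), but the ranking step stays the same in spirit.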

    Enhancing explainability and scrutability of recommender systems

    Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm's behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from information consumers to receive proper explanations for their personalized recommendations. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Moreover, in the event of receiving undesirable content, explanations may contain valuable information as to how the system's behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems:
• We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users' profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal.
• We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible to users because they present subsets of the user's prior actions responsible for the received recommendations. PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations.
• We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability, and subsequently the recommendation models, by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations.
We evaluate all proposed models and methods with real user studies and demonstrate their benefits at achieving explainability and scrutability in recommender systems.
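The counterfactual notion behind explanations of this kind can be illustrated with a deliberately naive sketch: find a small subset of the user's past actions whose removal changes the top recommendation. PRINCE itself relies on a polynomial-time algorithm over personalized PageRank; the brute-force search and the toy recommender below are only for intuition.

```python
# Naive counterfactual search: the smallest subset of past actions
# (up to max_size) whose removal flips the top recommendation is
# returned as the explanation for that recommendation.

from itertools import combinations

def counterfactual_explanation(actions, recommender, max_size=2):
    """Return a minimal action subset whose removal changes the top item."""
    original = recommender(frozenset(actions))
    for size in range(1, max_size + 1):
        for subset in combinations(actions, size):
            remaining = frozenset(actions) - set(subset)
            if recommender(remaining) != original:
                return set(subset)  # these actions explain the recommendation
    return None  # no small counterfactual found
```

Because the returned actions are ones the user actually took, the explanation doubles as a lever for scrutability: undoing those actions demonstrably changes the output.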

    Social search in collaborative tagging networks : the role of ties

    [no abstract available]

    Exploratory Browsing

    In recent years digital media has influenced many areas of our life. The transition from analogue to digital has substantially changed our ways of dealing with media collections. Today's interfaces for managing digital media mainly offer fixed linear models corresponding to the underlying technical concepts (folders, events, albums, etc.), or metaphors borrowed from their analogue counterparts (e.g., stacks, film rolls). However, people's mental interpretations of their media collections often go beyond the scope of a linear scan. Beyond explicit search with specific goals, current interfaces cannot sufficiently support explorative and often non-linear behavior. This dissertation presents an exploration of interface design to enhance the browsing experience with media collections. The main outcome of this thesis is a new model of Exploratory Browsing to guide the design of interfaces that support the full range of browsing activities, especially Exploratory Browsing. We define Exploratory Browsing as the behavior when the user is uncertain about his or her targets and needs to discover areas of interest (exploratory), in which he or she can explore in detail and possibly find some acceptable items (browsing). According to the browsing objectives, we group browsing activities into three categories: Search Browsing, General Purpose Browsing, and Serendipitous Browsing. In the context of this thesis, Exploratory Browsing refers to the latter two activities, which go beyond explicit search with specific objectives. We systematically explore the design space of interfaces that support the Exploratory Browsing experience. Applying the methodology of User-Centered Design, we develop eight prototypes covering two main usage contexts: browsing personal collections and browsing in online communities. The main studied media types are photographs and music.
The main contribution of this thesis lies in deepening the understanding of how people's exploratory behavior impacts interface design. This thesis contributes to the field of interface design for media collections in several respects. With the goal of informing interface design that supports the Exploratory Browsing experience with media collections, we present a model of Exploratory Browsing covering the full range of exploratory activities around media collections. We investigate this model in different usage contexts and develop eight prototypes. The substantial insights gathered during the development and evaluation of these prototypes inform the further refinement of our model: we uncover the underlying transitional relations between browsing activities and discover several stimulators that encourage a fluid and effective transition between activities. Based on this model, we propose a catalogue of general interface characteristics and employ this catalogue as criteria to analyze the effectiveness of our prototypes. We also present several general suggestions for designing interfaces for media collections.

    Multimodal Interactive Transcription of Handwritten Text Images

    This thesis presents a new interactive and multimodal framework for the transcription of handwritten documents. Rather than producing the complete transcription automatically, this approach aims to assist the expert in the demanding task of transcribing. To date, available handwritten text recognition systems do not provide transcriptions acceptable to users, and human intervention is generally required to correct the transcriptions obtained. These systems have proven genuinely useful in constrained applications with limited vocabularies (such as the recognition of postal addresses or of numerical amounts on bank cheques), achieving acceptable results in such tasks. However, when working with unconstrained handwritten documents (such as historical manuscripts or spontaneous text), current technology achieves only unacceptable results. The interactive scenario studied in this thesis enables a more effective solution. In this scenario, the recognition system and the user cooperate to generate the final transcription of the text image. The system uses the text image and a previously validated part of the transcription (the prefix) to propose a possible continuation. The user then finds and corrects the next error produced by the system, thereby generating a new, longer prefix. This new prefix is used by the system to suggest a new hypothesis. The technology employed is based on hidden Markov models and n-grams, used here in the same way as in automatic speech recognition. Some modifications to the conventional definition of n-grams have been necessary to take the user's feedback into account in this system.
Romero Gómez, V. (2010). Multimodal Interactive Transcription of Handwritten Text Images [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8541
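The prefix-driven interactive loop can be illustrated with a toy bigram language model: given a user-validated prefix, the system proposes the most likely continuation. The real system couples such n-grams with HMM optical models over the text image; the corpus and greedy decoding below are purely illustrative.

```python
# Toy sketch of prefix-conditioned completion with a bigram model:
# the system extends a validated prefix word by word, always choosing
# the most frequent successor seen in training.

from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count word -> successor frequencies over a training corpus."""
    model = defaultdict(Counter)
    for words in sentences:
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def complete(prefix, model, max_words=5):
    """Greedily extend a validated prefix with likely continuations."""
    words = list(prefix)
    for _ in range(max_words):
        successors = model.get(words[-1])
        if not successors:
            break
        words.append(successors.most_common(1)[0][0])
    return words
```

In the interactive setting, each user correction lengthens the validated prefix, and the model re-decodes only the suffix, so every keystroke of feedback constrains the next hypothesis.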

    Eye Tracking Methods for Analysis of Visuo-Cognitive Behavior in Medical Imaging

    Predictive modeling of human visual search behavior and the underlying metacognitive processes is now possible thanks to significant advances in bio-sensing device technology and machine intelligence. Eye-tracking bio-sensors, for example, can measure psycho-physiological response through change events in the configuration of the human eye. These events include positional changes such as visual fixations, saccadic movements, and scanpaths, and non-positional changes such as blinks and pupil dilation and constriction. Using data from eye-tracking sensors, we can model human perception, cognitive processes, and responses to external stimuli. In this study, we investigated the visuo-cognitive behavior of clinicians during the diagnostic decision process for breast cancer screening under clinically equivalent experimental conditions involving multiple monitors and breast projection views. Using a head-mounted eye-tracking device and a customized user interface, we recorded eye change events and diagnostic decisions from 10 clinicians (three breast-imaging radiologists and seven radiology residents) for a corpus of 100 screening mammograms (comprising cases of varied pathology and breast parenchyma density). We proposed novel features and gaze analysis techniques that help to encode discriminative pattern changes in positional and non-positional measures of eye events. These changes were shown to correlate with individual image readers' identity and experience level, mammographic case pathology and breast parenchyma density, and diagnostic decision. Furthermore, our results suggest that a combination of machine intelligence and bio-sensing modalities can provide adequate predictive capability for the characterization of a mammographic case and image readers' diagnostic performance. Lastly, features characterizing eye movements can be utilized for biometric identification purposes.
These findings are impactful for real-time performance monitoring and for personalized intelligent training and evaluation systems in screening mammography. Furthermore, the developed algorithms are applicable in other domains involving high-risk visual tasks.
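As one concrete example of the positional eye events mentioned above, fixations can be extracted from raw gaze samples with a dispersion-threshold (I-DT-style) grouping: consecutive samples whose spatial spread stays below a threshold are merged into one fixation. The thresholds below are illustrative, not those used in the study.

```python
# Dispersion-threshold fixation detection sketch: consecutive (x, y)
# gaze samples form a fixation while their bounding-box spread stays
# under max_dispersion; each fixation is reported as a centroid.

def _dispersion(window):
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=25.0, min_length=3):
    """Group consecutive gaze samples into fixation centroids."""
    fixations = []
    start = 0
    while start + min_length <= len(samples):
        end = start + min_length
        if _dispersion(samples[start:end]) <= max_dispersion:
            # grow the window while the points still stay together
            while end < len(samples) and _dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            xs, ys = zip(*samples[start:end])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            start = end
        else:
            start += 1
    return fixations
```

Saccades then fall out for free as the jumps between successive fixation centroids, and fixation counts, durations, and scanpath order become the raw material for features like those the study correlates with reader expertise.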

    Learning Data-Driven Models of Non-Verbal Behaviors for Building Rapport Using an Intelligent Virtual Agent

    Get PDF
    There is a growing societal need to address the increasing prevalence of behavioral health issues, such as obesity, alcohol or drug use, and a general lack of treatment adherence for a variety of health problems. The statistics, worldwide and in the USA, are daunting: excessive alcohol use is the third leading preventable cause of death in the United States (with 79,000 deaths annually) and is responsible for a wide range of health and social problems. On the positive side, these behavioral health issues (and associated possible diseases) can often be prevented with relatively simple lifestyle changes, such as losing weight through diet and/or physical exercise, or learning how to reduce alcohol consumption. Medicine has therefore started to move toward preventively promoting wellness rather than solely treating already established illness. Evidence-based, patient-centered Brief Motivational Interviewing (BMI) interventions have been found particularly effective in helping people find the intrinsic motivation to change problem behaviors after short counseling sessions, and to maintain healthy lifestyles over the long term. A lack of locally available personnel well trained in BMI, however, often limits access to successful interventions for people in need. To fill this accessibility gap, Computer-Based Interventions (CBIs) have started to emerge. The success of CBIs, however, critically relies on ensuring the engagement and retention of CBI users so that they remain motivated to use these systems and return to them over the long term as necessary. Because of their text-only interfaces, current CBIs can only express limited empathy and rapport, which are the most important factors of health interventions. Fortunately, in the last decade, computer science research has progressed in the design of simulated human characters with anthropomorphic communicative abilities.
Virtual characters interact using humans' innate communication modalities, such as facial expressions, body language, speech, and natural language understanding. By advancing research in Artificial Intelligence (AI), we can improve the ability of artificial agents to help us solve CBI problems. To facilitate successful communication and social interaction between artificial agents and human partners, it is essential that aspects of human social behavior, especially empathy and rapport, be considered when designing human-computer interfaces. Hence, the goal of the present dissertation is to provide a computational model of rapport to enhance an artificial agent's social behavior, and to provide an experimental tool for the psychological theories shaping the model. Parts of this thesis have already been published in [LYL+12, AYL12, AL13, ALYR13, LAYR13, YALR13, ALY14].