10 research outputs found

    Personality in Healthcare Human Robot Interaction (H-HRI): A Literature Review and Brief Critique

    Full text link
Robots are becoming an important way to deliver health care, and personality is vital to understanding their effectiveness. Despite this, there is no systematic, overarching understanding of personality in healthcare human-robot interaction (H-HRI). To address this, the authors conducted a systematic literature review that identified 18 studies on personality in H-HRI. This paper presents the results of that review and derives insights regarding the methodologies, outcomes, and samples utilized. The authors discuss findings across this literature and identify several gaps worthy of attention. Overall, this paper is an important starting point for understanding personality in H-HRI. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/156252/1/Esterwood and Robert 2020.pdf

    Is Silent External Human–Machine Interface (eHMI) Enough? A Passenger-Centric Study on Effective eHMI for Autonomous Personal Mobility Vehicles in the Field

    Get PDF
An autonomous personal mobility vehicle (APMV) is a miniaturized autonomous vehicle designed to provide short-distance mobility to everyone. Because of its open design, an APMV's passengers are exposed to the communication between the external human-machine interface (eHMI) on the APMV and pedestrians. Effective eHMI designs therefore need to consider the potential impact of APMV-pedestrian interactions on passengers' subjective feelings. This study examined three eHMI designs from the perspective of APMV passengers: (1) a graphical user interface (GUI)-based eHMI with text messages (eHMI-T), (2) a multimodal user interface (MUI)-based eHMI with a neutral voice (eHMI-NV), and (3) an MUI-based eHMI with an affective voice (eHMI-AV). In a riding field experiment (N = 24), eHMI-T made passengers feel awkward during the "silent time" when it conveyed information exclusively to pedestrians, not passengers. The MUI-based eHMIs with voice cues showed advantages, with eHMI-NV excelling in pragmatic quality and eHMI-AV in hedonic quality. The results also highlight the importance of considering passengers' personalities and genders in APMV eHMI design.
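
To make the three-condition contrast concrete, here is a minimal, hypothetical Python sketch of how per-passenger ratings from such a field study could be organized and summarized. Only the condition names (eHMI-T, eHMI-NV, eHMI-AV) and the pragmatic/hedonic distinction come from the abstract; the 1-7 scale and all score values are invented placeholders, not data from the paper.

    from statistics import mean

    # The study's three eHMI conditions: text-only GUI, neutral voice, affective voice
    CONDITIONS = ("eHMI-T", "eHMI-NV", "eHMI-AV")

    # ratings[condition] -> list of (pragmatic, hedonic) scores per participant.
    # Hypothetical placeholder values on a 1-7 scale, not results from the paper.
    ratings = {
        "eHMI-T":  [(4.0, 2.8), (3.6, 3.1), (4.2, 2.9)],
        "eHMI-NV": [(5.4, 4.2), (5.1, 4.0), (5.3, 4.4)],
        "eHMI-AV": [(4.9, 5.3), (5.0, 5.1), (4.7, 5.5)],
    }

    for cond in CONDITIONS:
        pragmatic = mean(p for p, _ in ratings[cond])  # task-oriented quality
        hedonic = mean(h for _, h in ratings[cond])    # experience-oriented quality
        print(f"{cond}: mean pragmatic = {pragmatic:.2f}, mean hedonic = {hedonic:.2f}")

Under the abstract's findings, one would expect eHMI-NV to lead on the pragmatic column and eHMI-AV on the hedonic column.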

    A Review of Personality in Human Robot Interactions

    Full text link
Personality has been identified as a vital factor in understanding the quality of human-robot interactions. Despite this, research in this area remains fragmented and lacks a coherent framework, which makes it difficult to understand what we know and to identify what we do not. As a result, our knowledge of personality in human-robot interactions has not kept pace with the deployment of robots in organizations or in our broader society. To address this shortcoming, this paper reviews 83 articles and 84 separate studies to assess the current state of human-robot personality research. This review: (1) highlights major thematic research areas, (2) identifies gaps in the literature, (3) derives and presents major conclusions from the literature, and (4) offers guidance for future research.

    Development of an Evaluation Model for Social AI Personal Assistants in the Early Stage of Development

    Get PDF
Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School: Department of Industrial Engineering, College of Engineering, February 2022. Advisor: Myung Hwan Yun. This dissertation proposes a user evaluation model for social AI personal assistants in the early stage of product development. Due to the rapid development of personal devices, the data they generate are increasing explosively, and various personal AI services and products using these data are being launched. However, compared to the interest in AI personal assistant products, the market is still immature. In this situation, it is important to deeply understand consumer expectations and perceptions and to develop products that satisfy them, so that products can spread quickly and general consumers can readily accept them. Accordingly, this dissertation proposes and validates a user evaluation model that can be used in the early stage of product development. Before proposing the methodology, Chapter 2 investigates the main characteristics of social AI personal assistants, the importance of user evaluation in the early stage of product development, and the limitations of existing user evaluation models. Although various technology acceptance models and evaluation models for social AI personal assistant products have been proposed, evaluation models applicable in the initial stage of product development remain scarce. Moreover, the commonly used evaluation measures for hedonic value turn out to be far fewer than those for utilitarian value. These observations are the starting points of this dissertation. Chapter 3 collects and carefully reviews the evaluation measures used in previous studies of social AI personal assistants. Through a systematic review of 40 studies, the measures used to date and the limitations of the related research are identified. One finding is that, because developing a prototype for evaluation is not easy, prior studies have mostly relied on products that were already commercialized. All evaluation items used in previous studies were collected and served as the basis for the evaluation model proposed later. The analysis also shows that, although a social AI personal assistant's role of supporting the user emotionally through social interaction is central to its purpose, commonly used evaluation measures related to hedonic value are still insufficient. Chapter 4 selects evaluation measures usable in the initial stage of product development, applies them to three types of social robots, and induces the relationships among the evaluation factors from this evaluation. A process is proposed for gathering diverse opinions about social robots and deriving evaluation items, and a case study was conducted in which 230 people evaluated concept images of three social robots using the finally selected items. The results show that consumers' attitudes toward products are built on a utilitarian dimension and a hedonic dimension, with positive relationships between ease of use and utility within the utilitarian dimension, and among aesthetic pleasure, attractiveness of personality, and affective value within the hedonic dimension. 
Moreover, the evaluation model derived from this study shows superior explanatory power compared with previously proposed technology acceptance models. Chapter 5 validates the model again by applying the evaluation measures, and the relationships among evaluation factors derived in Chapter 4, to other products. 100 UX experts with expertise in the field of social AI personal assistants and 100 users who frequently use voice assistant services watched two concept videos of a voice assistant service designed to help users during mobile phone onboarding, and evaluated the concepts. Because the evaluation results showed no significant difference between the UX expert and real user groups, a structural equation model analysis was conducted on the combined data from both groups. The results are similar to those of Chapter 4, suggesting that the model can be generalized to social AI personal assistant products and applied in future research. In sum, this dissertation proposes evaluation measures, and relationships among evaluation factors, that can be applied when conducting user evaluations in the initial stage of social AI personal assistant development, and it validates them through case studies using social AI personal assistant products and services. These findings should allow researchers who need to conduct user evaluations to clarify product concepts in the early stages of product development to apply the evaluation measures effectively. The significance of this dissertation would become still clearer if further research compared finished social AI personal assistant products with the video-type stimuli used in the early stage of development.
3์žฅ์—์„œ๋Š” AI personal assistant ๊ด€๋ จ ๊ธฐ์กด ์—ฐ๊ตฌ์—์„œ ํ™œ์šฉ๋œ ํ‰๊ฐ€ ํ•ญ๋ชฉ์„ ๊ฒ€ํ† ํ•˜์˜€๋‹ค. ์ด 40๊ฐœ์˜ ์—ฐ๊ตฌ๋ฅผ ๋ฆฌ๋ทฐํ•˜์—ฌ, ๊ธฐ์กด์— ํ™œ์šฉ๋˜๊ณ  ์žˆ๋Š” ํ‰๊ฐ€ ํ•ญ๋ชฉ์˜ ์ข…๋ฅ˜ ๋ฐ ํ•œ๊ณ„์ ์„ ์•Œ์•„๋ณด์•˜๋‹ค. ๊ทธ ๊ฒฐ๊ณผ, ํ‰๊ฐ€๋ฅผ ์œ„ํ•œ ํ”„๋กœํ† ํƒ€์ž… ๊ฐœ๋ฐœ์ด ์‰ฝ์ง€ ์•Š๊ธฐ์— ์ด๋ฏธ ์ƒ์šฉํ™”๋œ ์ œํ’ˆ๋“ค์„ ์ตœ๋Œ€ํ•œ ํ™œ์šฉํ•˜๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์—ˆ์œผ๋ฉฐ, ์ œํ’ˆ ์ „๋ฐ˜์„ ํ‰๊ฐ€ํ•œ ์‚ฌ๋ก€๋Š” ๋ถ€์กฑํ•จ์„ ์•Œ ์ˆ˜ ์žˆ์—ˆ๋‹ค. ๋˜ํ•œ ๊ธฐ์กด ์—ฐ๊ตฌ๋“ค์ด ์‚ฌ์šฉํ•œ ํ‰๊ฐ€ ํ•ญ๋ชฉ์„ ๋ชจ๋‘ ์ˆ˜์ง‘ ๋ฐ ์ •๋ฆฌํ•˜์—ฌ ์ดํ›„ ์ œ์•ˆํ•  ํ‰๊ฐ€ ๋ชจ๋ธ์˜ ๊ธฐ๋ฐ˜ ์ž๋ฃŒ๋กœ ํ™œ์šฉํ•˜์˜€๋‹ค. ๋ถ„์„ ๊ฒฐ๊ณผ, social AI personal assistant์˜ ๋ชฉ์ ์„ ๊ณ ๋ คํ•ด๋ณด์•˜์„ ๋•Œ, ์‚ฌ์šฉ์ž์™€์˜ ์‚ฌํšŒ์  ์ธํ„ฐ๋ž™์…˜์„ ํ†ตํ•ด ์‚ฌ์šฉ์ž์˜ ๊ฐ์ •์ ์ธ ๋ฉด์„ ์ฑ„์›Œ์ฃผ๋Š” ์—ญํ• ์ด ์ค‘์š”ํ•˜์ง€๋งŒ, ๊ณตํ†ต์ ์œผ๋กœ ํ™œ์šฉํ•˜๊ณ  ์žˆ๋Š” ๊ฐ์ •์  ๊ฐ€์น˜ ๊ด€๋ จ ํ‰๊ฐ€ ํ•ญ๋ชฉ์ด ๋ถ€์กฑํ•œ ๊ฒƒ์œผ๋กœ ๋‚˜ํƒ€๋‚ฌ๋‹ค. 4์žฅ์—์„œ๋Š” social AI personal assistant ์ œํ’ˆ ๊ฐœ๋ฐœ ์ดˆ๊ธฐ ๋‹จ๊ณ„์—์„œ ํ™œ์šฉ ๊ฐ€๋Šฅํ•œ ํ‰๊ฐ€ ํ•ญ๋ชฉ์„ ์ˆ˜์ง‘ ๋ฐ ์ œ์•ˆํ•˜๊ณ , ํ‰๊ฐ€ ํ•ญ๋ชฉ์„ ํ™œ์šฉํ•˜์—ฌ social robots์„ ํ‰๊ฐ€ํ•œ ๋’ค ์ด๋ฅผ ํ†ตํ•ด ํ‰๊ฐ€ ํ•ญ๋ชฉ ๊ฐ„์˜ ๊ด€๊ณ„๋ฅผ ๋„์ถœํ•˜์˜€๋‹ค. Social robots ๊ด€๋ จ ์˜๊ฒฌ์„ ๋‹ค์–‘ํ•˜๊ฒŒ ์ฒญ์ทจํ•˜๊ณ  ํ‰๊ฐ€ ํ•ญ๋ชฉ์„ ๋„์ถœํ•˜๋Š” ํ”„๋กœ์„ธ์Šค๋ฅผ ์ œ์•ˆํ•˜์˜€์œผ๋ฉฐ, ๋ณธ ํ”„๋กœ์„ธ์Šค๋ฅผ ํ†ตํ•ด ์ตœ์ข… ์„ ์ •๋œ ํ‰๊ฐ€ ํ•ญ๋ชฉ์„ ์ด์šฉํ•˜์—ฌ, ์ด 230๋ช…์ด ์„ธ ๊ฐ€์ง€ social robots ์ปจ์…‰ ์˜์ƒ์„ ํ‰๊ฐ€ํ•˜๋Š” ์‚ฌ๋ก€ ์—ฐ๊ตฌ๋ฅผ ์ง„ํ–‰ํ•˜์˜€๋‹ค. ํ‰๊ฐ€ ๊ฒฐ๊ณผ, ์ œํ’ˆ์— ๋Œ€ํ•œ ์†Œ๋น„์ž ํƒœ๋„๋Š” Utilitarian dimension๊ณผ Hedonic dimension์„ ํ†ตํ•ด ํ˜•์„ฑ๋˜์—ˆ๊ณ , Utilitarian dimension ๋‚ด ์‚ฌ์šฉ์„ฑ ๋ฐ ์ œํ’ˆ ํšจ์šฉ์„ฑ, Hedonic dimension์— ํฌํ•จ๋˜๋Š” ์‹ฌ๋ฏธ์  ๋งŒ์กฑ๋„, ์„ฑ๊ฒฉ์˜ ๋งค๋ ฅ๋„, ๊ฐ์„ฑ์  ๊ฐ€์น˜ ๊ฐ๊ฐ์€ ์„œ๋กœ ๊ธ์ •์ ์ธ ์ƒ๊ด€๊ด€๊ณ„๋ฅผ ์ง€๋‹˜์„ ์•Œ ์ˆ˜ ์žˆ์—ˆ๋‹ค. ๋˜ํ•œ ๊ธฐ์กด์— ์ œ์•ˆ๋œ ๊ธฐ์ˆ  ์ˆ˜์šฉ ๋ชจ๋ธ ๋Œ€๋น„ ๋ณธ ์—ฐ๊ตฌ์—์„œ ๋„์ถœํ•œ ํ‰๊ฐ€ ๋ชจ๋ธ์ด ์šฐ์ˆ˜ํ•œ ์„ค๋ช…๋ ฅ์„ ๋ณด์ž„์„ ํ™•์ธํ•˜์˜€๋‹ค. 5์žฅ์—์„œ๋Š” 4์žฅ์—์„œ ๋„์ถœ๋œ ํ‰๊ฐ€ ๋ชจ๋ธ์„ ํƒ€ ์ œํ’ˆ์— ์ ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋‹ค์‹œ ํ•œ๋ฒˆ ๊ฒ€์ฆํ•˜์˜€๋‹ค. ํ•ด๋‹น ๋ถ„์•ผ์— ์ „๋ฌธ์„ฑ์„ ์ง€๋‹Œ UX ์ „๋ฌธ๊ฐ€ 100๋ช… ๋ฐ ์Œ์„ฑ ๋น„์„œ ์„œ๋น„์Šค๋ฅผ ์‹ค์ œ ์‚ฌ์šฉํ•˜๋Š” ์‹ค์‚ฌ์šฉ์ž 100๋ช…์ด, ํœด๋Œ€ํฐ ์˜จ๋ณด๋”ฉ ์ƒํ™ฉ์—์„œ ์‚ฌ์šฉ์ž๋ฅผ ๋„์™€์ฃผ๋Š” ์Œ์„ฑ ๋น„์„œ ์„œ๋น„์Šค์˜ ์ปจ์…‰ ์˜์ƒ ๋‘ ๊ฐ€์ง€๋ฅผ ๋ณด๊ณ  ์ปจ์…‰์— ๋Œ€ํ•œ ํ‰๊ฐ€๋ฅผ ์ง„ํ–‰ํ•˜์˜€๋‹ค. ํ‰๊ฐ€ ๊ฒฐ๊ณผ UX ์ „๋ฌธ๊ฐ€์™€ ์‹ค์‚ฌ์šฉ์ž ๊ทธ๋ฃน ๊ฐ„์—๋Š” ํ‰๊ฐ€ ๊ฒฐ๊ณผ์— ์œ ์˜๋ฏธํ•œ ์ฐจ์ด๋ฅผ ๋ณด์ด์ง€ ์•Š์•˜๊ธฐ ๋•Œ๋ฌธ์—, UX ์ „๋ฌธ๊ฐ€์™€ ์‹ค์‚ฌ์šฉ์ž ๊ทธ๋ฃน์—์„œ ์–ป์€ ๋ฐ์ดํ„ฐ ์ „์ฒด๋ฅผ ํ™œ์šฉํ•˜์—ฌ ๊ตฌ์กฐ ๋ฐฉ์ •์‹ ๋ชจ๋ธ ๋ถ„์„์„ ์ง„ํ–‰ํ•˜์˜€๋‹ค. ๊ทธ ๊ฒฐ๊ณผ 5์žฅ๊ณผ ์œ ์‚ฌํ•œ ์ˆ˜์ค€์˜ ๊ฒฐ๊ณผ๋ฅผ ์–ป์—ˆ๊ณ , ์ถ”ํ›„ ํ•ด๋‹น ๋ชจ๋ธ์„ social AI personal assistant ์ œํ’ˆ์— ์ผ๋ฐ˜ํ™”ํ•˜์—ฌ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์œผ๋กœ ํŒ๋‹จํ•˜์˜€๋‹ค. ๋ณธ ๋…ผ๋ฌธ์€ social AI personal assistant ๊ด€๋ จ ์ œํ’ˆ ๋ฐ ์„œ๋น„์Šค์˜ ๊ฐœ๋ฐœ ์ดˆ๊ธฐ ๋‹จ๊ณ„์—์„œ ์‚ฌ์šฉ์ž ํ‰๊ฐ€๋ฅผ ์ง„ํ–‰ํ•  ๋•Œ ํ™œ์šฉ ๊ฐ€๋Šฅํ•œ ํ‰๊ฐ€ ํ•ญ๋ชฉ ๋ฐ ํ‰๊ฐ€ ํ•ญ๋ชฉ ๊ฐ„์˜ ๊ด€๊ณ„๋ฅผ ๋„์ถœํ•˜์˜€๋‹ค. ๋˜ํ•œ ์ด๋ฅผ ๊ฒ€์ฆํ•˜๊ธฐ ์œ„ํ•˜์—ฌ social AI personal assistant ์ œํ’ˆ ๋ฐ ์„œ๋น„์Šค๋ฅผ ํ™œ์šฉํ•œ ์‚ฌ๋ก€์—ฐ๊ตฌ๋ฅผ ์ง„ํ–‰ํ•˜์˜€๋‹ค. ๋ณธ ์—ฐ๊ตฌ ๊ฒฐ๊ณผ๋Š” ์ถ”ํ›„ ์ œํ’ˆ ๊ฐœ๋ฐœ ์ดˆ๊ธฐ ๋‹จ๊ณ„์—์„œ ์ œํ’ˆ์˜ ์ปจ์…‰์„ ๋ช…ํ™•ํžˆ ํ•˜๊ธฐ ์œ„ํ•œ ์‚ฌ์šฉ์ž ํ‰๊ฐ€๋ฅผ ์‹ค์‹œํ•ด์•ผ ํ•˜๋Š” ์—ฐ๊ตฌ์ง„์ด ํšจ์œจ์ ์œผ๋กœ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์œผ๋กœ ๊ธฐ๋Œ€๋œ๋‹ค. 
์ถ”ํ›„ ์ด ๋ถ€๋ถ„์˜ ๊ฒ€์ฆ์„ ์œ„ํ•ด, social AI personal assistants์˜ ์™„์ œํ’ˆ๊ณผ ๊ฐœ๋ฐœ ์ดˆ๊ธฐ ๋‹จ๊ณ„์˜ video type stimulus๋ฅผ ๋น„๊ตํ•˜๋Š” ์ถ”๊ฐ€ ์—ฐ๊ตฌ๊ฐ€ ์ด๋ฃจ์–ด์ง„๋‹ค๋ฉด ๋ณธ ์—ฐ๊ตฌ์˜ ์˜๋ฏธ๋ฅผ ๋ณด๋‹ค ๋ช…ํ™•ํ•˜๊ฒŒ ์ œ์‹œํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์ƒ๊ฐ๋œ๋‹ค.Chapter 1 Introduction 1 1.1 Background and motivation 1 1.1 Research objectives 5 1.2 Dissertation outline 7 Chapter 2 Literature review 9 2.1 Social AI personal assistant 9 2.2 User centered design process 13 2.3 Technology acceptance models 16 2.4 Evaluation measures for social AI personal assistant 22 2.5 Existing evaluation methodologies for social AI personal assistant 27 Chapter 3 Collection of existing evaluation measures for social AI personal assistants 40 3.1 Background 40 3.2 Methodology 43 3.3 Result 51 3.4 Discussion 60 Chapter 4 Development of an evaluation model for social AI personal assistants 63 4.1 Background 63 4.2 Methodology 66 4.2.1 Developing evaluation measures for social AI personal assistants 68 4.2.2 Conducting user evaluation for social robots 74 4.3 Result 77 4.3.1 Descriptive statistics 77 4.3.2 Hypothesis development and testing 80 4.3.3 Comparison with existing technology acceptance models 88 4.4 Discussion 93 Chapter 5 Verification of an evaluation model with voice assistant services 95 5.1 Background 95 5.2 Methodology 98 5.2.1 Design of evaluation questionnaires for voice assistant services 99 5.2.2 Validation of relationship among evaluation factors 103 5.3 Result 108 5.3.1 Descriptive statistics 108 5.3.2 Hypothesis development and testing 111 5.3.3 Comparison with existing technology acceptance models 118 5.4 Discussion 121 Chapter 6 Conclusion 124 6.1 Summary of this study 124 6.2 Contribution of this study 126 6.3 Limitation and future work 128 Bibliography 129 Appendix A. Evaluation measures for social AI personal assistant collected in Chapter 4 146 Appendix B. Questionnaires for evaluation of social robots 154 Appendix C. Questionnaires for evaluation of voice assistant service 166๋ฐ•

    Building Embodied Conversational Agents: Observations on human nonverbal behaviour as a resource for the development of artificial characters

    Get PDF
    "Wow this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10(1.1) After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface that was designed to allow all people, so also those who would not consider themselves to be experienced computer addicts, to interact with the computer. This was a major step forward in human-computer interaction, as from that point forward no complex programming skills were required anymore to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse to the desired color on a color palette. "Wow this is so cool!". This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice-control can be extremely handy, for instance when listening to music while showering. Again, the option to handle a computer with voice instructions turned out to be a significant optimization in human-computer interaction. From now on, computers could be instructed without the use of a screen, mouse or keyboard, and instead could operate successfully simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise to becoming something that was both natural and intuitive, and did not require any advanced computer background. Accordingly, while computers used to be machines that could only be operated by technically-oriented individuals, they had gradually changed into devices that are part of many peopleโ€™s household, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "antropomorphic" and try to mimic the way people interact in daily life, where indeed the voice is a universally used device that humans exploit in their exchanges with others. The question then arises whether it would be possible to go even one step further, where people, like in science-fiction movies, interact with avatars or humanoid robots, whereby users can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able to not only produce and understand messages transmitted auditorily through the voice, but also could rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but the type of such interactions is still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on how such future humanmachine interactions may look like. 
When we consider other products that have been created in history, it is sometimes striking to see that some of them have been inspired by things that can be observed in our environment, yet at the same time do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds do, yet the wings of an airplane do not make the typical movements a bird produces to fly. Moreover, an airplane has wheels, whereas a bird has legs. At the same time, the airplane has made it possible for humans to cover long distances in a fast and smooth manner that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, but can nonetheless be very beneficial and impactful for human beings. This dissertation centers on the practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation all share the same underlying question: how can parts of human behavior be captured such that computers can use them to become more human-like? Each study differs in method, perspective and specific questions, but all aim to provide insights and directions that help push forward the development of human-like computer behavior and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies.

    Toward Context-Aware, Affective, and Impactful Social Robots

    Get PDF