11,067 research outputs found

    Cognitive Designers Activity Study, Formalization, Modelling, and Computation

    Get PDF
    This study aims to explore how designers mentally categorise design information during early sketching in the generative phase. An action research approach is particularly appropriate for identifying the various kinds of design information and the cognitive operations involved in this phase. Thus, we conducted a protocol study with eight product designers based on a descriptive model derived from cognitive psychological memory theories. Subsequent protocol analysis yielded a cognitive model depicting how designers mentally categorise and process design information. This cognitive model comprises a structure for design information (high, middle, and low levels) and linked cognitive operations (association and transformation). Finally, the paper concludes by discussing directions for future research on the development of new computational tools for designers.
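
For the kind of computational design tools the abstract points to, the model's elements could be represented roughly as below; this is a minimal, hypothetical Python sketch (the level semantics, class, and function names are illustrative, not taken from the study).

```python
# Hypothetical sketch of the cognitive model's elements: three levels of
# design information and the two linked operations (association and
# transformation). Names and level semantics are illustrative only.
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    HIGH = "high"
    MIDDLE = "middle"
    LOW = "low"

@dataclass(frozen=True)
class DesignInformation:
    content: str
    level: Level

def associate(a: DesignInformation, b: DesignInformation) -> tuple:
    """Record a link between two pieces of design information."""
    return (a, b)

def transform(info: DesignInformation, target: Level) -> DesignInformation:
    """Re-express a piece of design information at another level."""
    return DesignInformation(info.content, target)

# Example: linking a high-level idea to its low-level re-expression.
idea = DesignInformation("portable", Level.HIGH)
detail = transform(idea, Level.LOW)
link = associate(idea, detail)
```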

    Text-to-Video: Image Semantics and NLP

    Get PDF
    When aiming to automatically translate an arbitrary text into a visual story, the main challenge is to find a semantically close visual representation whose displayed meaning remains the same as in the given text. Besides, the appearance of an image itself largely influences how its meaningful information is conveyed to an observer. This thesis demonstrates that investigating both image semantics and the semantic relatedness between visual and textual sources enables us to tackle the challenging semantic gap and to find a semantically close translation from natural language to a corresponding visual representation. In recent years, social networking has attracted great interest, leading to an enormous and still increasing amount of data available online. Photo sharing sites like Flickr allow users to associate textual information with their uploaded imagery. This thesis therefore exploits this huge source of user-generated data, which provides initial links between images, words, and other meaningful data. To approach visual semantics, this work presents various methods to analyze the visual structure and appearance of images in terms of meaningful similarities, aesthetic appeal, and emotional effect on an observer. In detail, our GPU-based approach efficiently finds visual similarities between images in large datasets across visual domains and identifies various meanings for ambiguous words by exploring similarity in online search results. Further, we investigate the highly subjective aesthetic appeal of images and use deep learning to learn aesthetic rankings directly from a broad diversity of user reactions in online social behavior. To gain even deeper insight into the influence of visual appearance on an observer, we explore how simple image processing is capable of actually changing the emotional perception, and we derive a simple but effective image filter. To identify meaningful connections between written text and visual representations, we employ methods from Natural Language Processing (NLP). Extensive textual processing allows us to create semantically relevant illustrations for simple text elements as well as complete storylines. More precisely, we present an approach that resolves dependencies in textual descriptions to arrange 3D models correctly. Further, we develop a method that finds semantically relevant illustrations for texts of different types based on a novel hierarchical querying algorithm. Finally, we present an optimization-based framework that is capable of generating not only semantically relevant but also visually coherent picture stories in different styles.
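
As a rough illustration of the GPU-based visual similarity search mentioned above, the following minimal Python sketch finds nearest neighbours over precomputed image feature vectors; the feature source, dimensionality, and `top_k` parameter are assumptions, not the thesis's actual pipeline.

```python
# Hypothetical sketch of GPU-accelerated visual similarity search over
# precomputed image feature vectors (not the thesis's actual implementation).
import torch
import torch.nn.functional as F

def nearest_images(features: torch.Tensor, query_idx: int, top_k: int = 5):
    """Return indices of the top_k images most similar to the query image.

    features: (N, D) tensor of per-image feature vectors.
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    feats = F.normalize(features.to(device), dim=1)   # unit-length rows
    sims = feats @ feats[query_idx]                   # cosine similarities, shape (N,)
    sims[query_idx] = -1.0                            # exclude the query itself
    return torch.topk(sims, k=top_k).indices.cpu().tolist()

# Example: 10,000 images with 512-dimensional features (random stand-ins here).
features = torch.randn(10_000, 512)
print(nearest_images(features, query_idx=0))
```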

    Understanding citizen science and environmental monitoring: final report on behalf of UK Environmental Observation Framework

    Get PDF
    Citizen science can broadly be defined as the involvement of volunteers in science. Over the past decade there has been a rapid increase in the number of citizen science initiatives. The breadth of environment-based citizen science is immense. Citizen scientists have surveyed for and monitored a broad range of taxa, and have also contributed data on weather and habitats, reflecting an increase in engagement with a diverse range of observational science. Citizen science has taken many varied approaches, from citizen-led (co-created) projects with local community groups to, more commonly, scientist-led mass participation initiatives that are open to all sectors of society. Citizen science provides an indispensable means of combining environmental research with environmental education and wildlife recording. Here we provide a synthesis of extant citizen science projects using a novel cross-cutting approach to objectively assess understanding of citizen science and environmental monitoring, including:
    1. A brief overview of knowledge on the motivations of volunteers.
    2. A semi-systematic review of environmental citizen science projects in order to understand the variety of extant citizen science projects.
    3. Collation of detailed case studies on a selection of projects to complement the semi-systematic review.
    4. Structured interviews with users of citizen science and environmental monitoring data, focussing on policy, in order to understand more fully how citizen science can fit into policy needs.
    5. A review of technology in citizen science and an exploration of future opportunities.

    The value of personalised consumer product design facilitated through additive manufacturing technology

    Get PDF
    This research attempted to discover how Additive Manufacturing (AM) can best be used to increase the value of personalised consumer products and how designers can be assisted in finding an effective way to facilitate value addition within personalisable product designs. AM has become an enabler for end-users to become directly involved in product personalisation through the manipulation of three-dimensional (3D) designs of the product using easy-to-use design toolkits. In this way, end-users are able to fabricate their own personalised designs using various types of AM systems. Personalisation can increase the value of a product because it delivers a closer fit to user preferences. The research began with a literature review covering product personalisation, additive manufacturing, and consumer value in product design. The review revealed that the lack of methods and tools enabling designers to exploit AM has become a fundamental challenge in fully realising the advantages of the technology. Consequently, the question remained as to whether industrial designers are able to identify the design characteristics that can potentially add value to a product, particularly when the product is being personalised by end-users using AM-enabled design tools and systems. A new value taxonomy was developed to capture the relevant value attributes of personalised AM products. The taxonomy comprises two first-level value types, product value and experiential value, which are further expanded into six second-level value components: functional value, personal-expressive value, sensory value, unique value, co-design value, and hedonic value. The research employed a survey to assess end-users' value reflections on personalised features, measuring their willingness to pay (WTP) and their intention to purchase a product with personalised features. Thereafter, an experimental study was performed to measure end-users' opinions on the value of 3D-printed personalised products based on the two value types: product value and experiential value. Based on the findings, a formal added-value identification method was developed to act as a design aid that assists designers in preparing a personalisable product design embodying value-adding personalisation features. The method was translated into a beta-test, paper-based design workbook known as the V+APP Design Method: Design Workbook, and the design aid was validated by expert designers. In conclusion, this research indicates that the added-value identification method shows promise as a practical and effective aid for expert designers in identifying the potential value-adding personalisation features within personalisable AM products, ensuring they are able to fully exploit the unique, value-adding design characteristics enabled by AM. Finally, the limitations of the research are explained and recommendations made for future work in this area.
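
A minimal, hypothetical sketch of how the two-level value taxonomy described above might be held in code follows; the grouping of second-level components under the two first-level types is not spelled out in the abstract, so it is left open, and all names, ratings, and the example feature are illustrative rather than taken from the V+APP workbook.

```python
# Illustrative representation of the two-level value taxonomy described above;
# the mapping of second-level components onto the two first-level types is not
# given in the abstract, so it is left open, and all names are hypothetical.
from dataclasses import dataclass

FIRST_LEVEL = ("product value", "experiential value")
SECOND_LEVEL = (
    "functional value", "personal-expressive value", "sensory value",
    "unique value", "co-design value", "hedonic value",
)

@dataclass
class PersonalisationFeature:
    """A candidate personalisation feature rated against the value components."""
    name: str
    ratings: dict  # second-level component name -> rating, e.g. on a 1-5 scale

    def total_value(self) -> int:
        # Sum ratings over the six second-level components (missing ones count as 0).
        return sum(self.ratings.get(component, 0) for component in SECOND_LEVEL)

# Example: a hypothetical personalisation feature and its overall value score.
engraved_grip = PersonalisationFeature(
    name="user-engraved grip texture",
    ratings={"personal-expressive value": 5, "unique value": 4, "functional value": 3},
)
print(engraved_grip.total_value())  # 12
```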

    The CAID Information System

    Get PDF
    None provided

    Attributes of narrative game aesthetics for perceived cultural learning

    Get PDF
    Previous research has mostly been concerned with non-holistic game aesthetics for learning across various interactive media platforms. There is a lack of studies on the attributes of narrative game aesthetics that may contribute to perceived cultural learning. Therefore, this study aims to propose a conceptual model of narrative game aesthetics for perceived cultural learning. Three specific objectives were formulated: (i) to determine game aesthetics that contribute to perceived cultural learning in narrative games, (ii) to develop a narrative game based on the determined game aesthetics, and (iii) to produce empirical evidence on the contribution of game aesthetics towards perceived cultural learning. The research methodology comprises three main phases: conceptual model development, prototype development, and user evaluation. In the first phase, the conceptual model was developed based on previous literature and reviewed by six experts. In the second phase, the prototype was developed according to the conceptual model. Finally, user evaluation was carried out through a quasi-experiment involving 43 participants. Data analysis was conducted using descriptive analysis, correlation analysis, and observation. Findings indicate that six out of 10 attributes, namely image and graphic, layout, shape and form, texture, voice, and music, are significantly correlated with perceived cultural learning. The observation results also indicate that these attributes can amplify the game experience for perceived cultural learning. In a nutshell, this study has identified attributes of narrative game aesthetics for perceived cultural learning and provides empirical evidence of their contribution. The outcome of this study provides guidelines for narrative game designers and developers who are interested in incorporating cultural learning into their games.
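
The correlation analysis reported above could be run along the following lines, assuming one row of ratings per participant; the column names, file name, and significance threshold are illustrative, not the study's actual instrument.

```python
# Hypothetical sketch of the attribute-vs-learning correlation analysis
# (file name, column names, and threshold are illustrative only).
import pandas as pd
from scipy.stats import pearsonr

# One row per participant: ratings for each aesthetic attribute plus a
# perceived cultural learning score (e.g. on a 5-point scale).
df = pd.read_csv("narrative_game_ratings.csv")   # hypothetical file
attributes = ["image_and_graphic", "layout", "shape_and_form",
              "texture", "voice", "music"]

for attr in attributes:
    r, p = pearsonr(df[attr], df["perceived_cultural_learning"])
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{attr}: r={r:.2f}, p={p:.3f} ({flag})")
```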

    The use of images and descriptive words for the development of an image database for product designers

    Full text link
    This research aims to understand the role images currently play within the design process, in order to develop a classification of image types and reference keywords for constructing an electronic image database for professional use in product design. Images play an important role in the design process, both in defining the context for designs and in informing the creation of individual designs. They are also used to communicate with clients, to understand consumers, to help express the themes of a project, to understand related environments, and to search for inspiration or functional solutions. Designers usually have their own collections of images; however, for each project they still spend a significant amount of time searching for images, either within their own collection or for new images. This study is based on the assumption that there is a structure that can show the relationship between an image and the information it conveys, and that this structure can be used to develop the database. A product-image database will enable designers to consult images more easily and will also facilitate the communication of visual ideas among designers or between designers and their clients, thus augmenting its potential value in the professional design process. The value of an image may also be enhanced by applying its linguistic associations through descriptions and keywords that identify and interpret its content. Through a series of interviews and workshops, and by examining relevant issues such as design methods, linguistic theory, and the psychology of perception, a prototype database system was developed. It is based on three information divisions, SPECIFICATION, CHARACTERISTIC, and EMOTION, which together model the information an image conveys. The database prototype was tested and evaluated by groups of students and professional designers. The results showed that users understood the concept and workings of the database and appreciated its value. They also indicated that the CHARACTERISTIC division was the most valuable, as it allows users to record images through their recollection of feelings.
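
A minimal sketch of an image record organised around the three information divisions named above (SPECIFICATION, CHARACTERISTIC, EMOTION) might look like the following; the SQLite tables, column names, and sample data are hypothetical, not the prototype's actual schema.

```python
# Illustrative SQLite schema for an image record organised by the three
# information divisions named above; names and data are hypothetical.
import sqlite3

conn = sqlite3.connect("product_images.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS image (
    id INTEGER PRIMARY KEY,
    file_path TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS keyword (
    image_id INTEGER REFERENCES image(id),
    division TEXT CHECK (division IN ('SPECIFICATION', 'CHARACTERISTIC', 'EMOTION')),
    term TEXT NOT NULL
);
""")

# Tag an image and retrieve it by an emotional keyword.
conn.execute("INSERT INTO image (id, file_path) VALUES (1, 'kettle_01.jpg')")
conn.executemany(
    "INSERT INTO keyword (image_id, division, term) VALUES (?, ?, ?)",
    [(1, "SPECIFICATION", "kettle"),
     (1, "CHARACTERISTIC", "matte finish"),
     (1, "EMOTION", "calm")],
)
rows = conn.execute(
    "SELECT i.file_path FROM image i JOIN keyword k ON k.image_id = i.id "
    "WHERE k.division = 'EMOTION' AND k.term = 'calm'"
).fetchall()
print(rows)  # [('kettle_01.jpg',)]
conn.close()
```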

    Management consulting.

    Get PDF
    Including a lengthy, comprehensive introduction, this important collection brings together some of the most influential papers that have contributed to our understanding of management consultancy work. The two-volume set encompasses the breadth of conceptual and empirical perspectives and explores those key ideas that have helped to advance our knowledge of this intriguing area. The volumes are divided into a series of thematic sections, affording the reader easy access to a great resource of information. Professors Clark and Avakian have written an original introduction which provides a comprehensive overview of the literature

    Development of an Evaluation Model for Social AI Personal Assistants in the Early Stage of Development

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Industrial Engineering, College of Engineering, 2022.2. Advisor: Myung Hwan Yun.
    This dissertation proposes a user evaluation model for assessing social AI personal assistants in the early stage of product development. Owing to the rapid development of personal devices, the data they generate are growing explosively, and various personal AI services and products using these data are being launched. Compared to the interest in AI personal assistant products, however, their market is still immature. In this situation, it is important to understand consumer expectations and perceptions in depth and to develop products that satisfy them, so that such products can spread quickly and be readily accepted by general consumers. Accordingly, this dissertation proposes and validates a user evaluation model that can be used in the early stage of product development.
    Before the methodology is proposed, Chapter 2 examines the main characteristics of social AI personal assistants, the importance of user evaluation in the early stage of product development, and the limitations of existing user evaluation models. Although various technology acceptance models and evaluation models for social AI personal assistant products have been proposed, evaluation models applicable in the initial stage of product development remain scarce. Moreover, commonly used measures for assessing hedonic value were found to be far fewer than those for utilitarian value. These observations served as the starting points of this dissertation. In Chapter 3, the evaluation measures used in previous studies of social AI personal assistants were collected and carefully reviewed. A systematic review of 40 studies examined the measures used in the past and the limitations of the related research. It found that, because developing a prototype for evaluation is not easy, studies largely relied on products that had already been commercialized. In addition, all evaluation items used in previous studies were collected and used as the basis for the evaluation model proposed later. The analysis showed that, given the purpose of a social AI personal assistant, its role in supporting the user emotionally through social interaction is important, yet the commonly used evaluation measures related to hedonic value were still insufficient.
    In Chapter 4, evaluation measures usable in the initial stage of product development for social AI personal assistants were selected. The selected measures were used to evaluate three types of social robots, and relationships among the evaluation factors were derived from this evaluation. A process was proposed for gathering diverse opinions about social robots and deriving evaluation items, and a case study was conducted in which a total of 230 people evaluated three social robot concept images using the finally selected items. The results show that consumers' attitudes toward the products were formed through a utilitarian dimension and a hedonic dimension. In addition, positive relationships were found between ease of use and utility within the utilitarian dimension, and among aesthetic pleasure, attractiveness of personality, and affective value within the hedonic dimension. Moreover, the evaluation model derived in this study showed superior explanatory power compared to previously proposed technology acceptance models.
    In Chapter 5, the model was validated again by applying the evaluation measures and the relationships among evaluation factors derived in Chapter 4 to other products. One hundred UX experts in the field of social AI personal assistants and 100 frequent users of voice assistant services watched two concept videos of a voice assistant service designed to help users during mobile phone onboarding and evaluated the concepts. Because no significant difference was found between the UX experts and the real users, structural equation modeling was conducted on the combined data from both groups. The results were similar to those of Chapter 4, suggesting that the model can be generalized to social AI personal assistant products and applied in future research.
    This dissertation thus proposes evaluation measures, and relationships among evaluation factors, that can be applied when conducting user evaluation in the initial stage of social AI personal assistant development, and it validates them through case studies using social AI personal assistant products and services. With these findings, researchers who need to conduct user evaluation to clarify product concepts in the early stages of product development should be able to apply the evaluation measures effectively. The significance of this dissertation will become clearer if further research compares finished social AI personal assistant products with the video-type stimuli used in the early stage of development.
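
As a rough illustration of checking the reported positive relationships among evaluation factors, the sketch below computes within-dimension correlations from per-respondent factor scores; the file and column names are hypothetical, and simple pandas correlations stand in for the structural equation modeling actually used in the dissertation.

```python
# Hypothetical check of the reported positive relationships among evaluation
# factors; pandas correlations stand in for the dissertation's structural
# equation modeling, and file/column names are illustrative only.
import pandas as pd

# One row per respondent, one column per evaluation factor score.
scores = pd.read_csv("concept_evaluation_scores.csv")  # hypothetical file

utilitarian = ["ease_of_use", "utility"]
hedonic = ["aesthetic_pleasure", "attractiveness_of_personality", "affective_value"]

print("Utilitarian dimension:\n", scores[utilitarian].corr(), "\n")
print("Hedonic dimension:\n", scores[hedonic].corr())
```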
    • โ€ฆ