2,184 research outputs found

    CLiFF Notes: Research In Natural Language Processing at the University of Pennsylvania

    Get PDF
    The Computational Linguistics Feedback Forum (CLiFF) is a group of students and faculty who gather once a week to discuss the members' current research. As the word "feedback" suggests, the group's purpose is the sharing of ideas. The group also promotes interdisciplinary contacts between researchers who share an interest in Cognitive Science. There is no single theme describing the research in Natural Language Processing at Penn: there is work on CCG, tree-adjoining grammars, intonation, statistical methods, plan inference, instruction understanding, incremental interpretation, language acquisition, syntactic parsing, causal reasoning, free word order languages, ... and many other areas. With this in mind, rather than trying to summarize the varied work currently underway here at Penn, we suggest reading the following abstracts to see how the students and faculty themselves describe their work. Their abstracts illustrate the diversity of interests among the researchers, explain the areas of common interest, and describe some very interesting work in Cognitive Science. This report is a collection of abstracts from both faculty and graduate students in Computer Science, Psychology, and Linguistics. We pride ourselves on the close working relations between these groups, as we believe that the communication among the different departments and the ongoing inter-departmental research not only improve the quality of our work, but make much of that work possible.

    RenderMe-360: A Large Digital Asset Library and Benchmarks Towards High-fidelity Head Avatars

    Full text link
    Synthesizing high-fidelity head avatars is a central problem for computer vision and graphics. While head avatar synthesis algorithms have advanced rapidly, the best ones still face great obstacles in real-world scenarios. One of the vital causes is inadequate datasets: 1) current public datasets can only support researchers in exploring high-fidelity head avatars in one or two task directions; 2) these datasets usually contain digital head assets with limited data volume and a narrow distribution over different attributes. In this paper, we present RenderMe-360, a comprehensive 4D human head dataset to drive advances in head avatar research. It contains massive data assets, with 243+ million complete head frames and over 800k video sequences from 500 different identities captured by synchronized multi-view cameras at 30 FPS. It is a large-scale digital library for head avatars with three key attributes. 1) High fidelity: all subjects are captured by 60 synchronized, high-resolution 2K cameras in 360 degrees. 2) High diversity: the collected subjects vary in age, era, ethnicity, and culture, providing abundant materials with distinctive styles in appearance and geometry; moreover, each subject is asked to perform various motions, such as expressions and head rotations, which further extends the richness of the assets. 3) Rich annotations: we provide annotations at different granularities: camera parameters, matting, scans, 2D/3D facial landmarks, FLAME fitting, and text descriptions. Based on the dataset, we build a comprehensive benchmark for head avatar research, with 16 state-of-the-art methods evaluated on five main tasks: novel view synthesis, novel expression synthesis, hair rendering, hair editing, and talking head generation. Our experiments uncover the strengths and weaknesses of current methods. RenderMe-360 opens the door for future exploration of head avatars. Comment: Technical Report; Project Page: 36; GitHub Link: https://github.com/RenderMe-360/RenderMe-36
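    The annotation inventory above (camera parameters, matting, landmarks, FLAME fits, text descriptions) suggests a natural per-capture record layout. Below is a minimal, hypothetical Python sketch of how one might organize such multi-view head-capture assets for downstream tasks; all class and field names are assumptions for illustration, not the dataset's actual schema or API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hypothetical record layout for a multi-view head-capture dataset such as
# RenderMe-360. Names are illustrative assumptions, not the real schema.

@dataclass
class CameraCalibration:
    intrinsics: List[List[float]]   # 3x3 K matrix
    extrinsics: List[List[float]]   # 3x4 [R|t] world-to-camera transform

@dataclass
class FrameAnnotations:
    matting_path: Optional[str] = None                  # per-frame alpha matte
    landmarks_2d: Optional[List[List[float]]] = None    # N x 2 image-space points
    landmarks_3d: Optional[List[List[float]]] = None    # N x 3 world-space points
    flame_params_path: Optional[str] = None             # fitted FLAME coefficients

@dataclass
class CaptureSequence:
    subject_id: str
    motion_type: str                                    # e.g. "expression", "head_rotation"
    fps: int = 30
    cameras: Dict[str, CameraCalibration] = field(default_factory=dict)  # up to 60 views
    frames: List[Dict[str, FrameAnnotations]] = field(default_factory=list)
    text_description: str = ""                          # free-form appearance description

def frame_count(seq: CaptureSequence) -> int:
    """Number of captured time steps in the sequence."""
    return len(seq.frames)

if __name__ == "__main__":
    seq = CaptureSequence(subject_id="subj_0001", motion_type="expression")
    print(frame_count(seq))  # 0 until frames are appended
```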

    μ΄μ•ΌκΈ°ν˜• μ„€λͺ…문을 ν™œμš©ν•œ λŒ€κ·œλͺ¨ λΉ„λ””μ˜€ ν•™μŠ΅ 연ꡬ

    Get PDF
    Doctoral dissertation, Seoul National University Graduate School, College of Engineering, Department of Computer Science and Engineering, February 2021. Advisor: Gunhee Kim. Extensive contributions are being made to develop intelligent agents that can recognize and communicate with the world. In this sense, various video-language tasks have drawn a lot of interest in computer vision research, including image/video captioning, video retrieval, and video question answering. These tasks can be applied to high-level computer vision problems and, through question answering and dialogue generation about the surrounding environment, to various future industries such as search engines, social marketing, automated driving, and robotics. However, despite these developments, video-language learning suffers from a higher degree of complexity. This thesis investigates methodologies for learning the relationship between videos and free-formed language, including explanations, conversations, and question-and-answers, so that the machine can easily adapt to target downstream tasks. First, we introduce several methods to learn the relationship between long sentences and videos efficiently. We introduce approaches for supervising human attention transfer in the video attention model, showing that the video attention mechanism can benefit from explicit human gaze labels. Next, we introduce an end-to-end semantic attention method, which further reduces the visual attention algorithm's complexity by using representative visual concept words detected by an attention-based detector. As a follow-up to these methods, we introduce JSFusion (Joint Sequence Fusion), which enables efficient video search and QA through many-to-many matching in the attention model. Next, we introduce CiSIN (Character in Story Identification Network), which uses attention to improve character grounding and character re-identification in movies. Finally, we introduce Transitional Adaptation, which helps caption generation models generate coherent narratives for long videos. In summary, this thesis presents novel approaches for automatic video description generation and retrieval, and shows the benefit of extracting linguistic knowledge about objects and motion in video as well as the advantage of multimodal audio-visual learning for understanding videos. Since the proposed methods are easily adapted to any video-language task, they are expected to be applicable to the latest models, bringing additional performance improvements. Moving forward, we plan to design an unsupervised video learning framework that can solve many challenges in industry by integrating an unlimited amount of video, audio, and free-formed language data from the web. Vision-language learning is an important and actively studied field: it applies not only to high-level computer vision tasks such as image/video captioning, visual question answering, video retrieval, scene understanding, and event detection, but also, through question answering and dialogue generation about the surrounding environment, to future industries such as internet search, social marketing, automated driving, and robotics. Building on this importance, computer vision and natural language processing have each advanced in their own domains, and with the recent advent of deep learning they have developed remarkably, complementing each other and improving each other's results with great synergy.
However, despite these advances, video-language learning often runs into difficulty because the complexity of the problem is considerably higher. This thesis aims to learn the relationship between videos and the corresponding free-formed language (descriptions, dialogue, question answering, and beyond) more efficiently, and to improve models so that they adapt well to their target tasks. First, we introduce several methods for efficiently learning the relationship between long sentences and videos, whose visual complexity exceeds that of images: a method that supervises the video-language attention model with human attention; a semantic attention method that further reduces the complexity of the attention algorithm by working through representative visual concept words first detected in the video; and a Joint Sequence Fusion method that enables efficient video retrieval and question answering based on many-to-many matching in the attention model. Next, we introduce the Character in Story Identification Network, in which the attention model goes beyond object-word relations to perform character grounding (person search) and person re-identification in video simultaneously, with the two tasks reinforcing each other. Finally, we introduce a method that uses self-supervised learning to guide an attention-based language model toward generating coherent descriptions of long videos. In summary, the new methods proposed in this dissertation serve as technical stepping stones for video-language tasks such as video captioning, video retrieval, and video question answering; the attention modules learned through video captioning were transplanted into the retrieval, question-answering, and person-search networks and achieved state-of-the-art performance on these new problems simultaneously. The experiments show that transferring the linguistic knowledge gained through video-language learning greatly helps audio-visual multimodal video learning. As future work, building on these studies, we plan to build an unsupervised learning model that integrates the large-scale language, video, and audio data available on the web and can solve many open problems in industry.
Table of Contents:
Chapter 1 Introduction (1.1 Contributions; 1.2 Outline of the Thesis)
Chapter 2 Related Work (2.1 Video Captioning; 2.2 Video Retrieval with Natural Language; 2.3 Video Question and Answering; 2.4 Cross-modal Representation Learning for Vision and Language Tasks)
Chapter 3 Human Attention Transfer for Video Captioning (3.1 Introduction; 3.2 Video Datasets for Caption and Gaze; 3.3 Approach: Video Pre-processing and Description, The Recurrent Gaze Prediction (RGP) Model, Construction of Visual Feature Pools, The Decoder for Caption Generation, Training; 3.4 Experiments: Evaluation of Gaze Prediction, Evaluation of Video Captioning, Human Evaluation via AMT; 3.5 Conclusion)
Chapter 4 Semantic Word Attention for Video QA and Video Captioning (4.1 Introduction: Related Work, Contributions; 4.2 Approach: Preprocessing, An Attention Model for Concept Detection, Video-to-Language Models, A Model for Description, A Model for Fill-in-the-Blank, A Model for Multiple-Choice Test, A Model for Retrieval; 4.3 Experiments: The LSMDC Dataset and Tasks, Quantitative Results, Qualitative Results; 4.4 Conclusion)
Chapter 5 Joint Sequence Fusion Attention for Multimodal Sequence Data (5.1 Introduction; 5.2 Related Work; 5.3 Approach: Preprocessing, The Joint Semantic Tensor, The Convolutional Hierarchical Decoder, An Illustrative Example of How the JSFusion Model Works, Training, Implementation of Video-Language Models; 5.4 Experiments: LSMDC Dataset and Tasks, MSR-VTT-(RET/MC) Dataset and Tasks, Quantitative Results, Qualitative Results; 5.5 Conclusion)
Chapter 6 Character Re-Identification and Character Grounding for Movie Understanding (6.1 Introduction; 6.2 Related Work; 6.3 Approach: Video Preprocessing, Visual Track Embedding, Textual Character Embedding, Character Grounding, Re-Identification, Joint Training; 6.4 Experiments: Experimental Setup, Quantitative Results, Qualitative Results; 6.5 Conclusion)
Chapter 7 Transitional Adaptation of Pretrained Models for Visual Storytelling (7.1 Introduction; 7.2 Related Work; 7.3 Approach: The Visual Encoder, The Language Generator, Adaptation Training, The Sequential Coherence Loss, Training with the Adaptation Loss, Fine-tuning and Inference; 7.4 Experiments: Experimental Setup, Quantitative Results, Further Analyses, Human Evaluation Results, Qualitative Results; 7.5 Conclusion)
Chapter 8 Conclusion (8.1 Summary; 8.2 Future Works)
Bibliography; Abstract in Korean (μš”μ•½); Acknowledgements
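    Several of the thesis contributions revolve around attention-based matching between video frames and words, e.g. the many-to-many matching used for retrieval and question answering. The toy sketch below is not taken from the thesis code; under simplifying assumptions, it scores every (frame, word) pair by cosine similarity and aggregates attention-weighted scores into a single clip-sentence relevance score, which is the basic idea behind such joint fusion models.

```python
import numpy as np

# Toy illustration of many-to-many video-text matching in the spirit of an
# attention-based joint fusion model. NOT the thesis implementation; shapes
# and the aggregation scheme are simplifying assumptions.

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def clip_sentence_score(frame_feats, word_feats, temperature=0.1):
    """Score a (video clip, sentence) pair from all frame-word similarities.

    frame_feats: (T, D) array of per-frame visual features.
    word_feats:  (L, D) array of per-word text features.
    Returns a scalar relevance score.
    """
    f = l2_normalize(frame_feats)
    w = l2_normalize(word_feats)
    sim = f @ w.T                        # (T, L) pairwise cosine similarities
    # Soft attention over frames for each word, then average over words.
    attn = np.exp(sim / temperature)
    attn /= attn.sum(axis=0, keepdims=True)
    per_word = (attn * sim).sum(axis=0)  # attention-weighted frame score per word
    return float(per_word.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.normal(size=(16, 64))    # 16 frames, 64-dim features
    caption = rng.normal(size=(7, 64))   # 7 words, 64-dim features
    print(round(clip_sentence_score(video, caption), 4))
```

    In a retrieval setting, such a score would be computed for every candidate clip and the candidates ranked by it; the thesis models learn the frame and word representations jointly rather than assuming them as done here.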

    Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions

    Full text link
    Multimodal machine learning is a vibrant multi-disciplinary research field that aims to design computer agents with intelligent capabilities such as understanding, reasoning, and learning through integrating multiple communicative modalities, including linguistic, acoustic, visual, tactile, and physiological messages. With the recent interest in video understanding, embodied autonomous agents, text-to-image generation, and multisensor fusion in application domains such as healthcare and robotics, multimodal machine learning has brought unique computational and theoretical challenges to the machine learning community, given the heterogeneity of data sources and the interconnections often found between modalities. However, the breadth of progress in multimodal research has made it difficult to identify the common themes and open questions in the field. By synthesizing a broad range of application domains and theoretical frameworks from both historical and recent perspectives, this paper provides an overview of the computational and theoretical foundations of multimodal machine learning. We start by defining two key principles, modality heterogeneity and interconnection, that have driven subsequent innovations, and propose a taxonomy of six core technical challenges covering historical and recent trends: representation, alignment, reasoning, generation, transference, and quantification. Recent technical achievements are presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches. We end by motivating several open problems for future research as identified by our taxonomy.
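    As one concrete illustration of the taxonomy's alignment challenge (finding correspondences between elements of different modalities), the toy sketch below, which is not from the surveyed paper, matches text-element embeddings to their most similar visual-element embeddings via a cross-modal cosine-similarity matrix; real systems learn these embeddings jointly, whereas here they are simply assumed.

```python
import numpy as np

# Toy illustration of cross-modal alignment: given element-level embeddings
# from two modalities, find correspondences. Illustrative only; not taken
# from the surveyed paper.

def cosine_matrix(a, b, eps=1e-8):
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + eps)
    return a @ b.T

def align(visual_embs, text_embs):
    """For each text element, return the index of its best-matching visual element."""
    sim = cosine_matrix(text_embs, visual_embs)   # (num_words, num_regions)
    return sim.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    regions = rng.normal(size=(5, 32))                             # e.g. 5 image regions
    words = regions[[2, 0, 4]] + 0.01 * rng.normal(size=(3, 32))   # noisy copies of 3 regions
    print(align(regions, words))                                   # expected: [2 0 4]
```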

    Mental content: consequences of the embodied mind paradigm

    Get PDF
    The central difference between objectivist cognitivist semantics and embodied cognition consists in the fact that the latter is, in contrast to the former, mindful of binding meaning to context-sensitive mental systems. According to Lakoff and Johnson's experientialism, conceptual structures arise from preconceptual kinesthetic image-schematic and basic-level structures. Gallese and Lakoff introduced the notion of exploiting sensorimotor structures for higher-level cognition. Three different types of X-schemas realise three types of environmentally embedded simulation: areas that control movements in peri-personal space; canonical neurons of the ventral premotor cortex that fire when a graspable object is represented; and the firing of mirror neurons while perceiving certain movements of conspecifics. ...

    Similarity Reasoning over Semantic Context-Graphs

    Get PDF
    Similarity is a central cognitive mechanism for humans which enables a broad range of perceptual and abstraction processes, including recognizing and categorizing objects, drawing parallels, and predicting outcomes. It has been studied computationally through models designed to replicate human judgment. The work presented in this dissertation leverages general-purpose semantic networks to derive similarity measures in a problem-independent manner. We model both general and relational similarity using connectivity between concepts within semantic networks. Our first contribution is to model general similarity using concept connectivity, which we use to partition vocabularies into topics without the need for document corpora. We apply this model to derive topics from unstructured dialog, specifically enabling an early-literacy primer application to support parents in having better conversations with their young children as they use the primer together. Second, we model relational similarity in proportional analogies. To do so, we derive relational parallelism by searching in semantic networks for similar path pairs that connect the two sides of the analogy statement. We then derive human-readable explanations from the resulting similar path pairs. We show that our model can answer broad-vocabulary analogy questions designed for human test takers with high confidence. The third contribution is to enable symbolic plan repair in robot planning through object substitution. When a failure occurs due to unforeseen changes in the environment, such as missing objects, we enable the planning domain to be extended with a number of alternative objects such that the plan can be repaired and execution can continue. To evaluate this type of similarity, we use both general and relational similarity. We demonstrate that the task context is essential in establishing which objects are interchangeable.
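    To make the connectivity-based notion of similarity concrete, here is a minimal sketch under strong simplifying assumptions: a tiny hand-built graph stands in for a general-purpose semantic network, and 1 / (1 + shortest-path distance) is used as the score. The dissertation's actual measures are richer, but the sketch shows how graph connectivity can rank candidate object substitutions for plan repair.

```python
from collections import deque

# Toy sketch of connectivity-based similarity over a semantic network.
# The graph below is a hand-made stand-in, not ConceptNet or the dissertation's
# networks; 1 / (1 + distance) is one simple scoring choice among many.

GRAPH = {
    "cup":       {"mug", "container", "coffee"},
    "mug":       {"cup", "container", "coffee"},
    "bowl":      {"container", "soup"},
    "container": {"cup", "mug", "bowl"},
    "coffee":    {"cup", "mug", "drink"},
    "soup":      {"bowl", "drink"},
    "drink":     {"coffee", "soup"},
}

def shortest_path_length(graph, start, goal):
    """Breadth-first search; returns hop count or None if unreachable."""
    if start == goal:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for nbr in graph.get(node, ()):
            if nbr == goal:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None

def similarity(graph, a, b):
    d = shortest_path_length(graph, a, b)
    return 0.0 if d is None else 1.0 / (1.0 + d)

if __name__ == "__main__":
    # Object substitution: rank candidate replacements for a missing "cup".
    candidates = ["mug", "bowl", "soup"]
    print(sorted(candidates, key=lambda c: -similarity(GRAPH, "cup", c)))
    # -> ['mug', 'bowl', 'soup']
```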

    Evaluating the impact of variation in automatically generated embodied object descriptions

    Get PDF
    Institute for Communicating and Collaborative Systems. The primary task for any system that aims to automatically generate human-readable output is choice: the input to the system is usually well specified, but there can be a wide range of options for creating a presentation based on that input. When designing such a system, an important decision is to select which aspects of the output are hard-wired and which allow for dynamic variation. Supporting dynamic choice requires additional representation and processing effort in the system, so it is important to ensure that incorporating variation has a positive effect on the generated output. In this thesis, we concentrate on two types of output generated by a multimodal dialogue system: linguistic descriptions of objects drawn from a database, and conversational facial displays of an embodied talking head. In a series of experiments, we add different types of variation to one of these types of output. The impact of each implementation is then assessed through a user evaluation in which human judges compare outputs generated by the basic version of the system to those generated by the modified version; in some cases, we also use automated metrics to compare the versions of the generated output. This series of implementations and evaluations allows us to address three related issues. First, we explore the circumstances under which users perceive and appreciate variation in generated output. Second, we compare two methods of incorporating variation into the output of a corpus-based generation system. Third, we compare human judgements of output quality to the predictions of a range of automated metrics. The results of the thesis are as follows. The judges generally preferred output that incorporated variation, except for a small number of cases where other aspects of the output obscured it or the variation was not marked. In general, the output of systems that chose the majority option was judged worse than that of systems that chose from a wider range of outputs. However, the results for non-verbal displays were mixed: users mildly preferred agent outputs where the facial displays were generated using stochastic techniques to those where a simple rule was used, but the stochastic facial displays decreased users' ability to identify contextual tailoring in speech, while the rule-based displays did not. Finally, automated metrics based on simple corpus similarity favour generation strategies that do not diverge far from the average corpus examples, which are exactly the strategies that human judges tend to dislike. Automated metrics that measure other properties of the generated output correspond more closely to users' preferences.
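    The finding that corpus-similarity metrics penalize exactly the variation users like is easiest to see with a concrete metric. The sketch below implements a simple clipped n-gram precision against a small set of references; it is an illustrative stand-in, not one of the metrics used in the thesis, and the example strings are invented.

```python
from collections import Counter

# Toy sketch of a simple corpus-similarity metric of the kind discussed above:
# modified n-gram precision of a generated description against corpus references.
# Illustrative stand-in only; not the thesis's evaluation code.

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_precision(candidate, references, n=2):
    """Fraction of candidate n-grams that appear in any reference (clipped counts)."""
    cand = Counter(ngrams(candidate.lower().split(), n))
    if not cand:
        return 0.0
    max_ref = Counter()
    for ref in references:
        for gram, count in Counter(ngrams(ref.lower().split(), n)).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / sum(cand.values())

if __name__ == "__main__":
    refs = ["this phone has a large colour screen",
            "this phone has a long battery life"]
    bland = "this phone has a large screen"            # close to the corpus average
    varied = "its generous display really stands out"  # more varied wording
    print(ngram_precision(bland, refs), ngram_precision(varied, refs))  # 0.8 0.0
```

    The bland candidate scores far higher even though a human judge might well prefer the varied one, which is exactly the mismatch between corpus-similarity metrics and human preferences that the thesis reports.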