
    Segmenting and summarizing general events in a long-term lifelog

    Lifelogging aims to capture a person’s life experiences using digital devices. When captured over an extended period of time, a lifelog can contain millions of files from various sources in a range of formats. For lifelogs containing such massive numbers of items, we believe it is important to group them into meaningful sets and summarize them, so that users can search and browse their lifelog data efficiently. Existing studies have explored segmenting continuously captured images over short periods of at most a few days into small groups of “events” (episodes). For long-term lifelogs, however, higher levels of abstraction are desirable because of the very large number of “events” that occur over an extended period. We aim to segment a long-term lifelog at the level of general events, which typically extend beyond a daily boundary, and to select summary information to represent these events. We describe our current work on higher-level segmentation and summary information extraction for long-term lifelogs, and report a preliminary pilot study on a real long-term lifelog collection.

    OBJ2TEXT: Generating Visually Descriptive Language from Object Layouts

    Generating captions for images is a task that has recently received considerable attention. In this work we focus on caption generation for abstract scenes, or object layouts, where the only information provided is a set of objects and their locations. We propose OBJ2TEXT, a sequence-to-sequence model that encodes a set of objects and their locations as an input sequence using an LSTM network, and decodes this representation using an LSTM language model. We show that our model, despite encoding object layouts as a sequence, can represent spatial relationships between objects and generate descriptions that are globally coherent and semantically relevant. We test our approach on an object-layout captioning task using only object annotations as input. We additionally show that our model, combined with a state-of-the-art object detector, improves an image captioning model from 0.863 to 0.950 (CIDEr score) on the test benchmark of the standard MS-COCO Captioning task. Comment: Accepted at EMNLP 2017.
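    The key idea in the abstract above is that a set of objects and their locations can be flattened into a token sequence for an LSTM encoder. The following is a minimal sketch of one way such a serialization could look; the function name, the (label, x, y, w, h) box format, and the grid-based discretisation are illustrative assumptions, not the paper's exact encoding.

    ```python
    # Hypothetical sketch: serialize an object layout into a flat token
    # sequence, of the kind an LSTM encoder could consume and an LSTM
    # language model could then decode into a caption (as in OBJ2TEXT).
    # The discretisation scheme here is an assumption for illustration.

    def layout_to_sequence(objects, grid=8):
        """Turn (label, x, y, w, h) boxes, normalised to [0, 1], into a
        flat token sequence: each object contributes its label token
        followed by discretised location and size tokens."""
        seq = []
        for label, x, y, w, h in objects:
            seq.append(label)
            # Map continuous coordinates onto grid-cell tokens.
            seq.append(f"x{min(int(x * grid), grid - 1)}")
            seq.append(f"y{min(int(y * grid), grid - 1)}")
            seq.append(f"w{min(int(w * grid), grid - 1)}")
            seq.append(f"h{min(int(h * grid), grid - 1)}")
        return seq

    layout = [("person", 0.10, 0.30, 0.25, 0.60),
              ("dog",    0.55, 0.70, 0.20, 0.25)]
    tokens = layout_to_sequence(layout)
    # tokens is now a single sequence over both objects; an encoder LSTM
    # would read it left to right, and a decoder LSTM would generate the
    # caption token by token from the resulting representation.
    ```

    Because positions become ordinary tokens in the sequence, the encoder can in principle learn spatial relationships (e.g. "above", "next to") from co-occurring coordinate tokens, which is the property the abstract highlights.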

    A critical investigation of the Osterwalder business model canvas: an in-depth case study

    Although the Osterwalder business model canvas (BMC) is used by professionals worldwide, it has not yet been subject to a thorough investigation in the academic literature. In this first contribution we present the results of an intensive, interactive process of data analysis, visual synthesis and textual rephrasing to gain insight into the business model of a single case (health television). The textual and visual representation of the business model needs to be consistent and powerful. We therefore start from the total value per customer segment. Besides the offer (or core value), additional value is created through customer-related activities. Understanding activities at both the strategic and the tactical level reveals more about total value creation. Moreover, value elements for one customer segment can induce value for others. The interaction between value for customer segments and activities results in a powerful customer-value-centred business model representation. Total value to customers generates activities and costs on the one hand and a revenue model on the other. Gross margins and sales volumes explain how value for customers contributes to profit. Another main challenge in business model mapping lies in identifying the critical resources behind the activities. The Osterwalder business model canvas lacks consistency and power due to many overlaps, which in turn are caused by its fixed architecture; the latter too easily leads to a filling-in exercise. Through its business model representation, a company should first of all gain a thorough understanding of that model. Only then can it evaluate the model and finally consider adaptations.

    Finding new music: a diary study of everyday encounters with novel songs

    This paper explores how we, as individuals, purposefully or serendipitously encounter 'new music' (that is, music we haven’t heard before) and relates these behaviours to music information retrieval activities such as music searching and music discovery via recommender systems. Forty-one participants took part in a three-day diary study in which they recorded all incidents that brought them into contact with new music. The diaries were analyzed using a Grounded Theory approach. The results of this analysis are discussed with respect to location, time, and whether the music encounter was actively sought or occurred passively. Based on these results, we outline design implications for music information retrieval software and suggest an extension of 'laid back' searching.

    Foreground and background text in retrieval

    Our hypothesis is that certain clauses have foreground functions in text while others have background functions, and that these functions are expressed or reflected in the syntactic structure of the clause. These clauses will presumably have differing utility for automatic approaches to text understanding: a summarization system might use background clauses to capture commonalities across a number of documents, while an indexing system might use foreground clauses to capture the specific characteristics of a particular document.