
    A Crowdsourcing Procedure for the Discovery of Non-Obvious Attributes of Social Image

    Research on mid-level image representations has conventionally concentrated on relatively obvious attributes and overlooked non-obvious attributes, i.e., characteristics that are not readily observable when images are viewed independently of their context or function. Non-obvious attributes are not necessarily easily nameable, but they nonetheless play a systematic role in people's interpretation of images. Clusters of related non-obvious attributes, called interpretation dimensions, emerge when people are asked to compare images, and provide important insight into which aspects of social images are considered relevant. In contrast to aesthetic or affective approaches to image analysis, non-obvious attributes are not related to the personal perspective of the viewer. Instead, they encode a conventional understanding of the world, which is tacit rather than explicitly expressed. This paper introduces a procedure for discovering non-obvious attributes using crowdsourcing. We discuss this procedure using a concrete example of a crowdsourcing task on Amazon Mechanical Turk carried out in the domain of fashion. An analysis comparing discovered non-obvious attributes with user tags demonstrates the added value delivered by our procedure.
Comment: 6 pages, 3 figures. Extended version of a paper to appear in CrowdMM 2014: International ACM Workshop on Crowdsourcing for Multimedia.
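
The final comparison step described above, measuring how much discovered attributes add beyond existing user tags, could be sketched as a simple vocabulary-overlap check. This is a minimal illustration, not the paper's actual analysis; the attribute and tag vocabularies below are invented.

```python
# Hypothetical sketch: quantify the added value of discovered non-obvious
# attributes by measuring their overlap with existing user tags.
# The vocabularies below are illustrative inventions, not the paper's data.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two vocabularies."""
    return len(a & b) / len(a | b) if a | b else 0.0

user_tags = {"dress", "red", "vintage", "summer"}
discovered = {"formality", "season-appropriateness", "red", "occasion"}

overlap = jaccard(discovered, user_tags)
novel = discovered - user_tags          # attributes the tags failed to capture
print(f"overlap={overlap:.2f}, novel={sorted(novel)}")
```

A low overlap combined with a non-empty `novel` set is the kind of evidence the abstract points to: the crowdsourcing procedure surfaces attributes that users do not spontaneously tag.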

    Crowdsourced intuitive visual design feedback

    For many people images are a medium preferable to text, and yet, with the exception of star ratings, most formats for conventional computer-mediated feedback focus on text. This thesis develops a new method of crowd feedback for designers based on images. Visual summaries are generated from the feedback images a crowd chooses in response to a design. The summaries provide the designer with impressionistic and inspiring visual feedback. The thesis sets out the motivation for this new method and describes the development of perceptually organised image sets and a summarisation algorithm to implement it. Evaluation studies are reported which, through a mixed-methods approach, provide evidence of the validity and potential of the new image-based feedback method. It is concluded that the visual feedback method would be more appealing than text for the section of the population with a visual cognitive style. Indeed, the evaluation studies are evidence that such users believe images are as good as text when communicating their emotional reaction to a design. Designer participants reported being inspired by the visual feedback where, comparably, they were not inspired by text. They also reported that the feedback can represent the perceived mood in their designs, and that they would be enthusiastic users of a service offering this new form of visual design feedback.
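
The core summarisation idea, turning a crowd's chosen feedback images into a compact visual summary, could be sketched at its simplest as frequency-based selection of representative images. This is a minimal stand-in for the thesis's perceptually organised summarisation algorithm; the image identifiers are invented placeholders.

```python
from collections import Counter

# Hypothetical sketch of image-based feedback summarisation: each crowd
# member picks feedback images in response to a design, and the summary
# surfaces the most frequently chosen ones. Image IDs are placeholders.

def summarise(selections: list[list[str]], k: int = 3) -> list[str]:
    """Return the k most frequently chosen image IDs across the crowd."""
    counts = Counter(img for picks in selections for img in picks)
    return [img for img, _ in counts.most_common(k)]

crowd = [["calm_sea", "sunset"], ["sunset", "forest"], ["sunset", "calm_sea"]]
print(summarise(crowd, k=2))  # the two most representative feedback images
```

A real implementation would additionally cluster perceptually similar images before counting, so that near-duplicates reinforce one another rather than splitting the vote.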

    Exploratory Browsing

    In recent years digital media has influenced many areas of our life. The transition from analogue to digital has substantially changed our ways of dealing with media collections. Today's interfaces for managing digital media mainly offer fixed linear models corresponding to the underlying technical concepts (folders, events, albums, etc.), or metaphors borrowed from their analogue counterparts (e.g., stacks, film rolls). However, people's mental interpretations of their media collections often go beyond the scope of linear scan. Beyond explicit search with specific goals, current interfaces cannot sufficiently support explorative and often non-linear behavior. This dissertation presents an exploration of interface design to enhance the browsing experience with media collections. The main outcome of this thesis is a new model of Exploratory Browsing to guide the design of interfaces that support the full range of browsing activities, especially Exploratory Browsing. We define Exploratory Browsing as the behavior of a user who is uncertain about her or his targets and needs to discover areas of interest (exploratory), in which she or he can explore in detail and possibly find some acceptable items (browsing). According to the browsing objectives, we group browsing activities into three categories: Search Browsing, General Purpose Browsing and Serendipitous Browsing. In the context of this thesis, Exploratory Browsing refers to the latter two browsing activities, which go beyond explicit search with specific objectives. We systematically explore the design space of interfaces to support the Exploratory Browsing experience. Applying the methodology of User-Centered Design, we develop eight prototypes, covering the two main usage contexts of browsing personal collections and browsing in online communities. The main studied media types are photographs and music.
The main contribution of this thesis lies in deepening the understanding of how people's exploratory behavior has an impact on interface design. This thesis contributes to the field of interface design for media collections in several aspects. With the goal of informing interface design that supports the Exploratory Browsing experience with media collections, we present a model of Exploratory Browsing covering the full range of exploratory activities around media collections. We investigate this model in different usage contexts and develop eight prototypes. The substantial implications gathered during the development and evaluation of these prototypes inform the further refinement of our model: we uncover the underlying transitional relations between browsing activities and discover several stimulators that encourage a fluid and effective activity transition. Based on this model, we propose a catalogue of general interface characteristics, and employ this catalogue as criteria to analyze the effectiveness of our prototypes. We also present several general suggestions for designing interfaces for media collections.
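
The three browsing categories and the transitional relations between them can be pictured as a small state machine. The sketch below is purely illustrative: the transition triggers are invented placeholders, not the actual stimulators identified in the thesis.

```python
from enum import Enum, auto

# Hypothetical sketch of the browsing-activity model: three activity states
# and illustrative transitions between them. Trigger names are invented.

class Browsing(Enum):
    SEARCH = auto()           # explicit search with a specific goal
    GENERAL_PURPOSE = auto()  # browsing with a loose objective
    SERENDIPITOUS = auto()    # open-ended, no target in mind

# trigger -> (from_state, to_state)
TRANSITIONS = {
    "goal_formed":  (Browsing.SERENDIPITOUS, Browsing.GENERAL_PURPOSE),
    "target_known": (Browsing.GENERAL_PURPOSE, Browsing.SEARCH),
    "distraction":  (Browsing.SEARCH, Browsing.SERENDIPITOUS),
}

def step(state: Browsing, trigger: str) -> Browsing:
    """Follow a transition if the trigger applies in the current state."""
    src, dst = TRANSITIONS.get(trigger, (None, None))
    return dst if src is state else state

state = Browsing.SERENDIPITOUS
state = step(state, "goal_formed")   # a vague interest crystallises
state = step(state, "target_known")  # the user now knows what to find
print(state)
```

The thesis's "stimulators" correspond here to triggers: interface elements that make such transitions fluid rather than forcing the user to switch tools.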

    Identifying the Effects of Unexpected Change in a Distributed Collection of Web Documents

    It is not unusual for digital collections to degrade and suffer from problems associated with unexpected change. In previous analyses, I have found that categorizing the degree of change affecting a digital collection over time is a difficult task. More specifically, I found that categorizing this degree of change is not a binary problem where documents are either unchanged or they have changed so dramatically that they do not fit within the scope of the collection. It is, in part, a characterization of the intent of the change. In this dissertation, I present a study that compares change detection methods based on machine learning algorithms against the assessment made by human subjects in a user study. Consequently, this dissertation focuses on two research questions. First, how can we categorize the various degrees of change that documents can endure? This point becomes increasingly interesting if we take into account that the resources found in a digital library are often curated and maintained by experts with affiliations to professionally managed institutions. And second, how do the automatic detection methods fare against the human assessment of change in the ACM conference list? The results of this dissertation are threefold. First, I provide a categorization framework that highlights the different instances of change that I found in an analysis of the Association for Computing Machinery conference list. Second, I focus on a set of procedures to classify the documents according to the characteristics of change that they exhibit. Finally, I evaluate the classification procedures against the assessment of human subjects. 
Taking into account the results of the user evaluation and the inability of the test subjects to recognize some instances of change, the main conclusion that I derive from my dissertation is that managing the effects of unexpected change is a more serious problem than had previously been anticipated, thus requiring the immediate attention of collection managers and curators.
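
The evaluation step described above, comparing automatic change classification against human assessment, is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is a generic illustration of that comparison; the change categories and labels are invented examples, not the dissertation's data.

```python
# Hypothetical sketch: measuring agreement between an automatic change
# classifier and human assessors with Cohen's kappa. Labels are invented
# examples of change categories a collection manager might assign.

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Chance-corrected agreement between two label sequences."""
    n = len(a)
    labels = set(a) | set(b)
    observed = sum(x == y for x, y in zip(a, b)) / n
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

human   = ["unchanged", "minor", "off-topic", "minor", "unchanged", "off-topic"]
machine = ["unchanged", "minor", "minor",     "minor", "unchanged", "off-topic"]
print(f"kappa = {cohens_kappa(human, machine):.2f}")
```

A kappa well below 1.0, as here, mirrors the abstract's finding: even human subjects fail to recognize some instances of change, so raw agreement alone overstates how well either party characterizes it.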

    Balch internet research and analysis tool development case study

    The Internet has become increasingly popular as a vehicle for delivering surveys. An essential objective of research is to collect accurate data, yet there has been little work to ensure that Internet survey systems employ best practices, as defined by academic and professional research, for collecting data. My dissertation reviews the current literature on best practices in Internet survey design and on best practices in software design and development. I then document the development and deployment of an open-source, publicly licensed Internet survey system that allows researchers to easily create, deploy, and analyze the results of Internet surveys. The resulting product, the Balch Internet Research and Analysis Tool (http://birat.net), is a full-featured Internet survey system that addresses best Internet research practices as defined by academic and professional research. The system was designed and coded by the author and is considered by him to be both innovative and unique to the field. The dissertation then reviews the system features, describes how the system was deployed, and discusses the strategies used to increase use and adoption of the system.

    Semantic Annotation of Digital Objects by Multiagent Computing: Applications in Digital Heritage

    Heritage organisations around the world are participating in broad-scale digitisation projects, where traditional forms of heritage materials are being transcribed into digital representations in order to assist with their long-term preservation, facilitate cataloguing, and increase their accessibility to researchers and the general public. These digital formats open up a new world of opportunities for applying computational information retrieval techniques to heritage collections, making it easier than ever before to explore and document these materials. One of the key benefits of being able to easily share digital heritage collections is the strengthening and support of community memory, where members of a community contribute their perceptions and recollections of historical and cultural events so that this knowledge is not forgotten and lost over time. With the ever-growing popularity of digitally native media and the high level of computer literacy in modern society, this is set to become a critical area for preservation in the immediate future.

    Journalistic Knowledge Platforms: from Idea to Realisation

    Journalistic Knowledge Platforms (JKPs) are a type of intelligent information system designed to augment news creation processes by combining big data, artificial intelligence (AI) and knowledge bases to support journalists. Despite their potential to revolutionise the field of journalism, the adoption of JKPs has been slow, even with scholars and large news outlets involved in their research and development. The slow adoption can be attributed to the technical complexity of JKPs, which has led news organisations to rely on multiple independent, task-specific production systems. This situation can increase the resource and coordination footprint and costs, while at the same time posing the risk of losing control over data and facing vendor lock-in scenarios. The technical complexities remain a major obstacle, as there is no existing well-designed system architecture that would facilitate the realisation and integration of JKPs in a coherent manner over time. This PhD thesis contributes to the theory and practice of knowledge-graph-based JKPs by studying and designing a software reference architecture to facilitate the instantiation of concrete solutions and the adoption of JKPs.
The first contribution of this thesis provides a thorough and comprehensible analysis of the idea of JKPs, from their origins to their current state. This analysis provides the first-ever study of the factors that have contributed to the slow adoption, including the complexity of their social and technical aspects, and identifies the major challenges and future directions of JKPs. The second contribution presents the software reference architecture, which provides a generic blueprint for designing and developing concrete JKPs. The proposed reference architecture also defines two novel types of components intended to maintain and evolve AI models and knowledge representations. The third contribution presents an instantiation example of the software reference architecture and details a process for improving the efficiency of information extraction pipelines. This framework facilitates a flexible, parallel and concurrent integration of natural language processing techniques and AI tools. Additionally, this thesis discusses the implications of recent AI advances for JKPs and diverse ethical aspects of using JKPs. Overall, this PhD thesis provides a comprehensive and in-depth analysis of JKPs, from theory to the design of their technical aspects. This research aims to facilitate the adoption of JKPs and advance research in this field.
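
The "flexible, parallel and concurrent integration" of extraction components mentioned above could be sketched with a thread pool that fans an article out to several independent extractors and merges their annotations. The extractors below are trivial stand-ins, not the thesis's actual components.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of the parallel information-extraction idea: several
# independent NLP extractors run concurrently over the same article, and
# their outputs are merged into one annotation record. The extractors are
# toy placeholders for real NLP components.

def extract_entities(text: str) -> dict:
    """Toy stand-in for a named-entity recogniser."""
    return {"entities": [w for w in text.split() if w.istitle()]}

def extract_keywords(text: str) -> dict:
    """Toy stand-in for a keyword extractor."""
    return {"keywords": [w.lower() for w in text.split() if len(w) > 6]}

def annotate(text: str) -> dict:
    """Run all extractors concurrently and merge their results."""
    extractors = [extract_entities, extract_keywords]
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda f: f(text), extractors)
    merged = {}
    for r in results:
        merged.update(r)
    return merged

print(annotate("Reuters reports elections in Norway tomorrow"))
```

Because each extractor is independent and side-effect free, new components (or AI tools) can be slotted into the `extractors` list without changing the orchestration, which is the kind of coherent, incremental integration the reference architecture aims to enable.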