
    Guided generation of pedagogical concept maps from the Wikipedia

    We propose a new method for the guided generation of concept maps from open-access online knowledge resources such as wikis. Based on this method we have implemented a prototype that extracts semantic relations from the sentences surrounding hyperlinks in Wikipedia's articles and lets a learner create customized learning objects in real time, based on collaborative recommendations that take her earlier knowledge into account. Open source modules enable pedagogically motivated exploration in wiki spaces, corresponding to an intelligent tutoring system. The method extracted compact noun–verb–noun phrases, suggested for labeling the arcs between nodes that were labeled with article titles. On average, 80 percent of these phrases were useful, while their length was only 20 percent of the length of the original sentences. Experiments indicate that even simple analysis algorithms can effectively support user-initiated information retrieval and the building of intuitive learning objects that follow the learner's needs.
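    As a rough illustration of the phrase-extraction step described above, the sketch below keeps only a short noun–verb–noun span of a sentence so that it can label an arc between two article-title nodes. This is a minimal sketch assuming spaCy and its small English model; the function name and example sentence are illustrative, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): reduce a sentence surrounding a
# hyperlink to a compact noun-verb-noun phrase for labeling a concept-map arc.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def noun_verb_noun(sentence):
    """Return a compact 'noun verb noun' phrase from the sentence, or None."""
    doc = nlp(sentence)
    for token in doc:
        if token.pos_ == "VERB":
            # Nearest noun on each side of the verb.
            left = next((t for t in reversed(doc[:token.i]) if t.pos_ in ("NOUN", "PROPN")), None)
            right = next((t for t in doc[token.i + 1:] if t.pos_ in ("NOUN", "PROPN")), None)
            if left is not None and right is not None:
                return f"{left.text} {token.lemma_} {right.text}"
    return None

print(noun_verb_noun("The prototype extracts semantic relations from sentences surrounding hyperlinks."))
# e.g. "prototype extract relations" -- far shorter than the original sentence
```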

    Communicating Uncertainty in Digital Humanities Visualization Research

    Due to their historical nature, humanistic data encompass multiple sources of uncertainty. While humanists are accustomed to handling such uncertainty with their established methods, they are cautious of visualizations that appear overly objective and fail to communicate this uncertainty. To design more trustworthy visualizations for humanistic research, a deeper understanding of its relation to uncertainty is therefore needed. We systematically reviewed 126 publications from the digital humanities literature that use visualization as part of their research process and examined how uncertainty was handled and represented in their visualizations. Crossing these dimensions with visualization type and use, we found that uncertainty originated from multiple steps in the research process, from the source artifacts to their datafication. We also noted that, besides known coping strategies such as excluding data and evaluating its effects, humanists embraced uncertainty as a separate dimension important to retain. By mapping how the visualizations encoded uncertainty, we identified four approaches that varied in explicitness and customization. This work contributes two empirical taxonomies of uncertainty and its corresponding coping strategies, as well as the foundation of a research agenda for uncertainty visualization in the digital humanities. Our findings further the synergy between humanists and visualization researchers, and ultimately contribute to the development of more trustworthy, uncertainty-aware visualizations.

    Network Analysis and Feminist Artists

    This article examines the benefits and drawbacks of using social network analysis to study feminist artists' networks. Looking at two of the author's digital humanities projects, it explores the systemic and structural barriers that limit the utility of social network analysis for feminist artists. The first project, on the social network of the artist Carolee Schneemann, analyzed her female circles through a correspondence network. The second project attempted to trace the circulation of feminist art manifestos in American feminist periodicals. Three factors are identified as constraining network analysis in these case studies: the lack of feminist artists' archives, an insufficient amount of machine-readable material, and the limited availability of metadata pertaining to them. Network analysis is therefore best viewed as a supplement to other digital approaches for studying this group.
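    To make the correspondence-network approach concrete, here is an illustrative sketch (not the project's data or code) in which people are nodes and edge weights count letters exchanged; the correspondent names are invented placeholders and networkx is assumed to be available.

```python
# Illustrative correspondence network: nodes are people, edge weights count letters.
import networkx as nx

letters = [  # placeholder records, not archival data
    ("Carolee Schneemann", "Correspondent A"),
    ("Carolee Schneemann", "Correspondent A"),
    ("Carolee Schneemann", "Correspondent B"),
    ("Correspondent A", "Correspondent B"),
]

G = nx.Graph()
for sender, recipient in letters:
    if G.has_edge(sender, recipient):
        G[sender][recipient]["weight"] += 1
    else:
        G.add_edge(sender, recipient, weight=1)

# Degree centrality highlights the most connected figures in the archive.
print(nx.degree_centrality(G))
```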

    Personalized Cinemagraphs using Semantic Understanding and Collaborative Learning

    Cinemagraphs are a compelling way to convey dynamic aspects of a scene. In these media, dynamic and still elements are juxtaposed to create an artistic and narrative experience. Creating a high-quality, aesthetically pleasing cinemagraph requires isolating objects in a semantically meaningful way and then selecting good start times and looping periods for those objects to minimize visual artifacts (such as tearing). To achieve this, we present a new technique that uses object recognition and semantic segmentation as part of an optimization method to automatically create cinemagraphs from videos that are both visually appealing and semantically meaningful. Given a scene with multiple objects, there are many cinemagraphs one could create. Our method evaluates these multiple candidates and presents the best one, as determined by a model trained to predict human preferences in a collaborative way. We demonstrate the effectiveness of our approach with multiple results and a user study. Comment: To appear in ICCV 2017. Total 17 pages including the supplementary material.
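    As a toy illustration of the loop-selection problem (choosing start times and looping periods so that artifacts such as tearing are minimized), the sketch below picks, for one candidate region, the start frame and period whose endpoints differ least. This is an assumption-laden stand-in, not the paper's optimization or preference model; `frames` is assumed to be a list of equally sized grayscale numpy arrays.

```python
# Toy loop selection: minimize the mismatch between the first and last frame of
# the loop, a rough proxy for tearing artifacts (not the paper's method).
import numpy as np

def best_loop(frames, min_period=10, max_period=60):
    """Return (start, period) minimizing the endpoint mismatch of the loop."""
    best, best_cost = (0, min_period), float("inf")
    for period in range(min_period, max_period + 1):
        for start in range(len(frames) - period):
            cost = np.mean(np.abs(frames[start] - frames[start + period]))
            if cost < best_cost:
                best, best_cost = (start, period), cost
    return best

# Random frames stand in for one segmented object region of a video.
frames = [np.random.rand(48, 64) for _ in range(120)]
print(best_loop(frames))
```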

    "Enlisting 'Vertues Noble & Excelent': Behavior, Credit, and Knowledge Organization in the Social Edition"

    Part of the special issue of DHQ on feminisms and digital humanities, this paper takes as its starting place Greg Crane's exhortation that there is a "need to shift from lone editorials and monumental editions to editors ... who coordinate contributions from many sources and oversee living editions." In response to Crane, the exploration of the "living edition" detailed here examines the process of creating a publicly editable edition and considers what that edition, the process by which it was built, and the platform in which it was produced mean for editions that support and promote gender equity. Drawing on scholarship about the culture of the Wikimedia suite of projects, the gendered trolling experienced by members of our team in the production of the Social Edition of the Devonshire Manuscript in Wikibooks, and interviews with our advisory group, we argue that while the Wikimedia projects are often openly hostile online spaces, they are so important to the contemporary circulation of knowledge that the key is to encourage gender equity in social behavior, credit sharing, and knowledge organization in Wikimedia, rather than abandon it for a more controlled collaborative environment for edition production and dissemination.

    Historical collaborative geocoding

    The latest developments in digitisation have provided large data sets that can increasingly easily be accessed and used. These data sets often contain indirect localisation information, such as historical addresses. Historical geocoding is the process of transforming this indirect localisation information into direct localisation that can be placed on a map, which enables spatial analysis and cross-referencing. Many efficient geocoders exist for current addresses, but they do not deal with the temporal aspect and are based on a strict hierarchy (..., city, street, house number) that is hard or impossible to use with historical data. Indeed, historical data are full of uncertainties (temporal aspect, semantic aspect, spatial precision, confidence in the historical source, ...) that cannot be resolved, as there is no way to go back in time to check. We propose an open source, open data, extensible solution for geocoding that is based on building gazetteers composed of geohistorical objects extracted from historical topographical maps. Once the gazetteers are available, geocoding a historical address is a matter of finding the geohistorical object in the gazetteers that best matches the historical address. The matching criteria are customisable and include several dimensions (fuzzy semantic, fuzzy temporal, scale, spatial precision, ...). As the goal is to facilitate historical work, we also propose web-based user interfaces that help with geocoding (one address or batch mode) and display results over current or historical topographical maps, so that they can be checked and collaboratively edited. The system is tested on the city of Paris for the 19th-20th centuries, shows a high return rate, and is fast enough to be used interactively. Comment: Working paper.
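    The matching step can be pictured as a weighted score over several dimensions. The sketch below combines fuzzy name similarity with temporal overlap; the field names, weights, and sample gazetteer entries are illustrative assumptions rather than the system's actual criteria, and only the Python standard library is used.

```python
# Illustrative multi-criteria matching of a historical address against a
# gazetteer of geohistorical objects (weights and fields are assumptions).
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Fuzzy string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def temporal_overlap(query_years, object_years):
    """Fraction of the queried interval covered by the object's validity interval."""
    q0, q1 = query_years
    o0, o1 = object_years
    overlap = max(0, min(q1, o1) - max(q0, o0))
    return overlap / (q1 - q0) if q1 > q0 else 0.0

def score(address, year_range, obj, w_name=0.7, w_time=0.3):
    return (w_name * name_similarity(address, obj["name"])
            + w_time * temporal_overlap(year_range, obj["valid"]))

gazetteer = [  # toy geohistorical objects: name plus validity interval
    {"name": "Rue de la Paix", "valid": (1806, 1900)},
    {"name": "Rue Napoleon", "valid": (1806, 1814)},
]
print(max(gazetteer, key=lambda o: score("rue de la paix", (1850, 1870), o)))
```

    A real gazetteer match would also weigh scale and spatial precision, as the abstract notes; the weights here are arbitrary.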

    Horizon Report 2009

    The annual Horizon Report investigates, identifies, and classifies the emerging technologies that the experts who produce it expect will have an impact on teaching and learning, research, and creative production in the context of higher education. It also examines the key trends that make it possible to anticipate how these technologies will be used, and the challenges they pose for the classroom. Each edition identifies six technologies or practices: two whose use is expected to emerge in the immediate future (one year or less), two that will emerge in the medium term (two to three years), and two foreseen in the longer term (five years).