137 research outputs found

    Metadata Visualization of Cultural Heritage Information within a Collaborative Environment

    Cultural content on the Web is available in various domains (cultural objects, datasets, geospatial data, moving images, scholarly texts and visual resources), concerns various topics, is written in different languages, is targeted at both laymen and experts, and is provided by different communities (libraries, archives, museums and the information industry) and individuals (Figure 1). The integration of information technologies and cultural heritage content on the Web is expected to have an impact on everyday life from the point of view of institutions, communities and individuals. In particular, collaborative environments can recreate 3D navigable worlds that offer new insights into our cultural heritage (Chan 2007). However, the main barrier is finding and relating cultural heritage information, both for end-users of cultural content and for the organisations and communities managing and producing it. In this paper, we explore several visualisation techniques for supporting cultural interfaces, where the role of metadata is essential for supporting search and communication among end-users (Figure 2). A conceptual framework was developed to integrate the data, purpose, technology, impact, and form components of a collaborative environment. Our preliminary results show that collaborative environments can help with cultural heritage information sharing and communication tasks because of the way in which they provide a visual context to end-users. They can be regarded as distributed virtual reality systems that offer graphically realised, potentially infinite, digital information landscapes. Moreover, collaborative environments also provide a new way of interaction between an end-user and a cultural heritage dataset. Finally, the visualisation of a dataset's metadata plays an important role in helping end-users in their search for heritage content on the Web.

    Developing an ontology of mathematical logic

    An ontology provides a mechanism to formally represent a body of knowledge. Ontologies are one of the key technologies supporting the Semantic Web and the desire to add meaning to the information available on the World Wide Web. They provide the mechanism to describe a set of concepts, their properties and their relations, giving a shared representation of knowledge. The MALog project is developing an ontology to support the development of high-quality learning materials in the general area of mathematical logic. This ontology of mathematical logic will form the basis of the semantic architecture, allowing us to relate different learning objects and recommend appropriate learning paths. This paper reviews the technologies used to construct the ontology, describes the use of the ontology to support learning object development, and explores its potential future uses.
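    The idea of deriving learning paths from an ontology's relations can be sketched in a few lines. The sketch below is a minimal illustration, not the MALog ontology itself: the concept names and the `isPrerequisiteOf` relation are invented for the example, and the path is a plain topological order over prerequisite triples.

```python
from collections import defaultdict, deque

# Hypothetical mini-ontology of logic concepts as (subject, predicate, object)
# triples; names and the relation are illustrative, not MALog's actual schema.
triples = [
    ("propositional_logic", "isPrerequisiteOf", "predicate_logic"),
    ("predicate_logic", "isPrerequisiteOf", "model_theory"),
    ("propositional_logic", "isPrerequisiteOf", "natural_deduction"),
    ("natural_deduction", "isPrerequisiteOf", "predicate_logic"),
]

def learning_path(triples):
    """Order concepts so every prerequisite precedes the concepts needing it."""
    succs = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for s, p, o in triples:
        if p != "isPrerequisiteOf":
            continue
        succs[s].append(o)
        indeg[o] += 1
        nodes.update((s, o))
    # Kahn's algorithm: repeatedly emit concepts with no unmet prerequisites.
    queue = deque(sorted(n for n in nodes if indeg[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in sorted(succs[n]):
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

path = learning_path(triples)
```

    A recommender built on the real ontology would add weights, learner history and alternative paths, but the core dependency ordering is the same.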

    Towards a Semantic Wiki Experience – Desktop Integration and Interactivity in WikSAR

    Common Wiki systems such as MediaWiki lack semantic annotations. WikSAR (Semantic Authoring and Retrieval within a Wiki), a prototype of a semantic Wiki, offers effortless semantic authoring. Instant gratification of users is achieved by context-aware means of navigation, interactive graph visualisation of the emerging ontology, and semantic retrieval possibilities. Embedding queries into Wiki pages creates views (as dependent collections) on the information space. Desktop integration includes accessing dates (e.g. reminders) entered in the Wiki via local calendar applications, maintaining bookmarks, and collecting web quotes within the Wiki. Approaches to referencing documents on the local file system are sketched out, as well as an enhancement of the Wiki interface that suggests appropriate semantic annotations to the user.
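    The "embedded queries as views" idea can be illustrated with a toy annotation parser. This is a sketch under assumptions: the `[[predicate::object]]` syntax is borrowed from common semantic-wiki conventions and the page names are invented; WikSAR's actual markup and query language may differ.

```python
import re

# Assumed annotation syntax, loosely modelled on semantic-wiki conventions.
pages = {
    "EiffelTower": "The [[locatedIn::Paris]] tower was [[built::1889]].",
    "Louvre": "A museum [[locatedIn::Paris]], [[built::1793]].",
    "TowerBridge": "Crosses the Thames, [[locatedIn::London]].",
}

ANNOTATION = re.compile(r"\[\[(\w+)::([^\]]+)\]\]")

def triples_from(pages):
    """Extract (page, predicate, object) triples from every page's wikitext."""
    return [(page, pred, obj)
            for page, text in pages.items()
            for pred, obj in ANNOTATION.findall(text)]

def query(triples, pred, obj):
    """An embedded query acts as a view: all pages carrying the annotation."""
    return sorted(s for s, p, o in triples if p == pred and o == obj)

# A page embedding the query {{locatedIn::Paris}} would render this collection:
paris_pages = query(triples_from(pages), "locatedIn", "Paris")
```

    Because the view is recomputed from the triples on each render, it stays current as users annotate new pages, which is what makes it a dependent collection rather than a static list.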

    Large Graph Data Visualisation on the Web

    Graph databases provide a form of data storage that is fundamentally different from the relational model. The goal of this thesis is to visualise such data and to determine the maximum volume that current web browsers are able to process at once. For this purpose, an interactive web application was implemented. Data are stored using the RDF (Resource Description Framework) model, which represents them as triples of the form subject - predicate - object. Communication with this database, which runs on the server, is realised via a REST API; the client itself is implemented in JavaScript, with visualisation performed using the HTML canvas element. The layout can be computed by three specially designed methods: greedy, greedy-swap and force-directed. The resulting limits were determined primarily by measuring the time complexity of the individual parts, and they depend heavily on the user's goals. If it is necessary to render as much data as possible, the limit was set at 150,000 triples; if, on the other hand, the goal is visualisation quality and application smoothness, the limit does not exceed a few thousand.
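    Of the three layout methods, force-directed is the classical baseline. The sketch below is a minimal spring-embedder in the Fruchterman-Reingold style, written in Python for brevity rather than the thesis's JavaScript, and is not its implementation: node names, dimensions and the cooling cap are all illustrative.

```python
import math
import random

def layout(nodes, edges, width=100.0, height=100.0, iterations=50, seed=1):
    """Minimal force-directed layout: pairwise repulsion, spring attraction
    along edges, and a capped displacement per iteration (cooling)."""
    rng = random.Random(seed)
    pos = {n: [rng.uniform(0, width), rng.uniform(0, height)] for n in nodes}
    k = math.sqrt(width * height / len(nodes))  # ideal edge length
    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        # Repulsion between every pair of nodes.
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 0.01
                f = k * k / d
                disp[a][0] += dx / d * f; disp[a][1] += dy / d * f
                disp[b][0] -= dx / d * f; disp[b][1] -= dy / d * f
        # Attraction along edges.
        for a, b in edges:
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 0.01
            f = d * d / k
            disp[a][0] -= dx / d * f; disp[a][1] -= dy / d * f
            disp[b][0] += dx / d * f; disp[b][1] += dy / d * f
        # Limit movement per step so the layout settles.
        for n in nodes:
            dx, dy = disp[n]
            d = math.hypot(dx, dy) or 0.01
            step = min(d, 5.0)
            pos[n][0] += dx / d * step
            pos[n][1] += dy / d * step
    return pos

# A single RDF triple drawn as a tiny graph: subject - predicate - object.
pos = layout(["s", "p", "o"], [("s", "p"), ("p", "o")])
```

    The O(n²) repulsion loop is exactly why quality-oriented layouts cap out at a few thousand triples in the browser, while the cheaper greedy methods can push toward the 150,000-triple rendering limit.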

    Library catalogue records as a research resource:introducing 'A Big Data History of Music'

    Librarians and archivists are increasingly collecting and working with large quantities of digital data. In science, business, and now the humanities, the production and analysis of vast amounts of data (so-called 'big data research') have become fundamental activities. This article introduces the project A Big Data History of Music, a collaboration between Royal Holloway, University of London, and the British Library. The project has made the British Library's catalogue records for printed and manuscript music available as open data, and has explored how the analysis and visualisation of huge numbers of bibliographic records can open new perspectives for researchers into music history. In addition to the British Library data (over a million records), the project drew on a further million bibliographic descriptions from RISM, which have also recently been released as open data. To show the challenges posed by the heterogeneous nature of the data, the article outlines the different structures of the various catalogue records used in the project, and summarises how the British Library data was cleaned and enhanced prior to its public release. Examples are given of how music-bibliographical data can be analysed and visualised, and how scholars and citizen scientists can engage with this data through hackathons, large-scale data analyses, and database construction. It is hoped this article will encourage other research libraries to consider making their catalogue records available as open data.
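    A typical first analysis over a million catalogue records is simple aggregation, for instance counting publications per decade before plotting them. The records below are invented for illustration and the field names are assumptions, not the British Library's actual record schema.

```python
from collections import Counter

# Illustrative records only; real catalogue data uses MARC-style fields and
# needs cleaning (uncertain dates, date ranges) before this step.
records = [
    {"title": "Sonatas for the Harpsichord", "year": 1765, "place": "London"},
    {"title": "A Collection of Psalm Tunes", "year": 1771, "place": "London"},
    {"title": "Six Quartets", "year": 1772, "place": "Paris"},
    {"title": "Twelve Canzonets", "year": 1794, "place": "London"},
]

def per_decade(records):
    """Count records per decade of publication, skipping undated records."""
    return Counter((r["year"] // 10) * 10 for r in records if r.get("year"))

decades = per_decade(records)
```

    At scale, the interesting work is upstream of this aggregation: normalising the heterogeneous date and place fields so that counts like these are comparable across catalogues.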

    Are we talking about the same structure?: A unified approach to hypertext links, xml, rdf and zigzag

    There are many different hypertext systems and paradigms, each with its apparent advantages. However, the distinctions are perhaps not as significant as they seem. If we can reduce the core linking functionality to some common structure, which allows us to consider hypertext systems within a common model, we can identify what, if anything, distinguishes hypertext systems from each other. This paper offers such a common structure, showing the conceptual similarities between these systems and paradigms.
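    One way to make the "common structure" idea concrete is to model every link, whatever its paradigm, as a typed relation over sets of endpoints. The structure below is our own illustration of that style of unification, not the paper's actual formalism; the field names and examples are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    """A paradigm-neutral link: sets of source and target endpoints plus a
    relation type. Field names are illustrative, not the paper's notation."""
    sources: frozenset
    targets: frozenset
    relation: str = "links-to"

# A one-way embedded hypertext link (like an HTML href with an anchor):
html_link = Link(frozenset({"pageA.html#anchor3"}), frozenset({"pageB.html"}))

# An RDF statement: subject and object are endpoints, predicate the relation.
rdf_link = Link(frozenset({"ex:Shakespeare"}), frozenset({"ex:Hamlet"}),
                "ex:wrote")

# A multi-ended link, as in open hypermedia or XLink extended links:
extended = Link(frozenset({"doc1", "doc2"}), frozenset({"commentary"}),
                "annotates")

def endpoints(link):
    """All resources a link touches, regardless of its native paradigm."""
    return link.sources | link.targets
```

    Once links from different systems share one shape, comparing paradigms reduces to asking which constraints each system imposes on that shape, e.g. whether links are embedded, first-class, directed, or limited to two endpoints.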

    "Smart Content Scraping" for the construction of author networks

    […] is a research project led by a humanities and social sciences researcher at the University of Tokyo (Japan), whose objective is to highlight the causal relationship between networks and social phenomena (ecological crises, etc.). Our collaboration within this project aims above all to synthesise a heterogeneous author network through several types of relations, for a given period that includes an ecological crisis ([…], USA, the 1930s). To this end, we look on the one hand at a set of bibliographic data for the construction of a co-author network; on the other hand, we combine these data with the related textual corpus in order to consider another dimension of this network, highlighting a further type of relation: citation relations between authors. We rely on text-mining methods, tools and techniques to build, validate, analyse and enrich these networks.
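    The first step described above, building a co-author network from bibliographic records, amounts to counting pairwise co-occurrences of authors on the same record. The sketch below illustrates that step; the records and author names are invented, not the project's corpus.

```python
from collections import Counter
from itertools import combinations

# Illustrative bibliographic records; titles and authors are invented.
records = [
    {"title": "Soil erosion surveys", "authors": ["Bennett", "Chapline"]},
    {"title": "Dust storms and drought", "authors": ["Bennett", "Lowdermilk"]},
    {"title": "Shelterbelt planting",
     "authors": ["Bennett", "Chapline", "Zon"]},
]

def coauthor_edges(records):
    """Weighted co-authorship edges: one increment per jointly authored
    record, with the pair sorted so (a, b) and (b, a) merge into one edge."""
    edges = Counter()
    for rec in records:
        for a, b in combinations(sorted(set(rec["authors"])), 2):
            edges[(a, b)] += 1
    return edges

edges = coauthor_edges(records)
```

    The citation dimension would add a second, directed edge set extracted from the texts themselves, which is where the text-mining methods mentioned above come in, along with author-name disambiguation to keep the two networks aligned.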

    Data Journalism and the Semantic Web

    One of the most interesting phenomena of contemporary journalism is so-called Data Journalism, in which the evolution of computer-assisted reporting and data representation has focused attention on usability, user interaction, visualisation and user participation. Journalistic work is altered from the outset: it produces extraordinary collaboration among journalists, as well as cooperation with designers and developers, yielding new visual narratives for articles and features built from large volumes of data, much of it drawn from the Semantic Web, a cultural revolution in the ownership and use of data that affects the processes by which information and knowledge are produced.

    From the printed book to the Semantic Web: the Dictionnaire des éditeurs français du xixe siècle project

    This article presents the technical challenges of the DEF19 project (Dictionnaire des éditeurs français du xixe siècle, i.e. the Dictionary of 19th-Century French Publishers). It shows how the scientific choices made to define the scope of the dictionary (what is a publisher, and who in 19th-century France belongs in that category?) largely determined the technical solutions. It describes the experiments carried out to develop the database, namely the software solutions tested and the problems encountered (in particular, integrating heterogeneous data from different pre-existing databases), as well as how those problems were resolved. Finally, it analyses the value that the Semantic Web, through the use of the Omeka S platform, can bring to such a scholarly undertaking, and outlines the technical and institutional collaborations that have gradually opened up over the course of the project.