180 research outputs found

    MAPPA. Methodologies applied to archaeological potential Predictivity

    The fruitful cooperation over the years between the university teaching staff of the Università di Pisa (Pisa University), the officials of the Soprintendenza per i Beni Archeologici della Toscana (Superintendency for Archaeological Heritage of Tuscany), the officials of the Soprintendenza per i Beni Architettonici, Paesaggistici, Artistici ed Etnoantropologici per le Province di Pisa e Livorno (Superintendency for Architectural, Landscape and Ethno-anthropological Heritage for the Provinces of Pisa and Livorno), and the Comune di Pisa (Municipality of Pisa) has fostered a great deal of research on archaeological heritage and on the reconstruction of the environmental and landscape context in which Pisa has evolved throughout the centuries of its history. The desire to merge this remarkable know-how into an organic framework and, above all, to make it easily accessible not only to the scientific community and the professional categories involved but to everyone, together with the wish to provide Pisa with a Map of archaeological potential (a research, protection and urban planning tool capable of reconciling the need to protect the remains of the past with the development requirements of the future), led to the MAPPA project – Methodologies applied to archaeological potential predictivity – funded by Regione Toscana in 2010. The two-year project started on 1 July 2011 and will end on 30 June 2013. The first year of research was dedicated to the first objective: retrieving the results of archaeological investigations from the archives of the Superintendencies and the University and from the pages of scientific publications, and making them easily accessible; these results have often never been published, or have been published incompletely and very slowly.
For this reason, a webGIS (“MappaGIS”, freely accessible at http://mappaproject.arch.unipi.it/?page_id=452) was created; it will be followed by the MOD (Mappa Open Data archaeological archive), the first Italian archive of open archaeological data, in line with the European directives on access to Public Administration data recently implemented by the Italian government as well (the beta version of the archive can be viewed at http://mappaproject.arch.unipi.it/?page_id=454). This first volume details the operational decisions that led to the creation of the webGIS: the software used, the system architecture, and the organisation of information and its structuring into various information layers. The creation of the webGIS also gave us the opportunity to set down a series of considerations alongside the work carried out by the MAPPA Laboratory researchers. We decided to publish these considerations with a view to promoting debate within the scientific community and, more generally, within the professional categories involved (e.g. public administrators, university researchers, archaeology professionals). This allowed us to address the critical aspects that emerged, such as the need to update archaeological excavation documentation and data archiving systems in order to bring them into line with the new standards made possible by IT development and, most of all, the need for wider and more rapid dissemination of information, without which research cannot truly progress. Indeed, it is by comparing and connecting new data in every possible and, at times, unexpected way that research can truly thrive.
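The idea of structuring records into thematic information layers, as the volume describes for the webGIS, can be illustrated with a small sketch. All field names, layer names and coordinates below are invented for the example and are not taken from MappaGIS:

```python
# Illustrative sketch: grouping point records into thematic information
# layers of GeoJSON-like features, one layer per intervention type.
# Record fields and values are invented for the example.
from collections import defaultdict

records = [
    {"id": 1, "type": "excavation", "x": 10.40, "y": 43.72, "period": "Roman"},
    {"id": 2, "type": "survey",     "x": 10.41, "y": 43.71, "period": "Medieval"},
    {"id": 3, "type": "excavation", "x": 10.39, "y": 43.72, "period": "Etruscan"},
]

def build_layers(records):
    """Group point records into one information layer per intervention type."""
    layers = defaultdict(list)
    for r in records:
        # Each layer holds simple GeoJSON-like point features.
        layers[r["type"]].append({
            "geometry": {"type": "Point", "coordinates": [r["x"], r["y"]]},
            "properties": {"id": r["id"], "period": r["period"]},
        })
    return dict(layers)

layers = build_layers(records)
print(sorted(layers))             # ['excavation', 'survey']
print(len(layers["excavation"]))  # 2
```

A real webGIS would serve such layers through a map server rather than in-memory dictionaries, but the grouping principle is the same.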

    The Thesaurus of Archaeological Toponymy


    Dewey linked data: Making connections with old friends and new acquaintances

    This paper explores the history, use cases, and future plans associated with the availability of the Dewey Decimal Classification (DDC) system as linked data. Parts of the DDC system have been available as linked data since 2009. Initial efforts included the DDC Summaries in eleven languages exposed as linked data in dewey.info. In 2010, the content of dewey.info was further extended by the addition of assignable numbers and captions from the Abridged Edition 14 data files in English, Italian, and Vietnamese. During 2012, we will add assignable numbers and captions from the latest full edition database, DDC 23. In addition to the “old friends” of different Dewey language versions, institutions such as the British Library and the Deutsche Nationalbibliothek have made use of Dewey linked data in bibliographic records and authority files, and AGROVOC has linked to our data at a general level. We expect to extend our linked data network shortly to “new acquaintances” such as GeoNames, ISO 639-3 language codes, and the Mathematics Subject Classification. In particular, the paper examines the linking process to GeoNames as an example of cross-domain vocabulary alignment. In addition to linking plans, the paper reports on use cases that facilitate machine-assisted categorization and support discovery in the semantic web environment.
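The cross-domain alignment to GeoNames mentioned above can be sketched in miniature. The records and URIs below are invented stand-ins (real dewey.info data is published as RDF/SKOS), and the naive label-matching heuristic is purely illustrative:

```python
# Illustrative sketch of cross-domain vocabulary alignment: proposing
# skos:closeMatch-style links between Dewey-like class captions and
# GeoNames-style place records by simple label matching. URIs and
# captions are invented for the example.

dewey = [
    ("http://dewey.info/class/945.5/", "Tuscany region"),
    ("http://dewey.info/class/943.6/", "Austria"),
]
geonames = [
    ("http://sws.geonames.org/3165361/", "Tuscany"),
    ("http://sws.geonames.org/2782113/", "Austria"),
]

def align(dewey_entries, geo_entries):
    """Emit a candidate link whenever a caption contains a place name."""
    links = []
    for d_uri, caption in dewey_entries:
        for g_uri, name in geo_entries:
            if name.lower() in caption.lower():
                links.append((d_uri, "skos:closeMatch", g_uri))
    return links

for triple in align(dewey, geonames):
    print(triple)
```

In practice, such candidate links would be reviewed by editors before publication; string containment alone is far too coarse for production alignment.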

    ODINet: an innovative framework for online access to and dissemination of structured and heterogeneous data

    ODINet is a research and development project, approved as part of the Regional Operational Programme through the European Regional Development Fund 2007-2013. The project involves the construction of a semantic search engine prototype able to catalog data in an ontological graph, to extract the most relevant information according to user requests, and to return it in a highly usable way. The application domain spans the social, economic and health sectors, so as to cover most of the data held by public bodies in the national context. The focus of this report is, first of all, the description of the semantic components of the platform, emphasizing how ontologies have been used to build an index in the form of a graph. We also present a description of our semantic searches and, finally, an analysis of the results obtained in the final stage of testing the ODINet prototype.
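As a rough illustration of cataloguing data in an ontological graph and retrieving it by concept, here is a minimal sketch. The toy ontology, concept names and dataset names are invented, not taken from ODINet:

```python
# Minimal sketch of a graph-shaped index: datasets are attached to
# ontology concepts, and a query for a concept also retrieves datasets
# filed under its subconcepts. All names are invented for the example.

subclass_of = {           # child -> parent in a toy ontology
    "hospital_admissions": "health",
    "vaccination_rates": "health",
    "unemployment": "economy",
}
datasets = {              # concept -> datasets catalogued under it
    "hospital_admissions": ["admissions_2012.csv"],
    "vaccination_rates": ["vax_by_region.csv"],
    "unemployment": ["unemployment_q3.csv"],
}

def search(concept):
    """Collect datasets for the concept and every concept beneath it."""
    hits = list(datasets.get(concept, []))
    for child, parent in subclass_of.items():
        if parent == concept:
            hits.extend(search(child))
    return sorted(hits)

print(search("health"))  # ['admissions_2012.csv', 'vax_by_region.csv']
```

A broad query such as "health" thus reaches data catalogued only under narrower concepts, which is the practical payoff of indexing against an ontology rather than flat keywords.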

    ODINet - Online Data Integration Network

    Along with the expansion of Open Data and in line with the latest EU directives on open access, the attention of public administrations, research bodies and business is turning to web publishing of data in open formats. However, a search engine specialized in datasets, with a role similar to that of Google for web pages, is not yet widespread. This article presents the Online Data Integration Network (ODINet) project, which aims to define a new technological framework for access to and online dissemination of structured and heterogeneous data through innovative methods of cataloging, searching and displaying data on the web. In this article, we focus on the semantic component of our platform, emphasizing how we built and used ontologies. We further describe the Social Network Analysis (SNA) techniques we exploited to analyze the resulting network and to retrieve the required information. The testing phase of the project, which is still in progress, has already demonstrated the validity of the ODINet approach.
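One simple example of the kind of SNA measure such a platform might apply is degree centrality. The graph below and the choice of measure are illustrative assumptions, not a description of ODINet's actual implementation:

```python
# Hedged sketch: degree centrality over a small undirected graph whose
# edges link datasets that share concepts. A well-connected dataset can
# be ranked higher in retrieval. Node names are invented.

edges = [
    ("census", "income"), ("census", "housing"),
    ("income", "housing"), ("income", "tax"),
]

def degree_centrality(edges):
    """Normalised degree: |neighbours| / (n - 1) for each node."""
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    n = len(neighbours)
    return {node: len(adj) / (n - 1) for node, adj in neighbours.items()}

dc = degree_centrality(edges)
print(max(dc, key=dc.get))  # 'income' is the best-connected node
```

Other SNA measures (betweenness, closeness) follow the same pattern: compute a per-node score over the concept-sharing graph, then use it to order results.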

    Bibliographic Control in the Digital Ecosystem

    With the contributions of international experts, the book aims to explore the new boundaries of universal bibliographic control. Bibliographic control is radically changing because the bibliographic universe is radically changing: resources, agents, technologies, standards and practices. Among the main topics addressed: library cooperation networks; legal deposit; national bibliographies; new tools and standards (IFLA LRM, RDA, BIBFRAME); authority control and new alliances (Wikidata, Wikibase, Identifiers); new ways of indexing resources (artificial intelligence); institutional repositories; new book supply chain; “discoverability” in the IIIF digital ecosystem; role of thesauri and ontologies in the digital ecosystem; bibliographic control and search engines

    Formal concept matching and reinforcement learning in adaptive information retrieval

    The superiority of the human brain in information retrieval (IR) tasks seems to come, firstly, from its ability to read and understand the concepts, ideas or meanings central to documents, in order to reason out the usefulness of documents to information needs, and, secondly, from its ability to learn from experience and adapt to its environment. In this work we attempt to incorporate these properties into the development of an IR model to improve document retrieval. We investigate the applicability of concept lattices, which are based on the theory of Formal Concept Analysis (FCA), to the representation of documents. This allows the use of more elegant representation units, as opposed to keywords, in order to better capture the concepts and ideas expressed in natural language text. We also investigate the use of a reinforcement learning strategy to learn and improve document representations, based on the information present in query statements and user relevance feedback. Features or concepts of each document/query, formulated using FCA, are weighted separately with respect to the documents they are in, and organised into separate concept lattices according to a subsumption relation. Furthermore, each concept lattice is encoded in a two-layer neural network structure known as a Bidirectional Associative Memory (BAM), for efficient manipulation of the concepts in the lattice representation. This avoids implementation drawbacks faced by other FCA-based approaches. Retrieval of a document for an information need is based on concept matching between the concept lattice representations of a document and a query. The learning strategy works by making the similarity of relevant documents stronger and that of non-relevant documents weaker for each query, depending on the users' relevance judgements on retrieved documents.
Our approach is radically different from existing FCA-based approaches in the following respects: concept formulation; weight assignment to object-attribute pairs; the representation of each document in a separate concept lattice; and the encoding of concept lattices in BAM structures. Furthermore, in contrast to the traditional relevance feedback mechanism, our learning strategy makes use of relevance feedback information to enhance document representations, thus making them dynamic and adaptive to user interactions. The results obtained on the CISI, CACM and ASLIB Cranfield collections are presented and compared with published results. In particular, the performance of the system is shown to improve significantly as the system learns from experience.
The School of Computing, University of Plymouth, UK
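The core FCA step underlying this line of work, deriving formal concepts (closed extent/intent pairs) from a document-term incidence table, can be sketched as follows. The tiny context is invented for illustration, and a brute-force enumeration is used rather than any of the thesis's optimised structures:

```python
# Sketch of Formal Concept Analysis over a small document-term context:
# a formal concept is a pair (extent, intent) where the extent is all
# documents sharing the intent, and the intent is all terms shared by
# the extent. Documents and terms are invented for the example.
from itertools import chain, combinations

context = {                      # document -> terms it contains
    "d1": {"retrieval", "lattice"},
    "d2": {"retrieval", "learning"},
    "d3": {"retrieval", "lattice", "learning"},
}

def common_terms(docs):
    """Intent: attributes shared by every document in `docs`."""
    if not docs:
        return set(chain.from_iterable(context.values()))
    return set.intersection(*(context[d] for d in docs))

def docs_with(terms):
    """Extent: documents containing every term in `terms`."""
    return {d for d, t in context.items() if terms <= t}

def formal_concepts():
    """All (extent, intent) pairs closed under the two derivation maps."""
    concepts = set()
    for r in range(len(context) + 1):
        for docs in combinations(sorted(context), r):
            intent = common_terms(docs)
            extent = docs_with(intent)          # closure of the chosen set
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```

Ordered by extent inclusion, these concepts form the concept lattice; the thesis goes further by weighting object-attribute pairs and encoding each lattice in a BAM, which this brute-force sketch does not attempt.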

    Product Family Design Knowledge Representation, Aggregation, Reuse, and Analysis

    A flexible information model for systematic development and deployment of product families during all phases of the product realization process is crucial for product-oriented organizations. In current practice, information captured while designing products in a family is often incomplete, unstructured, and mostly proprietary in nature, making it difficult to index, search, refine, reuse, distribute, browse, aggregate, and analyze knowledge across heterogeneous organizational information systems. To this end, we propose a flexible knowledge management framework to capture, reorganize, and convert both linguistic and parametric product family design information into a unified network, called a networked bill of material (NBOM), using formal concept analysis (FCA); to encode the NBOM as a cyclic, labeled graph using the Web Ontology Language (OWL) that designers can use to explore, search, and aggregate design information across different phases of product design as well as across multiple products in a product family; and to analyze the set of products in a product family based on both linguistic and parametric information. As part of the knowledge management framework, a PostgreSQL database schema has been formulated to serve as a central repository of product design knowledge, capable of housing the instances of the NBOM. Ontologies encoding the NBOM are utilized as a metalayer in the database schema to connect the design artifacts as part of a graph structure. Representing product families by preconceived common ontologies shows promise in promoting component sharing and in assisting designers in searching, exploring, and analyzing linguistic and parametric product family design information. An example involving a family of seven one-time-use cameras with different functions that satisfy a variety of customer needs is presented to demonstrate the implementation of the proposed framework.
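One NBOM-style aggregation described above, separating components shared across a product family from the differentiating modules, can be sketched as follows. Product and component names are invented, loosely echoing the camera example:

```python
# Hedged sketch: a bipartite product-component graph for a small product
# family, queried for platform (shared) versus variant components.
# Names are invented for the example, not taken from the paper's data.

nbom = {   # product -> components it uses
    "cam_basic": {"lens", "shutter", "body"},
    "cam_flash": {"lens", "shutter", "body", "flash"},
    "cam_zoom":  {"lens", "shutter", "body", "zoom_barrel"},
}

def shared_components(bom):
    """Components common to every product: candidates for a shared platform."""
    return set.intersection(*bom.values())

def variant_components(bom):
    """Components used by only some products: the differentiating modules."""
    return set.union(*bom.values()) - shared_components(bom)

print(sorted(shared_components(nbom)))   # ['body', 'lens', 'shutter']
print(sorted(variant_components(nbom)))  # ['flash', 'zoom_barrel']
```

This shared/variant split is the component-sharing signal the framework aims to surface; the full NBOM additionally attaches linguistic and parametric attributes to each node.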

    Towards the construction of a digital library

    A database of the “Antonio Zampolli Fund” has been created and the corresponding catalogue has been published. The work of analysing and selecting texts for cataloguing helped in creating this bibliography, built in large part on references extracted from books and journals. Very old bibliographical references have also been retrieved from curricula prepared by Professor Zampolli for various projects and commissions.