
    Interoperable Registers and Registries in the EU: Perspectives from INSPIRE

    INSPIRE is an EU-wide data and service infrastructure for the cross-border sharing of environmental data and for their use in support of policy-making. This paper introduces the context, requirements and issues for registers and registries in INSPIRE, including persistent identifiers, versioning, multilinguality, extensibility, linking to and alignment with existing registers, and cross-sector interoperability and re-use. In our presentation, besides highlighting open issues whose relevance extends beyond the scope of INSPIRE, we will report the results of an INSPIRE workshop on registers and registries held on 22-23 January 2014. JRC.H.6 - Digital Earth and Reference Data
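    As a minimal sketch of the register requirements listed above (persistent identifiers, versioning, multilinguality, alignment), the example below models a single code-list item in RDF with the Python rdflib library. The URIs, the versioning pattern, and the property choices are illustrative assumptions, not the actual INSPIRE registry data model.

```python
# A minimal sketch of a register item with a persistent identifier,
# a version, and multilingual labels. URIs and property choices are
# illustrative assumptions, not the INSPIRE registry's data model.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, RDF, SKOS

g = Graph()

# Hypothetical persistent identifier for a code-list value.
item = URIRef("http://example.org/registry/codelist/LandCover/forest")
# Hypothetical URI for a frozen version of the same item.
item_v2 = URIRef("http://example.org/registry/codelist/LandCover/forest/2")

g.add((item, RDF.type, SKOS.Concept))
# Multilingual labels (multilinguality requirement).
g.add((item, SKOS.prefLabel, Literal("forest", lang="en")))
g.add((item, SKOS.prefLabel, Literal("foresta", lang="it")))
# Versioning: the canonical PID stays stable and points at versions.
g.add((item, DCTERMS.hasVersion, item_v2))
# Alignment with an existing external register (linking requirement).
g.add((item, SKOS.closeMatch, URIRef("http://example.org/other-register/forest")))

print(g.serialize(format="turtle"))
```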

    Building the Synergy between Public Sector and Research Data Infrastructures

    INSPIRE is a European Directive aiming to establish an EU-wide spatial data infrastructure giving cross-border access to information that can be used to support EU environmental policies, as well as other policies and activities having an impact on the environment. In order to ensure cross-border interoperability of the data infrastructures operated by EU Member States, INSPIRE sets out a framework based on common specifications for metadata, data, network services, data and service sharing, and monitoring and reporting. The implementation of INSPIRE has reached important milestones: the INSPIRE Geoportal was launched in 2011, providing a single access point for the discovery of INSPIRE data and services across EU Member States (currently about 300K resources), and all the technical specifications for the interoperability of data across the 34 INSPIRE themes were adopted at the end of 2013. During this period a number of EU and international initiatives have been launched concerning cross-domain interoperability and (Linked) Open Data. In particular, the EU Open Data Portal, launched in December 2012, provides access to government and scientific data from EU institutions and bodies, and the EU ISA Programme (Interoperability Solutions for European Public Administrations) promotes cross-sector interoperability through the sharing and re-use of EU-wide and national standards and components. Moreover, the Research Data Alliance (RDA), an initiative jointly funded by the European Commission, the US National Science Foundation and the Australian Research Council, was launched in March 2013 to promote scientific data sharing and interoperability. The Joint Research Centre of the European Commission (JRC), besides being the technical coordinator of the implementation of INSPIRE, is also actively involved in the initiatives promoting cross-sector re-use in INSPIRE and sustainable approaches to the evolution of technologies - in particular, how to support Linked Data in INSPIRE and the use of global persistent identifiers. It is evident that government and scientific data infrastructures are currently facing a number of issues that have already been addressed in INSPIRE. Sharing experiences and competencies will avoid re-inventing the wheel and help promote the cross-domain adoption of consistent solutions. Indeed, one of the lessons learnt from INSPIRE and the initiatives in which the JRC is involved is that government and research data are not two separate worlds: government data are commonly used as a basis to create scientific data, and vice versa. Consequently, it is fundamental to adopt a consistent approach to the interoperability and data management issues shared by both government and scientific data. The presentation illustrates some of the lessons learnt during the implementation of INSPIRE and in work on data and service interoperability coordinated with European and international initiatives. We describe a number of critical interoperability issues and barriers affecting both scientific and government data, concerning, e.g., data terminologies, quality and licensing, and propose how these problems could be effectively addressed by a closer collaboration between the government and scientific communities and the sharing of experiences and practices. JRC.H.6 - Digital Earth and Reference Data
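    To make the discovery milestone concrete, the sketch below queries an OGC CSW catalogue of the kind the INSPIRE Geoportal federates, using the Python owslib library. The endpoint URL and search term are placeholders, not the Geoportal's actual service address.

```python
# A minimal sketch of metadata discovery against an OGC CSW catalogue,
# the kind of interface harvested from Member State infrastructures.
# The endpoint URL below is a placeholder.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# Hypothetical CSW endpoint; substitute a real national catalogue.
csw = CatalogueServiceWeb("https://example.org/csw")

# Search for metadata records whose title mentions land cover.
query = PropertyIsLike("dc:title", "%land cover%")
csw.getrecords2(constraints=[query], maxrecords=10)

for identifier, record in csw.records.items():
    print(identifier, "-", record.title)
```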

    BASILISCo: an advanced methodology for text complexity calculation

    This thesis introduces a novel strategy for tackling the problem of reading complexity, presenting an approach based on the analysis of the lexicon and of semantics. Contrary to state-of-the-art methods, which propose a single overall classification of reading complexity, this approach draws its strength from the independent analysis of the two domains. Thanks to this, a content creator interested in evaluating the complexity of his work can be given a more specific analysis of the document, one that clearly distinguishes between the different types of complexity. This proves to be of great benefit for the author, who can then adjust the complexity of his work according to the results provided by the software. The peculiarity of this approach, and the innovation it introduces, lies in how the two complexities are computed.
    Lexical Complexity has been implemented using a technique borrowed from a similar Natural Language Processing task: content selection. The two activities present similar needs: in content selection, we need to recognize the concepts that best distinguish a document, while in the assessment of lexical complexity we want to identify the words that best discriminate specific levels of complexity. Syntactic Complexity, instead, has been implemented using a deep-learning-based approach; the difficulty of the task made this choice almost mandatory. While it can be "simple" to associate a word with a specific level of complexity, the same does not hold for grammatical features, unless dedicated linguistic research is carried out. Given these premises, the choice of a system that can automatically infer the set of features characterizing each level of complexity is almost obligatory. The system has been implemented for English; however, it can easily be adapted to other languages by simply changing the underlying corpora. The entire process is, in fact, language-independent and can easily be transposed to any other language for which suitable corpora exist. This implies that the approach can also be applied in the context of Second Language Learning (L2 Learning)
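    As a rough illustration of the content-selection idea for lexical complexity (not the thesis's actual implementation), the sketch below scores words by how strongly they discriminate one complexity level from the others, using a TF-IDF-style weighting over level-labelled corpora. The corpora and level names are invented placeholders.

```python
# A rough sketch of content-selection-style lexical complexity scoring:
# treat each complexity level as one "document" and score words by how
# strongly they discriminate that level. Corpora here are placeholders.
from collections import Counter
import math

# Hypothetical level-labelled corpora (one token list per level).
corpora = {
    "elementary": "the cat sat on the mat the dog ran".split(),
    "intermediate": "the committee approved the proposal on schedule".split(),
    "advanced": "the ontological presuppositions undermine the proposal".split(),
}

counts = {level: Counter(tokens) for level, tokens in corpora.items()}
n_levels = len(corpora)

def discriminativeness(word: str, level: str) -> float:
    """TF-IDF-style score: frequent in this level, rare in the others."""
    tf = counts[level][word] / max(1, sum(counts[level].values()))
    df = sum(1 for c in counts.values() if c[word] > 0)
    idf = math.log(n_levels / df) if df else 0.0
    return tf * idf

for level in corpora:
    scored = {w: discriminativeness(w, level) for w in counts[level]}
    top = sorted(scored, key=scored.get, reverse=True)[:3]
    print(level, "->", top)
```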

    Inside the "African Cattle Complex": Animal Burials in the Holocene Central Sahara

    Cattle pastoralism is an important trait of African cultures. Ethnographic studies describe the central role played by domestic cattle within many societies, highlighting its social and ideological value well beyond its mere function as a 'walking larder'. The historical depth of this African legacy has been repeatedly assessed from an archaeological perspective, mostly emphasizing a continental vision. Nevertheless, in-depth site-specific studies are, with a few exceptions, lacking. Despite the long tradition of a multi-disciplinary approach to the analysis of pastoral systems in Africa, early and middle Holocene archaeological contexts rarely combine, in the same area, settlement, ceremonial and rock art features that can be explored multi-dimensionally: the Messak plateau in the Libyan central Sahara represents an outstanding exception. Known for its rich Pleistocene occupation and abundant Holocene rock art, the region, through our research, has also been shown to preserve the material evidence of a complex ritual dated to the Middle Pastoral (6080-5120 BP, or 5200-3800 BC). This ritual was centred on the frequent deposition of disarticulated animal remains, mostly cattle, in stone monuments. Animal burials are also known from other African contexts, but the regional extent of the phenomenon, the state of preservation of the monuments, and the associated rock art make the Messak case unique. GIS analysis, excavation data, radiocarbon dating, zooarchaeological and isotopic (Sr, C, O) analyses of animal remains, and botanical data are used to explore this highly formalized ritual and the lifestyle of a pastoral community in the Holocene Sahara

    RDF and PIDs for INSPIRE: a missing item in ARE3NA

    The presentation will outline intermediate results of a study on geospatial data sharing across borders and at European level. The study aims to develop a common approach to generating RDF schemas for representing INSPIRE data and metadata, as well as guidelines for the governance of persistent identifiers (PIDs). These are important elements for enabling the re-use of INSPIRE data in other sectors, in particular in e-government. The results of the study may feed into a proposal for additional encoding rules and guidelines for INSPIRE; the work is being performed in close collaboration with the INSPIRE Maintenance and Implementation Group and the ISA Programme’s Spatial Information and Services Working Group. JRC.H.6 - Digital Earth and Reference Data
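    As a sketch of what such an RDF encoding might look like (the study's actual schemas are not specified here), the example below maps a minimal INSPIRE-style metadata record to DCAT using rdflib. The dataset PID and the property choices are illustrative assumptions.

```python
# A minimal sketch of encoding an INSPIRE-style metadata record as RDF
# using DCAT. The PID scheme and property choices are illustrative
# assumptions, not the encoding rules the study will propose.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

g = Graph()

# Hypothetical persistent identifier for a spatial data set.
dataset = URIRef("http://example.org/id/dataset/hydrography-it")

g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Hydrography - Italy", lang="en")))
g.add((dataset, DCTERMS.identifier, Literal("hydrography-it")))
# Link to an INSPIRE theme concept (illustrative choice of property).
g.add((dataset, DCAT.theme, URIRef("http://inspire.ec.europa.eu/theme/hy")))

print(g.serialize(format="turtle"))
```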

    Social Search: retrieving information in Online Social Platforms -- A Survey

    Social Search research studies methodologies that exploit social information to better satisfy users' information needs on Online Social Media, while simplifying the search effort and consequently reducing the time spent and the computational resources used. Starting from previous studies, in this work we analyze the current state of the art of the Social Search area, proposing a new taxonomy and highlighting current limitations and open research directions. We divide the Social Search area into three subcategories in which the social aspect plays a pivotal role: Social Question&Answering, Social Content Search, and Social Collaborative Search. For each subcategory, we present the key concepts and selected representative approaches from the literature in greater detail. We found that, up to now, a large body of studies has modelled users' preferences and relations by simply combining the social features made available by social platforms. This paves the way for significant research on exploiting more structured information about users' social profiles and behaviours (as inferred from data available on social platforms) to better satisfy their information needs
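    To illustrate the "simple combination of social features" pattern the survey points to, the sketch below ranks candidate results by a weighted linear mix of content relevance and social signals. The feature names and weights are invented for illustration and do not come from any surveyed system.

```python
# A toy illustration of the feature-combination pattern the survey
# describes: ranking results by a weighted mix of content relevance
# and social signals. Features and weights are invented placeholders.
from dataclasses import dataclass

@dataclass
class Candidate:
    doc_id: str
    relevance: float         # query-document content similarity, in [0, 1]
    social_proximity: float  # closeness of the author to the searcher
    engagement: float        # normalized likes/shares/comments

# Hypothetical weights; real systems would tune or learn these.
WEIGHTS = {"relevance": 0.6, "social_proximity": 0.25, "engagement": 0.15}

def social_score(c: Candidate) -> float:
    """Linear combination of content and social features."""
    return (WEIGHTS["relevance"] * c.relevance
            + WEIGHTS["social_proximity"] * c.social_proximity
            + WEIGHTS["engagement"] * c.engagement)

candidates = [
    Candidate("post-1", relevance=0.9, social_proximity=0.1, engagement=0.3),
    Candidate("post-2", relevance=0.7, social_proximity=0.8, engagement=0.6),
]
for c in sorted(candidates, key=social_score, reverse=True):
    print(c.doc_id, round(social_score(c), 3))
```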

    JRC Data Policy

    The work on the JRC Data Policy followed the task identified in the JRC Management Plan 2014 to develop a dedicated data policy complementing the JRC Policy on Open Access to Scientific Publications and Supporting Guidance, and to promote open access to research data in the context of Horizon 2020. Important policy commitments and the relevant regulatory basis within the European Union and the European Commission include: the Commission Decision on the reuse of Commission documents, the Commission communication on better access to scientific information, the Commission communication on a reinforced European research area partnership for excellence and growth, the Commission recommendation on access to and preservation of scientific information, and the EU implementation of the G8 Open Data Charter. JRC.H.6 - Digital Earth and Reference Data