LocLinkVis: A Geographic Information Retrieval-Based System for Large-Scale Exploratory Search
In this paper we present LocLinkVis (Locate-Link-Visualize), a system which
supports exploratory information access to a document collection based on
geo-referencing and visualization. It uses a gazetteer containing
representations of places ranging from countries to buildings, which is used
to recognize toponyms, disambiguate them into places, and visualize the
resulting spatial footprints.
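The gazetteer-driven pipeline described above (recognize toponyms, disambiguate them into places, map them to footprints) can be sketched as follows. The gazetteer entries and the population-based disambiguation heuristic are illustrative assumptions for this sketch, not LocLinkVis's actual data or logic.

```python
# Minimal sketch of gazetteer-based toponym resolution.
# The gazetteer contents and the disambiguation heuristic are hypothetical.

# Each toponym maps to candidate places at different granularities.
GAZETTEER = {
    "Paris": [
        {"name": "Paris", "type": "city", "country": "FR", "population": 2_100_000},
        {"name": "Paris", "type": "city", "country": "US", "population": 25_000},
    ],
    "Louvre": [
        {"name": "Louvre", "type": "building", "country": "FR", "population": 0},
    ],
}

def recognize_toponyms(text):
    """Return gazetteer names that occur in the text (naive string matching)."""
    return [name for name in GAZETTEER if name in text]

def disambiguate(toponym):
    """Pick the most likely place; here: largest population as a crude prior."""
    candidates = GAZETTEER.get(toponym, [])
    return max(candidates, key=lambda p: p["population"], default=None)

text = "The exhibition moved from the Louvre to a gallery outside Paris."
footprints = {t: disambiguate(t) for t in recognize_toponyms(text)}
```

A real system would use fuzzy matching and contextual disambiguation (e.g. co-occurring place names) rather than exact lookup and a population prior.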
BlogForever D2.4: Weblog spider prototype and associated methodology
The purpose of this document is to present the evaluation of different solutions for capturing blogs, to establish a methodology, and to describe the developed blog spider prototype.
Building an Archive with Saada
Saada transforms a set of heterogeneous FITS files or VOTables of various
categories (images, tables, spectra, ...) into a database without writing code.
Databases created with Saada come with a rich Web interface and an Application
Programming Interface (API). They support the four most common VO services.
Such databases can mix various categories of data in multiple collections. They
allow direct access to the original data while providing a homogeneous view
thanks to an internal data model compatible with the characterization axes
defined by the VO. The data collections can be bound to each other with
persistent links, creating relevant browsing paths and allowing data-mining-oriented queries.
HaIRST: Harvesting Institutional Resources in Scotland Testbed. Final Project Report
The HaIRST project conducted research into the design, implementation and deployment of a pilot service for UK-wide access of autonomously created institutional resources in Scotland, the aim being to investigate and advise on some of the technical, cultural, and organisational requirements associated with the deposit, disclosure, and discovery of institutional resources in the JISC Information Environment. The project involved a consortium of Scottish higher and further education institutions, with significant assistance from the Scottish Library and Information Council.

The project investigated the use of technologies based on the Open Archives Initiative (OAI), including the implementation of OAI-compatible repositories for metadata which describe and link to institutional digital resources, the use of the OAI protocol for metadata harvesting (OAI-PMH) to automatically copy the metadata from multiple repositories to a central repository, and the creation of a service to search and identify resources described in the central repository. An important aim of the project was to identify issues of metadata interoperability arising from the requirements of individual institutional repositories and their impact on services based on the aggregation of metadata through harvesting. The project also sought to investigate issues in using these technologies for a wide range of resources including learning, teaching and administrative materials as well as the research and scholarly communication materials considered by many of the other projects in the JISC Focus on Access to Institutional Resources (FAIR) Programme, of which HaIRST was a part. The project tested and implemented a number of open source software packages supporting OAI, and was successful in creating a pilot service which provides effective information retrieval of a range of resources created by the project consortium institutions.

The pilot service has been extended to cover research and scholarly communication materials produced by other Scottish universities, and administrative materials produced by a non-educational institution in Scotland. It is an effective testbed for further research and development in these areas. The project has worked extensively with a new OAI standard for 'static repositories' which offers a low-barrier, low-cost mechanism for participation in OAI-based consortia by smaller institutions with a low volume of resources. The project identified and successfully tested tools for transforming pre-existing metadata into a format compliant with OAI standards. The project identified and assessed OAI-related documentation in English from around the world, and has produced metadata for retrieving and accessing it.

The project created a Web-based advisory service for institutions and consortia. The OAI Scotland Information Service (OAISIS) provides links to related standards, guidance and documentation, and discusses the findings of HaIRST relating to interoperability and the pilot harvesting service.

The project found that open source packages relating to OAI can be installed and made to interoperate to create a viable method of sharing institutional resources within a consortium. HaIRST identified issues affecting the interoperability of shared metadata and suggested ways of resolving them to improve the effectiveness and efficiency of shared information retrieval environments based on OAI. The project demonstrated that application of OAI technologies to administrative materials is an effective way for institutions to meet obligations under Freedom of Information legislation.
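The OAI-PMH harvesting step such projects rely on can be sketched with the standard library: construct a `ListRecords` request against a repository's base URL and parse Dublin Core metadata out of the response. The endpoint URL and the embedded sample response below are illustrative assumptions; a real harvester must also handle resumption tokens, sets, and OAI error responses.

```python
# Hedged sketch of OAI-PMH harvesting: build a ListRecords request URL and
# extract dc:title values from a (sample, hard-coded) response document.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return f"{base_url}?{urlencode(params)}"

DC_TITLE = "{http://purl.org/dc/elements/1.1/}title"

def extract_titles(oai_xml):
    """Pull dc:title values out of a ListRecords response."""
    root = ET.fromstring(oai_xml)
    return [t.text for t in root.iter(DC_TITLE)]

# Minimal sample response (one record, Dublin Core metadata).
SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><metadata>
      <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                 xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Sample institutional resource</dc:title>
      </oai_dc:dc>
    </metadata></record>
  </ListRecords>
</OAI-PMH>"""

url = list_records_url("https://repo.example.ac.uk/oai")  # placeholder endpoint
titles = extract_titles(SAMPLE)
```

Harvesting on this model is what lets a central service aggregate metadata from many autonomous repositories without touching the resources themselves.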
D5.3 Overview of Online Tutorials and Instruction Manuals
Funding: UIDB/03213/2020, UIDP/03213/2020

The ELEXIS Curriculum is an integrated set of training materials which contextualizes ELEXIS tools and services inside a broader, systematic pedagogic narrative. This means that the goal of the ELEXIS Curriculum is not simply to inform users about the functionalities of particular tools and services developed within the project, but to show how such tools and services are a) embedded in both lexicographic theory and practice; and b) representative of and contributing to the development of digital skills among lexicographers. The scope and rationale of the curriculum are described in more detail in Deliverable D5.2, Guidelines for Producing ELEXIS Tutorials and Instruction Manuals. The goal of this deliverable, as stated in the project DOW, is to provide “a clear, structured overview of tutorials and instruction manuals developed within the project.”
Feature Extraction and Duplicate Detection for Text Mining: A Survey
Text mining, also known as Intelligent Text Analysis, is an important research area. It is very difficult to focus on the most appropriate information due to the high dimensionality of data. Feature extraction is one of the important data-reduction techniques for discovering the most important features. Processing massive amounts of data stored in an unstructured form is a challenging task. Several pre-processing methods and algorithms are needed to extract useful features from huge amounts of data. The survey covers different text summarization, classification, and clustering methods to discover useful features, and also covers discovering query facets, which are multiple groups of words or phrases that explain and summarize the content covered by a query, thereby reducing the time taken by the user. When dealing with collections of text documents, it is also very important to filter out duplicate data. Once duplicates are deleted, it is recommended to replace the removed duplicates. Hence we also review the literature on duplicate detection and data fusion (removing and replacing duplicates). The survey provides existing text mining techniques to extract relevant features, detect duplicates, and replace duplicate data to deliver fine-grained knowledge to the user.
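One common duplicate-detection approach such surveys review is word shingling combined with Jaccard similarity, with a threshold deciding when two documents count as near-duplicates. The sketch below is illustrative and not taken from the survey itself; the shingle size and threshold are arbitrary assumptions.

```python
# Illustrative near-duplicate detection: k-word shingles + Jaccard similarity.
# Shingle size and similarity threshold are assumptions for this sketch.

def shingles(text, k=3):
    """Set of k-word shingles; whitespace tokenization keeps the sketch simple."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dedupe(docs, threshold=0.8):
    """Keep each document unless it near-duplicates one already kept."""
    kept = []
    for doc in docs:
        s = shingles(doc)
        if all(jaccard(s, shingles(k)) < threshold for k in kept):
            kept.append(doc)
    return kept

docs = [
    "feature extraction reduces the dimensionality of text data",
    "feature extraction reduces the dimensionality of text data",  # exact dup
    "query facets summarize the content covered by a search query",
]
unique = dedupe(docs)
```

At scale, the pairwise comparison in `dedupe` is usually replaced by MinHash or locality-sensitive hashing, which approximate Jaccard similarity without comparing every pair.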