    A Topic Recommender for Journalists

    The way in which people acquire information on events and form their own opinions about them has changed dramatically with the advent of social media. For many readers, the news gathered from online sources becomes an opportunity to share points of view and information within micro-blogging platforms such as Twitter, mainly aimed at satisfying their communication needs. Furthermore, the desire to explore the aspects of a news story in more depth creates a demand for additional information, which is often met through online encyclopedias such as Wikipedia. This behaviour has also influenced the way in which journalists write their articles, requiring a careful assessment of what actually interests the readers. The goal of this paper is to present a recommender system, What to Write and Why, capable of suggesting to a journalist, for a given event, the aspects that attract readers' interest but are still uncovered in news articles. The basic idea is to characterize an event according to the echo it receives in online news sources and to associate it with the corresponding readers' communicative and informative patterns, detected through the analysis of Twitter and Wikipedia, respectively. Our methodology temporally aligns the results of this analysis and recommends the concepts that emerge as topics of interest from Twitter and Wikipedia and are either not covered or poorly covered in the published news articles.
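
    A minimal sketch of the core idea described in the abstract, assuming per-concept interest and coverage scores have already been extracted for the event's time window; the score fields, threshold and example data are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: recommend concepts that attract reader interest on Twitter/Wikipedia
# but receive little coverage in the published news articles. All names, scores and
# thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ConceptSignal:
    concept: str
    twitter_interest: float    # e.g. normalised tweet volume in the event's time window
    wikipedia_interest: float  # e.g. normalised page-view count in the same window
    news_coverage: float       # e.g. fraction of event articles mentioning the concept

def recommend(signals, coverage_threshold=0.2, top_k=5):
    """Return the top-k concepts with high reader interest and low news coverage."""
    candidates = [s for s in signals if s.news_coverage < coverage_threshold]
    candidates.sort(key=lambda s: s.twitter_interest + s.wikipedia_interest, reverse=True)
    return [s.concept for s in candidates[:top_k]]

if __name__ == "__main__":
    signals = [
        ConceptSignal("evacuation routes", 0.8, 0.7, 0.05),
        ConceptSignal("press conference", 0.4, 0.1, 0.90),
        ConceptSignal("aid volunteers", 0.6, 0.5, 0.10),
    ]
    print(recommend(signals))  # concepts of interest that the articles barely cover
```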

    Annotating digital libraries and electronic editions in a collaborative and semantic perspective

    The distinction between digital libraries and electronic editions is becoming more and more subtle. The practice of annotation represents a point of convergence between two only apparently separate worlds. The aim of this paper is to present a model for collaborative semantic annotation of texts (the SemLib project), proposing a system that finds in Semantic Web and Linked Data the enabling technologies for structured semantic annotation, also in the field of electronic editions in the Digital Humanities domain. The main purpose of SemLib is to develop an application that makes it easy for developers to integrate annotation software into digital libraries, which differ both in technical implementation and in managed content, and to provide users, regardless of their cultural background, with a simple system that can be used as a front-end. To this end, we present a concluding example of semantic annotation in a specific context: a digital edition of a literary text and the issues that an annotation task involves.
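
    A minimal sketch of how a collaborative annotation on a text passage can be expressed as Linked Data; the vocabulary (Web Annotation) and URIs are one plausible choice for illustration, not SemLib's actual data model.

```python
# Hedged sketch: representing a text annotation as RDF with rdflib. Vocabulary and
# identifiers are illustrative, not SemLib's actual model.
from rdflib import Graph, Namespace, URIRef, Literal, RDF

OA = Namespace("http://www.w3.org/ns/oa#")
DCT = Namespace("http://purl.org/dc/terms/")

g = Graph()
g.bind("oa", OA)
g.bind("dct", DCT)

annotation = URIRef("http://example.org/anno/1")            # hypothetical identifiers
target = URIRef("http://example.org/edition/text1#line42")  # the annotated passage
body = URIRef("http://example.org/concepts/metaphor")       # the concept attached to it

g.add((annotation, RDF.type, OA.Annotation))
g.add((annotation, OA.hasTarget, target))
g.add((annotation, OA.hasBody, body))
g.add((annotation, DCT.creator, Literal("scholar@example.org")))

print(g.serialize(format="turtle"))
```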

    GRAPH CNN WITH RADIUS DISTANCE FOR SEMANTIC SEGMENTATION OF HISTORICAL BUILDINGS TLS POINT CLOUDS

    Abstract. Point clouds obtained via Terrestrial Laser Scanning (TLS) surveys of historical buildings are generally transformed into semantically structured 3D models with manual and time-consuming workflows. The importance of automating this process is widely recognized within the research community. Recently, deep neural architectures have been applied to the semantic segmentation of point clouds, but few studies have evaluated them in the Cultural Heritage domain, where complex shapes and mouldings make this task challenging. In this paper, we describe our experiments with the DGCNN architecture to semantically segment point clouds of historical buildings acquired with TLS. We propose a variation of the original approach in which a radius-distance-based technique is used instead of K-Nearest Neighbors (KNN) to represent the neighborhood of points. We show that our approach provides better results by evaluating it on two real TLS point clouds representing two Italian historical buildings: the Ducal Palace in Urbino and the Palazzo Ferretti in Ancona.
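
    A minimal sketch of the neighborhood-selection difference the abstract describes: KNN always returns a fixed number of neighbors regardless of point density, whereas a radius query keeps only points within a given metric distance. The parameter values are illustrative assumptions, not those used in the paper.

```python
# Hedged sketch: KNN vs radius-based point neighborhoods on a point cloud.
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(10000, 3)  # stand-in for a TLS point cloud (x, y, z)
tree = cKDTree(points)
query = points[0]

# K-Nearest Neighbors: always k points, even if some lie far away in sparse regions
_, knn_idx = tree.query(query, k=20)

# Radius-based neighborhood: all points within 5 cm (assuming metric coordinates),
# so the receptive field keeps a consistent physical size across the scan
radius_idx = tree.query_ball_point(query, r=0.05)

print(len(knn_idx), len(radius_idx))
```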

    Deep learning for semantic segmentation of 3D point cloud.

    Cultural Heritage is a testimony of past human activity and, as such, its objects exhibit great variety in their nature, size and complexity: from small artefacts and museum items to cultural landscapes, from historical buildings and ancient monuments to city centers and archaeological sites. Cultural Heritage around the globe suffers from wars, natural disasters and human negligence. The importance of digital documentation is well recognized, and there is increasing pressure to document our heritage both nationally and internationally. For this reason, the three-dimensional scanning and modeling of sites and artifacts of cultural heritage have remarkably increased in recent years. The semantic segmentation of point clouds is an essential step of the entire pipeline; in fact, it allows complex architectures to be decomposed into single elements, which are then enriched with meaningful information within Building Information Modelling software. Nevertheless, this step is very time-consuming and entirely entrusted to the manual work of domain experts, far from being automated. This work describes a method to automatically label and cluster a point cloud based on a supervised Deep Learning approach, using a state-of-the-art Neural Network called PointNet++. Although other methods are known, we chose PointNet++ as it has achieved significant results in classifying and segmenting 3D point clouds. PointNet++ has been tested and improved by training the network with annotated point clouds coming from a real survey and by evaluating how performance changes according to the input training data. This work can be of great interest to the research community dealing with point cloud semantic segmentation, since it makes publicly available a labelled dataset of CH elements for further tests.
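
    A minimal sketch of how per-class performance on a labelled point cloud can be evaluated, using per-class Intersection-over-Union over per-point labels; the class names, metric choice and random data are illustrative assumptions, not the evaluation protocol of this work.

```python
# Hedged sketch: per-class IoU between ground-truth and predicted per-point labels.
import numpy as np

def per_class_iou(ground_truth, predicted, num_classes):
    """IoU for each class over arrays of per-point integer labels."""
    ious = []
    for c in range(num_classes):
        gt, pr = ground_truth == c, predicted == c
        union = np.logical_or(gt, pr).sum()
        intersection = np.logical_and(gt, pr).sum()
        ious.append(intersection / union if union > 0 else float("nan"))
    return ious

classes = ["wall", "column", "vault", "floor"]           # hypothetical CH element classes
gt = np.random.randint(0, len(classes), size=100000)     # stand-in for manual labels
pred = np.random.randint(0, len(classes), size=100000)   # stand-in for network output
for name, iou in zip(classes, per_class_iou(gt, pred, len(classes))):
    print(f"{name}: IoU = {iou:.3f}")
```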

    HyperJournal software, PHP scripting and Semantic Web technologies for the Open Access

    In this article we present a high-level overview of the HyperJournal project, an effort to provide novel possibilities both in scientific publishing and in access to scientific contributions, in line with the Open Access movement's guidelines. All the work has been implemented using the PHP scripting language, interfacing with Java modules such as Sesame and RDFGrowth. Such interfaces, illustrated here, are of general use for projects with similar needs. While the HyperJournal project itself is still in its infancy, a first release is already available for download and public use, making it one of the few real and deployable examples of Semantic Web applications.
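
    Sesame exposes its repositories over an HTTP protocol, so a script in any language (PHP in HyperJournal's case; Python below purely for consistency with the other sketches) can interface with it by sending SPARQL queries to a repository endpoint. The server URL, repository name and query are hypothetical; this is not HyperJournal's actual interface code.

```python
# Hedged sketch: querying a Sesame repository over its HTTP endpoint with SPARQL.
# Endpoint URL and repository name are hypothetical.
import requests

SESAME_ENDPOINT = "http://localhost:8080/openrdf-sesame/repositories/hyperjournal"
query = (
    "SELECT ?article ?title WHERE { "
    "?article <http://purl.org/dc/elements/1.1/title> ?title } LIMIT 10"
)

response = requests.get(
    SESAME_ENDPOINT,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(binding["article"]["value"], "-", binding["title"]["value"])
```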

    Curating a Document Collection via Crowdsourcing with Pundit 2.0

    Pundit 2.0 is a semantic web annotation system that supports users in creating structured data on top of web pages. Annotations in Pundit are RDF triples that users build starting from web page elements, such as text or images. Annotations can be made public, and developers can access and combine them into RDF knowledge graphs, while the authorship of each triple always remains retrievable. In this demo we showcase Pundit 2.0 and demonstrate how it can be used to enhance a digital library by providing a data crowdsourcing platform. Pundit enables users to annotate different kinds of entities and to contribute to the collaborative creation of a knowledge graph. This, in turn, refines in real time the exploration functionalities of the library's faceted search, providing immediate added value from the annotation effort. Ad-hoc configurations can be used to drive specific visualisations, like the timeline-map shown in this demo.
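
    A minimal sketch of the idea that crowdsourced triples can be merged into one knowledge graph while the authorship of each statement stays retrievable, here modelled with per-user named graphs; the vocabulary and URIs are illustrative assumptions, not Pundit's actual data model.

```python
# Hedged sketch: annotation triples from different users kept in per-user named graphs,
# queried as one knowledge graph with authorship (graph name) preserved.
from rdflib import Dataset, Namespace, URIRef, Literal

EX = Namespace("http://example.org/")
ds = Dataset()

# Each contributor's annotations live in their own named graph
alice = ds.graph(URIRef("http://example.org/graphs/alice"))
bob = ds.graph(URIRef("http://example.org/graphs/bob"))
alice.add((EX.letter12, EX.mentions, EX.Venice))
bob.add((EX.letter12, EX.writtenIn, Literal("1575")))

# A SPARQL query over the combined dataset also reports which graph (and hence which
# author) each statement comes from -- structured data a faceted search can consume.
results = ds.query("""
    SELECT ?g ?p ?o WHERE {
        GRAPH ?g { <http://example.org/letter12> ?p ?o }
    }
""")
for g, p, o in results:
    print(g, p, o)
```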