Collaborative recommendations with content-based filters for cultural activities via a scalable event distribution platform
Nowadays, most people have limited leisure time, while the offer of (cultural) activities to fill it is enormous. Consequently, picking the most appropriate events becomes increasingly difficult for end-users. This complexity of choice reinforces the need for filtering systems that assist users in finding and selecting relevant events. Whereas traditional filtering tools enable, e.g., keyword-based or filtered searches, innovative recommender systems draw on user ratings, preferences, and metadata describing the events. Existing collaborative recommendation techniques, developed for suggesting web-shop products or audio-visual content, have difficulties with sparse rating data and cannot cope at all with event-specific restrictions such as availability, time, and location. Moreover, aggregating, enriching, and distributing these events are additional requisites for an optimal communication channel. In this paper, we propose a highly scalable event recommendation platform that considers event-specific characteristics. Personal suggestions are generated by an advanced collaborative filtering algorithm, which is made more robust on sparse data by extending user profiles with presumable future consumptions. The events, described using an RDF/OWL representation of the EventsML-G2 standard, are categorized and enriched via smart indexing and linked open data sets. This metadata model enables additional content-based filters on the recommendation list that take event-specific characteristics into account. The integration of these different functionalities is realized by a scalable and extendable bus architecture. Finally, focus group conversations were organized with external experts, cultural mediators, and potential end-users to evaluate the event distribution platform and investigate the possible added value of recommendations for cultural participation.
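The core idea of the abstract above — densify sparse user profiles with presumable future consumptions, then post-filter recommendations on event-specific constraints — can be illustrated with a minimal sketch. This is not the authors' algorithm; all names (`extend_profile`, `recommend`, the 0.5 damping factor, the `ends` field) are illustrative assumptions.

```python
# Sketch: user-based collaborative filtering where profiles are first
# extended with pseudo-ratings ("presumable future consumptions") taken
# from similar users, then post-filtered on an event-specific constraint.
import math

def cosine(a, b):
    """Cosine similarity between two sparse rating dicts."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den

def extend_profile(user, ratings, top_k=1):
    """Add unseen events of the most similar users as damped pseudo-ratings."""
    sims = sorted(((cosine(ratings[user], ratings[u]), u)
                   for u in ratings if u != user), reverse=True)
    extended = dict(ratings[user])
    for sim, u in sims[:top_k]:
        for event, score in ratings[u].items():
            if event not in extended:
                extended[event] = 0.5 * sim * score  # damped pseudo-rating
    return extended

def recommend(user, ratings, events, now):
    """Rank unseen events, dropping those that have already ended."""
    profile = extend_profile(user, ratings)
    candidates = {e: s for e, s in profile.items()
                  if e not in ratings[user] and events[e]["ends"] >= now}
    return sorted(candidates, key=candidates.get, reverse=True)
```

The post-filter on `events[e]["ends"]` stands in for the richer content-based filters (availability, time, location) the platform applies on top of the collaborative scores.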
BlogForever: D3.1 Preservation Strategy Report
This report describes preservation planning approaches and strategies recommended by the BlogForever project as a core component of a weblog repository design. More specifically, we start by discussing why we would want to preserve weblogs in the first place, and what exactly it is that we are trying to preserve. We then present a review of past and present work and highlight why current practices in web archiving do not adequately address the needs of weblog preservation. We make three distinctive contributions in this volume: a) we propose transferable practical workflows for applying a combination of established metadata and repository standards in developing a weblog repository; b) we provide an automated, community-based approach to identifying significant properties of weblog content and discuss how this affects previous strategies; c) we propose a sustainability plan that draws upon community knowledge through innovative repository design.
D7.3 Training materials
This Deliverable gives a detailed description of the comprehensive training programme and of the open educational content that the University of Padua has produced so far for the project "Linked Heritage: Coordination of Standards and Technologies for the Enrichment of Europeana" (CIP Best Practice Network). The final version of D7.3 will be released by the end of the project, once all the Learning Objects are finished.
Building a Disciplinary, World-Wide Data Infrastructure
Sharing scientific data, with the objective of making it fully discoverable, accessible, assessable, intelligible, usable, and interoperable, requires work at the disciplinary level to define in particular how the data should be formatted and described. Each discipline has its own organization and history as a starting point, and this paper explores the way a range of disciplines, namely materials science, crystallography, astronomy, earth sciences, humanities and linguistics, get organized at the international level to tackle this question. In each case, the disciplinary culture with respect to data sharing, science drivers, organization and lessons learnt are briefly described, as well as the elements of the specific data infrastructure which are or could be shared with others. Commonalities and differences are assessed. Common key elements for success are identified: data sharing should be science driven; defining the disciplinary part of the interdisciplinary standards is mandatory but challenging; sharing of applications should accompany data sharing. Incentives such as journal and funding agency requirements are also similar. For all, it also appears that social aspects are more challenging than technological ones. Governance is more diverse, and linked to the discipline organization. CODATA, the RDA and the WDS can facilitate the establishment of disciplinary interoperability frameworks. Being problem-driven is also a key factor of success for building bridges to enable interdisciplinary research.
Comment: Proceedings of the session "Building a disciplinary, world-wide data infrastructure" of SciDataCon 2016, held in Denver, CO, USA, 12-14 September 2016, to be published in the ICSU CODATA Data Science Journal in 201
A large multilingual and multi-domain dataset for recommender systems
This paper presents a multi-domain interests dataset to train and test Recommender Systems, and the methodology used to create the dataset from Twitter messages in English and Italian. The English dataset includes an average of 90 preferences per user on music, books, movies, celebrities, sport, politics and much more, for about half a million users. Preferences are either extracted from messages of users who use Spotify, Goodreads and other similar content-sharing platforms, or induced from their "topical" friends, i.e., followees representing an interest rather than a social relation between peers. In addition, preferred items are matched with the Wikipedia articles describing them. This unique feature of our dataset provides a means to derive a semantic categorization of the preferred items, exploiting available semantic resources linked to Wikipedia such as the Wikipedia Category Graph, DBpedia, BabelNet and others.
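The semantic-categorization idea described above can be sketched in a few lines: once each preferred item is matched to a Wikipedia article, the article's categories yield a coarse interest profile per user. The `ARTICLE_CATEGORIES` mapping below is a toy stand-in for the Wikipedia Category Graph, not real data from the paper's dataset.

```python
# Sketch: derive a per-user interest profile by counting the (toy)
# Wikipedia categories of the articles matched to the user's preferred items.
from collections import Counter

ARTICLE_CATEGORIES = {  # toy stand-in for the Wikipedia Category Graph
    "Radiohead": ["English rock bands", "Music"],
    "1984 (novel)": ["Dystopian novels", "Books"],
    "Inter Milan": ["Football clubs", "Sport"],
}

def interest_profile(preferred_articles):
    """Count categories over a user's matched Wikipedia articles."""
    profile = Counter()
    for article in preferred_articles:
        profile.update(ARTICLE_CATEGORIES.get(article, []))
    return profile
```

In the real dataset the category lookup would go through resources such as DBpedia or BabelNet rather than a hand-made dictionary.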
On systematic approaches for interpreted information transfer of inspection data from bridge models to structural analysis
In conjunction with improved methods of monitoring damage and degradation processes, interest in the reliability assessment of reinforced concrete bridges has increased in recent years. Automated image-based inspections of the structural surface provide valuable data for extracting quantitative information about deteriorations, such as crack patterns. However, the knowledge gain results from processing this information in a structural context, i.e. relating the damage artifacts to building components; this enables the transfer to structural analysis. This approach sets two further requirements: availability of structural bridge information and standardized storage for interoperability with subsequent analysis tools. Since the large datasets involved can only be processed efficiently in an automated manner, this work targets the implementation of the complete workflow from damage and building data to structural analysis. First, domain concepts are derived from the back-end tasks: structural analysis, damage modeling, and life-cycle assessment. The common interoperability format, the Industry Foundation Classes (IFC), and the processes in these domains are further assessed. The need for user-controlled interpretation steps is identified, and the developed prototype therefore allows interaction at subsequent model stages. The latter has the advantage that interpretation steps can be individually separated into a structural analysis model, a damage information model, or a combination of both. This approach to damage information processing from the perspective of structural analysis is then validated in different case studies.
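The central step above, relating a damage artifact detected on the surface to a building component so it can travel into a structural analysis model, can be pictured with a minimal data sketch. This is not the paper's prototype; the class and field names (`Component`, `CrackDamage`, `guid`) are assumptions, with `guid` standing in for an IFC-style component identifier.

```python
# Sketch: attach a quantitative damage record (e.g. a crack extracted from
# image-based inspection) to the building component it was observed on.
from dataclasses import dataclass, field

@dataclass
class CrackDamage:
    width_mm: float   # crack width in millimetres
    length_m: float   # crack length in metres

@dataclass
class Component:
    guid: str         # stand-in for an IFC GlobalId
    name: str
    damages: list = field(default_factory=list)

def attach(component, damage):
    """Relate a damage artifact to its building component."""
    component.damages.append(damage)
    return component

girder = attach(Component("G-0001", "main girder"),
                CrackDamage(width_mm=0.3, length_m=1.2))
```

In the workflow described, such linked records would be serialized into an IFC-based damage information model before being handed to the analysis tools.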
A quantitative analysis of the impact of a computerised information system on nurses' clinical practice using a realistic evaluation framework
Objective: To explore nurses' perceptions of the impact on clinical practice of the use of a computerised hospital information system.
Design: A realistic evaluation design based on Pawson and Tilley's work has been used across all the phases of the study. This is a theory-driven approach and focuses evaluation on the study of what works, for whom and in what circumstances. These relationships are constructed as context-mechanisms-outcomes (CMO) configurations.
Measurements: A questionnaire was distributed to all nurses working in the in-patient units of a university hospital in Spain (n = 227). Quantitative data were analysed using SPSS 13.0. Descriptive statistics provided an overview of nurses' perceptions. Inferential analysis, including both bivariate and multivariate methods (path analysis), was used to cross-tabulate variables in search of CMO relationships.
Results: Nurses (n = 179) participated in the study (78.8% response rate). Overall satisfaction with the IT system was positive. Comparisons with context variables showed that the nursing units' context had a greater influence on perceptions than users' characteristics. Path analysis illustrated that the influence of unit context variables is on outcomes and not on mechanisms.
Conclusion: Results from the study, which looked at subtle variations between users and units, provide insight into how important professional culture and working practices can be in IT (information technology) implementation. The socio-technical approach to IT systems evaluation suggested in the recent literature appears to be an adequate theoretical underpinning for IT evaluation research. Realistic evaluation has proven to be an adequate method for IT evaluation. (C) 2009 Elsevier Ireland Ltd. All rights reserved.