80,092 research outputs found
Metadata enrichment for digital heritage: users as co-creators
This paper espouses the concept of metadata enrichment through an expert- and user-focused approach to metadata creation and management. To this end, it is argued that the Web 2.0 paradigm enables users to be proactive metadata creators. As Shirky (2008, p. 47) argues, Web 2.0's social tools enable "action by loosely structured groups, operating without managerial direction and outside the profit motive". Lagoze (2010, p. 37) advises that "the participatory nature of Web 2.0 should not be dismissed as just a popular phenomenon [or fad]". Carletti (2016) proposes a participatory digital cultural heritage approach in which Web 2.0 techniques such as "heritage crowdsourcing, community-centred projects or other forms of public participation" can be used to enrich digital cultural objects. On the other hand, the new collaborative approaches of Web 2.0 neither negate nor replace contemporary standards-based metadata approaches. Hence, this paper proposes a mixed metadata approach in which user-created metadata augments expert-created metadata and vice versa. The metadata creation process is no longer the sole prerogative of the metadata expert; the Web 2.0 collaborative environment allows users to participate in both adding and re-using metadata. Expert-created (standards-based, top-down) and user-generated (socially-constructed, bottom-up) approaches to metadata are complementary rather than mutually exclusive, although the two are often, albeit incorrectly, considered dichotomous (Gruber, 2007; Wright, 2007).
This paper espouses the importance of enriching digital information objects with descriptions pertaining to the about-ness of information objects. Such richness and diversity of description, it is argued, can chiefly be achieved by involving users in the metadata creation process. The paper presents the importance of the paradigms of metadata enriching and metadata filtering for the cultural heritage domain. Metadata enriching states that a priori metadata, instantiated and granularly structured by metadata experts, is continually enriched through socially-constructed (post-hoc) metadata, whereby users are proactively engaged in co-creating metadata. The principle also states that enriched metadata is contextually and semantically linked and openly accessible. Metadata filtering, in turn, states that the metadata resulting from the enriching principle should be displayed for users in line with their needs and convenience. In both enriching and filtering, users should be considered prosumers, resulting in what is called collective metadata intelligence.
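The enriching-then-filtering pipeline described above can be sketched in a few lines. This is an illustrative sketch only: the field names, record, and tags are invented, and the paper itself does not prescribe an implementation.

```python
# Illustrative sketch of metadata enriching and filtering: expert-created
# (a priori) metadata is augmented with user-contributed (post-hoc) tags,
# then filtered down to the fields a given user needs. All field names
# and values below are hypothetical.

def enrich(expert_record, user_tags):
    """Augment expert metadata with socially-constructed user tags."""
    enriched = dict(expert_record)
    enriched["user_tags"] = sorted(set(user_tags))  # de-duplicate tags
    return enriched

def filter_view(record, wanted_fields):
    """Metadata filtering: display only the fields a user asked for."""
    return {k: v for k, v in record.items() if k in wanted_fields}

expert = {"title": "Bronze mirror", "creator": "Unknown", "date": "c. 200 BCE"}
record = enrich(expert, ["archaeology", "etruscan", "mirror", "etruscan"])
print(filter_view(record, {"title", "user_tags"}))
```

The point of the sketch is the division of labour: the expert record stays intact while user contributions accumulate alongside it, and filtering is a separate, per-user view rather than a change to the stored metadata.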
MORMED: towards a multilingual social networking platform facilitating medicine 2.0
The broad adoption of Web 2.0 tools has signalled a new era of "Medicine 2.0" in the field of medical informatics. The support for collaboration within online communities and the sharing of information in social networks offers the opportunity for new communication channels among patients, medical experts, and researchers. This paper introduces MORMED, a novel multilingual social networking and content management platform that exemplifies the Medicine 2.0 paradigm, and aims to achieve knowledge commonality by promoting sociality, while also transcending language barriers through automated translation. The MORMED platform will be piloted in a community interested in the treatment of rare diseases (Lupus or Antiphospholipid Syndrome).
Integration and mining of malaria molecular, functional and pharmacological data: how far are we from a chemogenomic knowledge space?
The organization and mining of malaria genomic and post-genomic data is
highly motivated by the necessity to predict and characterize new biological
targets and new drugs. Biological targets are sought in a biological space
designed from the genomic data from Plasmodium falciparum, but using also the
millions of genomic data from other species. Drug candidates are sought in a
chemical space containing the millions of small molecules stored in public and
private chemolibraries. Data management should therefore be as reliable and
versatile as possible. In this context, we examined five aspects of the
organization and mining of malaria genomic and post-genomic data: 1) the
comparison of protein sequences including compositionally atypical malaria
sequences, 2) the high throughput reconstruction of molecular phylogenies, 3)
the representation of biological processes particularly metabolic pathways, 4)
the versatile methods to integrate genomic data, biological representations and
functional profiling obtained from X-omic experiments after drug treatments and
5) the determination and prediction of protein structures and their molecular
docking with drug candidate structures. Progress toward a grid-enabled
chemogenomic knowledge space is discussed.
Comment: 43 pages, 4 figures, to appear in Malaria Journal
Human Computation and Convergence
Humans are the most effective integrators and producers of information,
directly and through the use of information-processing inventions. As these
inventions become increasingly sophisticated, the substantive role of humans in
processing information will tend toward capabilities that derive from our most
complex cognitive processes, e.g., abstraction, creativity, and applied world
knowledge. Through the advancement of human computation - methods that leverage
the respective strengths of humans and machines in distributed
information-processing systems - formerly discrete processes will combine
synergistically into increasingly integrated and complex information processing
systems. These new, collective systems will exhibit an unprecedented degree of
predictive accuracy in modeling physical and techno-social processes, and may
ultimately coalesce into a single unified predictive organism, with the
capacity to address society's most wicked problems and achieve planetary
homeostasis.
Comment: Pre-publication draft of chapter. 24 pages, 3 figures; added references to pages 1 and 3, and corrected typos
Do we really need to catch them all? A new User-guided Social Media Crawling method
With the growing use of popular social media services like Facebook and
Twitter it is challenging to collect all content from the networks without
access to the core infrastructure or paying for it. Thus, if all content cannot
be collected one must consider which data are of most importance. In this work
we present a novel User-guided Social Media Crawling method (USMC) that is able
to collect data from social media, utilizing the wisdom of the crowd to decide
the order in which user generated content should be collected to cover as many
user interactions as possible. USMC is validated by crawling 160 public
Facebook pages, containing content from 368 million users including 1.3 billion
interactions, and it is compared with two other crawling methods. The results
show that it is possible to cover approximately 75% of the interactions on a
Facebook page by sampling just 20% of its posts, and at the same time reduce
the crawling time by 53%. In addition, the social network constructed from the
20% sample contains more than 75% of the users and edges compared to the social
network created from all posts, and it has a similar degree distribution.
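The core idea of USMC — rank content by a crowd-interest signal and crawl only the top fraction — can be sketched as follows. This is not the authors' implementation; the posts, the use of like counts as the crowd signal, and the 40% sampling fraction are all invented for illustration.

```python
# Hypothetical sketch of the USMC idea: order posts by a wisdom-of-the-crowd
# proxy (here, like counts), crawl only the top fraction, and measure how
# many of all user interactions that sample covers. Post data is invented.

def usmc_sample(posts, fraction=0.2, key=lambda p: p["likes"]):
    """Order posts by a crowd signal and keep the top `fraction` of them."""
    ranked = sorted(posts, key=key, reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return ranked[:cutoff]

def interaction_coverage(sample, posts):
    """Share of all interactions captured by the sampled posts."""
    total = sum(p["likes"] + p["comments"] for p in posts)
    covered = sum(p["likes"] + p["comments"] for p in sample)
    return covered / total if total else 0.0

posts = [
    {"id": 1, "likes": 900, "comments": 150},
    {"id": 2, "likes": 40, "comments": 5},
    {"id": 3, "likes": 700, "comments": 90},
    {"id": 4, "likes": 10, "comments": 2},
    {"id": 5, "likes": 30, "comments": 3},
]
sample = usmc_sample(posts, fraction=0.4)
print(interaction_coverage(sample, posts))  # most interactions from 40% of posts
```

Because interactions on social media are heavily skewed toward a few popular posts, sampling the top-ranked minority captures a disproportionate share of the total, which is the effect the abstract reports (about 75% of interactions from 20% of posts).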
How algorithmic popularity bias hinders or promotes quality
Algorithms that favor popular items are used to help us select among many
choices, from engaging articles on a social media news feed to songs and books
that others have purchased, and from top-ranked search engine results to
highly-cited scientific papers. The goal of these algorithms is to identify
high-quality items such as reliable news, beautiful movies, prestigious
information sources, and important discoveries; in short, high-quality
content should rank at the top. Prior work has shown that choosing what is
popular may amplify random fluctuations and ultimately lead to sub-optimal
rankings. Nonetheless, it is often assumed that recommending what is popular
will help high-quality content "bubble up" in practice. Here we identify the
conditions in which popularity may be a viable proxy for quality content by
studying a simple model of cultural market endowed with an intrinsic notion of
quality. A parameter representing the cognitive cost of exploration controls
the critical trade-off between quality and popularity. We find a regime of
intermediate exploration cost where an optimal balance exists, such that
choosing what is popular actually promotes high-quality items to the top.
Outside of these limits, however, popularity bias is more likely to hinder
quality. These findings clarify the effects of algorithmic popularity bias on
quality outcomes, and may inform the design of more principled mechanisms for
techno-social cultural markets.
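A toy simulation conveys the trade-off the abstract describes. This is a sketch in the spirit of the model, not the authors' exact formulation: items carry an intrinsic quality, and each agent either imitates popularity or explores independently, with `explore_prob` standing in (inversely) for the cognitive cost of exploration.

```python
import random

# Toy cultural-market simulation (illustrative, not the paper's model):
# items have intrinsic quality; each agent either copies what is already
# popular or explores on its own. Higher explore_prob = lower cognitive
# cost of exploration.

def simulate(qualities, n_agents=10_000, explore_prob=0.3, seed=0):
    rng = random.Random(seed)
    popularity = [1] * len(qualities)  # pseudo-count prior for each item
    for _ in range(n_agents):
        if rng.random() < explore_prob:
            # exploration: choose in proportion to intrinsic quality
            choice = rng.choices(range(len(qualities)), weights=qualities)[0]
        else:
            # popularity bias: choose in proportion to current popularity
            choice = rng.choices(range(len(qualities)), weights=popularity)[0]
        popularity[choice] += 1
    return popularity

qualities = [0.9, 0.5, 0.1]
pop = simulate(qualities)
print(pop)  # with enough exploration, the highest-quality item tends to lead
```

Lowering `explore_prob` toward zero makes the outcome depend almost entirely on early random fluctuations (popularity lock-in), while raising it removes the amplification benefit of popularity altogether; the abstract's finding is that an intermediate regime best promotes quality.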
Designing novel applications inspired by emerging media technologies
The field of Human-Computer Interaction provides a number of useful tools and methods for obtaining information on end-users and their usage context to inform the design of computer systems, yet relatively little is known about how to go about designing a completely novel application where there is no user base and no existing practice of use available at the start. The success of the currently available HCI methodology that focuses on understanding users' needs and establishing requirements is well-deserved in making computing applications usable, in terms of fitting them to end-users' usage contexts. However, too much emphasis on identifying user needs tends to stifle other, more exploratory design activities in which new types of applications are invented in order to discover or create activities not currently practiced. In this paper, we argue that a great starting point for novel application design is not the problem space (trying to rigorously define the user requirements) but the solution space (trying to leverage emerging computational technologies and growing design knowledge for various interaction platforms), and we build a foundation for a pragmatic design methodology supported by the authors' extensive experience in designing novel applications inspired by emerging media technologies.
Exploring manuscripts: sharing ancient wisdoms across the semantic web
Recent work in digital humanities has seen researchers increasingly producing online editions of texts and manuscripts, particularly in adoption of the TEI XML format for online publishing. The benefits of semantic web techniques are underexplored in such research, however, with a lack of sharing and communication of research information. The Sharing Ancient Wisdoms (SAWS) project applies linked data practices to enhance and expand on what is possible with these digital text editions. Focussing on Greek and Arabic collections of ancient wise sayings, which are often related to each other, we use RDF to annotate and extract semantic information from the TEI documents as RDF triples. This allows researchers to explore the conceptual networks that arise from these interconnected sayings. The SAWS project advocates a semantic-web-based methodology, enhancing rather than replacing current workflow processes, for digital humanities researchers to share their findings and collectively benefit from each other's work.
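The extraction step described above — pulling relations between sayings out of TEI markup and serializing them as RDF triples — can be sketched minimally with the standard library. The TEI fragment, the `example.org` base URI, and the `isRelatedTo` property are illustrative assumptions, not the SAWS project's actual vocabulary.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of the TEI-to-RDF idea: read `corresp` links between
# <seg> elements in a TEI fragment and emit them as N-Triples. The TEI
# snippet, base URI, and property URI below are hypothetical.

TEI = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body>
    <seg xml:id="saying-1" corresp="#saying-2">Know thyself.</seg>
    <seg xml:id="saying-2">Nothing in excess.</seg>
  </body></text>
</TEI>"""

TEI_NS = "{http://www.tei-c.org/ns/1.0}"
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"  # expanded xml:id
BASE = "http://example.org/saws/"                    # hypothetical base URI
REL = "http://example.org/saws/isRelatedTo"          # hypothetical property

def tei_to_triples(tei_xml):
    """Return (subject, predicate, object) URI triples for corresp links."""
    root = ET.fromstring(tei_xml)
    triples = []
    for seg in root.iter(TEI_NS + "seg"):
        sid, corresp = seg.get(XML_ID), seg.get("corresp")
        if sid and corresp:
            target = corresp.lstrip("#")  # corresp points at an xml:id
            triples.append((BASE + sid, REL, BASE + target))
    return triples

for s, p, o in tei_to_triples(TEI):
    print(f"<{s}> <{p}> <{o}> .")
```

Annotating rather than rewriting is the key property here: the TEI documents stay the canonical editions, and the triples form a separate, queryable layer over them, which matches the "enhancing rather than replacing current workflow processes" stance of the abstract.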