Structuring and extracting knowledge for the support of hypothesis generation in molecular biology
Background: Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The need for automated support is exemplified by the difficulty of considering all relevant facts contained in the millions of documents available from PubMed. The Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from the literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit control over the modeling and extraction processes, we seek a methodology that gives the experimenter control over these critical processes. Results: We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and subsequent analysis of knowledge extracted from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model of putative biological relations, with each relation linked to its corresponding evidence. Conclusion: We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes.
Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-Kappa B, p21, and Bax) that potentially play a role in the interplay between nutrients and epigenetic gene regulation.
Discovering the Impact of Knowledge in Recommender Systems: A Comparative Study
Recommender systems engage user profiles and appropriate filtering techniques to assist users in finding relevant information within the large volume of information available. User profiles play an important role in the success of the recommendation process since they model and represent actual user needs. However, a comprehensive literature review of recommender systems reveals no concrete study on the role and impact of knowledge in user profiling and filtering approaches. In this paper, we review the most prominent recommender systems in the literature and examine the impact of knowledge extracted from different sources. We find that semantic information from the user context has a substantial impact on the performance of knowledge-based recommender systems. Finally, we propose some new directions for improving knowledge-based profiles.
Comment: 14 pages, 3 tables; International Journal of Computer Science & Engineering Survey (IJCSES) Vol.2, No.3, August 201
Is adaptation of e-advertising the way forward?
E-advertising is a multi-billion-dollar industry that has shown exponential growth in recent years. However, although the number of users accessing the Internet keeps increasing, users do not respond positively to adverts. Adaptive e-advertising may be the key to ensuring that ads reach their target effectively. Moreover, social networks are good sources of user information and can be used to extract user behaviour and characteristics for the presentation of personalized advertising. Here we present a two-sided study based on two questionnaires, one directed at Internet users and the other at businesses. Our study shows that businesses agree that personalized advertising is the way forward to maximize effectiveness and profit. In addition, our results indicate that most Internet users would prefer adaptive advertisements. From this study, we propose a new design for a system that meets both Internet users' and businesses' requirements.
The effectiveness of web-based interventions designed to decrease alcohol consumption – a systematic review
OBJECTIVE
To review the published literature on the effectiveness of web-based interventions designed to decrease consumption of alcohol and/or prevent alcohol abuse.
METHOD
Relevant articles published up to, and including, May 2006 were identified through electronic searches of Medline, PsycInfo, Embase, Cochrane Library, ASSIA, Web of Science and Science Direct. Reference lists of all articles identified for inclusion were checked for articles of relevance. An article was included if its stated or implied purpose was to evaluate a web-based intervention designed to decrease consumption of alcohol and/or to prevent alcohol abuse. Studies were reliably selected and quality-assessed, and data were independently extracted and interpreted by two authors.
RESULTS
Initial searches identified 191 articles, of which 10 were eligible for inclusion. Of these, five provided a process evaluation only, with the remaining five providing some pre- to post-intervention measure of effectiveness. In general, the percentage of quality criteria met was relatively low, and only one of the 10 articles selected was a randomized controlled trial.
CONCLUSION
The current review provides inconsistent evidence on the effectiveness of electronic screening and brief intervention (eSBI) for alcohol use. Process research suggests that web-based interventions are generally well received. However, further controlled trials are needed to fully investigate their efficacy, to determine which elements are key to outcome, and to understand whether different elements are required to engage low- and high-risk drinkers.
NoTube – making TV a medium for personalized interaction
In this paper, we introduce NoTube's vision of deploying semantics in the interactive TV context in order to contextualize distributed applications and lift them to a new level of service that provides context-dependent and personalized selection of TV content. Additionally, lifting content consumption from a single-user activity to a community-based experience in a connected multi-device environment is central to the project. The main research questions relate to (1) data integration and enrichment: how can unified and simple access be achieved to dynamic, growing, and distributed multimedia content of diverse formats? (2) user and context modeling: what is an appropriate framework for context modeling, incorporating task-, domain-, and device-specific viewpoints? (3) context-aware discovery of resources: how can fuzzy matchmaking between potentially infinite contexts and available media resources be achieved? (4) collaborative architecture for TV content personalization: how can the combined information about data, context, and user be put at the disposal of both content providers and end-users with a view to creating highly personalized services under controlled privacy and security policies? Thus, with the grand challenge in mind of putting the TV viewer back in the driver's seat, we focus on TV content as a medium for personalized interaction between people, based on a service architecture that caters for a variety of content metadata, delivery channels, and rendering devices.
Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples
Machine learning has been a big success story during the AI resurgence. One particular standout success relates to learning from massive amounts of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition of the value of utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit that knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data and continued incorporation of knowledge in learning techniques.
Comment: Pre-print of the paper accepted at the 2017 IEEE/WIC/ACM International Conference on Web Intelligence (WI). arXiv admin note: substantial text overlap with arXiv:1610.0770
Where are your Manners? Sharing Best Community Practices in the Web 2.0
The Web 2.0 fosters the creation of communities by offering users a wide array of social software tools. While the success of these tools is based on their ability to support different interaction patterns among users by imposing as few limitations as possible, the communities they support are not free of rules (just think of the posting rules in a community forum or the editing rules in a thematic wiki). In this paper we propose a framework for sharing best community practices in the form of a (potentially rule-based) annotation layer that can be integrated with existing Web 2.0 community tools (with a specific focus on wikis). This solution is characterized by minimal intrusiveness and plays nicely within the open spirit of the Web 2.0 by providing users with behavioral hints rather than by enforcing strict adherence to a set of rules.
Comment: ACM Symposium on Applied Computing, Honolulu, United States of America (2009)
PACMAS: A Personalized, Adaptive, and Cooperative MultiAgent System Architecture
In this paper, a generic architecture designed to support the implementation of applications aimed at managing information among different and heterogeneous sources is presented. Information is filtered and organized according to personal interests explicitly stated by the user. User profiles are improved and refined over time by suitable adaptation techniques. The overall architecture is called PACMAS, being a support for implementing Personalized, Adaptive, and Cooperative MultiAgent Systems. PACMAS agents are autonomous and flexible, and can be made personal, adaptive, and cooperative, depending on the given application. The peculiarities of the architecture are highlighted by illustrating three relevant case studies focused on supporting undergraduate and graduate students, on predicting protein secondary structure, and on classifying newspaper articles, respectively.