58 research outputs found
A Relation-Based Page Rank Algorithm for Semantic Web Search Engines
With the tremendous growth of information available to end users through the Web, search engines play an ever more critical role. Nevertheless, because of their general-purpose approach, it is increasingly common for the result sets they return to contain a burden of useless pages. The next-generation Web architecture, represented by the Semantic Web, provides a layered architecture that may allow this limitation to be overcome. Several search engines have been proposed that increase information-retrieval accuracy by exploiting a key feature of Semantic Web resources, namely relations. However, in order to rank results, most existing solutions need to work on the whole annotated knowledge base. In this paper, we propose a relation-based page rank algorithm to be used in conjunction with Semantic Web search engines that relies only on information extracted from user queries and from annotated resources. Relevance is measured as the probability that a retrieved resource actually contains those relations whose existence was assumed by the user at the time of query definition.
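The relevance measure the abstract describes — the probability that a retrieved resource actually contains the relations the user's query implies — can be illustrated with a minimal sketch. All names and relation triples below are hypothetical, not taken from the paper:

```python
def relation_relevance(query_relations, resource_annotations):
    """Score a resource as the fraction of query relations that its
    annotations actually contain (a crude stand-in for the probability
    described in the abstract)."""
    if not query_relations:
        return 0.0
    present = sum(1 for rel in query_relations if rel in resource_annotations)
    return present / len(query_relations)

# Hypothetical example: the user's query implies two relations,
# and the annotated resource contains one of them.
query = {("drug", "treats", "disease"), ("gene", "causes", "disease")}
resource = {("drug", "treats", "disease"), ("gene", "expressed_in", "tissue")}
print(relation_relevance(query, resource))  # 0.5
```

Resources would then be ranked by this score alone, without consulting the whole knowledge base.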
On the Precision of Search Engines: Results from a Controlled Experiment
Handling the growing amount of digital information is one of the major challenges of the World Wide Web (WWW). In particular, users demand effective and efficient retrieval of the information they need. In this context, search engines play a key role. Besides conventional search engines such as Google, semantic search engines have emerged as an alternative approach in recent years. The quality of the search results delivered by a search engine is influenced by many criteria. This paper picks up one specific criterion, precision, and investigates and compares the precision of both current conventional (i.e., non-semantic) and semantic search engines on the basis of a controlled experiment with 77 participants. Specifically, Google, AltaVista, MetaGer, Hakia, Kngine, and WolframAlpha are investigated and compared.
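The metric compared in the experiment, precision, is the standard information-retrieval ratio of relevant retrieved results to all retrieved results. A minimal sketch (the document names are hypothetical):

```python
def precision(retrieved, relevant):
    """Precision = |retrieved ∩ relevant| / |retrieved|."""
    retrieved, relevant = set(retrieved), set(relevant)
    if not retrieved:
        return 0.0
    return len(retrieved & relevant) / len(retrieved)

# Hypothetical example: an engine returns 10 results,
# of which participants judge 7 relevant.
results = [f"doc{i}" for i in range(10)]
judged_relevant = results[:7]
print(precision(results, judged_relevant))  # 0.7
```

In a controlled experiment such as the one described, the relevance judgments come from the participants, and precision is computed per engine and per query.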
PosMed (Positional Medline): prioritizing genes with an artificial neural network comprising medical documents to accelerate positional cloning
PosMed (http://omicspace.riken.jp/) prioritizes candidate genes for positional cloning by employing our original database search engine GRASE, which uses an inferential process similar to an artificial neural network comprising documental neurons (or ‘documentrons’) that represent each document contained in databases such as MEDLINE and OMIM. Given a user-specified query, PosMed initially performs a full-text search of each documentron in the first-layer artificial neurons and then calculates the statistical significance of the connections between the hit documentrons and the second-layer artificial neurons representing each gene. When a chromosomal interval(s) is specified, PosMed explores the second-layer and third-layer artificial neurons representing genes within the chromosomal interval by evaluating the combined significance of the connections from the hit documentrons to the genes. PosMed is, therefore, a powerful tool that immediately ranks the candidate genes by connecting phenotypic keywords to the genes through connections representing not only gene–gene interactions but also other biological interactions (e.g. metabolite–gene, mutant mouse–gene, drug–gene, disease–gene and protein–protein interactions) and ortholog data. By utilizing orthologous connections, PosMed facilitates the ranking of human genes based on evidence found in other model species such as mouse. Currently, PosMed, an artificial superbrain that has learned a vast amount of biological knowledge ranging from genomes to phenomes (or ‘omic space’), supports the prioritization of positional candidate genes in human, mouse, rat and Arabidopsis thaliana.
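The layered inference the abstract describes — query hits on first-layer documentrons propagating to second-layer gene neurons — can be sketched as a simple score-accumulation step. The weights and names below are hypothetical placeholders for the statistical significance of document–gene connections, not PosMed's actual data:

```python
def rank_genes(hit_documents, doc_gene_weight):
    """Accumulate connection weights from query-hit documents into gene
    scores and return genes sorted by descending score (a toy stand-in
    for combining connection significances across layers)."""
    scores = {}
    for doc in hit_documents:
        for gene, w in doc_gene_weight.get(doc, {}).items():
            scores[gene] = scores.get(gene, 0.0) + w
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical document-to-gene connection weights.
weights = {
    "doc1": {"GENE_A": 1.0, "GENE_B": 0.25},
    "doc2": {"GENE_A": 0.5, "GENE_C": 0.75},
}
print(rank_genes(["doc1", "doc2"], weights))
# [('GENE_A', 1.5), ('GENE_C', 0.75), ('GENE_B', 0.25)]
```

Restricting the gene layer to a chromosomal interval, or following orthologous and interaction edges, would correspond to filtering and extending the weight map before ranking.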
Investigalog: Knowledge and Research Beyond Web 2.0
This article analyzes some of the current forms of knowledge generation and management, and the future possibilities opened up by the emergence of Web 3.0; part of the analysis explores the relationship between today's computer technologies and these new views of knowledge. The characteristics of the global information society, within a general framework of access to information for generating knowledge and managing it effectively, translate mainly into access for all, empowerment for all, and cooperation, under the scheme of collective intelligence. The article presents a concrete example of a web technology venturing into this field: Investigalog.