1,612 research outputs found

    HEALTH GeoJunction: place-time-concept browsing of health publications

    Background: The volume of health science publications is escalating rapidly. Thus, keeping up with developments is becoming harder, as is the task of finding important cross-domain connections. When geographic location is a relevant component of research reported in publications, these tasks are more difficult because standard search and indexing facilities have limited or no ability to identify geographic foci in documents. This paper introduces HEALTH GeoJunction, a web application that supports researchers in the task of quickly finding scientific publications that are relevant geographically and temporally as well as thematically.
    Results: HEALTH GeoJunction is a geovisual analytics-enabled web application providing: (a) web services using computational reasoning methods to extract place-time-concept information from bibliographic data for documents and (b) visually enabled place-time-concept query, filtering, and contextualizing tools that apply to both the documents and their extracted content. This paper focuses specifically on strategies for visually enabled, iterative, facet-like, place-time-concept filtering that allows analysts to quickly drill down to scientific findings of interest in PubMed abstracts and to explore relations among abstracts and extracted concepts in place and time. The approach enables analysts to: find publications without knowing all relevant query parameters, recognize unanticipated geographic relations within and among documents in multiple health domains, identify the thematic emphasis of research targeting particular places, notice changes in concepts over time, and notice changes in places where concepts are emphasized.
    Conclusions: PubMed is a database of over 19 million biomedical abstracts and citations maintained by the National Center for Biotechnology Information; achieving quick filtering is an important contribution because of the database's size. Including geography in filters is important because of rapidly escalating attention to geographic factors in public health. The implementation of mechanisms for iterative place-time-concept filtering makes it possible to narrow searches efficiently and quickly from thousands of documents to a small subset that meets place-time-concept constraints. Support for a more-like-this query creates the potential to identify unexpected connections across diverse areas of research. Multi-view visualization methods support understanding of the place, time, and concept components of document collections and enable comparison of filtered query results to the full set of publications.
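    The iterative, facet-like narrowing described above can be illustrated with a minimal sketch (this is not the paper's implementation; the `Doc` record, field names, and sample data are all hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    title: str
    place: str                      # extracted geographic focus
    year: int                       # publication year
    concepts: set = field(default_factory=set)  # extracted concept terms

DOCS = [
    Doc("A", "Kenya", 2006, {"malaria", "surveillance"}),
    Doc("B", "Kenya", 2008, {"malaria", "drug resistance"}),
    Doc("C", "Brazil", 2007, {"dengue", "surveillance"}),
]

def filter_docs(docs, place=None, years=None, concept=None):
    """Apply whichever place/time/concept facets are set; unset facets pass everything."""
    out = []
    for d in docs:
        if place is not None and d.place != place:
            continue
        if years is not None and not (years[0] <= d.year <= years[1]):
            continue
        if concept is not None and concept not in d.concepts:
            continue
        out.append(d)
    return out

# Iterative drill-down: each step narrows the previous result set.
step1 = filter_docs(DOCS, place="Kenya")
step2 = filter_docs(step1, concept="malaria")
print([d.title for d in step2])  # ['A', 'B']
```

    Because each facet is optional, an analyst can start from any dimension (place, time, or concept) and add constraints in any order, which is what makes the search usable without knowing all query parameters up front.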

    Personalized Web Search Techniques - A Review

    Searching is one of the most common tasks on the Internet. Search engines are the basic tool of the Internet, from which related information can be collected according to a query or keyword given by the user, and they are extremely popular entry points to frequently used sites. With the remarkable development of the World Wide Web (WWW), information search has grown to be a major business segment of a global, competitive, and profitable market. A perfect search engine would travel through all the web pages in the WWW and list the relevant information based on the user's keyword. In spite of recent developments in web search technologies, there are still many situations in which search engine users obtain non-relevant results from search engines. Personalized Web search has varying levels of effectiveness for different users, queries, and search contexts. Even though personalized search has been a major research area for many years and many personalization approaches have been examined, it is still uncertain whether personalization is always beneficial across different queries, diverse users, and different search contexts. This paper surveys a number of efficient personalized Web search approaches proposed by various authors.
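    One common family of personalization techniques the surveyed literature covers is result re-ranking against a user interest profile. A minimal sketch (illustrative only; not any specific surveyed system, and the profile/term representation is a simplifying assumption):

```python
def rerank(results, profile):
    """Re-rank (title, terms) results by overlap with the user's interest profile.

    `results` is a list of (title, set_of_terms); `profile` is a set of terms
    describing the user's interests. Ties keep the original engine order,
    since Python's sort is stable.
    """
    return sorted(results, key=lambda r: -len(r[1] & profile))

results = [
    ("Jaguar the car", {"car", "engine", "price"}),
    ("Jaguar the animal", {"animal", "habitat", "wildlife"}),
]
profile = {"wildlife", "animal", "nature"}
print([title for title, _ in rerank(results, profile)])
# ['Jaguar the animal', 'Jaguar the car']
```

    The same ambiguous query thus yields different orderings for different users, which is the core idea behind profile-based personalization; real systems differ mainly in how the profile is learned and weighted.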

    A specification and discovery environment for reusing software components in distributed software development

    Our work aims to develop an effective solution for the discovery and reuse of software components in existing and commonly used development environments. We propose an ontology for describing and discovering atomic software components. The description covers both the functional and the non-functional properties of software components, the latter expressed as QoS parameters. Our search process is based on a function that calculates the semantic distance between the signature of a component and the signature of a given query, thus achieving an appropriate comparison. We also use the notion of "subsumption" to compare the inputs/outputs of the query with those of the components. After selecting the appropriate components, the non-functional properties are used as a distinguishing factor to refine the search result. We propose an approach for discovering composite components when no atomic component is found; this approach is based on the shared ontology. To integrate the resulting component into the project under development, we developed an integration ontology and the two services "input/output convertor" and "output Matching".
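    The signature matching with subsumption described above can be sketched roughly as follows (a toy illustration, not the thesis's actual distance function; the type hierarchy, field names, and the Jaccard-like scoring are all assumptions):

```python
# Toy type hierarchy: child -> parent, used for a simple "subsumption" test.
HIERARCHY = {"Integer": "Number", "Float": "Number", "Number": "Any", "String": "Any"}

def subsumes(general, specific):
    """True if `general` equals `specific` or is an ancestor of it in the hierarchy."""
    while specific is not None:
        if specific == general:
            return True
        specific = HIERARCHY.get(specific)
    return False

def signature_distance(query, component):
    """Distance in [0, 1]: fraction of query input/output types with no
    subsumption-compatible counterpart in the component's signature."""
    q = query["inputs"] + query["outputs"]
    c = component["inputs"] + component["outputs"]
    matched = sum(1 for qt in q
                  if any(subsumes(ct, qt) or subsumes(qt, ct) for ct in c))
    return 1 - matched / len(q)

query = {"inputs": ["Integer"], "outputs": ["Float"]}
comp = {"inputs": ["Number"], "outputs": ["Number"]}
print(signature_distance(query, comp))  # 0.0 - every query type is subsumed
```

    A discovery service would rank candidate components by ascending distance and, per the abstract, break ties among functional matches using the QoS parameters.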

    User interfaces supporting casual data-centric interactions on the Web

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 131-134). Today's Web is full of structured data, but much of it is transmitted as natural language text or binary images that are not conducive to further machine processing by the time it reaches the user's web browser. Consequently, casual users (those without programming skills) are limited to whatever features web sites offer. Encountering a few dozen addresses of public schools listed in a table on one web site and a few dozen private schools on another, a casual user would have to painstakingly copy and paste every address and school name into an online map service to get a unified view of where the schools are relative to her home. Any more sophisticated operation on data encountered on the Web, such as re-plotting the results of a scientific experiment found online just because the user wants to test a different theory, would be tremendously difficult. Conversely, to publish structured data to the Web, a casual user settles for static data files or HTML pages that offer none of the features provided by commercial sites, such as searching, filtering, maps, and timelines, or even as basic a feature as sorting. To offer a rich experience on her site, the casual user must single-handedly build a three-tier web application that normally takes a team of engineers several months. This thesis explores user interfaces that let casual users extract and reuse data from today's Web, as well as publish data into the Web in richly browsable and reusable form. By assuming that casual users most often deal with small and simple data sets, declarative syntaxes and direct manipulation techniques can be supported for tasks previously done only with programming in experts' tools. User studies indicated that tools built with such declarative syntaxes and direct manipulation techniques could be used by casual users. Moreover, the data publishing tool built from this research has been used by actual users on the Web for many purposes, from presenting educational materials in the classroom to listing products for very small businesses. by David F. Huynh. Ph.D.

    Connected Information Management

    Society is currently inundated with more information than ever, making efficient management a necessity. Alas, most current information management suffers from several levels of disconnectedness: applications partition data into segregated islands, small notes don’t fit into traditional application categories, navigating the data is different for each kind of data, and data is either available on a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these kinds of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means for organization such as tagging. The heterogeneity of data is taken into account by offering specialized editors. The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF, as follows. Wiki text is stored as RDF and can both contain structured text and be combined with structured data. Files are also supported by the CoIM model and are kept externally. Notes can be quickly captured and annotated with meta-data. Generic means for organization and navigation apply to all kinds of data. Ubiquitous availability of data is ensured via two CoIM implementations, the web application HYENA/Web and the desktop application HYENA/Eclipse. All data can be synchronized between these applications. The applications were used to validate the CoIM ideas.
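    The central-repository idea, with one generic organization mechanism (tags) spanning heterogeneous kinds of data, can be sketched as a minimal triple store (an illustrative sketch only, not HYENA or an actual RDF library; identifiers and predicate names are hypothetical):

```python
class Repository:
    """Minimal central store: everything is a (subject, predicate, object)
    triple, so notes, files, and wiki pages all share one tagging mechanism."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def tagged(self, tag):
        """All resources carrying a given tag, regardless of their kind."""
        return sorted(s for s, p, o in self.triples if p == "tag" and o == tag)

repo = Repository()
repo.add("note:42", "type", "Note")
repo.add("note:42", "tag", "thesis")
repo.add("file:report.pdf", "type", "File")   # file content itself kept externally
repo.add("file:report.pdf", "tag", "thesis")
repo.add("wiki:Intro", "type", "WikiPage")

print(repo.tagged("thesis"))  # ['file:report.pdf', 'note:42']
```

    The point of the sketch is that a tag query cuts across data kinds: notes and files come back from one call, which is exactly the disconnectedness the abstract says application-specific islands prevent.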

    ENHANCING IMAGE FINDABILITY THROUGH A DUAL-PERSPECTIVE NAVIGATION FRAMEWORK

    This dissertation focuses on investigating whether users will locate desired images more efficiently and effectively when they are provided with information descriptors from both experts and the general public. This study develops a way to support image finding through a human-computer interface by providing subject headings and social tags for the image collection and preserving the information scent (Pirolli, 2007) during the image search experience. In order to improve search performance, most proposed solutions integrating experts' annotations and social tags focus on how to utilize controlled vocabularies to structure folksonomies, which are taxonomies created by multiple users (Peters, 2009). However, these solutions merely map terms from one domain into the other without considering the inherent differences between the two. In addition, many websites reflect the benefits of using both descriptors by applying a multiple-interface approach (McGrenere, Baecker, & Booth, 2002), but this type of navigational support only allows users to access one information source at a time. By contrast, this study develops an approach that integrates these two features to facilitate finding resources without changing their nature or forcing users to choose one means or the other. Driven by the concept of information scent, the main contribution of this dissertation is an experiment exploring whether images can be found more efficiently and effectively when multiple access routes with two information descriptors are provided to users in a dual-perspective navigation framework. This framework proved more effective and efficient than the subject-heading-only and tag-only interfaces for the exploratory tasks in this study. This finding can assist interface designers who struggle with determining what information best helps users and facilitates searching tasks.
    Although this study explicitly focuses on image search, the results may be applicable to a wide variety of other domains. The lack of textual content in image systems makes desired images particularly hard to locate using traditional search methods. While professionals play a role in describing items in a collection of images, social tags assigned by the crowd augment this professional effort in a cost-effective manner.
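    The dual-perspective idea, keeping both descriptor vocabularies live at once rather than switching interfaces, can be sketched as follows (an illustrative sketch under assumed data; the records, heading strings, and tags are all hypothetical, not the dissertation's collection):

```python
IMAGES = [
    {"id": "img1", "headings": {"Architecture--Gothic"}, "tags": {"cathedral", "france"}},
    {"id": "img2", "headings": {"Architecture--Modern"}, "tags": {"glass", "skyline"}},
    {"id": "img3", "headings": {"Architecture--Gothic"}, "tags": {"stained glass"}},
]

def find(images, heading=None, tag=None):
    """Both vocabularies are usable in one query: a result must satisfy
    every facet the user has set, whether it comes from the expert
    subject headings or from the crowd's social tags."""
    return [img["id"] for img in images
            if (heading is None or heading in img["headings"])
            and (tag is None or tag in img["tags"])]

print(find(IMAGES, heading="Architecture--Gothic"))                # ['img1', 'img3']
print(find(IMAGES, heading="Architecture--Gothic", tag="france"))  # ['img1']
```

    Contrast this with a multiple-interface design, where the user would have to run the heading query and the tag query in separate views and intersect the results by hand.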

    Personalizing Interactions with Information Systems

    Personalization constitutes the mechanisms and technologies necessary to customize information access to the end user. It can be defined as the automatic adjustment of information content, structure, and presentation tailored to the individual. In this chapter, we study personalization from the viewpoint of personalizing interaction. The survey covers mechanisms for information-finding on the web, advanced information retrieval systems, dialog-based applications, and mobile access paradigms. Specific emphasis is placed on studying how users interact with an information system and how the system can encourage and foster interaction. This helps bring out the role of the personalization system as a facilitator which reconciles the user’s mental model with the underlying information system’s organization. Three tiers of personalization systems are presented, paying careful attention to interaction considerations. These tiers show how progressive levels of sophistication in interaction can be achieved. The chapter also surveys systems-support technologies and niche application domains.

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March of 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.