Digital Image Access & Retrieval
The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.
Neurocognitive Informatics Manifesto.
Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.
Visual analysis of anatomy ontologies and related genomic information
Challenges in scientific research include the difficulty in obtaining overviews of the large
amount of data required for analysis, and in resolving the differences in terminology used
to store and interpret information in multiple, independently created data sets. Ontologies
provide one solution for analysis involving multiple data sources, improving cross-referencing
and data integration.
This thesis looks at harnessing advanced human perception to reduce the cognitive load
in the analysis of the multiple, complex data sets that the bioinformatics user group under
study uses in research, also taking advantage of users' domain knowledge to build mental
models of data that map to its underlying structure. Guided by a user-centred approach,
prototypes were developed to provide a visual method for exploring users' information requirements
and to identify solutions for these requirements. 2D and 3D node-link graphs were built to
visualise the hierarchically structured ontology data, to improve analysis of individual and
comparison of multiple data sets, by providing overviews of the data, followed by techniques
for detailed analysis of regions of interest.
Iterative, heuristic and structured user evaluations were used to assess and refine the
options developed for the presentation and analysis of the ontology data. The evaluation
results confirmed the advantages that visualisation provides over text-based analysis, and
also highlighted the respective advantages of 2D and 3D for visual data analysis.
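The overview-then-detail approach to hierarchically structured ontology data can be sketched briefly. The ontology fragment, term names, and root below are illustrative assumptions, not data from the thesis; the sketch only shows how grouping terms by depth yields the overview level of a node-link view:

```python
# Hypothetical sketch: an anatomy ontology as parent links (term -> parent),
# with an overview that groups terms by depth before drilling into a
# region of interest. All term names here are invented for illustration.
from collections import defaultdict

# Toy ontology fragment rooted at "body".
PARENT = {
    "organ": "body",
    "heart": "organ",
    "ventricle": "heart",
    "atrium": "heart",
    "limb": "body",
}

def depth(term, parent=PARENT):
    """Distance from the root along parent links."""
    d = 0
    while term in parent:
        term = parent[term]
        d += 1
    return d

def overview(parent=PARENT):
    """Group every term by depth: the 'overview' level of a node-link view."""
    levels = defaultdict(list)
    for term in sorted(set(parent) | set(parent.values())):
        levels[depth(term)].append(term)
    return dict(levels)
```

A detailed view would then restrict the same computation to the subtree under a selected term.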
CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap
After addressing the state of the art during the first year of Chorus and establishing the existing landscape in
multimedia search engines, we identified and analyzed gaps within the European research effort during our second year.
In this period we focused on three directions, notably technological issues, user-centred issues and use cases, and
socio-economic and legal aspects. These were assessed by two central studies: firstly, a concerted vision of the
functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with the
related discussion on requirements for technological challenges. Both studies were carried out in cooperation and
consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several
meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project
coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of
gaps, namely core technological gaps that involve research challenges, and 'enablers', which are not necessarily
technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as
well as emerging legal challenges.
Making Sense of Document Collections with Map-Based Visualizations
As map-based visualizations of documents become more ubiquitous, there is a greater need for them to support intellectual and creative high-level cognitive activities with collections of non-cartographic materials -- documents. This dissertation concerns the conceptualization of map-based visualizations as tools for sensemaking and collection understanding. As such, map-based visualizations would help people use georeferenced documents to develop understanding, gain insight, discover knowledge, and construct meaning. This dissertation explores the role of graphical representations (such as maps, Kohonen maps, pie charts, and others) and interactions with them for developing map-based visualizations capable of facilitating sensemaking activities such as collection understanding. While graphical representations make document collections more perceptually and cognitively accessible, interactions allow users to adapt representations to their contextual needs. By interacting with representations of documents or collections and being able to construct representations of their own, people are better able to make sense of information, comprehend complex structures, and integrate new information into their existing mental models. In sum, representations and interactions may reduce cognitive load and consequently expedite the overall time necessary for completion of sensemaking activities, which typically take much time to accomplish. The dissertation proceeds in three phases. The first phase develops a conceptual framework for translating ontological properties of collections to representations and for supporting visual tasks by means of graphical representations. The second phase concerns the cognitive benefits of interaction. It conceptualizes how interactions can help people during complex sensemaking activities.
Although the interactions are explained on the example of a prototype built with Google Maps, they are independent of Google Maps and can be applicable to various other technologies. The third phase evaluates the utility, analytical capabilities and usability of the additional representations when users interact with a visualization prototype, VIsual COLlection EXplorer. The findings suggest that additional representations can enhance understanding of map-based visualizations of library collections: specifically, they can allow users to see trends, gaps, and patterns in ontological properties of collections.
Clustering cliques for graph-based summarization of the biomedical research literature
BACKGROUND: Graph-based notions are increasingly used in biomedical data mining and knowledge discovery tasks. In this paper, we present a clique-clustering method to automatically summarize graphs of semantic predications produced from PubMed citations (titles and abstracts). RESULTS: SemRep is used to extract semantic predications from the citations returned by a PubMed search. Cliques were identified from frequently occurring predications with highly connected arguments filtered by degree centrality. Themes contained in the summary were identified with a hierarchical clustering algorithm based on common arguments shared among cliques. The validity of the clusters in the summaries produced was compared to the Silhouette-generated baseline for cohesion, separation and overall validity. The theme labels were also compared to a reference standard produced with major MeSH headings. CONCLUSIONS: For 11 topics in the testing data set, the overall validity of clusters from the system summary was 10% better than the baseline (43% versus 33%). When compared to the reference standard from MeSH headings, the results for recall, precision and F-score were 0.64, 0.65, and 0.65, respectively.
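The theme-grouping step can be illustrated with a deliberately simplified sketch. The paper clusters cliques hierarchically by shared arguments; the version below (whose argument sets are invented, not SemRep output) merely merges cliques into one theme whenever they share any argument, i.e. it computes connected components over a "shares an argument" relation:

```python
# Hedged sketch of grouping cliques of predication arguments into themes.
# The actual method is hierarchical clustering; this flat variant merges
# any two cliques that share at least one argument.

def cluster_cliques(cliques):
    """cliques: list of sets of argument strings -> list of theme sets."""
    themes = []
    for clique in cliques:
        merged = set(clique)
        keep = []
        for theme in themes:
            if theme & merged:      # shares an argument: fold into one theme
                merged |= theme
            else:
                keep.append(theme)
        keep.append(merged)
        themes = keep
    return themes

# Toy predication arguments (illustrative only).
cliques = [
    {"aspirin", "inflammation"},
    {"aspirin", "platelet aggregation"},
    {"statin", "cholesterol"},
]
```

Running `cluster_cliques(cliques)` yields two themes, one joining the aspirin cliques through their shared argument and one for the statin clique.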
MOG 2007:Workshop on Multimodal Output Generation: CTIT Proceedings
This volume brings together a wide variety of work offering different perspectives on multimodal generation. Two different strands of work can be distinguished: half of the gathered papers present current work on embodied conversational agents (ECAs), while the other half present current work on multimedia applications. Two general research questions are shared by all: what output modalities are most suitable in which situation, and how should different output modalities be combined?
Joint Video and Text Parsing for Understanding Events and Answering Queries
We propose a framework for parsing video and text jointly for understanding
events and answering user queries. Our framework produces a parse graph that
represents the compositional structures of spatial information (objects and
scenes), temporal information (actions and events) and causal information
(causalities between events and fluents) in the video and text. The knowledge
representation of our framework is based on a spatial-temporal-causal And-Or
graph (S/T/C-AOG), which jointly models possible hierarchical compositions of
objects, scenes and events as well as their interactions and mutual contexts,
and specifies the prior probabilistic distribution of the parse graphs. We
present a probabilistic generative model for joint parsing that captures the
relations between the input video/text, their corresponding parse graphs and
the joint parse graph. Based on the probabilistic model, we propose a joint
parsing system consisting of three modules: video parsing, text parsing and
joint inference. Video parsing and text parsing produce two parse graphs from
the input video and text respectively. The joint inference module produces a
joint parse graph by performing matching, deduction and revision on the video
and text parse graphs. The proposed framework has the following objectives:
Firstly, we aim at deep semantic parsing of video and text that goes beyond the
traditional bag-of-words approaches; Secondly, we perform parsing and reasoning
across the spatial, temporal and causal dimensions based on the joint S/T/C-AOG
representation; Thirdly, we show that deep joint parsing facilitates subsequent
applications such as generating narrative text descriptions and answering
queries in the forms of who, what, when, where and why. We empirically
evaluated our system based on comparison against ground-truth as well as
accuracy of query answering and obtained satisfactory results.
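The joint-inference idea of merging the video and text parse graphs can be shown at toy scale. The triples and labels below are invented; real S/T/C-AOG inference scores candidate matches probabilistically and performs deduction and revision, whereas this sketch only shows matching by identical node labels followed by taking the union of relations:

```python
# Hedged toy of the joint-inference step: merge a video parse graph and a
# text parse graph, where nodes with the same label are trivially matched
# and the joint graph keeps the union of both edge sets.

def joint_parse(video_edges, text_edges):
    """Each input: set of (node, relation, node) triples.
    Output: the joint graph's triples after label-based matching."""
    return video_edges | text_edges

# Illustrative parse fragments for an event "pick-up".
video = {("person", "agent-of", "pick-up"), ("pick-up", "patient", "cup")}
text = {("person", "agent-of", "pick-up"), ("pick-up", "location", "kitchen")}
joint = joint_parse(video, text)
```

The joint graph answers questions neither input could alone: the video contributes the patient ("cup"), the text the location ("kitchen"), and the shared agent edge is matched rather than duplicated.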
- …