    TRECVid 2005 experiments at Dublin City University

    In this paper we describe our experiments in the automatic and interactive search tasks and the BBC rushes pilot task of TRECVid 2005. Our approach this year is somewhat different from previous submissions in that we have implemented a multi-user search system using a DiamondTouch tabletop device from Mitsubishi Electric Research Labs (MERL). We developed two versions of our system: one with emphasis on efficient completion of the search task (Físchlár-DT Efficiency) and the other with more emphasis on increasing awareness among searchers (Físchlár-DT Awareness). We supplemented these runs with a further two runs, one for each of the two systems, in which we augmented the initial results with results from an automatic run. In addition to these interactive submissions we also submitted three fully automatic runs. We also took part in the BBC rushes pilot task, where we indexed the video by semi-automatic segmentation of objects appearing in the video; our search/browsing system allows full keyframe and/or object-based searching. In the interactive search experiments we found that the awareness system outperformed the efficiency system. We also found that supplementing the interactive results with the results of an automatic run improves both the Mean Average Precision and Recall values for both system variants. Our results suggest that providing awareness cues in a collaborative search setting improves retrieval performance. We also learned that multi-user searching is a viable alternative to the traditional single-searcher paradigm, provided the system is designed to effectively support collaboration.
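
    For readers unfamiliar with the evaluation measure reported above, the following minimal Python sketch shows how Mean Average Precision is computed over a set of topics. It is a generic illustration, not code from the paper; all function and variable names are ours.

        def average_precision(ranked_ids, relevant_ids):
            # Mean of precision@k taken at each rank k where a relevant item appears.
            hits, precision_sum = 0, 0.0
            for k, doc_id in enumerate(ranked_ids, start=1):
                if doc_id in relevant_ids:
                    hits += 1
                    precision_sum += hits / k
            return precision_sum / len(relevant_ids) if relevant_ids else 0.0

        def mean_average_precision(runs, qrels):
            # runs: topic -> ranked list of doc ids; qrels: topic -> set of relevant ids.
            return sum(average_precision(runs[t], qrels[t]) for t in qrels) / len(qrels)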

    Visual Representation of Text in Web Documents and Its Interpretation

    This paper examines the uses of text and its representation in Web documents, and the challenges these raise for its interpretation. Particular attention is paid to the significant problem of the non-uniform representation of text, which arises mainly from the presence of semantically important text in image form rather than as standard encoded text. The issues surrounding text representation in Web documents are discussed in the context of colour perception and spatial representation. The characteristics of text represented in image form are examined, and research towards interpreting these images of text is briefly described.

    Using high resolution displays for high resolution cardiac data

    The ability to perform fast, accurate, high-resolution visualization is fundamental to improving our understanding of anatomical data. As data volumes increase with improvements in scanning technology, the methods applied to rendering and visualization must evolve. In this paper we address the interactive display of data from high-resolution MRI scanning of a rabbit heart and subsequent histological imaging. We describe a visualization environment involving a tiled LCD panel display wall and associated software, which together provide an interactive and intuitive user interface. The oView software is an OpenGL application written for the VRJuggler environment. This environment abstracts displays and devices away from the application itself, aiding portability between different systems, from desktop PCs to multi-tiled display walls. Portability between display walls has been demonstrated through its use on walls at both Leeds and Oxford Universities. We discuss important factors to be considered for the interactive 2D display of large 3D datasets, including the use of intuitive input devices and level-of-detail aspects.
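
    The level-of-detail considerations mentioned above can be illustrated with a small generic sketch (this is not the oView implementation; the pyramid layout and all names are assumptions of ours). Given a pyramid of pre-downsampled copies of a slice, a viewer can pick the coarsest level that still offers at least one dataset pixel per screen pixel:

        import math

        def select_lod(level_count, dataset_px, viewport_px, zoom):
            # Level 0 is full resolution; each subsequent level halves it.
            # px_ratio: screen pixels available per full-resolution dataset pixel.
            px_ratio = (viewport_px * zoom) / dataset_px
            if px_ratio >= 1.0:
                return 0  # the display can resolve full detail
            # Each coarser level doubles the dataset pixels per screen pixel.
            return min(level_count - 1, int(-math.log2(px_ratio)))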

    Learning Visual Features from Snapshots for Web Search

    When applying learning-to-rank algorithms to Web search, a large number of features are usually designed to capture relevance signals. Most of these features are computed from extracted textual elements, link analysis, and user logs. However, Web pages are not solely linked texts: they have a structured layout organizing a large variety of elements in different styles. Such layout itself can convey useful visual information indicating the relevance of a Web page. For example, the query-independent layout (i.e., the raw page layout) can help identify page quality, while the query-dependent layout (i.e., the page rendered with the matched query words) can further reveal rich structural information (e.g., the size, position, and proximity of the matching signals). However, such visual layout information has seldom been utilized in Web search in the past. In this work, we propose to learn rich visual features automatically from the layout of Web pages (i.e., Web page snapshots) for relevance ranking. Both query-independent and query-dependent snapshots are considered as new inputs. We then propose a novel visual perception model, inspired by human visual search behavior during page viewing, to extract the visual features. This model can be learned end-to-end together with traditional hand-crafted features. We also show that such visual features can be efficiently acquired in the online setting with an extended inverted indexing scheme. Experiments on benchmark collections demonstrate that learning visual features from Web page snapshots can significantly improve the performance of relevance ranking in ad-hoc Web retrieval tasks.
    Comment: CIKM 201
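
    As a rough illustration of the end-to-end idea (not the paper's actual perception model), the PyTorch sketch below scores a page by combining features learned from a snapshot image with traditional hand-crafted ranking features; every layer size and name here is an assumption of ours.

        import torch
        import torch.nn as nn

        class SnapshotRanker(nn.Module):
            def __init__(self, n_handcrafted=10):
                super().__init__()
                # Small CNN encoding a 3-channel page snapshot into a 32-d vector.
                self.cnn = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                # Joint scorer over learned visual and hand-crafted features,
                # so both can be trained end-to-end together.
                self.score = nn.Linear(32 + n_handcrafted, 1)

            def forward(self, snapshot, handcrafted):
                visual = self.cnn(snapshot)  # (batch, 32)
                return self.score(torch.cat([visual, handcrafted], dim=1))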

    Text Extraction from Web Images Based on A Split-and-Merge Segmentation Method Using Color Perception

    This paper describes a complete approach to the segmentation and extraction of text from Web images for subsequent recognition, ultimately to achieve both effective indexing and presentation by non-visual means (e.g., audio). The method described here (the first in the authors’ systematic approach to exploiting human colour perception) enables the extraction of text in complex situations, such as in the presence of varying colour across characters and background. More precisely, in addition to using structural features, the segmentation follows a split-and-merge strategy based on the Hue-Lightness-Saturation (HLS) representation of colour as a first approximation of an anthropocentric expression of the differences in chromaticity and lightness. Character-like components are then extracted as text lines in a number of orientations and along curves.
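
    The kind of colour comparison such a split-and-merge strategy depends on can be sketched in a few lines of Python; the HLS conversion uses the standard colorsys module, while the distance formula and threshold below are illustrative assumptions, not the paper's actual criterion.

        import colorsys

        def hls_distance(rgb1, rgb2):
            # RGB components are floats in [0, 1].
            h1, l1, s1 = colorsys.rgb_to_hls(*rgb1)
            h2, l2, s2 = colorsys.rgb_to_hls(*rgb2)
            # Hue is circular, so take the shorter way around the hue circle.
            dh = min(abs(h1 - h2), 1.0 - abs(h1 - h2))
            return (dh ** 2 + (l1 - l2) ** 2 + (s1 - s2) ** 2) ** 0.5

        def should_merge(mean_a, mean_b, threshold=0.1):
            # Merge two adjacent regions whose mean colours are perceptually close.
            return hls_distance(mean_a, mean_b) < threshold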

    Point Cloud Framework for Rendering 3D Models Using Google Tango

    This project seeks to demonstrate the feasibility of point cloud meshing for capturing and modeling three-dimensional objects on consumer smartphones and tablets. Traditional methods of capturing objects require hundreds of images, are very slow, and consume a large amount of cellular data for the average consumer. As hardware manufacturers provide the tools to capture point cloud data, software developers need a starting point for capturing and meshing point clouds to create 3D models. The project uses Google's Tango computer vision library for Android to capture point clouds on devices with depth-sensing hardware. The point clouds are combined and meshed as models for use in 3D rendering projects. We expect our results to be embraced by the Android market because capturing point clouds is fast and does not carry a large data footprint.
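
    As a rough companion to the capture-and-mesh pipeline described above, the sketch below reconstructs a mesh from an already-captured point cloud using the open-source Open3D library rather than the Tango stack; the file names and the Poisson depth parameter are assumptions of ours.

        import open3d as o3d

        # Load a captured point cloud (e.g., exported from a depth-sensing device).
        pcd = o3d.io.read_point_cloud("capture.ply")

        # Poisson surface reconstruction needs oriented normals.
        pcd.estimate_normals()

        # Build a triangle mesh; a higher depth preserves more surface detail.
        mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
        o3d.io.write_triangle_mesh("model.obj", mesh)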