
    Digital Image Access & Retrieval

    Get PDF
    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. They fall into three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Three-dimensional distribution of primary melt inclusions in garnets by X-ray microtomography

    Get PDF
    X-ray computed microtomography (X-μCT) is applied here to investigate in a non-invasive way the three-dimensional (3D) spatial distribution of primary melt and fluid inclusions in garnets from the metapelitic enclaves of El Hoyazo and from the migmatites of Sierra Alpujata, Spain. Attention is focused on a particular case of inhomogeneous distribution of inclusions, characterized by inclusion-rich cores and almost inclusion-free rims (i.e., zonal arrangement), that has previously been investigated in detail only by means of 2D conventional methods. Different experimental X-μCT configurations, both synchrotron radiation- and X-ray tube-based, are employed to explore the limits of the technique. The internal features of the samples are successfully imaged, with spatial resolution down to a few micrometers. By means of dedicated image processing protocols, the lighter melt and fluid inclusions can be separated from the heavier host garnet and from other non-relevant features (e.g., other mineral phases or large voids). This allows evaluating the volumetric density of inclusions within spherical shells as a function of the radial distance from the center of the host garnets. The 3D spatial distribution of heavy mineral inclusions is investigated as well and compared with that of melt inclusions. Data analysis reveals the occurrence of a clear peak of melt and fluid inclusion density, ranging approximately from 1/3 to 1/2 of the radial distance from the center of the distribution, and a gradual decrease from the peak outward. Heavy mineral inclusions appear to be almost absent in the central portion of the garnets and more randomly arranged, showing no correlation with the distribution of melt and fluid inclusions. To reduce the effect of geometric artifacts arising from the non-spherical shape of the distribution, the inclusion density was also calculated along narrow prisms with different orientations, obtaining plots of pseudo-linear distributions. The results show that the core-rim transition is characterized by a rapid (but not step-like) decrease in inclusion density, occurring in a continuous mode. X-ray tomographic data, combined with electron microprobe chemical profiles of selected elements, suggest that despite the inhomogeneous distribution of inclusions, the investigated garnets grew in a single progressive episode in the presence of anatectic melt. The continuous drop of inclusion density suggests a similar decline in (radial) garnet growth, which is a natural consequence in the case of a constant reaction rate. Our results confirm the advantages of high-resolution X-μCT over conventional destructive 2D observations for the analysis of the spatial distribution of micrometer-scale inclusions in minerals, owing to its non-invasive 3D capabilities. The same approach can be extended to the study of different microstructural features in samples from a wide variety of geological settings.
    Parisatto, Matteo; Turina, Alice; Cruciani, Giuseppe; Mancini, Lucia; Peruzzo, Luca; Cesare, Bernardo
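    The shell-counting analysis described in the abstract can be sketched as follows. This is a minimal illustration, not the study's pipeline: inclusion positions, the garnet radius, and the shell count are synthetic placeholders, whereas the real coordinates come from segmented X-μCT volumes.

    ```python
    import numpy as np

    # Synthetic stand-in data: radial positions of inclusions (micrometres,
    # measured from the garnet centre), concentrated between roughly 1/3 and
    # 1/2 of the host radius, as reported for the melt inclusions.
    rng = np.random.default_rng(0)
    garnet_radius = 500.0  # assumed host radius, micrometres
    r_inclusions = rng.normal(loc=0.4 * garnet_radius, scale=60.0, size=2000)
    r_inclusions = r_inclusions[(r_inclusions > 0) & (r_inclusions < garnet_radius)]

    # Count inclusions in concentric spherical shells
    n_shells = 10
    edges = np.linspace(0.0, garnet_radius, n_shells + 1)
    counts, _ = np.histogram(r_inclusions, bins=edges)

    # Shell volume: (4/3)*pi*(r_outer^3 - r_inner^3)
    shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = counts / shell_volumes  # inclusions per cubic micrometre

    for r_mid, d in zip((edges[:-1] + edges[1:]) / 2, density):
        print(f"r = {r_mid:6.1f} um  density = {d:.2e} /um^3")
    ```

    With data shaped like the study's, the printed profile shows a density peak at intermediate radius and a continuous drop toward the rim, which is the signature the paper links to steady garnet growth.
    
    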

    From Keyword Search to Exploration: How Result Visualization Aids Discovery on the Web

    No full text
    A key to the Web's success is the power of search. The elegant way in which search results are returned is usually remarkably effective. However, for exploratory search, in which users need to learn, discover, and understand novel or complex topics, there is substantial room for improvement. Human-computer interaction researchers and web browser designers have developed novel strategies to improve Web search by enabling users to conveniently visualize, manipulate, and organize their Web search results. This monograph offers fresh ways to think about search-related cognitive processes and describes innovative design approaches to browsers and related tools. For instance, while keyword search presents users with results for specific information (e.g., what is the capital of Peru), other methods may let users see and explore the contexts of their requests for information (related or previous work, conflicting information), or the properties that associate groups of information assets (group legal decisions by lead attorney). We also consider both the traditional and novel ways in which these strategies have been evaluated. From our review of cognitive processes, browser design, and evaluations, we reflect on the future opportunities and new paradigms for exploring and interacting with Web search results.

    Where and Who? Automatic Semantic-Aware Person Composition

    Full text link
    Image compositing is a method used to generate realistic yet fake imagery by inserting contents from one image into another. Previous work in compositing has focused on improving the appearance compatibility of a user-selected foreground segment and a background image (i.e., color and illumination consistency). In this work, we instead develop a fully automated compositing model that additionally learns to select and transform compatible foreground segments from a large collection given only an input background image. To simplify the task, we restrict our problem by focusing on human instance composition, because human segments exhibit strong correlations with their background and because of the availability of large annotated data. We develop a novel branching Convolutional Neural Network (CNN) that jointly predicts candidate person locations given a background image. We then use pre-trained deep feature representations to retrieve person instances from a large segment database. Experimental results show that our model can generate composite images that look visually convincing. We also develop a user interface to demonstrate the potential application of our method.
    Comment: 10 pages, 9 figures
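    The retrieval stage described above, matching a background against a segment database via pre-trained deep features, can be sketched roughly as below. The feature vectors here are random stand-ins; in the paper they come from a pre-trained CNN, and the dimensions and database size are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def l2_normalize(x, axis=-1):
        # Project feature vectors onto the unit sphere so that cosine
        # similarity reduces to a plain dot product.
        return x / np.linalg.norm(x, axis=axis, keepdims=True)

    # Hypothetical features: one query (the background context around the
    # predicted person location) and a database of stored person segments.
    query = l2_normalize(rng.standard_normal(128))
    database = l2_normalize(rng.standard_normal((1000, 128)))

    # Cosine similarity of the query against every database segment
    scores = database @ query

    # Indices of the 5 most compatible person segments, best first
    top_k = np.argsort(scores)[::-1][:5]
    print(top_k, scores[top_k])
    ```

    A real system would additionally transform (scale, place) the retrieved segment at the predicted location before blending it into the background.
    
    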

    A lightweight web video model with content and context descriptions for integration with linked data

    Get PDF
    The rapid increase of video data on the Web has warranted an urgent need for effective representation, management and retrieval of web videos. Recently, many studies have been carried out on the ontological representation of videos, using either domain-dependent or generic schemas such as MPEG-7, MPEG-4, and COMM. In spite of their extensive coverage and sound theoretical grounding, they are yet to be widely adopted. Two main possible reasons are the complexities involved and a lack of tool support. We propose a lightweight video content model for content-context description and integration. The uniqueness of the model is that it tries to model the emerging social context to describe and interpret the video. Our approach is grounded on exploiting easily extractable, evolving contextual metadata and on the availability of existing data on the Web. This enables representational homogeneity and a firm basis for information integration among semantically-enabled data sources. The model uses many existing schemas to describe various ontology classes and shows the scope of interlinking with the Linked Data cloud.
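    A content-context description of the kind proposed here can be pictured, in much-simplified form, as subject-predicate-object triples, the representation that makes interlinking with Linked Data sources possible. All property names, URIs, and values below are hypothetical placeholders, not the paper's actual schema.

    ```python
    # One web video described by RDF-style triples mixing content metadata,
    # evolving social context, and a link into the Linked Data cloud.
    video = "http://example.org/video/42"
    triples = [
        (video, "dc:title", "Lecture on the semantic web"),        # content
        (video, "dc:creator", "http://example.org/user/alice"),    # content
        (video, "ex:commentCount", 17),                            # social context
        (video, "ex:topic", "http://dbpedia.org/resource/Semantic_Web"),  # LOD link
    ]

    def describe(subject, triples):
        """Return all predicate-object pairs recorded for a subject."""
        return [(p, o) for s, p, o in triples if s == subject]

    for p, o in describe(video, triples):
        print(p, "->", o)
    ```

    In practice such descriptions would be serialised with an RDF library and the `ex:` properties replaced by terms from established vocabularies, which is what gives the "representational homogeneity" the abstract refers to.
    
    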

    A framework for automatic semantic video annotation

    Get PDF
    The rapidly increasing quantity of publicly available videos has driven research into developing automatic tools for indexing, rating, searching and retrieval. Textual semantic representations, such as tagging, labelling and annotation, are often important factors in the process of indexing any video, because of their user-friendly way of representing the semantics appropriate for search and retrieval. Ideally, this annotation should be inspired by the human cognitive way of perceiving and describing videos. The difference between the low-level visual contents and the corresponding human perception is referred to as the ‘semantic gap’. Tackling this gap is even harder in the case of unconstrained videos, mainly due to the lack of any previous information about the analyzed video on the one hand, and the huge amount of generic knowledge required on the other. This paper introduces a framework for the automatic semantic annotation of unconstrained videos. The proposed framework utilizes two non-domain-specific layers: low-level visual similarity matching, and an annotation analysis that employs commonsense knowledge bases. A commonsense ontology is created by incorporating multiple structured semantic relationships. Experiments and black-box tests are carried out on standard video databases for action recognition and video information retrieval. White-box tests examine the performance of the individual intermediate layers of the framework, and the evaluation of the results and the statistical analysis show that integrating visual similarity matching with commonsense semantic relationships provides an effective approach to automated video annotation.
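    The two-layer idea, visual similarity matching followed by commonsense expansion of the matched labels, can be sketched as below. The reference features, labels, and relation table are tiny illustrative placeholders, not the paper's actual knowledge base or feature extractor.

    ```python
    import numpy as np

    # Layer-1 data: reference videos with low-level features and known labels
    reference_videos = {
        "ref1": {"feature": np.array([0.9, 0.1, 0.0]), "labels": {"running"}},
        "ref2": {"feature": np.array([0.1, 0.9, 0.0]), "labels": {"cooking"}},
    }

    # Layer-2 data: hypothetical commonsense relations, label -> related concepts
    commonsense = {
        "running": {"exercise", "outdoors"},
        "cooking": {"kitchen", "food"},
    }

    def annotate(query_feature, k=1):
        # Layer 1: cosine similarity of the query video against references
        sims = {
            name: float(query_feature @ v["feature"]
                        / (np.linalg.norm(query_feature) * np.linalg.norm(v["feature"])))
            for name, v in reference_videos.items()
        }
        best = sorted(sims, key=sims.get, reverse=True)[:k]
        labels = set().union(*(reference_videos[n]["labels"] for n in best))
        # Layer 2: expand the matched labels through commonsense relations
        expanded = labels | set().union(*(commonsense.get(l, set()) for l in labels))
        return labels, expanded

    print(annotate(np.array([0.8, 0.2, 0.1])))
    ```

    The expansion step is what narrows the semantic gap: a visually matched label such as a hypothetical "running" pulls in semantically related annotations that no low-level feature could produce on its own.
    
    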