236 research outputs found

    A study of spatial data models and their application to selecting information from pictorial databases

    People have always used visual techniques to locate information in the space surrounding them. However, with the advent of powerful computer systems and user-friendly interfaces, it has become possible to extend such techniques to stored pictorial information. Pictorial database systems have in the past primarily used mathematical or textual search techniques to locate specific pictures contained within such databases. However, these techniques have largely relied upon complex combinations of numeric and textual queries to find the required pictures. Such techniques restrict users of pictorial databases to expressing what is in essence a visual query in a numeric or character-based form. What is required is the ability to express such queries in a form that more closely matches the user's visual memory or perception of the picture required. It is suggested in this thesis that spatial techniques of search are important and that two of the most important attributes of a picture are the spatial positions and the spatial relationships of the objects it contains. It is further suggested that a database management system which allows users to indicate the nature of their query by visually placing iconic representations of objects on an interface in spatially appropriate positions is a feasible method by which pictures might be found in a pictorial database. This thesis undertakes a detailed study of spatial techniques, using a combination of historical evidence, psychological conclusions and practical examples to demonstrate that the spatial metaphor is an important concept and that pictures can be readily found by visually specifying the spatial positions of, and relationships between, the objects contained within them.
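The query-by-icon-placement idea described above can be illustrated with a minimal sketch. The data model, picture names and scoring rule here are hypothetical, not the thesis's design: each picture is indexed by the normalised centroid positions of its labelled objects, and a query (icons dragged onto a canvas) is ranked against the index by positional distance.

```python
import math

# Hypothetical index: each picture maps object labels to (x, y)
# centroids in normalised [0, 1] canvas coordinates.
pictures = {
    "beach.jpg":   {"sun": (0.8, 0.1), "boat": (0.3, 0.6)},
    "harbour.jpg": {"boat": (0.5, 0.5), "crane": (0.2, 0.3)},
}

def score(query, objects):
    """Lower is better: sum of distances between each query icon and the
    matching object; an object absent from the picture incurs a penalty."""
    total = 0.0
    for label, (qx, qy) in query.items():
        if label in objects:
            ox, oy = objects[label]
            total += math.hypot(qx - ox, qy - oy)
        else:
            total += 2.0  # fixed penalty for a missing object
    return total

def search(query):
    """Rank all pictures by ascending spatial-mismatch score."""
    return sorted(pictures, key=lambda name: score(query, pictures[name]))

# A user drags a "sun" icon to the top-right of the canvas and a
# "boat" icon to the lower-left.
ranking = search({"sun": (0.9, 0.1), "boat": (0.3, 0.7)})
```

A real system would also score pairwise spatial relations (left-of, above) between objects, which this distance-only sketch omits.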

    Content based retrieval of PET neurological images

    Medical image management has posed challenges to many researchers, especially when images have to be indexed and retrieved using visual content that is meaningful to clinicians. In this study, an image retrieval system has been developed for 3D brain PET (positron emission tomography) images. It has been found that PET neurological images can be retrieved based upon their diagnostic status using only data pertaining to their content, predominantly the visual content. During the study PET scans are spatially normalized, using existing techniques, and their visual data is quantified. The mid-sagittal plane of each individual 3D PET scan is found and then utilized in the detection of abnormal asymmetries, such as tumours or physical injuries. All the asymmetries detected are referenced to the Talairach and Tournoux anatomical atlas. The Cartesian coordinates in Talairach space of each detected lesion are employed, along with the associated anatomical structure(s), as the indices within the content-based image retrieval system. The anatomical atlas is then also utilized to isolate distinct anatomical areas that are related to a number of neurodegenerative disorders. After segmentation of the anatomical regions of interest, algorithms are applied to characterize the texture of brain intensity using Gabor filters and to elucidate the mean index ratio of activation levels. These measurements are combined to produce a single feature vector that is incorporated into the content-based image retrieval system. Experimental results on images with known diagnoses show that physical lesions such as head injuries and tumours can, to a certain extent, be detected correctly. Images with correctly detected and measured lesions are then retrieved from the database of images when a query pertains to the measured locale. Images with neurodegenerative disorder patterns have been indexed and retrieved via texture-based features. Retrieval accuracy is increased, for images from patients diagnosed with dementia, by combining the texture feature and the mean index ratio value.
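The Gabor-filter texture step can be sketched as follows. This is a minimal numpy-only illustration, not the thesis's code; kernel size, wavelength, sigma and the mean/std feature choice are assumptions, and a real pipeline would filter the segmented atlas regions rather than a synthetic slice.

```python
import numpy as np

def gabor_kernel(size=15, wavelength=4.0, theta=0.0, sigma=3.0):
    """Real part of a Gabor kernel: a Gaussian-windowed cosine grating
    oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def texture_features(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and standard deviation of the filter response at several
    orientations, concatenated into one texture feature vector."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        h, w = k.shape
        rows = image.shape[0] - h + 1
        cols = image.shape[1] - w + 1
        # Naive valid-mode 2-D correlation (avoids a SciPy dependency).
        resp = np.empty((rows, cols))
        for i in range(rows):
            for j in range(cols):
                resp[i, j] = np.sum(image[i:i + h, j:j + w] * k)
        feats += [resp.mean(), resp.std()]
    return np.array(feats)

# Synthetic 'slice' with a vertical grating whose period matches the
# kernel wavelength: the theta = 0 filter should respond most strongly.
img = np.cos(2 * np.pi * np.arange(64) / 4.0)[None, :] * np.ones((64, 1))
vec = texture_features(img)
```

The per-region feature vectors would then be concatenated with the mean index ratio of activation levels to form the single descriptor the abstract mentions.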

    The INCF Digital Atlasing Program: Report on Digital Atlasing Standards in the Rodent Brain

    The goal of the INCF Digital Atlasing Program is to provide the vision and direction necessary to make the rapidly growing collection of multidimensional data of the rodent brain (images, gene expression, etc.) widely accessible and usable to the international research community. This Digital Brain Atlasing Standards Task Force was formed in May 2008 to investigate the state of rodent brain digital atlasing, and formulate standards, guidelines, and policy recommendations.

Our first objective has been the preparation of a detailed document that includes the vision and a specific description of an infrastructure, systems and methods capable of serving the scientific goals of the community, as well as practical issues for achieving those goals. This report builds on the 1st INCF Workshop on Mouse and Rat Brain Digital Atlasing Systems (Boline et al., 2007, _Nature Precedings_, doi:10.1038/npre.2007.1046.1) and includes a more detailed analysis of both the current and desired states of digital atlasing, along with specific recommendations for achieving these goals.

    A query interface for GISLIS


    Casual Information Visualization on Exploring Spatiotemporal Data

    The goal of this thesis is to study how the diverse data on the Web that are familiar to everyone can be visualized, with special consideration of their spatial and temporal information. We introduce novel approaches and visualization techniques for different types of data content: interactively browsing large numbers of tags linked to geographic space and time, navigating and locating spatiotemporal photos or videos in collections, and, especially, providing visual support for the exploration of diverse Web content on arbitrary webpages through augmented Web browsing.

    Visually querying object-oriented databases

    Bibliography: pages 141-145. As database requirements increase, the ability to construct database queries efficiently becomes more important. The traditional means of querying a database is to write a textual query, such as writing SQL to query a relational database. Visual query languages are an alternative means of querying a database; a visual query language can embody powerful query abstraction and user feedback techniques, thereby making it potentially easier to use. In this thesis, we develop a visual query system for ODMG-compliant object-oriented databases, called QUIVER. QUIVER has comprehensive expressive power; apart from supporting data types such as sets, bags, arrays, lists, tuples, objects and relationships, it supports aggregate functions, methods and sub-queries. The language is also consistent, as constructs with similar functionality have similar visual representations. QUIVER uses the DOT layout engine to automatically lay out a query; QUIVER queries are easily constructed, as the system does not constrain the spatial arrangement of query items. QUIVER also supports a query library, allowing queries to be saved, retrieved and shared among users. A substantial part of the design has been implemented using the ODMG-compliant database system O₂, and the usability of the interface as well as the query language itself is presented. Visual queries are translated to OQL, the standard query language proposed by the ODMG, and query answers are presented using O₂ Look. During the course of our investigation, we conducted a user evaluation to compare QUIVER and OQL. The results were extremely encouraging in favour of QUIVER.
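The translation step from a visual query to OQL can be sketched in miniature. The in-memory query form, class names and attributes below are entirely hypothetical (the abstract does not describe QUIVER's internal representation); the sketch only shows the general idea of flattening a diagram into an OQL select-from-where string.

```python
# Hypothetical in-memory form of a visual query: a target class, an
# alias, attribute constraints drawn on the diagram, and the
# attributes the user chose to project.
visual_query = {
    "class": "Employee",
    "alias": "e",
    "where": [("e.salary", ">", "50000"), ("e.dept.name", "=", "'R&D'")],
    "select": ["e.name", "e.salary"],
}

def to_oql(q):
    """Flatten the query description into an OQL-style query string."""
    select = ", ".join(q["select"])
    where = " and ".join(f"{a} {op} {v}" for a, op, v in q["where"])
    return f"select {select} from {q['class']} {q['alias']} where {where}"

oql = to_oql(visual_query)
```

A faithful translator would also handle the sub-queries, aggregates and method calls the abstract mentions; this sketch covers only simple conjunctive predicates.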

    Technical augmentation of visual environmental review in the planning process

    Thesis (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 1994. Includes bibliographical references (leaves 113-114). By Gregory A. Rossel.


    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines. From a socio-economic perspective we take stock of the impact and legal consequences of these technical advances and point out future directions for research.

    Highly efficient low-level feature extraction for video representation and retrieval.

    PhD thesis. Witnessing the omnipresence of digital video media, the research community has raised the question of its meaningful use and management. Stored in immense multimedia databases, digital videos need to be retrieved and structured in an intelligent way, relying on their content and the rich semantics involved. Current content-based video indexing and retrieval systems face the problem of the semantic gap between the simplicity of the available visual features and the richness of user semantics. This work focuses on the issues of efficiency and scalability in video indexing and retrieval, to facilitate a video representation model capable of semantic annotation. A highly efficient algorithm for temporal analysis and key-frame extraction is developed. It is based on prediction information extracted directly from compressed-domain features and robust scalable analysis in the temporal domain. Furthermore, a hierarchical quantisation of the colour features in the descriptor space is presented. Derived from the extracted set of low-level features, a video representation model that enables semantic annotation and contextual genre classification is designed. Results demonstrate the efficiency and robustness of the temporal analysis algorithm, which runs in real time while maintaining the high precision and recall of the detection task. Adaptive key-frame extraction and summarisation achieve a good overview of the visual content, while the colour quantisation algorithm efficiently creates a hierarchical set of descriptors. Finally, the video representation model, supported by the genre classification algorithm, achieves excellent results in an automatic annotation system by linking video clips with a limited lexicon of related keywords.
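Temporal analysis with key-frame extraction can be illustrated with a deliberately simplified stand-in. The thesis works on compressed-domain prediction information; the sketch below substitutes a pixel-histogram difference as the shot-boundary cue, and the threshold, bin count and first-frame-per-shot policy are assumptions made for illustration.

```python
def histogram(frame, bins=8, max_val=256):
    """Coarse greyscale histogram of a frame (a flat list of pixels)."""
    h = [0] * bins
    for p in frame:
        h[p * bins // max_val] += 1
    return h

def hist_diff(a, b):
    """L1 distance between two histograms."""
    return sum(abs(x - y) for x, y in zip(a, b))

def shot_boundaries(frames, threshold):
    """Indices where the consecutive-frame histogram difference exceeds
    the threshold -- a simple stand-in for compressed-domain cues."""
    hists = [histogram(f) for f in frames]
    return [i for i in range(1, len(frames))
            if hist_diff(hists[i - 1], hists[i]) > threshold]

def key_frames(frames, threshold):
    """Pick the first frame of each detected shot as its key-frame."""
    return [0] + shot_boundaries(frames, threshold)

# Two synthetic 'shots': five dark frames followed by five bright ones.
frames = [[10] * 64] * 5 + [[200] * 64] * 5
keys = key_frames(frames, threshold=32)
```

Operating directly on motion-vector and prediction-mode data, as the thesis does, avoids decoding frames at all, which is where the claimed real-time performance comes from.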