
    Historical collaborative geocoding

    The latest developments in digital technologies have provided large data sets that can increasingly easily be accessed and used. These data sets often contain indirect localisation information, such as historical addresses. Historical geocoding is the process of transforming this indirect localisation information into direct localisation that can be placed on a map, which enables spatial analysis and cross-referencing. Many efficient geocoders exist for current addresses, but they do not deal with the temporal aspect and are based on a strict hierarchy (..., city, street, house number) that is hard or impossible to use with historical data. Indeed, historical data are full of uncertainties (temporal aspect, semantic aspect, spatial precision, confidence in the historical source, ...) that cannot be resolved, as there is no way to go back in time to check. We propose an open-source, open-data, extensible solution for geocoding that is based on building gazetteers composed of geohistorical objects extracted from historical topographical maps. Once the gazetteers are available, geocoding a historical address is a matter of finding the geohistorical object in the gazetteers that best matches the historical address. The matching criteria are customisable and cover several dimensions (fuzzy semantic, fuzzy temporal, scale, spatial precision, ...). As the goal is to facilitate historical work, we also propose web-based user interfaces that help geocode addresses (one address or batch mode) and display them over current or historical topographical maps, so that results can be checked and collaboratively edited. The system is tested on the city of Paris for the 19th-20th centuries, shows a high return rate and is fast enough to be used interactively. Comment: Working paper.
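
    To make the matching step concrete, here is a minimal sketch of scoring gazetteer entries on fuzzy semantic and fuzzy temporal dimensions and keeping the best match. The class, the weights, and the linear temporal decay are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): score a historical address
# against gazetteer entries on fuzzy semantic and fuzzy temporal dimensions.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class GeoHistoricalObject:          # hypothetical gazetteer entry
    name: str
    valid_from: int                 # first year the object is attested
    valid_to: int                   # last year the object is attested

def semantic_score(query: str, candidate: str) -> float:
    """Fuzzy string similarity in [0, 1]."""
    return SequenceMatcher(None, query.lower(), candidate.lower()).ratio()

def temporal_score(year: int, obj: GeoHistoricalObject, tolerance: int = 10) -> float:
    """1.0 inside the validity interval, decaying linearly outside it."""
    if obj.valid_from <= year <= obj.valid_to:
        return 1.0
    gap = min(abs(year - obj.valid_from), abs(year - obj.valid_to))
    return max(0.0, 1.0 - gap / tolerance)

def best_match(query: str, year: int, gazetteer: list[GeoHistoricalObject],
               w_sem: float = 0.6, w_time: float = 0.4) -> GeoHistoricalObject:
    """Return the gazetteer object with the best combined score."""
    return max(gazetteer,
               key=lambda o: w_sem * semantic_score(query, o.name)
                             + w_time * temporal_score(year, o))
```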

    Proceedings of the ECIR2010 workshop on information access for personal media archives (IAPMA2010), Milton Keynes, UK, 28 March 2010

    Towards e-Memories: challenges of capturing, summarising, presenting, understanding, using, and retrieving relevant information from heterogeneous data contained in personal media archives. These are the proceedings of the inaugural workshop on “Information Access for Personal Media Archives”. It is now possible to archive much of our life experience in digital form using a variety of sources, e.g. blogs written, tweets made, social network status updates, photographs taken, videos seen, music heard, physiological monitoring, locations visited and environmentally sensed data of those places, details of people met, etc. Information can be captured from a myriad of personal information devices, including desktop computers, PDAs, digital cameras, video and audio recorders, and various sensors, including GPS, Bluetooth, and biometric devices. At this workshop, research from diverse disciplines was presented on how we can advance towards the goal of effective capture, retrieval and exploration of e-memories.

    A Novel Approach to Face Recognition using Image Segmentation based on SPCA-KNN Method

    In this paper we propose a novel method for face recognition using a hybrid SPCA-KNN (SIFT-PCA-KNN) approach. The proposed method consists of three parts. The first part preprocesses face images using a graph-based algorithm and the SIFT (Scale Invariant Feature Transform) descriptor; the graph-based topology is used for matching two face images. In the second part, eigenvalues and eigenvectors are extracted from each input face image. The goal is to extract the important information from the face data and to represent it as a set of new orthogonal variables called principal components. In the final part, a nearest neighbor classifier is designed for classifying the face images based on the SPCA-KNN algorithm. The algorithm has been tested on 100 different subjects (15 images for each class). The experimental results show that the proposed method has a positive effect on overall face recognition performance and outperforms the other examined methods.
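
    A minimal sketch, using scikit-learn, of just the eigen-decomposition and nearest neighbor stages described above; the SIFT descriptors and graph-based preprocessing are omitted, and the parameter values are illustrative assumptions rather than the paper's settings.

```python
# Sketch of the PCA + nearest-neighbor stages of an eigenface-style pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train(face_vectors: np.ndarray, labels: np.ndarray, n_components: int = 50):
    """face_vectors: (n_samples, n_pixels) flattened face images."""
    pca = PCA(n_components=n_components)         # eigenvectors of the face data
    projected = pca.fit_transform(face_vectors)  # principal-component coordinates
    knn = KNeighborsClassifier(n_neighbors=1)    # nearest neighbor classifier
    knn.fit(projected, labels)
    return pca, knn

def predict(pca: PCA, knn: KNeighborsClassifier, face: np.ndarray):
    """Classify one flattened face image by its nearest projected neighbour."""
    return knn.predict(pca.transform(face.reshape(1, -1)))[0]
```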

    Indexing, learning and content-based retrieval for special purpose image databases

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing image databases becomes increasingly urgent. We provide an overview of the current state of the art by taking a tour along the entire…

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Interactive real-time three-dimensional visualisation of virtual textiles

    Virtual textile databases provide a cost-efficient alternative to the use of existing hardcover sample catalogues. By taking advantage of the high performance features offered by the latest generation of programmable graphics accelerator boards, it is possible to combine photometric stereo methods with 3D visualisation methods to implement a virtual textile database. In this thesis, we investigate and combine rotation invariant texture retrieval with interactive visualisation techniques. We use a generic 3D surface representation that allows us to combine real-time interactive 3D visualisation methods with present-day texture retrieval methods. We begin by investigating the most suitable data format for the 3D surface representation, identify relief mapping combined with Bézier surfaces as the most suitable 3D surface representation for our needs, and go on to describe how these representations can be combined for real-time rendering. We then investigate ten different methods of implementing rotation invariant texture retrieval using feature vectors. Our results show that first order statistics in the form of histogram data are very effective for discriminating colour albedo information, while rotation invariant gradient maps are effective for distinguishing between different types of micro-geometry using either first or second order statistics.
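
    As a brief illustration of why first order histogram statistics suit rotation invariant retrieval, here is a sketch of ours (not the thesis code): rotating an image permutes pixel positions but not pixel values, so a colour histogram is unchanged, and histogram intersection is one standard way to compare two such feature vectors.

```python
# Sketch: rotation-invariant colour histogram feature and a similarity measure.
import numpy as np

def histogram_feature(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalised per-channel histogram of an (H, W, 3) uint8 image.
    Rotation permutes pixel positions, not values, so this is invariant."""
    channels = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
                for c in range(3)]
    v = np.concatenate(channels).astype(float)
    return v / v.sum()

def histogram_intersection(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [0, 1] between two normalised histograms."""
    return float(np.minimum(a, b).sum())
```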

    Giving eyes to ICT!, or How does a computer recognize a cow?

    The system developed by Schouten and other researchers at CWI rests on describing images with the help of fractal geometry. Human perception turns out to be so efficient partly because it works heavily with similarities, so it is natural to look for mathematical methods that do the same. Schouten therefore investigated image coding using fractals. Fractals are self-similar geometric figures, built up through repeated transformation (iteration) of a simple base pattern, which thereby branches at ever smaller scales. At every level of detail a fractal resembles itself (the Droste effect). With fractals, one can fairly easily produce deceptively realistic images of nature. Fractal image coding assumes that the reverse also holds: an image can be stored effectively as the base patterns of a small number of fractals, together with the prescription for reconstructing the original image from them. The system developed at CWI, in collaboration with researchers from Leuven, is partly based on this method.
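
    To make the iteration idea concrete, here is a small illustrative sketch (ours, not the CWI system): an iterated function system applies a few contractive affine maps over and over, and the set of points this produces is a self-similar figure, here the Sierpinski triangle.

```python
# Sketch: sampling the attractor of an iterated function system (IFS).
import numpy as np

# Three affine maps, each contracting the plane by half towards one corner
# of a triangle; together they define the IFS.
MAPS = [
    lambda p: 0.5 * p,
    lambda p: 0.5 * p + np.array([0.5, 0.0]),
    lambda p: 0.5 * p + np.array([0.25, 0.5]),
]

def chaos_game(n_points: int = 50_000, seed: int = 0) -> np.ndarray:
    """Sample the attractor by applying a randomly chosen map each step."""
    rng = np.random.default_rng(seed)
    p = np.zeros(2)
    points = np.empty((n_points, 2))
    for i in range(n_points):
        p = MAPS[rng.integers(3)](p)   # one iteration of the base pattern
        points[i] = p
    return points  # scatter-plot these to see the self-similar figure
```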

    Analysis of textural image features for content based retrieval

    Digital archaeology and virtual reality with archaeological artefacts have been quite hot research topics in recent years [55,56]. This thesis is a preparation study to build the background knowledge required for research projects that aim to computerise the reconstruction of archaeological data like pots, marbles or mosaic pieces by shape and textural features. Digitalization of the cultural heritage may shorten the reconstruction time, which currently takes tens of years [61]; it will improve reconstruction robustness by incorporating the machine vision algorithms available in the literature and the experience of remote experts working together on a no-cost virtual object. Digitalization can also ease the exhibition of the results to the general public through multiuser media applications like internet-based virtual museums or virtual tours. Finally, it will make it possible to archive valuable objects with their original texture and shape for many years, away from the physical risks that the artefacts currently face. In the literature [1,2,3,5,8,11,14,15,16], texture analysis techniques have been thoroughly studied and implemented for defect analysis by image processing and machine vision scientists. In recent years, these algorithms have begun to be used for similarity analysis in content-based image retrieval [1,4,10]. For retrieval systems, the pressing problems seem to be building efficient and fast systems; robust image features have therefore not yet received enough attention. This document is the first performance review of the texture algorithms developed for retrieval and defect analysis together. The results and experience gained during the thesis study will be used to support work aiming to solve the 2D puzzle problem using textural continuity methods on archaeological artefacts; see Appendix A for more detail. The first chapter is devoted to how medicine and psychology try to explain the similarity and continuity analysis that our biological model, human vision, accomplishes daily. In the second chapter, content-based image retrieval systems, their performance criteria, similarity distance metrics and the available systems are summarised. For the thesis work, a rich texture database has been built, including over 1000 images in total. For the ease of the users, a GUI and a platform for content-based retrieval have been designed, and the first version of a content-based search engine has been coded, which takes the source of internet pages, parses the metatags of images and downloads the files in a loop controlled by our texture algorithms. The preprocessing and pattern analysis algorithms required for robust textural feature processing have been implemented. In the last section, the most important textural feature extraction methods are studied in detail, with performance results of the code written in Matlab and run on the different databases developed.
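
    As a concrete example of the simplest family of features reviewed here, a sketch (ours, not the thesis's Matlab code) of first-order statistical texture features derived from the grey-level histogram:

```python
# Sketch: first-order (histogram-based) texture statistics of a greyscale image.
import numpy as np

def first_order_features(gray: np.ndarray) -> np.ndarray:
    """Mean, variance, skewness, kurtosis, energy and entropy of the
    grey-level histogram of a 2-D uint8 image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                        # grey-level probabilities
    levels = np.arange(256)
    mean = (levels * p).sum()
    var = ((levels - mean) ** 2 * p).sum()
    std = np.sqrt(var) if var > 0 else 1.0       # guard against flat images
    skew = ((levels - mean) ** 3 * p).sum() / std ** 3
    kurt = ((levels - mean) ** 4 * p).sum() / std ** 4
    energy = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return np.array([mean, var, skew, kurt, energy, entropy])
```

    A retrieval system would then rank database images by a distance, e.g. Euclidean, between such feature vectors.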