    Uncertainty Control for Reliable Video Understanding on Complex Environments

    The most popular applications of video understanding are those related to video surveillance (e.g. alarms, abnormal behaviours, expected events, access control). Video understanding also has several other applications of high social impact, such as medical supervision, traffic control, detection of violent acts, and crowd behaviour analysis. We propose a new generic video understanding approach able to extract and learn valuable information from noisy video scenes for real-time applications. The approach comprises motion segmentation, object classification, tracking, and event learning phases. This work focuses on building the fundamental blocks that allow proper management of data uncertainty in every phase of the video understanding process. The main contributions of the proposed approach are: (i) a new algorithm for tracking multiple objects in noisy environments; (ii) the utilisation of reliability measures for modelling uncertainty in data and for selecting valuable information extracted from noisy data; (iii) an improved capability of the tracker to manage multiple visual evidence-target associations; (iv) the combination of 2D image data with 3D information in a dynamics model governed by reliability measures for proper control of uncertainty in the data; and (v) a new approach to event recognition through incremental event learning, driven by reliability measures that select the most stable and relevant data.
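    As a loose illustration of contributions (ii) and (iv), the sketch below shows how per-measurement reliability scores can weight the fusion of noisy evidence about a tracked object. It is a minimal Python sketch; the names, the scalar state, and the weighting scheme are assumptions for illustration, not the authors' actual dynamics model.

        # Minimal sketch of reliability-weighted evidence fusion, assuming a
        # per-measurement reliability score in [0, 1]. Illustrative only,
        # not the paper's actual tracking model.
        from dataclasses import dataclass

        @dataclass
        class Measurement:
            value: float        # e.g. an object's estimated width in pixels
            reliability: float  # 0.0 = pure noise, 1.0 = fully trusted

        def fuse(measurements):
            """Fuse noisy measurements into one estimate, weighting each by
            its reliability so unreliable evidence contributes little."""
            total = sum(m.reliability for m in measurements)
            if total == 0.0:
                return None  # no trustworthy evidence this frame
            return sum(m.value * m.reliability for m in measurements) / total

        # Example: a sharp 2D detection, a blurry one, and a 3D-derived estimate.
        print(fuse([Measurement(42.0, 0.9),
                    Measurement(55.0, 0.2),
                    Measurement(44.0, 0.7)]))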

    Designing an automated prototype tool for preservation quality metadata extraction for ingest into digital repository

    We present a viable framework for the automated extraction of preservation-quality metadata, adjusted to meet the needs of ingest into digital repositories. It has three distinctive features: wide coverage, specialisation, and an emphasis on quality. Wide coverage is achieved through a distributed system of tool repositories, which helps the framework operate over a broad range of document object types. Specialisation is maintained by selecting the most appropriate metadata extraction tool for each case, based on identification of the digital object's genre. Quality is sustained by introducing control points at selected stages of the system's workflow. The integration of these three features as components of ingest into digital repositories is a defining step forward in the current quest for improved management of digital resources.
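    The genre-driven tool selection and the quality control points lend themselves to a short sketch. The following Python is a hypothetical rendering of that workflow; the registry contents, field names, and helper functions are assumptions for illustration, not the prototype's actual API.

        # Hypothetical sketch: identify the object's genre, pick the most
        # specialised extractor registered for it, and gate the result with
        # a quality control point. All names are illustrative assumptions.
        TOOL_REGISTRY = {
            "scientific-article": "grobid",
            "image":              "exiftool",
            "web-page":           "html-meta-parser",
        }

        REQUIRED_FIELDS = {"title", "creator", "date"}  # one control point

        def identify_genre(path: str) -> str:
            # Placeholder: a real system would inspect format and content.
            return "scientific-article" if path.endswith(".pdf") else "web-page"

        def run_tool(tool: str, path: str) -> dict:
            # Stub standing in for invoking the external extraction tool.
            return {"title": "Example", "creator": "Unknown", "date": "2024"}

        def extract_metadata(path: str) -> dict:
            tool = TOOL_REGISTRY.get(identify_genre(path), "generic-extractor")
            record = run_tool(tool, path)
            missing = REQUIRED_FIELDS - record.keys()
            if missing:  # quality control point: reject incomplete records
                raise ValueError(f"{tool} left fields empty: {sorted(missing)}")
            return record

        print(extract_metadata("paper.pdf"))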

    Towards memory supporting personal information management tools

    In this article we discuss re-retrieving personal information objects and relate the task to recovering from lapses in memory. We propose that, fundamentally, it is lapses in memory that impede users from successfully re-finding the information they need. Our hypothesis is that by learning more about memory lapses in non-computing contexts, and about how people cope with and recover from these lapses, we can better inform the design of PIM tools and improve users' ability to re-access and re-use objects. We describe a diary study investigating the everyday memory problems of 25 people from a wide range of backgrounds. Based on the findings, we present a series of principles that we hypothesize will improve the design of personal information management tools. This hypothesis is validated by an evaluation of a tool for managing personal photographs, designed with respect to our findings. The evaluation suggests that users' performance when re-finding objects can be improved by building personal information management tools that support characteristics of human memory.

    Data-Mining a Large Digital Sky Survey: From the Challenges to the Scientific Results

    The analysis and efficient scientific exploration of the Digital Palomar Observatory Sky Survey (DPOSS) represent a major technical challenge. The input data set consists of 3 Terabytes of pixel information and contains a few billion sources. We describe some of the specific scientific problems posed by the data, including searches for distant quasars and clusters of galaxies, and the data-mining techniques we are exploring to address them. Machine-assisted discovery methods may become essential for the analysis of such multi-Terabyte data sets. New and future approaches involve unsupervised classification and clustering analysis in the Giga-object data space, including various Bayesian techniques. In addition to searches for known types of objects in this database, these techniques may also offer the possibility of discovering previously unknown, rare types of astronomical objects. (Invited paper, to appear in Applications of Digital Image Processing XX, ed. A. Tescher, Proc. SPIE vol. 3164.)
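    To make the clustering idea concrete, here is a small Python sketch of unsupervised classification in a catalogue's parameter space, in the spirit of the mixture-model and Bayesian methods the abstract mentions. The synthetic features, population parameters, and library choice are assumptions for illustration, not the DPOSS pipeline.

        # Illustrative sketch: fit a Gaussian mixture to a fake two-feature
        # catalogue; small, compact components flag rare candidate
        # populations (e.g. high-redshift quasars) for follow-up.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)

        # Synthetic catalogue: magnitude and colour for two populations.
        stars = rng.normal([18.0, 0.5], [1.0, 0.1], size=(500, 2))
        quasars = rng.normal([20.0, 1.8], [0.8, 0.3], size=(50, 2))
        catalogue = np.vstack([stars, quasars])

        gmm = GaussianMixture(n_components=2, random_state=0).fit(catalogue)
        labels = gmm.predict(catalogue)
        print("sources per component:", np.bincount(labels))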