
    Towards a comprehensive 3D dynamic facial expression database

    Human faces play an important role in everyday life, including the expression of personal identity, emotion and intentionality, along with a range of biological functions. The human face has also become the subject of considerable research effort, and there has been a shift towards understanding it using stimuli of increasingly realistic formats. In the current work, we outline progress made in the production of a database of facial expressions in arguably the most realistic format, 3D dynamic. A suitable architecture for capturing such 3D dynamic image sequences is described and then used to record seven expressions (fear, disgust, anger, happiness, surprise, sadness and pain) by 10 actors at 3 levels of intensity (mild, normal and extreme). We also present details of a psychological experiment that was used to formally evaluate the accuracy of the expressions in a 2D dynamic format. The result is an initial, validated database for researchers and practitioners. The goal is to scale up the work with more actors and expression types.
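The recording protocol above implies a fixed capture grid of 7 expressions x 3 intensities x 10 actors. A minimal sketch of enumerating that grid as metadata records (all field names are hypothetical, not taken from the database itself):

```python
from itertools import product

EXPRESSIONS = ["fear", "disgust", "anger", "happiness", "surprise", "sadness", "pain"]
INTENSITIES = ["mild", "normal", "extreme"]
ACTORS = range(1, 11)  # 10 actors

# One metadata record per recorded 3D dynamic sequence.
capture_plan = [
    {"actor": a, "expression": e, "intensity": i}
    for a, e, i in product(ACTORS, EXPRESSIONS, INTENSITIES)
]
print(len(capture_plan))  # 10 actors x 7 expressions x 3 intensities = 210 sequences
```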

    Content-based Image Retrieval by Spatial Similarity

    Similarity-based retrieval of images is an important task in image databases. Most user queries aim to retrieve those database images that are spatially similar to a query image. In defence applications, for example, one wants to know how many armoured vehicles (battle tanks, portable missile-launching vehicles, etc.) are moving towards a position, so that a counter-strategy can be decided. Content-based spatial similarity retrieval can be used to locate the spatial relationships of various objects in a specific area from aerial photographs and to retrieve images similar to the query image from an image database. A content-based image retrieval system that efficiently and effectively retrieves information from a defence image database is presented, along with the architecture for retrieving images by spatial similarity. A robust algorithm, SIMdef, is proposed for retrieval by spatial similarity; it utilises both directional and topological relations for computing similarity between images, retrieves similar images, and recognises images even after they undergo modelling transformations (translation, scale and rotation). A case study using the SIMdef algorithm on some common objects found in defence applications has also been carried out.
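The abstract does not specify SIMdef itself, but the idea of scoring similarity from directional and topological relations between labelled objects can be illustrated with a toy sketch (all function names and thresholds are hypothetical; unlike the real algorithm, this sketch is invariant only under translation, not scale or rotation):

```python
from itertools import combinations
from math import atan2, degrees

def directional_relation(a, b):
    """Quantise the direction from centroid a to centroid b into 8 sectors."""
    ang = degrees(atan2(b[1] - a[1], b[0] - a[0])) % 360
    return int(ang // 45)  # 0..7, e.g. 0 = east, 2 = north

def topological_relation(a, b, touch_dist=1.0):
    """Crude disjoint/adjacent test based on centroid distance."""
    d = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return "adjacent" if d <= touch_dist else "disjoint"

def similarity(objs1, objs2):
    """Fraction of object pairs whose directional AND topological relations
    agree in both images; assumes the same labelled objects in each."""
    pairs = list(combinations(sorted(objs1), 2))
    if not pairs:
        return 1.0
    hits = sum(
        directional_relation(objs1[p], objs1[q]) == directional_relation(objs2[p], objs2[q])
        and topological_relation(objs1[p], objs1[q]) == topological_relation(objs2[p], objs2[q])
        for p, q in pairs
    )
    return hits / len(pairs)
```

Because only relative positions enter the score, translating every object by the same offset leaves the similarity unchanged; handling scale and rotation, as SIMdef does, would require normalising the relations first.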

    A real-time distributed analysis automation for hurricane surface wind observations

    From 1993 until 1999, the Hurricane Research Division of the National Oceanic and Atmospheric Administration (NOAA) produced real-time analyses of surface wind observations to help determine a storm's wind intensity and extent. Limitations of the real-time analysis system included platform and filesystem dependency, a lack of data integrity, and unsuitability for Internet deployment. In 2000, a new system was developed, built upon a Java prototype of a quality-control graphical client interface for wind observations and an object-relational database. The objective was to integrate them, in a distributed object approach, with the legacy code responsible for the actual real-time wind analysis and image product generation. Common Object Request Broker Architecture (CORBA) was evaluated, but Java Remote Method Invocation (RMI) offered important advantages in terms of reuse and deployment. Even more substantial, though, were the efforts towards object-oriented redesign, implementation and testing of the quality-control interface and its database performance interaction. As a result, a full-featured application can now be launched from the Web, potentially accessible by tropical cyclone forecast and warning centers worldwide.
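The paper's distributed object approach used Java RMI; as a language-neutral illustration of the same remote-invocation pattern, Python's standard `xmlrpc` gives a comparable client/server shape (the service and method names here are hypothetical, not from the actual system):

```python
def flag_observation(obs_id, reason):
    """Hypothetical QC operation: mark a wind observation as suspect."""
    return {"obs_id": obs_id, "status": "flagged", "reason": reason}

# Server side: expose the operation as a remote object, much as RMI
# exports a Java interface (left commented out because it blocks):
#
#   from xmlrpc.server import SimpleXMLRPCServer
#   server = SimpleXMLRPCServer(("localhost", 8000))
#   server.register_function(flag_observation)
#   server.serve_forever()
#
# Client side: invoke it as if it were a local call:
#
#   from xmlrpc.client import ServerProxy
#   ServerProxy("http://localhost:8000").flag_observation(42, "wind-speed spike")
```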

    A distributed camera system for multi-resolution surveillance

    We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communication and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines whether active zoom cameras should be dispatched to observe a particular target, and this message is effected by writing demands into another database table. We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its applicability to multi-camera systems for intelligent surveillance.
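The database-as-message-bus idea above can be sketched with an in-memory SQLite stand-in for the central repository (table and column names are illustrative, not the system's actual schema):

```python
import sqlite3

# In-memory stand-in for the central SQL repository.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE observations (camera TEXT, target INTEGER, x REAL, y REAL)")
db.execute("CREATE TABLE demands (camera TEXT, target INTEGER, action TEXT)")

# A static-camera tracking process inserts its detections...
db.execute("INSERT INTO observations VALUES ('static0', 7, 12.5, 3.1)")

# ...and the supervisor process polls them and dispatches the PTZ camera by
# writing a demand into the other table; communication stays asynchronous
# because the two processes only ever touch the database.
for camera, target, x, y in db.execute("SELECT * FROM observations"):
    db.execute("INSERT INTO demands VALUES (?, ?, ?)", ("ptz0", target, "observe"))

print(db.execute("SELECT * FROM demands").fetchall())  # [('ptz0', 7, 'observe')]
```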

    Cognitive visual tracking and camera control

    Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution to extract, in real time, high-level information from an observed scene and generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. Such a high-level feedback control loop, which is the main novelty of our work, serves to reduce uncertainties in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system using SQL tables as virtual communication channels, and Situation Graph Trees for knowledge representation, inference and high-level camera control. A set of experiments in a surveillance scenario shows the effectiveness of our approach and its potential for real applications of cognitive vision.
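The high-level feedback loop maps an inferred situation about a tracked person to a camera command. A real Situation Graph Tree encodes such rules as a traversable graph with specialisation edges; in this hedged sketch, a flat rule table stands in for it and every rule, threshold and command name is hypothetical:

```python
def infer_situation(track):
    """Toy situation inference from a track's state."""
    if track["speed"] < 0.2 and track["time_in_zone"] > 30:
        return "loitering"
    if track["zone"] == "entrance":
        return "entering"
    return "moving"

COMMANDS = {
    "loitering": "zoom_on_target",   # reduce uncertainty with a close-up view
    "entering":  "acquire_target",
    "moving":    "keep_wide_view",
}

def camera_command(track):
    """Close the loop: scene understanding drives PTZ camera control."""
    return COMMANDS[infer_situation(track)]
```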

    PynPoint: a modular pipeline architecture for processing and analysis of high-contrast imaging data

    The direct detection and characterization of planetary and substellar companions at small angular separations is a rapidly advancing field. Dedicated high-contrast imaging instruments deliver unprecedented sensitivity, enabling detailed insights into the atmospheres of young low-mass companions. In addition, improvements in data reduction and PSF subtraction algorithms are equally relevant for maximizing the scientific yield, both from new and archival data sets. We aim to develop a generic and modular data reduction pipeline for processing and analysis of high-contrast imaging data obtained with pupil-stabilized observations. The package should be scalable and robust for future implementations and, in particular, well suited to the 3-5 micron wavelength range, where typically (tens of) thousands of frames have to be processed and an accurate subtraction of the thermal background emission is critical. PynPoint is written in Python 2.7 and applies various image processing techniques, as well as statistical tools for analyzing the data, building on open-source Python packages. The current version of PynPoint has evolved from an earlier version that was developed as a PSF subtraction tool based on PCA. The architecture of PynPoint has been redesigned with the core functionalities decoupled from the pipeline modules. Modules have been implemented for dedicated processing and analysis steps, including background subtraction, frame registration, PSF subtraction, photometric and astrometric measurements, and estimation of detection limits. The pipeline package enables end-to-end data reduction of pupil-stabilized data and supports classical dithering and coronagraphic data sets. As an example, we processed archival VLT/NACO L' and M' data of beta Pic b, reassessed the planet's brightness and position with an MCMC analysis, and provide a derivation of the photometric error budget. (Comment: 16 pages, 9 figures, accepted for publication in A&A. PynPoint is available at https://github.com/PynPoint/PynPoin)
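The key architectural claim, core functionality decoupled from pipeline modules, can be sketched in a few lines. This is an illustration of the design idea only, not PynPoint's actual API; the class names and the toy background step are invented:

```python
class PipelineModule:
    """Interface the core depends on; concrete steps subclass it."""
    name = "base"
    def run(self, frames):
        raise NotImplementedError

class BackgroundSubtraction(PipelineModule):
    name = "background_subtraction"
    def run(self, frames):
        mean = sum(frames) / len(frames)
        return [f - mean for f in frames]  # toy stand-in for the real step

class Pipeline:
    """Core: runs modules in order, agnostic to what each one does."""
    def __init__(self):
        self.modules = []
    def add_module(self, module):
        self.modules.append(module)
    def run(self, frames):
        for module in self.modules:
            frames = module.run(frames)
        return frames
```

New processing or analysis steps (frame registration, PSF subtraction, detection limits) then plug in as further `PipelineModule` subclasses without touching the core.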

    The INCF Digital Atlasing Program: Report on Digital Atlasing Standards in the Rodent Brain

    The goal of the INCF Digital Atlasing Program is to provide the vision and direction necessary to make the rapidly growing collection of multidimensional data of the rodent brain (images, gene expression, etc.) widely accessible and usable to the international research community. The Digital Brain Atlasing Standards Task Force was formed in May 2008 to investigate the state of rodent brain digital atlasing, and to formulate standards, guidelines, and policy recommendations.

Our first objective has been the preparation of a detailed document that sets out the vision and a specific description of the infrastructure, systems and methods capable of serving the scientific goals of the community, as well as the practical issues involved in achieving those goals. This report builds on the 1st INCF Workshop on Mouse and Rat Brain Digital Atlasing Systems (Boline et al., 2007, _Nature Precedings_, doi:10.1038/npre.2007.1046.1) and includes a more detailed analysis of both the current state and desired state of digital atlasing, along with specific recommendations for achieving these goals.

    A Portable Active Binocular Robot Vision Architecture for Scene Exploration

    We present a portable active binocular robot vision architecture that integrates a number of visual behaviours. This vision architecture inherits the abilities of vergence, localisation, recognition and simultaneous identification of multiple target object instances. To demonstrate the portability of our vision architecture, we carry out qualitative and comparative analysis under two different hardware robotic settings, feature extraction techniques and viewpoints. Our portable active binocular robot vision architecture achieved average recognition rates of 93.5% for fronto-parallel viewpoints and 83% for anthropomorphic viewpoints, respectively.