
    INCREMENTAL QUERY PROCESSING IN INFORMATION FUSION SYSTEMS

    This dissertation studies the methodology and techniques of information retrieval in fusion systems, where information referring to the same objects is assessed on the basis of data from multiple heterogeneous data sources. A wide range of important applications can be categorized as information fusion systems, e.g. multisensor surveillance systems, local search systems, and multisource medical diagnosis systems. Up to the time of this dissertation, most information retrieval methods in fusion systems have been highly domain-specific, and most query systems do not address the fusion problem adequately. In this dissertation, I describe a broadly applicable query-based information retrieval approach for general fusion systems: user information needs are interpreted as fusion queries, and query processing techniques such as the source dependence graph (SDG), query refinement, and query optimization are described. To remove the query-building bottleneck, a novel incremental query method is proposed, which eliminates the accumulated complexity in query building as well as in query execution. A query pattern is defined to capture and reuse repeated structures in incremental queries, and several new techniques for query pattern matching and learning are described in detail. Experiments in a real-world multisensor fusion system, the intelligent vehicle tracking (IVET) system, are presented to validate the proposed methodology and techniques.
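    The core idea of a source dependence graph can be sketched as follows. This is a minimal illustration only, assuming hypothetical class and query names; it is not the dissertation's actual data structure or API. The graph records which fused queries depend on which sources, so that when a source changes, only the affected queries need re-execution — the essence of incremental processing.

    ```python
    # Illustrative sketch of a source dependence graph (SDG).
    # All names here (SourceDependenceGraph, "camera_1", "track_vehicle", ...)
    # are hypothetical examples, not taken from the dissertation.
    from collections import defaultdict

    class SourceDependenceGraph:
        """Maps each data source to the fusion queries that depend on it."""
        def __init__(self):
            self.edges = defaultdict(set)  # source -> set of dependent queries

        def add_dependency(self, source, query):
            self.edges[source].add(query)

        def affected_queries(self, changed_source):
            # Only queries depending on the changed source are re-run,
            # avoiding a full re-evaluation of every fusion query.
            return self.edges[changed_source]

    sdg = SourceDependenceGraph()
    sdg.add_dependency("camera_1", "track_vehicle")
    sdg.add_dependency("radar_2", "track_vehicle")
    sdg.add_dependency("gps_feed", "locate_vehicle")

    print(sdg.affected_queries("camera_1"))  # {'track_vehicle'}
    ```

    In a vehicle-tracking setting like IVET, this lets an update from one sensor trigger re-execution of only the queries fused over that sensor.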

    Face Alive Icons

    Facial expression is one of the primary means of human communication. However, realistic facial expression images are not used in popular communication tools on portable devices because of difficulties in: 1) acquisition; 2) transference; 3) display. In this paper, we propose a system that tackles these problems by synthesizing facial expression images from photographs for devices with limited processing power, network bandwidth, and display area, referred to as the “LLL” environment. The facial images are reduced to small-sized face alive icons (FAI). Expressions are decomposed into expression-unrelated facial features and expression-related expressional features. The common features are captured and reused across expressions by a discrete model built through statistical analysis of the training dataset. Semantic synthesis rules are also constructed, which reveal the inner relations of expressions. Verified in an experimental prototype system, the approach can produce acceptable facial expression images using far less computing, network, and storage resource than traditional approaches.
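    The decomposition idea — shared, expression-unrelated features reused across expressions, with only a small expression-specific component varying — can be sketched as below. This is an illustrative toy, not the paper's model; the feature vectors, delta values, and function names are invented for the example.

    ```python
    # Toy sketch of expression decomposition for face alive icons (FAI).
    # An icon is modeled as shared facial features plus a per-expression delta;
    # only the small delta must be stored/sent, which suits the "LLL"
    # (limited processing, bandwidth, display) environment.
    # All values and names below are hypothetical.

    def synthesize_icon(common_features, expression_delta):
        """Combine expression-unrelated features with an expression-specific offset."""
        return [c + d for c, d in zip(common_features, expression_delta)]

    # Shared features extracted once from the photograph (illustrative numbers).
    common = [0.5, 0.2, 0.8]

    # Small per-expression deltas; these are the only expression-related data.
    deltas = {"smile": [0.1, 0.0, -0.1], "neutral": [0.0, 0.0, 0.0]}

    smile_icon = synthesize_icon(common, deltas["smile"])
    print([round(v, 2) for v in smile_icon])  # [0.6, 0.2, 0.7]
    ```

    Because the common component is reused across every expression, storage and transmission cost grow only with the deltas, not with the full images.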