
    Food for talk: photo frames to support social connectedness for elderly people in a nursing home

    Social connectedness is crucial to a person's well-being. A case study was conducted to test whether the social connectedness of elderly people living in a nursing home and their family and friends can be improved through a photo frame. A SIM-based photo frame was used to keep the elderly people informed about the comings and goings of their loved ones. Eight elderly people living in a nursing home participated in the case study for 6-7 weeks. A content analysis of the photos revealed that they were often related to special events or holidays in the past. Interviews indicated that the photos mainly served as food for talk, i.e. they initiated conversations among the elderly people, with their family members, and with the healthcare professionals. All participants liked the photo frame; it did not serve as a means of exchanging news, but as a catalyst to talk, mainly about the past.

    Bridging semantic gap: learning and integrating semantics for content-based retrieval

    Digital cameras have entered ordinary homes and produced an incredibly large number of photos. As a typical example of a broad image domain, unconstrained consumer photos vary significantly. Unlike professional or domain-specific images, the objects in the photos are ill-posed, occluded, and cluttered, with poor lighting, focus, and exposure. Content-based image retrieval research has yet to bridge the semantic gap between computable low-level information and high-level user interpretation. In this thesis, we address the issue of the semantic gap with a structured learning framework that allows modular extraction of visual semantics. Semantic image regions (e.g. face, building, sky, etc.) are learned statistically, detected directly from the image without segmentation, reconciled across multiple scales, and aggregated spatially to form a compact semantic index. To circumvent the ambiguity and subjectivity in a query, a new query method that allows spatial arrangement of visual semantics is proposed. A query is represented as a disjunctive normal form of visual query terms and processed using fuzzy set operators. A drawback of supervised learning is the manual labeling of regions as training samples. In this thesis, a new learning framework has been developed to discover local semantic patterns and to generate their training samples with minimal human intervention. The discovered patterns can be visualized and used in semantic indexing. In addition, three new class-based indexing schemes are explored. The winner-take-all scheme supports class-based image retrieval. The class-relative scheme and the local classification scheme compute inter-class memberships and local class patterns, respectively, as indexes for similarity matching. A Bayesian formulation is proposed to unify local and global indexes in image comparison and ranking, resulting in superior image retrieval performance over that of single indexes.
    Query-by-example experiments on 2400 consumer photos with 16 semantic queries show that the proposed approaches achieve significantly better (18% to 55%) average precisions than a high-dimension feature fusion approach. The thesis has paved two promising research directions, namely the semantics design approach and the semantics discovery approach. They form elegant dual frameworks that exploit pattern classifiers in learning and integrating local and global image semantics.
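The query method described above — a disjunctive normal form of visual query terms evaluated with fuzzy set operators — can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the query structure, the membership values, and the use of min/max as the fuzzy AND/OR operators are assumptions.

```python
# Minimal sketch: evaluating a DNF visual query with fuzzy set operators.
# A query is a list of conjunctions over semantic terms; an image supplies
# a fuzzy membership in [0, 1] per term (e.g. from aggregated region
# detectors). Fuzzy AND = min over a conjunction, fuzzy OR = max over
# the disjunction. All names and values here are illustrative.

def fuzzy_dnf_score(query, memberships):
    """Score one image against a DNF query.

    query: e.g. [["face", "building"], ["sky"]] means
           (face AND building) OR sky.
    memberships: dict term -> membership in [0, 1] for the image.
    """
    conj_scores = [
        min(memberships.get(term, 0.0) for term in conjunction)
        for conjunction in query
    ]
    return max(conj_scores)

# Hypothetical usage: rank two images for (face AND building) OR sky.
query = [["face", "building"], ["sky"]]
images = {
    "img1": {"face": 0.9, "building": 0.7, "sky": 0.1},
    "img2": {"face": 0.2, "building": 0.3, "sky": 0.8},
}
ranked = sorted(images, key=lambda i: fuzzy_dnf_score(query, images[i]),
                reverse=True)
```

The min/max pair is the standard fuzzy-set interpretation of AND/OR; other t-norms (e.g. product) would work in the same structure.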

    Using Dual Cascading Learning Frameworks for Image Indexing

    To bridge the semantic gap in content-based image retrieval, detecting meaningful visual entities (e.g. faces, sky, foliage, buildings, etc.) in image content and classifying images into semantic categories based on trained pattern classifiers have become active research trends. In this paper, we present dual cascading learning frameworks that extract and combine intra-image and inter-class semantics for image indexing and retrieval. In the supervised learning version, support vector detectors are trained on semantic support regions without image segmentation. The reconciled and aggregated detection-based indexes then serve as input for support vector learning of image classifiers to generate class-relative image indexes. During retrieval, similarities based on both indexes are combined to rank images. In the unsupervised learning…
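The retrieval step above combines similarities computed on two indexes — the detection-based (intra-image) index and the class-relative (inter-class) index — to rank images. A minimal sketch of that combination, assuming cosine similarity per index and a simple weighted sum as the fusion rule (the weighting scheme is an assumption, not the paper's method):

```python
# Sketch: fuse similarities from two index types to rank images.
# q_det / q_cls are the query's detection-based and class-relative
# index vectors; im_det / im_cls are an image's. The alpha weighting
# is illustrative; the paper's actual combination rule may differ.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def combined_similarity(q_det, q_cls, im_det, im_cls, alpha=0.5):
    """Weighted fusion of detection-index and class-index similarities."""
    return alpha * cosine(q_det, im_det) + (1 - alpha) * cosine(q_cls, im_cls)
```

Ranking then sorts the collection by `combined_similarity` against the query's two index vectors.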