
    Improving Touch Based Gesture Interfaces

    The lack of affordance in gesture interfaces makes interaction non-intuitive, and time has to be invested in learning the various gestures, which can be difficult for first-time users. The Visual Gestures on Maps (VGMaps) mobile application was developed to test whether the inclusion of visual cues improves the efficiency and intuitiveness of touch-based gestures. User testing showed that visual cues made no difference for basic touch gestures such as swiping and flicking, but an improvement was noted with more advanced gestures (multi-touch zoom).

    Solving Visual Madlibs with Multiple Cues

    This paper focuses on answering fill-in-the-blank style multiple choice questions from the Visual Madlibs dataset. Previous approaches to Visual Question Answering (VQA) have mainly used generic image features from networks trained on the ImageNet dataset, despite the wide scope of questions. In contrast, our approach employs features derived from networks trained for the specialized tasks of scene classification, person activity prediction, and person and object attribute prediction. We also present a method for selecting sub-regions of an image that are relevant for evaluating the appropriateness of a putative answer. Visual features are computed both from the whole image and from local regions, while sentences are mapped to a common space using a simple normalized canonical correlation analysis (CCA) model. Our results show a significant improvement over the previous state of the art, and indicate that answering different question types benefits from examining a variety of image cues and carefully choosing informative image sub-regions.
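
    A minimal illustrative sketch of that CCA-based scoring idea follows. It is not the authors' implementation: scikit-learn's plain CCA stands in for the paper's normalized CCA model, and random placeholder vectors stand in for the task-specific CNN image features and sentence embeddings; candidate answers are ranked by cosine similarity to the image in the learned shared space.

# Sketch only: rank candidate answers for one image in a shared CCA space.
# Placeholder features and scikit-learn's CCA replace the paper's
# normalized CCA model and task-specific feature extractors.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Placeholder paired training data: (image feature, answer-sentence embedding).
n_pairs, d_img, d_txt, d_shared = 400, 128, 64, 16
X_img = rng.normal(size=(n_pairs, d_img))
Y_txt = rng.normal(size=(n_pairs, d_txt))

cca = CCA(n_components=d_shared, max_iter=500).fit(X_img, Y_txt)

def project(img_feats, txt_feats):
    # Map both modalities into the learned shared space, then L2-normalize
    # so the dot product below is a cosine similarity.
    img_c, txt_c = cca.transform(img_feats, txt_feats)
    img_c /= np.linalg.norm(img_c, axis=1, keepdims=True)
    txt_c /= np.linalg.norm(txt_c, axis=1, keepdims=True)
    return img_c, txt_c

# Score one image against four candidate fill-in-the-blank answers and
# pick the answer whose projection lies closest to the image projection.
query_img = rng.normal(size=(1, d_img))
candidates = rng.normal(size=(4, d_txt))
img_c, cand_c = project(np.repeat(query_img, 4, axis=0), candidates)
scores = (img_c * cand_c).sum(axis=1)
print("best candidate index:", int(np.argmax(scores)))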