
    Enabling hand-crafted visual markers at scale

    As locative media and augmented reality spread into the everyday world, it becomes important to create aesthetic visual markers at scale. We explore a designer-centred approach in which skilled designers handcraft seed designs that are automatically recombined to create many markers as subtle variants of a common theme. First, we extend the d-touch topological approach to creating visual markers, which has previously been shown to support creative design, with two new techniques: area order codes and visual checksums. We then show how the topological structure of such markers provides the basis for recombining designs to generate many variations. We demonstrate our approach through the creation of beautiful, personalized and interactive wallpaper. We reflect on how technologies must enable designers to balance goals of scalability, aesthetics and reliability in creating beautiful interactive decoration. Copyright is held by the owner/author(s). This research has been supported by the Horizon Digital Economy Research Institute (EPSRC Grant No. EP/G065802/1 and EP/M02315X/1) and the ‘Living With Interactive Decorative Patterns’ project (EPSRC Grant No. EP/L023717/1).
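    The recombination idea hinges on the fact that a d-touch-style code depends only on a marker's topology (which regions nest inside which), not on how the designer draws each region. The Python sketch below illustrates that property under simplified, assumed definitions; the names MarkerBranch, code and recombine are illustrative, not from the paper.

        # Hypothetical sketch of topology-preserving marker recombination.
        # Assumes a d-touch-style code: a marker is a set of branches, and
        # its code is the multiset of blob (leaf) counts across branches.
        import random
        from dataclasses import dataclass

        @dataclass
        class MarkerBranch:
            leaf_count: int  # number of blobs nested in this branch (the code digit)
            artwork: str     # designer-drawn rendering of this branch

        def code(design: list) -> tuple:
            # The code depends only on topology, not on how branches are drawn.
            return tuple(sorted(b.leaf_count for b in design))

        def recombine(seeds: list, rng: random.Random) -> list:
            """Build a variant by picking, for each code digit, any seed branch
            carrying that digit: topology (and hence the code) is preserved."""
            target = code(seeds[0])
            pool = [b for s in seeds for b in s]
            variant = []
            for digit in target:
                choices = [b for b in pool if b.leaf_count == digit]
                variant.append(rng.choice(choices))
            assert code(variant) == target
            return variant

    Because every swap keeps the leaf counts intact, each generated variant decodes to the same identifier as the seeds, which is what lets many aesthetic variations share one common theme.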

    Deep-learning feature descriptor for tree bark re-identification

    The ability to visually re-identify objects is a fundamental capability of vision systems. Oftentimes, it relies on collections of visual signatures based on descriptors such as SIFT or SURF. However, these traditional descriptors were designed for a certain domain of surface appearances and geometries (limited relief). Consequently, highly-textured surfaces such as tree bark pose a challenge to them. In turn, this makes it more difficult to use trees as identifiable landmarks for navigational purposes (robotics) or to track felled lumber along a supply chain (logistics). We thus propose to use data-driven descriptors, trained on bark images, for tree surface re-identification. To this effect, we collected a large dataset containing 2,400 bark images with strong illumination changes, annotated by surface and with the ability to pixel-align them. We used this dataset to sample more than 2 million 64x64 pixel patches to train our novel local descriptors, DeepBark and SqueezeBark. Our DeepBark method has shown a clear advantage over the hand-crafted descriptors SIFT and SURF. For instance, we demonstrated that DeepBark can reach a mAP of 87.2% when retrieving the 11 bark images relevant to a query, i.e. those corresponding to the same physical surface, against 7,900 images. Our work thus suggests that re-identifying tree surfaces in a challenging illumination context is possible. We also make public our dataset, which can be used to benchmark surface re-identification techniques.
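    As a rough illustration of the data-driven descriptor idea, the PyTorch sketch below trains a small convolutional network to map 64x64 patches to unit-length embeddings with a triplet loss, so that pixel-aligned patches of the same bark surface land close together while patches from other surfaces are pushed apart. The architecture, embedding size and loss are assumptions for illustration only; they are not the published DeepBark or SqueezeBark models.

        # Minimal sketch of a learned 64x64 patch descriptor (assumed design).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class PatchDescriptor(nn.Module):
            def __init__(self, dim: int = 128):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 16 -> 8
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(128, dim)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                z = self.head(self.features(x).flatten(1))
                return F.normalize(z, dim=1)  # unit length -> cosine matching

        # Triplets: anchor and positive are pixel-aligned patches of the same
        # surface; the negative comes from a different surface.
        model = PatchDescriptor()
        loss_fn = nn.TripletMarginLoss(margin=0.2)
        anchor = torch.randn(8, 3, 64, 64)    # stand-ins for real bark patches
        positive = torch.randn(8, 3, 64, 64)
        negative = torch.randn(8, 3, 64, 64)
        loss = loss_fn(model(anchor), model(positive), model(negative))
        loss.backward()

    At retrieval time, ranking a gallery by cosine similarity to the query descriptor yields the ordering over which metrics such as mAP are computed.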

    Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments

    A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator -- a large-scale reinforcement learning environment based on real imagery. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings -- the Room-to-Room (R2R) dataset. Comment: CVPR 2018 Spotlight presentation
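    The "visually grounded sequence-to-sequence" framing can be made concrete with a minimal sketch: encode the instruction with a recurrent network, then decode one action per simulator step conditioned on the current visual observation. Everything below (sizes, action set, feature dimensions) is an assumption for illustration; this is not the paper's model or the Matterport3D Simulator API.

        # Hedged sketch of a seq2seq instruction follower (assumed design).
        import torch
        import torch.nn as nn

        class Seq2SeqFollower(nn.Module):
            def __init__(self, vocab=1000, n_actions=6, h=256, img_feat=2048):
                super().__init__()
                self.embed = nn.Embedding(vocab, h)
                self.encoder = nn.LSTM(h, h, batch_first=True)       # reads the instruction
                self.decoder = nn.LSTMCell(img_feat + n_actions, h)  # steps through the episode
                self.policy = nn.Linear(h, n_actions)

            def forward(self, instruction, img_feats, prev_action):
                _, (hx, cx) = self.encoder(self.embed(instruction))
                hx, cx = hx[0], cx[0]
                # One decode step per simulator step; actions might include
                # turn-left/turn-right/forward/stop in a discrete action space.
                hx, cx = self.decoder(torch.cat([img_feats, prev_action], dim=1), (hx, cx))
                return self.policy(hx)  # logits over the action space

        model = Seq2SeqFollower()
        logits = model(
            torch.randint(0, 1000, (1, 12)),  # tokenized instruction
            torch.randn(1, 2048),             # image feature at the current viewpoint
            torch.zeros(1, 6),                # one-hot previous action
        )

    Training such a follower with student forcing or reinforcement learning over episodes in the simulator is then a standard sequence-to-sequence setup, which is the point the abstract makes about transferring existing vision-and-language methods.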

    Deepening visitor engagement with museum exhibits through hand-crafted visual markers

    Visual markers, in particular QR codes, have become widely adopted in museums to enable low-cost interactive applications. However, visitors often do not engage with them. In this paper we explore the application of visual markers that can be designed to be meaningful and that can be created by visitors themselves. We study both the use of these markers as labels for portraits that link to audio recordings and as a mechanism for visitors to contribute their own reflections to the exhibition by drawing a marker and linking an audio comment. Our findings show visitors appreciated the use of the aesthetic markers and engaged with them at three levels: physical placement, aesthetic content and digital content. We suggest that these different levels need to be considered in the design of future visiting systems that make use of such markers, to ensure they are mutually supporting in shaping the experience.

    Brand personality and language: an analysis of Tiffany and Pandora product descriptions

    The research investigates the use of the English language in brands' communication strategies. The first chapter gives a basic overall grounding in brand communication and its main features: brand personality and brand engagement. The second chapter continues with an analysis of the linguistic aspects of web-based communications and how they influence individuals' perceptions of a brand or company. The third chapter applies the knowledge gathered in the first two chapters of this dissertation to an examination of the linguistic differences between the brands Tiffany and Pandora, in a comparison of the two. Keywords: brand communication, communication strategy, linguistic analysis, Tiffany, Pandora.