8 research outputs found

    Adaptive Training of Video Sets for Image Recognition on Mobile Phones

    We present an enhancement towards adaptive video training for PhoneGuide, a digital museum guidance system for ordinary camera-equipped mobile phones. It enables museum visitors to identify exhibits by capturing photos of them. In this article, a combined solution of object recognition and pervasive tracking is extended to a client-server system for improving data acquisition and for supporting scale-invariant object recognition.

    GPU-based Image Analysis on Mobile Devices

    With the rapid advances in mobile technology, many mobile devices are capable of capturing high quality images and video with their embedded camera. This paper investigates techniques for real-time processing of the resulting images, particularly on-device processing utilizing a graphical processing unit. Issues and limitations of image processing on mobile devices are discussed, and the performance of graphical processing units on a range of devices is measured through a programmable shader implementation of Canny edge detection. Comment: Proceedings of Image and Vision Computing New Zealand 201
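
    The per-pixel stages of Canny (smoothing, gradient computation, non-maximum suppression, hysteresis) are what make it a natural fit for a fragment shader, since each output pixel depends only on a small neighbourhood. The sketch below is a plain NumPy reference for the gradient stage only; it is illustrative, not the paper's shader code, and the test image and threshold are assumptions.

```python
# Minimal CPU reference (NumPy) for the per-pixel gradient stage of Canny edge
# detection -- the kind of computation that can be mapped onto a mobile GPU
# fragment shader, where each pixel is processed independently and in parallel.
# The input image and threshold below are illustrative assumptions.
import numpy as np

def sobel_gradients(gray: np.ndarray):
    """Return gradient magnitude and direction for a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros_like(gray, dtype=np.float32)
    gy = np.zeros_like(gray, dtype=np.float32)
    # Each output pixel depends only on its 3x3 neighbourhood, which is why the
    # operation parallelises well on a GPU (one shader invocation per pixel).
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(kx * patch)
            gy[y, x] = np.sum(ky * patch)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    return magnitude, direction

if __name__ == "__main__":
    img = np.random.rand(64, 64).astype(np.float32)  # stand-in for a camera frame
    mag, _ = sobel_gradients(img)
    edges = mag > 0.5  # crude threshold; full Canny adds non-max suppression + hysteresis
    print(int(edges.sum()), "edge pixels")
```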

    Street navigation using visual information on mobile phones


    Who is Here: Location Aware Face Recognition

    Face recognition has many challenges. For instance, illumination, varying facial expressions and different viewpoints make it difficult to identify the same person across a set of images, and searching over a huge set of images only amplifies such difficulties. We introduce a location-aware face recognition framework for mobile-taken photos to alleviate these difficulties. With the help of the location sensors on mobile devices, we collect images together with location information. We propose an algorithm that reduces the search space of face recognition and therefore achieves better accuracy. Photos are clustered by location on the server, and each location is then associated with a face classifier. Every client can send a "Who is Here" query to the server by uploading an image with its location. The algorithm on the server searches over the given location and identifies the person in the image. Experiments are conducted on mobile devices. The results are promising: higher accuracy is achieved and queries can be answered in near real time.
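
    The abstract outlines the server-side idea: cluster photos by location, associate each location with its own face classifier, and answer a "Who is Here" query with the classifier of the location nearest to the query. Below is a minimal sketch of that lookup; the classifier interface, the haversine distance, and the example coordinates are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of the location-aware lookup described above: photos are
# grouped by location on the server, each group gets its own face classifier,
# and a "Who is Here" query is answered by the classifier of the nearest group.
import math
from typing import Callable, Dict, Tuple

LatLon = Tuple[float, float]

def haversine_km(a: LatLon, b: LatLon) -> float:
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

class LocationAwareIndex:
    def __init__(self) -> None:
        # location centre -> face classifier trained on photos taken near it
        self.classifiers: Dict[LatLon, Callable[[bytes], str]] = {}

    def register(self, centre: LatLon, classifier: Callable[[bytes], str]) -> None:
        self.classifiers[centre] = classifier

    def who_is_here(self, photo: bytes, query_loc: LatLon) -> str:
        # Restrict the search space to the classifier of the nearest location.
        nearest = min(self.classifiers, key=lambda c: haversine_km(c, query_loc))
        return self.classifiers[nearest](photo)

if __name__ == "__main__":
    index = LocationAwareIndex()
    index.register((40.7484, -73.9857), lambda img: "alice")  # e.g. office lobby
    index.register((40.7580, -73.9855), lambda img: "bob")    # e.g. conference venue
    print(index.who_is_here(b"<jpeg bytes>", (40.7490, -73.9860)))  # -> "alice"
```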

    A longitudinal review of Mobile HCI research Methods

    This paper revisits a research methods survey from 2003 and contrasts it with a survey from 2010. The motivation is to gain insight into how mobile HCI research has evolved over the last decade in terms of approaches and focus. The paper classifies 144 publications from 2009, published in 10 prominent outlets, by their research methods and purpose. Comparing this to the survey for 2000-02 shows that mobile HCI research has changed methodologically. From being almost exclusively driven by engineering and applied research, current mobile HCI research is primarily empirically driven, involves a high number of field studies, and focuses on evaluating and understanding as well as engineering. It has also become increasingly multi-methodological, combining and diversifying methods from different disciplines. At the same time, new opportunities and challenges have emerged.

    Adaptive Image Classification on Mobile Phones

    The advent of high-performance mobile phones has opened up the opportunity to develop new context-aware applications for everyday life. In particular, applications for context-aware information retrieval in conjunction with image-based object recognition have become a focal area of recent research. In this thesis we introduce an adaptive mobile museum guidance system that allows visitors in a museum to identify exhibits by taking a picture with their mobile phone. Besides approaches to object recognition, we present different adaptation techniques that improve classification performance. After providing a comprehensive background of context-aware mobile information systems in general, we present an on-device object recognition algorithm and show how its classification performance can be improved by capturing multiple images of a single exhibit. To accomplish this, we combine the classification results of the individual pictures and consider the perspective relations among the retrieved database images. In order to identify multiple exhibits in pictures, we present an approach that uses the spatial relationships among the objects in images. These relationships make it possible to infer and validate the locations of undetected objects relative to the detected ones and additionally improve classification performance. To cope with environmental influences, we introduce an adaptation technique that establishes ad-hoc wireless networks among the visitors' mobile devices to exchange classification data. This ensures constant classification rates under varying illumination levels and changing object placement. Finally, in addition to localization using RF technology, we present an adaptation technique that uses user-generated spatio-temporal pathway data for person movement prediction. Based on the history of previously visited exhibits, the algorithm determines possible future locations and incorporates these predictions into the object classification process. This increases classification performance and offers benefits comparable to traditional localization approaches but without the need for additional hardware. Through multiple field studies and laboratory experiments we demonstrate the benefits of each approach and show how they influence the overall classification rate.

    The introduction of mobile phones with built-in sensors such as cameras, GPS, or accelerometers, as well as communication technologies such as Bluetooth or WLAN, enables the development of new context-aware applications for everyday life. In particular, applications in the area of context-aware information retrieval in combination with image-based object recognition have moved into the focus of current research. The contribution of this thesis is the development of an image-based mobile museum guidance system that uses different adaptation techniques to improve object recognition. It is shown how object recognition algorithms can be realized on mobile phones and how the recognition rate can be improved, for example by employing ad-hoc networks or by taking person movement predictions into account.
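
    One of the adaptation techniques described above combines the classification results of several photos of the same exhibit. A simple way to realize such a combination is additive score fusion across the captured images, sketched below; this is only an illustrative stand-in and omits the perspective relations among retrieved database images that the thesis additionally exploits.

```python
# A minimal sketch of fusing per-image classification scores for several photos
# of the same exhibit into a single decision. The exhibit ids and scores below
# are made up for illustration.
from collections import defaultdict
from typing import Dict, List

def fuse_classifications(per_image_scores: List[Dict[str, float]]) -> str:
    """Each dict maps exhibit id -> confidence score for one captured photo."""
    totals: Dict[str, float] = defaultdict(float)
    for scores in per_image_scores:
        for exhibit, score in scores.items():
            totals[exhibit] += score
    # The exhibit with the highest accumulated score wins.
    return max(totals, key=totals.get)

if __name__ == "__main__":
    # Three photos of the same exhibit; individually ambiguous, jointly clear.
    photos = [
        {"vase_03": 0.40, "vase_07": 0.35, "statue_12": 0.25},
        {"vase_03": 0.30, "vase_07": 0.45, "statue_12": 0.25},
        {"vase_03": 0.50, "vase_07": 0.20, "statue_12": 0.30},
    ]
    print(fuse_classifications(photos))  # -> "vase_03"
```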

    From GeoVisualization to visual-analytics: methodologies and techniques for human-information discourse

    The objective of our research is to support decision makers facing problems which require rapid solutions despite the complexity of the scenarios under investigation. To achieve this goal, our studies have focused on the GeoVisualization and GeoVisual Analytics research fields, which play a relevant role in this scope because they exploit results from several disciplines, such as exploratory data analysis and GIScience, to provide expert users with highly interactive tools by which they can both visually synthesize information from large datasets and perform complex analytical tasks. The research we are carrying out along this line aims to develop software applications capable both of building an immediate overview of a scenario and of exploring the elements that characterize it. To this aim, we are defining methodologies and techniques which embed key aspects from different disciplines, such as augmented reality and location-based services. Their integration is targeted at realizing advanced tools in which the geographic component plays a primary role and is meant to contribute to a human-information discourse... [edited by author]

    Designing usable mobile interfaces for spatial data

    This dissertation deals mainly with the discipline of Human-Computer Interaction (HCI), with particular attention to the role it plays in the domain of modern mobile devices. Mobile devices today offer crucial support for a plethora of daily activities for nearly everyone. Ranging from checking business mails while traveling, to accessing social networks while in a mall, to carrying out business transactions while out of the office, to using all kinds of online public services, mobile devices play the important role of connecting people while they are physically apart. Modern mobile interfaces are therefore expected to improve the user's interaction experience with the surrounding environment and offer different adaptive views of the real world. The goal of this thesis is to enhance the usability of mobile interfaces for spatial data. Spatial data are data in which the spatial component plays an important role in clarifying the meaning of the data themselves. Nowadays, this kind of data is widespread in mobile applications: spatial data are present in games, map applications, mobile community applications and office automation. In order to enhance the usability of spatial data interfaces, my research investigates two major issues: 1) enhancing the visualization of spatial data on small screens and 2) enhancing text-input methods. I selected the Design Science Research approach to investigate these research questions. The idea underlying this approach is "you build an artifact to learn from it"; in other words, researchers clarify what is new in their design. The new knowledge derived from the artifact is presented in the form of interaction design patterns in order to support developers in dealing with issues of mobile interfaces. The thesis is organized as follows. Initially I present the broader context, the research questions and the approaches I used to investigate them. Then the results are split into two main parts. In the first part I present the visualization technique called Framy. The technique is designed to support users in visualizing geographical data in mobile map applications. I also introduce a multimodal extension of Framy obtained by adding sounds and vibrations. After that I present the process that turned the multimodal interface into a means of allowing visually impaired users to interact with Framy. Some projects involving the design principles of Framy are shown in order to demonstrate the adaptability of the technique to different contexts. The second part concerns text-input methods. In particular I focus on the work done in the area of virtual keyboards for mobile devices. A new kind of virtual keyboard called TaS provides users with a more efficient and effective input system than the traditional QWERTY keyboard. Finally, in the last chapter, the knowledge acquired is formalized in the form of interaction design patterns. [edited by author]