
    Interaction Methods for Smart Glasses: A Survey

    Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal of their becoming an augmented reality interface has not yet been attained due to an encumbrance of controls. Augmented reality involves superimposing interactive computer graphics onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. It first studies the smart glasses available on the market and then investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input; this paper focuses mainly on the latter two. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated against a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses.
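
    The survey's input taxonomy can be summarized as a small data structure. Below is a minimal sketch in Python; the class and member names are our own illustration, not terminology-bearing code from the paper.

    from enum import Enum, auto

    class InputCategory(Enum):
        HAND_HELD = auto()   # dedicated controller or paired handheld device
        TOUCH = auto()       # input through physical contact
        TOUCHLESS = auto()   # input without touching any surface

    class TouchSubtype(Enum):
        ON_DEVICE = auto()   # e.g. a touchpad on the glasses frame
        ON_BODY = auto()     # e.g. the skin or palm as the touch surface

    class TouchlessSubtype(Enum):
        HANDS_FREE = auto()  # e.g. voice, head movement, or gaze
        FREEHAND = auto()    # e.g. mid-air hand gestures tracked by cameras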

    Press-n-Paste: Copy-and-Paste Operations with Pressure-sensitive Caret Navigation for Miniaturized Surface in Mobile Augmented Reality

    Copy-and-paste operations are among the most popular features on computing devices such as desktop computers, smartphones and tablets. However, copy-and-paste is not sufficiently addressed on Augmented Reality (AR) smartglasses designed for real-time interaction with text in physical environments. This paper proposes two system solutions, namely Granularity Scrolling (GS) and Two Ends (TE), for copy-and-paste operations on AR smartglasses. By leveraging a thumb-size button on a touch-sensitive and pressure-sensitive surface, both multi-step solutions can capture the target text through indirect manipulation and subsequently enable copy-and-paste operations. Based on these solutions, we implemented an experimental prototype named Press-n-Paste (PnP). In an eight-session evaluation capturing 1,296 copy-and-paste operations, 18 participants using GS and TE achieved peak performance of 17,574 ms and 13,951 ms per copy-and-paste operation, with accuracy rates of 93.21% and 98.15% respectively, which are as good as commercial solutions using direct manipulation on touchscreen devices. The user footprints also show that PnP has a distinctively miniaturized interaction area within 12.65 mm × 14.48 mm. PnP not only proves the feasibility of copy-and-paste operations with the flexibility of various granularities on AR smartglasses, but also has significant implications for the design space of pressure widgets as well as input design on smart wearables.
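
    The paper's implementation is not reproduced here, but the core idea of Granularity Scrolling lends itself to a compact illustration: the pressure applied to the thumb-size button selects the granularity at which the caret jumps. The following Python sketch uses hypothetical pressure thresholds and a simplified notion of text units.

    # Hypothetical sketch of pressure-to-granularity caret navigation.
    # Thresholds and unit boundaries are illustrative, not from the paper.
    GRANULARITIES = ["character", "word", "sentence", "paragraph"]
    SEPARATORS = {"word": " ", "sentence": ". ", "paragraph": "\n"}

    def granularity_for_pressure(pressure: float) -> str:
        """Map a normalized pressure reading in [0, 1] to a caret granularity."""
        for level, cutoff in enumerate((0.25, 0.5, 0.75)):  # assumed cut points
            if pressure < cutoff:
                return GRANULARITIES[level]
        return GRANULARITIES[-1]

    def advance_caret(text: str, caret: int, pressure: float) -> int:
        """Move the caret to the start of the next unit of the chosen granularity."""
        unit = granularity_for_pressure(pressure)
        if unit == "character":
            return min(caret + 1, len(text))
        nxt = text.find(SEPARATORS[unit], caret)
        return len(text) if nxt == -1 else nxt + len(SEPARATORS[unit])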

    Ultra-Efficient On-Device Object Detection on AI-Integrated Smart Glasses with TinyissimoYOLO

    Smart glasses are rapidly gaining advanced functionality thanks to cutting-edge computing technologies, accelerated hardware architectures, and tiny AI algorithms. Integrating AI into smart glasses with a small form factor and limited battery capacity is still challenging when targeting full-day usage for a satisfactory user experience. This paper illustrates the design and implementation of tiny machine-learning algorithms exploiting novel low-power processors to enable prolonged continuous operation in smart glasses. We explore the energy and latency efficiency of smart glasses in the case of real-time object detection. To this end, we designed a smart glasses prototype as a research platform featuring two microcontrollers, including a novel milliwatt-power RISC-V parallel processor with a hardware accelerator for visual AI, and a Bluetooth low-power module for communication. The smart glasses integrate power-cycling mechanisms, including image and audio sensing interfaces. Furthermore, we developed a family of novel tiny deep-learning models based on YOLO with sub-million parameters, customized for microcontroller-based inference and dubbed TinyissimoYOLO v1.3, v5, and v8, aiming at benchmarking object detection with smart glasses for energy and latency. Evaluations on the smart glasses prototype demonstrate TinyissimoYOLO's 17 ms inference latency and 1.59 mJ energy consumption per inference while ensuring acceptable detection accuracy. Further evaluation reveals an end-to-end latency from image capture to algorithm prediction of 56 ms, or equivalently 18 fps, with a total power consumption of 62.9 mW, equivalent to 9.3 hours of continuous run time on a 154 mAh battery. These results outperform MCUNet (TinyNAS+TinyEngine), which runs a simpler task (image classification) at just 7.3 fps.
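
    The reported run-time figure follows from straightforward energy arithmetic. The sketch below reproduces it; the nominal cell voltage is our assumption, since the abstract states only the capacity.

    # Reconstructing the 9.3 h estimate from the reported numbers.
    power_mw = 62.9       # total system power (reported)
    battery_mah = 154     # battery capacity (reported)
    nominal_v = 3.8       # assumed nominal Li-Po cell voltage (not stated)

    battery_mwh = battery_mah * nominal_v   # ~585 mWh of stored energy
    runtime_h = battery_mwh / power_mw      # ~9.3 h of continuous run time

    # Cross-check: 1.59 mJ per inference at 18 fps is ~28.6 mW (mJ/s == mW),
    # so inference uses under half of the 62.9 mW budget; the rest goes to
    # sensing, communication, and the remainder of the platform.
    inference_mw = 1.59 * 18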

    Smart Assistive Technology for People with Visual Field Loss

    Visual field loss results in the lack of ability to clearly see objects in the surrounding environment, which affects the ability to determine potential hazards. In visual field loss, parts of the visual field are impaired to varying degrees, while other parts may remain healthy. This defect can be debilitating, making daily life activities very stressful. Unlike blind people, people with visual field loss retain some functional vision. It would be beneficial to intelligently augment this vision by adding computer-generated information to increase users' awareness of possible hazards through early notifications. This thesis introduces a smart hazard attention system to help people with visual field impairment navigate, using smart glasses and a real-time hazard classification system. It takes the form of a novel, customised, machine-learning-based hazard classification system that can be integrated into wearable assistive technology such as smart glasses. The proposed solution provides early notifications based on (1) the visual status of the user and (2) the motion status of the detected object. The presented technology can detect multiple objects at the same time and classify them into different hazard types. The system design consists of four modules: (1) a deep-learning-based object detector to recognise static and moving objects in real time, (2) a Kalman-filter-based multi-object tracker to track the detected objects over time and determine their motion model, (3) a neural-network-based classifier to determine the level of danger for each hazard using motion features extracted while the object is in the user's field of vision, and (4) a feedback generation module to translate the hazard level into a smart notification that leverages the healthy vision within the user's visual field. For qualitative system testing, normal and personalised defected-vision models were implemented. The personalised defected-vision model was created to synthesise the visual function of people with visual field defects. Actual central and full-field test results were used to create a personalised model used in the feedback generation stage of the system, where visual notifications are displayed in the user's healthy visual area. The proposed solution will enhance the quality of life for people suffering from visual field loss. This non-intrusive, wearable hazard detection technology can provide an obstacle avoidance solution and prevent falls and collisions early with minimal information.
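
    The four-module design maps naturally onto a per-frame processing loop. The skeleton below is a structural sketch only; module internals (detector weights, Kalman parameters, classifier features) are placeholders rather than the thesis implementation.

    # Hypothetical skeleton of the four-module hazard pipeline.
    def detect_objects(frame):
        """Module 1: deep-learning detector returning bounding boxes (stub)."""
        return []

    def update_tracks(tracks, detections):
        """Module 2: Kalman-filter multi-object tracker (stub): associate
        detections with tracks and update each track's motion state."""
        return tracks

    def hazard_level(track):
        """Module 3: classifier mapping motion features (e.g. speed, heading,
        proximity) to a danger level (stub)."""
        return 0

    def render_notification(track, level, healthy_field_mask):
        """Module 4: place the visual alert inside the user's healthy visual
        area, as given by the personalised vision model (stub)."""
        pass

    def process_frame(frame, tracks, healthy_field_mask):
        detections = detect_objects(frame)
        tracks = update_tracks(tracks, detections)
        for track in tracks:
            render_notification(track, hazard_level(track), healthy_field_mask)
        return tracks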

    Exploring the use of smart glasses, gesture control, and environmental data in augmented reality games

    Abstract. In the last decade, augmented reality has become a popular trend. Big corporations like Microsoft, Facebook, and Google have started to invest in augmented reality because they saw its potential, especially with the rise of consumer versions of head-mounted displays such as Microsoft’s HoloLens and ODG’s R-7. However, there is a gap in the knowledge about interaction with such devices, since they are fairly new and an average consumer cannot yet afford them due to their relatively high prices. This thesis describes the Ghost Hunters game, a mobile augmented reality pervasive game that uses environmental light data to charge the in-game “goggles”. The game has two versions, one for smartphones and one for smart glasses. Ghost Hunters was implemented to explore the use of two different interaction methods, buttons and natural hand gestures, on both smartphones and smart glasses. In addition, the thesis sought to explore the use of ambient light in augmented reality games. First, the thesis defines the essential concepts related to games and augmented reality based on the literature and then describes the current state of the art of pervasive games and smart glasses. Second, both the design and implementation of the Ghost Hunters game are described in detail. Afterwards, the three rounds of field trials conducted to investigate the suitability of the two interaction methods are described and discussed. The findings suggest that smart glasses are more immersive than smartphones in the context of pervasive AR games. Moreover, prior AR experience has a significant positive impact on the immersion of smart glasses users. Similarly, males were more immersed in the game than females. Hand gestures proved to be more usable than buttons on both devices. However, the interaction method did not affect game engagement at all, though surprisingly it did affect the way users perceive the UI with smart glasses: users who used the physical buttons were more likely to notice the UI elements than users who used hand gestures.
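
    The ambient-light mechanic is easy to state precisely: light charges the in-game goggles, and using them drains the charge. The sketch below is our illustration; the rate constants and sensor interface are invented, not taken from the thesis.

    # Illustrative goggle-charging update, called once per frame.
    MAX_CHARGE = 100.0
    CHARGE_RATE = 0.01   # charge units per lux-second (invented)
    DRAIN_RATE = 5.0     # charge units per second while active (invented)

    def update_charge(charge, lux, goggles_active, dt):
        """Advance the goggle charge by one time step of dt seconds."""
        charge += CHARGE_RATE * lux * dt      # ambient light charges the goggles
        if goggles_active:
            charge -= DRAIN_RATE * dt         # active use drains them
        return max(0.0, min(MAX_CHARGE, charge))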

    Internet-of-Things Architectures for Secure Cyber-Physical Spaces: the VISOR Experience Report

    Internet of Things (IoT) technologies are becoming a more and more widespread part of civilian life in common urban spaces, which are rapidly turning into cyber-physical spaces. Simultaneously, the fear of terrorism and crime in such public spaces is ever-increasing. Due to the resulting increased demand for security, video-based IoT surveillance systems have become an important area of research. Considering the large number of devices involved in the illicit-activity recognition task, we conducted a field study at a Dutch Easter music festival, in a national-interest project called VISOR, to select the most appropriate device configuration in terms of performance and results. We iteratively architected solutions for the security of cyber-physical spaces using IoT devices. We tested the performance of multiple federated devices encompassing drones, closed-circuit television (CCTV), smartphone cameras, and smart glasses to detect real-case scenarios of potentially malicious activities such as mosh pits and pickpocketing. Our results pave the way to selecting optimal IoT architecture configurations -- i.e., a mix of CCTV, drones, smart glasses, and camera phones in our case -- to make safer cyber-physical spaces a reality.
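
    The configuration-selection problem can be framed as scoring candidate device mixes against a detection task. The toy sketch below makes that framing concrete; the per-device rates and the independence assumption are illustrative, not VISOR's measured results.

    # Toy comparison of device mixes; numbers are invented for illustration.
    from itertools import combinations

    DETECTION_RATE = {"cctv": 0.80, "drone": 0.70,
                      "smart_glasses": 0.55, "camera_phone": 0.60}

    def mix_score(mix):
        """P(at least one device detects), naively assuming independence."""
        miss = 1.0
        for device in mix:
            miss *= 1.0 - DETECTION_RATE[device]
        return 1.0 - miss

    candidates = [m for r in range(1, len(DETECTION_RATE) + 1)
                  for m in combinations(DETECTION_RATE, r)]
    best_mix = max(candidates, key=mix_score)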

    Alternative realities: from augmented reality to mobile mixed reality

    This thesis provides an overview of (mobile) augmented and mixed reality by clarifying the different concepts of reality, briefly covering the technology behind mobile augmented and mixed reality systems, and conducting a concise survey of existing and emerging mobile augmented and mixed reality applications and devices. Based on this analysis and the survey, the work then attempts to assess what mobile augmented and mixed reality could make possible, and what related applications and environments could offer to users if tapped to their full potential. Additionally, this work briefly discusses why mobile augmented reality has not yet been widely adopted for everyday use, even though many such applications already exist for the smartphone platform and smartglass systems are slowly becoming more common. Other related topics that are briefly covered include information security and privacy issues in mobile augmented and mixed reality systems, the link between mobile mixed reality and ubiquitous computing, previously conducted user studies, and user needs and user experience issues. The overall purpose of this thesis is to demonstrate what is already possible to implement on the mobile platform (including both hand-held devices and head-mounted configurations) using augmented and mixed reality interfaces, and to consider how mobile mixed reality systems could be improved, based on existing products, studies, and lessons learned from the survey conducted in this thesis.