The Future of the Internet III
Presents survey results on technology experts' predictions of the Internet's social, political, and economic impact by 2020, including its effects on integrity and tolerance, intellectual property law, and the division between personal and work lives.
Leveraging Multimodal Interaction and Adaptive Interfaces for Location-based Augmented Reality Islamic Tourism Application
A Location-based Augmented Reality (LBAR) application leveraging multimodal interaction and an adaptive interface built on Islamic tourism information was proposed to enhance the user experience while travelling. LBAR has the potential to improve the tourist experience by helping tourists access relevant information, improving their knowledge of a destination while increasing their entertainment throughout the process. In LBAR applications, the displayed Points of Interest (POI) are exposed to the "occlusion problem", where AR contents are visually redundant and overlap with one another, causing users to lose valuable information. Previous research has suggested AR POI designs that help users see the augmented POI clearly; the user can click on the desired POI, but a large number of POI are still displayed. To the best of our knowledge, little research has studied how to minimize the number of displayed POI based on the user's current needs. Therefore, in this paper we propose using an adaptive user interface and multimodal interaction to solve this problem. We discuss the process of analysing and designing the user interfaces of previous studies. The proposed mobile solution is presented by explaining the application contents, the combination of adaptive multimodal inputs, the system's flow chart, and the multimodal task definition. A user evaluation was then conducted to measure satisfaction with the usability of the application. A total of 24 Islamic tourists participated in this study. The findings revealed an average SUS score of 75.83, indicating that respondents were satisfied with using the LBAR application while travelling. Finally, we conclude the paper with suggestions for future work.
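The SUS score reported above follows the standard System Usability Scale scoring convention. As a hypothetical illustration (not part of the study's materials), the conventional computation looks like this:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Items 1, 3, 5, 7, 9 (positively worded) contribute (response - 1);
    items 2, 4, 6, 8, 10 (negatively worded) contribute (5 - response).
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A respondent answering 4 on every positive item and 2 on every negative one:
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

A mean score near 75, as reported, is conventionally read as above-average usability.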
WristSketcher: Creating Dynamic Sketches in AR with a Sensing Wristband
Restricted by the limited interaction area of native AR glasses (e.g., touch
bars), it is challenging to create sketches in AR glasses. Recent works have
attempted to use mobile devices (e.g., tablets) or mid-air bare-hand gestures
to expand the interactive spaces and can work as the 2D/3D sketching input
interfaces for AR glasses. Between them, mobile devices allow for accurate
sketching but are often heavy to carry, while sketching with bare hands is
zero-burden but can be inaccurate due to arm instability. In addition, mid-air
bare-hand sketching can easily lead to social misunderstandings and its
prolonged use can cause arm fatigue. As a new attempt, in this work, we present
WristSketcher, a new AR system based on a flexible sensing wristband for
creating 2D dynamic sketches, featuring an almost zero-burden authoring model
for accurate and comfortable sketch creation in real-world scenarios.
Specifically, we have streamlined the interaction space from the mid-air to the
surface of a lightweight sensing wristband, and implemented AR sketching and
associated interaction commands by developing a gesture recognition method
based on the sensing pressure points on the wristband. The set of interactive
gestures used by our WristSketcher is determined by a heuristic study on user
preferences. Moreover, we endow our WristSketcher with the ability of animation
creation, allowing it to create dynamic and expressive sketches. Experimental
results demonstrate that our WristSketcher i) faithfully recognizes users'
gesture interactions with a high accuracy of 96.0%; ii) achieves higher
sketching accuracy than Freehand sketching; iii) achieves high user
satisfaction in ease of use, usability and functionality; and iv) shows
innovation potential in art creation, memory aids, and entertainment
applications.
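The abstract describes gesture recognition driven by pressure points sensed on the wristband. As a rough, hypothetical sketch (the sensor count, gesture set, and template values are illustrative assumptions, not WristSketcher's actual method), such a recognizer can be framed as matching a pressure-sensor reading against per-gesture templates:

```python
import math

# Illustrative templates: mean pressure readings (normalized 0-1) over four
# hypothetical sensing points on the wristband, one template per gesture.
TEMPLATES = {
    "tap":   [0.9, 0.1, 0.1, 0.1],
    "swipe": [0.6, 0.6, 0.2, 0.1],
    "press": [0.9, 0.9, 0.9, 0.9],
}

def classify(sample):
    """Return the gesture whose template is nearest in Euclidean distance."""
    return min(TEMPLATES, key=lambda g: math.dist(sample, TEMPLATES[g]))

# A reading dominated by the first pressure point resembles the "tap" template.
print(classify([0.85, 0.15, 0.05, 0.10]))  # tap
```

A nearest-template rule is only a baseline; the reported 96.0% accuracy would plausibly require a richer classifier trained on the elicited gesture set.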
Lessons learned from the design of a mobile multimedia system in the Moby Dick project
Recent advances in wireless networking technology and the exponential development of semiconductor technology have engendered a new paradigm of computing, called personal mobile computing or ubiquitous computing. This offers a vision of the future with a much richer and more exciting set of architecture research challenges than extrapolations of current desktop architectures. In particular, these devices will have limited battery resources, will handle diverse data types, and will operate in environments that are insecure, dynamic, and vary significantly in time and location. The research performed in the MOBY DICK project is about designing such a mobile multimedia system. This paper discusses the approach taken in the MOBY DICK project to solve some of these problems, discusses its contributions, and assesses what was learned from the project.
Interaction Methods for Smart Glasses : A Survey
Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal for them to become an augmented reality interface has not yet been attained due to an encumbrance of controls. Augmented reality involves superimposing interactive computer graphics images onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. The survey first studies the smart glasses available in the market and afterwards investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input. This paper mainly focuses on touch and touchless input. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated against a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses.
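The classification described in this abstract forms a small hierarchy. Purely as an illustration (the structure comes from the abstract; the encoding and helper are my own), it can be written down for programmatic filtering of surveyed methods:

```python
# Interaction-method taxonomy for smart glasses, as described in the survey:
# top-level categories mapped to their sub-categories (empty where the
# abstract names none).
TAXONOMY = {
    "hand-held": [],
    "touch": ["on-device", "on-body"],
    "touchless": ["hands-free", "freehand"],
}

def subcategories(category):
    """Return the sub-categories of a top-level input category."""
    return TAXONOMY.get(category, [])

print(subcategories("touch"))  # ['on-device', 'on-body']
```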
Fall prevention intervention technologies: A conceptual framework and survey of the state of the art
In recent years, an ever-increasing range of technology-based applications has been developed with the goal of assisting in the delivery of more effective and efficient fall prevention interventions. While a number of studies have surveyed technologies for particular sub-domains of fall prevention, no existing research surveys the full spectrum of fall prevention interventions and characterises the range of technologies that have augmented this landscape. This study presents a conceptual framework and survey of the state of the art of technology-based fall prevention systems, derived from a systematic template analysis of studies presented in contemporary research literature. The framework proposes four broad categories of fall prevention intervention system: Pre-fall prevention; Post-fall prevention; Fall injury prevention; Cross-fall prevention. Other categories include Application type, Technology deployment platform, Information sources, Deployment environment, User interface type, and Collaborative function. After presenting the conceptual framework, a detailed survey of the state of the art is presented as a function of the proposed framework. A number of research challenges emerge from the survey of the research literature, including the need for: new systems that focus on overcoming extrinsic falls risk factors; systems that support the environmental risk assessment process; and systems that enable patients and practitioners to develop more collaborative relationships and engage in shared decision making during falls risk assessment and prevention activities. In response to these challenges, recommendations and future research directions are proposed to address each respective challenge.
The Royal Society, grant Ref: RG13082
Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications
3D gesture recognition and tracking based on augmented reality and virtual reality have attracted significant research interest owing to advances in smartphone technology. By interacting with 3D objects in augmented reality and virtual reality, users gain a better understanding of the subject matter, although customized hardware support is often required and overall experimental performance must be satisfactory. This research investigates current vision-based 3D gesture architectures for augmented reality and virtual reality. The core goal of this research is to present an analysis of methods and frameworks, followed by the experimental performance of recognition and tracking of hand gestures and interaction with virtual objects on smartphones. This research categorizes the experimental evaluation of existing methods into three categories: hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation of the practical usage of 3D gesture tracking based on augmented reality and virtual reality. Hardware setup includes types of gloves, fingerprint sensing, and types of sensors. Documentation includes classroom setup manuals, questionnaires, recordings for improvement, and stress-test applications. The last part of the experimental section covers the datasets used by existing research. The overall comprehensive illustration of the various methods, frameworks, and experimental aspects can significantly contribute to 3D gesture recognition and tracking based on augmented reality and virtual reality.