
    H Space: Interactive Augmented Reality Art

    This artwork exploits recent research into augmented reality systems, such as the HoloLens, to build creative interaction in augmented reality. The work is conducted in the context of interactive art experiences. The first version of the audience experience of the artwork, “H Space”, was informally tested in the SIGGRAPH 2018 Art Gallery. A later, improved version was evaluated at Tsinghua University, and the latest distributed version will be shown in Sydney. The paper describes the concept and the background in both the art and the technological domains, and points to some of the key human-computer interaction research issues in art that the work highlights.

    Exploring the Front Touch Interface for Virtual Reality Headsets

    In this paper, we propose a new interface for virtual reality headsets: a touchpad on the front of the headset. To demonstrate the feasibility of the front touch interface, we built a prototype device, explored the expanded VR UI design space, and performed various user studies. We started with preliminary tests to see how intuitively and accurately people can interact with the front touchpad. We then experimented with various user interfaces, such as a binary selection, a typical menu layout, and a keyboard. Two-Finger and Drag-n-Tap were also explored to find the most appropriate selection technique. As a low-cost, lightweight, low-power technology, a touch sensor makes an ideal interface for mobile headsets, and the front touch area is large enough to allow a wide range of interaction types, including multi-finger interactions. With this novel front touch interface, we pave the way for new virtual reality interaction methods.
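    The abstract names Drag-n-Tap as a candidate selection technique without describing its implementation. As a purely hypothetical sketch of how such a technique might work on a front touchpad (drag a highlight across menu items, then tap to confirm), consider the following; all names, the event model, and the coordinate mapping are assumptions, not the authors' design.

        # Hypothetical "Drag-n-Tap" selection sketch for a front-mounted VR
        # touchpad. Event names, menu layout, and mapping are illustrative
        # assumptions, not the paper's implementation.

        class DragNTapSelector:
            """Drag to move a highlight across menu items; tap to confirm."""

            def __init__(self, menu_items):
                self.menu_items = menu_items
                self.highlighted = None

            def on_drag(self, x_norm):
                # Map a normalized touch x coordinate in [0, 1] to a menu slot.
                slot = int(x_norm * len(self.menu_items))
                self.highlighted = min(max(slot, 0), len(self.menu_items) - 1)

            def on_tap(self):
                # Confirm the currently highlighted item, if any.
                if self.highlighted is None:
                    return None
                return self.menu_items[self.highlighted]

        selector = DragNTapSelector(["Back", "Play", "Settings"])
        selector.on_drag(0.8)     # finger dragged toward the right edge
        print(selector.on_tap())  # -> "Settings"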

    MilliSonic: Pushing the Limits of Acoustic Motion Tracking

    Recent years have seen interest in device tracking and localization using acoustic signals. State-of-the-art acoustic motion tracking systems, however, do not achieve millimeter accuracy and require large separation between microphones and speakers, and as a result do not meet the requirements of many VR/AR applications. Further, tracking multiple concurrent acoustic transmissions from VR devices today requires sacrificing accuracy or frame rate. We present MilliSonic, a novel system that pushes the limits of acoustic motion tracking. Our core contribution is a novel localization algorithm that can provably achieve sub-millimeter 1D tracking accuracy in the presence of multipath, while using only a single beacon with a small 4-microphone array. Further, MilliSonic enables concurrent tracking of up to four smartphones without reducing frame rate or accuracy. Our evaluation shows that MilliSonic achieves 0.7mm median 1D accuracy and 2.6mm median 3D accuracy for smartphones, which is 5x more accurate than state-of-the-art systems. MilliSonic enables two previously infeasible interaction applications: (a) 3D tracking of VR headsets using the smartphone as a beacon, and (b) fine-grained 3D tracking for the Google Cardboard VR system using a small microphone array.
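    As a rough illustration of the underlying idea, and emphatically not MilliSonic's own algorithm, the sketch below estimates 1D distance by cross-correlating a received near-ultrasonic chirp against the transmitted one. The sample rate, chirp band, and simulated geometry are assumptions; note that one sample at 48 kHz corresponds to roughly 7 mm of range, which is why sub-millimeter systems must refine the coarse correlation peak with phase-level analysis and multipath handling that this toy omits.

        # Toy chirp-based acoustic ranging (NOT MilliSonic's algorithm).
        # All parameters below are assumptions for illustration.
        import numpy as np

        fs = 48000                       # audio sample rate (Hz)
        c = 343.0                        # speed of sound (m/s)
        T = 0.01                         # chirp duration (s)
        f0, f1 = 16000, 20000            # near-ultrasonic chirp band (Hz)

        t = np.arange(int(fs * T)) / fs
        chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t ** 2))

        # Simulate a receiver 0.50 m away: the signal arrives d / c seconds late.
        true_d = 0.50
        delay = int(round(true_d / c * fs))
        rx = np.zeros(delay + len(chirp))
        rx[delay:] = chirp + 0.01 * np.random.randn(len(chirp))

        # Coarse delay estimate from the cross-correlation peak. One sample at
        # 48 kHz is ~7 mm of range, so sub-millimeter tracking additionally
        # needs phase refinement, which this sketch leaves out.
        corr = np.correlate(rx, chirp, mode="valid")
        est_d = np.argmax(np.abs(corr)) / fs * c
        print(f"estimated distance: {est_d:.3f} m")   # ~0.500 m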

    A Utility Framework for Selecting Immersive Interactive Capability and Technology for Virtual Laboratories

    There has been an increase in the use of virtual reality (VR) technology in the education community, since VR is emerging as a potent educational tool that offers students a rich source of educational material and makes learning exciting and interactive. With the rise in popularity and market expansion of VR technology in the past few years, a variety of consumer VR electronics have boosted educators' and researchers' interest in using these devices for practicing engineering and science laboratory experiments. However, little is known about how well such devices are suited to active learning in a laboratory environment. This research aims to address this gap by formulating a utility framework to help educators and decision-makers efficiently select the type of VR device that matches the design and capability requirements of their virtual laboratory blueprint. Furthermore, a use case of the framework is demonstrated by surveying five types of VR devices, ranging from low-immersive to fully immersive, along with their capabilities (i.e., hardware specifications, cost, and availability), and by considering the interaction techniques each VR device supports for the desired laboratory task. To validate the framework, a research study compares these five VR devices and investigates which provides the overall best fit for a 3D virtual laboratory that we implemented, based on interaction level, usability, and performance effectiveness.
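    The abstract does not give the framework's scoring rule, but utility frameworks of this kind are commonly expressed as a weighted sum over criteria. The sketch below is a hypothetical illustration only: the five device types, the criterion scores, and the weights are invented placeholders, not the study's data.

        # Hypothetical weighted-utility scoring for VR device selection.
        # Scores are on a 0-1 scale (higher is better) and are placeholders.

        devices = {
            "desktop monitor (low-immersive)": {"immersion": 0.2, "interaction": 0.4, "cost": 0.9, "availability": 1.0},
            "cardboard viewer":                {"immersion": 0.4, "interaction": 0.2, "cost": 1.0, "availability": 0.9},
            "standalone HMD":                  {"immersion": 0.8, "interaction": 0.7, "cost": 0.6, "availability": 0.7},
            "tethered HMD":                    {"immersion": 0.9, "interaction": 0.9, "cost": 0.4, "availability": 0.5},
            "CAVE (full-immersive)":           {"immersion": 1.0, "interaction": 0.8, "cost": 0.1, "availability": 0.2},
        }

        # Weights encode the lab designer's priorities and sum to 1.
        weights = {"immersion": 0.35, "interaction": 0.35, "cost": 0.20, "availability": 0.10}

        def utility(scores):
            """Weighted sum of criterion scores for one device."""
            return sum(weights[k] * scores[k] for k in weights)

        for name in sorted(devices, key=lambda d: utility(devices[d]), reverse=True):
            print(f"{utility(devices[name]):.2f}  {name}")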

    Soft, wireless periocular wearable electronics for real-time detection of eye vergence in a virtual reality toward mobile eye therapies

    Ocular disorders currently affect the developed world, causing loss of productivity in adults and children. While the cause of such disorders is not clear, neurological issues are often considered the most likely explanation. Treatment of strabismus and vergence disorders requires invasive surgery or clinic-based vision therapy, which has been used for decades due to the lack of alternatives such as portable therapeutic tools. Recent advances in electronic packaging and image processing have opened the possibility of optics-based portable eye tracking, but several technical and safety hurdles limit the implementation of that technology in wearable applications. Here, we introduce a fully wearable, wireless, soft electronic system that offers portable, highly sensitive tracking of eye movements (vergence) via the combination of skin-conformal sensors and a virtual reality system. Advances in material processing and printing based on aerosol jet printing enable reliable manufacturing of skin-like sensors, while a flexible electronic circuit is prepared by integrating chip components onto a soft elastomeric membrane. Analytical and computational study of a data classification algorithm provides a highly accurate tool for real-time detection and classification of ocular motions. An in vivo demonstration with 14 human subjects captures the potential of the wearable electronics as a portable therapy system that can be easily synchronized with a virtual reality headset.
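    The abstract names a data classification algorithm for real-time ocular-motion detection without detailing it. Purely as a hypothetical sketch of one common pattern for such a task (band-pass filtering fixed-length sensor windows, extracting simple features, and classifying each window as it arrives), consider the following; the sample rate, filter band, features, classifier, and synthetic data are all assumptions, not the paper's method.

        # Hypothetical windowed, real-time classification of periocular
        # sensor signals; every parameter here is an assumption.
        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.linear_model import LogisticRegression

        fs = 250                                  # sensor sample rate (Hz)
        b, a = butter(4, [0.5, 20], btype="bandpass", fs=fs)

        def features(window):
            """Simple per-window features for an ocular-motion classifier."""
            x = filtfilt(b, a, window)
            return [x.mean(), x.std(), np.abs(np.diff(x)).sum()]

        # Train on labeled 1-second windows: 0 = fixation, 1 = vergence motion
        # (simulated here as an added slow oscillation).
        rng = np.random.default_rng(0)
        osc = 3 * np.sin(2 * np.pi * 2 * np.arange(fs) / fs)
        fixation = [rng.normal(0, 1, fs) for _ in range(50)]
        vergence = [rng.normal(0, 1, fs) + osc for _ in range(50)]
        X = np.array([features(w) for w in fixation + vergence])
        y = np.array([0] * 50 + [1] * 50)
        clf = LogisticRegression().fit(X, y)

        # Real-time use: classify each incoming window as it arrives.
        incoming = rng.normal(0, 1, fs) + osc
        print("vergence" if clf.predict([features(incoming)])[0] else "fixation")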

    Parameter tuning for enhancing inter-subject emotion classification in four classes for VR-EEG predictive analytics

    The following research describes the potential of classifying emotions using a wearable EEG headset while a virtual environment stimulates the users' responses. Work on emotion classification has typically relied on a clinical-grade EEG headset with a 2D monitor screen for stimulus evocation, which may introduce additional artifacts or inaccurate readings into the dataset because users cannot give their full attention to the stimuli, even when the stimuli presented should have been effective in provoking emotional reactions. Furthermore, a clinical-grade EEG headset takes a long time to set up: hair can prevent the electrodes from collecting brainwave signals, and electrodes can come loose, requiring additional time to fix. With such a lengthy setup, the user may experience fatigue and become incapable of responding naturally to the emotion presented by the stimuli. Therefore, this research introduces the use of a wearable low-cost EEG headset with dry electrodes, which requires only a trivial amount of time to set up, and a Virtual Reality (VR) headset for presenting the emotional stimuli in an immersive VR environment, paired with earphones to provide the fully immersive experience needed to evoke emotion. The 360-degree video stimuli are designed and stitched together according to the arousal-valence space (AVS) model, with each quadrant having an 80-second stimulus presentation period followed by a 10-second rest period between quadrants. The EEG dataset is collected with the wearable low-cost headset using four channels located at TP9, TP10, AF7, and AF8. The collected dataset is then fed into machine learning algorithms, namely KNN, SVM, and deep learning, focusing on inter-subject test approaches with 10-fold cross-validation. SVM using Radial Basis Function Kernel 1 achieved the highest accuracy, at 85.01%. This suggests that a wearable low-cost EEG headset, despite a significantly lower-resolution signal than clinical-grade equipment and a very limited number of electrodes, is highly promising as an emotion classification BCI tool, and may thus open up myriad practical, affordable solutions in the medical, education, military, and entertainment domains.
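    As a hedged sketch of the evaluation described above, and not the study's exact preprocessing, the code below runs an RBF-kernel SVM under 10-fold cross-validation on placeholder features for the four channels. The feature layout and all data are illustrative; on random placeholder data the score will hover near chance (about 25% for four classes).

        # Sketch of 4-class SVM (RBF kernel) evaluation with 10-fold CV.
        # Feature layout and data are placeholders, not the study's dataset.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # One row per trial; e.g. 5 band-power features for each of the four
        # channels (TP9, TP10, AF7, AF8); labels are the 4 AVS quadrants.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 4 * 5))
        y = rng.integers(0, 4, size=400)

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        scores = cross_val_score(model, X, y, cv=10)
        print(f"10-fold accuracy: {scores.mean():.2%} (chance is ~25% on random data)")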

    Design Interfaces with VR

    Virtual Reality (VR) is maturing as a technology. Now that mainstream head-mounted displays (HMDs) are consumer-affordable, application development has begun in earnest. Some of this development transitions existing applications (e.g. computer games) to work with a 3D tracked interface, while other work explores completely novel and innovative uses of VR. The idea of using VR in architectural practice has a long history. As a tool with the potential to allow 3D visualisation at 1:1 scale, the use case for architectural visualisation has seemed natural and obvious since the early days of the technology. However, the realisation of this idea was not initially straightforward. In 2000, UCL built a CAVE-like VR projection theatre: a 3m x 3m room where three of the four walls and the floor are stereo displays, viewed through tracked stereo glasses that allow perspective-correct stereo views. This was driven by a state-of-the-art SGI computer, many times more powerful than any standard PC (and about 20 times the size). However, despite this vast graphics processing power, most architectural models could not easily be adapted to this new technology. These models had been designed for accurate renderings of detailed geometry. Twenty minutes of processing with standard computer graphics applications on a desktop PC could produce a beautiful rendering of a view into such a model, but VR demands real-time frame rates (ideally at least 60 frames per second), and the models were simply too large and detailed for this. These tensions between designs for single-viewpoint renderings and designs for real-time rendering are now better understood, and advances in both graphics hardware and software have improved the situation. However, recent trends in consumer VR towards standalone headsets mean that simulations are now driven by the same graphics processors that drive the mobile devices in our pockets. Aside from these technical hurdles, cost has been the main contributing factor in the relatively slow uptake of VR as a tool for exploring design; but now that affordable devices are available, what are the factors that still hinder progress?

    Emerging Affect Detection Methodologies in VR and Future Directions

    The uses of virtual reality are constantly evolving, from healthcare treatments to evaluating commercial products, all of which would benefit from a better understanding of the emotional state of the individual. There is ongoing research into developing specially adapted methods for recognizing the user's affect while immersed in virtual reality. This paper outlines the approaches attempted and the available methodologies that embed sensors into wearable devices for real-time affect detection. These emerging technologies are introducing innovative ways of studying and interpreting emotion-related data produced within immersive experiences.