
    Wearable Mixed Reality System In Less Than 1 Pound

    We have designed a wearable Mixed Reality (MR) framework that renders game-like 3D scenes in real time on see-through head-mounted displays (HMDs) and localizes the user's position within a known wireless network area. Our equipment weighs less than 1 pound (0.45 kg). The information visualized on the mobile device can be sent on demand from a remote server and rendered onboard in real time. We present our PDA-based platform as a valid alternative for wearable MR contexts with fewer mobility and encumbrance constraints: our approach eliminates the backpack with a laptop, GPS antenna, and heavy HMD typically required in such cases. A discussion of our results and user experiences with this handheld 3D rendering approach is presented as well.
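    The abstract does not specify the localization technique; as a purely illustrative sketch, one common way to estimate a user's position within a known wireless area is RSSI fingerprinting with nearest-neighbour matching. The access point names and fingerprint data below are hypothetical, not from the paper.

```python
# Hypothetical sketch of Wi-Fi RSSI fingerprint localization, one common
# way to estimate user position within a known wireless coverage area.
import math

# Offline phase: RSSI (dBm) per access point, measured at known positions (metres).
FINGERPRINTS = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70, "ap3": -62},
    (5.0, 0.0): {"ap1": -55, "ap2": -58, "ap3": -66},
    (5.0, 5.0): {"ap1": -68, "ap2": -45, "ap3": -60},
}

def locate(scan: dict) -> tuple:
    """Return the fingerprint position whose RSSI vector is closest (Euclidean)."""
    def distance(reference: dict) -> float:
        shared = scan.keys() & reference.keys()
        return math.sqrt(sum((scan[ap] - reference[ap]) ** 2 for ap in shared))
    return min(FINGERPRINTS, key=lambda pos: distance(FINGERPRINTS[pos]))

print(locate({"ap1": -54, "ap2": -59, "ap3": -65}))  # -> (5.0, 0.0)
```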

    A virtual 3D mobile guide in the INTERMEDIA project

    In this paper, we introduce a European research project, Interactive Media with Personal Networked Devices (INTERMEDIA), in which we seek to progress beyond home- and device-centric convergence toward truly user-centric convergence of multimedia. Our vision is to make the user the multimedia center: the user as the point at which multimedia services and the means for interacting with them converge. This paper proposes the main research goals in providing users with a personalized interface and content independent of physical networked devices, space, and time. As a case study, we describe an indoor, mobile mixed reality guide system: Chloe@University. With a see-through head-mounted display (HMD) connected to a small wearable computing device, Chloe@University provides an efficient way to guide someone inside a building. A 3D virtual character in front of the user guides him/her to the required destination.

    AI-based smart sensing and AR for gait rehabilitation assessment

    Health monitoring is crucial in hospitals and rehabilitation centers, but challenges such as human error, patient compliance concerns, time, cost, technology, and environmental factors can affect the reliability and accuracy of health data. In order to improve patient care, healthcare providers must address these challenges. We propose a non-intrusive smart sensing system that uses a SensFloor smart carpet and an inertial measurement unit (IMU) wearable sensor on the user’s back to monitor position and gait characteristics. Furthermore, we implemented machine learning (ML) algorithms to analyze the data collected from the SensFloor and IMU sensors. The system generates real-time data that are stored in the cloud and are accessible to physical therapists and patients. Additionally, the system’s real-time dashboards provide a comprehensive analysis of the user’s gait and balance, enabling personalized training plans with tailored exercises and better rehabilitation outcomes. Using non-invasive smart sensing technology, our proposed solution enables healthcare facilities to monitor patients’ health and enhance their physical rehabilitation plans.
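    The abstract does not name the ML algorithms used; the following is a minimal, hypothetical sketch of the kind of gait-classification pipeline it describes, using invented stride features and synthetic data rather than SensFloor/IMU recordings.

```python
# Hypothetical sketch of an ML pipeline for gait assessment. Feature names,
# data, and the model choice are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic features per sample: [stride_time_s, stride_length_m, sway_rms]
normal = rng.normal([1.0, 1.3, 0.02], [0.05, 0.08, 0.005], size=(100, 3))
impaired = rng.normal([1.4, 0.9, 0.05], [0.10, 0.10, 0.010], size=(100, 3))
X = np.vstack([normal, impaired])
y = np.array([0] * 100 + [1] * 100)  # 0 = typical gait, 1 = impaired gait

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```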

    An Ergonomics Investigation of the Application of Virtual Reality on Training for a Precision Task

    Virtual reality is rapidly expanding its capabilities and accessibility to consumers. The application of virtual reality in training for precision tasks has been limited to specialized equipment such as a haptic glove or a haptic stylus, and has not been studied for handheld controllers in consumer-grade systems such as the HTC Vive. A straight-line precision steadiness task was adapted for virtual reality to emulate basic linear movements in industrial operations and disability rehabilitation. This study collected the total time and the error time for the straight-line task in both virtual reality and a physical control experiment for 48 participants. The task was performed at four gap widths (4 mm, 5 mm, 6 mm, and 7 mm) to examine the effects of virtual reality at different levels of precision. Average error ratios were then calculated and analyzed for strong associations with various factors. The results indicated that the Environment × Gap Width interaction significantly affected average error ratios (p < 0.001). This human factors study also collected participants’ ratings of user experience dimensions, such as difficulty, comfort, strain, reliability, and effectiveness, for both physical and virtual environments in a questionnaire. The ratings for difficulty, reliability, and effectiveness were significantly different, with virtual reality consistently rated worse than the physical environment. An analysis of questionnaire responses indicated a significant association between overall environment preference (physical or virtual) and performance data (p = 0.027). In general, virtual reality yielded higher error among participants, and as the difficulty of the task increased, performance in virtual reality degraded significantly. Virtual reality has great potential for a variety of precision applications, but the technology in consumer-grade hardware must improve significantly to enable these applications. Virtual reality is also difficult to implement without prior experience or specialized knowledge in programming, which currently makes the technology inaccessible for many people. Future work is needed to investigate a larger variety of precision tasks and movements to expand the body of knowledge of virtual reality applications for training purposes.
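    A short worked sketch of the error-ratio metric follows, assuming it is defined as error time divided by total task time; the abstract does not give the exact formula, and all numbers below are illustrative.

```python
# Hypothetical sketch of the error-ratio metric suggested by the abstract:
# error ratio = time spent in error (boundary contact) / total task time.
trials = [
    # (gap_width_mm, total_time_s, error_time_s) -- invented example values
    (4, 12.8, 3.1),
    (5, 11.2, 2.0),
    (6, 10.5, 1.1),
    (7, 9.9, 0.6),
]

for gap, total, error in trials:
    ratio = error / total
    print(f"gap {gap} mm: error ratio = {ratio:.3f}")
```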

    Augmented Reality on Mobile Devices to Improve the Academic Achievement and Independence of Students with Disabilities

    Augmented reality (AR) is a technology that overlays digital information on a live view of the physical world to create a blended experience. AR can provide unique experiences and opportunities to learn and interact with information in the physical world (Craig, 2013). The purpose of this dissertation was to investigate uses of AR on mobile devices to improve the academic and functional skills of students with disabilities. The first chapter is a literature review providing a clear understanding of AR and its connections with existing learning theories and evidence-based practices that are relevant for meeting the needs of individuals with disabilities. This chapter explores the available research on mobile devices, AR educational applications, and AR research involving students with disabilities. The purpose of Study 1 was to examine the effects of augmented reality vocabulary instruction for science terms on college-aged students with intellectual disability (ID). A multiple probe across skills design was used to determine whether there was a functional relation between the AR vocabulary instruction and the acquisition of correctly defined and labeled science terms. The results indicated that all participants learned new science vocabulary terms using the augmented reality vocabulary instruction. Study 2 examined the effects of using an AR navigation application, Google Maps, and a paper map as navigation aids for four college-aged students with ID enrolled in a postsecondary education (PSE) program. Using an adapted alternating treatments design, students used the three navigation aids to travel independently to unknown businesses in a large downtown area to seek employment opportunities. During the intervention phase, students used a mobile device with Google Maps and the AR application to navigate to unfamiliar businesses. Results from Study 2 indicated that all students improved navigation decision making when using AR. In the final chapter, both studies are discussed in relation to the AR research literature and as potential interventions. Findings from the studies include the capabilities of AR on mobile devices, academic and functional applications of this technology for students with disabilities, implications for mobile learning, and limitations of this technology. Recommendations for future research are presented to further examine using AR for students with disabilities.

    Fine-grained Haptics: Sensing and Actuating Haptic Primary Colours (force, vibration, and temperature)

    This thesis discusses the development of a multimodal, fine-grained visual-haptic system for teleoperation and robotic applications. The system is primarily composed of two complementary components: an input device known as the HaptiTemp sensor (combining “Haptics” and “Temperature”), a novel thermosensitive GelSight-like sensor, and an output device, an untethered multimodal fine-grained haptic glove. The HaptiTemp sensor is a visuotactile sensor that can sense the haptic primary colours: force, vibration, and temperature. It has novel switchable UV markers that can be made visible using UV LEDs. The switchable markers are a key novelty of the HaptiTemp because they allow tactile information to be analyzed from gel deformation without impairing the ability to classify or recognise images, resolving the trade-off between marker density and capturing high-resolution images with a single sensor. The HaptiTemp sensor can measure vibrations by counting the number of blobs or pulses detected per unit time using a blob detection algorithm. For the first time, temperature detection was incorporated into a GelSight-like sensor, making the HaptiTemp a haptic primary colours sensor. The HaptiTemp sensor is also capable of rapid temperature sensing, with a 643 ms response time over the 31°C to 50°C temperature range. This fast temperature response is comparable to the withdrawal reflex response in humans, making the HaptiTemp the first sensor reported in the robotics community that can trigger a sensory impulse mimicking a human reflex. The HaptiTemp sensor can also perform simultaneous temperature sensing and image classification using a machine vision camera, the OpenMV Cam H7 Plus, a capability that has not previously been reported or demonstrated by any tactile sensor. The HaptiTemp sensor can be used in teleoperation because it can transmit tactile analysis and image classification results over wireless communication. The HaptiTemp sensor is the closest thing to human skin in tactile sensing, tactile pattern recognition, and rapid temperature response. In order to feel what the HaptiTemp sensor is touching from a distance, a corresponding output device, an untethered multimodal haptic hand wearable, was developed to actuate the haptic primary colours sensed by the HaptiTemp sensor. This wearable communicates wirelessly and has fine-grained cutaneous feedback for feeling the edges or surfaces of the tactile images captured by the HaptiTemp sensor. It also has gradient kinesthetic force feedback that can restrict finger movements based on the force estimated by the HaptiTemp sensor: a retractable string from an ID badge holder, equipped with mini servos that control the stiffness of the wire, is attached to each fingertip. Vibrations detected by the HaptiTemp sensor can be actuated by the tapping motion of the tactile pins or by a buzzing mini-vibration motor. There is also a tiny annular Peltier device, or ThermoElectric Generator (TEG), with a mini-vibration motor, forming thermo-vibro feedback in the palm area that can be activated by a ‘hot’ or ‘cold’ signal from the HaptiTemp sensor. The haptic primary colours can also be embedded in a VR environment and actuated by the multimodal hand wearable.
A VR application was developed to demonstrate rapid tactile actuation of edges, allowing the user to feel the contours of virtual objects. Collision detection scripts were embedded to activate the corresponding actuator in the multimodal haptic hand wearable whenever the tactile matrix simulator or hand avatar in VR collides with a virtual object. The TEG also gets warm or cold depending on the virtual object the participant has touched. Tests were conducted to explore virtual objects in 2D and 3D environments using Leap Motion control and a VR headset (Oculus Quest 2). Moreover, fine-grained cutaneous feedback was developed to feel the edges or surfaces of a tactile image, such as the tactile images captured by the HaptiTemp sensor, or to actuate tactile patterns on 2D or 3D virtual objects. The prototype resembles an exoskeleton glove with 16 tactile actuators (tactors) on each fingertip, 80 tactile pins in total, made from commercially available P20 Braille cells. Each tactor can be controlled individually to enable the user to feel the edges or surfaces of images, such as the high-resolution tactile images captured by the HaptiTemp sensor, and the glove can be used to enhance immersion in a virtual reality environment. The tactors can be actuated in a tapping manner, creating a form of vibration feedback distinct from the buzzing vibration produced by a mini-vibration motor, and the tactile pin height can be varied, creating a gradient of pressure on the fingertip. Finally, the integration of the high-resolution HaptiTemp sensor and the untethered, multimodal, fine-grained haptic hand wearable is presented, forming a visuotactile system for sensing and actuating haptic primary colours. Force, vibration, and temperature sensing tests with corresponding actuation tests demonstrated a unified visual-haptic system. Aside from sensing and actuating haptic primary colours, touching the edges or surfaces of the tactile images captured by the HaptiTemp sensor was carried out using the fine-grained cutaneous feedback of the haptic hand wearable.
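    As a purely illustrative sketch of the blob-counting mechanism described above (not the thesis code; the camera source, detector parameters, and timing window are assumptions), vibration activity can be estimated by counting blob detections per unit time:

```python
# Hypothetical sketch: count blob detections over a fixed time window, the
# general mechanism the thesis describes for estimating vibration from the
# sensor's gel images. Camera index 0 stands in for the sensor's camera feed.
import time
import cv2

detector = cv2.SimpleBlobDetector_create()  # default parameters, for illustration
capture = cv2.VideoCapture(0)

pulses, start = 0, time.time()
while time.time() - start < 1.0:  # count detections over a 1 s window
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pulses += len(detector.detect(gray))

capture.release()
print(f"blob detections in 1 s window: {pulses}")
```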

    Exploring Dual-Camera-Based Augmented Reality for Cultural Heritage Sites

    Context: Augmented Reality (AR) provides a novel approach for presenting cultural heritage content. Recent advances in AR research and the uptake of powerful mobile devices mean AR is a viable option for heritage institutions, but there are challenges that must be overcome before high-quality AR is commonplace. Aims: This project details the development of an AR “magic camera” system featuring novel dual-camera marker-based tracking, allowing users to take AR photos at outdoor heritage sites using a tablet computer. The aims of the project were to assess the feasibility of the tracking method, evaluate the usability of the AR system, and explore implications for the heritage sector. Method: A prototype system was developed, and a user study was designed in which participants had to recreate reference images as closely as possible using an iPad and the AR system around the University grounds. Data such as completion time and error rates were collected for analysis, and the images produced were rated for quality by three experts. Results: Participants responded positively to the system, and the new tracking method was used successfully. The usability study uncovered a number of issues, most of which are solvable in future software versions. Some issues, such as difficulty orientating objects, depend on hardware and software improvements before they can be fixed, but these problems did not affect the quality of the images produced. Participants completed each task more quickly after initial slowness, and while the system was frustrating for some, most found the experience enjoyable. Conclusion: The study successfully uncovered usability problems. The dual-camera tracking element was successful, but the marker-based element encountered lighting problems and high false-positive rates. Orientating objects using inertial sensors was not intuitive; more research in this area would be beneficial. The heritage sector must also consider development, maintenance, and training costs, and site modification issues.
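    The abstract does not identify the marker system used; the following is a minimal sketch of single-camera marker detection using OpenCV's ArUco module (opencv-python 4.7 or later) as a stand-in. The project's actual markers and its dual-camera fusion are not detailed here.

```python
# Hypothetical sketch of marker-based tracking with OpenCV ArUco markers,
# used as a stand-in for the project's (unspecified) marker system.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

capture = cv2.VideoCapture(0)  # one of the two cameras
ok, frame = capture.read()
capture.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    # High false-positive rates (as the study reports) can be mitigated by
    # requiring the same marker ID across several consecutive frames.
    print(f"markers detected: {0 if ids is None else len(ids)}")
```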

    Design and Implementation of a wearable, context-aware MR framework for the Chloe@University application

    In this paper, we present the technical details and the challenges we faced during the development and evaluation phases of our wearable indoor guiding system, which consists of a virtual personal assistant guiding the user to his/her desired destination. The main issues discussed can be classified into three categories: context detection, real-time 3D rendering, and user interaction.
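    The paper's abstract does not state the navigation algorithm behind the guiding assistant; as a hypothetical sketch, the routing step for such a guide could be a shortest-path search over a graph of rooms and corridors. The building graph and node names below are invented for illustration.

```python
# Hypothetical sketch of the routing step behind an indoor guide like
# Chloe@University: shortest hop-count path over an invented building graph.
from collections import deque

BUILDING = {  # adjacency list: node -> reachable neighbours
    "entrance": ["hall"],
    "hall": ["entrance", "stairs", "room_101"],
    "stairs": ["hall", "room_201"],
    "room_101": ["hall"],
    "room_201": ["stairs"],
}

def route(start: str, goal: str) -> list:
    """Breadth-first search: shortest path for the virtual guide to walk."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in BUILDING[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(route("entrance", "room_201"))  # -> ['entrance', 'hall', 'stairs', 'room_201']
```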