
    Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the software system STAR, porting STAR to IDL/widgets (improved user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques, by exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), by experimenting with tools still under research at the Department of Computer Science (e.g., use of glyphs for multidimensional data visualization), and by researching current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, any project activity is driven by the need to interpret astrophysical data more effectively.
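    As a rough illustration of the glyph idea mentioned above, the sketch below encodes four data dimensions per point: two in position, one in marker size, and one in color. It is purely illustrative; the variable names and the use of matplotlib are assumptions, not the STAR/IDL/AVS tools the report describes.

```python
# Toy glyph plot: each marker encodes four dimensions of a data point.
# Illustrative only; not the STAR/IDL tools described in the report.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x, y = rng.uniform(size=50), rng.uniform(size=50)   # dimensions 1-2: position
flux = rng.uniform(10, 200, size=50)                 # dimension 3: glyph size
temperature = rng.uniform(size=50)                   # dimension 4: glyph color

plt.scatter(x, y, s=flux, c=temperature, cmap="plasma")
plt.colorbar(label="dimension 4 (color)")
plt.title("Glyphs encoding four data dimensions")
plt.show()
```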

    Parkinson's Symptoms quantification using wearable sensors

    Parkinson’s disease (PD) is a common neurodegenerative disorder affecting more than one million people in the United States and seven million people worldwide. Motor symptoms such as tremor, slowness of movements, rigidity, postural instability, and gait impairment are commonly observed in PD patients. Currently, Parkinsonian symptoms are usually assessed in clinical settings, where a patient has to complete some predefined motor tasks. A physician then assigns a score based on the Unified Parkinson’s Disease Rating Scale (UPDRS) after observing the motor task. However, this procedure suffers from inter-subject variability. Also, patients tend to show fewer symptoms during clinical visits, which leads to an inaccurate assessment of disease severity. The objective of this study is to overcome these limitations by building a system using Inertial Measurement Units (IMUs) that can be used in clinics and at home to collect PD symptom data, and to build algorithms that can quantify PD symptoms more effectively. Data was acquired from patients seen at the Movement Disorders Clinic at Sanford Health in Fargo, ND. Subjects wore Physilog IMUs and performed tasks for tremor, bradykinesia, and gait according to the protocol approved by the Sanford IRB. The data was analyzed using a modified algorithm that was initially developed using data from normal subjects emulating PD symptoms. For tremor measurement, the study showed that sensor signals collected from the index finger predict tremor severity more accurately than signals from a sensor placed on the wrist. For finger tapping, a task measuring bradykinesia, the algorithm could predict with more than 80% accuracy when a set of features was selected to train the prediction model. Regarding gait, three different analyses were performed to find effective parameters indicative of PD severity. A gait speed measurement algorithm was first developed using a treadmill as a reference. It was then shown that the selected features could predict PD gait with 85.5% accuracy.
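    A minimal sketch of how tremor intensity might be quantified from an IMU gyroscope signal is shown below: band-pass filter the signal around the Parkinsonian tremor band and take the RMS amplitude. The band limits, filter order, and feature choice are assumptions for illustration, not the thesis's actual algorithm.

```python
# Illustrative tremor-intensity estimate from a finger/wrist IMU gyroscope.
# Band limits and feature choice are assumptions, not the study's algorithm.
import numpy as np
from scipy.signal import butter, filtfilt

def tremor_intensity(gyro, fs=100.0, band=(3.5, 7.5)):
    """RMS amplitude of the gyroscope signal inside the tremor band (~4-7 Hz)."""
    # Band-pass filter each axis to isolate tremor-band oscillation.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, gyro, axis=0)           # shape: (samples, 3 axes)
    magnitude = np.linalg.norm(filtered, axis=1)      # combine the three axes
    return float(np.sqrt(np.mean(magnitude ** 2)))    # RMS over the recording

# Example: 10 s of simulated 5 Hz tremor plus noise, sampled at 100 Hz.
t = np.arange(0, 10, 0.01)
gyro = np.column_stack([np.sin(2 * np.pi * 5 * t)] * 3) + 0.1 * np.random.randn(len(t), 3)
print(tremor_intensity(gyro))
```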

    Evaluation of Alternative Face Detection Techniques and Video Segment Lengths on Sign Language Detection

    Sign language is the primary medium of communication for people who are hearing impaired. Sign language videos are hard to discover in video sharing sites because text-based search relies on metadata rather than the content of the videos. The sign language community currently shares content through ad-hoc mechanisms, as no library meets their requirements. Low-cost or even real-time classification techniques are valuable to create a sign language digital library whose content is updated as new videos are uploaded to YouTube and other video sharing sites. Prior research was able to detect sign language videos using face detection and background subtraction with recall and precision suitable to create a digital library. That approach analyzed one minute of each video being classified. Polar Motion Profiles achieved better recall on videos containing multiple signers, but at a significant computational cost, as the approach included five face trackers. This thesis explores techniques to reduce the computation time involved in feature extraction without unduly impacting precision and recall. Specifically, we explore three optimizations to the above techniques. First, we compared the individual performance of the five face detectors and determined the best performing single face detector. Second, we evaluated detection performance using Polar Motion Profiles when face detection was performed on sampled frames rather than on every frame. From our results, Polar Motion Profiles performed well even when the information between sampled frames was sacrificed. Finally, we looked at the effect of using shorter video segment lengths for feature extraction. We found that the drop in precision was minor as video segments were shortened from the initial empirical length of one minute. Through our work, we found an empirical configuration that can classify videos with close to two orders of magnitude less computation, with precision and recall only slightly below those of the original voting scheme. Our model reduces the detection time of sign language videos, which in turn would help enrich the digital library with fresh content quickly. Future work can focus on enabling diarization by segmenting videos into sign language and non-sign-language content, using effective background subtraction techniques for shorter videos.
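    The frame-sampling optimization described above can be sketched as running a single face detector on every Nth frame rather than on all frames. The sketch below is a minimal illustration using an OpenCV Haar-cascade detector; the choice of detector, the sampling stride, and the function name are assumptions, not the detectors or parameters evaluated in the thesis.

```python
# Minimal sketch: face detection on every `stride`-th frame of a video.
# Detector and stride are illustrative assumptions.
import cv2

def sampled_face_boxes(video_path, stride=10):
    """Detect faces on every `stride`-th frame; return (frame_index, boxes) pairs."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    results, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:                       # skip in-between frames
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            results.append((index, boxes))
        index += 1
    cap.release()
    return results
```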

    An empirical evaluation of two natural hand interaction systems in augmented reality

    Human-computer interaction based on hand gesture tracking is not uncommon in Augmented Reality. In fact, the most recent optical Augmented Reality devices include this type of natural interaction. However, due to hardware and system limitations, these devices more often than not settle for semi-natural interaction techniques, which may not always be appropriate for some of the tasks needed in Augmented Reality applications. For this reason, we compare two different optical Augmented Reality setups equipped with hand tracking. The first one is based on a Microsoft HoloLens (released in 2016) and the other one is based on a Magic Leap One (released more than two years later). Both devices offer similar solutions for the visualization and registration problems but differ in the hand tracking approach, since the former uses metaphoric hand-gesture tracking and the latter relies on an isomorphic approach. We raise seven research questions regarding these two setups, which we answer after performing two task-based experiments using virtual elements of different sizes that are moved using natural hand interaction. The questions deal with the accuracy and performance achieved with these setups and also with user preference, recommendation, and perceived usefulness. For this purpose, we collect both subjective and objective data about the completion of these tasks. Our initial hypothesis was that there would be differences, in favor of the isomorphic and newer setup, in the use of hand interaction. However, the results surprisingly show that there are very small objective differences between these setups, and the isomorphic approach is not significantly better in terms of accuracy and mistakes, although it allows a faster completion of one of the tasks. In addition, no notable statistically significant differences can be found between the two setups in the subjective data gathered through a specific questionnaire. We also analyze the opinions of the participants in terms of usefulness, preference, and recommendation. The results show that, although the Magic Leap-based system receives more support, the differences are not statistically significant.

    Motion-Based Video Games for Stroke Rehabilitation with Reduced Compensatory Motions

    Stroke is the leading cause of long-term disability among adults in industrialized nations, with 80% of stroke survivors experiencing motor disabilities. Recovery requires daily exercise with a high number of repetitions, often without therapist supervision. Motion-based video games can help motivate people with stroke to perform the necessary exercises to recover. We explore the design space of video games for stroke rehabilitation using Wii remotes and webcams as input devices, and share the lessons we learned about what makes games therapeutically useful. We demonstrate the feasibility of using games for home-based stroke therapy with a six-week case study. We show that exercise with games can help recovery even 17 years after the stroke, and share the lessons that we learned for game systems to be used at home as part of outpatient therapy. As a major issue with home-based therapy, we identify that unsupervised exercises lead to compensatory motions that can impede recovery and create new health issues. We reliably detect torso compensation in shoulder exercises using a custom harness, and develop a game that meaningfully uses both exercise and compensation as inputs. We provide in-game feedback that reduces compensation in a number of ways. We evaluate alternative ways of reducing compensation in controlled experiments and show that techniques from operant conditioning are effective in significantly reducing compensatory behavior compared to existing approaches.
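    A minimal sketch of threshold-based torso-compensation detection during a shoulder exercise is shown below. The sensor reading, the calibration baseline, and the threshold value are assumptions for illustration; the abstract does not describe the harness's actual signal processing.

```python
# Illustrative compensation check: flag samples where the torso leans more
# than a threshold away from the player's calibrated resting posture.
# Baseline and threshold values are assumptions, not the study's parameters.
def detect_compensation(torso_angles_deg, baseline_deg, threshold_deg=10.0):
    """Return a per-sample flag indicating likely torso compensation."""
    return [abs(a - baseline_deg) > threshold_deg for a in torso_angles_deg]

# Example: resting posture calibrated at 2 degrees; the 15-degree sample is flagged.
print(detect_compensation([2.5, 4.0, 15.0, 3.0], baseline_deg=2.0))
```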

    A new taxonomy for locomotion in virtual environments

    The concept of virtual reality, although evolving due to technological advances, has always been fundamentally defined as a revolutionary way for humans to interact with computers. The revolution comes from the concept of immersion, which is the essence of virtual reality. Users are no longer passive observers of information, but active participants that have leaped through the computer screen and are now part of the information. This has tremendous implications for how users interact with computer information in the virtual world. Perhaps the most common form of interaction in a virtual environment is locomotion. The term locomotion is used to indicate a user's control of movement through the virtual environment. There are many ways for a user to change his viewpoint in the virtual world. Because virtual reality is a relatively young field, no standard interfaces exist for interaction, particularly locomotion, in a virtual world. There have been few attempts to formally classify the ways in which virtual locomotion can occur. These classification schemes do not take into account the various interaction devices, such as joysticks and vehicle mock-ups, that are used to perform the locomotion. Nor do they account for the differences in display devices, such as head-mounted displays, monitors, or projected walls. This work creates a new classification system for virtual locomotion methods. The classification provides guidelines for designers of new VR applications on what types of locomotion are best suited to the requirements of new applications. Unlike previous taxonomies, this work incorporates display devices, interaction devices, and travel tasks, along with identifying two major components of travel: translation and rotation. The classification also identifies important sub-components of these two. In addition, we have experimentally validated the importance of display device and rotation method in this new classification system. This was accomplished through a large-scale user experiment. Users performed an architectural walkthrough of a virtual building. Both objective and subjective measures indicate that the choice of display device is extremely important to the task of locomotion, and that for each display device, the choice of rotation method is also important.