
    A new approach to study gait impairments in Parkinson’s disease based on mixed reality

    Integrated master's dissertation in Biomedical Engineering (specialization in Medical Electronics). Parkinson's disease (PD) is the second most common neurodegenerative disorder after Alzheimer's disease. PD onset occurs at 55 years of age on average, and its incidence increases with age. The disease results from the degeneration of dopamine-producing neurons in the basal ganglia and is characterized by motor symptoms such as freezing of gait, bradykinesia, hypokinesia, akinesia, and rigidity, which negatively impact patients' quality of life. To monitor and improve these PD-related gait disabilities, several technology-based methods have emerged in recent decades. However, these solutions still require more customization to patients' daily-living tasks in order to provide more objective, reliable, and long-term data about patients' motor condition in home-related contexts. Providing this quantitative data to physicians will enable more personalized and better treatments. In addition, motor rehabilitation sessions supported by assistive devices need to include quotidian tasks so that patients are trained for their daily motor challenges. One of the most promising technology-based methods is virtual, augmented, and mixed reality (VR/AR/MR), which immerses patients in virtual environments and provides sensory stimuli (cues) to assist with these disabilities. However, further research is needed to conceptualize efficient, patient-centred VR/AR/MR approaches and to increase their clinical evidence. Bearing this in mind, the main goal of this dissertation was to design, develop, test, and validate virtual environments to assess and train PD-related gait impairments using mixed reality smart glasses integrated with an external motion-tracking device. Using specific virtual environments that trigger PD-related gait impairments (turning, doorways, and narrow spaces), it is hypothesized that patients can be assessed and trained in their daily walking challenges. The tool also integrates on-demand visual cues to provide visual biofeedback and foster motor training. The solution was validated with end users to test the identified hypothesis. The results showed that mixed reality can indeed recreate real-life environments that often provoke PD-related gait disabilities by placing virtual objects on top of the real world. In contrast, the biofeedback strategies did not significantly improve the patients' motor performance. The user experience evaluation showed that participants enjoyed the activity and felt that the tool can help their motor performance.
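    To make the cueing idea above concrete, the sketch below shows one plausible (hypothetical, not the dissertation's actual implementation) way an on-demand visual cue could be triggered from streamed gait data: when the smoothed step length or cadence reported by the motion tracker falls below patient-specific thresholds, the smart glasses are asked to render a cue. All class names, thresholds, and sample values are illustrative assumptions.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class GaitSample:
    timestamp: float      # seconds
    step_length: float    # metres, estimated by the external motion tracker
    cadence: float        # steps per minute

class VisualCueTrigger:
    """Illustrative decision logic for showing an on-demand visual cue."""

    def __init__(self, step_length_floor=0.35, cadence_floor=80.0, window=5):
        self.step_length_floor = step_length_floor  # patient-specific threshold (m), assumed
        self.cadence_floor = cadence_floor          # patient-specific threshold (steps/min), assumed
        self.history = deque(maxlen=window)         # smooth over the last few samples

    def update(self, sample: GaitSample) -> bool:
        """Return True if the glasses should render a visual cue for this sample."""
        self.history.append(sample)
        mean_step = sum(s.step_length for s in self.history) / len(self.history)
        mean_cadence = sum(s.cadence for s in self.history) / len(self.history)
        # Cue when gait shortens or slows beyond the configured floors,
        # e.g. while approaching a doorway, a narrow space, or a turn.
        return mean_step < self.step_length_floor or mean_cadence < self.cadence_floor

# Fabricated samples: gait gradually deteriorates until the cue fires.
trigger = VisualCueTrigger()
for t, (length, cad) in enumerate([(0.55, 105.0), (0.40, 95.0), (0.30, 70.0)]):
    cue = trigger.update(GaitSample(t, length, cad))
    print(f"t={t}s step={length}m cadence={cad} -> cue={cue}")
```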

    Quadruped Pupper Robotics: Dynamics and Control

    The purpose of this project is to provide insights into the Pupper Robot, from Hands-On Robotics (handsonrobotics.org), for future studies and research. The Hands-On Robotics (HOR) team aims to provide robotics kits and educational curricula for exploring agile locomotion, motor control, and AI at community colleges and high schools. We worked with the HOR team on this project to help them better achieve these goals. The main objectives of this project were: 1. Build the robot and analyze its dynamical behavior. 2. Investigate the robot's control from both hardware and software perspectives. 3. Design a new gait for the Pupper Robot. 4. Create an implementation guide for future groups, documenting the knowledge gained during the project. By the end of this project, we had achieved the following: A. Built a fully functioning robot. B. Investigated the theoretical underpinnings of quadruped robots, including inverse kinematics and gait generation theories (see the sketch below). C. Understood and reflected on the control structure of the robot. D. Implemented a new jumping gait that allows the robot to leap forward and land in balance. E. Composed detailed guides on robot building instructions, controller file installation, simulator installation, and simulator modifications
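    As a brief, hedged illustration of the inverse-kinematics theory the guides cover, the sketch below solves a planar two-link leg (hip pitch and knee) for a desired foot position. The link lengths and sign conventions are assumed values, not the Pupper's actual geometry or the HOR team's code.

```python
import math

def two_link_leg_ik(x, z, l1=0.08, l2=0.11, knee_forward=True):
    """Planar 2-link inverse kinematics for a quadruped leg.

    x, z   : desired foot position in the hip frame (metres), z negative below the hip
    l1, l2 : upper and lower link lengths (assumed values, not the real Pupper geometry)
    Returns (hip_angle, knee_angle) in radians, or raises ValueError if unreachable.
    """
    r2 = x * x + z * z
    # Law of cosines gives the knee angle.
    cos_knee = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= cos_knee <= 1.0:
        raise ValueError("target foot position is out of reach")
    knee = math.acos(cos_knee)
    if not knee_forward:
        knee = -knee
    # Hip angle: direction to the foot minus the offset produced by the bent knee.
    hip = math.atan2(x, -z) - math.atan2(l2 * math.sin(knee), l1 + l2 * math.cos(knee))
    return hip, knee

# Example: foot 15 cm below the hip and 3 cm forward.
hip, knee = two_link_leg_ik(0.03, -0.15)
print(f"hip={math.degrees(hip):.1f} deg, knee={math.degrees(knee):.1f} deg")
```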

    Towards Naturalistic Interfaces of Virtual Reality Systems

    Interaction plays a key role in achieving realistic experience in virtual reality (VR). Its realization depends on interpreting the intents of human motions to give inputs to VR systems. Thus, understanding human motion from the computational perspective is essential to the design of naturalistic interfaces for VR. This dissertation studied three types of human motions, including locomotion (walking), head motion and hand motion in the context of VR. For locomotion, the dissertation presented a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments. In terms of head motion, the dissertation presented a head gesture interface for interaction in VR that recognizes real-time head gestures on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments were conducted to evaluate the proposed approach. The first assessed its offline classification performance while the second estimated the latency of the algorithm to recognize head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on teleoperation of a quadcopter using head motion tracked by a head-mounted display. As part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented. For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures to implement a hand gesture interface for VR based on a static head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance. A user study was conducted to compare the performance and the usability of the head gesture interface, the hand gesture interface and a conventional gamepad interface for answering Yes/No questions in VR. Overall, the dissertation has two main contributions towards the improvement of naturalism of interaction in VR systems. Firstly, the interaction techniques presented in the dissertation can be directly integrated into existing VR systems offering more choices for interaction to end users of VR technology. Secondly, the results of the user studies of the presented VR interfaces in the dissertation also serve as guidelines to VR researchers and engineers for designing future VR systems
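    The following sketch illustrates the general likelihood-based scheme behind HMM gesture recognition, simplified to one Gaussian HMM per gesture class rather than the dissertation's cascaded models. It relies on the third-party hmmlearn package, and all head-motion data below is synthetic.

```python
import numpy as np
from hmmlearn import hmm  # third-party package, assumed available

def train_models(sequences_by_label, n_states=3):
    """Fit one Gaussian HMM per gesture label.

    sequences_by_label: dict label -> list of (T_i, n_features) arrays of
                        head orientation samples (e.g. yaw/pitch per frame).
    """
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.concatenate(seqs)          # stack all sequences for this label
        lengths = [len(s) for s in seqs]  # tell hmmlearn where each sequence ends
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, sequence):
    """Return the label whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))

# Synthetic example: a "nod" moves mostly in pitch, a "shake" mostly in yaw.
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 40)
nods = [np.column_stack([rng.normal(0, 0.05, 40), np.sin(t) + rng.normal(0, 0.05, 40)])
        for _ in range(10)]
shakes = [np.column_stack([np.sin(t) + rng.normal(0, 0.05, 40), rng.normal(0, 0.05, 40)])
          for _ in range(10)]
models = train_models({"nod": nods, "shake": shakes})
print(classify(models, nods[0]))  # expected: "nod"
```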

    Low-Cost Automatic Ambient Assisted Living system

    The recent increase in the ageing population in countries around the world has brought a lot of attention to the research and development of ambient assisted living (AAL) systems. These systems should be inexpensive to install in elderly homes, protect residents' privacy and, more importantly, be non-invasive and smart. In this paper, we introduce an inexpensive system that utilises an off-the-shelf sensor to capture RGB-D data. This data is then fed into different learning algorithms to classify different activity types. We achieve a very good success rate (99.9%) for human activity recognition (HAR) with the help of lightweight and fast random forests (RF)
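    A minimal sketch of the classification stage described above, assuming feature vectors have already been extracted from the RGB-D stream (e.g. skeleton joint coordinates). The data layout, class labels, and hyperparameters below are placeholders, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_samples, n_features = 1000, 60           # e.g. 20 skeleton joints x 3 coordinates (assumed)
X = rng.normal(size=(n_samples, n_features))   # placeholder RGB-D-derived features
y = rng.integers(0, 4, size=n_samples)         # 4 hypothetical activity classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Random forest classifier for human activity recognition.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))  # ~chance level on random data
```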

    Design For Auditory Displays: Identifying Temporal And Spatial Information Conveyance Principles

    Designing auditory interfaces is a challenge for current human-systems developers, largely due to a lack of theoretical guidance on how best to use sounds in today's visually rich graphical user interfaces. This dissertation provides a framework for guiding the design of audio interfaces to enhance human-systems performance. The research involved reviewing the literature on conveying temporal and spatial information using audio, using this knowledge to build three theoretical models to aid the design of auditory interfaces, and empirically validating selected components of the models. The three models include an audio integration model that outlines an end-to-end process for adding sounds to interactive interfaces, a temporal audio model that provides a framework for timing the integration of these sounds to meet human performance objectives, and a spatial audio model that provides a framework for adding spatialization cues to interface sounds. Each model is coupled with a set of design guidelines theorized from the literature; combined, the developed models put forward a structured process for integrating sounds in interactive interfaces. The models were subjected to a three-phase validation process comprising a review by Subject Matter Experts (SMEs), to assess the face validity of the models, and two empirical studies. For the SME review, which assessed the utility of the developed models and identified opportunities for improvement, a panel of three audio experts responded to a Strengths, Weaknesses, Opportunities, and Threats (SWOT) validation questionnaire. Based on the SWOT analysis, the main strengths of the models were that they provide a systematic approach to auditory display design and that they integrate a wide variety of knowledge sources in a concise manner. The main weaknesses were the lack of a structured process for amending the models with new principles, branches that were not parallel or completely distinct, and a lack of guidance on selecting interface sounds. The main opportunity identified by the experts was the models' potential to provide a seminal body of knowledge for building and validating auditory display designs. The main threats were that users may not know where to start and end with each model, that the models may not comprehensively cover all uses of auditory displays, and that the models may restrict designers or be used inappropriately. Based on the SWOT results, several changes were made to the models prior to the empirical studies. Two empirical studies were then conducted to test the theorized design principles derived from the revised models. The first study assessed the utility of audio cues for training a temporal pacing task, and the second combined temporal (i.e., pace) and spatial audio information, with a focus on integration issues. In the pace study, four auditory conditions were used for training pace: 1) a metronome, 2) non-spatial auditory earcons, 3) a spatialized auditory earcon, and 4) no audio cues. Sixty-eight people participated. A pre-/post- between-subjects experimental design was used, with eight training trials. The measure of pace performance was the average deviation from a predetermined desired pace.
The results demonstrated that a metronome was not effective in training participants to maintain a desired pace, while spatial and non-spatial earcons were effective strategies for pace training. Moreover, a comparison of post-training and pre-training performance suggested some transfer of learning. Design guidelines were extracted for integrating auditory cues for pace training tasks in virtual environments. In the second empirical study, combined temporal (pacing) and spatial (location of entities within the environment) information was presented. Three spatialization conditions were used: 1) high fidelity, using subjective selection of a best-fit head-related transfer function; 2) low fidelity, using a generalized head-related transfer function; and 3) no spatialization. A pre-/post- between-subjects experimental design was used, with eight training trials. The performance measures were the average deviation from the desired pace and the time and accuracy to complete the task. The results of the second study demonstrated that temporal, non-spatial auditory cues were effective in influencing pace while other cues were present. On the other hand, spatialized auditory cues did not result in significantly faster task completion. Based on these results, a set of design guidelines was proposed to direct the integration of spatial and temporal auditory cues for supporting training tasks in virtual environments. Taken together, the developed models and the associated guidelines provide a theoretical foundation from which to direct user-centered design of auditory interfaces
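    The studies' primary measure was the average deviation from a predetermined desired pace; the snippet below shows one plausible operationalisation of that measure from step timestamps. The exact formula used in the dissertation is not specified here, so treat this as an assumed interpretation.

```python
def average_pace_deviation(step_times, desired_pace_bpm):
    """Mean absolute deviation of observed step intervals from the desired pace.

    step_times       : sorted step timestamps in seconds
    desired_pace_bpm : target pace in steps per minute
    """
    target_interval = 60.0 / desired_pace_bpm
    intervals = [b - a for a, b in zip(step_times, step_times[1:])]
    deviations = [abs(i - target_interval) for i in intervals]
    return sum(deviations) / len(deviations)

# Example: target of 120 steps/min corresponds to 0.5 s between steps.
steps = [0.00, 0.52, 1.01, 1.55, 2.03]
print(f"mean deviation: {average_pace_deviation(steps, 120):.3f} s")  # ~0.023 s
```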

    Use of stance control knee-ankle-foot orthoses : a review of the literature

    The use of stance control orthotic knee joints is becoming increasingly popular because, unlike locked knee-ankle-foot orthoses, these joints allow the limb to swing freely in swing phase while providing stance-phase stability, thus promoting a more physiological and energy-efficient gait. It is of paramount importance that all aspects of this technology are monitored and evaluated as the demand for evidence-based practice and cost-effective rehabilitation increases. A robust and thorough literature review was conducted to retrieve all articles that evaluated the use of stance control orthotic knee joints. All relevant databases were searched, including The Knowledge Network, ProQuest, Web of Knowledge, RECAL Legacy, PubMed and Engineering Village. Papers were selected for review if they addressed the use and effectiveness of commercially available stance control orthotic knee joints and included participant(s) trialling the SCKAFO. A total of 11 publications were reviewed, and the following questions were developed and answered according to the best available evidence: 1. The effect SCKAFO (stance control knee-ankle-foot orthosis) systems have on kinetic and kinematic gait parameters. 2. The effect SCKAFO systems have on the temporal and spatial parameters of gait. 3. The effect SCKAFO systems have on the cardiopulmonary and metabolic cost of walking. 4. The effect SCKAFO systems have on muscle power/generation. 5. Patients' perceptions of and compliance with SCKAFO systems. Although current research is limited and lacks methodological quality, the available evidence does, on the whole, indicate a positive benefit from the use of SCKAFOs, with respect to increased knee flexion during swing phase resulting in sufficient ground clearance, decreased compensatory movements to facilitate swing-phase clearance, and improved temporal and spatial gait parameters. With the right methodological approach, the benefits of using a SCKAFO system can be evidenced and the research more effectively converted into clinical practice

    The effect of prefabricated wrist-hand orthoses on performing activities of daily living

    Wrist-hand orthoses (WHOs) are commonly prescribed to manage the functional deficit associated with the wrist as a result of rheumatoid changes. The common presentation of the wrist is one of flexion and radial deviation, with ulnar deviation of the fingers. This wrist position results in altered biomechanics, compromising hand function during activities of daily living (ADL). The limited evidence that exists suggests that improvements in ADL with WHO use are very task specific. Using normal subjects, and thus in the absence of pain as a limiting factor, the impact of ten WHOs on performing five ADL tasks was investigated. The tasks were selected to represent common grip patterns, and tests were performed with and without WHOs by right-handed females aged 20-50 years over a ten-week period. The time taken to complete each task was recorded, and a wrist goniometer, an elbow goniometer and a forearm torsiometer were used to measure joint motion. The results show that, although orthoses may restrict the motion required to perform a task, participants do not use the full range of motion that the orthoses permit. The altered wrist position measured may be attributable to a modified method of performing the task or to a necessary change in grip pattern, resulting in an increased time in task performance. The effect of WHO use on ADL is task specific and may initially impede function. This could affect WHO compliance if there appear to be no immediate benefits. The orthotic effect may be related to restriction of wrist motion or to an inability to achieve the necessary grip patterns due to the designs of the orthoses

    An expandable walking in place platform

    The control of locomotion in 3D virtual environments should be an ordinary task from the user's point of view. Several navigation metaphors have been explored to control locomotion naturally, such as real walking, the use of simulators, and walking in place. These have shown that the more natural the approach used to control locomotion, the more immersed the user feels inside the virtual environment. To overcome the high cost and complexity of most approaches in the field, we introduce a walking-in-place platform that is able to identify the orientation, displacement speed, and lateral steps of a person mimicking a walking pattern. This information is detected without additional sensors attached to the user's body. Our device is simple to mount, inexpensive, and allows almost natural use with lazy steps, thus freeing the hands for other uses. We also explore and test a passive tactile surface for safe use of our platform. The platform was conceived as an interface to control navigation in virtual environments and augmented reality. Extending our device and techniques, we elaborated a redirected walking metaphor to be used together with a cave automatic virtual environment (CAVE). Another metaphor allowed our technique to be used for navigating point clouds for data tagging. We tested our technique with two navigation modes: human walking and vehicle driving. In the human walking mode, the virtual orientation inhibits displacement when the user makes sharp turns. In vehicle mode, virtual orientation and displacement occur together, similar to driving a vehicle. We ran tests with 52 subjects to identify navigation-mode preferences and the ability to use our device, and we identified a preference for the vehicle driving mode of navigation. Statistical analysis revealed that users easily learned to navigate with our technique. Users walked faster in vehicle mode, but human mode allowed more precise walking in the virtual test environment. The tactile platform proved to allow safe use of our device, being an effective and simple solution for the field. More than 200 people tested our device: at UFRGS Portas Abertas in 2013 and 2014, an event that presents academic work to the local community, and during 3DUI 2014, where our work was used together with a tool for point cloud manipulation. The main contributions of our work are a new approach for detecting walking in place that is simple to use, preserves natural movements, is expandable to large areas (such as public spaces), and efficiently supplies orientation and speed for use in virtual environments or augmented reality with inexpensive hardware.
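    As a hedged sketch of the two navigation modes described above, the following maps the platform's detected step rate and turning rate to per-frame virtual motion; the gains and the sharp-turn threshold are illustrative assumptions, not values from the thesis.

```python
import math

def virtual_motion(step_rate, turn_rate, mode, dt=1/60,
                   speed_gain=0.4, sharp_turn_rate=math.radians(60)):
    """Return (forward displacement in m, heading change in rad) for one frame.

    step_rate : detected stepping rate on the platform (steps/s)
    turn_rate : detected body rotation rate (rad/s)
    mode      : "human" or "vehicle"
    """
    heading_change = turn_rate * dt
    speed = speed_gain * step_rate                  # map step rate to walking speed (assumed gain)
    if mode == "human" and abs(turn_rate) > sharp_turn_rate:
        speed = 0.0                                 # sharp turns inhibit displacement in human mode
    return speed * dt, heading_change

# One frame of a sharp turn: human mode stops forward motion, vehicle mode does not.
for mode in ("human", "vehicle"):
    d, h = virtual_motion(step_rate=1.8, turn_rate=math.radians(90), mode=mode)
    print(mode, f"forward={d * 100:.1f} cm", f"turn={math.degrees(h):.1f} deg")
```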