
    Development of a mobile technology system to measure shoulder range of motion

    In patients with shoulder movement impairment, assessing and monitoring shoulder range of motion is important for determining the severity of impairments due to disease or injury and evaluating the effects of interventions. Current clinical methods of goniometry and visual estimation require an experienced user and suffer from low inter-rater reliability. More sophisticated techniques such as optical or electromagnetic motion capture exist but are expensive and restricted to a specialised laboratory environment. Inertial measurement units (IMUs), such as those within smartphones and smartwatches, show promise as tools to bridge the gap between laboratory and clinical techniques and to accurately measure shoulder range of motion both during clinic assessments and in daily life. This study aims to develop an Android mobile application for both a smartphone and a smartwatch to assess shoulder range of motion. Initial performance characterisation of the inertial sensing capabilities of both a smartwatch and a smartphone running the application was conducted against an industrial inclinometer, a free-swinging pendulum and a custom-built servo-powered gimbal. An initial validation study comparing the smartwatch application with a universal goniometer for shoulder ROM assessment was conducted with twenty healthy participants. An impaired condition was simulated by applying kinesiology tape across the participants' shoulder girdle. Agreement and intra- and inter-day reliability were assessed in both the healthy and impaired states. Both the phone and the watch performed with acceptable accuracy and repeatability under static conditions (within ±1.1°) and under dynamic conditions, where readings were strongly correlated with the pendulum and gimbal data (ICC > 0.9). Both devices performed accurately within the range of angular velocities typical of humerus movement during activities of daily living (frequency response of 377°/s and 358°/s for the phone and watch respectively). The concurrent agreement between the watch and the goniometer was high in both healthy and impaired states (ICC > 0.8) and between measurement days (ICC > 0.8). The mean absolute difference between the watch and the goniometer was within the accepted minimal clinically important difference for shoulder movement (5.11° to 10.58°). The results show promise for the use of the developed Android application as a goniometry tool for the assessment of shoulder ROM. However, the limits of agreement across all the tests fell outside the acceptable margin and further investigation is required to determine validity. Evaluation with clinically impaired patients is also required to assess the feasibility of using the application in clinical practice
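    As a rough illustration of the sensing principle involved, the sketch below shows how a single accelerometer sample can yield a static tilt angle relative to gravity, which is the basic quantity behind IMU-based goniometry. This is a minimal, hypothetical example, not the thesis's algorithm; dynamic movement would additionally require sensor fusion with the gyroscope.

```python
import math

def elevation_from_accel(ax: float, ay: float, az: float) -> float:
    """Estimate a static tilt angle (degrees) from one accelerometer
    sample, using gravity as the reference direction.

    Assumes the device is held still so the accelerometer measures
    only gravity; during motion a complementary or Kalman filter
    combining gyroscope data would be needed instead.
    """
    # Angle between the sensor's y-axis and the gravity vector.
    g = math.sqrt(ax**2 + ay**2 + az**2)
    cos_theta = max(-1.0, min(1.0, ay / g))  # clamp for safety
    return math.degrees(math.acos(cos_theta))

# Example: device y-axis pointing straight up -> ~0 deg tilt.
print(elevation_from_accel(0.0, 9.81, 0.0))  # ~0.0
print(elevation_from_accel(9.81, 0.0, 0.0))  # ~90.0
```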

    A formal agent-based personalised mobile system to support emergency response

    Communication may be seen as a process of sending and receiving information among individuals. It is a vital part of emergency response management, sharing information about situations, victims, family and friends, rescue organisations and others. The contextual information obtained during a disaster event, however, is often dynamic, partial and potentially conflicting. Current communication strategies and solutions for emergency response have limitations, in that they are often designed to support information sharing between organisations rather than individuals. As a result, they are often not personalisable. They also cannot make use of opportunistic resources, e.g. people near disaster-struck areas who are ready to help but are not part of any organisation. History has shown, however, that such people are often the first responders who provide the most immediate and useful help to victims. At the same time, the advanced and rich capabilities of mobile smartphones have become one of the most interesting topics in mobile technology and applied science, especially where a smartphone can be expanded into an effective emergency response tool to discover affected people and connect them with first responders and their families, friends and communities. At present, research on emergency response is ineffective for handling large-scale disasters in which professional rescuers cannot reach victims in disaster-struck areas immediately. This is because current approaches are often built to support formal emergency response teams and organisations. Individual emergency response efforts, e.g. searching for missing people (including family and friends), are often web-based applications that are also not effective. Other work focuses on sensor development that lacks integrated search and rescue approaches. In this thesis, I developed a distributed and personalisable Mobile Kit Disaster Assistant (MKA) system that is underpinned by a formal foundation. It aims at gathering emergency response information held by multiple resources before, during and after a large-scale disaster, so that contextual and background information based on a formal framework would be readily available if a disaster indeed strikes. To this end, my core contribution is a structural formal framework to encapsulate important information that is used to support emergency response at a personal level. Several (conceptual) structures were built to allow individuals to express their own circumstances, including relationships with others and health status, which determine how they may communicate with others. The communication framework consists of several new components: a rich and holistic Emergency Response Communication Framework, a newly developed Communication and Tracking Ontology (CTO), a newly devised Emergency Response Agent Communication Language (ER-ACL) and a brand-new Emergency Response Agent Communication Protocol (ER-ACP). I have framed emergency response as a multi-agent problem in which each smartphone acts as an agent for its user; each user takes on a role depending on requirements and/or the tasks at hand, and the framework is intended to be used within a peer-to-peer distributed multi-agent system (MAS) to assist emergency response efforts.
    Based on this formal framework, I have developed a mobile application, the MKA system, to capture important features of emergency management and to demonstrate the practicalities and value of the proposed formal framework. The system was carefully evaluated by both domain experts and potential users from targeted user groups, using both qualitative and quantitative approaches. The overall results are very encouraging: evaluators appreciated the importance of the tool and believe such tools are vital in saving lives, for large-scale disasters as well as for individual life-critical events
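    The thesis's ER-ACL message format is not reproduced here, but the general shape of agent messaging it builds on can be sketched as below. All field names are hypothetical, loosely following FIPA-ACL-style speech acts (a performative plus structured content), not the actual ER-ACL or CTO vocabulary.

```python
from dataclasses import dataclass, field

@dataclass
class ERMessage:
    """Illustrative emergency-response agent message.

    Field names are hypothetical; the thesis's ER-ACL and CTO
    define their own vocabulary. The shape follows FIPA-ACL-style
    messaging: a performative plus structured content.
    """
    performative: str          # e.g. "request-help", "report-status"
    sender: str                # agent/device identifier
    receivers: list[str]       # peers in the P2P network
    role: str                  # e.g. "victim", "volunteer", "rescuer"
    content: dict = field(default_factory=dict)

# A victim's phone broadcasting a help request to nearby peers.
msg = ERMessage(
    performative="request-help",
    sender="phone-042",
    receivers=["*"],
    role="victim",
    content={"location": (55.86, -4.25), "status": "trapped"},
)
print(msg.performative, msg.content["status"])
```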

    Dynamic motion coupling of body movement for input control

    Touchless gestures are used for input when touch is unsuitable or unavailable, such as when interacting with displays that are remote, large, or public, or when touch is prohibited for hygienic reasons. Traditionally, user input is spatially or semantically mapped to system output; however, in the context of touchless gestures these interaction principles suffer from several disadvantages, including memorability, fatigue, and ill-defined mappings. This thesis investigates motion correlation as a third interaction principle for touchless gestures, which maps user input to system output based on spatiotemporal matching of reproducible motion. We demonstrate the versatility of motion correlation by using movement as the primary sensing principle, relaxing the restrictions on how a user provides input. Using TraceMatch, a novel computer vision-based system, we show how users can provide effective input with different parts of the body, and how they can switch modes of input spontaneously in realistic application scenarios. Secondly, spontaneous spatial coupling shows how motion correlation can bootstrap spatial input, allowing any body movement, or movement of tangible objects, to be appropriated for ad hoc touchless pointing on a per-interaction basis. We operationalise the concept in MatchPoint, and demonstrate its unique capabilities through an exploration of the design space with application examples. Thirdly, we explore how users synchronise with moving targets in the context of motion correlation, revealing how simple harmonic motion leads to better synchronisation. Using the insights gained, we explore the robustness of algorithms used for motion correlation, showing how it is possible to successfully detect a user's intent to interact whilst suppressing accidental activations from common spatial and semantic gestures. Finally, we look across our work to distil guidelines for interface design, and further considerations of how motion correlation can be used, both in general and for touchless gestures
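    The motion correlation principle itself reduces to comparing two trajectories over a time window. The following is a minimal sketch, not the TraceMatch implementation: it scores how closely a stream of user positions follows a known target motion using per-axis Pearson correlation; real systems add time alignment, sliding windows and activation thresholds.

```python
import numpy as np

def motion_correlation(user: np.ndarray, target: np.ndarray) -> float:
    """Score how well a user's movement matches a moving target.

    Both inputs are (N, 2) arrays of 2D positions sampled over the
    same time window. Per-axis Pearson correlations are averaged;
    a score near 1 suggests the user is following that target.
    """
    rx = np.corrcoef(user[:, 0], target[:, 0])[0, 1]
    ry = np.corrcoef(user[:, 1], target[:, 1])[0, 1]
    return (rx + ry) / 2.0

# Example: a user noisily tracing a circular on-screen target.
t = np.linspace(0, 2 * np.pi, 100)
target = np.column_stack([np.cos(t), np.sin(t)])
user = target + np.random.normal(scale=0.1, size=target.shape)
print(motion_correlation(user, target))  # close to 1.0
```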

    Sensor-based user interface concepts for continuous, around-device and gestural interaction on mobile devices

    A generally observable trend of the past 10 years is that the number of sensors embedded in mobile devices such as smartphones and tablets has risen steadily. Arguably, the available sensors are mostly underutilised by existing mobile user interfaces. In this dissertation, we explore sensor-based user interface concepts for mobile devices with the goal of making better use of the available sensing capabilities of mobile devices, as well as gaining insights into the types of sensor technologies that could be added to future mobile devices. We are particularly interested in how novel sensor technologies could be used to implement novel and engaging mobile user interface concepts. We explore three areas of interest for research into sensor-based user interface concepts for mobile devices: continuous interaction, around-device interaction and motion gestures. For continuous interaction, we explore the use of dynamic state-space systems to implement user interfaces based on a constant sensor data stream. In particular, we examine zoom automation in tilt-based map scrolling interfaces. We show that although fully automatic zooming is desirable in certain situations, adding a manual override capability for the zoom level (Semi-Automatic Zooming) increases the usability of such a system, as shown through a decrease in task completion times and improved user ratings in a user study. The presented work on continuous interaction also highlights how the sensors embedded in current mobile devices can be used to support complex interaction tasks. We go on to introduce the concept of Around-Device Interaction (ADI). By extending the interactive area of the mobile device to its entire surface and the physical volume surrounding it, we aim to show how the expressivity and possibilities of mobile input can be improved. We derive a design space for ADI and evaluate three prototypes in this context. HoverFlow is a prototype allowing coarse hand gesture recognition around a mobile device using only a simple set of sensors. PalmSpace is a prototype exploring the use of depth cameras on mobile devices to track the user's hands in direct manipulation interfaces through spatial gestures. Lastly, the iPhone Sandwich is a prototype supporting dual-sided pressure-sensitive multi-touch interaction. Through the results of user studies, we show that ADI can lead to improved usability for mobile user interfaces. Furthermore, the work on ADI contributes suggestions for the types of sensors that could be incorporated into future mobile devices to expand their input capabilities. To broaden the scope of uses for mobile accelerometer and gyroscope data, we conducted research on motion gesture recognition. With the aim of supporting practitioners and researchers in integrating motion gestures into their user interfaces at early development stages, we developed two motion gesture recognition algorithms, the $3 Gesture Recognizer and Protractor 3D, which are easy to incorporate into existing projects, have good recognition rates and require little training data. To exemplify an application area for motion gestures, we present the results of a study on the feasibility and usability of gesture-based authentication. With the goal of making it easier to connect meaningful functionality with gesture-based input, we developed Mayhem, a graphical end-user programming tool for users without prior programming skills. Mayhem can be used for rapid prototyping of mobile gestural user interfaces.
    The main contribution of this dissertation is the development of a number of novel user interface concepts for sensor-based interaction. These will help developers of mobile user interfaces make better use of the existing sensory capabilities of mobile devices. Furthermore, manufacturers of mobile device hardware obtain suggestions for the types of novel sensor technologies needed to expand the input capabilities of mobile devices, allowing the implementation of future mobile user interfaces with increased input capabilities, more expressiveness and improved usability
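    To give a flavour of why $-family recognizers such as the $3 Gesture Recognizer and Protractor 3D need little training data, the sketch below shows the characteristic preprocessing step: resampling a trace to a fixed number of equidistant points so that a candidate can be compared point-for-point against a single stored template. The function names and the 2D simplification are illustrative assumptions, not the dissertation's code; the motion-gesture variants apply the same idea to 3D accelerometer traces.

```python
import numpy as np

def resample(points: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a gesture trace to n equidistant points.

    After resampling, traces recorded at different speeds and
    lengths become directly comparable point-for-point.
    """
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, dist[-1], n)
    out = np.empty((n, points.shape[1]))
    for dim in range(points.shape[1]):
        out[:, dim] = np.interp(targets, dist, points[:, dim])
    return out

def score(candidate: np.ndarray, template: np.ndarray) -> float:
    """Mean point-wise distance after resampling and centring;
    lower means a better match to the template."""
    a, b = resample(candidate), resample(template)
    a -= a.mean(axis=0)  # translate both to a common origin
    b -= b.mean(axis=0)
    return float(np.linalg.norm(a - b, axis=1).mean())
```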

    Smart Sensors for Healthcare and Medical Applications

    This book focuses on new sensing technologies, measurement techniques, and their applications in medicine and healthcare. Specifically, the book briefly describes the potential of smart sensors in these applications, collecting 24 articles selected and published in the Special Issue “Smart Sensors for Healthcare and Medical Applications”. We proposed this topic, aware of the pivotal role that smart sensors can play in improving healthcare services for both acute and chronic conditions, as well as in prevention for a healthy life and active aging. The articles selected in this book cover a variety of topics related to the design, validation, and application of smart sensors to healthcare

    Sensing and Signal Processing in Smart Healthcare

    In the last decade, we have witnessed the rapid development of electronic technologies that are transforming our daily lives. Such technologies are often integrated with various sensors that facilitate the collection of human motion and physiological data, and are equipped with wireless communication modules such as Bluetooth, radio frequency identification, and near-field communication. In smart healthcare applications, designing ergonomic and intuitive human–computer interfaces is crucial, because a system that is not easy to use creates a huge obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration in smart healthcare applications, because it must ensure high accuracy with a high level of confidence in order for the applications to be useful to clinicians in making diagnosis and treatment decisions. This Special Issue is a collection of 10 articles selected from a total of 26 contributions. These contributions span the areas of signal processing and smart healthcare systems, mostly contributed by authors from Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands. Authors from China, Korea, Taiwan, Indonesia, and Ecuador are also included

    User-centred design of a task-oriented upper-limb assessment system for stroke

    During rehabilitation from stroke, patients require assessment of their upper-limb motor control. Outcome measures can often be subjective, and objective data is required to supplement therapist/patient opinion on progress. This can be performed through goniometry; however, goniometry can be time-consuming, can have inaccuracies of ±23°, and is therefore often not used. Motion tracking technology is a possible answer to this problem, but can also be costly, time-consuming and unsuitable for the clinical environment. This thesis aims to provide an objective, digital intervention method for assessing range of motion, suitable for the clinical environment, to supplement current outcome measures. This was done by creating a low-cost technology through a user-centred design approach. Requirements elicitation demonstrated that a motivational, portable, cost-effective, non-invasive, time-saving system for assessing functional activities was needed. Therefore, a system was created which uses a Microsoft Kinect and an eZ430-Chronos wristwatch to track patients' movements during and/or outside of therapy sessions. Measurements can be taken in a matter of minutes and provide a high quantity of objective data regarding patient movement. The system was verified with 10 able-bodied volunteers across 3 weeks, showing error rates comparable with those produced by a physiotherapist using goniometry. The system was also validated in the clinical setting with 6 stroke patients, over 15 weeks, as selected by 6 occupational therapists and 3 physiotherapists in 2 NHS stroke wards. The approach created in this thesis is objective, repeatable, low-cost, portable and non-invasive, making it the first tool for the objective assessment of upper-limb ROM that is efficiently designed and suitable for everyday use in stroke rehabilitation
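    As an illustration of how skeletal tracking data such as the Kinect's can yield joint range of motion, the sketch below computes the angle at a joint from three tracked 3D positions. A hypothetical example, not the thesis's pipeline.

```python
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle (degrees) at joint b formed by segments b->a and b->c.

    Each argument is an (x, y, z) joint position from a skeletal
    tracker; e.g. elbow flexion uses (shoulder, elbow, wrist).
    """
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Elbow angle from shoulder, elbow, wrist positions (metres).
shoulder = np.array([0.0, 1.4, 2.0])
elbow    = np.array([0.0, 1.1, 2.0])
wrist    = np.array([0.3, 1.1, 2.0])
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```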

    Enabling intuitive and efficient physical computing

    Making tools for technology accessible to everyone is important for diverse and inclusive innovation. Significant effort has already been made to make software innovation more accessible, and this effort has created a movement of citizen developers: people who have the drive to create, but not necessarily the technical skill to innovate with technology. Software, however, has limited impact in the real world compared to hardware, and here physical computing is democratising access to technological innovation. Using microcontroller programming and networking, citizens can now build interactive devices and systems that respond to the real world. But building a physical computing device is riddled with complexity. Memory-efficient but hard-to-use low-level programming languages are used to program microcontrollers; implementation-efficient but hard-to-use wired protocols are used to compose microcontrollers and peripherals; and energy-efficient but hard-to-configure wireless protocols are used to network devices to each other and to the Internet. This consistent trade-off between efficiency and ease of use means that physical computing is inaccessible to some. This thesis seeks to democratise microcontroller programming and networking in order to make physical computing accessible to all. It provides a deep exploration of three areas fundamental to physical computing: programming, hardware composition, and wireless networking, drawing parallels with consumer technologies throughout. Based upon these parallels, it presents requirements for each area that may lead to a more intuitive physical computing experience. It uses these requirements to compare existing work in the space and concludes that no existing technology correctly strikes the balance between efficient operation for microcontrollers and intuitive experiences for citizen developers. It therefore goes on to describe and evaluate three new technologies designed to make physical computing accessible to everyone
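    To give a flavour of the ease-of-use end of the trade-off described above, the snippet below uses MicroPython's standard machine API to drive hardware from a high-level language. The board and pin numbers are assumptions; the thesis's own three technologies are not named in this abstract.

```python
# MicroPython on a microcontroller (e.g. an ESP32-class board).
# Illustrates high-level microcontroller programming: the pin
# numbers below are board-specific assumptions.
from machine import Pin
import time

led = Pin(2, Pin.OUT)              # on-board LED on many ESP32 boards
button = Pin(0, Pin.IN, Pin.PULL_UP)

while True:
    # Light the LED while the button is held (active-low input).
    led.value(0 if button.value() else 1)
    time.sleep_ms(10)
```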