236 research outputs found

    Optical Methods in Sensing and Imaging for Medical and Biological Applications

    Recent advances in optical sources and detectors have opened up new opportunities for sensing and imaging techniques that can be successfully used in biomedical and healthcare applications. This book, entitled ‘Optical Methods in Sensing and Imaging for Medical and Biological Applications’, focuses on various aspects of the research and development related to these areas. The book will be a valuable source of information for anyone interested in this subject, presenting recent advances in optical methods and novel techniques, as well as their applications in the fields of biomedicine and healthcare

    Context-aware home monitoring system for Parkinson's disease patients: ambient and wearable sensing for freezing of gait detection

    This PhD thesis was developed under a joint-supervision (cotutela) arrangement between Universitat Politècnica de Catalunya and Technische Universiteit Eindhoven, in the framework of, and according to the rules of, the Erasmus Mundus Joint Doctorate on Interactive and Cognitive Environments EMJD ICE [FPA no. 2010-0012]. Freezing of gait (FOG) is one of the most disabling symptoms of Parkinson’s disease (PD). It is characterized by brief episodes of inability to step, or by extremely short steps, that typically occur on gait initiation or on turning while walking. The consequences of FOG are aggravated mobility and a higher propensity to falls, which have a direct effect on the individual's quality of life. No completely effective pharmacological treatment exists for the FOG phenomenon. However, external stimuli, such as lines on the floor or rhythmic sounds, can focus the attention of a person experiencing a FOG episode and help them initiate gait. The optimal effectiveness of this approach, known as cueing, is achieved through timely activation of a cueing device upon accurate detection of a FOG episode. Robust and accurate FOG detection is therefore the main problem that needs to be solved when developing a suitable assistive technology solution for this user group. This thesis proposes the use of a person's activity and spatial context as the means to improve the detection of FOG episodes during monitoring at home. The thesis describes the design, algorithm implementation and evaluation of a distributed home system for FOG detection based on multiple cameras and a single inertial gait sensor worn at the patient's waist. Through detailed observation of home data collected from 17 PD patients, we realized that a novel solution for FOG detection could be achieved by using contextual information about the patient’s position, orientation, basic posture and movement on a semantically annotated two-dimensional (2D) map of the indoor environment.
We envisioned the future context-aware system as a network of Microsoft Kinect cameras placed in the patient’s home that interacts with a wearable inertial sensor (a smartphone) worn by the patient. Since the system's hardware platform is built from commercial off-the-shelf components, the majority of the development effort involved producing the software modules (for position tracking, orientation tracking and activity recognition) that run on top of the middleware operating system in the home gateway server. The main component that had to be developed is the Kinect application for tracking the position and height of multiple people, based on 3D point cloud input. Besides position tracking, this software module also provides mapping and semantic annotation of FOG-specific zones on the scene in front of the Kinect. One instance of the vision tracking application is supposed to run for every Kinect sensor in the system, yielding a potentially high number of simultaneous tracks. At any moment, the system has to track one specific person: the patient. To enable tracking of the patient across non-overlapping cameras in the distributed system, a new re-identification approach based on appearance-model learning with a one-class Support Vector Machine (SVM) was developed. The re-identification method was evaluated on a 16-person dataset in a laboratory environment. Since the patient's orientation in the indoor space was recognized as an important part of the context, the system needed the ability to estimate the person's orientation, expressed in the frame of the 2D scene on which the patient is tracked by the camera. We devised a method to fuse position tracking information from the vision system with inertial data from the smartphone in order to obtain the patient’s 2D pose estimate on the scene map.
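The appearance-based re-identification step can be illustrated with a small sketch. The thesis trains a one-class SVM on appearance features of the patient; the stand-in below replaces the SVM with a mean-appearance model plus a distance threshold so it runs with the standard library alone, and the 16-dimensional feature vectors are simulated. Everything here is an illustrative assumption, not the thesis's actual pipeline.

```python
# Stand-in for appearance-model re-identification: the thesis uses a
# one-class SVM; here a mean-appearance vector plus a distance
# threshold plays the same role, so the idea stays runnable with the
# standard library only. Features are simulated, not real histograms.
import math
import random

def fit_appearance(tracks, quantile=0.95):
    """Learn a mean appearance vector and a distance threshold."""
    dim = len(tracks[0])
    mean = [sum(t[i] for t in tracks) / len(tracks) for i in range(dim)]
    dists = sorted(math.dist(t, mean) for t in tracks)
    thresh = dists[int(quantile * (len(dists) - 1))]
    return mean, thresh

def is_patient(feat, model):
    """Accept a candidate track if its appearance is close enough."""
    mean, thresh = model
    return math.dist(feat, mean) <= thresh

random.seed(0)
# Simulated appearance features: the patient's vectors cluster around
# one prototype, a stranger's around a different one.
patient = [[random.gauss(0.5, 0.03) for _ in range(16)] for _ in range(50)]
stranger = [[random.gauss(0.2, 0.03) for _ in range(16)] for _ in range(10)]

model = fit_appearance(patient)
print(any(is_patient(f, model) for f in stranger))  # False
```

A candidate track seen by a second, non-overlapping camera would be accepted as the patient only when its appearance falls inside the learned region, which is exactly the role the one-class model plays in the distributed system.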
Additionally, a method for estimating the position of the smartphone on the patient's waist was proposed. Position and orientation estimation accuracy were evaluated on a 12-person dataset. Finally, with positional, orientation and height information available, a new seven-class activity classification was realized using a hierarchical classifier that combines a height-based posture classifier with translational and rotational SVM movement classifiers. Each of the SVM movement classifiers and the joint hierarchical classifier were evaluated in a laboratory experiment with 8 healthy persons. The final context-based FOG detection algorithm uses activity and spatial context information to confirm or disprove FOG detected by the current state-of-the-art FOG detection algorithm (which uses only wearable sensor data). A dataset with home data of 3 PD patients was produced using two Kinect cameras and a smartphone in synchronized recording. The new context-based FOG detection algorithm and the wearable-only FOG detection algorithm were both evaluated on the home dataset and their results were compared. The context-based algorithm strongly reduces false positive detections, which is expressed through higher specificity. In some cases, the context-based algorithm also eliminates true positive detections, reducing sensitivity to a lesser extent. The final comparison of the two algorithms on the basis of their sensitivity and specificity shows the improvement in overall FOG detection achieved with the new context-aware home system.
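The confirm-or-disprove step described above can be sketched as a simple rule: a wearable-only FOG detection is kept only when the spatial and activity context make a freezing episode plausible. The zone and posture labels below are hypothetical, and the rule is an illustrative assumption rather than the thesis's actual decision procedure.

```python
# Hypothetical sketch of context-based FOG confirmation: reject
# wearable detections that are implausible given posture and location.
# Zone/posture names and the rule itself are illustrative assumptions.

FOG_PRONE_ZONES = {"doorway", "corridor_turn"}   # semantically annotated map zones
UPRIGHT = {"standing", "walking", "turning"}     # postures compatible with FOG

def confirm_fog(wearable_fog: bool, posture: str, zone: str) -> bool:
    """Confirm or disprove a wearable-sensor FOG detection using context."""
    if not wearable_fog:
        return False
    # FOG occurs on gait initiation or turning, so a seated patient, or
    # one far from any FOG-prone zone, suggests a false positive.
    return posture in UPRIGHT and zone in FOG_PRONE_ZONES

print(confirm_fog(True, "walking", "doorway"))  # True: detection kept
print(confirm_fog(True, "sitting", "doorway"))  # False: rejected
```

Rejecting contextually implausible detections is precisely what raises specificity in the comparison reported above, at the cost of occasionally discarding a true positive.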

    Smart Sensors for Healthcare and Medical Applications

    This book focuses on new sensing technologies, measurement techniques, and their applications in medicine and healthcare. Specifically, the book briefly describes the potential of smart sensors in the aforementioned applications, collecting 24 articles selected and published in the Special Issue “Smart Sensors for Healthcare and Medical Applications”. We proposed this topic, being aware of the pivotal role that smart sensors can play in the improvement of healthcare services in both acute and chronic conditions as well as in prevention for a healthy life and active aging. The articles selected in this book cover a variety of topics related to the design, validation, and application of smart sensors to healthcare

    Wearables for Movement Analysis in Healthcare

    Quantitative movement analysis is widely used in clinical practice and research to investigate movement disorders objectively and in a complete way. Conventionally, body segment kinematic and kinetic parameters are measured in gait laboratories using marker-based optoelectronic systems, force plates, and electromyographic systems. Although such movement analyses are considered accurate, the need for specific laboratories, high costs, and dependency on trained users sometimes limit their use in clinical practice. A variety of compact wearable sensors are available today and have allowed researchers and clinicians to pursue applications in which individuals are monitored in their homes and in community settings within different fields of study, such as movement analysis. Wearable sensors may thus contribute to the implementation of quantitative movement analyses even during out-patient use to reduce evaluation times and to provide objective, quantifiable data on the patients’ capabilities, unobtrusively and continuously, for clinical purposes

    Body measurement estimations using 3D scanner for individuals with severe motor impairments

    In biomechanics, a still unresolved question is how to estimate the volume and mass of each of a subject's body segments with sufficient accuracy. This is important for several applications, ranging from the rehabilitation of injured subjects to the study of athletic performance via analysis of the dynamic inertia of each body segment. Traditionally, however, this evaluation is done by referring to anthropometric tables or by approximating the volumes using manual measurements. We propose a novel method based on 3D reconstruction of the subject’s body using the low-cost commercial Kinect v2 camera. The software developed performs body segment separation in a few minutes, leveraging alpha-shape approximation of 3D polyhedra to quickly compute a Monte Carlo volume estimate. The procedure was evaluated on a total of 30 healthy subjects and the resulting segment lengths and masses were compared with the literature
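The Monte Carlo volume estimator at the core of the method can be sketched as follows. To keep the sketch self-contained, the alpha-shape membership test is replaced here by an analytic sphere; in the actual method, membership would be tested against the reconstructed body-segment polyhedron.

```python
# Monte Carlo volume estimation: sample uniformly inside a bounding
# box and count the fraction of samples falling inside the region.
# The analytic sphere below stands in for the alpha-shape membership
# test used on the reconstructed body segments.
import math
import random

def mc_volume(inside, bounds, n=200_000, seed=1):
    """Estimate the volume of region `inside` within axis-aligned `bounds`."""
    random.seed(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box_vol = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(
        inside(random.uniform(x0, x1),
               random.uniform(y0, y1),
               random.uniform(z0, z1))
        for _ in range(n)
    )
    return box_vol * hits / n

# Unit sphere as a stand-in membership test (true volume = 4*pi/3).
in_sphere = lambda x, y, z: x * x + y * y + z * z <= 1.0
est = mc_volume(in_sphere, ((-1, 1), (-1, 1), (-1, 1)))
print(est)  # close to 4*pi/3 ≈ 4.19
```

The estimator's error shrinks as 1/sqrt(n), so a few hundred thousand samples per segment already give sub-percent accuracy, which is why the whole separation-plus-volume pipeline can finish in minutes.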

    System for automated movement analysis during gait using an RGB-D camera

    Nowadays it is still common in clinical practice to assess the gait (or way of walking) of a given subject through visual observation and the use of a rating scale, which is a subjective approach. However, sensors including RGB-D cameras, such as the Microsoft Kinect, can be used to obtain quantitative information that allows gait analysis to be performed in a more objective way. The quantitative gait analysis results can be very useful, for example, to support the clinical assessment of patients with diseases that can affect their gait, such as Parkinson’s disease. The main motivation of this thesis was thus to support gait assessment by enabling quantitative gait analysis to be carried out in an automated way. This objective was achieved by using 3-D data, provided by a single RGB-D camera, to automatically select the data corresponding to walking and then detect the gait cycles performed by the subject while walking. For each detected gait cycle, we obtain several gait parameters, which are used together with anthropometric measures to automatically identify the subject being assessed. The automated gait data selection relies on machine learning techniques to recognize three different activities (walking, standing, and marching), as well as two different positions of the subject in relation to the camera (facing the camera and facing away from it). For gait cycle detection, we developed an algorithm that estimates the instants corresponding to given gait events. The subject identification based on gait is enabled by a solution that was also developed using machine learning. The developed solutions were integrated into a system for automated gait analysis, which we found to be a viable alternative to gold-standard systems for obtaining several spatiotemporal and some kinematic gait parameters. 
Furthermore, the system is suitable for use in clinical environments as well as ambulatory scenarios, since it relies on a single markerless RGB-D camera that is less expensive, more portable, less intrusive and easier to set up than the gold-standard systems (multiple cameras and several markers attached to the subject’s body).
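Gait-event detection of the kind described above can be sketched on a synthetic signal. A common markerless heuristic, assumed here purely for illustration, treats local maxima of the forward ankle-to-ankle distance as candidate heel strikes; the thesis's actual event-detection algorithm is not reproduced.

```python
# Hypothetical gait-event sketch: with RGB-D skeleton data, heel
# strikes are often estimated as local maxima of the forward distance
# between the two ankles. The signal below is synthetic (one stride
# per second); it is not the thesis's actual algorithm or data.
import math

FPS = 30.0  # camera frame rate (assumed)

def heel_strikes(signal):
    """Return frame indices of local maxima (candidate heel strikes)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] >= signal[i + 1]]

# Synthetic ankle-distance signal: 1 Hz stride cadence over 3 seconds.
sig = [math.sin(2 * math.pi * 1.0 * t / FPS) for t in range(90)]
events = heel_strikes(sig)
stride_times = [(b - a) / FPS for a, b in zip(events, events[1:])]
print(events)        # one event roughly every 30 frames
print(stride_times)  # roughly 1.0 s per stride
```

From the detected event instants, spatiotemporal parameters such as stride time and cadence follow directly, which is how a single markerless camera can replace much of what a gait laboratory measures.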

    Development of a privacy-preserving computer vision method for automated monitoring of physical activity at school

    The electronic version of this dissertation does not contain the publications. How can you observe people without seeing them? They say it is not polite to stare, and the right to privacy is considered a human right. However, there is much in human behavior that scientists would like to study through observation. For example, we want to know whether children will start moving more during recess if smartphones are banned at school. To find out, scientists would have to ask for parental consent to carry out the observation. Assuming parents grant permission, a huge amount of labour would be needed for classical observation: several observers in the schoolhouse every day for a sufficiently long period before and after the smartphone ban. With my doctoral thesis, I tried to solve both the privacy problem and the labour problem by replacing the human observer with artificial intelligence (AI). Modern machine learning methods allow training models that automatically detect objects and their properties in images or video. If we want an AI that recognizes people in images, we need to form a machine learning dataset with pictures of people and pictures without people. If we want an AI that differentiates between low and high physical activity in video, we need a corresponding video dataset. 
In my doctoral thesis, I collected a dataset in which video of children's movement is synchronized with hip-worn accelerometers, in order to train a model that differentiates between lower and higher levels of physical activity in video. In collaboration with the iCV lab at the Institute of Technology, we developed a prototype video analysis sensor that can estimate the physical activity level of people in the camera's field of view at real-time speed. 
The fact that AI can derive information about physical activity from video without recording the footage or showing it to anyone at all makes it possible to observe people without seeing them. The method is designed for measuring physical activity in school-based research and therefore highly prioritizes privacy protection and research ethics. More broadly, the thesis illustrates the potential of computer vision technologies for processing visual information in urban spaces and workplaces, and not only for measuring physical activity under strict research-ethics criteria. This warrants wider public discussion: under what conditions, or whether at all, is it OK to have a robot staring at you?
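The idea of deriving an activity level while never storing a frame can be sketched minimally: each frame is reduced to a single motion-intensity number and then discarded. Frame differencing below is an illustrative stand-in for the trained model described in the thesis, and the threshold is an assumption.

```python
# Minimal sketch of the privacy-preserving idea: a frame is reduced to
# one scalar (motion intensity) and then discarded, so no images are
# ever stored or shown. Frame differencing and the threshold are
# illustrative stand-ins for the thesis's trained model.
import random

def motion_intensity(prev, frame):
    """Mean absolute pixel change between two consecutive grayscale frames."""
    n = len(frame) * len(frame[0])
    return sum(abs(a - b)
               for row_p, row_f in zip(prev, frame)
               for a, b in zip(row_p, row_f)) / n

def classify(intensity, threshold=10.0):
    """Map a scalar intensity to a coarse activity level."""
    return "high activity" if intensity > threshold else "low activity"

random.seed(0)
W, H = 8, 8
still = [[100] * W for _ in range(H)]                                  # static scene
moving = [[100 + random.randint(-40, 40) for _ in range(W)] for _ in range(H)]

print(classify(motion_intensity(still, still)))   # low activity
print(classify(motion_intensity(still, moving)))  # high activity
```

Because only the scalar leaves the sensor, the raw pixels never need to be recorded, which is what lets the system observe behavior without anyone ever seeing the footage.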

    Wearable and BAN Sensors for Physical Rehabilitation and eHealth Architectures

    The demographic shift towards an older population, together with the sedentary lifestyle we are adopting, is reflected in the increasingly debilitated physical health of the population. The resulting physical impairments require rehabilitation therapies, which may be assisted by wearable sensors or body area network (BAN) sensors. The use of novel technology in medical therapies can also help reduce healthcare costs and decrease patient overflow in medical centers. Sensors are the primary enablers of any wearable medical device, with a central role in eHealth architectures. Since the accuracy of the acquired data depends on the sensors, wearable and BAN sensing solutions must be proven accurate and reliable before integration. This book is a collection of works focusing on the current state of the art of BANs and wearable sensing devices for the physical rehabilitation of impaired or debilitated citizens. The manuscripts that compose this book report on advances in research related to different sensing technologies (optical or electronic) and BAN sensors, their design and implementation, advanced signal processing techniques, and the application of these technologies in areas such as physical rehabilitation, robotics, medical diagnostics, and therapy