82 research outputs found

    Directional Estimation for Robotic Beating Heart Surgery

    In robotic beating heart surgery, a remote-controlled robot can be used to carry out the operation while automatically canceling out the heart motion. The surgeon controlling the robot is shown a stabilized view of the heart. First, we consider the use of directional statistics for estimating the phase of the heartbeat. Second, we deal with the reconstruction of a moving and deformable surface. Third, we address the question of obtaining a stabilized image of the heart.
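
    The abstract above relies on directional (circular) statistics for heartbeat phase estimation. As a minimal illustrative sketch only, not the estimator developed in the paper, the circular mean and mean resultant length of noisy phase observations can be computed as below; the sample size, noise level and true phase are assumptions for the example.

        # Minimal sketch: circular mean of noisy heartbeat-phase observations.
        # Not the paper's estimator; values below are illustrative assumptions.
        import numpy as np

        def circular_mean_and_resultant(phases):
            """Circular mean and mean resultant length of angles in radians."""
            z = np.exp(1j * phases).mean()      # average of unit vectors on the circle
            return float(np.angle(z)), float(np.abs(z))

        rng = np.random.default_rng(0)
        true_phase = 0.3                        # assumed true phase in radians
        obs = np.angle(np.exp(1j * (true_phase + 0.2 * rng.standard_normal(200))))

        mean_phase, resultant = circular_mean_and_resultant(obs)
        print(f"estimated phase: {mean_phase:.3f} rad, resultant length: {resultant:.3f}")

    Unlike a plain arithmetic mean, this estimate behaves correctly for phases that wrap around ±π, which is the property directional statistics exploit.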

    Subjective and objective measures

    One of the greatest challenges in the study of emotions and emotional states is their measurement. The techniques used to measure emotions depend essentially on the authors’ definition of the concept of emotion. Currently, two types of measures are used: subjective and objective. While subjective measures focus on assessing the conscious recognition of one’s own emotions, objective measures allow researchers to quantify and assess both conscious and unconscious emotional processes. Accordingly, when the objective is to evaluate the emotional experience from the subjective point of view of an individual in relation to a given event, subjective measures such as self-report should be used. When the objective is instead to evaluate the emotional experience at the more unconscious level of processing, such as the physiological response, objective measures should be used. There are no better or worse measures, only measures that provide access to the same phenomenon from different points of view. The chapter’s main objective is to survey the measures of emotions and emotional states that are most relevant in the current scientific panorama.

    Methods and metrics for the improvement of the interaction and the rehabilitation of cerebral palsy through inertial technology

    Cerebral palsy (CP) is one of the most limiting disabilities in childhood, with 2.2 cases per 1000 one-year survivors. It is a disorder of movement and posture due to a defect or lesion of the immature brain during pregnancy or birth. These motor limitations frequently appear in combination with sensory and cognitive alterations, which generally results in great difficulties for some people with CP in manipulating objects, communicating and interacting with their environment, as well as limited mobility. Over the last decades, instruments such as personal computers have become a popular tool to overcome some of these motor limitations and to promote neural plasticity, especially during childhood. According to some estimations, 65% of youths with CP who present severely limited manipulation skills cannot use standard mice or keyboards. Unfortunately, even when people with CP use assistive technology for computer access, they face barriers that lead them to fall back on typical mice, trackballs or touch screens for practical reasons. Nevertheless, with proper customization, novel alternative input devices such as head mice or eye trackers can be a valuable solution for these individuals. This thesis presents a collection of novel mapping functions and facilitation algorithms proposed and designed to ease pointing at graphical elements on the screen, the most elemental task in human-computer interaction, for individuals with CP. These developments were implemented to be usable with any head mouse, although they were all tested with the ENLAZA, an inertial interface. The development of such techniques required the following approach: developing a methodology to evaluate the performance of individuals with CP in pointing tasks, which are usually described as two sequential subtasks, navigation and targeting; identifying the main motor abnormalities present in individuals with CP and assessing their compliance with standard motor behaviour models such as Fitts’ law; and designing and validating three novel pointing facilitation techniques to be implemented in a head mouse. These techniques were conceived for users with CP and muscle weakness who have great difficulty keeping their heads in a stable position. The first two algorithms consist of two novel mapping functions that aim to facilitate the navigation phase, whereas the third is based on gravity wells and was specially developed to facilitate the selection of elements on the screen. In parallel with the development of the facilitation techniques for the interaction process, we evaluated the feasibility of using inertial technology for the control of serious video games as a complement to traditional rehabilitation therapies for posture and balance. The experimental validation presented here confirms that this concept could be implemented in clinical practice with good results. In summary, the work presented here demonstrates the suitability of inertial technology for the development of an alternative pointing device, and pointing algorithms, based on head movements for individuals with CP and severely limited manipulation skills, as well as for new rehabilitation therapies for the improvement of posture and balance.
    All the contributions were validated in collaboration with several centres specialized in CP and similar disorders, with users with disabilities recruited at those centres. This thesis was completed in the Group of Neural and Cognitive Engineering (gNEC) of the CAR UPM-CSIC with the financial support of the FP7 Framework EU Research Project ABC (EU-2012-287774), the IVANPACE Project (funded by Obra Social de Caja Cantabria, 2012-2013), and the Spanish Ministry of Economy and Competitiveness in the framework of two projects: the Interplay Project (RTC-2014-1812-1) and, most recently, the InterAAC Project (RTC-2015-4327-1).
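
    The thesis abstract above mentions a selection aid based on gravity wells around on-screen targets. A minimal sketch of that general idea follows, with an assumed well radius, pull gain and target coordinates, none of which are taken from the thesis.

        # Minimal sketch of a proximity "gravity well" for pointer selection.
        # Not the thesis implementation; radius, gain and targets are assumed.
        import math

        def gravity_well_adjust(cursor, delta, targets, radius=60.0, pull=0.4):
            """Blend the raw cursor displacement with a pull toward the nearest
            target whose centre lies within `radius` pixels of the cursor."""
            x, y = cursor
            dx, dy = delta
            nearest, best = None, radius
            for tx, ty in targets:
                d = math.hypot(tx - x, ty - y)
                if d < best:
                    best, nearest = d, (tx, ty)
            if nearest is None:
                return x + dx, y + dy           # free navigation: no modification
            tx, ty = nearest
            w = pull * (1.0 - best / radius)    # stronger pull closer to the centre
            return x + dx + w * (tx - x), y + dy + w * (ty - y)

        # Example: a small head-mouse displacement near a button centred at (400, 300)
        print(gravity_well_adjust((380.0, 290.0), (2.0, 1.0), [(400.0, 300.0)]))

    The attraction only activates near a target, so it can ease clicking for users with head tremor or weakness without interfering with the navigation phase.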

    A framework for learning analytics using commodity wearable devices

    We advocate for and introduce LEARNSense, a framework for learning analytics that uses commodity wearable devices to capture learners’ physical actions and accordingly infer learner context (e.g., student activities and engagement status in class). Our work is motivated by the observations that: (a) fine-grained, individual-specific learner actions are crucial to understanding learners and their context information; (b) sensor data available on the latest wearable devices (e.g., wrist-worn and eyewear devices) can effectively recognize learner actions and help to infer learner context information; and (c) commodity wearable devices that are widely available on the market can provide a hassle-free and non-intrusive solution. Following the above observations and under the proposed framework, we design and implement a sensor-based learner context collector running on the wearable devices. The latest data mining and sensor data processing techniques are employed to detect different types of learner actions and context information. Furthermore, we detail all of the above efforts through a novel and exemplary use case: accurately detecting student actions and inferring student engagement states in class. The specifically designed learner context collector has been implemented on a commodity wrist-worn device. Based on the collected and inferred learner information, novel intervention and incentivizing feedback are introduced into the system service. Finally, a comprehensive evaluation with real-world experiments, surveys and interviews demonstrates the effectiveness and impact of the proposed framework and this use case. The F1 score for the student action classification tasks reaches 0.9, and the system can effectively differentiate the three defined learner states. The survey results show that learners are satisfied with the system (mean score of 3.7 with a standard deviation of 0.55).
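
    The paper above reports an F1 score of about 0.9 for wrist-sensor action classification. As a minimal, self-contained sketch of that kind of pipeline, not LEARNSense itself, the example below windows a synthetic accelerometer trace, classifies each window with a single variance feature, and scores the result with F1; the window size, threshold and signal are assumptions for illustration.

        # Minimal sketch: window a wrist-accelerometer trace, classify each
        # window as active/idle with a variance threshold, and compute F1.
        # Not the LEARNSense pipeline; all parameters are illustrative.
        import numpy as np

        def classify(window, threshold=0.15):
            return 1 if np.var(window) > threshold else 0   # 1 = active, 0 = idle

        def f1_score(y_true, y_pred):
            tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
            fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
            fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

        rng = np.random.default_rng(1)
        signal = np.concatenate([0.05 * rng.standard_normal(500),   # idle half
                                 0.60 * rng.standard_normal(500)])  # active half
        size, step = 50, 25
        starts = range(0, len(signal) - size + 1, step)
        y_true = [1 if s >= 500 - size // 2 else 0 for s in starts]  # mostly-active windows
        y_pred = [classify(signal[s:s + size]) for s in starts]
        print("F1:", round(f1_score(y_true, y_pred), 3))

    A deployed system would replace the variance threshold with the trained classifiers the paper refers to, but the evaluation metric works the same way.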

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than wearable sensors, which may hinder one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. 
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.
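
    One capability the report surveys is camera-based estimation of vital signs such as heart rate. A minimal sketch of the underlying idea follows, assuming a per-frame mean green-channel trace has already been extracted from a face region; the synthetic sinusoid below stands in for it, and none of the values come from the report.

        # Minimal sketch: estimate heart rate from a (synthetic) camera-derived
        # green-channel trace via the dominant spectral peak in the cardiac band.
        # Not any specific project's method; all values are illustrative.
        import numpy as np

        fps = 30.0
        t = np.arange(0, 30, 1 / fps)                   # 30 s of video at 30 fps
        true_bpm = 72.0
        rng = np.random.default_rng(2)
        trace = 0.02 * np.sin(2 * np.pi * (true_bpm / 60) * t) + 0.005 * rng.standard_normal(t.size)

        trace = trace - trace.mean()                    # remove the DC component
        freqs = np.fft.rfftfreq(trace.size, d=1 / fps)
        power = np.abs(np.fft.rfft(trace)) ** 2
        band = (freqs >= 0.7) & (freqs <= 4.0)          # roughly 42-240 beats per minute
        estimated_bpm = 60 * freqs[band][np.argmax(power[band])]
        print(f"estimated heart rate: {estimated_bpm:.1f} bpm")

    Real systems add face tracking, illumination compensation and signal-quality checks, which is exactly where the privacy and robustness issues discussed in the report arise.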

    Smart Sensors for Healthcare and Medical Applications

    This book focuses on new sensing technologies, measurement techniques, and their applications in medicine and healthcare. Specifically, the book briefly describes the potential of smart sensors in the aforementioned applications, collecting 24 articles selected and published in the Special Issue “Smart Sensors for Healthcare and Medical Applications”. We proposed this topic, being aware of the pivotal role that smart sensors can play in the improvement of healthcare services in both acute and chronic conditions, as well as in prevention for a healthy life and active aging. The articles selected for this book cover a variety of topics related to the design, validation, and application of smart sensors to healthcare.

    Modeling, Analysis, and Control of a Mobile Robot for In Vivo Fluoroscopy of Human Joints during Natural Movements

    In this dissertation, the modeling, analysis and control of a multi-degree-of-freedom (mdof) robotic fluoroscope were investigated. A prototype robotic fluoroscope exists; it consists of a 3-dof mobile platform with two 2-dof Cartesian manipulators mounted symmetrically on opposite sides of the platform. One Cartesian manipulator positions the x-ray generator and the other positions the x-ray imaging device. The robotic fluoroscope is used to x-ray skeletal joints of interest of human subjects performing natural movement activities. In order to collect the data, the Cartesian manipulators must keep the x-ray generation and imaging devices accurately aligned while dynamically tracking the desired skeletal joint of interest. In addition to the joint tracking, this also requires the robotic platform to move along with the subject, allowing the manipulators to operate within their ranges of motion. A comprehensive dynamic model of the robotic fluoroscope prototype was created, incorporating the dynamic coupling of the system. Empirical data collected from an RGB-D camera were used to create a human kinematic model that can simulate the target dynamics of the joint of interest. This model was incorporated into a computer simulation that was validated by comparing the simulation results with actual prototype experiments using the same human kinematic model inputs. The computer simulation was used in a comprehensive dynamic analysis of the prototype and in the development and evaluation of sensing, control, and signal processing approaches that optimize the subject- and joint-tracking performance characteristics. The modeling and simulation results were used to develop real-time control strategies, including decoupling techniques that reduce tracking error on the prototype. For a normal walking activity, the joint tracking error was less than 20 mm, and the subject tracking error was less than 140 mm.
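
    The abstract above reports joint-tracking error below 20 mm for the real-time controllers. As a minimal single-axis sketch of the tracking problem only, not the dissertation's coupled model or decoupling controllers, the example below runs a discrete PD loop that makes an imaging carriage follow a sinusoidal joint trajectory; the carriage mass, gains, loop rate and target motion are all assumptions.

        # Minimal sketch: one Cartesian axis tracking a moving joint target with
        # a discrete PD law. Not the dissertation's controller; all parameters
        # (mass, gains, loop rate, target motion) are illustrative assumptions.
        import numpy as np

        dt, mass = 0.001, 20.0                        # 1 kHz loop, 20 kg carriage
        kp, kd = 8000.0, 400.0                        # hand-tuned PD gains
        t = np.arange(0.0, 4.0, dt)
        target = 0.05 * np.sin(2 * np.pi * 1.0 * t)   # ~1 Hz joint sway, in metres

        pos, vel, errors = 0.0, 0.0, []
        for x_ref in target:
            err = x_ref - pos
            force = kp * err - kd * vel               # PD on position error, damping on velocity
            vel += (force / mass) * dt                # simple Euler integration of the carriage
            pos += vel * dt
            errors.append(abs(err))

        print(f"peak tracking error: {1000 * max(errors):.1f} mm")

    The real system additionally has to decouple the platform and the two manipulators, which is what the dissertation's decoupling strategies address.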