
    Nonlinear predictive threshold model for real-time abnormal gait detection

    Falls are critical events for human health due to the associated risk of physical and psychological injuries. Several fall-related systems have been developed in order to reduce injuries. Among them, fall-risk prediction systems are one of the most promising approaches, as they strive to predict a fall before its occurrence. One category of fall-risk prediction systems evaluates balance and muscle strength through clinical functional assessment tests, while other prediction systems recognize abnormal gait patterns to predict a fall in real time. The main contribution of this paper is a nonlinear model of user gait combined with a threshold-based classification to recognize abnormal gait patterns with low complexity and high accuracy. In addition, a dataset with realistic parameters is prepared to simulate abnormal walks and to evaluate fall prediction methods. The accelerometer and gyroscope sensors available in a smartphone were exploited to create the dataset. The proposed approach has been implemented and compared with state-of-the-art approaches, showing that it is able to predict an abnormal walk with higher accuracy (93.5%) and higher efficiency (up to 3.5 times faster) than other feasible approaches.
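
    As a rough illustration of the threshold-based idea described above, the sketch below fits a simple nonlinear (quadratic) trend to a window of accelerometer magnitudes and flags a sample as abnormal when the prediction error exceeds a threshold; the window length, threshold value, and choice of predictor are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def magnitude(acc):
    """Resultant acceleration magnitude from per-axis samples, shape (n, 3)."""
    return np.linalg.norm(acc, axis=1)

def predict_next(window):
    """Illustrative nonlinear predictor: fit a quadratic trend to the window
    of magnitudes and extrapolate one step ahead (an assumption, not the
    paper's actual gait model)."""
    t = np.arange(len(window))
    coeffs = np.polyfit(t, window, deg=2)
    return np.polyval(coeffs, len(window))

def is_abnormal(acc_window, next_sample, threshold=2.5):
    """Flag a gait sample as abnormal when the prediction error exceeds an
    empirically chosen threshold (the value here is a placeholder)."""
    predicted = predict_next(magnitude(acc_window))
    actual = np.linalg.norm(next_sample)
    return abs(actual - predicted) > threshold

# Steady walk (gravity plus small noise) followed by a sudden spike.
rng = np.random.default_rng(0)
window = np.tile([0.1, 0.2, 9.8], (50, 1)) + rng.normal(0, 0.05, (50, 3))
print(is_abnormal(window, np.array([4.0, 3.0, 15.0])))  # expected: True
print(is_abnormal(window, np.array([0.1, 0.2, 9.8])))   # expected: False
```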

    Oocyte positional recognition for automatic manipulation in ICSI

    Polar body position detection is a necessary process in the automation of micromanipulation systems used in intracytoplasmic sperm injection (ICSI) applications. The polar body is an intracellular structure that accommodates the chromosomes, and the injection must not only avoid this structure but also be made at the point furthest away from it. This paper aims to develop a vision recognition system for the oocyte and its polar body, to be used to inform the automated injection mechanism so that it avoids the polar body. The novelty of the paper is its capability to determine the position and orientation of the oocyte and its polar body. The gradient-weighted Hough transform method was employed to detect the location of the oocyte and its polar body. Moreover, a new elliptical fitting method was employed to measure the size of the oocytes and polar bodies, allowing for their morphological variance. The proposed algorithm has been designed to be adaptable to typical commercial inverted microscopes with different criteria. Experimental results show maximum errors of 5% for detection and 10% for reporting, respectively.
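
    The sketch below illustrates the general detection pipeline with OpenCV, whose HOUGH_GRADIENT method relies on edge-gradient information and serves here only as a stand-in for the gradient-weighted Hough transform, followed by ellipse fitting for size measurement; the file name and all parameter values are placeholders rather than the paper's settings.

```python
import cv2
import numpy as np

# Load a micrograph of the oocyte in grayscale (the path is a placeholder).
img = cv2.imread("oocyte.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "image not found"
blurred = cv2.medianBlur(img, 5)

# HOUGH_GRADIENT uses edge-gradient information to vote for circle centres;
# all parameter values below are illustrative, not the paper's settings.
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=200,
                           param1=80, param2=40, minRadius=60, maxRadius=200)
if circles is not None:
    cx, cy, r = np.round(circles[0, 0]).astype(int)
    print(f"Oocyte candidate at ({cx}, {cy}), radius {r} px")

# Ellipse fitting on detected contours allows for the morphological
# variance of oocytes and polar bodies when measuring their size.
edges = cv2.Canny(blurred, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if len(c) >= 5:  # cv2.fitEllipse needs at least five points
        (ex, ey), (major, minor), angle = cv2.fitEllipse(c)
        print(f"Ellipse at ({ex:.0f}, {ey:.0f}), axes {major:.0f}x{minor:.0f} px, angle {angle:.0f} deg")
```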

    Development of a Wireless Mobile Computing Platform for Fall Risk Prediction

    Falls are a major health risk with which the elderly and disabled must contend. Scientific research on smartphone-based gait detection systems using the Internet of Things (IoT) has recently become an important component in monitoring injuries due to these falls. Analysis of human gait for detecting falls is the subject of many research projects. Progress in these systems, the capabilities of smartphones, and the IoT are enabling the advancement of sophisticated mobile computing applications that detect falls after they have occurred. Such detection has been the focus of most fall-related research; however, the goal of this health monitoring system is to provide preventive measures that predict a fall. By performing a thorough investigation of existing systems and using predictive analytics, we built a novel mobile application/system that uses smartphone and smart-shoe sensors to predict a fall and alert the user before it happens. The major focus of this dissertation has been to develop and implement this unique system to help predict the risk of falls. We used the built-in accelerometer and gyroscope of a smartphone and a sensor-embedded smart-shoe. The smart-shoe contains four pressure sensors with a Wi-Fi communication module to unobtrusively collect data. The interactions between these sensors and the user created distinct challenges for this research while also setting new performance goals based on the unique characteristics of this system. In addition to providing an exciting new tool for fall prediction, this work makes several contributions to current and future generation mobile computing research.
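
    A minimal sketch of how a sensor-embedded smart-shoe could stream its four pressure readings over Wi-Fi to the phone; the sensor names, packet format, address, and sampling rate below are assumptions for illustration, not the actual protocol of this system.

```python
import json
import socket
import time

def read_pressure_sensors():
    """Hypothetical read of the four pressure sensors (heel, outer midfoot,
    inner midfoot, toe); real values would come from the shoe's ADC."""
    return {"heel": 412, "mid_out": 198, "mid_in": 205, "toe": 367}

GATEWAY = ("192.168.0.10", 5005)  # assumed address of the phone/gateway
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

for _ in range(3):  # a short burst for illustration
    packet = {"t": time.time(), "pressure": read_pressure_sensors()}
    sock.sendto(json.dumps(packet).encode("utf-8"), GATEWAY)
    time.sleep(0.05)  # roughly 20 Hz, an assumed sampling rate
```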

    Unobtrusive Technique Based on Infrared Thermal Imaging for Emotion Recognition in Children-with-ASD-Robot Interaction

    Emotions are relevant to social relationships, and individuals with Autism Spectrum Disorder (ASD) have impaired understanding and expression of emotions. This thesis comprises studies on the analysis of emotions in typically developing children and children with ASD (aged 7 to 12 years) using infrared thermal imaging (IRTI), a safe, non-obtrusive (contact-free) technique used to record temperature variations in facial regions of interest (ROIs), such as the forehead, nose, cheeks, chin, and the periorbital and perinasal regions. A social robot called N-MARIA (Novo Robô Autônomo Móvel para Interação com Autistas) was used as an emotional stimulus and as a mediator of social and pedagogical tasks. The first study evaluated facial thermal variation for five emotions (happiness, sadness, fear, disgust, and surprise) elicited by affective audiovisual stimuli in typically developing children. The second study evaluated facial thermal variation for three emotions (happiness, surprise, and fear) elicited by the social robot N-MARIA in typically developing children. In the third study, two sessions were conducted with children with ASD, in which social and pedagogical tasks were evaluated with the robot N-MARIA as a tool and mediator of the interaction with the children. An emotional analysis based on facial thermal variation was possible in the second session, in which the robot was the stimulus used to elicit happiness, surprise, or fear. In addition, professionals (teachers, an occupational therapist, and a psychologist) evaluated the usability of the social robot. Overall, the results showed that IRTI was an effective technique for assessing emotions through thermal variations. In the first study, predominant thermal decreases were observed in most ROIs, with the largest emissivity variations induced by disgust, happiness, and surprise, and an accuracy above 85% for classifying the five emotions. In the second study, the highest emotion probabilities detected by the classification system were for surprise and happiness, and a significant temperature increase was predominant in the chin and nose. The third study, conducted with children with ASD, found significant thermal increases in all ROIs and a classification with the highest probability for surprise. N-MARIA was a promising stimulus capable of eliciting positive emotions in children. The interaction between children with ASD and the robot was positive, with social skills and pedagogical tasks successfully performed by the children. Furthermore, the robot's usability, as evaluated by professionals, reached a satisfactory score, indicating N-MARIA as a potential tool for therapies.
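
    A minimal sketch of the kind of analysis described above: per-ROI temperature changes between a baseline and a stimulus recording are used as features for an emotion classifier. The ROI list follows the abstract, but the SVM classifier, the synthetic data, and all numbers are illustrative assumptions rather than the thesis's actual pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

ROIS = ["forehead", "nose", "left_cheek", "right_cheek",
        "chin", "periorbital", "perinasal"]

def roi_features(baseline, stimulus):
    """Feature vector: mean temperature change per ROI between a baseline
    recording and a recording under emotional stimulus (dicts of arrays)."""
    return np.array([stimulus[r].mean() - baseline[r].mean() for r in ROIS])

# Synthetic placeholder trials; the real study uses thermal-camera recordings.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.3, size=(60, len(ROIS)))               # one row per trial
y = rng.choice(["happiness", "surprise", "fear"], size=60)   # elicited emotion

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
print(clf.predict_proba(X[:1]))  # class probabilities for a single trial
```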

    A Cloud-Based Extensible Avatar For Human Robot Interaction

    Adding an interactive avatar to a human-robot interface requires the development of tools that animate the avatar so as to simulate an intelligent conversation partner. Here we describe a toolkit that supports interactive avatar modeling for human-computer interaction. The toolkit utilizes cloud-based speech-to-text software that provides active listening, a cloud-based AI to generate appropriate textual responses to user queries, and a cloud-based text-to-speech generation engine to generate utterances for this text. This output is combined with a cloud-based 3D avatar animation synchronized to the spoken response. Generated text responses are embedded within an XML structure that allows for tuning the nature of the avatar animation to simulate different emotional states. An expression package controls the avatar's facial expressions. The introduced rendering latency is obscured through parallel processing and an idle loop process that animates the avatar between utterances. The efficiency of the approach is validated through a formal user study.
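
    The sketch below shows how a generated text response might be wrapped in such an XML envelope so the renderer can tune the avatar's animation; the tag and attribute names are hypothetical, not the toolkit's actual schema.

```python
import xml.etree.ElementTree as ET

def wrap_response(text, emotion="neutral", intensity=0.5):
    """Wrap a generated text response in an XML envelope that an avatar
    renderer could use to tune its animation; tag and attribute names
    here are hypothetical, not the toolkit's actual schema."""
    root = ET.Element("response")
    expr = ET.SubElement(root, "expression",
                         emotion=emotion, intensity=str(intensity))
    speech = ET.SubElement(expr, "speech")
    speech.text = text
    return ET.tostring(root, encoding="unicode")

print(wrap_response("Glad I could help!", emotion="happy", intensity=0.8))
# -> <response><expression emotion="happy" intensity="0.8"><speech>...</speech></expression></response>
```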

    Miniature Mobile Systems for Inspection of Ferromagnetic Structures

    Power plants require periodic inspections to monitor their condition. To ensure safe operation, parts that could fail before the next inspection are repaired or replaced, since a forced outage due to a failure can cost up to millions of dollars per day. Non-Destructive Testing (NDT) methods are used to detect the different defects that could occur, such as cracks, thinning, corrosion, or pitting. Some parts are inspected directly in situ but may be difficult to access; these can require opening access holes or building scaffolding. Other parts are disassembled and inspected in workshops when the required inspection tools cannot be moved. In this thesis, we developed innovative miniature mobile systems able to move within these small and complex installations and inspect them. Bringing sensors to difficult-to-access places using climbing robots can reduce inspection time and costs, because some dismantling or scaffolding can be eliminated. New miniature sensors can also help inspect complex parts without disassembling them, further reducing inspection costs. To perform such inspections, miniature mobile systems require high mobility and keen sensing capabilities. The following approach was used to develop these systems. First, several innovative climbing robots are developed. They use magnetic adhesion, as most structures are made of ferromagnetic steel. Then, vision is embedded in some of the robots, making visual inspections possible and allowing the robots to be controlled remotely without direct sight of them. Finally, non-visual NDT sensors are developed and embedded in some of the robots, allowing them to detect defects that vision alone cannot detect. Achieving the miniaturization of the developed systems requires strong system integration throughout these three steps. A set of examples for the different steps has been designed, implemented, and tested to illustrate this approach. The Tripillars robots, for instance, use caterpillar tracks and are able to climb surfaces of any inclination and to pass inner angles. The Cy-mag3Ds robots use an innovative magnetic wheel concept and are able to climb surfaces of any inclination and to pass inner angles, outer angles, and surface flips. The Tubulos robots move in tubes of 25 mm diameter at any inclination. All robots embed the required electronics, actuators, sensors, and energy to be controlled remotely by the user. Wireless transmission of the command signals allows the systems to maintain their full mobility without being hindered by cables. Integrating Hall sensors near the magnetic systems allows them to measure the adhesion force. This information improves the safety of the robots: when the adhesion force becomes low, the robots can be stopped before they fall. The Tubulo II uses Magnetic Switchable Devices (MSDs) for adhesion. An MSD is composed of a ferromagnetic stator and one or more moving magnets; it has the advantage of requiring only a small force to switch a high adhesion force on or off. MSDs are also easy to clean of the magnetic dust that is present in most real environments and sticks strongly to magnetic systems. As an additional step toward inspection, a camera is embedded on the Cy-mag3D II and the Tubulos, allowing these robots to visually inspect the structures they move in and to be controlled remotely. The view from a climbing robot in an unknown environment is often not enough to give the user a sense of its scale or to move efficiently in it. A distance sensor is therefore designed and embedded on the Cy-mag3D II, which substantially increases the user's perception of the environment. Finally, an innovative miniature Magnetic Particle Inspection (MPI) system was developed to inspect turbine blades without disassembling them. An MSD is used to perform the required magnetization. The system can automatically inspect a flat surface, performing all the required steps of MPI: magnetize, spray magnetic particles, record images under UV light, and demagnetize. Thanks to the strong integration and miniaturization, the system can potentially inspect complex parts such as steam turbines.
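
    A minimal sketch of the adhesion-monitoring safety idea described above, in which a Hall-sensor reading near the magnetic system is polled and the robot is stopped when the measured adhesion drops below a threshold; the functions, normalization, and threshold are hypothetical placeholders, not the robots' firmware.

```python
import time

ADHESION_THRESHOLD = 0.6  # assumed fraction of nominal adhesion force

def read_hall_sensor():
    """Hypothetical driver call: returns the Hall-sensor reading normalized
    so that 1.0 corresponds to nominal magnetic adhesion."""
    return 1.0  # placeholder; the real value comes from the sensor ADC

def stop_motors():
    """Hypothetical driver call that halts the drive actuators."""
    print("adhesion low: motors stopped")

def adhesion_watchdog(period_s=0.02, max_iterations=1000):
    """Poll the adhesion estimate and stop the robot before it can fall."""
    for _ in range(max_iterations):
        if read_hall_sensor() < ADHESION_THRESHOLD:
            stop_motors()
            return
        time.sleep(period_s)

adhesion_watchdog(max_iterations=5)  # short run for illustration
```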

    Real-time generation and adaptation of social companion robot behaviors

    Get PDF
    Social robots will be part of our future homes. They will assist us in everyday tasks, entertain us, and provide helpful advice. However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate. An essential skill of every social robot is verbal and non-verbal communication. In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine. Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors. In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot. However, robots do not have human intuition; they must be equipped with corresponding algorithmic solutions to these problems. This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences. Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence. The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning. Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning. It provides a higher-level view from the system designer's perspective and guidance from start to end. It illustrates the process of modeling, simulating, and evaluating such adaptation processes. Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness. The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes that are evaluated in lab and in-situ studies. These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor. Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
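
    A minimal sketch of the kind of real-time adaptation described above, using an epsilon-greedy bandit to choose among behavior variants and updating value estimates from user feedback; the variant set, reward scale, and algorithm choice are illustrative assumptions rather than the thesis's actual design.

```python
import random

# Behavior variants the robot can choose between; the set and the reward
# scale below are illustrative assumptions.
VARIANTS = ["formal", "casual", "humorous"]

class EpsilonGreedyAdapter:
    """Minimal bandit-style adaptation: pick a behavior variant, observe
    user feedback as a scalar reward, and update that variant's estimate."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in VARIANTS}
        self.values = {v: 0.0 for v in VARIANTS}

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(VARIANTS)          # explore
        return max(VARIANTS, key=self.values.get)   # exploit

    def update(self, variant, reward):
        """reward: e.g. +1 for positive explicit/implicit feedback, -1 otherwise."""
        self.counts[variant] += 1
        n = self.counts[variant]
        self.values[variant] += (reward - self.values[variant]) / n

adapter = EpsilonGreedyAdapter()
variant = adapter.select()          # generate the next utterance in this style
adapter.update(variant, reward=1)   # user smiled / said "thanks": positive reward
```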

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medical research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics, to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, of higher quality, and less costly. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that span several topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention, flood prevention; volcanic monitoring; pollution monitoring; microclimates; and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; UAV-based change detection.