10 research outputs found

    On-Board High-Performance Computing For Multi-Robot Aerial Systems

    With advancements in low-energy-consumption multi/many-core embedded-computing devices, a logical transition for robotic systems is supercomputing, formally known as high-performance computing (HPC), a tool currently used for solving the most complex problems facing humankind, such as the origin of the universe and the search for cures for diseases. As such, HPC has always been focused on scientific inquiries. However, its scope can be widened to include missions carried out with robots. Since a robot can be embedded with computing devices, a set of robots can be set up as a cluster of computers, the most reliable HPC infrastructure. The advantages of such an infrastructure are many, from speeding up on-board computation to providing a multi-robot system with robustness, scalability, and user transparency, all key features in supercomputing. This chapter presents a middleware technology that enables high-performance computing in multi-robot systems, in particular aerial robots. The technology can be used for the automatic deployment of cluster computing in multi-robot systems, the utilization of standard HPC technologies, and the development of HPC applications in fields such as precision agriculture, military and civilian operations, and search and rescue.
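    The cluster-of-robots idea can be illustrated with a minimal worker-pool sketch. Everything below is hypothetical and only stands in for the middleware's actual deployment mechanism: each worker models one robot's embedded compute node, and `process_tile` is a placeholder workload (e.g., one tile of a field-survey computation).

```python
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile_id):
    # placeholder per-robot workload, e.g. processing one tile of a survey map
    return tile_id, sum(i * i for i in range(1000))

def distribute(tiles, workers=4):
    # each worker stands in for one robot's embedded compute node; a real
    # deployment would use a standard HPC stack (e.g. MPI) over the robot cluster
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(process_tile, tiles))

results = distribute(range(8))
```

    In this pattern, adding a robot to the cluster simply increases the worker count, which is what gives the system the scalability and robustness the chapter emphasizes.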

    Nonlinear predictive threshold model for real-time abnormal gait detection

    Falls are critical events for human health due to the associated risk of physical and psychological injuries. Several fall-related systems have been developed to reduce injuries. Among them, fall-risk prediction systems are one of the most promising approaches, as they strive to predict a fall before its occurrence. One category of fall-risk prediction systems evaluates balance and muscle strength through clinical functional assessment tests, while other prediction systems recognize abnormal gait patterns to predict a fall in real time. The main contribution of this paper is a nonlinear model of user gait combined with threshold-based classification to recognize abnormal gait patterns with low complexity and high accuracy. In addition, a dataset with realistic parameters is prepared to simulate abnormal walks and to evaluate fall prediction methods. The accelerometer and gyroscope sensors available in a smartphone have been exploited to create the dataset. The proposed approach has been implemented and compared with state-of-the-art approaches, showing that it is able to predict an abnormal walk with higher accuracy (93.5%) and higher efficiency (up to 3.5 times faster) than other feasible approaches.
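    Threshold-based classification of accelerometer windows, the general technique behind this approach, can be sketched as follows. The model and the 2.5 g threshold are illustrative assumptions, not the paper's actual nonlinear predictive model.

```python
import math

def magnitude(ax, ay, az):
    # resultant acceleration from a smartphone's 3-axis accelerometer (in g)
    return math.sqrt(ax * ax + ay * ay + az * az)

def is_abnormal(window, threshold=2.5):
    # flag a gait window whose peak resultant magnitude crosses the threshold;
    # the 2.5 g value is illustrative, not taken from the paper
    return max(magnitude(*s) for s in window) > threshold

normal_walk = [(0.1, 0.2, 1.0), (0.0, 0.1, 1.1)]   # close to 1 g: steady gait
stumble     = [(0.1, 0.2, 1.0), (1.8, 2.0, 2.2)]   # sharp spike: abnormal
```

    The appeal of a threshold test is its low complexity: a single comparison per window, which is what makes real-time operation on a phone feasible.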

    Oocyte positional recognition for automatic manipulation in ICSI

    Polar body position detection is a necessary step in the automation of micromanipulation systems used in intracytoplasmic sperm injection (ICSI) applications. The polar body is an intracellular structure that accommodates the chromosomes, and the injection must not only avoid this structure but be at the point furthest from it. This paper aims to develop a vision recognition system for the oocyte and its polar body, used to inform the automated injection mechanism so that it avoids the polar body. The novelty of the paper is its capability to determine both the position and the orientation of the oocyte and its polar body. The gradient-weighted Hough transform method was employed to detect the location of the oocyte and its polar body. Moreover, a new elliptical fitting method was employed to measure the size of the polar bodies and oocytes, allowing for the morphological variance of the oocytes and their polar bodies. The proposed algorithm has been designed to be adaptable to typical commercial inverted microscopes with different specifications. The experimental results show maximum errors of 5% for detection and 10% for reporting, respectively.
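    The circular Hough transform at the core of the detection step can be sketched in miniature. The grid size, fixed radius, and uniform voting below are simplifying assumptions; the paper's method additionally weights each vote by the local image gradient.

```python
import math

def hough_circle_center(edge_points, radius, grid):
    # vote for candidate centers lying at a fixed radius from each edge point;
    # a toy, unweighted version of the gradient-weighted Hough transform
    acc = {}
    for x, y in edge_points:
        for deg in range(0, 360, 10):
            t = math.radians(deg)
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            if 0 <= cx < grid and 0 <= cy < grid:
                acc[(cx, cy)] = acc.get((cx, cy), 0) + 1
    # the accumulator cell with the most votes is the detected center
    return max(acc, key=acc.get)

# synthetic edge points on a circle of radius 8 centered at (20, 20),
# standing in for the oocyte boundary extracted from a microscope image
edges = [(20 + 8 * math.cos(math.radians(d)), 20 + 8 * math.sin(math.radians(d)))
         for d in range(0, 360, 10)]
```

    Each edge point votes for every center it could belong to; only the true center collects votes from all edge points, so it dominates the accumulator.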

    Horizontal Review on Video Surveillance for Smart Cities: Edge Devices, Applications, Datasets, and Future Trends

    The automation strategy of today’s smart cities relies on large IoT (Internet of Things) systems that collect big data to gain insights through analytics. Although there have been recent reviews in this field, there is a remarkable gap in addressing four sides of the problem together, namely the application of video surveillance in smart cities, algorithms, datasets, and embedded systems. In this paper, we discuss the latest datasets and algorithms used, and introduce the recent advances in embedded systems that form edge vision computing. Moreover, future trends and challenges are addressed.

    Development of a Wireless Mobile Computing Platform for Fall Risk Prediction

    Falls are a major health risk with which the elderly and disabled must contend. Scientific research on smartphone-based gait detection systems using the Internet of Things (IoT) has recently become an important component in monitoring injuries due to these falls. Analysis of human gait for detecting falls is the subject of many research projects. Progress in these systems, the capabilities of smartphones, and the IoT are enabling the advancement of sophisticated mobile computing applications that detect falls after they have occurred. This detection has been the focus of most fall-related research; however, ensuring preventive measures that predict a fall is the goal of this health monitoring system. By performing a thorough investigation of existing systems and using predictive analytics, we built a novel mobile application/system that uses smartphone and smart-shoe sensors to predict a fall and alert the user before it happens. The major focus of this dissertation has been to develop and implement this unique system to help predict the risk of falls. We used the built-in sensors (accelerometer and gyroscope) in smartphones and a sensor-embedded smart-shoe. The smart-shoe contains four pressure sensors with a Wi-Fi communication module to unobtrusively collect data. The interactions between these sensors and the user resulted in distinct challenges for this research while also creating new performance goals based on the unique characteristics of this system. In addition to providing an exciting new tool for fall prediction, this work makes several contributions to current and future generation mobile computing research.

    Unobtrusive Technique Based on Infrared Thermal Imaging for Emotion Recognition in Children-with-ASD-Robot Interaction

    Emotions are relevant to social relationships, and individuals with Autism Spectrum Disorder (ASD) have impaired understanding and expression of emotions. This thesis comprises studies on the analysis of emotions in typically developing children and children with ASD (aged 7 to 12 years) using infrared thermal imaging (IRTI), a safe and unobtrusive (contact-free) technique used to record temperature variations in facial regions of interest (ROIs) such as the forehead, nose, cheeks, chin, and the periorbital and perinasal regions. A social robot named N-MARIA (Portuguese acronym for New Autonomous Mobile Robot for Interaction with Autistic children) was used as an emotional stimulus and as a mediator of social and pedagogical tasks. The first study evaluated facial thermal variation for five emotions (happiness, sadness, fear, disgust, and surprise), elicited by affective audiovisual stimuli, in typically developing children. The second study evaluated facial thermal variation for three emotions (happiness, surprise, and fear), elicited by the social robot N-MARIA, in typically developing children. In the third study, two sessions were conducted with children with ASD, in which social and pedagogical tasks were evaluated with the robot N-MARIA as a tool and mediator of the interaction with the children. An emotional analysis based on facial thermal variation was possible in the second session, in which the robot was the stimulus used to elicit happiness, surprise, or fear. In addition, professionals (teachers, an occupational therapist, and a psychologist) evaluated the usability of the social robot. Overall, the results showed that IRTI was an effective technique for assessing emotions through thermal variations. In the first study, predominantly thermal decreases were observed in most ROIs, with the largest emissivity variations induced by disgust, happiness, and surprise, and an accuracy above 85% for classifying the five emotions. In the second study, the highest emotion probabilities detected by the classification system were for surprise and happiness, and a significant temperature increase was predominant in the chin and nose. The third study, conducted with children with ASD, found significant thermal increases in all ROIs and a classification with the highest probability for surprise. N-MARIA was a promising stimulus capable of eliciting positive emotions in children. The child-with-ASD-robot interaction was positive, with social skills and pedagogical tasks successfully performed by the children. Moreover, the robot's usability as assessed by professionals reached a satisfactory score, indicating N-MARIA as a potential tool for therapies.
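    The ROI-based thermal analysis can be sketched as follows. The frame layout, ROI coordinates, and temperature values are hypothetical stand-ins for real IRTI frames, not the thesis's actual processing pipeline.

```python
def roi_mean(frame, roi):
    # mean temperature (deg C) inside a rectangular region of interest
    x0, y0, x1, y1 = roi
    vals = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(vals) / len(vals)

def thermal_deltas(baseline, stimulus, rois):
    # per-ROI temperature change between a baseline frame and a stimulus frame
    return {name: roi_mean(stimulus, r) - roi_mean(baseline, r)
            for name, r in rois.items()}

# hypothetical 4x4 thermal frames in which the "nose" ROI warms by 0.5 deg C
baseline = [[34.0] * 4 for _ in range(4)]
stimulus = [[34.5 if x < 2 and y < 2 else 34.0 for x in range(4)] for y in range(4)]
deltas = thermal_deltas(baseline, stimulus, {"nose": (0, 0, 2, 2)})
```

    The sign and magnitude of each per-ROI delta is the kind of feature the studies above feed into their emotion classification.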

    A Cloud-Based Extensible Avatar For Human Robot Interaction

    Adding an interactive avatar to a human-robot interface requires the development of tools that animate the avatar so as to simulate an intelligent conversation partner. Here we describe a toolkit that supports interactive avatar modeling for human-computer interaction. The toolkit utilizes cloud-based speech-to-text software that provides active listening, a cloud-based AI to generate appropriate textual responses to user queries, and a cloud-based text-to-speech generation engine to generate utterances for this text. This output is combined with a cloud-based 3D avatar animation synchronized to the spoken response. Generated text responses are embedded within an XML structure that allows for tuning the nature of the avatar animation to simulate different emotional states. An expression package controls the avatar's facial expressions. The introduced rendering latency is obscured through parallel processing and an idle loop process that animates the avatar between utterances. The efficiency of the approach is validated through a formal user study.
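    The cloud pipeline described above can be sketched as a chain of stubs. The function names, the dictionary-based audio input, and the XML attribute are assumptions for illustration, not the toolkit's actual API.

```python
def speech_to_text(audio):
    # stub for the cloud speech-to-text service
    return audio["transcript"]

def generate_reply(text):
    # stub for the cloud conversational AI
    return f"You said: {text}"

def wrap_for_avatar(reply, emotion="neutral"):
    # XML envelope carrying the emotion attribute that tunes the avatar animation
    return f'<response emotion="{emotion}">{reply}</response>'

def handle_turn(audio):
    # one conversational turn: listen, think, then speak through the avatar
    return wrap_for_avatar(generate_reply(speech_to_text(audio)))
```

    Because each stage is an independent cloud call, the stages can be overlapped in practice, which is how the toolkit hides rendering latency.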

    Real-time generation and adaptation of social companion robot behaviors

    Social robots will be part of our future homes. They will assist us in everyday tasks, entertain us, and provide helpful advice. However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate. An essential skill of every social robot is verbal and non-verbal communication. In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine. Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors. In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot. However, robots do not have human intuition - they must be equipped with the corresponding algorithmic solutions to these problems. This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences. Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence. The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning. Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning. It provides a higher-level view from the system designer's perspective and guidance from the start to the end. 
It illustrates the process of modeling, simulating, and evaluating such adaptation processes. Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness. The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes. They are evaluated in the lab and in in-situ studies. These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor. Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
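    The kind of feedback-driven behavior adaptation described here can be illustrated with a minimal epsilon-greedy bandit. The behavior styles and reward values below are hypothetical, and the thesis's actual reinforcement-learning formulation is richer than this sketch.

```python
import random

def epsilon_greedy(q, epsilon, rng):
    # explore a random behaviour style with probability epsilon, else exploit
    if rng.random() < epsilon:
        return rng.choice(list(q))
    return max(q, key=q.get)

def update(q, counts, action, reward):
    # incremental-mean update of the action-value estimate
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]

# toy simulation: the user consistently rewards humorous robot behaviour
rng = random.Random(0)
q = {"formal": 0.0, "humorous": 0.0}
counts = {a: 0 for a in q}
for _ in range(200):
    a = epsilon_greedy(q, 0.1, rng)
    update(q, counts, a, 1.0 if a == "humorous" else 0.2)
```

    After a few hundred interactions the value estimate for the rewarded style dominates, so the robot converges on the behavior this particular user prefers.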

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medicine research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, of higher quality, and lower in cost. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that span several topics on smart healthcare, including public health, health information technology (Health IT), and smart medicine.

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity for ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications, including spatial ecology, pest detection, reefs, forestry, volcanology, precision agriculture, wildlife species tracking, search and rescue, target tracking, atmosphere monitoring, chemical, biological, and natural disaster phenomena, fire prevention, flood prevention, volcanic monitoring, pollution monitoring, microclimates, and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.